source_text: string (lengths 27 to 368)
label: int64 (values 0 or 1)
target_text: string (lengths 1 to 5.38k)
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
For example, the passive voice pattern “<subject> were kidnapped” and the active voice pattern “kidnapped <direct object>” are merged into a single normalized pattern “kidnapped <patient>”. For the sake of simplicity, we will refer to these normalized extraction patterns as caseframes. These caseframes can capture two types of contextual role information: (1) thematic roles corresponding to events (e.g., “<agent> kidnapped” or “kidnapped <patient>”), and (2) predicate-argument relations associated with both verbs and nouns (e.g., “kidnapped for <np>” or “vehicle with <np>”).
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but had translations into a resource-rich language.
0
It is worth noting that the middle words of the Italian trigrams are nouns too, which exhibits the fact that the similarity metric connects types having the same syntactic category.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
We did have a problem classifying some modified noun phrases where the modified phrase does not represent a qualified or restricted form of the head, like “chairman” and “vice chairman”, as these are both represented by the keyword “chairman”.
The AdaBoost algorithm was developed for supervised learning.
0
They also describe an application of cotraining to classifying web pages (the two feature sets are the words on the page, and other pages pointing to the page).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
One hopes that such a corpus will be forthcoming.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
1 1 0.
Here we present two algorithms.
0
The first modification — cautiousness — is a relatively minor change.
This paper talks about Unsupervised Models for Named Entity Classification.
0
On each step CoBoost searches for a feature and a weight so as to minimize either Zco1 or Zco2.
There are clustering approaches that assign a single POS tag to each word type.
0
The equation for sampling a single type-level assignment Ti is given by: [Figure 2: Graph of the one-to-one accuracy of our full model (+FEATS) under the best hyperparameter setting by iteration (see Section 5).]
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
However, this result is consistent with the results of experiments discussed in Wu and Fung (1994).
These clusters are computed using an SVD variant without relying on transitional structure.
0
The token-level term is similar to the standard HMM sampling equations found in Johnson (2007).
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Interpolation search has a more expensive pivot but performs less pivoting and reads, so it is slow on small data and faster on large data.
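The pivot arithmetic described here can be sketched in a few lines; the following is a minimal standalone illustration of interpolation search over a sorted integer array (a hypothetical helper, not KenLM's actual implementation):

```python
def interpolation_search(keys, target):
    """Search a sorted list of integers by estimating the pivot position
    from the target's value, rather than always bisecting the midpoint.
    Returns the index of target, or -1 if absent."""
    lo, hi = 0, len(keys) - 1
    while lo <= hi and keys[lo] <= target <= keys[hi]:
        if keys[hi] == keys[lo]:
            pivot = lo
        else:
            # Pivot estimate: linear interpolation between the endpoints.
            pivot = lo + (target - keys[lo]) * (hi - lo) // (keys[hi] - keys[lo])
        if keys[pivot] == target:
            return pivot
        if keys[pivot] < target:
            lo = pivot + 1
        else:
            hi = pivot - 1
    return -1
```

On uniformly distributed keys the pivot estimate lands close to the target, which is why the method pays off on large arrays despite the more expensive per-step arithmetic.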
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
The text type is editorials instead of speech transcripts.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.
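Binary feature groups of this kind can be illustrated with a small helper; the function below is a hypothetical sketch of the InitCapPeriod and firstword features described, not the authors' code:

```python
def token_features(token, is_first_word):
    """Return a dict of binary features for one token, in the spirit of
    the InitCapPeriod and firstword feature groups described above."""
    feats = {}
    if token[:1].isupper() and token.endswith('.'):
        feats['InitCapPeriod'] = 1   # e.g. "Mr."
    if is_first_word:
        feats['firstword'] = 1       # the group's only feature
    return feats
```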
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
When this metric is less than 0.5, we expect to incur more errors than we will remove by adding those constituents to the parse.
Two general approaches are presented and two combination techniques are described for each approach.
0
First we present the non-parametric version of parser switching, similarity switching: The intuition for this technique is that we can measure a similarity between parses by counting the constituents they have in common.
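The similarity count behind similarity switching can be sketched as follows, assuming each parse is represented as a set of (label, start, end) constituent spans (a hypothetical encoding, not the paper's data structures):

```python
def parse_similarity(parse_a, parse_b):
    """Count the constituents two parses share, where each parse is a
    set of (label, start, end) spans."""
    return len(parse_a & parse_b)

def pick_centroid_parse(parses):
    """Similarity switching: choose the parse most similar to all the
    other parsers' outputs."""
    return max(parses, key=lambda p: sum(parse_similarity(p, q)
                                         for q in parses if q is not p))
```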
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
9 66.4 47.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
(Blum and Mitchell 98) go on to give PAC results for learning in the cotraining case.
They have made use of local and global features to deal with the instances of the same token in a document.
0
4.1 Local Features.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
We realize the importance of paraphrase; however, the major obstacle is the construction of paraphrase knowledge.
All the texts were annotated by two people.
0
The choice of the particular newspaper was motivated by the fact that the language used in a regional daily is somewhat simpler than that of papers read nationwide.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Instead, we resort to an iterative update based method.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
As indicated in Figure 1(c), apart from this correct analysis, there is also the analysis taking 日 ri4 as a word (e.g., a common abbreviation for Japan), along with 文章 wen2zhang1 'essay'.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
attaching to terms denoting human beings.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
These models generally use less memory than ours but are much slower, even when cached.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
In both cases, the instanceweighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline, and gains of between 0.6 and 1.8 over an equivalent mixture model (with an identical training procedure but without instance weighting).
This paper talks about Unsupervised Models for Named Entity Classification.
0
For example, the independence assumptions mean that the model fails to capture the dependence between specific and more general features (for example the fact that the feature full-string=New_York is always seen with the features contains(New) and contains(York), and is never seen with a feature such as contains(Group)). The baseline method tags all entities as the most frequent class type (organization).
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Crucially, the conventional orthographic form of MSA text is unvocalized, a property that results in a deficient graphical representation.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
The parallel data came from the Europarl corpus (Koehn, 2005) and the ODS United Nations dataset (UN, 2006).
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
We plan to explore more powerful techniques for exploiting the diversity of parsing methods.
The texts were annotated with the RSTtool.
0
Still, for both human and automatic rhetorical analysis, connectives are the most important source of surface information.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
At first glance, we quickly recognize that many systems are scored very similarly, both in terms of manual judgement and BLEU.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Note, however, that there might be situations in which Zco in fact increases.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
This variant is tested in the experiments below.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Their best model yields 44.5% one-to-one accuracy, compared to our best median 56.5% result.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
In defining LCFRS's, we hope to generalize the definition of CFG's to formalisms manipulating any structure, e.g. strings, trees, or graphs.
It is probably the first analysis of Arabic parsing of this kind.
0
Table 5: Evaluation of 100 randomly sampled variation nuclei types.
They have made use of local and global features to deal with the instances of the same token in a document.
0
of Tokens No.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
A similar structure is used in speech recognition.
This assumption, however, is not inherent to type-based tagging models.
0
In our work, we demonstrate that using a simple na¨ıveBayes approach also yields substantial performance gains, without the associated training complexity.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Each out-of-domain phrase pair is characterized by a set of simple features intended to reflect how useful it will be.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
18 We are grateful to ChaoHuang Chang for providing us with this set.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
We conjecture that this trend may continue by incorporating additional information, e.g., three-dimensional models as proposed by Tsarfaty and Sima’an (2007).
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
(5) and ht into Equ.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
If the context wnf will never extend to the right (i.e. wnf v is not present in the model for all words v) then no subsequent query will match the full context.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
As lower frequency examples include noise, we set a threshold that an NE category pair should appear at least 5 times to be considered and an NE instance pair should appear at least twice to be considered.
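The frequency cut-offs can be sketched as a simple count-and-filter pass; the data layout below (flat lists of category pairs and instance pairs) is a hypothetical simplification:

```python
from collections import Counter

def filter_by_frequency(category_pairs, instance_pairs,
                        min_category=5, min_instance=2):
    """Keep NE category pairs seen at least min_category times and NE
    instance pairs seen at least min_instance times, discarding the
    low-frequency pairs treated as noise."""
    cat_counts = Counter(category_pairs)
    inst_counts = Counter(instance_pairs)
    kept_cats = {c for c, n in cat_counts.items() if n >= min_category}
    kept_insts = {i for i, n in inst_counts.items() if n >= min_instance}
    return kept_cats, kept_insts
```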
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
0
Although these authors report better gains than ours, they are with respect to a non-adapted baseline.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The points enumerated above are particularly related to ITS, but analogous arguments can easily be given for other applications; see for example Wu and Tseng's (1993) discussion of the role of segmentation in information retrieval.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
The perplexity for the trigram language model used is 26:5.
BABAR performed successfully in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns.
0
2.2.1 The Caseframe Representation. Information extraction (IE) systems use extraction patterns to identify noun phrases that play a specific role. (Our implementation only resolves NPs that occur in the same document, but in retrospect, one could probably resolve instances of the same existential NP in different documents too.)
There are clustering approaches that assign a single POS tag to each word type.
0
For instance, by altering the emission distribution parameters, Johnson (2007) encourages the model to put most of the probability mass on few tags.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Therefore, the number of fine tags varied across languages for our experiments; however, one could as well have fixed the set of HMM states to be a constant across languages, and created one mapping to the universal POS tagset.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
Search Method   CPU time [sec]   mWER [%]   SSER [%]
MonS            0.9              42.0       30.5
QmS             10.6             34.4       23.8
IbmS            28.6             38.2       26.2
4.2 Performance Measures.
The texts were annotated with the RSTtool.
0
annotation guidelines that tell annotators what to do in case of doubt.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
Our new approach, which is called quasi-monotone search, processes the source sentence monotonically, while explicitly taking into account the positions of the German verbgroup.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
We call such a constituent an isolated constituent.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
In the second and third translation examples, the IbmS word reordering performs worse than the QmS word reordering, since it cannot properly take into account the word reordering due to the German verbgroup.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
(1992).
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
To check for this, we do pairwise bootstrap resampling: Again, we repeatedly sample sets of sentences, this time from both systems, and compare their BLEU scores on these sets.
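Paired bootstrap resampling can be sketched as below; for illustration the sketch compares per-sentence scores rather than recomputing corpus-level BLEU on each resample, which is a simplification of the procedure described:

```python
import random

def paired_bootstrap(scores_a, scores_b, samples=1000, seed=0):
    """Paired bootstrap: repeatedly resample the same sentence indices
    for both systems and count how often system A's total score beats
    system B's.  scores_a/scores_b are per-sentence metric values (a
    simplification; BLEU itself is corpus-level)."""
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / samples
```

A win rate near 1.0 (or 0.0) across resamples suggests the observed difference between the systems is unlikely to be sampling noise.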
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
For that application, at a minimum, one would want to know the phonological word boundaries.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Figure 3 Figure 1.
Two general approaches are presented and two combination techniques are described for each approach.
0
Furthermore, we know one of the original parses will be the hypothesized parse, so the direct method of determining which one is best is to compute the probability of each of the candidate parses using the probabilistic model we developed in Section 2.1.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Our monolingual similarity function (for connecting pairs of foreign trigram types) is the same as the one used by Subramanya et al. (2010).
This paper conducted research in the area of automatic paraphrase discovery.
0
We will report the evaluation results in the next subsection.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
attaching to terms denoting human beings.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
Church and Hanks [1989]), and we have used lists of character pairs ranked by mutual information to expand our own dictionary.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The high 1 tone of J1l would not normally neutralize in this fashion if it were functioning as a word on its own.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
On the English side, however, the vertices (denoted by Ve) correspond to word types.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Unfortunately, there is no standard corpus of Chinese texts, tagged with either single or multiple human judgments, with which one can compare performance of various methods.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Specifically, our full model (+FEATS) utilizes the tag prior as well as features (e.g., suffixes and orthographic features), discussed in Section 3, for the P (W |T , ψ) component.
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
In TAG's the elementary tree and addresses where adjunction takes place are used to instantiate the operation.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Cohen and Smith (2007) chose a metric like SParseval (Roark et al., 2006) that first aligns the trees and then penalizes segmentation errors with an edit-distance metric.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Both parametric and non-parametric models are explored.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
BABAR uses the log-likelihood statistic (Dunning, 1993) to evaluate the strength of a co-occurrence relationship.
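Dunning's log-likelihood statistic for a 2x2 co-occurrence contingency table can be sketched as follows (a minimal G^2 computation, not BABAR's implementation):

```python
import math

def log_likelihood_ratio(k11, k12, k21, k22):
    """Dunning-style log-likelihood statistic (G^2) for a 2x2
    co-occurrence contingency table: 2 * sum over cells of
    observed * ln(observed / expected-under-independence)."""
    total = k11 + k12 + k21 + k22
    row1, row2 = k11 + k12, k21 + k22
    col1, col2 = k11 + k21, k12 + k22
    g2 = 0.0
    for obs, exp in [(k11, row1 * col1 / total), (k12, row1 * col2 / total),
                     (k21, row2 * col1 / total), (k22, row2 * col2 / total)]:
        if obs > 0:  # 0 * ln(0) is taken as 0
            g2 += obs * math.log(obs / exp)
    return 2.0 * g2
```

The statistic is near zero when the two events co-occur about as often as chance predicts, and grows with the strength of the association.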
This paper talks about Unsupervised Models for Named Entity Classification.
0
Many statistical or machine-learning approaches for natural language problems require a relatively large amount of supervision, in the form of labeled training examples.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Besides the applications just sketched, the overarching goal of developing the PCC is to build up an empirical basis for investigating phenomena of discourse structure.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Each feature group can be made up of many binary features.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Some relations are signalled by subordinating conjunctions, which clearly demarcate the range of the text spans related (matrix clause, embedded clause).
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
As was explained in the results section, “strength” or “add” are not desirable keywords in the CC-domain.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Can we do . QmS: Yes, wonderful.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
This approach is compared to another reordering scheme presented in (Berger et al., 1996).
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Fourth, we show how to build better models for three different parsers.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Dynamic programming efficiently scores many hypotheses by exploiting the fact that an N-gram language model conditions on at most N − 1 preceding words.
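This Markov property is what makes hypothesis recombination possible: two hypotheses that agree on their last N-1 words are interchangeable for all future scoring. A toy sketch (hypothetical decoder internals, not any particular toolkit's code):

```python
def ngram_state(words, order):
    """The language-model state of a hypothesis: only its last N-1 words
    matter for scoring any continuation, so hypotheses sharing this
    state can be recombined in dynamic programming."""
    return tuple(words[-(order - 1):]) if order > 1 else ()

def recombine(hyps, order):
    """Keep only the best-scoring hypothesis per LM state.  Each
    hypothesis is a (word_tuple, score) pair with higher score better."""
    best = {}
    for words, score in hyps:
        state = ngram_state(words, order)
        if state not in best or score > best[state][1]:
            best[state] = (words, score)
    return list(best.values())
```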
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
In Table 1 we see with very few exceptions that the isolated constituent precision is less than 0.5 when we use the constituent label as a feature.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
For each sentence, we counted how many n-grams in the system output also occurred in the reference translation.
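The clipped n-gram counting at the heart of this evaluation can be sketched as follows (a minimal per-sentence n-gram precision, not the full BLEU score with brevity penalty):

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision as used in BLEU: the fraction of
    candidate n-grams that also occur in the reference, with each
    n-gram's credit clipped at its reference count."""
    cand = Counter(tuple(candidate[i:i + n])
                   for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n])
                  for i in range(len(reference) - n + 1))
    overlap = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0
```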
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
IRSTLM (Federico et al., 2008) is an open-source toolkit for building and querying language models.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Note that while the standard HMM has O(Kn) emission parameters, our model has O(n) effective parameters. Token Component: once HMM parameters (φ, θ) have been drawn, the HMM generates a token-level corpus w in the standard way:

P(w, t | φ, θ) = ∏_j P(t_j | φ_{t−1{j}}) becomes P(w, t | φ, θ) = ∏_j P(t_j | φ_{t_{j−1}}) P(w_j | t_j, θ_{t_j})

so that the full joint distribution factors as

P(T, W, θ, ψ, φ, t, w | α, β) = P(T, W, ψ | β) [Lexicon] × P(φ, θ | T, α, β) [Parameter] × P(w, t | φ, θ) [Token]

We refer to the components on the right hand side as the lexicon, parameter, and token component respectively.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
The set of n-grams appearing in a model is sparse, and we want to efficiently find their associated probabilities and backoff penalties.
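The lookup-with-backoff pattern these structures support can be sketched with a plain dictionary standing in for the trie or hash table (a hypothetical toy model, not the toolkits' actual storage):

```python
def lm_lookup(model, ngram):
    """Return the log-probability of the last word of `ngram` given its
    context, backing off through shorter contexts.  `model` maps n-gram
    tuples to (logprob, backoff) pairs; if the full n-gram is absent we
    add the context's backoff penalty and retry with a shorter n-gram."""
    backoff = 0.0
    while ngram:
        if ngram in model:
            return model[ngram][0] + backoff
        # Back off: pay the context's penalty, drop the leftmost word.
        context = ngram[:-1]
        backoff += model.get(context, (0.0, 0.0))[1]
        ngram = ngram[1:]
    return float('-inf')  # word itself is unknown
```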
This paper presents a maximum entropy-based named entity recognizer (NER).
0
We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.
There are clustering approaches that assign a single POS tag to each word type.
0
Simple Type-Level Unsupervised POS Tagging
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
Starting from a DP-based solution to the traveling salesman problem, we present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
To show that the derivation tree set of a TAG is a local set, nodes are labeled by pairs consisting of the name of an elementary tree and the address at which it was adjoined, instead of labelling edges with addresses.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
The state-to-tag mapping is obtained from the best hyperparameter setting for 11 mapping shown in Table 3.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
However, those methods need initial seeds, so the relation between entities has to be known in advance.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Figure 4 shows the seven general knowledge sources (KSs) that represent features commonly used for coreference resolution.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
The above auxiliary quantity satisfies the following recursive DP equation: Qe0 (e; C; j) = Initial Skip Verb Final 1.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Before presenting our results, we describe the datasets that we used, as well as two baselines.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The out-of-domain test set differs from the Europarl data in various ways.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
(a) 是 shi4 'be' => 是不是 shi4-bu2-shi4 (be-not-be) 'is it?'
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Specifically, our full model (+FEATS) utilizes the tag prior as well as features (e.g., suffixes and orthographic features), discussed in Section 3, for the P (W |T , ψ) component.