| source_text (string, lengths 27–368) | label (int64, 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | (3) In sentence (1), McCann can be a person or an organization. |
The AdaBoost algorithm was developed for supervised learning. | 0 | The second modification is more important, and is discussed in the next section. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | Je voudrais préciser, à l'adresse du commissaire Liikanen, qu'il n'est pas aisé de recourir aux tribunaux nationaux. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | This paper proposes a simple and effective tagging method that directly models tag sparsity and other distributional properties of valid POS tag assignments. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | This is because our corpus is not annotated, and hence does not distinguish between the various words represented by homographs, such as, which could be /adv jiangl 'be about to' orInc jiang4 '(military) general'-as in 1j\xiao3jiang4 'little general.' |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | Here NO counts the number of hypothesized constituents in the development set that match the binary predicate specified as an argument. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | collected too. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | We used C = 1.0 as the L2 regularization constant in (Eq. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | We are also grateful to Markus Dickinson, Ali Farghaly, Nizar Habash, Seth Kulick, David McCloskey, Claude Reichard, Ryan Roth, and Reut Tsarfaty for constructive discussions. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | This representation gives ir, an appropriate morphological decomposition, preserving information that would be lost by simply listing ir, as an unanalyzed form. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | In this shared task, we were also confronted with this problem, and since we had no funding for paying human judgements, we asked participants in the evaluation to share the burden. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | (a) IDictionary D I D:d/0.000 B:b/0.000 B:b/0.000 ( b ) ( c ) ( d ) I B e s t P a t h ( I d ( I ) o D * ) I cps:nd4.!l(l() Figure 2 An abstract example illustrating the segmentation algorithm. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | I • JAPANS :rl4 .·········"\)··········"o·'·······"\:J········· ·········'\; . '.:: ..........0 6.51 9.51 : jj / JAPANESE OCTOPUS 10·28i£ :_nc HOW SAY f B :rl4 :il: :wen2 t '- • :zhang! |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | A contextual role represents the role that a noun phrase plays in an event or relationship. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | Text generation, or at least the two phases of text planning and sentence planning, is a process driven partly by well-motivated choices (e.g., use this lexeme X rather than that more colloquial near-synonym Y ) and partly by con tation like that of PCC can be exploited to look for correlations in particular between syntactic structure, choice of referring expressions, and sentence-internal information structure. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | The foreign language vertices (denoted by Vf) correspond to foreign trigram types, exactly as in Subramanya et al. (2010). |
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure. | 0 | The general idea for the knowledge-based part is to have the system use as much information as it can find at its disposal to produce a target representation as specific as possible and as underspecified as necessary. |
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | But we follow the more direct adaptation of Evalb suggested by Tsarfaty (2006), who viewed exact segmentation as the ultimate goal. |
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university. | 0 | That is, we can use the discourse parser on PCC texts, emulating for instance a "co-reference oracle" that adds the information from our co-reference annotations. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Figure 4 shows the seven general knowledge sources (KSs) that represent features commonly used for coreference resolution. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | 35 76. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | In this case, Maury Cooper is extracted. |
This assumption, however, is not inherent to type-based tagging models. | 0 | We evaluate our approach on seven languages: English, Danish, Dutch, German, Portuguese, Spanish, and Swedish. |
The AdaBoost algorithm was developed for supervised learning. | 0 | Given parameter estimates, the label for a test example x is defined as We should note that the model in equation 9 is deficient, in that it assigns greater than zero probability to some feature combinations that are impossible. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | All three models evaluated in this paper incorrectly analyze the constituent as iDafa; none of the models attach the attributive adjectives properly. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | Confidence Interval: Since BLEU scores are not computed on the sentence level, traditional methods to compute statistical significance and confidence intervals do not apply. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | The use of ILP in learning the desired grammar significantly increases the computational complexity of this method. |
This assumption, however, is not inherent to type-based tagging models. | 0 | We evaluate our model on seven languages exhibiting substantial syntactic variation. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | One annotator suggested that this was the case for as much as 10% of our test sentences. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | In this case, this knowledge source reports that the candidate is not a viable antecedent for the anaphor. |
They focused on phrases which two Named Entities, and proceed in two stages. | 0 | Here, "EG" represents "Eastern Group Plc". |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | We present two algorithms. |
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | Performance typically stabilizes across languages after only a few number of iterations. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | However, the point of RandLM is to scale to even larger data, compensating for this loss in quality. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | Search hypotheses are processed separately according to their coverage set C. The best scored hypothesis for each coverage set is computed: QBeam(C) = max e;e0 ;S;j Qe0 (e; S; C; j) The hypothesis (e0; e; S; C; j) is pruned if: Qe0 (e; S; C; j) < t0 QBeam(C); where t0 is a threshold to control the number of surviving hypotheses. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | This alters generation of T as follows: n P (T |Ï) = n P (Ti|Ï) i=1 Note that this distribution captures the frequency of a tag across word types, as opposed to tokens. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | We map the ATB morphological analyses to the shortened "Bies" tags for all experiments. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | When a collision occurs, linear probing places the entry to be inserted in the next (higher index) empty bucket, wrapping around as necessary. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Thus, we feel fairly confident that for the examples we have considered from Gan's study a solution can be incorporated, or at least approximated, within a finite-state framework. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | 9 www.ling.unipotsdam.de/sfb/ Figure 2: Screenshot of Annis Linguistic Database 3.3 Symbolic and knowledge-based. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | We confirm the finding by Callison-Burch et al. (2006) that the rule-based system of Systran is not adequately appreciated by BLEU. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | Gazdar (1985) argues this is the appropriate analysis of unbounded dependencies in the hypothetical Scandinavian language Norwedish. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | One obvious application is information extraction. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | 37 84. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | However, it is robust, efficient, and easy to implement.4 To perform the maximization in (7), we used the popular L-BFGS algorithm (Liu and Nocedal, 1989), which requires gradient information. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Sometimes, however, these beliefs can be contradictory. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | A token that is allCaps will also be initCaps. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | About half of the participants of last year's shared task participated again. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | The test set included 2000 sentences from the Europarl corpus, but also 1064 sentences out-of-domain test data. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Trying to integrate constituent ordering and choice of referring expressions, (Chiarcos 2003) developed a numerical model of salience propagation that captures various factors of author's intentions and of information structure for ordering sentences as well as smaller constituents, and picking appropriate referring expressions.10 Chiarcos used the PCC annotations of co-reference and information structure to compute his numerical models for salience projection across the generated texts. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | The semantic agreement KS eliminates some candidates, but also provides positive evidence in one case: if the candidate and anaphor both have semantic tags human, company, date, or location that were assigned via NER or the manually labeled dictionary entries. |
This assumption, however, is not inherent to type-based tagging models. | 0 | The hyperparameters α and β represent the concentration parameters of the token- and type-level components of the model respectively. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | 6One of our experimental settings lacks document boundaries, and we used this approximation in both settings for consistency. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | Terrorism systems must distinguish between people who perpetrate a crime and people who are victims of a crime. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | This class-based model gives reasonable results: for six radical classes, Table 1 gives the estimated cost for an unseen hanzi in the class occurring as the second hanzi in a double GIVEN name. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | This feature ft incorporates information from the smoothed graph and prunes hidden states that are inconsistent with the thresholded vector tx. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | The role that each noun phrase plays in the kidnapping event is key to distinguishing these cases. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | The remaining arcs are marked OOV. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | Similarly Figures 1 and 2 show how the isolated constituent precision varies by sentence length and the size of the span of the hypothesized constituent. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | Search hypotheses are processed separately according to their coverage set C. The best scored hypothesis for each coverage set is computed: QBeam(C) = max e;e0 ;S;j Qe0 (e; S; C; j) The hypothesis (e0; e; S; C; j) is pruned if: Qe0 (e; S; C; j) < t0 QBeam(C); where t0 is a threshold to control the number of surviving hypotheses. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | We are currently working on a complete open source implementation of a training and decoding system, which should become available over the summer. pus, from which also the in-domain test set is taken. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | The table shows that the lexicon tag frequency predicated by our full model are the closest to the gold standard. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | For statistics on this test set, refer to Figure 1. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | On each language we investigate the contribution of each component of our model. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Therefore, performance is more closely tied to the underlying data structure than to the cache. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Given a key k, it estimates the position If the estimate is exact (A[pivot] = k), then the algorithm terminates succesfully. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | Search hypotheses are processed separately according to their coverage set C. The best scored hypothesis for each coverage set is computed: QBeam(C) = max e;e0 ;S;j Qe0 (e; S; C; j) The hypothesis (e0; e; S; C; j) is pruned if: Qe0 (e; S; C; j) < t0 QBeam(C); where t0 is a threshold to control the number of surviving hypotheses. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | Table 2 shows results for both settings and all methods described in sections 2 and 3. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | There has been additional recent work on inducing lexicons or other knowledge sources from large corpora. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | As you can see in the figure, the accuracy for the domain is quite high except for the "agree" set, which contains various expressions representing different relationships for an IE application. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | In contrast, our morphological probabilities are based on a unigram, lexeme-based model, and all other (local and non-local) contextual considerations are delegated to the PCFG. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | Each parse is converted into a set of constituents represented as a tuples: (label, start, end). |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | In the first stage, we run a single step of label propagation, which transfers the label distributions from the English vertices to the connected foreign language vertices (say, Vf�) at the periphery of the graph. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | If there are too many distinct states, the decoder prunes low-scoring partial hypotheses, possibly leading to a search error. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | Almost all annotators reported difficulties in maintaining a consistent standard for fluency and adequacy judgements, but nevertheless most did not explicitly move towards a ranking-based evaluation. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | IdentiFinder '99's results are considerably better than IdentiFinder '97's. IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | 18 77. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 6 Results and Analysis. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Crucially, the conventional orthographic form of MSA text is unvocalized, a property that results in a deficient graphical representation. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 86 78. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | (For some recent corpus-based work on Chinese abbreviations, see Huang, Ahrens, and Chen [1993].) |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | In considering the recognition of these languages, we were forced to be more specific regarding the relationship between the structures derived by these formalisms and the substrings they span. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | At most one feature in this group will be set to 1. |
The corpus was annoted with different linguitic information. | 0 | For effectively annotating connectives/scopes, we found that existing annotation tools were not well-suited, for two reasons: • Some tools are dedicated to modes of annotation (e.g., tiers), which could only quite un-intuitively be used for connectives and scopes. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | Similar behavior is observed when adding features. |
A beam search concept is applied as in speech recognition. | 0 | e0; e are the last two target words, C is a coverage set for the already covered source positions and j is the last position visited. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | The first 3770 trees of the resulting set then were used for training, and the last 418 are used testing. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | A few annotators suggested to break up long sentences into clauses and evaluate these separately. |
There is no global pruning. | 0 | A search restriction especially useful for the translation direction from German to English is presented. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | We do not adapt the alignment procedure for generating the phrase table from which the TM distributions are derived. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | For rules p : A fpo such that fp is constant function, giving an elementary structure, fp is defined such that fp() = (Si ... xi() where each z is a constant string. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | If (wi, r, wj) E A, we say that wi is the head of wj and wj a dependent of wi. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | The cost is computed as follows, where N is the corpus size and f is the frequency: (1) Besides actual words from the base dictionary, the lexicon contains all hanzi in the Big 5 Chinese code/ with their pronunciation(s), plus entries for other characters that can be found in Chinese text, such as Roman letters, numerals, and special symbols. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Consider first the examples in (2). |
There are clustering approaches that assign a single POS tag to each word type. | 0 | The table shows that the lexicon tag frequency predicated by our full model are the closest to the gold standard. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | Secondly, some segments in a proposed segment sequence may in fact be seen lexical events, i.e., for some p tag Prf(p —* (s, p)) > 0, while other segments have never been observed as a lexical event before. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | There are clearly eight orthographic words in the example given, but if one were doing syntactic analysis one would probably want to consider I'm to consist of two syntactic words, namely I and am. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | For each caseframe, BABAR collects the head nouns of noun phrases that were extracted by the caseframe in the training corpus. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | In all figures, we present the per-sentence normalized judgements. |
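For readers who want to work with records shaped like the table above, here is a minimal sketch in plain Python. The two example rows are copied verbatim from the table; the grouping step and the name `by_label` are illustrative assumptions, not part of any official loader for this dataset.

```python
# Each record pairs a claim sentence (source_text) with a source passage
# (target_text) and an integer label (0 or 1). Both rows below are copied
# from the table above; both happen to carry label 0.
rows = [
    {"source_text": "The AdaBoost algorithm was developed for supervised learning.",
     "label": 0,
     "target_text": "The second modification is more important, and is "
                    "discussed in the next section."},
    {"source_text": "There are clustering approaches that assign a single "
                    "POS tag to each word type.",
     "label": 0,
     "target_text": "This paper proposes a simple and effective tagging "
                    "method that directly models tag sparsity and other "
                    "distributional properties of valid POS tag assignments."},
]

# Group rows by label, as a consumer might do before training or evaluation.
by_label = {}
for r in rows:
    by_label.setdefault(r["label"], []).append(r)

print(len(by_label[0]))  # both sample rows carry label 0, so this prints 2
```

The source_text lengths in these samples also fall inside the 27–368 character range reported in the column header.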