source_text (string, 27–368 chars) | label (int64, 0 or 1) | target_text (string, 1–5.38k chars)
---|---|---|
Combining multiple highly-accurate independent parsers yields promising results.
| 0 |
None of the models we have presented utilize features associated with a particular constituent (i.e. the label, span, parent label, etc.) to influence parser preference.
|
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
| 0 |
There is a fairly large body of work on SMT adaptation.
|
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
| 0 |
The results of the manual and automatic evaluation of the participating system translations is detailed in the figures at the end of this paper.
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
As discussed in more detail in §3, we use two types of vertices in our graph: on the foreign language side vertices correspond to trigram types, while the vertices on the English side are individual word types.
|
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
| 0 |
.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
Currently, some annotations (in particular the connectives and scopes) have already moved beyond the core corpus; the others will grow step by step.
|
The texts were annotated with the RSTtool.
| 0 |
One conclusion drawn from this annotation effort was that for humans and machines alike, 2 www.sfs.nphil.unituebingen.de/Elwis/stts/ stts.html 3 www.coli.unisb.de/sfb378/negra-corpus/annotate.
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 1 |
Hence we decided to select ten commentaries to form a "core corpus", for which the entire range of annotation levels was realized, so that experiments with multi-level querying could commence.
|
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
This section describes an algorithm based on boosting algorithms, which were previously developed for supervised machine learning problems.
|
Their results show that their high-performance NER uses less training data than other systems.
| 0 |
For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.
|
Here we present two algorithms.
| 0 |
We again adopt an approach where we alternate between two classifiers: one classifier is modified while the other remains fixed.
|
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
| 0 |
In many cases, there is an even stronger restriction: over large portions of the source string, the alignment is monotone.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
For example, it is well-known that one can build a finite-state bigram (word) model by simply assigning a state Si to each word Wi in the vocabulary, and having (word) arcs leaving that state weighted such that for each Wj and corresponding arc aj leaving Si, the cost on aj is the bigram cost of WiWj- (Costs for unseen bigrams in such a scheme would typically be modeled with a special backoff state.)
|
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
| 0 |
Specifically, for both settings we report results on the median run for each setting.
|
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
| 0 |
(b) does the translation have the same meaning, including connotations?
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
A morphological analyzer M : W → L is a function mapping sentences in Hebrew (W ∈ W) to their corresponding lattices (M(W) = L ∈ L).
|
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
| 0 |
The accuracy of the sets in representing paraphrase ranged from 73% to 99%, depending on the NE categories and set sizes; the accuracy of the links for two evaluated domains was 73% and 86%.
|
The AdaBoost algorithm was developed for supervised learning.
| 0 |
The method uses a "soft" measure of the agreement between two classifiers as an objective function; we described an algorithm which directly optimizes this function.
|
The corpus was annotated with different linguistic information.
| 0 |
A number of PCC commentaries will be read by professional news speakers and prosodic features be annotated, so that the various annotation layers can be set into correspondence with intonation patterns.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
We report results for the best and median hyperparameter settings obtained in this way.
|
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
Many human evaluation metrics have been proposed.
|
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
| 0 |
The new algorithm, which we call CoBoost, uses labeled and unlabeled data and builds two classifiers in parallel.
|
This paper presents a maximum entropy-based named entity recognizer (NER).
| 0 |
Statistical NERs usually find the sequence of tags that maximizes the probability P(t1, ..., tn | s), where s is the sequence of words in a sentence, and t1, ..., tn is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
In Table 7 we give results for several evaluation metrics.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
Whereas Arabic linguistic theory assigns (1) and (2) to the class of pseudo verbs inna and her sisters since they can be inflected, the ATB conventions treat (2) as a complementizer, which means that it must be the head of SBAR.
|
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
| 0 |
This paper does not necessarily reflect the position of the U.S. Government.
|
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
| 0 |
The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.
|
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
| 0 |
From the point of view of computational implementation this can be problematic, since the inclusion of non-projective structures makes the parsing problem more complex and therefore compromises efficiency and in practice also accuracy and robustness.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
2.1.1 Lexical Seeding It is generally not safe to assume that multiple occurrences of a noun phrase refer to the same entity.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
However, the learning curves in Figure 3 show that the Berkeley parser does not exceed our manual grammar by as wide a margin as has been shown for other languages (Petrov, 2009).
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
While Gan's system incorporates fairly sophisticated models of various linguistic information, it has the drawback that it has only been tested with a very small lexicon (a few hundred words) and on a very small test set (thirty sentences); there is therefore serious concern as to whether the methods that he discusses are scalable.
|
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
| 0 |
Our full model (“With LP”) outperforms the unsupervised baselines and the “No LP” setting for all languages.
|
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
(1992).
|
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
| 0 |
Section 2.2 then describes our representation for contextual roles and four types of contextual role knowledge that are learned from the training examples.
|
This paper conducted research in the area of automatic paraphrase discovery.
| 0 |
They cluster NE instance pairs based on the words in the contexts using a bag- of-words method.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
One hopes that such a corpus will be forthcoming.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
9 65.5 46.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
95 76.
|
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
| 0 |
kann 7.nicht 8.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
| 0 |
From the definition of TAG's, it follows that the choice of adjunction is not dependent on the history of the derivation.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
As we have seen, the lexicon of basic words and stems is represented as a WFST; most arcs in this WFST represent mappings between hanzi and pronunciations, and are costless.
|
They have made use of local and global features to deal with the instances of the same token in a document.
| 0 |
We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.
|
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
| 0 |
At first glance, this seems only peripherally related to our work, since the specific/general distinction is made for features rather than instances.
|
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
| 0 |
Log-linear combination (loglin) improves on this in all cases, and also beats the pure IN system.
|
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
| 0 |
With this restriction the resulting tree sets will have independent paths.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
4.2 A Sample Segmentation Using Only Dictionary Words Figure 4 shows two possible paths from the lattice of possible analyses of the input sentence 'How do you say octopus in Japanese?' previously shown in Figure 1.
|
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
| 0 |
2.1 Inverted Alignments.
|
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
| 0 |
We propose maximizing P(t1, ..., tn | s, D), where t1, ..., tn is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. Our system is built on a maximum entropy classifier.
|
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
| 0 |
Using this encoding scheme, the arc from je to Z in Figure 2 would be assigned the label AuxP↑Sb (signifying an AuxP that has been lifted from a Sb).
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
Our model outperforms theirs on four out of five languages on the best hyperparameter setting and three out of five on the median setting, yielding an average absolute difference across languages of 12.9% and 3.9% for best and median settings respectively compared to their best EM or LBFGS performance.
|
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
| 0 |
The addition of vertical markovization enables non-pruned models to outperform all previously reported results. Cohen and Smith (2007) make use of a parameter (α) which is tuned separately for each of the tasks.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
For the PROBING implementation, hash table sizes are in the millions, so the most relevant values are on the right side of the graph, where linear probing wins.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
We combined evidence from four contextual role knowledge sources with evidence from seven general knowledge sources using a Dempster-Shafer probabilistic model.
|
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectors, coreference and informative structure.
| 0 |
Different annotations of the same text are mapped into the same data structure, so that search queries can be formulated across annotation levels.
|
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
The relevance of the distinction between, say, phonological words and, say, dictionary words is shown by an example like zhong1hua2 ren2min2 gong4he2guo2 (China people republic) 'People's Republic of China.'
|
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
| 0 |
In all figures, we present the per-sentence normalized judgements.
|
The resulting model is compact, efficiently learnable and linguistically expressive.
| 0 |
Model Overview The model starts by generating a tag assignment T for each word type in a vocabulary, assuming one tag per word.
|
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
| 0 |
We are currently exploring such algorithms.
|
This paper presents a maximum entropy-based named entity recognizer (NER).
| 0 |
It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) ∏j αj^fj(h,o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
The P(W|T, ψ) term in the lexicon component now decomposes as P(W|T, ψ) = ∏_{i=1}^{n} P(Wi|Ti, ψ) = ∏_{i=1}^{n} ∏_v P(v|ψTi, f); such type-level distributions are not modeled by the standard HMM, which instead can model token-level frequency.
|
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
| 0 |
We train linear mixture models for conditional phrase pair probabilities over IN and OUT so as to maximize the likelihood of an empirical joint phrase-pair distribution extracted from a development set.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
For some language pairs (such as German-English) system performance is more divergent than for others (such as English-French), at least as measured by BLEU.
|
This paper conducted research in the area of automatic paraphrase discovery.
| 0 |
Gather phrases using keywords Now, the keyword with the top TF/ITF score is selected for each phrase.
|
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
| 0 |
They demonstrated this with the comparison of statistical systems against (a) manually post-edited MT output, and (b) a rule-based commercial system.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
A defining characteristic of MSA is the prevalence of discourse markers to connect and subordinate words and phrases (Ryding, 2005).
|
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
| 0 |
The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.
|
They have made use of local and global features to deal with the instances of the same token in a document.
| 0 |
This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).
|
Two general approaches are presented and two combination techniques are described for each approach.
| 0 |
This work was funded by NSF grant IRI-9502312.
|
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
| 0 |
Recent research has demonstrated that incorporating this sparsity constraint improves tagging accuracy.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
Morphologically derived words such as xue2sheng1+men0.
|
The texts were annotated with the RSTtool.
| 0 |
All commentaries have been annotated with rhetorical structure, using RSTTool4 and the definitions of discourse relations provided by Rhetorical Structure Theory (Mann, Thompson 1988).
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
A corpus of German newspaper commentaries has been assembled and annotated with different information (and currently, to different degrees): part-of-speech, syntax, rhetorical structure, connectives, co-reference, and information structure.
|
This topic has been getting more attention, driven by the needs of various NLP applications.
| 0 |
We then gather all phrases with the same keyword.
|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
Next, we describe four contextual role knowledge sources that are created from the training examples and the caseframes.
|
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
immediately by a Romanization into the pinyin transliteration scheme; numerals following each pinyin syllable represent tones.
|
The AdaBoost algorithm was developed for supervised learning.
| 0 |
Unsupervised Models for Named Entity Classification Collins
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
To see this, note that the first two terms in the above equation correspond to the function that AdaBoost attempts to minimize in the standard supervised setting (Equ.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
30 16.
|
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
| 0 |
Evaluation within a set The evaluation of paraphrases within a set of phrases which share a keyword is illustrated in Figure 4.
|
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
| 0 |
This model is equivalent to the standard HMM except that it enforces the one-word-per-tag constraint.
|
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
| 0 |
First, any grammar must involve a finite number of elementary structures, composed using a finite number of composition operations.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
Time for Moses itself to load, including loading the language model and phrase table, is included.
|
They focused on phrases which connect two Named Entities, and proceed in two stages.
| 0 |
[Hasegawa et al. 04] reported only on relation discovery, but one could easily acquire paraphrases from the results.
|
Here we present two algorithms.
| 0 |
(Riloff and Shepherd 97) describe a bootstrapping approach for acquiring nouns in particular categories (such as "vehicle" or "weapon" categories).
|
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
| 0 |
We empirically control for the effect of our heuristics to make sure our pruning does not undermine the objectives of our joint task.
|
They have made use of local and global features to deal with the instances of the same token in a document.
| 0 |
It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.
|
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
However, for our purposes it is not sufficient to represent the morphological decomposition of, say, plural nouns: we also need an estimate of the cost of the resulting word.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
52 77.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
| 0 |
It has been shown in (Chandra et al., 1981) that if M works in space log n then there is a deterministic TM which accepts the same language in polynomial time.
|
The AdaBoost algorithm was developed for supervised learning.
| 0 |
(Blum and Mitchell 98) go on to give PAC results for learning in the cotraining case.
|
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
5 64.7 42.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
We use w erations of sampling (see Figure 2 for a depiction).
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
We have not to date explored these various options.
|
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectors, coreference and informative structure.
| 0 |
Links can be of two different kinds: anaphoric or bridging (definite noun phrases picking up an antecedent via world-knowledge).
|
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
| 0 |
Unfortunately, we have much less data to work with than with the automatic scores.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
These clusters are computed using an SVD variant without relying on transitional structure.
|
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
| 0 |
The main disadvantage of manual evaluation is that it is time-consuming and thus too expensive to do frequently.
|
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
| 0 |
Set the decision list to include all rules whose (smoothed) strength is above some threshold Pmin.
|
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
Because these two words have identical complements, syntax rules are typically unhelpful for distinguishing between them.
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
When a token fmnh is to be interpreted as the lexeme sequence f/REL mnh/VB, the analysis introduces two distinct entities, the relativizer f (“that”) and the verb mnh (“counted”), and not as the complex entity “that counted”.
|
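A minimal sketch of loading and inspecting rows with the source_text / label / target_text schema shown above, using the Hugging Face datasets library. The CSV file name and split below are placeholders (assumptions, not part of this card); substitute the actual repository id or data files for this dataset.

```python
# Minimal loading sketch. Assumption: the rows above are available as a local
# CSV file "train.csv" with columns source_text, label, target_text; adjust the
# path or repository id to wherever this dataset actually lives.
from datasets import load_dataset

ds = load_dataset("csv", data_files={"train": "train.csv"})["train"]

# Print the first few rows to confirm the schema matches the card.
for row in ds.select(range(3)):
    source = row["source_text"]   # string, 27-368 chars per the schema above
    label = row["label"]          # int64, 0 or 1
    target = row["target_text"]   # string, 1-5.38k chars per the schema above
    print(label, "|", source[:60], "->", target[:60])
```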