| source_text (string, 27–368 chars) | label (int64, 0 or 1) | target_text (string, 1–5.38k chars) |
|---|---|---|
| NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. |
| They have made use of local and global features to deal with the instances of the same token in a document. | 0 | Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages. |
| Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | It is well known that language pairs such as English-German pose more challenges to machine translation systems than language pairs such as French-English. |
| The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Figure 4: Input lattice (top) and two segmentations (bottom) of the sentence 'How do you say octopus in Japanese?'. |
| Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | For derived words that occur in our corpus we can estimate these costs as we would the costs for an underived dictionary entry. |
| They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | A consequence of the ability to generate tree sets with this property is that CC's under this definition can generate the following language, which cannot be generated by either TAG's or HG's. |
| They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | A final alternate approach would be to combine weighted joint frequencies rather than conditional estimates, i.e. cI(s, t) + wλ(s, t)cO(s, t), suitably normalized. Such an approach could be simulated by a MAP-style combination in which separate λ(t) values were maintained for each t. This would make the model more powerful, but at the cost of having to learn to downweight OUT separately for each t, which we suspect would require more training data for reliable performance. |
| Here both parametric and non-parametric models are explored. | 0 | It is chosen such that the decisions it makes in including or excluding constituents are most probable under the models for all of the parsers. |
| The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | Figure 4 shows some such phrase sets based on keywords in the CC-domain. |
| Replacing this with a ranked evaluation seems to be more suitable. | 0 | More judgements would have enabled us to make better distinctions, but it is not clear what the upper limit is. |
| This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | phrase (markContainsVerb). |
| This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Since our goal is to perform well under these measures we will similarly treat constituents as the minimal substructures for combination. |
| Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | Naseem et al. (2009) and Snyder et al. |
| The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Section 4.1 explained that state s is stored by applications with partial hypotheses to determine when they can be recombined. |
| The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications. | 0 | Removing the leaves from the resulting tree yields a parse for L under G, with the desired probabilities. |
| They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | Since the lattice L for a given sentence W is determined by the morphological analyzer M, we have [formula omitted], which is precisely the formula corresponding to the so-called lattice parsing familiar from speech recognition. |
| In this paper, the authors are of the opinion that the sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | On each language we investigate the contribution of each component of our model. |
| This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Values in the trie are minimally sized at the bit level, improving memory consumption over trie implementations in SRILM, IRSTLM, and BerkeleyLM. |
| Starting from a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | Ignoring the identity of the target language words e and e′, the possible partial hypothesis extensions due to the IBM restrictions are shown in Table 2. |
| This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Hieu Hoang named the code “KenLM” and assisted with Moses along with Barry Haddow. |
| Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | We report token- and type-level accuracy in Tables 3 and 6 for all languages and system settings. |
| The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | (S, C, j): Not only the coverage set C and the positions j, j′, but also the verb group states S, S′ are taken into account. |
| Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | This overview is illustrated in Figure 1. |
| It is probably the first analysis of Arabic parsing of this kind. | 0 | [Garbled results-table fragment: Baseline (Petrov, 2009) and Berkeley (Sep. 09) parsers, all vs. gold POS, with scores including 0.809 and 0.839.] |
| The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | In fact, it is very difficult to maintain consistent standards on what (say) an adequacy judgement of 3 means, even for a specific language pair. |
| Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | It is a relatively frequent word in the domain, but it can be used in different extraction scenarios. |
| Combining multiple highly-accurate independent parsers yields promising results. | 0 | If the parse contains productions from outside our grammar the machine has no direct method for handling them (e.g. the resulting database query may be syntactically malformed). |
| The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Entries for 2 < n < N store a vocabulary identifier, probability, backoff, and an index into the n + 1-gram table. |
| They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement. | 0 | A totally non-stochastic rule-based system such as Wang, Li, and Chang's will generally succeed in such cases, but of course runs the risk of overgeneration wherever the single-hanzi word is really intended. |
| The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | Feature-based HMM model (Berg-Kirkpatrick et al., 2010): The KM model uses a variety of orthographic features and employs the EM or LBFGS optimization algorithm. Posterior regularization model (Graça et al., 2009): The G10 model uses the posterior regularization approach to ensure the tag sparsity constraint. |
| The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | While we used the standard metrics of the community, the way we presented translations and prompted for assessment differed from other evaluation campaigns. |
| Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | Its only purpose is … [Footnote 3: This follows since each θt has St − 1 parameters and …] |
| In this paper, the authors are of the opinion that the sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | In total there are O(K²) parameters associated with the transition parameters. |
| The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | The second concerns the methods used (if any) to extend the lexicon beyond the static list of entries provided by the machine-readable dictionary upon which it is based. |
| The AdaBoost algorithm was developed for supervised learning. | 0 | Set the decision list to include all rules whose (smoothed) strength is above some threshold Pmin. |
| Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | The BLEU metric, like all currently proposed automatic metrics, is occasionally suspected to be biased towards statistical systems, especially the phrase-based systems currently in use. |
| Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | The average fluency judgement per judge ranged from 2.33 to 3.67; the average adequacy judgement ranged from 2.56 to 4.13. |
| Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing the use of a caseframe network for anaphora resolution and of information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | In this section, we describe the seven general knowledge sources and explain how the Dempster-Shafer model makes resolutions. |
| However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | Once the lexicon has been drawn, the model proceeds similarly to the standard token-level HMM: emission parameters θ are generated conditioned on tag assignments T. We also draw transition parameters φ. |
| Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | We therefore also normalized judgements on a per-sentence basis. |
| These clusters are computed using an SVD variant without relying on transitional structure. | 0 | We are especially grateful to Taylor Berg-Kirkpatrick for running additional experiments. |
| This paper talks about Unsupervised Models for Named Entity Classification. | 0 | An important reason for separating the two types of features is that this opens up the possibility of theoretical analysis of the use of unlabeled examples. |
| This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC). |
| The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | We use a solution to this problem similar to the one presented in (Och et al., 1999), where target words are joined during training. |
| The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | Thus at each iteration the algorithm is forced to pick features for the location, person and organization in turn for the classifier being trained. |
| The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | The judgements tend to be done more in the form of a ranking of the different systems. |
| In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | Each distance in the traveling salesman problem now corresponds to the negative logarithm of the product of the translation, alignment and language model probabilities. |
| The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | Once HMM parameters (θ, φ) are drawn, a token-level tag and word sequence, (t, w), is generated in the standard HMM fashion: a tag sequence t is generated from φ. |
| They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | 'Malaysia.' |
| This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | The second weakness is purely conceptual, and probably does not affect the performance of the model. |
| Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | One of the difficulties in Natural Language Processing is the fact that there are many ways to express the same thing or event. |
| The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | In Table 5 we present results from small test corpora for the productive affixes handled by the current version of the system; as with names, the segmentation of morphologically derived words is generally either right or wrong. |
| The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | The first author is supported by a National Defense Science and Engineering Graduate (NDSEG) fellowship. |
| This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | As with personal names, we also derive an estimate from text of the probability of finding a transliterated name of any kind (PTN). |
| This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Such constraints are derived from training data, expressing some relationship between features and outcome. |
| The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | Confidence interval: To estimate confidence intervals for the average mean scores for the systems, we use standard significance testing. |
| Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | There is twice as much language modelling data, since training data for the machine translation system is filtered against sentences of length larger than 40 words. |
| They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | As with personal names, we also derive an estimate from text of the probability of finding a transliterated name of any kind (PTN). |
| These clusters are computed using an SVD variant without relying on transitional structure. | 0 | We have presented a method for unsupervised part-of-speech tagging that considers a word type and its allowed POS tags as a primary element of the model. |
| The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | The ATB is disadvantaged by having fewer trees with longer average … [Footnote 5: LDC A-E catalog numbers: LDC2008E61 (ATBp1v4), LDC2008E62 (ATBp2v3), and LDC2008E22 (ATBp3v3.1).] |
| These clusters are computed using an SVD variant without relying on transitional structure. | 0 | The type-level tag assignments T generate features associated with word types W. The tag assignments constrain the HMM emission parameters θ. |
| The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | However, a recent study (Callison-Burch et al., 2006) pointed out that this correlation may not always be strong. |
| Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | To evaluate the performance on the segmentation task, we report SEG, the standard harmonic mean of segmentation precision and recall, F1 (as defined in Bar-Haim et al. (2005) and Tsarfaty (2006)), as well as the segmentation accuracy measure SEGTok, indicating the percentage of input tokens assigned the correct exact segmentation (as reported by Cohen and Smith (2007)). |
| Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | This withdrawal by the treasury secretary is understandable, though. |
| Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | We consider two variants of Berg-Kirkpatrick et al. |
| They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | each word in the lexicon whether or not each string is actually an instance of the word in question. |
| One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | Another technique for parse hybridization is to use a naïve Bayes classifier to determine which constituents to include in the parse. |
| They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | In this paper we study the problem of using a parallel corpus from a background domain (OUT) to improve performance on a target domain (IN) for which a smaller amount of parallel training material, though adequate for reasonable performance, is also available. |
| In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back. | 0 | There is twice as much language modelling data, since training data for the machine translation system is filtered against sentences of length larger than 40 words. |
| The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | [Garbled score fragment: 00 76.] |
| The experimental tests are carried out on the Verbmobil task, which is a limited-domain spoken-language task. | 0 | Here, $ is the sentence boundary symbol, which is thought to be at position 0 in the target sentence. |
| Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Training and testing is based on the Europarl corpus. |
| In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back. | 0 | We can check what the consequences of less manual annotation of results would have been: with half the number of manual judgements, we can distinguish about 40% of the systems, 10% less. |
| Here both parametric and non-parametric models are explored. | 0 | The difference in precision between similarity and Bayes switching techniques is significant, but the difference in recall is not. |
| The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | [Garbled score fragment: 47 78.] |
| Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | None of the models we have presented utilize features associated with a particular constituent (i.e. the label, span, parent label, etc.) to influence parser preference. |
| This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Preferences for constituent order (especially in languages with relatively free word order) often belong to this group. |
| The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | As a (crude) approximation, we normalize the extraction patterns with respect to active and passive voice and label those extractions as agents or patients. |
| The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | [Garbled table-footnote fragment: "These con…"; (a) Uses lossy compression. (b) The 8-bit quantized variant returned incorrect probabilities as explained in Section 3.] |
| Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | Our “Projection” baseline is able to benefit from the bilingual information and greatly improves upon the monolingual baselines, but falls short of the “No LP” model by 2.5% on average. |
| In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | The experimental tests are carried out on the Verbmobil task (German-English, 8000-word vocabulary), which is a limited-domain spoken-language task. |
| Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and how these factors bear on syntactic disambiguation. | 0 | We extend the Stanford parser to accept pre-generated lattices, where each word is represented as a finite state automaton. |
| It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Overall, it gives improvements ranging from 1.1% for German to 14.7% for Italian, for an average improvement of 8.3% over the unsupervised feature-HMM model. |
| The texts were annotated with the RSTTool. | 0 | Here, annotation proceeds in two phases: first, the domains and the units of IS are marked as such. |
| The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | We dropped, however, one of the languages, Finnish, partly to keep the number of tracks manageable, partly because we assumed that it would be hard to find enough Finnish speakers for the manual evaluation. |
| In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | However, those methods need initial seeds, so the relation between entities has to be known in advance. |
| The use of global features has shown excellent results on the MUC-6 and MUC-7 test data. | 0 | Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. |
| The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | With the exception of the Dutch data set, no other processing is performed on the annotated tags. |
| Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | This is an iterative method that improves the estimation of the parameters at each iteration. |
| Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | There may occasionally be a system clearly at the top or at the bottom, but most systems are so close that it is hard to distinguish them. |
| Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Presenting the output of several systems allows the human judge to make more informed judgements, contrasting the quality of the different systems. |
| Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | The easiest language pair according to BLEU (English-French: 28.33) received worse manual scores than the hardest (English-German: 14.01). |
| The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | For example, in "..., says Mr. Cooper, a vice president of ...", both a spelling feature (that the string contains Mr.) and a contextual feature (that president modifies the string) are strong indications that Mr. Cooper is of type Person. |
| Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Roughly speaking, previous work can be divided into three categories, namely purely statistical approaches, purely lexical rule-based approaches, and approaches that combine lexical information with statistical information. |
| This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | In Figure 4 we show an example of variation between the parsing models. |
| In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | The model starts by generating a tag assignment for each word type in a vocabulary, assuming one tag per word. |
| The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | [Garbled score fragment: 4 53.7 43.] |
| They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | M(wi) = Li. |
| The AdaBoost algorithm was developed for supervised learning. | 0 | The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). |
| Two general approaches are presented and two combination techniques are described for each approach. | 0 | This technique has the advantage of requiring no training, but it has the disadvantage of treating all parsers equally even though they may have differing accuracies or may specialize in modeling different phenomena. |
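The column header at the top of the table declares each row's schema: a source_text string (27–368 chars), a binary int64 label, and a target_text string (1–5.38k chars). As a minimal sketch of how such rows might be represented and sanity-checked in code (the Row class and the abridged example strings below are illustrative, not part of the dataset):

```python
from dataclasses import dataclass

@dataclass
class Row:
    source_text: str  # 27-368 characters in this dataset
    label: int        # int64 taking values 0 or 1
    target_text: str  # 1-5.38k characters in this dataset

# Two rows abridged from the table above; texts shortened for brevity.
rows = [
    Row("NER is useful in many NLP applications ...", 0,
        "Case and Zone: If the token starts with a capital letter ..."),
    Row("The AdaBoost algorithm was developed for supervised learning.", 0,
        "The first method builds on results from (Yarowsky 95) ..."),
]

# Sanity-check the rows against the declared schema.
assert all(isinstance(r.label, int) and r.label in (0, 1) for r in rows)
positives = sum(r.label for r in rows)
print(f"{len(rows)} rows, {positives} with label 1")
```

Rows for one class can then be selected with an ordinary comprehension, e.g. `[r for r in rows if r.label == 0]`.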