| source_text (string, 27–368 chars) | label (int64, 0 or 1) | target_text (string, 1–5.38k chars) |
---|---|---|
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
| 0 |
As mentioned above, it is not obvious how to apply Daumé’s approach to multinomials, which do not have a mechanism for combining split features.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
The CFLex and CFNet knowledge sources provide positive evidence that a candidate NP and anaphor might be coreferent.
|
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
| 0 |
This is a straightforward technique that is arguably better suited to the adaptation task than the standard method of treating representative IN sentences as queries, then pooling the match results.
|
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
| 0 |
Automatic scores are computed on a larger test set than manual scores (3064 sentences vs. 300–400 sentences). For the collected manual judgements, we do not necessarily have the same sentence judged for both systems (judges evaluate 5 systems out of the 8–10 participating systems).
|
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
| 0 |
This is appropriate in cases where it is sanctioned by Bayes’ law, such as multiplying LM and TM probabilities, but for adaptation a more suitable framework is often a mixture model in which each event may be generated from some domain.
|
The corpus was annotated with different linguistic information.
| 0 |
Links can be of two different kinds: anaphoric or bridging (definite noun phrases picking up an antecedent via world-knowledge).
|
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure.
| 0 |
Two aspects of the corpus have been presented in previous papers ((Reitter, Stede 2003) on underspecified rhetorical structure; (Stede 2003) on the perspective of knowledge-based summarization).
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
Nonetheless, the prospect of a network of annotated discourse resources seems particularly promising if not only a single annotation layer is used but a whole variety of them, so that a systematic search for correlations between them becomes possible, which in turn can lead to more explanatory models of discourse structure.
|
Their results show that their high-performance NER uses less training data than other systems.
| 0 |
If the token is a number string (such as one, two, etc.), then the feature NumberString is set to 1.
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
While processing the source sentence monotonically, the initial state I is entered whenever there are no uncovered positions to the left of the rightmost covered position.
|
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
| 0 |
The Bayes models were able to achieve significantly higher precision than their non-parametric counterparts.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
| 0 |
Given a set of n sentences, we can compute the sample mean x̄ and sample variance s² of the individual sentence judgements xi. The extent of the confidence interval [x−d, x+d] can be computed by d = 1.96 · s/√n (6). Pairwise Comparison: As for the automatic evaluation metric, we want to be able to rank different systems against each other, for which we need assessments of statistical significance on the differences between a pair of systems.
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
Thus, the effects of spontaneous speech are present in the corpus, e.g. the syntactic structure of the sentence is rather less restricted; however, the effect of speech recognition errors is not covered.
|
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
| 0 |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
|
Their results show that their high-performance NER uses less training data than other systems.
| 0 |
Global features are extracted from other occurrences of the same token in the whole document.
|
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
After tagging a large corpus with an automatic NE tagger, the method tries to find sets of paraphrases automatically without being given a seed phrase or any kind of cue.
|
A beam search concept is applied as in speech recognition.
| 0 |
For a given partial hypothesis (C; j), the order in which the cities in C have been visited can be ignored (except j), only the score for the best path reaching j has to be stored.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
However, using the top-level semantic classes of WordNet proved to be problematic because the class distinctions are too coarse.
|
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
Specifically, for the ith word type, the set of token-level tags associated with token occurrences of this word, denoted t(i), must all take the value Ti to have nonzero mass. Thus in the context of Gibbs sampling, if we want to block sample Ti with t(i), we only need sample values for Ti and consider this setting of t(i).
|
This topic has been getting more attention, driven by the needs of various NLP applications.
| 0 |
Gather phrases using keywords Now, the keyword with the top TF/ITF score is selected for each phrase.
|
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
| 0 |
However, since we extracted the test corpus automatically from web sources, the reference translation was not always accurate, due to sentence alignment errors, or because translators did not adhere to a strict sentence-by-sentence translation (say, using pronouns when referring to entities mentioned in the previous sentence).
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
The first modification — cautiousness — is a relatively minor change.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
In the co-training case, (Blum and Mitchell 98) argue that the task should be to induce functions f1 and f2 such that f1 and f2 (1) correctly classify the labeled examples, and (2) agree with each other on the unlabeled examples.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
The semantic caseframe expectations are used in two ways.
|
This assumption, however, is not inherent to type-based tagging models.
| 0 |
The equation for sampling a single type-level assignment Ti is given by … [Figure 2: Graph of the one-to-one accuracy of our full model (+FEATS) under the best hyperparameter setting by iteration (see Section 5).]
|
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
| 0 |
Inflectional features marking pronominal elements may be attached to different kinds of categories marking their pronominal complements.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
We have shown that, at least given independent human judgments, this is not the case, and that therefore such simplistic measures should be mistrusted.
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
Replacing this with a ranked evaluation seems to be more suitable.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
SRILM (Stolcke, 2002) is widely used within academia.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
For each domain, we created a semantic dictionary by doing two things.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
Our motivation for using DempsterShafer is that it provides a well-principled framework for combining evidence from multiple sources with respect to competing hypotheses.
|
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
Since the transducers are built from human-readable descriptions using a lexical toolkit (Sproat 1995), the system is easily maintained and extended.
|
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
| 0 |
It has been shown in (Chandra et al., 1981) that if M works in space log n then there is a deterministic TM which accepts the same language in polynomial time.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
As in boosting, the algorithm works in rounds.
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
Dynamic programming efficiently scores many hypotheses by exploiting the fact that an N-gram language model conditions on at most N − 1 preceding words.
|
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
| 0 |
We would also like to thank Amarnag Subramanya for helping us with the implementation of label propagation and Shankar Kumar for access to the parallel data.
|
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
| 0 |
Table 4 shows how much the Bayes switching technique uses each of the parsers on the test set.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
Words and punctuation that appear in brackets are considered optional.
|
The AdaBoost algorithm was developed for supervised learning.
| 0 |
The approach uses both spelling and contextual rules.
|
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
| 0 |
Experiments using data from the Prague Dependency Treebank show that the combined system can handle nonprojective constructions with a precision sufficient to yield a significant improvement in overall parsing accuracy.
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
Sentences and systems were randomly selected and randomly shuffled for presentation.
|
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
| 0 |
While automatic measures are an invaluable tool for the day-to-day development of machine translation systems, they are only an imperfect substitute for human assessment of translation quality, or as the acronym BLEU puts it, a bilingual evaluation understudy.
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
The BLEU metric, as all currently proposed automatic metrics, is occasionally suspected to be biased towards statistical systems, especially the phrase-based systems currently in use.
|
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
It has no syntactic function.
|
A beam search concept is applied as in speech recognition.
| 0 |
Our new approach, which is called quasi-monotone search, processes the source sentence monotonically, while explicitly taking into account the positions of the German verbgroup.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
Quantization can be improved by jointly encoding probability and backoff.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
Some of these approaches (e.g., Lin, Chiang, and Su [1993]) attempt to identify unknown words, but do not actually tag the words as belonging to one or another class of expression.
|
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
Here, the pruning threshold t0 = 10.0 is used.
|
The texts were annotated with the RSTtool.
| 0 |
For all these annotation tasks, Götze developed a series of questions (essentially a decision tree) designed to lead the annotator to the appropriate judgement.
|
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
| 0 |
In addition to the tapes required to store the indices, M requires one work tape for splitting the substrings.
|
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
| 0 |
During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
Here, we con… This model is equivalent to the standard HMM except that it enforces the one-word-per-tag constraint.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
Two measures that can be used to compare judgments are: 1.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
Note that in our model, conditioned on T , there is precisely one t which has nonzero probability for the token component, since for each word, exactly one θt has support.
|
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
| 0 |
In the second part of the experiment, we applied the inverse transformation based on breadth-first search under the three different encoding schemes.
|
They focused on phrases which connect two Named Entities, and proceed in two stages.
| 0 |
Obviously "Lotus" is part of the following clause rather than being the object of "estimates" and the extracted instance makes no sense.
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
Skipped (K): The translation of up to one word may be postponed. Verb (V): The translation of up to two words may be anticipated.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
These performance gains transfer to improved system runtime performance; though we focused on Moses, our code is the best lossless option with cdec and Joshua.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
Much confusion has been sown about Chinese writing by the use of the term ideograph, suggesting that hanzi somehow directly represent ideas.
|
There is no global pruning.
| 0 |
In this case, we have no finite-state restrictions for the search space.
|
A beam search concept is applied as in speech recognition.
| 0 |
We can do that . Input: Das ist zu knapp , weil ich ab dem dritten in Kaiserslautern bin . Genaugenommen nur am dritten . Wie ware es denn am ahm Samstag , dem zehnten Februar ? MonS: That is too tight , because I from the third in Kaiserslautern . In fact only on the third . How about ahm Saturday , the tenth of February ? QmS: That is too tight , because I am from the third in Kaiserslautern . In fact only on the third . Ahm how about Saturday , February the tenth ? IbmS: That is too tight , from the third because I will be in Kaiserslautern . In fact only on the third . Ahm how about Saturday , February the tenth ? Input: Wenn Sie dann noch den siebzehnten konnten , ware das toll , ja . MonS: If you then also the seventeenth could , would be the great , yes . QmS: If you could then also the seventeenth , that would be great , yes . IbmS: Then if you could even take seventeenth , that would be great , yes . Input: Ja , das kommt mir sehr gelegen . Machen wir es dann am besten so . MonS: Yes , that suits me perfectly . Do we should best like that . QmS: Yes , that suits me fine . We do it like that then best . IbmS: Yes , that suits me fine . We should best do it like that .
|
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
A crucial difference is that the number of parameters is greatly reduced as is the number of variables that are sampled during each iteration.
|
The corpus was annotated with different linguistic information.
| 0 |
When finished, the whole material is written into an XML-structured annotation file.
|
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
| 0 |
All improvements over the baseline are statistically significant beyond the 0.01 level (McNemar’s test).
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
In Figure 4, we displayed the number of system comparisons, for which we concluded statistical significance.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
Figure 3 shows examples of semantic expectations that were learned.
|
The AdaBoost algorithm was developed for supervised learning.
| 0 |
The algorithm in Fig.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
We found that contextual role knowledge was more beneficial for pronouns than for definite noun phrases.
|
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
They cluster NE instance pairs based on the words in the contexts using a bag- of-words method.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
In application settings, this may be a profitable strategy.
|
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
| 0 |
Syntactic universals are a well studied concept in linguistics (Carnie, 2002; Newmeyer, 2005), and were recently used in similar form by Naseem et al. (2010) for multilingual grammar induction.
|
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
| 0 |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
Personal names such as 周恩来 zhou1 en1-lai2 'Zhou Enlai.'
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
Given the way judgements are collected, human judges tend to use the scores to rank systems against each other.
|
This assumption, however, is not inherent to type-based tagging models.
| 0 |
Our model outperforms theirs on four out of five languages on the best hyperparameter setting and three out of five on the median setting, yielding an average absolute difference across languages of 12.9% and 3.9% for best and median settings respectively compared to their best EM or LBFGS performance.
|
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
If the expression is a word or a short phrase (like âcorporationâ and âcompanyâ), it is called a âsynonymâ.
|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
Each knowledge source then assigns a probability estimate to each candidate, which represents its belief that the candidate is the antecedent for the anaphor.
|
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
| 0 |
As can be seen from the last column in Table 1, both Head and Head+Path may theoretically lead to a quadratic increase in the number of distinct arc labels (Head+Path being worse than Head only by a constant factor), while the increase is only linear in the case of Path.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
[Figure 6: Plot of log frequency of base noun against log frequency of plural nouns; log(F)_base: R² = 0.20 (p < 0.005).]
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
|
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
In a few cases, the criteria for correctness are made more explicit.
|
This paper presents a maximum entropy-based named entity recognizer (NER).
| 0 |
(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
Future work along these lines will incorporate other layers of annotation, in particular the syntax information.
|
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
| 0 |
Cohen and Smith (2007) later on based a system for joint inference on factored, independent, morphological and syntactic components of which scores are combined to cater for the joint inference task.
|
All the texts were annotated by two people.
| 0 |
Currently, some annotations (in particular the connectives and scopes) have already moved beyond the core corpus; the others will grow step by step.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
The Berkeley parser gives state-of-the-art performance for all metrics.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
Whether a language even has orthographic words is largely dependent on the writing system used to represent the language (rather than the language itself); the notion "orthographic word" is not universal.
|
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
| 0 |
Two subjects are each given a calendar and they are asked to schedule a meeting.
|
Here we present two algorithms.
| 0 |
(2) was extended to have an additional, innermost loop over the (3) possible labels.
|
This assumption, however, is not inherent to type-based tagging models.
| 0 |
The token-level term is similar to the standard HMM sampling equations found in Johnson (2007).
|
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
Manual and Automatic Evaluation of Machine Translation between European Languages
|
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
| 0 |
In-domain Systran scores on this metric are lower than all statistical systems, even the ones that have much worse human scores.
|
The PROBING data structure uses linear probing hash tables and is designed for speed.
| 0 |
As noted in Section 4.4, disk cache state is controlled by reading the entire binary file before each test begins.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
In addition to the named-entity string (Maury Cooper or Georgia), a contextual predictor was also extracted.
|
The AdaBoost algorithm was developed for supervised learning.
| 0 |
To measure the contribution of each modification, a third, intermediate algorithm, Yarowsky-cautious was also tested.
|
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
In the ATB, :: b asta'adah is tagged 48 times as a noun and 9 times as verbal noun.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
(1992).
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
Tsarfaty (2006) argues that for Semitic languages determining the correct morphological segmentation is dependent on syntactic context and shows that increasing information sharing between the morphological and the syntactic components leads to improved performance on the joint task.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
Their default variant implements a forward trie, in which words are looked up in their natural left-to-right order.
|
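One row above quotes the per-sentence confidence-interval formula from the manual-evaluation target text, d = 1.96 · s/√n, giving the interval [x̄ − d, x̄ + d] around the mean of individual sentence judgements. A minimal sketch in plain Python (the function name is ours, not taken from any of the quoted papers):

```python
import math

def confidence_half_width(judgements, z=1.96):
    """Half-width d of the 95% confidence interval [mean - d, mean + d]
    around the mean of per-sentence judgements, via d = z * s / sqrt(n),
    where s is the sample standard deviation."""
    n = len(judgements)
    mean = sum(judgements) / n
    # Sample variance with Bessel's correction (n - 1 denominator).
    s2 = sum((x - mean) ** 2 for x in judgements) / (n - 1)
    return z * math.sqrt(s2) / math.sqrt(n)

# Example: adequacy scores on a 1-5 scale for a handful of sentences.
d = confidence_half_width([4, 3, 5, 2, 4, 3])
```

Two systems are then judged significantly different when their intervals do not overlap, which motivates the pairwise comparison described in the same row.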