source_text stringlengths 27-368 | label int64 0-1 | target_text stringlengths 1-5.38k |
---|---|---|
This paper conducted research in the area of automatic paraphrase discovery.
| 0 |
We focus on phrases which connect two Named Entities (NEs), and proceed in two stages.
|
There is no global pruning.
| 0 |
Again, the monotone search performs worst.
|
They focused on phrases which connect two Named Entities (NEs), and proceeded in two stages.
| 0 |
This remains as future work.
|
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
| 0 |
In the experiments below, we employ a data-driven deterministic dependency parser producing labeled projective dependency graphs,3 previously tested on Swedish (Nivre et al., 2004) and English (Nivre and Scholz, 2004).
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
θ has a belief value of 1.0, indicating complete certainty that the correct hypothesis is included in the set, and a plausibility value of 1.0, indicating that there is no evidence for competing hypotheses.5 As evidence is collected and the likely hypotheses are whittled down, belief is redistributed to subsets of θ.
|
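The belief and plausibility values quoted above follow the standard Dempster-Shafer definitions, which the excerpt itself does not spell out. A minimal sketch, assuming a basic mass assignment m over subsets of the hypothesis set θ:

```latex
% Standard Dempster-Shafer belief and plausibility for a mass function m over subsets of \theta:
\mathrm{Bel}(A) = \sum_{B \subseteq A} m(B), \qquad
\mathrm{Pl}(A) = \sum_{B \cap A \neq \emptyset} m(B)
% With all mass initially on \theta (m(\theta) = 1), Bel(\theta) = Pl(\theta) = 1,
% matching the values above; as evidence arrives, mass moves to smaller subsets of \theta.
```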
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
Thus the method makes the fairly strong assumption that the features can be partitioned into two types such that each type alone is sufficient for classification.
|
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
| 0 |
Before we turn to the evaluation, however, we need to introduce the data-driven dependency parser used in the latter experiments.
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
Table 2 compares the performance of our system on the setup of Cohen and Smith (2007) to the best results reported by them for the same tasks.
|
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
| 0 |
Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.
|
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
| 0 |
In our particular case the majority requires the agreement of only two parsers because we have only three.
|
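A minimal sketch of the majority-vote constituent combination described above, using hypothetical (start, end, label) tuples as constituents; with k = 3 parsers a constituent needs only two votes. This illustrates the idea, not the authors' implementation.

```python
from collections import Counter

def combine_parses(parses, votes_needed=2):
    """Keep every constituent proposed by at least `votes_needed` of the parsers.

    Each parse is modeled as a set of (start, end, label) constituents.
    """
    votes = Counter(c for parse in parses for c in set(parse))
    return {c for c, n in votes.items() if n >= votes_needed}

# With three parsers, a majority requires the agreement of only two of them.
p1 = {(0, 5, "S"), (0, 2, "NP"), (2, 5, "VP")}
p2 = {(0, 5, "S"), (0, 2, "NP"), (3, 5, "NP")}
p3 = {(0, 5, "S"), (2, 5, "VP")}
print(combine_parses([p1, p2, p3]))
# keeps (0, 5, 'S'), (0, 2, 'NP') and (2, 5, 'VP'); (3, 5, 'NP') has only one vote
```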
The AdaBoost algorithm was developed for supervised learning.
| 0 |
(8) can now be rewritten as an expression of the same form as the function Zt used in AdaBoost.
|
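The function Zt mentioned above is the usual AdaBoost normalization constant; the rewritten form of (8) is lost from the excerpt, but the standard quantity it is being compared to is the following (m examples, weights Dt, weak hypothesis ht, labels yi in {-1, +1}, and step size αt are the conventional AdaBoost symbols, not notation taken from the excerpt):

```latex
Z_t = \sum_{i=1}^{m} D_t(i)\,\exp\bigl(-\alpha_t\, y_i\, h_t(x_i)\bigr),
\qquad
D_{t+1}(i) = \frac{D_t(i)\,\exp\bigl(-\alpha_t\, y_i\, h_t(x_i)\bigr)}{Z_t}
```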
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
This work was supported in part by the National Science Foundation under grant IRI9704240.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
For previously unseen hanzi in given names, Chang et al. assign a uniform small cost; but we know that some unseen hanzi are merely accidentally missing, whereas others are missing for a reason, for example because they have a bad connotation.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
Unfortunately, we were unable to correctly run the IRSTLM quantized variant.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
As can be seen in Figure 3, the phrases in the "agree" set include completely different relationships, which are not paraphrases.
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
({1, …, m} \ {l1}, l) ∋ ({1, …, m} \ {l, l1, l2}, l′) →
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
The P(T|ψ) distribution, in English for instance, should have very low mass for the DT (determiner) tag, since determiners are a very small portion of the vocabulary.
|
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
| 0 |
Thus, the effects of spontaneous speech are present in the corpus, e.g. the syntactic structure of the sentence is rather less restricted, however the effect of speech recognition errors is not covered.
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
We then discuss how we adapt and generalize a boosting algorithm, AdaBoost, to the problem of named entity classification.
|
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
| 0 |
Another attempt at using global information can be found in (Borthwick, 1999).
|
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
| 0 |
The number of latent HMM states for each language in our experiments was set to the number of fine tags in the language’s treebank.
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
Compared to decoding, this task is cache-unfriendly in that repeated queries happen only as they naturally occur in text.
|
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
Of course, we.
|
They found replacing it with a ranked evaluation to be more suitable.
| 0 |
The best answer to this is: many research labs have very competitive systems whose performance is hard to tell apart.
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
IRSTLM and BerkeleyLM use this state function (and a limit of N −1 words), but it is more strict than necessary, so decoders using these packages will miss some recombination opportunities.
|
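A rough illustration of the (N-1)-word state function discussed above; this is a plain-Python sketch of the recombination idea, not the API of KenLM, IRSTLM, or BerkeleyLM. Two hypotheses can be recombined whenever their language-model states are equal, and keeping the last N-1 words is always a safe (if overly strict) state.

```python
def lm_state(words, n=5):
    """Recombination state for an n-gram LM: the last n-1 words of the hypothesis.

    Any continuation is scored identically from this point on, so hypotheses
    with equal states can be recombined.  A tighter state could drop words
    that can never be extended into a known n-gram.
    """
    return tuple(words[-(n - 1):])

h1 = ["the", "cat", "sat", "on", "the", "mat"]
h2 = ["a", "dog", "sat", "on", "the", "mat"]
print(lm_state(h1) == lm_state(h2))  # True: the last four words match, so recombine
```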
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
| 0 |
The effect of a second reference resolution classifier is not entirely the same as that of global features.
|
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
| 0 |
This suggests a direct parallel to (1): where ˜p(s, t) is a joint empirical distribution extracted from the IN dev set using the standard procedure.2 An alternative form of linear combination is a maximum a posteriori (MAP) combination (Bacchiani et al., 2004).
|
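For reference, the two combination schemes contrasted above usually take the following forms; the interpolation weight λ and the MAP prior weight τ are generic symbols assumed for this sketch, not values from the excerpt.

```latex
% Linear interpolation of in-domain (IN) and out-of-domain (OUT) estimates:
p(s \mid t) = \lambda\, p_{\mathrm{IN}}(s \mid t) + (1 - \lambda)\, p_{\mathrm{OUT}}(s \mid t)

% MAP combination in the style of Bacchiani et al. (2004), with in-domain counts c_IN:
p_{\mathrm{MAP}}(s \mid t) = \frac{c_{\mathrm{IN}}(s, t) + \tau\, p_{\mathrm{OUT}}(s \mid t)}{c_{\mathrm{IN}}(t) + \tau}
```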
BABAR performed successfully in both the terrorism and natural disaster domains, and the contextual-role knowledge proved especially beneficial for resolving pronouns.
| 0 |
Many researchers have developed coreference resolvers, so we will only discuss the methods that are most closely related to BABAR.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
However, their inverted variant implements a reverse trie using less CPU and the same amount of memory7.
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
7).
|
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
As described in Sproat (1995), the Chinese segmenter presented here fits directly into the context of a broader finite-state model of text analysis for speech synthesis.
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
It reads a file with a list of German connectives, and when a text is opened for annotation, it highlights all the words that show up in this list; these will be all the potential connectives.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
Clearly it is possible to write a rule that states that if an analysis Modal+ Verb is available, then that is to be preferred over Noun+ Verb: such a rule could be stated in terms of (finite-state) local grammars in the sense of Mohri (1993).
|
Most IE researchers have been creating paraphrase knowledge by hand, and for specific tasks.
| 0 |
Because of this threshold, very few NE instance pairs could be used and hence the variety of phrases was also limited.
|
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
| 0 |
⢠Some tools would allow for the desired annotation mode, but are so complicated (they can be used for many other purposes as well) that annotators take a long time getting used to them.
|
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
| 0 |
They are: 5We are grateful to an anonymous reviewer for pointing this out.
|
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
| 0 |
However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.
|
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
| 0 |
Ablation Analysis We evaluate the impact of incorporating various linguistic features into our model in Table 3.
|
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
| 0 |
Graph construction for structured prediction problems such as POS tagging is non-trivial: on the one hand, using individual words as the vertices throws away the context necessary for disambiguation; on the other hand, it is unclear how to define (sequence) similarity if the vertices correspond to entire sentences.
|
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
| 0 |
In the interest of testing the robustness of these combining techniques, we added a fourth, simple nonlexicalized PCFG parser.
|
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
Table 5 provides insight into the behavior of different models in terms of the tagging lexicon they generate.
|
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
For previously unseen hanzi in given names, Chang et al. assign a uniform small cost; but we know that some unseen hanzi are merely accidentally missing, whereas others are missing for a reason, for example because they have a bad connotation.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
In many cases these failures in recall would be fixed by having better estimates of the actual prob abilities of single-hanzi words, since our estimates are often inflated.
|
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
| 0 |
The development set contained 44088 constituents in 2416 sentences and the test set contained 30691 constituents in 1699 sentences.
|
There is no global pruning.
| 0 |
Depending on the threshold t0, the search algorithm may miss the globally optimal path which typically results in additional translation errors.
|
The manual evaluation of scoring translation on a graded scale from 1 to 5 seems to be very hard to perform.
| 0 |
This is the first time that we organized a large-scale manual evaluation.
|
Here we present two algorithms.
| 0 |
An edge indicates that the two features must have the same label.
|
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
(In this figure eps is c) be implemented, though, such as a maximal-grouping strategy (as suggested by one reviewer of this paper); or a pairwise-grouping strategy, whereby long sequences of unattached hanzi are grouped into two-hanzi words (which may have some prosodic motivation).
|
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
| 0 |
Second, BABAR performs reliable case resolution to identify anaphora that can be easily resolved using the lexical and syntactic heuristics described in Section 2.1.
|
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
| 0 |
In the "Person - Person" domain, 618 keywords are found, and in the "Country - Country" domain, 303 keywords are found.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
The contextual role knowledge had the greatest impact on pronouns: +13% recall for terrorism and +15% recall for disasters, with a +1% precision gain in terrorism and a small precision drop of -3% in disasters.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
7 Big 5 is the most popular Chinese character coding standard in use in Taiwan and Hong Kong.
|
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
For the automatic evaluation, we used BLEU, since it is the most established metric in the field.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
52 15.
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
Uniform Tag Prior (1TW) Our initial lexicon component will be uniform over possible tag assignments as well as word types.
|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
There, a lattice is used to represent the possible sentences resulting from an interpretation of an acoustic model.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
F1 85 Berkeley 80 Stanford.
|
This assumption, however, is not inherent to type-based tagging models.
| 0 |
9 50.2 +PRIOR best median 47.
|
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
| 0 |
The 1st block contains the simple baselines from section 2.1.
|
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
| 0 |
Finally, the concatenated 5 * 20% output is used to train the reference resolution component.
|
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
Taking Nc to be the number of examples an algorithm classified correctly (where all gold standard items labeled noise were counted as being incorrect), we calculated two measures of accuracy: See Tab.
|
These clusters are computed using an SVD variant without relying on transitional structure.
| 0 |
Specifically, the lexicon is generated as: P(T, W|ψ) = P(T)P(W|T). Word Type Features (FEATS): Past unsupervised POS work has derived benefits from features on word types, such as suffix and capitalization features (Hasan and Ng, 2009; Berg-Kirkpatrick et al., 2010).
|
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
| 0 |
We can represent any derived tree of a TAG by the two substrings that appear in its frontier, and then define how the adjunction operation concatenates the substrings.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
Clearly, for judges J1 and J2, taking J1 as the standard and computing the precision and recall for J2 yields the same results as taking J2 as the standard and computing for J1. 14 All evaluation materials, with the exception of those used for evaluating personal names, were drawn.
|
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
| 0 |
This has the potential drawback of increasing the number of features, which can make MERT less stable (Foster and Kuhn, 2009).
|
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
An inverted alignment is defined as follows: inverted alignment: i → j = b_i. Target positions i are mapped to source positions b_i.
|
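A small worked example of the inverted alignment defined above, with illustrative lengths and values only (I = 4 target positions, J = 5 source positions):

```latex
% Every target position i is mapped to exactly one source position b_i:
b_1^{I} = b_1 \cdots b_I, \qquad b_i \in \{1, \ldots, J\}
% e.g. (b_1, b_2, b_3, b_4) = (1, 3, 3, 5): target positions 2 and 3 both map to
% source position 3, while source positions 2 and 4 are not covered.
```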
Foster et all describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
| 0 |
The natural baseline approach is to concatenate data from IN and OUT.
|
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
| 0 |
Finally, we intend to explore more sophisticated instanceweighting features for capturing the degree of generality of phrase pairs.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
In general, different modalities ("planned to buy", "agreed to buy", "bought") were considered to express the same relationship within an extraction setting.
|
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
| 0 |
MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.
|
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
The distribution specifies the relative weight, or importance, of each example — typically, the weak learner will attempt to minimize the weighted error on the training set, where the distribution specifies the weights.
|
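The weighted error referred to above has the standard AdaBoost form; the symbols (distribution Dt, weak hypothesis ht, labels yi, m examples) are the conventional ones rather than notation from the excerpt.

```latex
\epsilon_t = \sum_{i=1}^{m} D_t(i)\,\mathbf{1}\bigl[h_t(x_i) \neq y_i\bigr]
```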
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
| 0 |
This paper does not necessarily reflect the position of the U.S. Government.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
We use MMAX for this annotation as well.
|
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
On the other hand, when all systems produce muddled output, but one is better, and one is worse, but not completely wrong, a judge is inclined to hand out judgements of 4, 3, and 2.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
The segmenter will give both analyses: cai2 neng2 'just be able,' and cai2neng2 'talent,' but the latter analysis is preferred since splitting these two morphemes is generally more costly than grouping them.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
The focus of our work is on the use of contextual role knowledge for coreference resolution.
|
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
| 0 |
We focus here instead on adapting the two most important features: the language model (LM), which estimates the probability p(w|h) of a target word w following an n-gram h; and the translation models (TM) p(s|t) and p(t|s), which give the probability of source phrase s translating to target phrase t, and vice versa.
|
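The LM and TM features described above typically enter a phrase-based decoder through a log-linear model; a generic sketch, where the feature weights λk (tuned separately, e.g. by MERT) are assumptions of this illustration rather than details from the excerpt:

```latex
t^{*} = \arg\max_{t} \sum_{k} \lambda_k\, h_k(s, t), \qquad
h_{\mathrm{LM}}(t) = \log p(t), \quad
h_{\mathrm{TM}_1}(s, t) = \log p(s \mid t), \quad
h_{\mathrm{TM}_2}(s, t) = \log p(t \mid s)
```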
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
Cohen and Smith (2007) chose a metric like SParseval (Roark et al., 2006) that first aligns the trees and then penalizes segmentation errors with an edit-distance metric.
|
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
| 0 |
Besides information structure, the second main goal is to enhance current models of rhetorical structure.
|
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
Arguably this consists of about three phonological words.
|
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
| 0 |
10) and trained both EM and L-BFGS for 1000 iterations.
|
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
| 0 |
Table 2 shows our complete set of results.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
As described in Sproat (1995), the Chinese segmenter presented here fits directly into the context of a broader finite-state model of text analysis for speech synthesis.
|
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
| 0 |
2 60.7 56.
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
For a sentence x and a state sequence z, a first order Markov model defines a distribution: (9) where Val(X) corresponds to the entire vocabulary.
|
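Equation (9) itself is not reproduced in the excerpt; a standard first-order HMM factorization consistent with the description, in which each emission is a multinomial over the whole vocabulary Val(X), would be:

```latex
p(x, z) = \prod_{i=1}^{|x|} p(z_i \mid z_{i-1})\; p(x_i \mid z_i),
\qquad x_i \in \mathrm{Val}(X)
% with z_0 a designated start state
```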
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
BABAR merely identifies caseframes that frequently co-occur in coreference resolutions.
|
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
| 0 |
Secondly, for all our models we provide better fine- and coarse-grained POS-tagging accuracy, and all pruned models outperform the Oracle results reported by them.12 In terms of syntactic disambiguation, even the simplest grammar pruned with HSPELL outperforms their non-Oracle results.
|
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
| 0 |
Table 2 shows results for both settings and all methods described in sections 2 and 3.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
Table 1 shows four words (for example 'Indeed', as in 'Indeed Saddam') whose unvocalized surface forms are indistinguishable.
|
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
| 0 |
We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
We run the baseline Moses system for the French-English track of the 2011 Workshop on Machine Translation,9 translating the 3003-sentence test set.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
Generalizing state minimization, the model could also provide explicit bounds on probability for both backward and forward extension.
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
An error count of 0.0 is assigned to a perfect translation, and an error count of 1.0 is assigned to a semantically and syntactically wrong translation.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
2.3 Rhetorical structure.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
In Table 5 we present results from small test corpora for the productive affixes handled by the current version of the system; as with names, the segmentation of morphologically derived words is generally either right or wrong.
|
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
| 0 |
Each of the constituents must have received at least ⌈(k+1)/2⌉ votes from the k parsers, so a ≥ ⌈(k+1)/2⌉ and b ≥ ⌈(k+1)/2⌉.
|
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
| 0 |
It is difficult to directly compare the Matsoukas et al results with ours, since our out-of-domain corpus is homogeneous; given heterogeneous training data, however, it would be trivial to include Matsoukas-style identity features in our instance-weighting model.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
conceptual relationship in the discourse.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
As we have noted in Section 2, the general semantic class to which a hanzi belongs is often predictable from its semantic radical.
|