source_text (string, lengths 27–368) | label (int64, 0 or 1) | target_text (string, lengths 1–5.38k)
---|---|---|
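The schema above pairs a source sentence with a binary label and a target text. A minimal sketch of how such records might be represented and filtered in plain Python follows; the field names and length bounds come from the header, while the sample rows and the `filter_by_label` helper are invented for illustration.

```python
# Sketch of the dataset schema: each record has a source sentence,
# a binary label, and a target text. Sample rows are invented.
from typing import List, TypedDict


class Record(TypedDict):
    source_text: str   # per the header: 27-368 characters
    label: int         # int64, 0 or 1
    target_text: str   # per the header: 1-5.38k characters


rows: List[Record] = [
    {"source_text": "Example source sentence.", "label": 0,
     "target_text": "Example target sentence."},
    {"source_text": "Another source sentence.", "label": 1,
     "target_text": "Another target sentence."},
]


def filter_by_label(records: List[Record], label: int) -> List[Record]:
    """Keep only the records carrying the given binary label."""
    return [r for r in records if r["label"] == label]


positives = filter_by_label(rows, 1)
```

A typical preprocessing pass over such a dump would iterate the records and split them by label before any downstream training or evaluation.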
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
We perform five runs with different random initializations of the sampling state.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
Along with locking and background kernel operations such as prefaulting, this explains why wall time is not one-eighth that of the single-threaded case. aLossy compression with the same weights. bLossy compression with retuned weights. the non-lossy options.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
Let H be the set of hanzi, p be the set of pinyin syllables with tone marks, and P be the set of grammatical part-of-speech labels.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
BABAR applies a Dempster-Shafer probabilistic model to make resolutions based on evidence from the contextual role knowledge sources as well as general knowledge sources.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
Table 1 shows results of the benchmark.
|
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
| 0 |
A very small excerpt from an Italian-English graph is shown in Figure 1.
|
These clusters are computed using an SVD variant without relying on transitional structure.
| 0 |
With the exception of the Dutch data set, no other processing is performed on the annotated tags.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
To differentiate between the coordinating and discourse separator functions of conjunctions (Table 3), we mark each CC with the label of its right sister (splitCC).
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
Particular relations are also consistent with particular hypotheses about the segmentation of a given sentence, and the scores for particular relations can be incremented or decremented depending upon whether the segmentations with which they are consistent are "popular" or not.
|
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
| 0 |
These three parsers have given the best reported parsing results on the Penn Treebank Wall Street Journal corpus (Marcus et al., 1993).
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
Human evaluation is one way to distinguish between the two cases.
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
We empirically control for the effect of our heuristics to make sure our pruning does not undermine the objectives of our joint task.
|
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
| 0 |
({1, …, m} \ {l1, l2}, l) → ({1, …, m − 1} \ {l1, l2, l3}, l′)
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
Process statistics are already collected by the kernel (and printing them has no meaningful impact on performance).
|
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
Table 9 shows that MADA produces a high quality segmentation, and that the effect of cascading segmentation errors on parsing is only 1.92% F1.
|
This paper talks about Pseudo-Projective Dependency Parsing.
| 0 |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
Finally, Section 5 explains how BABAR relates to previous work, and Section 6 summarizes our conclusions.
|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
Section 3 describes the complete coreference resolution model, which uses the contextual role knowledge as well as more traditional coreference features.
|
They found replacing it with a ranked evaluation to be more suitable.
| 0 |
Our initial experimentation with the evaluation tool showed that this is often too overwhelming.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
(2003), which gives 96.8% accuracy on the test set.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
We again assume a training set of n examples {x1 … xn} where the first m examples have labels {y1 … ym}, and the last (n − m) examples are unlabeled.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
(garbled figure excerpt: a Chinese segmentation/transliteration lattice; recoverable glosses include JAPANESE, OCTOPUS, HOW, SAY)
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
Our system fails in (a) because of shen1, a rare family name; the system identifies it as a family name, whereas it should be analyzed as part of the given name.
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
Our experiments consistently demonstrate that this model architecture yields substantial performance gains over more complex tagging counterparts.
|
This corpus has several advantages: it is annotated at different levels.
| 0 |
The choice of the genre commentary resulted from the fact that an investigation of rhetorical structure, its interaction with other aspects of discourse structure, and the prospects for its automatic derivation are the key motivations for building up the corpus.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
selected; and that recall is defined to be the number of correct hits divided by the number of items that should have been selected.
|
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
| 0 |
We then extract a set of possible tags tx(y) by eliminating labels whose probability is below a threshold value τ: We describe how we choose τ in §6.4.
|
In this paper, the authors note that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
(garbled numeric table fragment)
|
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
| 0 |
The paper is structured as follows.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
For instance, by altering the emission distribution parameters, Johnson (2007) encourages the model to put most of the probability mass on few tags.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
This is in general very difficult, given the extremely free manner in which Chinese given names are formed, and given that in these cases we lack even a family name to give the model confidence that it is identifying a name.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
However, MADA is language-specific and relies on manually constructed dictionaries.
|
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
First we mark any node that dominates (at any level) a verb; we consider POS tags when pre-terminals are the only intervening nodes between the nucleus and its bracketing (e.g., unaries, base NPs).
|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
2.2 Contextual Role Knowledge.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
This model is easily incorporated into the segmenter by building a WFST restrict ing the names to the four licit types, with costs on the arcs for any particular name summing to an estimate of the cost of that name.
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
We refer to (T, W) as the lexicon of a language and ψ for the parameters for their generation; ψ depends on a single hyperparameter β.
|
This paper conducted research in the area of automatic paraphrase discovery.
| 0 |
While there are other obstacles to completing this idea, we believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
(Again, the goal of also in structural features.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
4 Evaluation Results.
|
This topic has been getting more attention, driven by the needs of various NLP applications.
| 0 |
A similar explanation applies to the link to the "stake" set.
|
They focused on phrases which connect two Named Entities, and proceeded in two stages.
| 0 |
Recently, this topic has been getting more attention, as is evident from the Paraphrase Workshops in 2003 and 2004, driven by the needs of various NLP applications.
|
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
| 0 |
We place certain restrictions on the composition operations of LCFRS's, restrictions that are shared by the composition operations of the constrained grammatical systems that we have considered.
|
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
| 0 |
In this section, we will explain the algorithm step by step with examples.
|
Their results show that their high-performance NER uses less training data than other systems.
| 0 |
If is not initCaps, then (not-initCaps, ) is set to 1.
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
Specifically, +FEATS utilizes the tag prior as well as features (e.g., suffixes and orthographic features), discussed in Section 3, for the P(W |T, ψ) component.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
Table 2 Similarity matrix for segmentation judgments.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
Most languages that use Roman, Greek, Cyrillic, Armenian, or Semitic scripts, and many that use Indian-derived scripts, mark orthographic word boundaries; however, languages written in a Chinese-derived writing system, including Chinese and Japanese, as well as Indian-derived writing systems of languages like Thai, do not delimit orthographic words.1 Put another way, written Chinese simply lacks orthographic words.
|
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
In this paper, we will propose an unsupervised method to discover paraphrases from a large untagged corpus.
|
The authors show that the PATB is similar to other treebanks but that annotation consistency remains low.
| 0 |
It is well-known that constituency parsing models designed for English often do not generalize easily to other languages and treebanks.1 Explanations for this phenomenon have included the relative informativeness of lexicalization (Dubey and Keller, 2003; Arun and Keller, 2005), insensitivity to morphology (Cowan and Collins, 2005; Tsarfaty and Sima'an, 2008), and the effect of variable word order (Collins et al., 1999).
|
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
| 0 |
3.1 Lexicon Component.
|
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
| 0 |
Note that in our construction arcs can never cross token boundaries.
|
Proposed explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, factors which complicate syntactic disambiguation.
| 0 |
Better Arabic Parsing: Baselines, Evaluations, and Analysis
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
The simplest version of the maximum matching algorithm effectively deals with ambiguity by ignoring it, since the method is guaranteed to produce only one segmentation.
|
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
For the manual scoring, we can distinguish only half of the systems, both in terms of fluency and adequacy.
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
Pairwise comparison: We can use the same method to assess the statistical significance of one system outperforming another.
|
Proposed explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, factors which complicate syntactic disambiguation.
| 0 |
However, when we pre-tag the input, as is recommended for English, we notice a 0.57% F1 improvement.
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
The restriction can be expressed in terms of the number of uncovered source sentence positions to the left of the rightmost position m in the coverage set.
|
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
| 0 |
If there is a frequent multi-word sequence in a domain, we could use it as a keyword candidate.
|
They have made use of local and global features to deal with instances of the same token in a document.
| 0 |
On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.
|
Here we present two algorithms.
| 0 |
The key to the methods we describe is redundancy in the unlabeled data.
|
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
For these models we limit the options provided for OOV words by not considering the entire token as a valid segmentation in case at least some prefix segmentation exists.
|
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
3 Techniques for automatic vocalization have been studied (Zitouni et al., 2006; Habash and Rambow, 2007).
|
Proposed explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, factors which complicate syntactic disambiguation.
| 0 |
The results clearly indicate increased variation in the ATB relative to the WSJ, but care should be taken in assessing the magnitude of the difference.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
Dempster-Shafer handles this by re-normalizing all the belief values with respect to only the non-null sets (this is the purpose of the denominator in Equation 1).
|
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
| 0 |
We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.
|
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
| 0 |
We also see that the increase in the size of the label sets for Head and Head+Path is far below the theoretical upper bounds given in Table 1.
|
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
AdaBoost finds a weighted combination of simple (weak) classifiers, where the weights are chosen to minimize a function that bounds the classification error on a set of training examples.
|
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
| 0 |
Mikheev et al.
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
Central to our approach (see Algorithm 1) is a bilingual similarity graph built from a sentence-aligned parallel corpus.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
gao1xing4 'happy' => gao1gao1xing4xing4
|
Starting from a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reorderings between source and target language in order to achieve an efficient search algorithm.
| 0 |
The details are given in (Och and Ney, 2000).
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
In order to ground such approaches in linguistic observation and description, a multi-level annotation … For an exposition of the idea as applied to the task of text planning, see (Chiarcos, Stede 2004).
|
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
This is in general very difficult, given the extremely free manner in which Chinese given names are formed, and given that in these cases we lack even a family name to give the model confidence that it is identifying a name.
|
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
Also, the method of using keywords rules out phrases which don't contain popular words in the domain.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
Sometimes, however, these beliefs can be contradictory.
|
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
• We evaluated translation from English, in addition to into English.
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
This information is readily available in TRIE where adjacent records with equal pointers indicate no further extension of context is possible.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
Not surprisingly some semantic classes are better for names than others: in our corpora, many names are picked from the GRASS class but very few from the SICKNESS class.
|
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but had translations into a resource-rich language.
| 0 |
As discussed in more detail in §3, we use two types of vertices in our graph: on the foreign language side vertices correspond to trigram types, while the vertices on the English side are individual word types.
|
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
| 0 |
Our experiments all concern the analytical annotation, and the first experiment is based only on the training part.
|
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
| 0 |
For example, out of 905 phrases in the CC- domain, 211 phrases contain keywords found in step 2.
|
They found replacing it with a ranked evaluation to be more suitable.
| 0 |
The normalization on a per-judge basis gave very similar ranking, only slightly less consistent with the ranking from the pairwise comparisons.
|
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
Now we have sets of phrases which share a keyword and we have links between those sets.
|
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
Finally, we would like to note that it is possible to devise similar algorithms based on objective functions other than the one given in Equ.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
ogy (Koskenniemi 1983; Antworth 1990; Tzoukermann and Liberman 1990; Karttunen, Kaplan, and Zaenen 1992; Sproat 1992); we represent the fact that the suffix attaches to nouns by allowing ε-transitions from the final states of all noun entries to the initial state of the sub-WFST representing that suffix.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
In the rhetorical tree, nuclearity information is then used to extract a "kernel tree" that supposedly represents the key information from which the summary can be generated (which in turn may involve co-reference information, as we want to avoid dangling pronouns in a summary).
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
newspaper material, but also including kungfu fiction, Buddhist tracts, and scientific material.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
Though we do not directly compare state implementations, performance metrics in Table 1 indicate our overall method is faster.
|
The PROBING data structure uses linear probing hash tables and is designed for speed.
| 0 |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
|
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
| 0 |
All the NE pair instances which co-occur separated by at most 4 chunks are collected along with information about their NE types and the phrase between the NEs (the "context").
|
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
| 0 |
In considering the recognition of these languages, we were forced to be more specific regarding the relationship between the structures derived by these formalisms and the substrings they span.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
Thus corresponding pseudo-labels for instances on which gj abstain are set to zero and these instances do not contribute to the objective function.
|
This topic has been getting more attention, driven by the needs of various NLP applications.
| 0 |
We can make several observations on the cause of errors.
|
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
| 0 |
A few annotators suggested breaking up long sentences into clauses and evaluating these separately.
|
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
| 0 |
Smith estimates Lotus will make a profit this quarter …", our system extracts "Smith estimates Lotus" as an instance.
|
Proposed explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, factors which complicate syntactic disambiguation.
| 0 |
(garbled numeric table fragment)
|
BABAR has shown successful results in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns.
| 0 |
This solution also obviates the need to perform word sense disambiguation.
|
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
| 0 |
There is a sizable literature on Chinese word segmentation: recent reviews include Wang, Su, and Mo (1990) and Wu and Tseng (1993).
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
Reiche's colleagues will make sure that the concept is waterproof.
|
The AdaBoost algorithm was developed for supervised learning.
| 0 |
Otherwise, label the training data with the combined spelling/contextual decision list, then induce a final decision list from the labeled examples where all rules (regardless of strength) are added to the decision list.
|