Dataset schema: source_text (string, lengths 27–368), label (int64, values 0–1), target_text (string, lengths 1–5.38k).
This paper talks about Pseudo-Projective Dependency Parsing.
0
This is in contrast to dependency treebanks, e.g.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The algorithm in Fig.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Proper names are assumed to be coreferent if they match exactly, or if they closely match based on a few heuristics.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
As observed by Kahane et al. (1998), any (non-projective) dependency graph can be transformed into a projective one by a lifting operation, which replaces each non-projective arc w_j → w_k by a projective arc w_i → w_k such that w_i →* w_j holds in the original graph.
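To make the lifting operation concrete, here is a minimal Python sketch under simple assumptions: the graph is a dict mapping each dependent to its head (the root maps to None), and the function names are illustrative rather than the authors' implementation.

    # Minimal sketch of the lifting operation (illustrative assumptions).
    def is_projective(heads, head, dep):
        """An arc head -> dep is projective if every word strictly
        between them is a (transitive) descendant of head."""
        lo, hi = sorted((head, dep))
        for w in range(lo + 1, hi):
            h = w
            while h is not None and h != head:
                h = heads.get(h)        # climb w's head chain
            if h != head:
                return False            # w reaches the root, not head
        return True

    def lift(heads, dep):
        """Replace the arc heads[dep] -> dep by an arc from an ancestor:
        climb w_i ->* w_j until the new arc w_i -> w_k is projective."""
        head = heads[dep]
        while head is not None and not is_projective(heads, head, dep):
            head = heads.get(head)      # next ancestor w_i
        if head is not None:
            heads[dep] = head           # mutate the graph in place
        return heads

In the pseudo-projective approach, each such lift is additionally encoded in the arc label so that it can be undone after parsing.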
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
By design, they readily capture regularities at the token-level.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Equation 2 is an estimate of the conditional probability of the label given the feature, P(y|x).
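The exact form of Equation 2 is not reproduced in this fragment; one conventional smoothed count-based estimate of this quantity (an assumption, not necessarily the paper's) is

    \hat{P}(y \mid x) = \frac{\mathrm{Count}(x, y) + \alpha}{\mathrm{Count}(x) + k\,\alpha}

where Count(x, y) is the number of training examples with feature x and label y, k is the number of labels, and \alpha is a smoothing constant.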
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
When translating the sentence monotonically from left to right, the translation of the German finite verb 'kann', which is the left verbal brace in this case, is postponed until the German noun phrase 'mein Kollege' is translated, which is the subject of the sentence.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items outside the other three categories.
0
(6), with W+ > W−.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
We extend Subramanya et al.’s intuitions to our bilingual setup.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
(2009) study related but different multilingual grammar and tagger induction tasks, where it is assumed that no labeled data at all is available.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
For speed, we plan to implement the direct-mapped cache from BerkeleyLM.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Church and Hanks [1989]), and we have used lists of character pairs ranked by mutual information to expand our own dictionary.
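A sketch of ranking character pairs by (pointwise) mutual information, with an assumed frequency cutoff and log base; the corpus handling here is illustrative, not the authors' code.

    import math
    from collections import Counter

    def ranked_pairs_by_mutual_information(text, min_count=5):
        # Count character unigrams and adjacent character pairs.
        unigrams = Counter(text)
        bigrams = Counter(zip(text, text[1:]))
        n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())
        scored = []
        for (a, b), c in bigrams.items():
            if c < min_count:                  # assumed frequency cutoff
                continue
            p_ab = c / n_bi
            p_a, p_b = unigrams[a] / n_uni, unigrams[b] / n_uni
            scored.append(((a, b), math.log2(p_ab / (p_a * p_b))))
        return sorted(scored, key=lambda item: -item[1])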
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Thus the method makes the fairly strong assumption that the features can be partitioned into two types such that each type alone is sufficient for classification.
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
This research was supported in part by the Defense Advanced Research Projects Agency as part of the Translingual Information Detection, Extraction and Summarization (TIDES) program, under Grant N66001001-18917 from the Space and Naval Warfare Systems Center, San Diego, and by the National Science Foundation under Grant IIS00325657.
This corpus has several advantages: it is annotated at different levels.
0
Section 3 discusses the applications that have been completed with PCC, or are under way, or are planned for the future.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors affect syntactic disambiguation.
0
10 Other orthographic normalization schemes have been suggested for Arabic (Habash and Sadat, 2006), but we observe negligible parsing performance differences between these and the simple scheme used in this evaluation.
Because many systems performed similarly, they are not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics.
0
Human judges also pointed out difficulties with the evaluation of long sentences.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Put another way, the minimum of Equ.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
2.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
Again we notice that the isolated constituent precision is larger than 0.5 only in those partitions that contain very few samples.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Smith estimates Lotus will make profit this quarter…”.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
For a given partial hypothesis (C, j), the order in which the cities in C have been visited can be ignored (except j); only the score for the best path reaching j has to be stored.
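This is the classic Held-Karp recurrence; a compact Python sketch of the plain TSP version follows (illustrative only, not the MT decoder itself). In the MT search, cities become source positions and distances become translation and language model scores, but the state collapse to (coverage set, last position) is the same.

    from itertools import combinations

    def held_karp(dist):
        """Cheapest tour starting and ending at city 0; dist is an
        n x n matrix. best[(C, j)] stores only the best score for a
        path from 0 through the set C ending at j."""
        n = len(dist)
        best = {(frozenset((0, j)), j): dist[0][j] for j in range(1, n)}
        for size in range(3, n + 1):
            for subset in combinations(range(1, n), size - 1):
                C = frozenset(subset) | {0}
                for j in subset:
                    best[(C, j)] = min(
                        best[(C - {j}, k)] + dist[k][j]
                        for k in subset if k != j)
        full = frozenset(range(n))
        return min(best[(full, j)] + dist[j][0] for j in range(1, n))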
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
The tokens w are generated by token-level tags t from an HMM parameterized by the lexicon structure.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Approaches differ in the algorithms used for scoring and selecting the best path, as well as in the amount of contextual information used in the scoring process.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Also, in Information Extraction (IE), in which the system tries to extract elements of some events (e.g. date and company names of a corporate merger event), several event instances from different news articles have to be aligned even if these are expressed differently.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
Human judges also pointed out difficulties with the evaluation of long sentences.
This paper talks about Unsupervised Models for Named Entity Classification.
0
This procedure is repeated for T rounds while alternating between the two classifiers.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Words and punctuation that appear in brackets are considered optional.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Finally, we note that Jiang’s instance-weighting framework is broader than we have presented above, encompassing among other possibilities the use of unlabelled IN data, which is applicable to SMT settings where source-only IN corpora are available.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
The linear LM (lin lm), TM (lin tm) and MAP TM (map tm) used with non-adapted counterparts perform in all cases slightly worse than the log-linear combination, which adapts both LM and TM components.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
This is similar to stacking the different feature instantiations into long (sparse) vectors and computing the cosine similarity between them.
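A minimal sketch of that similarity computation, with sparse vectors represented as Python dicts (the representation and group-prefixing scheme are assumptions for illustration):

    import math

    def stack(feature_groups):
        """Prefix each feature with its group name so different feature
        instantiations occupy disjoint dimensions of one long vector."""
        return {(g, f): w for g, feats in feature_groups.items()
                for f, w in feats.items()}

    def cosine(u, v):
        dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0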
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
Sie.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
For the experiments, we use a simple preprocessing step.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
The phrases have to be expressions of length less than 5 chunks that appear between two NEs.
The corpus was annotated with different kinds of linguistic information.
0
Quite often, though, these directives fulfill the goal of increasing annotator agreement without in fact settling the theoretical question; i.e., the directives are clear but not always very well motivated.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Also, expanding on the techniques for the automatic generation of extraction patterns (Riloff 96; Sudo 03) using our method, the extraction patterns which have the same meaning can be automatically linked, enabling us to produce the final table fully automatically.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
gaolbu4-gaolxing4 (hap-not-happy) 'happy?'
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
The high 1 tone of J1l would not normally neutralize in this fashion if it were functioning as a word on its own.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
For each token, zero, one, or more of the features in each feature group are set to 1.
BABAR performed well in both the terrorism and natural disaster domains, and its contextual-role knowledge showed successful results for pronouns.
0
Our coreference resolver performed well in two domains, and experiments showed that each contextual role knowledge source contributed valuable information.
These clusters are computed using an SVD variant without relying on transitional structure.
0
The second row represents the performance of the median hyperparameter setting.
Here both parametric and non-parametric models are explored.
0
This is the only important case, because otherwise the simple majority combining technique would pick the correct constituent.
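For reference, simple majority constituent voting can be sketched as follows, treating each parse as a set of (label, start, end) spans; this is an illustrative reading of the technique, not the published code.

    from collections import Counter

    def majority_combine(parses):
        """Keep any constituent hypothesized by more than half of the
        input parses; parses is a list of span collections."""
        votes = Counter(c for parse in parses for c in set(parse))
        threshold = len(parses) / 2.0
        return {c for c, v in votes.items() if v > threshold}

With an odd number of parsers, constituents kept by such a majority cannot cross each other, which is why the case discussed above is the only important one.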
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
The number of judgements is additionally fragmented by our breakup of sentences into in-domain and out-of-domain.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
We aligned the texts at the sentence level across all four languages, resulting in 1064 sentences per language.
Here both parametric and non-parametric models are explored.
0
For example, one parser could be more accurate at predicting noun phrases than the other parsers.
They have made use of local and global features to deal with instances of the same token in a document.
0
In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
2.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
This distributional sparsity of syntactic tags is not unique to English. (Footnote 1: The source code for the work presented in this paper is available at http://groups.csail.mit.edu/rbg/code/typetagging/.)
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
[table fragment: numeric cells only; no recoverable header]
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
This measure has the advantage of being completely automatic.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
(c) Coordination ambiguity is shown in dependency scores by e.g., ∗SSS R) and ∗NP NP NP R).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
English was again paired with German, French, and Spanish.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Ex: He was found in San Jose, where ...
This corpus has several advantages: it is annotated at different levels.
0
The kind of annotation work presented here would clearly benefit from the emergence of standard formats and tag sets, which could lead to sharable resources of larger size.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
Otherwise for the predecessor search hypothesis, we would have chosen a position that would not have been among the first n uncovered positions.
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
Gather phrases using keywords: Next, we select a keyword for each phrase – the top-ranked word based on the TF/IDF metric.
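A small sketch of that keyword-selection step, assuming a precomputed document-frequency table; the exact TF/IDF variant is not specified in this fragment.

    import math
    from collections import Counter

    def top_tfidf_word(phrase_tokens, doc_freq, n_docs):
        """Return the phrase word with the highest TF/IDF score."""
        tf = Counter(phrase_tokens)
        def tfidf(w):
            return tf[w] * math.log(n_docs / (1 + doc_freq.get(w, 0)))
        return max(tf, key=tfidf)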
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The bias of automatic methods in favor of statistical systems seems to be less pronounced on out-of-domain test data.
A beam search concept is applied as in speech recognition.
0
Here, the pruning threshold t0 = 10.0 is used.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items outside the other three categories.
0
In a fully supervised setting, the task is to learn a function f such that for all i = 1...m, f(x1,i, x2,i) = yi.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
For example, the two NEs “Eastern Group Plc” and “Hanson Plc” have the following contexts.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
For inference, we are interested in the posterior probability over the latent variables in our model.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
2.1.1 Lexical Seeding: It is generally not safe to assume that multiple occurrences of a noun phrase refer to the same entity.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
In the second and third translation examples, the IbmS word reordering performs worse than the QmS word reordering, since it cannot properly take into account the word reordering due to the German verb group.
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
Evaluation of links: A link between two sets is considered correct if the majority of phrases in both sets have the same meaning, i.e. if the link indicates paraphrase.
This corpus has several advantages: it is annotated at different levels.
0
The motivation for our more informal approach was the intuition that there are so many open problems in rhetorical analysis (and more so for German than for English; see below) that the main task is qualitative investigation, whereas rigorous quantitative analyses should be performed at a later stage.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries, used in several decoders.
0
For RandLM, we used the settings in the documentation: 8 bits per value and false positive probability 1/256.
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectives, coreference, and information structure.
0
And then there are decisions that systems typically hard-wire, because the linguistic motivation for making them is not well understood yet.
These clusters are computed using an SVD variant without relying on transitional structure.
0
(2009), who also incorporate a sparsity constraint, but do so by altering the model objective using posterior regularization.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
At this stage the lattice path corresponds to segments only, with no PoS assigned to them.
There are clustering approaches that assign a single POS tag to each word type.
0
We have presented a method for unsupervised part- of-speech tagging that considers a word type and its allowed POS tags as a primary element of the model.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Another technique for parse hybridization is to use a naïve Bayes classifier to determine which constituents to include in the parse.
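A hedged sketch of such a naïve Bayes inclusion decision, assuming per-parser vote probabilities estimated on held-out data; the parameterization is illustrative, not the paper's exact model.

    def include_constituent(votes, p_prior, p_vote_in, p_vote_out):
        """votes: one boolean per parser (did it propose the span?).
        p_vote_in[i]  = P(parser i votes | span is in the true parse)
        p_vote_out[i] = P(parser i votes | span is not in the true parse)
        Assumes parser votes are independent given the truth."""
        p_in, p_out = p_prior, 1.0 - p_prior
        for v, pi, po in zip(votes, p_vote_in, p_vote_out):
            p_in *= pi if v else (1.0 - pi)
            p_out *= po if v else (1.0 - po)
        return p_in > p_out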
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Finally, we model the probability of a new transliterated name as the product of p_TN and p_TN(hanzi_i) for each hanzi_i in the putative name. (Footnote 13: The current model is too simplistic in several respects.) The foreign name model is implemented as a WFST, which is then summed with the WFST implementing the dictionary, morpho...
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors affect syntactic disambiguation.
0
For Arabic we [table fragment; recoverable headers: Model, System, Length, Leaf Ancestor (Corpus, Sent, Exact), Evalb (LP, LR, F1), Tag%; first rows: Baseline 70, Stanford (v1...]
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
The semantic caseframe expectations are used in two ways.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Given the limited number of judgements we received, we did not try to evaluate this.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
[table fragment: numeric cells only; no recoverable header]
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The morphological analysis itself can be handled using well-known techniques from finite-state morphology. (Footnote 9: The initial estimates are derived from the frequencies in the corpus of the strings of hanzi making up...)
The second algorithm builds on a boosting algorithm called AdaBoost.
0
This procedure is repeated for T rounds while alternating between the two classifiers.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
The compact variant uses sorted arrays instead of hash tables within each node, saving some memory, but still stores full 64-bit pointers.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
To define a similarity function between the English and the foreign vertices, we rely on high-confidence word alignments.
Two general approaches are presented and two combination techniques are described for each approach.
0
There is a guarantee of no crossing brackets but there is no guarantee that a constituent in the tree has the same children as it had in any of the three original parses.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
In addition, the restricted version of CG's (discussed in Section 6) generates tree sets with independent paths and we hope that it can be included in a more general definition of LCFRS's containing formalisms whose tree sets have path sets that are themselves LCFRL's (as in the case of the restricted indexed grammars, and the hierarchy defined by Weir).
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
We adapt the string pumping lemma for the class of languages corresponding to the complexity of the path set.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Nonstochastic lexical-knowledge-based approaches have been much more numerous.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
The development of automatic scoring methods is an open field of research.
Their results show that their high-performance NER uses less training data than other systems.
0
The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Confidence Interval: To estimate confidence intervals for the average mean scores for the systems, we use standard significance testing.
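The fragment does not spell out the test; one conventional choice is a normal-approximation interval for the mean score, sketched here under that assumption.

    import math

    def mean_confidence_interval(scores, z=1.96):
        """95% interval for the mean (z = 1.96); needs len(scores) > 1."""
        n = len(scores)
        mean = sum(scores) / n
        var = sum((s - mean) ** 2 for s in scores) / (n - 1)
        half = z * math.sqrt(var / n)
        return mean - half, mean + half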
There are clustering approaches that assign a single POS tag to each word type.
0
[table fragment: numeric cells only; no recoverable header]
The manual evaluation of scoring translations on a graded scale from 1–5 seems to be very hard to perform.
0
In the graphs, system scores are indicated by a point, the confidence intervals by shaded areas around the point.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
This stage of label propagation results in a tag distribution r_i over labels y, which encodes the proportion of times the middle word of u_i ∈ V_f aligns to English words v_y tagged with label y. The second stage consists of running traditional label propagation to propagate labels from these peripheral vertices V_f^l to all foreign-language vertices in the graph, optimizing an objective not reproduced in this fragment. [Section 5: POS Induction] After running label propagation (LP), we compute tag probabilities for foreign word types x by marginalizing the POS tag distributions of foreign trigrams u_i = x− x x+ over the left and right context words, where the q_i (i = 1, ..., |V_f|) are the label distributions over the foreign-language vertices and µ and ν are hyperparameters that we discuss in §6.4.
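A sketch of that final marginalization step, assuming each trigram vertex already carries a tag distribution; the uniform average over contexts is a simplifying assumption and the paper's exact weighting may differ.

    from collections import defaultdict

    def type_tag_distribution(trigram_labels):
        """trigram_labels: dict mapping (left, word, right) trigrams to
        {tag: prob}. Returns an averaged {tag: prob} per word type."""
        sums = defaultdict(lambda: defaultdict(float))
        counts = defaultdict(int)
        for (left, word, right), dist in trigram_labels.items():
            counts[word] += 1
            for tag, p in dist.items():
                sums[word][tag] += p
        return {w: {t: p / counts[w] for t, p in tags.items()}
                for w, tags in sums.items()}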
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
The probabilities are incorporated into the Dempster-Shafer model using Equation 1.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Searching a probing hash table consists of hashing the key, indexing the corresponding bucket, and scanning buckets until a matching key is found or an empty bucket is encountered, in which case the key does not exist in the table.
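That lookup in miniature (KenLM's PROBING structure is C++; this Python sketch mirrors only the scan logic and assumes the table is never full, so an empty bucket always terminates the scan):

    def probing_find(table, key, hash_fn=hash):
        """table: list of (key, value) pairs, with None for empty buckets."""
        n = len(table)
        i = hash_fn(key) % n
        while table[i] is not None:
            if table[i][0] == key:
                return table[i][1]       # key found
            i = (i + 1) % n              # scan the next bucket
        return None                      # empty bucket: key is absent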
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
[table fragment: numeric cells only; no recoverable header]
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
We have already mentioned the closely related work by Matsoukas et al. (2009) on discriminative corpus weighting, and Jiang and Zhai (2007) on (nondiscriminative) instance weighting.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Thus, the length of any string in L is a linear combination of the length of strings in some fixed finite subset of L, and thus L is said to have the constant growth property.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Equation 2 is an estimate of the conditional probability of the label given the feature, P(y|x).
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
yields. But to its great advantage, it has a high ratio of non-terminals/terminals (μ Constituents / μ Length).
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Pseudo-code describing the generalized boosting algorithm of Schapire and Singer is given in Figure 1.
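A loose Python sketch of confidence-rated boosting in this spirit; weak_rule_selector is a hypothetical stand-in for choosing the best weak rule and its weight, and the exponential reweighting mirrors Schapire and Singer's scheme only schematically.

    import math

    def boost(examples, labels, weak_rule_selector, rounds):
        """labels are +1/-1; weak_rule_selector(examples, labels, d)
        returns (h, alpha) with h(x) a real-valued weak rule
        (hypothetical helper, assumed to exist)."""
        n = len(examples)
        d = [1.0 / n] * n                        # example weights
        ensemble = []
        for _ in range(rounds):
            h, alpha = weak_rule_selector(examples, labels, d)
            ensemble.append((alpha, h))
            d = [w * math.exp(-alpha * y * h(x))
                 for w, x, y in zip(d, examples, labels)]
            z = sum(d)
            d = [w / z for w in d]               # renormalize
        return lambda x: 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1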