source_text: string (lengths 27–368)
label: int64 (0 or 1)
target_text: string (lengths 1–5.38k)
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Guthrie and Hepple (2010) explore several randomized compression techniques, but did not release code.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
52 77.
These clusters are computed using an SVD variant without relying on transitional structure.
0
For all languages we do not make use of a tagging dictionary.
Two general approaches are presented and two combination techniques are described for each approach.
0
C is the union of the sets of constituents suggested by the parsers. r(c) is a binary function returning t (for true) precisely when the constituent c ∈ C should be included in the hypothesis.
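The r(c) formulation above lends itself to a simple majority-vote instantiation. The sketch below is a hypothetical Python illustration of constituent voting (the function and sample data are my own, not the paper's): a constituent is kept when more than half of the parsers propose it.

```python
from collections import Counter

def combine_constituents(parser_outputs):
    # Count one vote per parser for each (label, start, end) constituent.
    votes = Counter()
    for constituents in parser_outputs:
        votes.update(set(constituents))
    # r(c) is true when a strict majority of parsers suggested c.
    threshold = len(parser_outputs) / 2
    return {c for c, n in votes.items() if n > threshold}

parsers = [
    {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)},
    {("NP", 0, 2), ("VP", 2, 5)},
    {("NP", 0, 1), ("S", 0, 5)},
]
print(combine_constituents(parsers))  # constituents with at least 2 of 3 votes
```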
The "Potsdam Commentary Corpus", or PCC, consists of 170 commentaries from the Märkische Allgemeine Zeitung, a German regional daily.
0
For illustration, an English translation of one of the commentaries is given in Figure 1.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
For a sentence x and a state sequence z, a first order Markov model defines a distribution: (9) where Val(X) corresponds to the entire vocabulary.
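The equation labeled (9) did not survive extraction, but the factorization a first-order Markov (HMM-style) model implies can be sketched: the joint probability of a sentence and a state sequence decomposes into transition terms P(z_i | z_{i-1}) and emission terms P(x_i | z_i). The distributions below are toy values of my own, not from the paper.

```python
import math

def hmm_log_prob(x, z, trans, emit, start):
    # First-order factorization: start probability, then a transition
    # and an emission term at each position.
    lp = math.log(start[z[0]]) + math.log(emit[z[0]][x[0]])
    for i in range(1, len(x)):
        lp += math.log(trans[z[i - 1]][z[i]]) + math.log(emit[z[i]][x[i]])
    return lp

start = {"N": 0.5, "V": 0.5}
trans = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.6, "V": 0.4}}
emit = {"N": {"dog": 0.8, "runs": 0.2}, "V": {"dog": 0.1, "runs": 0.9}}

lp = hmm_log_prob(["dog", "runs"], ["N", "V"], trans, emit, start)
print(math.exp(lp))  # 0.5 * 0.8 * 0.7 * 0.9 = 0.252
```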
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
When the signal is a coordinating conjunction, the second span is usually the clause following the conjunction; the first span is often the clause preceding it, but sometimes stretches further back.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
We used this data to build an unpruned ARPA file with IRSTLM’s improved-kneser-ney option and the default three pieces.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
29 — 95.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
They are set to fixed constants.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
For all variants, we found that BerkeleyLM always rounds the floating-point mantissa to 12 bits then stores indices to unique rounded floats.
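The quantization described above can be sketched with IEEE-754 bit manipulation: keep only the top bits of a float32 mantissa, then intern the distinct rounded values and store small indices. This is a rough illustration of the idea, not BerkeleyLM's actual code, and the sample values are made up.

```python
import struct

def round_mantissa(x, bits=12):
    # Reinterpret the float32 bits and zero all but the top `bits` of
    # the 23-bit mantissa; sign and exponent are untouched.
    (i,) = struct.unpack(">I", struct.pack(">f", x))
    mask = (0xFFFFFFFF << (23 - bits)) & 0xFFFFFFFF
    (y,) = struct.unpack(">f", struct.pack(">I", i & mask))
    return y

# Intern distinct rounded values; entries then store indices into the
# table instead of full floats.
values = [-1.234567, -1.234501, -2.5]
table, index = [], []
for v in values:
    r = round_mantissa(v)
    if r not in table:
        table.append(r)
    index.append(table.index(r))
print(index)  # → [0, 0, 1]: the first two values collapse to one rounded float
```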
There is no global pruning.
0
In Section 4, we present the performance measures used and give translation results on the Verbmobil task.
They have made use of local and global features to deal with the instances of the same token in a document.
0
Named Entity Recognition: A Maximum Entropy Approach Using Global Information
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
For speed, we plan to implement the direct-mapped cache from BerkeleyLM.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRS's), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
M spawns as many processes as there are ways of breaking up ri, ..., zt, and rules with A on their left-hand side.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
Even if an example like this is not labeled, it can be interpreted as a "hint" that Mr and president imply the same category.
Combining multiple highly-accurate independent parsers yields promising results.
0
Features and context were initially introduced into the models, but they failed to offer any gains in performance.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRS's), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
The tree t2 must be on one of the two branches.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.
They found replacing it with a ranked evaluation to be more suitable.
0
It is well known that language pairs such as English-German pose more challenges to machine translation systems than language pairs such as French-English.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Thus, any language that is letter equivalent to a semilinear language is also semilinear.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
In all of our experiments, the binary file (whether mapped or, in the case of most other packages, interpreted) is loaded into the disk cache in advance so that lazy mapping will never fault to disk.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
(b) does the translation have the same meaning, including connotations?
They have made use of local and global features to deal with the instances of the same token in a document.
0
(1) CEO of McCann . . .
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
and f,.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The computing time is low, since no reordering is carried out.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Both (Tsarfaty, 2006; Cohen and Smith, 2007) have shown that a single integrated framework outperforms a completely streamlined implementation, yet neither has shown a single generative model which handles both tasks.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
5 ‘Underspecified Rhetorical Markup Language’ 6 This confirms the figure given by (Schauer, Hahn.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Statistics for all data sets are shown in Table 2.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Fourth, we show how to build better models for three different parsers.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Also, expanding on the techniques for the automatic generation of extraction patterns (Riloff 96; Sudo 03) using our method, the extraction patterns which have the same meaning can be automatically linked, enabling us to produce the final table fully automatically.
BABAR performed successfully in both the terrorism and natural disaster domains, and its contextual-role knowledge proved especially helpful for resolving pronouns.
0
Terrorism systems must distinguish between people who perpetrate a crime and people who are victims of a crime.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
The content does not necessarily reflect the views of the U.S. Government, and no official endorsement should be inferred.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Overall, language modeling significantly impacts decoder performance.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
Also, the method of using keywords rules out phrases which don’t contain popular words in the domain.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
A large number of rules is needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. However, we show that the use of unlabeled data can reduce the requirements for supervision to just 7 simple "seed" rules.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Because there might be some controversy about the exact definitions of such universals, this set of coarse-grained POS categories is defined operationally, by collapsing language (or treebank) specific distinctions to a set of categories that exists across all languages.
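Operationally, the collapsing amounts to a many-to-one mapping from treebank-specific tags to coarse categories. A minimal sketch follows; the mapping covers only a few illustrative Penn Treebank tags, not the full published mapping.

```python
# Illustrative subset of a fine-to-coarse tag mapping; "X" catches
# anything outside the listed categories.
PTB_TO_UNIVERSAL = {
    "NN": "NOUN", "NNS": "NOUN", "NNP": "NOUN",
    "VB": "VERB", "VBD": "VERB", "VBZ": "VERB",
    "JJ": "ADJ", "RB": "ADV", "DT": "DET",
}

def to_universal(tags, mapping=PTB_TO_UNIVERSAL):
    # Collapse each fine-grained tag to its coarse category.
    return [mapping.get(t, "X") for t in tags]

print(to_universal(["DT", "NN", "VBZ", "RB"]))  # → ['DET', 'NOUN', 'VERB', 'ADV']
```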
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
60 (Petrov, 2009) all Berkeley (Sep. 09) Baseline 70 all Gold POS 70 — — — 0.809 0.839 335 0.79
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
The POS distributions over the foreign trigram types are used as features to learn a better unsupervised POS tagger (§5).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
(b) does the translation have the same meaning, including connotations?
This topic has been getting more attention, driven by the needs of various NLP applications.
0
In this specific case, as these two titles could fill the same column of an IE table, we regarded them as paraphrases for the evaluation.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
As suggested at the end of Section 3, the restrictions that have been specified in the definition of LCFRS's suggest that they can be efficiently recognized.
It is probably the first analysis of Arabic parsing of this kind.
0
32 81.
BABAR performed successfully in both the terrorism and natural disaster domains, and its contextual-role knowledge proved especially helpful for resolving pronouns.
0
We applied the AutoSlog system (Riloff, 1996) to our unannotated training texts to generate a set of extraction patterns for each domain.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
If we remove this sample from the evaluation, then the ATB type-level error rises to only 37.4% while the n-gram error rate increases to 6.24%.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
The precision and recall of similarity switching and constituent voting are both significantly better than the best individual parser, and constituent voting is significantly better than parser switching in precision.4 Constituent voting gives the highest accuracy for parsing the Penn Treebank reported to date.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
One way to approach this discrepancy is to assume a preceding phase of morphological segmentation for extracting the different lexical items that exist at the token level (as is done, to the best of our knowledge, in all parsing related work on Arabic and its dialects (Chiang et al., 2006)).
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Notice that even though IG's and LFG's involve CFG-like productions, they are (linguistically) fundamentally different from CFG's because the composition operations need not be linear.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
In addition to the Europarl test set, we also collected 29 editorials from the Project Syndicate website2, which are published in all the four languages of the shared task.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
To provide a thorough analysis, we evaluated three baselines and two oracles in addition to two variants of our graph-based approach.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
As the reviewer also points out, this is a problem that is shared by, e.g., probabilistic context-free parsers, which tend to pick trees with fewer nodes.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Past work, however, has typically associated these features with token occurrences, typically in an HMM.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
It rewards matches of n-gram sequences, but measures only at most indirectly overall grammatical coherence.
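The n-gram matching BLEU rewards can be sketched as clipped n-gram precision, the core quantity the metric aggregates. This toy function is my own simplification: full BLEU combines orders 1 through 4 and applies a brevity penalty.

```python
from collections import Counter

def ngram_precision(candidate, reference, n=2):
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    # Clip each candidate n-gram's count by its count in the reference.
    matched = sum(min(c, ref[g]) for g, c in cand.items())
    return matched / max(sum(cand.values()), 1)

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
print(ngram_precision(cand, ref, n=2))  # 3 of 5 bigrams match → 0.6
```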
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
It is well known that language pairs such as English-German pose more challenges to machine translation systems than language pairs such as French-English.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
96 75.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
For example, the Wang, Li, and Chang system fails on the sequence 1:f:p:]nian2 nei4 sa3 in (k) since 1F nian2 is a possible, but rare, family name, which also happens to be written the same as the very common word meaning 'year.'
The texts were annotated with the RSTtool.
0
The kind of annotation work presented here would clearly benefit from the emergence of standard formats and tag sets, which could lead to sharable resources of larger size.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
Surprisingly, this effect is much less obvious for out-of-domain test data.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
We check, how likely only up to k = 20 better scores out of n = 100 would have been generated by two equal systems, using the binomial distribution: If p(0..k; n, p) < 0.05, or p(0..k; n, p) > 0.95 then we have a statistically significant difference between the systems.
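The significance check described above is a binomial tail probability. A small sketch using only the standard library (p = 0.5 models two equally good systems):

```python
from math import comb

def binom_cdf(k, n, p=0.5):
    # P(X <= k) for X ~ Binomial(n, p): the probability that two equal
    # systems would produce at most k wins out of n comparisons.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# With only k = 20 better scores out of n = 100, the systems differ
# significantly at the 0.05 level.
print(binom_cdf(20, 100) < 0.05)  # → True
```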
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
However, it is possible to personify any noun, so in children's stories or fables, i¥JJ1l.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
Other errors include NE tagging errors and errors due to a phrase which includes other NEs.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
This group of features attempts to capture such information.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
The first stage identifies a keyword in each phrase and joins phrases with the same keyword into sets.
These clusters are computed using an SVD variant without relying on transitional structure.
0
This paper proposes a simple and effective tagging method that directly models tag sparsity and other distributional properties of valid POS tag assignments.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
To initialize the graph for label propagation we use a supervised English tagger to label the English side of the bitext.7 We then simply count the individual labels of the English tokens and normalize the counts to produce tag distributions over English word types.
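The counting-and-normalizing step reads straightforwardly as code. A hedged sketch with toy tokens of my own, not the bitext from the paper:

```python
from collections import Counter, defaultdict

def type_tag_distributions(tagged_tokens):
    # Count the labels of each word type's token occurrences...
    counts = defaultdict(Counter)
    for word, tag in tagged_tokens:
        counts[word][tag] += 1
    # ...then normalize the counts into a per-type tag distribution.
    return {
        w: {t: n / sum(c.values()) for t, n in c.items()}
        for w, c in counts.items()
    }

tokens = [("run", "VERB"), ("run", "VERB"), ("run", "VERB"),
          ("run", "NOUN"), ("dog", "NOUN")]
dists = type_tag_distributions(tokens)
print(dists["run"])  # → {'VERB': 0.75, 'NOUN': 0.25}
```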
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Thus, the derivation trees for TAG's have the same structure as local sets.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
This section describes an algorithm based on boosting algorithms, which were previously developed for supervised machine learning problems.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Despite their simplicity, unigram weights have been shown as an effective feature in segmentation models (Dyer, 2009).13 The joint parser/segmenter is compared to a pipeline that uses MADA (v3.0), a state-of-the-art Arabic segmenter, configured to replicate ATB segmentation (Habash and Rambow, 2005).
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
While the proportion of sentences containing non-projective dependencies is often 15–25%, the total proportion of non-projective arcs is normally only 1–2%.
They found replacing it with a ranked evaluation to be more suitable.
0
The BLEU metric, as all currently proposed automatic metrics, is occasionally suspected to be biased towards statistical systems, especially the phrase-based systems currently in use.
Their results show that their high-performance NER uses less training data than other systems.
0
This process is repeated 5 times by rotating the data appropriately.
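Rotating the data five times is the familiar k-fold scheme: each fold serves once as the test set while the remainder is used for training. A minimal sketch (the interleaved slicing is my own illustration):

```python
def rotating_folds(data, k=5):
    # Split the data into k interleaved folds, then rotate which fold
    # is held out for testing.
    folds = [data[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test

data = list(range(10))
splits = list(rotating_folds(data))
print(len(splits))  # → 5
```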
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
For example, syntactic decoders (Koehn et al., 2007; Dyer et al., 2010; Li et al., 2009) perform dynamic programming parametrized by both backward- and forward-looking state.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
then define the best segmentation to be the cheapest or best path in Id(I) ∘ D* (i.e., Id(I) composed with the transitive closure of D).6 Consider the abstract example illustrated in Figure 2.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
besuchen 9.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
The intuition here is that the role of a discourse marker can usually be de 9 Both the corpus split and pre-processing code are avail-.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
When SRILM estimates a model, it sometimes removes n-grams but not n + 1-grams that extend it to the left.
Here we present two algorithms.
0
Consider the case where IX].
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
Surprisingly, the non-parametric switching technique also exhibited robust behaviour in this situation.
Here both parametric and non-parametric models are explored.
0
Their theoretical finding is simply stated: classification error rate decreases toward the noise rate exponentially in the number of independent, accurate classifiers.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
A crucial difference is that the number of parameters is greatly reduced as is the number of variables that are sampled during each iteration.
This corpus has several advantages: it is annotated at different levels.
0
Here, annotation proceeds in two phases: first, the domains and the units of IS are marked as such.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
4 Traditional Arabic linguistic theory treats both of these types as subcategories of noun � '.i . Figure 1: The Stanford parser (Klein and Manning, 2002) is unable to recover the verbal reading of the unvocalized surface form 0 an (Table 1).
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
In this shared task, we were also confronted with this problem, and since we had no funding to pay for human judgments, we asked participants in the evaluation to share the burden.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
In this domain the major scenarios involve the things they agreed on, rather than the mere fact that they agreed.
BABAR performed successfully in both the terrorism and natural disaster domains, and its contextual-role knowledge proved especially helpful for resolving pronouns.
0
Our work is motivated by the observation that contextual roles can be critically important in determining the referent of a noun phrase.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Statistics for all data sets are shown in Table 2.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
As one can see, only the trigrams [suo incarceramento ,], [suo iter ,] and [suo carattere ,] are connected to English words.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
shortest match at each point.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
As we have seen, the lexicon of basic words and stems is represented as a WFST; most arcs in this WFST represent mappings between hanzi and pronunciations, and are costless.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
We checked whether the discovered links are listed in WordNet.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
We can 5 Recall that precision is defined to be the number of correct hits divided by the total number of items.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Any NLP application that presumes as input unrestricted text requires an initial phase of text analysis; such applications involve problems as diverse as machine translation, information retrieval, and text-to-speech synthesis (TTS).
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
9 www.ling.unipotsdam.de/sfb/ Figure 2: Screenshot of Annis Linguistic Database 3.3 Symbolic and knowledge-based.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
The most frequent NE category pairs are “Person - Person” (209,236), followed by “Country - Country” (95,123) and “Person - Country” (75,509).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
We have shown that, at least given independent human judgments, this is not the case, and that therefore such simplistic measures should be mistrusted.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Finally, we note that Jiang’s instance-weighting framework is broader than we have presented above, encompassing among other possibilities the use of unlabelled IN data, which is applicable to SMT settings where source-only IN corpora are available.
Replacing this with a ranked evaluation seems to be more suitable.
0
Judges were excluded from assessing the quality of MT systems that were submitted by their institution.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
2 62.2 39.
A beam search concept is applied as in speech recognition.
0
Restrictions: Quasi-monotone Search The above search space is still too large to allow the translation of a medium length input sentence.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Of course, since the number of attested (phonemic) Mandarin syllables (roughly 1400, including tonal distinctions) is far smaller than the number of morphemes, it follows that a given syllable could in principle be written with any of several different hanzi, depending upon which morpheme is intended: the syllable zhongl could be lfl 'middle,''clock,''end,' or ,'loyal.'