| source_text (string, lengths 27–368) | label (int64, values 0–1) | target_text (string, lengths 1–5.38k) |
|---|---|---|
| The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | The index in this array is the vocabulary identifier. |
| Here we present two algorithms. | 0 | The features are used to represent each example for the learning algorithm. |
| The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Given counts $c_1^n$, where e.g. $c_1$ is the vocabulary size, total memory consumption, in bits, is … Our PROBING data structure places all n-grams of the same order into a single giant hash table. |
| The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | …, for A.T.&T. nonalpha = ..&. |
| Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | We offer a state function $s(w_1^n) = w_m^n$ where the substring $w_m^n$ is guaranteed to extend (to the right) in the same way that $w_1^n$ does for purposes of language modeling. |
| An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | Source sentence words are aligned with hypothesized target sentence words, where the choice of a new source word, which has not been aligned with a target word yet, is restricted. |
| They have made use of local and global features to deal with instances of the same token in a document. | 0 | If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. |
| The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | As indicated in Figure 1(c), apart from this correct analysis, there is also the analysis taking 日 ri4 as a word (e.g., a common abbreviation for Japan), along with 文章 wen2zhang1 'essay' and … |
| It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | We map the ATB morphological analyses to the shortened "Bies" tags for all experiments. |
| Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | The number of judgements is additionally fragmented by our breakup of sentences into in-domain and out-of-domain. |
| The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | The record for $w_1^n$ stores the offset at which its extensions begin. |
| Here both parametric and non-parametric models are explored. | 0 | We call this approach parser switching. |
| It is probably the first analysis of Arabic parsing of this kind. | 0 | Until now, all evaluations of Arabic parsing, including the experiments in the previous section, have assumed gold segmentation. |
| They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | The addition of vertical markovization enables non-pruned models to outperform all previously reported results. Cohen and Smith (2007) make use of a parameter (α) which is tuned separately for each of the tasks. |
| For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | With a good hash function, collisions of the full 64-bit hash are exceedingly rare: one in 266 billion queries for our baseline model will falsely find a key not present. |
| These clusters are computed using an SVD variant without relying on transitional structure. | 0 | First, it directly encodes linguistic intuitions about POS tag assignments: the model structure reflects the one-tag-per-word property, and a type-level tag prior captures the skew on tag assignments (e.g., there are fewer unique determiners than unique nouns). |
| There are clustering approaches that assign a single POS tag to each word type. | 0 | We also report word type level accuracy, the fraction of word types assigned their majority tag (where the mapping between model state and tag is determined by the greedy one-to-one mapping discussed above). For each language, we aggregate results in the following way: first, for each hyperparameter setting, evaluate three variants. The first model (1TW) only … (Typically, the performance stabilizes after only 10 iterations.) |
| Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | Sentence pairs are the natural instances for SMT, but sentences often contain a mix of domain-specific and general language. |
| Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | A Stochastic Finite-State Word-Segmentation Algorithm for Chinese |
| Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | Jiang and Zhai (2007) suggest the following derivation, making use of the true OUT distribution $p_o(s,t)$: $\hat{\theta} = \arg\max_\theta \sum_{s,t} p_f(s,t) \log p_\theta(s \mid t) = \arg\max_\theta \sum_{s,t} \frac{p_f(s,t)}{p_o(s,t)}\, p_o(s,t) \log p_\theta(s \mid t) \approx \arg\max_\theta \sum_{s,t} \frac{p_f(s,t)}{p_o(s,t)}\, c_o(s,t) \log p_\theta(s \mid t)$, where each $f_i(s,t)$ is a feature intended to characterize the usefulness of $(s,t)$, weighted by $\lambda_i$. The mixing parameters and feature weights (collectively $\theta$) are optimized simultaneously using dev-set maximum likelihood as before: $\hat{\theta} = \arg\max_\theta \sum_{s,t} \tilde{p}(s,t) \log p(s \mid t; \theta)$. |
| This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | 18 77. |
| The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | We experiment with four values for each hyperparameter, resulting in 16 (α, β) combinations: α ∈ {0.001, 0.01, 0.1, 1.0} and β ∈ {0.01, 0.1, 1.0, 10}. Iterations: in each run, we performed 30 iterations of Gibbs sampling for the type assignment variables $W$. We use the final sample for evaluation. |
| The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | The first method uses a similar algorithm to that of (Yarowsky 95), with modifications motivated by (Blum and Mitchell 98). |
| The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Time includes all queries but excludes random number generation and data structure population. |
| The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | The test accuracy more or less asymptotes. |
| The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | If we remove this sample from the evaluation, then the ATB type-level error rises to only 37.4% while the n-gram error rate increases to 6.24%. |
| They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | There, a lattice is used to represent the possible sentences resulting from an interpretation of an acoustic model. |
| This paper talks about Pseudo-Projective Dependency Parsing. | 0 | In practice, we can therefore expect a trade-off such that increasing the amount of information encoded in arc labels will cause an increase in the accuracy of the inverse transformation but a decrease in the accuracy with which the parser can construct the labeled representations. |
| This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | So, it is too costly to make IE technology "open-domain" or "on-demand" like IR or QA. |
| Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | Each production can push or pop symbols on the stack, as can be seen in the following productions that generate trees of the form shown in Figure 4b. |
| They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | As such, global information from the whole context of a document is important to more accurately recognize named entities. |
| Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | This may be the sign of a maturing research environment. |
| The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | Judges varied in the average score they handed out. |
| Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Other packages walk their respective data structures once to find $w_f^n$ and again to find $\{b(w_i^{n-1})\}_{i=1}^{f-1}$ if necessary. |
| In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | Annotators argued for the importance of having correct and even multiple references. |
| The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | In Section 6 we discuss other issues relating to how higher-order language models could be incorporated into the model. |
| The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | The perplexity for the trigram language model used is 26.5. |
| They have made use of local and global features to deal with instances of the same token in a document. | 0 | For all lists except locations, the lists are processed into a list of tokens (unigrams). |
| Combining multiple highly-accurate independent parsers yields promising results. | 0 | Similar advances have been made in machine translation (Frederking and Nirenburg, 1994), speech recognition (Fiscus, 1997) and named entity recognition (Borthwick et al., 1998). |
| They have made use of local and global features to deal with instances of the same token in a document. | 0 | He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution. |
| In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | For the translation model $\Pr(f_1^J \mid e_1^I)$, we go on the assumption that each source word is aligned to exactly one target word. |
| The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | Following the convention presented in earlier sections, we assume that each example is an instance pair of the form $(x_{1,i}, x_{2,i})$ where $x_{j,i} \in 2^{\mathcal{X}_j}$, $j \in \{1, 2\}$. |
| The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications. | 0 | Models that employ this strategy are denoted hsp. |
| In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | With respect to exact match, the improvement is even more noticeable, which shows quite clearly that even if non-projective dependencies are rare on the token level, they are nevertheless important for getting the global syntactic structure correct. |
| The use of global features has shown excellent results on MUC-6 and MUC-7 test data. | 0 | A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC). |
| In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | The hyperparameters α and β represent the concentration parameters of the token- and type-level components of the model respectively. |
| Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | A number of PCC commentaries will be read by professional news speakers and prosodic features will be annotated, so that the various annotation layers can be set into correspondence with intonation patterns. |
| The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications. | 0 | Here we propose a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. |
| Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | Parameter Component: As in the standard Bayesian HMM (Goldwater and Griffiths, 2007), all distributions are independently drawn from symmetric Dirichlet distributions. (Note that $t$ and $w$ denote tag and word sequences respectively, rather than individual tokens or tags.) |
| In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | If two phrases can be used to express the same relationship within an information extraction application ("scenario"), these two phrases are paraphrases. |
| They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | Better grammars are shown here to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. |
| Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | More judgements would have enabled us to make better distinctions, but it is not clear what the upper limit is. |
| BABAR's performance in both the terrorism and natural disaster domains, and its contextual-role knowledge for pronouns, have shown successful results. | 0 | This approach is similar to BABAR in that they both acquire knowledge from earlier resolutions. |
| For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Tokens were converted to vocabulary identifiers in advance and state was carried from each query to the next. |
| BABAR's performance in both the terrorism and natural disaster domains, and its contextual-role knowledge for pronouns, have shown successful results. | 0 | The difference in performance between pronouns and definite noun phrases surprised us. |
| In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | Sometimes, however, these beliefs can be contradictory. |
| The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | One may argue with these efforts on normalization, and ultimately their value should be assessed by measuring their impact on inter-annotator agreement. |
| There are clustering approaches that assign a single POS tag to each word type. | 0 | These sequence-model-based approaches commonly treat token-level tag assignment as the primary latent variable. |
| However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | The system of Berg-Kirkpatrick et al. |
| The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | For the moment we will assume that there are only two possible labels: each $y_i$ is in $\{-1, +1\}$. |
| This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | The developer explained that the loading process requires extra memory that it then frees. Based on the ratio to SRI's speed reported in Guthrie and Hepple (2010) under different conditions. |
| Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | BerkeleyLM revision 152 (Pauls and Klein, 2011) implements tries based on hash tables and sorted arrays in Java with lossy quantization. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
| 0 |
The web-based Annis imports data in a variety of XML formats and tagsets and displays it in a tier-orientedway (optionally, trees can be drawn more ele gantly in a separate window).
|
| This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | For example, we can easily imagine that the number of paraphrases for "A buys B" is enormous and it is not possible to create a comprehensive inventory by hand. |
| The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | The input to AdaBoost is a set of training examples $((x_1, y_1), \ldots, (x_m, y_m))$. |
| Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | The 1st block contains the simple baselines from section 2.1. |
| This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Such a classification can be seen as a not-always-correct summary of global features. |
| The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Later, BerkeleyLM (Pauls and Klein, 2011) described ideas similar to ours. |
| The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | Since different judges judged different systems (recall that judges were excluded from judging system output from their own institution), we normalized the scores. |
| Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | to explore how well we can induce POS tags using only the one-tag-per-word constraint. |
| The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | As was explained in the results section, "strength" or "add" are not desirable keywords in the CC-domain. |
| The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | By contrast, when we turn to a comparison of the three encoding schemes it is hard to find any significant differences, and the overall impression is that it makes little or no difference which encoding scheme is used, as long as there is some indication of which words are assigned their linear head instead of their syntactic head by the projective parser. |
| Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | set was based on an earlier version of the Chang et al. |
| The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | The inverted alignment probability $p(b_i \mid b_{i-1}; I; J)$ and the lexicon probability $p(f_{b_i} \mid e_i)$ are obtained by relative frequency estimates from the Viterbi alignment path after the final training iteration. |
| The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | and "H" represents "Hanson Plc". |
| From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | German city names are replaced by category markers. |
| The use of global features has shown excellent results on MUC-6 and MUC-7 test data. | 0 | For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. |
| The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | We used the MUC4 terrorism corpus (MUC4 Proceedings, 1992) and news articles from the Reuters text collection that had a subject code corresponding to natural disasters. |
| They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | When using the segmentation pruning (using HSPELL) for unseen tokens, performance improves for all tasks as well. |
| They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | Each connected path $(l_1 \ldots l_k) \in L$ corresponds to one morphological segmentation possibility of $W$. The Parser: Given a sequence of input tokens $W = w_1 \ldots w_n$ and a morphological analyzer, we look for the most probable parse tree $\pi$ s.t. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
| 0 |
basically complete, yet some improvements and extensions are still under way.
|
| The authors show that PATB is similar to other treebanks but that annotation consistency remains low. | 0 | Presence of the determiner Al. |
| Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | Since our graph is built from a parallel corpus, we can use standard word alignment techniques to align the English sentences $D_e$ and their foreign language translations $D_f$. (Note that many combinations are impossible, giving a PMI value of 0; e.g., when the trigram type and the feature instantiation don't have words in common.) Label propagation in the graph will provide coverage and high recall, and we therefore extract only intersected high-confidence (> 0.9) alignments $D_{e \leftrightarrow f}$. |
| This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | 47 78. |
| BABAR's performance in both the terrorism and natural disaster domains, and its contextual-role knowledge for pronouns, have shown successful results. | 0 | Consider the following sentences: (a) Jose Maria Martinez, Roberto Lisandy, and Dino Rossy, who were staying at a Tecun Uman hotel, were kidnapped by armed men who took them to an unknown place. |
| The corpus was annotated with different linguistic information. | 0 | It is difficult to motivate these days why one ministry should be exempt from cutbacks, at the expense of the others. |
| The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | The baseline simply retains the original labels for all arcs, regardless of whether they have been lifted or not, and the number of distinct labels is therefore simply the number n of distinct dependency types. In the first encoding scheme, called Head, we use a new label d↑h for each lifted arc, where d is the dependency relation between the syntactic head and the dependent in the non-projective representation, and h is the dependency relation that the syntactic head has to its own head in the underlying structure. |
| They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | hanzi in the various name positions, derived from a million names. |
| The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | The dev corpus was taken from the NIST05 evaluation set, augmented with some randomly-selected material reserved from the training set. |
| The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | The proof is given in (Tillmann, 2000). |
| The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | These estimates are in turn combined linearly with relative-frequency estimates from an in-domain phrase table. |
| Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | We settled on contrastive evaluations of 5 system outputs for a single test sentence. |
| The corpus was annotated with different linguistic information. | 0 | Nonetheless there is no alternative to Reiche's plan. |
| The authors show that PATB is similar to other treebanks but that annotation consistency remains low. | 0 | 36 79. |
| They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | The less favored reading may be selected in certain contexts, however; in the case of …, for example, the nominal reading jiang4 will be selected if there is morphological information, such as a following plural affix (men), that renders the nominal reading likely, as we shall see in Section 4.3. |
| From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | Source sentence words are aligned with hypothesized target sentence words, where the choice of a new source word, which has not been aligned with a target word yet, is restricted. |
| Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation. | 0 | Although the segmentation requirements for Arabic are not as extreme as those for Chinese, Arabic is written with certain cliticized prepositions, pronouns, and connectives connected to adjacent words. |
| Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | This pumping lemma states that if there is a tree $t = t_1 t_2 t_3 t_4 t_5$ generated by a TAG $G$ such that its height is more than a predetermined bound $k$, then all trees of the form $t_1 t_2^i t_3 t_4^i t_5$ for each $i > 0$ will also be generated by $G$. Similarly, for tree sets with independent paths and more complex path sets, tree pumping lemmas can be given. |
| They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names. |
| The authors show that PATB is similar to other treebanks but that annotation consistency remains low. | 0 | We also add an annotation for one-level iDafa (oneLevelIdafa) constructs since they make up more than 75% of the iDafa NPs in the ATB (Gabbard and Kulick, 2008). |
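
As a usage sketch, rows with this schema can be loaded and inspected with the Hugging Face `datasets` library. This is a minimal, hedged example: the repository ID `user/dataset-name` is a hypothetical placeholder for this dataset's actual ID, and the `train` split name is an assumption.

```python
# Minimal sketch for loading a dataset with the schema shown above:
#   source_text (string), label (int64, 0 or 1), target_text (string).
# NOTE: "user/dataset-name" is a hypothetical placeholder, not this
# dataset's actual repository ID; the "train" split name is an assumption.
from datasets import load_dataset

ds = load_dataset("user/dataset-name", split="train")

# Peek at the first few rows to confirm the three-column schema.
for row in ds.select(range(3)):
    print(row["label"], "|", row["source_text"][:60], "=>", row["target_text"][:60])

# Keep only rows with label == 1 (the positive class).
positives = ds.filter(lambda row: row["label"] == 1)
print(f"{len(positives)} rows with label == 1")
```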