A minimal sketch for loading data with this schema follows the example rows below.

source_text (string, 27–368 chars) | label (int64, 0–1) | target_text (string, 1–5.38k chars)
---|---|---|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
Memory mapping also allows the same model to be shared across processes on the same machine.
|
Here we present two algorithms.
| 0 |
This section describes AdaBoost, which is the basis for the CoBoost algorithm.
|
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
Restrictions: We compare our new approach with the word reordering used in the IBM translation approach (Berger et al., 1996).
|
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
| 0 |
(Again, the goal of also in structural features.
|
BABAR's performance in both the terrorism and natural disaster domains, and the contribution of contextual-role knowledge to pronoun resolution, have shown successful results.
| 0 |
The ability to redistribute belief values across sets rather than individual hypotheses is key.
|
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
This result suggests the benefit of using the automatic discovery method.
|
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
| 0 |
It is a sequence of proper nouns within an NP; its last word Cooper is the head of the NP; and the NP has an appositive modifier (a vice president at S.&P.) whose head is a singular noun (president).
|
They believe that global context is useful in most languages, as authors naturally tend to use abbreviations for entities already mentioned.
| 0 |
On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.
|
They believe that global context is useful in most languages, as authors naturally tend to use abbreviations for entities already mentioned.
| 0 |
However, 1 http://maxent.sourceforge.net 3.2 Testing.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
We further thank Dr. J.-S.
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
The fraction of buckets that are empty is (m − 1)/m, so average lookup time is O(m/(m − 1)) and, crucially, constant in the number of entries.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
computing the precision of the other's judgments relative to this standard.
|
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
| 0 |
We carried out translation experiments in two different settings.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
| 0 |
However, the characterization given in the main body of the text is correct sufficiently often to be useful.
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
The paper explains the design decisions taken in the annotations, and describes a number of applications using this corpus with its multi-layer annotation.
|
This paper talks about Pseudo-Projective Dependency Parsing.
| 0 |
Applying the function PROJECTIVIZE to the graph in Figure 1 yields the graph in Figure 2, where the problematic arc pointing to Z has been lifted from the original head jedna to the ancestor je.
|
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
| 0 |
We use two different similarity functions to define the edge weights among the foreign vertices and between vertices from different languages.
|
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
| 0 |
We combined evidence from four contextual role knowledge sources with evidence from seven general knowledge sources using a Dempster-Shafer probabilistic model.
|
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
| 0 |
Similarly, for all the LCFRS's, discussed in Section 2, we can define the relationship between a structure and the sequence of substrings it spans, and the effect of the composition operations on sequences of substrings.
|
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
This algorithm can be applied to statistical machine translation.
|
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
| 0 |
Using the terminology of Kahane et al. (1998), we say that jedna is the syntactic head of Z, while je is its linear head in the projectivized representation.
|
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
| 0 |
Our assumption is that caseframes that co-occur in resolutions often have a 2 This normalization is performed syntactically without semantics, so the agent and patient roles are not guaranteed to hold, but they usually do in practice.
|
The use of global features has yielded excellent performance on MUC-6 and MUC-7 test data.
| 0 |
Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
In a more recent study than Chang et al., Wang, Li, and Chang (1992) propose a surname-driven, non-stochastic, rule-based system for identifying personal names. Wang, Li, and Chang also compare their performance with Chang et al.'s system.
|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
Each extraction pattern represents a linguistic expression and a syntactic position indicating where a role filler can be found.
|
The corpus was annotated with different linguistic information.
| 0 |
7 www.cis.upenn.edu/~pdtb/ 8 www.eml-research.de/english/Research/NLP/Downloads had to buy a new car.
|
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
| 0 |
To explicitly handle the word reordering between words in source and target language, we use the concept of the so-called inverted alignments as given in (Ney et al., 2000).
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
The number of judgements is additionally fragmented by our breakup of sentences into in-domain and out-of-domain.
|
They have made use of local and global features to deal with instances of the same token in a document.
| 0 |
The reason we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical.
|
The resulting model is compact, efficiently learnable and linguistically expressive.
| 0 |
Specifically, the lexicon is generated as: P(T, W | ψ) = P(T) P(W | T). Word Type Features (FEATS): Past unsupervised POS work has derived benefits from features on word types, such as suffix and capitalization features (Hasan and Ng, 2009; Berg-Kirkpatrick et al., 2010).
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
Table 3: Training and test conditions for the Verbmobil task (*number of words without punctuation marks).
|
Their results show that their high-performance NER uses less training data than other systems.
| 0 |
Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).
|
The corpus was annotated with different linguistic information.
| 0 |
In the small window on the left, search queries can be entered, here one for an NP that has been annotated on the co-reference layer as bridging.
|
Here both parametric and non-parametric models are explored.
| 0 |
We used section 23 as the development set for our combining techniques, and section 22 only for final testing.
|
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
| 0 |
Morphological disambiguators that consider a token in context (an utterance) and propose the most likely morphological analysis of an utterance (including segmentation) were presented by Bar-Haim et al. (2005), Adler and Elhadad (2006), Shacham and Wintner (2007), and achieved good results (the best segmentation result so far is around 98%).
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
For example, we can easily imagine that the number of paraphrases for "A buys B" is enormous and it is not possible to create a comprehensive inventory by hand.
|
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
(e.g., N.Y. would contribute this feature, IBM would not). nonalpha=x Appears if the spelling contains any characters other than upper or lower case letters.
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
Overall, language modeling significantly impacts decoder performance.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
On several languages, we report performance exceeding that of more complex state-of-the-art systems.
|
Their results show that their high-performance NER uses less training data than other systems.
| 0 |
If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
This was also inspired by the work on the Penn Discourse Tree Bank, which follows similar goals for English.
|
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
| 0 |
Rather, we believe several methods have to be developed, using different heuristics, to discover a wider variety of paraphrases.
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
RandLM is the clear winner in RAM utilization, but is also slower and lower quality.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
Many hanzi have more than one pronunciation, where the correct.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
Our results suggest that current parsing models would benefit from better annotation consistency and enriched annotation in certain syntactic configurations.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
For instance, in the recent IWSLT evaluation, first fluency annotations were solicited (while withholding the source sentence), and then adequacy annotations.
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
At first glance, we quickly recognize that many systems are scored very similarly, both in terms of manual judgement and BLEU.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
Moreover, they are used as substantives much 2 Unlike machine translation, constituency parsing is not significantly affected by variable word order.
|
The use of global features has yielded excellent performance on MUC-6 and MUC-7 test data.
| 0 |
In addition, each feature function is a binary function.
|
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
| 0 |
This is a unique object for which we are able to define a proper probability model.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
In that work, mutual information was used to decide whether to group adjacent hanzi into two-hanzi words.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
Presence of the determiner ال Al. 2.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
| 0 |
As a partial solution, for pairs of hanzi that co-occur sufficiently often in our namelists, we use the estimated bigram cost, rather than the independence-based cost.
|
Here we present two algorithms.
| 0 |
The
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
Our TRIE implements the popular reverse trie, in which the last word of an n-gram is looked up first, as do SRILM, IRSTLM’s inverted variant, and BerkeleyLM except for the scrolling variant.
|
They believe that global context is useful in most languages, as authors naturally tend to use abbreviations for entities already mentioned.
| 0 |
For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
We first define "pseudo-labels" ỹ_i^t as follows: ỹ_i^t = y_i for 1 ≤ i ≤ m, and ỹ_i^t = sign(g_2^t(x_{2,i})) for m < i ≤ n. Thus the first m labels are simply copied from the labeled examples, while the remaining (n − m) examples are taken as the current output of the second classifier.
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
With some minor API changes, namely returning the length of the n-gram matched, it could also be faster—though this would be at the expense of an optimization we explain in Section 4.1.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
30 16.
|
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
Ablation Analysis We evaluate the impact of incorporating various linguistic features into our model in Table 3.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
In fact, during the first rounds many of the predictions of g1, g2 are zero.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
| 0 |
Judges were excluded from assessing the quality of MT systems that were submitted by their institution.
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
Can we do . QmS: Yes, wonderful.
|
All the texts were annotated by two people.
| 0 |
Quite often, though, these directives fulfill the goal of increasing annotator agreement without in fact settling the theoretical question; i.e., the directives are clear but not always very well motivated.
|
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
| 0 |
Firstly, Hebrew unknown tokens are doubly unknown: each unknown token may correspond to several segmentation possibilities, and each segment in such sequences may be able to admit multiple PoS tags.
|
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
| 0 |
A summary of the corpus used in the experiments is given in Table 3.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
| 0 |
With regard to purely morphological phenomena, certain processes are not handled elegantly within the current framework. Any process involving reduplication, for instance, does not lend itself to modeling by finite-state techniques, since there is no way that finite-state networks can directly implement the copying operations required.
|
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
| 0 |
Say we find one system doing better on 20 of the blocks and worse on 80 of the blocks: is it significantly worse?
|
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
| 0 |
In Eq. (1), Pr(e_1^I) is the language model, which is a trigram language model in this case.
|
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
| 0 |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank, leaving only sections 22 and 23 completely untouched during the development of any of the parsers.
|
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
| 0 |
These two principles guide experimentation in this framework, and together with the evaluation measures help us decide which specific type of substructure to combine.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
For the PROBING implementation, hash table sizes are in the millions, so the most relevant values are on the right side of the graph, where linear probing wins.
|
They have made use of local and global features to deal with instances of the same token in a document.
| 0 |
The probability distribution that satisfies the above property is the one with the highest entropy.
|
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
(We would like to note though that unlike previous boosting algorithms, the CoBoost algorithm presented here is not a boosting algorithm under Valiant's (Valiant 84) Probably Approximately Correct (PAC) model.)
|
Two general approaches are presented and two combination techniques are described for each approach.
| 0 |
Proof: Assume a pair of crossing constituents appears in the output of the constituent voting technique using k parsers.
|
A beam search concept is applied as in speech recognition.
| 0 |
The following two error criteria are used in our experiments: mWER: multi-reference WER: We use the Levenshtein distance between the automatic translation and several reference translations as a measure of the translation errors.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
Each learner is free to pick the labels for these instances.
|
This topic has been getting more attention, driven by the needs of various NLP applications.
| 0 |
3.1 Corpora.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
| 0 |
Our suspicion is that BLEU is very sensitive to jargon, to selecting exactly the right words, and not synonyms that human judges may appreciate as equally good.
|
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
| 0 |
, Sunday, then the feature DayOfTheWeek is set to 1.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
This feature has a linguistic justification.
|
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
Clearly, explicitly modeling such a powerful constraint on tagging assignment has a potential to significantly improve the accuracy of an unsupervised part-of-speech tagger learned without a tagging dictionary.
|
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
| 0 |
The edge weights between the foreign language trigrams are computed using a co-occurrence based similarity function, designed to indicate how syntactically similar the middle words of the connected trigrams are (§3.2).
|
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
| 0 |
The maximum precision row is the upper bound on accuracy if we could pick exactly the correct constituents from among the constituents suggested by the three parsers.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
| 0 |
In this paper we present a stochastic finite-state model for segmenting Chinese text into words, both words found in a (static) lexicon as well as words derived via the above-mentioned productive processes.
|
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
| 0 |
F-measure is the harmonic mean of precision and recall, 2PR/(P + R).
|
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
| 0 |
Under certain conditions the constituent voting and naïve Bayes constituent combination techniques are guaranteed to produce sets of constituents with no crossing brackets.
|
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
Instead of offsetting new topics with punctuation, writers of MSA insert connectives such as و wa and ف fa to link new elements to both preceding clauses and the text as a whole.
|
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
| 0 |
Our smoothing procedure takes into account all the aforementioned aspects and works as follows.
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
Time includes all queries but excludes random number generation and data structure population.
|
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
| 0 |
Their theoretical finding is simply stated: classification error rate decreases toward the noise rate exponentially in the number of independent, accurate classifiers.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
This information is readily available in TRIE where adjacent records with equal pointers indicate no further extension of context is possible.
|
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
| 0 |
In this paper, we show how non-projective dependency parsing can be achieved by combining a datadriven projective parser with special graph transformation techniques.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
The
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
| 0 |
The first concerns how to deal with ambiguities in segmentation.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
| 0 |
Systems that generally do better than others will receive a positive average normalized judgement per sentence.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
However, lexically similar NPs usually refer to the same entity in two cases: proper names and existential noun phrases.
|
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
| 0 |
In this work we extended the AdaBoost.MH (Schapire and Singer 98) algorithm to the cotraining case.
|
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
| 0 |
This problem arises because our keywords consist of only one word.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
First of all, most previous articles report performance in terms of a single percent-correct score, or else in terms of the paired measures of precision and recall.
|
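The rows above all share the three-column schema summarized in the header. As a minimal, hedged sketch of how a dataset with this schema could be loaded and inspected with the Hugging Face `datasets` library: the repository ID and split name below are hypothetical placeholders, since this page does not state them.

```python
# Minimal sketch, assuming the schema shown above:
#   source_text: string (27-368 chars), label: int64 (0 or 1), target_text: string (1-5.38k chars).
# "user/dataset-name" and the "train" split are hypothetical placeholders.
from datasets import load_dataset

ds = load_dataset("user/dataset-name", split="train")

for row in ds.select(range(3)):  # inspect the first three rows
    print(row["source_text"])  # summary sentence
    print(row["label"])        # 0 or 1
    print(row["target_text"])  # target passage
```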