source_text (string, lengths 27–368) | label (int64, 0–1) | target_text (string, lengths 1–5.38k)
---|---|---|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
Unfortunately, we were unable to correctly run the IRSTLM quantized variant.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
(3)) to be defined over unlabeled as well as labeled instances.
|
This corpus has several advantages: it is annotated at different levels.
| 0 |
The general idea for the knowledge-based part is to have the system use as much information as it can find at its disposal to produce a target representation as specific as possible and as underspecified as necessary.
|
They have made use of local and global features to deal with instances of the same token in a document.
| 0 |
Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. McCann somewhere else in the document, then one would like to give person a higher probability than organization.
|
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
| 0 |
Both BBN and NYU have tagged their own data to supplement the official training data.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
(Yarowsky 95) describes an algorithm for word-sense disambiguation that exploits redundancy in contextual features, and gives impressive performance.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
Other errors include NE tagging errors and errors due to a phrase which includes other NEs.
|
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
| 0 |
BABAR achieved recall in the 42–50% range for both domains, with 76% precision overall for terrorism and 87% precision for natural disasters.
|
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
| 0 |
Our final average POS tagging accuracy of 83.4% compares very favorably to the average accuracy of Berg-Kirkpatrick et al.’s monolingual unsupervised state-of-the-art model (73.0%), and considerably bridges the gap to fully supervised POS tagging performance (96.6%).
|
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
Thus at each iteration the algorithm is forced to pick features for the location, person and organization in turn for the classifier being trained.
|
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
| 0 |
The NE tagger is a rule-based system with 140 NE categories [Sekine et al. 2004].
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
For definite NPs, the results are a mixed bag: some knowledge sources increased recall a little, but at the expense of some precision.
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
97 78.
|
This paper conducted research in the area of automatic paraphrase discovery.
| 0 |
We proposed an unsupervised method to discover paraphrases from a large untagged corpus.
|
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
| 0 |
The zone to which a token belongs is used as a feature.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
Examples will usually be accompanied by a translation, plus a morpheme-by-morpheme gloss given in parentheses whenever the translation does not adequately serve this purpose.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
(2010) consistently outperforms ours on English, we obtain substantial gains across other languages.
|
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
| 0 |
We were intentionally lenient with our baselines: bilingual information by projecting POS tags directly across alignments in the parallel data.
|
All the texts were annotated by two people.
| 0 |
The commentaries in PCC are all of roughly the same length, ranging from 8 to 10 sentences.
|
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
| 0 |
Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
2.2.3 Lexical Caseframe Expectations The second type of contextual role knowledge learned by BABAR is Lexical Caseframe Expectations, which are used by the CFLex knowledge source.
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
Because these two words have identical complements, syntax rules are typically unhelpful for distinguishing between them.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
Time includes all queries but excludes random number generation and data structure population.
|
BABAR's performance in both the terrorism and natural disaster domains, and its use of contextual-role knowledge for pronouns, have shown successful results.
| 0 |
However their work did not consider other types of lexical expectations (e.g., PP arguments), semantic expectations, or context comparisons like our caseframe network. (Niyu et al., 1998) used unsupervised learning to acquire gender, number, and animacy information from resolutions produced by a statistical pronoun resolver.
|
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
| 0 |
We would also like to thank Amarnag Subramanya for helping us with the implementation of label propagation and Shankar Kumar for access to the parallel data.
|
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
1 2 3.
|
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
| 0 |
From the point of view of computational implementation this can be problematic, since the inclusion of non-projective structures makes the parsing problem more complex and therefore compromises efficiency and in practice also accuracy and robustness.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
In the ATB, asta'adah is tagged 48 times as a noun and 9 times as a verbal noun.
|
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
5.2 Discussion.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
7.96 5.55 1
|
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
Because of this threshold, very few NE instance pairs could be used and hence the variety of phrases was also limited.
|
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
| 0 |
However, if we consider precision, recall and F-measure on non-projective dependencies only, as shown in Table 6, some differences begin to emerge.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
| 0 |
The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge).
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
Alon Lavie advised on this work.
|
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
When aligning the words in parallel texts (for language pairs like Spanish–English, French–English, Italian–German, ...), we typically observe a strong localization effect.
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
For graph propagation, the hyperparameter v was set to 2 x 10−6 and was not tuned.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
BABAR uses the log-likelihood statistic (Dunning, 1993) to evaluate the strength of a co-occurrence relationship.
|
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
| 0 |
Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. McCann somewhere else in the document, then one would like to give person a higher probability than organization.
|
The AdaBoost algorithm was developed for supervised learning.
| 0 |
The
|
The resulting model is compact, efficiently learnable and linguistically expressive.
| 0 |
(2010) and the posterior regularization HMM of Graça et al.
|
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
| 0 |
Finally, we thank Kuzman Ganchev and the three anonymous reviewers for helpful suggestions and comments on earlier drafts of this paper.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
Consider first the examples in (2).
|
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors lead to syntactic disambiguation.
| 0 |
evaluated to account for the same fraction of the data.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
27 80.
|
This paper conducted research in the area of automatic paraphrase discovery.
| 0 |
We concentrate on those sets.
|
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
| 0 |
The scores and confidence intervals are detailed first in the Figures 7–10 in table form (including ranks), and then in graphical form in Figures 11–16.
|
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
| 0 |
Step 4.
|
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
| 0 |
For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.
|
The AdaBoost algorithm was developed for supervised learning.
| 0 |
This "default" feature type has 100% coverage (it is seen on every example) but a low, baseline precision.
|
Their results show that their high-performance NER uses less training data than other systems.
| 0 |
Finally, the concatenated 5 * 20% output is used to train the reference resolution component.
|
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
| 0 |
Although these authors report better gains than ours, they are with respect to a non-adapted baseline.
|
Two general approaches are presented and two combination techniques are described for each approach.
| 0 |
Table 4 shows how much the Bayes switching technique uses each of the parsers on the test set.
|
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
| 0 |
From the discussion so far it is clear that a number of formalisms involve some type of context-free rewriting (they have derivation trees that are local sets).
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
In the “Person – Person” domain, 618 keywords are found, and in the “Country – Country” domain, 303 keywords are found.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
Finally, we would like to note that it is possible to devise similar algorithms based on other objective functions than the one given in Equ.
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
RandLM 0.2 (Talbot and Osborne, 2007) stores large-scale models in less memory using randomized data structures.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
IRST is not threadsafe.
|
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
Then, it can be verified that We can now derive the CoBoost algorithm as a means of minimizing Zco.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
Again, famous place names will most likely be found in the dictionary, but less well-known names, such as bu4lang3-shi4wei2-ke4 'Brunswick' (as in the New Jersey town name 'New Brunswick') will not generally be found.
|
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
| 0 |
Not every annotator was fluent in both the source and the target language.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
2.2 Syntactic structure.
|
This paper presents a maximum entropy-based named entity recognizer (NER).
| 0 |
needs to be in initCaps to be considered for this feature.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
The PROBING model is 2.4 times as fast as the fastest alternative, SRILM, and uses less memory too.
|
The use of global features has shown excellent results on MUC-6 and MUC-7 test data.
| 0 |
We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.
|
In this paper the authors evaluate machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back.
| 0 |
We settled on contrastive evaluations of 5 system outputs for a single test sentence.
|
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
5.2 Setup.
|
Here we present two algorithms.
| 0 |
Equation 2 is an estimate of the conditional probability of the label given the feature, P(y|x).
|
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
(3) shows learning curves for CoBoost.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
suffixes (e.g., …). Other notable parameters are second order vertical Markovization and marking of unary rules.
|
A beam search concept is applied as in speech recognition.
| 0 |
Each distance in the traveling salesman problem now corresponds to the negative logarithm of the product of the translation, alignment and language model probabilities.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
We refer to this process as Reliable Case Resolution because it involves finding cases of anaphora that can be easily resolved with their antecedents.
|
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
| 0 |
We use v1.0 mainly because previous studies on joint inference reported results w.r.t. v1.0 only. We expect that using the same setup on v2.0 will allow a cross-treebank comparison. We used the first 500 sentences as our dev set and the rest 4500 for training and report our main results on this split.
|
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
| 0 |
Other strategies could readily … As a reviewer has pointed out, it should be made clear that the function for computing the best path is an instance of the Viterbi algorithm.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
However, when we pre-tag the input, as is recommended for English, we notice a 0.57% F1 improvement.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
A more rigid mechanism for modeling sparsity is proposed by Ravi and Knight (2009), who minimize the size of tagging grammar as measured by the number of transition types.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
A Brief Introduction to the Chinese Writing System Most readers will undoubtedly be at least somewhat familiar with the nature of the Chinese writing system, but there are enough common misunderstandings that it is as well to spend a few paragraphs on properties of the Chinese script that will be relevant to topics discussed in this paper.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
Each word is terminated by an arc that represents the transduction between f and the part of speech of that word, weighted with an estimated cost for that word.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
Fortunately, we were able to obtain a copy of the full set of sentences from Chang et al. on which Wang, Li, and Chang tested their system, along with the output of their system. In what follows we will discuss all cases from this set where our performance on names differs from that of Wang, Li, and Chang.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
This is the form of recursive levels in iDafa constructs.
|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
We claim that no particular morphological segmentation is a priori more likely for surface forms before exploring the compositional nature of syntactic structures, including manifestations of various long-distance dependencies.
|
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
A Stochastic Finite-State Word-Segmentation Algorithm for Chinese
|
This assumption, however, is not inherent to type-based tagging models.
| 0 |
We consider the unsupervised POS induction problem without the use of a tagging dictionary.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
The only supervision is in the form of 7 seed rules (namely, that New York, California and U.S. are locations; that any name containing Mr is a person; that any name containing Incorporated is an organization; and that I.B.M. and Microsoft are organizations).
|
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
| 0 |
A similar explanation applies to the link to the “stake” set.
|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
Here, all token-internal collocations of tags unseen in our training data are pruned away.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
Since these are distinct syntactic units, they are typically segmented.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
Furthermore, by inverting the transducer so that it maps from phonemic transcriptions to hanzi sequences, one can apply the segmenter to other problems, such as speech recognition (Pereira, Riley, and Sproat 1994).
|
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
| 0 |
Our original hope in combining these parsers is that their errors are independently distributed.
|
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
| 0 |
All of these systems were run on data that was not seen during their development.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
Figure 2 shows timing results.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
The minimal dictionary encoding this information is represented by the WFST in Figure 2(a).
|
There is no global pruning.
| 0 |
The negative logarithm of t0 is reported.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
Table 2 shows these similarity measures.
|
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
| 0 |
As seen by the drop in average individual parser performance baseline, the introduced parser does not perform very well.
|
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
| 0 |
With the additional assumption that (s, t) can be restricted to the support of co(s, t), this is equivalent to a “flat” alternative to (6) in which each non-zero co(s, t) is set to one.
|
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
| 0 |
The following three sections elaborate these different stages in more detail.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
For example, the two NEs “Eastern Group Plc” and “Hanson Plc” have the following contexts.
|
This paper presents a maximum entropy-based named entity recognizer (NER).
| 0 |
Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
Unlike Dickinson (2005), we strip traces and only con-.
|
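The preview above shows a three-column schema: source_text (a short summary sentence), label (int64, 0 or 1), and target_text (a sentence drawn from the cited paper). Below is a minimal sketch of how rows with this schema could be loaded and filtered with the Hugging Face datasets library; the repository path "your-org/your-dataset" and the "train" split name are placeholders, not this dataset's actual identifiers.

```python
# Minimal sketch: load and inspect a dataset with the schema shown above
# (source_text: str, label: int 0/1, target_text: str).
# NOTE: "your-org/your-dataset" and the "train" split are placeholders,
# not this dataset's actual repository name or split layout.
from datasets import load_dataset

ds = load_dataset("your-org/your-dataset", split="train")

# Peek at one row to confirm the three columns.
row = ds[0]
print(row["source_text"])
print(row["label"])            # 0 or 1
print(row["target_text"][:200])

# Keep only rows with label 0, as in the preview rows above.
label_zero = ds.filter(lambda r: r["label"] == 0)
print(f"{len(label_zero)} of {len(ds)} rows have label 0")
```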