source_text (string, lengths 27–368) | label (int64, 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
The work of the second author as well as collaboration visits to Israel was financed by NWO, grant number 017.001.271.
|
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
| 0 |
3) A tight coupling with the speech recognizer output.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
In fact, we found that enabling IRSTLM’s cache made it slightly slower, so results in Table 1 use IRSTLM without caching.
|
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors lead to syntactic disambiguation.
| 0 |
3.
|
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with the results of state-of-the-art standalone applications.
| 0 |
Evaluation: We use 8 different measures to evaluate the performance of our system on the joint disambiguation task.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
In our situation, the competing hypotheses are the possible antecedents for an anaphor.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
We do not experiment with models larger than physical memory in this paper because TPT is unreleased, factors such as disk speed are hard to replicate, and in such situations we recommend switching to a more compact representation, such as RandLM.
|
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
| 0 |
We show the results of three of the experiments we conducted to measure isolated constituent precision under various partitioning schemes.
|
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
| 0 |
We computed BLEU scores for each submission with a single reference translation.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
4.5 Transliterations of Foreign Words.
|
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
| 0 |
Table 2 shows results for both settings and all methods described in sections 2 and 3.
|
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
| 0 |
We use label propagation in two stages to generate soft labels on all the vertices in the graph.
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
Each distance in the traveling salesman problem now corresponds to the negative logarithm of the product of the translation, alignment and language model probabilities.
|
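The row above casts decoding as a traveling salesman problem with additive costs. Below is a minimal Python sketch of that reduction, assuming the three model scores are available as probabilities; function and variable names are illustrative, not from the cited system.

```python
import math

def arc_cost(p_translation, p_alignment, p_language_model):
    """Multiplying probabilities corresponds to summing their negative
    logarithms, so TSP/shortest-path machinery over additive distances
    can be reused for the search problem described above."""
    return -(math.log(p_translation)
             + math.log(p_alignment)
             + math.log(p_language_model))

# A more probable extension gets a smaller "distance".
print(arc_cost(0.2, 0.5, 0.1) < arc_cost(0.01, 0.5, 0.1))  # True
```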
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
| 0 |
This is motivated by taking β po(s|t) to be the parameters of a Dirichlet prior on phrase probabilities, then maximizing posterior estimates p(s|t) given the IN corpus.
|
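The row above sketches the Bayesian reading of the smoothing weight β. Under that reading, the MAP estimate takes the familiar Dirichlet-smoothed form shown below; this is a sketch using assumed notation, with c_IN denoting counts in the IN corpus.

```latex
\hat{p}(s \mid t) \;=\; \frac{c_{\mathrm{IN}}(s,t) \;+\; \beta\, p_o(s \mid t)}{c_{\mathrm{IN}}(t) \;+\; \beta}
```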
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
| 0 |
Number: filters candidate if number doesn’t agree.
|
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
| 0 |
Je voudrais préciser, à l’adresse du commissaire Liikanen, qu’il n’est pas aisé de recourir aux tribunaux nationaux.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
Reading the following record’s offset indicates where the block ends.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
Our code is thread-safe, and integrated into the Moses, cdec, and Joshua translation systems.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
To summarize, we provided: The performance of the baseline system is similar to the best submissions in last year’s shared task.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
Vocabulary lookup is a sorted array of 64-bit word hashes.
|
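The row above describes vocabulary lookup over a sorted array of 64-bit word hashes. KenLM itself is C++; the Python sketch below only illustrates the idea, and the hash function and class name are assumptions for illustration.

```python
import bisect
import hashlib

def word_hash(word):
    """Illustrative 64-bit hash of a word (KenLM uses a different hash)."""
    return int.from_bytes(hashlib.sha1(word.encode("utf-8")).digest()[:8], "little")

class SortedHashVocabulary:
    def __init__(self, words):
        # Store only sorted 64-bit hashes; word ids are positions in the array.
        self.hashes = sorted(word_hash(w) for w in words)

    def lookup(self, word):
        # Binary search over the sorted hash array.
        h = word_hash(word)
        i = bisect.bisect_left(self.hashes, h)
        return i if i < len(self.hashes) and self.hashes[i] == h else None

vocab = SortedHashVocabulary(["the", "quick", "brown", "fox"])
print(vocab.lookup("fox") is not None, vocab.lookup("dog") is None)  # True True
```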
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
Variants of alif are inconsistently used in Arabic texts.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
Footnote 11: taTweel (-) is an elongation character used in Arabic script to justify text.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
each word in the lexicon whether or not each string is actually an instance of the word in question.
|
The corpus was annotated with different linguistic information.
| 0 |
And then there are decisions that systems typically hard-wire, because the linguistic motivation for making them is not well understood yet.
|
Two general approaches are presented and two combination techniques are described for each approach.
| 0 |
Once again we present both a non-parametric and a parametric technique for this task.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
8 57.3 +FEATS best median 50.
|
They focused on phrases which have two Named Entities, and proceeded in two stages.
| 0 |
This overview is illustrated in Figure 1.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
But diacritics are not present in unvocalized text, which is the standard form of, e.g., news media documents. (A parse-tree fragment from the original figure follows here: VBD “she added”, VP, PUNC, S, VP, VBP, NP, ...)
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
Hence we decided to select ten commentaries to form a “core corpus”, for which the entire range of annotation levels was realized, so that experiments with multi-level querying could commence.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
The domains are the linguistic spans that are to receive an IS-partitioning, and the units are the (smaller) spans that can play a role as a constituent of such a partitioning.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
At each iteration the algorithm increases the number of rules, while maintaining a high level of agreement between the spelling and contextual decision lists.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
Memory usage is the same as with binary search and lower than with set.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
| 0 |
We define an ATM, M, recognizing a language generated by a grammar, G, having the properties discussed in Section 43.
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
The algorithm can be viewed as heuristically optimizing an objective function suggested by (Blum and Mitchell 98); empirically it is shown to be quite successful in optimizing this criterion.
|
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
(In this figure eps is c) be implemented, though, such as a maximal-grouping strategy (as suggested by one reviewer of this paper); or a pairwise-grouping strategy, whereby long sequences of unattached hanzi are grouped into two-hanzi words (which may have some prosodic motivation).
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
One key issue here is to seek a discourse-based model of information structure.
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
(Brandt 1996) extended these ideas toward a conception of kommunikative Gewichtung (“communicative-weight assignment”).
|
This topic has been getting more attention, driven by the needs of various NLP applications.
| 0 |
Evaluation results for links
|
This paper presents research in the area of automatic paraphrase discovery.
| 0 |
As was explained in the results section, “strength” or “add” are not desirable keywords in the CC-domain.
|
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
| 0 |
Its success depends on the two domains being relatively close, and on the OUT corpus not being so large as to overwhelm the contribution of IN.
|
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
Performance typically stabilizes across languages after only a few iterations.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
BABAR uses a named entity recognizer to identify proper names that refer to people and companies.
|
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
| 0 |
Formally, we define dependency graphs as follows: 3.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
(Again, the goal of also in structural features.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
For this experiment, we compare our model with the uniform tag assignment prior (1TW) with the learned prior (+PRIOR).
|
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
What is important and is not expressed by the notation is the so-called coverage constraint: each source position j should be 'hit' exactly once by the path of the inverted alignment $b_1^I = b_1 \ldots b_i \ldots b_I$. Using the inverted alignments in the maximum approximation, we obtain as search criterion:

$$\max_{I}\Big(p(J \mid I)\,\max_{e_1^I}\Big(\prod_{i=1}^{I} p(e_i \mid e_{i-1}, e_{i-2})\,\max_{b_1^I}\prod_{i=1}^{I}\big[p(b_i \mid b_{i-1}, I, J)\,p(f_{b_i} \mid e_i)\big]\Big)\Big) \;=\; \max_{I}\Big(p(J \mid I)\,\max_{e_1^I,\,b_1^I}\prod_{i=1}^{I} p(e_i \mid e_{i-1}, e_{i-2})\,p(b_i \mid b_{i-1}, I, J)\,p(f_{b_i} \mid e_i)\Big),$$

where the two products over i have been merged into a single product over i; $p(e_i \mid e_{i-1}, e_{i-2})$ is the trigram language model probability.
|
BABAR performed well in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns.
| 0 |
For example, if {N P1, N P2, N P3} are all coreferent, then each NP must be linked to one of the other two NPs.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
These 140 NE categories are designed by extending MUC’s 7 NE categories with finer sub-categories (such as Company, Institute, and Political Party for Organization; and Country, Province, and City for Location) and adding some new types of NE categories (Position Title, Product, Event, and Natural Object).
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
Particles are uninflected.
|
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
| 0 |
The best answer to this is: many research labs have very competitive systems whose performance is hard to tell apart.
|
There is no global pruning.
| 0 |
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
|
All the texts were annotated by two people.
| 0 |
All commentaries have been annotated with rhetorical structure, using RSTTool4 and the definitions of discourse relations provided by Rhetorical Structure Theory (Mann, Thompson 1988).
|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
Note that in our construction arcs can never cross token boundaries.
|
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
| 0 |
Finally, we find links between sets of phrases, based on the NE instance pair data (for example, different phrases which link “IBM” and “Lotus”) (Step 4).
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
For example, kidnapping victims should be extracted from the subject of the verb “kidnapped” when it occurs in the passive voice (the shorthand representation of this pattern would be “<subject> were kidnapped”).
|
This paper talks about Pseudo-Projective Dependency Parsing.
| 0 |
The baseline simply retains the original labels for all arcs, regardless of whether they have been lifted or not, and the number of distinct labels is therefore simply the number n of distinct dependency types.2 In the first encoding scheme, called Head, we use a new label d↑h for each lifted arc, where d is the dependency relation between the syntactic head and the dependent in the non-projective representation, and h is the dependency relation that the syntactic head has to its own head in the underlying structure.
|
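The row above defines the Head encoding for lifted arcs. Below is a minimal Python sketch of that labeling convention; the relation names in the usage example are illustrative, not taken from the cited treebank.

```python
def head_encode(dependency_relation, head_relation):
    """Pseudo-projective 'Head' encoding sketch: label a lifted arc with
    d↑h, where d is the arc's own dependency relation and h is the relation
    its syntactic head bears to that head's own head."""
    return f"{dependency_relation}\u2191{head_relation}"

print(head_encode("iobj", "vg"))  # iobj↑vg
```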
The texts were annotated with the RSTtool.
| 0 |
This concerns on the one hand the basic question of retrieval, i.e. searching for information across the annotation layers (see 3.1).
|
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
| 0 |
When dropping the top and bottom 2.5% the remaining BLEU scores define the range of the confidence interval.
|
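The row above describes the percentile bootstrap used for BLEU confidence intervals. Below is a simplified Python sketch; it uses the mean of per-sentence scores as a stand-in for recomputing corpus BLEU on each resample, which is an assumption for brevity.

```python
import random

def bootstrap_confidence_interval(scores, n_resamples=1000, seed=0):
    """Percentile bootstrap sketch: resample the per-sentence scores with
    replacement, recompute the statistic each time, then drop the top and
    bottom 2.5% of the resampled statistics; the remaining values span the
    95% confidence interval."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_resamples):
        sample = [rng.choice(scores) for _ in scores]
        stats.append(sum(sample) / len(sample))
    stats.sort()
    cut = int(0.025 * n_resamples)
    return stats[cut], stats[-cut - 1]

print(bootstrap_confidence_interval([0.21, 0.25, 0.19, 0.23, 0.27, 0.22]))
```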
This assumption, however, is not inherent to type-based tagging models.
| 0 |
4 70.4 46.
|
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
| 0 |
As a result, its POS tag needs to be induced in the “No LP” case, while the correct tag is available as a constraint feature in the “With LP” case. (Footnote 11: A word-level paired t-test is significant at p < 0.01 for Danish, Greek, Italian, Portuguese, Spanish and Swedish, and at p < 0.05 for Dutch.)
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
2.5 Connectives with scopes.
|
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
| 0 |
For example: “... fraud related to work on a federally funded sewage plant in Georgia”. In this case, Georgia is extracted: the NP containing it is a complement to the preposition “in”; the PP headed by “in” modifies the NP “a federally funded sewage plant”, whose head is the singular noun “plant”.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
This style of naming is never required and seems to be losing currency.
|
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectives, coreference and information structure.
| 0 |
Conversely, we can use the full rhetorical tree from the annotations and tune the co-reference module.
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
Besides information structure, the second main goal is to enhance current models of rhetorical structure.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
Keywords with more than one word. In the evaluation, we explained that “chairman” and “vice chairman” are considered paraphrases.
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
Moses sets the cache size parameter to 50 so we did as well; the resulting cache size is 2.82 GB.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
Table 1: Syntactic Seeding Heuristics. BABAR’s reliable case resolution heuristics produced a substantial set of anaphor/antecedent resolutions that will be the training data used to learn contextual role knowledge.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
In various dialects of Mandarin certain phonetic rules apply at the word level.
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
In the co-training case, (Blum and Mitchell 98) argue that the task should be to induce functions f1 and f2 such that f1(x1) = f2(x2). So f1 and f2 must (1) correctly classify the labeled examples, and (2) must agree with each other on the unlabeled examples.
|
Combining multiple highly-accurate independent parsers yields promising results.
| 0 |
Mi(c) is a binary function returning t when parser i (from among the k parsers) suggests constituent c should be in the parse.
|
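The row above defines the voting indicator M_i(c). Below is a minimal sketch of non-parametric constituent voting built on it: keep a constituent when more than half of the k parsers propose it. The data structures are illustrative assumptions.

```python
from collections import Counter

def majority_constituents(parser_outputs):
    """parser_outputs: list of sets of constituents (e.g. (label, start, end)
    tuples), one set per parser. Returns the constituents suggested by more
    than half of the parsers, i.e. those with sum_i M_i(c) > k / 2."""
    k = len(parser_outputs)
    votes = Counter(c for constituents in parser_outputs for c in constituents)
    return {c for c, n in votes.items() if n > k / 2}

outputs = [{("NP", 0, 2), ("VP", 2, 5)},
           {("NP", 0, 2), ("PP", 3, 5)},
           {("NP", 0, 2), ("VP", 2, 5)}]
print(majority_constituents(outputs))  # {('NP', 0, 2), ('VP', 2, 5)}
```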
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
The less favored reading may be selected in certain contexts, however; in the case of , for example, the nominal reading jiang4 will be selected if there is morphological information, such as a following plural affix ir, menD that renders the nominal reading likely, as we shall see in Section 4.3.
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
For each source word f, the list of its possible translations e is sorted according to p(f|e) · p_uni(e), where p_uni(e) is the unigram probability of the English word e. It is sufficient to consider only the best 50 words.
|
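The row above describes how candidate translations are ranked and pruned. Below is a small Python sketch of that scoring rule; the lexicon and unigram tables are toy assumptions, not real model parameters.

```python
def best_translations(f, lexicon_prob, unigram_prob, candidates, top_k=50):
    """Rank candidate English words e for a source word f by
    p(f | e) * p_uni(e) and keep only the top_k (50 in the description above).
    lexicon_prob and unigram_prob are assumed lookup functions."""
    scored = sorted(candidates,
                    key=lambda e: lexicon_prob(f, e) * unigram_prob(e),
                    reverse=True)
    return scored[:top_k]

# Toy example with made-up probabilities.
lex = {("haus", "house"): 0.7, ("haus", "home"): 0.2, ("haus", "building"): 0.1}
uni = {"house": 0.002, "home": 0.003, "building": 0.001}
print(best_translations("haus",
                        lambda f, e: lex.get((f, e), 0.0),
                        lambda e: uni.get(e, 0.0),
                        ["house", "home", "building"], top_k=2))  # ['house', 'home']
```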
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
In this subsection, we will report the results of the experiment, in terms of the number of words, phrases or clusters.
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
The sign test checks how likely a sample of better and worse BLEU scores would have been generated by two systems of equal performance.
|
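The row above explains the sign test. Below is a self-contained Python sketch of the two-sided version it implies, with the number of sentences favouring one system modeled as Binomial(n, 0.5) under the null hypothesis of equal performance.

```python
from math import comb

def sign_test_p_value(better, worse):
    """Two-sided sign test sketch: under the null hypothesis, each sentence is
    equally likely to favour either system, so the count of 'better' outcomes
    follows Binomial(n, 0.5); sum the more extreme tail and double it."""
    n = better + worse
    k = min(better, worse)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

print(sign_test_p_value(better=60, worse=40))  # ~0.057
```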
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
| 0 |
It can be shown that this objective is convex in q.
|
The features were weighted within a logistic model that gave an overall weight for each phrase pair; this weight was applied when making MAP-smoothed relative-frequency estimates, which were combined linearly with relative-frequency estimates from an in-domain phrase table.
| 0 |
The 14 general-language features embody straightforward cues: frequency, “centrality” as reflected in model scores, and lack of burstiness.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
We have argued that the proposed method performs well.
|
The PROBING data structure uses linear probing hash tables and is designed for speed.
| 0 |
SRILM’s compact variant, IRSTLM, MITLM, and BerkeleyLM’s sorted variant are all based on this technique.
|
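The row above names linear probing as the collision strategy behind the PROBING structure. The actual data structure is a speed-tuned C++ implementation; the Python sketch below only illustrates the linear-probing collision scheme itself, with assumed class and method names.

```python
class LinearProbingTable:
    """Minimal linear-probing sketch: on a collision, step to the next bucket
    until an empty slot or the matching key is found. No resizing is done, so
    this only works while the table is far from full."""
    def __init__(self, capacity=16):
        self.slots = [None] * capacity

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)  # linear probing step
        return i

    def insert(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def lookup(self, key):
        entry = self.slots[self._probe(key)]
        return entry[1] if entry is not None else None

table = LinearProbingTable()
table.insert("is one of", -1.23)
print(table.lookup("is one of"))  # -1.23
```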
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
(1991), Gu and Mao (1994), and Nie, Jin, and Hannan (1994).
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
There are 13 types of extensions needed to describe the verbgroup reordering.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
The average agreement among the human judges is .76, and the average agreement between ST and the humans is .75, or about 99% of the interhuman agreement. One can better visualize the precision-recall similarity matrix by producing from that matrix a distance matrix, computing a classical metric multidimensional scaling (Torgerson 1958; Becker, Chambers, Wilks 1988) on that distance matrix, and plotting the first two most significant dimensions.
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
Unigram lookup is dense so we use an array of probability and backoff values.
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
Search method | t0 | CPU time [sec] | #search errors | mWER [%]
---|---|---|---|---
QmS | 0.0 | 0.07 | 108 | 42.6
QmS | 1.0 | 0.13 | 85 | 37.8
QmS | 2.5 | 0.35 | 44 | 36.6
QmS | 5.0 | 1.92 | 4 | 34.6
QmS | 10.0 | 10.6 | 0 | 34.5
IbmS | 0.0 | 0.14 | 108 | 43.4
IbmS | 1.0 | 0.3 | 84 | 39.5
IbmS | 2.5 | 0.8 | 45 | 39.1
IbmS | 5.0 | 4.99 | 7 | 38.3
IbmS | 10.0 | 28.52 | 0 | 38.2

Table 6 shows example translations obtained by the three different approaches.
|
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectives, coreference and information structure.
| 0 |
For one thing, it is not clear who is to receive settlements or what should happen in case not enough teachers accept the offer of early retirement.
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
Sheffield: Guthrie and Hepple (2010) explore several randomized compression techniques, but did not release code.
|
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
In contrast, NNP (proper nouns) form a large portion of vocabulary.
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
For even larger models, we recommend RandLM; the memory consumption of the cache is not expected to grow with model size, and it has been reported to scale well.
|
This paper presents research in the area of automatic paraphrase discovery.
| 0 |
We are focusing on phrases which have two Named Entities (NEs), as those types of phrases are very important for IE applications.
|
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
| 0 |
Altun et al. (2005) proposed a technique that uses graph based similarity between labeled and unlabeled parts of structured data in a discriminative framework for semi-supervised learning.
|
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
| 0 |
The judgements tend to be done more in the form of a ranking of the different systems.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
| 0 |
It was also proposed to allow annotators to skip sentences that they are unable to judge.
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
In our work, we demonstrate that using a simple naïve Bayes approach also yields substantial performance gains, without the associated training complexity.
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
Altun et al. (2005) proposed a technique that uses graph based similarity between labeled and unlabeled parts of structured data in a discriminative framework for semi-supervised learning.
|
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
| 0 |
1).
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
Further, we report current resident memory and peak virtual memory because these are the most applicable statistics provided by the kernel.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
We include a list of per-category results for selected phrasal labels, POS tags, and dependencies in Table 8.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
This differs from other implementations (Stolcke, 2002; Pauls and Klein, 2011) that use hash tables as nodes in a trie, as explained in the next section.
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
Morphological processes in Semitic languages deliver space-delimited words which introduce multiple, distinct, syntactic units into the structure of the input sentence.
|
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
| 0 |
3) A tight coupling with the speech recognizer output.
|