source_text (string, lengths 27–368) | label (int64, values 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | MITLM 0.4 (Hsu and Glass, 2008) is mostly designed for accurate model estimation, but can also compute perplexity. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | F1 85 Berkeley 80 Stanford. |
The experimental tests are carried out on the Verbmobil task, which is a limited-domain spoken-language task. | 0 | The alignment model uses two kinds of parameters: alignment probabilities p(aj | aj−1, I, J), where the probability of alignment aj for position j depends on the previous alignment position aj−1 (Ney et al., 2000), and lexicon probabilities p(fj | eaj). |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Ends with the feminine affix :: p. 4. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Any NLP application that presumes as input unrestricted text requires an initial phase of text analysis; such applications involve problems as diverse as machine translation, information retrieval, and text-to-speech synthesis (TTS). |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | The 13 positions of the source sentence are processed in the order shown. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | The taggers were trained on datasets labeled with the universal tags. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | Also, in Information Extraction (IE), in which the system tries to extract elements of some events (e.g. date and company names of a corporate merger event), several event instances from different news articles have to be aligned even if these are expressed differently. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | In all figures, we present the per-sentence normalized judgements. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Each visited entry w_i^n stores backoff b(w_i^n). |
This assumption, however, is not inherent to type-based tagging models. | 0 | However, our full model takes advantage of word features not present in Graça et al. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | This approach leads to a search procedure with complexity O(E³J⁴). |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | In this work, we take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | English was again paired with German, French, and Spanish. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | The authors acknowledge the support of the NSF (CAREER grant IIS0448168, and grant IIS 0904684). |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | Next, for each pair of NE categories, we collect all the contexts and find the keywords which are topical for that NE category pair. |
It is annotated at several levels: morphology, syntax, rhetorical structure, connectives, coreference, and information structure. | 0 | Instead, the designs of the various annotation layers and the actual annotation work are the results of a series of diploma theses, of students' work in course projects, and to some extent of paid assistantships. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | In considering recognition of LCFRS's, we make a further assumption concerning the contribution of each structure to the input string, and how the composition operations combine structures in this respect. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Methods that allow multiple segmentations must provide criteria for choosing the best segmentation. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | If the context w_f^n will never extend to the right (i.e. w_f^n v is not present in the model for all words v) then no subsequent query will match the full context. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | 40 75. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | Following the setup of Johnson (2007), we use the whole of the Penn Treebank corpus for training and evaluation on English. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Vocabulary lookup is a sorted array of 64-bit word hashes. |
All the texts were annotated by two people. | 0 | 3.2 Stochastic rhetorical analysis. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | can expect famous names like Zhou Enlai's to be in many dictionaries, but names such as :fi lf;f; shi2jil-lin2, the name of the second author of this paper, will not be found in any dictionary. |
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs. | 0 | Phrase-level granularity distinguishes our work from previous work by Matsoukas et al. (2009), who weight sentences according to sub-corpus and genre membership. |
Here we present two algorithms. | 0 | The first method uses a similar algorithm to that of (Yarowsky 95), with modifications motivated by (Blum and Mitchell 98). |
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs. | 0 | Somewhat surprisingly, there do not appear to be large systematic differences between linear and MAP combinations. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | From this we see that a finer-grained model for parser combination, at least for the features we have examined, will not give us any additional power. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | We call this technique constituent voting. |
Here both parametric and non-parametric models are explored. | 0 | For this reason, naïve Bayes classifiers are well-matched to this problem. |
BABAR performed successfully in both the terrorism and natural disaster domains, and its contextual-role knowledge proved especially helpful for pronouns. | 0 | Words and punctuation that appear in brackets are considered optional. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | In (1) the sequence ma3lu4 cannot be resolved locally, but depends instead upon broader context; similarly in (2), the sequence cai2neng2 cannot be resolved locally: 1. |
There is no global pruning. | 0 | 4.1 The Task and the Corpus. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | For each domain, we created a blind test set by manually annotating 40 documents with anaphoric chains, which represent sets of noun phrases that are coreferent (as done for MUC6 (MUC6 Proceedings, 1995)). m3(S) = Σ_{X∩Y=S} m1(X)·m2(Y) / (1 − Σ_{X∩Y=∅} m1(X)·m2(Y)) (1) |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | However, it is desirable if we can separate them. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | While Berg-Kirkpatrick et al. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold, 30. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Moreover, the Stanford parser achieves the most exact Leaf Ancestor matches and tagging accuracy that is only 0.1% below the Bikel model, which uses pre-tagged input. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | The extent to which this constraint is enforced varies greatly across existing methods. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, and find that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | Gazdar (1985) argues that sharing of stacks can be used to give analyses for coordination. |
There is no global pruning. | 0 | In Section 2, we briefly review our approach to statistical machine translation. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | Hence, s < k. But by addition of the votes on the two parses, s > 2N−1 > k, a contradiction. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | About half of the participants of last year’s shared task participated again. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | It is the performance we could achieve if an omniscient observer told us which parser to pick for each of the sentences. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | We refer to this process as Reliable Case Resolution because it involves finding cases of anaphora that can be easily resolved with their antecedents. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | First we mark any node that dominates (at any level) a verb. We only consider POS tags when pre-terminals are the only intervening nodes between the nucleus and its bracketing (e.g., unaries, base NPs). |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Conversely, we can use the full rhetorical tree from the annotations and tune the co-reference module. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | Hence we decided to restrict ourselves to only information from the same document. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | f, nan2gual+men0 'pumpkins' is by no means impossible. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | However, their system is a hybrid of hand-coded rules and machine learning methods. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Our TRIE implementation is designed to improve upon IRSTLM using a reverse trie with improved search, bit level packing, and stateful queries. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | In MSA, SVO usually appears in non-matrix clauses. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | Lack of correct reference translations was pointed out as a shortcoming of our evaluation. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | The data is sorted based on the frequency of the context ("a unit of" appeared 314 times in the corpus) and the NE pair instances appearing with that context are shown with their frequency (e.g. "NBC" and "General Electric Co." appeared 10 times with the context "a unit of"). |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Indeed, as we shall show in Section 5, even human judges differ when presented with the task of segmenting a text into words, so a definition of the criteria used to determine that a given segmentation is correct is crucial before one can interpret such measures. |
This assumption, however, is not inherent to type-based tagging models. | 0 | In our model, we associate these features at the type-level in the lexicon. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | For derived words that occur in our corpus we can estimate these costs as we would the costs for an underived dictionary entry. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Sentences (2) and (3) help to disambiguate one way or the other. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | This may be the sign of a maturing research environment. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 36. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | First, we identify sources of syntactic ambiguity understudied in the existing parsing literature. |
Here we present two algorithms. | 0 | The first m pairs have labels yi, whereas for i = m + 1, ..., n the pairs are unlabeled. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | Morphological processes in Semitic languages deliver space-delimited words which introduce multiple, distinct, syntactic units into the structure of the input sentence. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | The 1-bit sign is almost always negative and the 8-bit exponent is not fully used on the range of values, so in practice this corresponds to quantization ranging from 17 to 20 total bits. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Table 9: Dev set results for sentences of length ≤ 70. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | For example, as Gan (1994) has noted, one can construct examples where the segmentation is locally ambiguous but can be determined on the basis of sentential or even discourse context. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | A morpheme, on the other hand, usually corresponds to a unique hanzi, though there are a few cases where variant forms are found. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Linguistic intuitions like those in the previous section inform language-specific annotation choices. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | The range of the score is between 0 and 1 (higher is better). |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Besides the lack of a clear definition of what constitutes a correct segmentation for a given Chinese sentence, there is the more general issue that the test corpora used in these evaluations differ from system to system, so meaningful comparison between systems is rendered even more difficult. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | We therefore also normalized judgements on a per-sentence basis. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Linear probing places at most one entry in each bucket. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | For each co-occurrence relation (noun/caseframe for CFLex, and caseframe/caseframe for CFNet), BABAR computes its log-likelihood value and looks it up in the χ² table to obtain a confidence level. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | Our suspicion is that BLEU is very sensitive to jargon, to selecting exactly the right words, and not synonyms that human judges may appreciate as equally good. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Examining the word fidanzato for the “No LP” and “With LP” models is particularly instructive. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | The number of top-ranked pairs to retain is chosen to optimize dev-set BLEU score. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The restriction can be expressed in terms of the number of uncovered source sentence positions to the left of the rightmost position m in the coverage set. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | The inventions disclosed herein are the subject of a patent application owned by the University of Utah and licensed on an exclusive basis to Attensity Corporation. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Our experiments consistently demonstrate that this model architecture yields substantial performance gains over more complex tagging counterparts. |
BABAR performed successfully in both the terrorism and natural disaster domains, and its contextual-role knowledge proved especially helpful for pronouns. | 0 | If two nouns have mutually exclusive semantic classes, then they cannot be coreferent. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | A more rigid mechanism for modeling sparsity is proposed by Ravi and Knight (2009), who minimize the size of tagging grammar as measured by the number of transition types. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications. | 0 | When the same token is to be interpreted as a single lexeme fmnh, it may function as a single adjective “fat”. |
This corpus has several advantages: it is annotated at different levels. | 0 | Either save money at any cost - or give priority to education. |
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | The commentaries in PCC are all of roughly the same length, ranging from 8 to 10 sentences. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | Figure 2: Order in which source positions are visited for the example given in Fig.1. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | The overall performance of our joint framework demonstrates that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperforms upper bounds proposed by previous joint disambiguation systems and achieves segmentation and parsing results on a par with state-of-the-art standalone applications. |
They focused on phrases which connect two Named Entities, and proceeded in two stages. | 0 | Figure 2 shows examples of extracted NE pair instances and their contexts. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | Using structural information: As was explained in the results section, we extracted examples like "Smith estimates Lotus", from a sentence like "Mr. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | The second case involves existential noun phrases (Allen, 1995), which are noun phrases that uniquely specify an object or concept and therefore do not need a prior referent in the discourse. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | We report results for the best and median hyperparameter settings obtained in this way. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | In. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | 8 We use head-finding rules specified by a native speaker. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | Still, for a good number of sentences, we do have this direct comparison, which allows us to apply the sign test, as described in Section 2.2. |
BABAR performed successfully in both the terrorism and natural disaster domains, and its contextual-role knowledge proved especially helpful for pronouns. | 0 | 2.2.3 Lexical Caseframe Expectations. The second type of contextual role knowledge learned by BABAR is Lexical Caseframe Expectations, which are used by the CFLex knowledge source. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | In this paper we present a stochastic finite-state model for segmenting Chinese text into words, both words found in a (static) lexicon as well as words derived via the above-mentioned productive processes. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | To lower the barrier of entrance to the competition, we provided a complete baseline MT system, along with data resources. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Unlike the WSJ corpus, which has a high frequency of rules like VP → VB PP, Arabic verb phrases usually have lexicalized intervening nodes (e.g., NP subjects and direct objects). |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | The monotone search performs worst in terms of both error rates mWER and SSER. |
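Every row above follows the same `source_text | label | target_text |` layout. As an illustrative aside (the parser below is a hypothetical sketch, not part of any released tooling for this dataset, and it assumes a cell contains no literal `|`), splitting one such row into its three typed fields can be done as:

```python
def parse_row(row: str) -> tuple[str, int, str]:
    """Split a 'source | label | target |' row into (source, label, target).

    Assumes cells contain no literal '|'; the empty cell produced by the
    trailing '|' is discarded, and the label cell is parsed as an int.
    """
    cells = [c.strip() for c in row.split("|")]
    if cells and cells[-1] == "":
        cells.pop()  # drop the empty trailing cell after the final '|'
    if len(cells) != 3:
        raise ValueError(f"expected 3 cells, got {len(cells)}")
    source, label, target = cells
    return source, int(label), target


# One row taken verbatim from the table above:
row = "There is no global pruning. | 0 | 4.1 The Task and the Corpus. |"
print(parse_row(row))
```

Keeping the label as an `int` makes it easy to filter the rows by class before any further processing.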