id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values) |
---|---|---|---|
train_96800 | To extract dependency relations from these parse trees, we scan for attachment rules (e.g., L 1 H → Y A L ′ H ) and record that A depends on H. The schema omits the rules for right arguments since they are symmetric. | the posterior P (θ|s, t, α) is also a product of Dirichlets, also factoring into a Dirichlet for each nonterminalN , where the parameters αr are augmented by the number of times rulē r is observed in tree t: We can see that αr acts as a pseudocount of the number of timesr is observed prior to t. To make use of this prior, we use the Variational Bayes (VB) technique for PCFGs with Dirichlet Priors presented by Kurihara and Sato (2004). | neutral |
train_96801 | This is illustrated by the following example (from the IWSLT 2007 Arabic-English translation task): Source 1: Asf lA ymknk *lk hnAk klfp HwAly vmAnyn dwlAr lAlsAEp AlwAHdp Ref: sorry you can't there is a cost the charge is eighty dollars per hour 1-best: i'm sorry you can't there in the cost about eighty dollars for a one o'clock Source 2: E*rA lA ymknk t$gyl AltlfAz HtY tqlE AlTA}rp Ref: sorry you cannot turn the tv on until the plane has taken off 1-best: excuse me i you turn tv until the plane departs The phrase lA ymknk (you may not/you cannot) is translated differently (and wrongly in the second case) due to different segmentations and phrase translations chosen by the decoder. | the parameter values were 0.5 for the gap penalty, a maximum substring length of k = 4, and weights of 0, 0.1, 0.2, 0.7. | neutral |
train_96802 | Due to hardware limitation, we are not able to fit the unfiltered phrase tables completely into the memory. | many words remain unaligned on account of their very low frequency. | neutral |
train_96803 | We choose to focus on the items on these lists that seem most likely to be effective cues for our task. | to put it another way: apparently plausible candidates that often appear in sentences with multiple good candidates (i.e., piggybackers) receive a low distilled score, despite a high initial score. | neutral |
train_96804 | We mentioned in the introduction some significant challenges to developing a machine-learning approach to discovering DE operators. | very little of this work is computational. | neutral |
train_96805 | Like Collins and Singer (1999), we assume that the named entities have already been correctly extracted from the text, and our task is merely to label them. | even without the pronoun features, that is, using the same feature set, our system scores equivalently to the EM model, at 83% (this score is on dev, 25% people). | neutral |
train_96806 | In the machine translation literature, this process is commonly referred to as decoding. | each node represents a point in time and arcs between nodes indicates a word occurs between the connected nodes' times. | neutral |
train_96807 | We note, however, that since these are not true phonemes (but rather phonemes copied over from pronunciation dictionaries and word transcripts), we must cautiously interpret these results. | (1997) had previously used phoneme lattices, although with ad hoc edit costs and without efficient indexing. | neutral |
train_96808 | For example, if we have never seen the word houses in language model training, but have examples of house, we still can expect houses are to be more probable than houses fly. | a lattice is a directed acyclic graph that is used to compactly represent the search space for a speech recognition system. | neutral |
train_96809 | We accomplish this goal by ranking the set of utterances by our confidence that they contain the query word, a task known as Ranked Utterance Retrieval (RUR). | we could take the most probable degradations until their cumulative probability exceeds some threshold γ. | neutral |
train_96810 | F (i, j) is the harmonic mean of recall (fraction of the messages in the i also present in j) and precision (fraction of messages in j also present in i), and F is a weighted sum over all ground-truth conversations (i.e., F is microaveraged). | another thread of related work is document expansion. | neutral |
train_96811 | Segmented Corpus hnAk w-vlAv-wn bn-w Al-ywm Al-jmAEp Morpheme Feature:Value hnAk:1 w:2 vlAv:1 wn:1 bn:1 Al:2 ywm:1 jmAEp:1 hnAk:1 wvlAvwn:1 bnw:1 Alywm:1 Alj-mAEp:1 Bigram Context Feature:Value ## vl:1 #w wn:1 Av ##:1 ## w#:1 bn ##:1 ## yw:1 Al ##:2 ## jm:1 ## ##:5 Furthermore, the corresponding features for the segmented word w-vlAv-wn are shown in Figure 1. | we observe that the maximum number of morphemes that a word contains is usually a small constant for many languages; in the Arabic Penn Treebank, the longest word contains 14 characters, but the maximum number of morphemes in a word is only 5. | neutral |
train_96812 | Finally, we note that several of the features (the third-and eighth-ranked reward and twelfthranked penalty) shape the translation of shuo 'said', preferring translations with an overt complementizer that and without a comma. | this seems to be because the is often part of a fixed phrase, such as the White House, and therefore comes naturally as part of larger phrasal rules. | neutral |
train_96813 | Coarse-to-fine parse + rerank employed both of these approximations. | each n-ary rule consists of a root symbol, a sequence of lexical items and non-terminals on the source-side, and a fragment of a syntax tree on the target side. | neutral |
train_96814 | The Syntax-Augmented MT model of Zollmann and Venugopal (2006), for instance, produces a very large nonterminal set using "slash" (NP/NN → the great) and "plus" labels (NP+VB → she went) to assign syntactically motivated labels for rules whose target words do not correspond to constituents in phrase structure parse trees. | the true score of each translation is "fragmented" across many derivations, so that each translation's most probable derivation is the only one that matters. | neutral |
train_96815 | Our implementation of the deterministic dependency parser using maximum entropy models as the underlying classifiers achieves 87.8% labeled attachment score and 88.8% unlabeled attachment score on standard Penn Treebank evaluation. | in the rest of the paper, we first introduce our dependency parser based reordering approach based on the analysis of the key issues when translating SVO languages to SOV languages. | neutral |
train_96816 | The features are classified with special considerations of phrase lengths. | we enumerate all reordering examples, rather than only extract the smallest straight and largest inverted examples. | neutral |
train_96817 | We now present our method to automatically discover high quality wish templates using the WISH corpus. | a template receives a high score if it is "used" by many frequent wishes but does not match many frequent content-only wishes. | neutral |
train_96818 | 8 Let freq(x j ; d) denote the number of occurrences of the jth word in the vocabulary in document d. where N is the number of documents in the training set. | das and Chen (2001) and Antweiler and Frank (2004) ask whether messages posted on message boards can help explain stock performance, while Li (2005) measures the association between frequency of words associated with risk and subsequent stock returns. | neutral |
train_96819 | We consider a text regression problem: given a piece of text, predict a R-valued quantity associated with that text. | the word otc refers to "over-the-counter" trading, a high-risk market. | neutral |
train_96820 | It might initially seem that transliteration is an easy task, requiring only finding a phonetic mapping between character sets. | for Russian, we compare to the model presented in (Klementiev and Roth, 2006b), a weakly supervised algorithm that uses both phonetic information and temporal information. | neutral |
train_96821 | report 88.53% word accuracy for their SbA technique using leave-one-out testing on the entire NETtalk set (20K words). | the SVM sometimes overgeneralizes, as in the last example in Table 4. | neutral |
train_96822 | As Table 1 shows, across all grammars and conditions after 2,000 iterations incremental initialization produces samples with much better word segmentation token f-score than does batch initialization, with the largest improvement on the unigram adaptor grammar. | pitman-Yor processes can control the strength of this effect somewhat by moving mass from existing tables to the base distribution. | neutral |
train_96823 | We identified and corrected the following sources of inconsistencies: Periods and abbreviations. | this paper is based on work funded in part by the Defense Advanced Research Projects Agency through IBM. | neutral |
train_96824 | 2 m, giving us O(2 m (m!) | 2 mn( n 2 m ) m ), but the actual runtime is reduced by several orders of magnitude. | neutral |
train_96825 | But the CCG account is a competence model as well as a performance model, in that it seeks to unify category representations used in processing with learned generalizations about argument structure; whereas the model described in this paper is exclusively a performance model, allowing generalizations about lexical argument structures to be learned in some other representation, then combined with probabilistic information about parsing strategies to yield a set of derived incomplete constituents. | the remainder of this paper is organized as follows: Section 2 describes related approaches to parsing with stack bounds; Section 3 describes an existing bounded-stack parsing framework using a rightcorner transform defined over individual trees; Section 4 describes a redefinition of this transform to ap-ply to entire probabilistic grammars, cast as infinite sets of generable trees; and Section 5 describes an evaluation of this transform on the Wall Street Journal corpus of the Penn treebank showing improved results for a transformed bounded-stack version of a probabilistic grammar over the original unbounded grammar. | neutral |
train_96826 | The language models can be integrated out analytically (Section 3.1). | iterative updates of this form are applied until the change in the lower bound is less than 10 −3 . | neutral |
train_96827 | The fraction represents the posterior estimate of the language models: standard Dirichlet-multinomial conjugacy gives a sum of counts plus a Dirichlet prior (equation 3). | we note two orthogonal but related approaches to extracting nonlinear discourse structures from text. | neutral |
train_96828 | Previous sections have treated the content of a document set as a single (perhaps learned) unigram distribution. | in order to ensure tight lexical cohesion amongst the specific topics, we assume that each sentence draws a single specific topic Z S used for every specific content word in that sentence. | neutral |
train_96829 | Finally, in section 5 we sum up and point to future directions. | we use a trigram language model that is based on the puzzle domain transcriptions. | neutral |
train_96830 | To determine the search radius, we need a working definition of "local listing". | we wanted to test using queries for which we know there is a matching listing in the city/state provided by the caller. | neutral |
train_96831 | However, every entry in the training pronunciation dictionary is a fully diacritized word mapped to a set of possible contextdependent pronunciations. | we use this system as our baseline for comparison. | neutral |
train_96832 | Table 2 presents the comparison of BASEWR with the XWR system. | the speech signal is mapped to a more accurate representation of the training transcript, which we hypothesize will lead to a better estimation of the acoustic models. | neutral |
train_96833 | This effectively builds a skeleton of the desired lattice and delays the creation of the final word lattice until a single replacement operation is carried out in the top cell (S, 1, J). | each still represents the entire search space of all translation hypotheses covering the span. | neutral |
train_96834 | This suggests that our construct-driven approach yields pronunciation features that are empirically comparable or even better than the amscore. | endeavors into automated scoring for spontaneous speech have been sparse given the challenge of both recognizing and assessing spontaneous speech. | neutral |
train_96835 | The amount of data for language modeling is orders of magnitude less than that of the acoustic data in continuous space. | we also realize that different problems such as segmentation (e.g. | neutral |
train_96836 | Then, given a hypothesized sequence of words the decoder extracts the corresponding feature vectors. | we were able to estimate the GMLM probabilities only for words that have at least 50 or more examples. | neutral |
train_96837 | One obvious choice is to use Maximum Likelihood (ML) criterion. | rapid adaptation for acoustic modeling, using such methods as Maximum Likelihood Linear Regression (MLLR) (Legetter & Woodland, 1995), is possible using very small amount of acoustic data, thanks to the inherent structure of acoustic models that allow large degrees of parameter tying across different words (several thousand context dependent states are shared by all the words in the dictionary). | neutral |
train_96838 | We apply this idea to motivate two language models: a novel class-based language model and regularized minimum discrimination information (MDI) models. | one can improve exponential language models by adding features (or a prior model) that shrink parameter values while maintaining training performance, and from this observation we develop Heuristics 1 and 2. | neutral |
train_96839 | We train our model using empirical Bayesian estimation. | users are ordered by their posterior probabilities. | neutral |
train_96840 | We take F and O as two random variables, and the question of constructing or clustering the object groups can be defined as finding compressed representation of each variable that reserves the information about another variable as high as possible. | we consider this is due partly to the sparseness of the cor-pus, by enlarging the scale of the corpus or using the search engine (e.g. | neutral |
train_96841 | More and more people are willing to record their feelings (blog), give voice to public affairs (news review), express their likes and dislikes on products (product review), and so on. | to existing explicit adjacency approaches, the proposed approach detects the sentiment association between F and O based on review feature categories and opinion word groups gained from the review corpus. | neutral |
train_96842 | Typically we use relative-frequency estimation of Pr [c] and Pr[f i | c] for c ∈ {C + , C − }. | learning from noisy examples has been studied for a long time in the learning theory community (Angluin and laird, 1988). | neutral |
train_96843 | There are several methods to extract useful information from very large corpora. | most predicate argument relations concerning nouns with copula were easily recognized from syntactic preference, and thus the low coverage would not quite affect the performance of discourse analysis. | neutral |
train_96844 | This far surpasses the ML-PCFG (F1 of 70.7%), and is similar to Zuidema's (2007) DOP result of 83.8%. | the grammar sizes are not strictly comparable, as the Berkeley binarised grammars prohibit non-binary rules, and are therefore forced to decompose each of these rules into many child rules. | neutral |
train_96845 | These rules mostly describe cases when the S category is used for a full sentence, which most often include punctuation such as the full stop and quotation marks. | tree substitution grammars (tSGs) are a compelling alternative to context-free grammars for modelling syntax. | neutral |
train_96846 | This grammar differs from the state split grammars in that it factors into two separate projections, a dependency projection and a PCFG. | ing coarser projections falls off for HA * , the work saved with CTF increases with the addition of highly coarse grammars. | neutral |
train_96847 | In standard CTF, we exhaustively parse in each projection level, but skip edges whose projections in the previous level had sufficiently low scores. | since the weights of the rules in the smaller grammars are the minimum of a large set of rules in the target grammar, these grammars have costs that are so cheap that all edges in those grammars will be processed long before much progress is made in the refined, more expensive levels. | neutral |
train_96848 | These predicates can be used recursively at every level of the tree to specify the relation between the most important segments. | we used VerbNet (Kipper et. | neutral |
train_96849 | (2008)], and distributional methods [e.g., Bergsma et al. | at the same time, we want to avoid assigning similar objects to different classes. | neutral |
train_96850 | Second, we compare our cut-based approach with the five aforementioned approaches to anaphoricity determination (namely, Ng and Cardie (2002a), Ng (2004), Luo (2007), Denis and Baldridge (2007), and Finkel and Manning (2008)) in terms of their effectiveness in improving a learning-based coreference system. | luo's algorithm attempts to find the most probable coreference partition of a given set of mentions. | neutral |
train_96851 | To some extent this may be due to the fact that Trimmer uses smaller (trimmed) fragments of source sentences in its summaries. | one or two summarizers still tended to do well. | neutral |
train_96852 | The Eisner algorithm, originally designed for generative parsing, decomposes the probability of a dependency parse into the probabilities of each attachment of a dependent to its parent, and the probabilities of each parent stopping taking dependents. | the model parameters, θ , are normally distributed, with mean µ (typically zero) and variance σ 2 . | neutral |
train_96853 | Arguably MUC-6 and MUC-7 should not count as separate domains, but because they were annotated separately, for different shared tasks, we chose to treat them as such, and feel that our experimental results justify the distinction. | when a domain lacks evidence for a parameter the opposite occurs, and the prior (whose value is determined by evidence in the other domains) will have a greater effect on the parameter value. | neutral |
train_96854 | For large datasets, these s i s might not even fit in memory, and resorting to physical disk would be very slow. | intuitively, updates become more reliable with larger m, so we can afford to trust them more and incorporate them more aggressively. | neutral |
train_96855 | We get a similarly striking result for document classification, but the results for word segmentation and word alignment are more modest. | sEM can be seen as stochastic gradient in the space of sufficient statistics. | neutral |
train_96856 | In order to leverage the sentence information, we adjust a word's weight by the salience scores of the sentences containing that word. | we also find that this assumption also holds using statistics obtained from the meeting corpus used in this study. | neutral |
train_96857 | (Matsuo and Ishizuka, 2004) proposed a co-occurrence distribution based method using a clustering strategy for extracting keywords for a single document without relying on a large corpus, and reported promising results. | the graph method does not perform as well as the TFIDF approach. | neutral |
train_96858 | For instance, Cassell et al. | when the system claims the floor, the state can be SY ST EM , BOT H S , or BOT H U ). | neutral |
train_96859 | A more principled approach to setting the costs would be to estimate from perceptual experiments or user studies what the impact of remaining in gap or overlap is compared to that of a cut-in or false interruption. | kronild (2006) proposes a much more complex model, based on Harel statecharts, which are an extension of finite-state machines for modeling and visualizing abstract control (Harel, 1987). | neutral |
train_96860 | Finally, we plan to investigate more principled approaches, such as Partially Observable Markov Decision Processes or Dynamic Bayesian Networks, to model the different sources of uncertainty (detection errors and inherent ambiguity) and track the state distribution over time. | in the vast majority of cases, the time after which the user resumes speaking is significantly longer than the time the system takes to endpoint. | neutral |
train_96861 | Both genders convey friendliness by laughing more, and using collaborative completions. | for these features we drew mainly on the LIWC lexicons of Pennebaker et al. | neutral |
train_96862 | This simple heuristic was errorful, but did tend to find completions beginning with and or or (1 below) and wh-questions followed by an NP or PP phrase that is grammatically coherent with the end of the question (2 and 3): (1) FEMALE: The driving range. | the feature F0 MIN for a conversation side was computed by taking the F0 min of each turn in that conversation side (not counting zero values of F0), and then averaging these values over all turns in the side. | neutral |
train_96863 | In such a way, there are only a linear number of open, case 3 cells, hence the parsing has quadratic worst-case complexity. | the quadratic bound does not include any potential reduced work in the remaining open cells. | neutral |
train_96864 | Moreover, the structure of the trees means that the parser is also building up structure from left to right. | this methodology is not perfect, since it fails to account for the ease of recognition of very short sentences (which are common in a speech corpus like Switchboard), and thus slightly underweights performance on longer sentences. | neutral |
train_96865 | The work presented here uses the same motivations as those cited above (to be described in more detail below), in that it attempts to model the syntactic structure relating unfinished erroneous con-stituents to the repair of those constituents. | the final modification to examine acts effectively as another control to the previous two annotation schemes. | neutral |
train_96866 | For testing, trees in section 4, subsections 0 and 1, were used. | these syntactic models hold promise for integration into systems for processing of streaming speech. | neutral |
train_96867 | For all three substring lengths, the model predicts difficulty to be greater in the ambiguous conditions at the critical words (tossed/thrown a frisbee). | 7, i is identified with the actual string position within the sentence. | neutral |
train_96868 | While Tabor and Hutchins (2004) work out these questions in detail for the types of sentences they model, it is unclear how the model could be scaled up to make predictions for arbitrary types of sentences. | using probabilistic models trained on large-scale corpora, effects such as global and incremental disambiguation preferences have been shown to be a result of the rational use of syntactic probabilities (Jurafsky, 1996;Hale, 2001;Narayanan and Jurafsky, 2001;Levy, 2008b;Levy et al., 2009). | neutral |
train_96869 | As cohesion concerns only movement in the source, we can completely ignore the language model context, making state effectively an (f , HC ) tuple. | we initialize the interruption count with zero. | neutral |
train_96870 | Our algorithm has clear applications in diverse tasks such as discriminative training, system combination and multi-source translation. | our first key idea is to view the oracle extraction as a bottom-up model scoring process on the hypergraph. | neutral |
train_96871 | The hybrid two-pass approach can also be compared with serial combination architectures for hybrid MT (e.g., Ueffing et al. | a Chinese-English experiment was conducted on the two-pass hybrid model. | neutral |
train_96872 | We propose a variation of simplex-downhill algorithm specifically customized for optimizing parameters in statistical machine translation (SMT) decoder for better end-user automatic evaluation metric scores for translations, such as versions of BLEU, TER and mixtures of them. | the simplex-downhill algorithm looks for a lower point by trying the reflection (line 6), expansion (line 10) and contraction (line 17) points in the order showed in the algorithm, which turned out to be very efficient. | neutral |
train_96873 | The motivations for this approach are: the geometric mean is a way to approximate a boolean AND operation between the vectors, while at the same time keeping track of the magnitude of the frequencies. | the data set thus collected is a ranked list of suggestions for each query 1 , and can be used to evaluate any other suggestion-ranking system. | neutral |
train_96874 | For each word w i collect all words that appear close to w i in the web corpus (i.e., a bag-fowords models). | even if they are popular queries, they may not appear as such in well-formed text found in web documents. | neutral |
train_96875 | About six customers could relate more with user reviews as they felt expert reviews were more like a 'sales pitch'. | the first example is a fact about the camera. | neutral |
train_96876 | Our evaluation demonstrates a marginally significant positive effect of a feature space that includes these and other syntactic features over the purely unigrambased feature space. | the Part-of-Speech bigram features and the not-in-scope features achieve a marginally significant improvement over the unigrams-only baseline. | neutral |
train_96877 | 400 randomly selected image captions were manually annotated by a single annotator with their Image Markers and Image Marker Referents and used for testing and for cross-validation respectively in the two methods described below. | creating a dataset of image regions manually annotated and delineated by domain experts, is a costly enterprise. | neutral |
train_96878 | The transition probabilities show that nearly 60% of the time the dialogue transitions from State 3 to State 0; this may indicate that after establishing what the student does or does not know in State 3, the tutoring switches to a less collaborative "teaching" mode represented by State 0. | as noted in (Midgley et al., 2006), in order to establish that two dialogue acts are truly related as an adjacency pair, it is important to determine whether the presence of the first member of the pair is associated with a significantly higher probability of the second member occurring. | neutral |
train_96879 | In this way, we are allowed to represent an utterance as a point in a high-dimensional space where traditional distance metrics and clustering techniques can be naturally applied. | an interesting question is: Can we do semi-supervised speaker clustering? | neutral |
train_96880 | This task was quite simple, with glosses amenable to Web approaches, and is promising for automatically extending the coverage of a Malay lexicon. | bond and Paik (2000)) or both (e.g. | neutral |
train_96881 | The baseline algorithm has been found to be very useful in automatic speech recognition of agglutinative languages (Kurimo et al., 2006). | as the recall is not very high, the segmentation is more conservative than the linguistic reference. | neutral |
train_96882 | Secondly, we take the segmentation generated by Sakhr Software in Egypt using their Arabic Morphological Tagger, as an alternative segmentation into subword units. | it often oversegments morphemes that are rare or not seen at all in the training data. | neutral |
train_96883 | For RC, we also followed the same approach but, in order to cope with data sparse-ness, we also attempted a different RC strategy by merging data related to different syntactic predicates within the same frame. | the verb to purchase for the above example; (ii) Frame Disambiguation, where the correct frame for every target word (which may be ambiguous) is determined, e.g. | neutral |
train_96884 | Only for the best configuration in Table 1 (PK+TK, results in bold) the amount of training data for the BD model was increased from 2% to 90%, resulting in a popular splitting for this task (Erk and Pado, 2006). | the general semantic parsing work-flow includes 4 main steps: (i) Target Word Detection, where the semantically relevant words bringing predicative information (the frame targets) are detected, e.g. | neutral |
train_96885 | We also evaluated the dialog system performance with the agenda graphs which are manually (HC-AG) or automatically designed (AC-AG). | using sequences of dialog examples obtained with the dialog corpus, relative frequencies of all outgoing edges are calculated to weight directed edges: denotes the number of dialog examples having directed edge from v i to v j . | neutral |
train_96886 | G is composed of nodes (v) which correspond to possible intermediate steps in the process of completing the specified task, and edges (e) which connect nodes. | we can improve the clustering performance by using a distance metric learning algorithm to consider the correlation between features. | neutral |
train_96887 | In addition, the discourse structure is essential to determine whether the current utterance in the dialog is part of the current subtask or starts a new task. | most pairs of utterances share no common feature, and their distance is close to 1.0. | neutral |
train_96888 | Although repetitions cover a large percentage of the data, it is believed that inconsistencies in the user interaction (the right listing is displayed but not confirmed by the user) prevented further improvement. | personalization is carried out from three different angles: short-term, long-term and Web-based, and a large variety of features are proposed for use in a log-linear classification framework. | neutral |
train_96889 | It is also not surprising that the set of all features (D) yields higher accuracies than sets (A), (B), and (C). | our difference from the naive baseline was 18.54% where Liscombe et al. | neutral |
train_96890 | The difference in performance compared to training on the nonaugmented data is not statistically significant. | we reason, we can create an artificial data set of NSUs by extracting phrasal projections from an annotated treebank. | neutral |
train_96891 | Selecting a suitable combination of LM and LUM is necessary for robust speech understanding against various user utterances. | such a large amount of data may not be available in a real situation. | neutral |
train_96892 | Domain-dependent training data are particularly difficult to obtain. | there are many types of LMs such as finite-state grammars and N-grams, and many types of LUMs such as finite-state transducers (FSt), weighted finite-state transducers (WFSt), and keyphrase-extractors (extractor). | neutral |
train_96893 | Actively sampled data can have very different characteristics than passively sampled data. | it is like CurrentPA except we move Step 0 to be executed one time before the loop and then use that same PA value on each iteration of the AL loop. | neutral |
train_96894 | Learning curves are presented in Figures 4 and 5. | since we don't have labels for the entire corpus, we don't know this ratio. | neutral |
train_96895 | An edge contains a English non-terminal (NT) symbol (NP, VP, etc), border words for LM combination, pointers to child edges, and a score. | consider first a single edge set from [i,k], eg, all the NP edges. | neutral |
train_96896 | Next we discuss the error analysis itself and the conclusions we draw from it. | the features represent this transformation, and the feature weights are meant to indicate whether the transformation is associated with good compressions or not. | neutral |
train_96897 | In the case of speech-to-speech translation we would particularly like to be able to adapt the system based on its usage automatically without having to ship data back to the laboratory for retraining. | we would also like to thank Cepstral LLC and Mobile Technologies LLC, for support of some of the lower level software components. | neutral |
train_96898 | We have presented a distributional free statistical method to design a name perplexity system, such that each perplexity class maximizes the number of names for which the prior coreference belongs to the same interval. | the first and last names perplexities play different roles in establishing the prior probability of coreference. | neutral |
train_96899 | Answer Validation is a topic of significant interest within the Question Answering community. | the Answer Credibility filter was able to correctly increase the answer score of that document so that it was ranked as the most reliable source for the answer and chosen as the correct final result. | neutral |
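
For readers who want to work with rows like the ones above, below is a minimal sketch of loading and inspecting a dataset with this schema (id, sentence1, sentence2, label) via the Hugging Face `datasets` library. The dataset identifier `example-org/scientific-nli` is a hypothetical placeholder, not the actual Hub path for this corpus.

```python
from datasets import load_dataset

# Hypothetical Hub identifier; substitute the real path of this dataset.
dataset = load_dataset("example-org/scientific-nli", split="train")

# Each row carries an id, a sentence pair, and one of four labels.
for row in dataset.select(range(3)):
    print(row["id"], "->", row["label"])
    print("  sentence1:", row["sentence1"][:80])
    print("  sentence2:", row["sentence2"][:80])
```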