Dataset schema (four columns):
id: string, length 7–12
sentence1: string, length 6–1.27k
sentence2: string, length 6–926
label: categorical, 4 classes
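The snippet below is a minimal sketch of how rows with this schema could be loaded and inspected using the Hugging Face datasets library; the local file name train.jsonl is an assumption for illustration (the dataset's actual hub identifier is not part of this listing).

    # Minimal loading sketch; "train.jsonl" is a hypothetical local copy of
    # the rows below, one JSON object per line with the schema fields above.
    from datasets import load_dataset

    ds = load_dataset("json", data_files={"train": "train.jsonl"})["train"]

    # Inspect the first few contrasting pairs.
    for row in ds.select(range(3)):
        print(row["id"], "->", row["label"])
        print("  sentence1:", row["sentence1"][:80])
        print("  sentence2:", row["sentence2"][:80])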
train_6700
In FactBank, markers of possibility or probability, such as could or likely, uniquely determine the corresponding tag (Saurí 2008, page 233).
the Turkers allow the bias created by these lexical items to be swayed by other factors.
contrasting
train_6701
Normalization: Saddam accepted a border demarcation treaty. Annotations: PS+: 6, PR+: 2, CT+: 2. Another difference is that nouns appearing in a negative context were tagged as CT+ by the Turkers but as CT− or PR− in FactBank.
its equity in the net income of National Steel declined to $6.3 million from $10.9 million as a result of softer demand and lost orders following prolonged labor talks and a threatened strike.
contrasting
train_6702
We consider propositions to which no truth value can be attributed, given the speaker's mental state, as instances of semantic uncertainty.
uncertainty may also arise at the discourse level, when the speaker intentionally omits some information from the statement, making it vague, ambiguous, or misleading.
contrasting
train_6703
The latter assigns a label sequence to a sentence (a sequence of tokens) and thus naturally deals with the context of a particular word.
context information for a token is built into the feature space of the token classification approaches.
contrasting
train_6704
The following instance of may in FactBank was correctly marked as non-cue by the cue detector when trained on Wikipedia texts.
it was marked as a cue when trained on biological texts since in this case, there were insufficient training examples of may not being a cue: (16) "Well may we say 'God save the Queen,' for nothing will save the republic," outraged monarchist delegate David Mitchell said.
contrasting
train_6705
On the one hand, this is desirable because it reduces the possibility of confounding effects from prior knowledge.
it would be interesting for future work to extend Fruit Carts-style domains to more realistic object construction and placement tasks.
contrasting
train_6706
For example, the proposition conveying the value of a bar (Proposition 2) can be represented via the predicate "presents" (Predicate 15) and the proposition conveying the average of all bar values (Proposition 4) via the predicate "averages" (Predicate 10).
some propositions require more than one predicate.
contrasting
train_6707
Among the eight levels defined in that study, the levels of interest in our work are simple sentences, conjoined sentences, sentences with a relative clause modifying the object of the main verb, non-finite clauses in adjunct positions, and sentences with more than one level of embedding.
the definition of sentence types at each level is too general.
contrasting
train_6708
The first forest has a center-embedded relative clause and receives a score of 4 for the clause criteria: the product of the complexity of the relative clause (2) and its position (2).
the second forest has a right-branching relative clause and receives a score of 2 for the same criteria: the product of the complexity of the relative clause (2) and its position (1).
contrasting
train_6709
Note that in the first conjoined sentence, the time period mentioned in the first conjunct (from 1998 to 2006) subsumes the time period mentioned in the second conjunct (between 2000 and 2001).
in the second conjoined sentence, the time period mentioned in the first conjunct (between 1998 and 1999) precedes the time period mentioned in the second conjunct (between 2000 and 2001).
contrasting
train_6710
Bohnet, Klatt, and Wanner (2002) also has similar goals to the present research, as it is aimed at automatically classifying German adjectives.
the classification used is not purely semantic, polysemy is not taken into account, and the evidence and techniques used are more limited than the ones used here.
contrasting
train_6711
Thus, we had expected that polysemous adjectives form a homogeneous group of lexical items, characterized precisely by the fact that they exhibit properties from each class to a certain degree.
this expectation is not borne out in the results of the experiments.
contrasting
train_6712
Because the maximal depth of a lexical node in an elementary tree of G_1 is 2, we deduce that every tree generated by G_1 contains a lexical node with depth at most 2.
all lexical nodes in the tree t_1 have depth 3.
contrasting
train_6713
The conditional probability of the word(s) party or political bureau given the document history computed by 5-gram/PLSA or 5-gram/PLSA+4-SLM/PLSA is significantly boosted due to the appearance of semantically related words such as cpc and communist party in the previous sentences; this clearly shows that the composite language models (5-gram/PLSA and 5-gram/PLSA+4-SLM/PLSA) trigger long-span document-level discourse topics to influence word prediction.
there is no effect when using linear combination models (i.e., 5-gram+PLSA and 5-gram+4-SLM+PLSA).
contrasting
train_6714
MBOTs are desirable for natural language processing applications because they are closed under composition and can be used to represent sequences of transformations of the type performed by STSGs.
the string translations produced by MBOTs representing compositions of STSGs are strictly more powerful than the string translations produced by STSGs, which are equivalent to the translations produced by SCFGs.
contrasting
train_6715
It has been shown to parse exhaustively with very competitive or superior efficiency compared with other highly optimized CYK parsers (Dunlop, Bodenstab, and Roark 2011).
to the results in Roark and Hollingshead (2009), here we present results with both left- and right-binarized PCFGs induced using a Markov order-2 transform, as detailed in Section 3.3, and also present results for parsing Chinese.
contrasting
train_6716
The event interpretation is characterized as dynamic because it implies a change from 'not being adapted' to 'being adapted.'
in Example (8) the same nominalization is understood as a result because it denotes a specific object that is the outcome of the action of adapting a creative work into a film.
contrasting
train_6717
Regarding this denotative distinction, several linguistic criteria have been proposed in order to identify each of these denotations, mostly for English, although there are some proposals for Spanish (Picallo 1999), French, Greek, Russian, and Polish (Alexiadou 2001) (see Table 2 in Section 4.1).
authors differ on the argument-taking capacity of deverbal nominalizations: Some linguists maintain that only event deverbal nominalizations can take arguments (Zubizarreta 1987; Grimshaw 1990), whereas others consider that both event and result nominalizations can take arguments (Pustejovsky 1995; Picallo 1999; Alexiadou 2001).
contrasting
train_6718
Thus, as originally presented, Kay's algorithm is correct only for a very restricted set of unification grammars.
we draw from the larger term set T F that includes in addition the collection of path-terms that combine constants with sequences of attributes.
contrasting
train_6719
2008); these clearly help in expressing linguistic generalizations but can be formally treated in the obvious way by translating their occurrences into the more basic descriptions that they abbreviate.
the restriction operator (Kaplan and Wedekind 1993) requires more careful consideration.
contrasting
train_6720
We can thus establish the context-free result for a broader family of formalisms that share the property of being endowed with a context-free base.
it is not clear whether the string set corresponding to an underlying Head-driven Phrase Structure Grammar (HPSG) feature structure is context-free.
contrasting
train_6721
In Hungarian, the F-score for dative objects improves by over 33 percentage points, to 73.49%, when switching to the PRED-M model.
although all the scores improve for German, improvements are generally low when switching from the NO-M to the PRED-M model.
contrasting
train_6722
2012), a German Treebank, non-local dependencies are expressed via an annotation of topological fields (Höhle 1986) and special edge labels.
some other treebanks, among them NeGra and TIGER, give up the annotation backbone based on CFG and allow annotation with crossing branches (Skut et al.
contrasting
train_6723
In many cases the correct segmentation cannot be determined from local context alone, but can be disambiguated by more global syntactic constraints (in ראיתי שמים כחולים, the middle token is ambiguous between שמים ['sky'] and ש-מים ['that/rel water'], and the sequence can be interpreted as either 'I saw blue skies' or 'I saw that blue water.'
ראיתי שמים כחולים פרצו מן הבאר is unambiguous because the past verb פרצו requires the relativizer ש, allowing only the segmented ש-מים reading 'I saw that blue water broke from the well'.
contrasting
train_6724
The treebank estimates are based on a small vocabulary, the external lexicon estimates are based on a very large vocabulary, and a proper combination of the two emission probabilities is not trivial.
the tagging probabilities do not depend on the vocabulary size, allowing a very simple combination.
contrasting
train_6725
Looking just at nominals, we see in the gold corpus that 62% of the dependents in a modification relation have no inherent rationality (this is the case notably for adjectives), whereas this number for idafa is only 18%.
the dependent of an idafa is irrational 66% of the time, whereas for modification that number is only 16%.
contrasting
train_6726
This is a result of their high relevance and their high prediction accuracy.
CASE and STATE are the best performers in the gold condition (i.e., highly relevant) but not in the predicted condition (where CASE is actually the worst feature).
contrasting
train_6727
In contrast, English NE boundaries are easier to identify with explicit words and capitalization clues.
classification of English NE type is considered more challenging (Ji and Grishman 2006).
contrasting
train_6728
The reason for this is that News Agency is better aligned to "通信社", rather than be deleted, which would occur if "北韩中央" is chosen as the corresponding Chinese NE.
type consistency constraints can help correct the NE type that is less reliably identified.
contrasting
train_6729
Further inspection of those seven cases (which prefer the sub-string) reveals that three of them are with unmatched components (Category IV); therefore only four of them are, in fact, due to the problem of F7.
among the nine errors caused mainly by F10, only three of them chose the sub-strings of their corresponding ENE reference, and the remaining six errors selected the strings unrelated to the reference due to spurious anchors and an acronym.
contrasting
train_6730
We find that the best summary indeed conveys some of the main issues also reported in the human summaries.
the low-scoring summary presents a story line about one of the lawyers involved in the case, which is a peripheral topic described in only one of the input documents.
contrasting
train_6731
Such a structure of the abstract clusters can be explained by the fact that relationships, marriages, collaborations, and political systems are all cognitively mapped to the same source domain of MECHANISM.
to concrete concepts, such as tea, water, coffee, beer, drink, liquid, that are clustered together when they have similar meanings, abstract concepts tend to be clustered together if they are associated with the same source domain.
contrasting
train_6732
This is due to the fact that it does not reach beyond the concepts present in the seed set.
most metaphors tagged by the clustering method (87%) are non-synonymous to those in the seed set and some of them are novel.
contrasting
train_6733
Notably, z is unobserved, and we instead observe only the answer y, which is obtained by evaluating z on a world/database w. There are an exponential number of possible trees z, and usually dynamic programming can be used to efficiently search over trees.
in our learning setting (independent of the semantic formalism), we must enforce the global constraint that z produces y.
contrasting
train_6734
Observe that ascent and descent are both listed in the same category 694 (SLOPE), which makes sense here because both words are pertinent to the concept of slope.
two separate clues independently inform our system that the words are opposites of each other: (1) Category 49 has the word upwardness in the same paragraph as ascent, and (2) category 50 has the word downwardness in the same paragraph as descent.
contrasting
train_6735
This is to be expected, as most core arguments fall under the Arg0 and Arg1 classes, which can typically be disambiguated based on syntactic information (i.e., subject vs. object).
there are no syntactic hints for adjunct arguments, so the system learns to rely more on SP information in this case.
contrasting
train_6736
[...] In the experiments, we work only with the terms present in WordNet [...] The evaluation is based only on the WordNet relations.
the harvesting algorithm extracts much more.
contrasting
train_6737
This means that if the learned taxonomy is less structured we replicate the cut k_l − 1 for k_r − k_l times (where k_l is the maximum depth of the learned taxonomy), whereas if it is more structured we stop at cut k_r − 1.
to previous evaluation models, our aim is to reward (instead of penalize) more structured taxonomies provided they still match the gold standard one.
contrasting
train_6738
Therefore, there is no evidence of the precision of their method on new domains, where the category nodes are unknown.
if Hearst's patterns, which are at the basis of K&H's hypernymy harvesting algorithm, could show adequate precision, we would use them in combination with our definitional patterns.
contrasting
train_6739
Moreover, these resources do not yet tackle the dynamic evolution of language.
our WSI approach to search result clustering automatically discovers both lexicographic and encyclopedic senses of a query (including new ones), thus taking into account all of the mentioned issues.
contrasting
train_6740
In this approach, however, topics (estimated from a universal data set) are query-independent and thus their number needs to be established beforehand.
we aim to cluster snippets on the basis of a dynamic and finer-grained notion of sense.
contrasting
train_6741
On the one hand, it may terminate in a non-terminal configuration where no transition can be applied.
it may fail to terminate at all, because the system allows an infinite sequence of transitions.
contrasting
train_6742
In this case, the system is precise but the recall is low, given that many relations are not detected.
a low value for balance causes many low-precision constraints to have a positive weight, which increases recall but also decreases precision (see Figure 18).
contrasting
train_6743
The crucial difference is that when making this choice in a traditional historybased model, the model designer inevitably makes strong independence assumptions because features that are not included are deemed totally irrelevant.
ISBNs can avoid such a priori independence assumptions because information can be passed repeatedly from latent variables to latent variables along the edges of the graphical model.
contrasting
train_6744
It is also the only merging method we are aware of that addresses coordinated compounds.
it requires a factored decoder that can carry part-of-speech tags through the translation process.
contrasting
train_6745
The rate of novel compounds is relatively low in the data sets used in our experiments, mainly due to the fact that most experiments are carried out with in-domain test data.
the best methods tend to vary between the domain-restricted automotive corpora and Europarl.
contrasting
train_6746
The difference between a baseline using a POS-model but no compound splitting and the split-merge EPOS-models is small in all experiments and does not seem to increase with the size of the training corpus.
only the latter can produce novel compounds.
contrasting
train_6747
We compare our models with J&N'07 using the benchmark data set from SemEval 2007.
because we are not aware of any other work using the FrameNet 1.5 full text annotations, we report our results on that data set without comparison to any other system.
contrasting
train_6748
First, they identified locative, temporal, and directional prepositions using a dependency parser so as to retain them as valid LUs.
we pruned all types of prepositions because we found them to hurt our performance on the development set due to errors in syntactic parsing.
contrasting
train_6749
For automatically identified targets, the F1 score falls because the model fails to predict frames for unseen lemmas.
our model outperforms J&N'07 by 4 F1 points.
contrasting
train_6750
The precision and recall measures are significant as well (p < 0.05 and p < 0.01, respectively).
because targets identified by J&N'07 and frames classified by our frame identification model resulted in scores on par with the baseline, we note that the significant results follow due to better target identification.
contrasting
train_6751
Thus, the initial value on Figure 4 corresponds to a damping factor of 0.001.
a damping factor of 1 yields the same results as the STATIC method (cf.
contrasting
train_6752
The matrix also shows that all our three methods agree more than 80% of the time, with PPR and STATIC having a relatively smaller agreement.
related work using the same techniques over domain-specific words (Agirre, López de Lacalle, and Soroa 2009) shows that the results of our Personalized PageRank models depart significantly from MFS and STATIC.
contrasting
train_6753
Their technique selects feature subsets that minimize the distance between training text and unlabeled test text, but unlike our techniques, theirs cannot learn representations with features that do not appear in the original feature set.
we learn hidden features through statistical language models.
contrasting
train_6754
McClosky, Charniak, and Johnson (2010) use classifiers from multiple source domains and features that describe how much a target document diverges from each source domain to determine an optimal weighting of the source-domain classifiers for parsing the target text.
it is unclear if this "source-combination" technique works well on domains that are not mixtures of the various source domains.
contrasting
train_6755
As M, N → ∞, five out of every six edges from the complete lattice appear in the PL-MRF.
the PL-MRF makes the branches conditionally independent from one another, except through the trunk.
contrasting
train_6756
Our inference algorithm passes information from the branches inwards to the trunk, and then upward along the trunk, in time O(K^4 MN).
a fully connected lattice model has tree-width = min(M, N), making inference and learning intractable (Sutton, McCallum, and Rohanimanesh 2007), partly because of the difficulty in enumerating and summing over the exponentially-many configurations y for a given x.
contrasting
train_6757
The WEB1T-n-GRAM-R uses none of the local context to decide which features to provide, and the NB-R uses only the immediate left and right context, so both models ignore most of the context.
the remaining graphical models use Viterbi decoding to take into account all tokens in the surrounding sentence, which helps to explain their relative improvement over WEB1T-n-GRAM-R on polysemous words.
contrasting
train_6758
There is a logical underpinning, or at least a formal conceptual scheme, in which the semantics of sentences can be represented.
the mapping of sentences to their formal representations is itself not defined in a fully formal way, but requires external background knowledge, heuristics, or user feedback.
contrasting
train_6759
There are a number of other important features that could be considered, for example, support for existential quantification, equality, and types of supported speech acts (such as declarative, interrogative, directive, and indirect speech acts).
to achieve a simple classification into a sequence of five classes, these features will turn out to be sufficient and lead to a classification that seems consistent with the intuitive understanding of expressiveness.
contrasting
train_6760
Concerning the last three properties, the data show similar language counts for academic and industrial CNLs: 50 and 43 languages, respectively.
only ten CNLs were found that originated from a governmental environment.
contrasting
train_6761
I would like to conclude with the observation that the study of controlled languages is a very dynamic and highly interdisciplinary field, for the most part occupying small niches in the academic, industrial, and governmental worlds.
adding all these niches together gives us a large body of past and ongoing work.
contrasting
train_6762
Those dialects, in turn, differ quite a bit from each other.
due to MSA's prevalence in written form, almost all Arabic data sets have predominantly MSA content.
contrasting
train_6763
Harvesting data from such sources is a viable option for computational linguists to create large data sets to be used in statistical learning setups.
because all Arabic varieties use the same character set, and furthermore much of the vocabulary is shared among different varieties, it is not a trivial matter to distinguish and separate the dialects from each other.
contrasting
train_6764
For example, the typical Arabic speaker has little trouble understanding the Egyptian dialect, thanks in no small part to Egypt's history in movie-making and television show production, and their popularity across the Arab world.
the Moroccan dialect, especially in its spoken form, is quite difficult to understand by a Levantine speaker.
contrasting
train_6765
Bayesian methods have been applied to a number of segmentation tasks in natural language processing, including word segmentation, TSG learning, and learning machine translation rules, as a way of controlling the overfitting produced when Expectation Maximization would tend to prefer longer segments.
it is important to note that the Bayesian priors in most cases control the size and number of the clusters, but do not explicitly control the size of rules.
contrasting
train_6766
If the stack contains exactly one word, we terminate and output a tree, which was true also in the old system.
if the stack contains more than one word, we now go on parsing but are forbidden to make any Shift transitions.
contrasting
train_6767
The following parameters are the priors of the Dirichlet and beta distributions used by the models.
to the number of topics, which controls model complexity, the priors allow users of the models to specify their prior knowledge and beliefs about the data.
contrasting
train_6768
As for the topic distribution priors, symmetric priors are often used, with a default value of 0.01 for all the vector elements (yielding sparse word distributions, as indicated earlier), meaning that each topic is expected to assign high probabilities to only a few top words (Steyvers and Griffiths 2007).
to the topic distribution priors, Wallach, Mimno, and McCallum (2009) found in their experiments on LDA that using an asymmetric β^(D) was of no benefit.
contrasting
train_6769
For the same reason, using a symmetric β^(A) is a sensible choice for AT.
to LDA and AT, our DADT model distinguishes between document words and author words, and thus uses both β^(D) and β^(A) as priors.
contrasting
train_6770
Similarly to LDA, we then set θ^(A) to its expected value according to Equation (8).
AT is limited because all the documents by the same authors are generated in an identical manner (Section 3.3.1).
contrasting
train_6771
Of the LDA and AT variants presented in Sections 3.2.3 and 3.3.3, DADT might seem most similar to AT-FA.
there are several key differences between DADT and AT-FA.
contrasting
train_6772
This allows us to roughly gauge how much information is lost by converting texts from token representations to topic representations.
this approach ignores the probabilistic nature of the underlying topic model, and thus does not fully test the utility of the author representations yielded by the model; these are better tested by the next approach.
contrasting
train_6773
However, this approach ignores the probabilistic nature of the underlying topic model, and thus does not fully test the utility of the author representations yielded by the model; these are better tested by the next approach.
to dimensionality reduction methods, probabilistic methods utilize the underlying model's definitions directly to estimate the probability that a given author wrote a given test text.
contrasting
train_6774
In addition, we found that following an approach where π and θ^(D) are sampled separately for each author (similarly to AT-FA-P2) yields comparable performance to sampling only once by following the previously-unknown author assumption.
the former approach is too computationally expensive to run on data sets with many candidate authors.
contrasting
train_6775
IMDb1M can be seen as complementary to the IMDb62 data set, as IMDb62 allows us to test scenarios in which the user population is made up of prolific users, whereas IMDb1M contains a more varied sample of the population.
because we did not impose a minimum threshold on the number of reviews or posts, the IMDb1M population is very challenging as it includes many users with few texts (e.g., about 56% of the users in IMDb1M wrote only one text).
contrasting
train_6776
It is likely that performing an exhaustive grid search for the optimal parameter settings for each method would allow us to obtain somewhat improved results.
such a search would be computationally expensive, as the model needs to be retrained and tested for each fold, parameter set, and method.
contrasting
train_6777
Specifically, when we used uninformed uniform priors on the document/author word split (δ^(D) = δ^(A) = 1), and the same word-in-topic priors for both document and author words ( = 0), the obtained accuracy was comparable to AT-P's accuracy.
setting δ^(D) = 1.222 and δ^(A) = 4.889, which encodes our prior belief that on average 80% (with a standard deviation of 15%) of each document is composed of author words, significantly improved performance.
contrasting
train_6778
DADT-P's testing result is comparable to the third-best accuracy (out of 17) obtained in the PAN'11 competition (Argamon and Juola 2011) (competitors were ranked according to macro-averaged and micro-averaged precision, recall, and F1; the micro-averaged measures are all equivalent to the accuracy measure in this case, because each of the test texts is assigned to a single candidate author).
to the best of our knowledge, DADT-P obtained the best accuracy for a fully supervised method that uses only unigram features.
contrasting
train_6779
The reason why DADT-P's performance dropped when only stopwords were used may be that DADT was designed under the assumption that all the tokens in the corpus are retained.
we are encouraged by the fact that DADT-P's performance drop on Judgment was not very large when only stopwords were retained, as it indicates that DADT captures stylistic elements in the authors' texts.
contrasting
train_6780
As in our previous experiments, DADT-P consistently outperformed AT-P, which indicates that using disjoint sets of document and author topics yields author representations that are more suitable for authorship attribution than using only author topics.
to the previous experiments, Token SVM outperformed DADT-P in one case: the IMDb62 data set.
contrasting
train_6781
DADT's improved performance in comparison with methods based on LDA and AT comes at a price of more parameters to tune.
the most important parameter is the number of topics; we found that the prior values that yielded good results on the small data sets also obtained good performance on the large data sets without further tuning.
contrasting
train_6782
One approach is to use tools that obfuscate author identity, as developed by, for example, Kacmarcik and Gamon (2006) and Brennan and Greenstadt (2009).
as this may lead to an "arms race" between such tools and authorship analysis methods, perhaps the best approach is to forgo anonymity completely, as advocated by some researchers and editors (Groves 2010).
contrasting
train_6783
Recently, rating prediction algorithms that are based on matrix factorization have become increasingly popular, due to their high accuracy and scalability (Koren, Bell, and Volinsky 2009).
such algorithms often deliver inaccurate rating predictions for users with few ratings (this is known as the new user problem).
contrasting
train_6784
For example, event mentions are typically predications that require more complex lexico-semantic processing, and furthermore, the capability of extracting features that characterize them has been available only since semantic parsers based on PropBank (Palmer, Gildea, and Kingsbury 2005) and FrameNet (Baker, Fillmore, and Lowe 1998) corpora have been developed.
entity coreference resolution has been intensively studied and many successful techniques for identifying mention clusters have been developed (Cardie and Wagstaff 1999; Haghighi and Klein 2009; Stoyanov et al.
contrasting
train_6785
The only requirement for them to infer coreference clusters of event mentions is to have the observable objects (i.e., the event mentions) identified in the order they occur in the documents as well as to have all the linguistic features associated with these objects extracted.
in order to see how well these models perform, we need to compare their results with manually annotated clusters of event mentions.
contrasting
train_6786
For the model depicted in Figure 3(d), for instance, the posterior probability is given by: In this model, P(FR_{i,j} | HL_{i,j}, θ) is a global distribution parameterized by θ, and FT is a feature type variable from the set X = {HL, POS, FR}.
one limitation of this particular model is that it requires domain knowledge in order to establish the dependencies between the feature type variables.
contrasting
train_6787
In this way, the new nonparametric extension will have the benefits of capturing the uncertainty regarding the number of mixture components that are characterized by a potentially infinite number of feature values.
to make this hybrid work, we have to devise a mechanism in which only a finite set of relevant feature values will be selected to explain each observation (i.e., event mention) in the HDP inference process.
contrasting
train_6788
The results confirm the fact that the sampling scheme of the feature values used in the iFHMM-iHMM framework does not guarantee the selection of the most salient features.
the constant trend in the performance values shown in Figure 9 proves that iFHMM-iHMM is a robust generative model for handling noisy and redundant features.
contrasting
train_6789
Moreover, we believe that the HDP extension can be used for solving clustering problems that involve a small number of feature types and a priori known facts about the salience of these feature types.
when no such prior information is known with respect to the number of feature types, or the total number of features is relatively large, we believe that the iFHMM-iHMM model is a more suitable choice.
contrasting
train_6790
One way to address this is to also extract rules that use part-of-speech (POS) tags in place of words.
since words can have multiple POS tags, we would then need to infer POS tags for the words in order to determine which rule is applicable.
contrasting
train_6791
For example, for a three-word source sentence, there cannot exist a directed path from a node with coverage vector ⟨0, 1, 0⟩ to a node with coverage vector ⟨0, 0, 1⟩.
there may or may not be a path from a node with vector ⟨0, 1, 0⟩ to one with ⟨0, 1, 1⟩.
contrasting
train_6792
For example, we can paraphrase a high percentage of by a large number of in the sentence a form of asbestos has caused a high percentage of cancer deaths.
text paraphrasing may have more effect on the grammaticality of a sentence than lexical substitution.
contrasting
train_6793
Note that the receiver does not need to know the original cover text.
not all the linguistic transformations can meet this requirement.
contrasting
train_6794
In the SemEval-2007 lexical substitution task participants were asked to discover possible replacements of a target word so the evaluation metrics provided are designed to give credit for each correct guess and do not take the ordering of the guesses into account.
in the ranking task a system is already given a fixed pool of substitutes and is asked to recover the order of the list.
contrasting
train_6795
That is, from a security perspective, rejecting an acceptable substitute does not damage the quality of stego text.
it will lower the payload capacity so more stego text transmission is needed in order to send the secret message, which may raise a security concern.
contrasting
train_6796
Note that it is acceptable to have conjoin and join encoded by the same codeword 00 because both of them have access to all the two-bit codewords.
both bind and draw have only one neighbor, which means that only two codewords can be accommodated by these nodes, namely, bits 0 and 1.
contrasting
train_6797
The combined classifier resulted in a much higher accuracy than any of the two methods alone.
the use of BN is not central to this work, and its structure does not reflect any insights or intuitions on the structure of the problem domain or on interdependencies among features.
contrasting
train_6798
The main advantage of the rule-based NER systems is that they are based on a core of solid linguistic knowledge (Shaalan 2010).
any maintenance or updates required for these systems is labor-intensive and time-consuming; the problem is compounded if the linguists with the required knowledge and background are not available.
contrasting
train_6799
Arabic is at a disadvantage in this regard because the script does not orthographically mark proper names in this way.
many researchers (e.g., Benajiba, Diab, and Rosso 2008a; Mohit et al.
contrasting
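To close the listing, here is a hedged sanity-check sketch (standard-library Python only) for validating rows against the schema at the top; the file name rows.jsonl is an assumption, and the 1.27k bound is read as 1,270 characters.

    # Sanity-check sketch for rows shaped like this listing.
    # Assumption: one JSON object per line in "rows.jsonl"; bounds mirror the
    # schema header (id 7-12 chars, sentence1 6-1270, sentence2 6-926,
    # label drawn from at most 4 classes).
    import json

    seen_labels = set()
    with open("rows.jsonl", encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            assert 7 <= len(row["id"]) <= 12
            assert 6 <= len(row["sentence1"]) <= 1270
            assert 6 <= len(row["sentence2"]) <= 926
            seen_labels.add(row["label"])

    assert len(seen_labels) <= 4  # label is a 4-class categorical field
    print("labels observed:", sorted(seen_labels))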