Dataset schema:
  id          string (length 7–12)
  sentence1   string (length 6–1.27k)
  sentence2   string (length 6–926)
  label       string class (4 values)
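The records below can also be consumed programmatically. The following is a minimal sketch of how a dump with this schema could be loaded and inspected, assuming it is published as a Hugging Face-style dataset; the identifier "DATASET_NAME" is a placeholder, not a confirmed path, and the column names are taken from the schema above.

from datasets import load_dataset

# Minimal sketch: load a sentence-pair dataset with the schema above.
# "DATASET_NAME" is a placeholder identifier, not a confirmed dataset path.
dataset = load_dataset("DATASET_NAME", split="train")

# Each record carries an id, two sentences, and a discourse-relation label.
for record in dataset.select(range(3)):
    print(record["id"], record["label"])
    print("  sentence1:", record["sentence1"])
    print("  sentence2:", record["sentence2"])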
train_7100
(2013) also use a specific scheme of discourse relations.
rather than relying on gold discourse annotations, they jointly predict sentiment, aspect, and discourse relations and show that the model improves accuracy of both aspect and sentiment polarity at the sub-sentential level.
contrasting
train_7101
This might concern framing relations like BACKGROUND and CIRCUMSTANCE but also some temporal relations such as SEQUENCE or SYNCHRONOUS.
relations can be grouped according to their similar effects on both subjectivity and polarity analysis.
contrasting
train_7102
For example, earthquake and election are considered bursty.
non-bursty words are those that appear more consistently throughout documents discussing different topics ("use" and "they", for example).
contrasting
train_7103
(2008) present results on three language pairs (English-Spanish, English-Chinese, and English-Arabic).
evaluation is only done over nouns, which is a bursty word class, and lexicons are limited to high-frequency words.
contrasting
train_7104
Another option, taken by Goldberg and Nivre, is to follow the one-best action predicted by the classifier.
initial experiments showed that the one-best approach did not work well.
contrasting
train_7105
Most transliteration systems are trained on a list of transliteration pairs which consist of a word and its transliteration.
manually labeled transliteration pairs are only available for a few language pairs.
contrasting
train_7106
The only model parameter that cannot be estimated on the labeled training data is the interpolation parameter λ.
the test data consists of both transliterations and non-transliterations.
contrasting
train_7107
One possible solution in the current set-up could be to convert Chinese to Pinyin and then apply transliteration mining.
we did not try it within the scope of this work.
contrasting
train_7108
However, the system achieves more balanced precision and recall and a higher F-measure than the unsupervised system.
to the unsupervised and semi-supervised systems, the supervised transliteration mining system estimates the posterior probability of non-transliteration λ on the test data.
contrasting
train_7109
Qualitatively, certain patterns are evident; for example, there is high similarity between TIBs 8, 9, and 10 (metalinguistic or unrelated speech) and between TIBs 2 and 3 (different requests for confirmation) in Dementia.
the latter pair are not as similar among Controls, perhaps because of a different degree of reduction in TIB 2.
contrasting
train_7110
Our pilot work involved focusing only on TIBs in the CCC database, which involves free-form conversation.
without a task to complete, the POMDP consistently decided that the best way to avoid confusion is to simply say nothing at all; therefore, we only consider the DementiaBank data here, as described subsequently.
contrasting
train_7111
Instead, our task is to identify the central concept the question is testing (e.g., measuring speed), and to eliminate words that are part of an example or narrative (e.g., turtle, path) that are unlikely to contribute much utility (or, may introduce noise) to the QA process.
because at a high level focus words identify the information need of a question, which is what we aim to do as well, we continue to use the same terminology in this work.
contrasting
train_7112
We hypothesize that better algorithms could be implemented by switching to a learning-based approach.
the simple unsupervised algorithm proposed requires no data annotations, and it captures the crucial intuition that some words in the question contribute more towards the overall information need than others.
contrasting
train_7113
Once again, we observe that the performance of the combined model (CR + TAG + embeddings, line 4) is better than the performance of the CR model by itself (line 1) or CR + embeddings (line 3).
here this difference is not significant.
contrasting
train_7114
This coincides with pipeline (a) of Figure 11, that is, the induction of a LCFRS of arbitrary fanout.
if the recursive partitioning is transformed by Algorithm 4 or if the left-branching or right-branching recursive partitioning is used (as in Example 20 and as in pipelines (b) and (c) of Figure 11), then the fanout of the induced LCFRS decreases and its derivations are binarized.
contrasting
train_7115
This is because, by definition, no cycles occur in the given dependency structure.
if we replace nonterminals of the form J by any other choice of symbols, and combine rules induced by different hybrid trees from a corpus, then cycles may arise if the relationship between inherited and synthesized arguments is confused.
contrasting
train_7116
This makes the tree language less refined, and one may expect the scores therefore to be generally lower.
in many cases where strict labeling suffers from a higher proportion of parse failures (so that the parser must fall back on the default structure), it is child labeling that has the higher scores.
contrasting
train_7117
With the combination of POS tags and DEPRELs, the tree language can be most accurately described, and we see that this achieves some of the highest UAS and LAS in the case of child labeling.
the scores can be low in the presence of many parse failures, due to the fall-back on the default structure.
contrasting
train_7118
With left-branching and right-branching recursive partitionings, parsing tends to be faster than with the other recursive partitionings.
in many cases we do not observe the predicted asymptotic differences, which may be due to larger grammar sizes.
contrasting
train_7119
In other words, strictly speaking, +{ and story have different lexical meanings.
contextual information makes up for what is missing in the literal meaning of story (the context for Example (3) is a news story about a thief who left a note on the wall of a home he broke into to encourage the home owner to work harder), and we believe that stories is an appropriate translation for +{ in that context.
contrasting
train_7120
This observation is consistent with the findings of other researchers, in particular those of Lavie, Parlikar, and Ambati (2008) and Ambati and Lavie (2008).
when there is sufficient structure in the parse trees on both sides that are hierarchically aligned, Hiero-style translation rules that capture the translation divergences between Chinese and English can be extracted.
contrasting
train_7121
In each case, rejection of the null hypothesis implies dependence between the geographical and linguistic signals.
each test can incorrectly fail to reject the null hypothesis if its assumptions are violated, even if given an arbitrarily large amount of data.
contrasting
train_7122
For some methods, it is possible to calculate a p-value from the test statistic using a closed form estimate of the variance.
for consistency, we use a permutation approach to characterize the null distribution over the test statistic values.
contrasting
train_7123
After adjusting the p-values using the FDR procedure, a 500-mile cutoff results in three significant linguistic variables.
recall that the approach of selecting parameters by maximizing the number of positive test results tends to produce a large number of type I errors.
contrasting
train_7124
2013), classifying argument components into claims and premises (Mochales-Palau and Moens 2011;Rooney, Wang, and Browne 2012;Stab and Gurevych 2014b), and identifying argumentative relations (Mochales-Palau and Moens 2009;Peldszus 2014;Stab and Gurevych 2014b).
an approach that covers all subtasks is still missing.
contrasting
train_7125
Recently, Peldszus and Stede (2015) proposed an approach based on Minimum Spanning Trees, which jointly models argumentation structures.
it links all argument components in a single tree structure.
contrasting
train_7126
Recently, they translated the corpus to English, resulting in the first parallel corpus for computational argumentation.
the corpus does not include non-argumentative text units.
contrasting
train_7127
Like convergent arguments, a linked argument includes two premises.
neither of the two premises independently supports the claim.
contrasting
train_7128
Some people argue for and others against and there is still no agreement whether cloning technology should be permitted.
as far as I'm concerned, [cloning is an important technology for humankind] MajorClaim1 since [it would be very useful for developing novel cures] Claim1.
contrasting
train_7129
We also experimented with features capturing PDTB relations between the target and source component.
those were not effective for capturing argumentative relations.
contrasting
train_7130
Omitting probability and embedding features yields the best accuracy.
we select the best system by means of the macro F1 score, which is more appropriate for imbalanced data sets.
contrasting
train_7131
A false negative occurs when an annotator fails to put a unit where she should, namely, where the reference (if any) tells us there should be a unit, and a false positive is the opposite situation.
no reference exists in the case of agreement measures, and there is a symmetry between false positives and false negatives: If annotator 1 puts a unit where annotator 2 doesn't, it is a false positive if we consider annotator 2 as the reference, or a false negative if we consider annotator 1 as the reference.
contrasting
train_7132
This leads to an observed categorial agreement of 50% if we consider, as measures do with predefined units, that all units are of the same importance.
if we rely on unit lengths, the first unit counts four times as much as the second one (if we work at word level), and the observed agreement would artificially reach 80% (instead of 50%).
contrasting
train_7133
They are inherently and frequently present in unitizing: Because annotators have to put units by themselves on a continuum, it is part of the game that they do not put units where others do.
this question goes beyond the scope of unitizing, and the results of this section concern any categorization measure.
contrasting
train_7134
A possible method, which I will call atomization of the continuum, is to compare each atom of the continuum (for instance, at word level) from one annotator to the corresponding atom from another annotator.
this deeply changes the nature of the data (the contiguity of units), and has severe limitations, as demonstrated in Section 3.4.1 of Mathet, Widlöcher, and Métivier (2015).
contrasting
train_7135
Figure 12 shows that γ_cat and cu_α are about the same, decreasing quite regularly from 1 to 0, as expected.
γ goes from 1 to 0.35, because positions are still correct.
contrasting
train_7136
Spearman's rank correlation ranges between −1 and 1.
unlike Kendall's τ, here the sign does not matter, and high absolute values indicate better performance.
contrasting
train_7137
This tendency is observed across the data sets, and it is quantitatively verified in Section 5.2 (i.e., good translations tend to share the tree structure and labels with the reference translations).
translation 12(c) is much more ungrammatical.
contrasting
train_7138
The earliest work on using discourse in machine translation that we are aware of dates back to 2000: Marcu, Carlson, and Watanabe (2000) proposed rewriting discourse trees for MT.
this research direction was largely ignored by the research community as the idea was well ahead of its time: Note that it came even before the current standard phrase-based SMT model was envisaged (Koehn, Och, and Marcu 2003).
contrasting
train_7139
One way the system can consider the word used by the writer at evaluation time is by proposing a correction only when the confidence of the classifier is high enough.
in this partial solution, the source word is not used in training if the classifier is trained on native data.
contrasting
train_7140
Only for verb agreement mistakes, there is about 1 point difference in F1 performance, whereas for the other errors results are quite close.
adaptation using target data substantially outperforms adaptation using unrelated language data, by 1.1 to 2.7 F1 points for different errors.
contrasting
train_7141
The F-measure by itself does not indicate what the quality of the text is as a result of running the system; instead, this is done by looking at accuracy and comparing it with the learner baseline.
the accuracy measure is only a single point and, in order to get improvement, it pushes the system toward low recall.
contrasting
train_7142
Moreover, adding information about either the position or the dependency labels increases the explained variance for all models.
for the TEXTUAL and LM models the position of the word adds considerable amount of information.
contrasting
train_7143
For the VISUAL model it is less clear what to expect: On the one hand, because of their chain structure, RNNs are better at keeping track of short-distance rather than long-distance dependencies and thus we can expect tokens in positions closer to the end of the sentence to be more important.
in English the information structure of a single sentence is expressed via linear ordering: The TOPIC of a sentence appears sentence-initially, and the COMMENT follows.
contrasting
train_7144
of the models are indeed different, and that the features encoded by TEXTUAL are more associated with syntactic constructions than in the case of VISUAL.
when comparing LM with TEXTUAL, the difference between context types is much less pronounced, with distributions overlapping.
contrasting
train_7145
Using the IMAGINET model as our case study, our analyses of the hidden activation patterns show that the VISUAL model learns an abstract representation of the information structure of a single sentence in the language, and pays selective attention to lexical categories and grammatical functions that carry semantic information.
the language model TEXTUAL is sensitive to features of a more syntactic nature.
contrasting
train_7146
This includes tree-structured RNN models such as the Tree-LSTM introduced in Tai, Socher, and Manning (2015), or the CNN architecture of Kim (2014) for sentence classification.
the presented analysis and results regarding word positions can only be meaningful for RNNs as they compute their representations sequentially and are not limited by fixed window sizes.
contrasting
train_7147
The classical definition of ungraded lexical entailment is as follows: Given a concept word pair (X, Y), Y is a hypernym of X if and only if X is a type of Y, or equivalently every X is a Y.
graded lexical entailment defines the strength of the lexical entailment relation between the two concepts.
contrasting
train_7148
The usefulness of these evaluation sets is evident from their wide usage in the LE literature over recent years: They guided the development of semantic research focused on taxonomic relations.
none of the evaluation sets contains graded LE ratings.
contrasting
train_7149
Because the main focus of this work is not on the distinction between abstract and concrete concepts, we have not explicitly controlled for the balanced amount of concrete/abstract pairs in HyperLex.
because the source USF data set provides concreteness scores, we believe that HyperLex will also enable various additional analyses regarding this dimension in future work.
contrasting
train_7150
2015), which is able to better reason over pairs consisting of two unseen concepts during inference.
shwartz, Goldberg, and Dagan (2016) argue that a random split emulates a more typical "real-life" reasoning scenario, where inference involves an unseen concept pair (X, Y), in which X and/or Y have already been observed separately.
contrasting
train_7151
Positive correlation scores for all models reveal that pairs with high graded LE scores naturally imply some degree of semantic similarity (e.g., author / creator).
the scores with similarity-specialized models are much lower than the human performance in the graded LE task, which suggests that they cannot capture intricacies of the task accurately.
contrasting
train_7152
IAA scores on both POS subsets are very similar and reasonably high, implying that human raters did not find it more difficult to rate verb pairs.
we observe differences in performance over the two POS-based HyperLex subsets.
contrasting
train_7153
The impact of lexical memorization is illustrated by Table 16, using a sample of concept pairs containing "prototypical hypernyms" (Roller and Erk 2016) such as animal: The regression models assign high scores even to clear negatives such as (plant, animal).
the effect of lexical memorization also partially explains the improved performance of all regression models over UNSUPERVISED baselines for a random split, as many (X_t, animal) pairs are indeed assigned high scores in the test set.
contrasting
train_7154
In current binary evaluation protocols targeting ungraded LE detection and directionality, even simple methods modeling lexical generality are able to yield very accurate predictions.
our preliminary analysis in Section 7.2 demonstrates their fundamental limitations for graded lexical entailment.
contrasting
train_7155
they pick their grain of something with the others), meaning that they are benefiting from something as a side effect.
syntactic analysis can help us identify the parts of this MWE that allow for variation, here the determiner that can be changed into a possessive pronoun.
contrasting
train_7156
On the one hand, we find MWE identification before translation methods (the so-called static approaches) that concatenate MWEs as a preprocessing step or conversely split compositional closed compounds in Germanic languages to distinguish them from noncompositional compounds.
we find MWE identification during the translation process itself (so-called dynamic approaches).
contrasting
train_7157
Evaluating MWE processing extrinsically through the use cases of parsing and MT is an attractive alternative.
as Sections 4 and 5 will further specify, caution is needed when measuring the impact of MWE identification on, for example, parsing quality.
contrasting
train_7158
Our survey focuses on interactions of MWE processing with parsing and MT.
we cannot discuss these interactions without providing an overview of approaches in discovery and identification.
contrasting
train_7159
In other words, discovery helps extending and creating lexicons that are used in identification.
there are also some challenges in MWE discovery, the most relevant of which are discontiguity and variability.
contrasting
train_7160
Discontiguity is generally dealt with when parsing the input corpus prior to MWE discovery (Section 4.1).
syntactic analysis may introduce ambiguities because some MWEs do not allow arbitrary modification and will be wrongly merged with literal uses or accidental co-occurrence (to take turns does not mean that one deviates from a straight trajectory several times).
contrasting
train_7161
Word and MWE senses can be modeled using entries of semantic lexicons like WordNet synsets (McCarthy, Venkatapathy, and Joshi 2007).
most discovery methods use distributional models (or word embeddings) instead, where senses are represented as vectors of co-occurring context words (Baldwin et al.
contrasting
train_7162
On the one hand, evaluation sets are not always available and, when they are, results may be hard to generalize because of their small size.
large-scale discovery, although potentially useful, is hard to evaluate.
contrasting
train_7163
The organization of this chapter is excellent.
i feel that the inclusion of distinct-category SCFG decoding does not fit well into this chapter, as string decoding in string-to-tree SMT requires no knowledge of the source syntax.
contrasting
train_7164
The first aspect is the number of teams that do not publish their system's description, which amounts to approximately 9%, and could indeed be related to withdrawals.
exactly because the paper is missing, information on why a team withdrew is not available.
contrasting
train_7165
Ranking of systems forms a large part of the appeal of shared tasks.
rankings should not be overemphasized and are far from being the final goal of shared tasks.
contrasting
train_7166
The normalization for semirings of this type is O(|Q|^3) according to Mohri (2009).
given the restricted graphs that we define, this complete semiring behaves as a k-closed semiring (see our Equations (12) and (13)) because the characteristic matrix M is an upper triangular matrix, and, therefore, the time complexity is O(|δ|).
contrasting
train_7167
Given a continuous left-to-right HMM H, a similar algorithm for computing α_H(·, ·) can be defined.
first, it is necessary to compute the probability of all real sequences that can be generated in a loop.
contrasting
train_7168
The size of this chart is O(n·2^n) in the number of ETs, still exponential in the worst case.
as with the more general features, the ENUM algorithm, memoizing recursive calls with the chart, can take advantage of the hard constraints of the UG to lower the time complexity in practice.
contrasting
train_7169
Our polynomial-time SAT algorithm does not translate directly into a Viterbi algorithm that is polynomial in the size of the input UG.
we emphasize that for any NP-hard problem with hard constraints, the weighted generalization is also NP-hard.
contrasting
train_7170
A tree is a special kind of graph where the vertices are arranged in a hierarchical way, with the property that the set of vertices in any subtree have only one interconnection with the set of the remaining vertices.
for a general graph this is not possible, meaning that we cannot group vertices in a hierarchical structure and pretend that there are a small number of interconnections between vertices in any subtree and the remaining vertices.
contrasting
train_7171
This structure does not introduce high treewidth because we can put "and" and each c_i into a separate bag, forming a chain of bags of width 1.
when we use the string order as a constraint, we first introduce vertices c_1, c_2, …
contrasting
train_7172
Furthermore, all of these proposals still retain the stack and buffer architecture of the transition-based dependency parsing system they extend.
the proposal in this article introduces the novel idea of using a cache component in stack-based transition systems.
contrasting
train_7173
There are also major notational differences with respect to our proposal: Quernheim and Knight (2012) essentially view computations as top-down rewriting processes, and the rewriting relation is defined via the introduction of specialized DAGs, called incomplete DAGs.
in our definition of run in Sections 3.1 and 7.2, there is no commitment to a specific rewriting process, which makes the notation somewhat simpler.
contrasting
train_7174
Similar to our basic (non-extended) model, graph acceptors of this type recognize graph languages of bounded degree.
because of the overlapping of tiles in runs and the occurrence constraint, they are considerably more powerful than our DAG automata (and thus too powerful for our purposes) unless the tiles are required to have the radius 0 (i.e., they are single nodes).
contrasting
train_7175
Thus, in this respect our DAG automata are similar to those by Kamimura and Slutzki (1981), whose path languages are trivially regular.
if we restrict recognizable DAG languages to DAGs with only one root, then this no longer holds.
contrasting
train_7176
More precisely, if v_i (i = 1, 2) is the node such that e_i ∈ in(v_i), then D[e_1 ↔ e_2] has e_1 ∈ in(v_2) and e_2 ∈ in(v_1) but is otherwise identical to D. It is not difficult to see that the edge interchange operator we have just defined might introduce cycles, that is, D[e_1 ↔ e_2] may no longer be a DAG.
in what follows we will use this operator in a restricted way, such that the resulting graph is still a DAG.
contrasting
train_7177
We have thus shown that the path language of every recognizable DAG language is a regular string language.
the proof of this statement relies crucially on the fact that DAGs in [[M]] may have several roots: We considered the disconnected graph; the latter may be connected and may thus, in fact, be an element of [[M]], while containing the roots of both D_1 and D_2.
contrasting
train_7178
For this reason, in the following discussion we use as a term of comparison the quantity tw(D), the treewidth of D. When processing D′, the running time of Algorithm 2 depends on tw(LG(D′)), the treewidth of the line graph of D′, which in turn depends on the choice of the binarization of D. There are several ways in which we can binarize D, resulting in different values of tw(LG(D′)).
a bad choice of binarization for D may result in tw(LG(D′)) much larger than tw(D).
contrasting
train_7179
With this guideline, the structure of the book appears smoother from a neural network entry: It first lays the background of neural network methods, and then discusses the traits of natural language data, including challenges to address and sources of information that we can exploit, so that specialized neural network models introduced later are designed in ways that accommodate natural language data.
some fundamentals in natural language processing are not covered in the book, for example, linguistic theories and backgrounds of the natural language processing tasks, and proper preparation of corpus data.
contrasting
train_7180
As mentioned earlier, neural network practitioners may feel that the neural network content of the book is a bit light, and this part can be almost entirely skipped by these readers.
for people coming from more traditional branches of statistical learning, Chapter 5 is still well worth reading.
contrasting
train_7181
The relative unclarity of how to determine the arguments of discourse relations in an RST tree complicates efforts to capture semantically relevant information in these structures, and thus undermines a semantic argument for analyzing discourse in terms of constituent structures like RST tree structures.
although most computational approaches have eschewed a principled answer to this question, almost all proposals agree that the EDUs should play a part in the relational structure, as should the relations that link them.
contrasting
train_7182
Without a notion of head, two dependency trees will represent one RST tree consisting of one multinuclear relation.
assuming some convention to determine the head in an RST tree will result in a dependency structure that represents two RST c-trees; for instance, assuming that a tree consisting of one multinuclear relation has its head in the leftmost argument will mean that the RST trees [a_N, b] and [a_N, b_N] yield the same unlabeled dependency structure, a → b. Consequently, RST non-terminal constituents have been translated as dependencies with some information loss; see for example .
contrasting
train_7183
To go back to our case in Example (2), we see that the candidate tree has three errors on what the head of a span is, which correspond to the three UAS arrow errors.
if the heads of spans are all correct, then the spans will determine correct attachments.
contrasting
train_7184
On the other hand, if the heads of spans are all correct, then the spans will determine correct attachments.
to furnish an equivalent dependency-based measure to our δ_SH, we need also to consider the order in which dependents are attached to the head.
contrasting
train_7185
Our model outperforms previous work significantly.
to identity anaphors, which indicate coreference between a noun phrase and its antecedent, bridging anaphors link to their antecedent(s) via lexico-semantic, frame, or encyclopedic relations.
contrasting
train_7186
Barzilay and Lapata (2008) model local coherence with the entity grid based on coreference only.
example (1) does not exhibit any coreferential entity coherence, and therefore entity coherence can only be established when bridging is resolved.
contrasting
train_7187
They achieved a kappa score of 0.78 for six top-level categories.
the confusion matrix in Riester, Lorenz, and Seemann (2010) shows that the anaphoric bridging category is frequently confused with other categories: The two annotators agreed on fewer than a third of bridging anaphors.
contrasting
train_7188
(2004a) combines lexico-semantic and salience features to resolve mereological bridging in the GNOME corpus.
their results came from a limited evaluation setting: In the first two experiments they distinguished only between the correct antecedent and one or three false candidates.
contrasting
train_7189
There is partial overlap between bridging resolution and implicit semantic role labeling (i.e., in some bridging cases, antecedents are implicit semantic roles of bridging anaphors).
bridging resolution considers all possible nominal bridging anaphors in running text.
contrasting
train_7190
In Example (1), the antecedent the Polish center could trigger the anaphor walls.
bridging anaphors can be solely indicated by referential patterns, as nonsense Example (16) shows: the wug is clearly a bridging anaphor although we do not know the antecedent.
contrasting
train_7191
For instance, walls in Example (1) are necessary parts of the antecedent the Polish center according to common sense knowledge.
windows and carpets are only probable or inducible parts of a building but still function as bridging anaphors in Example (1).
contrasting
train_7192
f_9 IsSetElement is used to identify set-membership bridging cases (see Example (12)), by checking whether the mention head is a number or indefinite pronoun (one, some, none, many, most) or modified by each, one.
not all numbers are bridging cases, and we use f_11 IsYear to exclude some such cases.
contrasting
train_7193
For our general model formulation, we assume the antecedent entity setting as the general case which subsumes the other setting.
candidate generation, feature computation, and evaluation will vary for the two settings and therefore we explore both scenarios in the experiments.
contrasting
train_7194
Set-membership relations between anaphor and antecedent evade the preposition pattern, because the anaphor often has no common noun head (Example (29)).
in such a bridging relation, the antecedent is semantically compatible with the verb the anaphor depends on.
contrasting
train_7195
This is trivial in the case of context free grammar, where the parse structures are ordered trees; in the case of type logical categorial grammar, the parse structures are proof nets.
with respect to multiplicatives, intrinsic proof nets have not yet been given for displacement calculus, and proof nets for additives, which have applications to polymorphism, are not easy to characterize.
contrasting
train_7196
Because this task may be very complex (in fact the search space may grow exponentially), a technique called synchronous parsing is used to constrain the search space.
this approach exhibits the problem of spurious ambiguity, which that paper elaborates in depth.
contrasting
train_7197
For instance, context-free grammars, which correspond to push-down automata, have access to unlimited amounts of memory.
they are known not to be efficiently learnable.
contrasting
train_7198
Trying to improve on the results with k = 3, we expected that adding a (limited size) right context, which is performed using k, l-local merging, would provide a gentle way of restricting state-merging (with respect to k-testable with the same k values).
this turns out not to be the case.
contrasting
train_7199
In the case of the 3-testable merging criterion, state q_3 is merged with other states (like q_3′) that show the same incoming path of three transitions (a b c).
in the 2, 1-local merging criterion case, state q_2 is merged with other states (like q_2′) that also have the same incoming path of two transitions (a b) and an outgoing transition (c).
contrasting
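A closing usage note: the schema above lists four label classes, while every record in this excerpt is labeled "contrasting", so it may be worth checking the overall label distribution before using the data. A minimal sketch, under the same assumptions as the loading example above (the "DATASET_NAME" identifier remains a placeholder):

from collections import Counter
from datasets import load_dataset

dataset = load_dataset("DATASET_NAME", split="train")

# Tally how many records carry each of the four label classes.
label_counts = Counter(record["label"] for record in dataset)
print(label_counts.most_common())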