Dataset schema: id (string, length 7–12), sentence1 (string, length 6–1.27k), sentence2 (string, length 6–926), label (string, 4 classes)
train_11700
It shares much of its motivation with co-training (Blum and Mitchell, 1998) in improving initial models by leveraging additional data that is easy to obtain.
as the examples of Section 2.3 illustrate, COREF's interactions with its users offer substantially more information about interpretation than the raw text generally used for co-training.
contrasting
train_11701
We could run TiMBL with different values of k, as this should lead to better feature integration.
this is difficult to explore without development data, and initial experiments with higher k values were not promising (see section 4.2).
contrasting
train_11702
On the one hand, one can simply additively combine features into a larger vector for training, as described in section 4.2.
one can use one set of features to constrain another set, as described in section 5.
contrasting
train_11703
Media also offers the possibility to compare with the state-of-the-art, which our re-rankers seem to improve.
we need to consider that many Media corpus versions exist and this makes such comparisons not completely reliable.
contrasting
train_11704
Thus, one essential issue is to identify more complex expressions which, in appropriate contexts, convey the same (or similar) meaning.
more generally, we are also interested in pairs of expressions in which only a uni-directional inference relation holds.
contrasting
train_11705
One way to deal with textual inference is through rule representation, for example X wrote Y ≈ X is author of Y.
manually building collections of inference rules is time-consuming and it is unlikely that humans can exhaustively enumerate all the rules encoding the knowledge needed in reasoning with natural language.
contrasting
train_11706
If the score is above a threshold the rule is applied.
these lexical-syntactic rules are only used in about 3% of the attempted proofs and in most cases there is no lexical variation.
contrasting
train_11707
In a manual analysis, close to 80% of these are correct rules; this is higher than the estimated accuracy of DIRT, probably due to the bias of the data, which consists of pairs which are entailment candidates.
given the small number of inference rules identified this way, we performed another analysis.
contrasting
train_11708
These annotations are often used as training data for semantic role labeling systems.
the applicability of these systems is limited to those words for which labeled data exists, and their accuracy is strongly correlated with the amount of labeled data available.
contrasting
train_11709
To measure the working memory burden of a text, we'd like to capture the number of discourse entities that a reader must keep in mind.
the "unique entities" identified by the named entity recognition tool may not be a perfect representation of this; several unique entities may actually refer to the same real-world entity under discussion.
contrasting
train_11710
First, as can be expected, the ASR 1-best output is typically error-prone, especially when a user query originates from a noisy environment.
ASR word confusion networks, which compactly encode multiple word hypotheses with their probabilities, have the potential to alleviate the errors in a 1-best output.
contrasting
train_11711
Equation 12 shows a method for parsing the 1best ASR output using the FST.
a similar method can be applied for parsing WCNs.
contrasting
train_11712
If a word does not appear in the query at all, its weight becomes equal to the usual tf_{w,s}·idf_{w,S}. The overall novelty ranking formula is based on the query-dependent PageRank introduced in Equation (2).
since we already incorporate the relatedness to the query in these two settings, we focus only on related sentences and thus may drop the relatedness-to-the-query part from (2). We set λ to the same value as in OTTERBACHER.
contrasting
train_11713
We see in Table 4 from setup #5 experiments that training and testing on error-containing utterances led to a dramatic improvement in F1 score.
our results for experiments using setup #6 (where training data was filtered to contain errorful data but test data was fully preserved) are consistently worse than those of either setup #2 (where both train and test data was untouched) or setup #5 (where both train and test data were prefiltered).
contrasting
train_11714
Word segmentation and tagging are the necessary initial steps for almost any language processing system, and Chinese parsers are no exception.
automatic Chinese word segmentation and tagging have been recognized as a very difficult task (Sproat and Emerson, 2003), for the following reasons: First, Chinese text provides few cues for word boundaries (Xia, 2000; Wu, 2003) and part-of-speech (POS) information.
contrasting
train_11715
Recall that based on the deterministic segmentation and tagging results produced by PKU's tokenizer-tagger, our system can only parse 80 out of the 101 sentences, and among the 21 completely failed sentences, 20 sentences failed due to segmentation and tagging mistakes.
after the application of the hand-crafted FST rules for postprocessing, 100 out of the 101 sentences can be parsed.
contrasting
train_11716
Following a strategy widely used in Chinese word segmentation, we did this by regarding the problem as a character tagging problem.
since we intended to learn rules that deal with segmentation and POS tagging simultaneously, we could not adopt the BIO-coding approach.
contrasting
train_11717
Label 4 is used for group addressing.
this results in a very skewed class distribution because the next speaker is the intended addressee 41% of the time, and 38% of instances are plural; the remaining two classes therefore make up a small percentage of the data. (Table 2, distribution of addressees for singular you: L1 35.17%, L2 30.34%, L3 34.49%.)
contrasting
train_11718
For the generic vs. referential task, the discourse and multimodal classifiers both outperform the majority class baseline (p < .001), achieving accuracy scores of 68.71% and 68.48% respectively.
compared to when using manual transcriptions and annotations (see Section 6.1), removing forward-looking (FL) information reduces performance (p < .05).
contrasting
train_11719
As all possible analyses are computed, any number of best parses can be extracted.
other treebank parsers use sophisticated search strategies to find the most probable analysis without examining the set of all possible analyses (Charniak et al., 1998;Klein and Manning, 2003).
contrasting
train_11720
The reason for this high complexity is the problem of unrestricted crossing configurations, appearing when dependency subtrees are allowed to interleave in every possible way.
just as it has been noted that most non-projective structures appearing in practice are only "slightly" nonprojective , we characterise a sense in which the structures appearing in treebanks can be viewed as being only "slightly" ill-nested.
contrasting
train_11721
For example, for the imperfective passive, the CC C root pattern appears in the template C1C EC, and the root is what is left if the two vowels in the stem are skipped over.
we want to extract both the derivational pattern and the root, and the problem for finite state methods, as discussed in Section 1.2, is that both are spread throughout the stem.
contrasting
train_11722
While significant, the improvement from the Baseline to LexFilter is quite small, which is due to the Baseline's own rather strong illegal analyses filtering heuristic.
unlike the oracle segmentation case, here the semisupervised lexical probabilities (LexProbs) have a major effect on the parser performance (∼69 to ∼73.5 F-score), an overall improvement of ∼6.6 F-points over the Baseline, which is the previous state-of-the-art for this joint task.
contrasting
train_11723
Moreover, simultaneous interpretation requires a soundproof booth with audio equipment, which adds an overall cost that is unacceptable for all but the most elaborate multilingual events.
a simultaneous translation system also needs time and effort for preparation and adaptation towards the target application, language and domain.
contrasting
train_11724
This coefficient shows a global agreement between all the judges, which goes beyond Cohen's Kappa coefficient.
a low coefficient requires a more detailed analysis, for instance, by using Kappa for each pair of judges.
contrasting
train_11725
T036 is more fluent due to the less technical nature of the speech and the more general vocabulary used.
the T036-2 and T036-3 excerpts get a lower quality score, due to the description of data collections or institutions, and thus the use of named entities.
contrasting
train_11726
Indeed, there is high lexical repetition, a large number of named entities, and the quality of the excerpt is very training-dependent.
the system runs into trouble processing foreign names, which are very often not understandable.
contrasting
train_11727
Although we cannot compare the performance of the restricted automatic system to that of the restricted interpreter (since the data sets of questions are different), it seems that the interpreter's is better.
the loss due to subjective evaluation seems to be higher for the interpreter than for the automatic system.
contrasting
train_11728
This clearly shows the difficulty of the whole task.
the human end-to-end evaluation of the system in which the system is compared with human interpretation shows that the current translation quality allows for understanding of at least half of the content, and therefore, may be already quite helpful for people not understanding the language of the lecturer at all.
contrasting
train_11729
In principle, we can use all of the induced affixes as features for training a POS tagger and an NE recognizer.
we choose to use only those features that survive our feature selection process (to be described below), for the following reasons.
contrasting
train_11730
(2007) reduce the number of POS tags from 45 to 5 when training a factorial dynamic CRF on a small dataset (with only 209 sentences) in order to reduce training and inference time.
we propose a relatively simple model for jointly learning Bengali POS tagging and NER, by exploiting the limited dependencies between the two tasks.
contrasting
train_11731
Note, however, that this observation does not hold for English, since many prepositions and determiners are part of an NE.
this observation largely holds for Bengali because prepositions and determiners are typically realized as noun suffixes.
contrasting
train_11732
To make these unsupervised taggers practical, one could attempt to automatically construct a POS lexicon, a task commonly known as POS induction.
POS induction is by no means an easy task, and it is not clear how well unsupervised POS taggers work when used in combination with an automatically constructed POS lexicon.
contrasting
train_11733
Similar performance trends can be observed when Lexicon 2 is used (see Figure 3(b)).
both baselines achieve comparatively lower tagging accuracies, as a result of the higher unseen word rate associated with Lexicon 2.
contrasting
train_11734
However, phrase-based models can fail to reorder words or phrases which would seem obvious if it had access to the POS tags of the individual words.
for example, a translation from French to English will usually correctly reorder the French phrase with POS tags NOUN ADJECTIVE if the surface forms exist in the phrase table or language model; however, phrase-based models may not reorder even these small two-word phrases if the phrase is not in the training data or involves rare words.
contrasting
train_11735
The ATS models do provide an integrated approach, but their lexical translation is limited to the word level.
in contrast to prior work, we present an integrated approach that allows POS-based reordering and phrase translation.
contrasting
train_11736
The beam search decoding algorithm is unchanged from traditional phrase-based and factored decoding.
the creation of translation options is extended to include the use of factored templates.
contrasting
train_11737
The factored template models were retrained with increased maximum phrase length, but this made no difference or negatively impacted translation performance (Figure 1).
using phrase lengths over 5 words does not increase translation performance as had been expected (Figure 1: varying max phrase length).
contrasting
train_11738
It can be seen from Table 3 that generalizing the reordering model on POS tags (line 2a) improves performance, compared to the non-lexicalized reordering model (line 2).
this performance does not improve over the lexicalized reordering model on surface forms (line 1a).
contrasting
train_11739
In fact, WordNet has been used as a de-facto standard repository of meanings.
to our knowledge, the meanings represented by WordNet have only been used for WSD at a very fine-grained sense level or at a very coarse-grained class level.
contrasting
train_11740
In this way, Wikipedia provides a new very large source of annotated data, constantly expanded (Mihalcea, 2007).
some research has focused on using predefined sets of sense-groupings for learning class-based classifiers for WSD (Segond et al., 1997), (Ciaramita and Johnson, 2003), (Villarejo et al., 2005), (Curran, 2005) and (Ciaramita and Altun, 2006).
contrasting
train_11741
Most of the later approaches used the original Lexicographical Files of WN (more recently called SuperSenses) as very coarse-grained sense distinctions.
not so much attention has been paid on learning class-based classifiers from other available sense-groupings such as WordNet Domains (Magnini and Cavaglià, 2000), SUMO labels (Niles and Pease, 2001), EuroWordNet Base Concepts , Top Concept Ontology labels (Alvez et al., 2008) or Basic Level Concepts (Izquierdo et al., 2007).
contrasting
train_11742
More specifically, it is the observation that coordination involves two or more constituents of the same categories.
there are a significant number of more complex cases of coordination that defy this generalization and that make the parsing task of detecting the right scope of individual conjuncts and correctly delineating the correct scope of the coordinate structure as a whole difficult.
contrasting
train_11743
When comparing the results of experiment 1 (n-best parsing) with the present one, it is evident that the F-scores are very similar: 74.53 for the 50-best reranking setting, and 74.46 for the one where we provided the gold scope.
a comparison of precision and recall shows that there are differences: 50-best reranking results in higher recall, while providing gold scope for coordinations results in higher precision.
contrasting
train_11744
If the same parser is used for this step and for the final parse, we can be certain that only scopes are extracted that are compatible with the grammar of the final parser.
parse forests are generally stored in a highly packed format so that an exhaustive search of the structures is very inefficient and proved impossible with present day computing power.
contrasting
train_11745
Since our proposed conjuncts cannot cross these boundaries, the correct second conjunct, ausgebildete Industriekauffrau aus Oldenburg, cannot be suggested.
if we remove these chunk boundaries, the number of possible conjuncts increases dramatically, and parsing times become prohibitive.
contrasting
train_11746
When identifying sentences from which story highlights are generated, the situation is slightly different, as the number of story highlights is not fixed.
most stories have between three and four highlights, and on average between four and five sentences per story from which the highlights were generated.
contrasting
train_11747
Essentially, a system could simply pick the first two sentences of each article and might thus achieve higher precision scores, since it is less likely to return 'wrong' sentences.
if the scores are similar but there is a difference in the number of unique sentences extracted, this means a system has gone beyond the first 4 sentences and extracted others from deeper down inside the text.
contrasting
train_11748
Candidacy for either transliteration or translation is not necessarily determined by orthographic features.
in contrast to English (and many other languages), proper names in Hebrew are not capitalized.
contrasting
train_11749
Precision and recall obtained are 80% and 82%, respectively.
although foreign words are indeed often TTTs, many originally Hebrew words should sometimes be transliterated.
contrasting
train_11750
Foreign words, which retain the sound patterns of their original language with no semantic translation involved, are also (back-)transliterated.
names of countries may be subject to translation or transliteration. We use information obtained from POS tagging (Bar-Haim et al., 2008) to address the problem of identifying TTTs.
contrasting
train_11751
The method uses only positive examples for learning which words to transliterate and achieves over 38% error rate reduction when compared to the baseline.
in contrast to previous studies, this method does not use any parallel corpora for learning the features which define the transliterated terms.
contrasting
train_11752
This indicates that, in our experimental conditions, optimization efforts can never reach the global maximum, but it also indicates that searching for less expensive solutions nevertheless might lead (at least) to a local maximum.
if it is true that the goal function is not monotonic, there is no guarantee that the optimal solution actually constitutes the local maximum, i.e.
contrasting
train_11753
The search stops with the first consistent solution (as we suggest in the present paper).
it is difficult to quantify the number of cascades needed to come to it and moreover, the full ILP machinery is being used (so again, constraints need to be extensionalized).
contrasting
train_11754
The fact that the former symbol should be more specific than the latter can be represented using SPEC atoms like dog n 1 dog n. Note that even a deep grammar will not fully disambiguate to semantic predicate symbols, such as WordNet senses, and so dog n 1 can still be consistent with multiple symbols like dog n 1 and dog n 2 in the semantic representation.
unlike the output of a POS tagger, an RMRS symbol that's output by a deep grammar is consistent with symbols that all have the same arity, because a deep grammar fully determines lexical subcategorisation.
contrasting
train_11755
This is useful for hybrid systems which exploit shallower analyses when deeper parsing fails, or which try to match deeply parsed queries against shallow parses of large corpora; and in fact, RMRS is gaining popularity as a practical interchange format for exactly these purposes (Copestake, 2003).
RMRS is still relatively ad hoc in that its formal semantics is not defined; we don't know, formally, what an RMRS means in terms of semantic representations like (2) and (3), and this hinders our ability to design efficient algorithms for processing RMRS.
contrasting
train_11756
At this point, we cannot provide an efficient algorithm for testing entailment of RMRS.
we propose the following novel syntactic characterisation as a starting point for research along those lines.
contrasting
train_11757
To our knowledge, this is the first time that the regularity of CCG's derivational structures has been exposed.
if we take the word order into account, then the classes of PF-CCG-induced and TAG-induced dependency trees are incomparable; in particular, CCG-induced dependency trees can be unboundedly non-projective in a way that TAG-induced dependency trees cannot.
contrasting
train_11758
In this respect, it is the analogue of a TAG derivation tree (in which the lexicon entries are elementary trees), and we just saw that PF-CCG and TAG generate the same tree languages.
CCG and TAG are weakly equivalent (Vijay-Shanker and Weir, 1994), i.e.
contrasting
train_11759
We have already argued that this tree can be induced by a TAG.
it contains no two adjacent nodes that are connected by an edge; and every nontrivial PF-CCG derivation must combine two adjacent words at least at one point during the derivation. (Figure 8: the divergence between CCG and TAG.)
contrasting
train_11760
The minimization process serves to shrink the FSM down to an equivalent automaton of a suitable size for parsing.
it is usually the case that the size is not small enough to meet the time and memory limitations in parsing.
contrasting
train_11761
Otherwise, I 1 must be the concatenation of two intervals I 1v and I 1v 1 with I 1v 2 v and 1v is also adjacent to some interval in v 2 .
v 0 and v 2 are disjoint.
contrasting
train_11762
Due to previous results (Rambow and Satta, 1999), we know that this is not always possible.
our algorithm may fail even in cases where a binarization exists-our notion of adjacency is not strong enough to capture all binarizable cases.
contrasting
train_11763
Handling terminology is an important matter in a translation workflow.
current Machine Translation (MT) systems do not yet offer anything proactive beyond tools which assist in managing terminological databases.
contrasting
train_11764
Massive amounts of parallel data are certainly available in several pairs of languages for domains such as parliament debates or the like.
having at our disposal a domain-specific (e.g.
contrasting
train_11765
We trained a phrase table on TRAIN, using the standard approach.
because of the small training size and the rather huge OOV rate of the translation tasks we address, we did not train translation models on word tokens, but at the character level.
contrasting
train_11766
Better SMT performance could be obtained with a system based on morphemes, see for instance (Toutanova et al., 2008).
since lists of morphemes specific to the medical domain do not exist for all the languages pairs we considered here, unsupervised methods for acquiring morphemes would be necessary, which is left as a future work.
contrasting
train_11767
Here there is a clear distinction as raters preferred SMAC to LT, indicating that they did find usefulness in systems that modeled aspects and sentiment.
there are still 25.5% of agreement items where the raters did choose a simple leading text baseline.
contrasting
train_11768
Therefore, it is not straightforward to compute the precision (the ratio of correctly detected errors to all error candidates) of this method.
by ignoring variation n-grams of length ≤ 5, Dickinson and Meurers found that 2436 of the 2495 distinct variation nuclei (each nucleus is only counted for the longest n-gram it appears in) were true errors, i.e.
contrasting
train_11769
This process can be applied several times: once we have grouped some characters together, they become the new basic unit to consider, and we can re-run the same method to get additional groupings.
we have not seen in practice much benefit from running it more than twice (few new candidates are extracted after two iterations).
contrasting
train_11770
The search process can be rewritten accordingly. Given the fact that the number of segmentations f_1^K grows exponentially with respect to the number of characters K, it is impractical to first enumerate all possible f_1^K and then to decode.
it is possible to enumerate all the alternative segmentations for a substring of c_1^J, making the utilisation of word lattices tractable in PB-SMT.
contrasting
train_11771
(Ma et al., 2007) proposed an approach to improve word alignment by optimising the segmentation of both source and target languages.
the reported experiments still rely on some monolingual segmenters and the issue of scalability is not addressed.
contrasting
train_11772
(Dyer et al., 2008) extended this approach to hierarchical SMT systems and other language pairs.
both of these methods require some monolingual segmentation in order to generate word lattices.
contrasting
train_11773
The entailment relation has been defined so far in terms of truth values, assuming that h is a complete sentence (proposition).
there are major aspects of inference that apply to the subsentential level.
contrasting
train_11774
Our experiments show that several knowledgebased and corpus-based measures of similarity perform comparably when used for the task of short answer grading.
since the corpusbased measures can be improved by accounting for domain and corpus size, the highest performance can be obtained with a corpus-based measure (LSA) trained on a domain-specific corpus.
contrasting
train_11775
In particular, kernels for the processing of PASs (in PropBank format (Kingsbury and Palmer, 2002)) extracted from question/answer pairs were proposed.
the relatively high kernel computational complexity and the limited improvement on bag-of-words (BOW) produced by this approach do not make the use of such technique practical for real world applications.
contrasting
train_11776
Therefore the answer words should be different and useless to generalize rules for answer classification.
error analysis reveals that although questions are not shared between training and test set, there are common words in the answers due to typical Web page patterns which indicate if a retrieved passage is an incorrect answer, e.g.
contrasting
train_11777
Most of the existing studies on linguistic networks, however, focus only on the local structural properties such as the degree and clustering coefficient of the nodes, and shortest paths between pairs of nodes.
although it is a well known fact that the spectrum of a network can provide important information about its global structure, the use of this powerful mathematical machinery to infer global patterns in linguistic networks is rarely found in the literature.
contrasting
train_11778
The performance of Lesk (63.89%) is also much higher than in our previous experiments, thanks to the higher chance of finding a 1:1 correspondence between the two sections.
we observed that this does not always hold, as also supported by the better results of CQC.
contrasting
train_11779
Word Sense Disambiguation is a large research field (see (Navigli, 2009) for an up-to-date overview).
in this paper we focused on a specific kind of WSD, namely the disambiguation of dictionary definitions.
contrasting
train_11780
Many parsing techniques including parameter estimation assume the use of a packed parse forest for efficient and accurate parsing.
they have several inherent problems deriving from the restriction of locality in the packed parse forest.
contrasting
train_11781
These learn patterns associated with individual entity classes, making use of many contextual, orthographic, linguistic and external knowledge features.
they rely heavily on large annotated training corpora.
contrasting
train_11782
Adjacent characters in the same orthographic class were collapsed.
we distinguish single from multiple characters by duplication.
contrasting
train_11783
American, Islamic) generally link to nominal articles.
they are treated by CoNLL. The baseline system described above achieves only 58.9% and 62.3% on the CoNLL and BBN TEST sets (exact-match scoring) with 3.5 million training tokens.
contrasting
train_11784
We plan to increase the largest training set the C&C tagger can support so that we can fully exploit the enormous Wikipedia corpus.
we have shown that Wikipedia can be used as a source of free annotated data for training NER systems.
contrasting
train_11785
(2003) suggest combining various information sources for solving SAT analogy problems.
previous work on compound interpretation has generally used either lexical similarity or relational similarity but not both in combination.
contrasting
train_11786
Run Y performs better than E_a for 5 of the 25 individual concepts, including NationalPark, for which no instances of national parks or related class labels are available in run E_a; and River, for which relevant instances exist in the labeled classes in E_a, but they are associated with the class label river systems, which is incorrectly linked to the WordNet concept systems rather than to rivers.
run E_a outperforms Y on 12 individual concepts (e.g., Award, DigitalCamera and Disease), and also as an average over all classes (last two rows in Table 5).
contrasting
train_11787
Adding another word space model to the ensemble, either a word-based or syntax-based model, brings down performance.
the addition of the compound model does lead to a clear gain in performance.
contrasting
train_11788
For example, the Pinchak and Lin (2006) model is forced to consider a question focus context (such as "X is a city") to be of equal importance to non-focus contexts (such as "X host Olympics").
we have observed that it is more important that candidate X is a city than it hosted an Olympics in this instance.
contrasting
train_11789
For example, "explorer Hernando Soto" is a candidate marked appropriate by both annotators to the question "What Spanish explorer discovered the Mississippi River?"
our context database does not include the phrase "explorer Hernando Soto" meaning that only a few features will have non-zero values.
contrasting
train_11790
Their focus is on complex-answer questions in addition to the use of a collection of user-generated answers rather than answer typing.
their use of preference ranking mirrors the techniques we describe here in which the relative difference between two candidates at different ranks is more important than the individual candidates.
contrasting
train_11791
A user simulation for NLG is very similar, in that it is a predictive model of the most likely next user act.
this user act does not actually change the overall dialogue state (e.g.
contrasting
train_11792
For example, the user is less likely to choose an item if there are more than 7 attributes, but the realizer can generate 9 attributes.
in some contexts it might be desirable to generate all 9 attributes, e.g.
contrasting
train_11793
Urdu is influenced from Arabic, and can be considered as having three main parts of speech, namely noun, verb and particle (Platts, 1909;Javed, 1981;Haq, 1987).
some grammarians proposed ten main parts of speech for Urdu (Schmidt, 1999).
contrasting
train_11794
Statistical approaches usually achieve an accuracy of 96%-97% (Hardie, 2003: 295).
statistical taggers require a large training corpus to avoid data sparseness.
contrasting
train_11795
The different scores of the RANDOM assignment for the two languages can be explained by their different branching factors: trees in the German treebank are typically more flat than those in the English WSJ corpus.
note that other settings of our two annotation algorithms do not always obtain better results than random.
contrasting
train_11796
This hypothesis is passed on as output.
at time-point t 2 , as additional acoustic frames have come in, it becomes clear that "forty" is a better hypothesis about the previous frames together with the new ones.
contrasting
train_11797
Given the robust performance of MAX when translation scores originated from the same translation model in English to Italian, it is not surprising that it favors the case where all the outputs are scored by the same model ("All tuned").
diversity amongst the system outputs has been shown to be important to the performance of system combination techniques (Macherey and Och, 2007).
contrasting
train_11798
the realisations are expected to be more variable than those of a rare type.
another aspect of Exemplar Theory has to be considered, namely entrenchment (Pierrehumbert, 2001;Bybee, 2006).
contrasting
train_11799
Baumann (2006) finds deaccentuation to be the most preferred realisation for givenness in his experimental phonetics studies on German.
Baumann (2006) points out that H+L* has also been found as a marker of givenness in a German corpus study.
contrasting