Columns:
  id: string, length 7-12
  sentence1: string, length 6-1.27k
  sentence2: string, length 6-926
  label: string class, 4 values
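The rows that follow are flattened records in the column order given above (id, sentence1, sentence2, label). As a minimal sketch, a flat stream like this could be regrouped into one dictionary per example; note that the 4-lines-per-record layout and the field order are assumptions read off the header, not a documented format:

```python
# Regroup a flat stream of (id, sentence1, sentence2, label) lines
# into one record per example. The 4-line record layout is an
# assumption based on the column header above.

def parse_records(lines):
    fields = ("id", "sentence1", "sentence2", "label")
    records = []
    # Step through the stream four lines at a time, dropping any
    # trailing partial record.
    for i in range(0, len(lines) - len(lines) % 4, 4):
        records.append(dict(zip(fields, lines[i:i + 4])))
    return records

sample = [
    "train_19000",
    "On the other hand, there is statistically significant difference ...",
    "both types of errors can be dealt with by traditional approaches ...",
    "contrasting",
]
records = parse_records(sample)
print(records[0]["id"], records[0]["label"])  # → train_19000 contrasting
```

Since every label in this chunk is "contrasting", a loader along these lines would mainly be useful for pairing sentence1/sentence2 for a contrastive-relation classifier.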
train_19000
On the other hand, there is statistically significant difference in the number of spelling errors of distance 1 and of distance 2 between the two groups.
both types of errors can be dealt with by using traditional spelling correction approaches.
contrasting
train_19001
As the number of candidates increases, both evaluation metrics should increase as well.
one should expect a negative impact of too many candidates being shown to the user, especially among participants with language disorders for whom reading is an issue.
contrasting
train_19002
Our research has pointed us in the direction of Mediaeval being the original print font ( De Drukkerij E. J. Brill, 1932), with a digital representation being found in Dutch Mediaeval; the latter, however, appears to be difficult to obtain, and also differs from the original font in the shape of its serifs and that of various capital letters.
even if we had been able to retrieve the original font, text printed on a 100-year-old typesetting machine in comparison to that generated by modern typesetting software might differ too strongly in appearance to train a language model for the textual source (Nagy et al., 2000).
contrasting
train_19003
Three aspects that make this toolkit suite stand out, and that, to our knowledge, no other tool or toolkit can provide, are that (1) it presents confusion matrices and accuracy values for single characters and words, (2) it comes with an extensive set of separate tools that each assess and highlight different performance metrics, and (3) it is the only toolkit suite in existence to have been used as a de facto standardized assessment tool.
non-ASCII text was causing troubles to this toolkit from the mid-nineties, when OCR operations were still focused on purely ASCII texts (Beusekom et al., 2008).
contrasting
train_19004
Secondary data usually works as prime data in publication.
in a language study, primary data itself is prime data in publication.
contrasting
train_19005
So far, it has been taken for granted that software of multimedia players is used to ensure the relationship.
in order to make sound data as primary data in linguistics, sound data should ideally be independent from any application environments.
contrasting
train_19006
External researchers may discover other novel uses for the corpus, in both corrected and uncorrected forms.
the need to develop a robust method for Welsh-language text anonymization prior to more widespread distribution is a significant hurdle that must be overcome.
contrasting
train_19007
In Figure 7(a), even though the non-pivot words are connected by four pivot words representing four senses/meanings, the transgraph only has one translation pair candidate (w^A_1 - w^C_1) and so the precision is 100%.
polysemy in the pivot language negatively impacts the precision.
contrasting
train_19008
So the part of the corpus to be published avoids such content.
the remaining parts still contains lots of references to third persons.
contrasting
train_19009
It did not provide additional name reference candidates to the other approaches.
a match with the list of concepts might help to decide whether or not a name should be anonymized because the list contains mainly well-known persons or places.
contrasting
train_19010
In the recent years, named entity recognition systems gained great popularity on the Web.
these systems, at large, do not evaluate the actual importance of the recognized entities in the documents.
contrasting
train_19011
tagging 'Yellow Submarine' as dbpedia.org/page/Yellow Submarine (song).
this is not a trivial task as mentions to Music entities show language and register idiosyncrasies (Tata and Di Eugenio, 2010;Gruhl et al., 2009), and therefore a certain degree of tailoring is required in order to account for them.
contrasting
train_19012
Arabic script does not, when writing Arabic, represent short vowels or make a distinction between long high vowels and glide consonants.
this can be a source of greater ambiguity when Arabic scripts are used to write languages in which vowels carry a higher functional load -in particular, Indo We implemented and trained a character level linear chain conditional random field (CRF) based system for converting the Perso-Arabic script for Sorani to IPA.
contrasting
train_19013
The features are technically three-valued: plus (+), minus (−) and unspecified (0).
unspecification is used sparingly in the database and 0 values can be safely recoded as + values.
contrasting
train_19014
This suggests that active contribution to guideline development is more useful for subsequent annotation than simply following the guidelines developed by others.
to the pilot phase, the main phase showed clearer trends.
contrasting
train_19015
The former NER output is clearly preferable to the latter because, in the former case, lung and cancer are semantically related to the reference annotation; the meaning of the top-level entity (lung cancer) can be composed from the annotated entities.
only recognizing cancer should arguably be penalized, since it is not as informative.
contrasting
train_19016
Knowledge Base Population (KBP) (Ji et al., 2014), the process through which the entities identified by NEL systems are used to populate new knowledge bases, is another useful technique for exploring the relations between entities.
since NEL or KBP evaluation tasks might require a new corpus or at least a new gold standard, and the creation of such resources requires significant effort, there is a desire to automate steps in the corpus creation process.
contrasting
train_19017
By exploiting the knowledge graph built with named entities from a corpus, new gold standards can be created for specific tasks like KBP, entity contextualization or enrichment.
to research focused on social media (Bontcheva and Rout, 2012), we used regional news and analyzed transcripts from the German regional broadcaster RBB (Rundfunk Berlin Brandenburg).
contrasting
train_19018
Besides the Wikipedia web site, Wikipedia data is available via a RESTful API as well as complete XML dumps.
aPI access is officially limited to one request per second, prohibiting a web scraping approach to acquire the data.
contrasting
train_19019
During annotation, we encountered cases in which two patterns contradicted each other, e.g., "person|SPOUSE person|SPOUSE divorced in date|TODATE" ⊥ "person|SPOUSE was married to person|SPOUSE until death in date|TODATE".
these cases were rare and we did not annotate them separately, as the entailment graphs we construct only capture binary decisions (entailment, non-entailment).
contrasting
train_19020
To summarise, the algorithms used in the EOP have not succeeded in coming close to getting enough the correct answers to pass the USA national bar exam.
they can reliably identify the wrong answers.
contrasting
train_19021
The RTE datasets are widely applicable for number of applications, such as Question Answering, Information Retrieval or Information Extraction.
the STS/SR task requires to identify the degree of similarity or relatedness that exists between two text fragments (phrases, sentences, paragraphs, etc), where similarity is a broad concept and its value is normally obtained by averaging the opinion of several annotators.
contrasting
train_19022
For the RTE 1-4 corpora, we use the same SR interval scores and the same type of TE values, that means that a pair of sentences could have a SR score on a 5-point-semantic-scale [1-5] that ranges from 1 (completely unrelated) to 5 (very related) and that there are TE values ENTAILMENT, CON-TRADICTION and NEUTRAL.
as the texts in RTE corpora are usually not equal in size/length between the text (T) and the hypothesis (H), and most of the time, one text is (much) longer than the other one, we need to have a specific rule for this case.
contrasting
train_19023
Few cases were discussed, less than 5%, in order to leverage over big differences in annotation.
the annotation difficulty was not homogeneous among the RTE 1-4 corpora.
contrasting
train_19024
The RTE-4 introduces many to one mapping.
it is not clear how this could influence the performances of certain systems overall.
contrasting
train_19025
We present the plot for each corpora separately (in the final version of the paper).
only the degree of this correlation may vary, but it is general that ENTAILMENT is associated with high similarity.
contrasting
train_19026
A different module carries out a deep analysis of the sentence structure in order to decide the entailment, especially for those borderline cases signaled by the SR score, that is, scores that are neither very high nor very low. The entailment judgment is used to recompute the SR score, according to the rule described in Figure 9.
the Structural Module in Figure 11 should employ a different class of techniques for determining the entailment.
contrasting
train_19027
It is desirable then to be able to automatically extract this information from users' content.
to the best of our knowledge there is no such resource for author profiling of health forum data.
contrasting
train_19028
Ideally, we should get recordings from the same bilingual or multilingual speaker for all the languages that are being mixed and then switch between databases while synthesizing the different languages.
getting such data and maintaining recording conditions across all the TTS databases may be difficult.
contrasting
train_19029
We conducted subjective tests comparing our manually mapped phonemes to the automatically mapped phonemes.
we found that in subjective tests, listeners had a very significant preference for the manually mapped phonemes.
contrasting
train_19030
Participants with high levels of extraversion were more influenced by the agent condition than participants with low extraversion.
as visible in Figure 4, these high extraverts diverged from the pattern of expressive behavior of the agents, making gestures with larger spatial extent when the agent moved to a more introverted style of nonverbal expressive behavior over the course of the interaction.
contrasting
train_19031
Distinguishing between the useful answer and the gibberish ones automatically may be extremely difficult, or even impossible.
during the following evaluation phase, filtering out workers who don't provide acceptable evaluations of the translations automatically would be much easier and could e.g.
contrasting
train_19032
On the one hand, that approach would probably be very expensive, and one may assume that crowdsourcing the task will be considerably cheaper.
with input from only one person, the variance of the received verbalizations would probably be lower than if possibly a lot of different people provide translations.
contrasting
train_19033
This time, results were significantly better, and there were no obvious word-by-word translations or workers who entered gibberish.
at around one month this run also took much longer to finish.
contrasting
train_19034
On the one hand, CrowdFlower uses these questions to test workers before the actual task starts in so-called quiz mode, where workers need to answer a certain number of test questions (five in our case) before they can actually work on the task.
also during the task itself, a certain amount of the microtasks shown to the workers (in our setup, twenty percent) are actually test questions.
contrasting
train_19035
For example, for the property yearOfConstruction the Japanese gold standard, among other entries, contains the construction (に)完成する, which is a rather general term that could be translated as to complete [in].
the English gold standard we worked with contained the more specific constructed [in] as the only verbalization of yearOfConstruction.
contrasting
train_19036
In general, different people tend to describe situations subjectively, with a varying degree of detail.
texts from the TAKING A BATH and PLANTING A TREE scenarios contain a relatively smaller number of sentences and fewer word types and tokens.
contrasting
train_19037
More than 2 out of 3 participants in total belong to one of only 5 labels.
the distribution for events is more balanced.
contrasting
train_19038
On the one hand, the manual creation of widecoverage knowledge bases is infeasible, due to the size and complexity of relevant script knowledge.
texts typically refer only to certain steps in a script and leave a large part of this knowledge implicit, relying on the reader's ability to infer the full script in detail.
contrasting
train_19039
In Aroyo and Welty (2012) the focus of crowdsourcing is not on assessing the ability of the crowd to perform a specific task, i.e., event detection, but on disagreement as a "natural state" suggesting that event semantics are imprecise and varied.
sprugnoli and Lenci (2014) evaluated the ability of the crowd in detecting event nominals in Italian, pointing out the complexity of this task due to the presence of ambiguous patterns of polysemy.
contrasting
train_19040
A less explored feature of the collection is the inclusion of alternative translations, which can be very useful for training paraphrase systems or collecting multi-reference test suites for machine translation.
differences in translation may also be due to misspellings, incomplete or corrupt data files, or wrongly aligned subtitles.
contrasting
train_19041
In most cases, they contain exactly one version per language, referring to either the source text or its translation.
there are cases in which one would like to consider alternative translations, for example, when evaluating machine translation using metrics such as BLEU (Papineni et al., 2002).
contrasting
train_19042
Certainly, these differences are not very interesting when looking for truly alternative translations.
misspellings are important to identify for further cleaning of the data or for filtering out corrupted portions of the collection.
contrasting
train_19043
However her heart swayed , Edith suffered .
her heart swayed , Edih suffered .
contrasting
train_19044
Thus, WA research would highly benefit from gold standard data specifically tailored to assess WA systems on this is-sue.
to our knowledge, none of the available WA benchmarks specifically focuses on the problem of out-ofvocabulary (OOV) and rare words.
contrasting
train_19045
"voi") is missing, so in principle "you" could be left unaligned.
since the information about grammatical person is present in the verb, "you" was aligned with a P-link to "domanderete".
contrasting
train_19046
The number of these links is 14,598, out of which 84% are S-links (12,218).
since in this work we restrict the evaluation to only OOV/rare words, both the detailed statistics presented in Table 3 and the evaluation results given in Tables 8 and 9 refer to all and only the alignment links directly involving OOV/rare words.
contrasting
train_19047
The pioneer metric BLEU (Papineni et al., 2002) was originally designed to work with four reference sentences.
obtaining reference sentences is labour-intensive and expensive.
contrasting
train_19048
Meteor (Denkowski and Lavie, 2014), TERp (Snover et al., 2009) or ParaEval (Zhou et al., 2006).
out of these metrics, only Meteor is available for MT evaluation of Czech; we use it for comparison with our results.
contrasting
train_19049
The stochastic parse ranking module, which is part of the XLE distribution (Riezler and Vasserman, 2004), was trained on 34,000 manually disambiguated sentences (330,000 words).
to previous work on parse ranking for LFG treebanks (Cahill et al., 2007a), we are using discriminants as features in the log-linear model of the module.
contrasting
train_19050
This is the same idea as the one pursued by (Van der Wouden et al., 2015), who describe a project adding intelligent links to a grammatical database in the form of annotated queries to various Dutch language resources, thereby both making the query facilities more accessible and adding to the value of the grammatical database.
norsk referansegrammatikk is only available on paper, so in our case it is a question of structuring the documentation according to the chapter structure of the published grammar.
contrasting
train_19051
Expected results are observed using syntactic features that improve over a baseline as it was already demonstrated for DM (Ribeyre et al., 2015).
it is important to understand what is indeed improved with those features.
contrasting
train_19052
We exclusively identifies 356 constructs that are mostly "verbal masdar" ICs.
cATiB exclusively identifies 1,315 ICs.
contrasting
train_19053
An interesting result of their study is that noun phrases in English informational writing have become increasingly elaborate over the last two centuries, making academic discourse a "compressed" genre.
biber and Gray (2011) do not relate this change to terminology, i.e.
contrasting
train_19054
1) lists key benefits for microservice-based architectures, including: "scaling": As DEREKO grows, the capacity of KorAP has to enhance as well.
scalability requirements differ depending on the component.
contrasting
train_19055
This indicates that blanket is a highly imageable word and would have a higher rating on an imageability scale.
the word honour does not form a mental image as easy as blanket, meaning that honour would have a lower value on the scale.
contrasting
train_19056
This is consistent with our hypothesis.
the nouns above the 50% line from the BOUN>CLC words did not get high AoA ratings, as indicated by the negative correlation between them (r (48) = -0.44, p<0.01).
contrasting
train_19057
When only considering German documents from the crawled data, the focused crawl yields consistently lower perplexity values and the difference increases as the crawl progresses.
while more data is collected, the larger becomes the fractional amount of relevant / German vs. irrelevant / non-German data.
contrasting
train_19058
Recently, statistical machine translation was also used for diacritic restoration of Hungarian (Novák and Siklósi, 2015).
we consider the method to be too complex for the task at hand as diacritic restoration does not require calculating word or phrase alignments, does not use phrases in the translation model and does not need to perform reordering.
contrasting
train_19059
Best results are obtained when taking into account both the probability of a form given its dediacritised version and the probability of the form in the given context (TM+LM).
very good results can be obtained already when using just the probability of each form given its dediacritised version (lexicon).
contrasting
train_19060
Tables 1 and 2, the symbol * represents a space 'inside' a word spelling, the last character of a word spelling being usually a space.
since word spellings are not necessarily 1-grams, not all word spellings recognised by our parser end in a space character.
contrasting
train_19061
The WMT15 QE shared task also followed this approach, using METEOR (Banerjee and Lavie, 2005) for a paragraph-level QE task (Bojar et al., 2015).
as shown by Scarton et al.
contrasting
train_19062
When these words were included among one of the unrelated alternatives, they were replaced by other candidates following the same criteria for frequency and number of senses as before.
when they were either among the targets or related words, they were simply removed from the resource.
contrasting
train_19063
Marmot contains utilities targeted at quality estimation at the word and phrase level.
due to its flexibility and modularity, it can also be extended to work at the sentence level.
contrasting
train_19064
These tasks might appear identical to the web search problem.
there is a number of distinct characteristics.
contrasting
train_19065
Multiple rank-ordering evaluation metric algorithms exist in the field of information retrieval (IR).
none of them is appropriate for the task described in the previous chapter.
contrasting
train_19066
The cost function was intentionally designed this way to bring more relevant search results closely to the top.
the rank-ordering problem needs a relative function with respect to the rest of the elements.
contrasting
train_19067
For example a list [9,1,1] will have different costs for [1,9,1] and [1,1,9].
we contend that the two lists are equally wrong because the algorithm decided that element of rank 9 is rank 1.
contrasting
train_19068
This problem is very challenging and the results are far from perfect.
to demonstrate shortcomings of popular rank-measures we create four tests: 1) we limit the data and produce a bad ranking prediction using limited part-of-speech analysis, 2) a slightly better rank predictions using LIWC word list (Pennebaker et al., 2001), 3) further improved ranker using n-gram approach, and 4) the perfect prediction, comparing the reference with itself.
contrasting
train_19069
TextRank outperforms in all preprocessing settings.
no algorithm outperform the PS-baseline.
contrasting
train_19070
It suggests that proper word segmentation has the positive effect in experiment 1 with an average score 0.07.
proper word segmentation has a negligible effect for Experiment 2 and 3.
contrasting
train_19071
From these results, it may be concluded that proper word segmentation improves the summarization results marginally.
it is worth noting that the resources (stopword lists, lemmatizer, and stemmer) are built on space-segmented words.
contrasting
train_19072
The possible reason for the similar results may be that, probably, both have the same level of inconsistent over-stemming, causing data sparseness problem.
fix 1 seem to have consistent over-stemming, which reduces sparseness.
contrasting
train_19073
Although we consciously exclude relations between entities expressed as pronouns, this does not seem to result in significant loss of information, as abstracts and introductions occur early in the paper and usually contain the first mention of an entity.
other limitations stem from entity annotation errors, in particular bad delimitation.
contrasting
train_19074
When processing news (and news-style) documents, SUTime and HeidelTime perform similarly.
two major differences between them are that HeidelTime is multilingual, and that it applies different normalization strategies depending on the domain of the documents that are to be processed.
contrasting
train_19075
Explicit expressions can be directly normalized with standard temporal knowledge (e.g., 2016-01 for January 2016) and implicit expressions with non-standard temporal knowledge such as information about holidays (e.g., 2016-03-17 for Saint Patrick's Day 2016).
the normalization of relative and underspecified expressions requires a reference time for the normalization, and a relation to the reference time for underspecified expressions, additionally.
contrasting
train_19076
Note that the WikiWars corpus contains annotations in TIMEX2 format and that both HeidelTime and SUTime extract temporal expressions following TimeML's TIMEX3 tags.
a meaningful comparison is nevertheless possible as (i) many TIMEX values are identical in TIMEX2 and TIMEX3, (ii) some simple mappings from TIMEX3 to TIMEX2 are possible and we applied them for SUTime's and HeidelTime's output, and (iii) the other differences affect HeidelTime's and SUTime's evaluation results in the same manner.
contrasting
train_19077
A further temporal tagger distinguishing between newsand narrative-style documents is DANTE (Mazur and Dale, 2009).
dANTE extracts temporal expressions following the older TIDES TIMEX2 annotation guidelines, and modern efforts and evaluation tasks have focused on the different TIMEX3 standard for many years.
contrasting
train_19078
Examples are chronic, as in chronic depression or treatment-resistant, as in treatment-resistant anxiety disorder.
in the context of disorder, we decided to exclude a set of variants.
contrasting
train_19079
In addition, both the virtue and vice aspects of these foundations are more often invoked.
conservatives score higher in the loyalty and authority moral foundations, similar to theory.
contrasting
train_19080
Using an automated approach has the advantage of gathering data quickly and with fewer resources.
as with any other method, validation of the method is required.
contrasting
train_19081
It is evident from the results obtained through different experimental setups that the temporal classifier, in general, performs remarkably well while we deal only with two classes, namely temporal and atemporal.
results are not up to the mark while we attempt to perform classification with all the five classes (four temporal and one atemporal classes).
contrasting
train_19082
al, 2012; Morante & Blanco, 2012; Vincze et al., 2008; Pyysalo et al., 2007).
in conversational texts, the scope and focus of negation may be located in the same utterance or in the previous dialogue context (i.e., inter-sentential negation), such as the previous utterance.
contrasting
train_19083
Natural language question answering (QA) provides an intuitive interface for retrieving EHR data by reducing the need to understand the internal organization of the data.
since this data is stored in both unstructured text and structured databases, a deep semantic understanding of EHR questions is necessary for an effective QA system.
contrasting
train_19084
In this work, our goal is to provide a sufficient number of question/logical form pairs to train a baseline semantic parser.
the broad range of the medical domain likely means that additional types of data will be necessary to achieve human-like semantic parsing capabilities for EHR questions.
contrasting
train_19085
Other common predicates were quite high, such as δ (0.95), the has * predicates (0.96), and time within (0.95).
agreement on the latest predicate was less impressive (0.64).
contrasting
train_19086
For example, one annotator marked just in just minutes apart as a MODIFIER(TYPE=APPROX).
since just minutes represents the same period as minutes does (i.e., just minutes is not an approximate version of minutes) just should not have been annotated as a MODIFIER.
contrasting
train_19087
Other projects aim at annotating discourse relations between clauses, among which causal relations, marked or not by discourse connectives such as because or then (Prasad et al., 2008;Carlson et al., 2007).
larger lexical projects have on the one hand covered all sorts of POS expressing causality (including nouns, verbs, adverbs, conjunctions, prepositions, adjectives) and on the other hand, distinguished a much larger set of causal relationships involving events as well as facts: for instance in FrameNet (henceforth FN) (Baker et al., 1998), some frames are concerned with argumentation, where typical causal expressions introduce evidence for a claim, or reasons for an agent's behaviour.
contrasting
train_19088
These systems extract semantic links between verbs and their arguments.
the work presented here complements semantic role representations with temporally-anchored spatial knowledge.
contrasting
train_19089
According to the syntactic approach of PerDT, some word sequences like ""(promise + to give) to promise and ""(allow + to give) to allow have been considered to have the above-mentioned structure in which the words "" and "" are the objects of the simple verb "" not the non-verbal elements of the complex predicate ""or ""; more clearly the above word sequences are not considered as complex predicates and considered just as a combination of a noun (with the syntactic role object) and a simple verb.
since in PerPB every event or action with inflectional properties of the verbs in the Persian is treated as the verb, the previously mentioned sequences are considered individual verbs.
contrasting
train_19090
ACL is also characterized by a larger proportion of data (DATA-ITEM) being discussed than in ACM, and, as expected, natural languages (LANGUAGE) appear more frequently in ACL.
the items labeled INTELLIGENT-AGENT characterize the ACM set, owing to several articles about electronic commerce.
contrasting
train_19091
Accordingly, the past tense is assumed to be used to describe events that are located before the moment of utterance.
since Reichenbach (1947) at the latest, it has been known that tense and aspect cannot be adequately analysed without taking into account a third component, labeled 'Reference Time' by Reichenbach (1947), and explicated by Klein (1994) as 'Topic Time'.
contrasting
train_19092
Ideally, new PNs extracted from collections of diachronic text news are OOV PNs with respect to the LVCSR vocabulary.
all new PNs are not present in the test set audio documents.
contrasting
train_19093
Related works have been done using syllable-based acoustic modeling on large-vocabulary continuous speech recognition (LVCSR) for both monosyllabic and polysyllabic languages, including Mandarin (Lee et al., 1993;Pan et al., 2012;Deng Li and Li Xiao, 2013;Hu et al., 2014;X. Li, and X. Wu, 2014) and West languages (Hinton et al., 2012;Swietojanski et al., 2013;Gupta & Boulianne, 2013;Schmidhuber 2015).
automatic STT on Cantonese is far behind.
contrasting
train_19094
This writing system is called 'Ajami.
the official spelling is based on the Latin alphabet called Boko.
contrasting
train_19095
H, L, LL, HH or E as defined in (Brognaux et al., 2013)).
we defined a specific type of accentual phrase (which could be considered as closer to intonational phrases) that only considers groups of words ending in higher level boundaries (i.e.
contrasting
train_19096
The original motivation for the SPA platform was to answer the increasing number of requests received for transcribing audio/video files in European Portuguese.
it was firstly created to provide a simple interface, easy to use by non-expert users, but requests have multiplied and diversified, namely in terms of languages (Spanish, English) and varieties covered (European, Brazilian), domains (broadcast news, interviews), etc. Although the majority of the SPA users are only interested in the automatic transcripts, other technology partners showed interest in obtaining information about characteristics of the speakers (e.g.
contrasting
train_19097
The default models have been optimized for broadcast news captioning in several languages (Portuguese, English, Spanish).
recognition models trained with telephone speech were also recently made available in the context of SpeDial project.
contrasting
train_19098
Different classification methods from the Weka toolkit (Hall et al., 2009) have been applied, including: Naïve Bayes, Logistic Regression, Decision trees, Classification and Regression trees, and Support Vector Machines (SVM).
the best performance was achieved with SVM, which has been setup to use Sequential Minimal Optimization with a Linear kernel as the training algorithm.
contrasting
train_19099
Furthermore, speaker pairing and topic attribution were constrained so that no two speakers would be paired with each other more than once and no one spoke more than once on a given topic.
only a subset of 1155 manual transcriptions (annotated with disfluency, abandonment, and interruption information), containing 223,606 utterances, was annotated for dialog acts, using the SWBD-DAMSL tag set (Jurafsky et al., 1997).
contrasting