Dataset schema (four string columns with observed value-length ranges):

  id         string, length 7–12
  sentence1  string, length 6–1.27k
  sentence2  string, length 6–926
  label      string, 4 classes
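The records below follow this schema. As a minimal sketch (not an official loader; the sample record is copied from the first data row, with "..." marking illustrative truncation), such records could be handled in plain Python:

```python
# Each record carries an id, a premise (sentence1), a hypothesis
# (sentence2), and one of 4 label classes.
records = [
    {
        "id": "train_95300",
        "sentence1": "3M sentences matched by seed facts were utilized ...",
        "sentence2": "when annotating ontological relations, we used ...",
        "label": "neutral",
    },
]

# Group record ids by label, e.g. to inspect class balance.
by_label = {}
for r in records:
    by_label.setdefault(r["label"], []).append(r["id"])

print(by_label)  # {'neutral': ['train_95300']}
```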
train_95300
3M sentences matched by seed facts were utilized to learn more than 1.5M pattern candidates for the relation extraction task.
when annotating ontological relations, we used the WordNet ontology (Fellbaum, 1998) as our reference.
neutral
train_95301
We can say, then, that the bar exam encapsulates a range of legal knowledge.
mel's mistake negated the required specific intent.</t> <h>mel should be acquitted.</h> </pair> A range of other structural issues of the text were identified and controlled for in order to produce a corpus that conceptually matches the original: • meta comments about the exam question, e.g.
neutral
train_95302
From RTE 1-3 (Dagan et al., 2006;Bar-Haim et al., 2006;Giampiccolo et al., 2007), it was a binary classification problem for only two relations: YES and NO, regarding entailment and non-entailment.
• (Doug Lawrence bought the impressionist oil landscape by J. Ottis Adams in the mid-1970s at a Fort Wayne antiques dealer.)
neutral
train_95303
The Cohen's kappa coefficient of inter-annotator agreement on RTE+SR was above 0.7 overall.
the similarity may become an even fuzzier concept.
neutral
train_95304
Alternative configurations of the model are described and compared to each other and to LDA, the most popular topic model.
comparing both configurations of the SemLDA, the topics obtained with the second experiment (All senses / Semcor) seem to be more suitable than those of the fourth experiment, where WSD was applied.
neutral
train_95305
Wikipedia was used because it provides a large and wide-coverage source of text, completely independent from the datasets used and from WordNet.
for each word in the corpus that is in one or more WordNet synsets, if there is a previously trained model, probabilities P (w|s) are predicted from the distribution of LDA.
neutral
train_95306
Since a test collection is a collection of patients, it represents a patient population.
relevance judgments for this topic required an average of 2 minutes per patient.
neutral
train_95307
For the most part, the system performed well with an average f-score of 94.7 across all entity types.
figure 1 shows a sample x-ray report, and Lines 9-11 are marked as a rationale snippet for CPIS/PNA.
neutral
train_95308
For annotation, we use a lightweight, web browser-based annotation tool called BRAT (Stenetorp et al., 2012).
we can treat the event detection task as a dependency parsing problem, as discussed in Section 4.2.
neutral
train_95309
Preliminary experiments showed that all these factors made it difficult to replicate the Hindi-English experiments for this data.
shows the mapping between English phones from the phone set used in our standard US English build and Hindi phonemes from the Indic phone set, along with the phonetic features assigned to each phoneme in both sets.
neutral
train_95310
(Sriram et al., 2004) describe a technique for multilingual query processing, in which words in the queries are converted into a language independent 'common ground' representation, after which a weighted phonetic distance measure is used to match and rank queries.
it seemed like our approach was viable and was not influencing the quality of the system negatively.
neutral
train_95311
These include time and date information which Taidhgín can deliver on request, as the tags are set to the nearest hour in the time zone specified.
a further priority will be the development of speech recognition for Irish, which is also part of future research plans.
neutral
train_95312
It was susceptible to concatenation errors if the waveform coverage in the voice database was incomplete but in that period much progress was made using as little as one hour of recorded speech and the samples in the corpus are all produced from such small databases.
the corpus is freely available as an Open Access resource (Gratis & Libre) for research use and as a historical archive under CC-BY (Attribution) licensing.
neutral
train_95313
The timeline for the audio is then used to annotate the beginning timestamps for the gestures: as shown in Figures 9 and 10, in front of every gesture, a time is shown inside a pair of square brackets to indicate the beginning time of this gesture stroke.
a3: What alleviated the people of the riot?
neutral
train_95314
The humangenerated dialogue corresponding to Figure 1 is shown in Figure 2.
in this annotation, a gesture stroke is positioned 0.2 seconds before the beginning of the gesture's following word.
neutral
train_95315
1998;McNeill 2005).
this corpus could be useful for researchers investigating how personality affects dyadic interaction on verbal and bodily levels.
neutral
train_95316
Participants were selected from a larger sample to be 0.8 of a standard deviation above or below the mean on the Big-Five Personality extraversion scale, to produce an Extravert-Extravert dyad, an Introvert-Introvert dyad, and an Extravert-Introvert dyad.
the three interactions for the current study were more abstract in nature, providing a novel conversational format for the analysis of nonverbal expressive alignment.
neutral
train_95317
In total, we did two separate runs of the evaluation stage, one with Japanese instructions and one with English ones, to test again if the language of instructions has any effect on the received results.
there may be English seed verbalizations with more than one possible meaning, or which could be used to verbalize more than one ontology element.
neutral
train_95318
Accordingly, workers were only shown sentences of the form X was constructed in [year] Y for this ontology element, and we only retrieved Japanese verbalizations at around the same level of semantic specificity.
we ask Japanese crowdsourcing workers to provide Japanese translations of the English verbalizations, with each Japanese translation being understood as a potential verbalization of the ontology element linked to the original English verbalization.
neutral
train_95319
For both resources, the FLYING IN AN AIRPLANE scenario is most diverse (as was also indicated above by the mean word type overlap).
as can be seen in Table 2, there is some variation in stories across scenarios: The FLYING IN AN AIRPLANE scenario, for example, is most complex in terms of the number of sentences, tokens and word types that are used.
neutral
train_95320
They annotated event-denoting verbs in the stories with the event labels and participant-denoting NPs with the participant labels.
extraction of script knowledge from large text corpora (as done by Chambers and Jurafsky (2009)) is difficult and the outcome can be noisy.
neutral
train_95321
Syntactic information may help in the identification of the correct temporal value but deciding among fine-grained temporal relations is not easy.
for temporal expressions we can observe a more conservative approach (e.g.
neutral
train_95322
We then start comparing the span size (by means of the tokens' offset) and content in order to promote and select tokens and multi-tokens annotations.
machine performance with target entities given obtains good results (F1 0.564 for English and F1 0.736 for Italian) suggesting that detailed annotation guidelines can contribute to the performance of automatic tools but are often difficult to follow.
neutral
train_95323
In this paper we will present the details about the creation of the Boulder Lies and Truth Corpus (BLT-C) along with some preliminary results based on a study leveraging such a corpus.
lying Words: (Newman et al., 2003) created a labeled corpus of elicited narratives marked as lie or true, then applied machine learning techniques (logistic regression) to rank the contribution of these linguistic categories.
neutral
train_95324
• There are more than three edit operations in a row.
yet another result is the identification of possible errors (spelling mistakes, encoding problems, etc.)
neutral
train_95325
Overall, STACC provides the best balance between the number of alignments it can identify and the precision in identifying correct alignments.
in LEXACC, a core part of the final score is computed by evaluating each source token against each target token and measuring translation correspondences using lexical translation probabilities for each token pair.
neutral
train_95326
As we saw in Section 2.2., aligning complex lexical units requires creating one-to-many/manyto-one as well as many-to-many links, which represent a known difficulty for WA models.
here, the verbs "spaccando" and "ripping" are rare (both F[1,15]), while "stanno" and "are" are not rare.
neutral
train_95327
In automatic machine translation evaluation, outputs of an MT system are compared to a reference translation, i.e.
we evaluate also using Meteor with no paraphrase support (MeteorNP).
neutral
train_95328
As with the DM corpus (Oepen et al., 2014), DEEPFTB is comparable in the sense that the semantic arguments of verbs and adjectives are made explicit, but it leans a little less towards a semantic representation (hence the "deep syntactic" name).
an increasing number of works have been proposed over the last few years to cope with graphs (Sagae and Tsujii, 2008;Flanigan et al., 2014;Martins and Almeida, 2014), whether acyclic or not.
neutral
train_95329
Another argument against the use of prepositions for denoting semantic relations is that in our framework, lexicon, syntax and semantics are considered autonomous levels of representation/analysis of the language.
the TIC group is further classified into semantic relations.
neutral
train_95330
KorAP already participates in CLARIN-FCS and further support is in preparation (e.g.
it supports very large corpora with multiple annotation layers, multiple query languages, and complex licensing scenarios.
neutral
train_95331
Although it uses Lucene as well, it applies different indexing strategies.
in addition to the serialisation of user-formulated queries, the protocol supports query rewrites.
neutral
train_95332
KorAP is developed at the Institute for the German Language (IDS), member of the Leibniz-Community 11 and supported by the KobRA 12 project, funded by the Federal Ministry of Education and Research (BMBF).
it can be a good candidate for performance comparison in the future and may serve as an alternative search backend.
neutral
train_95333
Conversely, we observed that models trained on one type of corpus and applied on the other type always produce lower results.
we also evaluated the performances of the COLID models when the predictions made are used to infer the column separator.
neutral
train_95334
For some words it was not possible to assign the grammatically correct POS tags due to compatibility issues with the disambiguator.
the BOUN sub-corpus was used as a representative of the adult language.
neutral
train_95335
This is in agreement with previous research (e.g., Stadthagen-Gonzalez & Davis, 2006) indicating that highly imageable words tend to have lower AoA ratings, suggesting an early acquisition.
the CLC was created to represent children's language.
neutral
train_95336
High imageability, then, points to low rated AoA values just as high frequency does.
the correlations of CLC>BOUN nouns with rated AoA were not significant.
neutral
train_95337
Best results are obtained when taking into account both the probability of a form given its dediacritised version and the probability of the form in the given context (TM+LM).
overall, the best type of data for training diacritic restorers of both standard and non-standard texts is Web data.
neutral
train_95338
As shown in Figure 3 we present the users with 3 variants of the same sentence: one produced by a template-based generator (referred to as template in Table 5), one produced by the NLG statistical model (presented in Section 5.)
as for paraphrases, a variety of approaches have been tested including extracting paraphrases from multiple translations of the same source text (Barzilay and McKeown, 2001) (parallel corpus approach), using monolingual comparable corpora (Wang and Callison-Burch, 2011), aligning several dictionary definitions of the same term to extract paraphrases (Murata et al., 2005), etc.
neutral
train_95339
For Portuguese, some such initiatives include Onto.PT 3 (Gonçalo Oliveira and Gomes, 2010), OpenWN-PT 4 (de Paiva et al., 2012), MultiWordnet of Portuguese 5 , Word-Net.PT 6 (Marrafa, 2002), WordNet.Br 7 (Dias-da-Silva et al., 2008).
as a second step, any relation that was not found in Onto.PT was manually validated by two native speaker human judges.
neutral
train_95340
From the above table, the relevance vector machine dramatically reduces model complexity at the expense of an affordable loss in prediction accuracy in comparison to SVM.
for comparison, we also train with support vector machine learning algorithm on all 165 features.
neutral
train_95341
DCG's discounted factor relies on the position of each element, and this implies that the last four values of the L list will produce different costs.
for this reason, it is often used in research (Lapata, 2006;Philbin et al., 2007;Järvelin and Kekäläinen, 2000).
neutral
train_95342
It is important to note that in all these stopword lists, words are separated on white-spaces.
despite being spoken by millions of people, Urdu is an under-resourced language in terms of available computational resources.
neutral
train_95343
Both studies consider structural and statistical factors only.
it may be inferred that if the resources are built on properly-segmented words, we might have better results for PS summary corpus.
neutral
train_95344
By identifying domain-relevant synsets and categories, we could limit the number of entities annotated from BabelNet to those specific to our domain.
a sample of 500 abstracts from the corpus is currently being manually annotated with these semantic relations.
neutral
train_95345
Coverage with respect to the domain vocabulary shows the proportion of the concepts of a domain that are included in the ontology: it can be interpreted as a measure of recall.
section 2. describes how the texts of the corpus were selected and pre-processed.
neutral
train_95346
We manually evaluated annotation precision on a sample containing 100 sentences, with 358 annotated entities and 932 annotations (an entity is thus linked to 2.6 resources on average).
other limitations stem from entity annotation errors, in particular bad delimitation.
neutral
train_95347
The semantic analysis of scientific corpora makes it possible to add new relation types and instances to existing ontologies (Petasis et al., 2011) or thesauri (Wang et al., 2013).
we experimented with combining domain-specific and generic resources to achieve a satisfying balance between annotation density and precision.
neutral
train_95348
The training data is the merged TBAQ dataset.
the non-deverbal gazetteers contain 6528 and 745 entries for event and state hyponyms respectively.
neutral
train_95349
The results obtained are then compared to the substitutions proposed by humans.
we propose in this paper to further explore this IR approach to build thesauri.
neutral
train_95350
Contact and Transaction events are augmented with additional attributes.
for Rich ERE, there is more annotation on the English side than on the Chinese side at all levels, except at the entity level, in which Chinese has slightly more entities annotated than English.
neutral
train_95351
An example of an annotation of this kind can be seen in Figure 4.
in the context of disorder, we decided to exclude a set of variants.
neutral
train_95352
Indeed, conservatives sources frame Snowden as being a 'traitor' or 'disloyal' to his country, while liberals frame the story in terms of harm caused to the country.
differently from liberals who focus on both virtue and vice of their foundations, conservatives highly emphasize the vice aspect i.e.
neutral
train_95353
This time frame consideration also allows accounting for negations, irrealis and sarcastic statements in text.
several factors need to be considered when one thinks of possessions and their attributes.
neutral
train_95354
Our research is motivated by the affective-cognitive consistency model (Rosenberg, 1956;Rosenberg, 1968), a branch of cognitive consistency theory that not only hypothesizes that people are motivated to seek a coherent state both internally (at the level of thoughts, beliefs, feelings, and values) and externally (through attitudes and behaviors), but also that individuals gain more motivation in achieving a consistent state so that others perceive them to be consistent.
the value of the object is devoid of personalized information, thus allowing cross owner profile analysis (both A and B may possess a shoulder bag, but only A's bag is green).
neutral
train_95355
temporal vs. atemporal) in confusion matrix as shown in Table 14.
3: Initialize previous stratified cross-validation accuracy to 0.
neutral
train_95356
Table 11: Evaluation results of various n-gram models for the second step of the two-step classification framework: gold standard test set experiments. It is evident from the results obtained through different experimental setups that the temporal classifier, in general, performs remarkably well when we deal only with two classes, namely temporal and atemporal.
distribution of seed words among the various temporal classes is important in order to ensure that temporal classifier is not biased to any particular class.
neutral
train_95357
The key idea is to simplify the question as much as possible prior to input to the semantic parser.
in this work, our goal is to provide a sufficient number of question/logical form pairs to train a baseline semantic parser.
neutral
train_95358
For example, one annotator marked just in just minutes apart as a MODIFIER(TYPE=APPROX).
formally: figure 22 shows an example BETWEEN annotation, along with a graphical depiction of its formal interpretation: the interval starting at the end of 1994 and ending at the document creation time.
neutral
train_95359
), such as the past three summers, since this cannot be described with some prefix of a YYYY-MM-DDTHH:MM:SS date-time.
there are a few drawbacks of the ISO-TimeML approach.
neutral
train_95360
As also indicated by the Conceptual Metaphor Theory (Lakoff and Johnson, 1980), the mapping across domains stands at the core of the metaphorical connection and this kind of connection is very commonly established in proverbs as linguistic metaphors.
based on the resulting annotations, we show how words belonging to different parts-of-speech and semantic domains contribute differently to the metaphoricity in the two languages.
neutral
train_95361
We therefore describe the main traits of previous methodologies, before presenting ours.
roles are called "core" if they "instantiate a conceptually necessary component of a frame, while making the frame unique" (Ruppenhofer et al., 2006) 2 .
neutral
train_95362
[Table residue: all per-class scores are 0.00, including UNK; the 'All' row reads 0.53 0.58 0.51 0.75 0.75 0.67 0.57 0.62 0.60 0.61 0.66 0.63.] Results.
can be generated by calculating all combinations of then is x located at y a day before y verb ?
neutral
train_95363
For geographical information, we are relying on the Openstreetmap (OSM) database 2 (Haklay and Weber, 2008).
the approach we want to adopt for reference resolution is the words-as-classifiers approach described in (Kennington and Schlangen, 2015).
neutral
train_95364
So there exist two different kinds of labels.
for example in PerDT there is a metaverb PP SBJ OBJ "" to give which has a meta syntactic structure as shown below: ||subject, object, prepositional object (to)|| Ali book ACM 1 to Sara gave 'Ali gave the book to Maryam.'
neutral
train_95365
The proposed method is evaluated for POS tagging tasks.
table 1 summarizes the tag set for entity annotation.
neutral
train_95366
In the 1213 sentences in the ACM set, the numbers of identified entities and relations were 12463 and 11201, respectively.
the items labeled INTELLIGENT-AGENT characterize the ACM set, owing to several articles about electronic commerce.
neutral
train_95367
It also indicates that the granularity of the scope of ATTRIBUTE relation is wider than that of others, i.e., properties can further be broken into several subtypes.
instead of precisely defining a frameset for in-domain events, we describe the roles of entities in the form of their mutual relations using a set of general relationships such as method-purpose, system-output, and evaluationresult.
neutral
train_95368
(Gast et al., 2015b;Gast et al., 2015a).
since Reichenbach (1947) at the latest, it has been known that tense and aspect cannot be adequately analysed without taking into account a third component, labeled 'Reference Time' by Reichenbach (1947), and explicated by Klein (1994) as 'Topic Time'.
neutral
train_95369
In the above sample, the frequency of '魚塘 (fishpond)' is 417 versus '乳糖 (lactose)' being 146 in the LM raw corpus.
our special thanks to the whole Kaldi community.
neutral
train_95370
Text-to-speech has long been centered on the production of an intelligible message of good quality.
the distribution of the corpus across these three levels is as follows (with silences longer than 1 second excluded): Neutral (2955 sec.
neutral
train_95371
With the emergence of standards for modern web technologies and, at least equally important, browsers that support them, it is possible to implement even complex software as web applications that run in a browser.
this means that certain parts, like uploading files to the server, displaying a preview, etc.
neutral
train_95372
The non-standard word taxonomy was adopted from Sproat et al.
in contrast to WebMAUS, this service does not require any orthographic or phonological transcript.
neutral
train_95373
The current version of the SPA platform already integrates several modules and is available at https://www.l2f.inesc-id.pt/spa.
the parameters common to all the services, such as the input filename, are provided only once.
neutral
train_95374
Identifying the dialog acts is important for spoken dialogue systems, since they reveal the intention of the speaker.
different approaches have been tested: an improved MLP-based system similar to the baseline one, including MLP retraining and low-energy frame dropping, but also other methods based on segment-level features in combination with neural network modeling and i-vector based classifiers.
neutral
train_95375
That can be achieved by searching, for instance, the chain " Other annotations can be combined to achieve useful results for phonetic, morphological or lexical variation and change.
the use of a database allows simpler and faster search through different criteria.
neutral
train_95376
One is the Penn Discourse TreeBank (PDTB) (Webber and Joshi, 1998), built directly on top of Penn TreeBank (Marcus et al., 1993), composed of extracts from the Wall Street Journal.
a final set of 16 discourse functions described in metaTED was achieved, and it is composed as follows: • ADD -collapsed from Adding to Topic and Marking Asides The annotation of metaTED was done through crowdsourcing, on Amazon Mechanical Turk (AMT) 3 .
neutral
train_95377
The second group (which, what and how) are at 79-82% percentage agreement between crowd and gold standard annotation, which is likely due to their more ambiguous answer surface realization possibilities, e.g., a what-question can ask for an activity ('What did Peter do?')
with respect to the observed differences between the annotation quality of the answers to different question form subtypes, our hypothesis is that since certain examples, such as answers to why-questions, exhibit a much greater variation in terms of their linguistic material, this leads to less consistent results in the annotation, especially for the crowd.
neutral
train_95378
We compare focus annotation by trained annotators with a crowd-sourcing setup making use of untrained native speakers.
during this piloting process, the second author met with the annotators to discuss difficult cases and decide how the scheme would accommodate them.
neutral
train_95379
Over the last decade, there has been active research in modeling stance.
the dataset has instances corresponding to six pre-chosen targets of interest: 'Atheism', 'Climate Change is a Real Concern', 'Feminist Movement', 'Hillary Clinton', 'Legalization of Abortion', and 'Donald Trump'.
neutral
train_95380
All of the data created as part of this project (the Stance Dataset, the domain corpus, the annotation questionnaire, etc.)
creating a dataset where 99% of the tweets are from this category makes the dataset less interesting and less useful.
neutral
train_95381
It is clear that this method emphasized precision over recall, as it emulates a grep-style filtering program.
good night, Happy Birthday/New Year/Anniversary, Merry Christmas), in certain expressions (e.g.
neutral
train_95382
In the "Relevance Analysis" task on Tweet2014DS1, the crowd identified 476 out of 566 (88%) tweets as being relevant for "whaling", where the relevance score is higher than 0.2.
section 6. presents state-of-the-art approaches for relevance, novelty and sentiment analysis.
neutral
train_95383
We extend this space with relevant tweets and news snippets and relevant event mentions in those.
for each tweet we derive an aggregated novelty score in comparison with the rest, by using a weighted schema: weight 1 if the tweet is more novel, weight 0.5 if the tweets are equally novel and weight −1 if the tweet is less novel.
neutral
train_95384
used in similar context even if not together.
since tweets are very short texts, we expected that a small context window would work better.
neutral
train_95385
The H words left and right of w i are extracted for every instance of w i in the corpus formulating a feature vector.
3) Every image is represented as a vector of visual words.
neutral
train_95386
Let the semantic neighborhoods of a target word w i computed based on textual and visual features be represented as ordered sets (according to similarity) denoted as T i and V i , respectively.
for larger neighborhoods (> 50 neighbors) the fusion-based approaches perform con-sistently better than the baseline.
neutral
train_95387
We might conclude from its low entropy that it would be worth checking whether the single annotated token of 'have been' is a rare case or an error.
it would be possible to increase consistency during the annotation process and not only on completely annotated corpora.
neutral
train_95388
3 As mentioned in the introduction, since each word can trigger multiple event mentions having different types/subtypes, we train one CRF for each type.
in this section, we introduce our corpus and the event coreference task.
neutral
train_95389
For example, if a third singular (3SG) person and number inflection is marked by an ascending tone in SJQ, here annotated as 42 (see table 1), the same 3SG will fit into the range of tone 32 in another Eastern Chatino dialect (as noted in section 2.3. below, table 1 shows the tone registers with 1 as the label for the relatively highest tone).
this approach may have a positive impact on linguistic science and language documentation in general.
neutral
train_95390
This initial low-investment speech corpus allows us to train speech and language technologies for a more rapid extension of the volume of automatically annotated recordings, as well as further bootstrapping of speech and language technologies for the particular languages.
we record a high-quality speech corpus with minimized noise and optimized speech signal from prepared text to create an initial corpus with little time investment.
neutral
train_95391
For the command recognition task, an equally-likely finite state grammar formed by all the unique possible command sentences was initially used.
nevertheless, the rather restricted read command recognition grammar used is favoured by the ground-truth segmentation that does not introduce insertions due to wrongly hypothesized speech segments (and neither deletions due to lost speech segments).
neutral
train_95392
The overall multi-room and multi-channel voice command recognition processing pipeline WER (%) performance results are shown in Tables 4 and 5 using ground-truth and automatic segmentation respectively.
in practice it was necessary to use an extended command grammar incorporating the background model to better handle inaccurate segmentations provided by the automatic SAD.
neutral
train_95393
For the emotion dimension estimation, the automatic cross-corpus emotion labeling for the different corpus was effective for the dimensions of aroused-sleepy, dominant-submissive and interested-indifferent, showing only slight performance degradation against the result for the same corpus.
for all utterances, perceived emotional states of speakers are provided.
neutral
train_95394
From the 91768 ST tokens and 104785 TT tokens in the T,P and D translation sessions under investigation only 13110 (14%) showed simultaneous gazing and typing activities.
the time needed to complete the translation of six texts was not restricted but usually took between 2 to 3 hours.
neutral
train_95395
Neighborhood statistics were then calculated from the top 17,000 phonological words for all words and nonwords within each of the syllable segmentation schemas.
the final neighborhood statistic, neighborhood frequency (NF), is calculated by summing the subtitle word frequencies of all of a given phonological word's neighbors.
neutral
train_95396
<tok form="obscuras" nform="oscuras"> ob-<lb/>scuras </tok> Figure 2: A TEITOK <tok> example.
although TEITOK originally worked in much the same way as CQPWeb rendering CQP results in the browser, it now works in a somewhat more involved way: the CQP corpus is created by a dedicated tool called tt-cwb-encode, which like cwb-encode builds CQP corpus files, except that it builds them directly from the XML files.
neutral
train_95397
But doing so is very slow and labour intensive.
in the CQP corpus, both the form written by the student and the corrected form provided by the teacher are searchable, making it possible to search for various types of orthographic errors. Figure 5: A spoken document from the COPLE project
neutral
train_95398
This is because the reality of manuscripts can become rather complicated: in a Ladino corpus currently being developed in TEITOK, there is the original orthography, an expanded form, and the normalized form in current spelling.
when various orthographic forms are exported, say the original as well as the normalized orthography, it becomes possible to use CQP to search for orthographic changes or errors (depending on the corpus), for instance one can study the development of the word-initial h in Spanish by searching for all words that used to be written with an h but no longer are.
neutral
train_95399
When various orthographic forms are exported, say the original as well as the normalized orthography, it becomes possible to use CQP to search for orthographic changes or errors (depending on the corpus), for instance one can study the development of the word-initial h in Spanish by searching for all words that used to be written with an h but no longer are.
since the original documents are mostly in Hebrew characters, it is very useful for acces-<w> <choice> <org>ob-<lb/>scuras</org> <reg>oscuras</reg> </choice> </w> Figure 1: A TEI <choice> example.
neutral