Dataset schema:
  id         string (length 7-12)
  sentence1  string (length 6-1.27k)
  sentence2  string (length 6-926)
  label      string (4 classes)
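A minimal sketch of loading and inspecting records with this schema, assuming the corpus is published on the Hugging Face Hub; the dataset path below is a placeholder, not the real identifier:

```python
# Sketch: load a sentence-pair dataset with the schema above.
# "user/sentence-pair-corpus" is a hypothetical path, not the published name.
from datasets import load_dataset

ds = load_dataset("user/sentence-pair-corpus", split="train")

for record in ds.select(range(3)):
    # Each record carries an id, two sentences, and one of four labels.
    print(record["id"], record["label"])
    print("  s1:", record["sentence1"][:80])
    print("  s2:", record["sentence2"][:80])
```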
train_19300
Since, cross-linguistically, the structures of synonymy (and ultimately wordnets) seem to differ to certain extents, it seems plausible that interindividual differences of similar extents can exist among speakers of the same language.
function words and functional morphemes such as regular endings are rigid and shared between speakers as to the mapping of form and function.
contrasting
train_19301
Ideally, publications that report on a result of an empirical study should contain a direct link to the cited dataset and lead the reader directly to the research data that underlies the publication.
in practice, this metadata is often missing.
contrasting
train_19302
The problem of automatically linking a data citation text fragment to the corresponding dataset has been addressed in the INFOLIS project (Boland et al., 2012).
advanced algorithms that are able to identify the survey variable mentions used in the underlying study and link them to a specific survey variable identifier in a knowledge base are still lacking.
contrasting
train_19303
A design decision was to restrict the length of text samples to a sentence.
in the variable corpus, the local context of the variable is provided, i.e., the whole paragraph in which the mention occurs, so that the similarity of the context of the mention with the associated variable can be exploited.
contrasting
train_19304
Thus, temporal tagging has become a vibrant research area, and several new temporal taggers have been made available and new strategies have been developed.
as was shown in previous work (Mazur and Dale, 2010;Strötgen and Gertz, 2013;Bethard et al., 2016;Tabassum et al., 2016), different types of documents pose different challenges for temporal tagging such that domain-sensitive normalization strategies are required (Strötgen and Gertz, 2016).
contrasting
train_19305
To judge the performance of temporal taggers and new methods, evaluations need to be performed on diverse text types, e.g., on news articles and narrative-style Wikipedia documents.
to many natural language processing tasks, there has also been some effort towards multilinguality in the context of temporal tagging, e.g., research competitions were organized not only for English but covered further languages such as Spanish and Italian (Verhagen et al., 2010;Caselli et al., 2014).
contrasting
train_19306
They have then been adapted to other languages such as Italian (Caselli and Sprugnoli, 2015), Spanish (Saurí et al., 2009) and French (Bittar, 2010).
until now no adaptation of the guidelines to German has been done.
contrasting
train_19307
named entity recognizers, syntactic parsers, semantic role labelers) are widely used (typically many such components in combination) as inputs to a more specialized (and complex) application.
the process of managing and aggregating these tools and their inputs and outputs is typically labor-intensive and error-prone, requiring significant engineering effort before the target application can be built and evaluated.
contrasting
train_19308
Our main targets are data centered workflows, hence the explicit input and output field in the metadata scheme described in the next section.
it is noteworthy that the scheme is flexible enough to also model steps consisting of only title and description, e.g.
contrasting
train_19309
The main problem of Bokeh is that it is not NLP centric and too general, and its API is rather abstract to be used directly.
it can be used as a low-level graphing API for a custom layout engine as well as a basic WebUI.
contrasting
train_19310
Web-scale information extraction systems like NELL (Carlson et al., 2010) or Knowledge Vault (Dong et al., 2014) can acquire massive amounts of machine-readable knowledge from the Web, whereas projects like DBpedia (Bizer et al., 2009), YAGO (Rebele et al., 2016) or BabelNet (Navigli and Ponzetto, 2012) have turned collaboratively-generated content into large knowledge bases.
all of these resources are entity-centric in that they are primarily built around the notion of entities, as either provided by an external resource (e.g., Wikipedia pages) or automatically discovered from text (e.g., by clustering entity mentions).
contrasting
train_19311
saneren "remediation" as hypernym of saneren van verontreinigde bodems "remediation of contaminated soils".
this account would not suffice for constructions requiring the more local noun to be the hypernym.
contrasting
train_19312
In their paper, they argue that the logical entailment present between the text and hypothesis is not captured properly.
we are interested in a subtype of entailment that can predict the support relation based on argumentation theory.
contrasting
train_19313
Using our system we achieved a significantly higher accuracy score of 78.3%.
as there was no dedicated test set for this dataset, we tested our system on a randomly selected subset of 20% of the data, which makes the comparison with the other systems inconclusive.
contrasting
train_19314
As noted before, Twitter has been the most common source for sarcasm in previous corpora; this is likely due to the explicit annotation provided by its hashtags.
using Reddit as a source of sarcastic comments holds many research advantages.
contrasting
train_19315
The third case amounts to solving word sense disambiguation, and we did not find a universally simple approach to reduce noise of this form.
it is possible to reduce the likelihood of this form of sense mismatch by restricting to subreddits which are known to not have alternate senses for "/s" (e.g., politics).
contrasting
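The restriction described above can be made concrete with a short sketch; the field names follow the public Reddit comment dumps, and the subreddit whitelist is purely illustrative:

```python
# Sketch: collect sarcastic comments marked with a trailing "/s",
# restricted to subreddits where "/s" is assumed to have no alternate sense.
SAFE_SUBREDDITS = {"politics", "worldnews"}  # illustrative whitelist

def is_sarcastic(comment: dict) -> bool:
    body = comment["body"].rstrip()
    return (comment["subreddit"].lower() in SAFE_SUBREDDITS
            and body.endswith("/s"))

comments = [
    {"subreddit": "politics", "body": "Great plan, it will surely work /s"},
    {"subreddit": "photoshop", "body": "exported the file as layers /s"},  # skipped
]
print([c["body"] for c in comments if is_sarcastic(c)])
```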
train_19316
We also experimented with instructions that do not include negative examples.
apart from some reduction in the number of low-quality happy moments, we did not detect significant differences between happy moments that are collected from instructions with or without negative examples.
contrasting
train_19317
Code-switching can be observed at various linguistic levels of representation for different language pairs: phonological, morphological, lexical, syntactic, semantic, and discourse/pragmatic switching.
very few code-switching corpora exist on which researchers can train statistical models.
contrasting
train_19318
In grammatical error correction, often a more detailed taxonomy for errors is used; the default benchmark has 28 categories (Ng et al., 2014).
many of the errors in this taxonomy are not annotated in the normalization benchmarks and many normalization replacements are not included in this taxonomy.
contrasting
train_19319
(2015) are also annotated with error categories.
the guidelines for the annotation of these corpora are substantially different compared to the other, more commonly used, corpora.
contrasting
train_19320
The taxonomy proposed by Baldwin and Li (2015) has a very high percentage of normalizations since it allows deletion and insertion of tokens as well as the correction of capitalization.
the Foreebank (Kaljahi et al., 2015) has a very low percentage of normalized words.
contrasting
train_19321
Thus, based on our analyses of the group discussions we further conclude that the hypothesis "Individuals who have a high degree of influence within a group discussing a topic they are knowledgeable about, remain influential when moved to another group that discusses a different topic they are less knowledgeable about" is not supported by the data that we have.
we have shown that when influential people are moved to groups where unfamiliar topics are discussed, their influence declines, sometimes significantly.
contrasting
train_19322
All these features can be useful for learner corpora transcription.
as the final users of the digital texts produced via the above mentioned crowdsourcing tools are historians, the annotation systems of these tools are adapted to bibliographical, palaeographical and historical needs.
contrasting
train_19323
For instance one can use some language model and then let some algorithm such as Viterbi find the most probable final text.
this will only produce variants already extant in the manuscripts, but not those which were ancestral but have been irretrievably altered.
contrasting
train_19324
As denoted previously in Section 1., the number of cuneiform character classes runs into the hundreds.
most of them are ligatures of basic characters or appear only infrequently.
contrasting
train_19325
As denoted in Section 2.1., we limited the target age for collection.
for constructing a more practical dataset, we have to collect glyphs from other ages.
contrasting
train_19326
If the stimuli which are presented to crowd-workers are based on acted scenarios, the reactions of crowd-workers might also not be as they would be in completely spontaneous co-located interactions.
in a completely spontaneous co-located interaction, participants might also be influenced by other events or actions going on at the same time.
contrasting
train_19327
Finding new technological solutions to enhance the work in clinical operating rooms has been a focus of research for many years.
with the emergence of new technologies, the surgical working environment nowadays comprises many medical devices which have to be monitored and controlled, and thus becomes increasingly complex.
contrasting
train_19328
When the surgeon confirms to start with the surgery, the second part of the dialogue begins.
to the first part, which is very flexible and allows the user to control the dialogue, the procedural part follows an exact surgery schedule which has been modelled in the Spoken Dialogue Ontology.
contrasting
train_19329
Moreover, IDACO stays in the background if the procedure goes as scheduled.
the evaluation with the experienced physicians indicated that passive system behaviour makes the surgery team insecure.
contrasting
train_19330
Especially, corpora of free conversations annotated with some linguistic information are valuable.
for the Japanese language, there is no annotated corpus in the domain of free conversations that is publicly available.
contrasting
train_19331
• The "Sympathy" tag can be assigned when an utterance shows sympathy or approval.
if a speaker just shows agreement with the other participant, the "Sympathy" tag is not assigned.
contrasting
train_19332
f_rw1 simply checks whether the same content word appears in both the current and the previous utterance.
f_rw2 checks more strictly for the presence of a repetition of content words: f_rw2 is activated if either of the conditions below is fulfilled.
contrasting
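A minimal sketch of the looser feature f_rw1; the stopword-based content-word test stands in for whatever POS-based filter the corpus actually uses:

```python
# Sketch of f_rw1: fires when any content word is shared between the
# current and the previous utterance. The stopword list standing in for
# a proper POS-based content-word filter is an assumption.
STOPWORDS = {"the", "a", "an", "is", "are", "i", "you", "it", "to", "and", "did", "which"}

def content_words(utterance: str) -> set:
    return {w.lower().strip(".,!?") for w in utterance.split()} - STOPWORDS

def f_rw1(prev: str, curr: str) -> bool:
    return bool(content_words(prev) & content_words(curr))

print(f_rw1("I watched a great movie yesterday", "Which movie did you watch?"))  # True
```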
train_19333
location), the dynamic nature of RL can be better studied, similarly to how Ibarra and Tanenhaus (2016) observed changes in referring strategy when contrastive features previously used to disambiguate entities are no longer effective due to introducing new entities with similar features.
these tasks culminate in an end goal, again leading to e.g.
contrasting
train_19334
t_n and treated each unique referent in the corpus r ∈ R as a "document", where |R| = 840 (Table 7: significance of the correlation between coreference sequence order n, i.e., the n-th time a round in a game refers to an entity with a unique SHAPE value s, and instructor token type overlap ∆c_n^s)
for |D| = 42 dyads with 20 referents per dyad: (2) in order to encode the knowledge that RL converges in dialogue (see Section 5.1).
contrasting
train_19335
In this endeavour, they use word vectors in combination with deep neural networks to determine the dialogue act of an utterance.
the representation in vector space they utilise stays on the word level.
contrasting
train_19336
The original word2vec trains its representations similarly to autoencoding.
rather than training against the input word itself, word2vec trains words against their adjacent words in the input corpus, either using the word to predict its context or using the context to predict the word.
contrasting
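To make the contrast concrete, here is a sketch of the skip-gram pairing that word2vec trains on, predicting neighbours rather than the input word itself (window size and sentence are illustrative):

```python
# Sketch: skip-gram training pairs. Unlike autoencoding, the target is a
# neighbouring word, not the input word itself.
def skipgram_pairs(tokens, window=2):
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if i != j:
                yield center, tokens[j]  # (input word, context word to predict)

sentence = "the cat sat on the mat".split()
print(list(skipgram_pairs(sentence, window=1)))
```

With gensim, the same choice is exposed via the sg flag: Word2Vec(sentences, sg=1) selects skip-gram, sg=0 selects CBOW.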
train_19337
Considering only those results, employing LCs would seem unnecessary.
their advantage of providing potential for generalisation becomes prominent when a DVM that was trained on one corpus is used for clustering utterances of the other corpus.
contrasting
train_19338
In other words, the perception of strong engagement is triggered by the expert's smile.
for the partially disengaged level, the mean starting time of smile is 1.5 seconds: smile is produced 1.5 seconds after the perception of a partial engagement.
contrasting
train_19339
The notion that addressees are indicated to a lesser degree in the text thus seems implausible, given that the features applied to addressees show a larger change compared to speakers.
if we consider the difference in the distribution of types presented in Table 3 for speakers and Table 4 for addressees, we see that there are significant differences for explicit, anaphoric and definite description indicators.
contrasting
train_19340
Direct comparison with previous work is difficult, primarily because of differences in the data-size and the experimental setup.
our results on speaker identification are relatively similar to previously obtained results (Elson et al., 2010;He et al., 2013;Muzny et al., 2017).
contrasting
train_19341
Parallel corpora play an important role in many multilingual NLP applications, such as Machine Translation, Cross-Lingual Text Classification or Information Retrieval.
the topics and genres of parallel corpora are limited even for better resourced languages, e.g., resources are scarcer outside of the official documents of Europarl and the United Nations (Koehn, 2005;Eisele and Chen, 2010).
contrasting
train_19342
In addition to a model with a seed bilingual dictionary, it also introduced constraints on what its authors call "morphological structure" (actually the Levenshtein Distance) for keeping only the cognate words in the output.
further work on bilingual lexicon induction did not include the use of cognates, especially in the context of related languages.
contrasting
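A standard Levenshtein distance of the kind the "morphological structure" constraint amounts to; the relative-distance threshold below is one plausible cognate filter, not the exact one from the cited work:

```python
# Dynamic-programming Levenshtein (edit) distance.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def likely_cognates(src: str, tgt: str, max_rel_dist: float = 0.4) -> bool:
    # Normalize by the longer word so the threshold is length-independent.
    return levenshtein(src, tgt) / max(len(src), len(tgt)) <= max_rel_dist

print(likely_cognates("noche", "notte"))  # Spanish/Italian 'night' -> True
```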
train_19343
The potential for such one-to-many matches is smaller for closely related languages, since they usually have the same set of morphological categories.
differences in the suppletivism of forms are common even across related languages, for example, the feminine adjectival forms ending in -ой in Russian (e.g., новой, 'new') are used for any non-nominative case, while unique cognate forms are used in Ukrainian for each grammatical case, e.g., genitive: нової, dative: новiй, instrumental: новою, etc.
contrasting
train_19344
For example, the meaning of the identical forms postale in both French and Italian is the same ('post.adj'), they share a number of collocates with the same meaning, e.g., adresse postale vs indirizzo postale, so they are likely to be well-aligned in the shared embedding space (either with or without WLD constraints).
the French form is feminine, while the Italian one is masculine, so the correct embedding space should have mapped postale in Italian with postal in French.
contrasting
train_19345
Even as NMT development proceeds at breakneck speed, research on newer advanced technologies based on Quantum Neural Networks (QNN) is already in progress (Moire et al., 2016).
despite the significant improvement in translation quality, the ability of NMT systems to correctly translate named entities and some technical terms has in fact somewhat deteriorated.
contrasting
train_19346
Some potential obstacles are (1) that lexicons, unlike corpora, do not provide context, and (2) that ordinary lexicons do not provide translation probabilities.
this is not critical for named entities, especially POIs, and even for many technical terms, since named entities are mostly monosemic, which means that word sense disambiguation is unnecessary and that the lexicon can automatically be assigned a higher probability.
contrasting
train_19347
A larger vocabulary indeed means being able to understand a larger scope of texts.
analyses of large corpora indicate that, from kindergarten through college, native speakers encounter approximately 150,000 different words (Zeno et al., 1995).
contrasting
train_19348
At C2, a learner is able to read virtually all forms of written language however abstract, structurally or linguistically complex.
such descriptions remain elusive and the limitations of the CEFR for practical purposes have been stressed (North, 2005, 40).
contrasting
train_19349
The methodology applied has the great advantage of being able to assign different difficulty levels to the different senses of a word.
alderson (2007), which is a collection of performances on Cambridge examinations, may be an issue for generalization.
contrasting
train_19350
It also offers a finer view of word use within a level: for instance, EFLLex makes obvious that write is a much more prevalent word at A1 (934 occurrences) than explore (20 occurrences).
the resource also has some limitations as regards frequency estimation.
contrasting
train_19351
NMT has outperformed previous translation systems in many language pairs (e.g., German-English, French-English).
in order to reach high accuracies, neural translation systems tend to require very large parallel training corpora (Koehn and Knowles, 2017).
contrasting
train_19352
Several ideas have been proposed in order to mollify this issue including multi-lingual systems with zero-shot translation (Johnson et al., 2016), transfer learning (Zoph et al., 2016) and back-translations (Sennrich et al., 2016).
their general effectiveness still requires wider evaluation.
contrasting
train_19353
For the NMT model, updating the pre-trained embeddings during training (u-emb) has invariably led to the highest accuracies, up to an improvement of 2.98 BLEU points over the random embeddings in the eu→en direction.
the performance ranking has changed drastically when testing on the more probing Berriak corpus.
contrasting
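A minimal sketch of the two embedding regimes being compared, in PyTorch; the random matrix stands in for real pre-trained vectors:

```python
# Sketch: pre-trained vectors wired into a model either frozen or
# updated during training (the "u-emb" setting).
import torch
import torch.nn as nn

pretrained = torch.randn(10_000, 300)  # placeholder for real pre-trained vectors

frozen_emb = nn.Embedding.from_pretrained(pretrained, freeze=True)
updated_emb = nn.Embedding.from_pretrained(pretrained, freeze=False)  # u-emb

print(frozen_emb.weight.requires_grad, updated_emb.weight.requires_grad)  # False True
```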
train_19354
On the other hand, the training corpus of Google Translate is certainly much bigger, and that has helped it achieve better results on Berriak.
the BLEU score when Basque is the target is still very low (9.91) and significant improvements are an outstanding need.
contrasting
train_19355
Neural Machine Translation (NMT) has drawn much attention due to its promising translation performance in recent years.
the under-translation and over-translation problems still remain a big challenge.
contrasting
train_19356
The past several years have witnessed the rapid progress of end-to-end Neural Machine Translation (NMT).
there exists a discrepancy between training and inference in NMT when decoding, which may lead to serious problems since the model might be in a part of the state space it has never seen during training.
contrasting
train_19357
In many applications of sequence-to-sequence models, at inference time, the output of the decoder at time t is fed back and becomes the input of decoder at time t+1.
during training, it is more common to provide the correct input to the decoder at every time-step even if the decoder made a mistake before, which leads to a discrepancy between how the model is used at training and inference.
contrasting
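The discrepancy can be seen in a toy decoder loop; step below is a dummy stand-in for a real decoder cell:

```python
# Sketch of the training/inference discrepancy in sequence decoding.
def step(prev_token, state):
    return prev_token + 1, state  # toy "prediction": the next integer

def decode(gold, teacher_forcing: bool):
    out, state, prev = [], None, 0  # 0 = start symbol
    for t in range(len(gold)):
        pred, state = step(prev, state)
        out.append(pred)
        # Training usually feeds the gold token back in (teacher forcing);
        # at inference only the model's own prediction is available.
        prev = gold[t] if teacher_forcing else pred
    return out

gold = [1, 5, 6, 7]
print(decode(gold, teacher_forcing=True))   # conditioned on the gold history
print(decode(gold, teacher_forcing=False))  # conditioned on its own outputs
```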
train_19358
Traditionally, backing-off language models rely on the n-gram approximation, which is often criticized because it can only store limited information and thus lacks any explicit representation of long-range dependency.
recurrent neural network language models always estimate probabilities based on the full history (Sundermeyer et al., 2012). In other words, recurrent neural networks do not use a limited size of context, which is the major reason why we choose recurrent neural network language models.
contrasting
train_19359
It can be seen from the figure that, if very similar sentences (0.4-1.0) can be found for the testing sentence, using only 1 similar sentence can greatly improve the performance, using more does not provide further help and may even degrade the performance.
when the found sentences are not very similar (0-0.4), the improvement brought by fine-tuning is much smaller, and using more sentences, such as 16, is better than using one.
contrasting
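One plausible way to realize the retrieval step described above is TF-IDF cosine similarity over the training corpus; this is a sketch, not necessarily the similarity measure used in the cited work:

```python
# Sketch: retrieve training sentences similar to a test sentence, as the
# pre-step to per-sentence fine-tuning.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_sents = ["the committee approved the budget",
               "the budget was approved by the committee",
               "cats sleep most of the day"]
test_sent = "the committee has approved a new budget"

vec = TfidfVectorizer().fit(train_sents + [test_sent])
sims = cosine_similarity(vec.transform([test_sent]),
                         vec.transform(train_sents))[0]

# Keep only highly similar sentences (e.g., the 0.4-1.0 band) for fine-tuning.
print([s for s, sim in zip(train_sents, sims) if sim >= 0.4])
```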
train_19360
(2012) proposes a local training method which also learns sentencewise weights based on similar sentences.
since there are only about a dozen features in SMT, such as the translation score and language model score, adjusting the relative weights of these features cannot make full use of the similar sentences.
contrasting
train_19361
Much work has been done on machine translation between major language pairs including Arabic-English and English-Japanese thanks to the availability of large-scale parallel corpora with manually verified subsets of parallel sentences.
there has been little research conducted on the Arabic-Japanese language pair due to its parallel-data scarcity, despite being a good example of interestingly contrasting differences in typology.
contrasting
train_19362
Much work has been done on MT between major language pairs including Arabic-English and Japanese-English, thanks to the availability of large-scale parallel corpora across various domains with manually aligned subsets.
there has been little research conducted on the Arabic-Japanese language pair due to its parallel-data scarcity, despite being a good example of interestingly contrasting differences in typology.
contrasting
train_19363
In addition, Arabic has a complex system of derivation, inflection, and cliticization.
a Japanese token can be highly ambiguous due to the absence of spaces between tokens.
contrasting
train_19364
For the French translation of outward investment, the most used translation within the parallel corpus is investissement extérieur, whereby the additionally suggested term investissement réalisé à l'étranger documented in IATE does not appear in the corpus.
the French translations of the term electrical engineering, i.e., électrotechnique and génie électrique, are both frequently mentioned in the used corpora.
contrasting
train_19365
Many other languages exhibit similar morphological patterns that could not be handled by Unidecode.
the baseline actually performs better than Moses in 42 languages, which is a testament to the difficulty of our task.
contrasting
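For reference, the Unidecode baseline is a context-free, character-level transliteration to ASCII (pip install unidecode); the inputs below are illustrative:

```python
# Context-free transliteration: each character is mapped independently,
# which is why morphological patterns cannot be handled.
from unidecode import unidecode

for word in ["Москва", "naïve", "Ελλάδα"]:
    print(word, "->", unidecode(word))
```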
train_19366
In language modeling, perplexity or cross-entropy is widely accepted as a de facto standard for intrinsic evaluation.
distributed word representations include the additive (or compositional) property of the vectors, which cannot be assessed by perplexity.
contrasting
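The relation between the two quantities is direct: perplexity is the exponentiated cross-entropy of the model's per-token probabilities (the probabilities below are made up):

```python
import math

token_probs = [0.2, 0.1, 0.4, 0.25]  # p(w_i | history) for each test token

cross_entropy = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(cross_entropy)
print(f"H = {cross_entropy:.3f} nats, PPL = {perplexity:.2f}")
```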
train_19367
We gave examples of similarity in the task request sent to annotators, in order to reduce the variance for each word pair.
we did not restrict the attributes of words, such as the level of feeling, during annotation.
contrasting
train_19368
Kipper Schuler's (2005) Verb-Net, grouping English verbs into classes defined by shared meaning components and syntactic behaviour, is one of the richest lexical verb resources currently available, and its utility in various NLP applications has been repeatedly demonstrated (Rios et al., 2011;Windisch Brown et al., 2011;Schmitz et al., 2012;Lippincott et al., 2013;Bailey et al., 2015).
creation of a similar resource from scratch, drawing simultaneously on semantic and syntactic criteria, is a challenging and time-consuming task when attempted by annotators without theoretical linguistics background (Majewska et al., 2017).
contrasting
train_19369
The result for the first baseline, 0.0, is the same as in SemEval, and a natural consequence of B-Cubed since there are no pairs within a class.
while the overall performance of the All-instances, One sense baseline in SemEval surpasses its best participating system (achieving the score of 0.623), the result for this baseline on our verb clustering is much lower (0.069), suggesting the task is significantly more difficult, due to the high number of clusters.
contrasting
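A common formulation of B-Cubed precision and recall, averaged per item; variants differ in whether an item is paired with itself, and a strictly pairwise variant yields 0.0 for singleton classes, matching the baseline result noted above:

```python
# B-Cubed precision/recall over a clustering; gold/pred map item -> label.
from collections import defaultdict

def bcubed(gold: dict, pred: dict):
    gold_clusters, pred_clusters = defaultdict(set), defaultdict(set)
    for item, c in gold.items():
        gold_clusters[c].add(item)
    for item, c in pred.items():
        pred_clusters[c].add(item)
    p = r = 0.0
    for item in gold:
        g, q = gold_clusters[gold[item]], pred_clusters[pred[item]]
        overlap = len(g & q)
        p += overlap / len(q)
        r += overlap / len(g)
    n = len(gold)
    return p / n, r / n

gold = {"a": 1, "b": 1, "c": 2}
pred = {"a": "x", "b": "y", "c": "y"}
print(bcubed(gold, pred))  # (precision, recall)
```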
train_19370
Another type of multi-sense word embeddings introduce external knowledge base for accurate sense generation (Iacobacci et al., 2015;Chen et al., 2014;Pelevina et al., 2017).
they are somewhat limited, as such external knowledge may be lacking for languages other than English.
contrasting
train_19371
Mainstream approaches employ machine learning techniques to integrate/combine visual features with linguistic features.
to or supplementing these approaches, this study assesses the effectiveness of social image tags in generating word embeddings, and argues that these generated representations exhibit somewhat different and favorable behaviors from corpus-originated representations.
contrasting
train_19372
The table shows that the degradation in the USF Assoc score compared to that of SimLex999 is evident in all the embedding types.
the difference in YFCC (tag embedding) is larger than the other two types.
contrasting
train_19373
For example: evaluate whether a model for a frame F, trained on data from the archaeology domain, can successfully be applied to data from WW1.
to a full-text parsing corpus, the frame semantic annotations of CALOR are limited to a small subset of frames from FrameNet (Baker et al., 1998).
contrasting
train_19374
Therefore the ambiguity is limited and CRFs can be trained efficiently even with a large number of features.
the drawback is that the training data is split across words in the LU lexicon, therefore similarities among LU are not exploited.
contrasting
train_19375
Experiments show that the biLSTM-MT model achieves better recall, while CRF-MM achieves better precision; this is due to the architecture of each model: in CRF-MM we divide frame parsing into small subtasks, one per LU, reducing the number of possible labels in each decision and thus increasing precision.
biLSTM-MT is able to share data across LUs, boosting its capacity to deal with complex syntactic patterns and to retrieve more frame elements during parsing.
contrasting
train_19376
Charteris-Black (2009) collected data of British parliamentary debates from online versions of Hansard, while Koller and Semino (2009) assembled a corpus of interviews and speeches by German chancellors from the official websites of the German government.
these corpora have not been released and made available for public search or free use.
contrasting
train_19377
For example, Lu and Ahrens (2008) found that Kuomintang Presidents in Taiwan used BUILDING metaphors to instill a Chinese ideology.
the president from the Democratic Progressive Party preferred not to use BUILDING metaphors, and instead used FARMLAND metaphors to emphasize Taiwan's agricultural background and political independence.
contrasting
train_19378
Usually, the Japanese sense dataset Balanced Corpus of Contemporary Written Japanese (BC-CWJ) (Maekawa et al., 2014), tagged with sense IDs from Iwanami Kokugo Jiten (Nishio et al., 1994), is used for supervised WSD.
unsupervised approaches to all-words WSD often require synonym information, which sense datasets cannot provide.
contrasting
train_19379
If the synonyms for Sense 1 are A, B, and C, and the synonyms for Sense 2 are C, D, and E, we exclude C from the synonym sets for both Sense 1 and Sense 2.
the above conditions cannot take into consideration the ambiguity of the synonyms.
contrasting
train_19380
The original SemEval 2010 Task 14 used the V-Measure external clustering measure (Manandhar et al., 2010).
this measure is maximized by clustering each sentence into its own distinct cluster, i.e., a 'dummy' singleton baseline.
contrasting
train_19381
Indeed, most of sense annotated corpora are either directly annotated with WordNet sense keys or they are annotated with a sense inventory linked to the senses of WordNet, such as BabelNet (Navigli and Ponzetto, 2010).
it is not trivial to use these corpora, because most of them differ in their format and in the version of WordNet they use.
contrasting
train_19382
Sense annotations have been converted, when necessary, from their original WordNet sense key to the last version of WordNet (3.0) thanks to conversion tables from (Daudé et al., 2000).
because some senses have been dropped from the old versions of WordNet, some sense annotations have not been converted.
contrasting
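Resolving sense keys against a specific WordNet version can be sketched with NLTK, which ships WordNet 3.0; the key below is illustrative, and keys minted for older versions may simply fail to resolve, mirroring the dropped-sense problem:

```python
# Requires: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

key = "long%3:00:02::"  # illustrative WordNet sense key
try:
    lemma = wn.lemma_from_key(key)
    print(key, "->", lemma.synset().name())
except Exception as err:  # nltk raises WordNetError for unknown keys
    print(key, "does not resolve in WordNet 3.0:", err)
```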
train_19383
Word embeddings -generated with neural networks (NN) or other factorization techniques -are a standard element in natural language processing (NLP) applications.
an important issue is their lack of sense-awareness, i.e.
contrasting
train_19384
Obviously, fastSense outperforms these competitors.
since these tools link to different resources (e.g., DBpedia, WordNet or Wikipedia), this comparison only holds for time effort.
contrasting
train_19385
Of course, the tree representation removes the context from which the events are extracted.
by presenting both views at once, the user is able to see the summary semantics of the annotation relations alongside the annotations embedded in the text.
contrasting
train_19386
So far, we have shown how the same concept can be represented through different images, depending on perspective, or the semantic content highlighted (Faber et al., 2007;Reimerink, García de Quesada & Montero-Martínez, 2010).
one and the same image may also work for the representation of other related concept entries (e.g.
contrasting
train_19387
Thus, each annotator can freely describe the function the VKP represents in this image.
as this will probably cause a high degree of inconsistency between annotators, a list with fixed options has been defined, which will be implemented shortly (see Table 1).
contrasting
train_19388
The WebANNO tool is also a web-based tool that offers a wide range of linguistic annotation tasks, e.g., named entity, dependency parsing, co-reference chain identification, and part-of-speech annotation.
both systems, SWAT and WebANNO, lack some functionalities and features that could simplify and speed up the annotation task for our purposes.
contrasting
train_19389
This architecture design allows multiple annotators to work on various tasks simultaneously.
wASA allows the admin user to manage and handle a single central database.
contrasting
train_19390
To increase the speed of the annotation process, some of the words, like Named Entities and punctuations, will have an initial tag assigned automatically as part of a preprocessing step.
the annotator is allowed to change the initial tag if he/she finds words annotated with a wrong tag.
contrasting
train_19391
His findings showed that, for example, in Japanese novels, words that are highly characteristic are suffixes and nouns attached to people, such as san, sama, chan (all are general suffixes of personal names but differ in politeness).
in translated novels, many pronouns and proper nouns are highly characteristic.
contrasting
train_19392
For PDF annotation, there are many commercial products such as Adobe Acrobat, PDF Annotator 1 , and A.nnotate 2 , which basically support text highlighting and adding notes and comments on PDF.
these tools are not intended to be used for linguistic annotation, thus they lack annotation types suitable for linguistic phenomena such as dependency relation and coreference chain.
contrasting
train_19393
However, these tools are not intended to be used for linguistic annotation, thus they lack annotation types suitable for linguistic phenomena such as dependency relation and coreference chain.
pDFAnno supports such relation annotation and multi-user annotation.
contrasting
train_19394
Actually, every annotator performed the work of text-based annotation while referring to the original PDF.
such problems did not occur in the PDF-based annotation.
contrasting
train_19395
This can be helpful for data exchange between collections of predefined processing components such as parsers.
it made this approach unsuited for us as alternative to designing our own modeling toolkit.
contrasting
train_19396
Several ontologies have been built for the representation of legal concepts (Hoekstra et al., 2009;Wyner and Hoekstra, 2012), and there are formal languages for the representation of the content of legal documents, such as SBVR (OMG, 2008) and LegalRuleML (Athan et al., 2015).
a direct translation from natural language, particularly legal language, into a given formal language is extremely difficult to accomplish.
contrasting
train_19397
On the one hand, legal professionals cannot reasonably be expected to engage with the complexity of LegalRuleML conformant encoding of legal documents.
it is important to keep track of the text which has been translated.
contrasting
train_19398
Repositories of linguistic annotation terminology, such as GOLD (Farrar and Langendoen, 2003), ISOcat (Windhouwer and Wright, 2012) and its successor CCR, 3 make it possible to overcome the heterogeneity of annotation schemes by acting as an interlingua that allows mapping annotations from one scheme to another, thus addressing conceptual interoperability (Chiarcos, 2012a).
far from all corpora that we use today follow the principles mentioned above.
contrasting
train_19399
(2005), who discuss the issues involved in creating a Unified Linguistic Annotation (ULA) by merging the annotation schemes of PropBank, NomBank, Time-Bank, the Discourse Treebank and Coreference Annotation.
their work remains on theoretical ground by limiting their discussion to overlapping and conflicting annotations in example sentences.
contrasting