id: stringlengths 7–12
sentence1: stringlengths 6–1.27k
sentence2: stringlengths 6–926
label: stringclasses (4 values)
train_18500
The best score for adequacy is 5 and this means that the whole meaning of the reference is also expressed in the translation, while 1 means that none of the original meaning was kept.
it seems that people have difficulty in evaluating these two aspects of language separately.
contrasting
train_18501
To address this issue, research has been focusing on developing automatic methods for readability evaluation, which could account for the specific reading difficulties of various reader populations (Dubay, 2004;Benjamin, 2012), including people with cognitive disabilities (Feng et al., 2010;Yaneva and Evans, 2015).
relying on readability formulae to measure text accessibility is an approach that has many drawbacks (Benjamin, 2012; Dubay, 2004; Siddharthan, 2004), mainly related to the fact that readability formulae only employ surface text features such as word or sentence length.
contrasting
train_18502
FleschReadingEase = 206.835 − 1.015 × (words / sentences) − 84.6 × (syllables / words) (Flesch, 1948). Subtler text characteristics could be accounted for by readability models based on machine learning algorithms.
developing these models for people with cognitive disabilities is currently not feasible, due to the lack of large enough corpora for model training and evaluation.
contrasting
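The Flesch formula quoted in train_18502 can be sanity-checked with a few lines of code. The sketch below is only illustrative: the regex-based syllable counter is a naive assumption for this example, not part of Flesch (1948).

```python
import re

def count_syllables(word):
    # Naive heuristic (an assumption for this sketch): one syllable per vowel group.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # FleschReadingEase = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```

As the surrounding rows note, the score only reflects surface features (word and sentence length, syllable counts), which is exactly the limitation the quoted papers criticize.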
train_18503
Considering the simple embeddings, we observe that Skipgrams performs significantly better than CBOW and GloVe on POS and MENT tasks.
for the other tasks CBOW achieves the best results.
contrasting
train_18504
Tukey post-hoc comparisons indicate that the mean score for the EN condition (M=198.9, SD=22.0) was significantly different to the DE_PE condition (M=305.8, SD=62.2).
the DE_MT (M=255, SD=43.9) condition did not significantly differ from the EN and DE_PE conditions.
contrasting
train_18505
AQL is a powerful language that implements an IE algebra (Reiss et al., 2008).
in our opinion, this loses some of the simplicity that Odin's Runes enjoys.
contrasting
train_18506
For example, the syntax-based grammar correctly finds two ubiquitination events and two negative regulations in the sentence "CYLD inhibits the ubiquitination of both TRAF2 and TRAF6" because the dependency graph correctly connects "ubiquitination" to "TRAF2" and "TRAF6", as seen in Figure 2.
the surface-based grammar misses the ubiquitination event involving "TRAF6" (and the negative regulation of this ubiquitination), because the last two tokens of the sentence are not explicitly handled by the rules.
contrasting
train_18507
On the other hand, the surface-based grammar misses the ubiquitination event involving "TRAF6" (and the negative regulation of this ubiquitination), because the last two tokens of the sentence are not explicitly handled by the rules.
syntax-based grammars assume that a syntactic parser is available and produces robust output.
contrasting
train_18508
Since one of the points of the Living Lexicon is to investigate how an unsupervised DSM will develop as it continuously reads uncontrolled data from online sources, such qualitative differences are to be expected; the lexicon is nothing more than a representation of the current state of online language use.
this makes it slightly challenging to perform quality assurance and evaluation of the lexicon.
contrasting
train_18509
For example, they assume that annotation is performed in the token level.
korean is an agglutinative language whose words are formed by joining morphemes together, so it can not be annotated properly in the token level.
contrasting
train_18510
One may argue that using the different attribute names to denote IDs will make the various kinds of tags easier to recognize.
in terms of further applications that make use of temporal information, it is not necessary to use various attribute names to denote tag IDs: the kind of tag is already known when its attributes are parsed.
contrasting
train_18511
Most of the existing wide-coverage knowledge bases have no problems in identifying, e.g., major cities ("New York is a city") or celebrities ("Madonna is a singer"), but they show limitations with respect to small villages and less known people.
the potential of web-scale intelligent, data-intensive applications can only be unlocked if they are capable of dealing with the most prominent entities, as well as the long tail.
contrasting
train_18512
The dataset contains types for DBpedia entities that have been extracted from the corresponding Wikipedia articles using Hearst patterns.
the work presented in this paper uses the whole Web as a corpus, not only Wikipedia, and hence it is not limited to finding hypernymy relations for entities that are represented by a Wikipedia page, but can find them between arbitrary entities.
contrasting
train_18513
For example, the eleventh query (OHSU11) had an F-measure of 0.16 for KantanMT and 0.19 for Moses.
the original English query managed to yield an F-measure of only 0.08, which is half the performance of the machine translated queries.
contrasting
train_18514
Most of the previous studies on English spelling-error extraction collected English spelling errors from web services such as Twitter by using the edit distance or from input logs utilizing crowdsourcing.
in the former approach, it is not clear which word corresponds to the spelling error, and the latter approach requires an annotation cost for the crowdsourcing.
contrasting
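As a rough illustration of the edit-distance approach to spelling-error extraction discussed in train_18514 (pairing an observed misspelling with its closest in-vocabulary word), here is a minimal Python sketch; the toy vocabulary and the distance threshold of 2 are assumptions made for the example, not taken from the cited work.

```python
def edit_distance(a, b):
    # Classic Levenshtein distance via a single-row dynamic program.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def closest_word(token, vocabulary, max_dist=2):
    # Return the in-vocabulary word closest to the (possibly misspelled) token,
    # or None if nothing is within the allowed edit distance.
    best = min(vocabulary, key=lambda w: edit_distance(token, w))
    return best if edit_distance(token, best) <= max_dist else None

vocab = {"receive", "believe", "separate"}
print(closest_word("recieve", vocab))  # -> "receive"
```

The drawback mentioned in the row follows directly: the nearest in-vocabulary word is only a guess at the intended word, so the alignment between error and target is not guaranteed.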
train_18515
Their approach saves the cost of crowdsourcing, and guarantees an exact alignment between the word and the spelling error.
they did not assert whether the extracted spelling error corpora reflect the usual writing process such as writing a document.
contrasting
train_18516
This game includes the intended word and does not require the cost of crowdsourcing.
the writing process of a word-typing game may differ from the usual writing process (e.g., writing a document).
contrasting
train_18517
We suppose it was because they extracted uncorrected spelling errors using a set of common [...]. We implemented a correctable typing game to extract corrected spelling errors.
the difference in the two games may affect the spelling errors.
contrasting
train_18518
Google definition boxes (that sometimes appear when you search on Google) are a very well-known result of "automatic" definition finding.
they are probably (to the best of our knowledge, the exact algorithm has not been published yet) based on a few reliable sources of definitions, such as first paragraphs of Wikipedia articles or particular dictionaries.
contrasting
train_18519
Overall, systems appeared to perform better over the generalization and drifting sessions than the specification ones (Kanoulas et al., 2010).
only one team achieved a statistically significant improvement.
contrasting
train_18520
Hand inspection showed that, in some cases, producing better translation does not necessarily mean that the information need is expressed better.
this fact should be contrasted with a more extended topic-set including this type of reformulations.
contrasting
train_18521
For instance, Figure 2 shows the growth of the topic "Statistical Machine Translation" in the NLP community using the ACL Anthology corpus.
saffron does not provide any topic trend forecast for the coming years.
contrasting
train_18522
In this approach, we individually model the time series data for every keyword.
as we use regression, we assume that the previous year values are independent of each other, which is not true for time series data.
contrasting
train_18523
Also, more sophisticated approaches for modelling the time series data such as using Fourier transforms or Recurrent Neural Networks can be investigated in context of modelling the growth of scientific topics in this domain.
we do not have enough temporal data points for LREC (2000–2014) to experiment with such approaches.
contrasting
train_18524
The purpose of a POS tag is to describe the grammatical structure of words in an utterance, and a corresponding resource aims to help computational methods reduce ambiguity in what is communicated by the text.
domain-specific annotations for NLP are often centered on real-world applications of text.
contrasting
train_18525
In order to respect both perspectives (namely, the crosslinguistic one and the one associated to grammar traditions in each language), the ontology put forward here is to be used for the overall classification and linking of linguistic information across languages.
each specific application resorting to such content (e.g., dictionaries, computational lexicons, automatic tools for language processing, etc.)
contrasting
train_18526
There are 6 tags that are only present in our ontology: affix, contraction, and idiomatic (all of them needed here because they are elements present in dictionary and grammar content); article and predeterminer (both subsumed under determiner in UD); and ideophone, not accounted for in that ontology.
the UD POS tagset has categories not available in our proposal: [...] between the OGL Ontology and each of our bilingual dictionaries.
contrasting
train_18527
For example, assuming English to isiZulu and an English to Northern Sotho datasets exist, we are able to extract isiZulu to Northern Sotho translations.
this strategy might not always be accurate, as a word might have multiple senses, so isolating a sense-to-sense translation without some post-processing of the data may not be possible; and even in the case of having only one sense in both mentioned dictionaries, the senses might not represent the same meaning.
contrasting
train_18528
We will next explore supervised and ensemble methods.
(ensemble methods may not work for this task as captions may be funny in different ways; for example, of two equally funny captions, one may be funny-absurd and the other funny-ironic.)
contrasting
train_18529
The assumption behind eye tracking is that the longer the eye gaze fixation on a certain word is, the more difficult it is for cognitive processing (Just et al., 1996).
until now, eye tracking has not been used to investigate reading in autism, possibly due to the number of procedural difficulties related to this kind of research with autistic participants (Section 3), and thus there is no reliable information about the particular types of phrases which need simplification for readers with autism.
contrasting
train_18530
increase eye contact with the audience), a full green bar would indicate to the participant that his/her performance is very good.
the virtual characters only adopted a neutral posture and did not provide additional feedback.
contrasting
train_18531
The interactive virtual audience condition producing nonverbal feedback was not significantly better than the control condition after the investigated minimal training of only two short presentations.
we found using participants' questionnaires that the interactive virtual audience was perceived as more engaging (µ_IVA = 4.50, µ_Non-IVA = 3.44; t(30) = 0.86, p = 0.001, g = 1.211) and challenging (µ_IVA = 2.94, µ_Non-IVA = 2.00; t(28) = 1.02, p = 0.025, g = 0.801) than the control condition, which could prove pivotal in the long run and keep the learner engaged and present a more challenging task.
contrasting
train_18532
Out of the single modalities the acoustic information seems to be most promising for the assessment of performance improvement.
we are confident that with the development of more complex and tailored visual features similar success can be achieved.
contrasting
train_18533
For instance, the goal of Schone and Jurafsky (2000) was "to identify an automatic, knowledge-free algorithm that finds all and only those collocations where it is necessary to supply a definition."
in order to accomplish this complex task, the authors made little if any effort to normalize extracted MWT candidates: "Prior to applying the algorithms, we lemmatize using a weakly-informed tokenizer that knows only that white space and punctuation separate words."
contrasting
train_18534
However, in order to accomplish this complex task, the authors made little if any effort to normalize extracted MWT candidates: "Prior to applying the algorithms, we lemmatize using a weakly-informed tokenizer that knows only that white space and punctuation separate words."
for highly-inflected languages, such as Serbian and other Slavic languages, this task can hardly be avoided as each nominal MWT can have many inflected forms (from five to ten or even more) and many of these forms (but usually not all) can in general be extracted from a corpus.
contrasting
train_18535
But if obloge trake is extracted, then two different structures, N2X and NXN, can be associated with various interpretations for both constituents.
if some other form is extracted besides it, e.g.
contrasting
train_18536
argument merger (Grimshaw and Mester, 1988), argument fusion (Butt, 2010), argument composition (Hinrichs et al., 1998), and also Alonso Ramos (2007).
to our best knowledge, none of these mechanisms have been verified on corpus data.
contrasting
train_18537
To obtain more data, we made use of the Czech part of the PCEDT (see Table 3 above).
the annotation of CPs in the PCEDT is not as rich as in the PDT.
contrasting
train_18538
Al-Haj (2009) presented a systematic linguistic characterization of MWEs in Hebrew, and provided a full picture of the diverse properties that Hebrew MWEs exhibit.
al-Haj and Wintner (2010) limited their investigation to noun compounds.
contrasting
train_18539
Tsvetkov and Wintner (2012) proposed an algorithm for identifying MWEs in bilingual corpora, using automatic word alignment as their main source of information.
automatic construction of parallel corpora is a time consuming task.
contrasting
train_18540
"(he) ate with all his mouth") can inflect for number, gender, person, and all tenses.
the noun ph does not inflect for number.
contrasting
train_18541
This representation can also benefit from the fact that it is possible to find multi-word entities of a given language in texts in another language (especially with names of international organisations such as European Space Agency which can be found in German text).
translated tokens many entities have different written forms across languages so that a string-based comparison of tokens is not successful.
contrasting
train_18542
This knowledge enables an entity linking system to model and perform reasoning over the semantic context of the real-world entities it links to.
the heterogeneity of the LOD knowledge also represents a barrier towards its automatic use, integration and manipulation.
contrasting
train_18543
General Motors Corporation (GM) even specifically introduces an abbreviation "GM", allowing a human reader to understand the further mentions of this abbreviation in the remainder of the article.
the first step of our approach is not able to correctly disambiguate the other 4 entity mentions from this snippet.
contrasting
train_18544
After that, the projection is given to the non-linear hidden layer and then the output is given to softmax in order to receive a probability distribution over all the words in the vocabulary.
as suggested by Mikolov et al.
contrasting
train_18545
Due to the fact that each tweet is treated as a single document with only 140 characters, it is difficult to make use of non-local features such as context aggregation and prediction history for the NER task on tweets.
local features are mostly related to the previous and next tokens of the current token.
contrasting
train_18546
Finally, the total score for the node e is score(e) = coh(e) + PPR_avg · iSim(e), where the total coherence coh(e) of node e to the graph is computed with respect to aggregation constraints, and the initial similarity score iSim(e) is weighted by the average value of the PPR weights used in the coherence computation.
this approach often ranks a popular candidate connected to many nodes in a graph higher than the correct but less popular one.
contrasting
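A minimal sketch of the scoring rule quoted in train_18546, under the assumption that coherence, PPR weights and initial similarity are already available as numbers; the candidate names and values below are invented to illustrate how a well-connected candidate can outrank the correct one.

```python
def candidate_score(coherence, ppr_weights, initial_similarity):
    # score(e) = coh(e) + PPR_avg * iSim(e), where PPR_avg is the mean of the
    # PPR weights that were used in the coherence computation.
    ppr_avg = sum(ppr_weights) / len(ppr_weights) if ppr_weights else 0.0
    return coherence + ppr_avg * initial_similarity

# Hypothetical candidates for one mention: (coherence, PPR weights, initial similarity).
scores = {
    "Popular_Entity": candidate_score(0.95, [0.6, 0.7, 0.8], 0.40),  # well connected in the graph
    "Correct_Entity": candidate_score(0.40, [0.5, 0.6], 0.90),       # better mention similarity
}
print(max(scores, key=scores.get))  # the popular candidate wins, as train_18546 warns
```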
train_18547
As we can see the number of Company and Product entities in the Synthesio corpus is almost four times more than in the Ritter Corpus.
the number of Person, Geo-loc and job-title entities in the Synthesio corpus is only about one fifth of the same entities in the Ritter Corpus.
contrasting
train_18548
Improvement has been reported when translated from French (+1.6 BLEU), German (+1.95 BLEU) or Hungarian (+1 BLEU) into English.
application of a similar approach for English-Latvian MT has resulted in an insignificant improvement of only +0.12 BLEU points (Rikters, 2015).
contrasting
train_18549
In the presented approach, the chunker splits sentences into top-level chunks without analysing sub-chunks or cases when a chunk is a single token.
the larger chunks should be split into smaller sub-chunks and the single-word chunks should be combined with the neighbouring longer chunks.
contrasting
train_18550
In Hiragana TIMES, NE abstraction methods were effective.
in BTEC, NE abstraction methods were not effective because most of the sentences are typical dialogues that are easy to translate with the baseline, e.g., "I must arrive in Tokyo by tomorrow morning ."
contrasting
train_18551
In contrast, in BTEC, NE abstraction methods were not effective because most of the sentences are typical dialogues that are easy to translate with the baseline, e.g., "I must arrive in Tokyo by tomorrow morning ."
manual NER (indicated as "man-NER" in Table 5) achieved a better result than automatic NER methods ("auto-NER") and the baseline method, even in the BTEC corpus.
contrasting
train_18552
(1991) obtained above 95% precision on the Hansards).
for literary bitexts, alignment quality could be much less satisfactory.
contrasting
train_18553
For example the inclusion of "this/that" which like "it" can be used as anaphoric or event reference pronouns, or "your" which requires a similar deictic/generic disambiguation approach as for "you" (included).
they represent similar translation problems to those posed by "it" and "you" and in order to keep the number of pronoun tokens manageable when it comes to manual evaluation, certain exclusions must also be made in terms of pronoun forms.
contrasting
train_18554
For example, the IDIAP system (Luong et al., 2015) has fewer reference translation matches for intra-sentential anaphoric "they" than the baseline.
it produces some pronoun translations that are better than those produced by the baseline.
contrasting
train_18555
The "Briggs" does not exist in the corpus either.
word2vec retrofitted by PPDB incorrectly paraphrased it to "Smith", leading to this incorrect translation.
contrasting
train_18556
Generally the text transcript generated by automatic speech recognition (ASR) systems is unpunctuated and unsegmented.
the readability of the transcript can be greatly improved by the presence of punctuation marks, and the segmentation of the transcript based on punctuation positions will also increase the efficiency of many downstream natural language processing (NLP) tasks, such as semantic parsing, question answering or machine translation (Matusov et al., 2007; Wang et al., 2010).
contrasting
train_18557
In previous research efforts, frequently used lexical features include the language model (LM) score, token, part-of-speech (POS) tag, chunk tag and so on.
we would like to attempt another possibility by using the Word Vector.
contrasting
train_18558
Interestingly enough, no single parser in Table 6 outperforms the others by achieving the highest LAS for all the selected dependency relations.
each parser seems to be best performing on specific relations.
contrasting
train_18559
This helps the grammar writer to arrange the rules in appropriate sections, with safest and most effective rules coming first.
this method will not notice a missed opportunity or a grammar-internal conflict, nor suggest ways to improve.
contrasting
train_18560
In order to find the set of readings, we expand a morphological lexicon, ignore the word forms and lemmas, and take all distinct analyses.
many grammar rules target a specific lemma or word form.
contrasting
train_18561
The readings in a grammar can be underspecified: for example, the rule REMOVE (verb sg) IF (-1 det) gives us "verb sg" and "det".
the lexicon only gives us fully specified readings, such as "verb pres p2 sg".
contrasting
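The matching problem behind train_18560 and train_18561, reconciling an underspecified rule reading such as "verb sg" with fully specified lexicon readings such as "verb pres p2 sg", amounts to a subset test over tag sets. The Python sketch below is an illustrative approximation, not the actual implementation of the tool being described.

```python
def matches(rule_reading, lexicon_reading):
    # An underspecified rule reading matches a fully specified lexicon reading
    # if every tag it mentions is present in that lexicon reading.
    return set(rule_reading.split()) <= set(lexicon_reading.split())

lexicon_readings = ["verb pres p2 sg", "verb past p3 pl", "det def sg"]

# Rule: REMOVE (verb sg) IF (-1 det) -- target reading "verb sg", context reading "det".
print([r for r in lexicon_readings if matches("verb sg", r)])  # ['verb pres p2 sg']
print([r for r in lexicon_readings if matches("det", r)])      # ['det def sg']
```

Expanding the lexicon into fully specified readings and testing rule readings against them in this way is what lets the tool generate only symbolic sentences that the lexicon could actually produce.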
train_18562
If we accept those lists as readings, we will generate symbolic sentences that are impossible, and not discover the bug in the grammar.
if we are primarily interested in rule interaction, then using the underspecified readings from the grammar may be an adequate solution.
contrasting
train_18563
Since VISL CG-3 already points out unused sets, we did not add such feature in our tool.
we noticed an unexpected benefit when we tried to use the set definitions from the grammar directly as our readings: this way, we can discover inconsistencies even in set definitions that are not used in any rule.
contrasting
train_18564
Despite the inaccuracies, we can see that increasing the number of readings and adding the ambiguity class constraints slow the program down significantly.
many of the use cases do not require running the whole grammar.
contrasting
train_18565
A third of the transcripts (35 %, 145 hours) has been created very carefully using a systematic transcription approach and manual quality cross-checks, resulting in an orthographic basic transcription (Wendelstein, 2016).
the majority of the transcripts (65 %, 236 hours) does not follow the criteria of linguistic transcripts, was created without any postchecks and includes hardly any transcribed vocables (hesitations, back-channeling, disfluencies).
contrasting
train_18566
In order to have a number of reliable short segments for training and evaluation, a part of the transcribed data is currently also being segmented manually.
as the manual segmentation takes about 5 times real-time, it will not be able to replace the long audio alignment based segmentation.
contrasting
train_18567
A more rigorous definition of prosodic coverage is the ability to synthesise speech from the speech corpus with as many possible prosodic states (declarative intonation or interrogative intonation for example).
to define prosodic coverage more rigorously, one should define prosody.
contrasting
train_18568
Many errors occur because the pronunciation of inflections, compound words and words of foreign origin as well as so-called "non-standard words" (Sproat et al., 2001) often cannot be handled by rules.
there are too many of these for inclusion in exception dictionaries.
contrasting
train_18569
In order to collect real system interaction data, one should choose elicitation methods which do not bias the participants.
a high number of valid utterances should be generated.
contrasting
train_18570
Beside the benefits, crowdsourcing is often criticized as producing poor quality because it is difficult to control the work quality and the status of the workers (Eskenazi et al., 2013).
in the past years, the speech processing communities have realized that crowdsourcing is a possible solution to their strong need for speech data (Eskenazi et al., 2013).
contrasting
train_18571
The imperative and the command style & infinitive were by far the most frequently used sentence constructions over all tasks and methods.
it is striking that the command style & infinitive is the most preferred sentence construction in the semantics method, whereas the imperative is the most preferred sentence construction in the pictures and in the text method.
contrasting
train_18572
The PTDB-TUG (Pirker et al., 2011) and the Keele corpus (Plante et al., 1995) contain single-channel and close-talking speech recordings, which cannot be used for multi-channel experiments.
there are corpora containing multi-room and multi-channel recordings: the ATHENA corpus (Tsiami et al., 2014), the DIRHA-GRID corpus (Matassoni et al., 2014), and the GRASS corpus (Schuppler et al., 2014a).
contrasting
train_18573
Second, the audio quality exceeds that of most other work in the field of clinical voice research.
two limitations exist and should be taken into account when the data are analysed and conclusions are drawn.
contrasting
train_18574
Automatic speech recognition (ASR) technologies for Latvian have a relatively short history because even three years ago (i.e., in 2013 and before) there was no orthographically annotated speech corpus available that could be used for ASR purposes.
there have been attempts to develop ASR systems for broadcast speech recognition (Oparin et al., 2013) in the Quaero project (Lamel, 2012) using acoustic model bootstrapping.
contrasting
train_18575
and provided a list of possible spoken commands that they could use during the recording session.
speakers were instructed to not limit themselves to the list if they thought that a different spoken command was necessary.
contrasting
train_18576
There are some children's speech databases for EP, such as Speecon with rich sentences (Speecon Consortium, 2005); ChildCAST (Lopes, 2012; Lopes et al., 2012) with picture naming; the Contents for Next Generation (CNG) Corpus targeting interactive games (Hämäläinen et al., 2013) and (Santos, 2014; Santos et al., 2014) with child-adult interaction.
these databases do not present the required samples of disfluent reading speech.
contrasting
train_18577
With regards to Urdu Summary Corpus, we did not perform such experiment.
we found a token difference of 9.8% between both versions, as shown in Table 3.
contrasting
train_18578
ROUGE functions based on the assumption that in order for a summary to be of high quality, it has to share many words or phrases with a human gold summary.
different terminology may be used to refer to the same concepts and thus relying only on lexical overlaps may underrate content quality scores.
contrasting
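To make the overlap assumption discussed in train_18578 concrete, here is a minimal ROUGE-1 recall sketch (unigram overlap with a single gold summary; real ROUGE additionally handles stemming, multiple references and longer n-grams, so this is only an illustrative approximation).

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    # Fraction of reference unigrams that also appear in the candidate (clipped counts).
    cand, ref = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum(min(count, cand[token]) for token, count in ref.items())
    return overlap / max(1, sum(ref.values()))

gold = "the treatment reduced pain significantly"
print(rouge1_recall("the therapy significantly reduced pain", gold))  # 0.8
# The lexical mismatch ("therapy" vs "treatment") lowers the score even though the meaning is preserved.
```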
train_18579
In the case of Keywords (KW) query reformulation, without using discounting, we can see that there is no positive gain in correlation.
keywords, when applied to the discounted variant of SERA, result in higher correlations.
contrasting
train_18580
For instance, comment sentences linked to the same article sentence can be seen as forming a "cluster" of sentences on a specific point or topic.
having labels capturing argument structure enables computing statistics within such topic clusters on how many readers are in favour or against the point raised by the article sentence.
contrasting
train_18581
In this paper, we make use of ConceptNet (Speer and Havasi, 2012), that is a semantic graph that has been directly created from it.
with linguistic resources such as the above-mentioned WordNet, ConceptNet contains semantics which is more related to common-sense facts.
contrasting
train_18582
Given that the end of the candidate triple c_1 is contained in P_conceptnet(pain), the triple is added to the synset S_{burn,burning}.
the triple c_2 is not added to S_{burn,burning} since relatedto-melt is contained neither in P_conceptnet(pain) nor in P_conceptnet(hurting).
contrasting
train_18583
On the contrary, is-a and related-to relations have shown a lower performance.
this [...] (Table 3: Accuracy of some WordNet semantic enrichments obtained by the manual evaluation.)
contrasting
train_18584
Also, we were using a 7-item Likert scale, whereas the WSsim/USim team used a 5-item Likert scale, and, intuitively, increased granularity ought to increase the risk of interannotator disagreement.
wSsim/Usim were purposefully annotated by non-experts, whereas all our annotators are linguists (with high non-native proficiency in English) and had been working with PDEV before.
contrasting
train_18585
About half the verbs, two thirds of the nouns, and nearly all the adjectives have only a single sense listed in GermaNet.
the average number of senses per lemma, 1.40, is still higher than GermaNet's overall average of 1.31.
contrasting
train_18586
Conceptually, our work is more in the spirit of Birke and Sarkar (2006), who treated "literal" and "idiomatic" as two different senses of the target word and applied (unsupervised) word-sense disambiguation techniques.
to this work, however, we use a simple supervised approach to distinguish literal from idiomatic uses.
contrasting
train_18587
Accordingly, we see an increase of the frequency of variants written as one word for idiomatic uses.
we also observe that literal uses start to be more frequently written as one word, though not to the same extent as idiomatic uses.
contrasting
train_18588
The average agreement score for full senses is 0.52, compared to an average agreement on 0.56 for clustered senses.
each ambiguous noun tells its own very individual story, some being fairly easy to annotate with or without clusters and others being more or less impossible, agreement scores spanning from 0.048 for plade (plate, sheet, disc, etc.)
contrasting
train_18589
The annotation of such multiword expressions that are present in the provided MWE-list of the word achieves the highest agreement because the MWEs are not individually interpreted by the annotators, as they just have to mark them up according to the list.
occasional syntactic and lexical co-incidence between a free construction and a fixed expression makes up a difficulty for an appropriate identification of the lexical vs. phraseological sense.
contrasting
train_18590
Here we can conclude that a clustered annotation scheme based on an ontologically driven collapsing of subsenses performs substantially better than a fully fine-grained scheme (disregarding here the better chance of agreeing on few tags than on many).
it is remarkable how each individual noun exposes its own pattern, and how some very ambiguous nouns prove almost impossible to annotate -with or without clusters.
contrasting
train_18591
Our corpus is about one fifth of the size of SemCor.
as mentioned, a large part of the data has been doubly annotated and later adjudicated.
contrasting
train_18592
Although segmentation errors can affect the quality, it has been shown that such word embeddings can improve Chinese NLP tasks such as parsing (Wu et al., 2013).
character information can also be used for NLP (Zhang et al., 2014a).
contrasting
train_18593
By setting the number of senses according to the cilin corpus, the multi-prototype embeddings give the best results.
there is not a universal standard on the number of senses for each character, due to variation in semantic granularity.
contrasting
train_18594
While the pair "abduct-abductor" can be used in the muscular sense in Portuguese -{01449427-v abduct, abduzir -pull away from the body; "this muscle abducts"}, it seems more commonly used in its kidnapping sense, {01471043-v snatch, kidnap, abduct, nobble, abduzir, sequestrar, raptar -take away to an undisclosed location against their will and usually in order to extract a ransom; "The industrialist's son was kidnapped"}, where the role would be agent.
looking up the pair "dilate-dilator", we see one of the first applications of the work morphosemantic links do for us.
contrasting
train_18595
These resources are mostly produced manually by domain experts and contain high quality data including segmented inflectional and derivational morphemes even for under-resourced languages.
this kind of morpheme data is not machine-processable and, therefore, hardly reusable and hence remains isolated on the Web.
contrasting
train_18596
We did not find a way to reliably identify the source language from which a specific translation was created.
we observed that each source document is translated (potentially via a pivot document) into 2.77 languages on average, thus generating combinations of sentence alignments as in the examples of Table 3.
contrasting
train_18597
This has entailed encoding roots as Lexical Entries with type root (see LE_kbr in the figure).
the choice of encoding roots as LE is somewhat problematic for theoretical and practical reasons.
contrasting
train_18598
It is apparent that we need to be cautious in conclusions, as different data are of different sizes which may cause errors in estimations (Baayen, 2001;Kilgariff, 2001).
we believe that our analyses have shed considerable light into quality of the Digi collection and our procedure can be used for quality approximation also after possible improvements in the data.
contrasting
train_18599
For this purpose, Optical Character Recognition (OCR) systems have been developed to transform scanned digital text into editable computer text.
different kinds of errors can be found in the OCR system output text, but Automatic Error Correction tools can help improve the quality of electronic texts by cleaning them and removing noise.
contrasting