Dataset schema:
id: string (7–12 chars)
sentence1: string (6–1.27k chars)
sentence2: string (6–926 chars)
label: string (4 classes)
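The records below follow this schema as flat four-line groups (id, sentence1, sentence2, label). A minimal parsing sketch under that assumption; the helper `parse_records` is illustrative only, not part of any released loader:

```python
from typing import List, Dict

def parse_records(lines: List[str]) -> List[Dict[str, str]]:
    """Group consecutive non-empty lines into 4-field records
    (id, sentence1, sentence2, label), dropping any trailing partial group."""
    fields = ("id", "sentence1", "sentence2", "label")
    clean = [ln.strip() for ln in lines if ln.strip()]
    usable = len(clean) - len(clean) % 4  # ignore an incomplete final record
    return [dict(zip(fields, clean[i:i + 4])) for i in range(0, usable, 4)]

# Example on one record in the format used by this dump:
sample = [
    "train_19500",
    "We investigate the impact of this in our experimental section.",
    "ocpd and ocpd+ provide a posterior distribution on the run length.",
    "contrasting",
]
recs = parse_records(sample)
```

This treats blank lines as insignificant; if the dump ever interleaves other material, a stricter parser keyed on the `train_` id prefix would be safer.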
train_19500
We investigate the impact of this in our experimental section (Section 5.6).
ocpd and ocpd+ provide a posterior distribution on the run length, from which it is possible to automatically detect the number of change points.
contrasting
train_19501
This property helps the model to identify the similar entities.
it still suffers from several issues: it cannot handle out-of-vocabulary words, misspelled words, and variations in noun or verb phrases.
contrasting
train_19502
This layer takes the output (sequence of probability vector) of the previous layer as input and produces the label sequence as the output.
the label sequence C1:n can be obtained for Ci as follows.
contrasting
train_19503
The evaluation shows that our proposed system achieves higher performance compared to these systems.
it is to be noted that our reported results are on cross-validation and we have not been able to perform experiments on the test data as this is not publicly available.
contrasting
train_19504
Figure 2 shows an example for each case (Figure 2: Examples of relations for meta-language tokens in PoSTWTITA-UD).
if syntactically integrated within the sentence, these same elements are annotated taking into account their actual syntactic role.
contrasting
train_19505
In their corpus, 47.3% of points of interest are annotated invalid, meaning that their methodology to extract points of interest using Foursquare is not very effective.
to their work, we (a) present a corpus with few invalid locations (≈ 6%), and (b) work with finer-grained temporal information (when somebody tweets, within 24 hours before and after he tweeted, and longer than 24 hours before and after he tweeted).
contrasting
train_19506
This is apparently based on an expectation that there should be a single way to encode any given kind of textual phenomena.
anyone with moderate awareness of the richness of modern day Digital Humanities will know that there is definitely no single way to approach the different kinds of data, information needs, visualisation requirements and the varying foci of interest of Digital Humanities' scholars.
contrasting
train_19507
by increasing the possibility of exceptions, or by simply enforcing the usage of different routines to be able to add and extract similar-level information to and from the text) that seem avoidable by allowing for the homogeneity of linguistic markup.
the TEI tagset can easily tolerate an addition of an encoding variant that provides a localized alternative to existing tagging solutions.
contrasting
train_19508
Current research in ATECC is mostly evaluated by using reference translations from a source other than the input corpus or by using only a limited set of manually evaluated term equivalents from the input corpus.
to accurately evaluate the entire output and be able to trace mistakes back to their source, a new type of GS is needed.
contrasting
train_19509
In the domain of heart failure, "ejection fraction" would be an example of a Specific Term.
no matter how well constructed the corpus, there may also be terms which are lexicon-specific, but not domain-specific.
contrasting
train_19510
The annotation scheme is helpful for deciding whether a linguistic unit is a term.
there is a second difficulty in term annotation, namely deciding the term boundaries.
contrasting
train_19511
For the English corpus on wind energy, there are even more 2WTs (49%) than SWTs (25%).
apart from the variations for SWTs and 2WTs, the numbers are rather consistent for all languages and domains.
contrasting
train_19512
We showed that, in general, terms are mostly SWTs or 2WTs and that very few terms are longer than 5 words.
specific Terms tend to be longer and are less likely to be SWTs than Common Terms.
contrasting
train_19513
Each file is relatively small (around 150Mb) and is easy to download and work with locally during the development phase.
to work with the entire corpus we recommend using some kind of parallelism, e.g.
contrasting
train_19514
We do not distribute the index itself due to its huge size.
users can re-create the index from the CoNLL files using the open source software provided as part of the JoBimText package 28.
contrasting
train_19515
UD schema defines that the basic units of annotation are syntactic words, though a renowned typologist failed to identify words consistently across languages (Haspelmath, 2011).
sUW is not suitable for syntactic dependency annotations.
contrasting
train_19516
The system used to manually disambiguate trees generated with Świgra (Woliński, 2010) includes a module to automatically re-annotate a parse forest generated with a changed grammar preserving the tree previously chosen by annotators.
in the form previously implemented, the system sought for a tree literally identical to the previously selected one.
contrasting
train_19517
Morphology was intensively studied by the NLP community, with the research primarily concentrated on inflectional morphology.
in recent years researchers noticed the potential of derivational morphology to improve the performance in many important areas of NLP, which caused the development of novel language resources which focus on word formation.
contrasting
train_19518
(2001) develop a system for automatic generation of morphological families 2 in order to improve their information retrieval systems.
the derivational rules are incorporated into the system by a human expert rather than automatically learned.
contrasting
train_19519
Learning to rank Learning to rank is a widely studied area of machine learning which was originally researched in the context of automatic ranking of web search results in the information retrieval community.
it proved to be useful in many other areas such as statistical machine translation, see (Watanabe, 2012).
contrasting
train_19520
The application of the SPADE algorithm with the minimal support set to the 1% of the lexicon size resulted in roughly 27 thousand frequent subsequences.
our filtering technique based on the phi coefficient limited the set of patterns to 13 441 regular expressions.
contrasting
train_19521
We evaluated our approach using 5-fold cross-validation and we obtained the accuracy of 82.33% without applying any threshold on the confidence of the ranker.
since we prefer precision to coverage, we have chosen a threshold which allowed us to obtain 98.8% of precision with the recall of 38.2%.
contrasting
train_19522
Furthermore, it is also the biggest free language resource about Polish derivations.
the coverage of the created resources still needs to be improved.
contrasting
train_19523
The number of documents also fluctuates over the years, mainly due to the biennial frequency of some conferences.
the total number of papers itself increases steadily reaching a total of more than 65,000 documents as of 2015.
contrasting
train_19524
We studied the evolution of the presence of the terms over the years, in order to check the changes in paradigm.
the fact that some conferences are annual, while others are biennial brings noise.
contrasting
train_19525
The statistical information provides a weighting of the extracted applicant terms.
the frequency of a term is not necessarily an appropriate selection criterion.
contrasting
train_19526
Q 1 > Q 1.1 ) that guide the occurrence of so called contrastive topics.
qUD trees can be systematically mapped onto discourse graphs from Segmented Discourse Representation Theory (SDRT; Asher and Lascarides, 2003).
contrasting
train_19527
It is, therefore, perhaps still too early to interpret the results, due to the overall complexity of the task and the lack of a reasonable baseline (Table 2: Kappa values for QUD-annotated spoken dialogue).
the results can provide a point of reference for future developments in this area.
contrasting
train_19528
In fact, given two arguments and a discourse connective many discourse parsers at the 2016 CoNLL Shared Task on Multilingual Shallow Discourse Parsing (SDP) (Xue et al., 2016) were around 78% accurate in recognizing the discourse relation on the SDP blind dataset.
in implicit relations no connective is used.
contrasting
train_19529
Inspired by recent advances in the use of attention, we used attention to detect alignment scoring for IDR, as word-pair features have been shown to contribute to IDR (Pitler et al., 2009; Biran and McKeown, 2013).
unlike these methods we make no feature engineering.
contrasting
train_19530
(Rönnqvist et al., 2017) also uses an attention mechanism to recognize implicit discourse relations.
their approach differs from ours in two important ways: in (Rönnqvist et al., 2017), the two discourse arguments are concatenated to form a single input and the attention mechanism is applied over the entire input, which is fundamentally different to our sequence-tosequence approach.
contrasting
train_19531
We believed the model would be more robust if the classification layer had inputs from all decoded hidden states directly.
using only the final state vector resulted in higher classification score while using less parameters.
contrasting
train_19532
The hidden vector s t obtained after the last character is called the last feature vector, as it stores the information related to the character language model and the sentiment of the utterance.
it was shown that the average vector over all characters in the utterance works better for emotion detection (Lakomkin et al., 2017).
contrasting
train_19533
ANNIS' latest version, ANNIS3, provides a solution for RST trees by means of a visualization plugin (Krause and Zeldes, 2016).
the application is limited to only displaying previously annotated corpora.
contrasting
train_19534
For a fuller review of available corpora and the challenges of genre in conversation, see (Gilmartin et al., 2015a).
we are interested in the substance of longer casual conversation beyond these first encounters, and thus we have collected a number of multimodal recordings of conversations of multiparty casual speech to form a dataset for preliminary explorations.
contrasting
train_19535
For example, considering again sentence 208, the word écartement, which is a morphological variant of the substitute écart, appears as a new valid substitute.
the annotators may find no answer for a given sentence because there is only one good substitute corresponding to a rare word (e.g.
contrasting
train_19536
As can be seen, for both measures the scores are scaled by the sum scores of all substitutes for the target word.
there are important differences from one target sentence to another, the number of substitutes with a positive score in the second gold standard varying from 5 to 56.
contrasting
train_19537
So the short answer is that this second annotation was not worth the effort, and we hope that this can be of use for future work on the development of such evaluation data.
the comparison of the two data sets gives us useful insights on the task itself, and helps us understand the gaps between the systems.
contrasting
train_19538
This decisionmaking conforms to the idiom principle (Sinclair, 1991) or formulaic language (Wray, 2001), which help the translators produce native-like selections and reduce the cognitive processing effort.
it seems that our BMWU alignments have less effect on fluency.
contrasting
train_19539
As can be seen, thousands of translation candidates were generated for each language pair.
not all of these word pairs are correct translation candidates, therefore we needed to extract the useful word pairs from the merged dictionary for each language pair.
contrasting
train_19540
And the clinicians evaluating the speech of patients are very well trained to the phonetic characteristics associated with the physiopathology of dysarthria.
a frequent criticism to perceptual evaluation is the subjectivity of the listeners (both naive and expert).
contrasting
train_19541
This capacity has been already highlighted in (Laaridh et al., 2015a) with 81% of phone-based anomalies annotated by an expert well detected by the system.
the low AG targetAnomaly rate of 13% observed on "false positives" reveals the limitations of the proposed approach and its somehow approximate judgment when facing more subtle anomalies.
contrasting
train_19542
This behavior is particularly evident on ALS patients on whom the jury annotated the most anomalies compared to other populations and where the AG targetAnomaly rate reaches 19.6% on the "false positives" category.
an opposite behavior is observed on CTRL speakers and PD patients for whom an overall good quality of the speech is usually observed and the computed AG targetAnomaly rate over the "ambiguous segments" reaches 15.2% and 42.7% respectively.
contrasting
train_19543
They compare their crowdsourced dialog data set to a smaller lab-based data set.
in terms of naturalness they do not compare the collected dialogs but rather subjective questionnaire ratings.
contrasting
train_19544
This can be due to the fact that the amount of training utterances is increased in this case.
this result should be viewed with caution.
contrasting
train_19545
As this guidance, VCOPA poses a rich set of challenges, many of which have been viewed as the holy grail of automatic image understanding and causal reasoning in general.
it includes several components that the KR and CV communities have made significant progress on during the past few decades.
contrasting
train_19546
Alternative 1 seems to happen before the premise, while Alternative 2 happens after the premise.
the question is asking for the effect, which means the latter alternative is the correct answer.
contrasting
train_19547
Building linguistic resources, such as corpora or dictionaries, can be very labor-intensive, requiring great amounts of work-hours and expert annotation.
as pointed out by Ringger et al.
contrasting
train_19548
As the table shows, there are variations between the Fscores: the first model (M1) has the lowest F-score, while the third model (M3) has the highest F-score, traditionally making M3 the clear choice.
our annotation budget is constrained, so the question we ask is: which of the machines will provide the maximum benefit in terms of the number of correctly identified VNICs, given our budget?
contrasting
train_19549
Much recent effort has been put into building Knowledge Bases (KBs), either manually curated (Freebase (Bollacker et al., 2008), Cyc (Lenat, 1995)) or automatically produced (YAGO (Suchanek et al., 2007), Knowledge Vault (Dong et al., 2014)), ranging from logically consistent linked-data in OWL (SUMO (Pease et al., 2002)) to little-structured sets of textual relations extracted from text (NELL (Mitchell et al., 2015)) with Open IE systems (Reverb, Ollie (Mausam et al., 2012), ClausIE (Del Corro and Gemulla, 2013), Stanford Open IE (Angeli et al., 2015), CSD-IE (Bast and Haussmann, 2013)).
large they may be, typical KBs are largely incomplete, and many relevant facts are missing (West et al., 2014).
contrasting
train_19550
The task of taxonomy extraction is closely related to tasks such as hypernym detection (Hearst, 1992) or ontology learning (Buitelaar et al., 2005), in which a structured representation of concepts should be learned.
the task of taxonomy extraction does not have the formal nature that either of these tasks has, in that the terms only need to be loosely associated (footnote 1: http://www.acm.org/publications/class-2012).
contrasting
train_19551
In fact, taxonomies are not intended to be strict hypernym graphs but may in fact contain other relations.
we use this as the basis for our experiments as this dataset is established and has been used by other systems.
contrasting
train_19552
Abbreviation prediction means associating the fully expanded forms with their abbreviations.
due to the deficiency of abbreviation corpora, such a task is limited in current studies, especially considering that general abbreviation prediction should also include those full-form expressions that do not have general abbreviations, namely the negative full forms (NFFs).
contrasting
train_19553
LSTM shows competitive performance in this task.
neural networks usually need large data for training.
contrasting
train_19554
In other words, the relValue should start with the '+' symbol if the meaning of the text is later than the reference date, and vice versa.
in a special case such as the last example in Table 1, there is no symbol to prefix because the meaning of text exactly pointed the specific date or time.
contrasting
train_19555
Frameworks that support general text mining, e.g., the General Architecture for Text Engineering (GATE) (Cunningham et al., 2011), provide "local interoperability" for tools available from within the framework, but there is no interoperability with tools or components available from outside the framework that the user might wish to use.
the LAPPS Grid provides interoperable access to tools in various UIMA systems as well as tools from GATE, which can be pipelined within the LAPPS Grid without the need for I/O format conversion.
contrasting
train_19556
The second experiment with the oracle LM indicates the current capabilities of the acoustic model.
the AM can be adapted using the available recordings to fit the actual acoustics conditions in the interviews.
contrasting
train_19557
82%)) to establish letter discriminability matrices for the Latin alphabet.
as many tables as were readily available from the supplied paper links have been extracted, and it was ensured that they were labelled for (1) modality (visual, motoric, acoustical), (2) directionality (symmetric matrix?, ∆(<a>, <d>)), (3) letter set (upper case, lower case, numbers, mixed case), and (4) polarity (similarity or distance). Some matrices or data reported in the papers were not used, since they either analysed irrelevant data (perception in pigeons (Blough, 1985), discrimination of the Braille alphabet (Gilmore et al., 1979)), reported a poor predictive performance (Coffin, 1978), provided incomplete data (Uttal, 1969), featured very few observations (Banister, 1927) or were hardly extractable due to the age or condition of the PDFs.
contrasting
train_19558
Naturally, one could choose grapheme-to-phoneme (g2p) and p2g based approaches.
since the aim of the present study is to analyse explicitly modally motivated errors, we alternatively do the following and leave g2p/p2g as an alternative for future research.
contrasting
train_19559
Indicating shape/size with deictics Deictic gestures have been extensively studied for positional information.
humans often encode more than positional information while "pointing".
contrasting
train_19560
This is also what we observe in the WAW corpus when we compare the number of tokens of the Arabic translations (B2) with the transcript of the original speaker's speech (A1, in English).
when we compare the number of word tokens in the transcripts of the interpreters (B1) with the number of tokens in the translations of these transcripts (A2), we see a much smaller ratio.
contrasting
train_19561
These corpora are partially suitable to evaluate multimodal computational models for object-word learning.
they are not sufficient for crossmodal action learning.
contrasting
train_19562
Parsing at phrase-level is accurate except for nominal modifiers perhaps due to confusing usage of directional and temporal adverbial nouns and prepositions in Vietnamese.
parsing at clause-level is poor.
contrasting
train_19563
The only tokens which are not defined are the output tags.
considering too many features overloads the model, so that CRF++ crashes without generating one.
contrasting
train_19564
AET was originally designed in a research project concerned with modeling the compositional interaction of attributive adjectives with nouns 4 and a project concerned with the interplay of events and adverbial modifiers 5 .
the structure of the AET database makes it easy to extend the tool to different research areas and languages or to add more corpora.
contrasting
train_19565
, syntactic chunks, dependency trees (Hwa et al., 2005), word senses (Bentivogli and Pianta, 2005), named entities (Mayhew et al., 2017) and semantic roles (Padó and Lapata, 2009; Akbik et al., 2015).
as the example in Figure 1 shows, annotation projection may not always produce fully annotated target language sentences.
contrasting
train_19566
Similarly to PKT, where -SBJ function tag denotes a subject node, KTB offers three morpheme tags for the same purpose: jcs, jcc, and jxt.
while jcs and jcc roughly correspond to nsubj and csubj, jxt suggests that the phrase is the topic of the phrase or clause, but offers nothing informative in distinguishing whether it is in fact a subject (which it frequently is) and, if so, whether it is a clausal or nominative subject.
contrasting
train_19567
We introduce the structure to the documents and annotate them on sentence and document level.
we exclude from this annotation process 100 collection documents, i.e., documents consisting of more than one article, since they have their own structure.
contrasting
train_19568
There have been some works on POS tagging in Amharic (Gamback B., 2012;Martha, Solomon, and Besacier, 2011;Binyam, 2010;Gambäck, Olsson, Argaw, and Asker, 2009;Sisay, 2005).
the work of Demeke and Getachew (2006), known as the Walta Information Center corpus (WIC), has received much attention among Amharic NLP researchers and has been used for different applications.
contrasting
train_19569
They are trying to give tag-sets for various syntactic constructions, (phrases, clauses and sentences) in addition to a syntactic word.
amharic is a less-resourced and morphologically-rich language where problems of OOV and ambiguities are major bottlenecks.
contrasting
train_19570
According to the guideline these elements are attached to nouns, verbs, pronouns, adjective and numerals.
some adverbs (for instance, ዛሬ /zare/ 'today') can attach a preposition and/or conjunction.
contrasting
train_19571
Specifically, nouns (verbal noun -VN), verbs (auxiliary -AUX, relative verb -VREL) and numerals (cardinal -NUMCR and ordinal -NUMOR) which have sub-categories with the respective specific tags.
when these sub-categories attach a preposition or a conjunction, they cannot be distinguished from the other respective categories.
contrasting
train_19572
The nominal will be given the grammatical role of nsubj, obj, etc., while the clitics will be treated as a pronominal copy of the nominal and will get the role of expl.
when the nominal is dropped, the clitic will get the grammatical roles of nsubj or obj.
contrasting
train_19573
In most works of Amharic corpora, data are collected from electronic media, especially from the news media.
such sources are produced without proper text editing tools like a spell or grammar checker.
contrasting
train_19574
They demonstrated that a multilingual model can yield better results than monolingual models for different European languages.
their approach relied on the existence of a massive parallel corpus, as their experiment was based on Europarl.
contrasting
train_19575
Compared with the result of the Shared Task, it seems that our approach (lexicalized cross-lingual transfer parsing with resources from relevant languages) can be effective for parsing low-resource languages.
additional language features and the application of the ensemble mechanism also seem to be very important.
contrasting
train_19576
Tokenization, or morphological analysis, is a fundamental and important technology for processing a Japanese text, especially for industrial applications.
we often face many obstacles, such as the inconsistency of token unit in different resources, notation variations, discontinued maintenance of the resources, and various issues with the existing tokenizer implementations.
contrasting
train_19577
For many companies tokenization is a fundamental and important technology for text processing.
while an increasing number of companies have recently been demanding Japanese text processing, we lack freely available and useful resources for tokenization.
contrasting
train_19578
The user may select a resource for tokenization from publicly available choices.
iPADIC (Asahara and Matsumoto, 2003) is the most widely used resource for Japanese tokenization; it has not been updated in 15 years, therefore the dictionary lacks new words and bug fixes have not been applied.
contrasting
train_19579
NAIST Japanese Dictionary 4, a dictionary developed based on IPADIC, aimed to solve the license issues of IPADIC, as those issues make it difficult to use the resource for OSS purposes.
it is currently not widely used, as the dictionary lacks some essential vocabularies, and IPADIC license issues have been solved subsequently.
contrasting
train_19580
How to define the granularity of the token unit in Japanese tokenization has long been discussed.
the suitable unit differs for each application.
contrasting
train_19581
If the token is a noun, we define the unit to include its prefix and suffix.
if it is a verb we include up to the compound verb to be a middle unit.
contrasting
train_19582
We expect that the possibility of accessing useful information would increase by using these rules.
for example, while '2-(p-Tolyl)ethanol' has not been recorded in Nikkaji, '2-(4-Methylphenyl)ethanol', which is created by a paraphrase rule, has been recorded.
contrasting
train_19583
If we can identify the same chemical compounds with different notations by using paraphrasing rules, we can get information from different databases that register the same chemical compounds with different notations.
the new chemical compounds that have not been registered in databases cannot be found.
contrasting
train_19584
Wang (2013) uses Pedro Almodóvar's films La mala educación and Volver as the corpus to analyze how the subtitled Spanish discourse markers can be translated into Chinese, so as to make a guideline for translation education between the language pair.
none of these works use RST as its theoretical framework.
contrasting
train_19585
When a DM is removed, the distractors of the exercise are selected from those in the same group.
within each group, the DMs are grouped if it is almost impossible to distinguish between them.
contrasting
train_19586
Useful studies might look at how writing changes in longitudinal studies or as a function of particular training programs, thereby lending insight into quality of school books or teaching philosophies.
very little of this kind of validation is done on a larger scale or open to comparative research with open corpora.
contrasting
train_19587
It is not always obvious how this cleaning should be done, and we have experimented with several variants, for example in an earlier variant we removed the xxx markings.
this often led to clearly undesirable parses.
contrasting
train_19588
The SICI continuum has distributions of nouns and verbs that describe their individual difficulty differences.
no direct evidence has connected the concrete measure of word difficulty to abstract spaces because no index, other than the occurrence rate of the part of speech, has expressed word difficulty.
contrasting
train_19589
As a consequence, speakers of a free-stress language need to encode stress position in their mental representation of the words.
the position of stress in a fixed-stress language such as French is not variable, and thus not contrastive.
contrasting
train_19590
In the Present tense, the Habitual comes first, followed by the Negative: gatáagang-'ang-gang (eat-HAB-NEG-PRES) "never eats".
in the Reported Past, the order is reversed, with the Negative coming first and the Habitual coming second: gatáa-'ang-gaang-aa-n (eat-NEG-HAB-REP-PAST) "never used to eat".
contrasting
train_19591
2011;Hulden 2009), implement a minimization procedure on the finite-state model, so that recurring realizations of string-final character sequences and associated morphological features are systematically identified and merged, resulting, in the end, in a relatively compact model.
if some aspect of the chunked morpheme sequences needs to be changed, with the chunking strategy these have to be implemented in multiple locations.
contrasting
train_19592
It is a Germanic language whose nearest relative is Icelandic; the two are not normally mutually intelligible in speech to unpractised listeners, and Faroese is considerably more different (certainly not mutually intelligible) from its next closest relatives, the larger and better-resourced Mainland Scandinavian languages Danish, Norwegian, and Swedish.
many native speakers of Faroese do speak Danish as a second language, in current times often used for university education or employment in Denmark (while Faroese is the language of home, daily life, school, etc.
contrasting
train_19593
In the first place, automated OCR is unsatisfactory; then, we find also that building an entirely automated post-processing system, although it improves the text, still leaves far too much error.
incorporating a moderate investment of annotator time during post-processing leaves an acceptable token error rate below 1.5% in this low-resource setting.
contrasting
train_19594
Digitised resources could be distributed as scanned images rather than machine readable text, and communities may still access the heritage and culture represented in the resources.
text facilitates better distribution due to reduced file size, and large corpora are not easily navigable for any purpose without searchable text.
contrasting
train_19595
English (Nakov et al., 2013), German (Cieliebak et al., 2017), French (Bosco et al., 2016), or Italian (Barbieri et al., 2016).
we are not aware of any sentiment corpora for Swiss German.
contrasting
train_19596
Extension of the Resource As described in Section 5.1, J-MeDic contains 51,784 new written forms; 49.0% of those were newly incorporated.
44.7% of the disease names that are covered by J-MeDic were newly incorporated written forms.
contrasting
train_19597
It has meanings, example sentences, syntactic patterns and actual sentences from the corpus that they possess.
it has no relationship information with other words, such as synonymous words and phrases.
contrasting
train_19598
CVL would bundle cluster1 and cluster2 in our list under the same core meaning.
in our result, synonym expressions in cluster1 and those in cluster2 are clearly divided because they are used in a different context by figurative meaning.
contrasting
train_19599
As stated in the guidelines, we want to capture the sentiment of the expression's common usage.
we are also interested in the ambiguity that is reflected in the answer distribution.
contrasting