id: string, lengths 7-12
sentence1: string, lengths 6-1.27k
sentence2: string, lengths 6-926
label: string, 4 classes
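The column header above describes four string fields per record (id, sentence1, sentence2, label), with rows laid out as consecutive lines. As an illustrative sketch only (the record type and parsing helper below are hypothetical, not part of the dataset release), rows in this shape could be grouped like so:

```python
from dataclasses import dataclass

# Hypothetical record type mirroring the schema above; the field
# names come from the column header, the parsing logic is an assumption.
@dataclass
class ContrastPair:
    id: str
    sentence1: str
    sentence2: str
    label: str

def parse_rows(lines):
    """Group a flat list of lines into 4-field records."""
    records = []
    for i in range(0, len(lines) - 3, 4):
        records.append(ContrastPair(*lines[i:i + 4]))
    return records

rows = parse_rows([
    "train_11500",
    "The standard task of NER focuses on the former class of proper names.",
    "it is often not easy to differentiate between both classes.",
    "contrasting",
])
print(rows[0].label)  # contrasting
```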
train_11500
The standard task of NER focuses on the former class of proper names.
it is often not easy to differentiate between both classes.
contrasting
train_11501
Time: In the standard task of NER, temporal information is not captured by the four base entities.
the aspect of time is important for the research on biodiversity which is constantly evolving.
contrasting
train_11502
In language modeling however, this approach of overlapped data points has not yet been fully exploited.
extracting frame-based acoustic features such as mel-frequency cepstral coefficients (MFCCs) using overlapping windows is a common technique in speech processing and more specifically in automatic speech recognition (ASR) (Chiu et al., 2018; Kim and Stern, 2016).
contrasting
train_11503
We observed that CEs of the obtained π_θ's were about the same for different values of |D| and Training-1 regimes.
there was no systematic improvement in the training speed of one method over the other.
contrasting
train_11504
When we increase the frequency threshold further to 25, the performance of the model has dropped compared with Char-BiLSTM-add-Word-LSTM-Word (g = 0.5, n = 1, θ = 15) as more frequent words are discarded.
we found a relatively small frequency threshold θ = 5 works quite effectively.
contrasting
train_11505
In fact, most recent endto-end deep learning models have surpassed the performance of careful human feature-engineering based models in a variety of NLP tasks.
deep neural network based models are often brittle to various sources of randomness in the training of the models.
contrasting
train_11506
Most of the interpretation based methods involve one of the following ways of interpreting models: a) sample oriented interpretations, where the interpretation is based on changes in the prediction score when either upweighting or perturbing samples (Koh and Liang, 2017); b) interpretations based on feature attributions using attention, input perturbation, or gradient-based measures (Ghaeini et al., 2018; Feng et al., 2018; Bach et al., 2015); c) interpretations using surrogate linear models (Ribeiro et al., 2016); these methods can provide local interpretations based on input samples or features.
the presence of inherent randomness makes it difficult to accurately interpret deep neural models among other forms of pathologies (Feng et al., 2018).
contrasting
train_11507
We observe that the original SGD reaches the desired minima; however, it almost reaches the saddle point before doing a course correction and reaching the minima.
we observe that SGD with ASWA is very conservative, it repeatedly restarts and reaches the minima without reaching the saddle point.
contrasting
train_11508
We note that on an average, the standard deviations are on the lower side.
we observe that the mean standard deviation of the bins close to 0.5 is on the higher side as is expected given the high uncertainty.
contrasting
train_11509
However, we observe that the mean standard deviation of the bins close to 0.5 is on the higher side as is expected given the high uncertainty.
both, ASWA and NASWA based models are relatively more stable than the standard CNN based model.
contrasting
train_11510
(2017) that the human coders considered sexist terms as offensive rather than hateful.
in terms of our classifier, this may only be due to the implicit nature of most sexist insults and a lack of sexist samples within the OLID dataset.
contrasting
train_11511
Also, there are very few datasets that provide a large number of samples that can be taken advantage of by huge neural networks .
we do acknowledge the difficulty in collecting abusive samples as most discourse online is benign.
contrasting
train_11512
This is because there is still value in identifying whether a sample is racist / sexist / cyberbullying over just recognising whether the abuse is explicit or not, and in identifying the target of abuse.
the hierarchical model in its current form still cannot differentiate between various subsets of abusive language.
contrasting
train_11513
Analysis of the existing language resources in the area of sentiment analysis shows that they largely concern the English language (Dashtipour et al., 2016).
there is a clear growing interest in other languages, often much more complex than English (e.g.
contrasting
train_11514
To analyze sentiment in text, researchers have made different assumptions on linguistic behaviors that are leveraged with approaches developed based on the nature of the text, the representation of the related information and the objectives.
majority of the studies are conducted at the population level, which assumes that people follow a common understanding with regard to the use of language.
contrasting
train_11515
(2016) have proposed a phased LSTM that utilizes an additional time gate to control the passing of the information.
the time gate is triggered by periodic oscillations while modeling sensory events, which makes such a design less flexible when the time gaps are highly various.
contrasting
train_11516
For instance, when the user frequency is around 80, the WE formulation has the best accuracy in both models.
such an observation is also restricted by the number of frequent users in general -with only 372 posts in the test set when the user frequency is at least 100, the performance is highly dependent on the remaining 3 users.
contrasting
train_11517
It can be seen that the last user in the figure (the one at the bottom line) has the greatest values for ε and , which means that the decay factor has a greater impact on the prediction for this user than on the others, but the influence from the past decays comparably fast; the user is affected a lot by recent events.
among the 10 users, the second last user is the least influenced by the past which is visualized in darker colors.
contrasting
train_11518
In prior research, much progress has been made on text classification, including traditional approaches based on human-designed features (Lazaridou et al., 2013; Zhang et al., 2015a) and neural networks based on deep architectures (Lai et al., 2015; Yang et al., 2016).
such methods prefer to deal with documents and paragraphs, and still have limitations for short texts.
contrasting
train_11519
Existing clinical decision support systems are heavily reliant on the structured nature of the EHRs.
the valuable patient-specific data contained in unstructured clinical notes are often manually transcribed into EHRs.
contrasting
train_11520
(2018) reported a suite of five clinical prediction tasks, including the length-of-stay, mortality, and ICD-9 code group prediction on the MIMIC-III database using deep learning models and benchmarked their performance against the existing state-of-the-art methods and severity scoring systems.
mining and modeling the valuable patient-specific information in unstructured clinical nursing notes for the development of CDSSs remains mostly uncommon.
contrasting
train_11521
The feature extractor is trained to fool the discriminator so that the extracted features are language invariant.
its performance is significantly lower than the variant that uses pretrained CLWE.
contrasting
train_11522
Our goal is to predict the sentiment polarity of the examples in a target language-domain pair. In Section 5 we compare two CLIDSA variants.
cLIDSA_full exploits unlabeled data from all possible language-domain pairs, i.e., we set P = L × D. Since most previous CLSA methods do not use multi-domain or multilingual unlabeled data, we create a variant cLIDSA_min that requires minimal resources. A natural way to utilize unlabeled data is to perform the language modeling task.
contrasting
train_11523
Note that we would also want to evaluate our model on some low resource languages.
since there isn't a public benchmark for such languages, we leave it to future work.
contrasting
train_11524
In earlier work with the writing instruction application, "Writer's Workbench," some features associated with style were evaluated, including: average word length, the distribution of sentence lengths, grammatical types of sentences (e.g., simple and complex), the percentage of passive voice verbs, and the percentage of nouns that are nominalizations (see MacDonald et al., 1982 for a complete description of the Writer's Workbench).
to a subjective measure such as repetitive word usage, the stylistic features in the Writer's Workbench are not subjective.
contrasting
train_11525
This resulted in a statistically consistent model dubbed ML-DOP.
mL-DOP suffers from overlearning if the subtrees are trained on the same treebank trees as they are derived from.
contrasting
train_11526
PCFG-reduction of Bonnema (1999): By using these PCFG-reductions we can thus parse with all subtrees in polynomial time.
as mentioned above, efficient parsing does not necessarily mean efficient disambiguation: the exact computation of the most probable parse remains exponential.
contrasting
train_11527
Feature extraction was unchanged (albeit performed on French data) for the models that capture the contexts for the insertion of negation, prepositions, and subordinating conjunctions.
in every one of these cases, the set of values for the target features changed to reflect the language.
contrasting
train_11528
The maximum n-gram length of 1 or 2 yields a high correct ratio, and that of 3 or 4 yields a low correct ratio.
the correct ratio of NIST is not influenced by the maximum n-gram length.
contrasting
train_11529
In the original BLEU or NIST formulation of the test set unit (or document or system level) evaluation, n-gram matches are computed at the utterance level, but the mean of n-gram matches is computed at the test-set level.
considering the characteristics of the translation paired comparison method, the average of the utterancelevel scores might be more suitable.
contrasting
train_11530
In particular, the averaged utterance-level BLEU score shows the highest correlation.
looking at Figure 10, the system's TOEIC score using this measure deviates from that of the original translation paired comparison method.
contrasting
train_11531
In Figure 3, "CNF 1 " got the right POS.
the preceding "that" is falsely assigned as a determiner.
contrasting
train_11532
The first two expressions are on the borderline between synset and phraset, while the third is on the borderline between phraset and definition.
in most cases phrasets provide a flexible tool to aid lexicographers in the process of choosing the lexical status of multiword expressions.
contrasting
train_11533
Below, we outline an algorithm which circumvents the problem of choosing the right parameters.
to pure Markov clustering, we don't try to find a complete clustering of G into senses at once.
contrasting
train_11534
Artifact detection in a signal deserves a separate study in its own right.
in SumTimE we are initially using a median filter and an impossible value filter developed in our collaborator project NEONATE.
contrasting
train_11535
The differences of the rates in Figure 3 correspond to the difference of these numbers (i.e., 1467 and 543).
it is very important to note that, for both "pseudo-parallel with CUR" and "contextual similarity with CUR: reduced", the rate of containing correct bilingual term pairs tends to decrease as the order of TP(tE ) sorted by corrEJ(TP(tE)) becomes lower.
contrasting
train_11536
Also note that the performance of "pseudoparallel with CUR" is affected little by the similarity lower bounds Ld.
for "contextual similarity with CUR: reduced/full", the performance becomes worse as the similarity lower bound Ld becomes smaller and the crosslingually relevant English/Japanese corpus RCE and RCJ becomes noisier.
contrasting
train_11537
The table also shows that the clausal relation would benefit from improvement, since the clausal relation is frequently omitted from all the Penn Treebank parsers.
in the case of this relation, we have sacrificed recall for precision.
contrasting
train_11538
7 The 'algorithms' are only evaluated on pronouns. [Table: pairwise scores over columns BC BU CH Cl C2; rows BC: -,-,-,60,0; BU: 0,-,-,70,0; CH: 60,65,-,75,75; Cl: -,-,-,-,-; C2: -,-,-,70,-.] 8 The difference between the 'worst' and the 'best' systems' mean performance is about 1%.
the variance σ² (a measure of robustness of a system) is lowest for Collins' model 1.
contrasting
train_11539
The personal pronoun in the English sentence is used as a noun phrase (NP) similar to the pronoun in the Swedish sentence.
putting all the clues together, the aligner comes up with the following alignment without actually having to know the two languages: the corridors / korridorerna, are jumping / myllrar, with / av, them / dem. Looking at the sentence pair again, a second aligner with knowledge of both languages might realize that the verbs myllrar (English: swarm) and jumping do not really correspond to each other in isolation and that the expressions are rather idiomatic in both languages.
contrasting
train_11540
Assume that the alignment program found the following alignment clues, which are based on string similarity and co-occurrence statistics: The alignment clues contain only three multiword units.
even these few units cause several overlaps.
contrasting
train_11541
Looking at the matrix, we can find clear relations between certain words such as [hand,baggage] and handbagaget.
between other word pairs such as is and sedan we find only low associations which conflict with others and therefore, they can be dismissed in the alignment process.
contrasting
train_11542
The lower values for precision for the other two approaches could be expected.
the low recall value for the combined approach is a surprise.
contrasting
train_11543
Clue patterns can be defined depending on the information which is available (POS tags, phrase types, semantic tags, named entity markup, dictionaries etc.).
clue patterns have to be designed carefully as they can be misleading.
contrasting
train_11544
For example, in the case of English and Japanese, some prepositions are not translated into Japanese.
the preposition 'after' may be translated into Japanese as the noun `ato.'
contrasting
train_11545
There are five word links between Source and Target 1, so the TCR is 1.0 by Equation (1).
in the case of Target 2, four words are found in the dictionary (Tt = 4), and there are three word links.
contrasting
train_11546
As opposed to filtering based on a threshold, all source sentences are used for knowledge construction, so the coverage of MT knowledge can be maintained.
some context/situation-dependent translations remain in the extracted corpus when only one non-literal translation is in the corpus.
contrasting
train_11547
In the case of filtering, the coverage of the MT knowledge is decreased by limiting translation to highly literal sentences.
even though they are non-literal, such sentences may contain literal translations at the phrase level.
contrasting
train_11548
Thus, the phrase (A-2)NP is generalized, and the long transfer rule (A-1)S is generated from the non-literal translation.
the TCR of the top phrase (B-1)S is 1.0, so all phrases in (B) are generalized and six rules in total are generated.
contrasting
train_11549
It is difficult to significantly improve the quality by bilingual corpus filtering because it is difficult to both remove insufficiently literal translations and maintain coverage of MT knowledge.
the BLEU score and the subjective quality both improved in the case of split construction, even though the coverage of the exact rules decreased.
contrasting
train_11550
The syntax/semantics interface and the use of unification then ensures that these variables get assigned the appropriate values i.e., the values representing their given arguments.
flat semantics are being increasingly used to (i) underspecify the scope of scope-bearing operators and (ii) prevent the combinatorial problems raised during generation and machine translation by the recursive structure of lambda terms and first-order formulae (Bos, 1995; Copestake et al., 2001).
contrasting
train_11551
For example, the parity language 0* (10*10*)* contains strings with an even number of occurrences of the factor "1".
intuitively, it seems improbable that similar counting constraints occur in natural language grammars; many regular expressions in Voutilainen's ENGFSIG (1994) involve the Kleene star.
contrasting
train_11552
The schematic equivalences presented in Sections 3.3 -3.4 can transform 1554 of these into a star-free form.
there still remain 1103 constraints that use the restriction operator. To complete the proof of the star-freeness of ENGFSIG, I show that star-free languages are closed under the restriction operation (as in FSIG).
contrasting
train_11553
Effectively, this analysis treats the whole data set as unseen.
notice that for each test/train set split we obtain different regression equations since the PCFA yields different factors for different data sets.
contrasting
train_11554
In the past we have found that this wordassociation metric provides an excellent basis for learning single-word translation relationships, and the higher the score, the more likely the association is to be a true translation relation.
with this particular metric there is no obvious way to combine the scores for individual word pairs into a composite score for phrase translation candidates; so we use the scores indirectly to estimate some relevant probabilities, which can then be combined to yield a composite score.
contrasting
train_11555
In one sense this makes the task more difficult, since source language phrases have to be discovered as well as target language phrases.
coverage claims are often harder to evaluate since, lacking annotated test data, there is no way to tell how many more phrases a better phrase finder would have discovered that would be mistranslated by the translation finder.
contrasting
train_11556
The CCG account given by Steedman (2000) for crossing dependencies in Dutch subordinate clauses relies crucially on >B".
steedman must restrict the harmonic rule >B in order to block some ungrammatical orders.
contrasting
train_11557
In our opinion, the Catalan training set is not big enough and the errors in the retraining folds degrade the performance of the bootstrapped model.
the crosslinguistic models show a slightly better behavior, achieving a maximum increase of about 0.5 points, getting also somewhat stable beyond iteration 5.
contrasting
train_11558
Bootstrapping, therefore, is not very helpful on improving models.
these models seem to have learned a robust concept which overcomes the errors produced when relabelling folds.
contrasting
train_11559
While our method used supervised learning, Stevenson and Merlo (1999) found little difference in performance between unsupervised and supervised approaches on the original MS01 task.
since unsupervised methods are more sensitive to irrelevant features, additional care will be required to determine useful subsets of features from our (larger) general feature space.
contrasting
train_11560
For instance, case information for nouns or gender information for verb complements are in most cases unwanted.
only very little semantic information, if any, will be found in this way, and it will need to be supplied by other means.
contrasting
train_11561
The required knowledge can either be inferred from speech data or extracted from the literature.
contrary to intra-lingual (e.g.
contrasting
train_11562
The first idea that suggests itself in order to model these substitutions is phoneme/allophone mapping tables that replace particular L2 sounds by similar speech sounds from the L1 inventory.
simple context-free phoneme mapping is problematic in at least two respects: First, for many L2 sounds it is not clear what the 'best' L1 equivalent is.
contrasting
train_11563
They are based on the assumptions that (1) the naturalness of the translations is effective for selecting good translations because they are sensitive to the broken target sentences due to errors in translation processes, and (2) the source and target correspondences from the semantic point of view are maintained in a state-of-the-art translation system.
the second assumption does not necessarily hold.
contrasting
train_11564
Using the movement metaphor of syntactic theory as our guide, we would ideally like to identify a complete path downward from the surface syntactic location of the WH phrase to the location of its associated gap.
it is hard to see how such a path could be represented as one of a finite number of classes.
contrasting
train_11565
95% of those phrase translation pairs were judged to be correct.
no results were reported on whether these additional translation correspondences resulted in improved translation quality.
contrasting
train_11566
When a PP is attached to the preceding NP or PP (henceforth referred to as noun attachment), such as in the structure ... eats pizza with anchovies, a phrase boundary between pizza and with is usually considered inappropriate.
when a PP is attached to the verb in the clause (verb attachment), as in the structure ... eats pizza with a fork, an intervening phrase boundary between the PP and its preceding NP or PP (between pizza and with) is optional, and when placed, usually judged appropriate (Marsi et al., 1997).
contrasting
train_11567
The bias of IB1 to base classifications on all training examples available, no matter how low-frequency or exceptional, resulted in a markedly higher recall of up to 82 on noun attachment, indicating that there is more reliable information in local matching on lexical features and the co-occurrence feature than RIPPER estimates.
with a larger training corpus, we might not have found these differences in performance between IB1 and RIPPER.
contrasting
train_11568
Simple chunk parsers only determine the extents of chunks, but neither the interrelationships between them nor their internal structure.
they sometimes try to assign to each chunk a lexical head which can then represent the entire chunk.
contrasting
train_11569
This approach increased the coverage of the overall system from 12.5% to 22.1% on the NEGRA-corpus but failed to reduce the average number of analyses, which even grew from 16.19 to 18.53 per sentence.
due to its inherent robustness properties our parser always returns the single best solution, i.e.
contrasting
train_11570
In the experiment for Czech an indirect approach has been pursued, converting the dependency structures of the Prague Tree Bank into Penn-Treebank-like phrase structure trees.
to this approach, WCDG parses dependency structures directly.
contrasting
train_11571
In the case where each text only belongs to one author, the performance of an authorship classifier can be naturally measured by its overall accuracy: the number of correctly classified texts divided by the number of texts classified overall.
overall accuracy does not characterize how the classifier performs on each individual category.
contrasting
train_11572
Authorship attribution in Chinese normally requires an initial word segmentation phase, followed by a feature extraction process at the word level, as in English.
word segmentation is itself a hard problem in Chinese, and an improper segmentation may cause insurmountable problems for later prediction phases.
contrasting
train_11573
A context n that is too small will not capture sufficient information to accurately model character dependencies.
a context n that is too large will create sparse data problems in training. Both extremes will result in a larger perplexity than an optimal length.
contrasting
train_11574
Text categorization with n-gram models has also been attempted by (Cavnar and Trenkle, 1994).
they use n-grams as features for traditional feature selection, and then deploy classifiers based on calculating feature-vector similarities.
contrasting
train_11575
(Apt et al., 1994) have used word-based language modeling techniques for both English and German.
their techniques do not apply to Asian languages where word segmentation remains a significant problem.
contrasting
train_11576
Such focus would be unproblematic if it were clear how these accounts could be extended to cover multiple, interacting types of structural indications.
even for (Steedman, 2000;Hoffman, 1995), which are the formally most detailed, this is by no means obvious.
contrasting
train_11577
corn), while see occurs in 42m and book in 29m.
in a collection of 13k documents about digital signal processing, Fourier appears 1100 times, so P(d ∈ D_t) is about 6.5 × 10^-5 while P(d ∈ tr) is about 5.5 × 10^-3, two orders of magnitude better.
contrasting
train_11578
As the above reasoning makes clear, linearity is to some extent a matter of choice: certainly the underlying bag of words assumption, that the words are chosen independent of one another, is quite dubious.
it is a good first approximation, and one can extend it from Bernoulli (0 order Markov) to first, second, third order, etc.
contrasting
train_11579
As with many other fields, most of the available tools are for English, but the CoNLL02 shared task (Tjong Kim Sang, 2002) has shown that it is possible to use machine learning approaches to design named entity recognisers for languages other than English.
these approaches need annotated corpora to learn how to identify the named entity.
contrasting
train_11580
For example, F0 contour would clearly be useful in predicting backchannel location.
the challenge of extracting appropriate prosodic features from a pitch tracker lay outside the scope of the research effort reported here.
contrasting
train_11581
With the availability of rich linguistic resources, we can also minimize the need to perform complex linguistic processing.
this does not mean that NLP is now out of the picture.
contrasting
train_11582
One approach is to expand the query by adding the top k words in C, and those in Gq and Sq.
if we simply append all the terms, the resulting expanded query will likely be too broad and contain too many terms out of context.
contrasting
train_11583
The algorithm is as follows, where E_p̃ f_i is the empirical expected value of f_i and E_p f_i is the expected value according to model p: • Set λ_i^(1) equal to some arbitrary value, say: • Repeat until convergence: where (t) is the iteration index and the constant C is defined as follows: In practice C is maximised over the (x, y) pairs in the training data, although in theory C can be any constant greater than or equal to the figure in (8).
since C determines the rate of convergence of the algorithm, it is preferable to keep C as small as possible.
contrasting
train_11584
Surprisingly, students had initiative more of the time in the didactic dialogues (21% of the turns) than in the Socratic dialogues (10% of the turns), and there was no direct relationship between student initiative and learning.
socratic dialogues were more interactive than didactic dialogues as measured by percentage of tutor utterances that were questions and percentage of words in the dialogue uttered by the student, and interactivity had a positive correlation with learning.
contrasting
train_11585
3 It may be the case that these annotation assumptions fail on selected examples.
in eliminating the assumptions it is likely that we will introduce more errors than we correct.
contrasting
train_11586
These results may be good news for system builders; one possible Socratic teaching strategy would be to ask sequences of targeted questions where strong expectations about plausible answers make it easier to interpret student input.
we must be mindful of the fact that, even in Socratic interaction, students sometimes do take initiative rather than simply answering the sequence of questions posed by the tutor.
contrasting
train_11587
When the users in the No Help condition produced outof-coverage utterances the system responded only with a text display of the message "not recognized".
when users in the Help condition produced out-of-coverage utterances they received in-depth feedback such as: "The system heard fly between the hospital and the school, unfortunately it doesn't understand fly when used with the words between the hospital and the school.
contrasting
train_11588
In both help conditions, performance improved over the course of the experimental session.
the advantage conferred by help merely diminished and did not disappear during the session.
contrasting
train_11589
This word should be broken into the two parts Grund and Rechte.
grund translates usually as reason or foundation.
contrasting
train_11590
Such models can be represented by static terminologies or ontologies, which are usually constructed manually.
documents frequently contain unknown terms that represent newly identified or created concepts.
contrasting
train_11591
The GUI interface layer utilises dynamic recognition of terms and their clusters, as well as an unrestricted set of tags that can be used for querying.
other systems that use GUI-driven query formulation, such as TAMBIS (Baker et al., 1998), usually use a predefined ontology that imposes restrictions on query definition.
contrasting
train_11592
The main advantage of XML representation is that it can represent nested structures, something not easily done in RDBs.
even when XML is used to encode documents, many applications still use RDBs for storage and manipulation.
contrasting
train_11593
RDBs are generally considered more efficient when it comes to retrieving specific types of elements.
xML-native DBs provide extended querying facilities given by a native query language (e.g.
contrasting
train_11594
evaluated in the context of a non-trivial mediumvocabulary command and control system.
to other work described in the literature ((Wang et al., 2002) is a recent example) rules are treated on a basis of strict parity with other data sources, so that the balance between them can be entirely determined by the training data.
contrasting
train_11595
There is not a great deal of training data available, and the statistical methods used are simple and unsophisticated.
we still get a significant improvement on rules alone by adding a trainable component.
contrasting
train_11596
The two-step approach would be the natural way of extending current PP attachment disambiguation methods to the more specific 4-way attachment we propose here.
based on exploratory data analysis and general wisdom in machine learning, there is reason to believe that it is better to solve the 4-way classification problem directly rather than first solving a more general problem and then specialise the classification.
contrasting
train_11597
It is theoretically easy to obtain a gold standard for our evaluation, since the purely syntactic relationships that have to be annotated are less ambiguous than the distinction between collocations and noncollocational candidates.
the annotation of tokens rather than types is a prohibitively laborious task.
contrasting
train_11598
Hence apparently it would be preferable to assign different tags to a word when it is used in different syntactic distribution.
this introduces a crisis for the whole classification of Chinese syntactic categories because any given word cannot be prescribed a category and any category cannot be adequately described.
contrasting
train_11599
The distributional approach was shown to be quite effective for tasks where new words need to be assigned to a limited number of classes (up to 5; e.g., Riloff and Shepherd, 1997; Roark and Charniak, 1998).
its application to numerous classes, as would be the case with a thesaurus of a realistic size, proves to be much more challenging.
contrasting