Dataset columns:
id: string, lengths 7 to 12
sentence1: string, lengths 6 to 1.27k
sentence2: string, lengths 6 to 926
label: string class, 4 values

Each record below is shown as four consecutive lines, in column order: id, sentence1, sentence2, label. The record text is reproduced verbatim from the dataset.
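As a minimal sketch of how records with this schema might be loaded and inspected with the Hugging Face datasets library (the dataset identifier below is a placeholder, since this preview does not name the dataset):

```python
# Sketch: loading and inspecting records with the four-column schema above.
# "your-org/your-nli-dataset" is a placeholder; the actual dataset name is
# not given in this preview and must be substituted.
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("your-org/your-nli-dataset", split="train")

# Each record is a dict with the four fields: id, sentence1, sentence2, label.
example = dataset[0]
print(example["id"], example["label"])
print(example["sentence1"])
print(example["sentence2"])

# "label" is a class column with 4 possible values; count their distribution.
label_counts = Counter(dataset["label"])
print(label_counts)
```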
train_100500
By utilizing the roundtripping approach, the monolingual data play a similar role to the bilingual data, effectively reducing the requirement for parallel data.
if more than two associated tasks form a closed loop, this approach can applied in each task for improvement of the overall model, even in the face of unlabeled data.
neutral
train_100501
3 From SMT to PBMST Our enhancement to SMT takes a noisy channel model as a starting point, where translation is modeled by decoding a source text, thereby eliminating the noise (e.g., adjusting lexical and syntactic divergences) to uncover the intended translation.
4 For our purposes, we have trained the Spanish LM using about 700M words of crawled newspaper material.
neutral
train_100502
After analyzing Turkish sentences we found out some sentences have more than one predicate, so we continued to search for another predicate in the sentence and ran the same procedure for each predicate candidate.
these 5,999 annotations may be hand-annotated and recompared for validity of the transferred annotations.
neutral
train_100503
It has been applied to more than 15 different languages.
process only requires a source language model and parallel data to construct target SRL model.
neutral
train_100504
This approach increased F1 scores 1.52 and 1.74 respectively for Chinese and English.
when we look at the all word results, 12,966 roles were not transferred.
neutral
train_100505
In Figure 1, sample morphological structure is displayed.
creating SRL models are difficult.
neutral
train_100506
We can say these semi-space signs segment words into smaller morphemes.
as is shown in Table 1, our Bi-LSTM model performs slightly better than the rest in boundary prediction, however, the classification models are surprisingly almost on the par with our complex DL model.
neutral
train_100507
Besides the classification-based segmentation models, we designed and evaluated five DL models based on GRU, LSTM, Bi-LSTM, seq2seq and Bi-LSTM with the attention mechanism, respectively.
in formal writing and in all Persian normal corpora, this space is neglected frequently and it could make a lot of problems in Persian and Arabic morphological segmentation task.
neutral
train_100508
The "de" following "Benim" refers to "also".
to train the model, sentences with both correct and incorrect spellings of the clitic "de/da" are required.
neutral
train_100509
For the spell correction task, vocabulary based methods have been replaced with methods that take morphological and grammar rules into account.
this model uses various word embeddings to train a model for the named entity recognition task for this clitic.
neutral
train_100510
Propaganda of various pressure groups ranging from big economies to ideological blocks is often presented in a form of objective newspaper texts.
the main way of self-protection against such propaganda influence lies in careful verification of the presented information sources.
neutral
train_100511
Their presence in texts may take the form of "indices".
among the achievements, one can distinguish the tools which take into account the difference between the lexical term and the ontology concept (differentiated tools) and those that do not make such distinction.
neutral
train_100512
The scientific community works intensively on data acquisition for the ontology building.
even though we assume the pivot as being interlingual, it is still close to a natural one.
neutral
train_100513
The rule has the following form: property=aPourEtatPhysique (property name) src=?s (domain, set of terms) reltype=r_carac (relation type) tgt=?o(range, set of terms) source_isa=aliment(src hypernyms) target_isa=etat physique (tgt hypernyms) annotation=int:physical state (meta-information) src_feat=OUT/r_pos/int:Noun (in and out relations that characterize terms in the source set) tgt_feat=OUT/r_pos/int:Adj (in and out relations that characterize terms in the target set) If a given rule allowed detecting enough structures in the source language (at least, 2 structures), it is considered as a valid one and can generate the qualifying object.
it can be considered as a union of word senses lexicalized or identified in the languages covered by the MLSN.
neutral
train_100514
The relation typed r carac is annotated.
such approach would be error-prone due to the potential alignment and polysemy issues.
neutral
train_100515
Table 3 shows that our baseline is +1.5 BLEU points better than the scores reported by Luong et al.
it has been shown that transfer learning approaches using out of domain data, such as the European Parliament data 1 , to regularize the learning helps improve the translation quality (Miceli Barone et al., 2017).
neutral
train_100516
This allows the tagger to fill in the gaps with reasonable templates that are in the lexicon.
we then limit our search to the most probable templates as predicted by the supertagger.
neutral
train_100517
As will become clear in what follows, the Encoder of the QBERT architecture is able to compute the hidden representation of a word w t in a sequence w 1:n as a function of the weights and of the whole sequence except w t itself, i.e.
with respect to our own comparison systems, QBERT performs better than ELMo, Flair and SBERT in this setting as well.
neutral
train_100518
CLM is inherently unidirectional, as the model must not be able to "peek" at the word it has to predict.
representations from Transformers), a baseline featuring the same architecture as QBErT but missing the BiTransformer layer: the outputs of the past and future stacks are simply combined through elementwise sum after the position shift.
neutral
train_100519
because it is sampled from the same text).
these works contain many one-letter variable names, logical symbols and other formal language that a model might otherwise use to position vectors of Quine terminology in particular areas of the semantic space, as these tokens are highly infrequent in the general domain.
neutral
train_100520
Given a pre-trained background space kept frozen across experiments, the vector representation of a target is generated by simple vector addition over its context words.
in the full Quine dataset, 68.6% of terms reach the maximum size, while in the Word & Object dataset, only 32.1% of terms reach it.
neutral
train_100521
As we increased the percentage of request goals in the warm start buffer, the overall success rate decreases and the learning curve becomes less smooth.
the effect of pre-training with very large datasets observed in the experiments is the most surprising result of the paper (Erhan et al., 2010): the early examples determine the basin of attraction for the remainder of the training and the supervised fine-tuning cannot escape from it.
neutral
train_100522
The ordeal started february 1 when several police officers and a bailiff went to a home hoping to get payment for a gas bill , said butt , a senior police official in lahore.
the ability to facilitate deeper understanding of texts is an important, but recently ignored, property for coherence modelling approaches.
neutral
train_100523
The diagnosis with ICD-10 code I20 "Ischaemic heart disease" is chosen for experiments because the Diabetes Register contains a plenty of clinical descriptions about this case.
although only the top 5 most related documents are taken into consideration, some of the extracted Wikipedia pages are not directly related to the symptoms and risk factors, as they discuss e.g.
neutral
train_100524
• Co-referencing Features Often discussion about a work or the author's work will be carried out over several sentences.
these original labels are both quite different linguistically and we speculate that this might prove difficult for the classifier.
neutral
train_100525
More frequently occurring categories, cited work description (CW-D), background sentences with and without evidence (BG-NE, BG-EP) are more robust to feature omissions.
it also uses SVM for classification.
neutral
train_100526
An example of stimulus image is illustrated in Figure 2.
we notice that the increasing availability of text corpora labelled with author demographics in general (e.g., gender, age, education information etc.)
neutral
train_100527
Armed with the pseudo-labelled data generated via Algorithm 1, we can now learn a projection for the target domain, T prj , following the same procedure we proposed for learning S prj in Section 3.1.
as stated above, our proposed method Self-adapt, differs from the prior work discussed above in that it (a) does not require pivots, (b) does not require multiple feature views, (c) learns two different projections for the source and target domains and (d) combines a projection and a selftraining step in a non-iterative manner to improve the performance in UDa.
neutral
train_100528
Train a sentiment classifier on S * L ∪ T L and use it to classify target domain test instances.
recent work on UDA (Morerio et al., 2018) has shown that minimising the entropy of a classifier on its predictions in the source and target domains is equivalent to learning a projection space that maximises the correlation between source and target instances.
neutral
train_100529
For this purpose, the union of the source and target feature spaces is split into domain-independent (often referred to as pivots) and domain-specific features using heuristics such as mutual information or frequency of a feature in a domain.
3 Self-Adapt does not require sub-sampling of unlabelled data and uses all the available unlabelled data for UDA.
neutral
train_100530
(The patient will be discharged from the intensive care unit without renal failure after eight days of management and five hemodialysis sessions.)
even though CAS (speculation) was only trained on 226 examples, the model still shows decent results in scope tokens detection.
neutral
train_100531
", which demonstrates the effectiveness of our dependency-based self-attention.
table 3 also shows that the proposed model outperforms the baseline model when BPE is not used.
neutral
train_100532
In both cases, the most important information sources are stance (does a tweet or a news article agree or disagree with the claim?
we further used a variety of domain-specific features, which we eventually combined in a meta classifier.
neutral
train_100533
We tried simple random oversampling as well as the more complex SMOTE (Chawla et al., 2002) model.
they argued that the title and the body should be analyzed separately.
neutral
train_100534
Otherwise, the base system replaces the entities with default entries in the substitute lists, i.e., generally (non-inflected) lemmata.
all sensitive information was replaced with corresponding semantic placeholder codes of the encountered semantic type (e.g., each specific email address was replaced by the type symbol EMaIL), not by an alternative semantic token, i.e., a pseudonym.
neutral
train_100535
For this first attempt of developing an Old Tibetan Treebank, we therefore decided to reduce the amount of tags to a small and simplified version of the standard Universal Dependency POS set, consisting of 15 tags only (De Marneffe et al., 2014).
we substituted 'I' with 'i', which is the standard wylie transliteration for this character, as shown in (1): (1) rgyal po'I > rgyal po'i 'of the king' The Old Tibetan script furthermore presents a set of features that need to be 'normalised' or converted to a form that looks like Classical Tibetan.
neutral
train_100536
Without adding the Classical Tibetan training data, however, the vocabulary list that the memory-based tagger builds would simply be too small to get any good results on unseen data.
in practice, Unicode Tibetan script is far more widely used within the Tibetan community.
neutral
train_100537
Training takes up 60% of the instances (6,373), whereas validation and test have 20% of the instances each (2,125).
summarization results in other datasets are not directly comparable to our results.
neutral
train_100538
In the legal domain, it is common to reference existing laws, specific dates, and names.
there are some peculiarities when using a language different from English, e.g., we need to check if the standard summarization evaluation (designed for English) can be directly applied.
neutral
train_100539
Unlike the handcrafted gold standard, the automatically generated tests were produced randomly by a machine with no knowledge of test design so we would expect automatic gaps to be often inserted in inconvenient locations within the text, yielding lower quality tests.
entropy is shown to provide insights into the expected difficulty of the question and correlate directly with the target proficiency level of the exercises.
neutral
train_100540
In this regard, Skory and Eskenazi (2010) observe that Shannon's information theory (Shannon, 1948) could be used to estimate the reading difficulty of answers to a gap based on their probability of occurrence.
these are referred to as closed cloze questions, since the answer is limited to the alternatives given.
neutral
train_100541
The fitness F it of the segment cluster C ⊆ S is defined through the precision pr of the cluster and the coverage co of the cluster.
to address this task, we apply the following methods from the literature: the popular graph-based summarizer textRank; an adaptation of a topic-based method (topSum).
neutral
train_100542
Extractive summarization methods identify important elements of the text and generate them verbatim (they depend only on extraction of sentences or words from the original text).
section 3 presents the lyrics summarization task and the proposed methods.
neutral
train_100543
These models differ from more traditional recurrent neural networks in different aspects.
the dictionary of 32 terms that we used for the dictionary lookup method consists approximately half of terms that are quite specific to the Rap genre, such as glock, gat, clip (gun-related), thug, beef, gangsta, pimp, blunt (crime and drugs).
neutral
train_100544
Most of the computational historical linguistics approaches rely on the use of lexical items.
(Franzoi and Sgarro, 2017a,b), but also in the investigation of the history of texts.
neutral
train_100545
The system was designed especially for analyzing longitudinal data from language acquisition studies.
the entries in the merged lexicon are sorted alphabetically.
neutral
train_100546
The "Merge worksheets" macro is a general utility macro that integrates the contents of multiple spreadsheets in a file.
queries over single-item sequences (e.g., [Verb,SG]) calculate the number of tokens and types and can also return a list of items that matched the query (Figure 12).
neutral
train_100547
The first stage is marked by the acquisition of the first 10 words, the second by a total lexicon size of 50 words, and then an additional 50 words for every subsequent stage (Adam and Bat-El, 2009).
for example, it is possible to get all verbs attempted by a child at a given age/lexical stage or range of ages/lexical stages.
neutral
train_100548
The scope of queries can be constrained by age or stage of lexical development.
such an approach can help detecting problems in a limited part of the corpus.
neutral
train_100549
3 The transcription procedures ("macros") operate on a vector of words.
each entry in the table of phono-orthographic groups has two fields (Table 2): the name of the group, and its members.
neutral
train_100550
Besides this algorithm, we outline a machine learning approach to classify <EDT-Q, EDT-A> pair as correct or incorrect.
we demonstrate that involvement of background knowledge via virtual DTs for complex convergent questions requiring entailment is significant.
neutral
train_100551
The main dialogue can be viewed as a one in the meta-level, and the object-level dialogue is naturally embedded into the meta-level one.
elaboration (LeftToRight) attribution (RightToLeft) TEXT: , the Investigative Committee of the Russian Federation believes elaboration (LeftToRight) TEXT: that the plane was hit by a missile from the air <where was it produced?> TEXT: which was not produced in Russia .
neutral
train_100552
It is worth to take into consideration that the Polish language is more complex than English from the statistical point of view due to the rich morphology and a weekly constrained word order.
as the maximum length of a single tweet is 140 characters, so we made the representation of a single tweet to be a matrix of 140 code vectors 2 .
neutral
train_100553
Firstly, in all cases the BPE-based embedding model built on the large corpus appeared to be superior to the domain-based model.
we replaced all occurrences of extralinguistic elements with symbols representing their types: • hashtags were exchanged with '#' sign, • mentions with the '@' sign, • URLs with the 'ň' sign (not present in neither Tweet Corpus nor Influencer Set).
neutral
train_100554
(10), leading to the final objective function as follows.
word embedding is to convert symbolic representation of words to vector representation with semantic and syntactic meanings, which reflects the relations between words.
neutral
train_100555
Recently, several approaches have been proposed to make more efficient word embedding matrices, usually based on contextual information (Søgaard et al., 2017;Choi et al., 2017).
while many methods have been proposed to learn more efficient representation, knowledge distillation from pretrained deep networks suggest that we can use more information from the soft target probability to train other neural networks.
neutral
train_100556
5for all classes, but they are not available explicitly.
cBOW predicts a word given its neighbor words, and Skip-gram predicts the neighbor words given a word.
neutral
train_100557
(baseline) I have not been right to realise that I am so old.
the other class probabilities can contain additional information describing the input data samples differently even when the samples are in the same class.
neutral
train_100558
The figure shows the same tendency of improvements regardless of the change in dimension size.
if it is not the case, this might weaken the quality of the word's embedding vector.
neutral
train_100559
The rarity of co-occurring every candidate word pair which possibly involves in a semantic relation leads us to exploit a method which does not necessarily need to see the word pair in a context together.
according to the distributional hypothesis which states "words that occur in similar contexts tend to have similar meanings" (Harris, 1954), distributional approaches try to recognize the relation between words based on their separate occurrence in the corpus which can be represented for example by their word embedding vectors (Mikolov et al.,2013) and these methods have shown great performance (Baroni et al., 2012;Turney and Pantel, 2010;Roller et al., 2014).
neutral
train_100560
This task helps customers to absorb better a large number of comments and reviews before making decisions as well as producers to keep track of what customers think about their products (Liu, 2012).
in the scope of this paper, we focus on discussing neural-based systems for generic and opinions summarization.
neutral
train_100561
We choose the optimal values of hyper-parameters in our model and baselines via a grid search on 30% of LAPTOP domain.
for a large corpus or multiple systems comparison, this test requires a huge amount of human effort.
neutral
train_100562
These differences are statistically significant with the sign test (p < 0.05).
(2017), neural network-based approaches suffer the difficulties in achieving high performance on out-ofdomain data due to its high capability to fit indomain data.
neutral
train_100563
We used the size of 100 for hidden layers in the LSTMs and 300 for word embeddings, which were initialized with pre-trained embeddings, word2vec-slim.
the labels are strongly affected by the length of reference summaries in the Dai-lyMail dataset.
neutral
train_100564
To effectively avoid the influence of parse errors and take advantage of the recent advances in neural network-based approaches, we propose a model that jointly learns the discourse tree structure of the source document and a scoring function for sentence extraction.
all the parameters are updated to reproduce the correct labels and edges appearing in the training data D. λ is a parameter to control the priority of the output labels or the edges given by an RST parser.
neutral
train_100565
The main issue is that most of the words are strongly imbalanced in terms of their sense distribution, thus, due to the lack of required training data, the supervised approaches present a lower recall in all-words WSD setting.
we show that the lexical knowledge base with its expansions and the way they are exploited has a strong impact on wSD performance and the proposed method allows for the efficient utilisation of large lexical knowledge bases.
neutral
train_100566
It should be noted, that increasing the hidden layers did not improve the validation accuracy in our experiments.
in an n-dimensional space, the angle between the similar words should be close to zero.
neutral
train_100567
For fine tuning the rules, we randomly pick 140 out of the 205 positive sentences (roughly 70%), containing a total of 158 NPE instances.
this gives us a final precision of 78.79%, a recall of 63.41% and an F1-score 70.27%.
neutral
train_100568
Such cases of ellipsis are called endophoric.
nominal Ellipsis or noun Phrase Ellipsis (nPE, henceforth) is a type of ellipsis in linguists wherein the sub-parts of a nominal projection are elided, with the remaining projection pronounced in the overt syntax.
neutral
train_100569
Addition of only the feature that checks for prepositions immediately following the noun modifier results into an increase in F1-score by 13.73%.
for testing, it is important to use both positive and negative samples.
neutral
train_100570
We use some features that are closely related to the screen and user names of accounts.
it should be noted here that the fact that we and other researchers are able to identify bots quite reliably clearly shows that these assumptions are not unfounded.
neutral
train_100571
That means, the training objective of the systems remains unchanged: they are required to correctly predict the value of the binary label at the first annotation layer.
these comparisons, however, fall outside of the scope of this paper.
neutral
train_100572
While lexical similarity is an important factor in linguistic variation, we would argue that it does not capture all the translationally relevant features of texts.
reducing a functional vector to just the strongest component would be unfair to the functionally hybrid texts that fall under the genre labels of non-academic and editorial in our BNC slice.
neutral
train_100573
num connectors) and numbers (e.g.
• Spsim option 1 and Spsim option 2 are the only metrics which require supervised training, in order to learn grapheme mappings between language pairs (Gomes and Pereira Lopes, 2011).
neutral
train_100574
A disadvantage of using text-formatted pretrained embeddings is that we could not generate embeddings for all words in the gold standard list.
(2011) design a new similarity metric that is able to learn spelling differences across languages.
neutral
train_100575
The remainder of this paper is organized as follows.
predictive models directly try to predict a word from its neighbors in terms of learned dense embedding vectors (Baroni et al., 2014).
neutral
train_100576
In the argument extraction step, for each marked predicate, we try to detect its arguments within a sentence by analyzing its syntax dependency tree with a number of manually constructed rules.
in this case, the model based on RuBERT shows worse performance than all other approaches.
neutral
train_100577
However, for many predicates there are just few examples, and some semantic roles are also rare.
in this work, we additionally suggest using embeddings generated by deep pretrained language models, train models on automatically generated linguistic annotations (morphology / syntactic trees), and provide the full pipeline for semantic role labeling including argument extraction.
neutral
train_100578
Designing such a measure and its automatisation, if at all achievable, is beyond the scope of this work.
{whip:4} incorporates the peripheral FE Instrument ('whip') of {strike:1} in the frame Cause harm assigned to both; -Reducing the scope of the frame through imposing more strict selectional restrictions on the FEs, e.g.
neutral
train_100579
The first one is Using (Is Used by ↔ Uses), a frame-to-frame relation defined as a relationship between two frames where the first one makes reference in a very general kind of way to the structure of a more abstract, schematic frame (Ruppenhofer et al., 2016, p. 83); it may be viewed as a kind of weak inheritance (Petruck, 2015) where only some of the FEs in the parent frame have a corresponding entity in the child frame, and if such exist, they are more specific (Petruck and de Melo, 2012).
we separate: (i) suggested frames related to the one assigned from the hypernym, which are given higher priority; (ii) unrelated suggestions.
neutral
train_100580
Moreover, such features significantly contributed to generalize the proposed RE to other domains of interest.
to conclude, the overall achieved results suggest that more accurate semantic information about entity instances can contribute a great deal to RE.
neutral
train_100581
In addition, we collected three sets of reference data as negative samples using the hashtag #realfood #comidareal and #fitness.
different approaches to text and data mining methods can be applicable to social media data and may prove invaluable for health moni-toring and surveillance.
neutral
train_100582
In addition, the study shows that Twitter is the major data source from social networks and English is the main language studied in the different papers.
in this subsection, we report and discuss the performances of our systems on the Spanish anorexic Table 5: Statistics about positive and negative words in the corpus.
neutral
train_100583
On the one hand, if we look at the false positives, we can see that two of the reasons why our system can be wrong is because it detects that there are words related to food and also that the vocabulary of the other tweets labeled as control is very similar to the vocabulary used in anorexia.
in addition, this is one of the few papers which center on languages other than English.
neutral
train_100584
In this section, we describe different experiments we carried out to test the validity of the SAD corpus.
the results are described in table 5.
neutral
train_100585
Today, they have millions more pages and loyal followers, and the Internet has connected thousands of people with eating disorders.
the treatment of this information using NLP technologies can be applied to the automatic detection of mental problems such as eating disorders.
neutral
train_100586
Another difference is CCG's (by default limited) support for word order.
since word order can label (now already semantic) constituents, as in John sUB loves M ary OBJ , a limited form of syntax-semantics interface is also expected.
neutral
train_100587
EnetCollect 1 aims at exploring a solution to this challenge by combining the activities performed in language learning with approaches for crowdsourcing language-related datasets.
for each of these terms, we acquired more than 20 terms that are related to them.
neutral
train_100588
V-trel can be accessed through two interfaces, a Tele-gram chatbot and a Web application.
any crowdsourcing ambition is faced with the challenge of attracting and retaining crowdworkers and with safeguarding the quality of results produced by the crowd (Daniel et al., 2018).
neutral
train_100589
For in-stance, the embeddings for sub (prefix, mid-word), est (whole-word, suffix), ion (suffix, whole-word), mid (prefix, suffix), and the (whole-word, suffix) lie far from each other.
the figure clearly shows that different forms of the same character n-gram have different contributions towards the likability prediction of books.
neutral
train_100590
Each action contains the information on who performed it and what values the agent provided.
for our experiments, each of these states handles one particular slot.
neutral
train_100591
We now list the domain intents and their descriptions as follows: Making a Transfer.
the context also contains a form of turn that the agent has never seen during training, which is the first turn compression of the second example in Table 10.
neutral
train_100592
According to official test sets' result, their model ranked first in phraselevel and second in message-level tasks.
recently, deep learning is widely used to classify the text as it is capa-ble of extracting public opinion regarding a specific topic and also works excellently with highlevel features.
neutral
train_100593
Also called frequency based, it provides a sparse representation of corpus of documents as a matrix.
while considering the highest F1 score, we got 93.33 percent from the combination of tf-idf vectorizer and unigram model.
neutral
train_100594
English language has the luxury of having numerous annotated datasets as well as having well-tuned text preprocessing techniques.
probability of collision should be considered when choosing the size.
neutral
train_100595
It is fundamental to take into account the contextual information in order to better distribute the words into the classes.
indeed, there are words with many forms.
neutral
train_100596
In fact, a semantic class may correspond to a label or a group of labels, whereas a label cannot belong to only one class.
evaluating the constructed model by calculating its perplexity.
neutral
train_100597
In this work, we adopted the full diacritization level, at which all diacritics should be specified in a word.
the Tunisian dialect is the current mother tongue and the spoken language of all Tunisians from different origins and distinct social belongings.
neutral
train_100598
To achieve better context-sensitive sourcetarget mappings, traditional SMT systems rely on phrase-level translation models.
the success of this kind of systems highly depends on the dictionary coverage.
neutral
train_100599
• IBM Watson Natural Language Understanding.
recent years have witnessed an increased impetus on machine learning methods for data-driven Fr detection (Mukherjee et al., 2013;Ott et al., 2011;rout et al., 2017) The performance of machine learning models for detecting Fr is heavily influenced by the data representation (or features) in their application (Bengio et al., 2013).
neutral