Dataset schema (from the viewer header):

  id         string, length 7 to 12 characters
  sentence1  string, length 6 to 1.27k characters
  sentence2  string, length 6 to 926 characters
  label      string, 4 classes

Each record below lists its fields in this order: id, sentence1, sentence2, label.
train_95600
This field of study has recently attracted a lot of attention due to its implications for businesses and governments.
sentiment analysis for English using the sanders dataset has been reported in a number of papers.
neutral
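A record in this flattened preview can be represented as one dictionary per example. The sketch below builds a record directly from the first example shown above and checks it against the length bounds stated in the schema header (the dataset's name and loading API are not given here, so nothing is loaded from an external source; the bound 1270 is an assumption reading "1.27k" as an approximate character count):

```python
# First record of the preview, expressed as a dict matching the schema.
record = {
    "id": "train_95600",
    "sentence1": ("This field of study has recently attracted a lot of "
                  "attention due to its implications for businesses and "
                  "governments."),
    "sentence2": ("sentiment analysis for English using the sanders dataset "
                  "has been reported in a number of papers."),
    "label": "neutral",
}

# Sanity checks implied by the schema header: id is 7-12 chars,
# sentence1 is 6 to ~1.27k chars, sentence2 is 6-926 chars, and
# label is one of 4 string classes (only "neutral" appears in this slice).
assert 7 <= len(record["id"]) <= 12
assert 6 <= len(record["sentence1"]) <= 1270
assert 6 <= len(record["sentence2"]) <= 926
assert record["label"] == "neutral"
```

The same dict shape applies to every 4-line record that follows, so a loader would only need to group consecutive lines in fours.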
train_95601
Using our system we achieved a significantly higher accuracy score of 78.3%.
this could be connected to the observation that the AStD dataset has the largest number of classes (5 classes compared to 4 in Sanders and 3 in Deutsche Bahn) and there could be an inverse correlation between the increased number of classes and the system performance.
neutral
train_95602
We evaluate our system on three datasets in three different languages, and we find that state-of-the-art results can be achieved without language-specific features or pre-trained word embeddings.
we recommend data annotation to selectively target more data to increase the size of the minority classes to allows the system to better understand and predict these classes.
neutral
train_95603
Of interest is also the quality of the text in the corpus and its applicability to other NLP tasks.
to investigate the noisiness of using Reddit as a source of self-annotated sarcasm we estimate the proportion of false positives and false negatives induced by our filtering.
neutral
train_95604
While having only posts with at least one sarcastic response is useful, it also increases the false negative rate as comments warranting a sarcastic response often draw other sarcastic statements that are similar in content to the labeled sarcastic responses, but which themselves may not be labeled.
we evaluate the baseline performance of simple machine learning methods and compare them with human performance.
neutral
train_95605
For the second case, we only keep comments which have the "/s" at the end of the comment.
a subreddit focusing on the discussion of web programming, for example, might include instances where "/s" is used with a different meaning.
neutral
train_95606
Figure 3 shows the age distribution on the collected survey.
it is the first Brazilian Portuguese corpus about blogs.
neutral
train_95607
Valuable additional resources are a lexicon with word class information and Brown clusters extracted from a web corpus.
horsmann and Zesch (2016) use the FlexTag tagger (Zesch and horsmann, 2016) with a conditional random fields (CRF) classifier.
neutral
train_95608
We summarize the model architecture and its motivation here.
very little codeswitching corpora exist from which researchers can train statistical models.
neutral
train_95609
On most of the other categories the system performs quite well.
the Foreebank (Kaljahi et al., 2015) has a very low percentage of normalized words.
neutral
train_95610
When looking at Twitter public profiles, we found that some users may include information such as their real name, location and a short biography in their profile.
we also present some issues that we encountered during these phases.
neutral
train_95611
All these features can be useful for learner corpora transcription.
all these features can be useful for learner corpora transcription.
neutral
train_95612
It includes an annotation functionality serving historical research needs that can be transformed into a linguistic annotation functionality.
as the final users of the digital texts produced via the above mentioned crowdsourcing tools are historians, the annotation systems of these tools are adapted to bibliographical, paleontological and historical needs.
neutral
train_95613
Naturally, averaging over loss scenarios with partly heavy loss will thus deteriorate average values but may be more faithful to historical circumstances.
alternatively, well-known clustering algorithms performed the cluster steps.
neutral
train_95614
Starting from a number of prealigned observed variant texts, those are transformed, a) into pseudo-DNA and b) into bitvectors of so-called leitfehler.
the traditional scholarly concept of Leitfehler is taken to be a quantitative one: a variant's usefulness as Leitfehler may be assigned a number or weight.
neutral
train_95615
The term autograph refers to (any) manuscripts written by an author him or herself.
this is rather secondary to the main aim of this paper, which is to present a method, which produces both stemma and archetype in conjunction.
neutral
train_95616
Those graphical ambiguity and incompleteness would be solved by taking into account linguistic context such as proposed in Dhóndt et al.
in this work, we limited to most oldest (by Ur iii, about 2,000 BC) ones.
neutral
train_95617
Since the oldest glyphs have complex shapes on each character class, we can distinguish classes more easily compared to newer glyphs as shown in Figure 2.
most of the candidates have been collected from the former, and supplementary employ the latter when we cannot find characters which belong to the target classes.
neutral
train_95618
For example, there are about 70,000 digitally untranscribed hand copies out of all approximately 300,000 documents registered in CDLI.
few studies tried to detection of a character class from handwritten copies of original tablets (Massa et al., 2016;Rothacker et al., 2015), grammatical analysis (Homburg and Chiarcos, 2016) and automatic machine translation (Pagé-Perron et al., 2017).
neutral
train_95619
To tackle to those touched and "complex" characters, it may be effective to apply some existing character recognition methods such as Rothacker et al.
most of them are ligatures of basic characters or less frequently appeared.
neutral
train_95620
In the first stage, we create a 2D array where non-zero values represent the bounding boxes of text tokens in the PDF, then analyze this array for blocks.
it is important to maintain the visual alignment of text elements when projecting coordinate-positioned text in a variable-width font to columns in a monospaced plain-text file.
neutral
train_95621
As the participants are being recorded on both video and audio this gives rise to both ethical, privacy and legal concerns.
one limitation is that the approach is restricted to dyadic corpus creation.
neutral
train_95622
Many studies in particular have focused on the timing of feedback behaviours (Morency et al., 2010;Ward and Tsukahara, 2000;Cathcart et al., 2003).
crowdsourcing has been extensively used for annotations as well as transcriptions (Gruenstein et al., 2009;Hipp et al., 2013;Rashtchian et al., 2010).
neutral
train_95623
Moreover, with respect to patient safety, appropriate strategies need to be defined in order to maintain full control of the medical devices even if IDACO is allowed to perform some pre-defined actions during the surgery and control devices automatically.
being hands-and eyes-free, speech as an input and output modality seems to be a good choice.
neutral
train_95624
The only issue which remains open to debate is how the system should give feedback to the surgery team as, on the one hand, the surgeon should not be annoyed by unnecessary system prompts, but on the other hand, completely passive system behaviour makes the surgery team insecure.
the variety of information the user can request from the system in this mixed-initiative dialogue part is illustrated in Figure 1.
neutral
train_95625
For the presented system, we modelled exemplarily the procedure of a laparoscopic cholecystectomy.
the complexity of the dialogue increases with the complexity of the surgery structure.
neutral
train_95626
In this context, the Operating Room of the Future is a keyword often used (Feußner, 2003).
to the first part which is very flexible and allows the user to control the dialogue, the procedural part follows an exact surgery schedule which has been modelled in the Spoken Dialogue Ontology.
neutral
train_95627
As these instruments and materials, which are necessary to perform each procedural task, are clearly defined, it is possible to predict the surgeon's utterances during each step.
the speech interface, the dialogue and the communication style were assessed positively which leads us to the conclusion that we can confirm our claim that the surgical environment presents a field of application for Spoken Dialogue Systems.
neutral
train_95628
Afterwards, the surgeon starts with the operation and IDACO escorts the team throughout the entire surgery.
the database contains a list of all existing devices as well as the corresponding device parameters.
neutral
train_95629
• Guideline for Response(Declaration) The tag "Response(Declaration)" should be applied to a speaker's response presented by declarative sentences.
to identify the dialog act of a user's utterance, a Support Vector Machine (SVM) (Cortes and Vapnik, 1995) was trained with features such as the character n-grams, word n-grams, and semantic categories.
neutral
train_95630
The probability given by LIBLINEAR (Fan et al., 2008) is used as the reliability of the classification.
these cor-pora were not released.
neutral
train_95631
First, errors often crop up when a previous utterance is long and consists of several sentences.
the performance on the imbalanced dataset was not good.
neutral
train_95632
The CASCADES is integrated with the MATTER method (Pustejovsky et al., 2017) for annotation and data modelling, conceptualized as the Model, Annotate, Train, Test, Evaluate and Revise cycle which inspired the presented methodology.
the DM, designed as a set of processes (threads), receives data, updates the information state and generates the system next action(-s), see also (Malchanau et al., 2015).
neutral
train_95633
Modality corresponds to the speaker's evaluation of the probability of events; it concerns what the speaker believes to be possible, necessary or desirable.
this opens the possibility to specify quite detailed information about the semantic content of dialogue acts, including domain-specific semantics as shown in (Petukhova et al., 2017a) for negotiations.
neutral
train_95634
An extended ISO 24617-2 metamodel (see concepts marked red in Fig.
the classified modality related to the speaker's preferences, priorities, needs and abilities is defined (Lapina and Petukhova, 2017).
neutral
train_95635
Recordings that show an obvious influence of the researcher on the produced utterances have not been included in the corpus.
no metrics have been recorded to attest for this.
neutral
train_95636
An overview of the entire corpus is shown in Two different lexical analyses were performed in order to evaluate the appropriateness of the corpus for research in dialogic alignment of RL and conceptual pacts: Firstly, a trend of lexical convergence was observed both within speakers (i.e.
(2015): They communicate freely via speech but cannot interact in any other way.
neutral
train_95637
In this endeavour, they use word vectors in combination with deep neural networks to determine the dialogue act of an utterance.
hence, our evaluation comprises two steps: 1.
neutral
train_95638
Deictics: point to a location in space, for example, an object a place or a concrete direction), 4.
we build five datasets of non-verbal signal sequences representing the five engagement levels.
neutral
train_95639
Nominal and prepositional phrases are in focus, the results are often comparable to text chunks, but the approach is closer to grammatical rules and to the linguistic understanding of a phrase.
a series of queries are performed on a SQLite database to generate web pages, in a diachronic way to see the keyword evolve in the course of time but also classified by speaker.
neutral
train_95640
Data and visualization are both accessible online.
their copyright status makes them highly relevant for replication studies as well as a wide range of purposes.
neutral
train_95641
An example is given in Figure 2.
testing whether hunter-gatherer languages are more likely to make a lexical distinction than agriculturalists is interesting because it provides a data point towards understanding whether there are cross-linguistic differences in languages spoken by hunter-gatherers and agriculturalists.
neutral
train_95642
This and similar visualizations are unquestionably useful for doing exploratory analysis.
a map projection is an attempt to portray the surface of the earth onto a flat surface.
neutral
train_95643
We hope that GermaParl makes a useful contribution to a growing family of corpora of plenary protocols.
the appropriate approach to handle this is to implement things in a fully object-oriented fashion.
neutral
train_95644
The appropriate approach to handle this is to implement things in a fully object-oriented fashion.
the tool contains a backend which is used to select speeches for annotation via stratified sampling, thereby ensuring a balanced sample of speeches to be annotated, i.e.
neutral
train_95645
Documents offer minimal metadata (legislative period, session number, date).
the ultimate aim of the project is to develop a generic workflow and a framework for preparing corpora of parliamentary protocols.
neutral
train_95646
A problem that appears for addressee identification is when the addressee is not mentioned in the narrative.
for the purpose of specifying our method, we make the following assumptions, aimed at covering differing author styles.
neutral
train_95647
An explicit mention in a line without a speech verb may be the addressee, but can also be a passer-by or a person not currently present.
typically, the goal has been to mirror relations between entities or events extracted from the text as a whole.
neutral
train_95648
Table 3 shows the variation of indicators for speakers across the authors.
our results on speaker identification are relatively similar to previously obtained results (Elson et al., 2010;He et al., 2013;Muzny et al., 2017).
neutral
train_95649
Analogical reasoning dataset is compromised of analogous word pairs, i.e., pairs of tuples of word relations that follow a common syntactic relation.
we denote the headwords obtained by this method as NP-heads during the evaluation section.
neutral
train_95650
We collected data from social networks, and for each word, we used the algorithm based on Soundex described in Section 4.2.1. to get all the words sharing the same phonetic codes.
a cleaning process is done on each entry: removing the words (bold examples in Table 6) that are not related to the entry, removing the words (in italic in Table 6) which are similar, but they do not have the exact meaning as the entry and adding the missed words (examples in blue in Table 7).
neutral
train_95651
Comparable corpora of different levels of comparability (Sharoff et al., 2013) can be used for induction of bilingual lexicons from small seed dictionaries.
one test set was based on the European Commission reports, another one on news wires concerning Donald Trump.
neutral
train_95652
Similarly, in the "Examples" tab, user can click on a "Show" button and the examples from PCEDT 2.0 are shown in the tred 11 viewer/editor, which must be installed locally.
mappings between CzEngClass entries and the entries in the other lexical resources are not necessarily 1:1.
neutral
train_95653
These additional pieces of information are not used in the combination approach.
of the combination, we will first of all have, more translations for lemmas in the four languages, and new target languages that haven't existed before in the original dictionaries.
neutral
train_95654
more words are introduced at the lower levels (A1 and A2), but this trend reverses at the intermediate levels (B1 and B2).
in a second step, the RFL are weighted by a dispersion index (D), intended to counteract the effect of context-specific low frequency words being overused within a small number of texts.
neutral
train_95655
Similarly, the availability of frequencies per level allows to rank words assigned to the same level of proficiency by frequency.
compared to invariable words (such as adverbs, prepositions, conjunctions), they would seem less frequent in texts than they really are.
neutral
train_95656
This observation is of particular relevance when estimating counts from textbooks.
such descriptions remain elusive and the limitations of the CEFR for practical purposes have been stressed (North, 2005, 40).
neutral
train_95657
To provide a resource contribution, we have therefore collected and released a small, high-quality en-eu corpus called Berriak (news in Basque) Table 3: BLEU score of the models over the WMT16 IT corpus.
we can observe that the numbers are clearly higher when Basque is the target language, which matches our intuition that Basque is more difficult to translate into.
neutral
train_95658
We can see that for sentence 2 the OpenNMT model has provided a translation identical to the ground truth, probably thanks to the fact that there are sentences with the same structure in the training corpus.
experiments have been conducted in both english-to-Basque (en→eu) and Basque-to-english (eu→en) to assess performance in both directions.
neutral
train_95659
The NMT and SMT models have outperformed Google Translate when using training and test data from the same corpus (PaCo2 EnEu).
for English, we have used the available CommonCrawl pre-trained embeddings 3 .
neutral
train_95660
In our work, we have used it from the convenient Google Cloud Translation API 2 .
the training corpus of Google Translate is certainly much bigger, and that has helped it achieve better results on Berriak.
neutral
train_95661
The expressions classify the results as correct, incorrect or requiring a manual check.
the database is constantly growing and the more reports are carried out, the less manual work of inspecting segments that are neither covered by a regular expression nor by a positive/negative token is needed.
neutral
train_95662
Our experimental results show that this approach can alleviate the under-translation problem, especially can sharply reduce the number of under-translation cases for the words that should be reordered during translation.
we exploit the pre-ordering for NMT to alleviate this problem.
neutral
train_95663
Baseline: japan freezes its offer of humanitarian assistance to russia in the interim .
during training when decoding, there is a possibility that we feed our model with an estimateŷ t−1 coming from the model itself rather than true previous token y t−1 .
neutral
train_95664
There has been some work in the field of NMT.
in common practice, the decoder uses gold reference as history during training, but it has to use generated output as history during testing.
neutral
train_95665
Wiseman and Rush (2016) take a similar approach and regard training as beam search optimization.
the phrase we mention here has the same meaning as the one in phrase-based machine translation, which denotes any consecutive word sequence.
neutral
train_95666
Neural machine translation (NMT) becomes a new state of the art and achieves promising translation performance using a simple encoder-decoder neural network.
calculating Levenshtein distance between the testing sentence and each sentence in the filtered set is still not fast enough.
neutral
train_95667
In order to generalize the process of identifying idiom occurrences, we lemmatize the phrases and consider different re-ordering of the words in the phrase as an acceptable match.
in the next step we sample without replacement from these sets and select individual sentence pairs to build the test set.
neutral
train_95668
We observe that for some idioms the literal translation in the target language is close to the actual meaning, while for others it is not the case.
we automatically select sentence pairs from the training corpora where the source sentence contains an idiom phrase to build the new test set.
neutral
train_95669
The English terms distortive effect and distorting effect, both belonging to the IATE-814939 entry, are similarly frequent in English as well as their translations into Slovene.
the French translations of the term electrical engineering, i.e., electrotechnique and génieélectrique, are both frequently mentioned in the used corpora.
neutral
train_95670
In this extremely low-resource task, we found that a phrase-based MT system performs much better than other methods, including a g2p system and a neural MT system.
this is not too surprising, especially since most languages use Roman script.
neutral
train_95671
For each query-generated output pair, we asked participants following questions : • Is the question grammatically correct?
search engines are evolving to save time for users and increase their productivity.
neutral
train_95672
Moreover, perplexity makes little use of infrequent words; thus, it is not appropriate for evaluating distributed presentations that try to represent them.
distributed word representation is known to improve the performance of many NLP applications such as machine translation (Chen and Guo, 2015) and sentiment analysis (Tai et al., 2015) to name a few.
neutral
train_95673
It provides human ratings for the similarity of 3,500 verb pairs so that it enables robust evaluation of distributed representation for verbs.
a word similarity task and/or a word analogy task are generally used to evaluate distributed word representations in the NLP literature.
neutral
train_95674
The smaller classes gave rise to the somewhat inflated % IAA score for Croatian because of the larger number of true negatives (verbs that are correctly found not to go in the same class).
what is more, some display a degree of semantic vacuity, that is, have little semantic content of their own and tend to express a more precise meaning when combined with some other word (e.g.
neutral
train_95675
The result for the first baseline, 0.0, is the same as in SemEval, and a natural consequence of B-Cubed since there are no pairs within a class.
in order to measure the overlap between classifications produced by annotators for each language individually and across languages, we calculate percentage inter-annotator agreement (% iAA) for all pairings of verbs.
neutral
train_95676
In our work, we not only indicate senses with the method which is based on similar knowledge (Levy and Goldberg, 2014), but also improve the associate distributional word representations.
we compute the context vector of each instance where C(w i ) is the context set of word instance w i which contains prototype of words, and v g (t) is the global vector of word t. we have where v c (w i , k) is the context vector of the k th sense of word w (prototype of w i ).
neutral
train_95677
The best performing are 200-dimensional embeddings trained with a 5-word context window, achieving a Spearman correlation of 0.524.
the introduction of computationally efficient neural network-based methods for unsupervised learning of word embeddings from large unannotated corpora has been a watershed moment in natural language processing in recent years.
neutral
train_95678
This is not an uncommon phenomenon among lowresource languages, since creation of such resources requires significant time and manpower.
since our focus is on learning representations of Urdu words only, we remove all non-Arabic script characters from the input.
neutral
train_95679
While the language modelling neural network of Bengio et al.
without sufficient labelled data, it is very difficult to build natural language processing systems that can learn useful patterns which generalize well.
neutral
train_95680
These approaches enable word representations to properly capture the distributional hypothesis by measuring the commonality of the linguistic contexts of word occurrences.
this may be due to the fact that the tags attached to an image are tightly associated with the image, whereas linguistic contexts, or context windows, are more generous to include weakly associated words.
neutral
train_95681
Mainstream approaches employ machine learning techniques to integrate/combine visual features with linguistic features.
this confirms that the proposed method could be effective in excluding antonyms from the other semantically similar/related words.
neutral
train_95682
CyTag produced 13,220 words (excluding punctuations) of which 3,716 (28.11%) are function words; WNLT produced 14,435 words (excluding punctuations) of which 5,314 words (36.81%) are function words.
as mentioned earlier, initially developed for English, it has been extended and modified to cover an increasing number of languages.
neutral
train_95683
The rising prominence of corpus-driven approaches in linguistic studies has sparked a growing interest in applying a corpus-based approach to metaphor analysis in the political domain (Charteris-Black, 2004Ahrens, 2009;Deignan, 1999Deignan, , 2005Semino, 2006Semino, , 2008Deignan and Semino, 2010).
the Strict Father model and the Nurturant Patent model) and found support for Lakoff's (1996Lakoff's ( , 2002 hypothesis that Democrats and Republicans view the world differently.
neutral
train_95684
Researchers in many fields are able to use the multiple search functions on the website for their specific research purposes.
after two linguisticallytrained native speakers of Chinese manually checked the corpus, the problematic taggings were revised and the tagged Chinese corpora provide a reliable Chinese database with a wide range of syntactically tagged texts.
neutral
train_95685
The best results in this research occurred at the 7 th iteration for K = 3 and distance weights, where we obtained an accuracy rate of 59.8%.
these labels can be obtained in an unsupervised manner, that is, it does not make use of sense-tagged data, and this method is knowledge-based because the WLSP is a thesaurus.
neutral
train_95686
Several words can have the same precise article number, even when the semantic breaks are considered.
the condition variations that we now considered were as follows.
neutral
train_95687
We again obtained the best results when using only concept embeddings.
we investigate the optimal number of iterations experimentally in Section 4.
neutral
train_95688
If there was a sentence including the word below, the surrounding word vector for the word is the concatenated vector of the word embeddings or the words , , , and .
the surrounding word vectors of A are created not from all word tokens of A, but from only word tokens of A predicted as Sense 1 at the n-1 prediction.
neutral
train_95689
In this paper, we consider several implementations of the feature space to which x belongs.
this architecture must be trained in a layer-wise fashion (train layer , disambiguate training data, train layer + 1, etc.
neutral
train_95690
The universe of WSD approaches is usually divided into the two main categories of "supervised" and "knowledge-based" methods (Raganato et al., 2017).
in the IMS case, semantic features lead to more evident benefits w.r.t.
neutral
train_95691
The tooltip shows its lemma "experiment", the synset identifier (36055), and the words forming the synset "experiment", "experimenting" as well as its hypernyms "attempt", "reproduction", "research", "method".
2 https://github.com/nlpub/word2vec-pyro4 Figure 1 shows the Web interface of Watasense.
neutral
train_95692
Our accuracy was lower than that of the recent Korean WSD study, which employed a supervised approach (Shin and Ock, 2016); however, our method is unsupervised, and therefore has the advantage that it can be applied to any document without learning.
it is expensive and time-consuming to construct such corpora.
neutral
train_95693
The merging phase identifies identical sentences with annotations on different words, and creates a single sentence containing all annotations.
original annotations are done with WordNet 1.6.
neutral
train_95694
In fact, this happens to a large extent on adjectives, causing the largest losses.
the latter's emphasis is to model opposite meanings (antonym-like) as highly non-similar, e.g.
neutral
train_95695
As an example, consider processing the ambiguous word bank as depicted in Figure 2: when observing an occurrence of this word in a sentence about a financial topic, a classifier like fastText will likely suggest topic labels such as finance, money or financial institute because of the fact that the input sentence is about such a topic.
we also tested fastSense on SemEval-2007 Task 17 Subtask 1 (SE7) and Subtask 3 (SE7'), SemEval-2013 Task 12 (SE13) and SemEval-2015 Task 13 (SE15).
neutral
train_95696
The paper is structured as follows: In Section 2., we contrast fastSense with related approaches to WSD.
fastText is not suitable for disambiguating words.
neutral
train_95697
In Section 4., we explain the experiments carried out to eval-uate fastSense and show the results achieved by it.
we do not necessarily select the label of highest probability, but go through the list of rankordered candidates until the first occurrence of x tagged by a corresponding sense number is reached.
neutral
train_95698
TAG also utilizes a range of visual encodings to identify relationships and types, and includes an alternative representation of linguistic relationships.
this interest has been properly disclosed to the University of Arizona Institutional Review Committee and is managed in accordance with its conflict of interest policies.
neutral
train_95699
Purgina and Mozgovoy 2017introduce WordBricks, which also utilizes a container layout to identify linguistic structure while maintaining an explicit sequential representation that is easy to read.
bRAT does not support the ability to draw links between links, which makes it difficult to represent relations linking several predicate-less relations, a feature necessary to completely describe complex events.
neutral