Columns: id (string, length 7–12) · sentence1 (string, length 6–1.27k) · sentence2 (string, length 6–926) · label (string, 4 classes)
train_93700
Using these scoring functions, we assign scores to all possible pairs of conjuncts.
given a coordinator word (e.g., "and," "or," or "but"), a system must return each conjunct span if the word actually plays the role of a coordinator; otherwise, NONE is output for the absence of coordination.
neutral
train_93701
In this article, we propose to investigate a new problem consisting in turning a distributional thesaurus into dense word vectors.
synonyms that are not part of the injected knowledge.
neutral
train_93702
rank-fitting is significantly the worst method and its association with counterfitting is better than retrofitting for P@1 only, but not significantly.
first, T_cnt significantly outperforms GloVe and SGNS for all measures.
neutral
train_93703
Our last choice is based on a less constrained form of matrix factorization where T is decomposed into two matrices U and V such that T ≈ U · V. U and V are obtained by minimizing the following expression, where the first term minimizes the reconstruction error of T by the product U · V while the second term is a regularization term, controlled by the parameter λ, for avoiding overfitting.
table 2 also shows that there is still room for improvement for reaching the level of the initial thesaurus T_cnt.
neutral
train_93704
Each synonym is given with its [rank] among the neighbors of the entry and its similarity value with the entry. The integration of external knowledge into the thesaurus before its embedding is clearly effective, as illustrated by the significant differences between SGNS+retrofit(K) and svd(T_cnt+K)+retrof(K).
the relevance of the semantic neighbors of an entry strongly decreases as their rank increases and the first neighbors are particularly important.
neutral
train_93705
The sentences in all datasets were tokenized, compound-split, and lemmatized, and for each target word we automatically determined the set of possible senses, given its context and inflection. Table 3: WSD accuracy on baselines, UKB, and the three variants of our model (ρ = 0.9) on all test sets.
if by using the SN we intend to learn clear separations between different senses of a word, it stands to reason to limit its application to those cases, while monosemous words can be sufficiently well trained by the usual corpus-based approach, and act as semantic anchors in the broader vector space.
neutral
train_93706
The extent of the regularizer's influence on the model is adapted by a mix parameter ρ ∈ [0, 1]: the higher the value of ρ, the more influence the SN data has on the model, and vice versa.
we computed the Mean Averaged Precision (MAP) score by macro-averaging the AP scores over the set of frames.
neutral
train_93707
Manually inspecting some of the simplifications made, we find that when it comes to sentence splits, both MT-based simplifiers seem to be able to perform this type of transformation in an accurate way.
as we are going to show in this section, standard end-to-end systems without special adaptation to TS do not succeed in learning alternative formulations of the original text.
neutral
train_93708
Taito Corporation's homepage is http://www.taito.com.
each predicate becomes an STT with two entity types (the type of the subject, which is the instance entity, and the type of the object) in V; a single relation between the two types (the predicate) in R; and two simple initial templates in L, where (PREDICATE) is replaced with the relevant predicate.
neutral
train_93709
The text is annotated with the r-NE tags.
Figure 4: LSTM language model training.
neutral
train_93710
Since available video/text pairs are limited in size, the direct application of end-to-end deep learning is not feasible.
let f_i, ..., f_{i+(l−1)} be a substring, of length l, of f that corresponds to a single recipe sentence.
neutral
train_93711
Considering the likelihood of object recognition and the likelihood of a combination of r-NEs included in the sequence, we set the likelihood P(e) that e appears as follows, where P(e) is the average of the probability of the result of object recognition: This value indicates the likelihood of object recognition.
we can say that our method is promising to solve this novel task.
neutral
train_93712
Hereafter, we focus only on the frames with the procedure-related objects listed in x-NEs, and describe the sequence of such frames as follows, where f_i is the i-th frame and |f| is the length of the sequence.
it is reasonable for the procedural text generation to focus more on object recognition than motion recognition.
neutral
train_93713
The third model proposed by us is the Leaf-Composition-Tree LSTM (LCT-LSTM). Different from the LT-LSTM and TC-LSTM, this model not only takes the composition nodes into consideration when building the LSTM layer, but also sums the outputs of the two modules mentioned above.
during the forward propagation, we add the outputs of the two modules and take the result as the output of the whole model.
neutral
train_93714
Suppose we have vector representations of an argument a and a Wikipedia article w. The similarity score, sim(a, w), is simply the dot product of the two representations, aw^T.
for all methods that use the presence of Wikipedia articles, we use several variations of a corpus to determine how well the methods leverage topic-specific articles, as opposed to randomly selected articles.
neutral
train_93715
We then propose modifications of the supervised models to leverage external data.
high ME corresponds to high randomness.
neutral
train_93716
We run LSTM1 to get the hidden representation {h_{f_1}, h_{f_2}, ..., h_{f_n}} and run LSTM2 to get the hidden representation {h_{b_1}, h_{b_2}, ..., h_{b_n}}.
the representations of words as continuous vectors (word embeddings) have proved more powerful than discrete representations (Bengio et al., 2003) (Mikolov et al., 2013b).
neutral
train_93717
Thirdly, to capture document level and cross-document level clues without complicated inference rules.
the wider context that "She wants to call her pregnant daughter Saba in Sweden to see if she has delivered."
neutral
train_93718
Such resources have been largely ignored by the mainstream statistical NLP research, because they were not specifically designed for NLP purpose at the first place and they are often far from complete.
2 Approach Overview A typical supervised name tagger is presented in (Lample et al., 2016), consisting of Bi-directional Long Short-Term Memory networks (Bi-LSTM) and CRFs.
neutral
train_93719
They observed that NMT systems are also more accurate at producing inflected forms, but they perform poorly when translating very long sentences.
if the translator did not apply any changes, the system automatically assigned the highest quality rating -"Unchanged".
neutral
train_93720
To validate, whether the post-edits are of good quality, we performed quality assessment of the post-edits according to the LISA Quality Assurance model 3 .
errors related to accuracy, especially mistranslation and omission errors, occur more often in NMT outputs.
neutral
train_93721
Actually, there have been some effective methods to deal with the situation of translating language pairs with limited resources under different scenarios (Johnson et al., 2016; Sennrich et al., 2016a).
our approach can be easily applied to any endto-end attention-based NMT framework.
neutral
train_93722
As shown in Figure 2, when predicting the words y_2 and y_3, we want to attend to more information from the corresponding words x_2 and x_3.
in §3.1, we acquired the partially aligned corpora with the phrase pairs and monolingual sentences.
neutral
train_93723
• Erroneous gold labels: Six instances (25%) were actually correctly labeled as "usage" by the system, whereas the gold label was wrong ("I really love the smell of fresh laundry, and the smell of Downy.").
in addition to opinions on product quality, reviewers often share how, where, or why they use the product.
neutral
train_93724
Additionally, the ASR system can be fine-tuned in parallel with the emotion branch by updating the layers of the ASR network.
to fully understand a current emotion expressed by a person, knowledge of the context is also required, such as facial expressions, the semantics of a spoken text, gestures and body language, and cultural peculiarities.
neutral
train_93725
The progressive architecture complements information from the ASR network with SER representations trained end-to-end.
in both experiments, dropout with the rate of 0.25 was applied to averaged representations.
neutral
train_93726
Table 1 summarizes our experiments on our proposed local attention models and compares them to the baseline model using several possible scenarios.
for our decoder target, we re-mapped the original target phoneme set from 61 into 39 phoneme classes plus the end-of-sequence mark (eos).
neutral
train_93727
Finally, the decoder task, which predicts the target sequence probability at time t based on the previous output and the context information c_t, can be formulated as follows: For the speech recognition task, the most common input x is a sequence of feature vectors such as Mel-spectral filterbank features and/or MFCCs.
the local property helps our attention module focus on certain parts of the speech that the decoder wants to transcribe, and the monotonicity property strictly generates the alignment left-to-right from the beginning to the end of the speech.
neutral
train_93728
It contains 113438 words for training and 12753 for testing (12000 unique words).
x ∈ R^{S×D}, where D is the number of features and S is the total frame length for an utterance.
neutral
train_93729
Despite the fact that we reset the memory for the Attentive RNN-LM at each sentence boundary whereas the caches for the baseline systems span sentence boundaries, our best Attentive RNN-LM is within 1 perplexity point of the system of (2017) (which is allowed to cache 2,000 previous hidden states), and has a lower perplexity than all of the other baselines.
the words in the X-axis (horizontal) are the inputs at each timestep and the words in the Y-axis (vertical) are the next (or predicted) words.
neutral
train_93730
An LM provides a probability for a sequence of words in a given language, reflecting fluency and the likelihood of that word sequence occurring in that language.
we see our "attentive" RNN-LM (see §3) as a generalized version of these models as we rely on the encoded information in the hidden state of the RNN-LM to represent previous input words and we use a set of attention weights (instead of a key) to retrieve information from the past inputs.
neutral
train_93731
The purpose of our attention mechanism is to enable an RNN-LM to bridge long distance dependencies in language.
(2015) and Press and Wolf (2016).
neutral
train_93732
IGC thus falls on a continuum between chit-chat (open-ended) and goal-oriented task-completion dialog systems, where the visual context in IGC naturally serves as a detailed topic for a conversation.
we define idf(h, D) = log(|D| / |{d ∈ D : h ∈ d}|), where we set N = 10 to cut short each N-best list.
neutral
train_93733
These studies, however, did not incorporate dynamic representations in the input layer.
the resulting novel model, the Dynamic Neural Text Model, uses dynamic word embeddings that are constructed from the context in the output and input layers of an RNLM, as shown in Figures 1 and 2.
neutral
train_93734
In addition, dynamic updates of both the input and output layers (D) further improved the performance from those of either the output (C) or input (B) layer.
our proposal incorporates the copy mechanism through the use of dynamic representations in the output layer, integrating them with dynamic mechanisms in both the input and output layers by applying dynamic entity-wise representation.
neutral
train_93735
Then we convert the parse tree into binary with right branching and we remove the internal nodes that have only one child so that our binary tree-structured models can be applied.
we argue that these tags are worth considering when we compose the semantics in the tree-structured neural networks.
neutral
train_93736
The MSE per dimension is visualized. Table 3: Results of the binary sentiment classification task for each language with each translation matrix.
for this task, we built a regressor for each of the three dimensions whose input is a 300-dimension vector and whose output is a real number from 1 to 9.
neutral
train_93737
We opt to use offline alignment to show that such a low-cost approach does, in fact, capture a significant part of the relationship between words of different languages when it comes to sentiment.
this would serve to demonstrate the (expected) limits of the linear transformation model in handling polysemy and subsequently its impact on the topological mapping of sentiment in vector space.
neutral
train_93738
For this task, we built a regressor for each of the three dimensions whose input is a 300-dimension vector and whose output is a real number from 1 to 9.
a single transformation (linear in the case of our work) is sufficient to learn a projection which allows one to use labeled English data to aid in sentiment analysis.
neutral
train_93739
The main issue of Dong et al.
first, we compare the effects of different features on target-dependent sentiment analysis.
neutral
train_93740
We call such lexicon-based features as lexicon embeddings because embeddings are a feature input of deep learning model and describe the properties of a word or a phrase.
the resulting matrix has dimension d × (s + m − 1).
neutral
train_93741
The model with CharAVs and LexW2Vs built on top of GoogleW2V vectors and DependencyW2Vs is effective in increasing classification accuracy.
to construct embedding inputs for our model, we use a fixed-sized word vocabulary V_word and a fixed-sized character vocabulary V_char.
neutral
train_93742
Based on these equations, adding the dependency-based word vector T corresponds to considering a composite input [x_i, T] to the GRNN cell that concatenates the advanced continuous word embeddings and the advanced dependency-based word embeddings.
gRUs have fewer parameters (U and W are smaller) and thus may train a bit faster or need less data to generalize.
neutral
train_93743
The first convolution produces a fixed-size character feature vector named mgram features by extracting local features around each character window of the given word and using a max pooling over vertical character windows.
we modify Bi-GRNN of (Chung et al., 2014) into Bi-CGRNN to take word embeddings in order to produce a sentence-wide representation from sentence compositions.
neutral
train_93744
The usage in the first sentence, on the other hand, can be reliably disambiguated due to the post-modifying phrase news agency.
we break the performance down by the depth of each post in a given thread, and present the results in Figure 3.
neutral
train_93745
Long short-term memories ("LSTMs": Hochreiter and Schmidhuber (1997)), a particular variant of RNN, have become particularly popular, and been successfully applied to a large number of tasks: speech recognition (Graves et al., 2013), sequence tagging (Huang et al., 2015), document categorisation (Yang et al., 2016), and machine translation .
although CRFs are adept at capturing local structure, the problem does not naturally suit a linear sequential structure, i.e.
neutral
train_93746
Using the scoring function in Equation 7, we calculate the score of the sequence y normalised by the sum of scores of all possible sequences ỹ, and this becomes the probability of the true sequence: We train the model to maximise the probability of the gold label sequence with the following loss function, where p(y^{(n)} | x^{(n)}) is calculated using the forward-backward algorithm.
as pointed out by and Linzen et al.
neutral
train_93747
The grid is summarized into a vector of transition probabilities.
(2017) propose a convolutional neural network formulation for AA tasks (detailed in Section 3).
neutral
train_93748
Grammar Error (GE) To detect GEs, we use the LanguageTool proofreading program 5 to detect all GEs (e.g., redundant phrases and typos) in all training set justifications.
these results are obtained when n is set to 12.
neutral
train_93749
Style Our fourth baseline captures aspects of an argument's style.
given an argument, we employ the following steps to extract references and clean up internal citations.
neutral
train_93750
This task has recently drawn increasing research attention and is helpful to many downstream applications, such as user and product recommendation.
we train the word embeddings using the training and development sets of each dataset with the word2vec tool (Mikolov et al., 2013).
neutral
train_93751
Semantic information may provide opposite opinions in different contexts.
for sentiment classification with CNN and LSTM vectors, we proposed a 3-layer neural network which efficiently takes advantage of these vectors.
neutral
train_93752
Simverb-3500 (Gerz et al., 2016) was introduced to provide researchers with a testbed for verb relations, a specific yet important class of words that was less common in earlier word-level similarity data sets.
it appears that similarity, relatedness, and motivational alignment are more highly correlated with one another than perceived actor congruence.
neutral
train_93753
The features are generated for each segment for tokens in the training data x^{(i)}.
it can be shown, more generally, that the order restriction in (7), (8) leads directly to a form where the order between any two edit operations in δ^{(i)}, called σ_k, σ_l, is free only in the case of multiple insertions acting on the same position of a string, i.e.
neutral
train_93754
transition score matrix A_{mn}, which indicates the score of jumping from the m-th tag to the n-th tag.
we directly use ground-truth segments as inputs and predict a DA label for these segments.
neutral
train_93755
The experimental results show that these factors indeed influence the emotional trend.
early stopping is carried out on the validation set during training.
neutral
train_93756
However, it is impractical to ask users to provide explicit feedback when the agents' responses displease them.
early stopping is carried out on the validation set during training.
neutral
train_93757
Both Q_{−2} and Q_{−1} contain negative emotions and possibly lead to dissatisfaction.
as described in the introduction, the task is to predict the impending dissatisfaction given an (n + 1)-round context.
neutral
train_93758
PARC3, derived from the Penn Discourse TreeBank, consists of Wall Street Journal news articles in which attributions have been manually annotated.
to approximate the perceptions of the general public, we use crowdsourced human judgments in creating the ground truth verifiability-scored dataset.
neutral
train_93759
Spot checks of 100 pronoun-containing attributions in PARC3 showed that this produced reliable, grammatical interpolations.
one important way in which attributions can differ, particularly with respect to news reporting, is in their verifiability, the ease with which an attribution's fidelity to the source can be checked.
neutral
train_93760
We see that our predictions mostly accord with the personality literature, and that domain adaptation often strengthens the correlations that we expected and weakens the correlations that had been predicted from language, but were not expected.
the results suggest that tSDA works well when the county-level scores are further aggregated to the state level (Rentfrow and Jokela, 2016).
neutral
train_93761
We know that personality at the level of individuals is relatively stable over time; average personality in a county level should be extremely stable from year to year.
tSDA corrects for the different word distributions between Facebook and Twitter and for the varying word distributions across counties by adjusting target-side word frequencies; no changes to the trained model are made.
neutral
train_93762
These two learners are complementary in the two-path bootstrapping system.
meanwhile, Twitter user @purplhaze42 is a self-proclaimed anti-racist and anti-Zionist.
neutral
train_93763
In order to ensure our annotators have a complete understanding of online hate speech, we asked two annotators to first discuss over a very detailed annotation guideline of hate speech, then annotate separately.
among them, only 45 phrases were seen in existing hate slur databases while the other terms, 261 phrases in total, were only identified in real-world tweets.
neutral
train_93764
Smells like a stout and enjoyable.
several recent works have focused on learning generative models of product reviews, either to generate reviews per se, or as a means of learning user and item attributes (Lipton et al., 2015;Radford et al., 2017;Dong et al., 2017;.
neutral
train_93765
A sentence is regarded as a question if the last character is "?"
the tables show the number of times each method on a row was evaluated higher than another method on a column.
neutral
train_93766
Lead Question is a strong baseline in particular.
in this study, the vanilla encoder-decoder and the attention models use Long Short-Term Memory (Hochreiter and Schmidhuber, 1997), and Gated Recurrent Unit (GRU) (Cho et al., 2014) is used in the model with the copying mechanism.
neutral
train_93767
There was no length constraint in terms of the number of characters or words.
each instance was evaluated by 3 evaluators.
neutral
train_93768
In Example 3 in Table 4, the pronoun "it" in the main question "Do you like it?"
the model failed to decode the sequence.
neutral
train_93769
The evaluation criteria were "grammaticality" and "focus", which are based on the criteria used in DUC.
we assumed that a better summary would contain more focused content in a shorter output.
neutral
train_93770
As the METEOR and ROUGE evaluations show, this is not a problem for the final result.
if n concepts are selected, n units of flow are sent from the root over the edges of the graph and each selected concept consumes one of them.
neutral
train_93771
This eases the burden of the decoder by shifting the labeling task to the selector.
it may seem redundant since the RNN gated unit already has a sophisticated gating mechanism, such as the GRU unit and the LSTM unit (Hochreiter and Schmidhuber, 1997).
neutral
train_93772
We can motivate improvements to Algorithm 1 by viewing it as a greedy approximation to the optimization problem which chooses a set of scored predictions according to: Given the view that CAEVO is providing a solution to the objective in Equation 1 using Algorithm 1, it is straightforward to see possible directions for improvement.
in the past, labeled corpora used for training and evaluation contained only small subsets of pairs of events and times.
neutral
train_93773
DT, SVM, RF, GB and AB stand for Decision Tree, Support Vector Machine, Random Forest, Gradient Boosting, and AdaBoost, respectively.
environmental influences, which are usually called gene-environment interactions (GxE), have been considered as important factors and have been extensively researched in biology.
neutral
train_93774
From this observation, we anticipate that if we address the GxE task focusing on the number by combining the two models, the performance will exceed the current best score, 0.426.
the static RNN decoder outperforms other models.
neutral
train_93775
In our paper, we propose a simple PMI-based approach, which utilizes local mutual frequencies in the course corpus, to calculate the phraseness score for each candidate.
our experimental results verify that over 98% of the candidates have vector representations in this way.
neutral
train_93776
The choice of the fake identities was based on two practical considerations.
in the future, with larger-scale data collections, we will be able to work with an extended set of gender options.
neutral
train_93777
By contrast, we explore the discriminatory capability of the unstructured clinical narratives to infer the possible diagnoses.
such models heavily rely on large labeled data, and lack the ability to capture inherent ambiguities and complexities of a clinical scenario.
neutral
train_93778
In the following, an overview of the participants of the user study, the study design as well as the results are presented.
as discussed in Section 5.3.1, there exist significant interactions of generation method, nationality and original statement on the preference of elaborateness/indirectness.
neutral
train_93779
Here, CG statements are harder to understand than HG ones.
section 3 gives an overview of the DS our approach is employed in.
neutral
train_93780
In this work, we propose to follow the work of Bamman et al.
6 For each tweet, (1) we search for a location mention from GeoNames 7 in the text message, then (2), we verify if the user enabled the geo-coding function of his/her device, and (3) we look for location information from the stored user's profile.
neutral
train_93781
Similarly to Kwok and Wang (2013), this study focuses on a specific community and is of limited scope.
this data set can be thought as a hard test for classifiers as non racist tweets may contain slurs unlike most works so far, which assess their models based on the hypothesis that non racist tweets usually contain general vocabulary and do not exhibit any critical content.
neutral
train_93782
For example, Sasaki et al.
rule-based means the conventional rule-based method proposed by Sasano et al.
neutral
train_93783
Since " (totemo, such)" and " (tanoshii, fun)" are Out Of Vocabulary (OOVs), the traditional system cannot generate the correct word segments or POS tags.
twitter data contain manually annotated estimates of the word segmentation and POS tagging; we have to assess whether or not adding the normalization candidates negatively affects a system.
neutral
train_93784
Figure 8 shows the average evaluation results on the simulated data sets.
the system returns true if an original copy is found in the database; otherwise, the system returns false and adds the document to the database.
neutral
train_93785
The system returns true if an original copy is found in the database; otherwise, the system returns false and adds the document to the database.
(A branching is a subgraph where each vertex has an in-degree of at most 1.)
neutral
train_93786
SQuAD: SQuAD is a machine comprehension dataset constructed on 536 Wikipedia articles (23K paragraphs), with more than 100,000 questions.
although these improvements are not as large as those we achieved with multiple-turn reasoning, they are still considerable and imply that robust representations of words are an important building block to strong RC models.
neutral
train_93787
The document score ds is defined by Term Frequency-Inverse Document Frequency (TF-IDF), which is written as follows, where tf(w′, d) is the frequency of the word w′ in the document d, idf(w′) is the inverse document frequency of the word w′, and ℓ(d) is the length of d. The VQA feature uses document-local statistics (except for IDF) and counts only in the top-k search results.
and demonstrated that the system performance was boosted by multi-step inference that combines, e.g., taxonomic knowledge ("N.Y. is in the northern hemisphere") and general law ("The summer solstice in the northern hemisphere is in June").
neutral
train_93788
The test sentences (i.e., hs) included those taken from NCTUA world history exams and hence the latter task setting is close to ours.
the solver needs a more rigorous inference about temporal relations than about the matching of other NEs.
neutral
train_93789
In this work, we develop the dataset DailyDialog which is high-quality, multi-turn and manually labeled.
we examine our DailyDialog dataset by how many conversations end with positive emotions (i.e., happy), and find 3,675 (28.0%) "happy" dialogues.
neutral
train_93790
a woman sings | The kid is sliding down a tan plastic slide.
(2015) which was their best performing individual neural model.
neutral
train_93791
We believe this is because the sentences in FN+ contain language that is rarely seen.
a series of injections are used to battle a type of cancer in patients because patients have a special type of drug which counteracts this sickness.
neutral
train_93792
This has multiple advantages: it allows us complete control over the learning process, and we can train on larger and more diverse corpora.
they have some drawbacks that limit their usefulness for specific, low-resource domains.
neutral
train_93793
The dimensions of w_{j−1} and ŝ_{j−1} are d. The attention score α_j(i) is calculated as follows: The context vector c_j for generating a target-language sentence is calculated by The attentional vector ŝ_j is calculated by using the context vector as follows: and then using the state of this hidden layer, the probability of the output word y_j is given by where W_c and W_s represent weight matrices.
the vertical axis represents a source sentence.
neutral
train_93794
In fact, the German sentence is an idiomatic translation of the English sentence.
padó and Lapata (2005), as one of the earliest studies on annotation projection for SRL using parallel resources, apply different heuristics and techniques to improve the quality of their model by focusing on having better word and constituent alignments.
neutral
train_93795
Note that PBMT 1-best results are equivalent for PBMT_in × NMT_in and PBMT_in × NMT_out since the same PBMT model is used and NMT is not relevant.
a very deep lattice with a large beam begins to approach the unconstrained search space of standard decoding in NMT.
neutral
train_93796
Many JSL words and syllables, which are basic units that compose words, are newly coined to meet these requirements, e.g., (Japanese Federation of the Deaf, 2011).
contacts indicate whether and when both hands have contact in the syllable execution.
neutral
train_93797
Therefore, such bounds should be used to decide the number of dimensions, instead of trial and error.
in this paper, we show that the dimension should instead be chosen based on corpus statistics.
neutral
train_93798
This is because once the lower bound is reached, the errors introduced due to the violation of equality constraint are removed.
one needs to think of embedding algorithm's capability to capture these differences effectively, which is governed by its hyperparameters.
neutral
train_93799
The dataset contains 74 restaurant reviews typed with phrase suggestions.
we can obtain an unbiased estimate of the reward that h_θ would incur using importance sampling: R̂. The variance of this estimate can be unbounded because the importance weights h_θ(y_i | x_i)/p_i can be arbitrarily large for small p_i.
neutral