id: string, length 7 to 12
sentence1: string, length 6 to 1.27k
sentence2: string, length 6 to 926
label: string, 4 classes
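The rows below repeat a fixed record structure (id, sentence1, sentence2, label). As a minimal sketch of how such records could be consumed, assuming the split is exported as JSON Lines with exactly these field names (the file name train.jsonl and the export format are assumptions, not stated by this preview):

```python
# Minimal sketch: read rows that follow the schema above and tally the label column.
# Assumes a JSON Lines export with fields "id", "sentence1", "sentence2", "label";
# the file name "train.jsonl" is a placeholder, not taken from this preview.
import json
from collections import Counter

label_counts = Counter()

with open("train.jsonl", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        pair_id = row["id"]           # e.g. "train_22000"
        sentence1 = row["sentence1"]  # first sentence of the pair
        sentence2 = row["sentence2"]  # second sentence (lowercased first character in this preview)
        label_counts[row["label"]] += 1  # e.g. "contrasting" (one of 4 classes)

print(label_counts)
```

The same loop applies to any of the 4 label classes, although only "contrasting" appears in the rows shown here.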
train_22000
Open-domain question answering (QA) has been extensively studied in recent years.
many existing works have followed the 'search-and-answer' strategy and achieved strong performance (Chen et al., 2017; Kwon et al., 2018; Wang et al., 2018b) spanning multiple QA datasets such as TriviaQA (Joshi et al., 2017), SQuAD (Rajpurkar et al., 2016), MS MARCO (Nguyen et al., 2016), and ARC. Open-domain QA tasks become inherently more difficult when (1) dealing with questions with little available evidence; (2) solving questions where the answer type is free-form text (e.g.
contrasting
train_22001
However, this assumption ignores the difficulty of retrieving question-related evidence from a large volume of open-domain resources, especially when considering complex questions which require reasoning or commonsense knowledge.
ARC does not provide passages known to contain the correct answer.
contrasting
train_22002
Instead, the task of identifying relevant passages is left to the solver.
questions in ARC have multiple answer choices that provide indirect information that can help solve the question.
contrasting
train_22003
we might retrieve sentences similar to the question or the answer choice, but would struggle to find evidence explaining why the answer choice is correct).
a reformulated query consisting of essential terms in the question and Choice 4 can help retrieve evidence explaining why Choice 4 is a correct answer.
contrasting
train_22004
For the first question, there exists evidence that can justify the answer candidate (C).
the model chooses (D) which has more words overlapping with its evidence.
contrasting
train_22005
At each hop, it predicts a relation path using the reasoning module, and also optimizes it using intermediate results.
UHop has demonstrated the ability to process large-scale knowledge graphs in experiments conducted on Freebase (Bordes et al., 2015).
contrasting
train_22006
If the process should continue, i.e., stop is false, the loss is defined as follows, where s_r is the score of the question paired with the gold relation r in the next hop and s_r̂ is the score of the question paired with the extracted relation r̂.
if the process should terminate, we optimize the model so that it learns to infer that s_r̂ is greater than s_r, resulting in the termination of relation extraction.
contrasting
train_22007
Coref-GRU (Dhingra et al., 2018) uses coreferences among tokens in documents.
it is still limited by the long-distance relation propagation capability of RNNs.
contrasting
train_22008
In addition, bidirectional attention (Seo et al., 2016) shows its superiority to vanilla mutual attention because contexts and queries provide complementary information to each other.
little work exploits the attention between graphs and queries.
contrasting
train_22009
Both MHQA-GRN and Entity-GCN utilize graph networks to resolve relations among entities in documents.
the lack of attention and complementary features limits their performance.
contrasting
train_22010
For example, 'text' carries semantic information of the spoken sentence, whereas 'acoustic' information reveals the emphasis (pitch, voice quality) on each word.
the 'visual' information (image or video frame) extracts the gesture and posture of the speaker.
contrasting
train_22011
System (Poria et al., 2017b) uses contextual information for the prediction but without any attention mechanism.
(Zadeh et al., 2018a) uses multi-attention blocks but does not account for contextual information.
contrasting
train_22012
Ghosal et al. (2018) proposed an inter-modal attention framework for multi-modal sentiment analysis.
the key differences with our current work are as follows: a) Ghosal et al.
contrasting
train_22013
The underlying assumption of this task is that the entire text has an overall polarity.
the users' comments may contain different aspects, such as: "This book is a hardcover version, but the price is a bit high."
contrasting
train_22014
The results of the BERT-single model on aspect detection are better than Dmu-Entnet, but the accuracy of sentiment classification is much lower than that of both SenticLstm and Dmu-Entnet, with a difference of 3.8 and 5.5 respectively.
BERT-pair outperforms other models on aspect detection and sentiment analysis by a substantial margin, obtaining improvements of 9.4 in macro-average F1 and 2.6 in accuracy over Dmu-Entnet.
contrasting
train_22015
Directly fine-tuning the pre-trained BERT on TABSA does not achieve performance growth.
when we separate the target and the aspect to form an auxiliary sentence and transform the TABSA into a sentence pair classification task, the scenario is similar to QA and NLI, and then the advantage of the pre-trained BERT model can be fully utilized.
contrasting
train_22016
Rules R3/R4/R5 are less effective on their own.
as a whole, they can still improve the overall performance.
contrasting
train_22017
This means this rule may introduce some noise into the objective function.
"-R3" can result in worse accuracy for TripAdvisor, which means it is still complementary to the other rules for this dataset.
contrasting
train_22018
It should be good news that Ross has his paper published and Rachel is glad to see related reports about it.
the transcripts do not reveal very strong emotions compared to how the characters might act in the TV show.
contrasting
train_22019
Negation rules have been found to work effectively across different domains and rarely need finetuning (Taboada et al., 2011).
rule-based approaches entail several drawbacks, as the list of negations must be pre-defined and the criterion according to which a rule is chosen is usually random or determined via cross-validation.
contrasting
train_22020
Multilingual Word Embeddings BWE methods can be extended to the case of multiple languages by simply mapping all the languages to the vector space of a selected language.
directly learning multilingual word embeddings (MWE) in a shared space has been shown to improve performance (Ammar et al., 2016;Duong et al., 2017;Chen and Cardie, 2018;Alaux et al., 2018).
contrasting
train_22021
This method can be generalized to the cross-lingual setting by training monolingual sentimental embeddings on both languages then aligning them in a common space.
it requires sentiment resources in the target language thus is impractical for low-resource languages.
contrasting
train_22022
In the unsupervised case, such a dictionary can be induced from the monolingual embeddings S and T (Artetxe et al., 2018).
the quality of this dictionary is usually not good, which in turn degrades the quality of the projection matrices learned from this dictionary.
contrasting
train_22023
(3) for the task Y_s^(l) and the l-th layer, except that it does not treat the NMT parameters θ as constant.
it always occurs on the training data that Y_s … To alleviate this inconsistency issue and better approximate the full-coverage method, we leverage the above structural property by adding another regularization term.
contrasting
train_22024
Since layer 1 is not selected as the regularized layer, no significant gap is observed.
since layer 1 is close to the loss directly imposed on layer 2, improvements of about 5% and 8% are obtained.
contrasting
train_22025
Since UD attaches subordinating and coordinating conjunctions to the subsequent conjunct, this results in them being positioned in the same conjunct they relate (e.g., After will be included in the first conjunct in "After arriving home, John went to sleep"; and will be included in the second conjunct in "John and Mary").
UCCA places conjunctions as siblings to their conjuncts (e.g., "[After] [arriving home], [John went to sleep]" and "[John] [and] [Mary]").
contrasting
train_22026
Units relating a Scene to the speech event or to the speaker's opinion are Ground (e.g., "no, Warwick in New Jersey" and "Please visit my website").
discourse elements that relate one Scene to another are Linkers (e.g., anyway).
contrasting
train_22027
Our results thus suggest that encoding syntax more directly, perhaps using syntactic scaffolding (Swayamdipta et al., 2018) or guided attention (Strubell et al., 2018), may assist in predicting unit boundaries.
TUPA often succeeds at making distinctions that are not even encoded in UD.
contrasting
train_22028
Low perplexity signifies less uncertainty over which words can be used to continue a sequence, quantifying the ability of a language model to predict gold or reference texts (Brown et al., 1992;Young et al., 2006).
style transfer outputs are not necessarily gold standard, and the correlation between perplexity and human judgments of those outputs is unknown in the style transfer setting.
contrasting
train_22029
For style transfer intensity, kappas for relative scoring do not show improvement over the previously used approach of absolute scoring of x .
we observe the opposite for the aspect of naturalness.
contrasting
train_22030
(2002), only focus on pairs of unigrams (single words).
the concept of semantic relatedness applies more generally to any unit of text.
contrasting
train_22031
For example, apples and bananas (co-hyponyms of fruit) are both edible, they grow on trees, they have seeds, etc.
semantically related concepts may not have many properties in common, but there exists some relationship between them which lends them the property of being semantically close.
contrasting
train_22032
on how to get food delivered, or on the risks of caffeine intake.
given how the narrative is constructed, we can intuit that the more likely goal of the narrator is to get advice on how to overcome the effects of sleep deprivation so that they can be alert for the upcoming programming lesson.
contrasting
train_22033
To find plausible alternative answer options for each candidate cloze test instance, one direct approach could be to find questions that are semantically related to the ground-truth question.
there are two underlying problems with this approach.
contrasting
train_22034
The Penn Discourse Treebank (PDTB) is one of the most well-known examples (Miltsakaki et al., 2004;Prasad et al., 2008).
although multimodal corpora increasingly include discourse relations between linguistic and nonlinguistic contributions, particularly for utterances and other events in dialogue (Cuayáhuitl et al., 2015;Hunter et al., 2015), to date there has existed no dataset describing the coherence of text-image presentations.
contrasting
train_22035
At each user turn, dialogue state tracking (DST) aims to estimate user's goal by processing the current utterance.
in many turns, users implicitly refer to the previous goal, necessitating the use of relevant dialogue history.
contrasting
train_22036
(2017) discussed similar limitations of the current DST task and introduced a new task, frame tracking, which explicitly tracks every slot-value that was introduced during the dialogue.
that significantly complicates the task by maintaining multiple redundant frames that are often left unreferenced.
contrasting
train_22037
The parameters added due to using relevant context are the parameters for encoding the antecedent referential user utterance and the previous system utterance as well as the past utterance and past slot-value scorers.
we also observe high variance in the joint goal accuracy.
contrasting
train_22038
We leverage these annotations to obtain the ground truth visual groundings (A_V) for the referents in our questions.
each of the caption and question templates has referring phrase annotations, thus giving the ground truth textual groundings (A_T).
contrasting
train_22039
In our second experiment, we train and test a classifier on the original Waseem dataset in 10-fold cross-validation.
we remove either of the two types of biased words from the dataset: (i) We remove 25 topic words from the 100 most highly correlated words that we thought bear no relation to abusive language (e.g.
contrasting
train_22040
It also provides a perfect solution to the problem of removing the gender direction from non-gendered words.
as we show in this work, while the gender-direction is a great indicator of bias, it is only an indicator and not the complete manifestation of this bias.
contrasting
train_22041
For example, in the binary gender case, the extremes of bias subspace reflect extreme male and female terms.
this is not possible when projecting multiple classes into a linear space.
contrasting
train_22042
In line with Gonen and Goldberg's (2019) findings, simply removing the bias component is insufficient to remove multiclass "cluster bias".
increasing the size of the bias subspace reduces the correlation of the two variables (Table 4 in the Appendix).
contrasting
train_22043
such lexica has emerged (Emerson and Declerck, 2014;Altrabsheh et al., 2017), borrowing ideas from crowdsourcing (Raykar et al., 2010;Hovy et al., 2013).
this is a non-trivial task, because lexica can use binary, categorical, or continuous scales to quantify polarity-in addition to different interpretations for each-and thus cannot easily be combined.
contrasting
train_22044
(2016) use distributions of parts-of-speech, sentiment, and exaggerations.
to these approaches, our model uses only word embeddings as input representations.
contrasting
train_22045
These datasets are frequently used for the task of predicting titles from abstracts or short stories.
no keyphrases are provided; they do not serve our purpose.
contrasting
train_22046
Recall that in the original Pyramid, SCUs are exhaustively collected; then, coreferring SCUs between reference summaries are merged and weighted by the number of reference summaries from which they originate.
our method enables using a sample of SCUs for evaluation, out of the SCUs collected in this phase (we have sampled, for uniformity, 32 SCUs per topic).
contrasting
train_22047
Then, nodes having the same label form a group of the partitioning.
to the local search, CW does not directly optimize the objective function proposed by , however, we empirically found that it yields partitionings that score very well with regard to that objective.
contrasting
train_22048
Our BiLSTM implementation yields lower performance than : our model achieves a score of 79.1, compared to their reported score of 80.3.
our BDH model yields a score of 80.8, already achieving state-of-the-art performance.
contrasting
train_22049
7 The original AllenNLP library uses a byte representation.
the multilingual fork assigns a unique character id to each unicode character, thereby avoiding the need to recognize multibyte representations.
contrasting
train_22050
Given a configuration with word w_i on top of the stack, as the pointer network just returns a position p from a given sentence, they proceed as follows to determine which transition should be applied: • If p ≠ i, then the pointed word w_p is considered as a child of w_i; so the parser chooses a Shift-Attach-p transition to move w_p from the buffer to the stack and build an arc w_i → w_p.
• if p = i, then w_i is considered to have found all its children, and a Reduce transition is applied to pop the stack.
contrasting
train_22051
These parsers are conceptually simple, not needing traditional parsing algorithms or auxiliary structures.
experiments on the PTB and a sample of UD treebanks show that they provide a good speed-accuracy tradeoff, with results competitive with more complex approaches.
contrasting
train_22052
This is the case of tasks such as PoS tagging, chunking or named-entity recognition, for which different approaches obtain accurate results (Brill, 1995;Ramshaw and Marcus, 1999;Reimers and Gurevych, 2017).
previous work on dependency parsing as sequence labeling is vague and reports results that are significantly lower than those provided by transition-, graph-based or sequence-to-sequence models (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017; Zhang et al., 2017a).
contrasting
train_22053
We tested four different encodings, training a standard BiLSTM-based architecture.
to previous work, our results on the PTB and a subset of UD treebanks show that this paradigm can obtain competitive results, despite not using any parsing algorithm nor external structures to parse sentences.
contrasting
train_22054
They are based on character-level language models which treat text as distributions over characters and are capable of generating embeddings for any string of characters within any textual context.
such purely character-based approaches struggle to produce meaningful embeddings if a rare string is used in an underspecified context.
contrasting
train_22055
Less pronounced impact on WNUT-17.
we also find no significant improvements on the WNUT-17 task on emerging entities.
contrasting
train_22056
Our pooling operation makes the conceptual simplification that all previous instances of a word are equally important.
we may find more recent mentions of a word -such as words within the same document or news cycle -to be more important for creating embeddings than mentions that belong to other documents or news cycles.
contrasting
train_22057
Supervised approaches to named entity recognition (NER) are largely developed based on the assumption that the training data is fully annotated with named entity information.
in practice, annotated data can often be imperfect, with one typical issue being that the training data may contain incomplete annotations.
contrasting
train_22058
The former approach may even lead to sub-entity level annotations (e.g., "radio" is annotated as part of an entity).
we argue such assumptions can be largely unrealistic.
contrasting
train_22059
In practice, their approach worked for the task of citation parsing, where the q distribution may not deviate much from the uniform distribution (Figure 3(c)).
in the task of NER, we find such a simple treatment to the q distribution often leads to sub-optimal results (as we can see in the experiments later) as the q distribution is highly skewed due to the large amount of O labels.
contrasting
train_22060
Since existing work explicitly exploited annotated trigger words in their approaches, they can directly model this observation.
in our case, annotated triggers are unavailable.
contrasting
train_22061
On one hand, v_att is calculated as the dot product of s_att and t_1, which is designed to capture local features (specifically, features about hidden trigger words).
the last output of the LSTM layer, h_n, encodes global information of the whole sentence, thus v_global = h_n · t_2^T is expected to capture global features of a sentence.
contrasting
train_22062
In this task, we focus on named entity recognition at discourse level (DiscNER).
to traditional sentence-level NER (SentNER), where sentences are processed independently, in DiscNER, long-range dependencies and constraints across sentences have a crucial role in the tagging process.
contrasting
train_22063
For benchmarks annotated with the knowledge base (KB) guided distant supervision, this assumption is often valid since all types are from KB ontologies and naturally follow tree-like structures.
since knowledge bases are inherently incomplete (Min et al., 2013), existing KBs only include a limited set of entity types.
contrasting
train_22064
More sophisticated features based on different kinds of similarity measures have also been considered, such as the surface similarity based on the Dice coefficient and the Wu-Palmer WordNet similarity between argument heads (McConky et al., 2012; Cybulska and Vossen, 2013; Krause et al., 2016).
these features are computed using either the outputs of event argument extractors and entity coreference resolvers (Ahn, 2006;Ng, 2014, 2015; or semantic parsers (Bejan and Harabagiu, 2014;Yang et al., 2015;Peng et al., 2016) and therefore suffer from serious error propagation issues (see Lu and Ng (2018)).
contrasting
train_22065
Several previous works proposed joint models to address this problem (Lee et al., 2012;, while others utilized iterative methods to propagate argument information Choubey and Huang, 2017) in order to alleviate this issue.
all of these methods still rely on argument extractors to identify arguments and their roles.
contrasting
train_22066
The event pair (m_1, m_2^a) has a pair of implicit compatible arguments in the REASON-argument role and is likely to be coreferent.
altering the target argument to contentious citizenship amendment bill (m_2^b) would yield a pair of implicit incompatible arguments, and the resulting event pair would become non-coreferent.
contrasting
train_22067
Ideally, in order to make the memory best represents a previous task, we hope to choose diverse samples that best approximate the distribution of task data.
distribution approximation itself is a hard problem and will be inefficient due to its combinatorial optimization nature.
contrasting
train_22068
These approaches could successfully prevent forgetting.
they do not suit many lifelong settings in NLP.
contrasting
train_22069
In OTyper (Yuan and Downey, 2018), each type is represented by averaging the embeddings of the words that constitute the type label.
ProtoLE (Ma et al., 2016) represents each type by a prototype that consists of manually selected entity mentions, where the type embedding is obtained by averaging the prototype mentions' word embeddings. In contrast, our work differs from OTyper and ProtoLE by constructing the type representations based on the Wikipedia descriptions of the types, which not only carry more information about the type but also can be easily adapted to other tasks such as event typing and text classification.
contrasting
train_22070
Short descriptions are less informative and carry less shared semantics with the type's mentions.
overly long descriptions could also be confusing as they might share a significant number of common words with the descriptions of other types.
contrasting
train_22071
However, its input is not the meaning vector but the form vector f. The motivator has the same architecture as the discriminator, and the same loss function.
while the adversarial loss forces the encoder E to produce a meaning vector m with no information about the form f, the motivational loss encourages E to encode this information in the form vector by minimizing L_M. The overall training procedure follows the methods for training GANs (Goodfellow et al., 2014; Arjovsky et al., 2017) and consists of two stages: training the discriminator D and the motivator M, and training the encoder E and the generator G. to Arjovsky et al.
contrasting
train_22072
If the true post-modifier, the one that is actually used in the context, is rated the highest compared to the rest, then we assume the post-modifier is indeed specific to the context.
if the crowd workers rate multiple other post-modifiers as good fits for the context, then the true post-modifier is not context specific.
contrasting
train_22073
People tend to prefer generated post-modifiers over the ones written by professional journalists when they are shorter and use more general terms without elaborating too much on the entity.
longer and more detailed human written post-modifiers are preferred when they are especially relevant to the rest of the sentence.
contrasting
train_22074
It continues by adding to this hypothesis the second word, small.
suppose it then extends this hypothesis with cat.
contrasting
train_22075
Given a perfect rewriter that always generates semantically equivalent paraphrases and a perfect NLI model robust to perturbations, we would expect no change in predictions between the original development set and the rewritten ones.
this is not what we observe; Table 4 shows that rewriting leads to a greater percentage of newly incorrect predictions than newly correct predictions.
contrasting
train_22076
(2016). It is shown that augmenting at evaluation time (aggregation by voting) results in a stable improvement (around +2~3% MAP and +2~6% MRR for both scenarios, whether or not the training data is augmented); this shows that increasing the paraphrastic diversity of the answer candidates could potentially make the system more robust.
augmenting the training set does not yield such improvements-we speculate that this may introduce some noise to the training data.
contrasting
train_22077
As for the crea., without any lexical constraint, the normal model always generates the sentences which are similar to the relatively high-frequency sentences in the corpus and results in the lack of novelty.
the target words are used in its less used literal senses in the uncommon sense model and the results seem to be kind of creative.
contrasting
train_22078
It is one of the key challenges in natural language understanding, and has drawn increasing attention in recent years (Levesque et al., 2011;Roemmele et al., 2011;Zhang et al., 2017;Rashkin et al., 2018a,b;Zellers et al., 2018;Trinh and Le, 2018).
due to the lack of labeled training data or comprehensive hand-crafted knowledge bases, commonsense reasoning tasks such as Winograd Schema Challenge (Levesque et al., 2011) are still far from being solved.
contrasting
train_22079
Because DAGs do not contain cycles, they must always have at least one root and one leaf, but they can have multiple roots and multiple leaves.
our results apply in different ways to single-rooted and multi-rooted DAG languages, so, given a label set Σ, we distinguish between the set of all connected DAGs with a single root, G^1_Σ; and those with one or more roots, G^*_Σ.
contrasting
train_22080
Standard word representation models are based on the distributional hypothesis (Harris, 1954) and induce representations from large unlabeled corpora using word co-occurrence statistics (Mikolov et al., 2013;Pennington et al., 2014;Levy and Goldberg, 2014).
as pointed out by recent work (Bojanowski et al., 2017;Vania and Lopez, 2017;Pinter et al., 2017;Chaudhary et al., 2018;Zhao et al., 2018), mapping a finite set of word types into corresponding word representations limits the capacity of these models to learn beyond distributional information, which leads to several fundamental limitations.
contrasting
train_22081
f_Θ is a composition function taking R_w as input and outputting a single vector w as the word embedding of w. For the distributional "word-level" training, similar to prior work (Bojanowski et al., 2017), we adopt the standard skip-gram with negative sampling (SGNS) (Mikolov et al., 2013) with bag-of-words contexts.
we note that other distributional models can also be used under the same framework.
contrasting
train_22082
(2017), and remains a strong baseline for many tasks.
addition treats each subword with the same importance, ignoring semantic composition and interactions among the word's constituent subwords.
contrasting
train_22083
This result is quite intuitive: sms is trained according to the readily available gold standard morphological segmentations.
sms is less useful for entity typing, where almost all best-performing configurations are based on morf (see also Figure 4).
contrasting
train_22084
(2017) proposes KGE-LDA to incorporate embeddings of KGs into topic models to extract better topic representations for documents and shows promising performance.
KGE-LDA forces words and entities to have identical latent representations, which is a rather restrictive assumption that prevents the topic model from recovering correct underlying latent structures of the data, especially in scenarios where only partial KGs are available.
contrasting
train_22085
For example, for the second column of 20 Newsgroups, topic words from both TMKGE and KGE-LDA are related to computers.
it can be noted that words from TMKGE focus more on the core words of computer science.
contrasting
train_22086
(2018) introduced a rank-based similarity measure for word embeddings, called APSynP, and demonstrated its efficacy on outlier detection tasks.
the results on the word-level similarity benchmarks were mixed, which, interestingly enough, could have been predicted in advance by our analysis.
contrasting
train_22087
We show that the addition of MIL regularizers for generating explanations using thresholded attention improved precision and recall of hypothesis explanations.
similar improvements were not realized for the premise sentence.
contrasting
train_22088
(3) On the more difficult WN18RR and FB15k-237, ConvR consistently outperforms most of the baselines, except for the MRR score of ConvKB on FB15k-237.
on WN18RR ConvR outperforms ConvKB on all known metrics, especially MRR.
contrasting
train_22089
After that, (Nguyen et al., 2018) propose ConvKB that explores the global relationships among same dimensional entries of the entity and relation embeddings.
neither of them models the interactions between various positions of entities and relations.
contrasting
train_22090
As shown in the above example, it is necessary to include temporal information in DS.
in the fusion module, most existing work focused on denoising using methods such as attention or reinforcement learning.
contrasting
train_22091
TempMEM also follows the encoding-fusion framework (Zeng et al., 2015;Lin et al., 2016;Luo et al., 2017).
we make two crucial modifications to the original framework.
contrasting
train_22092
For WIKI-TIME experiments, we construct a query over each time spot that appears in the mention set.
for NYT-10 experiments, we adopt a single query without temporal encoding to compare results with other baseline methods since the dataset only contains one label for each mention set.
contrasting
train_22093
This classifier is trained using a cross entropy loss with a training dataset whose examples are from seen classes only.
the zero-shot classifier is a binary classifier with a sigmoid output.
contrasting
train_22094
hand, the combination of [v_w; v_c], which included semantic embeddings of both words and the class label, clearly increased the accuracy of predicting unseen classes.
the zero-shot classifier fed by the combination of all three types of inputs achieved the highest accuracy in all settings.
contrasting
train_22095
Standard word embedding algorithms, such as word2vec and GloVe, make a restrictive assumption that words are likely to be semantically related only if they co-occur locally within a window of fixed size.
this restrictive assumption may not capture the semantic association between words that co-occur frequently but non-locally within documents.
contrasting
train_22096
The co-occurrences in both word2vec and GloVe are essentially local in nature.
our proposed algorithm leverages both local and non-local co-occurrences.
contrasting
train_22097
In our method, we use a similar graph-based construction to train vector representations of a node (each node is a word).
we use a stratified sampling approach within a maximum distance (hop count) of 2, instead of allowing the random walk to proceed in a combined depth-first and breadth-first manner, as in (Grover and Leskovec, 2016).
contrasting
train_22098
Among the baseline approaches, both node2vec and SGNS-LDA work well on the concept categorization task.
the performance improvements are inconsistent across datasets, e.g.
contrasting
train_22099
Similarly, it can be seen that the results tend to improve with higher values of β, which confirms that direct associations between words in the word-node graph are more important than transitive ones (2nd plot from the left and the rightmost plot of Figure 1).
second-order transitive associations are still important because the results tend to decrease for β close to 1.
contrasting