id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (string, 4 classes)
---|---|---|---|
train_92000 | Variational Auto-Encoder (VAE) (Kingma and Welling, 2013): The structure of the VAE model is similar to that of the Seq2Seq model, except that it has two independent linear layers followed by the encoder to calculate the mean and variance of the distribution of the latent variable separately. | the reward of the agent is: where r_w(c, r) is the reward with regard to the conversation c and its reference response r in the dataset. | neutral
train_92001 | Recently, the Pew Research Center (Center, 2017) reported that "roughly four-in-ten Americans have personally experienced online harassment, and 63% consider it a major problem." | as is to be expected with machine-generated text, in the other human evaluation we conducted, where Mechanical Turk workers were also presented with sampled human-written responses alongside the machine generated responses, the human-written responses were chosen as the most effective and diverse option a majority of the time (70% or more) for both datasets. | neutral |
train_92002 | Metrics Evaluating How Predictions are Made: We need to quantitatively evaluate the difference between how models and humans classify tweets. | for each model, we also report the percentage of tweets that has rationale 0 or 1, respectively. | neutral |
train_92003 | As λ increases from 0.0 to 1.0, the performance of identifying false and unverified rumors generally gains. | to show the effect of our customized graph convolution operation (Eq. | neutral
train_92004 | The contributions of this work are as follows. | when the supervision signal of stance classification becomes strong, the learned stance features can produce more accurate clues for predicting rumor veracity. | neutral |
train_92005 | When labeled data is insufficient, existing methods may not work well as expected. | • APJFNN (Qin et al., 2018): It learns a word-level semantic representation for both job requirements and resumes based on RNN and four hierarchical ability-aware attention strategies. | neutral |
train_92006 | Following previous studies (Vaswani et al., 2017), we set the hidden dimension of our model HGAT and other neural models to d = 512 and the dimension of pre-trained word embeddings to 100. | to address the problem, efforts have been made to enrich the semantics of short texts. | neutral |
train_92007 | In many practical scenarios, the labeled data is scarce, while human labeling is time-consuming and may require expert knowledge (Aggarwal and Zhai, 2012). | in this way, the sparsity of the short texts is alleviated. | neutral |
train_92008 | CNN: CNN (Kim, 2014) with 2 variants: 1) CNN-rand, whose word embeddings are randomly initialized, and 2) CNN-pretrain, whose word embeddings are pre-trained with Wikipedia Corpus. | these methods are not able to achieve good performance as the feature engineering step relies on domain knowledge. | neutral |
train_92009 | We use financial news (2009.10.19 to 2016.10.31) from Sohu 4 to predict the daily movement of Shanghai securities composite index. | with the argument that a piece of news often describes one or several events which could be represented as a collection of elements, this study extracts news elements as enti- ties, actions and nouns from news titles and texts, with respect to their tf-idf scores. | neutral |
train_92010 | A possible explanation is that the Paragraph Vector model assigns a distinct vector to each of the news articles, thus inconsistency between news in different news browsing sequences is easily created by the sequence-level fine-tuning. | for unsupervised learning, the autoencoder model is an early solution to compress text data (Vincent et al., 2010). | neutral |
train_92011 | Table 1 shows that the SEQ model outperforms the current SOTA system on all three text genres for binary CWI (statistically significant using McNemar's test, p=0.0016, χ²=9.95). | in order to rank substitutes, the simplicity and contextual semantic equivalence scores are combined to produce an overall suitability score. | neutral
train_92012 | Previous work has framed this task as ranking words according to simplicity (Paetzold and Specia, 2016a). | threshold-based filtering is performed by removing all substitutes that are unlikely to be grammatical or do not fit the target context. | neutral |
train_92013 | The gold standard substitution suggested by the human annotators in this context is "demarcated", whereas REC-LS substitutes this word with "split". | these embeddings are able to model complex syntactic and Oak is strong and also gives shade g.c. | neutral |
train_92014 | 1, the second news is more informative than the first news in modeling user preferences, since the first one is usually not audience sensitive. | denote the click probability score of the positive news asŷ + , and scores of the K negative news . | neutral |
train_92015 | This is because our approaches incorporate both word-and newslevel attention networks to simultaneously select important words and news for learning more informative news and user representations. | first, the characteristics of search queries, webpage titles and news have huge differences. | neutral |
train_92016 | 1, search queries are usually phrase pieces with a few words, while news and webpage titles are usually complete sentences. | first, we visualize the distributions of the view-level attention weights in our NRHUB approach, and the results are shown in fig. | neutral |
train_92017 | Different from these methods, our approach can learn unified user representations by incorporating heterogeneous user behaviors via an attentive multi-view learning framework. | evaluating the informativeness of different views via a view-level attention network can learn better user representations. | neutral |
train_92018 | This is probably because reviews usually contain rich clues on user preferences and item properties, which is useful for user and item representation learning for recommendation. | to solve this problem, in our approach we propose a hierarchical attentive graph neural network for recommendation which can model high-order user-item interactions. | neutral |
train_92019 | We conducted experiments on four widely used benchmark datasets in different domains and scales to validate the effectiveness of our approach. | it may be effective to recommend Item-2 to User-2. | neutral |
train_92020 | We use SenticNet (Cambria et al., 2018) to normalize these emotion words (W = {w_1, w_2, . | two events with similar word embeddings may have similar embeddings despite that they are quite unrelated, for example, as shown in Figure 1 (b), "PersonX broke record" and "PersonY broke vase". | neutral
train_92021 | According to (Hirschauer, 2010), coverage and divergence should be considered for the acceptance decision of a paper. | the deep and wide components jointly make the prediction. | neutral |
train_92022 | The EoG model outperforms all baselines for all pair types. | they allow mention-mention edges only if these mentions belong to the same entity and consider that a single mention pair exists in a sentence. | neutral |
train_92023 | We identify three frequent cases of errors, as shown in Table 6. | for the inter-sentence pairs, performance significantly drops with a fully connected graph (Full) or without inference (NoInf). | neutral |
train_92024 | Through ablation analysis, we observe that the memory buffer and the KG features contribute significantly to this performance gain. | by forming semantic abstractions for each highlighted text span, we decompose a single complex tagging task in a large label space into correlated but simpler sub-tasks, which are likely to generalize better when the training data is limited. | neutral |
train_92025 | Then we introduce scalable implementations of the Rectified Anchor Word (RAW) algorithm and various evaluation metrics, investigating the impact of each inference step from vocabulary curation to topic inference. | then the main learning task of topic models is to find the word-topic matrix B and topic-document matrix W that approximates H ≈ BW with the column-stochastic constraints B ∈ CS^{N×K}, W ∈ CS^{K×M}. | neutral
train_92026 | They consist of political blogs (Eisenstein et al., 2011) and business reviews (Lee and Mimno, 2014) with the smallest vocabulary (4.4k, 1.6k), respectively. | most topics involve Pop and Rock, emulating the overall genre distribution of the corpus as illustrated in Figure 5. | neutral |
train_92027 | While using the pivoted QR (Arora et al., 2013) 6 Avoiding negative entries is useful because RAW is not a purely algebraic algorithm but uses probabilistic conditions. | instead of directly decomposing H, JSMF decomposes smaller but aggregated statistics for revealing the latent topics and their correlations. | neutral |
train_92028 | Spectral methods explicitly construct word co-occurrence moments as statistically unbiased estimators, providing alternatives to the probabilistic algorithms via moment-matching. | figure 2 shows that running only 5 iterations of AP or DR sufficiently rectifies C, and 15-20 iterations yields almost identical results to 150 iterations. | neutral |
train_92029 | Given the binary nature of the classification problem we can use the cross entropy loss: where Φ is the set of all trainable weights in the model, N is the number of data samples, m_i is the number of sentences in the i-th data sample, p_{ij} is the predicted label of the j-th sentence in the i-th data sample, and y_{ij} is the corresponding ground truth label. | ally, in an ablation study we demonstrate that our various modeling choices, which tackle the inherent challenges of comment-edit understanding, each contribute positively to empirical results. | neutral
train_92030 | Such a system would necessarily first need to learn mappings between edits and natural language comments, before it could learn to generate them. | it becomes difficult to know which tasks have been completed and which haven't, especially if authors are not proactive about marking comments as addressed. | neutral
train_92031 | It is important to recall that the main goal of our work is to develop fast and efficient on-device neural text classification approach, which can achieve near state-of-the-art performance while satisfying the on-device small size and memory resource constrains. | baseline Comparison: We use the same baselines as described in Tang et al., 2015). | neutral |
train_92032 | By following (Kudo, 2018), we can find the most likely segmentation sequence t starting from all of the n-best segmentations t_1, ..., t_n of S(p) rather than from only t_1. | this metric remedies shortcomings of common QAC evaluation metrics, mean reciprocal rank (MRR) and partial-matching MRR (PMRR), which require sampling of a prefix length and are favorable to short queries. | neutral
train_92033 | Whereas computations run in parallel in GPU, the number of operations for the output layer in the language model is proportional to the vocabulary size in CPU. | to deal with issues coming from introducing subword language model, we develop a retrace algorithm and a reranking method by approximate marginalization. | neutral |
train_92034 | Query suggestion (Sordoni et al., 2015;Dehghani et al., 2017) and query reformulation (Nogueira and Cho, 2017) are related to QAC and well-established problems. | on the assumption that log p(t) log p(t ) for all t ∈ T B and t / ∈ T B , Equation (1) implies that marginalization over the final beam outputs can provide better approximation: reranking after summing out the probability of duplicates can give better ordered list of candidates. | neutral |
train_92035 | The higher the QPS, the better. | these limitations arouse to consider alternatives to represent a query in a shorter sequence. | neutral |
train_92036 | Although current retrace algorithm is implemented straightforwardly, it can be improved by merging beams efficiently. | sR models only obtain small improvement. | neutral |
train_92037 | To address the challenges of identifying rare and complex disease names, (Xu et al., 2019) proposed a method that incorporates both disease dictionary matching and a document-level attention mechanism into BiLSTM-CRF for disease NER. | in Figure 3, the sentences as S^C_s and S^C_t contain the word "cough", and h_{sm} and h_{tn} are the corresponding hidden states. | neutral
train_92038 | The results show that our proposed model with global attention significantly outperforms the Bi-LSTM CRFinference model for symptom inference across all the categories. | the results show that our proposed model with global attention significantly outperforms the Bi-LStM CRFinference model for symptom inference across all the categories. | neutral |
train_92039 | In Figure 3, the sentences as S^C_s and S^C_t contain the word "cough", and h_{sm} and h_{tn} are the corresponding hidden states. | the weight w_{i,j} ∈ (0, 1). | neutral
train_92040 | Interestingly, however, fine-tuning with the larger set of counterfactuals (CF loss) does not seem to help in rewriting endings that relate to the counterfactuals well. | rob ended up playing the best song of his life. | neutral |
train_92041 | Only the BERTScore metrics appear to positively correlate with human scores for counterfactual understanding, making them usable for evaluating generations across properties related to all three questions. | model Size and Pretraining Data We observe that models with more parameters are better at the counterfactual rewriting task than smaller models. | neutral |
train_92042 | Premise Ana had just had a baby girl. | crowdworkers were presented outputs of a pair of systems, and asked to choose which one is better, or "equally good" or "equally bad", in terms of each of the three criteria. | neutral |
train_92043 | The n-grams in the target text that are not part of the LCS are the phrases that would need to be included in the phrase vocabulary to be able to construct t from s. In practice, the phrase vocabulary is expected to consist of phrases that are frequently added to the target. | 3.2 using the validation set of 46K examples. | neutral |
train_92044 | Here we describe its main components: (1) the tagging operations, (2) how to convert plain-text training targets into a tagging format, as well as (3) the realization step to convert tags into the final output text. | for a batch size 8, LASERTAGGER AR is already 10x faster than comparable-in-accuracy SEQ2SEQ BERT baseline. | neutral |
train_92045 | With the two networks, we can factorize the generation probability P(C|T, B) as P(S|T, B) · P(C|S, T), where S = (s_1, . | in iR-T and iR-TC, we use three types of filters with window sizes 1, 3, and 5 in the CNN based matching model. | neutral
train_92046 | The number of samples in Monte Carlo sampling is 1. | the attention mechanism selects useful information in the title, and the gate mechanism further controls how much such information flows into the representation of the article. | neutral |
train_92047 | Supposing we have a smaller vocabulary, the document represented by this vocabulary is y = {y_i}_{i=1}^m, where m < n is the number of words and y_i ∈ R^e is the word embedding of the word at position i. | but it is not good at modeling semantics (Dieng et al., 2017) and may cause the latent variable collapse problem (bowman et al., 2016) which leads that the decoder ignores information from the inferred latent codes. | neutral
train_92048 | propose to adopt a Gaussian-based topic model which assumes each word is generated by a Gaussian distribution. | the i-th word generated by the decoder is where W_v ∈ R^{|V|×e} denotes word embedding. | neutral
train_92049 | Without bounding box regression, the Soft-Label Chain CRF model has an accuracy of 69.85%, a 4.84% reduction compared to the setting with bounding box regression. | overlapping regions can be penalized only after prediction, so this loss is not differentiable, and one has to resort to reinforcement learning. | neutral |
train_92050 | For each of the targeting pronoun, the workers are asked to select all the mentions that it refers to. | we align mentions in the dialogue with objects in the image and then jointly use the contextual and visual information for the final prediction. | neutral |
train_92051 | Then, we rank all segment predictions by the score ŝ: A straightforward solution is selecting the segment with the highest score as the final prediction. | 2) The positive and negative training samples are extremely imbalanced. | neutral
train_92052 | The results are reported in Table 1. | we set the ground truth confidence of each frame based on its "centerness" in the segment. | neutral |
train_92053 | Table 1 presents some typical examples of our corrections. | the details are presented as follows. | neutral |
train_92054 | Note that, our proposed framework is general and fits most of, if not all, NER models that accept weighted training data. | we follow the standard train/dev/test splits and use both the train set and dev set for training (Peters et al., 2017;Akbik et al., 2018). | neutral |
train_92055 | ", this "Chicago", representing the NBA team Chicago Bulls, should be annotated as an organization. | another usage of CrossWeigh is to identify potential label mistakes during label annotation process, thus improving the annotation quality. | neutral |
train_92056 | More importantly, we propose a simple yet effective framework, CrossWeigh, to handle label mistakes during NER model training. | the interannotator agreement is 95.66%. | neutral |
train_92057 | high recall), while keeping away from wrongly identified non-mistake sentences (i.e. | no matter "Chicago" is LOC or ORG, it only counts as its surface name "Chicago". | neutral |
train_92058 | When k becomes larger, each fold D i will be smaller, thus leading to a smaller size of test_entities i ; correspondingly, a larger train_set i will be picked. | when annotators are not careful or lack background knowledge, this "Chicago" might be annotated as a location, thus being a label mistake. | neutral |
train_92059 | The Test Performance compares the performance of the NER models trained with these annotations. | for brevity of space, we compare results for two languages in figure 3: 3 Dutch, a relative of English, and Hindi, a distant language. | neutral |
train_92060 | When we count the number of entities present in the data selected by the two strategies, we see in Figure 4(b) that data selected by ETAL has a significantly larger number of entities than SAL, across all the 6 annotation experiments. | the reason for this thresholding is purely for computational purposes as it allows us to discard all spans that have a very low probability of being an entity, keeping the number of spans actually stored in memory low. | neutral |
train_92061 | The details of the NER model hyper-parameters can be found in the Appendix. | the performance of these models is highly dependent on the availability of large amounts of annotated data, and as a result their accuracy is significantly lower on languages that have fewer resources than English. | neutral |
train_92062 | Moreover, this strategy minimizes annotator effort by requiring them to label fewer tokens than the full-sequence annotation. | for SAL we see that the annotator missed a likely entity because they focused on the other more salient entities in the sequence. | neutral |
train_92063 | The improvements are robust and significant on both tasks, both metrics, and on all depths. | the transformer is convolutional on all length k of n-grams; the same parameters are used model the interactions between n-grams at each length, to reduce the parameter size. | neutral |
train_92064 | Further, we set the ijk element of the tensor W ∈ R^{n_e×n_r×n_e} to 1 if the fact (e_s, r, e_o) holds and -1 otherwise. | typically, P, Q, R are smaller than I, J, K respectively, so Z can be thought of as a compressed version of X. | neutral
train_92065 | As described in the methodology section, the label remapping function f_L we use, is a one-to-one mapping between the labels of the original task and the adversarial task. | white-box based reprogramming outperforms the black-box based approach in all of our experiments. | neutral
train_92066 | i) Reuters21578: A dataset consisting of 10788 news documents from 90 different categories; ii) 20N ewsgroups: A collection of 18828 newsgroup posts that are divided into 20 different newsgroups; iii) T M C: A dataset containing the air traffic reports provided by NASA, which includes 21519 training documents with 22 labels. | despite of superior performances, only the simplest priors are used in these models, i.e. | neutral |
train_92067 | Given a document x, its hashing code can be obtained through two steps: 1) mapping x to its latent representation by z = µ φ (x), where the µ φ (x) is the encoder mean µ φ (•); 2) thresholding z into binary form. | it is shown that the endto-end training brings a remarkable performance improvement over the two-stage training method in VDSH. | neutral |
train_92068 | We can see that the retrieval precisions of the proposed models, especially the BMSH, are quite robust to this parameter. | it is widely known that priors play an important role on the performance of generative models (Goyal et al., 2017;Chen et al., 2016;Jiang et al., 2016). | neutral |
train_92069 | The greedy search algorithm is also used in the graph construction for candidates collecting. | later a theoretical solution for MIPS by searching on the Delaunay Graphs will be summarized. | neutral |
train_92070 | By Definition 2.2, a data point is isolated (i.e., have no incident edges) if its Voronoi cell is empty. | at beginning of the graph construction, relatively "super" points are not true extreme points. | neutral |
train_92071 | For embedding based methods (word2vec, GloVe, D-embedding 7 , and MWE 8 ), we follow previous work (Mikolov et al., 2013;Levy and Goldberg, 2014) and use the cosine similarity between embeddings of head and tail words to predict their relations. | • GloVe (Pennington et al., 2014), learning word embeddings by matrix decomposition on word co-occurrences. | neutral |
train_92072 | Several studies attempted to acquire SP automatically from raw corpora (Resnik, 1997;Rooth et al., 1999;Erk et al., 2010;Santus et al., 2017). | assuming that there are 20 relations among words, the size of all embedding models with the increasing word numbers in Figure 4. word2Vec and GloVe have the smallest size because they only train one embedding for each word while sacrificing the relation-dependent information among them. | neutral |
train_92073 | end end end end drifting range a, whose value controls the distance between the local relational embeddings and the center embedding. | in this paper, we propose a multiplex word embedding (MWE) model, which can be easily extended to various relations between two words. | neutral |
train_92074 | Deep neural language models with a large number of parameters have achieved great successes in facilitating a wide range of Natural Language Processing (NLP) applications. | we call each group a codebook, and each d m dimensional vector in the codebook a codeword. | neutral |
train_92075 | In particular, quantization (Lin et al., 2016;Hubara et al., 2016) and groupwise low rank approximation (Chen et al., 2018a) achieve the state-of-the-art performance on tasks like language modeling and machine translation. | we refer to our proposed method as MulCode (Mul stands for both multi-way and multiplicative composition). | neutral |
train_92076 | pose quantitative methods for evaluating bias in word embeddings of gendered languages and bilingual word embeddings that align a gendered language with English. | for example, if "doctor" (male doctor) leans more toward to masculine (i.e., close to masculine definition words like "el" (he) and far away figure 2: Projections of occupation words on the semantic gender direction in two gender forms (blue for male and red for female) in Spanish, the original English word (grey), and the bias-mitigated version of the English word (black with "*"). | neutral |
train_92077 | English) only has the former direction. | of this bias, the embeddings may cause undesired consequences in the resulting models (Zhao et al., 2018a;Font and Costa-jussà, 2019). | neutral |
train_92078 | Some failure cases in the 25 examples occur for pairs where the set of translations of y e contains an incorrect translation which is totally unrelated to x e or y e . | recent work on semantic relation prediction largely focuses on a single relation between words in the same language (mostly English) (Nastase et al., 2013;Vulić and Mrkšić, 2018;Glavaš and Ponzetto, 2017;Ono et al., 2015). | neutral |
train_92079 | While the contextualized representations in the BERT hidden layer vectors are features that determine the word sense, some features are more useful than the others. | section 2 presents related work. | neutral |
train_92080 | End-to-end neural machine translation (NMT) (Sutskever et al., 2014;Bahdanau et al., 2015) learns to generate an output sequence given an input sequence, using an encoder-decoder model. | on average across the test sets, their approach did not outperform SVM with word embedding features. | neutral |
train_92081 | We attempt to apply DynSP * (trained on SQA) directly on FollowUp test set, which results in an extremely low AnsAcc. | the main idea is to decompose the task into two phases by introducing a learnable intermediate structure span: two queries first get split into several spans, and then undergo the recombination process. | neutral |
train_92082 | For example, givenz as "where are the players from",w could be "Las Vegas". | structured Query Language) and retrieves the answer from databases regardless of its context. | neutral |
train_92083 | To this end, we design a two-phase process and present a novel approach STAR to perform follow-up query analysis with reinforcement learning. | we propose to initialize SplitNet via pre-training, and then use reward to optimize it. | neutral |
train_92084 | Given x, y, during training of Phase I (in blue), we fix P_rec to provide the reward R(q, z), then P_split can be learnt by the REINFORCE algorithm. | let f_j = softmax(A_{:,j}), where f_j ∈ R^n denotes the attention weights on x according to y_j. | neutral
train_92085 | Besides, we use 3 layers BiLSTM with 400-dimensional hidden states, applying dropout with an 80% keep probability between time-steps and layers. | our baseline is a modification to the model of which uniformly handled the predicate disambiguation. | neutral |
train_92086 | According to our statistics of syntactic rule on training data for seven languages, which is based on the automatically predicted parse provided by CoNLL-2009 shared task, the total number of distance tuples in syntactic rule is no more than 120 in these languages except that Japanese is about 260. | based on gold syntax, the top-1 argument pruning for Catalan and Spanish has reached 100 percent coverage (namely, for Catalan and Spanish, all arguments are the children of predicates in gold dependency syntax tree), and hence our syntax-aware model obtains significant gains of 1.43% and 1.35%, respectively. | neutral |
train_92087 | BERT is also a transformer encoder that has access to the entire input but this choice requires a special training regime. | examples in BooksCorpus are a mix of individual sentences and paragraphs; examples are on average 36 tokens. | neutral |
train_92088 | Finally, we include the BERT results from Yang et al. | increasing the number of Bi-LSTM layers can sometimes even hurt, as shown in Figure 2b. | neutral |
train_92089 | Inspired by the correlation, researchers try to improve SRL performance by exploring various ways to integrate syntactic knowledge (Roth and Lapata, 2016;He et al., 2018b;. | our experimental result boosts to 88.5 F1 score when the framework is enhanced with BERT representations. | neutral |
train_92090 | First, a set of phrase alignments A is obtained for s, t Figure 2: Our method injects semantic relations to sentence representations through paraphrase discrimination. | it is also obvious that the three-way classification of phrasal paraphrases, on which the model discriminates paraphrases, random combinations of phrases from a random pair of sentences, and random combinations of phrases in a paraphrasal sentence pair, is superior to binary classification. | neutral |
train_92091 | Results indicate that sentential and phrasal paraphrase classification complementarily contributes to Simple Transfer Fine-Tuning modeling. | our method learns to discriminate phrasal and sentential paraphrases on top of the representations generated by BERT. | neutral |
train_92092 | (2018), we canonicalize strings in the target cell and truncate them to a maximum of 120 code tokens. | automatically making these decisions conditioned on prior history is one of the main challenges of this task. | neutral |
train_92093 | Table 2 presents dataset statistics for JuICe. | we tackle the task of general purpose code generation in an interactive setting, using an entire sequence of prior NL and code blocks as context. | neutral |
train_92094 | Although there has been work on describing a SQL query with an NL statement (Koutrika et al., 2010;Ngonga Ngomo et al., 2013;Iyer et al., 2016;Xu et al., 2018), few work studies generating questions about a certain SQL component in a systematic way. | the transition can be deterministic or stochastic. | neutral |
train_92095 | Intuitively if the base semantic parser gives a low probability to the top prediction at a step, it is likely uncertain about the prediction. | for SQLNet, we also compare our system with the reported performance of DialSQL (Gur et al., 2018, Table 4). | neutral |
train_92096 | Similarly, (Yao et al., 2019) relies on a pre-defined two-level hierarchy among components in an If-Then program and cannot generalize to formal languages with a deeper structure. | a more direct comparison of various settings under the same average number of questions can be found in appendix C. To better understand how each kind of error detectors works, we investigate the portion of questions that each detector spends on right predictions (denoted as "Q r "). | neutral |
train_92097 | When the agent interacts with users, the maximum number of alternative options (in addition to the original prediction) per component, K, is set to 3. | for SQLNet, MISP-SQL outperforms the DialSQL system with only half the number of questions (1.104 vs. 2.4), and has a much simpler design without the need of training an extra model (besides training the base parser, which DialSQL needs to do as well). | neutral |
train_92098 | 1 In order to distinguish the edge direction, we add a direction symbol to each label with ↑ for climbing up along the path, and ↓ for going down. | unsurprisingly, this will end up with a large number of features. | neutral |
train_92099 | We calculate F 1 scores based on the partial match instead of exact match. | • A p is used to denote that the current word is part of a sentiment span with polarity p, but appears after the target word or exactly as the last word of the target. | neutral |
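The rows above are raw examples from the train split. Below is a minimal sketch of how one might load and inspect the split with the Hugging Face `datasets` library; the identifier `org/dataset-name` is a placeholder, not the real Hub path for this dataset.

```python
# Minimal sketch: load the train split and inspect the four columns
# shown in the table above. "org/dataset-name" is a placeholder identifier.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("org/dataset-name", split="train")

# Peek at a few examples; each is a dict with id, sentence1, sentence2, label.
for example in ds.select(range(3)):
    print(example["id"], "|", example["label"])
    print("  s1:", example["sentence1"][:80])
    print("  s2:", example["sentence2"][:80])

# Distribution over the 4 label classes.
print(Counter(ds["label"]))
```

All rows shown in this preview carry the `neutral` label; the counter at the end is a quick way to check how the remaining three classes are distributed across the full split.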