id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values) |
---|---|---|---|
train_98400 | Given sentence vector s i as input, SUMO computes: r i , e ij = TMT(r i ,ẽ ij ) (11) Iterative Structure Refinement SUMO essentially reduces summarization to a rooted-tree parsing problem. | the Road Home, the Louisiana grant program for homeowners who lost their houses to hurricanes Katrina and Rita, is expected to cost far more than the $7.5 billion provided by the Federal Government, in part because many more families have applied than officials had anticipated. | neutral |
train_98401 | At the same time, they are noisy and may not reflect the desired phenomenon. | one aspect of controllability is intention; our model produces contrastive claims without understanding the view of the original claim. | neutral |
train_98402 | Results are presented in Table 4. | when we combine this model with constrained decoding, the improvement is smaller than for the other settings. | neutral |
train_98403 | We include the baseline, the baseline with constrained decoding, and the best constrained model ("COUNT + SUB + COPY") according to BLEU and partial match. | addressing point 2 is more difficult due to the variety of possible substitutions, including named entities. | neutral |
train_98404 | Because of these two restrictions, ONTONOTES only has 195K markables, and a low markable density (0.12 markable/token). | we use PD silver to train a coreference system able to simultaneously identify non-referring expression and build coreference chains (including singletons). | neutral |
train_98405 | Empirical study of argumentation requires examples drawn from authentic, human-authored text. | we have used our method to quickly and cheaply produce a large, argument-annotated data set of product reviews, which we freely release, along with the source code to our annotation interface and processing tools. | neutral |
train_98406 | Human dialogs are like well-structured buildings, with words as the bricks, sentences as the floors, and topic transitions as the stairs connecting the whole building. | 7(a) generated by HMM, even if we set 10 states in the HMM, some states are still collapsed by the model because they share a similar surface form. | neutral |
train_98407 | The dialog system is controlled by a fixed structure and hand-set probabilities. | we also proposed a way to incorporate the learned dialog structure information into a downstream dialog system building task. | neutral |
train_98408 | Caselli and Vossen (2017) showed that only 117 annotated causal relations in this dataset are indicated by explicit causal cue phrases while the others are implicit. | understanding causal relations between events in a document is an important step in text understanding and is beneficial to various NLP applications, such as information extraction, question answering and text summarization. | neutral |
train_98409 | (2018) proposed to model the usefulness of reviews using review-level attention to enhance the learning of user and item representations. | for example, in the sentence "The laptop I bought yesterday is too heavy", the word "heavy" is more informative than the word "yesterday" in representing this laptop. | neutral |
train_98410 | For example, if a user frequently mentions "the price is too high" and "very expensive" in his/her reviews for different items, then we can infer this user may be sensitive to price. | in addition, we incorporate a three-tier attention network into our model to select and attend to informative words, sentences and reviews to learn more accurate representations of users and items. | neutral |
train_98411 | We propose an alternative approach that is based on matrix norms and which proved to be more noise-robust by focusing primarily on high word similarities. | the most popular method to come up with word vectors is Word2Vec, which is based on a 3 layer neural network architecture in which the word vectors are obtained as the weights of the hidden layer. | neutral |
train_98412 | We experimented with different keyword list sizes but obtained the best results with rather few and therefore precise keywords. | it is quite striking that, although sn 1 lacks two properties of a normalized similarity measure (boundedness by 1 and symmetry), it reaches quite good results on contest 1. | neutral |
train_98413 | Symmetry directly follows, if we can show that Z = Z for arbitrary matrices Z, since with this property we have Let M and N be arbitrary matrices such that MN and NM are both defined and quadratic, then (see (Chatelin, 1993)) where ρ(X) denotes the largest absolute eigenvalue of a squared matrix X. | this measure was evaluated on the task to assign users to the best matching marketing target groups. | neutral |
train_98414 | The evaluation showed that the inter-annotator agreement values vary strongly for contest 2 part 2 (minimum average annotator agreement according to Cohen's kappa of 0.03 while the maximum is 0.149, see Table 4). | several of our matrix norm based similarity estimates focus primarily on strongly related word pairs and are therefore less vulnerable to noise. | neutral |
train_98415 | Our complete pipeline comprises the following steps: 1. | to further explore this issue of whether normalization thus formed classification model has any benefit over GCN and GAt, we experiment with the Cora and Citeseer scientific document classification tasks (as reported in (Kipf and Welling, 2016) table 3: Summary of the fine-grained F 1 @15 on Se-mEval. | neutral |
train_98416 | Our approach is an alternative to this technique, providing an approximate max-violating inference with respect to AP for a more general case of the ranking represenation. | set, plotting the learning curves over the training epochs, T . | neutral |
train_98417 | In their case, this is due to the impossibility of the global inference (their model is also augmented with the structural component describing item-item interactions) and the scale of the task (they do ranking for recommendation domain). | weston and Blitzer (2012) bypass the necessity of comparison to a complete ranking during training and sample the candidate pairs. | neutral |
train_98418 | 1) of the two ranking outputs, the current ground truth r * i and the max-violatingr i . | we start with the last N th position of the rank and put there the minimum weighted item: According to the decomposition in Eq. | neutral |
train_98419 | The max number of epochs, T , is set to 100, for both LSP and LSP-AP. | the exact algorithm for max-violating inference of Yue et al. | neutral |
train_98420 | (Mohammad et al., 2016) and related deep learning based methods including MITRE (SemEval-2016 best performing system) (Zarrella and Marsh, 2016), n-grams+embeddings (Mohammad et al., 2017), TGMN-CR (Wei et al., 2018b), T-PAN (Dey et al., 2018), AS-biGRU-CNN (Zhou et al., 2017), and TAN (Du et al., 2017). | most of the related work of tweet stance detection explored the traditional deep learning models in their methods. | neutral |
train_98421 | By contrast, the WE WPI system specifically examines the difference between the word positions of the translation and reference, not the difference of lengths between the translation and reference. | the P and Q consist of some P i and Q j , which are the respective signatures. | neutral |
train_98422 | The induced embeddings are evaluated in three tasks: bilingual lexicon induction (BLI), multilingual dependency parsing, and multilingual document classification. | furthermore, empirically prove that the method typically fails for distant language pairs such as English-finnish. | neutral |
train_98423 | Training stops when the perplexity on the development set has not improved for 20 checkpoints (1000 updates/batches per checkpoint). | nMT Setup Our nMT models are developed in Sockeye 5 (Hieber et al., 2017). | neutral |
train_98424 | This is not surprising, as when there is enough in-domain data, continued training on only the in-domain data can already achieve a pretty good performance, and we do not need to use extra unlabeled-domain data to augment it any more, neither does curriculum learning. | the probabilistic curriculum (Bengio et al., 2009) works by dividing the training procedure into distinct phases. | neutral |
train_98425 | In this work we present two primary methods of synthesizing natural noise, in accordance with the types of noise identified in prior work as naturally occurring in internet and social media based text (Eisenstein, 2013;Michel and Neubig, 2018). | similarly for emoticons, we randomly select an emoticon and insert it on both sides. | neutral |
train_98426 | These represent small nuances which the model learns to capture with increasing supervision. | if we further fine-tune the model using only 10k MTNT data, we note that the model still struggles with generation of *very*. | neutral |
train_98427 | For our baseline model we use the standard Transformer Base model (Vaswani et al., 2017). | we propose a novel n-gram level retrieval approach that relies on local phrase level similarities, allowing us to retrieve neighbors that are useful for translation even when overall sentence similarity is low. | neutral |
train_98428 | In the context of Neural Machine Translation (NMT) this results in poor performance on heterogeneous datasets and on sub-tasks like rare phrase translation. | we use a Transformer based decoder that attends to both, the encoder outputs and the CSTM, in every cross-attention layer. | neutral |
train_98429 | A machine reading comprehension model is designed to predict the start and end positions in the article of the answer. | their method relies heavily on the multi-branch structures of the tasks, which are not widely applicable in neural machine translation. | neutral |
train_98430 | Recent work tackled the training problem using variants of reinforcement learning (RL) (Suhr and Artzi, 2018;Liang et al., 2018) or maximum marginal likelihood (MML) (Guu et al., 2017;Goldman et al., 2018). | improving beam search has been investigated by proposing specialized objectives (Wiseman and Rush, 2016), stopping criteria , and using continuous relaxations (Goyal et al., 2018). | neutral |
train_98431 | Many VQA datasets have been created. | the split 'testA' contains object categories sampled randomly to be close to the original data distribution, while 'testB' contains objects sampled from the most frequent object categories, excluding categories such as 'sky', 'sand', 'floor', etc. | neutral |
train_98432 | The best model is selected based on the validation loss after training for 50 epochs. | initial datasets, e.g., VQAv1 (Antol et al., 2015) and COCO-QA (Ren et al., 2015a), exhibited significant language bias in which many questions could be answered correctly without looking at the image, e.g., for VQAv1 it was possible to achieve 50% accuracy using language alone (Kafle and Kanan, 2016). | neutral |
train_98433 | We pre-train a neural network model to accurately understand the advice. | as there are now significantly more regions. | neutral |
train_98434 | model would have made a prediction ('x') close to the true block (square). | in the first step, the model from Section 2.5 self-generates restrictive advice based on the most confident predicted region, which it uses as input in the end-to-end model. | neutral |
train_98435 | We gratefully acknowledge the donation of a GPU from the NVIDIA Grant Program. | we use a sequence-to-sequence model with an attention mechanism to map a textual description of a node to its path representation. | neutral |
train_98436 | Furthermore, the proposed model is more interpretable for two reasons. | we use an LSTM (Hochreiter and Schmidhuber, 1997) encoder to project the textual description into a vector space and an LSTM decoder to predict the sequence of entities that are relevant to this definition. | neutral |
train_98437 | We evaluate our ablation baselines on IQUAD V1 and EQA, reporting top-1 QA accuracy (Table 4) given gold standard navigation information as V. These decoupled QA models do not take in a previous action, so we do not consider A ONLY ablations for this task. | the agent sees more of the scene, but can take more training iterations to learn to move to the goal. | neutral |
train_98438 | Furthermore, if we add LSTM span predictors along with the video LSTM (ExCL 2-{b, c}) we obtain an additional boost in performance. | we verify empirically that our method significantly outperforms prior work on two benchmark datasets -TACoS, ActivityNet and comparably well on the third, Charades-STA. | neutral |
train_98439 | unimodal visual model for German verb sense disambiguation, but we find the opposite for Spanish unimodal verb sense disambiguation. | table 4 shows the results of the translation experiment. | neutral |
train_98440 | There are two existing multimodal translation evaluation sets with ambiguous words: the Ambiguous COCO dataset (Elliott et al., 2017) contains sentences that are "possibly ambiguous", and the Multimodal Lexical Translation dataset is restricted to predicting single words instead of full sentences (Lala and Specia, 2018). | when visual information is added to textual features, models in both the languages predict the correct label. | neutral |
train_98441 | For word-level tagging, we use a hierarchical bidirectional LSTM (BiLSTM) that incorporates both token-and character-level information (Plank et al., 2016), similar to the winning system (Samih et al., 2016) of the Second Code-Switching Shared Task (Molina et al., 2016). | we replace usernames with @username in order to preserve privacy. | neutral |
train_98442 | In experiments on a new Spanish-Wixarika dataset and on an adapted German-Turkish dataset, our proposed model performs slightly better than or roughly on par with our best baseline, respectively. | intra-word CS was not handled explicitly, and often systems even failed to correctly assign the mixed label. | neutral |
train_98443 | For freer word order languages such as Polish or Latin, we observe a substantial drop in performance because most information on inter-word relations and their roles (expressed by means of case system) is lost. | there, precision@10 is on average 8 points higher than precision@1. | neutral |
train_98444 | We also observe that most uncertainty comes from morphological categories such as noun number, noun definiteness (which is expressed morphologically in Bulgarian), and verb tense, all of which are inherent (Booij, 1996) 8 and typically cannot be predicted from sentential context if they do not participate in agreement. | we also observe that most uncertainty comes from morphological categories such as noun number, noun definiteness (which is expressed morphologically in Bulgarian), and verb tense, all of which are inherent (Booij, 1996) 8 and typically cannot be predicted from sentential context if they do not participate in agreement. | neutral |
train_98445 | This is within expectations, because with only L N M T , weights related to NSDs are kept to the initial values and were not updated, and hence detrimental to learning. | although using a shorter sequence may improve the efficiency, some syntactic information is lost. | neutral |
train_98446 | In SS and DSS, the probability of using reference words s is annealed using inverse sigmoid decay : s = k/(k + exp(i/k)) at the i-th checkpoint with k = 10. | we train the models using the Adam optimizer (Kingma and Ba, 2015) with a batch size of 1024 words. | neutral |
train_98447 | (2018) proposed to complete a predicted prefix with all possible reference suffixes and picking the reference suffix that yields the highest BLEU-1 score. | the schedule sampling hypothesis uses a mixture of the reference (black) and sampled (blue underlined) words, while the entire hypothesis sequence is sampled in our approach. | neutral |
train_98448 | This mimics the gradual annealing of the online curriculum, so one possibility is that the agent is simply choosing the cleanest bin whenever it can, and its good performance comes from the enforced period of exploration. | figures 4, 5 and 6 show coarse representations of the policies learned by the Q-learning agent on the Paracrawl and WMT English-french datasets. | neutral |
train_98449 | The former, by design, telescopes towards the clean bins. | the fact that the agent beats the fixed -schedule (see Table 1) described above on both corpora makes this unlikely. | neutral |
train_98450 | At a high level, the regularization term keeps parameters which are important to general-domain performance close to the initial general-domain model values during continued training, while allowing parameters less important to general-domain performance to adapt more aggressively to the in-domain data. | the first, multi-objective fine-tuning, which they denote MCL, trains the network with a joint objective of standard loglikelihood loss plus a second term based on knowledge distillation (Hinton et al., 2015;Kim and Rush, 2016) of the general-domain model. | neutral |
train_98451 | Continued training is an effective method for domain adaptation in neural machine translation. | (2017) found that mixing general-domain data with the in-domain data used for continued training improved generaldomain performance of the resulting models, at the expense of training time. | neutral |
train_98452 | In all these cases, the meaning of the word stays the same, despite the change in context. | we also find systematic deviations. | neutral |
train_98453 | Our unsupervised approach (LP) is almost on a par with the most complex SVM. | their constituents are often not semantically related. | neutral |
train_98454 | We observe that when enough data is available (e.g. | accuracy) that we intend to optimize cannot be observed. | neutral |
train_98455 | The joint loss is the Figure 1: High-level overview of our proposed TL architecture. | uLMFiT trains a LM and fine-tunes it to the target dataset, before transferring it to a classification model. | neutral |
train_98456 | An auxiliary task (next sentence prediction) is used to enhance the representations of the LM. | a comparison of the P-LM + aux model and the P-LM model shows that the performance of SiATL on classification tasks is improved by the auxiliary objective. | neutral |
train_98457 | We hypothesize that this is due to two reasons. | we tokenize with the NLTK's TweetTokenizer (Bird, 2006), lowercase all text, and use regular expressions to remove stop words, numbers, urls, consecutive repeated words and Twitter users (i.e., tokens whose first character is '@'). | neutral |
train_98458 | Table 1 shows major hyperparameters. | multi-task learning and personalization did not contribute to the improvement of these auxiliary tasks. | neutral |
train_98459 | An arguably more promising direction is to focus on fact-checking entire news outlets, which can be done in advance. | using multi-task ordinal regression is novel for these tasks, and it is also an under-explored direction in machine learning in general. | neutral |
train_98460 | The input sentence w 1:n is encoded as a one-hot vector, v (total occurrence weighting scheme). | the amount of available fan fiction for this saga allows to create a large corpus. | neutral |
train_98461 | For instance, ACL reviews tend to contain more propositions than those in ML venues, especially with more requests but fewer facts. | among the exactly matched proposition segments, we report a Cohen's κ of 0.64. | neutral |
train_98462 | We draw our corpus from the public-domain texts on Project Gutenberg, selecting individual works of fiction (both novels and short stories) that include a mix of high literary style (e.g., Edith Wharton's Age of Innocence, James Joyce's Ulysses) and popular pulp fiction (e.g., H. Rider Haggard's King Solomon's Mines, Horatio Alger's Ragged Dick). | we expand the criteria for PER to include such characters who engage in dialogue or have reported internal monologue, regardless of their human status (this includes depicted non-human life forms in science fiction, such as aliens and robots, as well). | neutral |
train_98463 | This alone-that cross-domain performance can be so strikingly worse-is a significant result, providing the first estimate of how performance degrades across these domains for this task. | geo-political entities are single units that contain a population, government, physical location, and political boundaries (LDC, 2005). | neutral |
train_98464 | At the end of annotating, the inter-annotator agreement was calculated by double-annotating the same five texts and measuring the F1 score. | to existing datasets built primarily on news (focused on geopolitical entities and organizations), literary texts offer strikingly different distributions of entity categories, with much stronger emphasis on people and description of settings. | neutral |
train_98465 | Previous research on automated abusive language detection in Twitter has shown that communitybased profiling of users is a promising technique for this task. | once the model is trained, we extract 200-dimensional embeddings E = A F W (1) from the first layer (i.e., the layer's output without activation). | neutral |
train_98466 | (2017) who found reducing finegranularity of senses beneficial to some settings. | (2016) that the reverse dictionary system performs best with the bag-of-words (BoW) input encoding and the ranking loss. | neutral |
train_98467 | (2016) which leverages a standard neural architecture in order to map dictionary definitions to representations of the words defined by those definitions. | in this paper, we provide an analysis to highlight the importance of addressing the meaning conflation deficiency. | neutral |
train_98468 | It is widely acknowledged that sense distinctions in WordNet inventory are too fine-grained for most NLP applications (Hovy et al., 2013). | the sense representations used in our experiments (DeConf) were constructed by exploiting the knowledge encoded in WordNet. | neutral |
train_98469 | Alternatively, senses can be automatically induced in an unsupervised manner by analyzing the diversity of contexts in which a word appears (Schütze, 1998;Reisinger and Mooney, 2010;Huang et al., 2012;Neelakantan et al., 2014;Guo et al., 2014;Šuster et al., 2016). | in our experiments we leveraged DeConf (Pilehvar and Collier, 2016). | neutral |
train_98470 | Section 4), we suspect that there is an intrinsic limit to how well generating from an AMR graph can replicate the reference realisation. | given the inherent difficulty of predicting a single syntax realisation (cf. | neutral |
train_98471 | OS: The Open-Sesame (Swayamdipta et al., 2017) classifier, pre-trained on the FN corpus (release 1.7). | as an example usage of our corpus, we used it to evaluate these frame disambiguation models: 1. | neutral |
train_98472 | A Ambiguity Examples in the Corpus # SENTENCE SQS FRAMES (F SS) 1 These writings lack the mystical, philosophical elements of alchemy, but do contain the works of Bolus of Mendes (or Pseudo-Democritus), which aligned these recipes with theoretical knowledge of astrology and the classical elements. | this test shows what is the best possible performance over our corpus that can be expected from a system such as OS that selects a single frame per sentence. | neutral |
train_98473 | The triple captures how much interest a user puts on a document given a query. | 2017, we use TransE as a strong baseline model for the search personalization task. | neutral |
train_98474 | These two datasets are created to avoid reversible relation problems, thus the prediction task becomes more realistic and hence more challenging (Toutanova and Chen, 2015). | our MRR and Hits@1 scores are higher than those of TransE (with relative improvements of 14.5% and 22% over TransE, respectively). | neutral |
train_98475 | This is because annotating complex structures typically require certain expertise, and smaller tasks are often easier (Fernandes and Brefeld, 2011). | the results shown here implies that the information theoretical benefit of partialness can possibly offset its disadvantages for learning. | neutral |
train_98476 | This scenario has caused a challenge called multi-level answer ranking (Liu et al., 2018). | latent representation models aim to jointly learn lexical and semantic information from QA sentences and influence the vector generation directly, e.g., attention mechanism (Bahdanau et al., 2015). | neutral |
train_98477 | Note that Model 8 leverages additional signals, including URL information, character-level encodings, and external term features such as tf-idf. | because our model yields different numbers of h i with queries of different lengths, further aggregation is needed to output a global feature v. We directly average all vectors v = 1 Nq h i as the aggregated feature, where N q is the length of the query. | neutral |
train_98478 | The relationship between the PTDL curves in the exclusive evaluation shows that CNN-BiLSTM is actually the optimal tagging architecture for a brief window, overtaking CRF around 30K tokens and staying in front of BiLSTM-CRF until about 125K tokens. | active Learning active learning seeks to maximize the performance of a model while minimizing the manual annotation required to train it. | neutral |
train_98479 | In practice, probability vectors of LMs tend to be sparse (Kim et al., 2016). | a commonly discussed drawback of such LM-based text generation is exposure bias (Ranzato et al., 2015): during training, the model predicts the next token conditioned on the ground truth history, while at test time prediction is based on predicted tokens, causing a train-test mismatch. | neutral |
train_98480 | In IWGAN, a character level language model was developed based on adversarial training of a generator and a discriminator. | the output locus of the soft-GAN decoder would be two red line segments as depicted in Figure 2 (Right panel) instead of two points (in the one-hot case). | neutral |
train_98481 | Text generation systems often generate their output from an intermediate semantic representation (Yao et al., 2012;Takase et al., 2016). | for such cases, machine learning techniques emulate human cognition and learn from training examples to predict future events. | neutral |
train_98482 | For such cases, machine learning techniques emulate human linguistics and learn from training examples to predict future events. | we obtain an 11.6 BLEU improvement through semi-supervised training using the output of a grammar-based parser, compared to training on gold data only. | neutral |
train_98483 | Further examples of the diversity of generation are given in the appendix. | due to language variability, the same plan entity may appear in several forms in the textual sentences. | neutral |
train_98484 | (2018b) propose a framework for fine tuning using policy gradients and perform a human evaluation showing promising results. | we train the model using maximum likelihood before fine tuning. | neutral |
train_98485 | Generated questions should be formed of language that is both fluent and relevant to the context and answer. | the generator is rewarded for successfully fooling the discriminator. | neutral |
train_98486 | We collapse co-referential entities into a single node associated with the longest mention (on the assumption that these will be the most informative). | to determine the shortcomings of our model, we calculate rough error statistics over the outputs of the GraphWriter on the test set. | neutral |
train_98487 | Our dataset consists of 40k paper titles and abstracts from the Semantic Scholar Corpus taken from the proceedings of 12 top AI conferences (Ammar et al., 2018). | they do not directly model the graph structure, relying on linearization and sequence encoding instead. | neutral |
train_98488 | This is likely caused by the lower quality KE model due to the larger vocabulary in AmazonQA. | we carefully created 51 test questions which were used to filter out untrusted judgments and workers. | neutral |
train_98489 | Table 1 shows the distribution of the various types of training instances. | it may not include information about the menu/times for the breakfast, credentials for the wifi, or the cancellation policy for a spa appointment at the hotel. | neutral |
train_98490 | We start off by testing our stronger encoder (i.e., ELMo) in absence of edges connecting mentions in the supporting documents (i.e., us- Table 3: Ablation study on WIKIHOP validation set. | for example, in figure 1, our model would be aware that "Stockholm" and "Sweden" appear in the same document but any context words, including the ones encoding relations (e.g., "is the capital of") will be hidden. | neutral |
train_98491 | BERT-MRC has almost no improvement on restaurant, which indicates Wikipedia may have no knowledge about aspects of restaurant. | previous study (Xu et al., 2018a) has shown that incorporating domain word embeddings greatly improve the performance. | neutral |
train_98492 | As a compromise, many online businesses leverage community question-answering (CQA) (McAuley and Yang, 2016) to crowdsource answers from existing customers. | we expand the vocabulary of the embedding layer from the pre-trained model on ReviewRC since reviews may have words that are rare in wikipedia and keep other hyper-parameters as their defaults. | neutral |
train_98493 | In the exemplary question " Who wrote the book The Pillars of the Earth? | for example, falcon correctly annotates question from LC-QuAD: 'Name the military unit whose garrison is Arlington County, Virginia and command structure is United States Department of Defense' where expected entities are dbr:Arlington_County,_Virginia and dbr:United_States_Department_of_ Defense. | neutral |
train_98494 | The reason behind the use of elastic search is its effectiveness over indexed KGs as reported by Dubey et al. | they mostly fail in case of short text (e.g. | neutral |
train_98495 | This is mainly due to the fact that the rationales are not complete programs and fail to explicitly describe all important numbers and operations required to solve the problem. | the base model is referred to as "Seq2prog," while our model with categorization is "Seq2prog + cat." | neutral |
train_98496 | The scale and diversity of this dataset makes it particularly suited for use in training deeplearning models to solve word problems. | using this representation language, our new dataset, MathQA, significantly enhances the AQuA dataset with fully-specified operational programs. | neutral |
train_98497 | These changes are not always reflected in the rationales, leading to incorrect solutions. | there is a significant amount of unwanted noise in the dataset, including problems with incorrect solutions, problems that are unsolvable without brute-force enumeration of solutions, and rationales that contain few or none of the steps required to solve the corresponding problem. | neutral |
train_98498 | We estimate artifacts by training the QANet model described in Section 5.2 on a version of DROP where either the question or the paragraph input representation vectors are zeroed out (question-only and paragraph-only, respectively). | at test time, we first determine this answer type greedily and then get the best answer from the selected type. | neutral |
train_98499 | For example, the word "AlElm" 1 can accept many possible core-word diacritics depending on the intended meaning 1 In this paper, we use Buckwalter transliteration. | we approached the task as a sequence-to-sequence (seq2seq) problem ; taking advantage of the recent advancements in Neural Machine Translation (NMT) (Britz et al., 2017;Kuchaiev et al., 2018) among other applications where seq2seq models made a breakthrough (Yu et al., 2016;Witten et al., 2016;Abadi et al., 2016). | neutral |
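The rows above are pipe-separated records following the four-column schema in the header. As a minimal sketch of consuming them programmatically, the snippet below parses one of the rows quoted above into a typed record. The `Example` dataclass and `parse_row` helper are illustrative names introduced here for the example, not part of any official loader for this dataset.

```python
from dataclasses import dataclass


@dataclass
class Example:
    # Field names mirror the column schema in the header above.
    id: str         # stringlengths 7-12
    sentence1: str  # stringlengths 6-1.27k
    sentence2: str  # stringlengths 6-926
    label: str      # stringclasses, 4 values


def parse_row(line: str) -> Example:
    # Cells are separated by " | "; strip the trailing " |" first.
    cells = [c.strip() for c in line.rstrip(" |").split(" | ")]
    if len(cells) != 4:
        raise ValueError(f"expected 4 columns, got {len(cells)}: {cells}")
    return Example(*cells)


# One of the rows shown in the table above (train_98402).
row = ("train_98402 | Results are presented in Table 4. | "
       "when we combine this model with constrained decoding, the "
       "improvement is smaller than for the other settings. | neutral |")
example = parse_row(row)
print(example.id, "->", example.label)  # train_98402 -> neutral
```

Note that this split-on-pipe approach assumes, as in all rows shown here, that the sentence cells themselves contain no `" | "` sequence.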