id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (4 classes)
---|---|---|---|
train_4900 | (2017), who use Nyström methods to compact the TK representation into embedding vectors and use the latter to train feed-forward NNs. | we present a simpler approach, where NNs learn syntactic properties directly from data. | contrasting |
train_4901 | Indeed, the latter only exploit lexical similarity measures, which are typically also generated by NNs. | even if our conjecture were wrong, the bottom line would be that, thanks to our approach, we can have NN models comparable to TK-based approaches, while also avoiding the use of syntactic parsing and expensive TK processing at deployment time. | contrasting |
train_4902 | This is trivial for statistical machine translation (Koehn, 2009) because there is no overlap between the translation units of a hypothesis, i.e., we have a 0-1 coverage vector. | it is not the case for NMT where the coverage is modeled in a soft way. | contrasting |
train_4903 | This approach was developed based on the idea that the training cost is a useful measure to determine the translation quality of a sentence. | some of the sentences that can be potentially improved by training may be deleted using this method. | contrasting |
train_4904 | As a solution, various studies proposed segmenting words into sub-word units and performing translation at the sub-lexical level. | statistical word segmentation methods have recently been shown to be prone to morphological errors, which can lead to inaccurate translations. | contrasting |
train_4905 | In fact, the second approach is now prominent and has become an established pre-processing step for constructing a vocabulary of sub-word units before training the NMT model. | several studies have shown that segmenting words into sub-word units without preserving morpheme boundaries can lead to loss of semantic and syntactic information and, thus, inaccurate translations (Niehues et al., 2016; Ataman et al., 2017; Pinnis et al., 2017; Huck et al., 2017; Tamchyna et al., 2017). | contrasting |
train_4906 | Note that in this case there is now only one embedding vector (of dimension 512 in our experiments) for each speaker. | the resulting domain embeddings are non-trivial to interpret (i.e. | contrasting |
train_4907 | We hypothesize that an NMT ensemble would be strengthened if its component models were complementary in this way. | ensembling often requires component models to make predictions relating to the same output sequence position at each time step. | contrasting |
train_4908 | Previous research has used very large batches to improve training convergence while requiring fewer model updates (Smith et al., 2017;Neishi et al., 2017). | with such large batches the model size may exceed available GPU memory. | contrasting |
train_4909 | It is also possible to constrain decoding of linearized trees and derivations to wellformed outputs. | we found that this gives little improvement in BLEU over unconstrained decoding although it remains an interesting line of research. | contrasting |
train_4910 | Furthermore, artificially generated partial feedback does not contain noise, given that the reference translation is adequate. | users may make mistakes in selection. | contrasting |
train_4911 | The previous work uses the bag-of-words to constrain the latent variable, and the latent variable is the output of the encoder. | we use the bag-of-words to supervise the distribution of the generated words, which is the output of the decoder. | contrasting |
train_4912 | Compared with the previous work, our method directly supervises the predicted distribution to improve the whole model, including the encoder, the decoder and the output layer. | the previous work only supervises the output of the encoder, and only the encoder is trained. | contrasting |
train_4913 | The beam search finds good candidate translations by considering multiple hypotheses of translations simultaneously. | as the algorithm searches in a monotonic left-to-right order, a hypothesis can not be revisited once it is discarded. | contrasting |
train_4914 | In this section, we introduce an extended decoding algorithm of the beam search, which maintains a single priority queue that contains all visited hypotheses. | to the standard beam search, which only considers hypotheses with the same length in each step, the proposed algorithm selects arbitrary hypotheses from the queue that may differ in length. | contrasting |
train_4915 | Finally, similarly to the second variant, the last variant, Pseudofit fus-*, adds a supplementary representation of the target word. | this representation is not an additional pseudo-sense but an aggregation of its already existing pseudo-senses, which can be viewed as another global representation of the target word. | contrasting |
train_4916 | On EVAL, Hearst-pattern based methods get penalized by OOV words, due to the large number of verbs and adjectives in the dataset, which are not captured by our patterns. | in 7 of the 9 datasets, at least one of the sparse models outperforms all distributional measures, showing that Hearst patterns can provide strong performance on large corpora. | contrasting |
train_4917 | Thus the alignment model can be considered as an implicit part of the translation model. | separating the alignment model from the lexicon model has its own advantages: First of all, this leads to more flexibility in modeling and training: The models can not only be trained separately, but they can also have different model types, such as neural models, count-based models, etc. | contrasting |
train_4918 | Neural coreference is promising since it allows cross-lingual transfer using multilingual embedding. | most of the recent neural coreference models (Wiseman et al., 2015, 2016; Clark and Manning, 2015, 2016; Lee et al., 2017) have focused on training and testing on the same language. | contrasting |
train_4919 | The tasks of EL and coreference are intrinsically related, prompting joint models (Durrett and Klein, 2014; Hajishirzi et al., 2013). | the recent SOTA was obtained using pipeline models of coreference and EL (Sil et al., 2018). | contrasting |
train_4920 | For instance, according to the ACE-2005 annotation guideline, in the sentence "Jane and John are married", an ED system should be able to identify the word "married" as a trigger of the event "Marry". | it may be difficult to identify events from isolated sentences, because the same event trigger might represent different event types in different contexts. | contrasting |
train_4921 | If we only examine the first sentence, it is hard to determine whether the trigger "leave" indicates a "Transport" event meaning that he wants to leave the current place, or an "End-Position" event indicating that he will stop working for his current organization. | if we can capture the contextual information of this sentence, we can more confidently label "leave" as the trigger of an "End-Position" event. | contrasting |
train_4922 | The tasks that dialogue systems are trying to solve are becoming increasingly complex, requiring scalability to multi-domain, semantically rich dialogues. | most current approaches have difficulty scaling up with domains because of the dependency of the model parameters on the dialogue ontology. | contrasting |
train_4923 | The information about semantic similarity is held by d_usr and d_sys, which are fed to a non-linear layer to output a binary decision: where w_d ∈ R^{2L} and b_d are learnable parameters that map the semantic similarity to a belief state probability P_t(d) of a domain d at a turn t. Slots and values are tracked using a similar architecture as for domain tracking (Figure 1). | to correctly model the context of the system-user dialogue at each turn, three different cases are considered when computing the similarity vectors: 1. | contrasting |
train_4924 | The reason may be twofold: Firstly, as discussed in previous works (Joty et al., 2013), it is important to address discourse structure characteristics, e.g., through modeling lexical chains in a discourse, for discourse parsing, especially in dealing with long span scenarios. | most existing approaches mainly focus on studying the semantic and syntactic aspects of EDU pairs, in a more local view. | contrasting |
train_4925 | Here, lexical cohesion reflects the semantic relationship of words, and can be modeled as the recurrence of words, synonyms and contextual words. | previous works do not well model the discourse cohesion within the discourse parsing task, or do not even take this issue into account. | contrasting |
train_4926 | The main reason might be that MST-full follows a global graph-based dependency parsing framework, where their high order methods (in cubic time complexity) can directly analyze the relationship between any EDU pairs in the discourse, while we choose the transition-based local method with linear time complexity, which can only investigate the top EDUs in S and B according to the selected actions, and thus usually has a lower performance than the global graph-based methods, but with a lower (linear) time complexity. | the neural network components help us maintain much fewer features than MST-full, which carefully selects 6 different sets of features that are usually obtained using extra tools and resources. | contrasting |
train_4927 | We follow the criterion of Polanyi (1988) and Irmer (2011) which treats clauses as EDUs. | since a discourse unit is a semantic concept but a clause is defined syntactically, in some cases segmentation by clauses is still not the most proper strategy. | contrasting |
train_4928 | Thus, the success of such systems relies entirely on the ability of the map to make the predicted vectors similar to the target vectors in terms of semantic or neighborhood structure. | whether neural nets achieve this goal in general has not been investigated yet. | contrasting |
train_4929 | Well-known theoretical work shows that networks with as few as one hidden layer are able to approximate any function (Hornik et al., 1989). | this result does not reveal much about either test performance or the semantic structure of the mapped vectors. | contrasting |
train_4930 | Many recent state-of-the-art models for constituency parsing are transition-based, decomposing production of each parse tree into a sequence of action decisions (Cross and Huang, 2016; Liu and Zhang, 2017), building on a long line of work in transition-based parsing (Nivre, 2003; Yamada and Matsumoto, 2003; Henderson, 2004; Zhang and Clark, 2011; Chen and Manning, 2014; Andor et al., 2016; Kiperwasser and Goldberg, 2016). | models of this type, which decompose structure prediction into sequential decisions, can be prone to two issues (Ranzato et al., 2016; Wiseman and Rush, 2016). | contrasting |
train_4931 | negative labeled F1, for trees), we use the risk objective, which measures the model's expected cost over possible outputs y for each of the training examples. Minimizing a risk objective has a long history in structured prediction (Povey and Woodland, 2002; Smith and Eisner, 2006; Li and Eisner, 2009; Gimpel and Smith, 2010) but often relies on the cost function decomposing according to the output structure. | we can avoid any restrictions on the cost using reinforcement learning-style approaches (Xu et al., 2016; Shen et al., 2016; Edunov et al., 2017) where cost is ascribed to the entire output structure, albeit at the expense of introducing a potentially difficult credit assignment problem. | contrasting |
train_4932 | For example, the oracle for RNNG lacks F1 optimality guarantees, and softmax margin without exploration often underperforms likelihood for this parser. | exploration improves softmax margin training across all parsers and conditions. | contrasting |
train_4933 | (2017a) counts the incorrect labels (i, j, X) in the predicted tree: Note that X can be null ∅, and t * (i,j) denotes the gold label for span (i, j), which could also be ∅. | 6 there are two cases where t * (i,j) = ∅: a subspan (i, j) due to binarization (e.g., a span combining the first two subtrees in a ternary branching node), or an invalid span in t that crosses a gold span in t * . | contrasting |
train_4934 | Alphabets (e.g., the Latin, Cyrillic, and Greek scripts) are the most common and treat vowel and consonant letters equally. | abjads (e.g., the Arabic and Hebrew scripts) do not write most vowels explicitly. | contrasting |
train_4935 | Traditional input methods for abugidas are similar to those for alphabets, mapping two or three different symbols onto each key and requiring users to type each character and diacritic exactly. | we are able to substantially simplify inputting abugidas by encoding them in a lossy (or "fuzzy") way. | contrasting |
train_4936 | Because they are the essence of the entire source paper, which can directly reflect the quality of the source paper. | the methods module of the source paper has little effect on the probability of being accepted according to Table 4. | contrasting |
train_4937 | Therefore, we combine HISK and BOSWE in the dual (kernel) form, by simply summing up the two corresponding kernel matrices. | summing up kernel matrices is equivalent to feature vector concatenation in the primal Hilbert space. | contrasting |
train_4938 | Predicting how Congressional legislators will vote is important for understanding their past and future behavior. | previous work on roll-call prediction has been limited to single session settings, thus did not consider generalization across sessions. | contrasting |
train_4939 | In addition to enabling prediction, associating text with ideology allows for a further degree of interpretability. | all previous work incorporating text into roll-call prediction has limited its evaluation to in-session training and testing. | contrasting |
train_4940 | As legislators typically serve for multiple sessions, and similar bills are proposed across sessions, we want to be able to leverage this data across sessions to inform our model. | the generalizability of previous methods to a cross-session setting is unknown. | contrasting |
train_4941 | Contrary to our hypothesis, MWE achieves higher accuracy than Meta-Only. | it remains unclear whether this signal is related to ideology or other contextual information. | contrasting |
train_4942 | #NAME? | these benchmarks were mostly derived independently of any NLP problems. | contrasting |
train_4943 | With the unigram distribution, for any training word, all the other vocabulary words can be selected as noise words because of their non-zero frequency. | with the bigram distribution, some vocabulary words may never co-occur with a given training word, which makes them impossible to be selected for this training word. | contrasting |
train_4944 | In the literature, much more attention has been paid to studies on what is said. | recently, capturing how it is said, such as stylistic variations, has also proven to be useful for natural language processing tasks such as classification, analysis, and generation (Pavlick and Tetreault, 2016;Wang et al., 2017). | contrasting |
train_4945 | These results indicate that the proposed CBOW-SEP-CTX model jointly learns two different types of lexical similarities, i.e., the stylistic and syntactic/semantic similarities in the different parts of the vectors. | our stylistic vector also captured the topic similarity, such as "サンタ (Santa Claus)" and "トナカイ (reindeer)" (the fourth row of Table 2). | contrasting |
train_4946 | The main idea behind these works is to develop neural architectures that are able to learn continuous features and capture the intricate relation between a target and context words. | to sufficiently train these models, substantial aspect-level annotated data is required, which is expensive to obtain in practice. | contrasting |
train_4947 | To some extent, humor reflects a kind of intelligence. | from both theoretical and computational perspectives, it is hard for computers to build a mechanism for understanding humor like human beings. | contrasting |
train_4948 | The most frequent discourse relations in humorous data include Condition, Background and Contrast. | non-humorous texts contain Same-Unit and Attribution more. | contrasting |
train_4949 | The proposed approach is close to (Liu et al., 2015), where only the annotated data for aspect extraction is used. | we will show that our approach is more effective even compared with baselines using additional supervisions and/or resources. | contrasting |
train_4950 | The proposed embedding mechanism is related to cross-domain embeddings (Bollegala et al., 2015, 2017) and domain-specific embeddings (Xu et al., 2018a,b). | we require that the domain of the domain embeddings exactly match the domain of the aspect extraction task. | contrasting |
train_4951 | In Section 5 we refer to some alternatives and show that they do not achieve better results than the one presented above. | we do not claim that our blending method is the only option or even the best one. | contrasting |
train_4952 | The main target in this work is to investigate the effect of audio, visual, and text modalities, and different fusion methods in personality recognition, rather than proposing the method with the best accuracy. | we still repeat the accuracy of the reported methods in Table 2 and two winners of the ChaLearn 2016 competition DCC (Güçlütürk et al., 2016) and evolgen (Subramaniam et al., 2016) in Table 3. | contrasting |
train_4953 | It is likely that the limited backpropagation method learns something similar to a linear combination of channels, just like the decision-level method. | the full backpropagation method yields significantly higher results for all traits except Agreeableness. | contrasting |
train_4954 | As an example of incorrect output, the model fails to assign a high score to (prince, royalty), possibly due to the usage patterns of these words being different in context. | it assigns an unexpectedly high score to (kid, parent), likely due to the high distributional similarity of these words. | contrasting |
train_4955 | As the vocabulary is finite, it is possible to evaluate the uncertainty measures for all possible inputs to synthesize the most uncertain query. | such a greedy policy is expensive and prone to selecting outliers. | contrasting |
train_4956 | Neural Machine Translation (NMT) has shown remarkable progress in recent years. | it requires large amounts of bilingual data to learn a translation model with reasonable quality (Koehn and Knowles, 2017). | contrasting |
train_4957 | encoder, decoder or the attention mechanism. | this approach has two limitations: (i) it fully shares the components, and (ii) the shared component(s) are shared among all of the tasks. | contrasting |
train_4958 | With the development of several multilingual datasets used for semantic parsing, recent research efforts have looked into the problem of learning semantic parsers in a multilingual setup (Duong et al., 2017; Susanto and Lu, 2017a). | how to improve the performance of a monolingual semantic parser for a specific language by leveraging data annotated in different languages remains a research question that is under-explored. | contrasting |
train_4959 | Such a model allows two types of input signals: single-source SL-SINGLE and multi-source SL-MULTI. | semantic parsing with cross-lingual features has not been explored, while many recent works in various NLP tasks show the effectiveness of shared information across different languages. | contrasting |
train_4960 | For example, the two semantic units STATE: smallest one(density(STATE)) and STATE: smallest one(population(STATE)) share similar representations. | we also found that occasionally semantic units conveying opposite meanings are also grouped together. | contrasting |
train_4961 | When this news headline is fed into modern tools for Named Entity Disambiguation (NED), virtually all of them would map the mention Schumacher onto the former Formula One champion Michael Schumacher, as the best-fitting entity from a Wikipedia-centric knowledge base (KB). | knowing that Sunday refers to August 14, 1949, i.e., ignoring the surface form but exploiting normalized information, it becomes clear that the text actually refers to the German politician Kurt Schumacher. | contrasting |
train_4962 | As shown in Table 4, Attr2Seq tends to cover more aspects in generation, many of which are not discussed in real reviews. | expansionNet better captures the distribution of aspects that are discussed in real reviews. | contrasting |
train_4963 | (2017) and approaching their best RDF-aware method. | manual inspection reveal many cases of unwanted behaviors in the resulting outputs: (1) many resulting sentences are unsupported by the input: they contain correct facts about relevant entities, but these facts were not mentioned in the input sentence; (2) some facts are repeated-the same fact is mentioned in multiple output sentences; and (3) some facts are missingmentioned in the input but omitted in the output. | contrasting |
train_4964 | While the set of complex sentences is still divided roughly to 80%/10%/10% as in the original split, now there are nearly no simple sentences in We believe this split strikes a good balance between challenge and feasibility: to succeed, a model needs to learn to identify relations in the complex sentence, link them to their arguments, and produce a rephrasing of them. | it is not required to generalize to unseen relations. | contrasting |
train_4965 | In fact, we can also leverage this abstraction to visualize the simplified LSTM's weights as is commonly done with attention (see Appendix A for visualization). | there are three major differences in how the weights w_{tj} are computed. | contrasting |
train_4966 | In particular, real-time counting languages cut across the traditional Chomsky hierarchy: real-time k-counter machines can recognize at least one context-free language (a^n b^n), and at least one context-sensitive one (a^n b^n c^n). | they cannot recognize the context-free language given by the grammar S → x|aSa|bSb (palindromes). | contrasting |
train_4967 | SRNN The finite-precision SRNN cannot designate unbounded counting dimensions. | the SRNN update equation is h^t = tanh(W x^t + U h^{t-1} + b). By properly setting U and W, one can get certain dimensions of h to update according to the value of x, but this counting behavior is within a tanh activation. | contrasting |
train_4968 | Here, again, the two counting dimensions are clearly identified, indicating the LSTM learned the canonical 2-counter solution, although the slightly-imprecise counting also starts to show. | figures 1c and 1d show the state values of the GRU-networks. | contrasting |
train_4969 | Previous approaches to machine comprehension are usually based on pairwise sequence matching, where either the passage is matched against the sequence that concatenates both the question and a candidate answer (Yin et al., 2016), or the passage is matched against the question alone followed by a second step of selecting an answer using the matching result of the first step (Lai et al., 2017; Zhou et al., 2018). | these approaches may not be suitable for multi-choice reading comprehension since questions and answers are often equally important. | contrasting |
train_4970 | Matching the passage only against the question may not be meaningful and may lead to loss of information from the original passage, as we can see from the first example question in Figure 1. | concatenating the question and the answer into a single sequence for matching may not work, either, due to the loss of interaction information between a question and an answer. | contrasting |
train_4971 | We can see that the performance of our model on different types of questions in the RACE dataset is quite similar. | our model is only based on word-level matching and may not have the ability of reasoning. | contrasting |
train_4972 | Congratulations (bow-the-knee emoticon)" Comparing Default with LogReg and LinSVM, we can see that the linear models performed better than the default RNN model without pretraining, when the labeled data size is less than or equal to 20K. | looking at the results of Dial, our method improved Default even for these cases (5K to 20K), and Dial clearly outperformed the linear models. | contrasting |
train_4973 | (S1) @Anonymous doing a great job... #not What do I pay my extortionate council taxes for? | #Disgrace #Ongo-ingProblem http://t.co/FQZUUwKSoN the reliability of the self-labeled data is an important issue. | contrasting |
train_4974 | d (k+1) is the representation of document s (k+1) using Self-Attentive Encoder,û is the output words after the second-pass decoder. | to the original Deliberation Network (Xia et al., 2017), where they propose a complex joint learning framework using Monte Carlo Method, we minimize the following loss as Xiong et al. | contrasting |
train_4975 | BLEU measures n-gram overlap between a generated response and a gold response. | since there is only one reference for each response and there may exist multiple feasible responses, BLEU scores are extremely low. | contrasting |
train_4976 | The original system wrongly classified the user intention as shopping since this is a common conversational pattern in shopping. | our utterance rewriter is able to recover the omitted information "under the weather in Beijing". | contrasting |
train_4977 | This has been often attributed to the manner and extent to which these models use the dialog history when generating responses. | there has been little empirical investigation to validate these speculations. | contrasting |
train_4978 | Sequence-to-sequence models (Sutskever et al., 2014) have become one of the most popular approaches to dialog systems, as they provide a high degree of automation and flexibility. | they are known to suffer from the "dull-response" problem (Li et al., 2015). | contrasting |
train_4979 | On one hand, the highest frequency of source-side copy helps address sparsity and results in the highest precision and recall. | we see space for improvement, especially on the relatively low recall of target-side copy, which is probably due to its low frequency. | contrasting |
train_4980 | It is demonstrated that the annotation of only preterminal categories is sufficient to adapt a CCG parser to new domains. | the solution is limited to a specific parser's architecture, making it non-trivial to apply the method to the current state-of-the-art parsers (Yoshikawa et al., 2017; Stanojević and Steedman, 2019), which require full parse annotation. | contrasting |
train_4981 | We relax these assumptions by using a dependency tree, which is a simpler representation of the syntactic structure, i.e., it lacks information about long-range dependencies and conjunct spans of a coordination structure. | due to its simplicity and flexibility, it is easier to train an annotator, and there exist plenty of accessible dependency-based resources, which we exploit in this work. | contrasting |
train_4982 | We report labeled bracket F1 scores between the resulting trees and the gold trees in the true Switchboard corpus, using the EVALB script. | the reported scores suffer from the compound effect of failures in CCG parsing as well as those that occurred in the conversion to the constituency trees. | contrasting |
train_4983 | Figure 3 is one of such cases, which the plain depccg falsely analyzes as one huge NP phrase. | after fine-tuning, it successfully produces the correct "If S1 and S2, S3" structure, recognizing that the equal sign is a predicate. | contrasting |
train_4984 | In the previous experiments, we rely purely on unsupervised NMT for pivot translation, assuming that the translation on each hop cannot leverage any bilingual sentence pairs. | there indeed exist plenty of bilingual sentence pairs between some languages, especially among the popular languages of the world, such as the official languages of the United Nations and the European Union. | contrasting |
train_4985 | The final loss function for the mapping matrix combines two terms. L_{W|D} enables the model to leverage the distributional information available from the two embedding spaces, thereby using all available monolingual data. | L_{W|S} allows for the correct alignment of labeled pairs when available in the form of a small seed dictionary. | contrasting |
train_4986 | This baseline does not make use of various advances in NMT architectures and training tricks. | to the baseline, we use a BiDeep RNN architecture (Miceli Barone et al., 2017), label smoothing (Szegedy et al., 2016), dropout (Srivastava et al., 2014), word dropout (Sennrich et al., 2016a), layer normalization (Ba et al., 2016) and tied embeddings (Press and Wolf, 2017). | contrasting |
train_4987 | The IT task is very small: training on IT data alone results in over-fitting, with a 17 BLEU improvement under fine-tuning. | no-reg fine-tuning rapidly forgets previous tasks. | contrasting |
train_4988 | Large KBs such as DBpedia (Auer et al., 2007), Wikidata (Vrandecic and Krötzsch, 2014) and Yago (Suchanek et al., 2007) contain millions of facts about entities, which are represented in the form of subject-predicate-object triples. | these KBs are far from complete and mandate continuous enrichment and curation. | contrasting |
train_4989 | The encoder-decoder with attention model (Bahdanau et al., 2015) has been used in machine translation. | in the relation extraction task, the attention model cannot capture multi-word entity names. | contrasting |
train_4990 | For example, the first bin in the left subplot shows that discovered topic 1 has 91 unique words, all belonging to common topic C1. | the first bin in the right subplot shows that discovered topic 1 has 100 unique words, 38 belonging to common topic C1 and 58 to common topic C2. | contrasting |
train_4991 | All five identified topics contain words from the 2 common topics. | in the aggregated dataset, the first identified topic contains a mixture of words from the 2 common topics, while the remaining 4 are almost entirely comprised of words from the 4 spatially distinct topics. | contrasting |
train_4992 | The combination of multilingual BERT, monolingual BPEmb, and character embeddings is best overall (92.0) among models trained only on monolingual NER data. | this ensemble of contextual and non-contextual subword embeddings is inferior to MultiBPEmb (93.2), which was first trained on multilingual data from all languages collectively, and then separately finetuned to each language. | contrasting |
train_4993 | On one hand, we wish to guide training and prediction with neural networks using logic, which is non-differentiable. | we seek to retain the advantages of gradient-based learning without having to redesign the training scheme. | contrasting |
train_4994 | The statement A_1 ∧ B_1 → A_2 ∧ B_2 is cyclic with respect to the graph. | the statement for compiling conditional statements into differentiable statements that augment a given network. | contrasting |
train_4995 | Designing the distance function: The key consideration in the compilation step is the choice of an appropriate distance function for logical statements. | the ideal distance function we seek is the indicator for the statement Z; since the function d_ideal is not differentiable, we need smooth surrogates. | contrasting |
train_4996 | Because the implication is bidirectional in a biconditional statement, it violates our acyclicity requirement in §3.1. | since the auxiliary neuron state does not depend on any other nodes, we can still create an acyclic sub-graph by defining the new node to be the distance function itself. | contrasting |
train_4997 | Our work is connected to active learning, for example, to approaches that use reinforcement learning to learn a policy for a dynamic active learning strategy (Fang et al., 2017), or to learn a curriculum to order noisy examples (Kumar et al., 2019), or to the approach of who use imitation learning to select batches of data to be labeled. | the action space these approaches consider is restricted to the decision whether or not to select particular data and is designed for a fixed budget, neither do they incorporate feedback cost in their frameworks. | contrasting |
train_4998 | Using only full feedback (blue) as in standard supervised learning or learning from post-edits, the overall highest improvement can be reached (visible only after the cutoff of 80k edits; see Appendix A.2 for the comparison over a wider window of time). | it comes at a very high cost (417k characters in total to reach +0.6 BLEU). | contrasting |
train_4999 | All of these works follow a common paradigm: use an LSTM/GRU over the word sequence, extract contextual features at each time step, and apply some kind of pooling on top of that. | a few works adopt some different methods. | contrasting |
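Below is a minimal sketch of how a split like the one previewed above could be loaded and filtered with the Hugging Face `datasets` library. The repository id `user/contrast-pairs` is a hypothetical placeholder (the actual dataset path is not shown in this preview); the column names follow the header above.

```python
# Minimal sketch, assuming the preview above corresponds to a Hugging Face
# dataset with columns id, sentence1, sentence2, label. The repository id
# "user/contrast-pairs" is a hypothetical placeholder, not the real path.
from datasets import load_dataset

ds = load_dataset("user/contrast-pairs", split="train")

# The label column has 4 classes; keep only the "contrasting" rows shown here.
contrasting = ds.filter(lambda row: row["label"] == "contrasting")

# Each row pairs a context sentence (sentence1) with a follow-up sentence
# (sentence2) whose leading discourse marker appears to have been stripped.
example = contrasting[0]
print(example["id"], example["sentence1"], example["sentence2"], sep="\n")
```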