id          stringlengths   7-12
sentence1   stringlengths   6-1.27k
sentence2   stringlengths   6-926
label       stringclasses   4 values
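Each example in the listing below is a flat run of four lines matching the schema above: an id, a sentence1, a sentence2 whose leading discourse connective has been stripped (hence the lowercase start), and a label; only the "contrasting" class appears in this slice. The following sketch shows one way such rows could be grouped and filtered in Python; the Record dataclass and parse_rows helper are illustrative names chosen here, not part of any published loader.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, List


@dataclass
class Record:
    """One example, mirroring the four columns listed above."""
    id: str          # e.g. "train_16600" (7-12 characters)
    sentence1: str   # context sentence (6-1.27k characters)
    sentence2: str   # follow-on sentence, connective removed (6-926 characters)
    label: str       # one of 4 classes, e.g. "contrasting"


def parse_rows(lines: Iterable[str]) -> Iterator[Record]:
    """Group a flat 4-lines-per-example dump into Record objects."""
    buffer: List[str] = []
    for line in lines:
        buffer.append(line.strip())
        if len(buffer) == 4:
            yield Record(*buffer)
            buffer.clear()


# Usage with one row copied verbatim from the listing below.
raw = [
    "train_16601",
    "As speech recognition technologies mature, more computing devices support "
    "spoken instructions via a conversational agent.",
    "most agents do not adapt to the phrasing and interest of a specific end user.",
    "contrasting",
]
contrasting = [r for r in parse_rows(raw) if r.label == "contrasting"]
print(len(contrasting), contrasting[0].id)  # -> 1 train_16601
```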
train_16600
Grammar-based parsers work by having a set of grammar rules that are either learned or hand-written, an algorithm for generating a set of candidate logical forms by recursive application of the grammar rules, and a criterion for picking the best candidate logical form within that set (Liang and Potts, 2015; Collins, 2005, 2007).
they are brittle to the flexibility of language.
contrasting
train_16601
As speech recognition technologies mature, more computing devices support spoken instructions via a conversational agent.
most agents do not adapt to the phrasing and interest of a specific end user.
contrasting
train_16602
In this context, various approaches based on modifying the RNN or LSTM architecture have been proposed to speed up the process (Seo et al., 2018;Yu et al., 2017).
the processing in these models is still fundamentally sequential and needs to operate on the whole document which limits the computational gain.
contrasting
train_16603
However, the processing in these models is still fundamentally sequential and needs to operate on the whole document which limits the computational gain.
to previous approaches, we propose a novel framework for efficient text classification on long documents that mitigates sequential processing.
contrasting
train_16604
That is, our approach achieves higher performance across different test-time budgets and its performance is a predictable monotonic function of the test-time budget.
the performance of skim-RNN exhibits inconsistency for different budgets.
contrasting
train_16605
These works use relations from ConceptNet, a crowd-sourced database of structured commonsense knowledge, to train and validate their models (Liu and Singh, 2004).
it has been shown that these methods generalize poorly to novel data (Li et al., 2016;Jastrzebski et al., 2018).
contrasting
train_16606
Especially for harder analogies, where interclass similarity is high and intraclass similarities are low (e.g., in GN/BATS), DensRay outperforms SVMs.
to SVMs, DensRay considers difference vectors within classes as well; this seems to be of advantage here.
contrasting
train_16607
Meta-learning algorithms have received lots of attention recently due to their effectiveness (Finn et al., 2017;Fan et al., 2018).
the potential of applying meta-learning algorithms in NLU tasks has not been fully investigated yet.
contrasting
train_16608
Syntactic structures interact closely with semantics in learning compositional representations and alleviating long-range dependency issues.
such structure priors have not been well exploited in previous work for semantic modeling.
contrasting
train_16609
The table shows that embedding similarity correlates better or the same as ROUGE with human ratings on informativeness, relevance and surrogate.
ROUGE precision is a more suitable metric for evaluating the extent of unnecessary content and verbosity.
contrasting
train_16610
The incorporation of pseudo data in the training of grammatical error correction models has been one of the main factors in improving the performance of such models.
consensus is lacking on experimental configurations, namely, choosing how the pseudo data should be generated or used.
contrasting
train_16611
For example, the encoder-decoder (EncDec) model (Sutskever et al., 2014; Bahdanau et al., 2015), which was originally proposed for MT, has been applied widely to GEC and has achieved remarkable results in the GEC research field (Ji et al., 2017; Chollampatt and Ng, 2018).
a challenge in applying EncDec to GEC is that EncDec requires a large amount of training data (Koehn and Knowles, 2017), but the largest set of publicly available parallel data (Lang-8) in GEC has only two million sentence pairs (Mizumoto et al., 2011).
contrasting
train_16612
When incorporating pseudo data, several decisions must be made about the experimental configurations, namely, (i) the method of generating the pseudo data, (ii) the seed corpus for the pseudo data, and (iii) the optimization setting (Section 2).
consensus on these decisions in the GEC research field is yet to be formulated.
contrasting
train_16613
On one hand, the response must contain the desired information, referred to as meaning representation (MR), in order to provide or request a user's information.
the system response needs to mimic the fluency and variation in human language to improve the user experience.
contrasting
train_16614
However, it still remains a challenge in task-oriented dialogue systems to generate truly natural utterance indistinguishable from a human's response.
language modeling is a technique typically employed to learn language patterns from text.
contrasting
train_16615
However, it suffers from several drawbacks: First, in order to maintain the SLU performance on both original and new expressions, it usually requires almost retraining the model using the whole dataset.
as the size of training set grows, it becomes infeasible to repeatedly conduct time consuming retraining on such a large dataset.
contrasting
train_16616
Since FT-AttRNN is only trained on new data, it is oftentimes overwhelmed by new knowledge and results in forgetting the old knowledge.
FT-Lr-AttRNN has difficulty learning new knowledge since it cannot jump out of the local optimum due to a small learning rate.
contrasting
train_16617
Such NLU systems cater to utterances in isolation, thus pushing the problem of context management to DM.
contextual information is critical to the correct prediction of intents and slots in a conversation.
contrasting
train_16618
(2015) proposed adding contextual signals to the joint IC-SL task.
the contributions of their work were limited in terms of number of signals and how they were used, rendering the contextualization process still less interpretable.
contrasting
train_16619
For the former, we simply concatenate the contextual signals to get current turn feature vector, i.e.
for the latter, concatenation becomes intractable if context window is large.
contrasting
train_16620
The main approach of previous work (Upadhyay et al., 2018;Chen et al., 2018;Schuster et al., 2019) in this task is using aligned cross-lingual word embeddings between source and target languages.
this method suffers from imperfect alignments between the source and target language embeddings.
contrasting
train_16621
Correctly identifying out-of-scope cases is thus crucial in deployed systems-both to avoid performing the wrong action and also to identify potential future directions for development.
this problem has seen little attention in analyses and evaluations of intent classification systems.
contrasting
train_16622
Similar experiments were conducted for evaluating unknown intent discovery models in Lin and Xu (2019).
our out-of-scope queries cover a broad range of phenomena and are similar in style and often similar in topic to in-scope queries, representing things a user might say given partial knowledge of the capabilities of a system.
contrasting
train_16623
First of all, the All-operations model is forced to use an operation (in this case Random Swap) with a fixed number of changes and a probability of 1.0, leading to less variation in the source inputs.
our Input-agnostic AutoAugment model samples 3 Verb Dropout's followed by Random Swap.
contrasting
train_16624
This suggests that using MWT for distant languages may help.
for close language pairs the best way is still to translate directly (BI), as can be seen for Spanish and Portuguese.
contrasting
train_16625
Alignment errors have been noted to have a small effect on statistical MT performance (Goutte et al., 2012).
misaligned sentences have been shown to be much more detrimental to neural MT (NMT) (Khayrallah and Koehn, 2018).
contrasting
train_16626
Simultaneous translation outputs target words while the source sentence is being received, and is widely useful in international conferences, negotiations and press releases.
although there has been significant progress in machine translation (MT) recently, simultaneous machine translation is still one of the most challenging tasks.
contrasting
train_16627
Thus, such a sequence must have | | number of WRITE actions.
not every action sequence is good for simultaneous translation.
contrasting
train_16628
To apply the learned policy for simultaneous translation, we can choose at each step the action with higher probability .
different scenarios may have different latency requirements.
contrasting
train_16629
It is possible that each hidden state in the decoder takes the previous hidden state into consideration, which would be somewhat similar to the proposed RPEs.
the encoder would not be expected to contain this mechanism.
contrasting
train_16630
Translating into one of these high-resource languages gives the possibility to address the downstream task by: i) using existing tools for that language to process the translated text, and ii) projecting their output back to the original language.
using MT "as is" might not be optimal for different reasons.
contrasting
train_16631
It is common practice in modern NLP to represent text in a continuous space.
the interface between the discrete world of language and the continuous representation persists and is arguably nowhere so important as in deciding what should be the atomic symbols that will be used as input and/or output.
contrasting
train_16632
Neural machine translation (NMT) is the current state-of-the-art.
with statistical machine translation (SMT; Koehn et al., 2007), where there are several established methods of incorporating external knowledge, recent work is still examining how best to incorporate bilingual lexicons into NMT systems.
contrasting
train_16633
Semi-supervised approaches for NMT are often based on automatically creating pseudoparallel sentences through methods such as backtranslation (Irvine and Callison-Burch, 2013;Sennrich et al., 2016) or adding an auxiliary autoencoding task on monolingual data (Cheng et al., 2016;Currey et al., 2017).
both methods have problems with low-resource and syntactically divergent language pairs.
contrasting
train_16634
tactic analysis tools, a linguist with rudimentary knowledge of the structure of the target language can create them in short order using these tools.
similar pre-ordering methods have not proven useful in NMT (Du and Way, 2017), largely because in high-resource scenarios NMT is much more effective at learning reordering than previous SMT methods were (Bentivogli et al., 2016).
contrasting
train_16635
For example, the "test-time waitk" scheme uses the full-sentence translation model and performs wait-k decoding at test time.
the obvious training-testing mismatch in this scheme usually leads to inferior quality.
contrasting
train_16636
Therefore, absolute position (Vaswani et al., 2017) or relative position (Shaw et al., 2018) are generally used to capture the sequential order of words in the sentence.
several studies reveal that the sequential structure may not be sufficient for NLP tasks (Tai et al., 2015; Kim et al., 2017; Shen et al., 2019), since sentences inherently have hierarchical structures (Chomsky, 1965; Bever, 1970).
contrasting
train_16637
This is because vocabularies for models #2 to #4 involve the target side of the En-XX corpus and "one" non-English corpus from the multi-parallel corpus.
the vocabularies for models #5 to #7 involve the target sides of the En-XX corpus and "all" non-English corpora from the multi-parallel corpus.
contrasting
train_16638
Simple fine-tuning (#2 and #5) gave great improvements over the baseline.
#5 was worse than #2, despite being trained in the same way, because #5 involves up to eight target languages, inevitably reducing the allowance for each target language.
contrasting
train_16639
However, #5 was worse than #2, despite being trained in the same way, because #5 involves up to eight target languages, inevitably reducing the allowance for each target language.
exploiting the multiparallel corpus through mixed pre-training (#6) or mixed fine-tuning (#7) brought consistent improvements over the corresponding 1-to-2 models (#3 and #4).
contrasting
train_16640
Previous unsupervised domain adaptation strategies include training the model with in-domain copied monolingual or back-translated data.
these methods use generic representations for text regardless of domain shift, which makes it infeasible for translation models to control outputs conditional on a specific domain.
contrasting
train_16641
(2016a) concatenate backtranslated data with the original corpus.
these methods learn generic representations for all the text, as the learned representations are shared for all the domains and synthetic and natural data are treated equally.
contrasting
train_16642
We characterize the two baselines as data-centric methods as they rely on synthesized data.
our approach is model-centric as we mainly focus on modifying the model architecture.
contrasting
train_16643
There are also work on adaptation via retrieving sentences or n-grams in the training data similar to the test set (Farajian et al., 2017;Bapna and Firat, 2019).
it can be difficult to find similar parallel sentences in domain adaptation settings.
contrasting
train_16644
Neural machine translation (NMT) has achieved new state-of-the-art performance in translating ambiguous words.
it is still unclear which component dominates the process of disambiguation.
contrasting
train_16645
We find that encoder hidden states outperform word embeddings significantly which indicates that encoders adequately encode relevant information for disambiguation into hidden states.
to encoders, the effect of decoder is different in models with different architectures.
contrasting
train_16646
The WSD accuracy of using bidirectional hidden states is competitive with using hidden states from Transformer models.
concatenating forward and backward hidden states doubles the dimension.
contrasting
train_16647
Thus, a pipeline model of the tasks has been adopted widely by previous studies.
the model has a problem that it cannot utilize interactions among the tasks.
contrasting
train_16648
decompose and recover morphemes from eojeols or assign so-called POSMORPH tags (Heigold et al., 2016), and then an actual POS tag sequence is determined or resolved from the POSMORPH tags using a sequential labeling algorithm.
this pipeline model suffers from two kinds of weaknesses.
contrasting
train_16649
This is the information flow on which the previous studies focus.
the eojeol '나는' which appears twice in this figure is morphologically ambiguous and thus can be analyzed in two ways.
contrasting
train_16650
In POS tagging represented by dotted arrows in Figure 2, it is possible to generate a POS tag at every time t by paying attention to the encoder syllable states as done in morpheme processing.
this approach does not reflect the global dependency among adjacent tags.
contrasting
train_16651
Generally speaking, LSTM has shown great success for sequential data, leveraging long range dependencies and preserving the temporal order of the sequence (Cho et al., 2014;Graves, 2013).
LSTM requires intensive computational resources both for training and inference due to its sequential nature.
contrasting
train_16652
Feature engineering and classical machine learning algorithms such as Hidden Markov Models, Maximum Entropy Models, and Finite State Transducer were the dominant approaches (Nelken and Shieber, 2005;Zitouni et al., 2006;Elshafei et al., 2006).
recent studies show significant improvement using deep neural networks (Belinkov and Glass, 2015;Pham et al., 2017;Orife, 2018).
contrasting
train_16653
Prior work on training generative Visual Dialog models with reinforcement learning (Das et al., 2017b) has explored a Q-BOT-A-BOT image-guessing game and shown that this 'self-talk' approach can lead to improved performance at the downstream dialog-conditioned image-guessing task.
this improvement saturates and starts degrading after a few rounds of interaction, and does not lead to a better Visual Dialog model.
contrasting
train_16654
A typical cross-lingual transfer learning approach boosting model performance on a resource-poor language is to pre-train the model on all available supervised data from another resource-rich language.
in large-scale systems, this leads to high training times and computational requirements.
contrasting
train_16655
A typical approach is to pre-train the model on labeled data from a richly resourced language, and then either apply it directly on a target language (Upadhyay et al., 2018) or finetune it on a smaller amount of supervised data from a target language (Do and Gaspers, 2019).
both in SLU and other NLP tasks, prior work on CLTL typically utilized all available source data to transfer knowledge, as it has been mainly investigated in rather small academic settings.
contrasting
train_16656
Instead of selecting the top K% utterances, we define a threshold θ: u is selected if R(u) >= θ. θ and α k become hyperparameters which can be optimized using Bayesian Optimization with the score of the CLTL model on a development set as the objective.
while this could improve performance, it is expensive, especially in large-scale settings, i.e.
contrasting
train_16657
The inference time complexity is O(MN) (for generating M query representations for a size-N dataset).
to the prior works, we leverage intra-modal multi-head attention, which can be easily parallelized compared to DAN and has a preferred O(M) inference time complexity compared to SCAN.
contrasting
train_16658
One of the desired properties of multi-head attention is its ability to jointly attend to and encode different information in the embedding space.
there is no mechanism to support that these attention heads indeed capture diverse information.
contrasting
train_16659
Object detection plays an important role in current solutions to vision and language tasks like image captioning and visual question answering.
popular models like Faster R-CNN rely on a costly process of annotating ground-truths for both the bounding boxes and their corresponding semantic labels, making it less amenable as a primitive task for transfer learning.
contrasting
train_16660
Prior work demonstrated the power of pre-training with image classification at scale (Sun et al., 2017;Mahajan et al., 2018;Wu et al., 2019).
we consider downstream vision and language tasks (image captioning and visual question answering), in contrast to less complex vision-only tasks explored in such work: object detection and in some cases semantic segmentation and human pose estimation.
contrasting
train_16661
Recent work focuses on learning the mapping from visual segments to the input text (Hendricks et al., 2017;Gao et al., 2017a;Liu et al., 2018;Hendricks et al., 2018;Zhang et al., 2018) and retrieving segments based on the alignment scores.
in order to successfully train a NLL model, a large number of diverse language descriptions are needed to describe different temporal segments of videos which incurs high human labeling cost.
contrasting
train_16662
Our model generalizes WM18, which can be realized by setting M = I_3 and β = m. Another interesting instance of the model is obtained by setting M = (1 − α_m) I_3 and β = α_m m, which we call the Diagonal Covariance (DC) model.
to the RGB model and WM18, which model m as a function of r and m, m in the DC model does not depend on r. Given r and m, to predict the target t, our DC model predicts m first and then applies a linear transformation to get the target color vector as follows: is a scalar which only depends on m and measures the distance from r to m. In the DC model, m is the interpolation point for modifiers, such as (0, 0, 0) for the modifier "darker".
contrasting
train_16663
Though TF enables efficient training, it results in "exposure bias" because agents must follow learned rather than gold trajectories at test time.
(ii) Student-forcing (SF), where an action a_S is drawn from the current learned policy, allows the agent to learn from its own actions (aligning training and evaluation); however, it is inefficient, as the agent explores randomly when confused or in the early stages of training.
contrasting
train_16664
Previous studies (Barrett and Søgaard, 2015b;Barrett et al., 2016) suggest that real-time eye-tracking data can be collected at inference time, so that token-level gaze features are used during training but also at test time.
even if in the near future every user has an eye tracker on top of their screen, a scenario which is far from guaranteed and raises privacy concerns (Liebling and Preibusch, 2014), many running NLP applications that process data from various Internet sources will not expect to have any human being reading massive amounts of data.
contrasting
train_16665
When looking at the dev set scores, the most discriminative gaze feature is mean fix dur that increases LAS by +0.17 and first fix dur by +0.14.
evaluation on the test set shows that the most informative gaze features are from the context group learned as multiple auxiliary tasks (context feats aux) and they improve the LAS score by +0.18, followed by fix prob and w−1 fix prob with +0.13, total fix dur with +0.12 and late feats aux with +0.10.
contrasting
train_16666
We considered using the constituency parsed Switchboard corpus (Godfrey et al., 1992), which is a human-human telephone corpus, due to its similarity to our domain.
no fully UD-compliant version of the dependency parsed corpus is publicly available.
contrasting
train_16667
These results indicate that our fine-tuned model is more closely aligned to the speech domain than our baseline model.
our best model still struggles when faced with fixed expressions common in conversational speech, such as "Top Gun" and "Kitchen Table Wisdom".
contrasting
train_16668
The widely used method for document-level translation usually compresses the context information into a representation via hierarchical attention networks.
this method neither considers the relationship between context words nor distinguishes the roles of context words.
contrasting
train_16669
To get the multi-perspective information, it is necessary to distinguish the role of each context word and model their relationship especially when one context word could take on multiple roles (Zhang et al., 2018b).
this is difficult to realize for the HAN model as its final representation for the context is produced with an isolated relevance with the query word which ignores relations with other context words.
contrasting
train_16670
We notice that for a reasonably small adapter size, adapter performance gradually converges to its peak and stays steady, with almost no over-fitting for a long enough period, easing final model selection.
with full fine-tuning, optimal model selection becomes challenging due to rapid over-fitting on the adaptation corpus.
contrasting
train_16671
Although it might be possible to reduce this gap further by increasing the adapter size beyond b = 4096, there might be more efficient ways to approach this problem, including more expressive network architectures for adapters, joint fine-tuning of adapters and global model parameters, etc.
we leave these studies to future work.
contrasting
train_16672
Multilingual Neural Machine Translation (NMT) models have yielded large empirical success in transfer learning settings.
these black-box representations are poorly understood, and their mode of transfer remains elusive.
contrasting
train_16673
One may also use advanced and domain-specific neural machine translation system to achieve better translation performance, while we leave it for individuals, and this is beyond the scope of this paper.
for span-extraction reading comprehension task, a major drawback of this approach is that the translated answer may not be the exact span in the target passage.
contrasting
train_16674
Rapid progress has been made in the field of reading comprehension and question answering, where several systems have achieved human parity in some simplified settings.
the performance of these models degrades significantly when they are applied to more realistic scenarios, where answers are involved with various types, multiple text strings are correct answers, or discrete reasoning abilities are required.
contrasting
train_16675
(2019) extend previous one-type answer prediction (Seo et al., 2017) to multi-type prediction that supports span extraction, counting, and addition/subtraction.
they have not fully considered all potential types.
contrasting
train_16676
Existing methods use unsupervised training with encoder-decoder architectures (Lei et al., 2016;Zhang and Wu, 2018), and adversarial domain transfer where the model is trained on a source domain and adversarially adapted to a target domain (Shah et al., 2018).
such approaches typically fall short of the performances that are being achieved with in-domain supervised training.
contrasting
train_16677
Therefore, we can add more instances to the training splits with these methods.
if not otherwise noted, we use the same number of training instances as in the original training splits with duplicates.
contrasting
train_16678
Automatic DQD is often approached as a supervised problem with community-generated training labels.
smaller CQA forums may suffer from label sparsity: On Stack Exchange, 50% of forums have fewer than 160 user-labeled duplicates, and 25% have fewer than 50 (see Figure 1).
contrasting
train_16679
These include undersampling (Kotsiantis and Pintelas, 2003), oversampling, and feature selection (Zheng et al., 2004) at the data level.
due to random undersampling, potentially useful samples can be discarded, while random oversampling poses the risk of overfitting.
contrasting
train_16680
These themes were developed qualitatively from transcripts with therapists.
while they were comprehensive in the clinical domain, here, we aim to create a more general framework for classifying MAS in online interactions.
contrasting
train_16681
Further, our focus on gender also allows us to reliably recruit crowdworkers across the gender spectrum, whereas other social categories such as race or religion are more difficult to recruit in a balanced proportion through traditional mechanisms.
despite this current focus, both the typology and annotation approach are designed for the analysis of perceived offensiveness between annotator genders.
contrasting
train_16682
Hence, we prefer browsing the reviews online for the sake of finding our desirable products.
it is quite time-consuming for potential buyers to sift through millions of online reviews with uneven qualities to make purchase decisions.
contrasting
train_16683
One approach to define f θ might be to manually define features of interest, such as stylometric or surface features (Solorio et al., 2014;Sari et al., 2018).
when large amounts of data are available, it is preferable to use a data-driven approach to representation learning.
contrasting
train_16684
The most closely related work is author identification on social media.
previous work in this area has largely focused on distinguishing among small, closed sets of authors rather than the open-world setting of this paper (Mikros and Perifanos, 2013;Ge et al., 2016).
contrasting
train_16685
To computationally model a meaning-preserved variance of text across styles, many recent works have developed systems that transfer styles (Reddy and Knight, 2016;Hu et al., 2017;Prabhumoye et al., 2018) or profiles authorships from text (Verhoeven and Daelemans, 2014;Stamatatos et al., 2018) without parallel corpus of stylistic text.
the absence of such a parallel dataset makes it difficult both to systematically learn the textual variation of multiple styles as well as properly evaluate the models.
contrasting
train_16686
A series of works by (Koppel et al., 2011; Koppel and Winter, 2014) and their shared tasks (Stamatatos et al., 2018) show huge progress on author profiling and attribute classification tasks.
none of the prior works have collected a stylistic language dataset with multiple styles in conjunction, annotated in parallel by a human.
contrasting
train_16687
With a single image, on the other hand, the outputs produced by annotators tend to be diverse.
the image can be explained with a variety of contents, so the output meaning can drift away from the reference sentence.
contrasting
train_16688
Top five uni/bigram keywords are chosen at each story, which are called global keywords.
another top three uni/bigram keywords are chosen at each image/sentence in a story, which are called local keywords.
contrasting
train_16689
Context-Aware Model (CAM) A simple baseline model would compute the semantic representation of each sentence in the synopsis using a pretrained sentence encoder.
classifying segments in isolation without considering the context in which they appear, might yield inferior semantic representations.
contrasting
train_16690
Specifically, we want to track the five sentences with the highest posterior probability of being TPs and sequentially assign them TP labels based on their position.
it is possible to have a cluster of neighboring sentences with high probability, even though they all belong to the same TP.
contrasting
train_16691
The performance of the end-to-end TAM model drops slightly compared to the same model using gold-standard TP annotations.
it still remains competitive against the baselines, indicating that tracking TPs in screenplays fully automatically is feasible.
contrasting
train_16692
We would also like to point out that, as can be seen in the table, our results have quite high variance; the main source of it is the nature of our training/evaluation setup where we average over 10 runs with 10 different sets of seed dialogues.
in the majority of cases with comparable means, DiKTNet has a lower variance than the baselines. The comparison of the setups with different latent representations also gives us some insight: while the VAE-powered HRED model improves on the baseline in multiple cases, it lacks generalization potential compared to the LAED setup.
contrasting
train_16693
Negative candidates which are lexically similar to the ground truth response should result in models that carefully consider each word in order to produce fine-grained representations and identify minute differences between candidate responses.
very semantically distant candidate responses should result in very broad and abstract representations of language.
contrasting
train_16694
Identity fraud detection is of great importance in many real-world scenarios such as the financial industry.
few studies addressed this problem before.
contrasting
train_16695
Based on the above finding, we aim to design a dialogue system to detect identity fraud by asking derived questions.
there are three major challenges in achieving this goal.
contrasting
train_16696
To generate derived questions, we link all personal information entities to nodes in an existing Chinese KG and crawl triplets that are directly related to them.
owing to the fact that the KG is largely sparse, nearly half of the entities cannot be linked.
contrasting
train_16697
Besides, existing work requires labeled data, which is often hard to get.
we focus on detecting identity fraud through multi-turn interactions and use reinforcement learning to explore the anti-fraud policy without any labeled data.
contrasting
train_16698
In such tasks, a set of system actions can be pre-defined based on the business logic and slots.
the system actions in our task are selecting nodes in the KG to generate questions.
contrasting
train_16699
The neural encoder-decoder models have shown great promise in neural conversation generation.
they cannot perceive and express the intention effectively, and hence often generate dull and generic responses.
contrasting