id: string (length 7-12)
sentence1: string (length 6-1.27k)
sentence2: string (length 6-926)
label: string (4 classes)
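The schema above can be sketched in code. The snippet below is a minimal illustration using one record copied verbatim from the rows that follow; the field names and value ranges come from the header, and nothing beyond them is assumed.

```python
# Minimal sketch of one record from this split, using the fields named
# in the header: id, sentence1, sentence2, label.
records = [
    {
        "id": "train_4600",
        "sentence1": (
            "In machine translation, since every sentence and its "
            "translation are semantically equivalent, there exists a "
            "1-to-1 relationship between them."
        ),
        "sentence2": (
            "in general purpose dialog, a general response (e.g., "
            "\"I don't know\") could correspond to a large variety of "
            "input utterances."
        ),
        "label": "contrasting",
    },
]

# Sanity checks mirroring the header's stats: ids are 7-12 characters,
# and every record carries exactly these four fields.
for r in records:
    assert set(r) == {"id", "sentence1", "sentence2", "label"}
    assert 7 <= len(r["id"]) <= 12
```

Note that only the "contrasting" label appears in this chunk; the header's "4 classes" implies three other label values not shown here.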
train_4600
In machine translation, since every sentence and its translation are semantically equivalent, there exists a 1-to-1 relationship between them.
in general purpose dialog, a general response (e.g., "I don't know") could correspond to a large variety of input utterances.
contrasting
train_4601
(2017) incorporated topic information from an external corpus into the Seq2Seq framework to guide the generation.
an external dataset may not always be available or consistent in topic with the conversation dataset.
contrasting
train_4602
In the usage space, words like "脂肪肝 (fatty liver)" and "久坐 (outsit)", which are both specific words, lie close together, and both are far from general words like "胖 (fat)".
in the semantic space, "脂肪肝 (fatty liver)" is close to "胖 (fat)" since they are semantically related, and both are far from the word "久坐 (outsit)".
contrasting
train_4603
Previous studies show that capturing matched segment pairs at different granularities across context and response is the key to multi-turn response selection (Wu et al., 2017).
existing models only consider textual relevance, which suffers when matching responses that latently depend on previous turns.
contrasting
train_4604
As demonstrated, important matching pairs in self-attention-match in stack 0 are nouns and verbs, like "package" and "packages", which are similar in topic.
matching scores between prepositions or pronouns pairs, such as "do" and "what", become more important in self-attention-match in stack 4.
contrasting
train_4605
Generating emotional language is a key step towards building empathetic natural language processing agents.
a major challenge for this line of research is the lack of large-scale labeled training data, and previous studies are limited to only small sets of human annotated sentiment labels.
contrasting
train_4606
In recent years, a handful of medium-to-large-scale emotional corpora in the areas of emotion analysis (Go et al., 2016) and dialog (Li et al., 2017b) have been proposed.
all of them are limited to a traditional, small set of labels, for example, "happiness," "sadness," "anger," etc.
contrasting
train_4607
VAE (Kingma and Welling, 2013) encodes data in a probability distribution, and then samples from the distribution to generate examples.
the original frameworks do not support end-to-end generation.
contrasting
train_4608
Social media is a natural source of conversations, and people use emojis extensively within their posts.
not all emojis are used to express emotion, and the frequencies of emojis are unevenly distributed.
contrasting
train_4609
In our case, a tweet can be reached only if at least one of the 64 emojis is used as a word, meaning it has to be a single character separated by blank space.
this kind of tweet is arguably cleaner, as it is often the case that the emoji is used to wrap up the whole post, and clusters of repeated emojis are less likely to appear in such tweets.
contrasting
train_4610
The situation would be more serious if λ in Equation 7 is set higher.
this phenomenon does not impair the fluency of generated sentences, as can be seen in Figure 5.
contrasting
train_4611
Mathematically, it is analytically proven that α = 0.5 for an i.i.d process, and the proof is included as Supplementary Material.
α = 1.0 when all segments always contain the same proportion of the elements of W .
contrasting
train_4612
Therefore, the Taylor exponent can reasonably serve for evaluating machine-generated text.
to character-level neural language models, neural-network-based machine translation (NMT) models are, in fact, capable of maintaining the burstiness of the original text.
contrasting
train_4613
However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.
to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.
contrasting
train_4614
This is a verb phrase in the sentence "In the above figure, x is prime and 2x = 3y."
it is a noun phrase in the sentence "The equation 2x = 3y has 2 solutions."
contrasting
train_4615
One therefore would not expect much, if any, performance gain from infusion of sentiment information.
such infusion should not subtract or harm the quality of word embeddings either.
contrasting
train_4616
(Mohammad et al., 2016) is a metaphorical expression.
when the sentence is parsed into a verb-direct object phrase, climb ladder, it appears literal.
contrasting
train_4617
Taking Skip-gram for example, empirically, input vectors of words with the same POS, occurring within the same contexts tend to be close in the vector space (Mikolov et al., 2013), as they are frequently updated by back propagating the errors from the same context words.
input vectors of words with different POS, playing different semantic and syntactic roles tend to be distant from each other, as they seldom occur within the same contexts, resulting in their input vectors rarely being updated equally.
contrasting
train_4618
They constructed a Gaussian graphical model which can extrapolate continuous representations for unknown words.
these morphology-based models directly exploit the internal composition of words to encode morphological regularities into word embeddings, and also produce by-products such as morpheme embeddings.
contrasting
train_4619
It is expected that the hearing will go on for two days.
the Republican complainant in the House wanted to summon 15 people including Lewinsky to testify in court.
contrasting
train_4620
It can be seen that all current tree-based NMT systems use only one tree for encoding or decoding.
we hope to utilize multiple trees (i.e., a forest).
contrasting
train_4621
The attributive phrase of "Czech border region" is a complete sentence.
the attributive is not allowed to be a complete sentence in Chinese.
contrasting
train_4622
For the example in Figure 4, we observed that for the s2s model, the decoder paid attention to the word "Czech" twice, which causes the output sentence to contain the Chinese translation of "Czech" twice.
for our forest model, by using the syntax information, the decoder paid attention to the phrase "In the Czech Republic" only once, making it generate the correct output.
contrasting
train_4623
(2016), very recently, Zaremoodi and Haffari (2017) proposed a forest-based NMT method by representing the packed forest with a forest-structured neural network.
their method was evaluated in small-scale MT settings (each training dataset consists of under 10k parallel sentences).
contrasting
train_4624
The training of these methods is fast, because of the linear structures of RNNs.
all these syntax-based NMT systems used only the 1-best parsing tree, making the systems sensitive to parsing errors.
contrasting
train_4625
The first N − 1 layers are identical and represent the original layers of the Transformer. Context encoder: The context encoder is composed of a stack of N identical layers and replicates the original Transformer encoder.
to related work (Jean et al., 2017;Wang et al., 2017), we found in preliminary experiments that using separate encoders does not yield an accurate model.
contrasting
train_4626
This confirms that the model does rely on context information to achieve the improvement in translation quality, and is not merely better regularized.
the model is robust towards being shown a random context and obtains a performance similar to the context-agnostic baseline.
contrasting
train_4627
Our model generates the target translation word-by-word from left to right, similar to the vanilla attentional neural translation model.
it conditions the generation of a target word not only on the previously generated words and the current source sentence (as in the vanilla NMT model), but also on all the other source sentences of the document and their translations.
contrasting
train_4628
To be able to fit the computational graph of the document NMT model within GPU memory limits, we pre-train the sentence-level bidirectional RNN using the language modelling training objective.
the document-level bidirectional RNN is trained together with other parameters of the document NMT model by back-propagating the document translation training objective.
contrasting
train_4629
We may attribute this to these specific datasets, that is documents from TED talks or European Parliament Proceedings may depend more on the local than on the global context.
for German→English, the target memory model performs the best, as shown in Table 6 (analysis of the target context model).
contrasting
train_4630
It can be seen that the source sentence has the noun "Qimonda" but the sentence-level NMT model fails to attend to it when generating the translation.
the single memory models are better in delivering some, if not all, of the underlying information in the source sentence but the dual memory model's translation quality surpasses them.
contrasting
train_4631
Until now, this knowledge was only used as part of simple disparate heuristics and manual disambiguation procedures.
it is possible to plot this spatial data on a world map, which can then be reshaped into a 1D feature vector, or a Map Vector, the geographic representation of location mentions.
contrasting
train_4632
Table 1 shows the effectiveness of this heuristic, which is competitive with many geocoders, even outperforming some.
the baseline is not effective on WikToR as the dataset was deliberately constructed as a tough ambiguity test.
contrasting
train_4633
We use the text data to identify activities that are potential goal-acts for a location.
we also need to identify locations and want to include both proper names (e.g., Disneyland) as well as nominals (e.g., store, beach), so Named Entity Recognition will not suffice.
contrasting
train_4634
When adding Activity Similarity into the algorithm, we find that A_L slightly improves performance, but A_O and A_E do not.
we also tried combining them and obtained improved results by using A_L and A_E together, yielding an MRR_P score of 0.42.
contrasting
train_4635
And "phone" arguably is not a location at all, but most human annotators treated it as a virtual location, listing goal-acts related to telephones.
our algorithm considered phones to be similar to computers, which makes sense for today's smartphones.
contrasting
train_4636
First, we have the Marginal Popularity score and Conditional Popularity score as two context-independent features, which could compensate for each other.
as discussed in the previous section, some popular public meanings (e.g., "Artificial Intelligence") may rarely be mentioned in an enterprise corpus by their full names; therefore both their marginal popularity score and conditional popularity score can be very low.
contrasting
train_4637
We also want to compare our method with the state-of-the-art Entity Linking (EL) systems based on public knowledge bases such as Wikipedia.
it is unfair to directly compare as most enterprise specific meanings are unknown to them.
contrasting
train_4638
Recently, there have been a few works (Jain et al., 2007;Larkey et al., 2000;Nadeau and Turney, 2005;Taneva et al., 2013) on automatically mining acronym meanings by leveraging Web data (e.g., query sessions, click logs).
it is hard to apply them directly to enterprises, since most data in enterprises are raw text and therefore the query sessions/click logs are rarely available.
contrasting
train_4639
It requires all TempRels between GENERIC and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other for the GENERIC.
as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.
contrasting
train_4640
tended the conference, the projection of submit a paper onto the main axis is clearly before attended.
this projection requires strong external knowledge that a paper should be submitted before attending a conference.
contrasting
train_4641
Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.
the "hardest hit" in "Asian crisis before hardest hit" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.
contrasting
train_4642
With the availability of large datasets and the recent progress made by neural methods, variants of sequence-to-sequence learning (seq2seq) (Sutskever et al., 2014) architectures have been successfully applied for building conversational systems (Serban et al., 2017b).
despite these methods being the state-of-the-art frameworks for conversation generation, they suffer from problems such as lack of diversity and the generation of short, repetitive, and uninteresting responses (Liu et al., 2016; Serban et al., 2017b).
contrasting
train_4643
Tech Support dataset contains conversations pertaining to an employee seeking assistance from an agent (technical support) -to resolve problems such as password reset, software installation/licensing, and wireless access.
to Ubuntu dataset, this dataset has clearly two distinct users -employee and agent.
contrasting
train_4644
Table 4 compares our model with other recent generative models (Serban et al., 2017a): LSTM (Shang et al., 2015), HRED, and VHRED (Serban et al., 2017b). We do not compare our model with Multi-Resolution RNN (MRNN) (Serban et al., 2017a), as MRNN explicitly utilizes the activities and entities during the generation process.
the proposed EED model and the other models used for comparison are agnostic to the activity and entity information.
contrasting
train_4645
Such systems allow users to formulate questions with greater flexibility.
although state-of-the-art systems have achieved a high accuracy of 80% to 90% (Dong and Lapata, 2016) on well-curated datasets like GEO (Zelle and Ray, 1996) and ATIS (Zettlemoyer and Collins, 2007), the best accuracies on datasets with questions formulated by real human users, e.g., WebQuestions (Berant et al., 2013), GraphQuestions, and WikiSQL (Zhong et al., 2017), are still far from sufficient for real use, typically in the range of 20% to 60%.
contrasting
train_4646
(2016) incorporate coarse-grained user interaction, i.e., asking the user to verify the correctness of the final results.
for real-world questions, it may not always be possible for users to verify result correctness, especially in the absence of supporting evidence.
contrasting
train_4647
(2017) have shown that incorporating fine-grained user interaction can greatly improve the accuracy of NLIDBs.
they require that the users have intimate knowledge of SQL, an assumption that does not hold for general users.
contrasting
train_4648
Metric validation in Grammatical Error Correction (GEC) is currently done by observing the correlation between human and metric-induced rankings.
such correlation studies are costly, methodologically troublesome, and suffer from low inter-rater agreement.
contrasting
train_4649
We test this hypothesis by computing the average difference in GLEU score between all pairs in the sampled chains, and find it to be slightly negative (-0.00025), which is in line with GLEU's small negative τ .
plotting the GLEU scores of the originals grouped by the number of errors they contain, we find they correlate well ( Figure 5), indicating that GLEU performs well in comparing the quality of corrections of different sentences.
contrasting
train_4650
As we can see in Section 4, ensemble indeed improves the performance of baseline models.
real world deployment is usually constrained by computation and memory resources.
contrasting
train_4651
In Section 4.2, improvements from distilling the ensemble have been witnessed in both the transition-based dependency parsing and neural machine translation experiments.
questions like "Why the ensemble works better?
contrasting
train_4652
When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.
when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.
contrasting
train_4653
xcomp This is an instance of a general principle that, if there is a shortening of an MAE multiword phrase into a single word, the annotations on that word should mirror the edges in and out of the original phrase's subgraph (as in 's fudge expressions).
in contrast to the UD Treebank, we did not attempt to split up these words into their component words (e.g.
contrasting
train_4654
As before, we observed significant increases in accuracy moving from the Morpho-Tagger to the ARK Tagger settings.
neither adding embeddings nor synthetic data appeared to significantly increase accuracy for these features.
contrasting
train_4655
3) to calculate the probability distribution over V, i.e., P_j^g(v) where v ∈ V, and then choose the token with the highest generation probability.
in our case, tokens in the target sequence Y might be exactly copied from the input X (e.g., "Italian").
contrasting
train_4656
It helps to reduce the search space because all key information of X for task completion has been included in B_t.
to recent work (Eric and Manning, 2017a) that also employs a copy-attention mechanism to generate a knowledge-base search API and machine responses, our proposed method advances in two aspects: on one hand, bspans reduce the search space from U_1 R_1 ... U_t R_t to B_{t-1} R_{t-1} U_t by compressing key points for task completion given past dialogues; on the other hand, because bspans revisit context by only handling B_t, which has a fixed length, the time complexity of TSCP is only O(T), compared to O(T^2) in (Eric and Manning, 2017a).
contrasting
train_4657
This approach is also referred to as the Language Model Type condition (Wen et al., 2016b). The standard cross entropy is adopted as our objective function to train a language model: In response generation, every token is treated equally.
in our case, tokens for task completion are more important.
contrasting
train_4658
Previous work predefines all slot values in a belief tracker.
a user may request new attributes that have not been predefined as a classification label, which results in an entity mismatch.
contrasting
train_4659
On the one hand, these specialized embeddings are more difficult to obtain than word embeddings from language modeling.
these embeddings are not specific to any dialogue domain and are directly usable for new domains.
contrasting
train_4660
One can find that in (a) hop 1, there is no clear separation of two different colors but each of which tends to group together.
the separation becomes clearer in (b) hop 6 as each color clusters into several groups such as location, cuisine, and number.
contrasting
train_4661
In addition, Ptr-Unk often cannot copy the correct token from the input, as shown by "PAD" in Table 1.
mem2Seq is able to produce the correct responses in these two examples.
contrasting
train_4662
Reinforcement Learning (Ranzato et al., 2016), Beam Search (Wiseman and Rush, 2016)) to improve both response relevance and entity F1 score.
we preferred to keep our model as simple as possible in order to show that it works well even without advanced training methods.
contrasting
train_4663
(Li et al., 2016a) proposed a mutual information model(MMI) to tackle this problem.
it is not a unified training model; instead it still trained the original Seq2Seq model and used the Maximum Mutual Information criterion only at test time to rerank the primary top-n list.
contrasting
train_4664
With these latent embeddings in the middle of Seq2Seq, the mechanism-aware Seq2Seq can generate responses with different mechanisms.
most of these models are using an averaged approach for optimization, similar to that in Seq2Seq.
contrasting
train_4665
We can see that Mechanism produces responses with the same meaning, such as 'Wade is so amazing' and 'It is really good'.
our CVaR models give specific responses with diverse meanings.
contrasting
train_4666
End-to-end learning framework is useful for building dialog systems for its simplicity in training and efficiency in model updating.
current end-to-end approaches only consider user semantic inputs in learning and under-utilize other user information.
contrasting
train_4667
Because traditional modular-based systems are harder to train, to update with new data, and to debug, end-to-end trainable systems are more popular.
no work has tried to incorporate sentiment information in the end-to-end trainable systems so far to create sentiment-adaptive systems that are easy to train.
contrasting
train_4668
A common practice is to simulate users.
building a user simulator is not a trivial task.
contrasting
train_4669
Many previous RL studies used delayed rewards, mostly task success.
delayed rewards make the converging speed slow, so some studies integrated estimated per-turn immediate reward.
contrasting
train_4670
One possible reason is that a total of eight dialogic features were added to the model, and some of them might contain noises and therefore caused the model to overfit.
using predicted sentiment information as an extra feature, which is a more condensed information, outperformed the other models both in terms of turn-level F-1 score and dialog accuracy which indicates if all turns in a dialog are correct.
contrasting
train_4671
SAMPLE performs best for CHAR: best results in five out of eight cases.
its coverage is low: N = 58.
contrasting
train_4672
The transfer or share of knowledge between languages is a popular solution to resource scarcity in NLP.
the effectiveness of cross-lingual transfer can be challenged by variation in syntactic structures.
contrasting
train_4673
On one hand, Latin would display the attribute-value pairs TENSE=FUTURE and CASE=DATIVE among the features of erit and superis.
in English the function words (will and to) add nodes to the dependency structure, modifying the equivalent words (be and gods).
contrasting
train_4674
However, they also have weaknesses: the Jaccard index of feature sets is not reliable for languages with a limited number of morphologically expressed grammatical categories.
the tree edit distance measure requires resources (such as treebanks and parallel corpora) that are not available for many languages.
contrasting
train_4675
This implies that the amount of data required would not just be twice, but probably 10 or 100 times more than that for training a monolingual system with similar accuracy.
apart from user-generated content on the Web and social media, it is extremely difficult to gather large volumes of CM data because (a) CM is rare in formal text, and (b) speech data is hard to gather and even harder to transcribe.
contrasting
train_4676
3), in the EC model, a pair of monolingual parallel tweets gives rise to a large number (typically exponential in the length of the tweet) of CM tweets.
in reality, only a few of those tweets would be observed.
contrasting
train_4677
We find that with proper representation settings, the same conclusion holds for neural NER.
lattice LSTM is a better choice compared with both word LSTM and character LSTM.
contrasting
train_4678
A CNN representation of character sequences gives a slightly higher F1-score compared to LSTM character representations.
further using character bigram information leads to increased F1-score over word+char LSTM, but decreased F1-score over word+char CNN.
contrasting
train_4679
The quality of the lexicon may affect the accuracy of our NER model since noise words can potentially confuse NER.
our lattice model can potentially learn to select more correct words during NER training.
contrasting
train_4680
There has been a significant amount of work on Semantic Role Labeling (Lang and Lapata, 2011; Titov and Khoddam, 2015; Roth and Lapata, 2016), which can be considered as n-ary relation extraction.
we are interested in inducing the schemata, i.e., the type signature of these relations.
contrasting
train_4681
For example, a prediction that a city is in France might depend on the conjunction of several facets of textual evidence linking the city to the French language, the Euro, and Norman history.
the common maximum aggregation approach is to move the final prediction layer to the sentence-to-vector modules and then aggregate by max-pooling the sentence level predictions.
contrasting
train_4682
Large KBs and corpora are needed to train KBP systems in order to collect enough mentions for each relation.
most of the existing Knowledge Base Population tasks are small in size (e.g.
contrasting
train_4683
In principle, instead of relying on the linear combination of relation embeddings matrices R k , we could directly predict a context-specific relation embedding where g is a neural network.
in preliminary experiments we observed that this resulted in overfitting and poor performance.
contrasting
train_4684
Anonymization gives comparable overall performance gains on our graph-to-sequence model as the copy mechanism (comparing Graph2seq+Anon with Graph2seq+copy).
the copy mechanism has several advantages over anonymization as discussed in Section 3.5.
contrasting
train_4685
Though directly connected in the original graph, their distance in the serialization result (the input of S2S) is 26, which may be why S2S makes these mistakes.
g2S handles "a / account" and "o / old" correctly.
contrasting
train_4686
In the training instances, serialized nodes that are close to each other can originate from neighboring graph nodes, or distant graph nodes, which prevents the decoder from confidently deciding the correct relation between them.
g2S sends the node "p / provide" simultaneously with the relation "ARG0" when calculating hidden states for "a / agree", which facilitates generating "the agreement provides".
contrasting
train_4687
This representation allows easy data share between KBs.
usually the elements of a triple are stored as Uniform Resource Identifiers (URIs), and many predicates (words or phrases) are not intuitive; this makes the representation difficult for humans to comprehend.
contrasting
train_4688
The adapted standard triple encoder has an advantage in preserving the intra-triple relationship.
it has not considered the structural relationships between the entities in different triples.
contrasting
train_4689
Both models have a predefined relationship between the vertices (Graph LSTM uses spatial relationships: top, bottom, left, or right between super-pixels; Tree LSTM uses dependencies between words in a sentence as the relationship).
a KB has an open set of relationships between the vertices (i.e., the predicate defines the relationship between entities/vertices) which make our problem more difficult to model.
contrasting
train_4690
Language models based on Recurrent Neural Networks (RNNs) have brought substantial advancements across a wide range of language tasks (Jozefowicz et al., 2016;Bahdanau et al., 2015;Chopra et al., 2016).
when used for longform text generation, RNNs often lead to degenerate text that is repetitive, self-contradictory, and overly generic, as shown in Figure 1.
contrasting
train_4691
Repetition can be reduced by prohibiting recurrence of the trigrams as a hard rule (Paulus et al., 2018).
such hard constraints do not stop RNNs from repeating through paraphrasing while preventing occasional intentional repetition.
contrasting
train_4692
The RNN language model decomposes into per-word probabilities via the chain rule.
in order to allow for more expressivity over long range context we do not require the discriminative model scores to factorize over the elements of y, addressing a key limitation of RNNs.
contrasting
train_4693
The absolute performance of all the evaluated systems on BLEU and Meteor is quite low (Table 1), as expected.
in relative terms L2W is superior to or competitive with all the baselines, of which ADAPTIVELM performs best.
contrasting
train_4694
Our goal is to generate a sentence containing the target word.
the vanilla seq2seq model cannot guarantee that the target word will always appear in the generated sequence.
contrasting
train_4695
Both descriptions are true, but the latter is most salient given the player's goal.
sometimes, none of the aspects may stand out as being most salient, and the most salient aspect may even change from commentator to commentator.
contrasting
train_4696
Hence, RAML enjoys a much more stable optimization without the need of pretraining.
in order to optimize the RAML objective (Eqn.
contrasting
train_4697
MS-MARCO uses web queries as questions and the answers are synthesized by workers from documents relevant to the query.
in most cloze-style datasets (Mostafazadeh et al., 2016;Onishi et al., 2016) the questions are created automatically by deleting a word/entity from a sentence.
contrasting
train_4698
This is indeed true for the SQuAD, TriviaQA and NewsQA datasets.
in our dataset, in many cases the answers do not correspond to an exact span in the document but are synthesized by humans.
contrasting
train_4699
Note that (Tan et al., 2017) recently proposed an answer generation model for the MS MARCO dataset.
the authors have not released their code and therefore, in the interest of reproducibility of our work, we omit incorporating this model in this paper.
contrasting