id: string (length 7–12)
sentence1: string (length 6–1.27k)
sentence2: string (length 6–926)
label: string (4 classes)
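For readers consuming this dump programmatically, here is a minimal, illustrative sketch of how records with the four fields above could be represented and filtered in Python. The `Pair` class, the truncated sample record, and the filtering step are assumptions made for illustration only, not part of any released loader.

```python
from dataclasses import dataclass

@dataclass
class Pair:
    """One record: a pair of sentences from a paper plus a relation label."""
    id: str         # e.g. "train_4000" (7-12 chars)
    sentence1: str  # first sentence of the pair
    sentence2: str  # second sentence of the pair
    label: str      # one of 4 classes, e.g. "contrasting"

# Illustrative record mirroring the first entry below (sentences truncated).
records = [
    Pair(
        id="train_4000",
        sentence1="(3)) penalizes reconstruction error of frequent co-occurrences more heavily, ...",
        sentence2="as it does not penalize reconstruction errors for pairs with zero counts ...",
        label="contrasting",
    ),
]

# Keep only the pairs labelled as contrasting.
contrasting = [r for r in records if r.label == "contrasting"]
print(f"{len(contrasting)} contrasting pair(s)")
```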
train_4000
(3)) penalizes reconstruction error of frequent co-occurrences more heavily, improving on PPMI-SVD's L2 loss, which weights all reconstruction errors equally.
as it does not penalize reconstruction errors for pairs with zero counts in the co-occurrence matrix, no effort is made to scatter the vectors for these pairs.
contrasting
train_4001
While all these experiments have been performed on sentences with gold PoS tags, preliminary experiments assuming predicted tags instead show analogous results: the absolute values of LAS and UAS are slightly smaller across the board, but the behavior in relative terms is the same, and the bilingual models that improved over the monolingual baseline in the gold experiments keep doing so under this setting.
table 2 shows the performance of the monolingual and bilingual models under the universal tags only configuration.
contrasting
train_4002
By starting a new topic in conversation, the leading speaker brings novelty to the existing context, which often involves relatively long and complex utterances.
the following speaker has to accommodate this change of context, by first producing short acknowledging phrases at the early stage, and gradually increase his contribution as the topic develops.
contrasting
train_4003
For computation, each word is represented by a weighted feature vector, where features typically correspond to words that co-occur in a particular context.
DSMs tend to retrieve both synonyms (such as formal-conventional) and antonyms (such as formal-informal) as related words and cannot sufficiently distinguish between the two relations.
contrasting
train_4004
Furthermore, in MTE we can expect shorter texts, which are typically much more similar.
in cQA, the question and the intended answers might differ significantly both in terms of length and in lexical content.
contrasting
train_4005
Most of these papers concentrate on providing advanced neural architectures in order to better model the problem at hand.
our goal here is different: we extend and reuse an existing pairwise NN framework from a different but related problem.
contrasting
train_4006
", the non-reduced argument "Obama, the newly elected president" can be reduced to the minimal argument "Obama", as both specify the same answer to the role question "who flew to Russia?".
a restrictive modifier is an integral part of the meaning of the containing NP, and hence should not be removed, as in "She wore the necklace that her mother gave her".
contrasting
train_4007
", in which "The tall boys and girls" would reduce to two overlapping arguments: {"The tall boys", "The tall girls"}.
non-distributive conjuncts cannot be split.
contrasting
train_4008
Semantic classification of collocations has been addressed, for instance, in (Wanner et al., 2006; Gelbukh and Kolesnikova, 2012; Moreno et al., 2013).
to the best of our knowledge, our work is the first to automatically retrieve and typify collocations simultaneously.
contrasting
train_4009
So we perform better even though we only use 8 of 300 dimensions!
the greatest advantage of UD is that we only need 100GB of RAM, 80% less than W2V.
contrasting
train_4010
In sum, our comparison provides no clear answer to the question posed by the title of this paper.
it shows conclusively that different context types yield semantic spaces with different properties, and that the optimal context type depends on the actual application and language.
contrasting
train_4011
A full discourse parser typically consists of a pipeline of classifiers: explicit and implicit DCs are first classified and then processed separately by 2 classifiers (Xue et al., 2015).
the pragmatic listener of the RSA model considers if the speaker would prefer a particular DC, explicit or implicit, when expressing the intended sense.
contrasting
train_4012
Recent distributional representations of words exhibit semantic consistency over directional metrics such as cosine similarity.
neither categorical nor Gaussian observational distributions used in existing topic models are appropriate to leverage such correlations.
contrasting
train_4013
POET improves the results for all tag pairs for CELEX.
initial experiments indicated that it is not effective for SIGMORPHON16 because its training sets are not large enough.
contrasting
train_4014
Many previous approaches have been effectively applied to CWS problem (Lafferty et al., 2001;Xue and Shen, 2003;Sun et al., 2012;Sun, 2014;Cheng et al., 2015).
these approaches incorporated many handcrafted features, thus restricting the generalization ability of these models.
contrasting
train_4015
Intuitively, extra hidden layers are able to improve accuracy performance.
it is common that extra hidden layers decrease classification accuracy.
contrasting
train_4016
An ideal metric should be oriented towards the recall for the "BAD" class.
the case of the F1-BAD score shows that this is not the only requirement: in order to be useful the metric should not favour pessimistic labellings, i.e., all or most words labelled as "BAD".
contrasting
train_4017
One of the most reliable ways of comparing metrics is to measure their correlation with human judgements.
for the word-level QE task, asking humans to rate a system labelling or to compare the outputs of two or more QE systems is a very expensive process.
contrasting
train_4018
synthetic systems from real ones and in the task of discriminating among real systems, despite the fact that its d scores are not the best.
F1-BAD is not far behind: it has high values for d scores and can identify synthetic datasets quite often.
contrasting
train_4019
For instance, people estimate the size of cities they recognize to be larger than that of unknown cities (Goldstein and Gigerenzer, 2002).
the same holds for individuals/groups/characteristics we research.
contrasting
train_4020
The differences are very striking, with LexStat-Partial outperforming its nonpartial counterpart by up to four points, and SCA-Partial outperforming the classical SCA variant by almost five points.
we do not find strong differences in the performance of the cluster algorithms.
contrasting
train_4021
Neural network models have shown their promising opportunities for multi-task learning, which focus on learning the shared layers to extract the common and task-invariant features.
in most existing approaches, the extracted shared features are prone to be contaminated by task-specific features or the noise brought by other tasks.
contrasting
train_4022
Recently, neural-based models for multi-task learning have become very popular, ranging from computer vision (Misra et al., 2016; Zhang et al., 2014) to natural language processing (Collobert and Weston, 2008; Luong et al., 2015), since they provide a convenient way of combining information from multiple tasks.
most existing work on multi-task learning (Liu et al., 2016c,b)
contrasting
train_4023
For the positive sentence "Five stars, my baby can fall asleep soon in the stroller", both models capture the informative pattern "Five stars".
SP-MTL makes a wrong prediction due to misunderstanding of the word "asleep".
contrasting
train_4024
Contrary to models that operate on the argument component level, we find that framing AM as dependency parsing leads to subpar performance results.
less complex (local) tagging models based on BiLSTMs perform robustly across classification scenarios, being able to catch long-range dependencies inherent to the AM problem.
contrasting
train_4025
Moreover, from a machine learning perspective, pipeline approaches are problematic because they solve subtasks independently and thus lead to error propagation rather than exploiting interrelationships between variables.
to this, we investigate neural techniques for end-to-end learning in computational AM, which do not require the hand-crafting of features or constraints.
contrasting
train_4026
We observe that programs with three expressions use a more limited set of properties, mainly focusing on answering a few types of questions such as "who plays meg in family guy", "what college did jeff corwin go to" and "which countries does russia border".
programs with two expressions use a more diverse set of properties, which could explain the lower performance compared to programs with three expressions.
contrasting
train_4027
(2016) further employ sentence-level attention mechanism in neural relation extraction, and achieves the state-of-the-art performance.
most RE systems concentrate on extracting relational facts from mono-lingual data.
contrasting
train_4028
Our model falls under the second class of approaches where utterances are first mapped to an intermediate representation containing natural language predicates.
rather than using an external parser (Reddy et al., 2014) or manually specified CCG grammars (Kwiatkowski et al., 2013), we induce intermediate representations in the form of predicate-argument structures from data.
contrasting
train_4029
This property makes FunQL logical forms convenient to predict with recurrent neural networks (Vinyals et al., 2015; Choe and Charniak, 2016).
FunQL is less expressive than lambda calculus, partially due to the elimination of variables.
contrasting
train_4030
One of the great promises of semantic analysis (over more surface forms of analysis) is its cross-linguistic potential.
while the theoretical and applicative importance of universality in semantics has long been recognized (Goddard, 2011), the nature of universal semantics remains unknown.
contrasting
train_4031
If we do not discriminate the unknown words and assign different unknown words the same token unk, it would be impossible for us to know what exact word unk represents in the real test.
when using our proposed unknown word processing method, if the model predicts an unkX as the answer, we can simply scan through the original document, identify its position according to its unknown word number X, and replace the unkX with the real word.
contrasting
train_4032
The distinction between discourse modes is expected to be clarified conceptually by considering their different communication purposes.
there would still be specific ambiguous and vague cases.
contrasting
train_4033
Even the simple features like length can judge that short essays tend to have low scores.
when the lengths of essays are close, AES would face greater challenges, because it is required to understand the properties of well-written essays more deeply.
contrasting
train_4034
The attention scores a_i are determined by a dot product between h_i and each z_j, followed by a softmax over the source sequence. In preliminary experiments, we did not find the MLP attention of Bahdanau et al. (2015) to perform significantly better in terms of either BLEU or perplexity.
we found the dot-product attention to be more favorable in terms of training and evaluation speed.
contrasting
train_4035
Motivated by the hypothesis that this may be due to the decoder depending on the length of the source sentence (which it cannot determine without position embeddings), we explicitly provided a distributed representation of the input length to the decoder and attention module.
this did not cause a change in attention patterns nor did it improve translation accuracy.
contrasting
train_4036
Deep Neural Networks (DNNs) have provably enhanced the state-of-the-art Neural Machine Translation (NMT) with their capability in modeling complex functions and capturing complex linguistic structures.
NMT systems with deep architecture in their encoder or decoder RNNs often suffer from severe gradient diffusion due to the non-linear recurrent activations, which often make the optimization much more difficult.
contrasting
train_4037
Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.
our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.
contrasting
train_4038
During decoding we find the most likely sequence of instructions z given x, which can be performed with a stack-based decoder.
it is important to note that each generated instruction must be executed to obtain v_i.
contrasting
train_4039
To create a themed corpus about 'love', for instance, we would aggregate love poetry to train the model, which would thus learn an implicit representation of love.
this forces us to generate poetry according to discrete themes and styles from pretrained models, requiring a new training corpus for each model.
contrasting
train_4040
The path structure T 1 where B is shared by two predicates (mission and operator) will favour the use of a participial or a passive subject relative clause.
the branching structure T 2 will favour the use of a new clause with a pronominal subject or a coordinated VP.
contrasting
train_4041
Through a gated matching layer, the resulting question-aware passage representation effectively encodes question information for each passage word.
recurrent networks can only memorize limited passage context in practice despite their theoretical capability.
contrasting
train_4042
Those that are automatically generated from natural occurring data can be very large (Hill et al., 2016;, which allow the training of more expressive models.
they are in cloze style, in which the goal is to predict the missing word (often a named entity) in a passage.
contrasting
train_4043
Once the source sequence is encoded, another decoding RNN model is used to generate a target sequence through the following prediction model, where s_t is the RNN hidden state at time t; the predicted target word y_t at time t is typically produced by a softmax classifier over a fixed vocabulary (e.g.
30,000 words) through a function g. The prediction model of classical decoders for each target word y_i shares the same context vector c. A fixed vector is not enough to obtain good results when generating long targets. The attention mechanism in decoding can dynamically choose the context c_t at each time step (Bahdanau et al., 2014), for example by representing c_t as the weighted sum of the source states, where a function ρ is used to compute the attentive strength with each source state, which usually adopts a neural network such as a multi-layer perceptron (MLP).
contrasting
train_4044
For the automatically collected WIKISUGGEST dataset, noisy question-answer pairs were problematic, as discussed in Section 3.
the models frequently guessed the spurious answer.
contrasting
train_4045
There are several tailor-made languages designed for querying KBs, such as SPARQL (Prud'hommeaux and Seaborne, 2008).
to handle such query languages, users are required to not only be familiar with the particular language grammars, but also be aware of the architectures of the KBs.
contrasting
train_4046
(2015) make use of the context and the type of the answer.
the representation of the question end is oligotrophic.
contrasting
train_4047
Figure 1 (a) as either house or building, ignoring the lexical similarity between these two names.
humans seem to be more flexible as to the chosen level of generality.
contrasting
train_4048
It would be interesting to test other IC models on T3 and compare their results against the ones reported here.
note that IC-Wang is 'tailored' for T3 because it takes as input the whole sentence (minus the word to be generated), while common sequential IC approaches can only generate a word depending on the previous words in the sentence.
contrasting
train_4049
Ideally, an AI system should acquire such knowledge through direct physical interactions with the world.
such a physically interactive system does not seem feasible in the foreseeable future.
contrasting
train_4050
When considered in verb implications, size, weight, strength, and rigidness concern individual-level semantics; the relative properties implied by verbs in these dimensions are true in general.
speed concerns stage-level semantics; its implied relations hold only during a window surrounding the verb.
contrasting
train_4051
Relatedly, previous work uses natural language inference to infer new facts from a dataset of commonsense facts that can be extracted from unstructured text (Angeli and Manning, 2014).
we focus on a small number of specific types of knowledge without access to an existing database of knowledge.
contrasting
train_4052
Another novel aspect is that our dynamic oracle is approximate, i.e., based on efficiently-computable approximations of the loss due to the complexity of calculating its actual value in a non-monotonic and non-projective scenario.
this is not a problem in practice: experimental results show how our parser and oracle can use non-monotonic actions to repair erroneous attachments, outperforming the monotonic version developed by Gómez-Rodríguez and Fernández-González (2015) in a large majority of the datasets tested.
contrasting
train_4053
This implies that the calculation of the two non-monotonic upper bounds is less efficient than the linear loss computation in the monotonic scenario.
a non-monotonic algorithm that uses the lower bound as loss expression is the fastest option (even faster than the monotonic approach) as the oracle does not need to compute cycles at all, speeding up the training process.
contrasting
train_4054
Sheshadri and Lease (2013) survey and benchmark methods.
these models are almost all in the binary or multiclass classification setting; only a few have considered sequence labeling.
contrasting
train_4055
(2009)'s model and other baselines.
due to the technical difficulty of the joint approach with CRFs, they resorted to strong modeling assumptions.
contrasting
train_4056
proposed HMM models for aggregating crowdsourced discourse segmentation labels.
they did not consider the general sequence labeling setting.
contrasting
train_4057
The posteriors can then be easily evaluated. In the standard M-step, the parameters are estimated using maximum likelihood.
we found a Variational Bayesian (VB) update procedure for the HMM parameters similar to (Johnson, 2007;Beal, 2003) provides some improvement and stability.
contrasting
train_4058
Needless to say, the ability to obtain this annotated data for many languages is limited.
we can expect that for most languages we can obtain large amounts of unlabeled surface forms that may allow for semi-supervised learning over this unlabeled data (entirety of Fig.
contrasting
train_4059
optimization methods such as stochastic gradient descent (SGD) (Robbins and Monro, 1951); stochastic methods of this type are particularly important for training with large data sets.
this approach often provides a maximum a posteriori (MAP) estimate of model parameters.
contrasting
train_4060
pSGLD and dropout have similar behavior: they explore the parameter space during learning, and thus converge slower than RMSprop on the training dataset.
the learned uncertainty alleviates overfitting and results in lower errors on the validation and testing datasets.
contrasting
train_4061
The reduction of computational costs was addressed early on by imposing a budget (Dekel and Singer, 2006; Wang and Vucetic, 2010), that is, limiting the maximum number of SVs in a model.
in complex tasks, such methods still require large budgets to reach adequate accuracies.
contrasting
train_4062
Cho and Saul (2009) introduced a family of kernel functions that mimic the computation of large multilayer neural networks.
such kernels can be applied only on vector inputs.
contrasting
train_4063
Notice that other low-dimensional approximations of kernel functions have been studied, as for example the randomized feature mappings proposed in Rahimi and Recht (2008).
these assume that (i) instances have vectorial form and (ii) shift-invariant kernels are adopted.
contrasting
train_4064
An intuitive approach is to search the most similar existing review for the new review, then take the found reviewer's behavioral features as the new reviewers' features (detailed in Section 5.3).
there is abundant behavioral information in the review graph ( Figure 1), it is difficult for the traditional discrete manual behavioral features to record the global behavioral information (Wang et al., 2016).
contrasting
train_4065
Using the methods described in Section 2, from the corpus, we collected a number of instances of users' preferences regarding various topics.
twitter users do not necessarily express preferences for all topics.
contrasting
train_4066
This result again indicates that our proposed method reasonably utilizes known preferences to complete missing preferences.
the performance of the majority baseline decreased as it received more information regarding the users.
contrasting
train_4067
Employing a single axis (e.g., liberal to conservative) or a few axes (e.g., political parties and candidates of elections), these studies provide intuitive visualizations and interpretations along the respective axes.
this study is the first attempt to recognize and organize various axes of topics on social media with no prior assumptions regarding the axes.
contrasting
train_4068
Empirical results here probably reflected the underlying assumptions that PCA treats missing elements as zero and not as missing data.
in the present work, we properly distinguish missing values from zero, excluding missing elements of the original matrix from the objective function of Equation 2.
contrasting
train_4069
And DS for RE assumes that if two entities have a relationship in a known knowledge base, then all sentences that mention these two entities will express that relationship in some way (Mintz et al., 2009).
when we apply DS for RE to EE, we meet the following challenges: triggers are not given in existing knowledge bases.
contrasting
train_4070
The Dynamic Multi-pooling Convolutional Neural Networks (DMCNNs) is the best reported CNN-based model for event extraction by using human-annotated training data.
our automatically labeled data face a noise problem, which is an intrinsic problem of using DS to construct training data (Hoffmann et al., 2011; Surdeanu et al., 2012).
contrasting
train_4071
For example, if k = 2, we will get 25,797 sentences labeled as people.marriage events, and we will get 534 labeled sentences if k = 3.
when we set k = 1, although more labeled data are generated, the precision could not be guaranteed.
contrasting
train_4072
(2014) extended the distant supervision approach to fill slots in plane crash.
the method can only extract arguments of one plane crash type and needs flight number strings as input.
contrasting
train_4073
They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.
they focus only on the type of date, while SynTime recognizes all the time expressions and does not involve learning and runs in real time.
contrasting
train_4074
For bag level models, since reliable and less reliable sentences are all aggregated into a sentence bag, we can not determine which bag is reliable and which is not.
bag level models can still build a curriculum by changing the content of a bag, e.g., keeping reliable sentences in the bag first, then gradually adding less reliable ones, and training with Equation 5, which could benefit from the prior knowledge of data quality as well.
contrasting
train_4075
This work treats code generation as a sequence-to-sequence modeling problem, and introduces methods to generate words from character-level models and copy variable names from input descriptions.
unlike most work in semantic parsing, it does not consider the fact that code has to be well-defined programs in the target syntax.
contrasting
train_4076
Metrics As is standard in semantic parsing, we measure accuracy, the fraction of correctly generated examples.
because generating an exact match for complex code structures is nontrivial, we follow Ling et al.
contrasting
train_4077
2), with the number of productions in the grammar remaining unchanged.
DJANGO has a broader domain, and thus unary closure results in more productions in the grammar (237 for DJANGO vs. 100 for HS), increasing sparsity.
contrasting
train_4078
An alternative approach that we follow here is to independently train the embeddings for each language on monolingual corpora, and then learn a linear transformation to map the embeddings from one space into the other by minimizing the distances in a bilingual dictionary, usually in the range of a few thousand entries (Mikolov et al., 2013a;Artetxe et al., 2016).
dictionaries of that size are not readily available for many language pairs, especially those involving less-resourced languages.
contrasting
train_4079
Such representations have found a place in many semantic applications but there is no clear consensus as to the best representation.
with the rise of supervised machine learning techniques, a new requirement has come to the fore: the ability of human annotators to quickly and reliably generate semantic representations as training data.
contrasting
train_4080
Our work is motivated by the neural GenQA (Yin et al., 2016a) and neural enquirer (Yin et al., 2016b) models for querying KBs via natural language in a fully "neuralized" way.
the key difference is that these systems assume that users can compose a complicated, compositional natural language query that can uniquely identify the element/answer in the KB.
contrasting
train_4081
RL-Soft achieves a success rate of 74% on the human evaluation and 80% against the simulated user, indicating minimal overfitting.
all agents take a higher number of turns against real users as compared to the simulator, due to the noisier inputs.
contrasting
train_4082
It is also effective to control the hypothesis length by the minimum and maximum lengths to some extent, where the minimum and maximum are selected as fixed ratios to the length of the input speech.
since there are exceptionally long or short transcripts compared to the input speech, it is difficult to balance saving such exceptional transcripts and preventing hypotheses with irrelevant lengths.
contrasting
train_4083
Finally, we achieved 1xRT with one-pass decoding when using a beam width around 3 to 5, even though it was a single-threaded process on a CPU.
the decoding process has not yet achieved realtime ASR since CTC and the attention mechanism need to access all of the frames of the input utterance even when predicting the first label.
contrasting
train_4084
English texts produced by native speakers of a variety of languages have been used to reconstruct phylogenetic trees, with varying degrees of success (Nagata and Whittaker, 2013;Berzak et al., 2014).
to language learners, however, translators translate into their mother tongue, so the texts we studied were written by highly competent native speakers.
contrasting
train_4085
This does not undermine the force of translation universals: we demonstrated how explicitation, in the form of cohesive markers, can help identify translations.
it may be possible to define classifiers implementing other universal facets of translation, e.g., simplification, which will yield good separation between O and T. Explicitation fails in the reproduction of language typology, whereas interference-based features produce trees of considerable quality.
contrasting
train_4086
Similarly, MORSE performs better on Turkish with a 7% absolute margin in terms of F1 score.
Morfessor surpasses MORSE in performance on Finnish by a large margin as well, especially in terms of recall.
contrasting
train_4087
In the literature, several deep and complex neural networks have been proposed for this task, assuming availability of relatively large amounts of training data.
the associated computational complexity increases as the networks go deeper, which poses serious challenges in practical applications.
contrasting
train_4088
A number of models (Simonyan and Zisserman, 2015; He et al., 2015, 2016; Conneau et al., 2016) increase the number of feature maps whenever downsampling is performed, causing the total computational complexity to be a function of the depth.
we fix the number of feature maps, as we found that increasing the number of feature maps only does harm -increasing computation time substantially without accuracy improvement, as shown later in the experiments.
contrasting
train_4089
This approach alleviates the problem of vocabulary gaps between source and target to a certain degree.
this translation model is unable to handle semantic meaning.
contrasting
train_4090
(2016) proposed a joint-layer recurrent neural network model to extract keyphrases from tweets, which is another application of deep neural networks in the context of keyphrase extraction.
their work focused on sequence labeling, and is therefore not able to predict absent keyphrases.
contrasting
train_4091
We see that both models can generate phrases that relate to the topic of information retrieval and video.
most of RNN predictions are high-level terminologies, which are too general to be selected as keyphrases.
contrasting
train_4092
RNN and CopyRNN are supervised models, and they are trained on data in a specific domain and writing style.
with sufficient training on a large-scale dataset, we expect the models to be able to learn universal language features that are also effective in other corpora.
contrasting
train_4093
Siskind, 1996;Frank et al., 2007;Fazly et al., 2010).
several studies presented models that learn from sensory rather than symbolic input, which is rich with regards to the signal itself, but very limited in scale and variation (e.g.
contrasting
train_4094
The pattern observed for the two datasets is slightly different: due to the systematic conversion of words to synthetic speech in COCO, using the number of time steps for this dataset yields the highest R^2.
this feature is not as informative for predicting the utterance length in Flickr8K due to noise and variation in human speech, and is in fact outperformed by some of the features extracted from the model.
contrasting
train_4095
This conclusion seems contradictory to the perspective of interactive alignment at the first glance.
here we are starting with a very high-level model of dialogue that does not refer to linguistic devices.
contrasting
train_4096
Ideal output responses should be both coherent and diverse.
most models end up with generic and dull responses.
contrasting
train_4097
Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.
many attempts have also been made to improve the architecture of encoder-decoder models.
contrasting
train_4098
On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.
after the KL weight is increased to 1 (after 5000 batches), the model once again decides to ignore the latent z and falls back to the naive implementation.
contrasting
train_4099
HCNs also use an RNN to accumulate dialog state and choose actions.
HCNs differ in that they use developer-provided action templates, which can contain entity references, such as "<city>, right?".
contrasting