Dataset columns:
  id          string (length 7–12)
  sentence1   string (length 6–1.27k)
  sentence2   string (length 6–926)
  label       string (4 classes)
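As a quick orientation to the schema above, the following minimal Python sketch reads rows with these four fields and tallies the labels. The JSON Lines layout and the file name contrast_pairs.jsonl are illustrative assumptions, not part of the dataset description.

```python
import json
from collections import Counter

def load_records(path="contrast_pairs.jsonl"):
    """Read rows with the fields id, sentence1, sentence2, and label.

    The JSON Lines file name and layout are assumed for illustration only.
    """
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            row = json.loads(line)
            records.append({
                "id": row["id"],
                "sentence1": row["sentence1"],
                "sentence2": row["sentence2"],
                "label": row["label"],
            })
    return records

if __name__ == "__main__":
    rows = load_records()
    print(f"{len(rows)} rows loaded")
    # Every example shown below carries the "contrasting" label,
    # but the schema allows four label classes in total.
    print(Counter(r["label"] for r in rows))
```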
train_21800
Finally, similar to our work, memory networks (Prakash et al., 2017) have recently been used for diagnosis coding.
we would like to note two significant differences between the memory network from Prakash et al.
contrasting
train_21801
We use a CNN to encode each document following what is now a fairly standard approach consisting of an embedding layer, a convolution layer, a max-pooling layer, and an output layer (Collobert et al., 2011;Kim, 2014).
in our architecture, the CNN additionally aids in getting intermediate representations for the multi-head matching network component (Section 3.2).
contrasting
train_21802
For popular concepts, lists of relevant concept stocks can be found from analyst reports from financial websites.
concepts are dynamic and flexible.
contrasting
train_21803
Considering the previous N-1 words, N-gram language models predict the next word.
this leads to the loss of long-term dependencies.
contrasting
train_21804
Many binarization methods have been proposed (Courbariaux et al., 2015; Rastegari et al., 2016; Xiang et al., 2017).
only a few (Hou et al., 2016;Edel and Köppe, 2016) are related to recurrent neural network.
contrasting
train_21805
(Hou et al., 2016) implements a character level binarized language model with a vocabulary size of 87.
they did not do a comprehensive study on binarized large vocabulary LSTM language models.
contrasting
train_21806
At run-time, the input embedding and the output embedding are binarized matrices.
at train-time, float versions of the embeddings, which are used for calculating the binarized version of embeddings, are still maintained.
contrasting
train_21807
Batch normalization is hard to apply to a recurrent neural network, due to the dependency over entire sequences.
the structure of the batch normalization is quite useful.
contrasting
train_21808
The BELM model outperforms the baseline model on the MEN task, although it doesn't perform as well as the baseline model on the WS-353 task.
the MEN dataset contains many more word pairs, which makes the results on this dataset more convincing.
contrasting
train_21809
Models with hops in the range of 3–10 outperform the single-layer variant.
each added hop contributes a new set of parameters for memory representation, leading to an increase in total parameters of the model and making it susceptible to overfitting.
contrasting
train_21810
In order to avoid error propagation and effectively utilize contexts, prior work leveraged history for contextual SLU.
most previous models only paid attention to the related content in history utterances, ignoring their temporal information.
contrasting
train_21811
Previous work proposed an end-to-end time-aware attention network to leverage both contextual and temporal information for spoken language understanding and achieved a significant improvement, showing that temporal attention can guide the attention effectively.
the time-aware attention function is an inflexible hand-crafted setting, which is a fixed function of time for assessing the attention.
contrasting
train_21812
The assumption underlying these uses is that reading rate is a property of the reader (or controlled by the reader).
variation in reading rate across different passages for the same readers has also been reported (Foulke, 1968;Tauroza and Allison, 1990;Ardoin et al., 2005;Compton et al., 2004;Beigman Klebanov et al., 2017).
contrasting
train_21813
In general, the usage of high-specificity color words increases in more difficult conditions, as expected.
we see that Chinese speakers use them significantly less than English speakers.
contrasting
train_21814
Perplexity is a common intrinsic evaluation metric for generation models.
for comparing monolingual and bilingual models, we found perplexity to be unhelpful, owing largely to its vocabulary-dependent definition.
contrasting
train_21815
Specifically, if we fix the vocabulary in advance to include tokens from both languages, then the monolingual model performs unreasonably poorly, and bilingual training helps immensely.
this is an unfair comparison: the monolingual model's high perplexity is dominated by low probabilities assigned to rare tokens in the opposite-language data that it did not see.
contrasting
train_21816
Intuitively, we expect adding more training data on the same task will improve the model, regardless of language.
we find that the effect of dataset size is not so straightforward.
contrasting
train_21817
Preliminary experiments involving augmentation of the data by duplicating and deleting constituents show no gains, suggesting that the improvement depends on certain kinds of regularities in the English data that are not provided by artificial manipulations.
more investigation is needed to thoroughly assess the role of general-purpose regularization in our observations.
contrasting
train_21818
Adding a hard constraint on the output vocabulary would make this equivalent to a simple form of multitask learning (Caruana, 1997;Collobert and Weston, 2008).
allowing the model to use tokens from either language at any time is simpler and results in better modeling of mixed-language data, which is more common in non-English environments.
contrasting
train_21819
This is the first approach we are aware of to frame a general learning problem as optimization over a space of natural language strings.
many closely related ideas have been explored in the literature.
contrasting
train_21820
The advantage is that no firm decisions need to be made about object categories.
such approaches are hard to interpret and are dataset dependent (Vinyals et al., 2017).
contrasting
train_21821
There is also recent work applying attention-based models (Xu et al., 2015) on explicit object proposals (Anderson et al., 2018; Li et al., 2017), which may capture object-level information from the attention mechanism.
attention-based models require object information in the form of vectors, whereas our models use information of objects as categorical variables which allow for easy manipulation but are not compatible with the standard attention-based models.
contrasting
train_21822
just because there is only one person in the image does not mean that it is less important than the ten cars depicted.
object size correlates with object importance in IC, i.e.
contrasting
train_21823
We have also tried computing the correlation with f(t_c) (frequency of the category being mentioned regardless of whether or not it is depicted).
we found the word matching process too noisy as it is not constrained or grounded on the image (e.g.
contrasting
train_21824
Of course, the categories are not mutually exclusive and object co-occurrence may also play a role.
we leave this analysis for future work.
contrasting
train_21825
Previous work suggests that incorporating visual features for less concrete concepts can be harmful in word similarity tasks (Hill and Korhonen, 2014b;Kiela and Bottou, 2014;.
it is less clear if this intuition applies to more practical tasks (e.g., retrieval), or if this problem can be overcome simply by applying the "right" machine learning algorithm.
contrasting
train_21826
Other work removed the dependency on a true character list by determining all names through coreference resolution.
this work also depended on the availability of scripts (Ramanathan et al., 2014).
contrasting
train_21827
The second person reference represents a multi-instance constraint that suggests that the mentioned name is one of the characters that are present in the scene, which increases the probability of this character to be one of the speakers of the surrounding segments.
the third person reference represents a negative constraint, as it suggests that the speaker does not exist in the scene, which lowers the probability of the character being one of the speakers of the next or the previous subtitle segments.
contrasting
train_21828
One recent ensembling approach to VQA (Fukui et al., 2016) combined multiple models that use multimodal compact bilinear pooling with attention and achieved state-of-the-art accuracy on the VQA 2016 challenge.
their ensemble uses simple softmax averaging to combine outputs from multiple systems.
contrasting
train_21829
These advantages have motivated recent work on explainable AI systems, particularly in computer vision (Antol et al., 2015;Goyal et al., 2016;.
there has been no prior work on using explanations for ensembling multiple models or improving performance on a challenging task.
contrasting
train_21830
Evaluation on either split requires submitting the output to the competition's online server.
there are fewer restrictions on the number of submissions that can be made to the test-dev compared to the test-standard.
contrasting
train_21831
Pre-trained word representations (Mikolov et al., 2013;Pennington et al., 2014) are a key component in many neural language understanding models.
learning high quality representations can be challenging.
contrasting
train_21832
Due to their ability to capture syntactic and semantic information of words from large-scale unlabeled text, pretrained word vectors (Turian et al., 2010; Mikolov et al., 2013; Pennington et al., 2014) are a standard component of most state-of-the-art NLP architectures, including for question answering, textual entailment (Chen et al., 2017) and semantic role labeling.
these approaches for learning word vectors only allow a single context-independent representation for each word.
contrasting
train_21833
(2017) pretrain encoder-decoder pairs using language models and sequence autoencoders and then fine tune with task specific supervision.
after pretraining the biLM with unlabeled data, we fix the weights and add additional task-specific model capacity, allowing us to leverage large, rich and universal biLM representations for cases where downstream training data size dictates a smaller supervised model.
contrasting
train_21834
As a result, the biLM provides three layers of representations for each input token, including those outside the training set due to the purely character input.
traditional word embedding methods only provide one layer of representation for tokens in a fixed vocabulary.
contrasting
train_21835
They are spread across several parts of speech (e.g., "played", "playing" as verbs, and "player", "game" as nouns) but concentrated in the sports-related senses of "play".
the bottom two rows show nearest neighbor sentences from the SemCor dataset (see below) using the biLM's context representation of "play" in the source sentence.
contrasting
train_21836
Similar to WSD, the biLM representations are competitive with carefully tuned, task specific biLSTMs (Ling et al., 2015;Ma and Hovy, 2016).
unlike WSD, accuracies using the first biLM layer are higher than the top layer, consistent with results from deep biLSTMs in multi-task training (Søgaard and Goldberg, 2016; Hashimoto et al., 2017) and MT (Belinkov et al., 2017).
contrasting
train_21837
Zettlemoyer and Collins (2009) addressed the problem with lambda calculus, using a semantic parser trained separately with context-independent data.
we generate executable formal queries and require only interaction query annotations for training.
contrasting
train_21838
At turn i, this model has access to the complete prefix of the interaction Ī[:i−1] and the current request x_i.
the concatenationbased encoder (Section 4.2) has access only to information from the previous h utterances.
contrasting
train_21839
Formally, we modify Equation 1 to encode x_i: where h^I_{i−1} is the discourse state following the previous utterance; E is modified analogously.
to the concatenation-based model, the recurrence processes a single utterance.
contrasting
train_21840
The position embedding is also used to compute the context vector c_k: The discourse state and attention over previous utterances allow the model to consider the interaction history when generating queries.
we observe that context-dependent reasoning often requires generating sequences that were generated in previous turns.
contrasting
train_21841
Using the types only, while ignoring the indices, avoids learning biases that arise from the arbitrary ordering of the tokens in the training data.
it does not allow distinguishing between entries with the same type for generation decisions; for example, the common case where multiple cities are mentioned in an interaction.
contrasting
train_21842
The relatively high performance of FULL-0 shows that substituting segment copying with attention maintains and even improves the system effectiveness.
the best performance is provided with FULL, which combines both.
contrasting
train_21843
In general, the model resolves references well.
it fails to recover constraints mentioned in the past following a user focus state change (Grosz and Sidner, 1986).
contrasting
train_21844
We used a subset of 50 randomly selected text segments from the test set described in §4.
for the human evaluation, we only used the final 60 words of the story segments to keep the amount of reading and context manageable for Turkers.
contrasting
train_21845
When asked to explain why they selected the sentence they did, a few Turkers attributed their choices to connections between pronouns in EN-GEN's suggestions to characters mentioned in the story excerpt.
a more frequent occurrence was Turkers citing a mismatch in entities as their reason for rejecting an option.
contrasting
train_21846
To this end, a specific architecture with 886 hidden units can simulate any Turing machine in real-time (i.e., each Turing machine step is simulated in a single time step).
their RNN encodes the whole input in its internal state, performs the actual computation of the Turing machine when reading the terminating token, and then encodes the output (provided an output is produced) in a particular hidden unit.
contrasting
train_21847
Not all probabilistic context-free grammars are consistent; necessary and sufficient conditions for consistency are given by Booth and Thompson (1973).
probabilistic context-free grammars obtained by training on a finite corpus using popular methods (such as expectation-maximization) are guaranteed to be consistent (Nederhof and Satta, 2006).
contrasting
train_21848
Our results show the non-existence of (efficient) algorithms for interesting problems that researchers using RNN in natural language processing tasks may have hoped to find.
the non-existence of such efficient or exact algorithms gives evidence for the necessity of approximation, greedy or heuristic algorithms to solve those problems in practice.
contrasting
train_21849
This demonstrates that given sufficient alternative signal, systems often do ignore gender-biased cues.
winoBias provides an analysis of system bias in an adversarial setup, showing, when examples are challenging, systems are likely to make gender biased predictions.
contrasting
train_21850
Prior work (Carstens et al., 2014;Rajendran et al., 2016a) in identifying arguments in online reviews has considered sentence-level statements to be arguments based on abstract argumentation models.
to extract arguments at a finer level based on the idea of structured arguments is a harder task, requiring us to manually annotate argument components such that they can be used by supervised learning techniques.
contrasting
train_21851
In our earlier work (Rajendran et al., 2016b), we propose an approach for reconstructing structures similar to enthymemes in opinions that are present in online reviews.
the annotated dataset used in our approach was small and not useful for deep learning models.
contrasting
train_21852
The accuracy of the LSTM model in predicting the labels of annotated opinions improves with the size of the automatically labelled dataset.
the accuracy of the reserved method decreases after 20 iterations.
contrasting
train_21853
This supports our conjecture that the GRU layer has difficulty learning the kind of coreference-based reasoning required in this dataset, and that the bias towards coreferent recency helps with that.
perhaps surprisingly, given enough data both models perform comparably.
contrasting
train_21854
Many simple NLG models are based on recurrent neural networks (RNN) and the sequence-to-sequence (seq2seq) model, which basically contains an encoder-decoder structure; these NLG models generate sentences from scratch by jointly optimizing sentence planning and surface realization using a simple cross entropy loss training criterion.
the simple encoder-decoder architecture usually suffers from generating complex and long sentences, because the decoder has to learn all grammar and diction knowledge.
contrasting
train_21855
These systems are well-suited to unconstrained translation, as often the phrase table entries are good translations of source phrases.
when rhythm and rhyme constraints are applied to PBMT, translation options become extremely limited, to the extent that it is often impossible to generate any translation that obeys the poetic constraints (Greene et al., 2010).
contrasting
train_21856
To illustrate the annotation, consider an excerpt from a response to the prompt "It is better to have broad knowledge of many academic subjects than to specialize in one specific subject"; metaphors are italicized: I ultimately agree with the fact that it is better to be specialized on a specific subject than to spread energy on different subjects.
i say ultimately, because being and staying focused on one subject means always to discard other subjects.
contrasting
train_21857
In some cases, the difference could be attributed to data being indomain; the sets marked with a plus in Table 4 are taken from the same testing programs as the training data for Arg, although the specific prompts are different.
arg does better across the board, including data completely unrelated to the annotation campaign.
contrasting
train_21858
A recent survey outlined eight categories of features used in hate speech detection (Schmidt and Wiegand, 2017): simple surface (Warner and Hirschberg, 2012;Waseem and Hovy, 2016), word generalization (Warner and Figure 1: Our hate speech classifier.
to existing methods that focus on a single target Tweet as input (center), we incorporate intra-user (right) and inter-user (left) representations to enhance performance.
contrasting
train_21859
The former requires the user history to be labeled instances.
labeling user history requires significant human effort.
contrasting
train_21860
This is shown in the middle branch in Figure 1.
this method is likely to fail when the target tweet is noisy or the critical words for making predictions are out of vocabulary.
contrasting
train_21861
This inference problem is solved using gradient descent.
the energy surface is non-convex, which prevents gradient descent inference from finding the exact structure y min that globally minimizes the energy function.
contrasting
train_21862
In general, if SPEN ranks every pair of output structures identical to the score function, the optimum points of the score function match those of SPEN.
forcing the ranking constraint for every pair of output structures is not tractable, so we need to approximate it by sampling some candidate pairs.
contrasting
train_21863
For ∆LATER this correctly leads to a 0-prediction.
cOMPARE predicts strong change, because due to polysemy there is a high probability to sample distantly related use pairs in the COMPARE group.
contrasting
train_21864
To improve original word embedding models, there are various studies leveraging external knowledge to update word embeddings with post processing (Faruqui et al., 2015;Kiela et al., 2015; or supervised objectives (Yu and Dredze, 2014;Nguyen et al., 2016).
these approaches are limited by reliable semantic resources, which are hard to obtain or annotate.
contrasting
train_21865
Example 11 shows the advantage of combining the vector representation with orthographic distance, i.e., our model could find translations of sleddogs that have similar meaning, while in examples 12 and 13 orthographic distance helped to pick the correct translation which is the closest in terms of edit distance.
in example 14 orthographic distance caused an error because the incorrect prediction is too close to the source word in orthographic distance.
contrasting
train_21866
While antonymy represents words which are strongly associated but highly dissimilar to each other, synonymy refers to words that are highly similar in meaning.
antonyms and synonyms often occur in similar context, as they are interchangeable in their substitution.
contrasting
train_21867
These representations, when used as model inputs, have been shown to lead to faster learning and better results in a wide variety of settings (Erhan et al., 2009, 2010; Cases et al., 2017).
many domains require more specialized representations but lack sufficient data to train them from scratch.
contrasting
train_21868
As a result, existing implementations of GloVe use an inner loop to compute this cost and associated derivatives.
since f(0) = 0, the second bracket is irrelevant whenever X_ij = 0, and so replacing log X_ij with (for any k) does not affect the objective and reveals that the cost function can be readily vectorized as where M = W W̃^T + b1^T + 1b̃^T − g(X).
contrasting
train_21869
Van de Cruys et al., 2013; Dima and Hinrichs, 2015).
dima (2016) recently showed that similar performance is achieved by representing the NC as a concatenation of its constituent embeddings, and argued that it stems from memorizing prototypical words for each relation.
contrasting
train_21870
These vectors were used to classify new NCs based on the nearest neighbor in the VSM.
the model was only tested on a small dataset and performed similarly to previous methods.
contrasting
train_21871
For the two intermediate clinical groups, aMCD and mMCD, the use of local average information from a small window including only the previous word (ψ_1) also produces good results.
there is no consensus regarding the source of switch identification, as for aMCD both semantic similarity and association strength were effective, and for mMCD it was semantic relatedness that provided a better characterization of the groups.
contrasting
train_21872
This also works for a Euclidean distance similarity metric.
ing in the sum in (5) also appear in (6): the word analogy assumption is now most certainly broken: suppose that π permutes only two vectors, a_2 and a_3, and leaves all other vectors as is: There are two conventional practices in evaluating word embeddings that we aim to show are problematic: normalisation and the exclusion of premise vectors in prediction.
contrasting
train_21873
Their models are largely based on memory networks , originally developed for reasoning-focused machine reading comprehension tasks.
to memory networks, where each input sentence/word occupies a memory slot and is then accessed via attention independently, recent advances in machine reading suggest that processing inputs sequentially is beneficial to overall performance (Seo et al., 2017;Henaff et al., 2017).
contrasting
train_21874
Systems that rely on neural machine translation (Yuan and Briscoe, 2016;Xie et al., 2016;Schmaltz et al., 2017;Ji et al., 2017) are not yet able to achieve as high performance as SMT systems according to automatic evaluation metrics (see Table 1 for comparison on the CoNLL-2014 test set).
it has been shown that the neural approach can produce more fluent output, which might be desirable by human evaluators (Napoles et al., 2017).
contrasting
train_21875
As opposed to pipelining, rescoring improves precision at the expense of recall and is more effective for the CoNLL data resulting in up to 54. produces similar results as rescoring with the NMT ensemble.
the best result for rescoring is lower than for pipelining on that test set.
contrasting
train_21876
The main purpose of our analysis is to look for structure in an array of entrainment measures.
we first check whether similarity is significantly greater for partners than non-partners for our lexical measures since PPL and KLD have not previously been used for lexical entrainment and Nenkova et al.
contrasting
train_21877
The automatic placement of new concepts in a taxonomy has also been investigated as a shared task in SemEval 2016 (Jurgens and Pilehvar, 2016).
to the best of our knowledge, there is no work that applies generative paraphrasing to expand a taxonomy.
contrasting
train_21878
A smaller increase in coverage can be achieved if we consider noisy paraphrases as well, 8.1% for Moses and 3.2% for seq2seq.
these additional sentences may contain significantly more noise.
contrasting
train_21879
We can see that in our dataset nouns are indeed preferred by our non-expert annotators.
when looking at a smaller amount of annotations, the number of annotated verbs and adjectives increases.
contrasting
train_21880
These KBs are useful resources in many applications such as semantic searching and ranking (Kasneci et al., 2008;Schuhmacher and Ponzetto, 2014;Xiong et al., 2017), question answering (Zhang et al., 2016;Hao et al., 2017) and machine reading (Yang and Mitchell, 2017).
the KBs are still incomplete, i.e., missing a lot of valid triples (Socher et al., 2013;West et al., 2014).
contrasting
train_21881
Results in Table 2 show that both MT and MTDS can significantly improve NOM detection over the baseline, and adaptive data selection in MTDS further improves over the MT model.
there is no gain at all for NAM detection for both languages.
contrasting
train_21882
For NLP, bootstrapping is a popular approach to semi-supervised learning due its relative simplicity coupled with reasonable performance (Abney, 2007).
a crucial limitation of bootstrapping, which is typically iterative, is that, as learning advances, the task often drifts semantically into a related but different space, e.g., from learning women names into learning flower names (McIntosh, 2010;Yangarber, 2003).
contrasting
train_21883
Recently, many existing systems on structured prediction focus on increasing the level of structural dependencies within the model.
the theoretical and experimental study of Sun (2014a) suggests that complex structures tend to increase the overfitting risk, and can potentially be harmful.
contrasting
train_21884
Many tasks cannot achieve a satisfying result on Chinese literature text compared to other corpus.
understanding Chinese literature text is of great importance to Chinese literature research.
contrasting
train_21885
The punctuation is a natural break point of the sentence, which makes subtrees more like the traditional dependency trees in the aspect of integrity.
the original dependency trees cannot be sufficiently regularized.
contrasting
train_21886
AutoSlog takes a fundamentally syntaxdriven approach to identifying patterns, which suggests the discovered patterns (and associated performance boost) is due to exploiting syntax.
the performance gains could also be due to additional contextual information that bigrams and larger n-grams provide over unigrams alone, rather than their syntactic properties.
contrasting
train_21887
Compiling, updating and translating them has traditionally been left mostly to domain experts and professional lexicographers.
the last two decades have witnessed a growing interest in automating the construction of lexicographic resources.
contrasting
train_21888
In the early days of DE, rulebased approaches leveraged linguistic cues observed in definitional data (Rebeyrolle and Tanguy, 2000;Klavans and Muresan, 2001;Malaisé et al., 2004;Saggion and Gaizauskas, 2004;Storrer and Wellinghoff, 2006).
in order to deal with problems like language dependence and domain specificity, machine learning was incorporated in more recent contributions (Del Gaudio et al., 2013), which focused on encoding informative lexico-syntactic patterns in feature vectors (Cui et al., 2005;Fahmi and Bouma, 2006;Westerhout and Monachesi, 2007;Borg et al., 2009), both in supervised and semi-supervised settings (Reiplinger et al., 2012;.
contrasting
train_21889
Examples where syntactic cues are leveraged include medical acronym expansion (Pustejovsky et al., 2001), hyponym-hypernym extraction and detection (Hearst, 1992;Shwartz et al., 2016), and definition extraction either from the web (Saggion and Gaizauskas, 2004), scholarly articles (Reiplinger et al., 2012), and more recently from Wikipedia-like definitions (Boella et al., 2014).
the interplay between syntactic information and the generalization potential of neural networks remains unexplored in definition modeling, although intuitively it seems reasonable to assume that a syntax-informed architecture should have more tools at its disposal for discriminating between definitional and non-definitional knowledge.
contrasting
train_21890
Among our proposed systems, the overall best performance in Wikipedia definitions is obtained by the CNN_l configuration.
incorporating a BLSTM layer contributes towards the best performing model on the NLP-specific WCL W00 dataset (C-BLSTM100_d).
contrasting
train_21891
Thus, the loss calculation technique of the Covington dynamic oracle is not directly applicable to the 2-Planar parser.
if we statically choose a canonical plane assignment and we calculate loss with respect to that assignment (i.e., creating a correct arc in the non-canonical plane incurs loss), then the Covington technique, based on counting individually unreachable arcs and then correcting for the presence of cycles, works for the 2-Planar parser.
contrasting
train_21892
Indeed, if Switch transitions are always allowed, their cost is zero because they can always be undone and thus never affect the reachability of any arcs.
when consecutive Switch transitions are banned to ensure parser termination, choosing to Switch can have consequences as, in the resulting configuration, the parser will be forced to take one of the other four transitions, which may lead to suboptimal outcomes compared to not having switched.
contrasting
train_21893
The latter seems to work better on treebanks with less non-projectivity such as the English, Chinese and Japanese datasets, and worse on those with higher amounts like Turkish, Dutch or Basque.
some cases like Czech or Catalan go against this trend.
contrasting
train_21894
to compute the marginal in the denominator.
posterior probability of all the parameters of interest (here, Ψ = {τ, v, θ}) can be computed from samples drawn using a Markov chain Monte Carlo (MCMC) method.
contrasting
train_21895
With our method, intransitive verb run in H works as a soft constraint on the verb in T and corrects its structure successfully.
there are some cases where using only surface forms as a cue forces the assignment of categories which is We show the results on JSeM in Table 2.
contrasting
train_21896
In the above experiments, our method worked well, mainly due to the fact that the sentences in these datasets have comparably simple structure.
in other datasets, there are naturally more complex cases as in Table 3 (d), where we want different syntactic analyses for occurrences of words with the same surface form.
contrasting
train_21897
Because of their efficiency and ease of implementation, transition-based parsers are the most common systems for dependency parsing.
efficiency comes at a price, namely a loss in expressivity: while graph-based parsers are able to produce any tree spanning the input sentence, many transition-based systems are restricted to projective trees.
contrasting
train_21898
In this paper, we focus on the foiled captions classification task (Section 2), and propose the use of explicit object detections as salient image cues for solving the task.
to methods from previous work that make use of word based information extracted from captions (Heuer et al., 2016;Yao et al., 2016;Wu et al., 2018), we use explicit object category information directly extracted from the images.
contrasting
train_21899
The query-todocument attention module in BiDAF added the attended-context vector to the document representation instead of the query representation.
the inverse attention from the objects to the words is important in our task because the representation of the object depends on its corresponding words.
contrasting