Columns:
  id          string, 7-12 characters
  sentence1   string, 6-1.27k characters
  sentence2   string, 6-926 characters
  label       string, 4 classes
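A minimal sketch of how one might load and inspect records with this schema. The file name "pairs.jsonl" and the JSON-lines layout are illustrative assumptions, not something stated by the dataset itself.

```python
# Hypothetical loader for records shaped like the rows below:
# {"id": ..., "sentence1": ..., "sentence2": ..., "label": ...}
import json
from collections import Counter

def load_pairs(path="pairs.jsonl"):
    """Read JSON-lines records with the four documented fields."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            records.append({
                "id": row["id"],                # e.g. "train_17000" (7-12 chars)
                "sentence1": row["sentence1"],  # 6-1.27k chars
                "sentence2": row["sentence2"],  # 6-926 chars
                "label": row["label"],          # one of 4 classes, e.g. "contrasting"
            })
    return records

if __name__ == "__main__":
    data = load_pairs()
    print(len(data), "records")
    print(Counter(r["label"] for r in data))  # distribution over the 4 label classes
```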
train_17000
Training SBERT in the 10-fold cross-validation setup gives a performance that is nearly on-par with BERT.
in the cross-topic evaluation, we observe a performance drop of SBERT by about 7 points Spearman correlation.
contrasting
train_17001
(2018) for new tasks is the more suitable method, as it updates all layers of the BERT network.
SentEval can still give an impression on the quality of our sentence embeddings for various tasks.
contrasting
train_17002
Cosine-similarity treats all dimensions equally.
SentEval fits a logistic regression classifier to the sentence embeddings.
contrasting
train_17003
InferSent (Conneau et al., 2017) and Universal Sentence Encoder (Cer et al., 2018) both use (u, v, |u − v|, u * v) as input for a softmax classifier.
in our architecture, adding the element-wise u * v decreased the performance.
contrasting
train_17004
Theorem 1 shows that minimizing the corrupted risk R_AUC-Corr(g) implies minimizing both the clean risk R_AUC(g) and excessive terms.
since we only aim to minimize the clean risk, minimizing the corrupted risk may not effectively minimize the clean risk, as the excessive terms can be minimized instead and lead to overfitting.
contrasting
train_17005
Autoencoders have long been used for representation learning of images and text (Li et al., 2015).
precisely reconstructing the clean input is probably too easy for high-capacity models.
contrasting
train_17006
A simple way to do so is putting a softmax classification layer on top of each encoder network.
we find that it is beneficial to directly perform classification in the k-means clustering space.
contrasting
train_17007
(2018) present a joint framework for deep multi-view clustering (DMJC) that is the closest work to ours.
DMJC only works with single-view inputs and the feature representations are learned using a multi-view fusion mechanism.
contrasting
train_17008
However, DMJC only works with single-view inputs and the feature representations are learned using a multi-view fusion mechanism.
AV-KMEANS assumes that the inputs can be naturally partitioned into multiple views and carries out learning with the multi-view inputs directly.
contrasting
train_17009
In comparison to the semi-supervised neural models, CNN-RevGrad, Bi-LSTM-RevGrad, Adv-CNN, Adv-Bi-LSTM, SSL-VAE, which use the knowledge of a test set just like our model, we gain 8-11 pts in F1 score.
when evaluating on the BioInfer dataset as a test set and AIMed as a training set, our model is in a tie w.r.t.
contrasting
train_17010
Local explanation approaches focus on highlighting a handful of crucial features (Baehrens et al., 2010) or deriving simpler, more readable models from a complex one, e.g., a binary decision tree (Frosst and Hinton, 2017), or by local approximation with linear models (Ribeiro et al., 2016).
although they can explicitly show the representations learned in the specific hidden neurons (Frosst and Hinton, 2017), these approaches base their effectiveness on the user's ability to study the quality of the reasoning and of the accountability as a side effect of the quality and coherence of the feature selection: this can be very hard in tasks where boundaries between classes are not well defined.
contrasting
train_17011
The MI not only measures how accurate the output agrees with its input, but also whether a meaningful latent variable can be learned in the latent space.
the estimation and maximization of the MI in the high-dimensional space are difficult.
contrasting
train_17012
It has been proposed in (Hoffman and Johnson, 2016) that decomposing the ELBO can produce an equivalent way to define the ELBO, where q_φ(x, z) denotes the variational joint distribution induced by the posterior, I_q(x; z) denotes the mutual information between x and z under q_φ(x, z), and q_φ(z) denotes the aggregated posterior (Makhzani et al., 2016).
the zero-forcing effect of the KL indicates that if the latent variable z is a good representation of x, the mutual information between x and z should take a large value and the KL term in Equation (3) would be non-zero.
contrasting
train_17013
It is consistent with our motivation that the MI-regularized objective can alleviate the posterior collapse to improve VAEs' capability of generating fluent and diverse sentences with the effective use of the latent variable.
LaggingVAE treats the MI as a stopping criterion for training the encoder RNN at the beginning of optimization.
contrasting
train_17014
Code and models are available at: https://github.com/drimpossible/Sampling-Bias-Active-Learning. Deep neural networks (DNNs) trained on large datasets provide state-of-the-art results on various NLP problems (Devlin et al., 2019), including text classification (Howard and Ruder, 2018).
the cost and time needed to get labeled data and to train models is a serious impediment to creating new and/or better models.
contrasting
train_17015
A recent empirical study (Siddhant and Lipton, 2018) investigating active learning in NLP suggests that Bayesian active learning outperforms classical uncertainty sampling across all settings.
the approaches have been limited to relatively small datasets.
contrasting
train_17016
Note that the current trend in deep learning is to train large models on very large datasets.
the aforementioned issues have not yet been investigated in the literature in such a setup.
contrasting
train_17017
We observe that across queries (∩Q), FTZ with entropy strategy has a balanced representation from all classes (high mean) with a high probability (low std) while Multinomial Naive Bayes (MNB) results in more biased queries (lower mean) with high probability (high std) as studied previously.
we did not find evidence of class bias in the resulting sample (∩S) in both models: FastText and Naive Bayes (columns 5 and 6 of Table 2).
contrasting
train_17018
A popular approach is to use an adversary to remove information about a target feature, often gender or ethnicity, from a model's internal representations (Edwards and Storkey, 2016;Kim et al., 2019).
the biases we consider are related to features that are essential to the overall task, so they cannot simply be ignored.
contrasting
train_17019
Our experiments indicate that adversarial attacks are not always the worst adversarial inputs, which can only be revealed via verification.
exhaustive verification is computationally very expensive.
contrasting
train_17020
Apparently this R is perfectly compact (only one word).
this rationale does not provide a valid explanation.
contrasting
train_17021
In this case, the two entities themselves suffice to serve as the rationales.
our model preserves the words like "working".
contrasting
train_17022
The corresponding objective function for MLE is derived from the Kullback-Leibler (KL) divergence between the empirical probability distribution representing the data and the parametric probability distribution output by the model.
the word frequency discrepancies in natural language make performance extremely uneven: while the perplexity is usually very low for frequent words, it is especially difficult to predict rare words.
contrasting
train_17023
These improvements have been obtained by explicitly incorporating in the model different ways of treating words according to their frequency.
learning is always performed via (or by approximating) Maximum Likelihood Estimation, which finds the distribution that maximizes entropy subject to the constraints given by training examples.
contrasting
train_17024
Applied to our problem, the objective becomes Equation (2). In order to use the previously defined objectives, we need to compute the model probabilities p_θ(y|x), which are usually obtained using a softmax function, p_θ(y|x) = exp(s_θ(x, y)) / Σ_{y'∈Y} exp(s_θ(x, y')), applied on the scores (or logits) s_θ(x, y') output by the model for every possible target token y' ∈ Y.
as explained earlier, Y can be very large, hence computing all the scores and summing them is extremely slow.
contrasting
train_17025
It is interesting to note that the power transformations will here be applied on the posterior classification probabilities p_θ^C instead of the categorical probabilities p_θ.
the three divergences presented in Section 3 are defined on positive measures: in theory, we can simply use the exp function on the scores s_θ and do not need to normalize them; neither the α nor the β divergences are scale invariant (see the right column of Table 1 and Cichocki and Amari 2010).
contrasting
train_17026
This could be expected: intuitively, these values of α should make the model 'stretch' the probability mass.
as learning progresses, this phenomenon lessens, and the performance on rare words gets worse.
contrasting
train_17027
Besides, they are especially close across all values of γ: we can assume that this indicates that the corresponding Obj_γ are 'closer' to the MLE objective.
tracking these values during training shows that they all behave very similarly to perplexity.
contrasting
train_17028
word sequences alone (Kiperwasser and Goldberg, 2016;Dozat and Manning, 2016;Teng and Zhang, 2018), thereby allowing the output layer to make local predictions.
though explicitly capturing output label dependencies, CRF can be limited by its Markov assumptions, particularly when being used on top of neural encoders.
contrasting
train_17029
In NLP, label embeddings have been exploited for better text classification (Tang et al., 2015;Nam et al., 2016;.
relatively little work has been done investigating label embeddings for sequence labeling.
contrasting
train_17030
Capturing non-local dependencies between labels, these methods, however, are slower compared with CRF.
to these lines of work, our method is both asymptotically faster and empirically more accurate compared with neural CRF.
contrasting
train_17031
It gives an accuracy slightly higher than that of LSTM-softmax, which demonstrates the advantage of label embeddings.
it significantly underperforms BiLSTM-LAN (p-value<0.01), which shows the advantage of hierarchically-refined label distribution sequence encoding.
contrasting
train_17032
However, it predicts "with" incorrectly as "PP/NP" with the former supertag ending with "/PP".
BiLSTM-LAN can capture potential long-term dependency and better determine the supertags based on global label information.
contrasting
train_17033
This suffices to compute interval bounds for feedforward networks and CNNs.
common NLP model components like LSTMs and attention also rely on softmax (for attention), element-wise multiplication (for LSTM gates), and dot product (for computing attention scores).
contrasting
train_17034
Across model architectures, we found that one epoch of certifiably robust training takes between 2× and 4× longer than one epoch of standard training.
IBP certificates are much faster to compute at test time than genetic attack accuracy.
contrasting
train_17035
Surprisingly, certifiably robust training nearly eliminated robustness errors in which the genetic attack had to change many words: the genetic attack either caused an error by changing a couple words, or was unable to trigger an error at all.
data augmentation is unable to cover the exponentially large space of perturbations that involve many words, so it does not prevent errors caused by changing many words.
contrasting
train_17036
Language model pre-training, such as BERT, has achieved remarkable results in many NLP tasks.
it is unclear why the pretraining-then-fine-tuning paradigm can improve performance and generalization capability across different tasks.
contrasting
train_17037
Recent work (Tenney et al., 2019b;Liu et al., 2019a;Goldberg, 2019;Tenney et al., 2019a) has shown that the pre-trained models can encode syntactic and semantic information of language.
it is unclear why pre-training
contrasting
train_17038
In other words, the training loss of fine-tuning BERT tends to monotonically decrease along the optimization direction, which eases optimization and accelerates training convergence.
the path from a random initial point to the end point is rougher, which requires a more carefully tweaked optimizer to obtain reasonable performance.
contrasting
train_17039
Second, existing approaches have focused on using language either as a standalone replacement for labeled data (Hancock et al., 2018), or to drive learning, such as by specifying features for learning tasks (Eisenstein et al., 2009).
many realistic scenarios of learning from language would involve not learning from language alone, but learning from a mix of supervision, including both traditional labeled data, and natural language advice.
contrasting
train_17040
The advantage of using natural language as a medium is that it allows us to unify the different modes of supervision into a single, familiar user interface.
using natural language as a medium of supervision comes with its own set of challenges, as we discuss next.
contrasting
train_17041
In the E-step of the Posterior Regularization training (Ganchev et al., 2010), the computation of the posterior regularizer remains unchanged.
the M-step is modified so that the classifier parameters θ are learned using both the inferred labels for the unlabeled data, and provided labels for the labeled examples.
contrasting
train_17042
In learning scenarios where there is little labeled data, we would like to rely primarily on constraints specified from natural language explanations, and unlabeled data.
in scenarios where there is a lot of labeled data available enabling robust inductive inference, we would like to primarily rely on it rather than explanations.
contrasting
train_17043
Language models are traditionally evaluated with perplexity.
this measure suffers from several shortcomings, in particular strong dependence on vocabulary size and lack of ability to directly evaluate scores assigned to malformed sentences.
contrasting
train_17044
A better model is expected to give higher probability to sentences in the test set, that is, lower perplexity.
this measure is not always as well aligned with the quality of a language model as it should be.
contrasting
train_17045
One method of obtaining sentence-sets is feeding acoustic waves into an ASR system and tracking the resulting lattices.
this requires access to the audio signals, as well as a trained system for the relevant languages.
contrasting
train_17046
Besides the source domain data, some methods utilize the target domain lexicons, unlabeled (Liu and Zhang, 2012) or partially labeled target domain data to boost the sequence labeling adaptation performance, which belong to unsupervised or semi-supervised domain adaptation.
we focus on supervised sequence labeling domain adaptation, where huge improvement can be achieved by utilizing only small-scale annotated data from the target domain.
contrasting
train_17047
There are also some methods (Guo et al., 2018;Kim et al., 2017;Zeng et al., 2018) explicitly weighting multiple source domain models for target samples in multi-source domain adaptation.
our work focuses on the supervised single-source domain adaptation, which is devoted to implementing knowledge fusion between the source domain and the target domain, not within multiple source domains.
contrasting
train_17048
From Figure 3(a), we can observe that model performance is improved when we add 20MB t data.
adding more clean back-translated data starts to hurt the model performance.
contrasting
train_17049
Our goal is to learn a language model p_θ based on sentences sampled from the training distribution p_x^train. When p_x^test = p_x^train, classical statistical theory guarantees that a model trained via MLE performs well on the test distribution given sufficient data.
when p_x^test is not identical to p_x^train, MLE can perform poorly no matter how much data is observed.
contrasting
train_17050
incurs high losses) on rare sentences, we might initially try to define P as individual training examples to ensure low loss on all data points.
this is far too conservative, since the worst-case distribution would consist of exactly one data point.
contrasting
train_17051
In MLE, minimizing KL(p_x^train ‖ p_θ) is equivalent to minimizing the log loss because log p_x^train(x) can be treated as a constant.
in topic CVaR, the analogous baseline entropy term log p_{x|z}(x | z) depends on z and thus is not a constant with respect to the outer supremum.
contrasting
train_17052
As we add data from ONEBWORD and α* decreases to 0.7, we find some positive transfer effects, where the increased data from the ONEBWORD corpus improves the performance on Yelp.
as the fraction of nuisance data grows further and α* drops below 0.4, the MLE models suffer large increases in perplexity, incurring up to 10 additional points of perplexity.
contrasting
train_17053
Contextualized word embeddings such as ELMo and BERT provide a foundation for strong performance across a wide range of natural language processing tasks by pretraining on large corpora of unlabeled text.
the applicability of this approach is unknown when the target domain varies substantially from the pretraining corpus.
contrasting
train_17054
In either case, a primary benefit of contextualized word embeddings is that they seed the learner with distributional information from large unlabeled datasets.
the texts used to build pretrained contextualized word embedding models are drawn from a narrow set of domains: • Wikipedia in BERT (Devlin et al., 2019) and ULMFiT (Howard and Ruder, 2018); • Newstext (Chelba et al., 2013) in ELMo (Peters et al., 2018); • BooksCorpus (Zhu et al., 2015) in BERT (Devlin et al., 2019) and GPT (Radford et al., 2018).
contrasting
train_17055
This was shown to significantly reduce the transfer loss in Portuguese, and later in English (Yang and Eisenstein, 2016).
this approach relies on hand-crafted features, and does not benefit from contemporary neural pretraining architectures.
contrasting
train_17056
Upon inspection of the gathered labels, the high difficulty comes from the fact that there were many Turkers who labeled the data as neutral and also many who labeled it as contradiction.
an example that is easy for humans but difficult for the DNN models (Table 2, row 2) requires more abstract thinking than the earlier example.
contrasting
train_17057
On the other hand, the last example is very difficult for humans (row 4), possibly due to the relatively neutral text.
for the DNN models certain terms such as "stultifyingly contrived" may signal a more negative review and lead to the item being easier.
contrasting
train_17058
In particular, the internal uncertainty of the model is used as the basis for selecting how training examples are weighted.
model uncertainty depends upon the original training data the model was trained on, while here we use an external measure of uncertainty.
contrasting
train_17059
generate target sequences iteratively but require the target sequence length to be predicted at start.
our in-place edit model allows target sequence length to change with appends.
contrasting
train_17060
To tackle this problem, generative adversarial networks (GAN) with reinforcement learning (RL) training approaches have been introduced to text generation tasks (Che et al., 2017; Lin et al., 2017; Shi et al., 2018), where the discriminator is trained to distinguish real and generated text samples to provide reward signals for the generator, and the generator is optimized via policy gradient.
recent studies have shown that potential issues of training GANs on discrete data are more severe than exposure bias (Semeniuta et al., 2018; Caccia et al., 2018).
contrasting
train_17061
Figure 4a shows that for both FlowSeq and Transformer, decoding is faster when using a larger batch size.
FlowSeq has much larger gains in decoding speed w.r.t.
contrasting
train_17062
Our observation is that conventional methods use a single representation for the input sentence, which makes it hard to apply prior knowledge of compositionality.
our approach leverages such knowledge with two representations, one generating attention maps, and the other mapping attended input words to output symbols.
contrasting
train_17063
On the other hand, "twice" contains only functional information, but no primitive information.
in general cases, each word may contain both primitive and functional information.
contrasting
train_17064
Due to its importance, pronoun resolution has seen a series of different approaches, such as rule-based systems (Lee et al., 2013) and end-to-end-trained neural models (Lee et al., 2017;Liu et al., 2019).
the recently released dataset GAP (Webster et al., 2018) shows that most of these solutions perform worse than naïve baselines when the answer cannot be deduced from the syntax.
contrasting
train_17065
In MASKEDWIKI, we looked for examples where masked nouns can be replaced with a pronoun, and only in 7 examples, we obtained a natural-sounding and grammatically correct sentence.
we estimated that 63% of the annotated examples in WIKICREM form a natural-sounding sentence when the appropriate pronoun is inserted, showing that WIKICREM consists of examples that are much closer to the target data.
contrasting
train_17066
These annotations can be found in Appendix A.
as shown in Section 6.2, training on WIKICREM alone does not match the performance of training on the data from the target distribution.
contrasting
train_17067
Additionally, the simplicity entailed by the vector space abstraction makes it an engineering-friendly representation, also explaining its widespread adoption and use (Freitas, 2015).
the latent features (dense vectors) at the center of most of the best-performing models have limited their application to two main uses: (i) computing semantic similarity and relatedness measures and (ii) performing vocabulary generalization as an input layer on Machine Learning (ML) models.
contrasting
train_17068
For the triple (planet, moon, body), using immediate definitions of pivot and comparison incorrectly suggests that the triple is discriminative.
after expanding using super-type definitions, the model correctly identifies body as a property of both planet and moon.
contrasting
train_17069
Pre-trained language models such as BERT have proven to be highly effective for natural language processing (NLP) tasks.
the high demand for computing resources in training such models hinders their application in practice.
contrasting
train_17070
Another direction is weight quantization (Gong et al., 2014;Polino et al., 2018), in which connection weights are constrained to a set of discrete values, allowing weights to be represented by fewer bits.
most of these pruning and quantization approaches are applied to convolutional networks.
contrasting
train_17071
Using a weighted combination of ground-truth labels and soft predictions from the last layer of the teacher network, the student network can achieve comparable performance to the teacher model on the training set.
as the number of epochs increases, the student model learned with this vanilla KD framework quickly reaches saturation on the test set (see Figure 2 in Section 4).
contrasting
train_17072
Ideally, we should pre-train BERT_6[Large] and BERT_6[Base] from scratch, and use the weights learned from the pretraining step for weight initialization in KD training.
due to computational limits of training BERT_6 from scratch, we only initialize the student model with the first six layers of BERT_12 or BERT_24.
contrasting
train_17073
That is, we assume that the members of the variational family Q are dimension-wise independent, meaning that the posterior q can be written as q(z | x) = ∏_i q(z_i | x). The simplicity of this form makes the estimation of the ELBO very easy.
it also leads to a particular training difficulty called posterior collapse, where the KL divergence term becomes zero and the factorized variational posterior collapses to the prior.
contrasting
train_17074
The training PPL is monotonically decreasing in general.
when λ is small and the dependency relationships over latent codes are lost, the model quickly overfits, as the KL divergence quickly becomes zero and the validation loss starts to increase.
contrasting
train_17075
However, removing PE in the entire model greatly degrades the performance (from 34.71 to 14.47).
for SP, removing PE from our proposed attention variant dramatically degrades the performance (from 24.28 to 30.92).
contrasting
train_17076
The decoder self-attention is not an orderagnostic operation with respect to the order of inputs.
incorporating positional embedding into the attention mechanism may still improve performance.
contrasting
train_17077
Actually, the above process is based on the assumption that there is no dependency among the labels.
as shown in Figure 1, this assumption is hard to satisfy in real-world datasets, and the complex label dependencies receive little attention in multi-label classification (Dembszynski et al., 2010; Dembczyński et al., 2012).
contrasting
train_17078
The training policies and prediction policies for all the labels can be viewed as a series of hyper-parameters.
to learn high-quality policies, one needs to specify both explicit and implicit label dependencies, which is not realistic to do manually.
contrasting
train_17079
After training, a single prediction policy (usually, a threshold of 0.5) is applied to all labels to generate the prediction.
as mentioned in Figure 1, these methods ignore the explicit and implicit label dependencies among the labels.
contrasting
train_17080
While disabling some heads improves the results, disabling the others hurts the results.
it is important to note that across all tasks and datasets, disabling some heads leads to an increase in performance.
contrasting
train_17081
Then we evaluate the word distributions between the generated document d̂_{Y_i} and the ground-truth document d_{Y_i}, and use such signal to optimize the model.
straightforward loss design, e.g., KL-divergence or Wasserstein distance between the word distributions of d̂_{Y_i} and d_{Y_i}, is not differentiable w.r.t.
contrasting
train_17082
However, the top 1 accuracy of the word translation is poor (< 20%), which makes it unable to select trustable sentence pairs using such a weak lexicon.
by leveraging our cross-lingual word embedding and unsupervised sentence representation, the selected sentences are much better (see more in Section 3.5).
contrasting
train_17083
Fung and Cheung (2004) use a small set of parallel corpora to initialize their EM lexical learning and further use this lexicon to iteratively mine sentence pairs.
in our unsupervised scenario, we have no bilingual sentence pairs to train such a model or lexicon to further select new sentence pairs.
contrasting
train_17084
For example, as you go from bottom to top layers, information about the past in left-to-right language models vanishes and predictions about the future get formed.
for MLM, representations initially acquire information about the context around the token, partially forgetting the token identity and producing a more generalized token representation.
contrasting
train_17085
Second, both MT and MLM focus on a given token, as it either needs to be reconstructed or translated.
LM produces a representation needed for predicting the next token.
contrasting
train_17086
The languages were selected to represent different language families and morphological types, as we argue that fully unsupervised CLWEs have been designed to support exactly these setups.
we show that even the most robust unsupervised CLWE method (Artetxe et al., 2018b) still fails for a large number of language pairs: 87/210 BLI setups are unsuccessful, yielding (near-)zero BLI performance.
contrasting
train_17087
The superiority of weakly supervised methods (e.g., FULL+SL+SYM) over unsupervised methods is especially pronounced for distant and typologi-cally heterogeneous language pairs.
our study also indicates that even carefully engineered projection-based methods with some seed supervision yield lower absolute performance for such pairs.
contrasting
train_17088
A good transformation matrix W can make p_{G(v_s)} similar to p_{v_t}, so that D_l can no longer distinguish between them.
this kind of similarity is at the distribution level.
contrasting
train_17089
Comparison with the state-of-the-art From the results shown in Table 3, we can see that in most cases, our method works better than previous supervised and unsupervised approaches.
the performance of Artetxe et al.
contrasting
train_17090
the translation candidates provided by our model are all related to "electronic device".
the translation candidates provided by MUSE are all related to "artillery", a different sense of the word "battery" both in English and in Chinese, but not as common.
contrasting
train_17091
(2018) proposed a co-training algorithm to combine topological features and literal descriptions of entities.
combining this multi-aspect information about entities (i.e., topological connections, relations and attributes, as well as literal descriptions) remains under-explored.
contrasting
train_17092
Note that MAN propagates relation and attribute information through the graph structure.
for aligning a pair of entities, we observe that considering the relations and attributes of neighboring entities, besides their own ones, may introduce noise.
contrasting
train_17093
These figures prove that these two aspects of features are useful in making alignment decisions.
compared to MAN, HMAN shows more significant performance drops, which also demonstrates that feedforward networks can better categorize relation and attribute features than GCNs in this scenario.
contrasting
train_17094
Since MAN propagates relation and attribute features via graph structures, it can still implicitly capture topological knowledge of entities even after we remove the topological features.
HMAN loses such structure knowledge when topological features are excluded, and thus its results are worse.
contrasting
train_17095
In this work, we also employ GCNs.
in contrast to Wang et al.
contrasting
train_17096
This suggests that our model can capture the characteristics of the source dataset via pretraining when using small supervision from language adaptation (i.e., small α).
pretraining introduces bias to the source space and the performance drops when larger weights are given to language adaptation; see the results with pretraining in Figure 3.
contrasting
train_17097
As Figure 5a shows, there is greater reduction in the classification loss for smaller values of α, i.e., when classification loss contributes more to the overall loss; see Equation (4).
Figure 5b shows that the CSA loss decreases with larger values of α as the model pays more attention to the CSA loss; see the red and green lines in Figure 5b.
contrasting
train_17098
Our method described so far does not rely on alignments from external statistical toolkits but performs self-training on alignments extracted from the layer average baseline.
GIZA++ provides a robust method to compute accurate alignments.
contrasting
train_17099
The improvement of the multi-task approach over the layer average baseline suggests that learning to translate helps produce better alignments as well.
still, the multi-task approach falls short of the statistical and neural baselines, which have the strong advantage of having access to the full/partial target context.
contrasting