{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:58:38.533814Z" }, "title": "Unsupervised Representation Disentanglement of Text: An Evaluation on Synthetic Datasets", "authors": [ { "first": "Lan", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Monash University", "location": {} }, "email": "lan.zhang@monash.edu" }, { "first": "Victor", "middle": [], "last": "Prokhorov", "suffix": "", "affiliation": { "laboratory": "Language Technology Lab", "institution": "University of Cambridge", "location": {} }, "email": "" }, { "first": "Ehsan", "middle": [], "last": "Shareghi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Monash University", "location": {} }, "email": "ehsan.shareghi@monash.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "To highlight the challenges of achieving representation disentanglement for text domain in an unsupervised setting, in this paper we select a representative set of successfully applied models from the image domain. We evaluate these models on 6 disentanglement metrics, as well as on downstream classification tasks and homotopy. To facilitate the evaluation, we propose two synthetic datasets with known generative factors. Our experiments highlight the existing gap in the text domain and illustrate that certain elements such as representation sparsity (as an inductive bias), or representation coupling with the decoder could impact disentanglement. To the best of our knowledge, our work is the first attempt on the intersection of unsupervised representation disentanglement and text, and provides the experimental framework and datasets for examining future developments in this direction. 1", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "To highlight the challenges of achieving representation disentanglement for text domain in an unsupervised setting, in this paper we select a representative set of successfully applied models from the image domain. We evaluate these models on 6 disentanglement metrics, as well as on downstream classification tasks and homotopy. To facilitate the evaluation, we propose two synthetic datasets with known generative factors. Our experiments highlight the existing gap in the text domain and illustrate that certain elements such as representation sparsity (as an inductive bias), or representation coupling with the decoder could impact disentanglement. To the best of our knowledge, our work is the first attempt on the intersection of unsupervised representation disentanglement and text, and provides the experimental framework and datasets for examining future developments in this direction. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Learning task-agnostic unsupervised representations of data has been the center of attention across various areas of Machine Learning and more specifically NLP. However, little is known about the way these continuous representations organise information about data. In recent years, the NLP community has focused on the question of design and selection of suitable linguistic tasks to probe the presence of syntactic or semantic phenomena in representations as a whole (Bosc and Vincent, 2020; Voita and Titov, 2020; Torroba Hennigen et al., 2020; Pimentel et al., 2020; Hewitt and Liang, 2019; Ettinger et al., 2018; Marvin and Linzen, 2018; Conneau et al., 2018) . 
Nonetheless, a finegrain understanding of information organisation in coordinates of a continuous representation is yet to be achieved.", "cite_spans": [ { "start": 469, "end": 493, "text": "(Bosc and Vincent, 2020;", "ref_id": "BIBREF5" }, { "start": 494, "end": 516, "text": "Voita and Titov, 2020;", "ref_id": "BIBREF40" }, { "start": 517, "end": 547, "text": "Torroba Hennigen et al., 2020;", "ref_id": "BIBREF39" }, { "start": 548, "end": 570, "text": "Pimentel et al., 2020;", "ref_id": "BIBREF33" }, { "start": 571, "end": 594, "text": "Hewitt and Liang, 2019;", "ref_id": "BIBREF18" }, { "start": 595, "end": 617, "text": "Ettinger et al., 2018;", "ref_id": "BIBREF15" }, { "start": 618, "end": 642, "text": "Marvin and Linzen, 2018;", "ref_id": "BIBREF28" }, { "start": 643, "end": 664, "text": "Conneau et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Arguably, a necessity to move in this direction is agreeing on the cognitive process behind language generation (fusing semantic, syntactic, and lexical components), which can then be reflected in the design of representation learning frameworks. However, this still remains generally as an area of debate and perhaps less pertinent in the era of self-supervised masked language models and the resulting surge of new state-of-the-art results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Even in the presence of such an agreement, learning to disentangle the surface realization of the underlying factors of data (e.g., semantics, syntactic, lexical) in the representation space is a nontrivial task. Additionally, there is no established study for evaluating such models in NLP. A handful of recent works have looked into disentanglement for text by splitting the representation space into predefined disentangled subspaces such as style and content (Cheng et al., 2020; John et al., 2019) , or syntax and semantics (Balasubramanian et al., 2021; Bao et al., 2019; , and rely on supervision during training. However, a generalizable and realistic approach needs to be unsupervised and capable of identifying the underlying factors solely via the regularities presented in data.", "cite_spans": [ { "start": 463, "end": 483, "text": "(Cheng et al., 2020;", "ref_id": "BIBREF10" }, { "start": 484, "end": 502, "text": "John et al., 2019)", "ref_id": "BIBREF18" }, { "start": 529, "end": 559, "text": "(Balasubramanian et al., 2021;", "ref_id": "BIBREF1" }, { "start": 560, "end": 577, "text": "Bao et al., 2019;", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In areas such as image processing, the same question has been receiving a lot of attention and inspired a wave of methods for learning and evaluating unsupervised representation disentanglement (Ross and Doshi-Velez, 2021; Mathieu et al., 2019; Kim and Mnih, 2018; Burgess et al., 2018; Higgins et al., , 2017 and creation of large scale datasets (Dittadi et al., 2021) . It has been argued that disentanglement is the means towards representation interpretability (Mathieu et al., 2019) , generalization (Montero et al., 2021) , and robustness Bengio, 2013) . 
However, these benefits are yet to be realized and evaluated in text domain.", "cite_spans": [ { "start": 194, "end": 222, "text": "(Ross and Doshi-Velez, 2021;", "ref_id": "BIBREF38" }, { "start": 223, "end": 244, "text": "Mathieu et al., 2019;", "ref_id": "BIBREF29" }, { "start": 245, "end": 264, "text": "Kim and Mnih, 2018;", "ref_id": "BIBREF23" }, { "start": 265, "end": 286, "text": "Burgess et al., 2018;", "ref_id": "BIBREF7" }, { "start": 287, "end": 309, "text": "Higgins et al., , 2017", "ref_id": "BIBREF20" }, { "start": 347, "end": 369, "text": "(Dittadi et al., 2021)", "ref_id": "BIBREF13" }, { "start": 465, "end": 487, "text": "(Mathieu et al., 2019)", "ref_id": "BIBREF29" }, { "start": 505, "end": 527, "text": "(Montero et al., 2021)", "ref_id": "BIBREF31" }, { "start": 545, "end": 558, "text": "Bengio, 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work we take a representative set of unsupervised disentanglement learning frameworks widely used in image domain ( \u00a72.1) and apply them to two artificially created corpora with known underlying generative factors ( \u00a73). Having known generative factors (while being ignored during the training phase) allows us to evaluate the performance of these models on imposing representation disentanglement via 6 disentanglement metrics ( \u00a72.2; \u00a74.1). Additionally, taking the highest scoring models and corresponding representations, we investigate the impact of representation disentanglement on two downstream text classification tasks ( \u00a74.3), and dimension-wise homotopy ( \u00a74.4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We show that existing disentanglement models, when evaluated on a wide range of metrics, are inconsistent and highly sensitive to model initialisation. However, where disentanglement is achieved, it shows its positive impact on improving downstream task performance. Our work highlights the potential and existing challenges of disentanglement on text. We hope our proposed datasets, accessible description of disentanglement metrics and models, and experimental framework will set the path for developments of models specific to for text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Let x denote data points and z denote latent variables in the latent representation space, and assume data points are generated by the combination of two random process: The first random process samples a point z (i) from the latent space with prior distribution of z, denoted by p(z). The second random process generates a point x (i) from the data space, denoted by p(x|z (i) ).", "cite_spans": [ { "start": 213, "end": 216, "text": "(i)", "ref_id": null }, { "start": 332, "end": 335, "text": "(i)", "ref_id": null }, { "start": 374, "end": 377, "text": "(i)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Disentanglement Models and Metrics", "sec_num": "2" }, { "text": "We consider z as a disentangled representation for x, if the changes in single latent dimensions of z are sensitive to changes in single generative factors of x while being relatively invariant to changes in other factors . 
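To make the assumed two-step generative process and the disentanglement criterion concrete, the following is a minimal sketch with hypothetical factors and a toy deterministic decoder (none of this corresponds to the paper's datasets or released code): each latent coordinate carries exactly one generative factor, so perturbing a single coordinate changes only the corresponding factor in the output.

```python
import numpy as np

# Hypothetical generative factors (not the paper's datasets): each factor has a
# small set of possible values, and each latent coordinate encodes one factor.
FACTORS = {
    "occupation": ["baker", "teacher", "pilot"],
    "city": ["paris", "tokyo", "lima"],
}

def p_z(rng):
    # First random process: sample z from the prior p(z); here a standard Gaussian.
    return rng.normal(size=len(FACTORS))

def p_x_given_z(z):
    # Second random process: a toy deterministic "decoder" p(x|z) in which
    # coordinate i of z selects the value of factor i (ideal disentanglement).
    words = []
    for dim, (name, values) in enumerate(FACTORS.items()):
        idx = int(np.digitize(z[dim], bins=[-0.5, 0.5]))  # bin the coordinate
        words.append(values[idx])
    return " ".join(words)

rng = np.random.default_rng(0)
z = p_z(rng)
print(p_x_given_z(z))   # e.g. "teacher tokyo"
z[0] += 2.0             # perturb only the first coordinate ...
print(p_x_given_z(z))   # ... and only the occupation changes
```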
Several probabilistic models are designed to reveal this process, here we look at some of the most widely used ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Disentanglement Models and Metrics", "sec_num": "2" }, { "text": "A prominent approach for learning disentangled representations is through adjusting Variational Auto-Encoders (VAEs) (Kingma and Welling, 2014) objective function, which decompose the representation space into independently learned coordinates. We start by introducing vanilla VAE, and then cover some of its widely used extensions that encourage disentanglement: VAE uses a combination of a probabilistic encoder q \u03c6 (z|x) and decoder p \u03b8 (x|z), parameterised by \u03c6 and \u03b8, to learn this statistical relationship between x and z. The VAEs are trained by maximizing the lower bound of the logarithmic data distribution log p(x), called evidence lower bound,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Disentanglement Models", "sec_num": "2.1" }, { "text": "E q \u03c6 (z|x) log p \u03b8 (x|z) \u2212 D KL (q \u03c6 (z|x), p(z))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Disentanglement Models", "sec_num": "2.1" }, { "text": "The first term of is the expectation of the logarithm of data likelihood under the posterior distribution of z. The second term is KL-divergence, measuring the distance between the posterior distribution q \u03c6 (z|x) and the prior distribution p(z) and can be seen as a regularisation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Disentanglement Models", "sec_num": "2.1" }, { "text": "\u03b2-VAE (Higgins et al., 2017) adds a hyperparameter \u03b2 to control the regularisation from the KL-term via the following objective function:", "cite_spans": [ { "start": 6, "end": 28, "text": "(Higgins et al., 2017)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Disentanglement Models", "sec_num": "2.1" }, { "text": "E q \u03c6 (z|x) log p \u03b8 (x|z) \u2212 \u03b2D KL (q \u03c6 (z|x), p(z))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Disentanglement Models", "sec_num": "2.1" }, { "text": "Reconstructing under \u03b2-VAE (with the right value of \u03b2) framework encourages encoding data points on a set of representational axes on which nearby points along those dimensions are also close in original data space (Burgess et al., 2018) . (Burgess et al., 2018) extends \u03b2-VAE via constraint optimisation:", "cite_spans": [ { "start": 215, "end": 237, "text": "(Burgess et al., 2018)", "ref_id": "BIBREF7" }, { "start": 240, "end": 262, "text": "(Burgess et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Disentanglement Models", "sec_num": "2.1" }, { "text": "E q \u03c6 (z|x) log p \u03b8 (x|z) \u2212 \u03b2 |D KL (q \u03c6 (z|x), p(z)) \u2212 C|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCI-VAE", "sec_num": null }, { "text": "where C is a positive real value which represents the target KL-divergence term value. This has an information-theoretic interpretation, where the placed constraint C on the KL term is seen as the amount of information transmitted from a sender (encoder) to a receiver (decoder) via the message (z) (Alemi et al., 2018) , and impacts the sharpness of the posterior distribution (Prokhorov et al., 2019) . 
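For concreteness, a minimal sketch of how the β-VAE and CCI-VAE penalties differ in code, assuming diagonal-Gaussian posteriors, a standard normal prior, and a reconstruction loss that has already been computed by the decoder (function names are illustrative, not the released implementation):

```python
import torch

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent
    # dimensions and averaged over the batch.
    kl_per_dim = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar)
    return kl_per_dim.sum(dim=-1).mean()

def beta_vae_loss(rec_loss, mu, logvar, beta=1.0):
    # beta = 1 recovers the vanilla VAE objective (negative ELBO).
    return rec_loss + beta * kl_to_standard_normal(mu, logvar)

def cci_vae_loss(rec_loss, mu, logvar, beta=1.0, C=0.0):
    # CCI-VAE: penalise the *distance* of the KL term from a target capacity C,
    # so the model is encouraged to transmit roughly C nats through z.
    return rec_loss + beta * (kl_to_standard_normal(mu, logvar) - C).abs()
```

Setting beta = 1 and C = 0 recovers the vanilla VAE objective.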
This constraint allows the model to prioritize underlying factors of data according to the availability of channel capacity and their contributions to the reconstruction loss improvement.", "cite_spans": [ { "start": 299, "end": 319, "text": "(Alemi et al., 2018)", "ref_id": "BIBREF0" }, { "start": 378, "end": 402, "text": "(Prokhorov et al., 2019)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "CCI-VAE", "sec_num": null }, { "text": "MAT-VAE (Mathieu et al., 2019) introduces an additional term to \u03b2-VAE,", "cite_spans": [ { "start": 8, "end": 30, "text": "(Mathieu et al., 2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "CCI-VAE", "sec_num": null }, { "text": "D M M D (q \u03c6 (z), p \u03b8 (z)), E q \u03c6 (z|x) [log p \u03b8 (x|z)] \u2212 \u03b2D KL (q \u03c6 (z|x), p(z)) \u2212\u03bbD M M D (q \u03c6 (z), p(z))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCI-VAE", "sec_num": null }, { "text": "where D M M D is computed using maximum mean discrepancy (Gretton et al. (2012) , MMD) and \u03bb is the scalar weight. This term regularises the aggregated posterior q \u03c6 (z) with a factorised spikeand-slab prior (Mitchell and Beauchamp, 1988) , which aims for disentanglement via clustering and sparsifying the representations of z.", "cite_spans": [ { "start": 57, "end": 79, "text": "(Gretton et al. (2012)", "ref_id": "BIBREF16" }, { "start": 208, "end": 238, "text": "(Mitchell and Beauchamp, 1988)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "CCI-VAE", "sec_num": null }, { "text": "In text modelling, the presence of powerful autoregressive decoders poses a common optimisation challenge for training VAEs called posterior collapse, where the learned posterior distribution q \u03c6 (z|x), collapses to the prior p(z). Posterior collapse results in the latent variables z being ignored by the decoder. Several strategies have been proposed to alleviate this problem from different angles such as choice of decoders (Yang et al., 2017; Bowman et al., 2016) , adding more dependency between encoder and decoder (Dieng et al., 2019), adjusting the training process (Bowman et al., 2016; He et al., 2019) , imposing direct constraints to the KL term (Pelsmaeker and Aziz, 2020; Razavi et al., 2019; Burgess et al., 2018; Higgins et al., 2017) . In this work, both \u03b2-VAE (with \u03b2 < 1) and CCI-VAE are effective methods to avoid KL-collpase.", "cite_spans": [ { "start": 428, "end": 447, "text": "(Yang et al., 2017;", "ref_id": "BIBREF41" }, { "start": 448, "end": 468, "text": "Bowman et al., 2016)", "ref_id": "BIBREF6" }, { "start": 575, "end": 596, "text": "(Bowman et al., 2016;", "ref_id": "BIBREF6" }, { "start": 597, "end": 613, "text": "He et al., 2019)", "ref_id": "BIBREF17" }, { "start": 659, "end": 686, "text": "(Pelsmaeker and Aziz, 2020;", "ref_id": "BIBREF32" }, { "start": 687, "end": 707, "text": "Razavi et al., 2019;", "ref_id": "BIBREF36" }, { "start": 708, "end": 729, "text": "Burgess et al., 2018;", "ref_id": "BIBREF7" }, { "start": 730, "end": 751, "text": "Higgins et al., 2017)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Issue of KL-Collapse", "sec_num": "2.1.1" }, { "text": "In this section we provide a short overview of six widely used disentanglement metrics, highlighting their key differences and commonalities, and refer the readers to the corresponding papers for exact details of computations. 
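Most of the metrics below share a common recipe: encode groups of samples that agree on one generative factor and check how predictable that factor is from the latent code, either from the full code or from individual coordinates. A schematic sketch of this recipe, with a hypothetical `encode` function and a logistic-regression probe standing in for whichever predictor a particular metric prescribes (the exact per-metric procedures are given in Appendix A):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def factor_predictability(samples, factor_labels, encode):
    # samples: raw data points; factor_labels: value of one generative factor per
    # sample; encode: maps a data point to its latent code (hypothetical helper).
    Z = np.stack([encode(x) for x in samples])
    y = np.asarray(factor_labels)
    clf = LogisticRegression(max_iter=1000).fit(Z, y)
    # High accuracy from a *single* coordinate suggests that coordinate captures
    # the factor; high accuracy from the full code only indicates informativeness.
    full_acc = clf.score(Z, y)
    per_dim_acc = [
        LogisticRegression(max_iter=1000).fit(Z[:, d:d + 1], y).score(Z[:, d:d + 1], y)
        for d in range(Z.shape[1])
    ]
    # For brevity this sketch evaluates on the training data; the metrics in
    # Appendix A use held-out splits.
    return full_acc, per_dim_acc
```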
Eastwood and Williams (2018) define three criteria for disentangled representations: disentanglement, which measures the degree of one dimension only encoding information about no more than one generative factor; completeness, which measures whether a generative factor is only captured by one latent variable; informativeness, which measures the degree by which representations capture exact values of the generative factors. 2 They design a series of classification tasks to predict the value of a generative factor based on the latent code, and extract the relative importance of each latent code for each task to calculate disentanglement and completeness scores. Informativeness score is measured by the accuracy of the classifier directly. Other existing metrics reflect at least one of these three criteria, as summarised in Higgins et al. 2017focus on disentanglement and propose to use the absolute difference of two groups of representations with the same value on one generative factor to predict this generative factor. For perfectly disentangled representations, latent dimensions not encoding information about this generative factor would have zero difference. Hence, even simple linear classifiers could easily identify the generative factors based on the changes of values. Kim and Mnih (2018) consider both disentanglement and completeness by first finding the dimension which has the largest variance when fixing the value on one generative factor, and then using the found dimension to predict that generative factor. Kumar et al. (2018) propose a series of classification tasks each of which uses a single latent variable to predict the value of a generative factor and treat the average of the difference between the top two accuracy scores for each generative factor as the final disentanglement score.", "cite_spans": [ { "start": 227, "end": 255, "text": "Eastwood and Williams (2018)", "ref_id": "BIBREF14" }, { "start": 1518, "end": 1537, "text": "Kim and Mnih (2018)", "ref_id": "BIBREF23" }, { "start": 1765, "end": 1784, "text": "Kumar et al. (2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Disentanglement Metrics", "sec_num": "2.2" }, { "text": "Apart from designing classification tasks for disentanglement evaluation, another method is based on estimating the mutual information (MI) between a single dimension of the latent variable and a single generative factor. Chen et al. (2018) propose to use the average of the gap (difference) between the largest normalised MI (by the information entropy of the generative factor) and the second largest normalised MI over all generative factors as the disentanglement score, whereas the modularity metric of Ridgeway and Mozer (2018) measures whether a single latent variable has the highest MI with only one generative factor and none with others.", "cite_spans": [ { "start": 222, "end": 240, "text": "Chen et al. (2018)", "ref_id": "BIBREF9" }, { "start": 508, "end": 533, "text": "Ridgeway and Mozer (2018)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Disentanglement Metrics", "sec_num": "2.2" }, { "text": "The algorithmic details for computing the above metrics are provided in Appendix A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Disentanglement Metrics", "sec_num": "2.2" }, { "text": "Empirical Difference. To highlight the empirical difference between these metrics, we use a toy set built by permuting four letters: A B C D. 
Each letter representing a generative factor with 20 choices of assignments (i.e, X = {X1, . . . , X20}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Disentanglement Metrics", "sec_num": "2.2" }, { "text": "where X \u2208 {A, B, C, D}). We consider two settings where each generative factor is embedded in a single dimension (denoted by Ex.1), or two dimensions (denoted by Ex.2). In each setting we uniformly sample 20 values from -1 to 1 to represent 20 assignments per factor and use them to allocate the assignments into distinctive bins per each corresponding dimension. By concatenating dimensions for each generative factor, we construct two ideal disentangled representations for data points in this toy dataset, amounting to 4 and 8 dimensional representations, respectively. Using these representations (skipping the encoding step), we measured the above metrics. Table 1 (Ex.1 and Ex.2 columns) summarises the results, illustrating that out of the 6 metrics, Higgins et al. 2017; Ridgeway and Mozer (2018); Kim and Mnih (2018) are the only ones that reach the potential maximum (i.e., 100), while Chen et al. (2018) exhibits its sensitivity towards completeness when we allocate two dimensions per factors.", "cite_spans": [ { "start": 806, "end": 825, "text": "Kim and Mnih (2018)", "ref_id": "BIBREF23" }, { "start": 896, "end": 914, "text": "Chen et al. (2018)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 662, "end": 669, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Disentanglement Metrics", "sec_num": "2.2" }, { "text": "Data Requirement. Measuring the mentioned disentanglement metrics requires a dataset satisfying the following attributes: 1. A set F where each of its elements is a generative factor which should be disentangled through representations; 2. For each element f i \u2208 F, a value space V i which is the domain of f i ; 3. For each value v ij \u2208 V i , a sample space S ij which contains observations who has value v ij on generative factor f i while everything else is arbitrary. We present two synthetic datasets ( \u00a73) that meet these criteria and use them in our experiments ( \u00a74).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Disentanglement Metrics", "sec_num": "2.2" }, { "text": "The use of synthetic datasets is the common practice for evaluating disentanglement in image domain (Dittadi et al., 2021; Higgins et al., 2017; Kim and Mnih, 2018) . Generative simplistic datasets in image domain define independent generative factors (e.g. shape, color) behind the data generation. However, a comparable resource is missing in text domain. We develop two synthetic generative datasets with varying degrees of difficulty to analyse and measure disentanglement: The YNOC dataset ( \u00a73.1) which has only three structures and generative factors appearing in every sentence, and the POS dataset ( \u00a73.2) which has more structures while some generative factors are not guaranteed to appear in every sentence. 
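Both datasets described next expose their generative factors in the form required above; a minimal sketch of that interface (factor names and values here are illustrative, and `factor_of` is a hypothetical accessor, not part of the released code):

```python
from collections import defaultdict

# F: the generative factors; V_i: the value space of each factor (illustrative).
factors = {
    "Year": [str(y) for y in range(2001, 2011)],
    "Occupation": ["baker", "teacher", "pilot"],
}

def build_sample_spaces(corpus, factor_of):
    # S_ij: for every factor f_i and value v_ij, the set of sentences whose
    # realisation of f_i equals v_ij, with everything else arbitrary.
    S = defaultdict(list)
    for sentence in corpus:
        for name in factors:
            S[(name, factor_of(sentence, name))].append(sentence)
    return S
```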
The YNOC dataset offers a simpler setting for disentanglement.", "cite_spans": [ { "start": 100, "end": 122, "text": "(Dittadi et al., 2021;", "ref_id": "BIBREF13" }, { "start": 123, "end": 144, "text": "Higgins et al., 2017;", "ref_id": "BIBREF20" }, { "start": 145, "end": 164, "text": "Kim and Mnih, 2018)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Generative Synthetic Datasets", "sec_num": "3" }, { "text": "Sentences in YNOC are generated by 4 generative factors: Year (Y), Name (N), Occupation (O), and City (C), describing the occupation of a person. Since we often use different means to express the same message, we considered three templates to generate YNOC sentences:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "YNOC Dataset", "sec_num": "3.1" }, { "text": "Template I. in Y, N was a/an O in C. Template II. in Y's C, N was a/an O. Template III. N was a/an O in C in Y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "YNOC Dataset", "sec_num": "3.1" }, { "text": "The templates were then converted into real sentences using 10 years, 40 names, 20 occupations, and 30 cities. This amounted to a total of 720K sentences, split as (60%,20%,20%) into training, validation, and test sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "YNOC Dataset", "sec_num": "3.1" }, { "text": "We use part-of-speech (POS) tags to simulate the structure of sentences and define a base grammar as \"n. v. n. end-punc.\", where 'n.' denotes noun, 'v.' denotes verb and 'end-punc.' denotes the punctuation which appears at the end of sentences. Then we define simple sentence structures as \"(adj.) n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS Dataset", "sec_num": "3.2" }, { "text": "(adv.) v. (prep.) (adj.) n. end-punc.\", where 'adj.' denotes adjective, 'adv.' denotes adverb, 'prep.' denotes preposition, and '()' marks the arbitrary inclusion/removal of the corresponding POS tag. We populate the structures with 2 4 = 16 simple structures presented in Table 2 . Next, we define complex sentence structures as combinations of two simple sentence structures by applying one of the following three rules:", "cite_spans": [], "ref_spans": [ { "start": 273, "end": 280, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "POS Dataset", "sec_num": "3.2" }, { "text": "Rule I. conj1. S1 comma S2 end-punc. Rule II. S1 conj1. S2 end-punc. Rule III. S1 comma conj2. S2 end-punc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS Dataset", "sec_num": "3.2" }, { "text": "where 'conj1.' and 'conj2.' denote two different kinds of conjunction, 'comma' denotes ',' and 'S1' and 'S2' are two simple sentence structures without 'end-punc.' We limit the number of POS tags that appear in 'S1' and 'S2' to 9 to control the complexity of generating sentences and obtain 279 complex structures in total. A maximum of 5 words is chosen for each POS to construct our sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS Dataset", "sec_num": "3.2" }, { "text": "The frequency of appearance for each word in a sentence is limited to one. Although this construction does not focus on sentences being \"realistic\", it simulate natural text in terms of the presence of an underlying grammar and rules over POS tags. 
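A minimal sketch of how the 16 simple POS structures can be enumerated and one of them populated under the one-use-per-word constraint (the vocabulary below is an illustrative placeholder, not the released word lists):

```python
import itertools
import random

def simple_structures():
    # Base grammar "n. v. n. end-punc." with four optional slots:
    # (adj.) n. (adv.) v. (prep.) (adj.) n. end-punc.  ->  2^4 = 16 structures.
    structures = []
    for mask in itertools.product([False, True], repeat=4):
        tags = []
        if mask[0]: tags.append("adj.")
        tags.append("n.")
        if mask[1]: tags.append("adv.")
        tags.append("v.")
        if mask[2]: tags.append("prep.")
        if mask[3]: tags.append("adj.")
        tags.append("n.")
        tags.append("end-punc.")
        structures.append(tags)
    return structures

# Illustrative vocabulary: at most 5 words per POS tag, each used once per sentence.
VOCAB = {
    "n.": ["cat", "dog", "tree", "car", "book"],
    "v.": ["sees", "likes", "finds", "takes", "holds"],
    "adj.": ["small", "red", "old", "new", "tall"],
    "adv.": ["quickly", "often", "rarely", "quietly", "soon"],
    "prep.": ["near", "behind", "under", "above", "beside"],
    "end-punc.": [".", "!", "?"],
}

def realise(structure, rng):
    # Populate a structure, never reusing a word within the same sentence.
    used = {tag: set() for tag in VOCAB}
    words = []
    for tag in structure:
        word = rng.choice([w for w in VOCAB[tag] if w not in used[tag]])
        used[tag].add(word)
        words.append(word)
    return " ".join(words)

rng = random.Random(0)
structs = simple_structures()
print(len(structs))               # 16
print(realise(structs[-1], rng))  # e.g. "old cat often sees near red dog ."
```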
3 We deliberately ignore semantics, since isolating semantics in terms of generative factors potentially involves analysis over multiple dimensions (combinatorial space) and quantifying grouped disentanglement requires suitable disentanglement metrics to be developed. We leave further exploration of this to our future work.", "cite_spans": [ { "start": 249, "end": 250, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "POS Dataset", "sec_num": "3.2" }, { "text": "We split the dataset into training, validation and test sets with proportion 60%, 20%, 20%. This proportion is used for every structure to ensure they have representative sentences in each portion of the data splits. The final size of (training, validation, test) sets are (1723680, 574560, 574560). All three sets are unbiased on word selection for each POS tag: e.g., all 5 noun POS vocabs from Table 2 have equal frequency (i.e., 20%). Exactly the same proportions are preserved for validation and test sets.", "cite_spans": [], "ref_spans": [ { "start": 397, "end": 404, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "POS Dataset", "sec_num": "3.2" }, { "text": "Through the process of the generation, we can define each POS tag as one ground truth generative factor for sentences. 4 Because the choices of words for different POS tags are independent, these generative factors are independent. However, for the same POS, the choices of words are dependent and POS tags are dependent on the structures as well. It is noteworthy that in contrast to the image domain where all generative factors are always present in the data, in POS dataset this cannot be guaranteed, making it a more challenging setting.", "cite_spans": [ { "start": 119, "end": 120, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "POS Dataset", "sec_num": "3.2" }, { "text": "In this section, we examine the introduced disentanglement models on text. We measure the disentanglement scores of each model on our two synthetic datasets and quantify how well-correlated these metrics are with reconstruction loss, active units, and KL ( \u00a74.1). We then look at various strategies for coupling the latent code during decoding and highlight their impacts on training and disentanglement behaviors ( \u00a74.2). We continue our analysis by showing how the representation learned by the highest scoring model (on disentanglement metrics) performs compared to vanilla VAE in two text classification tasks ( \u00a74.3), and finish our analysis by looking at these models' generative behaviors ( \u00a74.4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Analysis", "sec_num": "4" }, { "text": "Training Configuration. We adopt the VAE architecture from (Bowman et al., 2016), using a LSTM encoder-decoder. Unless stated otherwise, (word embedding, LSTM, representation embedding) dimensionalities for YNOC and POS datasets are (4D, 32D, 4D) and (4D, 64D, 8D), respectively, and we use the latent code to initialize the hidden state of the LSTM decoder. We use greedy decoding. All models are trained from multiple random starts using Adam (Kingma and Ba, 2015) with learning rate 0.001 for 10 epochs. 
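A minimal sketch of the encoder-decoder wiring just described, using the YNOC dimensionalities (4D embeddings, 32D LSTM, 4D latent) and the "Init" coupling in which the latent code initialises the decoder hidden state; the vocabulary size and layer names are illustrative rather than taken from the released code:

```python
import torch
import torch.nn as nn

class LSTMVAE(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=4, hid_dim=32, lat_dim=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.to_mu = nn.Linear(hid_dim, lat_dim)
        self.to_logvar = nn.Linear(hid_dim, lat_dim)
        # "Init" coupling: the latent code initialises the decoder hidden state.
        self.z_to_h = nn.Linear(lat_dim, hid_dim)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens):
        x = self.emb(tokens)                                   # (B, T, emb_dim)
        _, (h, _) = self.encoder(x)                            # h: (1, B, hid_dim)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)           # decoder initial state
        c0 = torch.zeros_like(h0)
        dec_out, _ = self.decoder(x, (h0, c0))                 # teacher forcing
        return self.out(dec_out), mu, logvar
```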
We set batch size to 256 and 512 for YNOC and POS, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Analysis", "sec_num": "4" }, { "text": "Taking the models ( \u00a72.1) and also an Autoencoder (AE) as a baseline we use the YNOC and POS datasets to report average KL-divergence (KL), reconstruction loss (Rec.), and number of active units (AU) 5 in Table 3 , and illustrate disentanglement metrics' scores in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 205, "end": 212, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 265, "end": 273, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Disentanglement Metrics", "sec_num": "4.1" }, { "text": "As demonstrated in Table 3 , different models pose various behaviors, noteworthy of those are:", "cite_spans": [], "ref_spans": [ { "start": 19, "end": 26, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Disentanglement Metrics", "sec_num": "4.1" }, { "text": "roles (e.g., subject-noun and object-noun, etc) is a possibility for future investigation. (1) the positive correlation of C with AU which intuitively means the increase of channel capacity demands more dimensions of the representation to carry information which then translates into having a better reconstruction of data, (2) the negative correlation between the increase of \u03b2 and decrease of reconstruction loss, (3) the best Rec. and AU are achieved by AE and MAT-VAE whereas the worst one is achieved by the (collapsed) vanilla-VAE, (4) the MAT-VAE (\u03b2 = 0.01, \u03bb = 0.1) model which induces more sparse representations 6 performs the best on both datasets, indicating the positive impact of representation sparsity as an inductive bias. As illustrated in Figure 1 , the difference between means of each disentanglement score on various models is relatively small, and due to large standard deviation on metrics, it is difficult to single out a superior model. This verifies findings of Lo-6 Sparsity is measured using Hoyer (Hurley and Rickard, 2009) . In this paper we report this as the average Hoyer over data points' posterior means. Hoyer for data point xi with posterior mean \u00b5i is calculated as", "cite_spans": [ { "start": 1027, "end": 1053, "text": "(Hurley and Rickard, 2009)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 758, "end": 766, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Disentanglement Metrics", "sec_num": "4.1" }, { "text": "\u221a d\u2212||\u03bc i || 1 /||\u03bc i || 2 \u221a d\u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Disentanglement Metrics", "sec_num": "4.1" }, { "text": ", where d is the dimensionality of the representations and\u03bci = \u00b5i/\u03c3(\u00b5), where \u00b5 = {\u00b51, ..., \u00b5n}, and \u03c3(.) is the standard deviation. catello et al. (2019) on image domain. In Table 3 (Top-3 column) we report the number of appearances of a model among the top 3 highest scoring models on at least one disentanglement metric. The ranking suggests that \u03b2-VAE with smaller \u03b2 values reach better disentangled representations, and MAT-VAE performing superior on YNOC and poorly on POS, highlighting its more challenging nature. For MAT-VAE we also observe an interesting correlation between sparsity and disentanglement: for instance on YNOC, MAT-VAE (\u03b2 = 0.01, \u03bb = 0.1) achieves the highest Hoyer (See Table 4 ) and occurs 7 times among Top-3 (see Table 3 ). 
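Sparsity here refers to the Hoyer measure defined in footnote 6; a minimal sketch of that computation over a matrix of posterior means, following the formula rather than the released code:

```python
import numpy as np

def average_hoyer(posterior_means):
    # posterior_means: (n, d) array, one posterior mean vector per data point.
    mu = np.asarray(posterior_means, dtype=float)
    d = mu.shape[1]
    # Normalise each dimension by its standard deviation across the dataset.
    mu_hat = mu / mu.std(axis=0, keepdims=True)
    l1 = np.abs(mu_hat).sum(axis=1)
    l2 = np.linalg.norm(mu_hat, axis=1)
    hoyer = (np.sqrt(d) - l1 / l2) / (np.sqrt(d) - 1)  # 0 = dense, 1 = maximally sparse
    return hoyer.mean()
```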
Interestingly, the success of MAT-VAE does not translate to POS dataset, where it underperforms AE. These two observations suggest that sparsity could be a facilitator for disentanglement, but achieving a stable level of sparsity remains as a challenge. The more recent development in the direction of sparsity, HSVAE (Prokhorov et al., 2020) , addresses the stability issue of MAT-VAE but we leave its exploration to future work.", "cite_spans": [ { "start": 1072, "end": 1096, "text": "(Prokhorov et al., 2020)", "ref_id": "BIBREF34" } ], "ref_spans": [ { "start": 175, "end": 182, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 697, "end": 704, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 743, "end": 750, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Disentanglement Metrics", "sec_num": "4.1" }, { "text": "To further analyse the inconsistency between different metrics we calculate the Pearson product- moment correlation coefficient between them and KL, -Rec, AU, Hoyer on POS and YNOC datasets. See the heatmap in Figure 2 . While text-specific metrics are yet to be developed, our experiment suggests Higgins et al. (2017) is a good candidate to try first for text domain as it seems to be the one with strong correlation with Hoyer, AU, -Rec, and KL and has the highest level of agreement (overall) with other metrics.", "cite_spans": [ { "start": 298, "end": 319, "text": "Higgins et al. (2017)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 210, "end": 218, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Disentanglement Metrics", "sec_num": "4.1" }, { "text": "In VAEs, we typically feed the decoder with the latent code as well as word embeddings during training. The method to couple the latent code with decoder could have some effects on disentanglement for text. To highlight this, we train with 4 different coupling strategies: Init, Concat, Init Concat, Concat w/o Emb. See Figure 3a for an accessible visualisation. To analyse the impact of coupling, we opt for CCI-VAE which allows the comparisons to be made for the same value of KL. We first use Concat w/o Emb to find an optimal KL in vanilla VAEs, which is then used as the C to train CCI-VAEs using the other coupling metrics on YNOC and POS datasets. For YNOC, C = 1.5, and for POS, C = 5.5. This is to keep KLdivergence and reconstruction loss at the same level for fair comparison across different strategies. We report results in Table 5 . Among the investigated coupling methods, the key distinguishing factor for disentanglement is their impacts on AU which is the highest for Concat.", "cite_spans": [], "ref_spans": [ { "start": 320, "end": 329, "text": "Figure 3a", "ref_id": "FIGREF2" }, { "start": 837, "end": 844, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Coupling Latent Code and Decoder", "sec_num": "4.2" }, { "text": "Next, using Init as the baseline, we measure the absolute difference between disentanglement scores of different coupling methods in Figure 3b . In general, using concatenation can bring a large improvement in disentanglement. Using both initialization and concatenation do not lead to a better result. 
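A minimal sketch of how the four coupling strategies differ on the decoder input side (function and variable names are illustrative; the released implementation may organise this differently):

```python
import torch

def decoder_inputs(word_emb, z, strategy):
    # word_emb: (B, T, E) teacher-forced word embeddings; z: (B, L) latent code.
    # Returns (per-step decoder input, whether z also initialises the hidden state).
    z_rep = z.unsqueeze(1).expand(-1, word_emb.size(1), -1)   # repeat z at every step
    if strategy == "Init":            # z only initialises the hidden state
        return word_emb, True
    if strategy == "Concat":          # z concatenated to every word embedding
        return torch.cat([word_emb, z_rep], dim=-1), False
    if strategy == "Init Concat":     # both of the above
        return torch.cat([word_emb, z_rep], dim=-1), True
    if strategy == "Concat w/o Emb":  # z is the only per-step decoder input
        return z_rep, False
    raise ValueError(strategy)
```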
Despite our expectation, not feeding word embeddings into decoder during training does not encourage disentanglement due to the added reliance on the latent code.", "cite_spans": [], "ref_spans": [ { "start": 133, "end": 142, "text": "Figure 3b", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Coupling Latent Code and Decoder", "sec_num": "4.2" }, { "text": "A confounding factor which could pollute this analysis is the role of strong auto-regressive decoding of VAEs and the type of information captured by the decoder in such scenario. While a preliminary analysis has been provided recently (Bosc and Vincent, 2020) , this has been vastly underexplored and requires more explicit attempts. We leave deeper investigation of this to future work.", "cite_spans": [ { "start": 236, "end": 260, "text": "(Bosc and Vincent, 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Coupling Latent Code and Decoder", "sec_num": "4.2" }, { "text": "To examine the performance of these models on real-world downstream task setting, we consider the classification task. For our classification datasets, we use DBpedia (14 classes) and Yahoo Question (10 classes) (Zhang et al., 2015) . Each class of these two datasets has (10k, 1k, 1k) randomly chosen sentences in (train, dev, test) sets. We train Vanilla-VAE, \u03b2-VAE (\u03b2 = 0.2), CCI-VAE (C = 10), and MAT-VAE (\u03b2 = 0.01, \u03bb = 0.1) from Table 3 on DBpedia and Yahoo (without the labels), then freeze the trained encoders and place a classifier on top to use the mean vector representations from the encoder as a feature to train a classifier.", "cite_spans": [ { "start": 212, "end": 232, "text": "(Zhang et al., 2015)", "ref_id": "BIBREF42" } ], "ref_spans": [ { "start": 434, "end": 441, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Disentanglement and Classification", "sec_num": "4.3" }, { "text": "We set the dimensionality of word embedding, LSTM, and the latent space to 128, 512, 32, respectively. The VAE models are trained using a batch size of 64, for 6 epochs with Adam (learning rate 0.001). For the classifier, we use a single linear layer with 1024 neurons, followed by a Softmax and train it for 15 epochs, using Adam (learning rate 0.001) and batch size 512. We illustrate the mean and standard deviation across 3 runs of models in Figure 4 .", "cite_spans": [], "ref_spans": [ { "start": 446, "end": 454, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Disentanglement and Classification", "sec_num": "4.3" }, { "text": "We observe that the ranking of classification accuracy among the models on DBpedia is consistent with their Top-3 performance in Table 3 , with MAT-VAE outperforming the other three variants. We see roughly the same trend for Yahoo, with MAT-VAE being the dominating model. This indicates Table 6 : An example of a 3D latent code transformation in the dimension-wise homotopy. 
In row i, \u2192 denotes the start and end points of interpolation, solid box denotes the two dimensions being interpolated, and dashed box denotes the updated dimensions from i \u2212 1.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 136, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 289, "end": 296, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Disentanglement and Classification", "sec_num": "4.3" }, { "text": "START z 1 [z 1,1 , z 1,2 , z 1,3 ] i = 1 z 1,1 z 1,1 , z 1,2 , z 1,3 \u2192 z 2,1 , z 1,2 , z 1,3 i = 2 z 1,2 z 2,1 , z 1,2 ,z 1,3 \u2192 z 2,1 , z 2,2 , z 1,3 i = 3 z 1,3 z 2,1 , z 2,2 , z 1,3 \u2192 z 2,1 , z 2,2 , z 2,3 END z 2 [z 2,1 , z 2,2 , z 2,3 ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Disentanglement and Classification", "sec_num": "4.3" }, { "text": "that disentangled representations are likely to be easier to discriminate, although the role of sparsely learned representations could contribute to MAT-VAE's success as well (Prokhorov et al., 2020) .", "cite_spans": [ { "start": 175, "end": 199, "text": "(Prokhorov et al., 2020)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Disentanglement and Classification", "sec_num": "4.3" }, { "text": "To observe the effect of disentanglement in homotopy (Bowman et al., 2016), we use the exactly same toy dataset introduced in \u00a72.1 and assess the homotopy behaviour of the highest scoring VAE vs. an ideal representation. To conduct homotopy, we interpolate between two sampled sequences' representations and pass the intermediate representations to decoder to generate the output. We use 4D word embedding, 16D LSTM, 4D latent space. We report the results for the VAEs scoring the highest on disentanglement (w.r.t. Higgins et al. (2017) denoted as VAE-Higg) and completeness (w.r.t. Chen et al. (2018) denoted as VAE-Chen). The VAE-Higg and VAE-Chen are \u03b2-VAE with \u03b2 = 0.4 and MAT-VAE with \u03b2 = 0.01, \u03bb = 0.1, respectively. Additionally, to highlight the role of generative factor in generation, we conduct a dimensionwise homotopy, transitioning from the first to the last sentence by interpolating between the dimensions one-by-one. This is implemented as follows: (i) using prior distribution 7 we sample two latent codes denoted by z 1 = (z 1,1 , z 1,2 , . . . , z 1,n ), z 2 = (z 2,1 , z 2,2 , . . . , z 2,n ); (ii) for i-th dimension, using z 1,i = (z 2,1 , . . . , z 2,i\u22121 , z 1,i , . . . , z 1,n ) as the start, we interpolate along the i-th dimension towards z 2,i = (z 2,1 , . . . , z 2,i , z 1,i+1 , . . . , z 1,n ). Table 6 illustrates this for a 3D latent code example.", "cite_spans": [ { "start": 516, "end": 537, "text": "Higgins et al. (2017)", "ref_id": "BIBREF20" }, { "start": 584, "end": 602, "text": "Chen et al. (2018)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 1040, "end": 1060, "text": "= (z 1,1 , z 1,2 , .", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Disentanglement and Generation", "sec_num": "4.4" }, { "text": "Results: Table 7 reports the outputs for standard homotopy (top block) and dimension-wise homotopy. 
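A minimal sketch of the dimension-wise interpolation schedule of Table 6, with `decode` standing in for the trained decoder (a hypothetical placeholder):

```python
import numpy as np

def dimension_wise_homotopy(z1, z2, decode, steps=5):
    # Transition from z1 to z2 one coordinate at a time; within coordinate i,
    # interpolate linearly while coordinates < i already hold z2's values and
    # coordinates > i still hold z1's values.
    z1, z2 = np.asarray(z1, float), np.asarray(z2, float)
    outputs = [decode(z1)]
    current = z1.copy()
    for i in range(len(z1)):
        for alpha in np.linspace(0.0, 1.0, steps)[1:]:   # skip alpha = 0 (repeats previous point)
            current[i] = (1 - alpha) * z1[i] + alpha * z2[i]
            outputs.append(decode(current.copy()))
    return outputs
```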
The results for standard homotopy demon-Ideal VAE-Higg VAE- Chen z1 A9 B17 C13 D3 A12 B14 C14 D12 A9 B4 C10 D15 Homotopy A20 B17 C1 D3 A12 B14 C14 D12 A7 B4 C10 D15 A4 B17 C12 D6 A8 B14 C14 D12 A14 B4 C10 D15 A3 B1 C6 D6 A20 B14 C14 D12 A20 B19 C10 D15 A13 B1 C6 D20 A15 B14 C14 D12 A8 B19 C10 D15 z2 A15 B2 C8 D10 A4 B14 C14 D12 A12 B19 C10 D15 z1 A9 B17 C13 D3 A12 B14 C14 D12 A9 B4 C10 D15 Dim 1 A20 B17 C13 D3 A12 B14 C14 D12 A7 B4 C10 D15 A4 B17 C13 D3 A8 B14 C14 D12 A4 B19 C10 D15 A3 B17 C13 D3 A20 B14 C14 D12 A8 B19 C10 D15 A13 B17 C13 D3 A18 B14 C14 D12 A12 B19 C10 D15 strate that the presence of ideally disentangled representation translates into disentangled generation in general. However, both VAE-Higg and VAE-Chen seem to mainly be producing variations of the letter in the first position (letter A) during the interpolation. The same observation holds in the dimension-wise experiments. VAE-Chen also produces variations of the letter in the second position (letter B) along with the variation of letter A, which suggests the lesser importance of completeness for disentangled representations. This indicates that despite the relative superior performance of certain models on the metrics and classification tasks, the amount of disentanglement present in the representation is not sufficient enough to be reflected by the generative behavior of these models. As a future work, we would look into the role of auto-regressive decoding and teacherforcing as confounding factors that can potentially affect the disentanglement process.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 16, "text": "Table 7", "ref_id": "TABREF11" }, { "start": 160, "end": 589, "text": "Chen z1 A9 B17 C13 D3 A12 B14 C14 D12 A9 B4 C10 D15 Homotopy A20 B17 C1 D3 A12 B14 C14 D12 A7 B4 C10 D15 A4 B17 C12 D6 A8 B14 C14 D12 A14 B4 C10 D15 A3 B1 C6 D6 A20 B14 C14 D12 A20 B19 C10 D15 A13 B1 C6 D20 A15 B14 C14 D12 A8 B19 C10 D15 z2 A15 B2 C8 D10 A4 B14 C14 D12 A12 B19 C10 D15 z1 A9 B17 C13 D3 A12 B14 C14 D12 A9 B4 C10 D15 Dim 1 A20 B17 C13 D3 A12 B14 C14 D12 A7 B4 C10 D15 A4 B17 C13 D3", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Disentanglement and Generation", "sec_num": "4.4" }, { "text": "z 1,2 A15 B17 C13 D3 A4 B14 C14 D12 A12 B19 C10 D15", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Disentanglement and Generation", "sec_num": "4.4" }, { "text": "We evaluated a set of recent unsupervised disentanglement learning frameworks widely used in image domain on two artificially created corpora with known underlying generative factors. Our experiments highlight the existing gaps in text domain, the daunting tasks state-of-the-art models from image domain face on text, and the confounding elements that pose further challenges towards representation disentanglement in text domain. Motivated by our findings, in future, we will explore the role of inductive biases such as representation sparsity in achieving representation disentanglement. 
Additionally, we will look into alternative forms of decoding and training which may compromise reconstruction quality but increase the reliance of decoding on the representation, hence allowing for a more controlled analysis and evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "5" }, { "text": "Our synthetic datasets and experimental framework provide a set of quantitative and qualitative measures to facilitate and future research in developing new models, datasets, and evaluation metrics specific for text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "5" }, { "text": "space R ij , who has a bijection mapping with S ij . Hence, when sampling representations which have the same value on one generative factor, we only need to sample in one R ij .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "5" }, { "text": "Under these notations, we write the pseudo code of metrics in Algorithm 1-6. For Algorithm 5 and 6, although we only use one criterion in the main paper, we still provide the details for other criteria. We set N = 1000 and L = 64 for Algorithm 1 and 2, and N = 10000 for Algorithm 3, 4, 5, and 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "5" }, { "text": "Algorithm 1 Metric of Higgins et al. (2017) 1: D = \u2205 2: for f i \u2208 F do Find the value v ij on f i for s n 6: Find the value v ij on f i for s n 8:", "cite_spans": [ { "start": 22, "end": 43, "text": "Higgins et al. (2017)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "5" }, { "text": "Sample (z (1) 1 , . . . , z (1) L ) from R ij 7: Sample (z (2) 1 , . . . , z (2) L ) from R ij 8: z n = 1 L L l=1 |z (1) l \u2212 z", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "5" }, { "text": "Sample (z 1 , . . . , z L ) from R ij 9: d * n = arg max d var( z 1,d \u03c3 d , . . . , z L,d \u03c3 d )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "5" }, { "text": "10:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "5" }, { "text": "D = {(d * n , f i )} D 11:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "5" }, { "text": "Split D into training set TR and test set TE with proportion (80%, 20%) 12: Train 10 majority vote classifiers on TR 13: Calculate the accuracy on TE for 10 models 14: Calculate the mean and variance of accuracy Algorithm 3 Metric of Kumar et al. (2018) 1: for f i \u2208 F do 2:", "cite_spans": [ { "start": 234, "end": 253, "text": "Kumar et al. (2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "5" }, { "text": "for v ij \u2208 V i do 3: p(v ij ) = Count(S ij ) j Count(S ij )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "5" }, { "text": "Sample N j = N \u00d7 p(v ij ) representations z j from R ij", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4:", "sec_num": null }, { "text": "for d = 1, 2, . . . 
, dim z do 6:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "D d = \u2205 7: for v ij \u2208 V i do 8:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "for n = 1, 2, . . . , N j do 9: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "D id = {(z j n,d , v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "SAP i = acc d * \u2212 max d =d * acc d 15: score = avg(SAP i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "Code and datasets are available at https://github. com/lanzhang128/disentanglement", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "These criteria are referred to modularity, compactness and explicitness byRidgeway and Mozer (2018).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For structures which can produce more than 10k sentences (e.g. longer structures), we randomly choose 10k.4 While we consider POS tags as the generative factors in this paper, further sub-categorisation of POS tags based on position (e.g., first-noun and second-noun, etc) or grammatical", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "i is active if Covariancex(E i\u223cq(i|x) [i]) > 0.01.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Instead of prior, we sample two sentences from test set and use their representations. This is to avoid the situation where samples are not in the well-estimated region of the posterior.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "To evaluate representations learned by a model on a dataset having the attributes of Data Requirement, we further require a series of representation Algorithm 4 Metric of Chen et al. (2018) 1:5:11:15:16:17:18:19: score = avg(M IG i )Algorithm 5 Metric of Ridgeway and Mozer (2018) Modularity:1: Same steps as Algorithm 4 without step 17, 18 and 19 2:11: score = avg(1 \u2212 \u03b4 d ) Explicitness:1: for f i \u2208 F do 2:5:for n = 1, 2, . . . , N j do 7:Split D i into training set TR i and test set TE i with proportion (80%, 20%) 9:Train an one-versus-rest logistic regress classifier on TR i", "cite_spans": [ { "start": 171, "end": 189, "text": "Chen et al. (2018)", "ref_id": "BIBREF9" }, { "start": 255, "end": 280, "text": "Ridgeway and Mozer (2018)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "A Disentanglement Metrics Algorithms", "sec_num": null }, { "text": "Record the ROC area-under-the-curve (AUC) auc ij on TR i for every v ij 11: score = avg(auc ij )Algorithm 6 Metric of Eastwood and Williams (2018) 1: for f i \u2208 F do 2:for n = 1, 2, . . . 
, N j do 7:Split D i into training set TR i and test set TE i with proportion (80%, 20%) 9:Train a random forest classifier on TR i", "cite_spans": [ { "start": 118, "end": 146, "text": "Eastwood and Williams (2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "10:", "sec_num": null }, { "text": "Informativeness score inf i is the accuracy on TE i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "10:", "sec_num": null }, { "text": "r id is the relative importance of dimension d in predicting v ij , obtained from the random forest ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11:", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Fixing a broken elbo", "authors": [ { "first": "Alexander", "middle": [], "last": "Alemi", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Poole", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Fischer", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Dillon", "suffix": "" }, { "first": "A", "middle": [], "last": "Rif", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Saurous", "suffix": "" }, { "first": "", "middle": [], "last": "Murphy", "suffix": "" } ], "year": 2018, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "159--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dil- lon, Rif A Saurous, and Kevin Murphy. 2018. Fix- ing a broken elbo. In International Conference on Machine Learning, pages 159-168. PMLR.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Polarized-VAE: Proximity based disentangled representation learning for text generation", "authors": [ { "first": "Vikash", "middle": [], "last": "Balasubramanian", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Kobyzev", "suffix": "" }, { "first": "Hareesh", "middle": [], "last": "Bahuleyan", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Shapiro", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Vechtomova", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", "volume": "", "issue": "", "pages": "416--423", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vikash Balasubramanian, Ivan Kobyzev, Hareesh Bahuleyan, Ilya Shapiro, and Olga Vechtomova. 2021. Polarized-VAE: Proximity based disentan- gled representation learning for text generation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Lin- guistics: Main Volume, pages 416-423, Online. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Generating sentences from disentangled syntactic and semantic spaces", "authors": [ { "first": "Yu", "middle": [], "last": "Bao", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Shujian", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Vechtomova", "suffix": "" }, { "first": "Xin-Yu", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6008--6019", "other_ids": { "DOI": [ "10.18653/v1/P19-1602" ] }, "num": null, "urls": [], "raw_text": "Yu Bao, Hao Zhou, Shujian Huang, Lei Li, Lili Mou, Olga Vechtomova, Xin-yu Dai, and Jiajun Chen. 2019. Generating sentences from disentangled syn- tactic and semantic spaces. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 6008-6019, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Representation learning: A review and new perspectives", "authors": [ { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "A", "middle": [], "last": "Courville", "suffix": "" }, { "first": "P", "middle": [], "last": "Vincent", "suffix": "" } ], "year": 2013, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "35", "issue": "8", "pages": "1798--1828", "other_ids": { "DOI": [ "10.1109/TPAMI.2013.50" ] }, "num": null, "urls": [], "raw_text": "Y. Bengio, A. Courville, and P. Vincent. 2013. Rep- resentation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Deep learning of representations: Looking forward", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2013, "venue": "Statistical Language and Speech Processing -First International Conference, SLSP 2013", "volume": "7978", "issue": "", "pages": "1--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio. 2013. Deep learning of representa- tions: Looking forward. In Statistical Language and Speech Processing -First International Con- ference, SLSP 2013, Tarragona, Spain, July 29-31, 2013. Proceedings, volume 7978 of Lecture Notes in Computer Science, pages 1-37. Springer.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Do sequence-tosequence VAEs learn global features of sentences?", "authors": [ { "first": "Tom", "middle": [], "last": "Bosc", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "4296--4318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Bosc and Pascal Vincent. 2020. Do sequence-to- sequence VAEs learn global features of sentences? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4296-4318, Online. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Generating sentences from a continuous space", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Andrew", "middle": [ "M" ], "last": "Vinyals", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Samy", "middle": [], "last": "J\u00f3zefowicz", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "10--21", "other_ids": { "DOI": [ "10.18653/v1/k16-1002" ] }, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, An- drew M. Dai, Rafal J\u00f3zefowicz, and Samy Ben- gio. 2016. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Confer- ence on Computational Natural Language Learning, CoNLL 2016, pages 10-21, Berlin, Germany. ACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Understanding disentangling in \u03b2-vae", "authors": [ { "first": "Christopher", "middle": [ "P" ], "last": "Burgess", "suffix": "" }, { "first": "Irina", "middle": [], "last": "Higgins", "suffix": "" }, { "first": "Arka", "middle": [], "last": "Pal", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Matthey", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Watters", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Desjardins", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Lerchner", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher P. Burgess, Irina Higgins, Arka Pal, Lo\u00efc Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. 2018. Understanding disentan- gling in \u03b2-vae. CoRR, abs/1804.03599.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A multi-task approach for disentangling syntax and semantics in sentence representations", "authors": [ { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Qingming", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Wiseman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2453--2464", "other_ids": { "DOI": [ "10.18653/v1/N19-1254" ] }, "num": null, "urls": [], "raw_text": "Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019. A multi-task approach for dis- entangling syntax and semantics in sentence repre- sentations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2453-2464, Minneapolis, Minnesota. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Isolating sources of disentanglement in variational autoencoders", "authors": [ { "first": "T", "middle": [ "Q" ], "last": "Ricky", "suffix": "" }, { "first": "Xuechen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" }, { "first": "B", "middle": [], "last": "Roger", "suffix": "" }, { "first": "David", "middle": [ "K" ], "last": "Grosse", "suffix": "" }, { "first": "", "middle": [], "last": "Duvenaud", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "31", "issue": "", "pages": "2610--2620", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ricky T. Q. Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. 2018. Isolating sources of dis- entanglement in variational autoencoders. In Ad- vances in Neural Information Processing Systems, volume 31, pages 2610-2620. Curran Associates, Inc.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Improving disentangled text representation learning with information-theoretic guidance", "authors": [ { "first": "Pengyu", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Dinghan", "middle": [], "last": "Martin Renqiang Min", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Malon", "suffix": "" }, { "first": "Yitong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Li", "suffix": "" }, { "first": "", "middle": [], "last": "Carin", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7530--7541", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.673" ] }, "num": null, "urls": [], "raw_text": "Pengyu Cheng, Martin Renqiang Min, Dinghan Shen, Christopher Malon, Yizhe Zhang, Yitong Li, and Lawrence Carin. 2020. Improving disentangled text representation learning with information-theoretic guidance. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7530-7541, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "German", "middle": [], "last": "Kruszewski", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2126--2136", "other_ids": { "DOI": [ "10.18653/v1/P18-1198" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, German Kruszewski, Guillaume Lam- ple, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Aus- tralia. 
Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Avoiding latent variable collapse with generative skip models", "authors": [ { "first": "B", "middle": [], "last": "Adji", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Dieng", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Kim", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "", "middle": [], "last": "Blei", "suffix": "" } ], "year": 2019, "venue": "The 22nd International Conference on Artificial Intelligence and Statistics", "volume": "2019", "issue": "", "pages": "2397--2405", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adji B. Dieng, Yoon Kim, Alexander M. Rush, and David M. Blei. 2019. Avoiding latent variable col- lapse with generative skip models. In The 22nd In- ternational Conference on Artificial Intelligence and Statistics, AISTATS 2019, volume 89 of Proceedings of Machine Learning Research, pages 2397-2405, Naha, Okinawa, Japan. PMLR.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "On the transfer of disentangled representations in realistic settings", "authors": [ { "first": "Andrea", "middle": [], "last": "Dittadi", "suffix": "" }, { "first": "Frederik", "middle": [], "last": "Tr\u00e4uble", "suffix": "" }, { "first": "Francesco", "middle": [], "last": "Locatello", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Wuthrich", "suffix": "" }, { "first": "Vaibhav", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Ole", "middle": [], "last": "Winther", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Sch\u00f6lkopf", "suffix": "" } ], "year": 2021, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrea Dittadi, Frederik Tr\u00e4uble, Francesco Locatello, Manuel Wuthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, and Bernhard Sch\u00f6lkopf. 2021. On the transfer of disentangled representations in realis- tic settings. In International Conference on Learn- ing Representations.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A framework for the quantitative evaluation of disentangled representations", "authors": [ { "first": "Cian", "middle": [], "last": "Eastwood", "suffix": "" }, { "first": "K", "middle": [ "I" ], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Williams", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cian Eastwood and Christopher K. I. Williams. 2018. A framework for the quantitative evaluation of dis- entangled representations. 
In International Confer- ence on Learning Representations.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Assessing composition in sentence vector representations", "authors": [ { "first": "Allyson", "middle": [], "last": "Ettinger", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "Elgohary", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Phillips", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1790--1801", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing composition in sen- tence vector representations. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1790-1801, Santa Fe, New Mex- ico, USA. Association for Computational Linguis- tics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A kernel two-sample test", "authors": [ { "first": "Arthur", "middle": [], "last": "Gretton", "suffix": "" }, { "first": "Karsten", "middle": [ "M" ], "last": "Borgwardt", "suffix": "" }, { "first": "Malte", "middle": [ "J" ], "last": "Rasch", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Sch\u00f6lkopf", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Smola", "suffix": "" } ], "year": 2012, "venue": "Journal of Machine Learning Research", "volume": "13", "issue": "25", "pages": "723--773", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Sch\u00f6lkopf, and Alexander Smola. 2012. A kernel two-sample test. Journal of Machine Learn- ing Research, 13(25):723-773.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Lagging inference networks and posterior collapse in variational autoencoders", "authors": [ { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Spokoyny", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" } ], "year": 2019, "venue": "7th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junxian He, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. Lagging inference networks and posterior collapse in variational au- toencoders. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Designing and interpreting probes with control tasks", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2733--2743", "other_ids": { "DOI": [ "10.18653/v1/D19-1275" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. 
In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Lin- guistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Towards a definition of disentangled representations", "authors": [ { "first": "Irina", "middle": [], "last": "Higgins", "suffix": "" }, { "first": "David", "middle": [], "last": "Amos", "suffix": "" }, { "first": "David", "middle": [], "last": "Pfau", "suffix": "" }, { "first": "S\u00e9bastien", "middle": [], "last": "Racani\u00e8re", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Matthey", "suffix": "" }, { "first": "Danilo", "middle": [ "J" ], "last": "Rezende", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Lerchner", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irina Higgins, David Amos, David Pfau, S\u00e9bastien Racani\u00e8re, Lo\u00efc Matthey, Danilo J. Rezende, and Alexander Lerchner. 2018. Towards a defi- nition of disentangled representations. CoRR, abs/1812.02230.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "authors": [ { "first": "Irina", "middle": [], "last": "Higgins", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Matthey", "suffix": "" }, { "first": "Arka", "middle": [], "last": "Pal", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Burgess", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Botvinick", "suffix": "" }, { "first": "Shakir", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Lerchner", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irina Higgins, Lo\u00efc Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-vae: Learning basic visual concepts with a constrained variational framework. In 5th International Confer- ence on Learning Representations, ICLR 2017, Con- ference Track Proceedings, Toulon, France.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Comparing measures of sparsity", "authors": [ { "first": "N", "middle": [], "last": "Hurley", "suffix": "" }, { "first": "S", "middle": [], "last": "Rickard", "suffix": "" } ], "year": 2009, "venue": "IEEE Transactions on Information Theory", "volume": "55", "issue": "10", "pages": "4723--4741", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Hurley and S. Rickard. 2009. Comparing measures of sparsity. 
IEEE Transactions on Information The- ory, 55(10):4723-4741.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Disentangled representation learning for non-parallel text style transfer", "authors": [ { "first": "Vineet", "middle": [], "last": "John", "suffix": "" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Hareesh", "middle": [], "last": "Bahuleyan", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Vechtomova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "424--434", "other_ids": { "DOI": [ "10.18653/v1/P19-1041" ] }, "num": null, "urls": [], "raw_text": "Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled representation learning for non-parallel text style transfer. In Pro- ceedings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics, pages 424-434, Florence, Italy. Association for Computational Lin- guistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Disentangling by factorising", "authors": [ { "first": "Hyunjik", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Andriy", "middle": [], "last": "Mnih", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 35th International Conference on Machine Learning", "volume": "80", "issue": "", "pages": "2649--2658", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hyunjik Kim and Andriy Mnih. 2018. Disentangling by factorising. In Proceedings of the 35th Inter- national Conference on Machine Learning, ICML 2018, volume 80 of Proceedings of Machine Learn- ing Research, pages 2649-2658, Stockholmsm\u00e4ssan, Stockholm Sweden. PMLR.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, Conference Track Proceedings, San Diego, CA, USA.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Autoencoding variational bayes", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Max", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2014, "venue": "2nd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Max Welling. 2014. Auto- encoding variational bayes. 
In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Con- ference Track Proceedings.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "VARIATIONAL INFERENCE OF DISENTANGLED LATENT CONCEPTS FROM UNLABELED OBSERVATIONS", "authors": [ { "first": "Abhishek", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Prasanna", "middle": [], "last": "Sattigeri", "suffix": "" }, { "first": "Avinash", "middle": [], "last": "Balakrishnan", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhishek Kumar, Prasanna Sattigeri, and Avinash Bal- akrishnan. 2018. VARIATIONAL INFERENCE OF DISENTANGLED LATENT CONCEPTS FROM UNLABELED OBSERVATIONS. In International Conference on Learning Representations.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "authors": [ { "first": "Francesco", "middle": [], "last": "Locatello", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Lucic", "suffix": "" }, { "first": "Gunnar", "middle": [], "last": "Raetsch", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gelly", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Sch\u00f6lkopf", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Bachem", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 36th International Conference on Machine Learning", "volume": "97", "issue": "", "pages": "4114--4124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francesco Locatello, Stefan Bauer, Mario Lucic, Gun- nar Raetsch, Sylvain Gelly, Bernhard Sch\u00f6lkopf, and Olivier Bachem. 2019. Challenging common as- sumptions in the unsupervised learning of disentan- gled representations. In Proceedings of the 36th In- ternational Conference on Machine Learning, ICML 2019, volume 97 of Proceedings of Machine Learn- ing Research, pages 4114-4124, Long Beach, Cali- fornia, USA. PMLR.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Targeted syntactic evaluation of language models", "authors": [ { "first": "Rebecca", "middle": [], "last": "Marvin", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1192--1202", "other_ids": { "DOI": [ "10.18653/v1/D18-1151" ] }, "num": null, "urls": [], "raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syn- tactic evaluation of language models. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. 
Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Disentangling disentanglement in variational autoencoders", "authors": [ { "first": "Emile", "middle": [], "last": "Mathieu", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Rainforth", "suffix": "" }, { "first": "Yee Whye", "middle": [], "last": "Siddharth", "suffix": "" }, { "first": "", "middle": [], "last": "Teh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 36th International Conference on Machine Learning", "volume": "97", "issue": "", "pages": "4402--4412", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emile Mathieu, Tom Rainforth, N Siddharth, and Yee Whye Teh. 2019. Disentangling disentangle- ment in variational autoencoders. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 4402-4412. PMLR.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Bayesian variable selection in linear regression", "authors": [ { "first": "T", "middle": [ "J" ], "last": "Mitchell", "suffix": "" }, { "first": "J", "middle": [ "J" ], "last": "Beauchamp", "suffix": "" } ], "year": 1988, "venue": "Journal of the American Statistical Association", "volume": "83", "issue": "404", "pages": "1023--1032", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. J. Mitchell and J. J. Beauchamp. 1988. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023- 1032.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "The role of disentanglement in generalisation", "authors": [ { "first": "", "middle": [], "last": "Milton Llera Montero", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Casimir", "suffix": "" }, { "first": "Rui Ponte", "middle": [], "last": "Ludwig", "suffix": "" }, { "first": "Gaurav", "middle": [], "last": "Costa", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Malhotra", "suffix": "" }, { "first": "", "middle": [], "last": "Bowers", "suffix": "" } ], "year": 2021, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Milton Llera Montero, Casimir JH Ludwig, Rui Ponte Costa, Gaurav Malhotra, and Jeffrey Bowers. 2021. The role of disentanglement in generalisation. In International Conference on Learning Representa- tions.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Effective estimation of deep generative language models", "authors": [ { "first": "Tom", "middle": [], "last": "Pelsmaeker", "suffix": "" }, { "first": "Wilker", "middle": [], "last": "Aziz", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7220--7236", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.646" ] }, "num": null, "urls": [], "raw_text": "Tom Pelsmaeker and Wilker Aziz. 2020. Effective es- timation of deep generative language models. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 7220- 7236, Online. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Information-theoretic probing for linguistic structure", "authors": [ { "first": "Tiago", "middle": [], "last": "Pimentel", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Valvoda", "suffix": "" }, { "first": "Rowan", "middle": [], "last": "Hall Maudslay", "suffix": "" }, { "first": "Ran", "middle": [], "last": "Zmigrod", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4609--4622", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.420" ] }, "num": null, "urls": [], "raw_text": "Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. 2020. Information-theoretic probing for linguistic structure. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 4609-4622, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Hierarchical sparse variational autoencoder for text encoding", "authors": [ { "first": "Victor", "middle": [], "last": "Prokhorov", "suffix": "" }, { "first": "Yingzhen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ehsan", "middle": [], "last": "Shareghi", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Collier", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2009.12421" ] }, "num": null, "urls": [], "raw_text": "Victor Prokhorov, Yingzhen Li, Ehsan Shareghi, and Nigel Collier. 2020. Hierarchical sparse varia- tional autoencoder for text encoding. arXiv preprint arXiv:2009.12421.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "On the importance of the Kullback-Leibler divergence term in variational autoencoders for text generation", "authors": [ { "first": "Victor", "middle": [], "last": "Prokhorov", "suffix": "" }, { "first": "Ehsan", "middle": [], "last": "Shareghi", "suffix": "" }, { "first": "Yingzhen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Taher Pilehvar", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Collier", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation", "volume": "", "issue": "", "pages": "118--127", "other_ids": { "DOI": [ "10.18653/v1/D19-5612" ] }, "num": null, "urls": [], "raw_text": "Victor Prokhorov, Ehsan Shareghi, Yingzhen Li, Mo- hammad Taher Pilehvar, and Nigel Collier. 2019. On the importance of the Kullback-Leibler diver- gence term in variational autoencoders for text gen- eration. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 118-127, Hong Kong. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Preventing posterior collapse with delta-vaes", "authors": [ { "first": "Ali", "middle": [], "last": "Razavi", "suffix": "" }, { "first": "A\u00e4ron", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Oord", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Poole", "suffix": "" }, { "first": "", "middle": [], "last": "Vinyals", "suffix": "" } ], "year": 2019, "venue": "7th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ali Razavi, A\u00e4ron van den Oord, Ben Poole, and Oriol Vinyals. 2019. Preventing posterior collapse with delta-vaes. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Learning deep disentangled embeddings with the f-statistic loss", "authors": [ { "first": "Karl", "middle": [], "last": "Ridgeway", "suffix": "" }, { "first": "C", "middle": [], "last": "Michael", "suffix": "" }, { "first": "", "middle": [], "last": "Mozer", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "31", "issue": "", "pages": "185--194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Ridgeway and Michael C Mozer. 2018. Learning deep disentangled embeddings with the f-statistic loss. In Advances in Neural Information Processing Systems, volume 31, pages 185-194. Curran Asso- ciates, Inc.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Benchmarks, algorithms, and metrics for hierarchical disentanglement", "authors": [ { "first": "Andrew", "middle": [], "last": "Slavin Ross", "suffix": "" }, { "first": "Finale", "middle": [], "last": "Doshi-Velez", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Slavin Ross and Finale Doshi-Velez. 2021. Benchmarks, algorithms, and metrics for hierarchi- cal disentanglement. CoRR, abs/2102.05185.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Intrinsic probing through dimension selection", "authors": [ { "first": "Adina", "middle": [], "last": "Lucas Torroba Hennigen", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Williams", "suffix": "" }, { "first": "", "middle": [], "last": "Cotterell", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "197--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucas Torroba Hennigen, Adina Williams, and Ryan Cotterell. 2020. Intrinsic probing through dimen- sion selection. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 197-216, Online. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Informationtheoretic probing with minimum description length", "authors": [ { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "183--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Voita and Ivan Titov. 2020. Information- theoretic probing with minimum description length. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 183-196, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Improved variational autoencoders for text modeling using dilated convolutions", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhiting", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "3881--3890", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved varia- tional autoencoders for text modeling using dilated convolutions. In Proceedings of the 34th Inter- national Conference on Machine Learning, ICML 2017, volume 70 of Proceedings of Machine Learn- ing Research, pages 3881-3890, Sydney, NSW, Aus- tralia. PMLR.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Character-level convolutional networks for text classification", "authors": [ { "first": "Xiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems", "volume": "1", "issue": "", "pages": "649--657", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Proceedings of the 28th International Conference on Neural Information Processing Sys- tems -Volume 1, NIPS'15, page 649-657, Cam- bridge, MA, USA. MIT Press.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Disentanglement scores across six metrics on top: YNOC dataset and bottom: POS dataset. For better illustration, we multiply the scores ofEastwood and Williams (2018) andKumar et al. (2018) by 10.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "differences between disentanglement metrics' scores of Init. coupling and others ( \u00a74.2).", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "Different coupling strategies for the latent code and decoder and their impacts on disentanglement on POS and YNOC.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF3": { "text": "Classification accuracy on DBpedia and Yahoo Question using different VAE models. 
Results are reported as mean and std across 3 randomly initialised runs.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF5": { "text": "= {(z_n, f_i)} D 10: Split D into training set TR and test set TE with proportion (80%, 20%) 11: Train 10 MLPs with only input and output layer on TR 12: Calculate the accuracy on TE for the 10 models 13: Calculate the mean and variance of the accuracies Algorithm 2 Metric of Kim and Mnih (2018) 1: D = \u2205 2: for d = 1, 2, . . . , dim_z do 3: Calculate the standard deviation \u03c3_d of dimension d 4: for f_i \u2208 F do 5: for n = 1, 2, . . . , N do 6: Sample s_n from j S_ij 7:", "uris": null, "num": null, "type_str": "figure" }, "TABREF0": { "content": "
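The Kim and Mnih (2018) procedure in Algorithm 2 above is truncated in this dump; the following is a minimal Python sketch of the standard recipe it follows (compute per-dimension standard deviations, encode batches of sentences that all share one fixed factor, vote with the least-varying dimension, and score a majority-vote classifier). The helpers encode and sample_fixed_factor are hypothetical stand-ins for the paper's setup, not its released code.

# Minimal sketch of a Kim & Mnih (2018)-style metric; `encode` maps sentences to
# latent codes and `sample_fixed_factor(i, k)` returns k sentences sharing the
# same value of generative factor i (both hypothetical).
import numpy as np

def factor_metric(encode, sample_fixed_factor, all_sentences,
                  num_factors, num_votes=800, batch_size=64, seed=0):
    rng = np.random.default_rng(seed)
    z_all = encode(all_sentences)                   # (N, dim_z)
    sigma = z_all.std(axis=0) + 1e-8                # per-dimension std (Algorithm 2, line 3)
    votes = []
    for _ in range(num_votes):
        i = int(rng.integers(num_factors))          # pick a factor f_i
        batch = sample_fixed_factor(i, batch_size)  # sentences sharing the value of f_i
        z = encode(batch) / sigma                   # normalise each dimension
        d_star = int(np.argmin(z.var(axis=0)))      # least-varying dimension votes for f_i
        votes.append((d_star, i))
    votes = np.asarray(votes)
    # Majority-vote classifier: each dimension predicts its most frequent factor.
    dims = np.unique(votes[:, 0])
    majority = {d: np.bincount(votes[votes[:, 0] == d, 1]).argmax() for d in dims}
    correct = sum(majority[d] == i for d, i in votes)
    return correct / len(votes)                     # higher is better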
Metric | Dis. | Com. | Info. | Ex.1\u2191 | Ex.2\u2191
Higgins et al. (2017) | Yes | No | No | 100 | 100
Ridgeway and Mozer (2018) | Yes | No | No | 100 | 100
Kim and Mnih (2018) | Yes | Yes | No | 100 | 100
Chen et al. (2018) | No | Yes | No | 81.0 | 55.73
Eastwood and Williams (2018) | Yes | Yes | Yes | 66.47 | 63.45
Kumar et al. (2018) | No | Yes | Yes | 4.68 | 3.98
", "text": "", "type_str": "table", "num": null, "html": null }, "TABREF1": { "content": "", "text": "The disentanglement (Dis.), completeness (Com.), and informativeness (Info.) criteria reflected in six metrics. The Ex.1 and Ex.2 columns are corresponding metrics' scores (%) on two ideally disentangled representations.", "type_str": "table", "num": null, "html": null }, "TABREF2": { "content": "
Simple Sentence Structures | # of Sentences
n. v100,000
n. [dogs cats foxes horses tigers]
v. [want need have get require]
adv. [really recently gradually frequently eventually]
adj. [happy big small beautiful fantastic]
prep. [on in for to of]
conj1. [although because when where whereas]
conj2. [and or]
comma [,]
end-punc. [. !]
", "text": ". n. end-punc. 200 n. v. adj. n. end-punc.1,000 n. adv. v. n. end-punc.1,000 n. adv. v. adj. n. end-punc.5,000 n. v. prep. n. end-punc.1,000 n. v. prep. adj. n. end-punc.5,000 n. adv. v. prep. n. end-punc.5,000 n. adv. v. prep. adj. n. end-punc.25,000 adj. n. v. n. end-punc.1,000 adj. n. v. adj. n. end-punc. 4,000 adj. n. adv. v. n. end-punc.5,000 adj. n. adv. v. adj. n. end-punc. 20,000 adj. n. v. prep. n. end-punc.5,000 adj. n. v. prep. adj. n. end-punc. 20,000 adj. n. adv. v. prep. n. end-punc.25,000 adj. n. adv. v. prep. adj. n. end-punc.", "type_str": "table", "num": null, "html": null }, "TABREF3": { "content": "", "text": "Simple sentence structures and the vocabulary used for each POS tag in our synthetic dataset.", "type_str": "table", "num": null, "html": null }, "TABREF5": { "content": "
[Figure residue: grouped bar charts of disentanglement scores. Legend: Autoencoder, Vanilla VAE, Beta-VAE, CCI-VAE, and MAT-VAE at several \u03b2/C settings; y-axis: Scores; x-axis: the metrics of Higgins et al., Ridgeway and Mozer, Kim and Mnih, Chen et al., Eastwood and Williams, and Kumar et al. The duplicated label block corresponds to the two panels (YNOC and POS).]
", "text": "Results are calculated on the test set. We report mean value and standard deviation across 5 runs.", "type_str": "table", "num": null, "html": null }, "TABREF6": { "content": "
AE | VAE | \u03b2-VAE (\u03b2=0.2) | \u03b2-VAE (\u03b2=0.4) | \u03b2-VAE (\u03b2=0.8) | CCI-VAE (C=5) | CCI-VAE (C=10) | MAT-VAE (\u03b2=0.1, \u03bb=0.1) | MAT-VAE (\u03b2=0.01, \u03bb=0.1)
YNOC | 0.22\u00b10.03 | 0.03\u00b10.02 | 0.30\u00b10.03 | 0.30\u00b10.02 | 0.30\u00b10.05 | 0.32\u00b10.04 | 0.30\u00b10.01 | 0.36\u00b10.03 | 0.43\u00b10.09
POS | 0.30\u00b10.05 | 0.21\u00b10.03 | 0.25\u00b10.00 | 0.27\u00b10.01 | 0.29\u00b10.04 | 0.29\u00b10.05 | 0.28\u00b10.01 | 0.29\u00b10.00 | 0.28\u00b10.01
", "text": "", "type_str": "table", "num": null, "html": null }, "TABREF7": { "content": "
[Figure residue: axis labels of a correlation heat map over the six disentanglement metrics plus Hoyer, AU, Rec, and KL, repeated for the YNOC and POS datasets.]
Figure 2: Correlation coefficients between six disentanglement metrics.
(a) Different coupling strategies for the latent code and decoder (\u00a74.2). Gray box denotes decoder.
[Figure residue: bar chart of absolute score differences; y-axis: Absolute Difference; legend: POS and YNOC with Concat., Init.+Concat., and Concat. w/o Emb.]
", "text": "Hoyer scores are calculated on the test set. We report mean value and standard deviation across 5 runs. Hoyer, AU, Rec, and KL on Upper Triangle: YNOC dataset and Lower Triangle: POS dataset.", "type_str": "table", "num": null, "html": null }, "TABREF9": { "content": "", "text": "", "type_str": "table", "num": null, "html": null }, "TABREF11": { "content": "
", "text": "The homotopy experiments, comparing an ideal generator and the best disentangled VAEs according to Higgins et al. (2017) (VAE-Higg) and Chen et al.", "type_str": "table", "num": null, "html": null }, "TABREF12": { "content": "
11: Train a linear SVM classifier on TR_d
12: Record the accuracy acc_d on TE_d
13: d
", "text": "ij)} D_id 10: Split D_d into training set TR_d and test set TE_d with proportion (80%, 20%)", "type_str": "table", "num": null, "html": null } } } }