Evaluation of dynamic causal modelling and Bayesian model selection using simulations of networks of spiking neurons ; Inferring the mechanisms underlying physiological and pathological processes in the brain from recorded electrical activity is challenging. Bayesian model selection and dynamic causal modelling aim to identify likely biophysical models to explain data and to fit the model parameters. Here, we use data generated by simulations to investigate the effectiveness of Bayesian model selection and dynamic causal modelling when applied at steady state in the frequency domain to identify and fit Jansen-Rit models. We first investigate the impact of the necessary assumption of linearity on the dynamics of the Jansen-Rit model. We then apply dynamic causal modelling and Bayesian model selection to data generated from simulations of linear neural mass models, nonlinear neural mass models, and networks of discrete spiking neurons. Action potentials are a characteristic feature of neuronal dynamics but have not previously been explicitly included in simulations used to test Bayesian model selection or dynamic causal modelling. We find that the assumption of linearity abolishes the qualitative transitions seen as a function of the connectivity parameter in the original Jansen-Rit model. As in previous work, we find that the recovery procedures are effective when applied to data from linear Jansen-Rit neural mass models; however, when applying them to nonlinear neural mass models and networks of discrete spiking neurons, we find that their effectiveness is significantly reduced, suggesting caution is required when applying these methods.
An online sequence-to-sequence model for noisy speech recognition ; Generative models have long been the dominant approach for speech recognition. The success of these models, however, relies on the use of sophisticated recipes and complicated machinery that is not easily accessible to non-practitioners. Recent innovations in deep learning have given rise to an alternative: discriminative models called sequence-to-sequence models, which can almost match the accuracy of state-of-the-art generative models. While these models are easy to train, as they can be trained end-to-end in a single step, they have a practical limitation: they can only be used for offline recognition. This is because the models require that the entirety of the input sequence be available at the beginning of inference, an assumption that is not valid for instantaneous speech recognition. To address this problem, online sequence-to-sequence models were recently introduced. These models are able to start producing outputs as data arrives, once the model feels confident enough to output partial transcripts. These models, like sequence-to-sequence models, are causal: the output produced by the model up until any time t affects the features that are computed subsequently. This makes the model inherently more powerful than generative models that are unable to change features that are computed from the data. This paper highlights two main contributions: an improvement to online sequence-to-sequence model training, and its application to noisy settings with mixed speech from two speakers.
MULDEF: Multi-model-based Defense Against Adversarial Examples for Neural Networks ; Despite being popularly used in many applications, neural network models have been found to be vulnerable to adversarial examples, i.e., carefully crafted examples aiming to mislead machine learning models. Adversarial examples can pose potential risks on safety- and security-critical applications. However, existing defense approaches are still vulnerable to attacks, especially in a white-box attack scenario. To address this issue, we propose a new defense approach, named MulDef, based on robustness diversity. Our approach consists of (1) a general defense framework based on multiple models and (2) a technique for generating these multiple models to achieve high defense capability. In particular, given a target model, our framework includes multiple models constructed from the target model to form a model family. The model family is designed to achieve robustness diversity, i.e., an adversarial example successfully attacking one model cannot succeed in attacking other models in the family. At runtime, a model is randomly selected from the family to be applied on each input example. Our general framework can inspire rich future research to construct a desirable model family achieving higher robustness diversity. Our evaluation results show that MulDef (with only up to 5 models in the family) can substantially improve the target model's accuracy on adversarial examples by 22-74% in a white-box attack scenario, while maintaining similar accuracy on legitimate examples.
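As a rough illustration of the runtime behaviour described above, the sketch below draws one family member uniformly at random per input; the model_family list and predict interface are hypothetical, not the paper's actual code.

```python
import random

def muldef_predict(model_family, x):
    """Serve each input with a randomly chosen member of the model family,
    so a white-box attacker cannot tell which model will see the example."""
    model = random.choice(model_family)  # fresh draw per input example
    return model.predict(x)              # assumed scikit-learn-style API
```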
Self-Distillation Mixup Training for Non-autoregressive Neural Machine Translation ; Recently, non-autoregressive (NAT) models predict outputs in parallel, achieving substantial improvements in generation speed compared to autoregressive (AT) models. While performing worse on raw data, most NAT models are trained as student models on distilled data generated by AT teacher models, which is known as sequence-level knowledge distillation. An effective training strategy to improve the performance of AT models is Self-Distillation Mixup (SDM) Training, which pre-trains a model on raw data, generates distilled data by the pre-trained model itself, and finally re-trains a model on the combination of raw data and distilled data. In this work, we aim to apply SDM to NAT models, but find that directly adopting SDM to NAT models yields no improvement in terms of translation quality. Through careful analysis, we observe that the invalidation is correlated to Modeling Diversity and Confirmation Bias between the AT teacher model and the NAT student models. Based on these findings, we propose an enhanced strategy named SDMRT by adding two stages to classic SDM: one is Pre-Rerank on self-distilled data, the other is Fine-Tune on filtered teacher-distilled data. Our results outperform baselines by 0.6 to 1.2 BLEU on multiple NAT models. As another bonus, for iterative refinement NAT models, our methods can outperform baselines within half the iteration number, which means 2X acceleration.
The Model Forest Ensemble Kalman Filter ; Traditional data assimilation uses information obtained from the propagation of one physics-driven model and combines it with information derived from real-world observations in order to obtain a better estimate of the truth of some natural process. However, in many situations multiple simulation models that describe the same physical phenomenon are available. Such models can have different sources. On one hand there are theory-guided models constructed from first physical principles, while on the other there are data-driven models constructed from snapshots of high-fidelity information. In this work we provide a possible way to make use of this collection of models in data assimilation by generalizing the idea of model hierarchies into model forests: collections of high-fidelity and low-fidelity models organized in a grouping of model trees so as to capture various relationships between different models. We generalize the multifidelity ensemble Kalman filter that previously operated on model hierarchies into the model forest ensemble Kalman filter through a generalized theory of linear control variates. This new filter allows for much more freedom when treading the line between accuracy and speed. Numerical experiments with a high-fidelity quasi-geostrophic model and two of its low-fidelity reduced-order models validate the accuracy of our approach.
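For context, the standard linear control variate identity that such multifidelity constructions build on (generic notation, not the paper's):

```latex
% X: high-fidelity estimator; U: correlated low-fidelity quantity with
% known (or separately estimated) mean. The corrected estimator is
% unbiased for E[X], and the optimal gain S* minimizes its variance.
\tilde{X} = X - S\bigl(U - \mathbb{E}[U]\bigr), \qquad
S^{*} = \operatorname{Cov}(X, U)\,\operatorname{Var}(U)^{-1}
```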
A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models ; Prompt engineering is a technique that involves augmenting a large pretrained model with task-specific hints, known as prompts, to adapt the model to new tasks. Prompts can be created manually as natural language instructions or generated automatically as either natural language instructions or vector representations. Prompt engineering enables predictions based solely on prompts without updating model parameters, and eases the application of large pretrained models to real-world tasks. In past years, prompt engineering has been well studied in natural language processing. Recently, it has also been intensively studied in vision-language modeling. However, there is currently a lack of a systematic overview of prompt engineering on pretrained vision-language models. This paper aims to provide a comprehensive survey of cutting-edge research in prompt engineering on three types of vision-language models: multimodal-to-text generation models (e.g., Flamingo), image-text matching models (e.g., CLIP), and text-to-image generation models (e.g., Stable Diffusion). For each type of model, a brief model summary, prompting methods, prompting-based applications, and the corresponding responsibility and integrity issues are summarized and discussed. Furthermore, the commonalities and differences between prompting on vision-language models, language models, and vision models are also discussed. The challenges, future directions, and research opportunities are summarized to foster future research on this topic.
Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation ; Diffusion models have showcased strong capabilities in image synthesis, being used in many computer vision tasks with great success. To this end, we propose to explore a new use case, namely to copy black-box classification models without having access to the original training data, the architecture, and the weights of the model, i.e., the model is only exposed through an inference API. More specifically, we can only observe the soft or hard labels for some image samples passed as input to the model. Furthermore, we consider an additional constraint limiting the number of model calls, mostly focusing our research on few-call model stealing. In order to solve the model extraction task given the applied restrictions, we propose the following framework. As training data, we create a synthetic data set (called proxy data set) by leveraging the ability of diffusion models to generate realistic and diverse images. Given a maximum number of allowed API calls, we pass the respective number of samples through the black-box model to collect labels. Finally, we distill the knowledge of the black-box teacher (attacked model) into a student model (copy of the attacked model), harnessing both labeled and unlabeled data generated by the diffusion model. We employ a novel active self-paced learning framework to make the most of the proxy data during distillation. Our empirical results on two data sets confirm the superiority of our framework over two state-of-the-art methods in the few-call model extraction scenario.
Generating Annotated High-Fidelity Images Containing Multiple Coherent Objects ; Recent developments related to generative models have made it possible to generate diverse high-fidelity images. In particular, layout-to-image generation models have gained significant attention due to their capability to generate realistic complex images containing distinct objects. These models are generally conditioned on either semantic layouts or textual descriptions. However, unlike natural images, providing auxiliary information can be extremely hard in domains such as biomedical imaging and remote sensing. In this work, we propose a multi-object generation framework that can synthesize images with multiple objects without explicitly requiring their contextual information during the generation process. Based on a vector-quantized variational autoencoder (VQ-VAE) backbone, our model learns to preserve spatial coherency within an image as well as semantic coherency between the objects and the background through two powerful autoregressive priors: PixelSNAIL and LayoutPixelSNAIL. While PixelSNAIL learns the distribution of the latent encodings of the VQ-VAE, LayoutPixelSNAIL is used to specifically learn the semantic distribution of the objects. An implicit advantage of our approach is that the generated samples are accompanied by object-level annotations. We demonstrate how coherency and fidelity are preserved with our method through experiments on the Multi-MNIST and CLEVR datasets, thereby outperforming state-of-the-art multi-object generative methods. The efficacy of our approach is demonstrated through application on medical imaging datasets, where we show that augmenting the training set with generated samples using our approach improves the performance of existing models.
Generating unseen complex scenes: are we there yet? ; Although recent complex scene conditional generation models generate increasingly appealing scenes, it is very hard to assess which models perform better and why. This is often due to models being trained to fit different data splits, and defining their own experimental setups. In this paper, we propose a methodology to compare complex scene conditional generation models, and provide an in-depth analysis that assesses the ability of each model to (1) fit the training distribution and hence perform well on seen conditionings, (2) generalize to unseen conditionings composed of seen object combinations, and (3) generalize to unseen conditionings composed of unseen object combinations. As a result, we observe that recent methods are able to generate recognizable scenes given seen conditionings, and exploit compositionality to generalize to unseen conditionings with seen object combinations. However, all methods suffer from noticeable image quality degradation when asked to generate images from conditionings composed of unseen object combinations. Moreover, through our analysis, we identify the advantages of different pipeline components, and find that (1) encouraging compositionality through instance-wise spatial conditioning normalizations increases robustness to both types of unseen conditionings, (2) using semantically aware losses such as the scene-graph perceptual similarity helps improve some dimensions of the generation process, and (3) enhancing the quality of generated masks and the quality of the individual objects are crucial steps to improve robustness to both types of unseen conditionings.
CodeGen-Test: An Automatic Code Generation Model Integrating Program Test Information ; Automatic code generation is to generate program code according to a given natural language description. The current mainstream approach uses neural networks to encode the natural language description, output an abstract syntax tree (AST) at the decoder, and then convert the AST into program code. While the generated code largely conforms to specific syntax rules, two problems are still ignored. One is missing program testing, an essential step in the process of complete code implementation; the other is only focusing on the syntactic compliance of the generated code while ignoring the more important functional requirements of the program. This paper proposes a CodeGen-Test model, which adds program testing steps and incorporates program test information to iteratively generate code that meets the functional requirements of the program, thereby improving the quality of code generation. At the same time, the paper proposes a new evaluation metric, test accuracy (TestAcc), which represents the proportion of generated code that passes the program tests. Unlike previous evaluation metrics, which only evaluate the quality of code generation from the perspective of character similarity, TestAcc evaluates the quality of code generation from the perspective of program functionality. Moreover, the paper evaluates the CodeGen-Test model on a Python dataset (Hearthstone legend). The experimental results show the proposed method can effectively improve the quality of generated code. Compared with the existing optimal model, the CodeGen-Test model improves the BLEU value by 0.2, the Rouge-L value by 0.3, and TestAcc by 6%.
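As a minimal sketch of how the TestAcc metric described above could be computed (the run_tests harness and the data structures are hypothetical assumptions, not the paper's implementation):

```python
def test_acc(generated_programs, test_suites, run_tests):
    """Fraction of generated programs that pass their associated tests.
    run_tests(program, tests) -> bool is an assumed execution harness."""
    passed = sum(run_tests(p, t) for p, t in zip(generated_programs, test_suites))
    return passed / len(generated_programs)
```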
MDM: Molecular Diffusion Model for 3D Molecule Generation ; Molecule generation, especially generating 3D molecular geometries from scratch (i.e., 3D de novo generation), has become a fundamental task in drug design. Existing diffusion-based 3D molecule generation methods can suffer from unsatisfactory performance, especially when generating large molecules. At the same time, the generated molecules lack enough diversity. This paper proposes a novel diffusion model to address these two challenges. First, interatomic relations are not explicit in molecules' 3D point cloud representations. Thus, it is difficult for existing generative models to capture the potential interatomic forces and abundant local constraints. To tackle this challenge, we propose to augment the potential interatomic forces and further involve dual equivariant encoders to encode interatomic forces of different strengths. Second, existing diffusion-based models essentially shift elements in geometry along the gradient of data density. Such a process lacks enough exploration in the intermediate steps of the Langevin dynamics. To address this issue, we introduce a distributional controlling variable in each diffusion (reverse) step to enforce thorough exploration and further improve generation diversity. Extensive experiments on multiple benchmarks demonstrate that the proposed model significantly outperforms existing methods for both unconditional and conditional generation tasks. We also conduct case studies to help understand the physicochemical properties of the generated molecules.
Causal Intervention for Abstractive Related Work Generation ; Abstractive related work generation has attracted increasing attention in generating coherent related work that better helps readers grasp the background of current research. However, most existing abstractive models ignore the inherent causality of related work generation, leading to low quality of the generated related work and spurious correlations that affect the models' generalizability. In this study, we argue that causal intervention can address these limitations and improve the quality and coherence of the generated related work. To this end, we propose a novel Causal Intervention Module for Related Work Generation (CaM) to effectively capture causalities in the generation process and improve the quality and coherence of the generated related work. Specifically, we first model the relations among sentence order, document relation, and transitional content in related work generation using a causal graph. Then, to implement the causal intervention and mitigate the negative impact of spurious correlations, we use do-calculus to derive ordinary conditional probabilities and identify causal effects through CaM. Finally, we subtly fuse CaM with Transformer to obtain an end-to-end generation model. Extensive experiments on two real-world datasets show that causal interventions in CaM can effectively promote the model to learn causal relations and produce related work of higher quality and coherence.
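For background on the do-calculus step mentioned above, the canonical identity of this kind is the backdoor adjustment, shown here in generic notation (not the paper's exact causal graph):

```latex
% With Z blocking all backdoor paths from X to Y, the interventional
% distribution reduces to ordinary conditional probabilities:
P\bigl(Y \mid \mathrm{do}(X)\bigr) = \sum_{z} P(Y \mid X, Z = z)\, P(Z = z)
```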
Factorization of Language Models through Backing-Off Lattices ; Factorization of statistical language models is the task of resolving the most discriminative model into factored models and determining a new model by combining them so as to provide a better estimate. Most previous work mainly focuses on factorizing models of sequential events, each of which allows only one factorization manner. To enable parallel factorization, which allows a model event to be resolved in more than one way at the same time, we propose a general framework, where we adopt a backing-off lattice to reflect parallel factorizations and to define the paths along which a model is resolved into factored models, we use a mixture model to combine parallel paths in the lattice, and we generalize Katz's backing-off method to integrate all the mixture models obtained by traversing the entire lattice. Based on this framework, we formulate two types of model factorization that are used in natural language modeling.
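As a reference point for the generalization mentioned above, Katz's back-off estimate in its standard n-gram form (textbook notation, independent of the lattice construction):

```latex
% Use the discounted maximum-likelihood estimate when the n-gram was seen;
% otherwise back off to the (n-1)-gram model, with alpha chosen so the
% distribution still sums to one.
P_{\mathrm{katz}}\bigl(w_i \mid w_{i-n+1}^{\,i-1}\bigr) =
\begin{cases}
d_{w_{i-n+1}^{\,i}} \, \dfrac{C\bigl(w_{i-n+1}^{\,i}\bigr)}{C\bigl(w_{i-n+1}^{\,i-1}\bigr)}
  & \text{if } C\bigl(w_{i-n+1}^{\,i}\bigr) > 0, \\[1.5ex]
\alpha_{w_{i-n+1}^{\,i-1}} \, P_{\mathrm{katz}}\bigl(w_i \mid w_{i-n+2}^{\,i-1}\bigr)
  & \text{otherwise.}
\end{cases}
```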
Specification, Construction, and Exact Reduction of State Transition System Models of Biochemical Processes ; Biochemical reaction systems may be viewed as discrete event processes characterized by a number of states and state transitions. These systems may be modeled as state transition systems with transitions representing individual reaction events. Since they often involve a large number of interactions, it can be difficult to construct such a model for a system, and since the resulting state-level model can involve a huge number of states, model analysis can be difficult or impossible. Here, we describe methods for the high-level specification of a system using hypergraphs, for the automated generation of a state-level model from a high-level model, and for the exact reduction of a state-level model using information from the high-level model. Exact reduction is achieved through the automated application of symmetry reduction and invariant manifold reduction techniques to the high-level model, allowing potentially significant reductions without the need to generate a full model. The application of the method to biochemical reaction systems is illustrated by models describing a hypothetical ion channel at several levels of complexity. The method allows for the reduction of the otherwise intractable example models to a manageable size.
Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window ; Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. Keywords: Bayesian model averaging; model uncertainty; nowcasting; Occam's window.
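For orientation, the core DMA recursion that such methods build on (the standard forgetting-factor form due to Raftery et al., in generic notation):

```latex
% Model probabilities are flattened with a forgetting factor
% alpha in (0,1] between observations, then updated by each model's
% one-step-ahead predictive likelihood f_k:
\pi_{t \mid t-1, k} =
  \frac{\pi_{t-1 \mid t-1, k}^{\alpha}}
       {\sum_{\ell} \pi_{t-1 \mid t-1, \ell}^{\alpha}},
\qquad
\pi_{t \mid t, k} \propto \pi_{t \mid t-1, k}\, f_k\bigl(y_t \mid y^{t-1}\bigr)
```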
Multidisciplinary Engineering Models: Methodology and Case Study in Spreadsheet Analytics ; This paper demonstrates a methodology to help practitioners maximise the utility of complex multidisciplinary engineering models implemented as spreadsheets, an area presenting unique challenges. As motivation we investigate the expanding use of Integrated Resource Management (IRM) models which assess the sustainability of urban masterplan designs. IRM models reflect the inherent complexity of multidisciplinary sustainability analysis by integrating models from many disciplines. This complexity makes their use time-consuming and reduces their adoption. We present a methodology and toolkit for analysing multidisciplinary engineering models implemented as spreadsheets to alleviate such problems and increase their adoption. For a given output a relevant slice of the model is extracted, visualised and analysed by computing model and interdisciplinary metrics. A sensitivity analysis of the extracted model supports engineers in their optimisation efforts. These methods expose, manage and reduce model complexity and risk whilst giving practitioners insight into multidisciplinary model composition. We report application of the methodology to several generations of an industrial IRM model and detail the insight generated, particularly considering model evolution.
Local Probabilistic Model for Bayesian Classification: a Generalized Local Classification Model ; In Bayesian classification, it is important to establish a probabilistic model for each class for likelihood estimation. Most previous methods modeled the probability distribution in the whole sample space. However, real-world problems are usually too complex to model in the whole sample space; some fundamental assumptions are required to simplify the global model, for example, the class conditional independence assumption for naive Bayesian classification. In this paper, with the insight that the distribution in a local sample space should be simpler than that in the whole sample space, a local probabilistic model established for a local region is expected to be much simpler and can relax the fundamental assumptions that may not hold in the whole sample space. Based on these advantages, we propose establishing local probabilistic models for Bayesian classification. In addition, a Bayesian classifier adopting a local probabilistic model can even be viewed as a generalized local classification model; by tuning the size of the local region and the corresponding local model assumption, a fitting model can be established for a particular classification problem. The experimental results on several real-world datasets demonstrate the effectiveness of local probabilistic models for Bayesian classification.
Model checking and model synthesis from partial models: a logic-based perspective ; I consider the following generic scenario: an abstract model M of some 'real' system is only partially presented, or partially known to us, and we have to ensure that the actual system satisfies a given specification, formalised in some logical language. This scenario has at least two essentially different interpretations, leading to two essentially different formal logical and algorithmic problems: Model Synthesis from Partial Models, where some 'admissible' extension of M to a full model must satisfy the specification, and Model Checking of Partial Models, where all 'admissible' extensions of M to a full model must satisfy the specification. These problems naturally extend the classical logical decision problems of Satisfiability, Validity, and Model Checking. Here I briefly discuss both problems in the contexts of classical, modal and temporal logics. I make some observations, state some open questions, and outline a general tableaux-style procedure that solves the problem of unconstrained model synthesis from finite partial models for several well-known modal and temporal logics, incl. K, LTL, CTL, ATL.
Model Compression for Domain Adaptation through Causal Effect Estimation ; Recent improvements in the predictive quality of natural language processing systems are often dependent on a substantial increase in the number of model parameters. This has led to various attempts at compressing such models, but existing methods have not considered the differences in the predictive power of various model components or in the generalizability of the compressed models. To understand the connection between model compression and out-of-distribution generalization, we define the task of compressing language representation models such that they perform best in a domain adaptation setting. We choose to address this problem from a causal perspective, attempting to estimate the average treatment effect (ATE) of a model component, such as a single layer, on the model's predictions. Our proposed ATE-guided Model Compression scheme (AMoC) generates many model candidates, differing by the model components that were removed. Then, we select the best candidate through a stepwise regression model that utilizes the ATE to predict the expected performance on the target domain. AMoC outperforms strong baselines on dozens of domain pairs across three text classification and sequence tagging tasks.
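For reference, the standard definition of the average treatment effect invoked above, written generically for a binary "treatment" T (e.g., whether a model component is removed) and outcome Y (the model's predictions); this is textbook notation, not the paper's:

```latex
\mathrm{ATE} = \mathbb{E}\bigl[\,Y \mid \mathrm{do}(T = 1)\,\bigr]
             - \mathbb{E}\bigl[\,Y \mid \mathrm{do}(T = 0)\,\bigr]
```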
The Generalized Cascade Click Model: A Unified Framework for Estimating Click Models ; Given the vital importance of search engines for finding digital information, there has been much scientific attention on how users interact with search engines, and how such behavior can be modeled. Many models of user search engine interaction, which in the literature are known as click models, come in the form of Dynamic Bayesian Networks. Although many authors have used the resemblance between the different click models to derive estimation procedures for these models, in particular in the form of expectation maximization (EM), this still commonly requires considerable work, in particular when it comes to deriving the E-step. What we propose in this paper is that this derivation is commonly unnecessary: many existing click models can in fact, under certain assumptions, be optimized as if they were Input-Output Hidden Markov Models (IO-HMMs), for which the forward-backward equations immediately provide this E-step. To arrive at that conclusion, we present the Generalized Cascade Model (GCM) and show how this model can be estimated using the IO-HMM EM framework, and provide two examples of how existing click models can be mapped to GCM. Our GCM approach to estimating click models has also been implemented in the gecasmo Python package.
Transmuted Generalized Inverse Weibull Distribution ; A generalization of the generalized inverse Weibull distribution, the so-called transmuted generalized inverse Weibull distribution, is proposed and studied. We use the quadratic rank transmutation map (QRTM) to generate a flexible family of probability distributions, taking the generalized inverse Weibull distribution as the base distribution and introducing a new parameter that offers more distributional flexibility. Various structural properties, including explicit expressions for the moments, quantiles, and moment generating function of the new distribution, are derived. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. A real data set is used to compare the flexibility of the transmuted version versus the generalized inverse Weibull distribution.
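For concreteness, the quadratic rank transmutation map referred to above takes a base CDF G(x) to the transmuted CDF (standard form, due to Shaw and Buckley):

```latex
% Here G(x) would be the generalized inverse Weibull CDF; lambda = 0
% recovers the base distribution.
F(x) = (1 + \lambda)\, G(x) - \lambda\, G(x)^{2}, \qquad |\lambda| \le 1
```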
FairGAN: Fairness-aware Generative Adversarial Networks ; Fairness-aware learning is increasingly important in data mining. Discrimination prevention aims to prevent discrimination in the training data before it is used to conduct predictive analysis. In this paper, we focus on fair data generation that ensures the generated data is discrimination-free. Inspired by generative adversarial networks (GANs), we present fairness-aware generative adversarial networks, called FairGAN, which are able to learn a generator producing fair data while also preserving good data utility. Compared with naive fair data generation models, FairGAN further ensures that classifiers trained on the generated data can achieve fair classification on real data. Experiments on a real dataset show the effectiveness of FairGAN.
Factorising AMR generation through syntax ; Generating from Abstract Meaning Representation (AMR) is an underspecified problem, as many syntactic decisions are not constrained by the semantic graph. To explicitly account for this underspecification, we break down generating from AMR into two steps: first generate a syntactic structure, and then generate the surface form. We show that decomposing the generation process this way leads to state-of-the-art single-model performance generating from AMR without additional unlabelled data. We also demonstrate that we can generate meaning-preserving syntactic paraphrases of the same AMR graph, as judged by humans.
Extensions of Generic DOL for Generic Ontology Design Patterns ; Generic ontologies were introduced as an extension (Generic DOL) of the Distributed Ontology, Modeling and Specification Language (DOL), with the aim of providing a language for Generic Ontology Design Patterns. In this paper we present a number of new language constructs that increase the expressivity and the generality of Generic DOL, among them sequential and optional parameters, list parameters with recursion, and local sub-patterns. These are illustrated with non-trivial patterns (generic value sets and nested qualitatively graded relations), demonstrated as definitional building blocks in an application domain.
Shared Loss between Generators of GANs ; Generative adversarial networks are generative models that are capable of replicating the implicit probability distribution of the input data with high accuracy. Traditionally, GANs consist of a Generator and a Discriminator which interact with each other to produce highly realistic artificial data. Traditional GANs fall prey to the mode collapse problem, which means that they are unable to generate the different variations of data present in the input dataset. Recently, multiple generators have been used to produce more realistic output by mitigating the mode collapse problem. We build on this multiple-generator framework. The novelty in this paper lies in making the generators compete against each other while interacting with the discriminator simultaneously. We show that this causes a dramatic reduction in the training time for GANs without affecting performance.
BRST Invariant Theory of a Generalized (1+1)-Dimensional Nonlinear Sigma Model with Topological Term ; We give a generalized Lagrangian density of the (1+1)-dimensional O(3) nonlinear sigma model with subsidiary constraints, different Lagrange multiplier fields, and a topological term; we find a lost intrinsic constraint condition and convert the subsidiary constraints into inner constraints in the nonlinear sigma model. We give an example of not introducing the lost constraint; by comparing this example with the case of introducing the lost constraint, we find that when the lost constraint is not introduced, one is forced to introduce a large number of non-intrinsic constraints. We further deduce the gauge generator and give the general BRST transformation of the model under general conditions. It is discovered that there exists a gauge parameter originating from the freedom degree of BRST transformation in a general O(3) nonlinear sigma model, and we obtain the general commutation relations of the ghost field.
Hyperbolic towers and independent generic sets in the theory of free groups ; We use hyperbolic towers to answer some model theoretic questions around the generic type in the theory of free groups. We show that all the finitely generated models of this theory realize the generic type p_0, but that there is a finitely generated model which omits p_0^(2). We exhibit a finitely generated model in which there are two maximal independent sets of realizations of the generic type which have different cardinalities. We also show that a free product of homogeneous groups is not necessarily homogeneous.
Fermion Hierarchy from Sfermion Anarchy ; We present a framework to generate the hierarchical flavor structure of Standard Model quarks and leptons from loops of superpartners. The simplest model consists of the minimal supersymmetric standard model with tree-level Yukawa couplings for the third generation only and anarchic squark and slepton mass matrices. Agreement with constraints from low energy flavor observables, in particular kaon mixing, is obtained for supersymmetric particles with masses at the PeV scale or above. In our framework both the second and the first generation fermion masses are generated at one loop. Despite this, a novel mechanism generates a hierarchy between the first and second generations without imposing a symmetry or small parameters. A second-to-first generation mass ratio of order 100 is typical. The minimal supersymmetric standard model thus includes all the necessary ingredients to realize a fermion spectrum that is qualitatively similar to observation, with hierarchical masses and mixing. The minimal framework produces only a few quantitative discrepancies with observation, most notably the muon mass is too low. We discuss simple modifications which resolve this and also investigate the compatibility of our model with gauge and Yukawa coupling unification.
On Bivariate Generalized Linear Failure Rate-Power Series Class of Distributions ; Recently it has been observed that the bivariate generalized linear failure rate distribution can be used quite effectively to analyze lifetime data in two dimensions. This paper introduces a more general class of bivariate distributions. We refer to this new class of distributions as the bivariate generalized linear failure rate-power series model. This new class of bivariate distributions contains several lifetime models, such as the generalized linear failure rate-power series, bivariate generalized linear failure rate, and bivariate generalized linear failure rate-geometric distributions, as special cases among others. The construction and characteristics of the proposed bivariate distribution are presented along with estimation procedures for the model parameters based on maximum likelihood. The marginal and conditional laws are also studied. We present an application to a real data set where our model provides a better fit than other models.
Extension of the General Thermal Field Equation for nanosized emitters ; During the previous decade, K. L. Jensen et al. developed a general analytical model that successfully describes electron emission from metals both in the field and thermionic regimes, as well as in the transition region. In that development, the standard image-corrected triangular potential barrier was used. This barrier model is valid only for planar surfaces and therefore cannot be used in general for modern nanometric emitters. In a recent publication the authors showed that the standard Fowler-Nordheim theory can be generalized for highly curved emitters if a quadratic term is included in the potential model. In this paper we extend this generalization to high temperatures and include both the thermal and intermediate regimes. This is achieved by applying the general method developed by Jensen to the quadratic barrier model of our previous publication. We obtain results that are in good agreement with fully numerical calculations for radii R > 4 nm, while our calculated current density differs by a factor of up to 27 from the one predicted by Jensen's standard General-Thermal-Field (GTF) equation. Our extended GTF equation has applications to modern sharp electron sources, beam simulation models, and vacuum breakdown theory.
Model-based Test Generation for Robotic Software: Automata versus Belief-Desire-Intention Agents ; Robotic code needs to be verified to ensure its safety and functional correctness, especially when the robot is interacting with people. Testing real code in simulation is a viable option. However, generating tests that cover rare scenarios, as well as exercising most of the code, is a challenge amplified by the complexity of the interactions between the environment and the software. Model-based test generation methods can automate otherwise manual processes and facilitate reaching rare scenarios during testing. In this paper, we compare using Belief-Desire-Intention (BDI) agents as models for test generation with more conventional automata-based techniques that exploit model checking, in terms of practicality, performance, transferability to different scenarios, and exploration coverage, through two case studies: a cooperative manufacturing task, and a home care scenario. The results highlight the advantages of using BDI agents for test generation. BDI agents naturally emulate the agency present in Human-Robot Interactions (HRIs), and are thus more expressive than automata. The performance of the BDI-based test generation is at least as high, and the achieved coverage is higher or equivalent, compared to test generation based on model-checking automata.
Automatic Generation of Grounded Visual Questions ; In this paper, we propose the first model able to generate visually grounded questions with diverse types for a single image. Visual question generation is an emerging topic which aims to ask questions in natural language based on visual input. To the best of our knowledge, it lacks automatic methods to generate meaningful questions with various types for the same visual input. To circumvent the problem, we propose a model that automatically generates visually grounded questions with varying types. Our model takes as input both images and the captions generated by a dense caption model, samples the most probable question types, and generates the questions in sequence. The experimental results on two real-world datasets show that our model outperforms the strongest baseline in terms of both correctness and diversity by a wide margin.
Adversarial examples for generative models ; We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network.
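A minimal sketch of the third, latent-matching attack idea described above (PyTorch-style; the pretrained encoder, its interface, and all hyperparameters are assumptions for illustration, not the paper's code):

```python
import torch

def latent_attack(encoder, x_source, x_target, steps=200, lr=1e-2, c=1.0):
    """Perturb x_source so its latent code approaches that of x_target,
    while a penalty keeps the perturbation small."""
    z_target = encoder(x_target).detach()
    delta = torch.zeros_like(x_source, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        z_adv = encoder(x_source + delta)
        loss = (z_adv - z_target).pow(2).sum() + c * delta.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x_source + delta).detach()
```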
Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks ; Adversarial attacks and the development of deep neural networks robust against them are currently two widely researched topics. The robustness of Learning Vector Quantization (LVQ) models against adversarial attacks has, however, not yet been studied to the same extent. We therefore present an extensive evaluation of three LVQ models: Generalized LVQ, Generalized Matrix LVQ and Generalized Tangent LVQ. The evaluation suggests that both Generalized LVQ and Generalized Tangent LVQ have a high base robustness, on par with the current state-of-the-art in robust neural network methods. In contrast to this, Generalized Matrix LVQ shows a high susceptibility to adversarial attacks, scoring consistently behind all other models. Additionally, our numerical evaluation indicates that increasing the number of prototypes per class improves the robustness of the models.
Strategies for Structuring Story Generation ; Writers generally rely on plans or sketches to write long stories, but most current language models generate word by word from left to right. We explore coarse-to-fine models for creating narrative texts of several hundred words, and introduce new models which decompose stories by abstracting over actions and entities. The model first generates the predicate-argument structure of the text, where different mentions of the same entity are marked with placeholder tokens. It then generates a surface realization of the predicate-argument structure, and finally replaces the entity placeholders with context-sensitive names and references. Human judges prefer the stories from our models to a wide range of previous approaches to hierarchical text generation. Extensive analysis shows that our methods can help improve the diversity and coherence of events and entities in generated stories.
Rank3DGAN: Semantic mesh generation using relative attributes ; In this paper, we investigate a novel problem of using generative adversarial networks in the task of 3D shape generation according to semantic attributes. Recent works map 3D shapes into the 2D parameter domain, which enables training Generative Adversarial Networks (GANs) for the 3D shape generation task. We extend these architectures to the conditional setting, where we generate 3D shapes with respect to subjective attributes defined by the user. Given pairwise comparisons of 3D shapes, our model performs two tasks: it learns a generative model with a controlled latent space, and a ranking function for the 3D shapes based on their multi-chart representation in 2D. The capability of the model is demonstrated with experiments on the HumanShape, Basel Face Model and reconstructed 3D CUB datasets. We also present various applications that benefit from our model, such as multi-attribute exploration, mesh editing, and mesh attribute transfer.
Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable Embeddings with Generative Priors ; The goal of standard 1-bit compressive sensing is to accurately recover an unknown sparse vector from binary-valued measurements, each indicating the sign of a linear function of the vector. Motivated by recent advances in compressive sensing with generative models, where a generative modeling assumption replaces the usual sparsity assumption, we study the problem of 1-bit compressive sensing with generative models. We first consider noiseless 1-bit measurements, and provide sample complexity bounds for approximate recovery under i.i.d. Gaussian measurements and a Lipschitz continuous generative prior, as well as a near-matching algorithm-independent lower bound. Moreover, we demonstrate that the Binary epsilon-Stable Embedding property, which characterizes the robustness of the reconstruction to measurement errors and noise, also holds for 1-bit compressive sensing with Lipschitz continuous generative models with sufficiently many Gaussian measurements. In addition, we apply our results to neural network generative models, and provide a proof-of-concept numerical experiment demonstrating significant improvements over sparsity-based approaches.
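For concreteness, the 1-bit measurement model with a generative prior takes the following generic form (standard notation, not necessarily the paper's):

```latex
% The unknown signal is x = G(z) for a Lipschitz generator
% G : \mathbb{R}^k \to \mathbb{R}^n; each measurement retains only a sign.
y_i = \operatorname{sign}\bigl(\langle a_i,\, G(z) \rangle\bigr),
\qquad a_i \sim \mathcal{N}(0, I_n), \quad i = 1, \dots, m
```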
Image Generation and Editing with Variational Info Generative Adversarial Networks ; Recently there has been an enormous interest in generative models for images in deep learning. In pursuit of this, Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) have surfaced as the two most prominent and popular models. While VAEs tend to produce excellent reconstructions but blurry samples, GANs generate sharp but slightly distorted images. In this paper we propose a new model called Variational InfoGAN (ViGAN). Our aim is two-fold: (i) to generate new images conditioned on visual descriptions, and (ii) to modify an image by fixing its latent representation and varying the visual description. We evaluate our model on the Labeled Faces in the Wild (LFW), CelebA and a modified version of MNIST datasets and demonstrate the ability of our model to generate new images as well as to modify a given image by changing attributes.
DEFactor: Differentiable Edge Factorization-based Probabilistic Graph Generation ; Generating novel molecules with optimal properties is a crucial step in many industries such as drug discovery. Recently, deep generative models have shown a promising way of performing de novo molecular design. Although graph generative models are currently available, they either have a graph-size dependency in their number of parameters, limiting their use to only very small graphs, or are formulated as a sequence of discrete actions needed to construct a graph, making the output graph non-differentiable w.r.t. the model parameters and therefore preventing their use in scenarios such as conditional graph generation. In this work we propose a model for conditional graph generation that is computationally efficient and enables direct optimisation of the graph. We demonstrate favourable performance of our model on prototype-based molecular graph conditional generation tasks.
Neural Text Generation: Past, Present and Beyond ; This paper presents a systematic survey of recent developments in neural text generation models. Specifically, we start from recurrent neural network language models with the traditional maximum likelihood estimation training scheme and point out its shortcomings for text generation. We then introduce the recently proposed methods for text generation based on reinforcement learning, re-parametrization tricks and generative adversarial net (GAN) techniques. We compare different properties of these models and the corresponding techniques to handle their common problems such as gradient vanishing and generation diversity. Finally, we conduct a benchmarking experiment with different types of neural text generation models on two well-known datasets and discuss the empirical results along with the aforementioned model properties.
Text Generation by Learning from Demonstrations ; Current approaches to text generation largely rely on autoregressive models and maximum likelihood estimation. This paradigm leads to (i) diverse but low-quality samples due to a mismatched learning objective and evaluation metric (likelihood vs. quality) and (ii) exposure bias due to mismatched history distributions (gold vs. model-generated). To alleviate these problems, we frame text generation as an offline reinforcement learning (RL) problem with expert demonstrations (i.e., the reference), where the goal is to maximize quality given model-generated histories. We propose GOLD (generation by off-policy learning from demonstrations): an easy-to-optimize algorithm that learns from the demonstrations by importance weighting. Intuitively, GOLD upweights confident tokens and downweights unconfident ones in the reference during training, avoiding optimization issues faced by prior RL approaches that rely on online data collection. According to both automatic and human evaluation, models trained by GOLD outperform those trained by MLE and policy gradient on summarization, question generation, and machine translation. Further, our models are less sensitive to decoding algorithms and alleviate exposure bias.
X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers ; Mirroring the success of masked language models, vision-and-language counterparts like ViLBERT, LXMERT and UNITER have achieved state-of-the-art performance on a variety of multimodal discriminative tasks like visual question answering and visual grounding. Recent work has also successfully adapted such models towards the generative task of image captioning. This begs the question: can these models go the other way and generate images from pieces of text? Our analysis of a popular representative from this model family, LXMERT, finds that it is unable to generate rich and semantically meaningful imagery with its current training setup. We introduce X-LXMERT, an extension to LXMERT with training refinements including: discretizing visual representations, using uniform masking with a large range of masking ratios, and aligning the right pre-training datasets to the right objectives, which enables it to paint. X-LXMERT's image generation capabilities rival state-of-the-art generative models, while its question answering and captioning abilities remain comparable to LXMERT. Finally, we demonstrate the generality of these training refinements by adding image generation capabilities into UNITER to produce X-UNITER.
Injecting Entity Types into Entity-Guided Text Generation ; Recent successes in deep generative modeling have led to significant advances in natural language generation (NLG). Incorporating entities into neural generation models has demonstrated great improvements by assisting in inferring the summary topic and generating coherent content. To enhance the role of the entity in NLG, in this paper we aim to model the entity type in the decoding phase so as to generate contextual words accurately. We develop a novel NLG model to produce a target sequence based on a given list of entities. Our model has a multi-step decoder that injects the entity types into the process of entity mention generation. Experiments on two public news datasets demonstrate that type injection performs better than existing type-embedding concatenation baselines.
From Machine Translation to Code-Switching: Generating High-Quality Code-Switched Text ; Generating code-switched text is a problem of growing interest, especially given the scarcity of corpora containing large volumes of real code-switched text. In this work, we adapt a state-of-the-art neural machine translation model to generate Hindi-English code-switched sentences starting from monolingual Hindi sentences. We outline a carefully designed curriculum of pretraining steps, including the use of synthetic code-switched text, that enable the model to generate high-quality code-switched text. Using text generated from our model as data augmentation, we show significant reductions in perplexity on a language modeling task, compared to using text from other generative models of CS text. We also show improvements using our text for a downstream code-switched natural language inference task. Our generated text is further subjected to a rigorous evaluation using a human evaluation study and a range of objective metrics, where we show performance comparable to, and sometimes even superior to, code-switched text obtained via crowd workers who are native Hindi speakers.
Learning Neural Templates for Text Generation ; While neural, encoder-decoder models have had significant empirical success in text generation, there remain several unaddressed problems with this style of generation. Encoder-decoder models are largely (a) uninterpretable, and (b) difficult to control in terms of their phrasing or content. This work proposes a neural generation system using a hidden semi-Markov model (HSMM) decoder, which learns latent, discrete templates jointly with learning to generate. We show that this model learns useful templates, and that these templates make generation both more interpretable and controllable. Furthermore, we show that this approach scales to real data sets and achieves strong performance nearing that of encoder-decoder text generation models.
Learning Comment Generation by Leveraging User-Generated Data ; Existing models for open-domain comment generation are difficult to train, and they produce repetitive and uninteresting responses. The problem is due to multiple and contradictory responses for a single article, and to the rigidity of retrieval methods. To solve this problem, we propose an approach that combines retrieval and generation methods. We propose an attentive scorer to retrieve informative and relevant comments by leveraging user-generated data. Then, we use such comments, together with the article, as input for a sequence-to-sequence model with a copy mechanism. We show the robustness of our model and how it can alleviate the aforementioned issue by using a large-scale comment generation dataset. The results show that the proposed generative model significantly outperforms strong baselines such as Seq2Seq with attention and Information Retrieval models, by around 27 and 30 BLEU-1 points respectively.
Reinforcement Learning Based Emotional Editing Constraint Conversation Generation ; In recent years, the generation of conversation content based on deep neural networks has attracted many researchers. However, traditional neural language models tend to generate general replies lacking logical and emotional factors. This paper proposes a conversation content generation model that combines reinforcement learning with emotional editing constraints to generate more meaningful and customizable emotional replies. The model divides the replies into three clauses based on pre-generated keywords and uses an emotional editor to further optimize the final reply. The model combines multi-task learning with multiple indicator rewards to comprehensively optimize the quality of the replies. Experiments show that our model can not only improve the fluency of the replies, but also significantly enhance their logical and emotional relevance.
Question Generation by Transformers ; A machine learning model was developed to automatically generate questions from Wikipedia passages using transformers, an attention-based model eschewing the paradigm of existing recurrent neural networks (RNNs). The model was trained on the inverted Stanford Question Answering Dataset (SQuAD), which is a reading comprehension dataset consisting of 100,000 questions posed by crowdworkers on a set of Wikipedia articles. After training, the question generation model is able to generate simple questions relevant to unseen passages and answers, containing an average of 8 words per question. The word error rate (WER) was used as a metric to compare the similarity between SQuAD questions and the model-generated questions. Although the high average WER suggests that the questions generated differ from the original SQuAD questions, the generated questions are mostly grammatically correct and plausible in their own right.
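For reference, the word error rate used above is the standard edit-distance metric:

```latex
% S, D, I: substitutions, deletions, and insertions needed to transform
% the generated question into the reference; N: words in the reference.
\mathrm{WER} = \frac{S + D + I}{N}
```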
Towards Understanding of Medical Randomized Controlled Trials by Conclusion Generation ; Randomized controlled trials (RCTs) represent the paramount evidence of clinical medicine. Using machines to interpret the massive amount of RCTs has the potential of aiding clinical decision-making. We propose an RCT conclusion generation task from the PubMed 200k RCT sentence classification dataset to examine the effectiveness of sequence-to-sequence models on understanding RCTs. We first build a pointer-generator baseline model for conclusion generation. Then we fine-tune the state-of-the-art GPT-2 language model, which is pre-trained with general domain data, for this new medical domain task. Both automatic and human evaluation show that our GPT-2 fine-tuned models achieve improved quality and correctness in the generated conclusions compared to the baseline pointer-generator model. Further inspection points out the limitations of this current approach and future directions to explore.
Just Noticeable Difference for Machines to Generate Adversarial Images ; One way of designing a robust machine learning algorithm is to generate authentic adversarial images which can trick the algorithms as much as possible. In this study, we propose a new method to generate adversarial images which are very similar to true images, yet are discriminated from the original ones and assigned to another category by the model. The proposed method is based on a popular concept of experimental psychology called Just Noticeable Difference. We define Just Noticeable Difference for a machine learning model and generate a least perceptible difference for adversarial images which can trick a model. The suggested model iteratively distorts a true image by the gradient descent method until the machine learning algorithm outputs a false label. Deep Neural Networks are trained for object detection and classification tasks. The cost function includes regularization terms to generate just noticeably different adversarial images which can be detected by the model. The adversarial images generated in this study look more natural compared to the output of state-of-the-art adversarial image generators.
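A hedged sketch of the iterative idea described above, in PyTorch. The step size, stopping rule, and sign-of-gradient ascent are our illustrative choices, not necessarily the authors' exact setup (which also includes regularization terms in the cost).

```python
# Iteratively distort an image by gradient ascent on the loss until the
# classifier's label flips; the stopping point approximates a least
# perceptible ("just noticeable") adversarial distortion.
import torch

def jnd_adversary(model, image, true_label, step=1e-2, max_iters=200):
    x = image.clone().detach().requires_grad_(True)
    for _ in range(max_iters):
        logits = model(x.unsqueeze(0))
        if logits.argmax(dim=1).item() != true_label:
            break  # label flipped: stop at the smallest effective change
        loss = torch.nn.functional.cross_entropy(
            logits, torch.tensor([true_label]))
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            x += step * x.grad.sign()  # ascend the classification loss
            x.clamp_(0, 1)             # keep a valid image
        x.grad = None
    return x.detach()
```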
PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking ; We propose the task of outline-conditioned story generation: given an outline as a set of phrases that describe key characters and events to appear in a story, the task is to generate a coherent narrative that is consistent with the provided outline. This task is challenging as the input only provides a rough sketch of the plot, and thus, models need to generate a story by interweaving the key points provided in the outline. This requires the model to keep track of the dynamic states of the latent plot, conditioning on the input outline while generating the full story. We present PlotMachines, a neural narrative model that learns to transform an outline into a coherent story by tracking the dynamic plot states. In addition, we enrich PlotMachines with high-level discourse structure so that the model can learn different writing styles corresponding to different parts of the narrative. Comprehensive experiments over three fiction and non-fiction datasets demonstrate that large-scale language models, such as GPT-2 and Grover, despite their impressive generation performance, are not sufficient in generating coherent narratives for the given outline, and dynamic plot state tracking is important for composing narratives with tighter, more consistent plots.
GPT-GNN: Generative Pre-Training of Graph Neural Networks ; Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data. However, training GNNs usually requires abundant task-specific labeled data, which is often arduously expensive to obtain. One effective way to reduce the labeling effort is to pre-train an expressive GNN model on unlabeled data with self-supervision and then transfer the learned model to downstream tasks with only a few labels. In this paper, we present the GPT-GNN framework to initialize GNNs by generative pre-training. GPT-GNN introduces a self-supervised attributed graph generation task to pre-train a GNN so that it can capture the structural and semantic properties of the graph. We factorize the likelihood of the graph generation into two components: (1) Attribute Generation and (2) Edge Generation. By modeling both components, GPT-GNN captures the inherent dependency between node attributes and graph structure during the generative process. Comprehensive experiments on the billion-scale Open Academic Graph and Amazon recommendation data demonstrate that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
BézierSketch: A generative model for scalable vector sketches ; The study of neural generative models of human sketches is a fascinating contemporary modeling problem due to the links between sketch image generation and the human drawing process. The landmark SketchRNN provided a breakthrough by sequentially generating sketches as a sequence of waypoints. However, this leads to low-resolution image generation and failure to model long sketches. In this paper we present BézierSketch, a novel generative model for fully vector sketches that are automatically scalable and high-resolution. To this end, we first introduce a novel inverse graphics approach to stroke embedding that trains an encoder to embed each stroke to its best-fit Bézier curve. This enables us to treat sketches as short sequences of parameterized strokes and thus train a recurrent sketch generator with greater capacity for longer sketches, while producing scalable high-resolution results. We report qualitative and quantitative results on the Quick, Draw! benchmark.
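To illustrate the target parameterization, here is a sketch of fitting one stroke to a cubic Bézier curve by linear least squares. The paper learns this embedding with a trained encoder; the closed-form fit below (with a naive uniform parameter assignment) is only meant to show what "embedding a stroke to its best-fit Bézier curve" amounts to.

```python
# Fit (N, 2) stroke points to the 4 control points of a cubic Bezier
# curve via least squares over the Bernstein basis.
import numpy as np

def fit_cubic_bezier(points: np.ndarray) -> np.ndarray:
    n = len(points)
    t = np.linspace(0.0, 1.0, n)       # naive stand-in for arc length
    B = np.stack([(1 - t) ** 3,
                  3 * t * (1 - t) ** 2,
                  3 * t ** 2 * (1 - t),
                  t ** 3], axis=1)     # (N, 4) Bernstein basis matrix
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl                        # (4, 2) control points

stroke = np.cumsum(np.random.randn(50, 2) * 0.1, axis=0)
print(fit_cubic_bezier(stroke))
```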
Invertible Zero-Shot Recognition Flows ; Deep generative models have been successfully applied to Zero-Shot Learning (ZSL) recently. However, the underlying drawbacks of GANs and VAEs (e.g., the hardness of training with ZSL-oriented regularizers and the limited generation quality) hinder the existing generative ZSL models from fully bypassing the seen-unseen bias. To tackle the above limitations, for the first time, this work incorporates a new family of generative models, i.e., flow-based models, into ZSL. The proposed Invertible Zero-shot Flow (IZF) learns factorized data embeddings (i.e., the semantic factors and the non-semantic ones) with the forward pass of an invertible flow network, while the reverse pass generates data samples. This procedure theoretically extends conventional generative flows to a factorized conditional scheme. To explicitly solve the bias problem, our model enlarges the seen-unseen distributional discrepancy based on a negative sample-based distance measurement. Notably, IZF works flexibly with either a naive Bayesian classifier or a held-out trainable one for zero-shot recognition. Experiments on widely adopted ZSL benchmarks demonstrate the significant performance gain of IZF over existing methods, in both classic and generalized settings.
Beyond [CLS] through Ranking by Generation ; Generative models for Information Retrieval, where ranking of documents is viewed as the task of generating a query from a document's language model, were very successful in various IR tasks in the past. However, with the advent of modern deep neural networks, attention has shifted to discriminative ranking functions that model the semantic similarity of documents and queries instead. Recently, deep generative models such as GPT-2 and BART have been shown to be excellent text generators, but their effectiveness as rankers has not been demonstrated yet. In this work, we revisit the generative framework for information retrieval and show that our generative approaches are as effective as state-of-the-art semantic-similarity-based discriminative models for the answer selection task. Additionally, we demonstrate the effectiveness of unlikelihood losses for IR.
SketchInspector: a Deep Mixture Model for High-Quality Sketch Generation of Cats ; With the involvement of artificial intelligence (AI), sketches can be automatically generated under certain topics. Even though breakthroughs have been made in previous studies in this area, a relatively high proportion of the generated figures are too abstract to recognize, which illustrates that AIs fail to learn the general pattern of the target object when drawing. This paper posits that supervising the process of stroke generation can lead to a more accurate sketch interpretation. Based on that, a sketch generating system with an assistant convolutional neural network (CNN) predictor to suggest the shape of the next stroke is presented in this paper. In addition, a CNN-based discriminator is introduced to judge the recognizability of the end product. Since the baseline model is ineffective at generating multi-class sketches, we restrict the model to produce one category. Because the image of a cat is easy to identify, we consider cat sketches selected from the QuickDraw data set. This paper compares the proposed model with the original SketchRNN on 75K human-drawn cat sketches. The result indicates that our model produces sketches of higher quality than humans' sketches.
Transformer-based Conditional Variational Autoencoder for Controllable Story Generation ; We investigate large-scale latent variable models (LVMs) for neural story generation, an underexplored application for open-domain long text, with objectives in two threads: generation effectiveness and controllability. LVMs, especially the variational autoencoder (VAE), have achieved both effective and controllable generation through exploiting flexible distributional latent representations. Recently, Transformers and their variants have achieved remarkable effectiveness without explicit latent representation learning, and thus lack satisfactory controllability in generation. In this paper, we advocate reviving latent variable modeling, essentially the power of representation learning, in the era of Transformers to enhance controllability without hurting state-of-the-art generation effectiveness. Specifically, we integrate latent representation vectors with a Transformer-based pre-trained architecture to build a conditional variational autoencoder (CVAE). Model components such as the encoder, decoder and the variational posterior are all built on top of pre-trained language models (GPT-2 specifically in this paper). Experiments demonstrate state-of-the-art conditional generation ability of our model, as well as its excellent representation learning capability and controllability.
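For reference, the generic objective this model family optimizes is the conditional evidence lower bound; the abstract does not spell out the exact conditioning variables, so the form below is the standard textbook CVAE ELBO rather than the paper's precise loss.

```latex
% Standard CVAE evidence lower bound for data x, condition c, latent z:
\mathcal{L}(\theta,\phi;\, x, c) =
  \mathbb{E}_{q_\phi(z \mid x, c)}\!\left[\log p_\theta(x \mid z, c)\right]
  - \mathrm{KL}\!\left(q_\phi(z \mid x, c) \,\|\, p(z \mid c)\right)
```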
This Face Does Not Exist... But It Might Be Yours! Identity Leakage in Generative Models ; Generative adversarial networks (GANs) are able to generate high-resolution photo-realistic images of objects that do not exist. These synthetic images are rather difficult to detect as fake. However, the manner in which these generative models are trained hints at a potential for information leakage from the supplied training data, especially in the context of synthetic faces. This paper presents experiments suggesting that identity information in face images can flow from the training corpus into synthetic samples without any adversarial actions when building or using the existing model. This raises privacy-related questions, but also stimulates discussions of (a) the face manifold's characteristics in the feature space and (b) how to create generative models that do not inadvertently reveal identity information of real subjects whose images were used for training. We used five different face matchers (facerecognition, FaceNet, ArcFace, SphereFace and Neurotechnology MegaMatcher) and the StyleGAN2 synthesis model, and show that this identity leakage does exist for some, but not all, methods. So, can we say that these synthetically generated faces truly do not exist? Databases of real and synthetically generated faces are made available with this paper to allow full replicability of the results discussed in this work.
Counterfactual Generative Networks ; Neural networks are prone to learning shortcuts: they often model simple correlations, ignoring more complex ones that potentially generalize better. Prior works on image classification show that instead of learning a connection to object shape, deep classifiers tend to exploit spurious correlations with low-level texture or the background for solving the classification task. In this work, we take a step towards more robust and interpretable classifiers that explicitly expose the task's causal structure. Building on current advances in deep generative modeling, we propose to decompose the image generation process into independent causal mechanisms that we train without direct supervision. By exploiting appropriate inductive biases, these mechanisms disentangle object shape, object texture, and background; hence, they allow for generating counterfactual images. We demonstrate the ability of our model to generate such images on MNIST and ImageNet. Further, we show that the counterfactual images can improve out-of-distribution robustness with a marginal drop in performance on the original classification task, despite being synthetic. Lastly, our generative model can be trained efficiently on a single GPU, exploiting common pre-trained models as inductive biases.
Robustness to Augmentations as a Generalization Metric ; Generalization is the ability of a model to predict on unseen domains and is a fundamental task in machine learning. Several generalization bounds, both theoretical and empirical, have been proposed, but they do not provide tight bounds. In this work, we propose a simple yet effective method to predict the generalization performance of a model, using the concept that models that are robust to augmentations are more generalizable than those which are not. We experiment with several augmentations and compositions of augmentations to check the generalization capacity of a model. We also provide a detailed motivation behind the proposed method. The proposed generalization metric is calculated based on the change in the output of the model after augmenting the input. The proposed method was the first runner-up solution for the NeurIPS competition on Predicting Generalization in Deep Learning.
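A minimal sketch of the idea as we read it: score a model by how little its predictive distribution moves under augmentation. The KL-based distance, number of rounds, and function names are our illustrative choices, not the competition entry's exact recipe.

```python
# Lower drift under augmentation => predicted better generalization.
import torch
import torch.nn.functional as F

def augmentation_robustness(model, inputs, augment, n_rounds=4):
    model.eval()
    with torch.no_grad():
        base = F.log_softmax(model(inputs), dim=1)  # clean predictions
        drift = 0.0
        for _ in range(n_rounds):
            aug = F.softmax(model(augment(inputs)), dim=1)
            # KL(augmented || clean), averaged over the batch
            drift += F.kl_div(base, aug, reduction="batchmean").item()
    return drift / n_rounds
```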
Continual Learning with Fully Probabilistic Models ; We present an approach for continual learning (CL) that is based on fully probabilistic (or generative) models of machine learning. In contrast to, e.g., GANs, which are generative in the sense that they can generate samples, fully probabilistic models aim at modeling the data distribution directly. Consequently, they provide functionalities that are highly relevant for continual learning, such as density estimation (outlier detection) and sample generation. As a concrete realization of generative continual learning, we propose Gaussian Mixture Replay (GMR). GMR is a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities. Relying on the MNIST, FashionMNIST and Devanagari benchmarks, we first demonstrate unsupervised task boundary detection by GMM density estimation, which we also use to reject untypical generated samples. In addition, we show that GMR is capable of class-conditional sampling in the way of a cGAN. Lastly, we verify that GMR, despite its simple structure, achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
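A hedged sketch of the Gaussian Mixture Replay idea using scikit-learn: one GMM per class gives density estimation, class-conditional sampling, and replay data for pseudo-rehearsal. The class structure and hyperparameters are illustrative; the paper's GMR uses a single GMM instance for both roles.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class GMRSketch:
    def __init__(self, n_components=5):
        self.n_components = n_components
        self.gmms = {}  # one generator per class seen so far

    def fit_task(self, X, y):
        for c in np.unique(y):
            self.gmms[c] = GaussianMixture(self.n_components).fit(X[y == c])

    def replay(self, n_per_class):
        # Pseudo-rehearsal: sample old classes before training a new task.
        Xs, ys = [], []
        for c, gmm in self.gmms.items():
            Xs.append(gmm.sample(n_per_class)[0])
            ys.append(np.full(n_per_class, c))
        return np.vstack(Xs), np.concatenate(ys)

    def classify(self, X):
        # Assign each sample to the class with the highest log-density.
        classes = sorted(self.gmms)
        scores = np.stack([self.gmms[c].score_samples(X) for c in classes])
        return np.array(classes)[scores.argmax(axis=0)]
```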
Do Grammatical Error Correction Models Realize Grammatical Generalization? ; There has been an increased interest in data generation approaches to grammatical error correction (GEC) using pseudo data. However, these approaches suffer from several issues that make them inconvenient for real-world deployment, including a demand for large amounts of training data. On the other hand, some errors based on grammatical rules may not necessarily require a large amount of data if GEC models can realize grammatical generalization. This study explores to what extent GEC models generalize grammatical knowledge required for correcting errors. We introduce an analysis method using synthetic and real GEC datasets with controlled vocabularies to evaluate whether models can generalize to unseen errors. We found that a current standard Transformer-based GEC model fails to realize grammatical generalization even in simple settings with limited vocabulary and syntax, suggesting that it lacks the generalization ability required to correct errors from provided training examples.
A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic Loss ; Neural models trained for next-utterance generation in dialogue tasks learn to mimic the n-gram sequences in the training set with training objectives like negative log-likelihood (NLL) or cross-entropy. Such commonly used training objectives do not foster generating alternate responses to a context. However, the effects of minimizing an alternate training objective that fosters a model to generate alternate responses and score them on semantic similarity have not been well studied. We hypothesize that a language generation model can improve its diversity by learning to generate alternate text during training and minimizing a semantic loss as an auxiliary objective. We explore this idea on two data sets of different sizes on the task of next-utterance generation in goal-oriented dialogues. We make two observations: (1) minimizing a semantic objective improved diversity in responses in the smaller data set (Frames), but was only as good as minimizing the NLL in the larger data set (MultiWoZ); (2) large language model embeddings can be more useful as a semantic loss objective than as an initialization for token embeddings.
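A sketch of what an NLL plus semantic-similarity auxiliary objective could look like. The weighting scheme and the cosine-distance form of the semantic term are illustrative assumptions, not the paper's exact formulation.

```python
# Combined objective: token-level NLL plus a semantic loss that pushes
# the generated utterance's embedding toward the reference embedding,
# rather than requiring exact n-gram overlap.
import torch
import torch.nn.functional as F

def combined_loss(logits, target_ids, gen_embedding, ref_embedding,
                  semantic_weight=0.5):
    nll = F.cross_entropy(logits.view(-1, logits.size(-1)),
                          target_ids.view(-1))
    semantic = 1.0 - F.cosine_similarity(gen_embedding,
                                         ref_embedding, dim=-1).mean()
    return nll + semantic_weight * semantic
```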
A Temporal Variational Model for Story Generation ; Recent language models can generate interesting and grammatically correct text in story generation but often lack plot development and long-term coherence. This paper experiments with a latent vector planning approach based on a TD-VAE (Temporal Difference Variational Autoencoder), using the model for conditioning and reranking for text generation. The results demonstrate strong performance in automatic cloze and swapping evaluations. The human judgments show that stories generated with TD-VAE reranking improve on a GPT-2 medium baseline and show comparable performance to a hierarchical LSTM reranking model. Conditioning on the latent vectors proves disappointing and deteriorates performance in human evaluation, because it reduces the diversity of generation and the models don't learn to progress the narrative. This highlights an important difference between technical task performance (e.g. cloze) and generating interesting stories.
SCHA-VAE: Hierarchical Context Aggregation for Few-Shot Generation ; A few-shot generative model should be able to generate data from a novel distribution by only observing a limited set of examples. In few-shot learning the model is trained on data from many sets from distributions sharing some underlying properties, such as sets of characters from different alphabets or objects from different categories. We extend current latent variable models for sets to a fully hierarchical approach with an attention-based point-to-set-level aggregation, and call our method SCHA-VAE, for Set-Context-Hierarchical-Aggregation Variational Autoencoder. We explore likelihood-based model comparison, iterative data sampling, and adaptation-free out-of-distribution generalization. Our results show that the hierarchical formulation better captures the intrinsic variability within the sets in the small data regime. This work generalizes deep latent variable approaches to few-shot learning, taking a step toward large-scale few-shot generation with a formulation that readily works with current state-of-the-art deep generative models.
A Binded VAE for Inorganic Material Generation ; Designing new industrial materials with desired properties can be very expensive and time-consuming. The main difficulty is to generate compounds that correspond to realistic materials. Indeed, the description of compounds as vectors of components' proportions is characterized by discrete features and severe sparsity. Furthermore, traditional generative model validation processes such as visual verification, FID and Inception scores are tailored for images and cannot be used as such in this context. To tackle these issues, we develop an original Binded-VAE model dedicated to the generation of discrete datasets with high sparsity. We validate the model with novel metrics adapted to the problem of compound generation. We show, on a real issue of rubber compound design, that the proposed approach outperforms standard generative models, which opens new perspectives for material design optimization.
Diverse Text Generation via Variational Encoder-Decoder Models with Gaussian Process Priors ; Generating high-quality texts with high diversity is important for many NLG applications, but current methods mostly focus on building deterministic models to generate higher-quality texts and do not provide many options for promoting diversity. In this work, we present a novel latent structured variable model to generate high-quality texts by enriching contextual representation learning of encoder-decoder models. Specifically, we introduce a stochastic function to map deterministic encoder hidden states into random context variables. The proposed stochastic function is sampled from a Gaussian process prior to (1) provide an infinite number of joint Gaussian distributions of random context variables (diversity-promoting) and (2) explicitly model dependency between context variables (accurate encoding). To address the learning challenge of Gaussian processes, we propose an efficient variational inference approach to approximate the posterior distribution of random context variables. We evaluate our method in two typical text generation tasks: paraphrase generation and text style transfer. Experimental results on benchmark datasets demonstrate that our method improves the generation quality and diversity compared with other baselines.
Diversifying Neural Dialogue Generation via Negative Distillation ; Generative dialogue models suffer badly from the generic response problem, limiting their application to a few toy scenarios. Recently, an interesting approach, namely negative training, has been proposed to alleviate this problem by reminding the model not to generate high-frequency responses during training. However, its performance is hindered by two issues: ignoring low-frequency but generic responses, and bringing in low-frequency but meaningless responses. In this paper, we propose a novel negative training paradigm, called negative distillation, to keep the model away from the undesirable generic responses while avoiding the above problems. First, we introduce a negative teacher model that can produce query-wise generic responses, and then the student model is required to maximize the distance from the multi-level negative knowledge. Empirical results show that our method outperforms previous negative training methods significantly.
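A sketch of the negative-distillation objective as described: the student is trained on the data while being pushed away from a teacher that produces query-wise generic responses. The hinge/margin form is our illustrative choice for "maximize the distance", not necessarily the paper's multi-level formulation.

```python
import torch
import torch.nn.functional as F

def negative_distillation_loss(student_logits, teacher_logits,
                               target_ids, margin=2.0, alpha=0.5):
    # Standard NLL on the gold responses
    nll = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                          target_ids.view(-1))
    # Divergence from the negative teacher: larger is better, so hinge it
    kl = F.kl_div(F.log_softmax(student_logits, dim=-1),
                  F.softmax(teacher_logits, dim=-1),
                  reduction="batchmean")
    push_away = F.relu(margin - kl)  # zero once far enough from teacher
    return nll + alpha * push_away
```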
3DILG: Irregular Latent Grids for 3D Generative Modeling ; We propose a new representation for encoding 3D shapes as neural fields. The representation is designed to be compatible with the transformer architecture and to benefit both shape reconstruction and shape generation. Existing works on neural fields are grid-based representations, with latents defined on a regular grid. In contrast, we define latents on irregular grids, enabling our representation to be sparse and adaptive. In the context of shape reconstruction from point clouds, our shape representation built on irregular grids improves upon grid-based methods in terms of reconstruction accuracy. For shape generation, our representation promotes high-quality shape generation using autoregressive probabilistic models. We show different applications that improve over the current state of the art. First, we show results for probabilistic shape reconstruction from a single higher-resolution image. Second, we train a probabilistic model conditioned on very low-resolution images. Third, we apply our model to category-conditioned generation. All probabilistic experiments confirm that we are able to generate detailed and high-quality shapes, yielding a new state of the art in generative 3D shape modeling.
Towards Goal-, Feasibility-, and Diversity-Oriented Deep Generative Models in Design ; Deep Generative Machine Learning Models (DGMs) have been growing in popularity across the design community thanks to their ability to learn and mimic complex data distributions. DGMs are conventionally trained to minimize statistical divergence between the distribution over generated data and the distribution over the dataset on which they are trained. While sufficient for the task of generating realistic fake data, this objective is typically insufficient for design synthesis tasks. Instead, design problems typically call for adherence to design requirements, such as performance targets and constraints. Advancing DGMs in engineering design requires new training objectives which promote engineering design objectives. In this paper, we present the first Deep Generative Model that simultaneously optimizes for performance, feasibility, diversity, and target achievement. We benchmark the performance of the proposed method against several Deep Generative Models over eight evaluation metrics that focus on feasibility, diversity, and satisfaction of design performance targets. Methods are tested on a challenging multi-objective bicycle frame design problem with skewed, multimodal data of different data types. The proposed framework was found to outperform all Deep Generative Models in six of eight metrics.
SCGG: A Deep Structure-Conditioned Graph Generative Model ; Deep learning-based graph generation approaches have remarkable capacities for graph data modeling, allowing them to solve a wide range of real-world problems. Making these methods able to consider different conditions during the generation procedure increases their effectiveness even further by empowering them to generate new graph samples that meet the desired criteria. This paper presents a conditional deep graph generation method called SCGG that considers a particular type of structural condition. Specifically, our proposed SCGG model takes an initial subgraph and autoregressively generates new nodes and their corresponding edges on top of the given conditioning substructure. The architecture of SCGG consists of a graph representation learning network and an autoregressive generative model, which is trained end-to-end. Using this model, we can address graph completion, a rampant and inherently difficult problem of recovering missing nodes and their associated edges in partially observed graphs. Experimental results on both synthetic and real-world datasets demonstrate the superiority of our method compared with state-of-the-art baselines.
The Chamber Ensemble Generator: Limitless High-Quality MIR Data via Generative Modeling ; Data is the lifeblood of modern machine learning systems, including those in Music Information Retrieval (MIR). However, MIR has long been mired by small datasets and unreliable labels. In this work, we propose to break this bottleneck using generative modeling. By pipelining a generative model of notes (Coconet, trained on Bach Chorales) with a structured synthesis model of chamber ensembles (MIDI-DDSP, trained on URMP), we demonstrate a system capable of producing unlimited amounts of realistic chorale music with rich annotations, including mixes, stems, MIDI, note-level performance attributes (staccato, vibrato, etc.), and even fine-grained synthesis parameters (pitch, amplitude, etc.). We call this system the Chamber Ensemble Generator (CEG), and use it to generate a large dataset of chorales from four different chamber ensembles (CocoChorales). We demonstrate that data generated using our approach improves state-of-the-art models for music transcription and source separation, and we release both the system and the dataset as an open-source foundation for future work in the MIR community.
Understanding Pure CLIP Guidance for Voxel Grid NeRF Models ; We explore the task of text-to-3D object generation using CLIP. Specifically, we use CLIP for guidance without access to any datasets, a setting we refer to as pure CLIP guidance. While prior work has adopted this setting, there is no systematic study of mechanics for preventing adversarial generations within CLIP. We illustrate how different image-based augmentations prevent the adversarial generation problem, and how the generated results are impacted. We test different CLIP model architectures and show that ensembling different models for guidance can prevent adversarial generations within bigger models and generate sharper results. Furthermore, we implement an implicit voxel grid model to show how neural networks provide an additional layer of regularization, resulting in better geometrical structure and coherency of generated objects. Compared to prior work, we achieve more coherent results with higher memory efficiency and faster training speeds.
Leveraging Key Information Modeling to Improve Less-Data Constrained News Headline Generation via Duality Fine-Tuning ; Recent language generative models are mostly trained on large-scale datasets, while in some real scenarios the training datasets are often expensive to obtain and small-scale. In this paper we investigate the challenging task of less-data constrained generation, especially when the generated news headlines are short yet expected by readers to remain readable and informative simultaneously. We highlight the key information modeling task and propose a novel duality fine-tuning method by formally defining probabilistic duality constraints between the key information prediction and headline generation tasks. The proposed method can capture more information from limited data, build connections between separate tasks, and is suitable for less-data constrained generation tasks. Furthermore, the method can leverage various pre-trained generative regimes, e.g., autoregressive and encoder-decoder models. We conduct extensive experiments to demonstrate that our method is effective and efficient in achieving improved performance in terms of language modeling metrics and informativeness (correctness) metrics on two public datasets.
Improving Chinese Story Generation via Awareness of Syntactic Dependencies and Semantics ; Story generation aims to generate a long narrative conditioned on a given input. In spite of the success of prior works with the application of pre-trained models, current neural models for Chinese stories still struggle to generate high-quality long text narratives. We hypothesise that this stems from ambiguity in syntactically parsing the Chinese language, which does not have explicit delimiters for word segmentation. Consequently, neural models suffer from inefficient capturing of features in Chinese narratives. In this paper, we present a new generation framework that enhances the feature-capturing mechanism by informing the generation model of dependencies between words and additionally augmenting semantic representation learning through synonym denoising training. We conduct a range of experiments, and the results demonstrate that our framework outperforms the state-of-the-art Chinese generation models on all evaluation metrics, demonstrating the benefits of enhanced dependency and semantic representation learning.
DORE: Document Ordered Relation Extraction based on Generative Framework ; In recent years, there has been a surge of generation-based information extraction work, which allows a more direct use of pre-trained language models and efficiently captures output dependencies. However, previous generative methods using lexical representation do not naturally fit document-level relation extraction (DocRE), where there are multiple entities and relational facts. In this paper, we investigate the root cause of the underwhelming performance of the existing generative DocRE models and discover that the culprit is the inadequacy of the training paradigm, instead of the capacities of the models. We propose to generate a symbolic and ordered sequence from the relation matrix, which is deterministic and easier for the model to learn. Moreover, we design a parallel row generation method to process overlong target sequences. Besides, we introduce several negative sampling strategies to improve the performance with balanced signals. Experimental results on four datasets show that our proposed method can improve the performance of the generative DocRE models. We have released our code at https://github.com/ayyyq/DORE.
A generalized AIC for models with singularities and boundaries ; The Akaike information criterion (AIC) is a common tool for model selection. It is frequently used in violation of regularity conditions at parameter space singularities and boundaries. The expected AIC is generally not asymptotically equivalent to its target at singularities and boundaries, and convergence to the target at nearby parameter points may be slow. We develop a generalized AIC for candidate models with or without singularities and boundaries. We show that the expectation of this generalized form converges everywhere in the parameter space, and its convergence can be faster than that of the AIC. We illustrate the generalized AIC on example models from phylogenomics, showing that it can outperform the AIC and gives rise to an interpolated effective number of model parameters, which can differ substantially from the number of parameters near singularities and boundaries. We outline methods for estimating the often unknown generating parameter and the bias correction term of the generalized AIC.
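For context, the classical criterion being generalized is the standard AIC below, with k model parameters and maximized likelihood L̂; the paper's contribution is, in effect, to replace the fixed penalty k with an interpolated effective number of parameters that remains valid near singularities and boundaries.

```latex
% Classical Akaike information criterion for a model with k parameters
% and maximized likelihood \hat{L}:
\mathrm{AIC} = -2 \log \hat{L} + 2k
```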
Action-GPT: Leveraging Large-scale Language Models for Improved and Generalized Action Generation ; We introduce Action-GPT, a plug-and-play framework for incorporating Large Language Models (LLMs) into text-based action generation models. Action phrases in current motion capture datasets contain minimal and to-the-point information. By carefully crafting prompts for LLMs, we generate richer and fine-grained descriptions of the action. We show that utilizing these detailed descriptions instead of the original action phrases leads to better alignment of text and motion spaces. We introduce a generic approach compatible with stochastic (e.g. VAE-based) and deterministic (e.g. MotionCLIP) text-to-motion models. In addition, the approach enables multiple text descriptions to be utilized. Our experiments show (i) noticeable qualitative and quantitative improvement in the quality of synthesized motions, (ii) benefits of utilizing multiple LLM-generated descriptions, (iii) suitability of the prompt function, and (iv) zero-shot generation capabilities of the proposed approach. Project page: https://actiongpt.github.io
When Neural Networks Fail to Generalize? A Model Sensitivity Perspective ; Domain generalization (DG) aims to train a model to perform well in unseen domains under different distributions. This paper considers a more realistic yet more challenging scenario, namely Single Domain Generalization (Single-DG), where only a single source domain is available for training. To tackle this challenge, we first try to understand when neural networks fail to generalize. We empirically ascertain a property of a model that correlates strongly with its generalization, which we coin model sensitivity. Based on our analysis, we propose a novel strategy of Spectral Adversarial Data Augmentation (SADA) to generate augmented images targeted at the highly sensitive frequencies. Models trained with these hard-to-learn samples can effectively suppress the sensitivity in the frequency space, which leads to improved generalization performance. Extensive experiments on multiple public datasets demonstrate the superiority of our approach, which surpasses the state-of-the-art single-DG methods.
The Infinite Index: Information Retrieval on Generative Text-to-Image Models ; Conditional generative models such as DALL-E and Stable Diffusion generate images based on a user-defined text, the prompt. Finding and refining prompts that produce a desired image has become the art of prompt engineering. Generative models do not provide a built-in retrieval model for a user's information need expressed through prompts. In light of an extensive literature review, we reframe prompt engineering for generative models as interactive text-based retrieval on a novel kind of infinite index. We apply these insights for the first time in a case study on image generation for game design with an expert. Finally, we envision how active learning may help to guide the retrieval of generated images.
Generation-Augmented Query Expansion For Code Retrieval ; Pre-trained language models have achieved promising success in code retrieval tasks, where a natural language documentation query is given to find the most relevant existing code snippet. However, existing models focus only on optimizing the documentation-code pairs by embedding them into latent space, without the association of external knowledge. In this paper, we propose a generation-augmented query expansion framework. Inspired by the human retrieval process (sketching an answer before searching), in this work we utilize the powerful code generation model to benefit the code retrieval task. Specifically, we demonstrate that rather than merely retrieving the target code snippet according to the documentation query, it is helpful to augment the documentation query with its generation counterpart: generated code snippets from the code generation model. To the best of our knowledge, this is the first attempt that leverages the code generation model to enhance the code retrieval task. We achieve new state-of-the-art results on the CodeSearchNet benchmark and surpass the baselines significantly.
Foreground-Background Separation through Concept Distillation from Generative Image Foundation Models ; Curating datasets for object segmentation is a difficult task. With the advent of large-scale pre-trained generative models, conditional image generation has been given a significant boost in result quality and ease of use. In this paper, we present a novel method that enables the generation of general foreground-background segmentation models from simple textual descriptions, without requiring segmentation labels. We leverage and explore pre-trained latent diffusion models to automatically generate weak segmentation masks for concepts and objects. The masks are then used to fine-tune the diffusion model on an inpainting task, which enables fine-grained removal of the object while, at the same time, providing a synthetic foreground and background dataset. We demonstrate that using this method beats previous methods in both discriminative and generative performance and closes the gap with fully supervised training while requiring no pixel-wise object labels. We show results on the task of segmenting four different objects (humans, dogs, cars, birds) and a use-case scenario in medical image analysis. The code is available at https://github.com/MischaD/fobadiffusion.
Emerging Synergies in Causality and Deep Generative Models: A Survey ; In the field of artificial intelligence (AI), the quest to understand and model data-generating processes (DGPs) is of paramount importance. Deep generative models (DGMs) have proven adept at capturing complex data distributions but often fall short in generalization and interpretability. On the other hand, causality offers a structured lens to comprehend the mechanisms driving data generation and highlights the causal-effect dynamics inherent in these processes. While causality excels in interpretability and the ability to extrapolate, it grapples with the intricacies of high-dimensional spaces. Recognizing the synergistic potential, we delve into the confluence of causality and DGMs. We elucidate the integration of causal principles within DGMs, investigate causal identification using DGMs, and navigate an emerging research frontier of causality in large-scale generative models, particularly generative large language models (LLMs). We offer insights into methodologies, highlight open challenges, and suggest future directions, positioning our comprehensive review as an essential guide in this swiftly emerging and evolving area.
Learning End-to-End Channel Coding with Diffusion Models ; It is a known problem that deep-learning-based end-to-end (E2E) channel coding systems depend on a known and differentiable channel model, due to the learning process and the gradient-descent-based optimization methods. This places the challenge of approximating or generating the channel or its derivative from samples generated by pilot signaling in real-world scenarios. Currently, there are two prevalent methods to solve this problem. One is to generate the channel via a generative adversarial network (GAN), and the other is to, in essence, approximate the gradient via reinforcement learning methods. Other methods include using score-based methods, variational autoencoders, or mutual-information-based methods. In this paper, we focus on generative models and, in particular, on a new promising method called diffusion models, which have shown a higher quality of generation in image-based tasks. We will show that diffusion models can be used in wireless E2E scenarios and that they work as well as Wasserstein GANs, while having a more stable training procedure and better generalization ability in testing.
Geometry of Score-Based Generative Models ; In this work, we look at score-based generative models (also called diffusion generative models) from a geometric perspective. From this new viewpoint, we prove that both the forward and backward processes of adding noise and generating from noise are Wasserstein gradient flows in the space of probability measures. We are the first to prove this connection. Our understanding of score-based and diffusion generative models has matured and become more complete by drawing ideas from different fields like Bayesian inference, control theory, stochastic differential equations and the Schrödinger bridge. However, many open questions and challenges remain. One problem, for example, is how to decrease the sampling time. We demonstrate that looking from a geometric perspective enables us to answer many of these questions and provides new interpretations of some known results. Furthermore, the geometric perspective enables us to devise an intuitive geometric solution to the problem of faster sampling. By augmenting traditional score-based generative models with a projection step, we show that we can generate high-quality images with significantly fewer sampling steps.
Optimal Spatial Deconvolution and Message Reconstruction from a Large Generative Model of Models ; We introduce a general-purpose univariate signal deconvolution method based on the principles of an approach to Artificial General Intelligence. This approach is based on a generative model that combines information theory and algorithmic probability, and required a large calculation of an estimation of a 'universal distribution' to build a general-purpose model of models independent of probability distributions. This was used to investigate how non-random data may encode information about the physical properties, such as dimension and length scales, in which a signal or message may have been originally encoded, embedded, or generated. This multidimensional space reconstruction method is based on information theory and algorithmic probability, and it is agnostic, but not independent, with respect to the chosen computable or semi-computable approximation method or encoding-decoding scheme. The results presented in this paper are useful for applications in coding theory, particularly in zero-knowledge one-way communication channels, such as in deciphering messages sent by generating sources of unknown nature for which no prior knowledge is available. We argue that this can have strong potential for cryptography, signal processing, causal deconvolution, life, and technosignature detection.
Deep Generative Model and Its Applications in Efficient Wireless Network Management: A Tutorial and Case Study ; With the phenomenal success of diffusion models and ChatGPT, deep generative models (DGMs) have been experiencing explosive growth since 2022. Not limited to content generation, DGMs are also widely adopted in the Internet of Things, the Metaverse, and digital twins, due to their outstanding ability to represent complex patterns and generate plausible samples. In this article, we explore the applications of DGMs in a crucial task, i.e., improving the efficiency of wireless network management. Specifically, we first overview generative AI, as well as three representative DGMs. Then, a DGM-empowered framework for wireless network management is proposed, in which we elaborate on the issues of conventional network management approaches, why DGMs can address them efficiently, and the step-by-step workflow for applying DGMs to managing wireless networks. Moreover, we conduct a case study on network economics, using a state-of-the-art DGM, i.e., the diffusion model, to generate effective contracts for incentivizing mobile AI-Generated Content (AIGC) services. Last but not least, we discuss important open directions for further research.
3D-aware Image Generation using 2D Diffusion Models ; In this paper, we introduce a novel 3D-aware image generation method that leverages 2D diffusion models. We formulate the 3D-aware image generation task as multi-view 2D image set generation, and further as a sequential unconditional-conditional multi-view image generation process. This allows us to utilize 2D diffusion models to boost the generative modeling power of the method. Additionally, we incorporate depth information from monocular depth estimators to construct the training data for the conditional diffusion model using only still images. We train our method on a large-scale dataset, i.e., ImageNet, which is not addressed by previous methods. It produces high-quality images that significantly outperform prior methods. Furthermore, our approach showcases its capability to generate instances with large view angles, even though the training images are diverse and unaligned, gathered from in-the-wild real-world environments.
Text-Conditioned Sampling Framework for Text-to-Image Generation with Masked Generative Models ; Token-based masked generative models are gaining popularity for their fast inference time with parallel decoding. While recent token-based approaches achieve competitive performance with diffusion-based models, their generation performance is still suboptimal, as they sample multiple tokens simultaneously without considering the dependence among them. We empirically investigate this problem and propose a learnable sampling model, Text-Conditioned Token Selection (TCTS), to select optimal tokens via localized supervision with text information. TCTS improves not only the image quality but also the semantic alignment of the generated images with the given texts. To further improve the image quality, we introduce a cohesive sampling strategy, Frequency Adaptive Sampling (FAS), applied to each group of tokens divided according to the self-attention maps. We validate the efficacy of TCTS combined with FAS on various generative tasks, demonstrating that it significantly outperforms the baselines in image-text alignment and image quality. Our text-conditioned sampling framework further reduces the original inference time by more than 50% without modifying the original generative model.
Tractable Control for Autoregressive Language Generation ; Despite the success of autoregressive large language models in text generation, it remains a major challenge to generate text that satisfies complex constraints: sampling from the conditional distribution Pr(text | α) is intractable for even the simplest lexical constraints α. To overcome this challenge, we propose to use tractable probabilistic models (TPMs) to impose lexical constraints in autoregressive text generation models, which we refer to as GeLaTo (Generating Language with Tractable Constraints). To demonstrate the effectiveness of this framework, we use distilled hidden Markov models, for which we can efficiently compute Pr(text | α), to guide autoregressive generation from GPT-2. GeLaTo achieves state-of-the-art performance on challenging benchmarks for constrained text generation (e.g., CommonGen), beating various strong baselines by a large margin. Our work not only opens up new avenues for controlling large language models but also motivates the development of more expressive TPMs.
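A minimal sketch of the general decoding pattern this describes: at each step, the LM's next-token distribution is reweighted by the tractable model's probability that the constraint can still be satisfied. The `constraint_prob` oracle below stands in for the distilled-HMM computation; its name and interface are our illustrative assumptions.

```python
import numpy as np

def guided_step(lm_probs, prefix, constraint_prob, vocab):
    """lm_probs: (V,) next-token distribution from the LM;
    constraint_prob(seq) -> Pr(constraint satisfiable | seq) under
    the tractable model (e.g., a distilled HMM)."""
    weights = np.array([constraint_prob(prefix + [tok]) for tok in vocab])
    scores = lm_probs * weights          # reweight by constraint survival
    total = scores.sum()
    if total == 0:
        return lm_probs                  # constraint impossible; fall back
    return scores / total                # renormalized guided distribution
```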
Generating Data for Symbolic Language with Large Language Models ; While large language models (LLMs) bring not only performance but also complexity, recent work has started to turn LLMs into data generators rather than task inferencers, where another affordable task model is trained for efficient deployment and inference. However, such an approach has primarily been applied to natural language tasks and has not yet been explored for symbolic language tasks with complex structured outputs (e.g., semantic parsing and code generation). In this paper, we propose SymGen, which utilizes LLMs for generating various annotation-expensive symbolic language data. SymGen consists of an informative prompt to steer generation and an agreement-based verifier to improve data correctness. We conduct extensive experiments on six symbolic language tasks across various settings. Compared with the LLMs, we demonstrate that a 1%-sized task model can achieve comparable or better performance, largely cutting inference and deployment costs. We also show that generated data with only a few human demonstrations can be as effective as over 10 times the amount of human-annotated data when training the task model, saving a considerable amount of annotation effort. SymGen sheds new light on data generation for complex tasks, and we release the code at https://github.com/HKUNLP/SymGen.
Learning Subpocket Prototypes for Generalizable Structure-based Drug Design ; Generating molecules with high binding affinities to target proteins (a.k.a. structure-based drug design) is a fundamental and challenging task in drug discovery. Recently, deep generative models have achieved remarkable success in generating 3D molecules conditioned on the protein pocket. However, most existing methods consider molecular generation for protein pockets independently, neglecting underlying connections such as subpocket-level similarities. Subpockets are the local protein environments of ligand fragments, and pockets with similar subpockets may bind the same molecular fragment (motif) even though their overall structures are different. Therefore, the trained models can hardly generalize to unseen protein pockets in real-world applications. In this paper, we propose a novel method, DrugGPS, for generalizable structure-based drug design. With biochemical priors, we propose to learn subpocket prototypes and construct a global interaction graph to model the interactions between subpocket prototypes and molecular motifs. Moreover, a hierarchical graph transformer encoder and a motif-based 3D molecule generation scheme are used to improve the model's performance. The experimental results show that our model consistently outperforms baselines in generating realistic drug candidates with high affinities in challenging out-of-distribution settings.
Trans-Dimensional Generative Modeling via Jump Diffusion Models ; We propose a new class of generative models that naturally handle data of varying dimensionality by jointly modeling the state and dimension of each datapoint. The generative process is formulated as a jump diffusion process that makes jumps between different dimensional spaces. We first define a dimension-destroying forward noising process, before deriving the dimension-creating time-reversed generative process along with a novel evidence lower bound training objective for learning to approximate it. Simulating our learned approximation to the time-reversed generative process then provides an effective way of sampling data of varying dimensionality by jointly generating state values and dimensions. We demonstrate our approach on molecular and video datasets of varying dimensionality, reporting better compatibility with test-time diffusion guidance imputation tasks and improved interpolation capabilities versus fixed-dimensional models that generate state values and dimensions separately.
Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation ; Significant progress has recently been made in creative applications of large pre-trained models for downstream tasks in 3D vision, such as text-to-shape generation. This motivates our investigation of how these pre-trained models can be used effectively to generate 3D shapes from sketches, which has largely remained an open challenge due to the limited sketch-shape paired datasets and the varying levels of abstraction in the sketches. We discover that conditioning a 3D generative model on the features (obtained from a frozen large pre-trained vision model) of synthetic renderings during training enables us to effectively generate 3D shapes from sketches at inference time. This suggests that the large pre-trained vision model features carry semantic signals that are resilient to domain shifts, i.e., allowing us to use only RGB renderings but generalize to sketches at inference time. We conduct a comprehensive set of experiments investigating different design factors and demonstrate the effectiveness of our straightforward approach for generating multiple 3D shapes per input sketch, regardless of their level of abstraction, without requiring any paired datasets during training.
The Imitation Game: Detecting Human and AI-Generated Texts in the Era of Large Language Models ; The potential of artificial intelligence (AI)-based large language models (LLMs) holds considerable promise in revolutionizing education, research, and practice. However, distinguishing between human-written and AI-generated text has become a significant task. This paper presents a comparative study, introducing a novel dataset of human-written and LLM-generated texts in different genres: essays, stories, poetry, and Python code. We employ several machine learning models to classify the texts. Results demonstrate the efficacy of these models in discerning between human and AI-generated text, despite the dataset's limited sample size. However, the task becomes more challenging when classifying GPT-generated text, particularly in story writing. The results indicate that the models exhibit superior performance in binary classification tasks, such as distinguishing human-generated text from that of a specific LLM, compared to the more complex multi-class tasks that involve discerning among human-generated text and multiple LLMs. Our findings provide insightful implications for AI text detection, while our dataset paves the way for future research in this evolving area.
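One plausible baseline for this kind of binary detection task (our illustrative choice; the paper does not name its classifiers here): TF-IDF features with logistic regression. The toy texts and labels below are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: 0 = human-written, 1 = AI-generated (placeholder examples)
texts = ["The mitochondria is the membrane-bound organelle...",
         "As an AI language model, I can summarize the topic..."]
labels = [0, 1]

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)
print(detector.predict(["This essay argues that..."]))
```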
Optimality of Glauber dynamics for general-purpose Ising model sampling and free energy approximation ; Recently, Eldan, Koehler, and Zeitouni (2020) showed that Glauber dynamics mixes rapidly for general Ising models so long as the difference between the largest and smallest eigenvalues of the coupling matrix is at most 1 − ε for any fixed ε > 0. We give evidence that Glauber dynamics is in fact optimal for this general-purpose sampling task. Namely, we give an average-case reduction from hypothesis testing in a Wishart negatively-spiked matrix model to approximately sampling from the Gibbs measure of a general Ising model for which the difference between the largest and smallest eigenvalues of the coupling matrix is at most 1 + ε for any fixed ε > 0. Combined with results of Bandeira, Kunisky, and Wein (2019) that analyze low-degree polynomial algorithms to give evidence for the hardness of the former spiked matrix problem, our results in turn give evidence for the hardness of general-purpose sampling improving on Glauber dynamics. We also give a similar reduction to approximating the free energy of general Ising models, and again infer evidence that simulated annealing algorithms based on Glauber dynamics are optimal in the general-purpose setting.
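For reference, single-site Glauber dynamics is the simple Markov chain below: resample one uniformly chosen spin from its conditional distribution at each step (here with the inverse temperature absorbed into the coupling matrix J).

```python
import numpy as np

def glauber_sample(J, n_steps, rng=np.random.default_rng()):
    """Glauber dynamics for an Ising model with symmetric coupling
    matrix J and no external field; returns a spin configuration."""
    n = J.shape[0]
    s = rng.choice([-1.0, 1.0], size=n)
    for _ in range(n_steps):
        i = rng.integers(n)
        field = J[i] @ s - J[i, i] * s[i]        # local field at site i
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field))  # Pr(s_i = +1 | rest)
        s[i] = 1.0 if rng.random() < p_up else -1.0
    return s
```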
Sedenion algebra for three lepton-quark generations and its relations to SU(5) ; In this work, we analyze two models beyond the Standard Model's descriptions that make ad hoc hypotheses of three point-like lepton and quark generations without explanations of their physical origins. Instead of using the same Dirac equation involving four anticommutative matrices for all such structureless elementary particles, we consider in the first model the use of sixteen direct-product matrices of quaternions that are related to Dirac's gamma matrices. This associative direct-product matrix model could not generate three fermion generations satisfying Einstein's mass-energy relation. We show that sedenion algebra contains five distinct quaternion subalgebras and three octonion subalgebras, but with a common intersecting quaternion algebra. This model naturally leads to precisely three generations, as each of the non-associative octonion subalgebras leads to one fermion generation. Moreover, we demonstrate the use of basic sedenions.
Teaching Text-to-Image Models to Communicate ; Various works have extensively studied text-to-image generation. Although existing models perform well in text-to-image generation, there are significant challenges when directly employing them to generate images in dialogs. In this paper, we first highlight a new problem: dialog-to-image generation, that is, given the dialog context, the model should generate a realistic image which is consistent with the specified conversation as a response. To tackle the problem, we propose an efficient approach for dialog-to-image generation without any intermediate translation, which maximizes the extraction of the semantic information contained in the dialog. Considering the characteristics of dialog structure, we place a segment token before each sentence in a turn of a dialog to differentiate different speakers. Then, we fine-tune pre-trained text-to-image models to enable them to generate images conditioned on the processed dialog context. After fine-tuning, our approach can consistently improve the performance of various models across multiple metrics. Experimental results on a public benchmark demonstrate the effectiveness and practicability of our method.
Spectral Properties of the Generalized Spin-Fermion Models ; In order to account for the competition and interplay of localized and itinerant magnetic behaviour in correlated many-body systems with complex spectra, various types of spin-fermion models have been considered in the context of the Irreducible Green's Functions (IGF) approach. Examples are the generalized d-f model and the Kondo-Heisenberg model. The calculation of the quasiparticle excitation spectra with damping for these models has been performed in the framework of the equation-of-motion method for two-time temperature Green's functions within a non-perturbative approach. A unified scheme for the construction of Generalized Mean Fields (elastic scattering corrections) and self-energy (inelastic scattering) in terms of the Dyson equation has been generalized in order to include the presence of the two interacting subsystems of localized spins and itinerant electrons. A general procedure is given to obtain the quasiparticle damping in a self-consistent way. This approach gives a complete and compact description of quasiparticles and shows the flexibility and richness of the generalized spin-fermion model concept.
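For orientation, the Dyson equation referred to above has the standard form below, with G₀ the generalized mean-field propagator (elastic corrections) and Σ the self-energy (inelastic scattering), determined self-consistently.

```latex
% Dyson equation: mean-field propagator G_0, self-energy \Sigma
G = G_0 + G_0\,\Sigma\,G
\qquad\Longleftrightarrow\qquad
G = \left(G_0^{-1} - \Sigma\right)^{-1}
```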
Matter Power Spectrum for the Generalized Chaplygin Gas Model: The Newtonian Approach ; We model the cosmic medium as the mixture of a generalized Chaplygin gas and a pressureless matter component. Within a neo-Newtonian approach, in which, different from standard Newtonian cosmology, the pressure enters the homogeneous and isotropic background dynamics, we compute the matter power spectrum. The 2dFGRS data are used to discriminate between unified models of the dark sector (a purely baryonic matter component of roughly 5 percent of the total energy content and roughly 95 percent generalized Chaplygin gas) and different models, for which there is separate dark matter in addition to that accounted for by the generalized Chaplygin gas. Leaving the corresponding density parameters free, we find that the unified models are strongly disfavored. On the other hand, using unified-model priors, the observational data are also well described, in particular for small and large values of the generalized Chaplygin gas parameter α. The latter result is in agreement with a recent, more qualitative but fully relativistic, perturbation analysis in Gorini et al.
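For reference, the generalized Chaplygin gas is characterized by the standard equation of state below; α is the parameter the abstract refers to, and A > 0 is a constant.

```latex
% Generalized Chaplygin gas equation of state (A > 0):
p = -\frac{A}{\rho^{\alpha}}
```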