LOT: A Story-Centric Benchmark for Evaluating Chinese Long Text Understanding and Generation ; Standard multi-task benchmarks are essential for developing pretraining models that can generalize to various downstream tasks. Existing benchmarks for natural language processing (NLP) usually focus only on understanding or generating short texts. However, long text modeling requires many distinct abilities in contrast to short texts, such as the modeling of long-range discourse and commonsense relations, and the coherence and controllability of generation. The lack of standardized benchmarks makes it difficult to assess these abilities of a model and fairly compare different models, especially Chinese models. Therefore, we propose a story-centric benchmark named LOT for evaluating Chinese long text modeling, which aggregates two understanding tasks and two generation tasks. We construct new datasets for these tasks based on human-written Chinese stories with hundreds of words. Furthermore, we release an encoder-decoder-based Chinese long text pretraining model named LongLM with up to 1 billion parameters. We pretrain LongLM on 120G of Chinese novels with two generative tasks, including text infilling and conditional continuation. Extensive experiments show that LongLM outperforms similar-sized pretraining models substantially on both the understanding and generation tasks in LOT.
Concurrent generative models inform prediction error in the human auditory pathway ; Predictive coding is the leading algorithmic framework to understand how expectations shape our experience of reality. Its main tenet is that sensory neurons encode prediction error: the residuals between a generative model of the sensory world and the actual sensory input. However, it is yet unclear how this scheme generalises to the multilevel hierarchical architecture of sensory processing. Theoretical accounts of predictive coding agree that neurons computing prediction error and the generative model exist at all levels of the processing hierarchy. However, there is no current consensus on how predictions from independent models at different stages are integrated during the computation of prediction error. Here we investigated predictive processing with respect to two independent concurrent generative models in the auditory pathway using functional magnetic resonance imaging. We used two paradigms where human participants listened to sequences of either pure tones or FM-sweeps while we recorded BOLD responses in the inferior colliculus (IC), medial geniculate body (MGB), and auditory cortex (AC). Each paradigm included the induction of two generative models: one based on local stimulus statistics, and another based on the subjective expectations induced by task instruction. We used Bayesian model comparison to test whether neural responses in IC, MGB, and AC encoded prediction error with respect to either of the two generative models, or a combination of both. Results showed that neural populations in bilateral IC, MGB, and AC encode prediction error with respect to a combination of the two generative models, suggesting that the architecture of predictive coding might be more complex than previously hypothesised.
LAFITE: Towards Language-Free Training for Text-to-Image Generation ; One of the major challenges in training text-to-image generation models is the need for a large number of high-quality image-text pairs. While image samples are often easily accessible, the associated text descriptions typically require careful human captioning, which is particularly time- and cost-consuming. In this paper, we propose the first work to train text-to-image generation models without any text data. Our method leverages the well-aligned multimodal semantic space of the powerful pretrained CLIP model: the requirement of text-conditioning is seamlessly alleviated by generating text features from image features. Extensive experiments are conducted to illustrate the effectiveness of the proposed method. We obtain state-of-the-art results in the standard text-to-image generation tasks. Importantly, the proposed language-free model outperforms most existing models trained with full image-text pairs. Furthermore, our method can be applied in fine-tuning pretrained models, which saves both training time and cost in training text-to-image generation models. Our pretrained model obtains competitive results in zero-shot text-to-image generation on the MS-COCO dataset, yet with around only 1% of the model size and training data size relative to the recently proposed large DALL-E model.
DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models ; Recently, DALL-E, a multimodal transformer language model, and its variants, including diffusion models, have shown high-quality text-to-image generation capabilities. However, despite the realistic image generation results, there has not been a detailed analysis of how to evaluate such models. In this work, we investigate the visual reasoning capabilities and social biases of different text-to-image models, covering both multimodal transformer language models and diffusion models. First, we measure three visual reasoning skills: object recognition, object counting, and spatial relation understanding. For this, we propose PaintSkills, a compositional diagnostic evaluation dataset that measures these skills. Despite the high-fidelity image generation capability, a large gap exists between the performance of recent models and the upper bound accuracy in object counting and spatial relation understanding skills. Second, we assess gender and skin tone biases by measuring the gender/skin tone distribution of generated images across various professions and attributes. We demonstrate that recent text-to-image generation models learn specific biases about gender and skin tone from web image-text pairs. We hope our work will help guide future progress in improving text-to-image generation models on visual reasoning skills and learning socially unbiased representations. Code and data: https://github.com/j-min/DallEval
The Benefits of Model-Based Generalization in Reinforcement Learning ; Model-Based Reinforcement Learning (RL) is widely believed to have the potential to improve sample efficiency by allowing an agent to synthesize large amounts of imagined experience. Experience Replay (ER) can be considered a simple kind of model, which has proved effective at improving the stability and efficiency of deep RL. In principle, a learned parametric model could improve on ER by generalizing from real experience to augment the dataset with additional plausible experience. However, given that learned value functions can also generalize, it is not immediately obvious why model generalization should be better. Here, we provide theoretical and empirical insight into when, and how, we can expect data generated by a learned model to be useful. First, we provide a simple theorem motivating how learning a model as an intermediate step can narrow down the set of possible value functions more than learning a value function directly from data using the Bellman equation. Second, we provide an illustrative example showing empirically how a similar effect occurs in a more concrete setting with neural network function approximation. Finally, we provide extensive experiments showing the benefit of model-based learning for online RL in environments with combinatorial complexity, but factored structure that allows a learned model to generalize. In these experiments, we take care to control for other factors in order to isolate, insofar as possible, the benefit of using experience generated by a learned model relative to ER alone.
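To make the contrast with ER concrete, here is a minimal tabular Dyna-Q sketch of the general mechanism of learning a one-step model and replaying imagined transitions for value learning. This is an illustration only, not the paper's experimental setup; the toy chain environment and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma, alpha = 10, 2, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
model = {}  # (s, a) -> (r, s'), a learned one-step model of real experience

def q_update(s, a, r, s2):
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

def step_env(s, a):  # toy deterministic chain: reward only at the last state
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return (1.0 if s2 == n_states - 1 else 0.0), s2

s = 0
for t in range(2000):
    a = int(rng.integers(n_actions)) if rng.random() < 0.1 else int(Q[s].argmax())
    r, s2 = step_env(s, a)
    q_update(s, a, r, s2)      # learn from real experience
    model[(s, a)] = (r, s2)    # update the learned model
    for _ in range(5):         # planning: replay imagined transitions from the model
        (ps, pa), (pr, ps2) = list(model.items())[rng.integers(len(model))]
        q_update(ps, pa, pr, ps2)
    s = s2 if s2 != n_states - 1 else 0
```

A tabular model memorizes rather than generalizes, so this sketch shows only the data-augmentation loop; the paper's point concerns what happens when the model is parametric and can generalize beyond stored transitions.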
StrokeGAN: Few-Shot Semi-Supervised Chinese Font Generation with Stroke Encoding ; The generation of Chinese fonts has a wide range of applications. The currently predominant methods are mainly based on deep generative models, especially generative adversarial networks (GANs). However, existing GAN-based models usually suffer from the well-known mode collapse problem; when mode collapse happens, such models fail to yield the correct fonts. To address this issue, we introduce a one-bit stroke encoding and a few-shot semi-supervised scheme (i.e., using a few paired data as semi-supervised information) to explore the local and global structure information of Chinese characters respectively, motivated by the intuition that strokes and characters directly embody certain local and global modes of Chinese characters. Based on these ideas, this paper proposes an effective model called StrokeGAN, which incorporates the stroke encoding and the few-shot semi-supervised scheme into the CycleGAN model. The effectiveness of the proposed model is demonstrated by extensive experiments. Experimental results show that the mode collapse issue can be effectively alleviated by the introduced one-bit stroke encoding and few-shot semi-supervised training scheme, and that the proposed model outperforms the state-of-the-art models in fourteen font generation tasks in terms of four important evaluation metrics and the quality of generated characters. Besides CycleGAN, we also show that the proposed idea can be adapted to other existing models to improve their performance. The effectiveness of the proposed model for zero-shot traditional Chinese font generation is also evaluated in this paper.
Retrieval-Augmented Multimodal Language Modeling ; Recent multimodal models such as DALL-E and CM3 have achieved remarkable progress in text-to-image and image-to-text generation. However, these models store all learned knowledge (e.g., the appearance of the Eiffel Tower) in the model parameters, requiring increasingly larger models and training data to capture more knowledge. To integrate knowledge in a more scalable and modular way, we propose a retrieval-augmented multimodal model, which enables a base multimodal model (generator) to refer to relevant text and images fetched by a retriever from external memory (e.g., documents on the web). Specifically, for the retriever, we use a pretrained CLIP, and for the generator, we train a CM3 Transformer on the LAION dataset. Our resulting model, named Retrieval-Augmented CM3 (RA-CM3), is the first multimodal model that can retrieve and generate both text and images. We show that RA-CM3 significantly outperforms baseline multimodal models such as DALL-E and CM3 on both image and caption generation tasks (12 FID and 17 CIDEr improvements on MS-COCO), while requiring much less compute for training (30% of DALL-E's). Moreover, we show that RA-CM3 exhibits novel capabilities, such as faithful image generation and multimodal in-context learning (e.g., image generation from demonstrations).
Hierarchically branched diffusion models for class-conditional generation ; Diffusion models have attained state-of-the-art performance in generating realistic objects, including when conditioning generation on class labels. Current class-conditional diffusion models, however, implicitly model the diffusion process on all classes in a flat fashion, ignoring any known relationships between classes. Class-labeled datasets, including those common in scientific domains, are rife with internal structure. To take advantage of this structure, we propose hierarchically branched diffusion models as a novel framework for class-conditional generation. Branched diffusion models explicitly leverage the inherent relationships between distinct classes in the dataset to learn the underlying diffusion process in a hierarchical manner. We highlight several advantages of branched diffusion models over the current state-of-the-art methods for class-conditional diffusion. Firstly, they can be easily extended to novel classes in a continual-learning setting at scale. Secondly, they enable more sophisticated forms of conditional generation, such as analogy-based conditional generation (i.e., transmutation). Finally, they offer a novel interpretability into the class-conditional generation process. We extensively evaluate branched diffusion models on several benchmark and large real-world scientific datasets, spanning different data modalities (images, tabular data, and graphs). In particular, we showcase the advantages of branched diffusion models on a real-world single-cell RNA-seq dataset, where our branched model leverages the intrinsic hierarchical structure between human cell types.
Beyond Statistical Similarity: Rethinking Metrics for Deep Generative Models in Engineering Design ; Deep generative models, such as Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), Diffusion Models, and Transformers, have shown great promise in a variety of applications, including image and speech synthesis, natural language processing, and drug discovery. However, when applied to engineering design problems, evaluating the performance of these models can be challenging, as traditional statistical metrics based on likelihood may not fully capture the requirements of engineering applications. This paper doubles as a review and a practical guide to evaluation metrics for deep generative models (DGMs) in engineering design. We first summarize well-accepted 'classic' evaluation metrics for deep generative models grounded in machine learning theory and typical computer science applications. Using case studies, we then highlight why these metrics seldom translate well to design problems but see frequent use due to the lack of established alternatives. Next, we curate a set of design-specific metrics which have been proposed across different research communities and can be used for evaluating deep generative models. These metrics focus on unique requirements in design and engineering, such as constraint satisfaction, functional performance, novelty, and conditioning. We structure our review and discussion as a set of practical selection criteria and usage guidelines. Throughout our discussion, we apply the metrics to models trained on simple 2-dimensional example problems. Finally, to illustrate the selection process and classic usage of the presented metrics, we evaluate three deep generative models on a multifaceted bicycle frame design problem, considering performance target achievement, design novelty, and geometric constraints. We publicly release the code for the datasets, models, and metrics used throughout the paper at decode.mit.edu/projects/metrics.
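As a minimal illustration of two of the design-specific metric families named above (constraint satisfaction and novelty), here is a sketch on stand-in 2-dimensional designs; the geometric constraint and the data below are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
training = rng.uniform(0, 1, size=(500, 2))    # stand-in training designs
generated = rng.uniform(0, 1, size=(200, 2))   # stand-in generated designs

def constraint_ok(x):
    # hypothetical geometric constraint: a valid design must lie inside a disc
    return np.linalg.norm(x - 0.5) <= 0.45

# constraint satisfaction: fraction of generated designs that are feasible
satisfaction = np.mean([constraint_ok(x) for x in generated])

# novelty: distance from each generated design to its nearest training design
d = np.linalg.norm(generated[:, None, :] - training[None, :, :], axis=-1)
novelty = d.min(axis=1).mean()

print(f"constraint satisfaction: {satisfaction:.2%}, mean novelty: {novelty:.3f}")
```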
Systematically Finding Security Vulnerabilities in Black-Box Code Generation Models ; Recently, large language models for code generation have achieved breakthroughs in several programming language tasks. Their advances in competition-level programming problems have made them an emerging pillar in AI-assisted pair programming. Tools such as GitHub Copilot are already part of the daily programming workflow and are used by more than a million developers. The training data for these models is usually collected from open-source repositories (e.g., GitHub) that contain software faults and security vulnerabilities. This unsanitized training data can lead language models to learn these vulnerabilities and propagate them in the code generation procedure. Given the wide use of these models in the daily workflow of developers, it is crucial to study the security aspects of these models systematically. In this work, we propose the first approach to automatically find security vulnerabilities in black-box code generation models. To achieve this, we propose a novel black-box inversion approach based on few-shot prompting. We evaluate the effectiveness of our approach by examining code generation models in the generation of high-risk security weaknesses. We show that our approach automatically and systematically finds thousands of security vulnerabilities in various code generation models, including the commercial black-box model GitHub Copilot.
Quantifying Sample Anonymity in Score-Based Generative Models with Adversarial Fingerprinting ; Recent advances in score-based generative models have led to a huge spike in the development of downstream applications using generative models, ranging from data augmentation over image and video generation to anomaly detection. Despite publicly available trained models, their potential to be used for privacy-preserving data sharing has not been fully explored yet. Training diffusion models on private data and disseminating the models and weights rather than the raw dataset paves the way for innovative large-scale data-sharing strategies, particularly in healthcare, where safeguarding patients' personal health information is paramount. However, publishing such models without individual consent of, e.g., the patients from whom the data was acquired, necessitates guarantees that identifiable training samples will never be reproduced, thus protecting personal health data and satisfying the requirements of policymakers and regulatory bodies. This paper introduces a method for estimating the upper bound of the probability of reproducing identifiable training images during the sampling process. This is achieved by designing an adversarial approach that searches for anatomic fingerprints, such as medical devices or dermal art, which could potentially be employed to re-identify training images. Our method harnesses the learned score-based model to estimate the probability of the entire subspace of the score function that may be utilized for one-to-one reproduction of training samples. To validate our estimates, we generate anomalies containing a fingerprint and investigate whether generated samples from trained generative models can be uniquely mapped to the original training samples. Overall, our results show that privacy-breaching images are reproduced at sampling time if the models were trained without care.
Training Priors Predict Text-To-Image Model Performance ; Text-to-image models can often generate some relations, i.e., 'astronaut riding horse', but fail to generate other relations composed of the same basic parts, i.e., 'horse riding astronaut'. These failures are often taken as evidence that the models rely on training priors rather than constructing novel images compositionally. This paper tests this intuition directly on the stable-diffusion 2.1 text-to-image model. By looking at the subject-verb-object (SVO) triads that form the backbone of these prompts (e.g., astronaut, ride, horse), we find that the more often an SVO triad appears in the training data, the better the model can generate an image aligned with that triad. Here, by aligned we mean that each of the terms appears in the generated image in the proper relation to each other. However, this increased frequency also diminishes how well the model can generate an image aligned with the flipped triad. For example, if 'astronaut riding horse' appears frequently in the training data, the image for 'horse riding astronaut' will tend to be poorly aligned. We also find that models often struggle to generate terms in atypical roles; e.g., if 'horse' is more often the semantic patient (object), the model might struggle to visualize it as a semantic agent (subject). Our results thus show that current models are biased to generate images aligned with relations seen in training, and provide important new data in the ongoing debate on whether these text-to-image models employ abstract compositional structure in a traditional sense, or rather interpolate between relations explicitly seen in the training data.
On the Stability of Iterative Retraining of Generative Models on their own Data ; Deep generative models have made tremendous progress in modeling complex data, often exhibiting generation quality that surpasses a typical human's ability to discern the authenticity of samples. Undeniably, a key driver of this success is enabled by the massive amounts of web-scale data consumed by these models. Due to these models' striking performance and ease of availability, the web will inevitably be increasingly populated with synthetic content. Such a fact directly implies that future iterations of generative models must contend with the reality that their training is curated from both clean data and artificially generated data from past models. In this paper, we develop a framework to rigorously study the impact of training generative models on mixed datasets of real and synthetic data on their stability. We first prove the stability of iterative training under the condition that the initial generative models approximate the data distribution well enough and the proportion of clean training data w.r.t. synthetic data is large enough. We empirically validate our theory on both synthetic and natural images by iteratively training normalizing flows and state-of-the-art diffusion models on CIFAR-10 and FFHQ.
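A toy sketch of the iterative retraining loop studied above, with a Gaussian fitter standing in for the generative model: each generation retrains on a mix of fixed clean data and fresh synthetic data drawn from the previous model. The mixing proportion `lam` and all other specifics are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
real = rng.normal(0.0, 1.0, size=20_000)   # clean data from the true distribution
lam = 0.8                                   # proportion of real data in each round
mu, sigma = 0.0, 1.0                        # initial "generative model" parameters

for generation in range(10):
    # the previous model contributes the remaining (1 - lam) fraction of the data
    synthetic = rng.normal(mu, sigma, size=int((1 - lam) / lam * len(real)))
    mixed = np.concatenate([real, synthetic])
    mu, sigma = mixed.mean(), mixed.std()   # "retrain" on the mixed dataset
    print(f"gen {generation}: mu={mu:+.4f}, sigma={sigma:.4f}")
```

With a large `lam` the fitted parameters stay pinned near the true distribution across generations, which mirrors the paper's stability condition; shrinking `lam` lets estimation error compound.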
SBML2Modelica: integrating biochemical models within open-standard simulation ecosystems ; Motivation: SBML is the most widespread language for the definition of biochemical models. Although dozens of SBML simulators are available, there is a general lack of support for the integration of SBML models within open-standard general-purpose simulation ecosystems. This hinders co-simulation and integration of SBML models within larger model networks, in order to, e.g., enable in silico clinical trials of drugs, pharmacological protocols, or engineering artefacts such as biomedical devices against Virtual Physiological Human models. Modelica is one of the most popular existing open-standard general-purpose simulation languages, supported by many simulators. Modelica models are especially suited for the definition of complex networks of heterogeneous models from virtually all application domains. Models written in Modelica (and in 100+ other languages) can be readily exported into black-box Functional Mock-Up Units (FMUs), and seamlessly co-simulated and integrated into larger model networks within open-standard language-independent simulation ecosystems. Results: In order to enable SBML model integration within heterogeneous model networks, we present SBML2Modelica, a software system translating SBML models into well-structured, user-intelligible, easily modifiable Modelica models. SBML2Modelica is SBML Level 3 Version 2-compliant and succeeds on 96.47% of the SBML Test Suite Core (with a few rare, intricate and easily avoidable combinations of constructs unsupported and cleanly signalled to the user). Our experimental campaign on 613 models from the BioModels database (with up to 5438 variables) shows that the major open-source general-purpose Modelica and FMU simulators achieve performance comparable to state-of-the-art specialized SBML simulators. Availability and implementation: https://bitbucket.org/mclab/sbml2modelica
Global Cellular Automata (GCA): A Massively Parallel Computing Model ; The Global Cellular Automata (GCA) model is a generalization of the Cellular Automata (CA) model. The GCA model consists of a collection of cells which change their states depending on the states of their neighbors, as in the classical CA model. In generalization of the CA model, the neighbors are no longer fixed and local; they are variable and global. In the basic GCA model, a cell is structured into a data part and a pointer part. The pointer part consists of several pointers that hold addresses of global neighbors. The data rule defines the new data state, and the pointer rules define the new pointer states. The cell's state is synchronously or asynchronously updated using the new data and new pointer states. Thereby the global neighbors can be changed from generation to generation. Similar to the CA model, only the cell's own state is modified. Thereby write conflicts cannot occur, and all cells can work in parallel, which makes it a massively parallel model. The GCA model is related to the CROW (concurrent read, owners write) model, a specific PRAM (parallel random access machine) model. Therefore many of the well-studied PRAM algorithms can be transformed into GCA algorithms. Moreover, the GCA model allows one to describe a large number of data-parallel applications in a suitable way. The GCA model can easily be implemented in software, efficiently interpreted on standard parallel architectures, and synthesized/configured into special hardware target architectures. This article reviews the model, applications, and hardware architectures.
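A minimal sketch of one synchronous GCA generation, assuming a cell with a data part and a pointer part; the toy data and pointer rules below are illustrative inventions, not taken from the article.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    data: int
    pointers: list  # indices of the cell's current global neighbors

def gca_step(cells, data_rule, pointer_rule):
    """One synchronous GCA generation: every cell reads its global neighbors'
    old states and rewrites only its own data and pointer parts, so no write
    conflicts can occur and all cells can update in parallel."""
    new = []
    for i, c in enumerate(cells):
        neigh = [cells[p] for p in c.pointers]
        new.append(Cell(data_rule(c, neigh),
                        pointer_rule(c, neigh, len(cells), i)))
    return new

# toy rules: sum neighbor data; double each pointer address modulo n
data_rule = lambda c, neigh: c.data + sum(n.data for n in neigh)
pointer_rule = lambda c, neigh, n, i: [(2 * p) % n for p in c.pointers]

cells = [Cell(1, [(i + 1) % 8]) for i in range(8)]
for _ in range(3):
    cells = gca_step(cells, data_rule, pointer_rule)
```

The key GCA feature is visible in the pointer rule: the neighborhood itself is state and can change from generation to generation, unlike the fixed local neighborhood of a classical CA.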
DeLiGAN: Generative Adversarial Networks for Diverse and Limited Data ; A class of recent approaches for generating images, called Generative Adversarial Networks (GANs), have been used to generate impressively realistic images of objects, bedrooms, handwritten digits and a variety of other image modalities. However, typical GAN-based approaches require large amounts of training data to capture the diversity across the image modality. In this paper, we propose DeLiGAN, a novel GAN-based architecture for diverse and limited training data scenarios. In our approach, we reparameterize the latent generative space as a mixture model and learn the mixture model's parameters along with those of the GAN. This seemingly simple modification to the GAN framework is surprisingly effective and results in models which enable diversity in generated samples although trained with limited data. In our work, we show that DeLiGAN can generate images of handwritten digits, objects and hand-drawn sketches, all using limited amounts of data. To quantitatively characterize intra-class diversity of generated samples, we also introduce a modified version of the inception-score, a measure which has been found to correlate well with human assessment of generated samples.
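A sketch of the core DeLiGAN idea: the latent vector is sampled from a mixture of Gaussians via the reparameterization z = mu_c + sigma_c * eps, which keeps z differentiable with respect to the mixture parameters. The parameters are shown with arbitrary initial values here; in the paper they are learned jointly with the GAN.

```python
import numpy as np

rng = np.random.default_rng(3)
n_components, latent_dim = 10, 32
# mixture parameters (learned jointly with the GAN in DeLiGAN;
# random initial values only in this sketch)
mu = rng.normal(0, 1, size=(n_components, latent_dim))
sigma = np.full((n_components, latent_dim), 0.2)

def sample_latent(batch_size):
    """Reparameterized draw: pick a component c, then z = mu_c + sigma_c * eps,
    so gradients can flow into mu and sigma during GAN training."""
    c = rng.integers(n_components, size=batch_size)
    eps = rng.normal(0, 1, size=(batch_size, latent_dim))
    return mu[c] + sigma[c] * eps

z = sample_latent(64)  # fed to the generator in place of plain N(0, I) noise
```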
Improving Bidirectional Generation between Different Modalities with Variational Autoencoders ; We investigate deep generative models that can exchange multiple modalities bidirectionally, e.g., generating images from corresponding texts and vice versa. A major approach to achieve this objective is to train a model that integrates all the information of different modalities into a joint representation and then to generate one modality from the corresponding other modality via this joint representation. We simply applied this approach to variational autoencoders (VAEs), which we call a joint multimodal variational autoencoder (JMVAE). However, we found that when this model attempts to generate a large-dimensional modality missing at the input, the joint representation collapses and this modality cannot be generated successfully. Furthermore, we confirmed that this difficulty cannot be resolved even using a known solution. Therefore, in this study, we propose two models to prevent this difficulty: JMVAE-kl and JMVAE-h. Results of our experiments demonstrate that these methods can prevent the difficulty above and that they generate modalities bidirectionally with equal or higher likelihood than conventional VAE methods, which generate in only one direction. Moreover, we confirm that these methods can obtain the joint representation appropriately, so that they can generate various variations of modality by moving over the joint representation or changing the value of another modality.
How Images Inspire Poems: Generating Classical Chinese Poetry from Images with Memory Networks ; With the recent advances of neural models and natural language processing, automatic generation of classical Chinese poetry has drawn significant attention due to its artistic and cultural value. Previous works mainly focus on generating poetry given keywords or other text information, while visual inspirations for poetry have been rarely explored. Generating poetry from images is much more challenging than generating poetry from text, since images contain very rich visual information which cannot be described completely using several keywords, and a good poem should convey the image accurately. In this paper, we propose a memory-based neural model which exploits images to generate poems. Specifically, an Encoder-Decoder model with a topic memory network is proposed to generate classical Chinese poetry from images. To the best of our knowledge, this is the first work attempting to generate classical Chinese poetry from images with neural networks. A comprehensive experimental investigation with both human evaluation and quantitative analysis demonstrates that the proposed model can generate poems which convey images accurately.
Diesel Generator Model Parameterization for Microgrid Simulation Using Hybrid Box-Constrained Levenberg-Marquardt Algorithm ; Existing generator parameterization methods, typically developed for large turbine generator units, are difficult to apply to small kW-level diesel generators in microgrid applications. This paper presents a model parameterization method that estimates a complete set of kW-level diesel generator parameters simultaneously, using only load-step-change tests with limited measurement points. This method provides a more cost-efficient and robust approach to achieve high-fidelity modeling of diesel generators for microgrid dynamic simulation. A two-stage hybrid box-constrained Levenberg-Marquardt (HBCLM) algorithm is developed to search for the optimal parameter set given the parameter bounds. A heuristic algorithm, namely the Generalized Opposition-based Learning Genetic Algorithm (GOLGA), is applied to identify proper initial estimates at the first stage, followed by a modified Levenberg-Marquardt algorithm designed to fine-tune the solution based on the first-stage result. The proposed method is validated against dynamic simulation of a diesel generator model and field measurements from a 16-kW diesel generator unit.
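A toy two-stage fit in the same spirit, assuming a hypothetical first-order step-response surrogate for the generator: a crude random multi-start stands in for GOLGA, followed by bound-constrained least-squares refinement. Note that scipy's 'lm' method does not support bounds, so the trust-region 'trf' method is used for the refinement stage.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
t = np.linspace(0, 5, 60)

def step_response(theta, t):
    # hypothetical first-order-plus-droop surrogate for a load-step response
    k, tau, droop = theta
    return k * (1 - np.exp(-t / tau)) - droop * t

true = np.array([1.2, 0.8, 0.05])
y = step_response(true, t) + rng.normal(0, 0.01, t.size)  # noisy "measurements"
residual = lambda theta: step_response(theta, t) - y
lb, ub = [0.1, 0.1, 0.0], [5.0, 5.0, 0.5]

# stage 1: crude global search (random multi-start stands in for GOLGA here)
starts = rng.uniform(lb, ub, size=(50, 3))
x0 = min(starts, key=lambda s: np.sum(residual(s) ** 2))

# stage 2: box-constrained least-squares refinement from the stage-1 estimate
fit = least_squares(residual, x0, bounds=(lb, ub), method="trf")
print(fit.x)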
Exact generator and its high-order expansions in the time-convolutionless generalized master equation: applications to the spin-boson model and excitation energy transfer ; The time-convolutionless (TCL) quantum master equation provides a powerful tool to simulate the reduced dynamics of a quantum system coupled to a bath. The key quantity in the TCL master equation is the so-called kernel or generator, which describes the effects of the bath degrees of freedom. Since the exact TCL generators are usually hard to calculate analytically, most applications of the TCL generalized master equation have relied on approximate generators using second- and fourth-order perturbative expansions. By using the hierarchical equation of motion (HEOM) and extended HEOM methods, we present a new approach to calculate the exact TCL generator and its high-order perturbative expansions. The new approach is applied to the spin-boson model with different sets of parameters, to investigate the convergence of the high-order expansions of the TCL generator. We also discuss circumstances where the exact TCL generator becomes singular for the spin-boson model, and a model of excitation energy transfer in the Fenna-Matthews-Olson complex.
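For background, the standard textbook form of the TCL master equation and its second-order generator (stated here as general context, not as the paper's exact expressions):

```latex
% The TCL master equation: a time-local generator K(t) acts on the
% reduced density matrix rho_S(t),
\frac{\mathrm{d}}{\mathrm{d}t}\,\rho_S(t) = \mathcal{K}(t)\,\rho_S(t),
% and its second-order perturbative approximation (interaction picture,
% factorized initial state rho_S(0) \otimes rho_B) reads
\mathcal{K}_2(t)\,\rho_S(t)
  = -\int_0^t \mathrm{d}s\,
    \operatorname{Tr}_B\!\big[H_I(t),\big[H_I(s),\,\rho_S(t)\otimes\rho_B\big]\big].
```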
Variational Cross-domain Natural Language Generation for Spoken Dialogue Systems ; Cross-domain natural language generation (NLG) is still a difficult task within spoken dialogue modelling. Given a semantic representation provided by the dialogue manager, the language generator should generate sentences that convey desired information. Traditional template-based generators can produce sentences with all necessary information, but these sentences are not sufficiently diverse. With RNN-based models, the diversity of the generated sentences can be high; however, in the process some information is lost. In this work, we improve an RNN-based generator by considering latent information at the sentence level during generation, using the conditional variational autoencoder architecture. We demonstrate that our model outperforms the original RNN-based generator, while yielding highly diverse sentences. In addition, our model performs better when the training data is limited.
Auto-Encoding Progressive Generative Adversarial Networks for 3D Multi-Object Scenes ; 3D multi-object generative models allow us to synthesize a large range of novel 3D multi-object scenes and also identify objects, shapes, layouts and their positions. But multi-object scenes are difficult to create because the dataset is multimodal in nature. Conventional 3D generative adversarial models are not efficient in generating multi-object scenes; they usually generate either a single object or fuzzy results for multiple objects. Autoencoder models have much scope in feature extraction and representation learning using the unsupervised paradigm in probabilistic spaces. We try to make use of this property in our proposed model. In this paper we propose a novel architecture using 3D-ConvNets trained with the progressive training paradigm that is able to generate realistic high-resolution 3D scenes of rooms, bedrooms, offices, etc. with various pieces of furniture and objects. We make use of the adversarial autoencoder along with the WGAN-GP loss in our discriminator loss function. Finally, this new approach to multi-object scene generation is also able to generate a greater number of objects per scene.
Two Birds, One Stone: A Simple, Unified Model for Text Generation from Structured and Unstructured Data ; A number of researchers have recently questioned the necessity of increasingly complex neural network (NN) architectures. In particular, several recent papers have shown that simpler, properly tuned models are at least competitive across several NLP tasks. In this work, we show that this is also the case for text generation from structured and unstructured data. We consider neural table-to-text generation and neural question generation (NQG) tasks for text generation from structured and unstructured data, respectively. Table-to-text generation aims to generate a description based on a given table, and NQG is the task of generating a question from a given passage, where the generated question can be answered by a certain sub-span of the passage, using NN models. Experimental results demonstrate that a basic attention-based seq2seq model trained with the exponential moving average technique achieves the state of the art in both tasks. Code is available at https://github.com/h-shahidi/2birds-gen.
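A minimal sketch of the exponential moving average (EMA) technique mentioned above, applied to a dictionary of parameters; the parameter names and the stand-in training step are hypothetical.

```python
import numpy as np

def ema_update(ema_weights, weights, decay=0.999):
    """Exponential moving average of parameters: after each optimizer step,
    the shadow copy drifts slowly toward the current weights; the shadow
    copy (not the raw weights) is then used at evaluation time."""
    for name, w in weights.items():
        ema_weights[name] = decay * ema_weights[name] + (1 - decay) * w
    return ema_weights

# usage with hypothetical parameter dictionaries
weights = {"enc": np.ones(4), "dec": np.zeros(4)}
ema = {k: v.copy() for k, v in weights.items()}
for step in range(100):
    # stand-in for one gradient-descent training step
    weights = {k: v + 0.01 for k, v in weights.items()}
    ema = ema_update(ema, weights)
```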
OptiGAN: Generative Adversarial Networks for Goal Optimized Sequence Generation ; One of the challenging problems in sequence generation tasks is the optimized generation of sequences with specific desired goals. Current sequential generative models mainly generate sequences to closely mimic the training data, without direct optimization of desired goals or properties specific to the task. We introduce OptiGAN, a generative model that incorporates both Generative Adversarial Networks (GANs) and Reinforcement Learning (RL) to optimize desired goal scores using policy gradients. We apply our model to text and real-valued sequence generation, where our model is able to achieve higher desired scores, outperforming GAN and RL baselines while not sacrificing output sample diversity.
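A bare-bones sketch of the policy-gradient ingredient: REINFORCE on a per-position softmax policy with a hypothetical goal score. There is no GAN component and no baseline here; it only illustrates how a reward signal shapes sequence generation.

```python
import numpy as np

rng = np.random.default_rng(5)
vocab, seq_len, lr = 5, 8, 0.05
logits = np.zeros((seq_len, vocab))  # stand-in for a sequence generator's policy

def reward(seq):
    # hypothetical task-specific goal score: reward frequent use of token 3
    return float(np.mean(seq == 3))

for _ in range(500):
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    seq = np.array([rng.choice(vocab, p=probs[t]) for t in range(seq_len)])
    r = reward(seq)
    # REINFORCE: gradient of log-prob of the sampled token is (one_hot - probs);
    # scale it by the reward and ascend
    grad = -probs
    grad[np.arange(seq_len), seq] += 1.0
    logits += lr * r * grad
```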
Multichannel Generative Language Model: Learning All Possible Factorizations Within and Across Channels ; A channel corresponds to a viewpoint or transformation of an underlying meaning. A pair of parallel sentences in English and French express the same underlying meaning, but through two separate channels corresponding to their languages. In this work, we present the Multichannel Generative Language Model (MGLM). MGLM is a generative joint distribution model over channels. MGLM marginalizes over all possible factorizations within and across all channels. MGLM endows flexible inference, including unconditional generation, conditional generation (where one channel is observed and other channels are generated), and partially observed generation (where incomplete observations are spread across all the channels). We experiment with the Multi30K dataset containing English, French, Czech, and German. We demonstrate experiments with unconditional, conditional, and partially conditional generation. We provide qualitative samples, sampled unconditionally from the generative joint distribution. We also quantitatively analyze the quality-diversity trade-offs and find MGLM outperforms traditional bilingual discriminative models.
Latent Neural Differential Equations for Video Generation ; Generative Adversarial Networks have recently shown promise for video generation, building off of the success of image generation while also addressing a new challenge: time. Although time was analyzed in some early work, the literature has not adequately grown with temporal modeling developments. We study the effects of Neural Differential Equations to model the temporal dynamics of video generation. The paradigm of Neural Differential Equations presents many theoretical strengths, including the first continuous representation of time within video generation. In order to address the effects of Neural Differential Equations, we investigate how changes in temporal models affect generated video quality. Our results give support to the usage of Neural Differential Equations as a simple replacement for older temporal generators. While keeping run times similar and decreasing parameter count, we produce a new state-of-the-art model in 64×64 pixel unconditional video generation, with an Inception Score of 15.20.
WakaVT: A Sequential Variational Transformer for Waka Generation ; Poetry generation has long been a challenge for artificial intelligence. In the scope of Japanese poetry generation, many researchers have paid attention to Haiku generation, but few have focused on Waka generation. To further explore the creative potential of natural language generation systems in Japanese poetry creation, we propose a novel Waka generation model, WakaVT, which automatically produces Waka poems given user-specified keywords. Firstly, an additive mask-based approach is presented to satisfy the form constraint. Secondly, the structures of the Transformer and the variational autoencoder are integrated to enhance the quality of generated content. Specifically, to obtain novelty and diversity, WakaVT employs a sequence of latent variables, which effectively captures word-level variability in Waka data. To improve linguistic quality in terms of fluency, coherence, and meaningfulness, we further propose a fused multilevel self-attention mechanism, which properly models the hierarchical linguistic structure of Waka. To the best of our knowledge, we are the first to investigate Waka generation with models based on the Transformer and/or variational autoencoder. Both objective and subjective evaluation results demonstrate that our model outperforms baselines significantly.
RetGen: A Joint Framework for Retrieval and Grounded Text Generation Modeling ; Recent advances in large-scale pretraining such as GPT-3 allow seemingly high-quality text to be generated from a given prompt. However, such generation systems often suffer from problems of hallucinated facts, and are not inherently designed to incorporate useful external information. Grounded generation models appear to offer remedies, but their training typically relies on rarely-available parallel data where information-relevant documents are provided for context. We propose a framework that alleviates this data constraint by jointly training a grounded generator and document retriever on the language model signal. The model learns to reward retrieval of the documents with the highest utility in generation, and attentively combines them using a Mixture-of-Experts (MoE) ensemble to generate follow-on text. We demonstrate that both generator and retriever can take advantage of this joint training and work synergistically to produce more informative and relevant text in both prose and dialogue generation.
Pretrained Language Models for Text Generation: A Survey ; Text generation has become one of the most important yet challenging tasks in natural language processing (NLP). The resurgence of deep learning has greatly advanced this field through neural generation models, especially the paradigm of pretrained language models (PLMs). In this paper, we present an overview of the major advances achieved in the topic of PLMs for text generation. As preliminaries, we present the general task definition and briefly describe the mainstream architectures of PLMs for text generation. As the core content, we discuss how to adapt existing PLMs to model different input data and satisfy special properties in the generated text. We further summarize several important fine-tuning strategies for text generation. Finally, we present several future directions and conclude this paper. Our survey aims to provide text generation researchers with a synthesis and pointer to related research.
Stronger Generalization Guarantees for Robot Learning by Combining Generative Models and Real-World Data ; We are motivated by the problem of learning policies for robotic systems with rich sensory inputs (e.g., vision) in a manner that allows us to guarantee generalization to environments unseen during training. We provide a framework for providing such generalization guarantees by leveraging a finite dataset of real-world environments in combination with a (potentially inaccurate) generative model of environments. The key idea behind our approach is to utilize the generative model in order to implicitly specify a prior over policies. This prior is updated using the real-world dataset of environments by minimizing an upper bound on the expected cost across novel environments, derived via Probably Approximately Correct (PAC)-Bayes generalization theory. We demonstrate our approach on two simulated systems with nonlinear/hybrid dynamics and rich sensing modalities: (i) quadrotor navigation with an onboard vision sensor, and (ii) grasping objects using a depth sensor. Comparisons with prior work demonstrate the ability of our approach to obtain stronger generalization guarantees by utilizing generative models. We also present hardware experiments for validating our bounds for the grasping task.
CG-NeRF: Conditional Generative Neural Radiance Fields ; While recent NeRF-based generative models achieve the generation of diverse 3D-aware images, these approaches have limitations when generating images that contain user-specified characteristics. In this paper, we propose a novel model, referred to as conditional generative neural radiance fields (CG-NeRF), which can generate multi-view images reflecting extra input conditions such as images or texts. While preserving the common characteristics of a given input condition, the proposed model generates diverse images in fine detail. We propose: (1) a novel unified architecture which disentangles the shape and appearance from a condition given in various forms, and (2) the pose-consistent diversity loss for generating multimodal outputs while maintaining consistency of the view. Experimental results show that the proposed method maintains consistent image quality on various condition types and achieves superior fidelity and diversity compared to existing NeRF-based generative models.
Exploring Generative Adversarial Networks for Text-to-Image Generation with Evolution Strategies ; In the context of generative models, text-to-image generation achieved impressive results in recent years. Models using different approaches were proposed and trained on huge datasets of pairs of texts and images. However, some methods rely on pretrained models such as Generative Adversarial Networks, searching through the latent space of the generative model by using a gradient-based approach to update the latent vector, relying on loss functions such as the cosine similarity. In this work, we follow a different direction by proposing the use of the Covariance Matrix Adaptation Evolution Strategy to explore the latent space of Generative Adversarial Networks. We compare this approach to the one using Adam and a hybrid strategy. We design an experimental study to compare the three approaches using different text inputs for image generation, adapting an evaluation method based on the projection of the resulting samples into a two-dimensional grid to inspect the diversity of the distributions. The results evidence that the evolutionary method achieves more diversity in the generation of samples, exploring different regions of the resulting grids. Besides, we show that the hybrid method combines the explored areas of the gradient-based and evolutionary approaches, leveraging the quality of the results.
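A sketch of CMA-ES latent-space search using the pycma package (assumed installed via `pip install cma`); the score function below is a stand-in, where in the paper's setting it would be a negated text-image similarity between the decoded image G(z) and the text prompt.

```python
import numpy as np
import cma  # pycma package, assumed available

latent_dim = 64
target = np.random.default_rng(6).normal(size=latent_dim)

def score(z):
    # stand-in objective: distance to a fixed target latent; in the paper's
    # setting this would be a negated similarity between G(z) and the prompt
    return float(np.sum((z - target) ** 2))

es = cma.CMAEvolutionStrategy(latent_dim * [0.0], 0.5)  # mean 0, step size 0.5
for _ in range(50):
    candidates = es.ask()                                # sample latents
    es.tell(candidates, [score(z) for z in candidates])  # CMA-ES minimizes
best_z = es.result.xbest                                 # latent fed to the generator
```

Because CMA-ES maintains a population and adapts its sampling covariance, it explores multiple latent regions per iteration rather than following a single gradient path, which is consistent with the diversity advantage the paper reports.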
GCISG: Guided Causal Invariant Learning for Improved Syn-to-Real Generalization ; Training a deep learning model with artificially generated data can be an alternative when training data are scarce, yet it suffers from poor generalization performance due to a large domain gap. In this paper, we characterize the domain gap by using a causal framework for data generation. We assume that the real and synthetic data have common content variables but different style variables. Thus, a model trained on a synthetic dataset might have poor generalization, as the model learns the nuisance style variables. To that end, we propose causal invariance learning which encourages the model to learn a style-invariant representation that enhances syn-to-real generalization. Furthermore, we propose a simple yet effective feature distillation method that prevents catastrophic forgetting of semantic knowledge of the real domain. In sum, we refer to our method as Guided Causal Invariant Syn-to-real Generalization, which effectively improves the performance of syn-to-real generalization. We empirically verify the validity of the proposed methods, and in particular, our method achieves state-of-the-art on visual syn-to-real domain generalization tasks such as image classification and semantic segmentation.
Deep Spatial Domain Generalization ; Spatial autocorrelation and spatial heterogeneity widely exist in spatial data, which make traditional machine learning models perform poorly. Spatial domain generalization is a spatial extension of domain generalization, which can generalize to unseen spatial domains in continuous 2D space. Specifically, it learns a model under varying data distributions that generalizes to unseen domains. Although tremendous success has been achieved in domain generalization, there exist very few works on spatial domain generalization. The advancement of this area is challenged by (1) the difficulty of characterizing spatial heterogeneity, and (2) the difficulty of obtaining predictive models for unseen locations without training data. To address these challenges, this paper proposes a generic framework for spatial domain generalization. Specifically, we develop a spatial interpolation graph neural network that handles spatial data as a graph and learns the spatial embedding of each node and their relationships. The spatial interpolation graph neural network infers the spatial embedding of an unseen location during the test phase. Then the spatial embedding of the target location is used to decode the parameters of the downstream-task model directly at the target location. Finally, extensive experiments on thirteen real-world datasets demonstrate the proposed method's strength.
Generating Sequences by Learning to Self-Correct ; Sequence generation applications require satisfying semantic constraints, such as ensuring that programs are correct, using certain keywords, or avoiding undesirable content. Language models, whether fine-tuned or prompted with few-shot demonstrations, frequently violate these constraints and lack a mechanism to iteratively revise their outputs. Moreover, some powerful language models are of extreme scale or inaccessible, making it inefficient, if not infeasible, to update their parameters for task-specific adaptation. We present Self-Correction, an approach that decouples an imperfect base generator (an off-the-shelf language model or supervised sequence-to-sequence model) from a separate corrector that learns to iteratively correct imperfect generations. To train the corrector, we propose an online training procedure that can use either scalar or natural language feedback on intermediate imperfect generations. We show that Self-Correction improves upon the base generator in three diverse generation tasks (mathematical program synthesis, lexically-constrained generation, and toxicity control) even when the corrector is much smaller than the base generator.
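A schematic of the decoupled generate-then-correct loop at inference time; `generate`, `correct`, and `satisfies` below are hypothetical stand-in interfaces, not the paper's API, and the toy instantiation exists only to make the control flow concrete.

```python
def self_correct(prompt, generate, correct, satisfies, max_rounds=3):
    """Decoupled generation and correction: the base generator is called once,
    then a (possibly much smaller) corrector revises the draft until the
    constraint check passes or the revision budget runs out."""
    draft = generate(prompt)
    for _ in range(max_rounds):
        if satisfies(draft):
            break
        draft = correct(prompt, draft)  # corrector sees prompt + imperfect draft
    return draft

# toy instantiation: the constraint is that the output must contain "banana"
out = self_correct(
    "list a fruit",
    generate=lambda p: "apple",
    correct=lambda p, d: d + " banana",
    satisfies=lambda d: "banana" in d,
)
```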
DuNST: Dual Noisy Self-Training for Semi-Supervised Controllable Text Generation ; Self-training (ST) has prospered again in language understanding by augmenting the fine-tuning of pretrained language models when labeled data is insufficient. However, it remains challenging to incorporate ST into attribute-controllable language generation. Augmented only by self-generated pseudo text, generation models over-emphasize exploitation of the previously learned space, suffering from a constrained generalization boundary. We revisit ST and propose a novel method, DuNST, to alleviate this problem. DuNST jointly models text generation and classification with a shared variational autoencoder and corrupts the generated pseudo text with two kinds of flexible noise to disturb the space. In this way, our model could construct and utilize both pseudo text from given labels and pseudo labels from available unlabeled text, which are gradually refined during the ST process. We theoretically demonstrate that DuNST can be regarded as enhancing exploration towards the potential real text space, providing a guarantee of improved performance. Experiments on three controllable generation tasks show that DuNST could significantly boost control accuracy while maintaining comparable generation fluency and diversity against several strong baselines.
From dimer models to generalized lattice paths ; A recurrence relation of the generating function of the dimer model of Fibonacci type gives a functional relation for formal power series associated to lattice paths such as Dyck, Motzkin and Schröder paths. In this paper, we generalize the correspondence to the case of generalized lattice paths, namely k-Dyck, k-Motzkin and k-Schröder paths, by modifying the recurrence relation of the dimer model. We introduce five types of generalizations of the dimer model by keeping its combinatorial structures. This allows us to express the generating functions in terms of generalized lattice paths. The weight given to a generalized lattice path involves several statistics such as size, area, peaks and valleys, and heights of horizontal steps. We enumerate the generalized lattice paths by use of the recurrence relations and the Lagrange inversion theorem.
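For reference, the classical functional relations alluded to above, in their standard algebraic form (the k-Dyck case is the Fuss-Catalan relation):

```latex
% Dyck paths (Catalan numbers):   C(x) = 1 + x C(x)^2
% Motzkin paths:                  M(x) = 1 + x M(x) + x^2 M(x)^2
% k-Dyck paths (Fuss--Catalan):   C_k(x) = 1 + x C_k(x)^{k+1}
C(x)   = 1 + x\,C(x)^{2}, \qquad
M(x)   = 1 + x\,M(x) + x^{2}M(x)^{2}, \qquad
C_k(x) = 1 + x\,C_k(x)^{k+1}.
```

Relations of this polynomial form are exactly the ones amenable to coefficient extraction by the Lagrange inversion theorem mentioned in the abstract.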
Improving Graph Generation by Restricting Graph Bandwidth ; Deep graph generative modeling has proven capable of learning the distribution of complex, multi-scale structures characterizing real-world graphs. However, one of the main limitations of existing methods is their large output space, which limits generation scalability and hinders accurate modeling of the underlying distribution. To overcome these limitations, we propose a novel approach that significantly reduces the output space of existing graph generative models. Specifically, starting from the observation that many real-world graphs have low graph bandwidth, we restrict graph bandwidth during training and generation. Our strategy improves both generation scalability and quality without increasing architectural complexity or reducing expressiveness. Our approach is compatible with existing graph generative methods, and we describe its application to both autoregressive and one-shot models. We extensively validate our strategy on synthetic and real datasets, including molecular graphs. Our experiments show that, in addition to improving generation efficiency, our approach consistently improves generation quality and reconstruction accuracy. The implementation is made available.
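Graph bandwidth (the maximum |i - j| over edges (i, j) under a node ordering) and a bandwidth-reducing ordering can be computed directly; the sketch below uses scipy's reverse Cuthill-McKee routine on a small stand-in graph. It only illustrates the quantity itself; the paper's contribution is restricting bandwidth during training and generation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(adj):
    rows, cols = adj.nonzero()
    return int(np.abs(rows - cols).max())

# small ring-of-triangles graph as a stand-in for a molecular graph
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3), (5, 0)]
n = 6
A = np.zeros((n, n), dtype=np.int8)
for i, j in edges:
    A[i, j] = A[j, i] = 1
A = csr_matrix(A)

perm = reverse_cuthill_mckee(A, symmetric_mode=True)  # bandwidth-reducing order
A_perm = A[perm][:, perm]
print(bandwidth(A), "->", bandwidth(A_perm))
```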
DiffuseRoll: Multi-Track Multi-Category Music Generation Based on Diffusion Model ; Recent advancements in generative models have shown remarkable progress in music generation. However, most existing methods focus on generating monophonic or homophonic music, while the generation of polyphonic and multi-track music with rich attributes is still a challenging task. In this paper, we propose a novel approach for multi-track, multi-attribute symphonic music generation using a diffusion model. Specifically, we generate piano-roll representations with a diffusion model and map them to MIDI format for output. To capture rich attribute information, we introduce a color-coding scheme to encode note sequences into color and position information representing pitch, velocity, and instrument. This scheme enables a seamless mapping between discrete music sequences and continuous images. We also propose a post-processing method to optimize the generated scores for better performance. Experimental results show that our method outperforms state-of-the-art methods in terms of polyphonic music generation with rich attribute information.
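A hypothetical sketch of such a color-coding step, mapping note attributes to RGB pixels of a piano-roll image; the paper's exact channel assignment and ranges may differ, so the encoding below is an assumption for illustration only.

```python
import numpy as np

def notes_to_image(notes, n_pitches=128, n_steps=256):
    """Hypothetical color coding: each note becomes a colored pixel whose row
    is pitch and column is onset step, with RGB channels carrying velocity,
    instrument, and duration (channel assignment assumed, not from the paper)."""
    img = np.zeros((n_pitches, n_steps, 3), dtype=np.uint8)
    for pitch, step, velocity, instrument, duration in notes:
        img[pitch, step] = (
            velocity,                  # R: MIDI velocity, 0-127
            int(instrument * 2),       # G: instrument id scaled into 0-255
            min(duration * 16, 255),   # B: duration in time steps, clipped
        )
    return img

# two hypothetical notes: (pitch, onset step, velocity, instrument, duration)
img = notes_to_image([(60, 0, 100, 0, 4), (64, 4, 90, 25, 8)])
```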
X-ReCoSa: Multi-Scale Context Aggregation for Multi-Turn Dialogue Generation ; In multi-turn dialogue generation, responses are related not only to the topic and background of the context but also to words and phrases in the sentences of the context. However, currently widely used hierarchical dialogue models rely solely on context representations from the utterance-level encoder, ignoring the sentence representations output by the word-level encoder. This inevitably results in a loss of information during decoding and generation. In this paper, we propose a new dialogue model, X-ReCoSa, to tackle this problem, which aggregates multi-scale context information for hierarchical dialogue models. Specifically, we divide the generation decoder into an upper and a lower part, namely the intention part and the generation part. First, the intention part takes context representations as input to generate the intention of the response. Then the generation part generates words depending on sentence representations. In this way, the hierarchical information is fused into response generation. We conduct experiments on the English dataset DailyDialog. Experimental results show that our method outperforms baseline models in both automatic metric-based and human-based evaluations.
Nested Diffusion Processes for Anytime Image Generation ; Diffusion models are the current state-of-the-art in image generation, synthesizing high-quality images by breaking down the generation process into many fine-grained denoising steps. Despite their good performance, diffusion models are computationally expensive, requiring many neural function evaluations (NFEs). In this work, we propose an anytime diffusion-based method that can generate viable images when stopped at arbitrary times before completion. Using existing pretrained diffusion models, we show that the generation scheme can be recomposed as two nested diffusion processes, enabling fast iterative refinement of a generated image. In experiments on ImageNet and Stable Diffusion-based text-to-image generation, we show, both qualitatively and quantitatively, that our method's intermediate generation quality greatly exceeds that of the original diffusion model, while the final generation result remains comparable. We illustrate the applicability of Nested Diffusion in several settings, including for solving inverse problems and for rapid text-based content creation by allowing user intervention throughout the sampling process.
Conditioning Diffusion Models via Attributes and Semantic Masks for Face Generation ; Deep generative models have shown impressive results in generating realistic images of faces. GANs managed to generate high-quality, high-fidelity images when conditioned on semantic masks, but they still lack the ability to diversify their output. Diffusion models partially solve this problem and are able to generate diverse samples given the same condition. In this paper, we propose a multi-conditioning approach for diffusion models via cross-attention, exploiting both attributes and semantic masks to generate high-quality and controllable face images. We also study the impact of applying perceptual-focused loss weighting in the latent space instead of the pixel space. Our method extends previous approaches by introducing conditioning on more than one set of features, guaranteeing more fine-grained control over the generated face images. We evaluate our approach on the CelebA-HQ dataset, and we show that it can generate realistic and diverse samples while allowing for fine-grained control over multiple attributes and semantic regions. Additionally, we perform an ablation study to evaluate the impact of different conditioning strategies on the quality and diversity of the generated images.
Randomized 3D Scene Generation for Generalizable Self-Supervised Pre-Training ; Capturing and labeling real-world 3D data is laborious and time-consuming, which makes it costly to train strong 3D models. To address this issue, recent works present a simple method of generating randomized 3D scenes without simulation and rendering. Although models pretrained on the generated synthetic data gain impressive performance boosts, previous works have two major shortcomings. First, they focus on only one downstream task (i.e., object detection), and the generalization to other tasks is unexplored. Second, the contributions of generated data are not systematically studied. To obtain a deeper understanding of the randomized 3D scene generation technique, we revisit previous works and compare different data generation methods using a unified setup. Moreover, to clarify the generalization of the pretrained models, we evaluate their performance on multiple tasks (i.e., object detection and semantic segmentation) and with different pretraining methods (i.e., masked autoencoder and contrastive learning). Moreover, we propose a new method to generate 3D scenes with spherical harmonics. It surpasses the previous formula-driven method by a clear margin and achieves on-par results with methods using real-world scans and CAD models.
A MetaGeneration framework for Industrial System Generation ; Generative design is an increasingly important tool in the industrial world. It allows designers and engineers to easily explore vast ranges of design options, providing a cheaper and faster alternative to trial and error approaches. Thanks to the flexibility they offer, Deep Generative Models are gaining popularity amongst Generative Design technologies. However, developing and evaluating these models can be challenging. The field lacks accessible benchmarks for objectively evaluating and comparing different Deep Generative Model architectures. Moreover, vanilla Deep Generative Models appear to be unable to accurately generate multicomponent industrial systems that are controlled by latent design constraints. To address these challenges, we propose an industryinspired use case that incorporates actual industrial system characteristics. This use case can be quickly generated and used as a benchmark. We propose a MetaVAE capable of producing multicomponent industrial systems and showcase its application on the proposed use case.
Synthetic Demographic Data Generation for Card Fraud Detection Using GANs ; Using machine learning models to generate synthetic data has become common in many fields. Technology to generate synthetic transactions that can be used to detect fraud is also growing fast. Generally, this synthetic data contains only information about the transaction, such as the time, place, and amount of money. It does not usually contain the individual user's characteristics age and gender are occasionally included. Using relatively complex synthetic demographic data may improve the complexity of transaction data features, thus improving fraud detection performance. Benefiting from developments in machine learning, some deep learning models have the potential to perform better than other wellestablished synthetic data generation methods, such as microsimulation. In this study, we built a deeplearning Generative Adversarial Network GAN, called DGGAN, for demographic data generation. Our model generates samples during model training, which we found important for overcoming class imbalance issues. This study can help improve the understanding of synthetic data and further explore the application of synthetic data generation in card fraud detection.
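As a rough illustration of the abstract's point about drawing samples during training, the following sketch trains a small tabular GAN and taps the generator mid-training to augment a minority class. All names, sizes, and data are hypothetical placeholders; this is not the authors' DGGAN architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch: an MLP GAN over tabular features, where samples generated
# *during* training are mixed back in to ease class imbalance.
dim, z_dim = 8, 16
G = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, dim))
D = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_minority = torch.randn(256, dim)          # placeholder for real samples

for step in range(200):
    real = real_minority[torch.randint(0, 256, (64,))]
    fake = G(torch.randn(64, z_dim))
    # Discriminator update: real vs. generated.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator update: fool the discriminator.
    loss_g = bce(D(G(torch.randn(64, z_dim))), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    if step % 50 == 0:
        # Mid-training samples can already augment the minority class.
        augmented = torch.cat([real_minority, G(torch.randn(128, z_dim)).detach()])
```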
Diffusion to Confusion Naturalistic Adversarial Patch Generation Based on Diffusion Model for Object Detector ; Many physical adversarial patch generation methods have been proposed to protect personal privacy from malicious monitoring using object detectors. However, they usually fail to generate satisfactory patch images in terms of both stealthiness and attack performance without huge efforts on careful hyperparameter tuning. To address this issue, we propose a novel naturalistic adversarial patch generation method based on diffusion models DM. By sampling the optimal image from a DM pretrained on natural images, we can stably craft highquality adversarial patches that look natural to humans, without suffering from the serious mode collapse problems of other deep generative models. To the best of our knowledge, we are the first to propose DMbased naturalistic adversarial patch generation for object detectors. With extensive quantitative, qualitative, and subjective experiments, the results demonstrate that the proposed approach generates betterquality and more naturalistic adversarial patches than other stateoftheart patch generation methods, while achieving acceptable attack performance. We also show various generation tradeoffs under different conditions.
ConceptLab Creative Generation using Diffusion Prior Constraints ; Recent texttoimage generative models have enabled us to transform our words into vibrant, captivating imagery. The surge of personalization techniques that has followed has also allowed us to imagine unique concepts in new scenes. However, an intriguing question remains: How can we generate a new, imaginary concept that has never been seen before? In this paper, we present the task of creative texttoimage generation, where we seek to generate new members of a broad category e.g., generating a pet that differs from all existing pets. We leverage the understudied Diffusion Prior models and show that the creative generation problem can be formulated as an optimization process over the output space of the diffusion prior, resulting in a set of prior constraints. To keep our generated concept from converging into existing members, we incorporate a questionanswering model that adaptively adds new constraints to the optimization problem, encouraging the model to discover increasingly more unique creations. Finally, we show that our prior constraints can also serve as a strong mixing mechanism allowing us to create hybrids between generated concepts, introducing even more flexibility into the creative process.
Structure formation in modified gravity models alternative to dark energy ; We study structure formation in phenomenological models in which the Friedmann equation receives a correction of the form $H^\alpha / r_c^{2-\alpha}$, which realize an accelerated expansion without dark energy. In order to address structure formation in these models, we construct simple covariant gravitational equations which give the modified Friedmann equation with $\alpha = 2/n$, where $n$ is an integer. For $n=2$, the underlying theory is known as a 5D braneworld model, the DGP model. Thus the models interpolate between the DGP model ($n=2$, $\alpha=1$) and the LCDM model in general relativity ($n \to \infty$, $\alpha \to 0$). Using the covariant equations, cosmological perturbations are analyzed. It is shown that in order to satisfy the Bianchi identity at a perturbative level, we need to introduce a correction term $E_{\mu\nu}$ in the effective equations. In the DGP model, $E_{\mu\nu}$ comes from 5D gravitational fields and correct conditions on $E_{\mu\nu}$ can be derived by solving the 5D perturbations. In the general case $n \neq 2$, we have to assume the structure of a modified theory of gravity to determine $E_{\mu\nu}$. We show that structure formation is different from a dark energy model in general relativity with identical expansion history and that quantitative features of the difference crucially depend on the conditions on $E_{\mu\nu}$, that is, the structure of the underlying theory of modified gravity. This implies that it is essential to identify underlying theories in order to test these phenomenological models against observational data and, once we identify a consistent theory, structure formation tests become essential to distinguish modified gravity models from dark energy models in general relativity.
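For reference, the modified Friedmann equation described above can be written out explicitly; the parameterization below is the standard Dvali-Turner-type form consistent with the limits quoted in the abstract, reconstructed here as a reading aid rather than copied from the paper.

```latex
% Modified Friedmann equation with the H^alpha / r_c^{2-alpha} correction:
H^2 - \frac{H^\alpha}{r_c^{\,2-\alpha}} = \frac{8\pi G}{3}\,\rho ,
\qquad \alpha = \frac{2}{n} .
% n = 2 (alpha = 1): DGP braneworld, H^2 - H/r_c = (8 pi G / 3) rho
% n -> infinity (alpha -> 0): constant 1/r_c^2 shift, i.e. an LCDM-like term
```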
Adversarial Message Passing For Graphical Models ; Bayesian inference on structured models typically relies on the ability to infer posterior distributions of underlying hidden variables. However, inference in implicit models or complex posterior distributions is hard. A popular tool for learning implicit models is generative adversarial networks GANs, which learn the parameters of generators by fooling discriminators. Typically, GANs are considered to be models themselves and are not understood in the context of inference. Current techniques rely on inefficient global discrimination of joint distributions to perform learning, or only consider discriminating a single output variable. We overcome these limitations by treating GANs as a basis for likelihoodfree inference in generative models and generalize them to Bayesian posterior inference over factor graphs. We propose local learning rules based on message passing that minimize a global divergence criterion involving cooperating local adversaries used to sidestep explicit likelihood evaluations. This allows us to compose models and yields a unified inference and learning framework for adversarial learning. Our framework treats model specification and inference separately and facilitates richly structured models within the family of Directed Acyclic Graphs, including components such as intractable likelihoods, nondifferentiable models, simulators and generally cumbersome models. A key result of our treatment is the insight that Bayesian inference on structured models can be performed only with sampling and discrimination when using nonparametric variational families, without access to explicit distributions. As a side result, we discuss the link to likelihood maximization. These approaches hold promise to be useful in the toolbox of probabilistic modelers and enrich the gamut of current probabilistic programming applications.
Adversarial Imitation Attack ; Deep learning models are known to be vulnerable to adversarial examples. A practical adversarial attack should require as little knowledge of the attacked model as possible. Current substitute attacks need pretrained models to generate adversarial examples, and their attack success rates heavily rely on the transferability of adversarial examples. Current scorebased and decisionbased attacks require many queries to the attacked models. In this study, we propose a novel adversarial imitation attack. First, it produces a replica of the attacked model via a twoplayer game similar to generative adversarial networks GANs. The objective of the generative model is to generate examples that lead the imitation model to return outputs different from those of the attacked model. The objective of the imitation model is to output the same labels as the attacked model for the same inputs. Then, the adversarial examples generated by the imitation model are used to fool the attacked model. Compared with current substitute attacks, imitation attacks can use less training data to produce a replica of the attacked model and improve the transferability of adversarial examples. Experiments demonstrate that our imitation attack requires less training data than blackbox substitute attacks, yet achieves an attack success rate close to that of a whitebox attack on unseen data with no queries.
Normalizing Flow based Hidden Markov Models for Classification of Speech Phones with Explainability ; In pursuit of explainability, we develop generative models for sequential data. The proposed models provide stateoftheart classification results and robust performance for speech phone classification. We combine modern neural networks normalizing flows and traditional generative models hidden Markov models HMMs. Normalizing flowbased mixture models NMMs are used to model the conditional probability distribution given the hidden state in the HMMs. Model parameters are learned through judicious combinations of timetested Bayesian learning methods and contemporary neural network learning methods. We mainly combine expectationmaximization EM and minibatch gradient descent. The proposed generative models can compute the likelihood of a data sample and hence are directly suitable for a maximumlikelihood ML classification approach. Due to the structural flexibility of HMMs, we can use different normalizing flow models. This leads to different types of HMMs providing diversity in data modeling capacity. The diversity provides an opportunity for easy decision fusion from different models. For a standard speech phone classification setup involving 39 phone classes and the TIMIT dataset, we show that the use of standard features called melfrequency cepstral coefficients MFCCs, the proposed generative models, and the decision fusion together can achieve 86.6% accuracy by generative training only. This result is close to stateoftheart results, for example, the 86.2% accuracy of the PyTorchKaldi toolkit [1] and the 85.1% accuracy using light gated recurrent units [2]. We do not use any discriminative learning approach and related sophisticated features in this article.
Evaluation and Comparison of Diffusion Models with Motif Features ; Diffusion models simulate the propagation of influence in networks. The design and evaluation of diffusion models have been subjective and empirical. When applied to a network represented by a graph, a diffusion model generates a sequence of edges on which the influence flows, and such a sequence forms a temporal network. In most scenarios, the statistical properties or the characteristics of a network are inferred by analyzing the temporal networks generated by diffusion models. To analyze real temporal networks, the motif has been proposed as a reliable feature. However, it is unclear how the network topology and the diffusion model affect the motif features of a generated temporal network. In this paper, we adopt motif features to evaluate the temporal graph generated by a diffusion model, and thus the diffusion model itself. Two benchmarks for quantitatively evaluating diffusion models with motifs, stability and separability, are proposed and measured on numerous diffusion models. One motifbased metric is proposed to measure the similarity between diffusion models. The experiments suggest that the motif of a generated temporal network is dominated by the diffusion model, while the network topology is almost ignored. This result indicates that more practical and reliable diffusion models have to be designed with care in order to capture the propagation patterns of real temporal networks.
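As a concrete example of the kind of motif feature discussed above, the snippet below counts ordered two-edge temporal motifs (chains, fan-outs, fan-ins) within a time window; real motif analyses use richer motif sets, so treat this as a minimal sketch with hypothetical event data.

```python
from collections import Counter

def two_edge_motifs(events, delta):
    """Count ordered two-edge temporal motifs: pairs of edges that share a node
    and occur within a time window `delta`. Events are (time, src, dst) tuples,
    assumed sorted by time."""
    counts = Counter()
    for i, (t1, u1, v1) in enumerate(events):
        for t2, u2, v2 in events[i + 1:]:
            if t2 - t1 > delta:
                break  # events are time-sorted, so no later pair qualifies
            if {u1, v1} & {u2, v2}:
                # Classify by how the two edges touch.
                kind = ("chain" if v1 == u2 else
                        "fan-out" if u1 == u2 else
                        "fan-in" if v1 == v2 else "other")
                counts[kind] += 1
    return counts

events = [(0, "a", "b"), (1, "b", "c"), (2, "a", "c"), (9, "c", "a")]
print(two_edge_motifs(events, delta=5))
# Counter({'chain': 1, 'fan-out': 1, 'fan-in': 1})
```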
The Medium Amplitude Response of Nonlinear MaxwellOldroyd Type Models in Simple Shear ; A general framework for MaxwellOldroyd type differential constitutive models is examined, in which an unspecified nonlinear function of the stress and rateofdeformation tensors is incorporated into the wellknown corotational version of the Jeffreys model discussed by Oldroyd. For medium amplitude simple shear deformations, the recently developed mathematical framework of medium amplitude parallel superposition MAPS rheology reveals that this generalized nonlinear Maxwell model can produce only a limited number of distinct signatures, which combine linearly in a wellposed basis expansion for the third order complex viscosity. This basis expansion represents a library of MAPS signatures for distinct constitutive models that are contained within the generalized nonlinear Maxwell model. We describe a framework for quantitative model identification using this basis expansion, and discuss its limitations in distinguishing distinct nonlinear features of the underlying constitutive models from medium amplitude shear stress data. The leading order contributions to the normal stress differences are also considered, revealing that only the second normal stress difference provides distinct information about the weakly nonlinear response space of the model. After briefly considering the conditions for timestrain separability within the generalized nonlinear Maxwell model, we apply the basis expansion of the third order complex viscosity to derive the medium amplitude signatures of the model in specific shear deformation protocols. Finally, we use these signatures for estimation of model parameters from rheological data obtained by these different deformation protocols, revealing that threetone oscillatory shear deformations produce data that is readily able to distinguish all features of the medium amplitude, simple shear response space of this generalized class of constitutive models.
CMUAWatermark A CrossModel Universal Adversarial Watermark for Combating Deepfakes ; Malicious applications of deepfakes i.e., technologies generating target facial attributes or entire faces from facial images have posed a huge threat to individuals' reputation and security. To mitigate these threats, recent studies have proposed adversarial watermarks to combat deepfake models, leading them to generate distorted outputs. Despite achieving impressive results, these adversarial watermarks have low imagelevel and modellevel transferability, meaning that they can protect only one facial image from one specific deepfake model. To address these issues, we propose a novel solution that can generate a CrossModel Universal Adversarial Watermark CMUAWatermark, protecting a large number of facial images from multiple deepfake models. Specifically, we begin by proposing a crossmodel universal attack pipeline that attacks multiple deepfake models iteratively. Then, we design a twolevel perturbation fusion strategy to alleviate the conflict between the adversarial watermarks generated by different facial images and models. Moreover, we address the key problem in crossmodel optimization with a heuristic approach to automatically find the suitable attack step sizes for different models, further weakening the modellevel conflict. Finally, we introduce a more reasonable and comprehensive evaluation method to fully test the proposed method and compare it with existing ones. Extensive experimental results demonstrate that the proposed CMUAWatermark can effectively distort the fake facial images generated by multiple deepfake models while achieving a better performance than existing methods.
Boosting the Adversarial Transferability of Surrogate Models with Dark Knowledge ; Deep neural networks DNNs are vulnerable to adversarial examples. Moreover, adversarial examples have transferability, which means that an adversarial example for a DNN model can fool another model with a nontrivial probability. This gave birth to the transferbased attack, where the adversarial examples generated by a surrogate model are used to conduct blackbox attacks. There is some work on generating adversarial examples with better transferability from a given surrogate model. However, training a special surrogate model to generate adversarial examples with better transferability is relatively underexplored. This paper proposes a method for training a surrogate model with dark knowledge to boost the transferability of the adversarial examples generated by the surrogate model. This trained surrogate model is named dark surrogate model DSM. The proposed method for training a DSM consists of two key components a teacher model extracting dark knowledge, and a mixing augmentation skill enhancing the dark knowledge of the training data. We conducted extensive experiments to show that the proposed method can substantially improve the adversarial transferability of surrogate models across different architectures of surrogate models and optimizers for generating adversarial examples, and that it can be applied to other scenarios of transferbased attack that contain dark knowledge, like face verification. Our code is publicly available at https://github.com/ydc123/DarkSurrogateModel.
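A hedged sketch of the two training components the abstract names, soft teacher labels (dark knowledge) and mixing augmentation, is given below; the temperature and Beta-mixing choices are conventional KD/mixup defaults, not necessarily the paper's settings, and the networks are placeholders.

```python
import torch
import torch.nn.functional as F

def dsm_batch_loss(student, teacher, x, T=4.0, alpha=1.0):
    """One surrogate-training step sketching (1) soft labels from a teacher
    (dark knowledge) and (2) mixup-style mixing augmentation to enrich that
    knowledge. T and Beta(alpha, alpha) are conventional choices."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]            # mixing augmentation
    with torch.no_grad():
        soft = F.softmax(teacher(x_mix) / T, dim=1)  # teacher's dark knowledge
    log_p = F.log_softmax(student(x_mix) / T, dim=1)
    return F.kl_div(log_p, soft, reduction="batchmean") * T * T

teacher = torch.nn.Linear(32, 10)   # placeholders for real networks
student = torch.nn.Linear(32, 10)
loss = dsm_batch_loss(student, teacher, torch.randn(16, 32))
loss.backward()
```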
Interpretable ODEstyle Generative Diffusion Model via Force Field Construction ; For a considerable time, researchers have focused on developing methods that establish a deep connection between generative diffusion models and mathematical physics. Despite previous efforts, progress has been limited to the pursuit of single specialized methods. In order to advance the interpretability of diffusion models and explore new research directions, it is essential to establish a unified ODEstyle generative diffusion model. Such a model should draw inspiration from physical models and possess a clear geometric meaning. This paper aims to identify, from a mathematical perspective, various physical models that are suitable for accurately constructing ODEstyle generative diffusion models. We then summarize these models into a unified method. Additionally, we perform a case study in which we use the theoretical model identified by our method to develop a range of new diffusion model methods, and conduct experiments. Our experiments on CIFAR10 demonstrate the effectiveness of our approach. We have constructed a computational framework that attains strong results with regard to image generation speed, alongside an additional model that demonstrates exceptional performance in both Inception score and FID score. These results underscore the significance of our method in advancing the field of diffusion models.
DeeDiff Dynamic UncertaintyAware Early Exiting for Accelerating Diffusion Model Generation ; Diffusion models achieve great success in generating diverse and highfidelity images. The performance improvements come with low generation speed per image, which hinders the application of diffusion models in realtime scenarios. While certain predictions benefit from the full computation of the model in each sampling iteration, not every iteration requires the same amount of computation, potentially leading to wasted computation. In this work, we propose DeeDiff, an early exiting framework that adaptively allocates computation resources in each sampling step to improve the generation efficiency of diffusion models. Specifically, we introduce a timestepaware uncertainty estimation module UEM for diffusion models, which is attached to each intermediate layer to estimate the prediction uncertainty of that layer. The uncertainty is regarded as the signal to decide whether inference terminates. Moreover, we propose an uncertaintyaware layerwise loss to fill the performance gap between full models and earlyexited models. With this loss strategy, our model is able to obtain results comparable to fulllayer models. Extensive experiments of classconditional, unconditional, and textguided generation on several datasets show that our method achieves a stateoftheart tradeoff between performance and efficiency compared with existing early exiting methods on diffusion models. More importantly, our method even brings extra benefits to baseline models and obtains better performance on the CIFAR10 and CelebA datasets. Full code and model are released for reproduction.
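The gating logic can be sketched as follows: each intermediate layer carries a small uncertainty head (standing in for the UEM), and inference exits at the first layer whose estimated uncertainty drops below a threshold. Architecture, sizes, and threshold are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class EarlyExitDenoiser(nn.Module):
    """Uncertainty-gated early exit for one denoising step (sketch)."""
    def __init__(self, dim=64, n_layers=6):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))
        self.heads = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))
        self.uems = nn.ModuleList(nn.Linear(dim, 1) for _ in range(n_layers))

    def forward(self, h, threshold=0.1):
        for block, head, uem in zip(self.blocks, self.heads, self.uems):
            h = torch.relu(block(h))
            uncertainty = torch.sigmoid(uem(h)).mean()
            if uncertainty < threshold:           # confident enough: exit early
                return head(h), uncertainty
        return self.heads[-1](h), uncertainty     # fall through: full depth

model = EarlyExitDenoiser()
eps_pred, u = model(torch.randn(8, 64))
```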
A MultiState Power Model for Adequacy Assessment of Distributed Generation via Universal Generating Function ; The current and future developments of electric power systems are pushing the boundaries of reliability assessment to consider distribution networks with renewable generators. Given the stochastic features of these elements, most modeling approaches rely on Monte Carlo simulation. The computational costs associated with the simulation approach force the treatment of mostly smallsized systems, i.e., systems with a limited number of lumped components of a given renewable technology e.g., wind or solar whose behavior is described by a binary state, working or failed. In this paper, we propose an analytical multistate modeling approach for the reliability assessment of distributed generation DG. The approach allows considering a number of diverse energy generation technologies distributed over the system. Multiple states are used to describe the randomness in the generation units, due to the stochastic nature of the generation sources and of the mechanical degradationfailure behavior of the generation systems. The universal generating function UGF technique is used for the individual component multistate modeling. A multiplicationtype composition operator is introduced to combine the UGFs for the mechanical degradation and renewable generation source states into the UGF of the renewable generator power output. The overall multistate DG system UGF is then constructed and classical reliability indices e.g., loss of load expectation LOLE, expected energy not supplied EENS are computed from the DG system generation and load UGFs. An application of the model is shown on a DG system adapted from the IEEE 34 nodes distribution test feeder.
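The UGF machinery lends itself to a compact illustration: represent each unit as a map from state value (e.g., power output) to probability, and compose units with an operator over all state combinations. The units, numbers, and combine rule below are hypothetical toy values, not the paper's case study.

```python
from collections import defaultdict
from itertools import product

def ugf_product(units, combine):
    """Compose universal generating functions. Each unit is a dict
    {state_value: probability}; `combine` maps a tuple of state values to the
    system-level value (e.g. the sum of power outputs)."""
    system = defaultdict(float)
    for states in product(*(u.items() for u in units)):
        values, probs = zip(*states)
        p = 1.0
        for q in probs:
            p *= q                     # independent units: multiply probabilities
        system[combine(values)] += p
    return dict(system)

# Hypothetical units: a wind generator with 3 output states (kW) and a solar
# generator with 2, each given as {output: probability}.
wind  = {0: 0.1, 5: 0.3, 10: 0.6}
solar = {0: 0.2, 4: 0.8}
dg = ugf_product([wind, solar], combine=sum)

load = 8.0
lole = sum(p for out, p in dg.items() if out < load)              # P(gen < load)
eens = sum(p * (load - out) for out, p in dg.items() if out < load)
print(dg, lole, eens)
```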
A General Framework for Portfolio Theory. Part I theory and various models ; Utility and risk are two often competing measurements of investment success. We show that the efficient tradeoff between these two measurements for investment portfolios happens, in general, on a convex curve in the two dimensional space of utility and risk. This is a rather general pattern. The modern portfolio theory of Markowitz H. Markowitz, Portfolio Selection, 1959 and its natural generalization, the capital market pricing model W. F. Sharpe, Mutual fund performance, 1966, are special cases of our general framework when the risk measure is taken to be the standard deviation and the utility function is the identity mapping. Using our general framework, we also recover the results in R. T. Rockafellar, S. Uryasev and M. Zabarankin, Master funds in portfolio analysis with general deviation measures, 2006 that extend the capital market pricing model to allow for the use of more general deviation measures. This generalized capital asset pricing model also applies, e.g., when an approximation of the maximum drawdown is considered as a risk measure. Furthermore, the consideration of a general utility function allows going beyond the additive performance measure to a multiplicative one of cumulative returns by using the log utility. As a result, the growth optimal portfolio theory J. Lintner, The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets, 1965 and the leverage space portfolio theory R. Vince, The Leverage Space Trading Model, 2009 can also be understood under our general framework. Thus, this general framework allows a unification of several important existing portfolio theories and goes much beyond them.
Generating similes effortlessly like a Pro A Style Transfer Approach for Simile Generation ; Literary tropes, from poetry to stories, are at the crux of human imagination and communication. Figurative language such as a simile goes beyond plain expressions to give readers new insights and inspirations. In this paper, we tackle the problem of simile generation. Generating a simile requires proper understanding for effective mapping of properties between two concepts. To this end, we first propose a method to automatically construct a parallel corpus by transforming a large number of similes collected from Reddit to their literal counterpart using structured common sense knowledge. We then propose to finetune a pretrained sequence to sequence model, BART Lewis et al., 2019, on the literalsimile pairs to gain generalizability, so that we can generate novel similes given a literal sentence. Experiments show that our approach generates 88% novel similes that do not share properties with the training data. Human evaluation on an independent set of literal statements shows that our model generates similes better than two literary experts 37% of the time averaging 32.6% and 41.3% for the two experts, and better than three baseline systems including a recent metaphor generation model 71% of the time averaging 82%, 63% and 68% for the three baselines, when compared pairwise. The simile in the title is generated by our best model input Generating similes effortlessly, output Generating similes like a Pro. We also show how replacing literal sentences with similes from our best model in machine generated stories improves evocativeness and leads to better acceptance by human judges.
Training Generative Reversible Networks ; Generative models with an encoding component such as autoencoders currently receive great interest. However, training of autoencoders is typically complicated by the need to train a separate encoder and decoder model that have to be enforced to be reciprocal to each other. To overcome this problem, bydesign reversible neural networks RevNets have previously been used as generative models, either directly optimizing the likelihood of the data under the model or using an adversarial approach on the generated data. Here, we instead investigate their performance using an adversary on the latent space in the adversarial autoencoder framework. We investigate the generative performance of RevNets on the CelebA dataset, showing that generative RevNets can generate coherent faces with similar quality as Variational Autoencoders. This first attempt to use RevNets inside the adversarial autoencoder framework slightly underperformed relative to recent advanced generative models using an autoencoder component on CelebA, but this gap may diminish with further optimization of the training setup of generative RevNets. In addition to the experiments on CelebA, we show a proofofprinciple experiment on the MNIST dataset suggesting that adversaryfree trained RevNets can discover meaningful latent dimensions without prespecifying the number of dimensions of the latent sampling distribution. In summary, this study shows that RevNets can be employed in different generative training settings. Source code for this study is at https://github.com/robintibor/generative-reversible
Controllable and Compositional Generation with LatentSpace EnergyBased Models ; Controllable generation is one of the key requirements for successful adoption of deep generative models in realworld applications, but it still remains a great challenge. In particular, the compositional ability to generate novel concept combinations is out of reach for most current models. In this work, we use energybased models EBMs to handle compositional generation over a set of attributes. To make them scalable to highresolution image generation, we introduce an EBM in the latent space of a pretrained generative model such as StyleGAN. We propose a novel EBM formulation representing the joint distribution of data and attributes together, and we show how sampling from it is formulated as solving an ordinary differential equation ODE. Given a pretrained generator, all we need for controllable generation is to train an attribute classifier. Sampling with ODEs is done efficiently in the latent space and is robust to hyperparameters. Thus, our method is simple, fast to train, and efficient to sample. Experimental results show that our method outperforms the stateoftheart in both conditional sampling and sequential editing. In compositional generation, our method excels at zeroshot generation of unseen attribute combinations. Also, by composing energy functions with logical operators, this work is the first to achieve such compositionality in generating photorealistic images of resolution 1024x1024. Code is available at https://github.com/NVlabs/LACE.
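A hedged sketch of the core recipe, train only an attribute classifier and sample in the latent space of a fixed generator, is given below. The paper formulates sampling as an ODE; plain Langevin dynamics on the same latent energy is used here as a simpler stand-in, and the classifier is a placeholder.

```python
import torch

def sample_latent(classifier, z_dim=512, target=1, steps=100, step_size=0.01):
    """Controllable sampling sketch: energy = -log p(y|z) under a trained
    attribute classifier plus a standard Gaussian prior on z. Langevin
    dynamics stands in for the paper's ODE solver."""
    z = torch.randn(1, z_dim, requires_grad=True)
    for _ in range(steps):
        logits = classifier(z)
        energy = -torch.log_softmax(logits, dim=1)[0, target] + 0.5 * (z ** 2).sum()
        grad, = torch.autograd.grad(energy, z)
        with torch.no_grad():
            z = z - 0.5 * step_size * grad + (step_size ** 0.5) * torch.randn_like(z)
        z.requires_grad_(True)
    return z.detach()   # feed into the pretrained generator, e.g. G(z)

clf = torch.nn.Linear(512, 2)        # placeholder attribute classifier
z_star = sample_latent(clf)
```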
ModelWizard Toward Interactive Model Construction ; Data scientists engage in model construction to discover machine learning models that well explain a dataset, in terms of predictiveness, understandability and generalization across domains. Questions such as what if we model common cause Z and what if Y's dependence on X reverses inspire many candidate models to consider and compare, yet current tools emphasize constructing a final model all at once. To more naturally reflect exploration when debating numerous models, we propose an interactive model construction framework grounded in composable operations. Primitive operations capture core steps refining data and model that, when verified, form an inductive basis to prove model validity. Derived, composite operations enable advanced model families, both generic and specialized, abstracted away from lowlevel details. We prototype our envisioned framework in ModelWizard, a domainspecific language embedded in F# to construct Tabular models. We describe the language design and demonstrate its use through several applications, emphasizing how the language may facilitate creation of complex models. To future engineers designing data science languages and tools, we offer ModelWizard's design as a new model construction paradigm, speeding discovery of our universe's structure.
Bayesian inference for generalized extreme value distribution with Gaussian copula dependence ; Dependent generalized extreme value dGEV models have attracted much attention due to the dependency structure that often appears in real datasets. To construct a dGEV model, a natural approach is to assume that some parameters in the model are timevarying. A previous study has shown that a dependent Gumbel process can be naturally incorporated into a GEV model. The model is a nonlinear state space model with a hidden state that follows a Markov process, with its innovation following a Gumbel distribution. Inference may be made for the model using Bayesian methods, sampling the hidden process from a mixture of normal distributions used to approximate the Gumbel distribution. Thus the response follows an approximate GEV model. We propose a new model in which each marginal distribution is an exact GEV distribution. We use a variable transformation to combine the marginal CDF of a Gumbel distribution with the standard normal copula. Then our model is a nonlinear state space model in which the hidden state equation is Gaussian. We analyze this model using Bayesian methods, and sample the elements of the state vector using particle Gibbs with ancestor sampling PGAS. The PGAS algorithm turns out to be very efficient in solving nonlinear state space models. We also show our model is flexible enough to incorporate seasonality.
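The variable transformation at the heart of this construction is easy to state in code: push a Gumbel variate through its CDF and then through the inverse normal CDF, so the hidden state is Gaussian while the marginal stays exactly Gumbel. A minimal sketch using SciPy, with illustrative parameters:

```python
import numpy as np
from scipy.stats import gumbel_r, norm

def gumbel_to_gaussian(x, loc=0.0, scale=1.0):
    """z = Phi^{-1}(F_Gumbel(x)): map a Gumbel variate to a standard normal."""
    u = gumbel_r.cdf(x, loc=loc, scale=scale)   # uniform on (0, 1)
    return norm.ppf(u)

def gaussian_to_gumbel(z, loc=0.0, scale=1.0):
    """Inverse map: re-create exact Gumbel marginals from a Gaussian state."""
    return gumbel_r.ppf(norm.cdf(z), loc=loc, scale=scale)

x = gumbel_r.rvs(size=5, random_state=0)
z = gumbel_to_gaussian(x)
assert np.allclose(gaussian_to_gumbel(z), x)    # round trip recovers x
```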
Recent advances in the 3D kinematic BabcockLeighton solar dynamo modeling ; In this review, we explain recent progress made in BabcockLeighton dynamo models for the Sun, which have been the most successful in explaining various properties of the solar cycle. In general, these models are 2D axisymmetric and the meanfield dynamo equations are solved in the meridional plane of the Sun. Various physical processes e.g., magnetic buoyancy and the BabcockLeighton mechanism involved in these models are inherently 3D processes and cannot be modeled properly in a 2D framework. After pointing out limitations of 2D models e.g., meanfield BabcockLeighton dynamo models and Surface Flux Transport models, we describe recently developed nextgeneration 3D dynamo models that implement a more sophisticated flux emergence algorithm for buoyant flux tube rise through the convection zone and capture the BabcockLeighton process more realistically than previous 2D models. Detailed results from these 3D dynamo models, including their surface flux transport counterparts, are presented. We explain the cycle irregularities that are reproduced in 3D dynamo models by introducing scatter in the tilt angle only. Some results obtained by assimilating observed photospheric convective velocity fields into the 3D models are also discussed, pointing out the wide range of opportunities that these 3D models offer.
Model Compression with Twostage Multiteacher Knowledge Distillation for Web Question Answering System ; Deep pretraining and finetuning models such as BERT and OpenAI GPT have demonstrated excellent results in question answering areas. However, due to the sheer number of model parameters, the inference speed of these models is very slow. How to apply these complex models to real business scenarios becomes a challenging but practical problem. Previous model compression methods usually suffer from information loss during the model compression procedure, leading to inferior models compared with the original one. To tackle this challenge, we propose a Twostage Multiteacher Knowledge Distillation TMKD method for web Question Answering systems. We first develop a general QA distillation task for student model pretraining, and further finetune this pretrained student model with multiteacher knowledge distillation on downstream tasks like the Web QA task and the MNLI, SNLI, and RTE tasks from GLUE, which effectively reduces the overfitting bias in individual teacher models and transfers more general knowledge to the student model. The experiment results show that our method can significantly outperform the baseline methods and even achieve comparable results with the original teacher models, along with a substantial speedup of model inference.
Reconstruction of Pairwise Interactions using EnergyBased Models ; Pairwise models like the Ising model or the generalized Potts model have found many successful applications in fields like physics, biology, and economics. Closely connected is the problem of inverse statistical mechanics, where the goal is to infer the parameters of such models given observed data. An open problem in this field is the question of how to train these models in the case where the data contain additional higherorder interactions that are not present in the pairwise model. In this work, we propose an approach based on EnergyBased Models and pseudolikelihood maximization to address these complications: we show that hybrid models, which combine a pairwise model and a neural network, can lead to significant improvements in the reconstruction of pairwise interactions. We show these improvements to hold consistently when compared to a standard approach using only the pairwise model and to an approach using only a neural network. This is in line with the general idea that simple interpretable models and complex blackbox models are not necessarily a dichotomy interpolating between these two classes of models can allow keeping some advantages of both.
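For readers unfamiliar with pseudolikelihood maximization, the snippet below fits the pairwise (Ising) part on toy +/-1 data; a hybrid model in the spirit of the abstract would add a neural-network term to the conditional field. Shapes, data, and hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def ising_pseudolikelihood(J, h, s):
    """Negative log-pseudolikelihood of +/-1 spin data s (batch, n) under a
    pairwise Ising model: P(s_i | s_-i) = sigmoid(2 s_i (J s + h)_i).
    A hybrid model would add a neural-network term to this conditional field."""
    field = s @ J.T + h
    return -F.logsigmoid(2 * s * field).sum(dim=1).mean()

n = 10
J_raw = torch.zeros(n, n, requires_grad=True)
h = torch.zeros(n, requires_grad=True)
data = torch.where(torch.rand(64, n) > 0.5, 1.0, -1.0)   # toy spin samples
opt = torch.optim.Adam([J_raw, h], lr=0.05)
mask = 1 - torch.eye(n)
for _ in range(100):
    J = 0.5 * (J_raw + J_raw.T) * mask   # symmetric couplings, zero diagonal
    loss = ising_pseudolikelihood(J, h, data)
    opt.zero_grad(); loss.backward(); opt.step()
```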
Generalized Spatial and Spatiotemporal ARCH Models ; In timeseries analyses, particularly for finance, generalized autoregressive conditional heteroscedasticity GARCH models are widely applied statistical tools for modelling volatility clusters i.e., periods of increased or decreased risk. In contrast, modelling spatial dependence in the conditional second moments has not until now been considered of critical importance. Only a few models have been proposed for modelling local clusters of increased risk. In this paper, we introduce a novel spatial GARCH process in a unified spatial and spatiotemporal GARCH framework, which also covers all previously proposed spatial ARCH models, exponential spatial GARCH, and timeseries GARCH models. In contrast to previous spatiotemporal and time series models, this spatial GARCH allows for instantaneous spillovers across all spatial units. For this common modelling framework, estimators are derived based on a nonlinear leastsquares approach. Finally, the use of the model is demonstrated through a Monte Carlo simulation study and an empirical example that focuses on real estate prices from 1995 to 2014 across the ZIP code areas of Berlin. A spatial autoregressive model is applied to the data to illustrate how locally varying model uncertainties e.g., due to latent regressors can be captured by the spatial GARCHtype models.
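To give a feel for GARCH-type spatial dependence, the following toy simulation lets the conditional variance at each location respond to its own and its neighbours' past squared observations. Note that this explicit-in-time variant sidesteps the instantaneous spillovers of the actual model, which require solving an implicit system; it is a sketch with hypothetical weights, not the paper's process.

```python
import numpy as np

def simulate_st_arch(W, T=200, omega=0.1, alpha=0.2, rho=0.3, seed=0):
    """Toy spatiotemporal ARCH:
    h[i,t] = omega + alpha*x[i,t-1]^2 + rho*sum_j W[i,j]*x[j,t-1]^2,
    x[i,t] = sqrt(h[i,t]) * eps[i,t] with standard normal innovations."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = np.zeros((n, T))
    for t in range(1, T):
        h = omega + alpha * x[:, t - 1] ** 2 + rho * (W @ x[:, t - 1] ** 2)
        x[:, t] = np.sqrt(h) * rng.standard_normal(n)
    return x

# Row-standardized adjacency of 4 locations on a ring (hypothetical weights).
W = np.array([[0, .5, 0, .5], [.5, 0, .5, 0], [0, .5, 0, .5], [.5, 0, .5, 0]])
x = simulate_st_arch(W)
print(x.var(axis=1))   # location-wise volatility
```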
Higgs Bundles and UV Completion in FTheory ; Ftheory admits 7branes with exceptional gauge symmetries, which can be compactified to give phenomenological fourdimensional GUT models. Here we study general supersymmetric compactifications of eightdimensional YangMills theory. They are mathematically described by meromorphic Higgs bundles, and therefore admit a spectral cover description. This allows us to give a rigorous and intrinsic construction of local models in Ftheory. We use our results to prove a nogo theorem showing that local SU5 models with three generations do not exist for generic moduli. However we show that threegeneration models do exist on the NoetherLefschetz locus. We explain how Ftheory models can be mapped to nonperturbative orientifold models using a scaling limit proposed by Sen. Further we address the construction of global models that do not have heterotic duals. We show how one may obtain a contractible worldvolume with a twocycle not inherited from the bulk, a necessary condition for implementing GUT breaking using fluxes. We also show that the complex structure moduli in global models can be arranged so that no dimension four or five proton decay can be generated.
Modelbased generation of natural language specifications ; Application of formal models provides many benefits for software and system development; however, the learning curve of formal languages could be a critical factor for an industrial project. Thus, a natural language specification that reflects all the aspects of the formal model might help to understand the model and be especially useful for the stakeholders who do not know the corresponding formal language. Moreover, automated generation of the documentation from the model would replace manual updates of the documentation for the cases where the model is modified. This paper presents ongoing work on generating natural language specifications from formal models. Our goal is to generate documentation in English from basic modelling artefacts, such as data types, state machines, and architectural components. To allow further formal analysis of the generated specification, we restrict English to its subset, Attempto Controlled English.
Hierarchical approaches for flexible and interpretable binary regression models ; Binary regression models are ubiquitous in virtually every scientific field. Frequently, traditional generalized linear models fail to capture the variability in the probability surface that gives rise to the binary observations, and novel methodology is required. This has generated a substantial literature comprising binary regression models motivated by various applications. We describe a novel organization of generalizations to traditional binary regression methods based on the familiar threepart structure of generalized linear models random component, systematic component, link function. This new perspective facilitates both the comparison of existing approaches and the development of novel, flexible models with interpretable parameters that capture applicationspecific data generating mechanisms. We use our proposed organizational structure to discuss some concerns with certain existing models for binary data based on quantile regression. We then use the framework to develop several new binary regression models tailored to occupancy data for European red squirrels Sciurus vulgaris.
Modelling Latent Skills for Multitask Language Generation ; We present a generative model for multitask conditional language generation. Our guiding hypothesis is that a shared set of latent skills underlies many disparate language generation tasks, and that explicitly modelling these skills in a task embedding space can help with both positive transfer across tasks and with efficient adaptation to new tasks. We instantiate this task embedding space as a latent variable in a latent variable sequencetosequence model. We evaluate this hypothesis by curating a series of monolingual texttotext language generation datasets covering a broad range of tasks and domains and comparing the performance of models both in the multitask and fewshot regimes. We show that our latent task variable model outperforms other sequencetosequence baselines on average across tasks in the multitask setting. In the fewshot learning setting on an unseen test dataset i.e., a new task, we demonstrate that model adaptation based on inference in the latent task space is more robust than standard finetuning based parameter adaptation and performs comparably in terms of overall performance. Finally, we examine the latent task representations learnt by our model and show that they cluster tasks in a natural way.
Inverse Graphics GAN Learning to Generate 3D Shapes from Unstructured 2D Data ; Recent work has shown the ability to learn generative models for 3D shapes from only unstructured 2D images. However, training such models requires differentiating through the rasterization step of the rendering process, therefore past work has focused on developing bespoke rendering models which smooth over this nondifferentiable process in various ways. Such models are thus unable to take advantage of the photorealistic, fully featured, industrial renderers built by the gaming and graphics industry. In this paper we introduce the first scalable training technique for 3D generative models from 2D data which utilizes an offtheshelf nondifferentiable renderer. To account for the nondifferentiability, we introduce a proxy neural renderer to match the output of the nondifferentiable renderer. We further propose discriminator output matching to ensure that the neural renderer learns to smooth over the rasterization appropriately. We evaluate our model on images rendered from our generated 3D shapes, and show that our model can consistently learn to generate better shapes than existing models when trained with exclusively unstructured 2D images.
Generalized modeling of empirical socialecological systems ; Modeling socialecological systems is difficult due to the complexity of ecosystems and of individual and collective human behavior. Key components of the socialecological system are often oversimplified or omitted. Generalized modeling is a dynamical systems approach that can overcome some of these challenges. It can rigorously analyze qualitative system dynamics such as regime shifts despite incomplete knowledge of the model's constituent processes. Here, we review generalized modeling and use a recent study on the Baltic Sea cod fishery's boom and collapse to demonstrate its application to modeling the dynamics of empirical socialecological systems. These empirical applications demand new methods of analysis suited to larger, more complicated generalized models. Generalized modeling is a promising tool for rapidly developing mathematically rigorous, processbased understanding of a socialecological system's dynamics despite limited knowledge of the system.
A Bayesian Model for Generative Transitionbased Dependency Parsing ; We propose a simple, scalable, fully generative model for transitionbased dependency parsing with high accuracy. The model, parameterized by Hierarchical PitmanYor Processes, overcomes the limitations of previous generative models by allowing fast and accurate inference. We propose an efficient decoding algorithm based on particle filtering that can adapt the beam size to the uncertainty in the model while jointly predicting POS tags and parse trees. The UAS of the parser is on par with that of a greedy discriminative baseline. As a language model, it obtains better perplexity than an ngram model by performing semisupervised learning over a large unlabelled corpus. We show that the model is able to generate locally and syntactically coherent sentences, opening the door to further applications in language generation.
Graphical Representations for Ising and Potts Models in General External Fields ; This work is concerned with the theory of Graphical Representation for the Ising and Potts Models over general lattices with nontranslation invariant external field. We explicitly describe in terms of the Random Cluster Representation the distribution function and, consequently, the expected value of a single spin for the Ising and qstates Potts Models with general external fields. We also consider the Gibbs States for the EdwardsSokal Representation of the Potts Model with nontranslation invariant magnetic field and prove a version of the FKG Inequality for the socalled General Random Cluster Model GRC Model with free and wired boundary conditions in the nontranslation invariant case. Adding the amenability hypothesis on the lattice, we obtain the uniqueness of the infinite connected component and the quasilocality of the Gibbs Measures for the GRC Model with such general magnetic fields. As a final application of the theory developed, we show the uniqueness of the Gibbs Measures for the Ferromagnetic Ising Model with a positive power law decay magnetic field, as conjectured in [8].
Latent Normalizing Flows for Discrete Sequences ; Normalizing flows are a powerful class of generative models for continuous random variables, showing both strong model flexibility and the potential for nonautoregressive generation. These benefits are also desired when modeling discrete random variables such as text, but directly applying normalizing flows to discrete sequences poses significant additional challenges. We propose a VAEbased generative model which jointly learns a normalizing flowbased distribution in the latent space and a stochastic mapping to an observed discrete space. In this setting, we find that it is crucial for the flowbased distribution to be highly multimodal. To capture this property, we propose several normalizing flow architectures to maximize model flexibility. Experiments consider common discrete sequence tasks of characterlevel language modeling and polyphonic music generation. Our results indicate that an autoregressive flowbased model can match the performance of a comparable autoregressive baseline, and a nonautoregressive flowbased model can improve generation speed with a penalty to performance.
Anchored Correlation Explanation Topic Modeling with Minimal Domain Knowledge ; While generative models such as Latent Dirichlet Allocation LDA have proven fruitful in topic modeling, they often require detailed assumptions and careful specification of hyperparameters. Such model complexity issues only compound when trying to generalize generative models to incorporate human input. We introduce Correlation Explanation CorEx, an alternative approach to topic modeling that does not assume an underlying generative model, and instead learns maximally informative topics through an informationtheoretic framework. This framework naturally generalizes to hierarchical and semisupervised extensions with no additional modeling assumptions. In particular, wordlevel domain knowledge can be flexibly incorporated within CorEx through anchor words, allowing topic separability and representation to be promoted with minimal human intervention. Across a variety of datasets, metrics, and experiments, we demonstrate that CorEx produces topics that are comparable in quality to those produced by unsupervised and semisupervised variants of LDA.
Shaping Belief States with Generative Environment Models for RL ; When agents interact with a complex environment, they must form and maintain beliefs about the relevant aspects of that environment. We propose a way to efficiently train expressive generative models in complex environments. We show that a predictive algorithm with an expressive generative model can form stable beliefstates in visually rich and dynamic 3D environments. More precisely, we show that the learned representation captures the layout of the environment as well as the position and orientation of the agent. Our experiments show that the model substantially improves dataefficiency on a number of reinforcement learning RL tasks compared with strong modelfree baseline agents. We find that predicting multiple steps into the future overshooting, in combination with an expressive generative model, is critical for stable representations to emerge. In practice, using expressive generative models in RL is computationally expensive and we propose a scheme to reduce this computational burden, allowing us to build agents that are competitive with modelfree baselines.
Recent Advances in Scalable Network Generation ; Random graph models are frequently used as a controllable and versatile data source for experimental campaigns in various research fields. Generating such datasets at scale is a nontrivial task, as it requires design decisions typically spanning multiple areas of expertise. Challenges begin with the identification of relevant domainspecific network features, continue with the question of how to compile such features into a tractable model, and culminate in algorithmic details arising while implementing the corresponding model. In the present survey, we explore crucial aspects of random graph models with known scalable generators. We begin by briefly introducing network features considered by such models, and then discuss random graphs along with their generation algorithms. Our focus lies on modelling techniques and algorithmic primitives that have proven successful in obtaining massive graphs. We consider concepts and graph models for various domains such as social networks, infrastructure, ecology, and numerical simulations, and discuss generators for different models of computation including sharedmemory parallelism, massivelyparallel GPUs, and distributed systems.
Dimension Independent Generalization Error by Stochastic Gradient Descent ; One classical canon of statistics is that large models are prone to overfitting, and model selection procedures are necessary for high dimensional data. However, many overparameterized models, such as neural networks, perform very well in practice, although they are often trained with simple online methods and regularization. The empirical success of overparameterized models, which is often known as benign overfitting, motivates us to have a new look at the statistical generalization theory for online optimization. In particular, we present a general theory on the generalization error of stochastic gradient descent SGD solutions for both convex and locally convex loss functions. We further discuss data and model conditions that lead to a low effective dimension. Under these conditions, we show that the generalization error either does not depend on the ambient dimension p or depends on p via a polylogarithmic factor. We also demonstrate that in several widely used statistical models, the "low effective dimension" arises naturally in overparameterized settings. The studied statistical applications include both convex models such as linear regression and logistic regression and nonconvex models such as Mestimator and twolayer neural networks.
CAZSL ZeroShot Regression for Pushing Models by Generalizing Through Context ; Learning accurate models of the physical world is required for many robotic manipulation tasks. However, during manipulation, robots are expected to interact with unknown workpieces, so building predictive models that can generalize over a number of these objects is highly desirable. In this paper, we study the problem of designing deep learning agents that can generalize their models of the physical world by building contextaware learning models. The purpose of these agents is to quickly adapt and/or generalize their notion of physics of interaction in the real world based on certain features of the interacting objects that provide different contexts to the predictive models. With this motivation, we present contextaware zeroshot learning CAZSL, pronounced as casual, models: an approach utilizing a Siamese network architecture, embedding space masking and regularization based on context variables, which allows us to learn a model that can generalize to different parameters or features of the interacting objects. We test our proposed learning algorithm on the recently released Omnipush dataset that allows testing of metalearning capabilities using lowdimensional data. Codes for CAZSL are available at https://www.merl.com/research/license/CAZSL.
Improving Language Generation with Sentence Coherence Objective ; Conditional story generation and contextual text continuation have become increasingly popular topics in the NLP community. Existing models are often prone to output paragraphs of text that gradually diverge from the given prompt. Although the generated text may have reasonable perplexity and diversity, it can easily be identified by humans as gibberish. The goal of our project is to improve the coherence and consistency across sentences in a languagegeneration model. We aim to solve this issue by first training a sentence pair coherence classifier with the GPT2 pretrained model, and then cotraining the GPT2 language model with this new coherence objective using a method analogous to the REINFORCE algorithm. This finetuned language model is able to generate lengthy paragraphs conditioned on a given topic without diverging too much. The simplicity of this model allows it to be applicable to a variety of underlying language model architectures, since it only modifies the final layer of the pretrained model.
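The REINFORCE-style coupling described above reduces to a few lines once a coherence classifier is available: score sampled continuations, center the reward, and reweight the model's own log-probabilities. The values below are placeholders, and the constant baseline is a simplifying assumption.

```python
import torch

def coherence_reinforce_loss(log_probs, coherence_scores, baseline=0.5):
    """REINFORCE-style loss: sampled continuations are scored by a sentence-pair
    coherence classifier, and the language model's log-probabilities of its own
    samples are reweighted by the centered reward. Shapes: (batch,) each."""
    reward = coherence_scores - baseline            # centered reward
    return -(reward.detach() * log_probs).mean()

# Hypothetical values: per-sample sequence log-probs from the LM, and the
# classifier's probability that each (prompt, continuation) pair is coherent.
log_probs = torch.tensor([-42.0, -37.5, -40.1], requires_grad=True)
scores = torch.tensor([0.9, 0.2, 0.7])
loss = coherence_reinforce_loss(log_probs, scores)
loss.backward()
```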
Blind stain separation using modelaware generative learning and its applications on fluorescence microscopy images ; Multiple stains are usually used to highlight biological substances in biomedical image analysis. To decompose multiple stains for colocalization quantification, blind source separation is usually performed. Prior modelbased stain separation methods usually rely on stains' spatial distributions over an image and may fail to solve the colocalization problem. With the advantage of machine learning, deep generative models are used for this purpose. Since prior knowledge of imaging models is ignored in purely datadriven solutions, these methods may be suboptimal. In this study, a novel learningbased blind source separation framework is proposed, where the physical model of biomedical imaging is incorporated to regularize the learning process. The introduced modelrelevant adversarial loss couples all generators in the framework and limits the capacities of the generative models. Furthermore, a training algorithm is devised for the proposed framework to avoid intergenerator confusion during learning. This paper takes fluorescence unmixing in fluorescence microscopy images as an application example of the proposed framework. Qualitative and quantitative experimentation on a public fluorescence microscopy image set demonstrates the superiority of the proposed method over both prior modelbased approaches and learningbased methods.
Skill Rating for Generative Models ; We explore a new way to evaluate generative models using insights from the evaluation of competitive games between human players. We show experimentally that tournaments between generators and discriminators provide an effective way to evaluate generative models. We introduce two methods for summarizing tournament outcomes: tournament win rate and skill rating. Evaluations are useful in different contexts, including monitoring the progress of a single model as it learns during the training process and comparing the capabilities of two different fully trained models. We show that a tournament consisting of a single model playing against past and future versions of itself produces a useful measure of training progress. A tournament containing multiple separate models, using different seeds, hyperparameters, and architectures, provides a useful relative comparison between different trained GANs. Tournament-based rating methods are conceptually distinct from numerous previous categories of approaches to the evaluation of generative models, and have complementary advantages and disadvantages.
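A skill rating can be implemented with an Elo-style update; plain Elo, shown here as a simplification of the paper's rating system, conveys the idea. A generator "wins" a match to the extent its samples fool a frozen discriminator snapshot; the snapshot names and win rates below are made up.

    def elo_update(r_a, r_b, score_a, k=32.0):
        """One Elo update: player a scored score_a in [0, 1] against player b."""
        expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
        r_a += k * (score_a - expected_a)
        r_b += k * ((1.0 - score_a) - (1.0 - expected_a))
        return r_a, r_b

    # Matches pit generator snapshots against discriminator snapshots; the
    # generator's score is the fraction of its samples judged real.
    ratings = {"G@1k": 1500.0, "G@10k": 1500.0, "D@10k": 1500.0}
    matches = [("G@1k", "D@10k", 0.08), ("G@10k", "D@10k", 0.46)]
    for g, d, win_rate in matches:
        ratings[g], ratings[d] = elo_update(ratings[g], ratings[d], win_rate)
    print(ratings)   # later generator snapshots should earn higher ratings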
Alternating Recurrent Dialog Model with Large-scale Pretrained Language Models ; Existing dialog system models require extensive human annotations and are difficult to generalize to different tasks. The recent success of large pretrained language models such as BERT and GPT-2 (Devlin et al., 2019; Radford et al., 2019) has suggested the effectiveness of incorporating language priors in downstream NLP tasks. However, how much pretrained language models can help dialog response generation is still under exploration. In this paper, we propose a simple, general, and effective framework: the Alternating Roles Dialog Model (ARDM). ARDM models each speaker separately and takes advantage of large pretrained language models. It requires no supervision from human annotations such as belief states or dialog acts to achieve effective conversations. ARDM outperforms or is on par with state-of-the-art methods on two popular task-oriented dialog datasets (CamRest676 and MultiWOZ). Moreover, we can generalize ARDM to more challenging, non-collaborative tasks such as persuasion, where ARDM is capable of generating human-like responses that persuade people to donate to a charity.
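The alternating-roles idea reduces to routing utterances: each speaker has its own language model, every utterance becomes a training target for its speaker's model, and the shared history is the context. The sketch below shows only this data routing; the function name and example dialog are illustrative.

    def build_training_pairs(dialog):
        """Route each utterance to its speaker's model; both models condition
        on the full history, so no belief states or dialog acts are needed."""
        user_pairs, system_pairs = [], []
        history = ""
        for speaker, utterance in dialog:
            pair = (history, utterance)
            (user_pairs if speaker == "user" else system_pairs).append(pair)
            history += f"{speaker}: {utterance} "
        return user_pairs, system_pairs

    dialog = [("user", "I need a cheap hotel in the centre."),
              ("system", "Cityroomz is a moderately priced option. Shall I book it?")]
    user_pairs, system_pairs = build_training_pairs(dialog)
    # user_pairs fine-tune the user-side LM; system_pairs the system-side LM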
Reluctant generalized additive modeling ; Sparse generalized additive models (GAMs) are an extension of sparse generalized linear models that allow a model's prediction to vary nonlinearly with an input variable. This enables the data analyst to build more accurate models, especially when the linearity assumption is known to be a poor approximation of reality. Motivated by reluctant interaction modeling (Yu et al., 2019), we propose a multi-stage algorithm, called reluctant generalized additive modeling (RGAM), that can fit sparse generalized additive models at scale. It is guided by the principle that, all else being equal, one should prefer a linear feature over a nonlinear feature. Unlike existing methods for sparse GAMs, RGAM can be extended easily to binary, count, and survival data. We demonstrate the method's effectiveness on real and simulated examples.
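The multi-stage recipe can be sketched in a few lines of scikit-learn: fit a sparse linear model first, then offer nonlinear features only to what the linear fit could not explain. The crude sine/square basis and the screening threshold below are stand-ins for RGAM's smoothers and tuning, not the paper's exact algorithm.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = X[:, 0] + np.sin(2 * X[:, 1]) + 0.1 * rng.normal(size=200)

    # Stage 1: linear-only fit ("prefer a linear feature over a nonlinear one")
    linear_fit = Lasso(alpha=0.01).fit(X, y)
    residual = y - linear_fit.predict(X)

    # Stage 2: build candidate nonlinear features and keep only those that
    # noticeably explain the residual of the linear model
    nonlinear = np.column_stack([np.sin(2 * X), X ** 2])
    strength = np.abs([np.corrcoef(f, residual)[0, 1] for f in nonlinear.T])
    kept = nonlinear[:, strength > 0.2]

    # Stage 3: refit jointly on the linear features plus the survivors
    final_fit = Lasso(alpha=0.01).fit(np.hstack([X, kept]), y)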
BPS Skyrme neutron stars in generalized gravity ; We study the coupling of nuclear matter described by the BPS Skyrme model to generalized gravity. Concretely, we consider the Starobinsky model, which provides the leading-order correction to the Einstein-Hilbert action. Static solutions describing neutron stars are found both for the full field theory and for the mean-field approximation. We always treat the full Starobinsky model nonperturbatively, using appropriately generalized shooting methods for the numerical neutron star calculations. Many of our results are similar to those of previous investigations of neutron stars in the Starobinsky model using other models of nuclear matter, but there are some surprising discrepancies. The Newtonian mass relevant for the surface redshift, for example, turns out to be larger than the ADM mass in our model, in contrast to other investigations. This difference is related to the particularly high stiffness of the nuclear matter described by the BPS Skyrme model and offers an interesting possibility to distinguish between different models of nuclear matter within generalized gravity.
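The numerical backbone is a shooting method: integrate the field equations outward from the centre and root-find on a central value until the outer boundary condition holds. The toy ODE below (with exact solution f(r) = f0 sin(r)/r) stands in for the actual neutron-star equations; only the structure of the method carries over.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def rhs(r, y):
        f, p = y                          # toy ODE: f'' + (2/r) f' + f = 0
        return [p, -2.0 * p / r - f]

    def boundary_miss(f0, R=2.0):
        """Integrate outward from the centre with central value f0 and
        measure how far f(R) lands from the required boundary value 1."""
        sol = solve_ivp(rhs, (1e-6, R), [f0, 0.0], rtol=1e-8, atol=1e-10)
        return sol.y[0, -1] - 1.0

    f_central = brentq(boundary_miss, 0.1, 10.0)   # shoot on the central value
    print(f_central)                     # analytic answer: 2 / sin(2) ~ 2.1995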
An EM Approach to Non-autoregressive Conditional Sequence Generation ; Autoregressive (AR) models have been the dominant approach to conditional sequence generation, but suffer from high inference latency. Non-autoregressive (NAR) models have recently been proposed to reduce latency by generating all output tokens in parallel, but they achieve inferior accuracy compared to their autoregressive counterparts, primarily due to the difficulty of dealing with multimodality in sequence generation. This paper proposes a new approach that jointly optimizes both AR and NAR models in a unified Expectation-Maximization (EM) framework. In the E-step, an AR model learns to approximate the regularized posterior of the NAR model. In the M-step, the NAR model is updated on the new posterior and selects the training examples for the next AR model. This iterative process can effectively guide the system to remove the multimodality in the output sequences. To our knowledge, this is the first EM approach to NAR sequence generation. We evaluate our method on the task of machine translation. Experimental results on benchmark datasets show that the proposed approach achieves performance competitive with, if not better than, existing NAR models and significantly reduces inference latency.
Plug-and-Play Conversational Models ; There has been considerable progress towards conversational models that generate coherent and fluent responses; however, this often involves training large language models on large dialogue datasets, such as Reddit. These large conversational models provide little control over the generated responses, and this control is further limited in the absence of annotated conversational datasets for attribute-specific generation that could be used for fine-tuning the model. In this paper, we first propose and evaluate plug-and-play methods for controllable response generation, which do not require dialogue-specific datasets and do not rely on fine-tuning a large model. While effective, the decoding procedure induces considerable computational overhead, rendering the conversational model unsuitable for interactive usage. To overcome this, we introduce an approach that requires no further computation at decoding time and no fine-tuning of a large language model. We demonstrate, through extensive automatic and human evaluation, a high degree of control over the generated conversational responses with regard to multiple desired attributes, while remaining fluent.
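As a toy illustration of attribute control without fine-tuning, the snippet below reranks sampled candidates with an attribute scorer; this is deliberately simpler than the paper's gradient-based plug-and-play decoding and is not their procedure, but it shows where an attribute model can steer generation.

    def controlled_response(candidates, attribute_score, alpha=2.0):
        """Pick the sampled response that best trades off LM likelihood
        against a desired attribute (e.g., positive sentiment)."""
        return max(candidates,
                   key=lambda c: c["lm_logp"] + alpha * attribute_score(c["text"]))

    candidates = [{"text": "That sounds awful.", "lm_logp": -9.1},
                  {"text": "That sounds wonderful!", "lm_logp": -10.3}]
    positive = lambda t: 1.0 if "wonderful" in t else 0.0   # toy attribute model
    print(controlled_response(candidates, positive)["text"])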
Unlocking Compositional Generalization in Pretrained Models Using Intermediate Representations ; Sequence-to-sequence (seq2seq) models are prevalent in semantic parsing, but have been found to struggle at out-of-distribution compositional generalization. While specialized model architectures and pretraining of seq2seq models have been proposed to address this issue, the former often comes at the cost of generality and the latter has shown only limited success. In this paper, we study the impact of intermediate representations on compositional generalization in pretrained seq2seq models, without changing the model architecture at all, and identify key aspects for designing effective representations. Instead of training to directly map natural language to an executable form, we map to a reversible or lossy intermediate representation that has stronger structural correspondence with natural language. The combination of our proposed intermediate representations and pretrained models is surprisingly effective: the best combinations obtain a new state of the art on CFQ (a gain of 14.8 accuracy points) and on the template splits of three text-to-SQL datasets (gains of 15.0 to 19.4 accuracy points). This work highlights that intermediate representations provide an important and potentially overlooked degree of freedom for improving the compositional generalization abilities of pretrained seq2seq models.
Towards a Deep Learning Model for Hadronization ; Hadronization is a complex quantum process whereby quarks and gluons become hadrons. The widely used models of hadronization in event generators are physically inspired phenomenological models with many free parameters. We propose an alternative approach in which neural networks are used instead. Deep generative models are highly flexible, differentiable, and compatible with Graphics Processing Units (GPUs). We make a first step towards a data-driven, machine-learning-based hadronization model by replacing a component of the hadronization model within the Herwig event generator (the cluster model) with a Generative Adversarial Network (GAN). We show that a GAN is capable of reproducing the kinematic properties of cluster decays. Furthermore, we integrate this model into Herwig to generate entire events that can be compared with the output of the public Herwig simulator as well as with e+e- data.
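The GAN component is conceptually a standard two-player training loop over decay kinematics. The sketch below trains a toy generator/discriminator pair on a Gaussian stand-in for cluster-decay observables; the architectures, dimensions, and data distribution are all assumptions, not the Herwig setup.

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> decay
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # decay -> logit
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def real_decays(n):
        # stand-in for cluster-decay kinematics (e.g., an energy fraction, an angle)
        return torch.randn(n, 2) * torch.tensor([0.3, 1.0]) + torch.tensor([1.0, 0.0])

    for step in range(1000):
        x, z = real_decays(64), torch.randn(64, 4)
        fake = G(z)
        # discriminator update: real vs. generated samples
        loss_d = bce(D(x), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # generator update: try to fool the discriminator
        loss_g = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()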
Self-Programming Artificial Intelligence Using Code-Generating Language Models ; Recent progress in large-scale language models has enabled breakthroughs in previously intractable computer programming tasks. Prior work in meta-learning and neural architecture search has led to substantial successes across various task domains, spawning myriad approaches for algorithmically optimizing the design and learning dynamics of deep learning models. At the intersection of these research areas, we implement a code-generating language model with the ability to modify its own source code. Self-programming AI algorithms have been of interest since the dawn of AI itself. Although various theoretical formulations of generalized self-programming AI have been proposed, no such system has been successfully implemented to date under real-world computational constraints. Applying AI-based code generation to AI itself, we develop and experimentally validate the first practical implementation of a self-programming AI system. We empirically show that a self-programming AI implemented using a code generation model can successfully modify its own source code to improve performance and program sub-models to perform auxiliary tasks. Our model can self-modify various properties including model architecture, computational capacity, and learning dynamics.
On the Generalization and Adaption Performance of Causal Models ; Learning models that offer robust out-of-distribution generalization and fast adaptation is a key challenge in modern machine learning. Building causal structure into neural networks holds the promise of robust zero- and few-shot adaptation. Recent advances in differentiable causal discovery have proposed to factorize the data-generating process into a set of modules, i.e., one module for the conditional distribution of every variable, where only the causal parents are used as predictors. Such a modular decomposition of knowledge enables adaptation to distribution shifts by updating only a subset of parameters. In this work, we systematically study the generalization and adaptation performance of such modular neural causal models by comparing them to monolithic models and to structured models where the set of predictors is not constrained to the causal parents. Our analysis shows that modular neural causal models outperform other models on both zero- and few-shot adaptation in low-data regimes and offer robust generalization. We also find that these effects are more pronounced for sparser graphs than for denser graphs.
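The modular decomposition is easy to picture: one small network per variable, fed only by that variable's causal parents, so a mechanism shift can be absorbed by retraining a single module. The graph, module sizes, and names below are illustrative assumptions.

    import torch
    import torch.nn as nn

    parents = {0: [], 1: [0], 2: [0, 1]}   # toy causal graph: 0 -> 1, 0 -> 2, 1 -> 2

    class ModularCausalModel(nn.Module):
        """One module per variable, conditioned only on its causal parents."""
        def __init__(self, parents):
            super().__init__()
            self.parents = parents
            self.mods = nn.ModuleDict({
                str(i): nn.Sequential(nn.Linear(max(len(p), 1), 16), nn.ReLU(),
                                      nn.Linear(16, 1))
                for i, p in parents.items()})

        def forward(self, x):
            preds = []
            for i, p in self.parents.items():
                inp = x[:, p] if p else torch.zeros(x.size(0), 1)  # roots get a dummy input
                preds.append(self.mods[str(i)](inp))
            return torch.cat(preds, dim=1)

    model = ModularCausalModel(parents)
    preds = model(torch.randn(8, 3))
    # after a shift in variable 2's mechanism, adapt only model.mods["2"]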
Correcting Diverse Factual Errors in Abstractive Summarization via Post-Editing and Language Model Infilling ; Abstractive summarization models often generate inconsistent summaries containing factual errors or hallucinated content. Recent works focus on correcting factual errors in generated summaries via post-editing. Such correction models are trained using adversarial non-factual summaries constructed with heuristic rules for injecting errors; however, non-factual summaries generated by heuristics often do not generalize well to actual model errors. In this work, we propose to generate hard, representative synthetic examples of non-factual summaries through infilling language models. With this data, we train a more robust fact-correction model to post-edit the summaries and improve factual consistency. Through quantitative and qualitative experiments on two popular summarization datasets (CNN/DM and XSum), we show that our approach vastly outperforms prior methods in correcting erroneous summaries. Our model, FactEdit, improves factuality scores by over 11 points on CNN/DM and over 31 points on XSum on average across multiple summarization models, producing more factual summaries while maintaining competitive summarization quality.
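One way to picture the data-generation step: mask a factual span of a reference summary and let an infilling model propose plausible but wrong replacements, yielding hard negative training pairs for the corrector. The snippet uses the Hugging Face fill-mask pipeline as a simplified stand-in for the paper's infilling setup; the example sentence and the choice of a low-ranked fill are illustrative.

    from transformers import pipeline

    unmasker = pipeline("fill-mask", model="distilroberta-base")

    summary = "The company reported a profit of $3 million in 2020."
    masked = summary.replace("$3 million", unmasker.tokenizer.mask_token, 1)
    candidates = unmasker(masked, top_k=5)
    nonfactual = candidates[-1]["sequence"]   # a lower-ranked fill: fluent but wrong
    # training pair for the corrector: (nonfactual summary, source doc) -> summary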
Is Conditional Generative Modeling All You Need for Decision-Making? ; Recent improvements in conditional generative modeling have made it possible to generate high-quality images from language descriptions alone. We investigate whether these methods can directly address the problem of sequential decision-making. We view decision-making not through the lens of reinforcement learning (RL), but rather through conditional generative modeling. To our surprise, we find that our formulation leads to policies that can outperform existing offline RL approaches across standard benchmarks. By modeling a policy as a return-conditional diffusion model, we illustrate how we may circumvent the need for dynamic programming and subsequently eliminate many of the complexities that come with traditional offline RL. We further demonstrate the advantages of modeling policies as conditional diffusion models by considering two other conditioning variables: constraints and skills. Conditioning on a single constraint or skill during training leads to behaviors at test time that can satisfy several constraints together or compose multiple skills. Our results illustrate that conditional generative modeling is a powerful tool for decision-making.
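A return-conditional diffusion policy differs from a standard one mainly in its conditioning interface: the noise predictor sees the diffusion timestep and a target return, and at test time one simply asks for a high return. Below is a minimal, untrained DDPM-style sampling sketch; the network, dimensions, and noise schedule are assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class ReturnCondDenoiser(nn.Module):
        """Predicts the noise in an action sequence, given timestep and return."""
        def __init__(self, dim=8):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim + 2, 64), nn.ReLU(),
                                     nn.Linear(64, dim))

        def forward(self, x, t, ret):
            return self.net(torch.cat([x, t, ret], dim=-1))

    T = 50
    betas = torch.linspace(1e-4, 0.02, T)
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)

    @torch.no_grad()
    def sample(model, target_return, dim=8):
        x = torch.randn(1, dim)                       # start from pure noise
        for t in reversed(range(T)):                  # DDPM ancestral sampling
            eps = model(x, torch.full((1, 1), t / T),
                        torch.full((1, 1), target_return))
            x = (x - betas[t] / (1.0 - alpha_bars[t]).sqrt() * eps) \
                / (1.0 - betas[t]).sqrt()
            if t > 0:
                x = x + betas[t].sqrt() * torch.randn_like(x)
        return x

    plan = sample(ReturnCondDenoiser(), target_return=1.0)   # ask for a high return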
Understanding the Distillation Process from Deep Generative Models to Tractable Probabilistic Circuits ; Probabilistic Circuits (PCs) are a general and unified computational framework for tractable probabilistic models that support efficient computation of various inference tasks (e.g., computing marginal probabilities). Towards enabling such reasoning capabilities in complex real-world tasks, Liu et al. (2022) propose to distill knowledge, through latent variable assignments, from less tractable but more expressive deep generative models. However, it is still unclear what factors make this distillation work well. In this paper, we theoretically and empirically discover that the performance of a PC can exceed that of its teacher model. Therefore, instead of performing distillation from the most expressive deep generative model, we study what properties the teacher model and the PC should have in order to achieve good distillation performance. This leads to a generic algorithmic improvement, as well as other data-type-specific ones, over the existing latent variable distillation pipeline. Empirically, we outperform state-of-the-art TPMs by a large margin on challenging image modeling benchmarks. In particular, on ImageNet32, PCs achieve 4.06 bits per dimension, which is only 0.34 behind variational diffusion models (Kingma et al., 2021).
Investigating Failures to Generalize for Coreference Resolution Models ; Coreference resolution models are often evaluated on multiple datasets. Datasets vary, however, in how coreference is realized (i.e., how the theoretical concept of coreference is operationalized in the dataset) due to factors such as the choice of corpora and annotation guidelines. We investigate the extent to which errors of current coreference resolution models are associated with existing differences in operationalization across datasets (OntoNotes, PreCo, and Winogrande). Specifically, we distinguish between and break down model performance into categories corresponding to several types of coreference, including coreferring generic mentions, compound modifiers, and copula predicates, among others. This breakdown helps us investigate how state-of-the-art models might vary in their ability to generalize across different coreference types. In our experiments, for example, models trained on OntoNotes perform poorly on generic mentions and copula predicates in PreCo. Our findings help calibrate expectations of current coreference resolution models, and future work can explicitly account for those types of coreference that are empirically associated with poor generalization when developing models.
A data augmentation perspective on diffusion models and retrieval ; Diffusion models excel at generating photorealistic images from text queries. Naturally, many approaches have been proposed to use these generative abilities to augment training datasets for downstream tasks, such as classification. However, diffusion models are themselves trained on large, noisily supervised, but nonetheless annotated, datasets. It is an open question whether the generalization capabilities of diffusion models, beyond simply reusing the additional data of the pretraining process for augmentation, lead to improved downstream performance. We perform a systematic evaluation of existing methods for generating images from diffusion models and study new extensions to assess their benefit for data augmentation. While we find that personalizing diffusion models towards the target data outperforms simpler prompting strategies, we also show that using the training data of the diffusion model alone, via a simple nearest-neighbor retrieval procedure, leads to even stronger downstream performance. Overall, our study probes the limitations of diffusion models for data augmentation but also highlights their potential in generating new training data to improve performance on simple downstream vision tasks.
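The retrieval baseline is essentially nearest-neighbor search in an embedding space: embed the downstream training set, look up the closest items in the diffusion model's (assumed accessible) training corpus, and add them as extra training data. The random features below stand in for, say, CLIP embeddings.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    pretrain_feats = rng.normal(size=(10_000, 512))  # embeddings of the diffusion
                                                     # model's training images
    target_feats = rng.normal(size=(50, 512))        # embeddings of downstream data

    index = NearestNeighbors(n_neighbors=8, metric="cosine").fit(pretrain_feats)
    _, idx = index.kneighbors(target_feats)
    augmentation_pool = np.unique(idx.ravel())       # retrieved images to add
    print(augmentation_pool.shape)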
The Design Space of Generative Models ; Card et al.'s classic paper "The Design Space of Input Devices" established the value of design spaces as a tool for HCI analysis and invention. We posit that developing design spaces for emerging pretrained, generative AI models is necessary for supporting their integration into human-centered systems and practices. We explore what it means to develop an AI model design space by proposing two design spaces relating to generative AI models: the first considers how HCI can impact generative models (i.e., interfaces for models), and the second considers how generative models can impact HCI (i.e., models as an HCI prototyping material).