RSDiff: Remote Sensing Image Generation from Text Using Diffusion Model ; Satellite imagery generation and super-resolution are pivotal tasks in remote sensing, demanding high-quality, detailed images for accurate analysis and decision-making. In this paper, we propose an innovative and lightweight approach that employs two-stage diffusion models to gradually generate high-resolution satellite images purely from text prompts. Our pipeline comprises two interconnected diffusion models: a Low-Resolution Generation Diffusion Model (LR-GDM) that generates low-resolution images from text, and a Super-Resolution Diffusion Model (SRDM) that conditionally produces their high-resolution counterparts. The LR-GDM effectively synthesizes low-resolution images by computing the correlations of the text embedding and the image embedding in a shared latent space, capturing the essential content and layout of the desired scenes. Subsequently, the SRDM takes the generated low-resolution image and its corresponding text prompt and efficiently produces the high-resolution counterpart, infusing fine-grained spatial details and enhancing visual fidelity. Experiments are conducted on the commonly used Remote Sensing Image Captioning Dataset (RSICD). Our results demonstrate that our approach outperforms existing state-of-the-art (SoTA) models in generating satellite images with realistic geographical features, weather conditions, and land structures, while achieving remarkable super-resolution results for increased spatial precision.
Efficient Network Generation Under General Preferential Attachment ; Preferential attachment (PA) models of network structure are widely used due to their explanatory power and conceptual simplicity. PA models are able to account for the scale-free degree distributions observed in many real-world large networks through the remarkably simple mechanism of sequentially introducing nodes that attach preferentially to high-degree nodes. The ability to efficiently generate instances from PA models is a key asset in understanding both the models themselves and the real networks that they represent. Surprisingly, little attention has been paid to the problem of efficient instance generation. In this paper, we show that the complexity of generating network instances from a PA model depends on the preference function of the model, provide efficient data structures that work under any preference function, and present empirical results from an implementation based on these data structures. We demonstrate that, by indexing growing networks with a simple augmented heap, we can implement a network generator which scales many orders of magnitude beyond existing capabilities ($10^6$ to $10^8$ nodes). We show the utility of an efficient and general PA network generator by investigating the consequences of varying the preference functions of an existing model. We also provide quicknet, a freely available open-source implementation of the methods described in this work.
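To make the indexing idea concrete, here is a minimal sketch, not the quicknet implementation itself, of PA generation under an arbitrary preference function f(degree). It uses a Fenwick (binary indexed) tree in place of the paper's augmented heap, so drawing a node with probability proportional to f(degree) and reweighting it both take O(log n); the seed-clique initialization and function names are our own illustrative choices.

```python
import random

class FenwickTree:
    """Prefix-sum index over node weights: O(log n) update and weighted sampling."""
    def __init__(self, capacity):
        self.n = capacity
        self.tree = [0.0] * (capacity + 1)

    def add(self, i, delta):              # add delta to the weight of node i (0-based)
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)

    def prefix(self, i):                  # sum of weights of nodes 0..i-1
        s = 0.0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)
        return s

    def sample(self):                     # draw a node with prob. proportional to weight
        u = random.random() * self.prefix(self.n)
        pos, bit = 0, 1 << self.n.bit_length()
        while bit:
            nxt = pos + bit
            if nxt <= self.n and self.tree[nxt] < u:
                u -= self.tree[nxt]
                pos = nxt
            bit >>= 1
        return pos

def generate_pa_network(n_nodes, m, f):
    """Grow a network: each new node attaches to m distinct existing nodes,
    drawn with probability proportional to f(degree). Requires f(d) > 0 for d >= m."""
    tree = FenwickTree(n_nodes)
    degree = [0] * n_nodes
    edges = [(v, w) for v in range(m + 1) for w in range(v)]   # seed clique
    for v in range(m + 1):
        degree[v] = m
        tree.add(v, f(m))
    for v in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(tree.sample())
        for w in targets:
            edges.append((v, w))
            tree.add(w, f(degree[w] + 1) - f(degree[w]))       # reweight endpoint
            degree[w] += 1
        degree[v] = m
        tree.add(v, f(m))
    return edges

# e.g., a superlinear preference function f(d) = d^1.5:
edges = generate_pa_network(10_000, 2, lambda d: float(d) ** 1.5)
```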
MaskGAN: Better Text Generation via Filling in the ______ ; Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity, even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality, since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high-quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show, qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.
ATOM: Commit Message Generation Based on Abstract Syntax Tree and Hybrid Ranking ; Commit messages record code changes (e.g., feature modifications and bug repairs) in natural language, and are useful for program comprehension. Due to the frequent updates of software and the associated time cost, developers are generally unmotivated to write commit messages for code changes. Therefore, automating the message writing process is necessary. Previous studies on commit message generation have benefited from generation models or retrieval models, but the structure of the changed code, i.e., the AST, which can be important for capturing code semantics, has not been explicitly involved. Moreover, although generation models have the advantage of synthesizing commit messages for new code changes, they struggle to bridge the semantic gap between code and natural language, which could be mitigated by retrieval models. In this paper, we propose a novel commit message generation model, named ATOM, which explicitly incorporates the abstract syntax tree for representing code changes and integrates both retrieved and generated messages through hybrid ranking. Specifically, the hybrid ranking module can prioritize the most accurate message, from among both retrieved and generated messages, for a given code change. We evaluate the proposed model ATOM on our dataset crawled from 56 popular Java repositories. Experimental results demonstrate that ATOM improves on the state-of-the-art models by 30.72% in terms of BLEU-4 (an accuracy measure that is widely used to evaluate text generation systems). Qualitative analysis also demonstrates the effectiveness of ATOM in generating accurate code commit messages.
EMIXER: End-to-end Multimodal X-ray Generation via Self-supervision ; Deep generative models have enabled the automated synthesis of high-quality data for diverse applications. However, the most effective generative models are specialized to data from a single domain (e.g., images or text). Real-world applications such as healthcare require multimodal data from multiple domains (e.g., both images and corresponding text), which are difficult to acquire due to limited availability and privacy concerns, and are much harder to synthesize. To tackle this joint synthesis challenge, we propose an End-to-end MultImodal X-ray genERative model (EMIXER) for jointly synthesizing x-ray images and corresponding free-text reports, all conditional on diagnosis labels. EMIXER is a conditional generative adversarial model that works by 1) generating an image based on a label, 2) encoding the image to a hidden embedding, 3) producing the corresponding text via a hierarchical decoder from the image embedding, and 4) using a joint discriminator for assessing both the image and the corresponding text. EMIXER also enables self-supervision to leverage vast amounts of unlabeled data. Extensive experiments with real X-ray report data illustrate how data augmentation using synthesized multimodal samples can improve the performance of a variety of supervised tasks, including COVID-19 X-ray classification with very limited samples. The quality of generated images and reports is also confirmed by radiologists. We quantitatively show that EMIXER-generated synthetic datasets can augment X-ray image classification and report generation models to achieve 5.94% and 6.9% improvement over models trained only on real data samples. Taken together, our results highlight the promise of state-of-the-art generative models to advance clinical machine learning.
Deep Generative Models of Gravitational Waveforms via Conditional Autoencoder ; We construct a few deep generative models of gravitational waveforms based on the semi-supervised scheme of conditional autoencoders and their variational extensions. Once the training is done, we find that our best waveform model can generate the inspiral-merger waveforms of binary black hole coalescence with more than 97% average overlap (matched-filtering accuracy) for mass ratios between 1 and 10. Besides, the generation of a single waveform takes about one millisecond, which is about 10 to 100 times faster than the EOBNR algorithm running on the same computing facility. Moreover, these models can also help to explore the space of waveforms. That is, with mainly a low-mass-ratio training set, the resultant trained model is capable of generating a large amount of accurate high-mass-ratio waveforms. This result implies that our generative model can speed up the waveform generation for the low-latency search of gravitational wave events. With improved accuracy in future work, the generative waveform model may also help to speed up parameter estimation and can assist numerical relativity in generating waveforms of higher mass ratio by progressively self-training.
Different faces of generalized holographic dark energy ; In the formalism of generalized holographic dark energy (HDE), the holographic cutoff is generalized to depend upon $L_{\mathrm{IR}} = L_{\mathrm{IR}}\left(L_{\mathrm{p}}, \dot{L}_{\mathrm{p}}, \ddot{L}_{\mathrm{p}}, \cdots, L_{\mathrm{f}}, \dot{L}_{\mathrm{f}}, \cdots, a\right)$, where $L_{\mathrm{p}}$ and $L_{\mathrm{f}}$ are the particle horizon and the future horizon, respectively, and $a$ is the scale factor of the universe. Based on such formalism, in the present paper, we show that a wide class of dark energy (DE) models can be regarded as different candidates of the generalized HDE family, with respective cutoffs. This can be thought of as a symmetry between the generalized HDE and different DE models. In this regard, we consider several entropic dark energy models, namely the Tsallis entropic DE, the Rényi entropic DE, and the Sharma-Mittal entropic DE, and show that they are indeed equivalent to the generalized HDE. Such equivalence between the entropic DE and the generalized HDE is extended to the scenario where the respective exponents of the entropy functions are allowed to vary with the expansion of the universe. Besides the entropic DE models, the correspondence with the generalized HDE is also established for the Quintessence and for the Ricci DE models. In all the above cases, the effective equation of state (EoS) parameter corresponding to the holographic energy density is determined, by which the equivalence of the various DE models with the respective generalized HDE models is further confirmed. The equivalent holographic cutoffs are determined in two ways: (1) in terms of the particle horizon and its derivatives, and (2) in terms of the future horizon and its derivatives.
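For reference, the holographic energy density underlying this correspondence takes the standard form used in the generalized HDE literature (with $c$ a dimensionless parameter and $\kappa^2 = 8\pi G$; we assume the paper follows this convention):

$$\rho_{\mathrm{hde}} = \frac{3c^2}{\kappa^2 L_{\mathrm{IR}}^2}\,,$$

so each choice of the cutoff $L_{\mathrm{IR}}$ yields a different member of the generalized HDE family, which is what allows the various DE models above to be mapped onto it.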
AutoLossGen: Automatic Loss Function Generation for Recommender Systems ; In recommendation systems, the choice of loss function is critical, since a good loss may significantly improve the model performance. However, manually designing a good loss is a big challenge due to the complexity of the problem. A large fraction of previous work focuses on handcrafted loss functions, which require significant expertise and human effort. In this paper, inspired by the recent development of automated machine learning, we propose an automatic loss function generation framework, AutoLossGen, which is able to generate loss functions directly constructed from basic mathematical operators, without prior knowledge of loss structure. More specifically, we develop a controller model driven by reinforcement learning to generate loss functions, and develop an iterative and alternating optimization schedule to update the parameters of both the controller model and the recommender model. One challenge for automatic loss generation in recommender systems is the extreme sparsity of recommendation datasets, which leads to a sparse reward problem for loss generation and search. To solve the problem, we further develop a reward filtering mechanism for efficient and effective loss generation. Experimental results show that our framework manages to create tailored loss functions for different recommendation models and datasets, and the generated loss gives better recommendation performance than commonly used baseline losses. Besides, most of the generated losses are transferable, i.e., a loss generated based on one model and dataset also works well for another model or dataset. The source code of the work is available at https://github.com/rutgerswiselab/AutoLossGen.
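To illustrate what "a loss constructed from basic mathematical operators" can look like, here is a small sketch, our own illustration rather than AutoLossGen's controller or grammar: a loss is a postfix sequence of operator/operand tokens evaluated over predictions and labels, and it remains differentiable so the recommender can be trained on it.

```python
import torch

# Hypothetical operator vocabulary; AutoLossGen's actual grammar may differ.
UNARY = {"neg": torch.neg, "exp": torch.exp, "abs": torch.abs,
         "log": lambda x: torch.log(torch.clamp(x, min=1e-8))}
BINARY = {"add": torch.add, "sub": torch.sub, "mul": torch.mul,
          "div": lambda a, b: a / (b + 1e-8)}

def eval_loss(tokens, pred, label):
    """Evaluate a postfix token sequence as a scalar loss."""
    stack = []
    for t in tokens:
        if t == "pred":
            stack.append(pred)
        elif t == "label":
            stack.append(label)
        elif t in UNARY:
            stack.append(UNARY[t](stack.pop()))
        elif t in BINARY:
            b, a = stack.pop(), stack.pop()
            stack.append(BINARY[t](a, b))
    return stack.pop().mean()

# (label - pred)^2 written in postfix -> squared error
scores = torch.randn(32, requires_grad=True)
pred = torch.sigmoid(scores)
label = torch.randint(0, 2, (32,)).float()
loss = eval_loss(["label", "pred", "sub", "label", "pred", "sub", "mul"], pred, label)
loss.backward()   # differentiable, so gradients flow to the recommender
```

A controller that emits such token sequences and is rewarded by validation performance, with low-reward sequences filtered out, matches the search loop the abstract describes.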
PhysioGAN: Training High Fidelity Generative Model for Physiological Sensor Readings ; Generative models such as the variational autoencoder (VAE) and generative adversarial networks (GANs) have proven to be incredibly powerful for the generation of synthetic data that preserves the statistical properties and utility of real-world datasets, especially in the context of images and natural language text. Nevertheless, until now, there has been no successful demonstration of how to apply either method for generating useful physiological sensory data. The state-of-the-art techniques in this context have achieved only limited success. We present PHYSIOGAN, a generative model to produce high-fidelity synthetic physiological sensor data readings. PHYSIOGAN consists of an encoder, a decoder, and a discriminator. We evaluate PHYSIOGAN against the state-of-the-art techniques using two different real-world datasets: an ECG classification dataset and an activity recognition from motion sensors dataset. We compare PHYSIOGAN to the baseline models not only on the accuracy of class-conditional generation but also on the sample diversity and sample novelty of the synthetic datasets. We show that PHYSIOGAN generates samples with higher utility than other generative models: classification models trained on only synthetic data generated by PHYSIOGAN suffer only 10% and 20% decreases in their classification accuracy relative to classification models trained on the real data. Furthermore, we demonstrate the use of PHYSIOGAN for sensor data imputation, where it produces plausible results.
ELMER: A Non-Autoregressive Pre-trained Language Model for Efficient and Effective Text Generation ; We study the text generation task under the approach of pre-trained language models (PLMs). Typically, an autoregressive (AR) method is adopted for generating texts in a token-by-token manner. Despite the many advantages of AR generation, it usually suffers from inefficient inference. Therefore, non-autoregressive (NAR) models have been proposed to generate all target tokens simultaneously. However, NAR models usually generate texts of lower quality due to the absence of token dependency in the output text. In this paper, we propose ELMER, an efficient and effective PLM for NAR text generation, which explicitly models the token dependency during NAR generation. By leveraging the early-exit technique, ELMER enables token generation at different layers according to prediction confidence: a more confident token will exit at a lower layer. Besides, we propose a novel pre-training objective, Layer Permutation Language Modeling, to pre-train ELMER by permuting the exit layer for each token in a sequence. Experiments on three text generation tasks show that ELMER significantly outperforms NAR models and further narrows the performance gap with AR PLMs (e.g., ELMER 29.92 vs. BART 30.61 ROUGE-L on XSUM) while achieving an over 10x inference speedup.
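The early-exit mechanism can be sketched in a few lines. This is a minimal illustration of per-token early exit in a parallel (NAR) decoder, not ELMER's exact architecture: a shared output head scores each token after every layer, and a token's hidden state is frozen once the head's confidence passes a threshold.

```python
import torch
import torch.nn as nn

class EarlyExitNAR(nn.Module):
    """Toy NAR decoder: every layer shares one output head; a token 'exits' at
    the first layer where the head's max softmax probability exceeds a threshold."""
    def __init__(self, vocab=1000, d=64, n_layers=6, threshold=0.9):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
             for _ in range(n_layers)])
        self.head = nn.Linear(d, vocab)          # shared exit head
        self.threshold = threshold

    def forward(self, h):                        # h: (batch, seq, d) embeddings
        exited = torch.zeros(h.shape[:2], dtype=torch.bool, device=h.device)
        logits = self.head(h)
        for layer in self.layers:
            h_new = layer(h)
            # tokens that already exited keep their frozen state and logits
            h = torch.where(exited.unsqueeze(-1), h, h_new)
            new_logits = self.head(h)
            logits = torch.where(exited.unsqueeze(-1), logits, new_logits)
            conf = new_logits.softmax(-1).max(-1).values
            exited |= conf > self.threshold      # confident tokens exit early
        return logits.argmax(-1)                 # all tokens decoded in parallel

out = EarlyExitNAR()(torch.randn(2, 8, 64))
```

Since all positions are decoded simultaneously and confident tokens skip the remaining layers, inference cost drops well below a full AR pass, which is where the reported speedup comes from.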
DiffusionStego: Training-free Diffusion Generative Steganography via Message Projection ; Generative steganography is the process of hiding secret messages in generated images instead of cover images. Existing studies on generative steganography use GAN or flow models to obtain high message-hiding capacity and anti-detection ability over cover images. However, they create relatively unrealistic stego images because of the inherent limitations of those generative models. We propose DiffusionStego, a generative steganography approach based on diffusion models, which outperform other generative models in image generation. DiffusionStego projects secret messages into the latent noise of diffusion models and generates stego images with an iterative denoising process. Since naively hiding secret messages in the noise increases visual degradation and decreases extracted-message accuracy, we introduce message projection, which hides messages in the noise space while addressing these issues. We suggest three options for message projection to adjust the trade-off between extracted-message accuracy, anti-detection ability, and image quality. DiffusionStego is a training-free approach, so we can apply it to pre-trained diffusion models which generate high-quality images, or even to large-scale text-to-image models such as Stable Diffusion. DiffusionStego achieved a high message capacity (3.0 bpp of binary messages with 98% accuracy, and 6.0 bpp with 90% accuracy) as well as high quality (an FID score of 2.77 for 1.0 bpp on the FFHQ 64x64 dataset), making its outputs challenging to distinguish from real images in PNG format.
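A toy illustration of the core idea of hiding bits in the latent noise, using our own simplified scheme rather than any of the paper's three projection options: each selected noise coordinate carries one bit in its sign while keeping a standard-normal magnitude, so the noise still looks Gaussian to the sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_bits(bits, latent_shape):
    """Hide bits in the signs of latent noise coordinates (toy projection)."""
    noise = np.abs(rng.standard_normal(latent_shape))       # half-normal magnitudes
    flat = noise.reshape(-1)
    signs = np.where(np.asarray(bits) == 1, 1.0, -1.0)
    flat[:len(bits)] *= signs                                # bit -> sign
    rest = flat[len(bits):]
    rest *= rng.choice([-1.0, 1.0], size=rest.shape)         # unused coords stay N(0,1)
    return flat.reshape(latent_shape)

def extract_bits(noise, n_bits):
    return (noise.reshape(-1)[:n_bits] > 0).astype(int)

bits = rng.integers(0, 2, 128)
z = embed_bits(bits, (4, 8, 8))   # z would seed a deterministic (e.g., DDIM) sampler
assert (extract_bits(z, 128) == bits).all()
```

With a deterministic sampler the latent noise can be approximately recovered by inverting the denoising process; the inversion error is exactly why a careful projection is needed to trade off extraction accuracy against image quality, as the abstract describes.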
Domain Adaptation based on Human Feedback for Enhancing Generative Model Denoising Abilities ; How can we incorporate human feedback into a generative model? To answer this question, in this paper we demonstrate a method for the denoising problem and for domain adaptation using human feedback. Deep generative models have demonstrated impressive results in image denoising. However, current image denoising models often produce inappropriate results when applied to domains different from the ones they were trained on. If there are 'good' and 'bad' results for unseen data, how can we raise the quality of the 'bad' results? Most methods use an approach based on model generalization. However, these methods require target images for training on, or adapting to, the unseen domain. In this paper, to adapt to a new domain, we work without target images for the unseen domain and instead improve specific failed images. To this end, we propose a method for fine-tuning inappropriate results generated in a different domain by utilizing human feedback. First, we train a generator to denoise images using only noisy MNIST digit '0' images. The denoising generator trained on the source domain produces unintended results when applied to target-domain images. To achieve domain adaptation, we construct a dataset of noisy-image/denoised-generated-image pairs and train a reward model to predict human feedback. Finally, we fine-tune the generator on the different domain using the reward model with an auxiliary loss function, aiming to transfer denoising capabilities to the target domain. Our approach demonstrates the potential to efficiently fine-tune a generator trained on one domain using human feedback from another domain, thereby enhancing denoising abilities across domains.
It Ain't That Bad: Understanding the Mysterious Performance Drop in OOD Generalization for Generative Transformer Models ; Generative Transformer-based models have achieved remarkable proficiency in solving diverse problems. However, their generalization ability is not fully understood and is not always satisfying. Researchers take basic mathematical tasks like n-digit addition or multiplication as important perspectives for investigating their generalization behaviors. Curiously, it is observed that when training on n-digit operations (e.g., additions) in which both input operands are n digits in length, models generalize successfully on unseen n-digit inputs (in-distribution (ID) generalization), but fail miserably and mysteriously on longer, unseen cases (out-of-distribution (OOD) generalization). Studies try to bridge this gap with workarounds such as modifying position embeddings, fine-tuning, and priming with more extensive or instructive data. However, without addressing the essential mechanism, there is hardly any guarantee regarding the robustness of these solutions. We bring this unexplained performance drop into attention and ask whether it is purely from random errors. Here we turn to the mechanistic line of research, which has had notable successes in model interpretability. We discover that the strong ID generalization stems from structured representations, while behind the unsatisfying OOD performance, the models still exhibit clear learned algebraic structures. Specifically, these models map unseen OOD inputs to outputs with equivalence relations in the ID domain. These findings highlight the potential of the models to carry useful information for improved generalization.
DualLip: A System for Joint Lip Reading and Generation ; Lip reading aims to recognize text from a talking lip, while lip generation aims to synthesize a talking lip according to text; the latter is a key component in talking face generation and is a dual task of lip reading. In this paper, we develop DualLip, a system that jointly improves lip reading and generation by leveraging the task duality and using unlabeled text and lip video data. The key ideas of DualLip include: 1) generate lip video from unlabeled text with a lip generation model, and use the pseudo pairs to improve lip reading; 2) generate text from unlabeled lip video with a lip reading model, and use the pseudo pairs to improve lip generation. We further extend DualLip to talking face generation with two additionally introduced components: lip-to-face generation and text-to-speech generation. Experiments on GRID and TCD-TIMIT demonstrate the effectiveness of DualLip in improving lip reading, lip generation, and talking face generation by utilizing unlabeled data. Specifically, the lip generation model in our DualLip system trained with only 10% of the paired data surpasses the performance of that trained with the whole paired data. And on the GRID benchmark for lip reading, we achieve a 1.16% character error rate and a 2.71% word error rate, outperforming the state-of-the-art models using the same amount of paired data.
StrokeGAN: Reducing Mode Collapse in Chinese Font Generation via Stroke Encoding ; The generation of stylish Chinese fonts is an important problem involved in many applications. Most existing generation methods are based on deep generative models, particularly generative adversarial network (GAN) based models. However, these deep generative models may suffer from the mode collapse issue, which significantly degrades the diversity and quality of generated results. In this paper, we introduce a one-bit stroke encoding to capture the key mode information of Chinese characters and then incorporate it into CycleGAN, a popular deep generative model for Chinese font generation. The resulting method, called StrokeGAN, is mainly motivated by the observation that the stroke encoding contains a large amount of mode information of Chinese characters. In order to reconstruct the one-bit stroke encoding of the associated generated characters, we introduce a stroke-encoding reconstruction loss imposed on the discriminator. Equipped with such one-bit stroke encoding and stroke-encoding reconstruction loss, the mode collapse issue of CycleGAN can be significantly alleviated, with improved preservation of strokes and diversity of generated characters. The effectiveness of StrokeGAN is demonstrated by a series of generation tasks over nine datasets with different fonts. The numerical results demonstrate that StrokeGAN generally outperforms the state-of-the-art methods in terms of content and recognition accuracies, as well as stroke error, and also generates more realistic characters.
EGANS: Evolutionary Generative Adversarial Network Search for Zero-Shot Learning ; Zero-shot learning (ZSL) aims to recognize novel classes for which no samples can be collected for training a prediction model. Accordingly, generative models (e.g., the generative adversarial network (GAN)) are typically used to synthesize visual samples conditioned on class semantic vectors, and have achieved remarkable progress for ZSL. However, existing GAN-based generative ZSL methods are based on handcrafted models, which cannot adapt to various datasets/scenarios and suffer from model instability. To alleviate these challenges, we propose evolutionary generative adversarial network search (termed EGANS) to automatically design a generative network with good adaptation and stability, enabling reliable visual feature sample synthesis for advancing ZSL. Specifically, we adopt cooperative dual evolution to conduct a neural architecture search for both the generator and the discriminator under a unified evolutionary adversarial framework. EGANS is learned in two stages: evolutionary generator architecture search and evolutionary discriminator architecture search. During the evolutionary generator architecture search, we adopt a many-to-one adversarial training strategy to evolutionarily search for the optimal generator. Then the optimal generator is further applied to search for the optimal discriminator in the evolutionary discriminator architecture search with a similar evolutionary search algorithm. Once the optimal generator and discriminator are found, we plug them into various generative ZSL baselines for ZSL classification. Extensive experiments show that EGANS consistently improves existing generative ZSL methods on the standard CUB, SUN, AWA2 and FLO datasets. The significant performance gains indicate that evolutionary neural architecture search explores a virgin field in ZSL.
The Quantum Dissipative Villain Model ; We introduce the Quantum Dissipative Villain (QDV) model as a prototype model to study tunneling in dissipative quantum mechanics, with dissipation provided by a coupled linear environment. In the QDV model, the discrete character of a tunneling degree of freedom coupled to an environment is explicit, leading to a rich dual structure. We derive general exact mappings of the QDV model onto several dual discrete representations, including pairs of self-dual models, for general linear environments and arbitrary temperatures. Self-duality allows us to write exact equations for each correlation function of each representation. Analogies with the theory of classical network transformations are also presented. Finally, we discuss the fundamental character of the QDV model. For instance, the standard Caldeira-Leggett model, which describes mesoscopic Josephson junctions in a circuit and many other physical systems, is a special QDV model. The self-dual structure of the QDV model then allows the exact generalization of the approximate Schmid self-duality to general linear environments and arbitrary temperatures.
Reflection K-Matrices for 19-Vertex Models ; We derive and classify all regular solutions of the boundary Yang-Baxter equation for 19-vertex models, known as the Zamolodchikov-Fateev (or $A_1^{(1)}$) model, the Izergin-Korepin (or $A_2^{(2)}$) model, the $sl(2|1)$ model and the $osp(2|1)$ model. We find that there is a general solution for the $A_1^{(1)}$ and $sl(2|1)$ models. In both models it is a complete K-matrix with three free parameters. For the $A_2^{(2)}$ and $osp(2|1)$ models we find three general solutions: two complete reflection K-matrix solutions and one incomplete reflection K-matrix solution with some null entries. In both models these solutions have two free parameters. Integrable spin-1 Hamiltonians with general boundary interactions are also presented. Several solutions reduced from these general solutions are presented in the appendices.
Gravitationally influenced particle creation models and late-time cosmic acceleration ; In this work we focus on the gravitationally influenced adiabatic particle creation process, a mechanism that does not need any dark energy or modified gravity models to explain the current accelerating phase of the universe. Introducing some particle creation models that generalize previous models in the literature, we constrain the cosmological scenarios using the latest compilation of Type Ia Supernovae data only, the first indicator of the accelerating universe. Aside from the observational constraints on the models, we examine the models using two model-independent diagnostics, namely cosmography and $Om$. Further, we establish the general conditions to test the thermodynamic viability of any particle creation model. Our analysis shows that at late times the models closely resemble the $\Lambda$CDM cosmology, and the models always satisfy the generalized second law of thermodynamics under certain conditions.
Bidirectional Helmholtz Machines ; Efficient unsupervised training and inference in deep generative models remains a challenging problem. One basic approach, called the Helmholtz machine, involves training a top-down directed generative model together with a bottom-up auxiliary model used for approximate inference. Recent results indicate that better generative models can be obtained with better approximate inference procedures. Instead of improving the inference procedure, we here propose a new model which guarantees that the top-down and bottom-up distributions can efficiently invert each other. We achieve this by interpreting both the top-down and the bottom-up directed models as approximate inference distributions and by defining the model distribution to be the geometric mean of these two. We present a lower bound for the likelihood of this model, and we show that optimizing this bound regularizes the model so that the Bhattacharyya distance between the bottom-up and top-down approximate distributions is minimized. This approach results in state-of-the-art generative models which prefer significantly deeper architectures, while it allows for orders of magnitude more efficient approximate inference.
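In symbols, and hedging on the paper's exact notation: with $p$ the top-down generative model and $q$ the bottom-up model over data $x$ and latents $h$, the model distribution is the normalized geometric mean

$$p^*(x, h) = \frac{1}{Z}\sqrt{p(x,h)\,q(x,h)}\,, \qquad Z = \sum_{x,h}\sqrt{p(x,h)\,q(x,h)}\,.$$

By the Cauchy-Schwarz inequality $Z \le 1$, and $-\log Z$ is precisely the Bhattacharyya distance between $p$ and $q$, so maximizing the likelihood bound pushes the two directed models toward inverting each other, as the abstract states.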
Neural Topic Modeling with Cycle-Consistent Adversarial Training ; Advances in deep generative models have attracted significant research interest in neural topic modeling. The recently proposed Adversarial-neural Topic Model models topics with an adversarially trained generator network and employs a Dirichlet prior to capture the semantic patterns in latent topics. It is effective in discovering coherent topics but unable to infer topic distributions for given documents or utilize available document labels. To overcome such limitations, we propose Topic Modeling with Cycle-consistent Adversarial Training (ToMCAT) and its supervised version sToMCAT. ToMCAT employs a generator network to interpret topics and an encoder network to infer document topics. Adversarial training and cycle-consistent constraints are used to encourage the generator and the encoder to produce realistic samples that coordinate with each other. sToMCAT extends ToMCAT by incorporating document labels into the topic modeling process to help discover more coherent topics. The effectiveness of the proposed models is evaluated on unsupervised/supervised topic modeling and text classification. The experimental results show that our models can produce both coherent and informative topics, outperforming a number of competitive baselines.
Model-free, Model-based, and General Intelligence ; During the 60s and 70s, AI researchers explored intuitions about intelligence by writing programs that displayed intelligent behavior. Many good ideas came out of this work, but programs written by hand were not robust or general. After the 80s, research increasingly shifted to the development of learners capable of inferring behavior and functions from experience and data, and solvers capable of tackling well-defined but intractable models like SAT, classical planning, Bayesian networks, and POMDPs. The learning approach has achieved considerable success but results in black boxes that do not have the flexibility, transparency, and generality of their model-based counterparts. Model-based approaches, on the other hand, require models and scalable algorithms. Model-free learners and model-based solvers have close parallels with Systems 1 and 2 in current theories of the human mind: the first, a fast, opaque, and inflexible intuitive mind; the second, a slow, transparent, and flexible analytical mind. In this paper, I review developments in AI and draw on these theories to discuss the gap between model-free learners and model-based solvers, a gap that needs to be bridged in order to have intelligent systems that are robust and general.
A Review of Learning with Deep Generative Models from a Perspective of Graphical Modeling ; This document aims to provide a review of learning with deep generative models (DGMs), which is a highly active area in machine learning and, more generally, artificial intelligence. This review is not meant to be a tutorial, but when necessary, we provide self-contained derivations for completeness. This review has two features. First, though there are different perspectives from which to classify DGMs, we choose to organize this review from the perspective of graphical modeling, because the learning methods for directed DGMs and undirected DGMs are fundamentally different. Second, we differentiate model definitions from model learning algorithms, since different learning algorithms can be applied to solve the learning problem on the same model, and an algorithm can be applied to learn different models. We thus separate model definition and model learning, with more emphasis on reviewing, differentiating and connecting different learning algorithms. We also discuss promising future research directions.
A Tale of Three Probabilistic Families: Discriminative, Descriptive and Generative Models ; The pattern theory of Grenander is a mathematical framework where patterns are represented by probability models on random variables of algebraic structures. In this paper, we review three families of probability models, namely the discriminative models, the descriptive models, and the generative models. A discriminative model is in the form of a classifier: it specifies the conditional probability of the class label given the input signal. A descriptive model specifies the probability distribution of the signal, based on an energy function defined on the signal. A generative model assumes that the signal is generated by some latent variables via a transformation. We review these models within a common framework and explore their connections. We also review the recent developments that take advantage of the high approximation capacities of deep neural networks.
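The three families can be summarized in one line each (our shorthand; $y$ a class label, $x$ a signal, $z$ latent variables, and $f_\theta$, $E_\theta$, $g_\theta$ parametrized, e.g., by neural networks):

$$\text{discriminative: } p_\theta(y \mid x) = \mathrm{softmax}\, f_\theta(x), \qquad \text{descriptive: } p_\theta(x) = \frac{1}{Z(\theta)}\, e^{-E_\theta(x)}, \qquad \text{generative: } x = g_\theta(z),\; z \sim p(z).$$

The common framework in the paper amounts to noting that all three are probability models differing only in which conditional or marginal distribution they parametrize.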
Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order ; Masked language models and autoregressive language models are two types of language models. While pre-trained masked language models such as BERT dominate the line of natural language understanding (NLU) tasks, autoregressive language models such as GPT are especially capable of natural language generation (NLG). In this paper, we propose a probabilistic masking scheme for the masked language model, which we call the probabilistically masked language model (PMLM). We implement a specific PMLM with a uniform prior distribution on the masking ratio, named u-PMLM. We prove that u-PMLM is equivalent to an autoregressive permutated language model. One main advantage of the model is that it supports text generation in arbitrary order with surprisingly good quality, which could potentially enable new applications over traditional unidirectional generation. Besides, the pre-trained u-PMLM also outperforms BERT on a set of downstream NLU tasks.
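The probabilistic masking scheme itself is short enough to sketch. This is a minimal illustration of the u-PMLM-style corruption step (the `MASK_ID` value is a placeholder vocabulary index): each sequence draws its own masking ratio from a uniform prior, then masks positions independently at that ratio.

```python
import torch

MASK_ID = 103  # placeholder [MASK] token id

def probabilistic_mask(tokens: torch.Tensor):
    """tokens: (batch, seq). Each sequence gets its own ratio r ~ U(0, 1);
    each position is then masked independently with probability r."""
    batch, seq = tokens.shape
    ratio = torch.rand(batch, 1)                  # one masking ratio per sequence
    mask = torch.rand(batch, seq) < ratio         # positions to hide
    corrupted = tokens.masked_fill(mask, MASK_ID)
    return corrupted, mask                        # train to predict tokens[mask]

corrupted, mask = probabilistic_mask(torch.randint(1000, 30000, (4, 16)))
```

BERT's fixed 15% masking is the special case of a point-mass prior on the ratio; a uniform prior exposes the model to every number of masked tokens, from one to all, which is what connects it to generation in arbitrary order.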
Real-time Transient Simulation and Studies of Offshore Wind Turbines ; This paper presents real-time simulation models developed for offshore wind turbine generators in compliance with industry standards. The critical control functions, such as negative sequence injection, sequence current limiting, voltage ride-through, and power curtailment, are designed to meet the industry requirements for future electromagnetic transient (EMT) testing and controls of offshore wind farms. Average-value and switching detailed models are developed in the Opal-RT real-time simulator. Real-time capabilities of these models are compared to show the effectiveness of the average-value model in terms of accuracy and computational efficiency. Studies of balanced and unbalanced faults illustrate the ability of the proposed turbine models to inject active and reactive currents during fault events. The models are validated against the second-generation generic wind turbine model proposed by the Western Electricity Coordinating Council (WECC). Validation results reveal that the proposed models are aligned with the WECC generic model. In addition, the models provide an extended capability in mitigating the active power oscillation during unbalanced fault conditions.
Model calibration using ESEm v1.0.0: an open, scalable Earth System Emulator ; Large computer models are ubiquitous in the earth sciences. These models often have tens or hundreds of tuneable parameters and can take thousands of core-hours to run to completion while generating terabytes of output. It is becoming common practice to develop emulators as fast approximations, or surrogates, of these models in order to explore the relationships between these inputs and outputs, understand uncertainties, and generate large ensemble datasets. While the purposes of these surrogates may differ, their development is often very similar. Here we introduce ESEm: an open-source tool providing a general workflow for emulating and validating a wide variety of models and outputs. It includes efficient routines for sampling these emulators for the purpose of uncertainty quantification and model calibration. It is built on well-established, high-performance libraries to ensure robustness, extensibility and scalability. We demonstrate the flexibility of ESEm through three case studies: using ESEm to reduce parametric uncertainty in a general circulation model, to explore precipitation sensitivity in a cloud-resolving model, and to explore scenario uncertainty in the CMIP6 multi-model ensemble.
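The generic emulation-and-calibration workflow the abstract describes can be sketched with standard tooling. This sketch uses scikit-learn rather than ESEm's own API (which we do not reproduce here), and `expensive_model` is a toy stand-in for a real climate-model run: fit a cheap surrogate on a handful of expensive runs, then sample it freely.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_model(params):                   # toy stand-in for a full model run
    return np.sin(3 * params[:, 0]) + params[:, 1] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(30, 2))      # 30 runs of the "expensive" model
y_train = expensive_model(X_train)

# Gaussian-process emulator: a fast surrogate with built-in uncertainty
emulator = GaussianProcessRegressor(ConstantKernel() * RBF()).fit(X_train, y_train)

# Uncertainty quantification: sweep 100k parameter settings in seconds
X_sweep = rng.uniform(0, 1, size=(100_000, 2))
mean, std = emulator.predict(X_sweep, return_std=True)

# Naive history-matching calibration: keep parameter settings whose emulated
# output is consistent with an observed value within 2 sigma
obs = 0.8
plausible = X_sweep[np.abs(mean - obs) < 2 * std]
```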
On the Power of Edge Independent Graph Models ; Why do many modern neural-network-based graph generative models fail to reproduce typical real-world network characteristics, such as high triangle density? In this work we study the limitations of edge independent random graph models, in which each edge is added to the graph independently with some probability. Such models include both the classic Erdős-Rényi and stochastic block models, as well as modern generative models such as NetGAN, variational graph autoencoders, and CELL. We prove that, subject to a bounded overlap condition, which ensures that the model does not simply memorize a single graph, edge independent models are inherently limited in their ability to generate graphs with high triangle and other subgraph densities. Notably, such high densities are known to appear in real-world social networks and other graphs. We complement our negative results with a simple generative model that balances overlap and accuracy, performing comparably to more complex models in reconstructing many graph statistics.
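An edge independent model is fully specified by a symmetric matrix of edge probabilities, so the whole model class fits in a few lines; the threshold values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_edge_independent(P):
    """Each edge {i, j} is included independently with probability P[i, j]."""
    n = P.shape[0]
    upper = np.triu(rng.random((n, n)) < P, k=1)   # decide each pair once
    return upper | upper.T                          # symmetric, no self-loops

# Erdős-Rényi and the stochastic block model are special cases of P:
n = 200
labels = rng.integers(0, 2, n)
P = np.where(labels[:, None] == labels[None, :], 0.10, 0.01)  # 2-block SBM
np.fill_diagonal(P, 0.0)

A = sample_edge_independent(P)
# By independence, E[#triangles] = tr(P^3) / 6 -- the quantity the paper's
# bounded-overlap argument shows cannot be large without memorizing a graph.
expected = np.trace(np.linalg.matrix_power(P, 3)) / 6
actual = np.trace(np.linalg.matrix_power(A.astype(float), 3)) / 6
```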
Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality ; Diffusion models have emerged as an expressive family of generative models rivaling GANs in sample quality and autoregressive models in likelihood scores. Standard diffusion models typically require hundreds of forward passes through the model to generate a single high-fidelity sample. We introduce Differentiable Diffusion Sampler Search (DDSS): a method that optimizes fast samplers for any pre-trained diffusion model by differentiating through sample quality scores. We also present Generalized Gaussian Diffusion Models (GGDM), a family of flexible non-Markovian samplers for diffusion models. We show that optimizing the degrees of freedom of GGDM samplers by maximizing sample quality scores via gradient descent leads to improved sample quality. Our optimization procedure backpropagates through the sampling process using the reparametrization trick and gradient rematerialization. DDSS achieves strong results on unconditional image generation across various datasets (e.g., FID scores on LSUN church 128x128 of 11.6 with only 10 inference steps, and 4.82 with 20 steps, compared to 51.1 and 14.9 with the strongest DDPM/DDIM baselines). Our method is compatible with any pre-trained diffusion model, with no fine-tuning or re-training required.
Robust static and dynamic maximum flows ; We study the robust maximum flow problem and the robust maximum flow over time problem, where a given number $\Gamma$ of arcs may fail or may be delayed. Two prominent models have been introduced for these problems: either one assigns flow to arcs, fulfilling weak flow conservation in any scenario, or one assigns flow to paths, where an arc failure or delay affects a whole path. We provide a unifying framework by presenting novel general models, in which we assign flow to subpaths. These models contain the known models as special cases and unify their advantages in order to obtain less conservative robust solutions. We give a thorough analysis with respect to the complexity of the general models. In particular, we show that the general models are essentially NP-hard, whereas, e.g., in the static case with $\Gamma = 1$ an optimal solution can be computed in polynomial time. Further, we answer the open question about the complexity of the dynamic path model for $\Gamma = 1$. We also compare the solution quality of the different models. In detail, we show that the general models have better robust optimal values than the known models, and we prove bounds on these gaps.
Sequential Models in the Synthetic Data Vault ; The goal of this paper is to describe a system for generating synthetic sequential data within the Synthetic Data Vault. To achieve this, we present the Sequential model currently in SDV, an end-to-end framework that builds a generative model for multi-sequence, real-world data. This includes a novel neural-network-based machine learning model, the conditional probabilistic auto-regressive (CPAR) model. The overall system and the model are available in the open-source Synthetic Data Vault (SDV) library at https://github.com/sdv-dev/SDV, along with a variety of other models for different synthetic data needs. After building the Sequential SDV, we used it to generate synthetic data and compared its quality against an existing, non-sequential generative adversarial network based model called CTGAN. To compare the sequential synthetic data against its real counterpart, we invented a new metric called Multi-Sequence Aggregate Similarity (MSAS). We used it to conclude that our Sequential SDV model learns higher-level patterns than non-sequential models without any trade-offs in synthetic data quality.
Hyper-Representations as Generative Models: Sampling Unseen Neural Network Weights ; Learning representations of neural network weights given a model zoo is an emerging and challenging area, with many potential applications from model inspection to neural architecture search or knowledge distillation. Recently, an autoencoder trained on a model zoo was able to learn a hyper-representation, which captures intrinsic and extrinsic properties of the models in the zoo. In this work, we extend hyper-representations for generative use to sample new model weights. We propose layer-wise loss normalization, which we demonstrate is key to generating high-performing models, and several sampling methods based on the topology of hyper-representations. The models generated using our methods are diverse, performant, and capable of outperforming strong baselines, as evaluated on several downstream tasks: initialization, ensemble sampling, and transfer learning. Our results indicate the potential of knowledge aggregation from model zoos to new models via hyper-representations, thereby paving the avenue for novel research directions.
Prediction can be safely used as a proxy for explanation in causally consistent Bayesian generalized linear models ; Bayesian modeling provides a principled approach to quantifying uncertainty in model parameters and model structure, and has seen a surge of applications in recent years. Within the context of a Bayesian workflow, we are concerned with model selection for the purpose of finding models that best explain the data, that is, help us understand the underlying data generating process. Since we rarely have access to the true process, all we are left with during real-world analyses is incomplete causal knowledge from sources outside of the current data, and model predictions of said data. This leads to the important question of when the use of prediction as a proxy for explanation for the purpose of model selection is valid. We approach this question by means of large-scale simulations of Bayesian generalized linear models, in which we investigate various causal and statistical misspecifications. Our results indicate that the use of prediction as a proxy for explanation is valid and safe only when the models under consideration are sufficiently consistent with the underlying causal structure of the true data generating process.
Towards Robust Recommender Systems via Triple Cooperative Defense ; Recommender systems are often susceptible to well-crafted fake profiles, leading to biased recommendations. The wide application of recommender systems makes studying defenses against such attacks necessary. Among existing defense methods, data-processing-based methods inevitably exclude normal samples, while model-based methods struggle to enjoy both generalization and robustness. Considering the above limitations, we suggest integrating data processing and robust modeling, and propose a general framework, Triple Cooperative Defense (TCD), in which three models cooperate to improve robustness through co-training. Specifically, in each round of training, we sequentially use the high-confidence prediction ratings (consistent ratings) of any two models as auxiliary training data for the remaining model, and the three models cooperatively improve recommendation robustness. Notably, TCD adds pseudo-label data instead of deleting abnormal data, which avoids cleaning away normal data, and the cooperative training of the three models is also beneficial to model generalization. Through extensive experiments with five poisoning attacks on three real-world datasets, the results show that the robustness improvement of TCD significantly outperforms baselines. It is worth mentioning that TCD is also beneficial for model generalization.
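A schematic of one TCD co-training round as the abstract describes it; the recommender interface (`fit`, `predict`) and the agreement threshold `tau` are our own illustrative placeholders, not the paper's exact formulation.

```python
def tcd_round(models, ratings, unlabeled_pairs, tau=0.1):
    """models: three recommender models; ratings: observed (user, item, r) triples;
    unlabeled_pairs: (user, item) pairs without observed ratings."""
    for k, target in enumerate(models):
        peers = [m for i, m in enumerate(models) if i != k]
        pseudo = []
        for user, item in unlabeled_pairs:
            p1 = peers[0].predict(user, item)
            p2 = peers[1].predict(user, item)
            if abs(p1 - p2) < tau:                          # the two peers agree:
                pseudo.append((user, item, (p1 + p2) / 2))  # high-confidence label
        # augment rather than delete: observed plus pseudo-labeled ratings
        target.fit(ratings + pseudo)
    return models
```

The key design choice visible here is that suspicious data is never removed; robustness comes from drowning poisoned signals in consensus pseudo-labels, which is also why the method helps generalization.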
Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks ; Foundation models (or pre-trained models) have substantially improved the performance of various language, vision, and vision-language understanding tasks. However, existing foundation models can only perform best on one type of task, namely language, vision, or vision-language. It is still an open question whether it is possible to construct a foundation model that performs best on all understanding tasks, which we call a general foundation model. In this paper, we propose a new general foundation model, X-FM (the X-Foundation Model). X-FM has one language encoder, one vision encoder, and one fusion encoder, as well as a new training method. The training method includes two new techniques for learning X-FM from text, image, and image-text pair data. One is to stop gradients from the vision-language training when learning the language encoder. The other is to leverage the vision-language training to guide the learning of the vision encoder. Extensive experiments on benchmark datasets show that X-FM can significantly outperform existing general foundation models and perform better than or comparably to existing foundation models specialized for language, vision, or vision-language understanding.
The Multicluster Two-Wave Fading Model ; We introduce and characterize a natural generalization of the Two-Wave with Diffuse Power (TWDP) fading model, by allowing the incident waves to arrive in different clusters. The newly proposed model, referred to as the Multicluster Two-Wave (MTW) fading model, generalizes both the TWDP and the $\kappa$-$\mu$ models under a common umbrella. The special case in which the model parameters reach extreme values is also analyzed, aimed at modeling the harsh fading conditions reported in experimental measurements obtained in enclosed environments. The chief probability functions of both the MTW and the MTW Extreme fading models are obtained, including the probability density function, the cumulative distribution function and the generalized moment-generating function. A number of applications for these models are exemplified, including outage probability in interference-limited scenarios, energy detection, and composite fading modeling.
An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization ; Recently, diffusion models have achieved remarkable success in generation tasks, including image and audio generation. However, like other generative models, diffusion models are prone to privacy issues. In this paper, we propose an efficient query-based membership inference attack (MIA), namely the Proximal Initialization Attack (PIA), which utilizes the ground-truth trajectory obtained by $\epsilon$ initialized at $t=0$ and the predicted point to infer memberships. Experimental results indicate that the proposed method can achieve competitive performance with only two queries, on both discrete-time and continuous-time diffusion models. Moreover, previous works on the privacy of diffusion models have focused on vision tasks without considering audio tasks. Therefore, we also explore the robustness of diffusion models to MIA in the text-to-speech (TTS) task, which is an audio generation task. To the best of our knowledge, this work is the first to study the robustness of diffusion models to MIA in the TTS task. Experimental results indicate that models with mel-spectrogram (image-like) output are vulnerable to MIA, while models with audio output are relatively robust to MIA. Code is available at https://github.com/kong13661/PIA.
Privacy Distillation: Reducing Re-identification Risk of Multimodal Diffusion Models ; Knowledge distillation in neural networks refers to compressing a large model or dataset into a smaller version of itself. We introduce Privacy Distillation, a framework that allows a text-to-image generative model to teach another model without exposing it to identifiable data. Here, we are interested in the privacy issue faced by a data provider who wishes to share their data via a multimodal generative model. A question that immediately arises is "How can a data provider ensure that the generative model is not leaking identifiable information about a patient?". Our solution consists of (1) training a first diffusion model on real data, (2) generating a synthetic dataset using this model and filtering it to exclude images with a re-identifiability risk, and (3) training a second diffusion model on the filtered synthetic data only. We showcase that datasets sampled from models trained with Privacy Distillation can effectively reduce re-identification risk whilst maintaining downstream performance.
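The three-step pipeline is simple enough to sketch end to end. In this sketch, `train_diffusion`, `sample`, and `reid_risk` are hypothetical placeholders (the paper's actual re-identification filter is a learned model), and the risk threshold is an illustrative assumption.

```python
def privacy_distillation(real_images, real_reports, n_synthetic, risk_threshold=0.5):
    # (1) teacher: a diffusion model fit on the real multimodal data
    teacher = train_diffusion(real_images, real_reports)        # placeholder

    # (2) sample synthetic image/report pairs, then drop any pair whose image
    # carries a re-identifiability risk with respect to the real patients
    synthetic = [teacher.sample() for _ in range(n_synthetic)]
    filtered = [(img, rep) for img, rep in synthetic
                if reid_risk(img, real_images) < risk_threshold]  # placeholder filter

    # (3) student: trained only on filtered synthetic data, so it is never
    # exposed to identifiable real images
    student = train_diffusion(*zip(*filtered))
    return student  # shareable in place of the real data
```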
Extracting Reward Functions from Diffusion Models ; Diffusion models have achieved remarkable results in image generation, and have similarly been used to learn high-performing policies in sequential decision-making tasks. Decision-making diffusion models can be trained on lower-quality data and then steered with a reward function to generate near-optimal trajectories. We consider the problem of extracting a reward function by comparing a decision-making diffusion model that models low-reward behavior with one that models high-reward behavior, a setting related to inverse reinforcement learning. We first define the notion of a relative reward function of two diffusion models and show conditions under which it exists and is unique. We then devise a practical learning algorithm for extracting it by aligning the gradients of a reward function, parametrized by a neural network, to the difference in outputs of both diffusion models. Our method finds correct reward functions in navigation environments, and we demonstrate that steering the base model with the learned reward functions results in significantly increased performance in standard locomotion benchmarks. Finally, we demonstrate that our approach generalizes beyond sequential decision-making by learning a reward-like function from two large-scale image generation diffusion models. The extracted reward function successfully assigns lower rewards to harmful images.
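Hedging on the paper's exact notation and omitting time-dependent scaling constants, the gradient-alignment objective the abstract describes can be written with $\epsilon_{\mathrm{low}}$ and $\epsilon_{\mathrm{high}}$ the two models' noise predictions and $r_\phi$ the learned reward:

$$\mathcal{L}(\phi) = \mathbb{E}_{x_t,\, t}\left\| \nabla_{x_t} r_\phi(x_t) - \big(\epsilon_{\mathrm{low}}(x_t, t) - \epsilon_{\mathrm{high}}(x_t, t)\big) \right\|^2 .$$

The intuition is that each noise prediction approximates a (scaled) negative score, so the difference of outputs points from low-reward regions toward high-reward ones, and a reward whose gradient matches that difference ranks samples accordingly.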
From BERT to GPT-3 Codex: Harnessing the Potential of Very Large Language Models for Data Management ; Large language models have recently advanced the state of the art on many natural language processing benchmarks. The newest generation of models can be applied to a variety of tasks with little to no specialized training. This technology creates various opportunities for applications in the context of data management. The tutorial will introduce participants to basic background on language models, discuss different methods to use language models, and give an overview and short demonstration of available libraries and APIs. Models for generating natural language will be considered, as well as models, such as GPT-3 Codex, which complete program code or generate code from natural language instructions. Finally, the tutorial will discuss recent research in the database community that exploits language models in the context of traditional database systems or proposes novel system architectures that are based on them. The tutorial is targeted at database researchers. No prior background on language models is required. The goal of the tutorial is to introduce database researchers to the latest generation of language models and to their use cases in the domain of data management.
A prior regularized full waveform inversion using generative diffusion models ; Full waveform inversion (FWI) has the potential to provide high-resolution subsurface model estimations. However, due to limitations in observation, e.g., regional noise, limited shots or receivers, and band-limited data, it is hard to obtain the desired high-resolution model with FWI. To address this challenge, we propose a new paradigm for FWI regularized by generative diffusion models. Specifically, we pre-train a diffusion model in a fully unsupervised manner on a prior velocity model distribution that represents our expectations of the subsurface, and then adapt it to the seismic observations by incorporating the FWI into the sampling process of the generative diffusion model. What makes diffusion models uniquely appropriate for such an implementation is that the generative process retains the form and dimensions of the velocity model. Numerical examples demonstrate that our method can outperform conventional FWI with only negligible additional computational cost. Even in cases of very sparse observations or observations with strong noise, the proposed method can still reconstruct a high-quality subsurface model. Thus, we can incorporate our prior expectations of the solutions in an efficient manner. We further test this approach on field data, which demonstrates the effectiveness of the proposed method.
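A conceptual sketch of folding FWI into the sampling loop, reflecting our reading of the abstract rather than the paper's exact scheme; `denoise_step` and `fwi_misfit_gradient` are hypothetical placeholders for a pre-trained diffusion model's reverse step and a conventional adjoint-state FWI gradient.

```python
import numpy as np

def diffusion_regularized_fwi(diffusion, observed_data, shape, n_steps=1000, lam=1e-3):
    """Alternate reverse-diffusion (prior) steps with data-misfit (FWI) steps."""
    m = np.random.standard_normal(shape)           # start from pure noise
    for t in reversed(range(n_steps)):
        # prior step: one reverse-diffusion update toward plausible velocity models
        m = diffusion.denoise_step(m, t)           # placeholder reverse step
        # data step: gradient of the seismic misfit ||F(m) - d||^2 w.r.t. m
        m = m - lam * fwi_misfit_gradient(m, observed_data)   # placeholder adjoint
    return m                                       # same shape as a velocity model
```

This works only because, as the abstract notes, the diffusion state at every step has the same form and dimensions as the velocity model, so the FWI gradient can be applied to it directly.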
Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning ; The recent surge of generative AI has been fueled by the generative power of diffusion probabilistic models and the scalable capabilities of large language models. Despite their potential, it remains elusive whether diffusion language models can solve general language tasks comparably to their autoregressive counterparts. This paper demonstrates that scaling diffusion models w.r.t. data, sizes, and tasks can effectively make them strong language learners. We build competent diffusion language models at scale by first acquiring knowledge from massive data via masked language modeling pretraining, thanks to the intrinsic connection between the two objectives. We then reprogram pretrained masked language models into diffusion language models via diffusive adaptation, wherein task-specific finetuning and instruction finetuning are explored to unlock their versatility in solving general language tasks. Experiments show that scaling diffusion language models consistently improves performance across downstream language tasks. We further discover that instruction finetuning can elicit zero-shot and few-shot in-context learning abilities that help tackle many unseen tasks by following natural language instructions, and show promise in advanced and challenging abilities such as reasoning.
Use of the delta N formalism: Difficulties in generating large local-type non-Gaussianity during inflation ; We discuss the generation of non-Gaussianity in the density perturbation through the super-horizon evolution during inflation by using the so-called $\delta N$ formalism. We first provide a general formula for the nonlinearity parameter generated during inflation. We find that it is proportional to the slow-roll parameters, multiplied by model-dependent factors that may enhance the non-Gaussianity to observable levels. Then we discuss three typical examples to illustrate how difficult it is to generate sizable non-Gaussianity through the super-horizon evolution. The first example is the double inflation model, which shows that temporal violation of the slow-roll conditions is not enough for the generation of non-Gaussianity. The second example is the ordinary hybrid inflation model, which illustrates the importance of taking into account perturbations on small scales. Finally, we discuss the Kadota-Stewart model. This model gives an example in which we have to choose rather unnatural initial conditions even if large non-Gaussianity can be generated.
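For context, the standard $\delta N$ expansion on which such formulas are built expresses the curvature perturbation $\zeta$ in terms of the number of e-folds $N(\phi^a)$ as a function of the field values at horizon crossing:

$$\zeta = \delta N = \sum_a N_a\,\delta\phi^a + \frac{1}{2}\sum_{a,b} N_{ab}\,\delta\phi^a \delta\phi^b + \cdots, \qquad \frac{6}{5}\, f_{\mathrm{NL}} = \frac{\sum_{a,b} N_a N_b N_{ab}}{\big(\sum_c N_c^2\big)^2},$$

where $N_a = \partial N/\partial\phi^a$ and $N_{ab} = \partial^2 N/\partial\phi^a\partial\phi^b$. The second relation, due to Lyth and Rodríguez, gives the local-type nonlinearity parameter whose slow-roll suppression the abstract refers to.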
Topic Aware Neural Response Generation ; We consider incorporating topic information into the sequence-to-sequence framework to generate informative and interesting responses for chatbots. To this end, we propose a topic aware sequence-to-sequence (TA-Seq2Seq) model. The model utilizes topics to simulate the prior knowledge of humans that guides them to form informative and interesting responses in conversation, and leverages the topic information in generation by a joint attention mechanism and a biased generation probability. The joint attention mechanism summarizes the hidden vectors of an input message as context vectors by message attention, synthesizes topic vectors by topic attention from the topic words of the message obtained from a pretrained LDA model, and lets these vectors jointly affect the generation of words in decoding. To increase the possibility of topic words appearing in responses, the model modifies the generation probability of topic words by adding an extra probability item to bias the overall distribution. Empirical study on both automatic evaluation metrics and human annotations shows that TA-Seq2Seq can generate more informative and interesting responses, and significantly outperforms the state-of-the-art response generation models.
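The biased generation probability can be pictured as a simple additive bonus on topic-word logits before the softmax. A minimal sketch, assuming illustrative topic-word indices and bias weight (the paper's exact parameterization may differ):

```python
import torch

def biased_distribution(logits: torch.Tensor, topic_ids: list, bias: float = 2.0):
    biased = logits.clone()
    biased[..., topic_ids] += bias          # raise the probability of topic words
    return torch.softmax(biased, dim=-1)

vocab_logits = torch.randn(32000)
probs = biased_distribution(vocab_logits, topic_ids=[11, 42, 105])
```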
Semi-Supervised QA with Generative Domain-Adaptive Nets ; We study the problem of semi-supervised question answering: utilizing unlabeled text to boost the performance of question answering models. We propose a novel training framework, the Generative Domain-Adaptive Nets. In this framework, we train a generative model to generate questions based on the unlabeled text, and combine model-generated questions with human-generated questions for training question answering models. We develop novel domain adaptation algorithms, based on reinforcement learning, to alleviate the discrepancy between the model-generated data distribution and the human-generated data distribution. Experiments show that our proposed framework obtains substantial improvement from unlabeled text.
A Unified Query-based Generative Model for Question Generation and Question Answering ; We propose a query-based generative model for solving both tasks of question generation (QG) and question answering (QA). The model follows the classic encoder-decoder framework. The encoder takes a passage and a query as input, then performs query understanding by matching the query with the passage from multiple perspectives. The decoder is an attention-based Long Short-Term Memory (LSTM) model with copy and coverage mechanisms. In the QG task, a question is generated from the system given the passage and the target answer, whereas in the QA task, the answer is generated given the question and the passage. During the training stage, we leverage a policy-gradient reinforcement learning algorithm to overcome exposure bias, a major problem resulting from sequence learning with cross-entropy loss. For the QG task, our experiments show higher performance than the state-of-the-art results. When used as additional training data, the automatically generated questions even improve the performance of a strong extractive QA system. In addition, our model shows better performance than the state-of-the-art baselines on the generative QA task.
Infrafiltration Theorem and Some Inductive Sequence of Models of Generalized Second-Order Dedekind Theory of Real Numbers with Exponentially Increasing Powers ; The paper is devoted to the construction of a closed inductive sequence of models of the generalized second-order Dedekind theory of real numbers with exponentially increasing powers. These models are not isomorphic, whereas all models of the standard second-order Dedekind theory are. The main idea in passing to generalized models is to consider, instead of superstructures with a single common set-theoretical equality and a single common set-theoretical belonging, superstructures with several generalized equalities and several generalized belongings for the first and second orders. The basic tools for the presented construction are the infraproduct of a collection of mathematical systems, which differs from the factorized Łoś ultraproduct, and the corresponding generalized infrafiltration theorem. As an auxiliary corollary we obtain the generalized compactness theorem for the generalized second-order language.
VFlow: More Expressive Generative Flows with Variational Data Augmentation ; Generative flows are promising tractable models for density modeling that define probabilistic distributions with invertible transformations. However, tractability imposes architectural constraints on generative flows, making them less expressive than other types of generative models. In this work, we study a previously overlooked constraint: all the intermediate representations must have the same dimensionality as the original data due to invertibility, limiting the width of the network. We tackle this constraint by augmenting the data with some extra dimensions and jointly learning a generative flow for the augmented data as well as the distribution of the augmented dimensions under a variational inference framework. Our approach, VFlow, is a generalization of generative flows and therefore always performs better. Combined with existing generative flows, VFlow achieves a new state-of-the-art 2.98 bits per dimension on the CIFAR-10 dataset and is more compact than previous models that reach similar modeling quality.
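The variational augmentation idea admits a compact generic sketch: draw augmenting dimensions from q(z|x), feed the widened vector to the flow, and train on the standard variational lower bound. All densities below are stand-in Gaussians with illustrative names, not VFlow's actual networks:

```python
import torch
from torch.distributions import Normal

def augmented_bound(x, flow_log_prob, q_sample, q_log_prob):
    z = q_sample(x)                            # augmenting dimensions z ~ q(z|x)
    joint = torch.cat([x, z], dim=-1)          # widened input for the flow
    # log p(x) >= E_q [ log p(x, z) - log q(z | x) ]
    return flow_log_prob(joint) - q_log_prob(z, x)

std_normal = Normal(0.0, 1.0)
x = torch.randn(16, 4)
bound = augmented_bound(
    x,
    flow_log_prob=lambda v: std_normal.log_prob(v).sum(-1),
    q_sample=lambda x: torch.randn(x.shape[0], 2),
    q_log_prob=lambda z, x: std_normal.log_prob(z).sum(-1),
)
```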
A Deep Generative Model for Fragment-Based Molecule Generation ; Molecule generation is a challenging open problem in cheminformatics. Currently, deep generative approaches addressing the challenge belong to two broad categories, differing in how molecules are represented. One approach encodes molecular graphs as strings of text and learns their corresponding character-based language model. Another, more expressive, approach operates directly on the molecular graph. In this work, we address two limitations of the former: the generation of invalid and duplicate molecules. To improve validity rates, we develop a language model for small molecular substructures called fragments, loosely inspired by the well-known paradigm of Fragment-Based Drug Design. In other words, we generate molecules fragment by fragment, instead of atom by atom. To improve uniqueness rates, we present a frequency-based masking strategy that helps generate molecules with infrequent fragments. We show experimentally that our model largely outperforms other language-model-based competitors, reaching state-of-the-art performance typical of graph-based approaches. Moreover, generated molecules display molecular properties similar to those in the training sample, even in the absence of explicit task-specific supervision.
Dual Generator Generative Adversarial Networks for Multi-Domain Image-to-Image Translation ; State-of-the-art methods for image-to-image translation with Generative Adversarial Networks (GANs) can learn a mapping from one domain to another using unpaired image data. However, these methods require training one specific model for every pair of image domains, which limits scalability in dealing with more than two image domains. In addition, the training stage of these methods has the common problem of mode collapse, which degrades the quality of the generated images. To tackle these issues, we propose a Dual Generator Generative Adversarial Network (G2GAN), a robust and scalable approach that allows performing unpaired image-to-image translation for multiple domains using only dual generators within a single model. Moreover, we explore different optimization losses for better training of G2GAN, and thus make unpaired image-to-image translation more consistent and more stable. Extensive experiments on six publicly available datasets with different scenarios, i.e., architectural buildings, seasons, landscapes and human faces, demonstrate that the proposed G2GAN achieves superior model capacity and better generation performance compared with existing image-to-image translation GAN models.
Joint Multimodal Learning with Deep Generative Models ; We investigate deep generative models that can exchange multiple modalities bidirectionally, e.g., generating images from corresponding texts and vice versa. Recently, some studies have handled multiple modalities on deep generative models, such as variational autoencoders (VAEs). However, these models typically assume that modalities are forced to have a conditioned relation, i.e., we can only generate modalities in one direction. To achieve our objective, we should extract a joint representation that captures high-level concepts among all modalities and through which we can exchange them bidirectionally. As described herein, we propose a joint multimodal variational autoencoder (JMVAE), in which all modalities are independently conditioned on the joint representation. In other words, it models a joint distribution of modalities. Furthermore, to be able to generate missing modalities from the remaining modalities properly, we develop an additional method, JMVAE-kl, that is trained by reducing the divergence between JMVAE's encoder and prepared networks of respective modalities. Our experiments show that our proposed method can obtain appropriate joint representations from multiple modalities and that it can generate and reconstruct them more properly than conventional VAEs. We further demonstrate that JMVAE can generate multiple modalities bidirectionally.
Procedural Content Generation using Behavior Trees (PCGBT) ; Behavior trees (BTs) are a popular method for modeling NPC and enemy AI behavior and have been widely used in commercial games. In this work, rather than use BTs to model game-playing agents, we use them for modeling game design agents, defining behaviors as content generation tasks rather than in-game actions. Similar to how traditional BTs enable modeling behaviors in a modular and dynamic manner, BTs for PCG enable simple subtrees for generating parts of levels to be combined modularly to form complex trees for generating whole levels, as well as generators that can dynamically vary the generated content. We refer to this approach as Procedural Content Generation using Behavior Trees, or PCGBT, and demonstrate it by using BTs to model generators for Super Mario Bros., Mega Man and Metroid levels as well as dungeon layouts, and discuss several ways in which this paradigm could be applied and extended in the future.
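A toy illustration of the PCGBT idea, with hypothetical node types and a Mario-like chunk vocabulary (not the paper's actual trees): sequence and selector nodes compose small emitters into a level generator.

```python
import random

class Sequence:
    """Run children in order; fail on the first failure."""
    def __init__(self, *children): self.children = children
    def run(self, level):
        return all(c.run(level) for c in self.children)

class Selector:
    """Run children in order; succeed on the first success."""
    def __init__(self, *children): self.children = children
    def run(self, level):
        return any(c.run(level) for c in self.children)

class Emit:
    """Leaf whose 'action' appends a level chunk, optionally stochastically."""
    def __init__(self, chunk, p=1.0): self.chunk, self.p = chunk, p
    def run(self, level):
        if random.random() <= self.p:
            level.append(self.chunk)
            return True
        return False

level = []
tree = Sequence(Emit("flat"), Selector(Emit("gap", p=0.5), Emit("pipe")), Emit("flag"))
tree.run(level)
print(level)   # e.g. ['flat', 'gap', 'flag'] or ['flat', 'pipe', 'flag']
```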
A Generalized Nonlocal Calculus with Application to the Peridynamics Model for Solid Mechanics ; A nonlocal vector calculus was introduced in [2] that has proved useful for the analysis of the peridynamics model of nonlocal mechanics and of nonlocal diffusion models. A generalization is developed that provides a more general setting for the nonlocal vector calculus that is independent of particular nonlocal models. It is shown that general nonlocal calculus operators are integral operators with specific integral kernels. General nonlocal calculus properties are developed, including a nonlocal integration-by-parts formula and Green's identities. The nonlocal vector calculus introduced in [2] is shown to be recoverable from the general formulation as a special example. This special nonlocal vector calculus is used to reformulate the peridynamics equation of motion in terms of the nonlocal gradient operator and its adjoint. A new example of nonlocal vector calculus operators is introduced, which shows the potential use of the general formulation for general nonlocal models.
Coverless information hiding based on Generative Model ; A new coverless image information hiding method based on a generative model is proposed. We feed the secret image to the generative model database and generate a meaning-normal and independent image that is different from the secret image; the generated image is then transmitted to the receiver and fed to the generative model database to generate another image visually the same as the secret image. Thus we only need to transmit a meaning-normal image which is not related to the secret image, and we can achieve the same effect as the transmission of the secret image. This is the first coverless image information hiding method proposed on the basis of a generative model. Compared with traditional image steganography, the transmitted image does not embed any information of the secret image in this method and can therefore effectively resist steganalysis tools. Experimental results show that our method has high capacity, safety and reliability.
One Size Does Not Fit All: Generating and Evaluating Variable Number of Keyphrases ; Different texts naturally correspond to different numbers of keyphrases. This desideratum is largely missing from existing neural keyphrase generation models. In this study, we address this problem from both modeling and evaluation perspectives. We first propose a recurrent generative model that generates multiple keyphrases as delimiter-separated sequences. Generation diversity is further enhanced with two novel techniques that manipulate decoder hidden states. In contrast to previous approaches, our model is capable of generating diverse keyphrases and controlling the number of outputs. We further propose two evaluation metrics tailored towards variable-number generation. We also introduce a new dataset, StackEx, that expands beyond the only existing genre (i.e., academic writing) in keyphrase generation tasks. With both previous and new evaluation metrics, our model outperforms strong baselines on all datasets.
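Recovering a variable number of keyphrases from a delimiter-separated decode reduces to simple parsing. A minimal sketch with an assumed delimiter and end-of-sequence token:

```python
def parse_keyphrases(decoded: str, delimiter: str = ";", eos: str = "<eos>"):
    decoded = decoded.split(eos)[0]                  # stop at end-of-sequence
    phrases = [p.strip() for p in decoded.split(delimiter)]
    return [p for p in phrases if p]                 # the model controls the count

print(parse_keyphrases("neural networks ; keyphrase generation ; diversity <eos> pad"))
# ['neural networks', 'keyphrase generation', 'diversity']
```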
Generalization in Generation: A closer look at Exposure Bias ; Exposure bias refers to the train-test discrepancy that seemingly arises when an autoregressive generative model uses only ground-truth contexts at training time but generated ones at test time. We separate the contributions of the model and the learning framework to clarify the debate on consequences and review proposed countermeasures. In this light, we argue that generalization is the underlying property to address and propose unconditional generation as its fundamental benchmark. Finally, we combine latent variable modeling with a recent formulation of exploration in reinforcement learning to obtain a rigorous handling of true and generated contexts. Results on language modeling and variational sentence autoencoding confirm the model's generalization capability.
A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation ; Story generation, namely generating a reasonable story from a leading context, is an important but challenging task. In spite of the success in modeling fluency and local coherence, existing neural language generation models (e.g., GPT-2) still suffer from repetition, logic conflicts, and lack of long-range coherence in generated stories. We conjecture that this is because of the difficulty of associating relevant commonsense knowledge, understanding the causal relationships, and planning entities and events with proper temporal order. In this paper, we devise a knowledge-enhanced pretraining model for commonsense story generation. We propose to utilize commonsense knowledge from external knowledge bases to generate reasonable stories. To further capture the causal and temporal dependencies between the sentences in a reasonable story, we employ multitask learning, which combines a discriminative objective to distinguish true and fake stories during finetuning. Automatic and manual evaluation shows that our model can generate more reasonable stories than state-of-the-art baselines, particularly in terms of logic and global coherence.
Paraphrase Augmented Task-Oriented Dialog Generation ; Neural generative models have achieved promising performance on dialog generation tasks if given a huge data set. However, the lack of high-quality dialog data and the expensive data annotation process greatly limit their application in real-world settings. We propose a paraphrase augmented response generation (PARG) framework that jointly trains a paraphrase model and a response generation model to improve the dialog generation performance. We also design a method to automatically construct a paraphrase training data set based on dialog state and dialog act labels. PARG is applicable to various dialog generation models, such as TSCP (Lei et al., 2018) and DAMD (Zhang et al., 2019). Experimental results show that the proposed framework further improves these state-of-the-art dialog models on CamRest676 and MultiWOZ. PARG also significantly outperforms other data augmentation methods in dialog generation tasks, especially under low-resource settings.
Improving Truthfulness of Headline Generation ; Most studies on abstractive summarization report ROUGE scores between system and reference summaries. However, we have a concern about the truthfulness of generated summaries: whether all facts of a generated summary are mentioned in the source text. This paper explores improving the truthfulness of headline generation on two popular datasets. Analyzing headlines generated by a state-of-the-art encoder-decoder model, we show that the model sometimes generates untruthful headlines. We conjecture that one of the reasons lies in untruthful supervision data used for training the model. In order to quantify the truthfulness of article-headline pairs, we consider the textual entailment of whether an article entails its headline. After confirming quite a few untruthful instances in the datasets, this study hypothesizes that removing untruthful instances from the supervision data may remedy the problem of the untruthful behaviors of the model. Building a binary classifier that predicts an entailment relation between an article and its headline, we filter out untruthful instances from the supervision data. Experimental results demonstrate that the headline generation model trained on filtered supervision data shows no clear difference in ROUGE scores but remarkable improvements in automatic and manual evaluations of the generated headlines.
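The filtering step reduces to thresholding an entailment classifier over article-headline pairs. A minimal sketch, with `entail_prob` standing in for the trained binary classifier described above:

```python
def filter_supervision(pairs, entail_prob, threshold: float = 0.5):
    # Keep only pairs where the article is judged to entail the headline.
    return [(a, h) for a, h in pairs if entail_prob(a, h) >= threshold]

# Toy usage with a dummy scorer:
dummy = lambda article, headline: 1.0 if headline in article else 0.0
data = [("storms hit the coast", "storms hit the coast"), ("markets rose", "markets fell")]
print(filter_supervision(data, dummy))   # keeps only the entailed pair
```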
PathGAN: Local Path Planning with Attentive Generative Adversarial Networks ; To achieve autonomous driving without high-definition maps, we present a model capable of generating multiple plausible paths from egocentric images for autonomous vehicles. Our generative model comprises two neural networks: the feature extraction network (FEN) and the path generation network (PGN). The FEN extracts meaningful features from an egocentric image, whereas the PGN generates multiple paths from the features, given a driving intention and speed. To ensure that the generated paths are plausible and consistent with the intention, we introduce an attentive discriminator and train it with the PGN under the generative adversarial networks framework. We also devise an interaction model between the positions in the paths and the intentions hidden in the positions, and design a novel PGN architecture that reflects the interaction model, resulting in improved accuracy and diversity of the generated paths. Finally, we introduce ETRIDriving, a dataset for autonomous driving in which the recorded sensor data are labeled with discrete high-level driving actions, and demonstrate the state-of-the-art performance of the proposed model on ETRIDriving in terms of accuracy and diversity.
Better Distractions: Transformer-based Distractor Generation and Multiple Choice Question Filtering ; For the field of education, being able to generate semantically correct and educationally relevant multiple choice questions (MCQs) could have a large impact. While question generation itself is an active research topic, generating distractors (the incorrect multiple choice options) receives much less attention. A missed opportunity, since there is still a lot of room for improvement in this area. In this work, we train a GPT-2 language model to generate three distractors for a given question and text context, using the RACE dataset. Next, we train a BERT language model to answer MCQs, and use this model as a filter, to select only questions that can be answered and therefore presumably make sense. To evaluate our work, we start by using text generation metrics, which show that our model outperforms earlier work on distractor generation (DG) and achieves state-of-the-art performance. Also, by calculating the question answering ability, we show that larger base models lead to better performance. Moreover, we conducted a human evaluation study, which confirmed the quality of the generated questions, but showed no statistically significant effect of the QA filter.
Using a Bidirectional LSTM Model with Attention Mechanism trained on MIDI Data for Generating Unique Music ; Generating music is an interesting and challenging problem in the field of machine learning. Mimicking human creativity has been popular in recent years, especially in the fields of computer vision and image processing. With the advent of GANs, it is possible to generate new, similar images based on trained data. But this cannot be done similarly for music, as music has an extra temporal dimension. So it is necessary to understand how music is represented in digital form. When building models that perform this generative task, the learning and generation part is done in some high-level representation such as MIDI (Musical Instrument Digital Interface) or scores. This paper proposes a bidirectional LSTM (Long Short-Term Memory) model with an attention mechanism capable of generating similar types of music based on MIDI data. The music generated by the model follows the theme/style of the music the model is trained on. Also, due to the nature of MIDI, the tempo, instrument, and other parameters can be defined and changed post-generation.
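A minimal sketch of such an architecture, assuming MIDI events are tokenized into a 128-symbol vocabulary and using additive attention over bidirectional LSTM states; the sizes and layer choices are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, vocab=128, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, vocab)

    def forward(self, notes):                 # notes: (batch, seq) token ids
        h, _ = self.lstm(self.embed(notes))   # (batch, seq, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)
        context = (w * h).sum(dim=1)          # attention-weighted summary
        return self.out(context)              # logits for the next note

model = BiLSTMAttention()
logits = model(torch.randint(0, 128, (4, 32)))   # 4 sequences of 32 notes
```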
Long Text Generation by Modeling Sentence-Level and Discourse-Level Coherence ; Generating long and coherent text is an important but challenging task, particularly for open-ended language generation tasks such as story generation. Despite the success in modeling intra-sentence coherence, existing generation models (e.g., BART) still struggle to maintain a coherent event sequence throughout the generated text. We conjecture that this is because of the difficulty for the decoder to capture the high-level semantics and discourse structures in the context beyond token-level co-occurrence. In this paper, we propose a long text generation model, which can represent the prefix sentences at the sentence level and discourse level in the decoding process. To this end, we propose two pretraining objectives to learn the representations by predicting inter-sentence semantic similarity and distinguishing between normal and shuffled sentence orders. Extensive experiments show that our model can generate more coherent texts than state-of-the-art baselines.
ConRPG: Paraphrase Generation using Contexts as Regularizer ; A long-standing issue with paraphrase generation is how to obtain reliable supervision signals. In this paper, we propose an unsupervised paradigm for paraphrase generation based on the assumption that the probabilities of generating two sentences with the same meaning given the same context should be the same. Inspired by this fundamental idea, we propose a pipelined system which consists of paraphrase candidate generation based on contextual language models, candidate filtering using scoring functions, and paraphrase model training based on the selected candidates. The proposed paradigm offers merits over existing paraphrase generation methods: (1) using the context regularizer on meanings, the model is able to generate massive amounts of high-quality paraphrase pairs; and (2) using human-interpretable scoring functions to select paraphrase pairs from candidates, the proposed framework provides a channel for developers to intervene in the data generation process, leading to a more controllable model. Experimental results across different tasks and datasets demonstrate the effectiveness of the proposed model in both supervised and unsupervised setups.
Semantic features of object concepts generated with GPT-3 ; Semantic features have been playing a central role in investigating the nature of our conceptual representations. Yet the enormous time and effort required to empirically sample and norm features from human raters has restricted their use to a limited set of manually curated concepts. Given recent promising developments with transformer-based language models, here we asked whether it was possible to use such models to automatically generate meaningful lists of properties for arbitrary object concepts, and whether these models would produce features similar to those found in humans. To this end, we probed a GPT-3 model to generate semantic features for 1,854 objects and compared the automatically generated features to existing human feature norms. GPT-3 generated many more features than humans, yet showed a similar distribution in the types of generated features. Generated feature norms rivaled human norms in predicting similarity, relatedness, and category membership, while variance partitioning demonstrated that these predictions were driven by similar variance in humans and GPT-3. Together, these results highlight the potential of large language models to capture important facets of human knowledge and yield a new approach for automatically generating interpretable feature sets, thus drastically expanding the potential use of semantic features in psychological and linguistic studies.
On Analyzing Generative and Denoising Capabilities of Diffusion-based Deep Generative Models ; Diffusion-based Deep Generative Models (DDGMs) offer state-of-the-art performance in generative modeling. Their main strength comes from their unique setup in which a model (the backward diffusion process) is trained to reverse the forward diffusion process, which gradually adds noise to the input signal. Although DDGMs are well studied, it is still unclear how the small amount of noise is transformed during the backward diffusion process. Here, we focus on analyzing this problem to gain more insight into the behavior of DDGMs and their denoising and generative capabilities. We observe a fluid transition point that changes the functionality of the backward diffusion process from generating a corrupted image from noise to denoising the corrupted image to the final sample. Based on this observation, we postulate to divide a DDGM into two parts: a denoiser and a generator. The denoiser could be parameterized by a denoising autoencoder, while the generator is a diffusion-based model with its own set of parameters. We experimentally validate our proposition, showing its pros and cons.
Nominal Metaphor Generation with Multitask Learning ; Metaphor generation is a challenging task which can impact many downstream tasks such as improving user satisfaction with dialogue systems and story generation. This paper tackles the problem of Chinese nominal metaphor generation by introducing a multitask metaphor generation framework with self-training and metaphor identification mechanisms. Self-training addresses the data scarcity issue of metaphor datasets. That is, instead of solely relying on labelled metaphor datasets, which are usually small in size, self-training helps identify potential metaphors from a large-scale unlabelled corpus for metaphor generation. The metaphor weighting mechanism enables our model to focus on the metaphor-related parts of the input (e.g., the comparison of the metaphor and comparator) during model learning and thus improves the metaphoricity of the generated metaphors. Our model is trained on an annotated corpus consisting of 6.3k sentences that contain diverse metaphorical expressions. Experimental results show that our model is able to generate metaphors with better readability and creativity compared to the baseline models, even when training data is insufficient.
Controllable 3D Generative Adversarial Face Model via Disentangling Shape and Appearance ; 3D face modeling has been an active area of research in computer vision and computer graphics, fueling applications ranging from facial expression transfer in virtual avatars to synthetic data generation. Existing 3D deep learning generative models (e.g., VAEs, GANs) allow generating compact face representations (both shape and texture) that can model non-linearities in the shape and appearance space (e.g., scattering effects, specularities, etc.). However, they lack the capability to control the generation of subtle expressions. This paper proposes a new 3D face generative model that can decouple identity and expression and provides granular control over expressions. In particular, we propose using a pair of supervised auto-encoder and generative adversarial networks to produce high-quality 3D faces, both in terms of appearance and shape. Experimental results in the generation of 3D faces learned with holistic expression labels, or Action Unit labels, show how we can decouple identity and expression, gaining fine control over expressions while preserving identity.
FR: Folded Rationalization with a Unified Encoder ; Conventional works on rationalization generally employ a two-phase model in which a generator selects the most important pieces of the input, followed by a predictor that makes predictions based on the selected pieces. However, such a two-phase model may incur the degeneration problem, where the predictor overfits to the noise generated by a not yet well-trained generator and, in turn, leads the generator to converge to a suboptimal model that tends to select senseless pieces. To tackle this challenge, we propose Folded Rationalization (FR), which folds the two phases of the rationale model into one from the perspective of text semantic extraction. The key idea of FR is to employ a unified encoder between the generator and predictor, based on which FR can facilitate a better predictor through access to valuable information blocked by the generator in the traditional two-phase model, and thus bring about a better generator. Empirically, we show that FR improves the F1 score by up to 10.3% compared to state-of-the-art methods.
Distance Based Image Classification: A solution to generative classification's conundrum ; Most classifiers rely on discriminative boundaries that separate instances of each class from everything else. We argue that discriminative boundaries are counter-intuitive as they define semantics by what-they-are-not, and should be replaced by generative classifiers which define semantics by what-they-are. Unfortunately, generative classifiers are significantly less accurate. This may be caused by the tendency of generative models to focus on easy-to-model semantic generative factors and ignore non-semantic factors that are important but difficult to model. We propose a new generative model in which semantic factors are accommodated by shell theory's hierarchical generative process and non-semantic factors by an instance-specific noise term. We use the model to develop a classification scheme which suppresses the impact of noise while preserving semantic cues. The result is a surprisingly accurate generative classifier, which takes the form of a modified nearest-neighbor algorithm; we term it distance classification. Unlike discriminative classifiers, a distance classifier defines semantics by what-they-are, is amenable to incremental updates, and scales well with the number of classes.
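The nearest-class flavor of distance classification can be sketched generically: assign the label whose stored instances lie closest, so incremental updates are just appended rows. This is an illustrative formulation under that assumption, not the paper's exact shell-theory scheme:

```python
import numpy as np

def distance_classify(x, class_instances: dict):
    # class_instances: label -> array of stored training vectors, shape (n_i, d)
    dists = {c: np.linalg.norm(v - x, axis=1).min() for c, v in class_instances.items()}
    return min(dists, key=dists.get)   # label of the nearest stored instance

instances = {"cat": np.random.randn(10, 5), "dog": np.random.randn(10, 5) + 3.0}
print(distance_classify(np.zeros(5), instances))   # likely "cat"
# Incremental update: just stack a new row onto the relevant class's array.
```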
Equivariant Shape-Conditioned Generation of 3D Molecules for Ligand-Based Drug Design ; Shape-based virtual screening is widely employed in ligand-based drug design to search chemical libraries for molecules with similar 3D shapes yet novel 2D chemical structures compared to known ligands. 3D deep generative models have the potential to automate this exploration of shape-conditioned 3D chemical space; however, no existing models can reliably generate valid drug-like molecules in conformations that adopt a specific shape such as a known binding pose. We introduce a new multimodal 3D generative model that enables shape-conditioned 3D molecular design by equivariantly encoding molecular shape and variationally encoding chemical identity. We ensure local geometric and chemical validity of generated molecules by using autoregressive fragment-based generation with heuristic bonding geometries, allowing the model to prioritize the scoring of rotatable bonds to best align the growing conformational structure to the target shape. We evaluate our 3D generative model on tasks relevant to drug design, including shape-conditioned generation of chemically diverse molecular structures and shape-constrained molecular property optimization, demonstrating its utility over virtual screening of enumerated libraries.
3D Brain and Heart Volume Generative Models: A Survey ; Generative models such as generative adversarial networks and autoencoders have gained a great deal of attention in the medical field due to their excellent data generation capability. This paper provides a comprehensive survey of generative models for three-dimensional (3D) volumes, focusing on the brain and heart. A new and elaborate taxonomy of unconditional and conditional generative models is proposed to cover diverse medical tasks for the brain and heart: unconditional synthesis, classification, conditional synthesis, segmentation, denoising, detection, and registration. We provide relevant background, examine each task and also suggest potential future directions. A list of the latest publications will be updated on GitHub to keep up with the rapid influx of papers at https://github.com/csyanbin/3DMedicalGenerativeSurvey.
EtriCA: Event-Triggered Context-Aware Story Generation Augmented by Cross Attention ; One of the key challenges of automatic story generation is how to generate a long narrative that can maintain fluency, relevance, and coherence. Despite recent progress, current story generation systems still face the challenge of how to effectively capture contextual and event features, which has a profound impact on a model's generation performance. To address these challenges, we present EtriCA, a novel neural generation model which improves the relevance and coherence of the generated stories by residually mapping context features to event sequences with a cross-attention mechanism. Such a feature-capturing mechanism allows our model to better exploit the logical relatedness between events when generating stories. Extensive experiments based on both automatic and human evaluations show that our model significantly outperforms state-of-the-art baselines, demonstrating the effectiveness of our model in leveraging context and event features.
GanLM: Encoder-Decoder Pretraining with an Auxiliary Discriminator ; Pretrained models have achieved remarkable success in natural language processing (NLP). However, existing pretraining methods underutilize the benefits of language understanding for generation. Inspired by the idea of Generative Adversarial Networks (GANs), we propose a GAN-style model for encoder-decoder pretraining by introducing an auxiliary discriminator, unifying the abilities of language understanding and generation in a single model. Our model, named GanLM, is trained with two pretraining objectives: replaced token detection and replaced token denoising. Specifically, given masked source sentences, the generator outputs the target distribution and the discriminator predicts whether the target tokens sampled from that distribution are incorrect. The target sentence is replaced with misclassified tokens to construct a noisy previous context, which is used to generate the gold sentence. In general, both tasks improve the ability of language understanding and generation by selectively using the denoising data. Extensive experiments on language generation benchmarks show that GanLM with its powerful language understanding capability outperforms various strong pretrained language models (PLMs) and achieves state-of-the-art performance.
DLT: Conditioned layout generation with Joint Discrete-Continuous Diffusion Layout Transformer ; Generating visual layouts is an essential ingredient of graphic design. The ability to condition layout generation on a partial subset of component attributes is critical to real-world applications that involve user interaction. Recently, diffusion models have demonstrated high-quality generative performance in various domains. However, it is unclear how to apply diffusion models to the natural representation of layouts, which consists of a mix of discrete (class) and continuous (location, size) attributes. To address the conditional layout generation problem, we introduce DLT, a joint discrete-continuous diffusion model. DLT is a transformer-based model which has a flexible conditioning mechanism that allows conditioning on any given subset of all the layout components' classes, locations, and sizes. Our method outperforms state-of-the-art generative models on various layout generation datasets with respect to different metrics and conditioning settings. Additionally, we validate the effectiveness of our proposed conditioning mechanism and the joint discrete-continuous diffusion process. This joint process can be incorporated into a wide range of mixed discrete-continuous generative tasks.
Spontaneous symmetry breaking in generative diffusion models ; Generative diffusion models have recently emerged as a leading approach for generating high-dimensional data. In this paper, we show that the dynamics of these models exhibit a spontaneous symmetry breaking that divides the generative dynamics into two distinct phases: (1) a linear steady-state dynamics around a central fixed point, and (2) an attractor dynamics directed towards the data manifold. These two phases are separated by the change in stability of the central fixed point, with the resulting window of instability being responsible for the diversity of the generated samples. Using both theoretical and empirical evidence, we show that an accurate simulation of the early dynamics does not significantly contribute to the final generation, since early fluctuations are reverted to the central fixed point. To leverage this insight, we propose a Gaussian late initialization scheme, which significantly improves model performance, achieving up to 3x FID improvements on fast samplers, while also increasing sample diversity (e.g., the racial composition of generated CelebA images). Our work offers a new way to understand the generative dynamics of diffusion models that has the potential to bring about higher performance and less biased fast samplers.
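Gaussian late initialization can be sketched generically: skip the pre-instability steps by starting the reverse process at an intermediate time from a Gaussian matched to the forward-diffused statistics. All names below are illustrative placeholders, not the paper's code:

```python
import torch

def late_init_sample(reverse_step, t_start: int, mu, sigma, shape):
    # Initialize near the central fixed point instead of simulating from t = T.
    x = mu + sigma * torch.randn(shape)
    for t in reversed(range(t_start)):        # only simulate the late dynamics
        x = reverse_step(x, t)
    return x

# Toy usage with a contraction standing in for the learned reverse step:
x0 = late_init_sample(lambda x, t: 0.99 * x, t_start=100, mu=0.0, sigma=1.0, shape=(2, 3))
```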
DreamTeacher: Pretraining Image Backbones with Deep Generative Models ; In this work, we introduce a self-supervised feature representation learning framework, DreamTeacher, that utilizes generative networks for pretraining downstream image backbones. We propose to distill knowledge from a trained generative model into standard image backbones that have been well engineered for specific perception tasks. We investigate two types of knowledge distillation: (1) distilling learned generative features onto target image backbones as an alternative to pretraining these backbones on large labeled datasets such as ImageNet, and (2) distilling labels obtained from generative networks with task heads onto the logits of target backbones. We perform extensive analyses on multiple generative models, dense prediction benchmarks, and several pretraining regimes. We empirically find that our DreamTeacher significantly outperforms existing self-supervised representation learning approaches across the board. Unsupervised ImageNet pretraining with DreamTeacher leads to significant improvements over ImageNet classification pretraining on downstream datasets, showcasing generative models, and diffusion generative models specifically, as a promising approach to representation learning on large, diverse datasets without requiring manual annotation.
Progressive distillation diffusion for raw music generation ; This paper aims to apply a new deep learning approach to the task of generating raw audio files. It is based on diffusion models, a recent type of deep generative model. This type of method has recently shown outstanding results for image generation, and much attention has been given to these models by the computer vision community. On the other hand, far less attention has been given to other types of applications, such as music generation in the waveform domain. In this paper, a model for unconditional generation applied to music is implemented: progressive distillation diffusion with a 1D U-Net. A comparison of different diffusion parameters and their effect on the final result is then presented. One big advantage of the methods implemented in this work is that the model is able to deal with progressive audio processing and generation, using a transformation from 1-channel 128 x 384 to 3-channel 128 x 128 mel-spectrograms, and looped generation. The empirical comparisons are carried out across different self-collected datasets.
Example-Based Framework for Perceptually Guided Audio Texture Generation ; Generative models for synthesizing audio textures explicitly encode controllability by conditioning the model with labelled data. While datasets for audio textures can be easily recorded in the wild, semantically labeling them is expensive, time-consuming, and prone to errors due to human annotator subjectivity. Thus, to control generation, there is a need to automatically infer user-defined perceptual factors of variation in the latent space of a generative model while modelling unlabeled textures. In this paper, we propose an example-based framework to determine vectors to guide texture generation based on user-defined semantic attributes. By synthesizing a few synthetic examples to indicate the presence or absence of a semantic attribute, we can infer the guidance vectors in the latent space of a generative model to control that attribute during generation. Our results show that our method is capable of finding perceptually relevant and deterministic guidance vectors for controllable generation for both discrete as well as continuous textures. Furthermore, we demonstrate the application of this method to other tasks such as selective semantic attribute transfer.
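In spirit, an example-based guidance vector is a difference of latent means between a few examples with and without the attribute; generation then moves along that direction. A minimal sketch under that assumption, not the paper's exact estimator:

```python
import numpy as np

def guidance_vector(latents_with: np.ndarray, latents_without: np.ndarray):
    # Attribute direction = mean latent of positive examples minus negatives.
    return latents_with.mean(axis=0) - latents_without.mean(axis=0)

z_pos, z_neg = np.random.randn(8, 64), np.random.randn(8, 64)
v = guidance_vector(z_pos, z_neg)
z_edit = np.random.randn(64) + 0.5 * v    # strengthen the attribute at sampling time
```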
Mitigate Replication and Copying in Diffusion Models with Generalized Caption and Dual Fusion Enhancement ; While diffusion models demonstrate a remarkable capability for generating high-quality images, their tendency to 'replicate' training data raises privacy concerns. Although recent research suggests that this replication may stem from the insufficient generalization of training data captions and the duplication of training images, effective mitigation strategies remain elusive. To address this gap, our paper first introduces a generality score that measures caption generality and employs a large language model (LLM) to generalize training captions. Subsequently, we leverage the generalized captions and propose a novel dual fusion enhancement approach to mitigate the replication of diffusion models. Our empirical results demonstrate that our proposed methods can significantly reduce replication by 43.5% compared to the original diffusion model while maintaining the diversity and quality of generations.
Context based Text-generation using LSTM networks ; Long short-term memory (LSTM) units on sequence-based models are being used in translation, question-answering systems, and classification tasks due to their capability of learning long-term dependencies. In natural language generation, LSTM networks are providing impressive results on text generation models by learning language models with grammatically stable syntax. But the downside is that the network does not learn about the context. The network only learns the input-output function and generates text given a set of input words, irrespective of pragmatics. As the model is trained without any such context, there is no semantic consistency among the generated sentences. The proposed model is trained to generate text for a given set of input words along with a context vector. A context vector is similar to a paragraph vector that grasps the semantic meaning (context) of the sentence. Several methods of extracting the context vectors are proposed in this work. While training a language model, in addition to the input-output sequences, context vectors are also trained along with the inputs. Due to this structure, the model learns the relation among the input words, the context vector and the target word. Given a set of context terms, a well-trained model will generate text around the provided context. Based on the nature of computing context vectors, the model has been tried out with two variations: word importance and word clustering. In the word clustering method, suitable embeddings among various domains are also explored. The results are evaluated based on the semantic closeness of the generated text to the given context.
Human-in-the-loop model explanation via verbatim boundary identification in generated neighborhoods ; The black-box nature of machine learning models limits their use in case-critical applications, raising faithfulness and ethical concerns that lead to trust crises. One possible way to mitigate this issue is to understand how a mispredicted decision is carved out from the decision boundary. This paper presents a human-in-the-loop approach to explain machine learning models using verbatim neighborhood manifestation. Contrary to most current eXplainable Artificial Intelligence (XAI) systems, which provide hit-or-miss approximate explanations, our approach generates the local decision boundary of the given instance and enables human intelligence to conclude the model behavior. Our method can be divided into three stages: (1) a neighborhood generation stage, which generates instances based on the given sample; (2) a classification stage, which yields classifications on the generated instances to carve out the local decision boundary and delineate the model behavior; and (3) a human-in-the-loop stage, which involves humans refining and exploring the neighborhood of interest. In the generation stage, a generative model is used to generate plausible synthetic neighbors around the given instance. After the classification stage, the classified neighbor instances provide a multi-faceted understanding of the model behavior. Three intervention points are provided in the human-in-the-loop stage, enabling humans to leverage their own intelligence to interpret the model behavior. Several experiments on two datasets are conducted, and the experimental results demonstrate the potential of our proposed approach for boosting human understanding of the complex machine learning model.
A Generative Language Model for Few-shot Aspect-Based Sentiment Analysis ; Sentiment analysis is an important task in natural language processing. In recent works, pretrained language models are often used to achieve state-of-the-art results, especially when training data is scarce. It is common to finetune on the downstream task, usually by adding task-specific layers on top of the model. In this paper, we focus on aspect-based sentiment analysis, which involves extracting aspect terms and categories, and predicting their corresponding polarities. In particular, we are interested in few-shot settings. We propose to reformulate the extraction and prediction tasks into a sequence generation task, using a generative language model with unidirectional attention (GPT-2 is used unless stated otherwise). This way, the model learns to accomplish the tasks via language generation without the need to train task-specific layers. Our evaluation results on single-task polarity prediction show that our approach outperforms the previous state-of-the-art (based on BERT) in average performance by large margins in few-shot and full-shot settings. More importantly, our generative approach significantly reduces the model variance caused by low-resource data. We further demonstrate that the proposed generative language model can handle joint and multi-task settings, unlike previous work. We observe that the proposed sequence generation method achieves further improved performance on polarity prediction when the model is trained via joint and multi-task settings. Further evaluation on similar sentiment analysis datasets, SST-2, SST and OOS intent detection, validates the superiority and noise robustness of the generative language model in few-shot settings.
Collocation2Text: Controllable Text Generation from Guide Phrases in Russian ; Large pretrained language models are capable of generating varied and fluent texts. Starting from the prompt, these models generate a narrative that can develop unpredictably. Existing methods of controllable text generation, which guide the narrative in the text in the user-specified direction, require creating a training corpus and an additional time-consuming training procedure. The paper proposes and investigates Collocation2Text, a plug-and-play method for automatic controllable text generation in Russian which does not require finetuning. The method is based on two interacting models: the autoregressive language model ruGPT-3 and the autoencoding language model ruRoBERTa. The idea of the method is to shift the output distribution of the autoregressive model according to the output distribution of the autoencoding model in order to ensure a coherent transition of the narrative in the text towards the guide phrase, which can contain single words or collocations. The autoencoding model, which is able to take into account the left and right contexts of the token, tells the autoregressive model which tokens are the most and least logical at the current generation step, increasing or decreasing the probabilities of the corresponding tokens. The experiments on generating news articles using the proposed method showed its effectiveness for automatically generating fluent texts which contain coherent transitions between user-specified phrases.
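The distribution shift can be pictured as a simple fusion of next-token scores from the two models; the blending rule below is an illustrative sketch, not the paper's exact formula:

```python
import torch

def shifted_logits(ar_logits: torch.Tensor, mlm_logits: torch.Tensor, weight: float = 0.5):
    # Nudge the autoregressive model's logits toward tokens the masked
    # (autoencoding) model considers logical given both contexts.
    return ar_logits + weight * torch.log_softmax(mlm_logits, dim=-1)

next_token = shifted_logits(torch.randn(50257), torch.randn(50257)).argmax()
```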
Membership Inference Attacks Against Text-to-image Generation Models ; Text-to-image generation models have recently attracted unprecedented attention as they unlock imaginative applications in all areas of life. However, developing such models requires huge amounts of data that might contain privacy-sensitive information, e.g., face identity. While privacy risks have been extensively demonstrated in the image classification and GAN generation domains, privacy risks in the text-to-image generation domain are largely unexplored. In this paper, we perform the first privacy analysis of text-to-image generation models through the lens of membership inference. Specifically, we propose three key intuitions about membership information and design four attack methodologies accordingly. We conduct comprehensive evaluations on two mainstream text-to-image generation models, including sequence-to-sequence modeling and diffusion-based modeling. The empirical results show that all of the proposed attacks can achieve significant performance, in some cases even close to an accuracy of 1, and thus the corresponding risk is much more severe than that shown by existing membership inference attacks. We further conduct an extensive ablation study to analyze the factors that may affect the attack performance, which can guide developers and researchers to be alert to vulnerabilities in text-to-image generation models. All these findings indicate that our proposed attacks pose a realistic privacy threat to text-to-image generation models.
Unified Multimodal Model with Unlikelihood Training for Visual Dialog ; The task of visual dialog requires a multimodal chatbot to answer sequential questions from humans about image content. Prior work performs the standard likelihood training for answer generation on the positive instances involving correct answers. However, the likelihood objective often leads to frequent and dull outputs and fails to exploit the useful knowledge from negative instances involving incorrect answers. In this paper, we propose a Unified Multimodal Model with UnLikelihood Training, named UniMM-UL, to tackle this problem. First, to improve visual dialog understanding and generation by multi-task learning, our model extends ViLBERT from only supporting answer discrimination to holding both answer discrimination and answer generation seamlessly by different attention masks. Specifically, in order to make the original discriminative model compatible with answer generation, we design novel generative attention masks to implement the autoregressive Masked Language Modeling (autoregressive MLM) task. And to attenuate the adverse effects of the likelihood objective, we exploit unlikelihood training on negative instances to make the model less likely to generate incorrect answers. Then, to utilize dense annotations, we adopt different finetuning methods for both generating and discriminating answers, rather than just for discriminating answers as in the prior work. Finally, on the VisDial dataset, our model achieves the best generative results (69.23 NDCG score). And our model also yields discriminative results comparable to the state-of-the-art in both single-model and ensemble settings (75.92 and 76.17 NDCG scores).
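Unlikelihood training on negative instances has a compact generic form: the usual negative log-likelihood on correct tokens plus a term penalizing probability mass on incorrect ones. A minimal sketch with illustrative shapes and weighting:

```python
import torch

def unlikelihood_loss(log_probs, pos_ids, neg_ids, alpha: float = 1.0):
    # log_probs: (seq, vocab) log-softmax outputs of the model
    nll = -log_probs[torch.arange(len(pos_ids)), pos_ids].mean()
    p_neg = log_probs[torch.arange(len(neg_ids)), neg_ids].exp()
    # Push down the probability of tokens from incorrect answers.
    ul = -torch.log(torch.clamp(1.0 - p_neg, min=1e-6)).mean()
    return nll + alpha * ul

lp = torch.log_softmax(torch.randn(5, 100), dim=-1)
loss = unlikelihood_loss(lp, torch.randint(0, 100, (5,)), torch.randint(0, 100, (5,)))
```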
Fair Generative Models via Transfer Learning ; This work addresses fair generative models. Dataset biases have been a major cause of unfairness in deep generative models. Previous work has proposed to augment large, biased datasets with small, unbiased reference datasets. Under this setup, a weakly-supervised approach has been proposed that achieves state-of-the-art quality and fairness in generated samples. In our work, based on this setup, we propose a simple yet effective approach. Specifically, first, we propose fairTL, a transfer learning approach to learn fair generative models. Under fairTL, we pretrain the generative model with the available large, biased datasets and subsequently adapt the model using the small, unbiased reference dataset. We find that our fairTL can learn expressive sample generation during pretraining, thanks to the large biased dataset. This knowledge is then transferred to the target model during adaptation, which also learns to capture the underlying fair distribution of the small reference dataset. Second, we propose fairTL++, where we introduce two additional innovations to improve upon fairTL: (i) multiple feedback and (ii) Linear-Probing followed by Fine-Tuning (LP-FT). Taking one step further, we consider an alternative, challenging setup where only a pretrained (potentially biased) model is available but the dataset that was used to pretrain the model is inaccessible. We demonstrate that our proposed fairTL and fairTL++ remain very effective under this setup. We note that previous work requires access to the large, biased datasets and is incapable of handling this more challenging setup. Extensive experiments show that fairTL and fairTL++ achieve state-of-the-art performance in both quality and fairness of generated samples. The code and additional resources can be found at bearwithchris.github.io/fairTL.
A Generative Adversarial Network for Climate Tipping Point Discovery (TIP-GAN) ; We propose a new Tipping Point Generative Adversarial Network (TIP-GAN) for better characterizing potential climate tipping points in Earth system models. We describe an adversarial game to explore the parameter space of these models, detect upcoming tipping points, and discover the drivers of tipping points. In this setup, a set of generators learn to construct model configurations that will invoke a climate tipping point. The discriminator learns to identify which generators are generating each model configuration and whether a given configuration will lead to a tipping point. The discriminator is trained using an oracle (a surrogate climate model) to test if a generated model configuration leads to a tipping point or not. We demonstrate the application of this GAN to invoke the collapse of the Atlantic Meridional Overturning Circulation (AMOC). We share experimental results of modifying the loss functions and the number of generators to exploit the area of uncertainty in model state space near a climate tipping point. In addition, we show that our trained discriminator can predict AMOC collapse with a high degree of accuracy without the use of the oracle. This approach could generalize to other tipping points, and could augment climate modeling research by directing users interested in studying tipping points to parameter sets likely to induce said tipping points in their computationally intensive climate models.
Boosting Radiology Report Generation by Infusing Comparison Prior ; Recent transformer-based models have made significant strides in generating radiology reports from chest X-ray images. However, a prominent challenge remains: these models often lack prior knowledge, resulting in the generation of synthetic reports that mistakenly reference non-existent prior exams. This discrepancy can be attributed to a knowledge gap between radiologists and the generation models. While radiologists possess patient-specific prior information, the models solely receive X-ray images at a specific time point. To tackle this issue, we propose a novel approach that leverages a rule-based labeler to extract comparison prior information from radiology reports. This extracted comparison prior is then seamlessly integrated into state-of-the-art transformer-based models, enabling them to produce more realistic and comprehensive reports. Our method is evaluated on English report datasets, such as IU X-ray and MIMIC-CXR. The results demonstrate that our approach surpasses baseline models in terms of natural language generation metrics. Notably, our model generates reports that are free from false references to non-existent prior exams, setting it apart from previous models. By addressing this limitation, our approach represents a significant step towards bridging the gap between radiologists and generation models in the domain of medical report generation.
How Ready are Pre-trained Abstractive Models and LLMs for Legal Case Judgement Summarization? ; Automatic summarization of legal case judgements has traditionally been attempted with extractive summarization methods. However, in recent years, abstractive summarization models have gained popularity because they can generate more natural and coherent summaries. Legal domain-specific pre-trained abstractive summarization models are now available. Moreover, general-domain pre-trained Large Language Models (LLMs), such as ChatGPT, are known to generate high-quality text and have the capacity for text summarization. Hence it is natural to ask whether these models are ready for off-the-shelf application to automatically generate abstractive summaries for case judgements. To explore this question, we apply several state-of-the-art domain-specific abstractive summarization models and general-domain LLMs to Indian court case judgements and check the quality of the generated summaries. In addition to standard metrics for summary quality, we check for inconsistencies and hallucinations in the summaries. We see that abstractive summarization models generally achieve slightly higher scores than extractive models on standard summary evaluation metrics such as ROUGE and BLEU. However, we often find inconsistent or hallucinated information in the generated abstractive summaries. Overall, our investigation indicates that the pre-trained abstractive summarization models and LLMs are not yet ready for fully automatic deployment for case judgement summarization; rather, a human-in-the-loop approach, including manual checks for inconsistencies, is more suitable at present.
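A hedged sketch of the standard-metric part of such an evaluation: scoring a generated summary against a gold summary with ROUGE via the `rouge-score` package. The texts here are toy stand-ins, not actual judgements.

```python
# ROUGE scoring sketch (pip install rouge-score); texts are toy examples.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "The court dismissed the appeal and upheld the lower court's order."
generated = "The appeal was dismissed and the earlier order was upheld."
scores = scorer.score(reference, generated)
for name, s in scores.items():
    print(f"{name}: precision={s.precision:.3f} recall={s.recall:.3f} f1={s.fmeasure:.3f}")
```

Note that, as the abstract stresses, such n-gram metrics do not catch hallucinated content, which is why the manual consistency checks remain necessary.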
A Comparison of Personalized and Generalized Approaches to Emotion Recognition Using Consumer Wearable Devices: Machine Learning Study ; Background: Studies have shown the potential adverse health effects, ranging from headaches to cardiovascular disease, associated with long-term negative emotions and chronic stress. Since many indicators of stress are imperceptible to observers, early detection of and intervention in stress remains a pressing medical need. Physiological signals offer a noninvasive method of monitoring emotions and are easily collected by smartwatches. Existing research primarily focuses on developing generalized machine learning-based models for emotion classification. Objective: We aim to study the differences between personalized and generalized machine learning models for three-class emotion classification (neutral, stress, and amusement) using wearable biosignal data. Methods: We developed a convolutional encoder for the three-class emotion classification problem using data from WESAD, a multimodal dataset with physiological signals for 15 subjects. We compared the results between a subject-exclusive generalized model, a subject-inclusive generalized model, and a personalized model. Results: For the three-class classification problem, our personalized model achieved an average accuracy of 95.06% and F1-score of 91.71%, our subject-inclusive generalized model achieved an average accuracy of 66.95% and F1-score of 42.50%, and our subject-exclusive generalized model achieved an average accuracy of 67.65% and F1-score of 43.05%. Conclusions: Our results emphasize the need for increased research into personalized emotion recognition models, given that they outperform generalized models in certain contexts. We also demonstrate that personalized machine learning models for emotion classification are viable and can achieve high performance.
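The three protocols compared above differ only in how subjects are split between training and testing; the sketch below makes that difference explicit, with invented arrays standing in for WESAD feature windows.

```python
# Illustrative train/test splits for the three evaluation protocols.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, windows_per_subject = 15, 40
X = rng.normal(size=(n_subjects * windows_per_subject, 8))   # features per window (toy)
y = rng.integers(0, 3, size=len(X))                          # neutral / stress / amusement
subj = np.repeat(np.arange(n_subjects), windows_per_subject)

# Subject-exclusive generalized: held-out subjects never appear in training.
test_subjects = [13, 14]
excl_train = ~np.isin(subj, test_subjects)

# Subject-inclusive generalized: every subject contributes windows to both splits.
incl_train = rng.random(len(X)) < 0.8

# Personalized: train and test only on windows from one subject.
pers_mask = subj == 0

print(excl_train.sum(), incl_train.sum(), pers_mask.sum())
```

The large gap reported above (95% vs. ~67%) comes down to whether a model ever sees data from the subject it is tested on, which the subject-inclusive and personalized masks allow and the subject-exclusive mask forbids.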
Reinforcement Learning for Generative AI: A Survey ; Deep generative AI has long been an essential topic in the machine learning community and impacts a number of application areas, such as text generation and computer vision. The major paradigm for training a generative model is maximum likelihood estimation, which pushes the learner to capture and approximate the target data distribution by decreasing the divergence between the model distribution and the target distribution. This formulation successfully establishes the objective of generative tasks, but it is incapable of satisfying all the requirements that a user might expect from a generative model. Reinforcement learning, serving as a competitive option for injecting new training signals through new objectives, has demonstrated its power and flexibility in incorporating human inductive bias from multiple angles, such as adversarial learning, hand-designed rules, and learned reward models, to build performant models. As a result, reinforcement learning has become a trending research field and has stretched the limits of generative AI in both model design and applications, making a comprehensive review of recent advances timely. Although surveys of individual application areas have appeared recently, this survey aims to provide a high-level review that spans a range of application areas. We provide a rigorous taxonomy of this area and thorough coverage of various models and applications. Notably, we also survey the fast-developing large language model area. We conclude by pointing to potential directions that might tackle the limitations of current models and expand the frontiers of generative AI.
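As a minimal illustration of the survey's central idea, RL as an extra training signal for a generator, here is a toy REINFORCE update on a one-step categorical "token" policy, with an arbitrary scalar reward standing in for an adversarial or learned reward model. Purely didactic; no real generator is involved.

```python
# Toy REINFORCE: push a categorical policy toward high-reward "tokens".
import numpy as np

rng = np.random.default_rng(0)
vocab, logits = 5, np.zeros(5)          # one-step "generator" policy

def reward(token):                      # stand-in reward model: prefer token 3
    return 1.0 if token == 3 else 0.0

lr, baseline = 0.5, 0.0
for step in range(200):
    probs = np.exp(logits) / np.exp(logits).sum()
    token = rng.choice(vocab, p=probs)
    r = reward(token)
    baseline += 0.05 * (r - baseline)   # moving-average baseline reduces variance
    grad = -probs                       # d log p(token) / d logits ...
    grad[token] += 1.0                  # ... equals one-hot(token) - probs
    logits += lr * (r - baseline) * grad
print("final policy:", np.round(np.exp(logits) / np.exp(logits).sum(), 3))
```

Swapping `reward` for a discriminator score, a rule checker, or a learned preference model recovers, in miniature, the three signal families the abstract lists.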
Algebraic model structures ; We define a new notion of an algebraic model structure, in which the cofibrations and fibrations are retracts of coalgebras for comonads and algebras for monads, and prove algebraic analogs of classical results. Using a modified version of Quillen's small object argument, we show that every cofibrantly generated model structure in the usual sense underlies a cofibrantly generated algebraic model structure. We show how to pass a cofibrantly generated algebraic model structure across an adjunction, and we characterize the algebraic Quillen adjunction that results. We prove that pointwise natural weak factorization systems on diagram categories are cofibrantly generated if the original ones are, and we give an algebraic generalization of the projective model structure. Finally, we prove that certain fundamental comparison maps present in any cofibrantly generated model category are cofibrations when the cofibrations are monomorphisms, a conclusion that does not seem to be provable in the classical, non-algebraic theory.
Poisson-Lie T-duals of the bi-Yang-Baxter models ; We prove the conjecture of Sfetsos, Siampos and Thompson that suitable analytic continuations of the Poisson-Lie T-duals of the bi-Yang-Baxter sigma models coincide with the recently introduced generalized lambda models. We then generalize this result by showing that the analytic continuation of a generic sigma model of universal WZW-type, introduced by Tseytlin in 1993, is nothing but the Poisson-Lie T-dual of a generic Poisson-Lie symmetric sigma model, introduced by Klimcik and Severa in 1995.
Counterfactual Control for Free from Generative Models ; We introduce a method by which a generative model that learns the joint distribution between actions and future states can be used to automatically infer a control scheme for any desired reward function, which may be altered on the fly without retraining the model. In this method, the problem of action selection is reduced to gradient descent on the latent space of the generative model, with the model itself providing the means of evaluating outcomes and finding the gradient, much as the reward network in Deep Q-Networks (DQN) provides gradient information for the action generator. Unlike DQN or Actor-Critic methods, which are conditional models for a specific reward, using a generative model of the full joint distribution permits the reward to be changed on the fly. In addition, the generated futures can be inspected to gain insight into what the network 'thinks' will happen, and into what went wrong when outcomes deviate from prediction.
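The action-selection loop described above reduces to gradient ascent on the reward through a frozen generative model; the sketch below shows that loop with invented stand-ins for the decoder and the (swappable) reward function.

```python
# Latent-space planning sketch: `decode` and `reward` are invented stand-ins.
import torch

def decode(z):                 # stand-in for a trained generative decoder
    return torch.tanh(z @ torch.ones(4, 2))

def reward(state):             # desired outcome: reach (1, -1); swappable on the fly
    target = torch.tensor([1.0, -1.0])
    return -((state - target) ** 2).sum()

z = torch.zeros(4, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = -reward(decode(z))  # ascend the reward through the frozen model
    loss.backward()
    opt.step()
print("planned latent:", z.detach(), "predicted future:", decode(z).detach())
```

Changing `reward` requires no retraining, which is exactly the property the abstract contrasts with DQN and Actor-Critic.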
Small presentations of model categories and Vopěnka's principle ; We prove existence results for small presentations of model categories, generalizing a theorem of D. Dugger from combinatorial model categories to more general model categories. Some of these results are shown under the assumption of Vopěnka's principle. Our main theorem applies in particular to cofibrantly generated model categories where the domains of the generating cofibrations satisfy a slightly stronger smallness condition. As a consequence, assuming Vopěnka's principle, such a cofibrantly generated model category is Quillen equivalent to a combinatorial model category. Moreover, if there are generating sets consisting of presentable objects, then the same conclusion holds without the assumption of Vopěnka's principle. We also correct a mistake from previous work that made similar claims.
Semi-Supervised Generation with Cluster-aware Generative Models ; Deep generative models trained with large amounts of unlabelled data have proven to be powerful within the domain of unsupervised learning. Many real-life data sets contain a small amount of labelled data points that are typically disregarded when training generative models. We propose the Cluster-aware Generative Model, which uses unlabelled information to infer a latent representation that models the natural clustering of the data, and additional labelled data points to refine this clustering. The generative performance of the model improves significantly when labelled information is exploited, obtaining a log-likelihood of -79.38 nats on permutation-invariant MNIST, while also achieving competitive semi-supervised classification accuracies. The model can also be trained fully unsupervised, and still improves the log-likelihood performance with respect to related methods.
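The labelled/unlabelled split described above suggests an objective of roughly the following shape, in the spirit of standard semi-supervised deep generative models; the weighting and all tensor shapes here are illustrative, not the paper's exact formulation.

```python
# Schematic semi-supervised objective: unlabelled points contribute a
# generative (ELBO-style) term, labelled points add a classification term
# on the inferred cluster variable. Everything here is a placeholder sketch.
import torch
import torch.nn.functional as F

def semi_supervised_loss(elbo_unlab, cluster_logits_lab, y_lab, alpha=0.1):
    """elbo_unlab: per-example ELBO on the unlabelled batch (to be maximized);
    cluster_logits_lab: inferred cluster logits for the labelled batch."""
    gen_term = -elbo_unlab.mean()                       # maximize the ELBO
    sup_term = F.cross_entropy(cluster_logits_lab, y_lab)
    return gen_term + alpha * sup_term

# Toy call with random tensors standing in for model outputs:
loss = semi_supervised_loss(torch.randn(32), torch.randn(8, 10), torch.randint(0, 10, (8,)))
print(loss.item())
```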
Least Squares Estimation-Based Synchronous Generator Parameter Estimation Using PMU Data ; In this paper, least squares estimation (LSE)-based dynamic generator model parameter identification is investigated. Electromechanical dynamics-related parameters, such as the inertia constant and the primary frequency control droop of a synchronous generator, are estimated using Phasor Measurement Unit (PMU) data obtained at the generator terminal bus. The key idea in applying LSE to dynamic parameter estimation is to form a discrete auto-regression with exogenous input (ARX) model. With an ARX model, a linear estimation problem can be formulated and the parameters of the ARX model can be found. This paper gives a detailed derivation converting a generator model with primary frequency control into an ARX model. The generator parameters are then recovered from the estimated ARX model parameters. Two types of conversion methods are presented: the zero-order hold (ZOH) method and the Tustin method. Numerical results illustrate the proposed LSE application to dynamic system parameter identification using PMU data.
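The LSE step itself is simple enough to sketch end-to-end: stack lagged outputs and inputs into a regression matrix and solve by ordinary least squares. The data below are synthetic stand-ins for PMU measurements, and the ARX orders are chosen for illustration only.

```python
# Fit a discrete second-order ARX model
#   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + e[k]
# to measured input/output sequences by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
N = 500
u = rng.normal(size=N)                         # exogenous input (e.g., power deviation)
y = np.zeros(N)
a_true, b_true = (1.5, -0.7), (0.2, 0.1)       # stable toy system
for k in range(2, N):
    y[k] = (a_true[0] * y[k-1] + a_true[1] * y[k-2]
            + b_true[0] * u[k-1] + b_true[1] * u[k-2]
            + 0.01 * rng.normal())

# Stack the regression: each row is [y[k-1], y[k-2], u[k-1], u[k-2]].
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated ARX parameters:", np.round(theta, 3))
# Inertia and droop would then be recovered from theta via the chosen
# discretization (ZOH or Tustin), as the paper describes.
```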
Promoting Diversity for End-to-End Conversation Response Generation ; We present our work on Track 2 of the Dialog System Technology Challenges 7 (DSTC7). DSTC7 Track 2 aims to evaluate the response generation of fully data-driven conversation models in knowledge-grounded settings, which provide contextually relevant factual texts. Sequence-to-Sequence models have been widely used for end-to-end generative conversation modelling and have achieved impressive results; however, previous studies show that they tend to output dull and repetitive responses. Our work aims to promote diversity in end-to-end conversation response generation via a two-stage pipeline: (1) generate multiple responses, for which we propose two different models, a variational generative (VariGen) model and a retrieval-based (Retrieval) model; (2) rank and return the most relevant response, by training a topic coherence discrimination (TCD) model for the ranking process. According to the official evaluation results, our proposed Retrieval and VariGen systems ranked first and second, respectively, on objective diversity metrics (i.e., Entropy) among all participating systems, and the VariGen system ranked second on the NIST and METEOR metrics.
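The two-stage pipeline can be summarized in a short skeleton: over-generate candidates from both models, then rank with a scoring model playing the role of the TCD ranker. All components below are placeholders, not the authors' implementation.

```python
# Generate-then-rank skeleton; `generators` and `scorer` are placeholders.
def generate_candidates(context, generators, n_per_model=5):
    """Stage 1: pool responses from every candidate generator."""
    return [r for g in generators for r in g.sample(context, n_per_model)]

def rank_and_select(context, candidates, scorer):
    """Stage 2: return the candidate the scoring model rates highest."""
    return max(candidates, key=lambda resp: scorer(context, resp))

# Usage sketch (hypothetical objects):
#   candidates = generate_candidates(ctx, [varigen_model, retrieval_model])
#   best = rank_and_select(ctx, candidates, tcd_score)
```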
On the Quantitative Analysis of Decoder-Based Generative Models ; The past several years have seen remarkable progress in generative models that produce convincing samples of images and other modalities. A shared component of many powerful generative models is a decoder network, a parametric deep neural net that defines a generative distribution. Examples include variational autoencoders, generative adversarial networks, and generative moment matching networks. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling (AIS) for evaluating log-likelihoods of decoder-based models and validate its accuracy using bidirectional Monte Carlo. The evaluation code is provided at https://github.com/tonywu95/eval_gen. Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.
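To show the estimator in isolation, here is a minimal AIS demo on a toy one-dimensional target: annealing from N(0,1) recovers the target's log normalizing constant, the same mechanism used (at much larger scale, over decoder latents) for the log-likelihood evaluation above. This toy is our own sketch; the actual decoder-based evaluation code is at the linked repository.

```python
# Minimal AIS: estimate log Z of an unnormalized target by annealing from N(0,1).
import numpy as np

rng = np.random.default_rng(0)

def log_p0(x):            # tractable initial distribution: standard normal
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_f(x):             # unnormalized target: N(2, 0.5^2) without its constant
    return -0.5 * ((x - 2.0) / 0.5) ** 2

betas = np.linspace(0.0, 1.0, 200)
n_chains = 2000
x = rng.normal(size=n_chains)            # exact samples from p0
log_w = np.zeros(n_chains)
for b_prev, b in zip(betas[:-1], betas[1:]):
    # Accumulate the importance weight for the geometric path p0^(1-b) * f^b.
    log_w += (b - b_prev) * (log_f(x) - log_p0(x))
    # One Metropolis step targeting the intermediate distribution at beta = b.
    prop = x + 0.5 * rng.normal(size=n_chains)
    def log_pi(z):
        return (1 - b) * log_p0(z) + b * log_f(z)
    accept = np.log(rng.random(n_chains)) < log_pi(prop) - log_pi(x)
    x = np.where(accept, prop, x)

log_Z = np.logaddexp.reduce(log_w) - np.log(n_chains)
print("AIS log Z:", log_Z, " true:", np.log(np.sqrt(2 * np.pi) * 0.5))
```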