Endochronic theory, nonlinear kinematic hardening rule and generalized plasticity: a new interpretation based on the generalized normality assumption ; A simple way to define the flow rules of plasticity models is the assumption of generalized normality associated with a suitable pseudopotential function. This approach, however, is not usually employed to formulate endochronic theory and nonlinear kinematic (NLK) hardening rules, or generalized plasticity models. In this paper, generalized normality is used to give a new formulation of these classes of models. As a result, a suitable pseudopotential is introduced for endochronic models, and a nonstandard description of NLK hardening and generalized plasticity models is also provided. This new formulation allows for an effective investigation of the relationships between these three classes of plasticity models.
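For readers unfamiliar with the framework the abstract invokes, generalized normality (as in the theory of generalized standard materials) states that the rates of the dissipative variables derive from a convex pseudopotential; the following schematic statement is background we supply, not a formula taken from the paper:

$$(\sigma, X) \in \partial\varphi(\dot{\varepsilon}^p, \dot{\alpha}) \quad\Longleftrightarrow\quad (\dot{\varepsilon}^p, \dot{\alpha}) \in \partial\varphi^*(\sigma, X),$$

where $\varphi$ is the pseudopotential of dissipation, $\varphi^*$ its Legendre-Fenchel conjugate, $\sigma$ the stress, and $X$ the thermodynamic force conjugate to the internal variables $\alpha$.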
On the consistency of the Horava Theory ; With the goal of giving evidence for the theoretical consistency of the Horava theory, we perform a Hamiltonian analysis on a classical model suitable for analyzing its effective dynamics at large distances. The model is the lowest-order truncation of the Horava theory with the detailed-balance condition. We consider the pure gravitational theory without matter sources. The model has the same potential term as general relativity, but the kinetic term is modified by the inclusion of an arbitrary coupling constant λ. Since this constant breaks the general covariance under spacetime diffeomorphisms, it is believed that arbitrary values of λ make the model deviate from general relativity. We show that this model is not a deviation at all; instead, it is completely equivalent to general relativity in a particular partial gauge fixing for it. In doing this, we clarify the role of a second-class constraint of the model.
A Model-Driven Parser Generator, from Abstract Syntax Trees to Abstract Syntax Graphs ; Model-based parser generators decouple language specification from language processing. The model-driven approach avoids the limitations that conventional parser generators impose on the language designer. Conventional tools require the designed language grammar to conform to the specific kind of grammar supported by the particular parser generator being used (LL and LR parser generators are the most common). Model-driven parser generators, like ModelCC, do not require a grammar specification, since that grammar can be automatically derived from the language model and, if needed, adapted to conform to the requirements of the given kind of parser, all of this without interfering with the conceptual design of the language and its associated applications. Moreover, model-driven tools such as ModelCC are able to automatically resolve references between language elements, hence producing abstract syntax graphs instead of abstract syntax trees as the result of the parsing process. Such graphs are not confined to directed acyclic graphs and can contain cycles, since ModelCC supports anaphoric, cataphoric, and recursive references.
Topological energy bounds in generalized Skyrme models ; The Skyrme model has a natural generalization amenable to a standard Hamiltonian treatment, consisting of the standard sigma-model and Skyrme terms, a potential, and a certain term sextic in first derivatives. Here we demonstrate that, in this theory, each pair of terms in the static energy functional which may support topological solitons according to the Derrick criterion (i.e., each pair of terms with opposite Derrick scaling) separately possesses a topological energy bound. As a consequence, there exists a four-parameter family of topological bounds for the full generalized Skyrme model. The optimal bounds, i.e., the optimal values of the parameters, depend both on the form of the potential and on the relative strength of the different terms. It also follows that various submodels of the generalized Skyrme model have one-parameter families of topological energy bounds. We also consider the case of topological bounds for the generalized Skyrme model on a compact base space, as well as generalizations to higher dimensions.
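As background for the Derrick-scaling argument (a schematic we supply, with conventions that may differ from the paper's): in three spatial dimensions, under the rescaling $x \to \mu x$, a static-energy term $E_k$ containing $k$ first derivatives scales as

$$E_k(\phi_\mu) = \mu^{k-3}\,E_k(\phi),$$

so any pair of terms with Derrick exponents of opposite sign is bounded below via the arithmetic-geometric-mean inequality, $a\,E_i + b\,E_j \ge 2\sqrt{ab\,E_i E_j}$, and in favorable cases the product is in turn bounded by the topological charge.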
The Influence of the Generator's License on Generated Artifacts ; Open-sourcing modelling tools and generators becomes more and more important as open source software as a whole becomes more important. We evaluate the impact that open source licenses of code generators have on the intellectual property (IP) of generated artifacts, comparing the most common open source licenses by categories found in the literature. Restrictively licensed generators do have effects on the IP, and therefore on the usability, of the artifacts they produce. We then show how these effects can be shaped to the needs of the licensor and the licensee.
Chinese Song Iambics Generation with Neural Attention-based Model ; Learning and generating Chinese poems is a charming yet challenging task. Traditional approaches involve various language modeling and machine translation techniques; however, they do not perform as well when generating poems with complex pattern constraints, for example Song iambics, a famous type of poem that involves variable-length sentences and strict rhythmic patterns. This paper applies the attention-based sequence-to-sequence model to generate Chinese Song iambics. Specifically, we encode the cue sentences with a bidirectional Long Short-Term Memory (LSTM) model and then predict the entire iambic with the information provided by the encoder, in the form of an attention-based LSTM that can regularize the generation process using the fine structure of the input cues. Several techniques are investigated to improve the model, including global context integration, hybrid-style training, character vector initialization, and adaptation. Both the automatic and subjective evaluation results show that our model can indeed learn the complex structural and rhythmic patterns of Song iambics, and the generation is rather successful.
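As a concrete reference for the architecture class this abstract describes, here is a minimal PyTorch sketch of an attention-based sequence-to-sequence model with a bidirectional LSTM encoder. All sizes, names, and details are illustrative, not taken from the paper (which additionally uses the refinements listed above):

```python
# Minimal attention-based seq2seq sketch: bidirectional LSTM encoder,
# attentive LSTM decoder. A shared vocabulary/embedding is assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnSeq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.decoder = nn.LSTMCell(emb_dim + 2 * hid_dim, hid_dim)
        self.attn = nn.Linear(hid_dim + 2 * hid_dim, 1)   # additive attention score
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, tgt):
        enc, _ = self.encoder(self.embed(src))            # (B, S, 2H)
        B, S, _ = enc.shape
        h = enc.new_zeros(B, self.decoder.hidden_size)
        c = torch.zeros_like(h)
        logits = []
        for t in range(tgt.size(1)):
            # score each encoder state against the current decoder state
            scores = self.attn(torch.cat([h.unsqueeze(1).expand(-1, S, -1), enc], -1))
            ctx = (F.softmax(scores, dim=1) * enc).sum(1)  # context vector (B, 2H)
            h, c = self.decoder(torch.cat([self.embed(tgt[:, t]), ctx], -1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                  # (B, T, V)
```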
Conditional molecular design with deep generative models ; Although machine learning has been successfully used to propose novel molecules that satisfy desired properties, it is still challenging to explore a large chemical space efficiently. In this paper, we present a conditional molecular design method that facilitates generating new molecules with desired properties. The proposed model, which simultaneously performs both property prediction and molecule generation, is built as a semi-supervised variational autoencoder trained on a set of existing molecules with only partial annotation. We generate new molecules with desired properties by sampling from the generative distribution estimated by the model. We demonstrate the effectiveness of the proposed model by evaluating it on drug-like molecules. The model improves the performance of property prediction by exploiting unlabeled molecules, and efficiently generates novel molecules fulfilling various target conditions.
Composition and decomposition of GANs ; In this work, we propose a composition-decomposition framework for adversarially training generative models on composed data: data where each sample can be thought of as being constructed from a fixed number of components. In our framework, samples are generated by sampling components from component generators and feeding these components to a composition function which combines them into a composed sample. This compositional training approach improves the modularity, extensibility, and interpretability of Generative Adversarial Networks (GANs), providing a principled way to incrementally construct complex models out of simpler component models, and allowing for an explicit division of responsibility between these components. Using this framework, we define a family of learning tasks and evaluate their feasibility on two datasets in two different data modalities (image and text). Lastly, we derive sufficient conditions such that these compositional generative models are identifiable. Our work provides a principled approach to building on pretrained generative models, and to exploiting the compositional nature of data distributions to train extensible and interpretable models.
Co-Optimization of Generation and Distribution Planning in Microgrids ; This paper proposes a co-optimized generation and distribution planning model for microgrids in which simultaneous investment in generation, i.e., distributed generation (DG) and distributed energy storage (DES), and distribution, i.e., upgrading the existing distribution network, is considered. The objective of the proposed model is to minimize the microgrid total planning cost, which comprises the investment cost of installed generation assets and lines, the microgrid operation cost, and the cost of unserved energy. The microgrid planning solution determines the optimal generation size, location, and mix, as well as the required network upgrades. To consider line flow and voltage limits, a linearized power flow model is proposed and used, allowing further application of mixed-integer linear programming (MILP) in problem modeling. The proposed model is applied to the IEEE 33-bus standard test system to demonstrate its acceptable performance and effectiveness.
Image Captioning at Will: A Versatile Scheme for Effectively Injecting Sentiments into Image Descriptions ; Automatic image captioning has recently approached human-level performance due to the latest advances in computer vision and natural language understanding. However, most current models can only generate plain factual descriptions of the content of a given image. For human beings, by contrast, caption writing is quite flexible and diverse: additional language dimensions, such as emotion, humor, and language style, are often incorporated to produce diverse, emotional, or appealing captions. In particular, we are interested in generating sentiment-conveying image descriptions, which has received little attention. The main challenge is how to effectively inject sentiments into the generated captions without altering the semantic match between the visual content and the generated descriptions. In this work, we propose two different models, which employ different schemes for injecting sentiments into image captions. Compared with the few existing approaches, the proposed models are much simpler and yet more effective. The experimental results show that our models outperform the state-of-the-art models in generating sentimental (i.e., sentiment-bearing) image captions. In addition, we can easily manipulate the model by assigning different sentiments to the test image to generate captions with the corresponding sentiments.
Parallel Scheduled Sampling ; Autoregressive models are widely used in sequence generation problems. The output sequence is typically generated in a predetermined order, one discrete unit (pixel, word, or character) at a time. The models are trained by teacher forcing, where the ground-truth history is fed to the model as input, which at test time is replaced by the model's prediction. Scheduled Sampling aims to mitigate this discrepancy between train and test time by randomly replacing some discrete units in the history with the model's prediction. While teacher-forced training works well with ML accelerators, since the computation can be parallelized across time, Scheduled Sampling involves undesirable sequential processing. In this paper, we introduce a simple technique to parallelize Scheduled Sampling across time. Experimentally, we find the proposed technique leads to equivalent or better performance on image generation, summarization, dialog generation, and translation compared to teacher-forced training. In the dialog response generation task, Parallel Scheduled Sampling achieves a 1.6 BLEU score (11.5%) improvement over teacher forcing, while in image generation it achieves 20% and 13.8% improvements in Fréchet Inception Distance (FID) and Inception Score (IS), respectively. Further, we discuss the effects of different hyperparameters associated with Scheduled Sampling on model performance.
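The parallelization idea can be illustrated with a short sketch (our reading of the technique, with illustrative names): instead of sampling token by token, run a full teacher-forced pass, splice its predictions into the gold history at random positions, and run again, so every pass stays parallel across time.

```python
# Sketch of parallel scheduled sampling: each pass is a single parallel
# forward over all time steps; predictions from the previous pass are
# randomly mixed into the gold history for the next pass.
import torch

def parallel_scheduled_sampling(model, gold, mix_prob, num_passes=2):
    history = gold
    for _ in range(num_passes - 1):
        with torch.no_grad():
            logits = model(history)                     # one parallel pass
            preds = logits.argmax(dim=-1)
        # shift right so the input at step t carries the prediction for token t
        preds = torch.roll(preds, shifts=1, dims=1)
        preds[:, 0] = gold[:, 0]                        # keep the BOS token
        mask = torch.rand_like(gold, dtype=torch.float) < mix_prob
        history = torch.where(mask, preds, gold)
    return model(history)                               # train vs. gold as usual
```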
A Hierarchical Attention Based Seq2seq Model for Chinese Lyrics Generation ; In this paper, we comprehensively study context-aware generation of Chinese song lyrics. Conventional text generation models generate a sequence or sentence word by word, failing to consider the contextual relationships between sentences. Taking into account the characteristics of lyrics, a hierarchical attention-based Seq2Seq (Sequence-to-Sequence) model is proposed for Chinese lyrics generation. By encoding word-level and sentence-level contextual information, this model promotes the topic relevance and consistency of generation. A large Chinese lyrics corpus is also leveraged for model training. Finally, results of automatic and human evaluations demonstrate that our model is able to compose complete Chinese lyrics under one unified topic constraint.
LPMNet: Latent Part Modification and Generation for 3D Point Clouds ; In this paper, we focus on latent modification and generation of 3D point cloud object models with respect to their semantic parts. Unlike existing methods, which use separate networks for part generation and assembly, we propose a single end-to-end autoencoder model that can handle the generation and modification of both semantic parts and global shapes. The proposed method supports part exchange between 3D point cloud models and composition of different parts to form new models, by directly editing latent representations. This holistic approach does not need part-based training to learn part representations and does not introduce any extra loss besides the standard reconstruction loss. The experiments demonstrate the robustness of the proposed method across different object categories and varying numbers of points. The method can generate new models through integration with generative models such as GANs and VAEs, and can work with unannotated point clouds through integration of a segmentation module.
Functional central limit theorems for epidemic models with varying infectivity ; In this paper, we prove functional central limit theorems (FCLTs) for a stochastic epidemic model with varying infectivity and general infectious periods recently introduced in Forien, Pang and Pardoux (2020). The infectivity process (total force of infection at each time) is composed of the independent infectivity random functions of the infectious individuals, each starting at that individual's time of infection. These infectivity random functions induce the infectious periods (as well as exposed, recovered, or immune periods in full generality), whose probability distributions can be very general. The epidemic model includes the generalized non-Markovian SIR, SEIR, SIS, and SIRS models with infection-age dependent infectivity. In the FCLTs for the generalized SIR and SEIR models, the limits of the diffusion-scaled fluctuations of the infectivity and susceptible processes are the unique solution of a two-dimensional Gaussian-driven system of stochastic Volterra integral equations; given these solutions, the limits for the infected (exposed/infectious) and recovered processes are Gaussian processes expressed in terms of the solutions of those stochastic Volterra integral equations. We also present the FCLTs for the generalized SIS and SIRS models.
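In notation suggested by the abstract (illustrative, not copied from the paper), the total force of infection at time $t$ is the superposition of the infectivity random functions of the currently infected individuals,

$$\mathfrak{F}(t) = \sum_{i=1}^{I(t)} \lambda_i(t - \tau_i),$$

where $\tau_i$ is the infection time of individual $i$ and the $\lambda_i$ are i.i.d. random functions whose supports determine the infectious periods.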
Variational methods for Conditional Multimodal Deep Learning ; In this paper, we address the problem of conditional modality learning, whereby one is interested in generating one modality given the other. While it is straightforward to learn a joint distribution over multiple modalities using a deep multimodal architecture, we observe that such models aren't very effective at conditional generation. Hence, we address the problem by learning conditional distributions between the modalities. We use variational methods for maximizing the corresponding conditional log-likelihood. The resultant deep model, which we refer to as the conditional multimodal autoencoder (CMMA), forces the latent representation obtained from a single modality alone to be 'close' to the joint representation obtained from multiple modalities. We use the proposed model to generate faces from attributes. We show that the faces generated from attributes using the proposed model are qualitatively and quantitatively more representative of the attributes from which they were generated than those obtained by other deep generative models. We also propose a secondary task, whereby existing faces are modified by modifying the corresponding attributes. We observe that the modifications to the faces introduced by the proposed model are representative of the corresponding modifications to the attributes.
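The variational maximization of the conditional log-likelihood presumably starts from the standard conditional evidence lower bound; as a schematic (the paper's exact parameterization may differ):

$$\log p_\theta(x \mid y) \;\ge\; \mathbb{E}_{q_\phi(z \mid x, y)}\bigl[\log p_\theta(x \mid z, y)\bigr] - \mathrm{KL}\bigl(q_\phi(z \mid x, y)\,\|\,p_\theta(z \mid y)\bigr),$$

with $y$ the conditioning modality (e.g., attributes) and $x$ the generated modality (e.g., faces).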
DLOW: Domain Flow for Adaptation and Generalization ; In this work, we present a domain flow generation (DLOW) model to bridge two different domains by generating a continuous sequence of intermediate domains flowing from one domain to the other. The benefits of our DLOW model are twofold. First, it is able to transfer source images into different styles in the intermediate domains. The transferred images smoothly bridge the gap between the source and target domains, thus easing the domain adaptation task. Second, when multiple target domains are provided for training, our DLOW model is also able to generate new styles of images that are unseen in the training data. We implement our DLOW model based on CycleGAN. A domainness variable is introduced to guide the model to generate the desired intermediate domain images. In the inference phase, a flow of images in various styles can be obtained by varying the domainness variable. We demonstrate the effectiveness of our model for both cross-domain semantic segmentation and style generalization tasks on benchmark datasets. Our implementation is available at https://github.com/ETHRuiGong/DLOW.
Theoretical guarantees for sampling and inference in generative models with latent diffusions ; We introduce and study a class of probabilistic generative models, where the latent object is a finite-dimensional diffusion process on a finite time interval and the observed variable is drawn conditionally on the terminal point of the diffusion. We make the following contributions. We provide a unified viewpoint on both sampling and variational inference in such generative models through the lens of stochastic control. We quantify the expressiveness of diffusion-based generative models; specifically, we show that one can efficiently sample from a wide class of terminal target distributions by choosing the drift of the latent diffusion from the class of multi-layer feed-forward neural nets, with the accuracy of sampling measured by the Kullback-Leibler divergence to the target distribution. Finally, we present and analyze a scheme for unbiased simulation of generative models with latent diffusions and provide bounds on the variance of the resulting estimators. This scheme can be implemented as a deep generative model with a random number of layers.
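Schematically (our notation, restating the abstract), the generative model described here is

$$dX_t = b_\theta(X_t, t)\,dt + dW_t, \quad t \in [0, 1], \qquad Y \sim p(\,\cdot \mid X_1),$$

with the drift $b_\theta$ taken from a class of multi-layer feed-forward networks, and the sampling guarantee controlling $\mathrm{KL}(\mathrm{Law}(X_1)\,\|\,\pi)$ for the terminal target distribution $\pi$.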
Compositional Generalization in Image Captioning ; Image captioning models are usually evaluated on their ability to describe a held-out set of images, not on their ability to generalize to unseen concepts. We study the problem of compositional generalization, which measures how well a model composes unseen combinations of concepts when describing images. State-of-the-art image captioning models show poor generalization performance on this task. To address the poor performance, we propose a multi-task model that combines caption generation and image-sentence ranking, and uses a decoding mechanism that re-ranks the captions according to their similarity to the image. This model is substantially better at generalizing to unseen combinations of concepts than state-of-the-art captioning models.
Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations ; To increase trust in artificial intelligence systems, a promising research direction consists of designing neural models capable of generating natural language explanations for their predictions. In this work, we show that such models are nonetheless prone to generating mutually inconsistent explanations, such as 'Because there is a dog in the image' and 'Because there is no dog' for the same image, exposing flaws in either the decision-making process of the model or in the generation of the explanations. We introduce a simple yet effective adversarial framework for sanity-checking models against the generation of inconsistent natural language explanations. Moreover, as part of the framework, we address the problem of adversarial attacks with full target sequences, a scenario that was not previously addressed in sequence-to-sequence attacks. Finally, we apply our framework to a state-of-the-art neural natural language inference model that provides natural language explanations for its predictions. Our framework shows that this model is capable of generating a significant number of inconsistent explanations.
Zero-shot Learning via Simultaneous Generating and Learning ; To overcome the absence of training data for unseen classes, conventional zero-shot learning approaches mainly train their models on seen data points and leverage the semantic descriptions of both seen and unseen classes. Going beyond exploiting relations between seen and unseen classes, we present a deep generative model that provides the model with experience of both seen and unseen classes. Based on a variational autoencoder with a class-specific multimodal prior, the proposed method learns the conditional distribution of seen and unseen classes. In order to circumvent the need for samples of unseen classes, we treat the non-existing data as missing examples. That is, our network aims to find the optimal unseen data points and model parameters by iteratively following the generating and learning strategy. Since we obtain the conditional generative model for both seen and unseen classes, classification as well as generation can be performed directly without any off-the-shelf classifiers. The experimental results demonstrate that the proposed generating and learning strategy makes the model outperform models trained only on the seen classes, as well as several state-of-the-art methods.
Generative Temporal Link Prediction via Self-tokenized Sequence Modeling ; We formalize networks with evolving structures as temporal networks and propose a generative link prediction model, Generative Link Sequence Modeling (GLSM), to predict future links in temporal networks. GLSM captures the temporal link formation patterns from the observed links with a sequence modeling framework and can generate emerging links by inferring from the probability distribution over potential future links. To avoid the overfitting caused by treating each link as a unique token, we propose a self-tokenization mechanism to automatically transform each raw link in the network into an abstract aggregation token. The self-tokenization is seamlessly integrated into the sequence modeling framework, which gives the proposed GLSM model the generalization capability to discover link formation patterns beyond raw link sequences. We compare GLSM with the existing state-of-the-art methods on five real-world datasets. The experimental results demonstrate that GLSM effectively obtains future positive links in a generative fashion while achieving the best performance (2-10% improvements on AUC) among the alternatives.
Relevance-Promoting Language Model for Short-Text Conversation ; Despite the effectiveness of the sequence-to-sequence framework on the task of Short-Text Conversation (STC), the issue of under-exploitation of training data (i.e., the supervision signals from the query text are ignored) still remains unresolved. Also, the adopted maximization-based decoding strategies, inclined to generate generic responses or responses with repetition, are unsuited to the STC task. In this paper, we propose to formulate the STC task as a language modeling problem and tailor-make a training strategy to adapt a language model for response generation. To enhance generation performance, we design a relevance-promoting transformer language model, which performs additional supervised source attention after the self-attention to increase the importance of informative query tokens in calculating the token-level representation. The model further refines the query representation with relevance clues inferred from its multiple references during training. In testing, we adopt a randomization-over-maximization strategy to reduce the generation of generic responses. Experimental results on a large Chinese STC dataset demonstrate the superiority of the proposed model on relevance metrics and diversity metrics. Code available at https://ai.tencent.com/ailab/nlp/dialogue.
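A randomization-over-maximization decoding step of the kind described can be sketched as a generic top-k sampling step (the paper's exact scheme may differ; names and defaults here are illustrative):

```python
# Sample from a temperature-scaled, top-k truncated distribution
# instead of taking the argmax, reducing generic responses.
import torch
import torch.nn.functional as F

def randomized_decode_step(logits, top_k=20, temperature=1.0):
    logits = logits / temperature
    topk_vals, topk_idx = logits.topk(top_k, dim=-1)   # keep the k best tokens
    probs = F.softmax(topk_vals, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)   # sample; do not argmax
    return topk_idx.gather(-1, choice).squeeze(-1)
```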
Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks ; Learners that are exposed to the same training data might generalize differently due to differing inductive biases. In neural network models, inductive biases could in theory arise from any aspect of the model architecture. We investigate which architectural factors affect the generalization behavior of neural sequence-to-sequence models trained on two syntactic tasks, English question formation and English tense reinflection. For both tasks, the training set is consistent with both a generalization based on hierarchical structure and a generalization based on linear order. All architectural factors that we investigated qualitatively affected how models generalized, including factors with no clear connection to hierarchical structure; for example, LSTMs and GRUs displayed qualitatively different inductive biases. However, the only factor that consistently contributed a hierarchical bias across tasks was the use of a tree-structured model rather than a model with sequential recurrence, suggesting that human-like syntactic generalization requires architectural syntactic structure.
WG-WaveNet: Real-Time High-Fidelity Speech Synthesis without GPU ; In this paper, we propose WG-WaveNet, a fast, lightweight, and high-quality waveform generation model. WG-WaveNet is composed of a compact flow-based model and a post-filter. The two components are jointly trained by maximizing the likelihood of the training data and optimizing loss functions in the frequency domain. As we design a flow-based model that is heavily compressed, the proposed model requires much less computational resources than other waveform generation models during both training and inference; even though the model is highly compressed, the post-filter maintains the quality of the generated waveform. Our PyTorch implementation can be trained using less than 8 GB of GPU memory and generates audio samples at a rate of more than 960 kHz on an NVIDIA 1080Ti GPU. Furthermore, even when synthesizing on a CPU, we show that the proposed method is capable of generating 44.1 kHz speech waveforms 1.2 times faster than real-time. Experiments also show that the quality of the generated audio is comparable to that of other methods. Audio samples are publicly available online.
Multi-fidelity Generative Deep Learning Turbulent Flows ; In computational fluid dynamics, there is an inevitable trade-off between accuracy and computational cost. In this work, a novel multi-fidelity deep generative model is introduced for the surrogate modeling of high-fidelity turbulent flow fields, given the solution of a computationally inexpensive but inaccurate low-fidelity solver. The resulting surrogate is able to generate physically accurate turbulent realizations at a computational cost orders of magnitude lower than that of a high-fidelity simulation. The deep generative model is a conditional invertible neural network, built with normalizing flows, with recurrent LSTM connections that allow for stable training of transient systems with high predictive accuracy. The model is trained with a variational loss that combines both data-driven and physics-constrained learning. This deep generative model is applied to non-trivial high-Reynolds-number flows governed by the Navier-Stokes equations, including turbulent flow over a backward-facing step at different Reynolds numbers and the turbulent wake behind an array of bluff bodies. For both of these examples, the model is able to generate unique yet physically accurate turbulent fluid flows conditioned on an inexpensive low-fidelity solution.
Scalable Deep Generative Modeling for Sparse Graphs ; Learning graph generative models is a challenging task for deep learning and has wide applicability to a range of domains like chemistry, biology, and social science. However, current deep neural methods suffer from limited scalability: for a graph with n nodes and m edges, existing deep neural methods require Ω(n²) complexity by building up the adjacency matrix. On the other hand, many real-world graphs are actually sparse in the sense that m ≪ n². Based on this, we develop a novel autoregressive model, named BiGG, that utilizes this sparsity to avoid generating the full adjacency matrix, and importantly reduces the graph generation time complexity to O((n + m) log n). Furthermore, during training this autoregressive model can be parallelized with O(log n) synchronization stages, which makes it much more efficient than other autoregressive models that require Ω(n). Experiments on several benchmarks show that the proposed approach not only scales to orders-of-magnitude larger graphs than previously possible with deep autoregressive graph generative models, but also yields better graph generation quality.
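The sparsity idea can be conveyed by a drastically simplified sketch: emit only the edge list, never the dense adjacency matrix, so the work scales with the number of edges rather than n². BiGG's actual decoder is tree-structured (hence the O((n + m) log n) bound); the linear scan and the `edge_model` interface below are illustrative only:

```python
# Hypothetical interface sketch: an autoregressive model that decodes
# each node's neighbor list directly, emitting O(n + m) symbols total.
def generate_sparse_graph(edge_model, n_nodes):
    edges = []
    state = edge_model.init_state()
    for v in range(n_nodes):
        # decode v's neighbors (indices < v) until an end-of-neighbors symbol
        u, state = edge_model.next_neighbor(v, state)
        while u is not None:
            edges.append((u, v))
            u, state = edge_model.next_neighbor(v, state)
    return edges  # edge list; the dense n*n matrix is never materialized
```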
A Brief Summary of Interactions Between Meta-Learning and Self-Supervised Learning ; This paper briefly reviews the connections between meta-learning and self-supervised learning. Meta-learning can be applied to improve model generalization capability and to construct general AI algorithms. Self-supervised learning utilizes self-supervision from the original data and extracts higher-level generalizable features through unsupervised pre-training or the optimization of contrastive loss objectives. In self-supervised learning, data augmentation techniques are widely applied and data labels are not required, since pseudo-labels can be estimated from models trained on similar tasks. Meta-learning aims to adapt trained deep models to solve diverse tasks and to develop general AI algorithms. We review the associations of meta-learning with both generative and contrastive self-supervised learning models. Unlabeled data from multiple sources can be considered jointly, even when the data sources are vastly different. We show that an integration of meta-learning and self-supervised learning models can best contribute to improving model generalization capability. Self-supervised learning guided by a meta-learner, and general meta-learning algorithms under self-supervision, are both examples of possible combinations.
Cooperative Self-training of Machine Reading Comprehension ; Pretrained language models have significantly improved the performance of downstream language understanding tasks, including extractive question answering, by providing high-quality contextualized word embeddings. However, training question answering models still requires large amounts of annotated data for specific domains. In this work, we propose a cooperative self-training framework, RGX, for automatically generating more non-trivial question-answer pairs to improve model performance. RGX is built upon a masked answer extraction task with an interactive learning environment containing an answer entity Recognizer, a question Generator, and an answer eXtractor. Given a passage with a masked entity, the generator generates a question around the entity, and the extractor is trained to extract the masked entity given the generated question and the raw text. The framework allows the training of question generation and answering models on any text corpus without annotation. Experimental results show that RGX outperforms state-of-the-art (SOTA) pretrained language models and transfer learning approaches on standard question-answering benchmarks, and yields new SOTA performance under the given model size and transfer learning settings.
Symbolic Music Generation with Diffusion Models ; Score-based generative models and diffusion probabilistic models have been successful at generating high-quality samples in continuous domains such as images and audio. However, due to their Langevin-inspired sampling mechanisms, their application to discrete and sequential data has been limited. In this work, we present a technique for training diffusion models on sequential data by parameterizing the discrete domain in the continuous latent space of a pretrained variational autoencoder. Our method is non-autoregressive and learns to generate sequences of latent embeddings through the reverse process, and offers parallel generation with a constant number of iterative refinement steps. We apply this technique to modeling symbolic music and show strong unconditional generation and post-hoc conditional infilling results compared to autoregressive language models operating over the same continuous embeddings.
Learning to Synthesize Data for Semantic Parsing ; Synthesizing data for semantic parsing has gained increasing attention recently. However, most methods require hand-crafted, high-precision rules in their generative process, hindering the exploration of diverse unseen data. In this work, we propose a generative model which features a non-neural PCFG that models the composition of programs (e.g., SQL), and a BART-based translation model that maps a program to an utterance. Due to the simplicity of PCFGs and pretrained BART, our generative model can be efficiently learned from existing data at hand. Moreover, explicitly modeling compositions using a PCFG leads to better exploration of unseen programs, thus generating more diverse data. We evaluate our method in both in-domain and out-of-domain settings of text-to-SQL parsing on the standard benchmarks of GeoQuery and Spider, respectively. Our empirical results show that the synthesized data generated by our model can substantially help a semantic parser achieve better compositional and domain generalization.
Expressivity of Parameterized and Data-driven Representations in Quality Diversity Search ; We consider multi-solution optimization and generative models for the generation of diverse artifacts and the discovery of novel solutions. In cases where the domain's factors of variation are unknown or too complex to encode manually, generative models can provide a learned latent space to approximate these factors. When used as a search space, however, the range and diversity of possible outputs are limited to the expressivity and generative capabilities of the learned model. We compare the output diversity of a quality diversity evolutionary search performed in two different search spaces: (1) a predefined parameterized space and (2) the latent space of a variational autoencoder model. We find that the search on an explicit parametric encoding creates more diverse artifact sets than searching the latent space. A learned model is better at interpolating between known data points than at extrapolating or expanding towards unseen examples. We recommend using a generative model's latent space primarily to measure similarity between artifacts rather than for search and generation. Whenever a parametric encoding is obtainable, it should be preferred over a learned representation, as it produces a higher diversity of solutions.
Latent Space Refinement for Deep Generative Models ; Deep generative models are becoming widely used across science and industry for a variety of purposes. A common challenge is achieving a precise implicit or explicit representation of the data probability density. Recent proposals have suggested using classifier weights to refine the learned density of deep generative models. We extend this idea to all types of generative models and show how latent space refinement via iterated generative modeling can circumvent topological obstructions and improve precision. This methodology also applies to cases where the target model is non-differentiable and has many internal latent dimensions which must be marginalized over before refinement. We demonstrate our Latent Space Refinement (LaSeR) protocol on a variety of examples, focusing on combinations of Normalizing Flows and Generative Adversarial Networks.
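The classifier-based refinement idea can be illustrated by importance resampling of latent draws (a simplified sketch; LaSeR's full protocol fits a new generative model on the refined latent density, and all names here are illustrative):

```python
# Weight each latent draw by the density ratio implied by a
# real-vs-generated classifier, then resample according to the weights.
import torch

def refined_samples(generator, classifier, n, latent_dim, oversample=10):
    z = torch.randn(n * oversample, latent_dim)
    x = generator(z)
    p_real = classifier(x).squeeze(-1)        # P(real | x) in (0, 1)
    w = p_real / (1.0 - p_real + 1e-8)        # likelihood-ratio weights
    idx = torch.multinomial(w, n, replacement=True)
    return x[idx]                             # importance-resampled batch
```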
Improving Prediction of Low-Prior Clinical Events with Simultaneous General Patient-State Representation Learning ; Low-prior targets are common among many important clinical events, which introduces the challenge of having enough data to support learning of their predictive models. Many prior works have addressed this problem by first building a general patient-state representation model and then adapting it to a new low-prior prediction target. In this schema, the predictive performance can be hindered by misalignment between the general patient-state model and the target task. To overcome this challenge, we propose a new method that simultaneously optimizes a shared model through multi-task learning of both the low-prior supervised target and a general-purpose patient-state representation (GPSR). More specifically, our method improves the prediction performance of a low-prior task by jointly optimizing a shared model that combines the loss of the target event with those of a broad range of generic clinical events. We study the approach in the context of Recurrent Neural Networks (RNNs). Through extensive experiments on multiple clinical event targets using MIMIC-III data, we show that the inclusion of general patient-state representation tasks during model training improves the prediction of individual low-prior targets.
Adversarial Examples Generation for Reducing Implicit Gender Bias in Pretrained Models ; Over the last few years, contextualized pretrained neural language models, such as BERT and GPT, have shown significant gains on various NLP tasks. To enhance the robustness of existing pretrained models, one approach is adversarial example generation and evaluation, for conducting data augmentation or adversarial learning. Meanwhile, gender bias embedded in the models seems to be a serious problem in practical applications. Much research has covered the gender bias produced by word-level information (e.g., gender-stereotypical occupations), while few researchers have investigated sentence-level and implicit cases. In this paper, we propose a method to automatically generate implicit gender bias samples at the sentence level, and a metric to measure gender bias. Samples generated by our method are evaluated in terms of accuracy. The metric is used to guide the generation of examples from pretrained models; these examples can therefore be used to impose attacks on pretrained models. Finally, we discuss the efficacy of our generated examples at reducing gender bias, for future research.
Lifelong Vehicle Trajectory Prediction Framework Based on Generative Replay ; Accurate trajectory prediction of vehicles is essential for reliable autonomous driving. To maintain consistent performance as a vehicle drives around different cities, it is crucial to adapt to changing traffic circumstances and achieve a lifelong trajectory prediction model. To realize this, catastrophic forgetting is the main problem to be addressed. In this paper, a divergence measurement method based on conditional Kullback-Leibler divergence is first proposed to evaluate the spatiotemporal dependency differences among varied driving circumstances. Then, based on generative replay, a novel lifelong vehicle trajectory prediction framework is developed. The framework consists of a conditional generation model and a vehicle trajectory prediction model. The conditional generation model is a generative adversarial network conditioned on the position configuration of the vehicles. After learning and merging the trajectory distributions of vehicles across different cities, the generation model replays trajectories from prior samplings as inputs, which alleviates catastrophic forgetting. The vehicle trajectory prediction model is trained on the replayed trajectories and achieves consistent prediction performance on visited cities. A lifelong experimental setup is established on four open datasets, comprising five tasks. The spatiotemporal dependency divergence is calculated for the different tasks. Despite these divergences, the proposed framework exhibits lifelong learning ability and achieves consistent performance on all tasks.
Optimal regularizations for data generation with probabilistic graphical models ; Understanding the role of regularization is a central question in statistical inference. Empirically, well-chosen regularization schemes often dramatically improve the quality of the inferred models by avoiding overfitting of the training data. We consider here the particular case of L2 and L1 regularizations in the Maximum A Posteriori (MAP) inference of generative pairwise graphical models. Based on analytical calculations on Gaussian multivariate distributions and numerical experiments on Gaussian and Potts models, we study the likelihoods of the training, test, and 'generated' data sets under the inferred models, as functions of the regularization strengths. We show in particular that, at its maximum, the test likelihood and the 'generated' likelihood, which quantifies the quality of the generated samples, have remarkably close values. The optimal value for the regularization strength is found to be approximately equal to the inverse sum of the squared couplings incoming on sites of the underlying network of interactions. Our results appear largely independent of the structure of the true underlying interactions that generated the data and of the regularization scheme considered, and are valid when small fluctuations of the posterior distribution around the MAP estimator are taken into account. Connections with empirical works on protein models learned from homologous sequences are discussed.
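A schematic rendering of the setting (notation ours, not the paper's): the MAP estimate of the couplings $J$ under an $L_2$ penalty of strength $\gamma$ is

$$\hat{J} = \arg\max_J \Bigl[\log P(\text{data} \mid J) - \gamma \sum_{i<j} J_{ij}^2\Bigr],$$

and the reported optimum corresponds to $\gamma^\ast_i \approx \bigl(\sum_j \hat{J}_{ij}^2\bigr)^{-1}$ for the couplings incoming on site $i$.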
Attributable Watermarking of Speech Generative Models ; Generative models are now capable of synthesizing images, speech, and videos that are hardly distinguishable from authentic content. Such capabilities cause concerns such as malicious impersonation and IP theft. This paper investigates a solution for model attribution, i.e., the classification of synthetic content by source model via watermarks embedded in the content. Building on the past success of model attribution in the image domain, we discuss algorithmic improvements for generating user-end speech models that empirically achieve high attribution accuracy while maintaining high generation quality. We show the trade-off between attributability and generation quality under a variety of attacks on generated speech signals attempting to remove the watermarks, and the feasibility of learning robust watermarks against these attacks.
SingleSketch2Mesh: Generating 3D Mesh Models from Sketches ; Sketching is an important activity in any design process. Designers and stakeholders share their ideas through hand-drawn sketches. These sketches are further used to create 3D models. Current methods to generate 3D models from sketches are either manual or tightly coupled with 3D modeling platforms; they therefore require users to have experience in sketching on such platforms. Moreover, most existing approaches are based on geometric manipulation and thus cannot be generalized. We propose a novel AI-based ensemble approach, SingleSketch2Mesh, for generating 3D models from hand-drawn sketches. Our approach is based on generative networks and an encoder-decoder architecture to generate a 3D mesh model from a hand-drawn sketch. We evaluate our solution against existing solutions, and our approach outperforms them on both quantitative and qualitative evaluation criteria.
Diffusion Probabilistic Modeling for Video Generation ; Denoising diffusion probabilistic models are a promising new class of generative models that mark a milestone in high-quality image generation. This paper showcases their ability to sequentially generate video, surpassing prior methods in perceptual and probabilistic forecasting metrics. We propose an autoregressive, end-to-end optimized video diffusion model inspired by recent advances in neural video compression. The model successively generates future frames by correcting a deterministic next-frame prediction using a stochastic residual generated by an inverse diffusion process. We compare this approach against five baselines on four datasets involving natural and simulation-based videos. We find significant improvements in terms of perceptual quality for all datasets. Furthermore, by introducing a scalable version of the Continuous Ranked Probability Score (CRPS) applicable to video, we show that our model also outperforms existing approaches in probabilistic frame forecasting ability.
Modeling Intensification for Sign Language Generation: A Computational Approach ; End-to-end sign language generation models do not accurately represent the prosody of sign language. A lack of temporal and spatial variation leads to poor-quality generated presentations that confuse human interpreters. In this paper, we aim to improve the prosody of generated sign languages by modeling intensification in a data-driven manner. We present different strategies, grounded in the linguistics of sign language, that inform how intensity modifiers can be represented in gloss annotations. To employ our strategies, we first annotate a subset of the benchmark PHOENIX-14T, a German Sign Language dataset, with different levels of intensification. We then use a supervised intensity tagger to extend the annotated dataset and obtain labels for the remaining portion of it. This enhanced dataset is then used to train state-of-the-art transformer models for sign language generation. We find that our efforts in intensification modeling yield better results when evaluated with automatic metrics. Human evaluation also indicates a higher preference for the videos generated using our model.
Few-Shot Diffusion Models ; Denoising diffusion probabilistic models (DDPMs) are powerful hierarchical latent variable models with remarkable sample generation quality and training stability. These properties can be attributed to parameter sharing in the generative hierarchy, as well as a parameter-free, diffusion-based inference procedure. In this paper, we present Few-Shot Diffusion Models (FSDM), a framework for few-shot generation leveraging conditional DDPMs. FSDMs are trained to adapt the generative process conditioned on a small set of images from a given class by aggregating image patch information using a set-based Vision Transformer (ViT). At test time, the model is able to generate samples from previously unseen classes conditioned on as few as 5 samples from that class. We empirically show that FSDM can perform few-shot generation and transfer to new datasets. We benchmark variants of our method on complex vision datasets for few-shot learning and compare them to unconditional and conditional DDPM baselines. Additionally, we show how conditioning the model on patch-based input set information improves training convergence.
Bootstrapped Transformer for Offline Reinforcement Learning ; Offline reinforcement learning (RL) aims at learning policies from previously collected static trajectory data without interacting with the real environment. Recent works provide a novel perspective by viewing offline RL as a generic sequence generation problem, adopting sequence models such as the Transformer architecture to model distributions over trajectories and repurposing beam search as a planning algorithm. However, the training datasets utilized in general offline RL tasks are quite limited and often suffer from insufficient distribution coverage, which can be harmful to training sequence generation models yet has not drawn enough attention in previous works. In this paper, we propose a novel algorithm named Bootstrapped Transformer, which incorporates the idea of bootstrapping and leverages the learned model to self-generate more offline data to further boost sequence model training. We conduct extensive experiments on two offline RL benchmarks and demonstrate that our model can largely remedy the existing offline RL training limitations and beat other strong baseline methods. We also analyze the generated pseudo-data, whose revealed characteristics may shed some light on offline RL training. The code is available at https://seqml.github.io/bootorl.
Score-based Generative Models for Calorimeter Shower Simulation ; Score-based generative models are a new class of generative algorithms that have been shown to produce realistic images even in high-dimensional spaces, currently surpassing other state-of-the-art models across different benchmark categories and applications. In this work we introduce CaloScore, a score-based generative model for collider physics applied to calorimeter shower generation. Three different diffusion models are investigated using the Fast Calorimeter Simulation Challenge 2022 dataset. CaloScore is the first application of a score-based generative model in collider physics and is able to produce high-fidelity calorimeter images for all datasets, providing an alternative paradigm for calorimeter shower simulation.
Studying Generalization Through Data Averaging ; The generalization of machine learning models has a complex dependence on the data, model, and learning algorithm. We study train and test performance, as well as the generalization gap given by the mean of their difference over different data set samples, to understand their typical behavior. We derive an expression for the gap as a function of the covariance between the model parameter distribution and the train loss, and another expression for the average test performance, showing that test generalization only depends on the data-averaged parameter distribution and the data-averaged loss. We show that for a large class of model parameter distributions, a modified generalization gap is always non-negative. By specializing further to parameter distributions produced by stochastic gradient descent (SGD), along with a few approximations and modeling considerations, we are able to predict some aspects of how the generalization gap and model train and test performance vary as a function of SGD noise. We evaluate these predictions empirically on the CIFAR-10 classification task based on a ResNet architecture.
Towards Multimodal Vision-Language Models Generating Non-Generic Text ; Vision-language models can assess the visual context in an image and generate descriptive text. While the generated text may be accurate and syntactically correct, it is often overly general. To address this, recent work has used optical character recognition to supplement visual information with text extracted from an image. In this work, we contend that vision-language models can benefit from additional information that can be extracted from an image but is not used by current models. We modify previous multimodal frameworks to accept relevant information from any number of auxiliary classifiers. In particular, we focus on person names as an additional set of tokens and create a novel image-caption dataset to facilitate captioning with person names. The dataset, Politicians and Athletes in Captions (PAC), consists of captioned images of well-known people in context. By fine-tuning pretrained models on this dataset, we demonstrate a model that can naturally integrate facial recognition tokens into generated text by training on limited data. For the PAC dataset, we provide a discussion of collection and baseline benchmark scores.
Out-of-distribution Detection via Frequency-regularized Generative Models ; Modern deep generative models can assign high likelihood to inputs drawn from outside the training distribution, posing threats to models in open-world deployments. While much research attention has been placed on defining new test-time measures of OOD uncertainty, these methods do not fundamentally change how deep generative models are regularized and optimized in training. In particular, generative models are shown to rely overly on background information to estimate the likelihood. To address the issue, we propose a novel frequency-regularized learning (FRL) framework for OOD detection, which incorporates high-frequency information into training and guides the model to focus on semantically relevant features. FRL effectively improves performance on a wide range of generative architectures, including variational autoencoders, GLOW, and PixelCNN. On a new large-scale evaluation task, FRL achieves state-of-the-art performance, outperforming a strong baseline, Likelihood Regret, by 10.7% (AUROC) while achieving 147× faster inference speed. Extensive ablations show that FRL improves OOD detection performance while preserving image generation quality. Code is available at https://github.com/mu-cai/FRL.
Quantifying Quality of Class-Conditional Generative Models in Time-Series Domain ; Generative models are designed to address the data scarcity problem. Even with the exploding amount of data, due to computational advancements, some applications (e.g., health care, weather forecasting, fault detection) still suffer from data insufficiency, especially in the time-series domain. Thus generative models are essential and powerful tools, but they still lack a consensual approach for quality assessment. Such a deficiency hinders the confident application of modern implicit generative models to time-series data. Inspired by assessment methods in the image domain, we introduce the InceptionTime Score (ITS) and the Fréchet InceptionTime Distance (FITD) to gauge the qualitative performance of class-conditional generative models in the time-series domain. We conduct extensive experiments on 80 different datasets to study the discriminative capabilities of the proposed metrics alongside two existing evaluation metrics: Train on Synthetic, Test on Real (TSTR) and Train on Real, Test on Synthetic (TRTS). The extensive evaluation reveals that the proposed assessment method, i.e., ITS and FITD in combination with TSTR, can accurately assess class-conditional generative model performance.
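If FITD follows the usual Fréchet-distance form of FID with an InceptionTime feature extractor (an assumption on our part; consult the paper for the exact definition), it would read

$$\mathrm{FITD} = \lVert \mu_r - \mu_g \rVert_2^2 + \mathrm{Tr}\bigl(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}\bigr),$$

where $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the means and covariances of the embedded real and generated time series.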
Controlled Text Reduction ; Producing a reduced version of a source text, as in generic or focused summarization, inherently involves two distinct subtasks: deciding on the targeted content and generating a coherent text conveying it. While some popular approaches address summarization as a single end-to-end task, prominent works support decomposed modeling of the individual subtasks. Further, semi-automated text reduction is also very appealing, where users may identify the targeted content while models generate a corresponding coherent summary. In this paper, we focus on the second subtask, of generating coherent text given pre-selected content. Concretely, we formalize Controlled Text Reduction as a standalone task, whose input is a source text with marked spans of targeted content ('highlighting'). A model then needs to generate a coherent text that includes all and only the target information. We advocate the potential of such models, both for modular fully-automatic summarization and for semi-automated human-in-the-loop use cases. To facilitate proper research, we crowdsource high-quality dev and test datasets for the task. Further, we automatically generate a larger 'silver' training dataset from available summarization benchmarks, leveraging a pretrained summary-source alignment model. Finally, employing these datasets, we present a supervised baseline model, showing promising results and insightful analyses.
Improving Generalization of Pretrained Language Models via Stochastic Weight Averaging ; Knowledge Distillation (KD) is a commonly used technique for improving the generalization of compact Pretrained Language Models (PLMs) on downstream tasks. However, such methods impose the additional burden of training a separate teacher model for every new dataset. Alternatively, one may directly work on improving the optimization procedure of the compact model toward better generalization. Recent works observe that the flatness of the local minimum correlates well with better generalization. In this work, we adapt Stochastic Weight Averaging (SWA), a method encouraging convergence to a flatter minimum, to the fine-tuning of PLMs. We conduct extensive experiments on various NLP tasks (text classification, question answering, and generation) and different model architectures, and demonstrate that our adaptation improves generalization without extra computational cost. Moreover, we observe that this simple optimization technique is able to outperform the state-of-the-art KD methods for compact models.
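A minimal sketch of this kind of adaptation using PyTorch's built-in SWA utilities (schedule values are illustrative, not the paper's):

```python
# Fine-tune, then average the weights of late-training checkpoints with
# torch.optim.swa_utils to steer the model toward a flatter minimum.
import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

def finetune_with_swa(model, loader, loss_fn, epochs=10, swa_start=5):
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    swa_model = AveragedModel(model)           # running average of the weights
    swa_scheduler = SWALR(optimizer, swa_lr=1e-5)
    for epoch in range(epochs):
        for batch, labels in loader:
            optimizer.zero_grad()
            loss_fn(model(batch), labels).backward()
            optimizer.step()
        if epoch >= swa_start:                 # start averaging late in training
            swa_model.update_parameters(model)
            swa_scheduler.step()
    update_bn(loader, swa_model)               # refresh BN statistics, if any
    return swa_model
```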
Point-E: A System for Generating 3D Point Clouds from Complex Prompts ; While recent work on text-conditional 3D object generation has shown promising results, the state-of-the-art methods typically require multiple GPU-hours to produce a single sample. This is in stark contrast to state-of-the-art generative image models, which produce samples in a number of seconds or minutes. In this paper, we explore an alternative method for 3D object generation which produces 3D models in only 1-2 minutes on a single GPU. Our method first generates a single synthetic view using a text-to-image diffusion model, and then produces a 3D point cloud using a second diffusion model which conditions on the generated image. While our method still falls short of the state-of-the-art in terms of sample quality, it is one to two orders of magnitude faster to sample from, offering a practical trade-off for some use cases. We release our pretrained point cloud diffusion models, as well as evaluation code and models, at https://github.com/openai/point-e.
Text Generation with Diffusion Language Models: A Pretraining Approach with Continuous Paragraph Denoise ; In this paper, we introduce a novel diffusion language model pretraining framework for text generation, which we call GENIE. GENIE is a large-scale pretrained diffusion language model that consists of an encoder and a diffusion-based decoder, which can generate text by gradually transforming a random noise sequence into a coherent text sequence. To pretrain GENIE on a large-scale language corpus, we design a new continuous paragraph denoise objective, which encourages the diffusion decoder to reconstruct a clean text paragraph from a corrupted version, while preserving semantic and syntactic coherence. We evaluate GENIE on four downstream text generation benchmarks, namely XSum, CNN/DailyMail, Gigaword, and CommonGen. Our experimental results show that GENIE achieves comparable performance with the state-of-the-art autoregressive models on these benchmarks, and generates more diverse text samples. The code and models of GENIE are available at https://github.com/microsoft/ProphetNet/tree/master/GENIE.
SDYN-GANs: Adversarial Learning Methods for Multistep Generative Models for General-Order Stochastic Dynamics ; We introduce adversarial learning methods for data-driven generative modeling of the dynamics of nth-order stochastic systems. Our approach builds on Generative Adversarial Networks (GANs) with generative model classes based on stable m-step stochastic numerical integrators. We introduce different formulations and training methods for learning models of stochastic dynamics based on observations of trajectory samples. We develop approaches using discriminators based on Maximum Mean Discrepancy (MMD), training protocols using conditional and marginal distributions, and methods for learning dynamic responses over different time-scales. We show how our approaches can be used for modeling physical systems to learn force laws, damping coefficients, and noise-related parameters. The adversarial learning approaches provide methods for obtaining stable generative models for dynamic tasks, including long-time prediction and the development of simulations for stochastic systems.
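Since the MMD criterion does the heavy lifting for such discriminators, here is a hedged sketch of the standard unbiased RBF-kernel MMD^2 estimator between observed and generated trajectory samples; the bandwidth choice and feature representation are assumptions, not the paper's settings:

```python
import numpy as np

def mmd2_unbiased(X: np.ndarray, Y: np.ndarray, bandwidth: float = 1.0) -> float:
    """Unbiased MMD^2 between samples X (n, d) and Y (m, d) under an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))  # drop diagonal terms
    term_y = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return float(term_x + term_y - 2.0 * Kxy.mean())

print(mmd2_unbiased(np.random.randn(64, 3), np.random.randn(64, 3)))
```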
Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints ; The limits of open-ended generative models are unclear, yet increasingly important. What causes them to succeed and what causes them to fail? In this paper, we take a prompt-centric approach to analyzing and bounding the abilities of open-ended generative models. We present a generic methodology of analysis with two challenging prompt constraint types: structural and stylistic. These constraint types are categorized into a set of well-defined constraints that are analyzable by a single prompt. We then systematically create a diverse set of simple, natural, and useful prompts to robustly analyze each individual constraint. Using the GPT-3 text-davinci-002 model as a case study, we generate outputs from our collection of prompts and analyze the model's generative failures. We also show the generalizability of our proposed method on other large models like BLOOM and OPT. Our results and our in-context mitigation strategies reveal open challenges for future research. We have publicly released our code at https://github.com/SALT-NLP/Bound-Cap-LLM.
Synthesizing Mixed-type Electronic Health Records using Diffusion Models ; Electronic Health Records (EHRs) contain sensitive patient information, which presents privacy concerns when sharing such data. Synthetic data generation is a promising solution to mitigate these risks, often relying on deep generative models such as Generative Adversarial Networks (GANs). However, recent studies have shown that diffusion models offer several advantages over GANs, such as generation of more realistic synthetic data and stable training, across data modalities including image, text, and sound. In this work, we investigate the potential of diffusion models for generating realistic mixed-type tabular EHRs, comparing the TabDDPM model with existing methods on four datasets in terms of data quality, utility, privacy, and augmentation. Our experiments demonstrate that TabDDPM outperforms the state-of-the-art models across all evaluation metrics except privacy, which confirms the trade-off between privacy and utility.
Enhancing Text Generation with Cooperative Training ; Recently, there has been a surge in the use of generated data to enhance the performance of downstream models, largely due to the advancements in pretrained language models. However, most prevailing methods train generative and discriminative models in isolation, which leaves them unable to adapt to changes in each other. These approaches lead to generative models that are prone to deviating from the true data distribution and provide limited benefits to discriminative models. While some works have proposed jointly training generative and discriminative language models, their methods remain challenging due to the non-differentiable nature of discrete data. To overcome these issues, we introduce a self-consistent learning framework in the text field that involves training a discriminator and a generator cooperatively in a closed-loop manner until a scoring consensus is reached. By learning directly from selected samples, our framework is able to mitigate training instabilities such as mode collapse and non-convergence. Extensive experiments on four downstream benchmarks, including AFQMC, CHIP-STS, QQP, and MRPC, demonstrate the efficacy of the proposed framework.
Text Semantics to Image Generation: A Method of Building Facade Design based on the Stable Diffusion Model ; The Stable Diffusion model has been extensively employed in the study of architectural image generation, but there is still an opportunity to improve the controllability of the generated image content. A multi-network combined text-to-building-facade image generation method is proposed in this work. We first fine-tuned the Stable Diffusion model on the CMP Facades dataset using the LoRA (Low-Rank Adaptation) approach, then applied the ControlNet model to further control the output. Finally, we contrasted the facade generation outcomes under various architectural-style text contents and control strategies. The results demonstrate that the LoRA training approach significantly lowers the cost of fine-tuning the Stable Diffusion large model, and the addition of the ControlNet model increases the controllability of text-to-building-facade image generation. This provides a foundation for subsequent studies on the generation of architectural images.
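The low-rank idea behind LoRA is compact enough to sketch; the following toy layer freezes a pretrained weight and trains only a rank-r update (rank, scaling, and initialization here are illustrative choices, not the paper's values):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus a trainable low-rank update (B @ A)."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no-op at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(128, 128))
out = layer(torch.randn(2, 128))  # only A and B receive gradients during fine-tuning
```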
A Lightweight Constrained Generation Alternative for Query-focused Summarization ; Query-focused summarization (QFS) aims to provide a summary of a document that satisfies the information need of a given query, and is useful in various IR applications, such as abstractive snippet generation. Current QFS approaches typically involve injecting additional information, e.g., query-answer relevance or fine-grained token-level interaction between a query and document, into a fine-tuned large language model. However, these approaches often require extra parameters and training, and generalize poorly to new dataset distributions. To mitigate this, we propose leveraging a recently developed constrained generation model, NeuroLogic Decoding (NLD), as an alternative to current QFS regimes which rely on additional sub-architectures and training. We first construct lexical constraints by identifying important tokens in the document using a lightweight gradient attribution model, then force the generated summary to satisfy these constraints by directly manipulating the final vocabulary likelihood. This lightweight approach requires no additional parameters or fine-tuning, as it utilizes both an off-the-shelf neural retrieval model to construct the constraints and a standard generative language model to produce the QFS. We demonstrate the efficacy of this approach on two public QFS collections, achieving near parity with the state-of-the-art model with substantially reduced complexity.
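To make the "manipulate the final vocabulary likelihood" step concrete, here is a toy sketch that adds a bonus to the logits of constraint tokens not yet produced; the real NLD-style decoder is considerably more involved, and the bonus value and token handling below are assumptions:

```python
import torch

def constrained_logits(logits: torch.Tensor, constraint_ids: set,
                       generated_ids: list, bonus: float = 4.0) -> torch.Tensor:
    """logits: (vocab,) next-token scores; boost unsatisfied constraint tokens."""
    adjusted = logits.clone()
    for tok in constraint_ids - set(generated_ids):
        adjusted[tok] += bonus  # make still-missing constraint tokens more likely
    return adjusted

vocab_size = 10
logits = torch.randn(vocab_size)
# Constraint token 3 is already generated, so only token 7 gets boosted here.
next_tok = int(torch.argmax(constrained_logits(logits, {3, 7}, [2, 3])))
```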
TR0N: Translator Networks for 0-Shot Plug-and-Play Conditional Generation ; We propose TR0N, a highly general framework to turn pretrained unconditional generative models, such as GANs and VAEs, into conditional models. The conditioning can be highly arbitrary, and requires only a pretrained auxiliary model. For example, we show how to turn unconditional models into class-conditional ones with the help of a classifier, and also into text-to-image models by leveraging CLIP. TR0N learns a lightweight stochastic mapping which "translates" between the space of conditions and the latent space of the generative model, in such a way that the generated latent corresponds to a data sample satisfying the desired condition. The translated latent samples are then further improved upon through Langevin dynamics, enabling us to obtain higher-quality data samples. TR0N requires no training data nor fine-tuning, yet can achieve a zero-shot FID of 10.9 on MS-COCO, outperforming competing alternatives not only on this metric, but also in sampling speed, all while retaining a much higher level of generality. Our code is available at https://github.com/layer6ai-labs/tr0n.
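A minimal sketch of the Langevin refinement step on a latent, assuming some energy function scores how well a latent satisfies the condition; the quadratic toy energy and step sizes below are placeholders, not TR0N's actual objective:

```python
import torch

def langevin_refine(z: torch.Tensor, energy_fn, steps: int = 20, step_size: float = 1e-2):
    """Nudge latents toward low energy while injecting Gaussian noise."""
    z = z.clone().requires_grad_(True)
    for _ in range(steps):
        energy = energy_fn(z).sum()
        grad, = torch.autograd.grad(energy, z)
        with torch.no_grad():
            z = z - 0.5 * step_size * grad + (step_size ** 0.5) * torch.randn_like(z)
        z.requires_grad_(True)
    return z.detach()

# Toy energy: a quadratic well; a real use would score condition satisfaction.
z0 = torch.randn(4, 16)
z_refined = langevin_refine(z0, lambda z: (z ** 2).sum(dim=1))
```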
It is all about where you start: Text-to-image generation with seed selection ; Text-to-image diffusion models can synthesize a large variety of concepts in new compositions and scenarios. However, they still struggle with generating uncommon concepts, rare or unusual combinations, or structured concepts like hand palms. Their limitation is partly due to the long-tail nature of their training data: web-crawled datasets are strongly unbalanced, causing models to under-represent concepts from the tail of the distribution. Here we characterize the effect of unbalanced training data on text-to-image models and offer a remedy. We show that rare concepts can be correctly generated by carefully selecting suitable generation seeds in the noise space, a technique that we call SeedSelect. SeedSelect is efficient and does not require retraining the diffusion model. We evaluate the benefit of SeedSelect on a series of problems. First, in few-shot semantic data augmentation, we generate semantically correct images for few-shot and long-tail benchmarks, showing classification improvement on all classes, both from the head and the tail of the training data of diffusion models. We further evaluate SeedSelect on correcting images of hands, a well-known pitfall of current diffusion models, and show that it improves hand generation substantially.
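As a loose illustration of the idea (not the paper's actual seed-optimization procedure), one can brute-force over initial noise seeds and keep the generation an external scorer likes best; `generate` and `score` are placeholders to be supplied by the user:

```python
import torch

def best_seed_sample(prompt: str, generate, score, n_seeds: int = 16):
    """Try several noise seeds and return the highest-scoring generation.

    generate(prompt) -> image   (e.g., one diffusion sampling run)
    score(image, prompt) -> float (e.g., classifier confidence or CLIP similarity)
    """
    best, best_val = None, float("-inf")
    for seed in range(n_seeds):
        torch.manual_seed(seed)        # fixes the initial noise draw
        image = generate(prompt)
        val = score(image, prompt)
        if val > best_val:
            best, best_val = image, val
    return best, best_val
```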
STOAT: Structured Data to Analytical Text With Controls ; Recent language models have made tremendous progress in the structured-data-to-text generation task. However, these models still give suboptimal performance where logical inference is required to generate the descriptions. In this work, we specifically focus on analytical text generation from structured data such as tables. Building on the taxonomy proposed in (Gupta et al., 2020), we focus on controllable table-to-text generation for the following reasoning categories: numerical reasoning, commonsense reasoning, temporal reasoning, table knowledge, and entity knowledge. We propose the STOAT model, which is table- and reasoning-aware, with vector quantization to infuse the given reasoning categories into the output. We observe that our model provides 10.19% and 1.13% improvements on the PARENT metric on ToTTo and InfoTabs for the analytical sentence task. We also found that our model generates 15.3% more faithful and analytical descriptions compared to the baseline models in human evaluation. We curate and release two reasoning-category-annotated table-to-interesting-text generation datasets based on the ToTTo (Parikh et al., 2020) and InfoTabs (Gupta et al., 2020) datasets.
Federated Variational Inference: Towards Improved Personalization and Generalization ; Conventional federated learning algorithms train a single global model by leveraging all participating clients' data. However, due to heterogeneity in client generative distributions and predictive models, these approaches may not appropriately approximate the predictive process, converge to an optimal state, or generalize to new clients. We study personalization and generalization in stateless cross-device federated learning setups, assuming heterogeneity in client data distributions and predictive models. We first propose a hierarchical generative model and formalize it using Bayesian inference. We then approximate this process using variational inference to train our model efficiently. We call this algorithm Federated Variational Inference (FedVI). We use PAC-Bayes analysis to provide generalization bounds for FedVI. We evaluate our model on FEMNIST and CIFAR-100 image classification and show that FedVI beats the state-of-the-art on both tasks.
Compositional Generalization without Trees using Multiset Tagging and Latent Permutations ; Seq2seq models have been shown to struggle with compositional generalization in semantic parsing, i.e., generalizing to unseen compositions of phenomena that the model handles correctly in isolation. We phrase semantic parsing as a two-step process: we first tag each input token with a multiset of output tokens. Then we arrange the tokens into an output sequence using a new way of parameterizing and predicting permutations. We formulate predicting a permutation as solving a regularized linear program and we backpropagate through the solver. In contrast to prior work, our approach does not place a priori restrictions on possible permutations, making it very expressive. Our model outperforms pretrained seq2seq models and prior work on realistic semantic parsing tasks that require generalization to longer examples. We also outperform non-tree-based models on structural generalization on the COGS benchmark. For the first time, we show that a model without an inductive bias provided by trees achieves high accuracy on generalization to deeper recursion.
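The paper's relaxation of permutation prediction is a regularized linear program; a widely used relaxation in the same spirit (though not necessarily the paper's exact solver) is Sinkhorn normalization, which turns raw alignment scores into an approximately doubly-stochastic "soft permutation":

```python
import numpy as np

def sinkhorn(scores: np.ndarray, n_iters: int = 50, tau: float = 0.1) -> np.ndarray:
    """scores: (n, n) raw alignment scores; returns an approx. doubly-stochastic P."""
    P = np.exp(scores / tau)          # temperature tau controls sharpness
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)   # normalize rows
        P /= P.sum(axis=0, keepdims=True)   # normalize columns
    return P

P = sinkhorn(np.random.randn(5, 5))
hard_perm = P.argmax(axis=1)          # decode a hard ordering if one is needed
```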
A Practical Toolkit for Multilingual Question and Answer Generation ; Generating questions along with associated answers from a text has applications in several domains, such as creating reading comprehension tests for students, or improving document search by providing auxiliary questions and answers based on the query. Training models for question and answer generation (QAG) is not straightforward, due to the expected structured output (i.e., a list of question and answer pairs), as it requires more than generating a single sentence. This results in a small number of publicly accessible QAG models. In this paper, we introduce AutoQG, an online service for multilingual QAG, along with lmqg, an all-in-one Python package for model fine-tuning, generation, and evaluation. We also release QAG models in eight languages fine-tuned on a few variants of pretrained encoder-decoder language models, which can be used online via AutoQG or locally via lmqg. With these resources, practitioners of any level can benefit from a toolkit that includes a web interface for end users, and easy-to-use code for developers who require custom models or fine-grained controls for generation.
How Generative Models Improve LOS Estimation in 6G Non-Terrestrial Networks ; With the advent of 5G and the anticipated arrival of 6G, there has been a growing research interest in combining mobile networks with Non-Terrestrial Network platforms, such as low-Earth-orbit and geosynchronous equatorial orbit satellites, to provide broader coverage for a wide range of applications. However, integrating these platforms is challenging because Line-of-Sight (LOS) estimation is required for both inter-satellite and satellite-to-terrestrial segment links. Machine Learning (ML) techniques have shown promise in channel modeling and LOS estimation, but they require large datasets for model training, which can be difficult to obtain. In addition, network operators may be reluctant to disclose their network data due to privacy concerns. Therefore, alternative data collection techniques are needed. In this paper, a framework is proposed that uses generative models to generate synthetic data for LOS estimation in non-terrestrial 6G networks. Specifically, the authors show that generative models can be trained with a small available dataset to generate large datasets that can be used to train ML models for LOS estimation. Furthermore, since the generated synthetic data does not contain identifying information from the original dataset, it can be made publicly available without violating privacy.
LICGAN: Language Information Conditioned Graph Generative GAN Model ; Deep generative models for natural language data offer a new angle on the problem of graph synthesis: by optimizing differentiable models that directly generate graphs, it is possible to sidestep expensive search procedures in the discrete and vast space of possible graphs. We introduce LICGAN, an implicit, likelihood-free generative model for small graphs that circumvents the need for expensive graph matching procedures. Our method takes as input a natural language query and, using a combination of language modelling and Generative Adversarial Networks (GANs), returns a graph that closely matches the description in the query. We combine our approach with a reward network to further enhance the graph generation with desired properties. Our experiments show that LICGAN does well on metrics such as PropMatch and Closeness, with scores of 0.36 and 0.48. We also show that LICGAN performs on par with ChatGPT, which obtains scores of 0.40 and 0.42. We also conduct a few experiments to demonstrate the robustness of our method, while also highlighting a few interesting caveats of the model.
Generative Plug and Play: Posterior Sampling for Inverse Problems ; Over the past decade, Plug-and-Play (PnP) has become a popular method for reconstructing images using a modular framework consisting of a forward and a prior model. The great strength of PnP is that an image denoiser can be used as a prior model, while the forward model can be implemented using more traditional physics-based approaches. However, a limitation of PnP is that it reconstructs only a single deterministic image. In this paper, we introduce Generative Plug-and-Play (GPnP), a generalization of PnP that samples from the posterior distribution. As with PnP, GPnP has a modular framework using a physics-based forward model and an image-denoising prior model. However, in GPnP these models are extended to become "proximal generators", which sample from associated distributions. GPnP applies these proximal generators in alternation to produce samples from the posterior. We present experimental simulations using the well-known BM3D denoiser. Our results demonstrate that the GPnP method is robust, easy to implement, and produces intuitively reasonable samples from the posterior for sparse interpolation and tomographic reconstruction. Code to accompany this paper is available at https://github.com/gbuzzard/generative-pnp-allerton.
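A bare skeleton of the alternation described above, with both proximal generators left as user-supplied callables; the paper's specific proximal generators, noise calibration, and parameters are not reproduced here:

```python
def gpnp_sample(y, prior_prox_gen, data_prox_gen, x0, n_sweeps: int = 100):
    """Alternate two stochastic proximal maps to draw one posterior sample.

    prior_prox_gen(x)    -> x'  e.g., a denoise(x + sigma * noise)-style step
    data_prox_gen(x, y)  -> x'  pulls the iterate toward the measurements y
    """
    x = x0.copy()
    for _ in range(n_sweeps):
        x = prior_prox_gen(x)      # sample from the prior-side distribution
        x = data_prox_gen(x, y)    # sample from the data-fidelity-side distribution
    return x
```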
Hyperbolic Graph Diffusion Model for Molecule Generation ; Recently, diffusion models have achieved remarkable performance in data generation, e.g., generating high-quality images. Nevertheless, chemistry molecules often have complex non-Euclidean spatial structures, with behavior that changes dynamically and unpredictably. Most existing diffusion models rely heavily on computing the probability distribution, i.e., a Gaussian distribution, in Euclidean space, which cannot capture the internal non-Euclidean structures of molecules, especially the hierarchical structures of the implicit manifold surface represented by molecules. It has been observed that complex hierarchical structures become more prominent and easier to capture in hyperbolic embedding space. In order to leverage both the data generation power of diffusion models and the strong capability of hyperbolic embeddings to extract complex geometric features, we propose to extend the diffusion model to hyperbolic manifolds for molecule generation, namely, the Hyperbolic Graph Diffusion Model (HGDM). The proposed HGDM employs a hyperbolic variational autoencoder to generate hyperbolic hidden representations of nodes, and then a score-based hyperbolic graph neural network is used to learn the distribution in hyperbolic space. Numerical experimental results show that the proposed HGDM achieves higher performance on several molecular datasets, compared with state-of-the-art methods.
Viewset Diffusion: (0-)Image-Conditioned 3D Generative Models from 2D Data ; We present Viewset Diffusion, a diffusion-based generator that outputs 3D objects while using only multi-view 2D data for supervision. We note that there exists a one-to-one mapping between viewsets, i.e., collections of several 2D views of an object, and 3D models. Hence, we train a diffusion model to generate viewsets, but design the neural network generator to reconstruct internally corresponding 3D models, thus generating those too. We fit a diffusion model to a large number of viewsets for a given category of objects. The resulting generator can be conditioned on zero, one or more input views. Conditioned on a single view, it performs 3D reconstruction accounting for the ambiguity of the task and allowing multiple solutions compatible with the input to be sampled. The model performs reconstruction efficiently, in a feed-forward manner, and is trained using only rendering losses, with as few as three views per viewset. Project page: szymanowiczs.github.io/viewset-diffusion.
Training Diffusion Classifiers with Denoising Assistance ; Score-matching and diffusion models have emerged as state-of-the-art generative models for both conditional and unconditional generation. Classifier-guided diffusion models are created by training a classifier on samples obtained from the forward diffusion process (i.e., from data to noise). In this paper, we propose denoising-assisted (DA) classifiers, wherein the diffusion classifier is trained using both noisy and denoised examples as simultaneous inputs to the model. We differentiate between denoising-assisted (DA) classifiers and noisy classifiers, which are diffusion classifiers that are only trained on noisy examples. Our experiments on CIFAR-10 and ImageNet show that DA classifiers improve over noisy classifiers both quantitatively, in terms of generalization to test data, and qualitatively, in terms of perceptually-aligned classifier gradients and generative modeling metrics. Finally, we describe a semi-supervised framework for training diffusion classifiers, and our experiments, which also include positive-unlabeled settings, demonstrate improved generalization of DA classifiers over noisy classifiers.
Hierarchical Neural Coding for Controllable CAD Model Generation ; This paper presents a novel generative model for Computer-Aided Design (CAD) that (1) represents high-level design concepts of a CAD model as a three-level hierarchical tree of neural codes, from global part arrangement down to local curve geometry, and (2) controls the generation or completion of CAD models by specifying the target design using a code tree. Concretely, a novel variant of a vector-quantized VAE with masked skip connections extracts design variations as neural codebooks at three levels. Two-stage cascaded autoregressive transformers learn to generate code trees from incomplete CAD models and then complete CAD models following the intended design. Extensive experiments demonstrate superior performance on conventional tasks such as random generation, while enabling novel interaction capabilities on conditional generation tasks. The code is available at https://github.com/samxuxiang/hnc-cad.
Polyffusion: A Diffusion Model for Polyphonic Score Generation with Internal and External Controls ; We propose Polyffusion, a diffusion model that generates polyphonic music scores by regarding music as image-like piano-roll representations. The model is capable of controllable music generation with two paradigms: internal control and external control. Internal control refers to the process in which users pre-define a part of the music and then let the model infill the rest, similar to the task of masked music generation (or music inpainting). External control conditions the model on external yet related information, such as chord, texture, or other features, via the cross-attention mechanism. We show that by using internal and external controls, Polyffusion unifies a wide range of music creation tasks, including melody generation given accompaniment, accompaniment generation given melody, arbitrary music segment inpainting, and music arrangement given chords or textures. Experimental results show that our model significantly outperforms existing Transformer and sampling-based baselines, and that using pretrained disentangled representations as external conditions yields more effective controls.
A multi-scale and multi-criteria Generative Adversarial Network to synthesize 1-dimensional turbulent fields ; This article introduces a new neural network stochastic model to generate a 1-dimensional stochastic field with turbulent velocity statistics. Both the model architecture and the training procedure are grounded in the Kolmogorov and Obukhov statistical theories of fully developed turbulence, guaranteeing descriptions of (1) energy distribution, (2) energy cascade and (3) intermittency across scales, in agreement with experimental observations. The model is a Generative Adversarial Network with multiple multi-scale optimization criteria. First, we use three physics-based criteria: the variance, skewness and flatness of the increments of the generated field, which retrieve respectively the turbulent energy distribution, energy cascade and intermittency across scales. Second, the Generative Adversarial Network criterion, based on reproducing statistical distributions, is used on segments of different lengths of the generated field. Furthermore, to mimic the multi-scale decompositions frequently used in turbulence studies, the model architecture is fully convolutional with kernel sizes varying along the layers of the model. To train our model we use turbulent velocity signals from grid turbulence at the Modane wind tunnel.
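The three physics-based criteria are straightforward to compute; here is a hedged sketch of the per-scale increment statistics and how they could be matched between real and generated fields as extra loss terms (the scale choices and the simple L1 matching are illustrative, not the paper's exact losses):

```python
import numpy as np

def increment_stats(u: np.ndarray, scales=(1, 2, 4, 8, 16)):
    """u: (n_samples, length) velocity fields; per-scale increment statistics."""
    stats = {}
    for ell in scales:
        du = u[:, ell:] - u[:, :-ell]          # increments at scale ell
        var = du.var()                         # energy distribution across scales
        skew = (du ** 3).mean() / var ** 1.5   # signature of the energy cascade
        flat = (du ** 4).mean() / var ** 2     # flatness > 3 indicates intermittency
        stats[ell] = np.array([var, skew, flat])
    return stats

def physics_loss(u_real: np.ndarray, u_fake: np.ndarray) -> float:
    s_r, s_f = increment_stats(u_real), increment_stats(u_fake)
    return float(sum(np.abs(s_r[ell] - s_f[ell]).sum() for ell in s_r))

print(physics_loss(np.random.randn(8, 256), np.random.randn(8, 256)))
```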
Informed Named Entity Recognition Decoding for Generative Language Models ; Ever-larger language models with ever-increasing capabilities are by now well-established text processing tools. Alas, information extraction tasks such as named entity recognition are still largely unaffected by this progress, as they are primarily based on the previous generation of encoder-only transformer models. Here, we propose a simple yet effective approach, Informed Named Entity Recognition Decoding (iNERD), which treats named entity recognition as a generative process. It leverages the language understanding capabilities of recent generative models in a future-proof manner and employs an informed decoding scheme incorporating the restricted nature of information extraction into open-ended text generation, improving performance and eliminating any risk of hallucinations. We coarse-tune our model on a merged named entity corpus to strengthen its performance, evaluate five generative language models on eight named entity recognition datasets, and achieve remarkable results, especially in an environment with an unknown entity class set, demonstrating the adaptability of the approach.
Instruction Position Matters in Sequence Generation with Large Language Models ; Large language models (LLMs) are capable of performing conditional sequence generation tasks, such as translation or summarization, through instruction fine-tuning. The fine-tuning data is generally sequentially concatenated from a specific task instruction, an input sentence, and the corresponding response. Considering the locality modeled by the self-attention mechanism of LLMs, these models face the risk of instruction forgetting when generating responses for long input sentences. To mitigate this issue, we propose enhancing the instruction-following capability of LLMs by shifting the position of task instructions after the input sentences. Theoretical analysis suggests that our straightforward method can alter the model's learning focus, thereby emphasizing the training of instruction-following capabilities. Concurrently, experimental results demonstrate that our approach consistently outperforms traditional settings across various model scales (1B, 7B, 13B) and different sequence generation tasks (translation and summarization), without any additional data or annotation costs. Notably, our method significantly improves the zero-shot performance on conditional sequence generation, e.g., by up to 9.7 BLEU points on WMT zero-shot translation tasks.
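The proposed change is purely about how fine-tuning examples are formatted; a minimal sketch, with field names and separators as illustrative assumptions:

```python
def build_example(instruction: str, source: str, response: str,
                  instruction_last: bool = True) -> str:
    """Concatenate a fine-tuning example; instruction_last=True is the shifted order."""
    prefix = (f"{source}\n{instruction}" if instruction_last
              else f"{instruction}\n{source}")
    return f"{prefix}\n{response}"

# Conventional order vs. the shifted order studied in the paper:
conventional = build_example("Translate to German:", "The cat sleeps.",
                             "Die Katze schläft.", instruction_last=False)
shifted = build_example("Translate to German:", "The cat sleeps.",
                        "Die Katze schläft.", instruction_last=True)
```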
Detecting Out-of-Context Image-Caption Pairs in News: A Counter-Intuitive Method ; The growth of misinformation and re-contextualized media in social media and news leads to an increasing need for fact-checking methods. Concurrently, the advancement in generative models makes cheapfakes and deepfakes both easier to make and harder to detect. In this paper, we present a novel approach using generative image models to our advantage for detecting Out-of-Context (OOC) use of image-caption pairs in news. We present two new datasets with a total of 6800 images generated using two different generative models: (1) DALL-E 2 and (2) Stable Diffusion. We are confident that the method proposed in this paper can further research on generative models in the field of cheapfake detection, and that the resulting datasets can be used to train and evaluate new models aimed at detecting cheapfakes. We run a preliminary qualitative and quantitative analysis to evaluate the performance of each image generation model for this task, and evaluate a handful of methods for computing image similarity.
DiffuGen: Adaptable Approach for Generating Labeled Image Datasets using Stable Diffusion Models ; Generating high-quality labeled image datasets is crucial for training accurate and robust machine learning models in the field of computer vision. However, the process of manually labeling real images is often time-consuming and costly. To address these challenges associated with dataset generation, we introduce DiffuGen, a simple and adaptable approach that harnesses the power of stable diffusion models to create labeled image datasets efficiently. By leveraging stable diffusion models, our approach not only ensures the quality of generated datasets but also provides a versatile solution for label generation. In this paper, we present the methodology behind DiffuGen, which combines the capabilities of diffusion models with two distinct labeling techniques: unsupervised and supervised. Distinctively, DiffuGen employs prompt templating for adaptable image generation and textual inversion to enhance diffusion model capabilities.
Optimal Nonlinearities Improve Generalization Performance of Random Features ; A random feature model with a nonlinear activation function has been shown to perform asymptotically equivalently to a Gaussian model in terms of training and generalization errors. Analysis of the equivalent model reveals an important yet not fully understood role played by the activation function. To address this issue, we study the parameters of the equivalent model to achieve improved generalization performance for a given supervised learning problem. We show that parameters acquired from the Gaussian model enable us to define a set of optimal nonlinearities. We provide two example classes from this set, e.g., second-order polynomial and piecewise linear functions. These functions are optimized to improve generalization performance regardless of the actual form. We experiment with regression and classification problems, including synthetic and real (e.g., CIFAR-10) data. Our numerical results validate that the optimized nonlinearities achieve better generalization performance than widely used nonlinear functions such as ReLU. Furthermore, we illustrate that the proposed nonlinearities also mitigate the so-called double descent phenomenon, which is known as the non-monotonic behavior of generalization performance with respect to sample size and model size.
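A minimal sketch of a random-features regression in which the activation can be swapped in and out, e.g., ReLU versus a piecewise-linear alternative; the clipping function below is an illustrative stand-in, not the paper's optimized nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p, lam = 200, 20, 400, 1e-2
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0] + 0.5 * rng.standard_normal(n))  # toy targets
W = rng.standard_normal((d, p)) / np.sqrt(d)          # random (frozen) first layer

relu = lambda z: np.maximum(z, 0.0)
piecewise = lambda z: np.clip(z, -1.0, 1.0)           # an illustrative alternative

for name, act in (("relu", relu), ("piecewise", piecewise)):
    F = act(X @ W)                                    # random features
    beta = np.linalg.solve(F.T @ F + lam * np.eye(p), F.T @ y)  # ridge fit
    print(name, ((F @ beta - y) ** 2).mean())         # training error per activation
```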
Nonlinear evolution of coarse-grained quantum systems with generalized purity constraints ; Constrained quantum dynamics is used to propose a nonlinear dynamical equation for pure states of a generalized coarse-grained system. The relevant constraint is given either by the generalized purity or by the generalized invariant fluctuation, and the coarse-grained pure states correspond to the generalized coherent (i.e., generalized non-entangled) states. An open-system model of the coarse-graining is discussed. It is shown that in this model, and in the weak coupling limit, the constrained dynamical equations coincide with an equation for pointer states, based on the Hilbert-Schmidt distance, that was previously suggested in the context of decoherence theory.
Simple Cellular Automata-Based Linear Models for the Shrinking Generator ; Structural properties of two well-known families of keystream generators, Shrinking Generators and Cellular Automata, have been analyzed. Emphasis is on the equivalence of the binary sequences obtained from both kinds of generators. In fact, Shrinking Generators (SGs) can be identified with a subset of linear Cellular Automata (mainly rule 90, rule 150, or a hybrid combination of both rules). The linearity of these cellular models can be advantageously used in the cryptanalysis of those keystream generators.
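For readers unfamiliar with the construction, a toy sketch of a shrinking generator built from two LFSRs: bits of the data register A are kept only when the selection register S outputs 1. Register lengths, taps, and seeds here are illustrative, not cryptographic recommendations:

```python
def lfsr(state, taps):
    """Yield an infinite bit stream from a simple Fibonacci-style LFSR."""
    while True:
        fb = 0
        for t in taps:
            fb ^= state[t]          # feedback bit = XOR of tapped positions
        yield state[-1]             # output the last bit
        state = [fb] + state[:-1]   # shift the feedback bit in at the front

def shrinking_generator(n_bits):
    a = lfsr([1, 0, 0, 1, 1], taps=(0, 2))     # data register A
    s = lfsr([0, 1, 1, 1, 0, 1], taps=(0, 4))  # selection register S
    out = []
    while len(out) < n_bits:
        bit_a, bit_s = next(a), next(s)
        if bit_s == 1:              # keep A's bit only when S emits a 1
            out.append(bit_a)
    return out

print(shrinking_generator(16))
```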
Generic expansions of countable models ; We compare two different notions of generic expansions of countable saturated structures. One kind of genericity is related to model companions and to amalgamation constructions à la Hrushovski-Fraïssé. Another notion of generic expansion is defined via topological properties and Baire category theory. The second type of genericity was first formulated by Truss for automorphisms. We work with a later generalization, due to Ivanov, to finite tuples of predicates and functions.
Energy conditions in generalized teleparallel gravity models ; In this paper, we investigate the energy conditions (including the null, weak, strong, and dominant energy conditions) in generalized teleparallel gravities, including pure F(T) gravity, teleparallel gravity with a non-minimally coupled scalar field, and F(T) gravity with a non-minimally coupled scalar field. In particular, we apply them to Friedmann-Robertson-Walker (FRW) cosmology and obtain some corresponding results. Using two specific phenomenological forms of F(T), we show that some of the energy conditions are violated.
Anisotropic Inflation with General Potentials ; Anomalies in recent observational data indicate that there might be some anisotropic hair generated during inflation. To obtain general information about the effects of this anisotropic hair on inflation models, we studied anisotropic inflation models that involve one vector and one scalar field, using several types of potentials. We determined the general relationship between the degree of anisotropy and the fractions of the vector and scalar fields, and concluded that the anisotropies behave independently of the potentials. We also generalized our study to the case of multi-directional anisotropies.
Capturing Distribution Grid-Integrated Solar Variability and Uncertainty Using Microgrids ; The variable nature of solar generation and the inherent uncertainty in solar generation forecasts are two challenging issues for utility grids, especially as distribution-grid-integrated solar generation proliferates. This paper proposes utilizing microgrids as local solutions for mitigating these negative drawbacks and helping the utility grid host a higher penetration of solar generation. A microgrid optimal scheduling model based on robust optimization is developed to capture solar generation variability and uncertainty. Numerical simulations on a test feeder indicate the effectiveness of the proposed model.
A Syntactic Neural Model for General-Purpose Code Generation ; We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to the generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
Generalized Three-Form Field ; A generalized three-form field is an extended version of the canonical three-form field, obtained by considering a Lagrangian that is a function of the kinetic and mass terms of the three-form field. In this work, we investigated cosmological models arising from this generalized three-form field. It is found that one can use the three-form field to interpret non-relativistic matter without the caustic problem. Moreover, by analyzing the dynamical system, a viable model of dark energy due to the generalized three-form field can be obtained.
The octet rule in chemical space: Generating virtual molecules ; We present a generator of virtual molecules that selects valid chemistry on the basis of the octet rule. We also introduce a mesomer group key that allows fast detection of duplicates among the generated structures. Compared to existing approaches, our model is simpler and faster, generates new chemistry, and avoids invalid chemistry. Its versatility is illustrated by the correct generation of molecules containing third-row elements and a surprisingly adept handling of complex boron chemistry. Without any empirical parameters, our model is designed to be valid also in unexplored regions of chemical space. One first unexpected finding is the high prevalence of dipolar structures among the generated molecules.
Nonlocality in Quantum Field Theory due to General Relativity ; We show that General Relativity coupled to a quantum field theory generically leads to nonlocal effects in the matter sector. These nonlocal effects can be described by nonlocal higher-dimensional operators which, remarkably, have an approximate shift symmetry. When applied to inflationary models, our results imply that small non-Gaussianities are a generic feature of models based on General Relativity coupled to matter fields. However, these effects are too small to be observable in the Cosmic Microwave Background.
C-RNN-GAN: Continuous recurrent neural networks with adversarial training ; Generative adversarial networks have been proposed as a way of efficiently training deep generative neural networks. We propose a generative adversarial model that works on continuous sequential data, and apply it by training it on a collection of classical music. We conclude that it generates music that sounds better and better as the model is trained, report statistics on the generated music, and let the reader judge the quality by downloading the generated songs.
Synthesizing Novel Pairs of Image and Text ; Generating novel pairs of image and text is a problem that combines computer vision and natural language processing. In this paper, we present strategies for generating novel image and caption pairs based on existing captioning datasets. The model takes advantage of recent advances in generative adversarial networks and sequence-to-sequence modeling. We make generalizations to generate paired samples from multiple domains. Furthermore, we study cycles (generating from image to text and then back to image, and vice versa) as well as their connection with autoencoders.
Deep Extrapolation for Attribute-Enhanced Generation ; Attribute extrapolation in sample generation is challenging for deep neural networks operating beyond the training distribution. We formulate a new task for extrapolation in sequence generation, focusing on natural language and proteins, and propose GENhance, a generative framework that enhances attributes through a learned latent space. Trained on movie reviews and a computed protein stability dataset, GENhance can generate strongly positive text reviews and highly stable protein sequences without being exposed to similar data during training. We release our benchmark tasks and models to contribute to the study of generative modeling extrapolation and data-driven design in biology and chemistry.
Generalization Error of GAN from the Discriminator's Perspective ; The generative adversarial network (GAN) is a well-known model for learning high-dimensional distributions, but the mechanism behind its generalization ability is not understood. In particular, GANs are vulnerable to the memorization phenomenon, the eventual convergence to the empirical distribution. We consider a simplified GAN model with the generator replaced by a density, and analyze how the discriminator contributes to generalization. We show that, with early stopping, the generalization error measured by the Wasserstein metric escapes the curse of dimensionality, even though in the long term memorization is inevitable. In addition, we present a hardness-of-learning result for WGAN.
Ensemble Learning For Mega Man Level Generation ; Procedural content generation via machine learning (PCGML) is the process of procedurally generating game content using models trained on existing game content. PCGML methods can struggle to capture the true variance present in the underlying data with a single model. In this paper, we investigate the use of ensembles of Markov chains for procedurally generating Mega Man levels. We conduct an initial investigation of our approach and evaluate it on measures of playability and stylistic similarity in comparison to a non-ensemble, existing Markov chain approach.
Generating Image Sequence from Description with LSTM Conditional GAN ; Generating images from word descriptions is a challenging task. Generative adversarial networks (GANs) have been shown to be able to generate realistic images of real-life objects. In this paper, we propose a new neural network architecture, an LSTM Conditional Generative Adversarial Network, to generate images of real-life objects. Our proposed model is trained on the Oxford-102 Flowers and Caltech-UCSD Birds-200-2011 datasets. We demonstrate that our proposed model produces better results, surpassing other state-of-the-art approaches.
Generating Graphs with Symmetry ; In the field of complex networks and graph theory, new results are typically tested on graphs generated by a variety of algorithms, such as the Erdős–Rényi model or the Barabási–Albert model. Unfortunately, most graph generating algorithms do not typically create graphs with symmetries, which have been shown to play an important role in network dynamics. Here, we present an algorithm to generate graphs with prescribed symmetries. The algorithm can also be used to generate graphs with a prescribed equitable partition, but possibly without any symmetry. We also use our graph generator to examine the recently raised question about the relation between the orbits of the automorphism group and a graph's minimal equitable partition.
Generative Models For Deep Learning with Very Scarce Data ; The goal of this paper is to deal with a data scarcity scenario in which deep learning techniques tend to fail. We compare the use of two well-established techniques, Restricted Boltzmann Machines and Variational Autoencoders, as generative models in order to increase the training set in a classification framework. Essentially, we rely on Markov Chain Monte Carlo (MCMC) algorithms for generating new samples. We show that generalization can be improved by comparing this methodology to other state-of-the-art techniques, e.g., semi-supervised learning with ladder networks. Furthermore, we show that RBMs are better than VAEs at generating new samples for training a classifier with good generalization capabilities.
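A minimal sketch of the MCMC sampling step for the RBM case: block Gibbs sampling alternates between hidden and visible units of a trained binary RBM. The weights below are random placeholders standing in for a trained model's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 64, 32
W = rng.standard_normal((n_vis, n_hid)) * 0.1  # placeholder for trained weights
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)    # visible / hidden biases

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def gibbs_sample(v, n_steps=200):
    """Alternate h ~ p(h|v) and v ~ p(v|h) to draw a new visible sample."""
    for _ in range(n_steps):
        p_h = sigmoid(v @ W + b_h)
        h = (rng.random(n_hid) < p_h).astype(float)
        p_v = sigmoid(h @ W.T + b_v)
        v = (rng.random(n_vis) < p_v).astype(float)
    return v

new_sample = gibbs_sample(rng.integers(0, 2, n_vis).astype(float))
```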
Generating protein sequences from antibiotic resistance genes data using Generative Adversarial Networks ; We introduce a method to generate synthetic protein sequences which are predicted to be resistant to certain antibiotics. We used 6,023 genes that were predicted to be antibiotic-resistant in the intestinal region of the human gut as input to a Wasserstein generative adversarial network (WGAN) model, a variant of the original generative adversarial model known to perform efficiently at mimicking the distribution of real data. The model then generates new data that is similar in style to the original training data.
OPAL-Net: A Generative Model for Part-based Object Layout Generation ; We propose OPAL-Net, a novel hierarchical architecture for part-based layout generation of objects from multiple categories using a single unified model. We adopt a coarse-to-fine strategy involving semantically conditioned autoregressive generation of bounding-box layouts and pixel-level part layouts for objects. We use Graph Convolutional Networks and Deep Recurrent Networks along with custom-designed Conditional Variational Autoencoders to enable flexible, diverse and category-aware generation of object layouts. We train OPAL-Net on the PASCAL-Parts dataset. The generated samples and corresponding evaluation scores demonstrate the versatility of OPAL-Net compared to ablative variants and baselines.
COD3S: Diverse Generation with Discrete Semantic Signatures ; We present COD3S, a novel method for generating semantically diverse sentences using neural sequence-to-sequence (seq2seq) models. Conditioned on an input, seq2seq models typically produce semantically and syntactically homogeneous sets of sentences and thus perform poorly on one-to-many sequence generation tasks. Our two-stage approach improves output diversity by conditioning generation on locality-sensitive hash (LSH)-based semantic sentence codes whose Hamming distances highly correlate with human judgments of semantic textual similarity. Though it is generally applicable, we apply COD3S to causal generation, the task of predicting a proposition's plausible causes or effects. We demonstrate through automatic and human evaluation that responses produced using our method exhibit improved diversity without degrading task performance.
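The LSH codes themselves are simple to illustrate: sign patterns of random projections of a sentence embedding, whose Hamming distances track cosine similarity. The encoder producing the embedding is assumed here, and the code length is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits = 768, 16
hyperplanes = rng.standard_normal((n_bits, dim))  # fixed random projections

def lsh_code(embedding: np.ndarray) -> str:
    """Binary signature of an embedding: one bit per hyperplane side."""
    bits = (hyperplanes @ embedding > 0).astype(int)
    return "".join(map(str, bits))  # e.g., prepended as a control code at decoding

e1, e2 = rng.standard_normal(dim), rng.standard_normal(dim)
hamming = sum(a != b for a, b in zip(lsh_code(e1), lsh_code(e2)))
print(hamming)  # small for semantically similar embeddings, large otherwise
```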
Generative Layout Modeling using Constraint Graphs ; We propose a new generative model for layout generation. We generate layouts in three steps. First, we generate the layout elements as nodes in a layout graph. Second, we compute constraints between layout elements as edges in the layout graph. Third, we solve for the final layout using constrained optimization. For the first two steps, we build on recent transformer architectures. The layout optimization implements the constraints efficiently. We show three practical contributions compared to the state of the art: our work requires no user input, produces higher-quality layouts, and enables many novel capabilities for conditional layout generation.
Fractal Dimension Generalization Measure ; Developing a robust generalization measure for the performance of machine learning models is an important and challenging task. A lot of recent research in the area focuses on the model decision boundary when predicting generalization. In this paper, as part of the Predicting Generalization in Deep Learning competition, we analyse the complexity of decision boundaries using the concept of fractal dimension and develop a generalization measure based on that technique.
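For intuition on the underlying measurement, a hedged sketch of a box-counting estimate of fractal dimension for a 2-D point set (e.g., points sampled near a decision boundary); the slope of log N(s) versus log s approximates the dimension, and the scale choices are illustrative:

```python
import numpy as np

def box_counting_dimension(points: np.ndarray, scales=(2, 4, 8, 16, 32)):
    """points: (n, 2) in [0, 1]^2; fit log(box count) vs. log(scale)."""
    counts = []
    for s in scales:
        boxes = np.unique(np.floor(points * s).astype(int), axis=0)
        counts.append(len(boxes))   # number of occupied s-by-s grid cells
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

pts = np.random.rand(5000, 2)        # a filled square should give roughly 2
print(box_counting_dimension(pts))
```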