Truncated Variational Sampling for Black Box Optimization of Generative Models ; We investigate the optimization of two probabilistic generative models with binary latent variables using a novel variational EM approach. The approach distinguishes itself from previous variational approaches by using latent states as variational parameters. Here we use efficient and general-purpose sampling procedures to vary the latent states, and investigate the black-box applicability of the resulting optimization procedure. For general-purpose applicability, samples are drawn from approximate marginal distributions of the considered generative model as well as from the model's prior distribution. As such, variational sampling is defined in a generic form and is directly executable for a given model. As a proof of concept, we then apply the novel procedure (A) to Binary Sparse Coding, a model with continuous observables, and (B) to basic Sigmoid Belief Networks, which are models with binary observables. Numerical experiments verify that the investigated approach efficiently as well as effectively increases a variational free energy objective without requiring any additional analytical steps.
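The following is a minimal, self-contained sketch of the truncated variational sampling idea described in the abstract above: each data point keeps a small set of binary latent states, new candidate states are drawn from the model's prior, and the set is updated by keeping the states with the highest joint probability. The toy Binary-Sparse-Coding-like model, the fixed dictionary W, and the helper names (log_joint, update_states) are illustrative assumptions, not the authors' implementation.

```python
# Toy model: binary latents s in {0,1}^H, Gaussian observables y ~ N(W s, sigma^2 I).
import numpy as np

rng = np.random.default_rng(0)
H, D, N, K = 8, 16, 100, 10          # latents, observables, data points, states kept per point
W = rng.normal(size=(D, H))          # dictionary (held fixed here; an M-step would update it)
pi, sigma = 0.2, 1.0                 # Bernoulli prior and observation noise

def log_joint(y, S):
    """log p(y, s) up to a constant, for each binary state s (rows of S)."""
    log_prior = (S * np.log(pi) + (1 - S) * np.log(1 - pi)).sum(axis=1)
    log_lik = -0.5 * ((y - S @ W.T) ** 2).sum(axis=1) / sigma**2
    return log_prior + log_lik

def update_states(y, S_old):
    """Vary the truncated set of states: propose samples from the prior,
    merge with the old set, and keep the K states with highest joint probability."""
    proposals = (rng.random((K, H)) < pi).astype(float)   # samples from the prior
    candidates = np.unique(np.vstack([S_old, proposals]), axis=0)
    keep = np.argsort(log_joint(y, candidates))[-K:]
    return candidates[keep]

# one E-step-like sweep over a toy dataset
Y = rng.normal(size=(N, D))
states = [(rng.random((K, H)) < pi).astype(float) for _ in range(N)]
states = [update_states(Y[n], states[n]) for n in range(N)]
```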
MelNet: A Generative Model for Audio in the Frequency Domain ; Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps. While long-range dependencies are difficult to model directly in the time domain, we show that they can be more tractably modelled in two-dimensional time-frequency representations such as spectrograms. By leveraging this representational advantage, in conjunction with a highly expressive probabilistic model and a multiscale generation procedure, we design a model capable of generating high-fidelity audio samples which capture structure at timescales that time-domain models have yet to achieve. We apply our model to a variety of audio generation tasks, including unconditional speech generation, music generation, and text-to-speech synthesis, showing improvements over previous approaches in both density estimates and human judgments.
Reconstruction and Membership Inference Attacks against Generative Models ; We present two information leakage attacks that outperform previous work on membership inference against generative models. The first attack allows membership inference without assumptions on the type of the generative model. Contrary to previous evaluation metrics for generative models, like Kernel Density Estimation, it only considers samples of the model which are close to training data records. The second attack specifically targets Variational Autoencoders, achieving high membership inference accuracy. Furthermore, previous work mostly considers membership inference adversaries who perform single-record membership inference. We argue for considering regulatory actors who perform set membership inference to identify the use of specific datasets for training. The attacks are evaluated on two generative model architectures, Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), trained on standard image datasets. Our results show that the two attacks yield success rates superior to previous work on most datasets while at the same time making only very mild assumptions. We envision the two attacks, in combination with the membership inference attack type formalization, as especially useful, for example to enforce data privacy standards and to automatically assess model quality in machine-learning-as-a-service setups. In practice, our work motivates the use of GANs, since they prove less vulnerable to information leakage attacks while producing detailed samples.
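As a rough illustration of the kind of attack surface discussed above, the sketch below scores candidate records by their distance to the nearest sample drawn from a generative model, predicting records with unusually close generated neighbours to be training members. The Gaussian stand-in data and the median threshold are assumptions of this sketch, not the paper's attacks.

```python
import numpy as np

def membership_scores(candidates, generated):
    """Negative distance to the nearest generated sample (higher = more likely a member)."""
    # candidates: (n, d), generated: (m, d)
    d2 = ((candidates[:, None, :] - generated[None, :, :]) ** 2).sum(-1)
    return -np.sqrt(d2.min(axis=1))

rng = np.random.default_rng(1)
generated = rng.normal(size=(5000, 8))        # stand-in for samples drawn from the target model
members = rng.normal(size=(50, 8)) * 0.9      # stand-in for candidate records that were in training
non_members = rng.normal(size=(50, 8)) * 1.1  # stand-in for candidate records that were not

scores = membership_scores(np.vstack([members, non_members]), generated)
threshold = np.median(scores)                 # in practice chosen on a calibration set
predicted_member = scores > threshold
```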
Generating new concepts with hybrid neuro-symbolic models ; Human conceptual knowledge supports the ability to generate novel yet highly structured concepts, and the form of this conceptual knowledge is of great interest to cognitive scientists. One tradition has emphasized structured knowledge, viewing concepts as embedded in intuitive theories or organized in complex symbolic knowledge structures. A second tradition has emphasized statistical knowledge, viewing conceptual knowledge as emerging from the rich correlational structure captured by training neural networks and other statistical models. In this paper, we explore a synthesis of these two traditions through a novel neuro-symbolic model for generating new concepts. Using simple visual concepts as a testbed, we bring together neural networks and symbolic probabilistic programs to learn a generative model of novel handwritten characters. Two alternative models are explored with more generic neural network architectures. We compare each of these three models for their likelihoods on held-out character classes and for the quality of their productions, finding that our hybrid model learns the most convincing representation and generalizes further from the training observations.
Analytical Inverter-Based Distributed Generator Model for Power Flow Analysis ; Quantifying the impact of inverter-based distributed generation (DG) sources on power-flow distribution system cases is arduous. Existing distribution system tools predominantly model distributed generation sources as either negative PQ loads or as PV generators and then employ a PV-PQ switching algorithm to mimic Volt-VAR support. These models neglect the unique characteristics of inverter-based distributed generation sources, have scalability and convergence issues, and are ill-suited for increasing solar penetration scenarios. This work proposes an inverter-based DG model accounting for the inverter's topology, sensing position, and control strategies. The model extends recently introduced analytical positive sequence generator models for three-phase studies. The use of circuit-simulation-based heuristics helps achieve robust convergence. Simulation of the PG&E prototypical feeders using a prototype solver demonstrates the model's accuracy and efficacy.
Continuous Graph Flow ; In this paper, we propose Continuous Graph Flow, a generative continuous flow based method that aims to model complex distributions of graphstructured data. Once learned, the model can be applied to an arbitrary graph, defining a probability density over the random variables represented by the graph. It is formulated as an ordinary differential equation system with shared and reusable functions that operate over the graphs. This leads to a new type of neural graph message passing scheme that performs continuous message passing over time. This class of models offers several advantages a flexible representation that can generalize to variable data dimensions; ability to model dependencies in complex data distributions; reversible and memoryefficient; and exact and efficient computation of the likelihood of the data. We demonstrate the effectiveness of our model on a diverse set of generation tasks across different domains graph generation, image puzzle generation, and layout generation from scene graphs. Our proposed model achieves significantly better performance compared to stateoftheart models.
Unsupervised Inflection Generation Using Neural Language Modeling ; The use of Deep Neural Network architectures for Language Modeling has recently seen a tremendous increase in interest in the field of NLP with the advent of transfer learning and the shift in focus from rule-based and predictive models (supervised learning) to generative or unsupervised models to solve the long-standing problems in NLP like Information Extraction or Question Answering. While this shift has worked greatly for languages lacking in inflectional morphology, such as English, challenges still arise when trying to build similar systems for morphologically rich languages, since their individual words shift forms in context more often. In this paper we investigate the extent to which these new unsupervised or generative techniques can serve to alleviate the type-token ratio disparity in morphologically rich languages. We apply an off-the-shelf neural language modeling library to the newly introduced task of unsupervised inflection generation in the nominal domain of three morphologically rich languages: Romanian, German, and Finnish. We show that this neural language model architecture can successfully generate the full inflection table of nouns without needing any pretraining on large, Wikipedia-sized corpora, as long as the model is shown enough inflection examples. In fact, our experiments show that pretraining hinders the generation performance.
Learning Generative Models using Denoising Density Estimators ; Learning probabilistic models that can estimate the density of a given set of samples, and generate samples from that density, is one of the fundamental challenges in unsupervised machine learning. We introduce a new generative model based on denoising density estimators DDEs, which are scalar functions parameterized by neural networks, that are efficiently trained to represent kernel density estimators of the data. Leveraging DDEs, our main contribution is a novel technique to obtain generative models by minimizing the KLdivergence directly. We prove that our algorithm for obtaining generative models is guaranteed to converge to the correct solution. Our approach does not require specific network architecture as in normalizing flows, nor use ordinary differential equation solvers as in continuous normalizing flows. Experimental results demonstrate substantial improvement in density estimation and competitive performance in generative model training.
Learning a Simple and Effective Model for Multiturn Response Generation with Auxiliary Tasks ; We study multiturn response generation for opendomain dialogues. The existing stateoftheart addresses the problem with deep neural architectures. While these models improved response quality, their complexity also hinders the application of the models in real systems. In this work, we pursue a model that has a simple structure yet can effectively leverage conversation contexts for response generation. To this end, we propose four auxiliary tasks including word order recovery, utterance order recovery, masked word recovery, and masked utterance recovery, and optimize the objectives of these tasks together with maximizing the likelihood of generation. By this means, the auxiliary tasks that relate to context understanding can guide the learning of the generation model to achieve a better local optimum. Empirical studies with three benchmarks indicate that our model can significantly outperform stateoftheart generation models in terms of response quality on both automatic evaluation and human judgment, and at the same time enjoys a much faster decoding process.
IBERT Inductive Generalization of Transformer to Arbitrary Context Lengths ; Selfattention has emerged as a vital component of stateoftheart sequencetosequence models for natural language processing in recent years, brought to the forefront by pretrained bidirectional Transformer models. Its effectiveness is partly due to its nonsequential architecture, which promotes scalability and parallelism but limits the model to inputs of a bounded length. In particular, such architectures perform poorly on algorithmic tasks, where the model must learn a procedure which generalizes to input lengths unseen in training, a capability we refer to as inductive generalization. Identifying the computational limits of existing selfattention mechanisms, we propose IBERT, a bidirectional Transformer that replaces positional encodings with a recurrent layer. The model inductively generalizes on a variety of algorithmic tasks where stateoftheart Transformer models fail to do so. We also test our method on masked language modeling tasks where training and validation sets are partitioned to verify inductive generalization. Out of three algorithmic and two natural language inductive generalization tasks, IBERT achieves stateoftheart results on four tasks.
Improving Generative Imagination in ObjectCentric World Models ; The remarkable recent advances in objectcentric generative world models raise a few questions. First, while many of the recent achievements are indispensable for making a general and versatile world model, it is quite unclear how these ingredients can be integrated into a unified framework. Second, despite using generative objectives, abilities for object detection and tracking are mainly investigated, leaving the crucial ability of temporal imagination largely under question. Third, a few key abilities for more faithful temporal imagination such as multimodal uncertainty and situationawareness are missing. In this paper, we introduce Generative Structured World Models GSWM. The GSWM achieves the versatile world modeling not only by unifying the key properties of previous models in a principled framework but also by achieving two crucial new abilities, multimodal uncertainty and situationawareness. Our thorough investigation on the temporal generation ability in comparison to the previous models demonstrates that GSWM achieves the versatility with the best or comparable performance for all experiment settings including a few complex settings that have not been tested before.
The Neural Coding Framework for Learning Generative Models ; Neural generative models can be used to learn complex probability distributions from data, to sample from them, and to produce probability density estimates. We propose a computational framework for developing neural generative models inspired by the theory of predictive processing in the brain. According to predictive processing theory, the neurons in the brain form a hierarchy in which neurons in one level form expectations about sensory inputs from another level. These neurons update their local models based on differences between their expectations and the observed signals. In a similar way, artificial neurons in our generative models predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality. In this work, we show that the neural generative models learned within our framework perform well in practice across several benchmark datasets and metrics and either remain competitive with or significantly outperform other generative models with similar functionality such as the variational autoencoder.
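A toy numpy sketch of the predictive-processing intuition in the abstract above: a latent layer predicts the observed layer, the prediction error is computed locally, and both the latent state and the generative weights are updated from that error. The linear model, learning rates, and decay prior are illustrative assumptions rather than the paper's actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 16, 4
W_true = rng.normal(size=(D, H))
X = rng.normal(size=(300, H)) @ W_true.T + 0.1 * rng.normal(size=(300, D))

W = 0.1 * rng.normal(size=(D, H))           # generative weights: latent layer predicts the data layer
eta_z, eta_w, settle_steps = 0.02, 0.01, 30

for x in X:
    z = np.zeros(H)
    for _ in range(settle_steps):           # inference: settle the latent state for this input
        err = x - W @ z                     # local prediction error at the data layer
        z += eta_z * (W.T @ err - 0.1 * z)  # error-driven state update with a weak decay prior
    W += eta_w * np.outer(x - W @ z, z)     # local, Hebbian-like weight update from the residual error

Z = X @ np.linalg.pinv(W).T                 # least-squares latents under the learned weights
print("mean squared reconstruction error:", np.mean((X - Z @ W.T) ** 2))
```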
Learning Consistent Deep Generative Models from Sparse Data via Prediction Constraints ; We develop a new framework for learning variational autoencoders and other deep generative models that balances generative and discriminative goals. Our framework optimizes model parameters to maximize a variational lower bound on the likelihood of observed data, subject to a task-specific prediction constraint that prevents model misspecification from leading to inaccurate predictions. We further enforce a consistency constraint, derived naturally from the generative model, that requires predictions on reconstructed data to match those on the original data. We show that these two contributions, prediction constraints and consistency constraints, lead to promising image classification performance, especially in the semi-supervised scenario where category labels are sparse but unlabeled data is plentiful. Our approach enables advances in generative modeling to directly boost semi-supervised classification performance, an ability we demonstrate by augmenting deep generative models with latent variables capturing spatial transformations.
Convex Smoothed AutoencoderOptimal Transport model ; Generative modelling is a key tool in unsupervised machine learning which has achieved stellar success in recent years. Despite this huge success, even the best generative models such as Generative Adversarial Networks GANs and Variational Autoencoders VAEs come with their own shortcomings, mode collapse and mode mixture being the two most prominent problems. In this paper we develop a new generative model capable of generating samples which resemble the observed data, and is free from mode collapse and mode mixture. Our model is inspired by the recently proposed AutoencoderOptimal Transport AEOT model and tries to improve on it by addressing the problems faced by the AEOT model itself, specifically with respect to the sample generation algorithm. Theoretical results concerning the bound on the error in approximating the nonsmooth Brenier potential by its smoothed estimate, and approximating the discontinuous optimal transport map by a smoothed optimal transport map estimate have also been established in this paper.
Measuring global properties of neural generative model outputs via generating mathematical objects ; We train deep generative models on datasets of reflexive polytopes. This enables us to compare how well the models have picked up on various global properties of generated samples. Our datasets are complete in the sense that every single example, up to changes of coordinate, is included in the dataset. Using this property we also perform tests checking to what extent the models are merely memorizing the data. We also train models on the same dataset represented in two different ways, enabling us to measure which form is easiest to learn from. We use these experiments to show that deep generative models can learn to generate geometric objects with nontrivial global properties, and that the models learn some underlying properties of the objects rather than simply memorizing the data.
Learning to Complete Code with Sketches ; Code completion is usually cast as a language modelling problem, i.e., continuing an input in a left-to-right fashion. However, in practice, some parts of the completion (e.g., string literals) may be very hard to predict, whereas subsequent parts directly follow from the context. To handle this, we instead consider the scenario of generating code completions with holes inserted in places where a model is uncertain. We develop Grammformer, a Transformer-based model that guides code generation by the programming language grammar, and compare it to a variety of more standard sequence models. We train the models on code completion for C# and Python given partial code context. To evaluate models, we consider both ROUGE as well as a new metric, RegexAcc, that measures success of generating completions matching long outputs with as few holes as possible. In our experiments, Grammformer generates 10-50% more accurate completions compared to traditional generative models and 37-50% longer sketches compared to sketch-generating baselines trained with similar techniques.
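A small sketch of how a RegexAcc-style check could work: hole tokens in a generated completion are turned into wildcards, and the completion counts as correct if the resulting pattern matches the ground truth, with a simple bonus for sketches that specify more of the target themselves. The hole token "<HOLE>" and the scoring details are assumptions of this sketch, not the paper's exact metric definition.

```python
import re

HOLE = "<HOLE>"

def regex_match(sketch: str, ground_truth: str) -> bool:
    parts = [re.escape(p) for p in sketch.split(HOLE)]
    pattern = ".*?".join(parts)                  # a hole may match any code span
    return re.fullmatch(pattern, ground_truth, flags=re.DOTALL) is not None

def sketch_score(sketch: str, ground_truth: str) -> float:
    """Reward matching sketches that specify more of the target themselves."""
    if not regex_match(sketch, ground_truth):
        return 0.0
    specified = len(sketch.replace(HOLE, ""))
    return specified / max(len(ground_truth), 1)

gt = 'logger.info("loaded %d rows", len(rows))'
print(sketch_score(f'logger.info({HOLE}, len(rows))', gt))   # matches with one hole
print(sketch_score('logger.debug(' + HOLE, gt))              # does not match -> 0.0
```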
A variational autoencoder approach for choice set generation and implicit perception of alternatives in choice modeling ; This paper derives the generalized extreme value (GEV) model with implicit availability/perception (IAP) of alternatives and proposes a variational autoencoder (VAE) approach for choice set generation and implicit perception of alternatives. Specifically, the cross-nested logit (CNL) model with IAP is derived as an example of IAP-GEV models. The VAE approach is adapted to model the choice set generation process, in which the likelihood of perceiving chosen alternatives in the choice set is maximized. The VAE approach for route choice set generation is exemplified using a real dataset. The estimated IAP-CNL model has the best performance in terms of goodness-of-fit and prediction performance, compared to multinomial logit models and conventional choice set generation methods.
IntraDay Price Simulation with Generative Adversarial Modelling of the Order Flow ; Intraday price variations in financial markets are driven by the sequence of orders, called the order flow, that is submitted at high frequency by traders. This paper introduces a novel application of the Sequence Generative Adversarial Networks framework to model the order flow, such that random sequences of the order flow can then be generated to simulate the intraday variation of prices. As a benchmark, a wellknown parametric model from the quantitative finance literature is selected. The models are fitted, and then multiple random paths of the order flow sequences are sampled from each model. Model performances are then evaluated by using the generated sequences to simulate price variations, and we compare the empirical regularities between the price variations produced by the generated and real sequences. The empirical regularities considered include the distribution of the price logreturns, the price volatility, and the heavytail of the logreturns distributions. The results show that the order sequences from the generative model are better able to reproduce the statistical behaviour of real price variations than the sequences from the benchmark.
Identifiable Generative Models for Missing Not at Random Data Imputation ; Realworld datasets often have missing values associated with complex generative processes, where the cause of the missingness may not be fully observed. This is known as missing not at random MNAR data. However, many imputation methods do not take into account the missingness mechanism, resulting in biased imputation values when MNAR data is present. Although there are a few methods that have considered the MNAR scenario, their model's identifiability under MNAR is generally not guaranteed. That is, model parameters can not be uniquely determined even with infinite data samples, hence the imputation results given by such models can still be biased. This issue is especially overlooked by many modern deep generative models. In this work, we fill in this gap by systematically analyzing the identifiability of generative models under MNAR. Furthermore, we propose a practical deep generative model which can provide identifiability guarantees under mild assumptions, for a wide range of MNAR mechanisms. Our method demonstrates a clear advantage for tasks on both synthetic data and multiple realworld scenarios with MNAR data.
Multiscale Generative Models: Improving Performance of a Generative Model Using Feedback from Other Dependent Generative Models ; Realistic fine-grained multi-agent simulation of real-world complex systems is crucial for many downstream tasks such as reinforcement learning. Recent work has used generative models (GANs in particular) for providing high-fidelity simulation of real-world systems. However, such generative models are often monolithic and miss out on modeling the interaction in multi-agent systems. In this work, we take a first step towards building multiple interacting generative models (GANs) that reflect the interaction in the real world. We build and analyze a hierarchical setup where a higher-level GAN is conditioned on the output of multiple lower-level GANs. We present a technique of using feedback from the higher-level GAN to improve performance of lower-level GANs. We mathematically characterize the conditions under which our technique is impactful, including understanding the transfer learning nature of our setup. We present three distinct experiments on synthetic data, time series data, and the image domain, revealing the wide applicability of our technique.
FrePGAN Robust Deepfake Detection Using Frequencylevel Perturbations ; Various deepfake detectors have been proposed, but challenges still exist to detect images of unknown categories or GAN models outside of the training settings. Such issues arise from the overfitting issue, which we discover from our own analysis and the previous studies to originate from the frequencylevel artifacts in generated images. We find that ignoring the frequencylevel artifacts can improve the detector's generalization across various GAN models, but it can reduce the model's performance for the trained GAN models. Thus, we design a framework to generalize the deepfake detector for both the known and unseen GAN models. Our framework generates the frequencylevel perturbation maps to make the generated images indistinguishable from the real images. By updating the deepfake detector along with the training of the perturbation generator, our model is trained to detect the frequencylevel artifacts at the initial iterations and consider the imagelevel irregularities at the last iterations. For experiments, we design new test scenarios varying from the training settings in GAN models, color manipulations, and object categories. Numerous experiments validate the stateoftheart performance of our deepfake detector.
Few Shot Protein Generation ; We present the MSAtoprotein transformer, a generative model of protein sequences conditioned on protein families represented by multiple sequence alignments MSAs. Unlike existing approaches to learning generative models of protein families, the MSAtoprotein transformer conditions sequence generation directly on a learned encoding of the multiple sequence alignment, circumventing the need for fitting dedicated family models. By training on a large set of wellcurated multiple sequence alignments in Pfam, our MSAtoprotein transformer generalizes well to protein families not observed during training and outperforms conventional family modeling approaches, especially when MSAs are small. Our generative approach accurately models epistasis and indels and allows for exact inference and efficient sampling unlike other approaches. We demonstrate the protein sequence modeling capabilities of our MSAtoprotein transformer and compare it with alternative sequence modeling approaches in comprehensive benchmark experiments.
Generative power of a protein language model trained on multiple sequence alignments ; Computational models starting from large ensembles of evolutionarily related protein sequences capture a representation of protein families and learn constraints associated with protein structure and function. They thus open the possibility of generating novel sequences belonging to protein families. Protein language models trained on multiple sequence alignments, such as MSA Transformer, are highly attractive candidates to this end. We propose and test an iterative method that directly employs the masked language modeling objective to generate sequences using MSA Transformer. We demonstrate that the resulting sequences score as well as natural sequences, for homology, coevolution and structure-based measures. For large protein families, our synthetic sequences have similar or better properties compared to sequences generated by Potts models, including experimentally validated ones. Moreover, for small protein families, our generation method based on MSA Transformer outperforms Potts models. Our method also more accurately reproduces the higher-order statistics and the distribution of sequences in sequence space of natural data than Potts models. MSA Transformer is thus a strong candidate for protein sequence generation and protein design.
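A schematic of the iterative masked-sampling loop described above: a fraction of positions is masked at each step and resampled from the masked language model's predicted distribution. The masked_lm_probs function below is a uniform dummy so the loop runs; the actual method would query MSA Transformer conditioned on the input alignment.

```python
import numpy as np

AA = list("ACDEFGHIKLMNPQRSTVWY")
rng = np.random.default_rng(0)

def masked_lm_probs(seq, masked_positions):
    """Placeholder: return a distribution over amino acids for each masked position."""
    return np.full((len(masked_positions), len(AA)), 1.0 / len(AA))

def iterative_sample(seq, n_iters=10, mask_frac=0.1):
    seq = list(seq)
    for _ in range(n_iters):
        k = max(1, int(mask_frac * len(seq)))
        pos = rng.choice(len(seq), size=k, replace=False)   # positions to mask this round
        probs = masked_lm_probs(seq, pos)
        for i, p in zip(pos, probs):
            seq[i] = AA[rng.choice(len(AA), p=p)]           # resample each masked position
    return "".join(seq)

print(iterative_sample("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
```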
Bridging the Gap Between Training and Inference of Bayesian Controllable Language Models ; Large-scale pretrained language models have achieved great success on natural language generation tasks. However, it is difficult to control the pretrained language models to generate sentences with desired attributes such as topic and sentiment. Recently, Bayesian Controllable Language Models (BCLMs) have been shown to be efficient in controllable language generation. Rather than fine-tuning the parameters of pretrained language models, BCLMs use external discriminators to guide the generation of pretrained language models. However, the mismatch between training and inference of BCLMs limits the performance of the models. To address the problem, in this work we propose a Gemini Discriminator for controllable language generation which alleviates the mismatch problem with a small computational cost. We tested our method on two controllable language generation tasks: sentiment control and topic control. On both tasks, our method achieved new state-of-the-art results in automatic and human evaluations.
Counterexample Generation for InfiniteState Chemical Reaction Networks ; Counterexample generation is an indispensable part of model checking process. In stochastic model checking, counterexample generation is a challenging problem as it is not enough to find a single trace that violates the given property. Instead, a potentially large set of traces with enough probability to violate the property needs to be found. This paper considers counterexample generation for chemical reaction network CRN models with potentially infinite state space. A method based on bounded model checking using SMT solving is developed for counterexample generation for CRNs. It intends to find a small set of property violating paths of a given model such that they collectively have a total probability that is above a given threshold. A unique challenge is due to the highly connected state space of CRNs where a counterexample is only a tiny subset of all property violating paths. To address such challenges, this paper presents a number of optimizations including a divideandconquer technique to scale up the counterexample generation method for large CRN models. This paper reports results from experiments on a number of infinitestate CRN models.
Discovering Bugs in Vision Models using Off-the-shelf Image Generation and Captioning ; Automatically discovering failures in vision models under real-world settings remains an open challenge. This work demonstrates how off-the-shelf, large-scale, image-to-text and text-to-image models, trained on vast amounts of data, can be leveraged to automatically find such failures. In essence, a conditional text-to-image generative model is used to generate large amounts of synthetic, yet realistic, inputs given a ground-truth label. Misclassified inputs are clustered and a captioning model is used to describe each cluster. Each cluster's description is used in turn to generate more inputs and assess whether specific clusters induce more failures than expected. We use this pipeline to demonstrate that we can effectively interrogate classifiers trained on ImageNet to find specific failure cases and discover spurious correlations. We also show that we can scale the approach to generate adversarial datasets targeting specific classifier architectures. This work serves as a proof-of-concept demonstrating the utility of large-scale generative models to automatically discover bugs in vision models in an open-ended manner. We also describe a number of limitations and pitfalls related to this approach.
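A high-level sketch of the failure-discovery loop described above, with the heavy components (text-to-image generator, classifier, embedding model, clustering, captioner) passed in as callables. The trivial stand-ins only exercise the control flow; they are assumptions of this sketch, not the models used in the paper.

```python
import random

def discover_failures(label, generate, classify, embed, cluster, caption, n=64, rounds=2):
    prompts = [f"a photo of a {label}"]
    reports = []
    for _ in range(rounds):
        images = [img for p in prompts for img in generate(p, n)]
        misclassified = [img for img in images if classify(img) != label]
        if not misclassified:
            break
        groups = cluster([embed(img) for img in misclassified], misclassified)
        descriptions = [caption(g) for g in groups]
        reports.extend(descriptions)
        prompts = descriptions              # feed cluster descriptions back in as new prompts
    return reports

# --- trivial stand-ins so the control flow runs end to end ---
random.seed(0)
generate = lambda prompt, n: [f"{prompt}#{i}" for i in range(n)]
classify = lambda img: "dog" if random.random() < 0.8 else "wolf"
embed = lambda img: [hash(img) % 7]
cluster = lambda embs, items: [g for g in (items[: len(items) // 2], items[len(items) // 2:]) if g]
caption = lambda group: f"images resembling '{group[0].split('#')[0]}' but misclassified"

print(discover_failures("dog", generate, classify, embed, cluster, caption))
```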
HARP: Autoregressive Latent Video Prediction with High-Fidelity Image Generator ; Video prediction is an important yet challenging problem, burdened with the tasks of generating future frames and learning environment dynamics. Recently, autoregressive latent video models have proved to be a powerful video prediction tool, by separating the video prediction into two subproblems: pretraining an image generator model, followed by learning an autoregressive prediction model in the latent space of the image generator. However, successfully generating high-fidelity and high-resolution videos has yet to be seen. In this work, we investigate how to train an autoregressive latent video prediction model capable of predicting high-fidelity future frames with minimal modification to existing models, and produce high-resolution (256x256) videos. Specifically, we scale up prior models by employing a high-fidelity image generator (VQ-GAN) with a causal transformer model, and introduce additional techniques of top-k sampling and data augmentation to further improve video prediction quality. Despite its simplicity, the proposed method achieves competitive performance to state-of-the-art approaches on standard video prediction benchmarks with fewer parameters, and enables high-resolution video prediction on complex and large-scale datasets. Videos are available at https://sites.google.com/view/harpvideos/home.
Calibrating Sequence Likelihood Improves Conditional Language Generation ; Conditional language models are predominantly trained with maximum likelihood estimation (MLE), giving probability mass to sparsely observed target sequences. While MLE-trained models assign high probability to plausible sequences given the context, the model probabilities often do not accurately rank-order generated sequences by quality. This has been empirically observed in beam search decoding as output quality degrading with large beam sizes, and decoding strategies benefiting from heuristics such as length normalization and repetition-blocking. In this work, we introduce sequence likelihood calibration (SLiC), where the likelihood of model-generated sequences is calibrated to better align with reference sequences in the model's latent space. With SLiC, decoding heuristics become unnecessary and decoding candidates' quality significantly improves regardless of the decoding method. Furthermore, SLiC shows no sign of diminishing returns with model scale, and presents alternative ways to improve quality with limited training and inference budgets. With SLiC, we exceed or match SOTA results on a wide range of generation tasks spanning abstractive summarization, question generation, abstractive question answering and data-to-text generation, even with modest-sized models.
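To make the calibration idea concrete, the sketch below implements a simplified pairwise objective: candidates are ranked by similarity to the reference, and a margin loss pushes the model to assign higher sequence log-probability to the more similar candidate of each pair. This simplification is an assumption of the sketch; SLiC as described above defines its calibration losses in the model's latent space.

```python
import numpy as np

def pairwise_calibration_loss(seq_logprobs, similarities, margin=1.0):
    seq_logprobs = np.asarray(seq_logprobs, dtype=float)
    order = np.argsort(-np.asarray(similarities))       # most reference-like candidate first
    lp = seq_logprobs[order]
    loss, count = 0.0, 0
    for i in range(len(lp)):
        for j in range(i + 1, len(lp)):                  # candidate i is preferred over candidate j
            loss += max(0.0, margin - (lp[i] - lp[j]))
            count += 1
    return loss / max(count, 1)

# four sampled candidates: model log-probs and a ROUGE-like similarity to the reference
print(pairwise_calibration_loss([-12.0, -9.5, -11.0, -15.0], [0.62, 0.40, 0.55, 0.71]))
```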
Model Criticism for Long-Form Text Generation ; Language models have demonstrated the ability to generate highly fluent text; however, it remains unclear whether their output retains coherent high-level structure (e.g., story progression). Here, we propose to apply a statistical tool, model criticism in latent space, to evaluate the high-level structure of the generated text. Model criticism compares the distributions between real and generated data in a latent space obtained according to an assumed generative process. Different generative processes identify specific failure modes of the underlying model. We perform experiments on three representative aspects of high-level discourse (coherence, coreference, and topicality) and find that transformer-based language models are able to capture topical structures but have a harder time maintaining structural coherence or modeling coreference.
Generative model for learning quantum ensemble via optimal transport loss ; Generative modeling is an unsupervised machine learning framework that exhibits strong performance in various machine learning tasks. Recently, several quantum versions of generative models have been proposed, some of which are even proven to have a quantum advantage. However, those methods are not directly applicable to constructing a generative model for learning a set of quantum states, i.e., an ensemble. In this paper, we propose a quantum generative model that can learn a quantum ensemble in an unsupervised machine learning framework. The key idea is to introduce a new loss function based on the optimal transport loss, which has been widely used in classical machine learning due to its several good properties, e.g., no need to ensure common support of the two ensembles. We then give an in-depth analysis of this measure, such as the scaling property of the approximation error. We also demonstrate the generative modeling with an application to the quantum anomaly detection problem, which cannot be handled via existing methods. The proposed model paves the way for wide applications such as the health check of quantum devices and efficient initialization of quantum computation.
De novo PROTAC design using graph-based deep generative models ; PROteolysis TArgeting Chimeras (PROTACs) are an emerging therapeutic modality for degrading a protein of interest (POI) by marking it for degradation by the proteasome. Recent developments in artificial intelligence (AI) suggest that deep generative models can assist with the de novo design of molecules with desired properties, yet their application to PROTAC design remains largely unexplored. We show that a graph-based generative model can be used to propose novel PROTAC-like structures from empty graphs. Our model can be guided towards the generation of large molecules (30-140 heavy atoms) predicted to degrade a POI through policy-gradient reinforcement learning (RL). Rewards during RL are applied using a boosted tree surrogate model that predicts a molecule's degradation potential for each POI. Using this approach, we steer the generative model towards compounds with higher likelihoods of predicted degradation activity. Despite being trained on sparse public data, the generative model proposes molecules with substructures found in known degraders. After fine-tuning, predicted activity against a challenging POI increases from 50% to 80% with near-perfect chemical validity for sampled compounds, suggesting this is a promising approach for the optimization of large, PROTAC-like molecules for targeted protein degradation.
The Effectiveness of Bidirectional Generative Patent Language Models ; Generative patent language models can assist humans to write patent text more effectively. The question is how to measure effectiveness from a human-centric perspective and how to improve effectiveness. In this manuscript, a simplified design of the autocomplete function is proposed to increase effectiveness by more than 10%. With the new design, the effectiveness of autocomplete can reach more than 60%, which means that more than 60% of keystrokes can be saved by autocomplete. Since writing patent text does not necessarily start from the beginning and proceed to the end, a question is whether the generative model can assist a user no matter where the user starts writing. To answer the question, the generative models in this manuscript are pretrained with training data in both directions. The generative models become bidirectional. Since text generation is bidirectional, the calculation of autocomplete effectiveness can be bidirectional and start from anywhere in the text. After thorough experiments, a key finding is that the autocomplete effectiveness of a model for the same text remains similar no matter where the calculation starts. The finding indicates that such bidirectional models can assist a user at a similar level, no matter where the user starts to write.
PrefixMol: Target- and Chemistry-aware Molecule Design via Prefix Embedding ; Is there a unified model for generating molecules considering different conditions, such as binding pockets and chemical properties? Although target-aware generative models have made significant advances in drug design, they do not consider chemistry conditions and cannot guarantee the desired chemical properties. Unfortunately, merging the target-aware and chemical-aware models into a unified model to meet customized requirements may lead to the problem of negative transfer. Inspired by the success of multi-task learning in the NLP area, we use prefix embeddings to provide a novel generative model that considers both the targeted pocket's circumstances and a variety of chemical properties. All conditional information is represented as learnable features, which the generative model subsequently employs as a contextual prompt. Experiments show that our model exhibits good controllability in both single- and multi-conditional molecular generation. The controllability enables us to outperform previous structure-based drug design methods. More interestingly, we open up the attention mechanism and reveal coupling relationships between conditions, providing guidance for multi-conditional molecule generation.
Sketch2Cloth Sketchbased 3D Garment Generation with Unsigned Distance Fields ; 3D model reconstruction from a single image has achieved great progress with the recent deep generative models. However, the conventional reconstruction approaches with template mesh deformation and implicit fields have difficulty in reconstructing nonwatertight 3D mesh models, such as garments. In contrast to imagebased modeling, the sketchbased approach can help users generate 3D models to meet the design intentions from handdrawn sketches. In this study, we propose Sketch2Cloth, a sketchbased 3D garment generation system using the unsigned distance fields from the user's sketch input. Sketch2Cloth first estimates the unsigned distance function of the target 3D model from the sketch input, and extracts the mesh from the estimated field with Marching Cubes. We also provide the model editing function to modify the generated mesh. We verified the proposed Sketch2Cloth with quantitative evaluations on garment generation and editing with a stateoftheart approach.
Generalization and Estimation Error Bounds for Modelbased Neural Networks ; Modelbased neural networks provide unparalleled performance for various tasks, such as sparse coding and compressed sensing problems. Due to the strong connection with the sensing model, these networks are interpretable and inherit prior structure of the problem. In practice, modelbased neural networks exhibit higher generalization capability compared to ReLU neural networks. However, this phenomenon was not addressed theoretically. Here, we leverage complexity measures including the global and local Rademacher complexities, in order to provide upper bounds on the generalization and estimation errors of modelbased networks. We show that the generalization abilities of modelbased networks for sparse recovery outperform those of regular ReLU networks, and derive practical design rules that allow to construct modelbased networks with guaranteed high generalization. We demonstrate through a series of experiments that our theoretical insights shed light on a few behaviours experienced in practice, including the fact that ISTA and ADMM networks exhibit higher generalization abilities especially for small number of training samples, compared to ReLU networks.
Shap-E: Generating Conditional 3D Implicit Functions ; We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. We release model weights, inference code, and samples at https://github.com/openai/shap-e.
Generating Virtual On-body Accelerometer Data from Virtual Textual Descriptions for Human Activity Recognition ; The development of robust, generalized models in human activity recognition (HAR) has been hindered by the scarcity of large-scale, labeled data sets. Recent work has shown that virtual IMU data extracted from videos using computer vision techniques can lead to substantial performance improvements when training HAR models combined with small portions of real IMU data. Inspired by recent advances in motion synthesis from textual descriptions and connecting Large Language Models (LLMs) to various AI models, we introduce an automated pipeline that first uses ChatGPT to generate diverse textual descriptions of activities. These textual descriptions are then used to generate 3D human motion sequences via a motion synthesis model, T2M-GPT, and later converted to streams of virtual IMU data. We benchmarked our approach on three HAR datasets (RealWorld, PAMAP2, and USC-HAD) and demonstrate that the use of virtual IMU training data generated using our new approach leads to significantly improved HAR model performance compared to only using real IMU data. Our approach contributes to the growing field of cross-modality transfer methods and illustrates how HAR models can be improved through the generation of virtual training data that does not require any manual effort.
SimOAP: Improve Coherence and Consistency in Persona-based Dialogue Generation via Oversampling and Post-evaluation ; Language models trained on large-scale corpora can generate remarkably fluent results in open-domain dialogue. However, for the persona-based dialogue generation task, consistency and coherence are also key factors, which are great challenges for language models. Existing works mainly focus on valuable data filtering, model structure modifying, or objective function designing, while their improvements are limited and hard to generalize to all types of pretrained language models. However, we find that language models can produce consistent and coherent responses if we consider enough generations. Thus, the problems lie in large-scale response generation and target response selection. In this work, a simple but effective two-stage SimOAP strategy is proposed, i.e., oversampling and post-evaluation. The oversampling stage takes large-scale responses from existing trained models efficiently via off-the-shelf distilling and compressing methods, and the post-evaluation stage selects a good response based on multiple well-designed evaluation metrics from the large-scale candidates. Experimental results show that the proposed plug-in SimOAP strategy improves the backbone models and outperforms the baseline strategies in both automatic and human evaluations.
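A minimal sketch of an oversample-then-post-evaluate strategy of the kind described above: many candidate responses are drawn, each is scored by several metrics, and the best-scoring candidate is returned. The sampler and the two toy scoring functions are stand-ins, not the paper's distilled models or its actual coherence and consistency metrics.

```python
import random

def post_evaluate(context, candidates, scorers, weights):
    def combined(c):
        return sum(w * s(context, c) for s, w in zip(scorers, weights))
    return max(candidates, key=combined)

# --- stand-ins so the example runs ---
random.seed(0)
sample_response = lambda ctx: random.choice(
    ["I love hiking too!", "My persona says I am a chef.", "What do you mean?"])
coherence = lambda ctx, resp: len(set(ctx.lower().split()) & set(resp.lower().split()))
consistency = lambda ctx, resp: 1.0 if "chef" in resp else 0.0   # toy persona check

context = "I work as a chef and I love hiking on weekends."
candidates = [sample_response(context) for _ in range(50)]        # oversampling stage
best = post_evaluate(context, candidates, [coherence, consistency], [1.0, 2.0])
print(best)
```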
Learning Answer Generation using Supervision from Automatic Question Answering Evaluators ; Recent studies show that sentence-level extractive QA, i.e., based on Answer Sentence Selection (AS2), is outperformed by Generation-based QA (GenQA) models, which generate answers using the top-k answer sentences ranked by AS2 models (a la retrieval-augmented generation style). In this paper, we propose a novel training paradigm for GenQA using supervision from automatic QA evaluation models (GAVA). Specifically, we propose three strategies to transfer knowledge from these QA evaluation models to a GenQA model: (i) augmenting training data with answers generated by the GenQA model and labelled by GAVA, either statically, before training, or (ii) dynamically, at every training epoch; and (iii) using the GAVA score for weighting the generator loss during the learning of the GenQA model. We evaluate our proposed methods on two academic and one industrial dataset, obtaining a significant improvement in answering accuracy over the previous state of the art.
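A small sketch of strategy (iii) above: per-example generation losses are weighted by the score an automatic QA evaluation model assigns to the corresponding answer, so answers judged correct contribute more to the update. The normalization and the dummy inputs are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

def weighted_generation_loss(per_example_nll, evaluator_scores):
    """per_example_nll: NLL of each target answer; evaluator_scores: evaluator outputs in [0, 1]."""
    nll = np.asarray(per_example_nll, dtype=float)
    w = np.asarray(evaluator_scores, dtype=float)
    w = w / max(w.sum(), 1e-8)            # normalize so the loss scale stays comparable
    return float((w * nll).sum())

# three training answers: the one judged wrong by the evaluator gets little weight
print(weighted_generation_loss([2.1, 1.4, 3.0], [0.9, 0.8, 0.1]))
```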
NaturalFinger: Generating Natural Fingerprint with Generative Adversarial Networks ; Deep neural network (DNN) models have become a critical asset of the model owner, as training them requires a large amount of resources (i.e., labeled data). Therefore, many fingerprinting schemes have been proposed to safeguard the intellectual property (IP) of the model owner against model extraction and illegal redistribution. However, previous schemes adopt unnatural images as the fingerprint, such as adversarial examples and noisy images, which can be easily perceived and rejected by the adversary. In this paper, we propose NaturalFinger, which generates natural fingerprints with generative adversarial networks (GANs). Besides, our proposed NaturalFinger fingerprints the decision difference areas rather than the decision boundary, which is more robust. The application of GANs not only allows us to generate more imperceptible samples, but also enables us to generate unrestricted samples to explore the decision boundary. To demonstrate the effectiveness of our fingerprint approach, we evaluate our approach against four model modification attacks including adversarial training and two model extraction attacks. Experiments show that our approach achieves a 0.91 ARUC value on the FingerBench dataset (154 models), exceeding the optimal baseline (MetaV) by over 17%.
DreamHuman: Animatable 3D Avatars from Text ; We present DreamHuman, a method to generate realistic animatable 3D human avatar models solely from textual descriptions. Recent text-to-3D methods have made considerable strides in generation, but are still lacking in important aspects. Control and often spatial resolution remain limited, existing methods produce fixed rather than animated 3D human models, and anthropometric consistency for complex structures like people remains a challenge. DreamHuman connects large text-to-image synthesis models, neural radiance fields, and statistical human body models in a novel modeling and optimization framework. This makes it possible to generate dynamic 3D human avatars with high-quality textures and learned, instance-specific, surface deformations. We demonstrate that our method is capable of generating a wide variety of animatable, realistic 3D human models from text. Our 3D models have diverse appearance, clothing, skin tones and body shapes, and significantly outperform both generic text-to-3D approaches and previous text-based 3D avatar generators in visual fidelity. For more results and animations please check our website at https://dream-human.github.io.
Generate Anything Anywhere in Any Scene ; Texttoimage diffusion models have attracted considerable interest due to their wide applicability across diverse fields. However, challenges persist in creating controllable models for personalized object generation. In this paper, we first identify the entanglement issues in existing personalized generative models, and then propose a straightforward and efficient data augmentation training strategy that guides the diffusion model to focus solely on object identity. By inserting the plugandplay adapter layers from a pretrained controllable diffusion model, our model obtains the ability to control the location and size of each generated personalized object. During inference, we propose a regionallyguided sampling technique to maintain the quality and fidelity of the generated images. Our method achieves comparable or superior fidelity for personalized objects, yielding a robust, versatile, and controllable texttoimage diffusion model that is capable of generating realistic and personalized images. Our approach demonstrates significant potential for various applications, such as those in art, entertainment, and advertising design.
LowResource Response Generation with Template Prior ; We study open domain response generation with limited messageresponse pairs. The problem exists in realworld applications but is less explored by the existing work. Since the paired data now is no longer enough to train a neural generation model, we consider leveraging the large scale of unpaired data that are much easier to obtain, and propose response generation with both paired and unpaired data. The generation model is defined by an encoderdecoder architecture with templates as prior, where the templates are estimated from the unpaired data as a neural hidden semimarkov model. By this means, response generation learned from the small paired data can be aided by the semantic and syntactic knowledge in the large unpaired data. To balance the effect of the prior and the input message to response generation, we propose learning the whole generation model with an adversarial approach. Empirical studies on question response generation and sentiment response generation indicate that when only a few pairs are available, our model can significantly outperform several stateoftheart response generation models in terms of both automatic and human evaluation.
Transformer-based conditional generative adversarial network for multivariate time series generation ; Conditional generation of time-dependent data is a task of much interest, whether for data augmentation, scenario simulation, completing missing data, or other purposes. Recent works proposed a Transformer-based Time-series Generative Adversarial Network (TTS-GAN) to address the limitations of recurrent neural networks. However, this model assumes a unimodal distribution and tries to generate samples around the expectation of the real data distribution. One of its limitations is that it may generate a random multivariate time series; it may fail to generate samples in the presence of multiple sub-components within an overall distribution. One could train models to fit each sub-component separately to overcome this limitation. Our work extends the TTS-GAN by conditioning its generated output on a particular encoded context, allowing the use of one model to fit a mixture distribution with multiple sub-components. Technically, it is a conditional generative adversarial network that models realistic multivariate time series under different types of conditions, such as categorical variables or multivariate time series. We evaluate our model on the UniMiB Dataset, which contains acceleration data following the XYZ axes of human activities collected using smartphones. We use qualitative evaluations and quantitative metrics such as Principal Component Analysis (PCA), and we introduce a modified version of the Frechet inception distance (FID) to measure the performance of our model and the statistical similarities between the generated and the real data distributions. We show that this transformer-based CGAN can generate realistic high-dimensional and long data sequences under different kinds of conditions.
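For reference, the sketch below shows the Frechet-distance computation that FID-style metrics are built on: fit Gaussians to two sets of feature vectors (real versus generated) and compute the Frechet distance between them. How features are extracted from multivariate time series in the paper's modified FID is not reproduced here; the random feature arrays are placeholders.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_gen):
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):                 # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 16))      # stand-ins for extracted feature vectors
fake = rng.normal(0.3, 1.2, size=(500, 16))
print(frechet_distance(real, fake))
```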
A Sequence-to-Sequence&Set Model for Text-to-Table Generation ; Recently, the text-to-table generation task has attracted increasing attention due to its wide applications. In this aspect, the dominant model formalizes this task as a sequence-to-sequence generation task and serializes each table into a token sequence during training by concatenating all rows in a top-down order. However, it suffers from two serious defects: (1) the predefined order introduces a wrong bias during training, which highly penalizes shifts in the order between rows; (2) the error propagation problem becomes serious when the model outputs a long token sequence. In this paper, we first conduct a preliminary study to demonstrate that the generation of most rows is order-insensitive. Furthermore, we propose a novel sequence-to-sequence&set text-to-table generation model. Specifically, in addition to a text encoder encoding the input text, our model is equipped with a table header generator to first output a table header, i.e., the first row of the table, in the manner of sequence generation. Then we use a table body generator with learnable row embeddings and column embeddings to generate a set of table body rows in parallel. Particularly, to deal with the issue that there is no correspondence between each generated table body row and target during training, we propose a target assignment strategy based on bipartite matching between the first cells of generated table body rows and targets. Experiment results show that our model significantly surpasses the baselines, achieving state-of-the-art performance on commonly used datasets.
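A compact sketch of a bipartite-matching target assignment like the one described above: generated table-body rows are matched to target rows by the similarity of their first cells, and the Hungarian algorithm picks the assignment used for the training loss. The difflib-based similarity is an assumption of this sketch, not necessarily the paper's scoring function.

```python
import difflib
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_targets(generated_first_cells, target_first_cells):
    # cost = 1 - string similarity between each generated and target first cell
    cost = np.array([[1.0 - difflib.SequenceMatcher(None, g, t).ratio()
                      for t in target_first_cells]
                     for g in generated_first_cells])
    gen_idx, tgt_idx = linear_sum_assignment(cost)   # minimum-cost bipartite matching
    return list(zip(gen_idx.tolist(), tgt_idx.tolist()))

generated = ["Germny", "France", "Italy"]
targets = ["France", "Germany", "Spain"]
print(assign_targets(generated, targets))   # e.g. [(0, 1), (1, 0), (2, 2)]
```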
DiffDTM: A conditional structure-free framework for bioactive molecules generation targeted for dual proteins ; Advances in deep generative models shed light on de novo molecule generation with desired properties. However, molecule generation targeted for dual protein targets still faces formidable challenges including protein 3D structure data requisition for model training, autoregressive sampling, and model generalization for unseen targets. Here, we proposed DiffDTM, a novel conditional structure-free deep generative model based on a diffusion model for dual-target-based molecule generation, to address the above issues. Specifically, DiffDTM receives protein sequences and molecular graphs as inputs instead of protein and molecular conformations and incorporates an information fusion module to achieve conditional generation in a one-shot manner. We have conducted comprehensive multi-view experiments to demonstrate that DiffDTM can generate drug-like, synthesis-accessible, novel, and high-binding-affinity molecules targeting specific dual proteins, outperforming the state-of-the-art (SOTA) models in terms of multiple evaluation metrics. Furthermore, we utilized DiffDTM to generate molecules towards the dopamine receptor D2 and the 5-hydroxytryptamine receptor 1A as new antipsychotics. The experimental results indicate that DiffDTM can be easily plugged into unseen dual targets to generate bioactive molecules, addressing the issues of insufficient active molecule data for training as well as the need to retrain when encountering new targets.
GeoGAN: A Conditional GAN with Reconstruction and Style Loss to Generate Standard Layer of Maps from Satellite Images ; Automatically generating maps from satellite images is an important task. There is a body of literature which tries to address this challenge. We created a more expansive survey of the task by experimenting with different models and adding new loss functions to improve results. We created a database of pairs of satellite images and the corresponding map of the area. Our model translates the satellite image to the corresponding standard layer map image using three main model architectures: (i) a conditional Generative Adversarial Network (GAN) which compresses the images down to a learned embedding, (ii) a generator which is trained as a normalizing flow (RealNVP) model, and (iii) a conditional GAN where the generator translates via a series of convolutions to the standard layer of a map and the discriminator input is the concatenation of the real/generated map and the satellite image. Model (iii) was by far the most promising of the three models. To improve the results we also added a reconstruction loss and style transfer loss in addition to the GAN losses. The third model architecture produced the best quality of sampled images. In contrast to other generative models, where evaluation of the model is a challenging problem, since we have access to the real map for a given satellite image we are able to assign a quantitative metric to the quality of the generated images in addition to inspecting them visually. While we are continuing to work on increasing the accuracy of the model, one challenge has been the coarse resolution of the data, which upper-bounds the quality of the results of our model. Nevertheless, as will be seen in the results, the generated map is more accurate in the features it produces, since the generator architecture demands a pixel-wise image translation/pixel-wise coloring. A video presentation summarizing this paper is available at https://youtu.be/Ur0flOXJi0.
Three Generative, Lexicalised Models for Statistical Parsing ; In this paper we first propose a new statistical parsing model, which is a generative model of lexicalised context-free grammar. We then extend the model to include a probabilistic treatment of both subcategorisation and wh-movement. Results on Wall Street Journal text show that the parser performs at 88.1/87.5% constituent precision/recall, an average improvement of 2.3% over Collins 96.
Force and Momentum in an Evolving Axisymmetric Universe Model ; We take an axisymmetric rotating universe model, cross it with a time-dependent factor, and evaluate the force and momentum in this evolving universe. It is concluded that it behaves exactly like a Friedmann model. We also extend this conclusion to the most general cosmological model.
Electroweak Constraints on Little Higgs Models ; In this talk I will give a brief introduction to Little Higgs models in general, including an overview of all models in existence thus far. I then review some of the generic constraints on these models from electroweak precision measurements.
Higgs boson with a single generation in the bulk ; We study Higgs boson properties in a model where three fermionic families of the Standard Model arise from a single generation in 5+1 dimensions. We demonstrate that, in spite of a nontrivial background, the properties of the four-dimensional Higgs particle are almost indistinguishable from those in the Standard Model. We also argue that it is more natural to have a light Higgs boson in this model.
Determination of constants of the Standard Model and some generalized models ; Methods for determining the constants of the Standard Model are considered. The currently obtained values of the constants are presented, and experiments for improving some of these values are pointed out. A few possible generalized models are considered together with their groups of gauge and kinematical symmetries.
Massive sigma models with (p,q) supersymmetry ; We determine the general scalar potential consistent with (p,q) supersymmetry in two-dimensional nonlinear sigma models with torsion, generalizing previous results for special cases. We thereby find many new supersymmetric sigma models with potentials, including new (2,2) and (4,4) models.
Generalized Richardson-Gaudin Nuclear Models ; The exact solvability of several nuclear models with non-degenerate single-particle energies is outlined and leads to a generalization of integrable Richardson-Gaudin models, like the su(2)-based fermion pairing, to any simple Lie algebra. As an example, the so(5) ~ sp(4) model of T=1 pairing is discussed and illustrated for the case of 64Ge with non-degenerate single-particle energies.
Classical dynamical r-matrices for Calogero-Moser systems and their generalizations ; We present a derivation of the dynamical r-matrices of the Calogero-Moser models using the Hamiltonian reduction procedure to obtain general formulae. We describe the dynamical r-matrices thus found for spin Calogero-Moser models and relativistic Ruijsenaars-Schneider models.
Chaos in the generalized Jaynes-Cummings model. Kinetic approach ; In this work we study the possibility of chaos formation in the dynamics governed by a paradigmatic model of cavity quantum electrodynamics, the so-called Jaynes-Cummings (JC) model. In particular, we consider the generalized JC model. It is shown that even in the case of zero detuning the dynamics is chaotic. A kinetic approach to the problem under study has been applied.
General Flattened Jaffe Models for Galaxies ; In this paper we extend oblate and prolate Jaffe models to more general flattened Jaffe models. Since the dynamical properties of oblate and prolate Jaffe models have been studied by Jiang and Moss, they are not repeated here.
Supersymmetry in the O(N) Gross-Neveu Models ; We study a special classical current in the O(3) Gross-Neveu model that becomes supersymmetric when quantum anomalies are included. Following its definition, we generalize the current to the general case, the O(N) Gross-Neveu models. We compute its algebra and discuss the possibility that supersymmetry can be established for these models.
The complete relativistic kinetic model of symmetry violation in isotropic expanding plasma. III. Specific entropy calculation ; A complete model of baryon production in an expanding, primordially symmetric hot Universe is constructed in the framework of general-relativistic kinetic theory. Within this model the specific entropy is calculated and graphs of its dependence are constructed.
Modeling Multiple Risks Hidden Domain of Attraction ; Hidden regular variation is a submodel of multivariate regular variation and facilitates accurate estimation of joint tail probabilities. We generalize the model of hidden regular variation to what we call hidden domain of attraction. We exhibit examples that illustrate the need for a more general model and discuss detection and estimation techniques.
On death processes and urn models ; We use death processes and embeddings into continuous time to analyze several urn models with diminishing content. In particular, we discuss generalizations of the pill's problem, originally introduced by Knuth and McCarthy, as well as generalizations of the well-known sampling-without-replacement urn models and OK Corral urn models.
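As a concrete illustration of the pill's problem mentioned in the abstract above, the following Monte Carlo sketch uses the standard formulation (each day a pill is drawn uniformly at random; a large pill is broken and its unused half returned as a small pill) and estimates the expected number of small pills left when the large pills run out. The function name and starting configuration are illustrative, not taken from the paper.

```python
import random

def pills_problem(num_large, num_small, trials=100_000):
    """Monte Carlo estimate of the expected number of small pills remaining
    when the last large pill is consumed (standard pill's problem setup)."""
    total = 0
    for _ in range(trials):
        large, small = num_large, num_small
        while large > 0:
            # draw a pill uniformly at random from the bottle
            if random.random() < large / (large + small):
                large -= 1      # take half of a large pill ...
                small += 1      # ... and return the other half as a small pill
            else:
                small -= 1      # take a small pill
        total += small
    return total / trials

# Starting with 10 large and 0 small pills the estimate should be close to
# the 10th harmonic number (~2.93), the commonly quoted closed form.
print(pills_problem(10, 0))
```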
Grothendieck's Homotopy Hypothesis ; We construct a diagonal cofibrantly generated model structure on the category of simplicial objects in the category of topological categories, sCat_Top, which is the category of diagrams [Delta^op, Cat_Top]. Moreover, we prove that the diagonal model structure is left proper and cellular. We also prove that the category of infinity-groupoids (a full subcategory of topological categories) has a cofibrantly generated model structure and is Quillen equivalent to the model category of simplicial sets, which proves Grothendieck's homotopy hypothesis.
Warm-logamediate inflationary universe model ; Warm inflationary universe models in the context of logamediate expansion are studied. General conditions required for these models to be realizable are derived and discussed. This study is done in the weak and strong dissipative regimes. The parameters of our models are constrained from the observational data.
Probabilistic Generative Model of Social Network Based on Web Features ; In this paper, we develop a dynamic framework for the modeling and analysis of social networks that works with web documents. We illustrate the model with features of the web, design a form to analyze relationships between attributes as a modality of social structure, and optimize the generative model based on Bayes' theorem.
Generalized parity in the multiphoton Rabi model ; The quantum multiphoton spin-boson model is considered. We solve an operator Riccati equation associated with this model and present a candidate for a generalized parity operator that allows one to transform the spin-boson Hamiltonian into a block-diagonal form, indicating the existence of a related symmetry of the model.
Generalized Shalika model on SO_{4n}(F), symplectic linear model on Sp_{4n}(F), and theta correspondence ; We show that if an irreducible admissible representation of SO_{4n}(F) has a generalized Shalika model, then its small theta lift to Sp_{4n}(F) has the symplectic linear model, thus answering a question posed by D. Jiang. Here F is a non-archimedean field of characteristic zero.
The Fermion Content of the Standard Model ; We describe a simple model that automatically generates the sum over gauge group representations and chiralities of a single generation of fermions in the Standard Model, augmented by a sterile neutrino. The model is a modification of the worldline approach to chiral fermions.
Sequence Modeling using Gated Recurrent Neural Networks ; In this paper, we use Recurrent Neural Networks to capture and model human motion data and to generate motions by predicting the next immediate data point at each timestep. Our RNN is armed with the recently proposed Gated Recurrent Units, which have shown promising results in sequence modeling problems such as Machine Translation and Speech Synthesis. We demonstrate that this model is able to capture long-term dependencies in data and generate realistic motions.
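A minimal PyTorch sketch of the next-step prediction setup described above: a GRU reads a motion sequence and a linear head predicts the next frame, trained with mean-squared error and rolled out autoregressively at generation time. The layer sizes, feature dimension, and training loop are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MotionGRU(nn.Module):
    """GRU that predicts the next motion frame from the sequence so far."""
    def __init__(self, feat_dim=54, hidden_dim=256, num_layers=2):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, feat_dim)

    def forward(self, x):                  # x: (batch, time, feat_dim)
        h, _ = self.gru(x)
        return self.head(h)                # prediction of the next frame at every step

model = MotionGRU()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seq = torch.randn(8, 100, 54)              # dummy batch of motion sequences

# Training step with teacher forcing: predict frame t+1 from frames up to t.
opt.zero_grad()
pred = model(seq[:, :-1])
loss = nn.functional.mse_loss(pred, seq[:, 1:])
loss.backward()
opt.step()

# Generation: feed predictions back in autoregressively (greedy rollout).
with torch.no_grad():
    frames = [seq[:1, :10]]                            # 10-frame seed sequence
    for _ in range(50):
        nxt = model(torch.cat(frames, dim=1))[:, -1:]   # last-step prediction
        frames.append(nxt)
```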
Inhomogeneous Restricted Lattice Walks ; We consider inhomogeneous lattice walk models in a half-space and in the quarter plane. For the models in a half-space, we show, by generalizing the kernel method to linear systems of functional equations, that their generating functions are always algebraic. For the models in the quarter plane, we have carried out an experimental classification of all models with small steps. We discovered many apparently D-finite cases, for most of which we have no explanation yet.
Model reduction by separation of variables a comparison between Hierarchical Model reduction and Proper Generalized Decomposition ; Hierarchical Model reduction and Proper Generalized Decomposition both exploit separation of variables to perform a model reduction. After setting the basics, we exemplify these techniques on some standard elliptic problems to highlight pros and cons of the two procedures, both from a methodological and a numerical viewpoint.
The complete 1/N expansion of colored tensor models in arbitrary dimension ; In this paper we generalize the results of [1,2] and derive the full 1/N expansion of colored tensor models in arbitrary dimensions. We detail the expansion for the independent identically distributed model and for the topological Boulatov-Ooguri model.
Interacting Quintessence Models of Dark Energy ; In this paper we consider two models of quintessence scalar fields with different potentials. Interaction with a generalized cosmic Chaplygin gas is also investigated. Cosmological parameters are studied and their graphical behavior is analyzed. We find that our model agrees with observational data, in particular with the Lambda-CDM model.
Regulatory Markets for AI Safety ; We propose a new model for regulation to achieve AI safety: global regulatory markets. We first sketch the model in general terms and provide an overview of the costs and benefits of this approach. We then demonstrate how the model might work in practice by responding to the risk of adversarial attacks on AI models employed in commercial drones.
Moderate deviations of generalized N-urn Ehrenfest models ; This paper is a further investigation of the generalized N-urn Ehrenfest model introduced in [Xue 2020]. A moderate deviation principle from the hydrodynamic limit of the model is derived. The proof of this main result follows a routine procedure introduced in [Kipnis 1989], where a replacement lemma plays the key role. To prove the replacement lemma, the large deviation principle of the model given in [Xue 2020] is utilized.
Some minimization problems for mean field models with competing forces ; We review recent results on three families of minimization problems, defined on subsets of nonnegative functions with fixed integral. The competition between attractive and repulsive forces leads to transitions between parameter regimes where minimizers exist and where they do not. The problems considered are generalized liquid drop models, swarming models, and generalized Keller-Segel models.
Estimating Separable Matching Models ; In this paper we propose two simple methods to estimate models of matching with transferable and separable utility introduced in Galichon and Salanié (2022). The first method is a minimum distance estimator that relies on the generalized entropy of matching. The second relies on a reformulation of the more special but popular Choo and Siow (2006) model; it uses generalized linear models (GLMs) with two-way fixed effects.
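To make the GLM reading concrete, the sketch below fits a Poisson regression of match counts on two-way (type) fixed effects plus one covariate using statsmodels. It is a generic illustration of "GLM with two-way fixed effects" on synthetic data; the variable names, the single covariate, and the data-generating process are hypothetical and this is not the authors' estimator.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic match counts mu_xy for 5 types on each side of the market.
rows = []
for x in range(5):
    for y in range(5):
        surplus = -0.5 * abs(x - y)                      # hypothetical systematic surplus
        rate = np.exp(2.0 + 0.3 * x + 0.2 * y + surplus)
        rows.append({"x": x, "y": y, "dist": abs(x - y),
                     "matches": rng.poisson(rate)})
df = pd.DataFrame(rows)

# Poisson GLM (log link) with fixed effects for both sides of the market.
fit = smf.glm("matches ~ C(x) + C(y) + dist", data=df,
              family=sm.families.Poisson()).fit()
print(fit.params["dist"])   # estimated coefficient on the pair covariate
```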
Renormalizable and unitary Lorentz-invariant model of quantum gravity ; We analyze the R + R^2 model of quantum gravity, in which terms quadratic in the curvature tensor are added to the General Relativity action. This model was recently proved to be a self-consistent quantum theory of gravitation, being both renormalizable and unitary. The model can be made practically indistinguishable from General Relativity at astrophysical and cosmological scales by a proper choice of parameters.
Neutrino Masses and Helicities ; Models of neutrinos and their mass generation by self-energy radiative corrections are formulated in an ultraviolet-complete quantum field theory (UCQFT). A model of the three flavors of neutrinos as Majorana fermions is developed as the minimal model. A model incorporating an SU(2)-singlet sterile neutrino can also be formulated.
Generic Forward Curve Dynamics for Commodity Derivatives ; This article presents a generic framework for modeling the dynamics of forward curves in commodity markets, as commodity derivatives are typically traded as futures or forwards. We demonstrate theoretically that commodity prices are driven by multiple components. As such, the model can better capture the forward price and volatility dynamics. An empirical study shows that the model prices are very close to the market prices, indicating prima facie that the model performs quite well.
Analyzing Generalized Pólya Urn Models using Martingales, with an Application to Viral Evolution ; The randomized play-the-winner (RPW) model is a generalized Pólya urn process with broad applications ranging from viral genomics to clinical trials. We derive an exact expression for the variance of the RPW model as well as an approximation of its full probability mass function. We then demonstrate an application of the model to viral replication processes and provide a novel method of estimating viral mutation rates.
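For readers unfamiliar with the RPW urn, the following Monte Carlo sketch follows its usual formulation: draw a ball to assign a treatment, return it, and add a ball of the same colour after a success or of the opposite colour after a failure. Exact moment results such as the variance derived in the paper can be sanity-checked against simulations like this one; the success probabilities and urn initialisation are illustrative assumptions.

```python
import random
from statistics import mean, variance

def rpw_allocation(n_patients, p_success=(0.7, 0.4), start=(1, 1)):
    """One run of the randomized play-the-winner urn.
    Returns the number of patients assigned to treatment 0."""
    urn = list(start)                       # ball counts for colour 0 and colour 1
    assigned0 = 0
    for _ in range(n_patients):
        arm = 0 if random.random() < urn[0] / sum(urn) else 1
        assigned0 += (arm == 0)
        success = random.random() < p_success[arm]
        # reinforce the winning arm on success, the other arm on failure
        urn[arm if success else 1 - arm] += 1
    return assigned0

runs = [rpw_allocation(100) for _ in range(20_000)]
print("mean allocation proportion to arm 0:", mean(runs) / 100)
print("simulated variance of N0:", variance(runs))
```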
Trust the Model When It Is Confident: Masked Model-based Actor-Critic ; It is a popular belief that model-based Reinforcement Learning (RL) is more sample efficient than model-free RL, but in practice this is not always true due to overweighted model errors. In complex and noisy settings, model-based RL tends to have trouble using the model if it does not know when to trust it. In this work, we find that better model usage can make a huge difference. We show theoretically that if the use of model-generated data is restricted to state-action pairs where the model error is small, the performance gap between model and real rollouts can be reduced. This motivates us to use model rollouts only when the model is confident about its predictions. We propose Masked Model-based Actor-Critic (M2AC), a novel policy optimization algorithm that maximizes a model-based lower bound of the true value function. M2AC implements a masking mechanism based on the model's uncertainty to decide whether its prediction should be used or not. Consequently, the new algorithm tends to give robust policy improvements. Experiments on continuous control benchmarks demonstrate that M2AC has strong performance even when using long model rollouts in very noisy environments, and it significantly outperforms previous state-of-the-art methods.
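A schematic sketch of the masking idea: generate short model rollouts, score each model-generated transition by the disagreement of an ensemble of learned dynamics models, and keep only the most confident fraction for policy optimisation. The function names, the ensemble-disagreement uncertainty proxy, and the quantile-based mask are our own simplifications for illustration, not the authors' implementation.

```python
import numpy as np

def masked_rollout(ensemble, policy, start_states, horizon=5, keep_frac=0.5):
    """Roll out an ensemble of learned dynamics models and keep only the
    model-generated transitions with the lowest predictive disagreement."""
    transitions, scores = [], []
    states = np.asarray(start_states, dtype=float)
    for _ in range(horizon):
        actions = policy(states)
        preds = np.stack([m(states, actions) for m in ensemble])   # (E, B, d)
        next_states = preds.mean(axis=0)
        disagreement = preds.std(axis=0).mean(axis=-1)             # uncertainty proxy
        for s, a, s2, u in zip(states, actions, next_states, disagreement):
            transitions.append((s, a, s2))
            scores.append(u)
        states = next_states
    # mask: keep the keep_frac most confident transitions
    cutoff = np.quantile(scores, keep_frac)
    return [t for t, u in zip(transitions, scores) if u <= cutoff]

# Toy usage with linear "dynamics models" and a random policy on a 3-d state space.
ensemble = [lambda s, a, W=np.random.randn(3, 3) * 0.1: s + a @ W for _ in range(4)]
policy = lambda s: np.random.randn(len(s), 3)
kept = masked_rollout(ensemble, policy, np.zeros((8, 3)))
print(len(kept), "of", 8 * 5, "model transitions kept")
```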
Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-to-Image Models ; State-of-the-art Text-to-Image models like Stable Diffusion and DALL-E 2 are revolutionizing how people generate visual content. At the same time, society has serious concerns about how adversaries can exploit such models to generate unsafe images. In this work, we focus on demystifying the generation of unsafe images and hateful memes from Text-to-Image models. We first construct a typology of unsafe images consisting of five categories (sexually explicit, violent, disturbing, hateful, and political). Then, we assess the proportion of unsafe images generated by four advanced Text-to-Image models using four prompt datasets. We find that these models can generate a substantial percentage of unsafe images; across four models and four prompt datasets, 14.56% of all generated images are unsafe. When comparing the four models, we find different risk levels, with Stable Diffusion being the most prone to generating unsafe content (18.92% of all generated images are unsafe). Given Stable Diffusion's tendency to generate more unsafe content, we evaluate its potential to generate hateful meme variants if exploited by an adversary to attack a specific individual or community. We employ three image editing methods, DreamBooth, Textual Inversion, and SDEdit, which are supported by Stable Diffusion. Our evaluation shows that 24% of the images generated using DreamBooth are hateful meme variants that present the features of the original hateful meme and the target individual/community; these generated images are comparable to hateful meme variants collected from the real world. Overall, our results demonstrate that the danger of large-scale generation of unsafe images is imminent. We discuss several mitigating measures, such as curating training data, regulating prompts, and implementing safety filters, and encourage better safeguard tools to be developed to prevent unsafe generation.
Better Language Models with Model Merging ; This paper investigates model merging, a technique for deriving Markov models from text or speech corpora. Models are derived by starting with a large and specific model and by successively combining states to build smaller and more general models. We present methods to reduce the time complexity of the algorithm and report on experiments on deriving language models for a speech recognition task. The experiments show the advantage of model merging over the standard bigram approach. The merged model assigns a lower perplexity to the test set and uses considerably fewer states.
Structure and Parameter Learning for Causal Independence and Causal Interaction Models ; This paper discusses causal independence models and a generalization of these models called causal interaction models. Causal interaction models are models that have independent mechanisms, where a mechanism can have several causes. In addition to introducing several particular types of causal interaction models, we show how the Bayesian approach can be applied to learning causal interaction models, obtaining approximate posterior distributions for the models as well as MAP and ML estimates for the parameters. We illustrate the approach with a simulation study of learning model posteriors.
Vector models and generalized SYK models ; We consider the relation between SYK-like models and vector models by studying a toy model in which a tensor field is coupled to a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on, and the toy model flows to an SYK-like model at low energy. A chaotic/non-chaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.
Disfluency Detection using a Noisy Channel Model and a Deep Neural Language Model ; This paper presents a model for disfluency detection in spontaneous speech transcripts called the LSTM Noisy Channel Model. The model uses a Noisy Channel Model (NCM) to generate n-best candidate disfluency analyses and a Long Short-Term Memory (LSTM) language model to score the underlying fluent sentences of each analysis. The LSTM language model scores, along with other features, are used in a MaxEnt reranker to identify the most plausible analysis. We show that using an LSTM language model in the reranking process of the noisy channel disfluency model improves the state-of-the-art in disfluency detection.
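The reranking step can be pictured as combining the noisy-channel score and the language-model score over the n-best analyses and picking the maximizer. The sketch below is deliberately generic (the scoring functions, weights, and data structures are placeholders), since the actual system uses a MaxEnt reranker with additional features.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Analysis:
    tokens: List[str]          # hypothesised fluent token sequence
    channel_logprob: float     # log score from the noisy channel model

def rerank(nbest: List[Analysis],
           lm_logprob: Callable[[List[str]], float],
           w_channel: float = 1.0,
           w_lm: float = 1.0) -> Analysis:
    """Pick the analysis maximising a weighted sum of channel and LM scores."""
    return max(nbest, key=lambda a: w_channel * a.channel_logprob
                                    + w_lm * lm_logprob(a.tokens))

# Toy usage with a stand-in "language model" that simply penalises length.
toy_lm = lambda toks: -0.5 * len(toks)
candidates = [Analysis(["i", "want", "a", "flight"], -0.9),
              Analysis(["i", "i", "want", "a", "flight"], -0.7)]
print(rerank(candidates, toy_lm).tokens)
```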
A Review of Latent Space Models for Social Networks ; In this paper, we provide a review of both the fundamentals of social networks and latent space modeling. The former covers important topics related to network description, including vertex characteristics and network structure, whereas the latter covers relevant advances in network modeling, including random graph models, generalized random graph models, exponential random graph models, and social space models. We discuss in detail several latent space models provided in the literature, paying special attention to distance, class, and eigen models in the context of undirected binary networks. In addition, we examine empirically the behavior of these models in terms of prediction and goodness-of-fit using more than twenty popular datasets from the network literature.
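As a concrete example of the distance model covered in the review, the snippet below simulates an undirected binary network in which edge probabilities are a logistic function of an intercept minus the Euclidean distance between latent positions. Names, dimensions, and parameter values are illustrative.

```python
import numpy as np

def simulate_distance_model(n_nodes=30, dim=2, alpha=1.0, seed=0):
    """Latent space (distance) model for an undirected binary network:
    P(edge i~j) = sigmoid(alpha - ||z_i - z_j||)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n_nodes, dim))                        # latent positions
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    p = 1.0 / (1.0 + np.exp(-(alpha - d)))                     # edge probabilities
    draws = rng.random((n_nodes, n_nodes)) < p
    adj = np.triu(draws, k=1)                                  # sample upper triangle
    adj = adj | adj.T                                          # symmetrise
    np.fill_diagonal(adj, False)
    return z, adj

z, A = simulate_distance_model()
print("network density:", A.sum() / (A.shape[0] * (A.shape[0] - 1)))
```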
Dimers and the Ising model ; We present a novel relationship between ground states of the Ising model and dimer coverings, which sheds new light on Ising models with highly degenerate ground states and enables one to construct such models. Thanks to this relationship, we also find the generating function for dimers as an appropriate limit of the free energy per spin of the Ising model.
Modeling the Dialectic ; Three formal firstorder finite dialectical schemes are investigated. It is shown that schemes 1 and 2 have significantly different finite models. Further, an infinite natural number model for schemes 1, 2, 3 is constructed, and it is shown that scheme 3 has no finite model.
Dislocation dynamics from microscopic models to macroscopic crystal plasticity ; In this paper we study the connection between four models describing dislocation dynamics: a generalized 2D Frenkel-Kontorova model at the atomic level, the Peierls-Nabarro model, discrete dislocation dynamics, and a macroscopic model with dislocation densities. We show how each model can be deduced from the previous one at a smaller scale.
Topics on abelian spin models and related problems ; In these notes, we discuss a selection of topics on several models of planar statistical mechanics. We consider the Ising, Potts, and more generally abelian spin models; the discrete Gaussian free field; the random cluster model; and the sixvertex model. Emphasis is put on duality, order, disorder and spinor variables, and on mappings between these models.
Calibration diagnostics for point process models via the probability integral transform ; We propose the use of the probability integral transform PIT for model validation in point process models. The simple PIT diagnostics assess the calibration of the model and can detect inconsistencies in both the intensity and the interaction structure. For the Poisson model, the PIT diagnostics can be calculated explicitly. Generally, the calibration may be assessed empirically based on random draws from the model and the method applies to processes of any dimension.
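For discrete observations such as Poisson counts, the (randomized) PIT referred to above can be computed in closed form: draw U = F(x-1) + V [F(x) - F(x-1)] with V uniform, and check that the resulting values look uniform when the model is well calibrated. The snippet below is a generic illustration of this construction, not code from the paper.

```python
import numpy as np
from scipy.stats import poisson

def randomized_pit(counts, intensity, rng=None):
    """Randomized PIT values for observed counts under a Poisson(intensity) model.
    A well-calibrated model yields approximately Uniform(0,1) PIT values."""
    rng = rng or np.random.default_rng()
    counts = np.asarray(counts)
    lower = poisson.cdf(counts - 1, intensity)     # F(x-1), with F(-1) = 0
    upper = poisson.cdf(counts, intensity)         # F(x)
    return lower + rng.random(counts.shape) * (upper - lower)

rng = np.random.default_rng(1)
data = rng.poisson(4.0, size=5000)
u_good = randomized_pit(data, 4.0, rng)            # correct intensity: roughly flat PIT
u_bad = randomized_pit(data, 2.5, rng)             # misspecified intensity: skewed PIT
print("mean PIT (good/bad):", u_good.mean().round(3), u_bad.mean().round(3))
```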
The bounded 15-vertex model ; The 15-vertex model of statistical mechanics is studied on a square domain with a partially oriented boundary. With domain wall boundary conditions (DWBC) the model would reduce to the six-vertex model, but more general boundary configurations are available. After establishing the dynamic version of the model, we simulate it to find the typical equilibrium states for a set of increasingly complex boundaries. Among other things, they yield almost isotropic, nontrivial limit shapes even though the microscopic model is highly asymmetric.
Exponential bounds of ruin probabilities for non-homogeneous risk models ; Lundberg-type inequalities for ruin probabilities of non-homogeneous risk models are presented in this paper. By employing the martingale method, upper bounds on ruin probabilities are obtained for general risk models under weak assumptions. In addition, several risk models, including the newly defined united risk model and a quasi-periodic risk model with interest rate, are studied.
A Model Structure On The Category Of Small Acyclic Categories ; In this paper, we show that the Thomason model structure restricts to a Quillen-equivalent cofibrantly generated model structure on the category of acyclic categories, whose generating cofibrations are the same as those generating the Thomason model structure. To understand the Thomason model structure, we need to have a closer look at the barycentric subdivision endofunctor on the category of simplicial sets. This functor has a well-known right adjoint, called Kan's Ex functor. Taking the subdivision twice and then the fundamental category yields a left adjoint of an adjunction between the category of simplicial sets and the category of small categories, whose right adjoint is given by applying the Ex functor twice to the nerve of a category. This adjunction lifts the cofibrantly generated Quillen model structure on simplicial sets to a cofibrantly generated model structure on the category of small categories, the Thomason model structure. The generating sets are given by the image of the generating sets of the Quillen model structure on simplicial sets under the aforementioned adjunction. We furthermore show that the category of acyclic categories is proper and combinatorial with respect to said model structure. That is, weak equivalences behave nicely with respect to pushouts along fibrations and cofibrations, and cofibrations satisfy certain smallness conditions which allow us to work with sets instead of proper classes.
GraphGAN: Graph Representation Learning with Generative Adversarial Nets ; The goal of graph representation learning is to embed each vertex in a graph into a low-dimensional vector space. Existing graph representation learning methods can be classified into two categories: generative models that learn the underlying connectivity distribution in the graph, and discriminative models that predict the probability of edge existence between a pair of vertices. In this paper, we propose GraphGAN, an innovative graph representation learning framework unifying the above two classes of methods, in which the generative model and the discriminative model play a game-theoretical minimax game. Specifically, for a given vertex, the generative model tries to fit its underlying true connectivity distribution over all other vertices and produces fake samples to fool the discriminative model, while the discriminative model tries to detect whether the sampled vertex is from the ground truth or generated by the generative model. With the competition between these two models, both of them can alternately and iteratively boost their performance. Moreover, for the implementation of the generative model, we propose a novel graph softmax to overcome the limitations of the traditional softmax function, which can be proven to satisfy desirable properties of normalization, graph-structure awareness, and computational efficiency. Through extensive experiments on real-world datasets, we demonstrate that GraphGAN achieves substantial gains in a variety of applications, including link prediction, node classification, and recommendation, over state-of-the-art baselines.
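A toy schematic of the minimax setup described above (omitting the proposed graph softmax): the generator scores candidate neighbours of a vertex with a softmax over inner products of embeddings and samples "fake" neighbours from it, while the discriminator scores vertex pairs with a sigmoid of an inner product. The ring graph, the plain softmax, and the REINFORCE-style generator update are heavy simplifications for exposition, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lr = 20, 8, 0.05
G = rng.normal(scale=0.1, size=(n, d))     # generator embeddings
D = rng.normal(scale=0.1, size=(n, d))     # discriminator embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gen_probs(v):
    """Generator's connectivity distribution of vertex v over all other vertices."""
    scores = G @ G[v]
    scores[v] = -np.inf
    p = np.exp(scores - scores[np.isfinite(scores)].max())
    return p / p.sum()

for step in range(200):
    for v in range(n):
        probs = gen_probs(v)
        fake = rng.choice(n, p=probs)          # "fake" neighbour sampled from the generator
        true = (v + 1) % n                     # a real neighbour of v on a toy ring graph

        # Discriminator: logistic regression on inner products of embeddings.
        for u, label in ((true, 1.0), (fake, 0.0)):
            p = sigmoid(D[v] @ D[u])
            dv, du = D[v].copy(), D[u].copy()
            D[v] -= lr * (p - label) * du
            D[u] -= lr * (p - label) * dv

        # Generator: REINFORCE-style update, rewarding samples the discriminator deems real.
        reward = np.log(sigmoid(D[v] @ D[fake]) + 1e-8)
        G[v] += lr * reward * (G[fake] - probs @ G)

print("P_G(vertex 1 | vertex 0) =", round(float(gen_probs(0)[1]), 3))
```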
Learning Task-General Representations with Generative Neuro-Symbolic Modeling ; People can learn rich, general-purpose conceptual representations from only raw perceptual inputs. Current machine learning approaches fall well short of these human standards, although different modeling traditions often have complementary strengths. Symbolic models can capture the compositional and causal knowledge that enables flexible generalization, but they struggle to learn from raw inputs, relying on strong abstractions and simplifying assumptions. Neural network models can learn directly from raw data, but they struggle to capture compositional and causal structure and typically must retrain to tackle new tasks. We bring together these two traditions to learn generative models of concepts that capture rich compositional and causal structure, while learning from raw data. We develop a generative neuro-symbolic (GNS) model of handwritten character concepts that uses the control flow of a probabilistic program, coupled with symbolic stroke primitives and a symbolic image renderer, to represent the causal and compositional processes by which characters are formed. The distributions of parts (strokes), and the correlations between parts, are modeled with neural network subroutines, allowing the model to learn directly from raw data and express nonparametric statistical relationships. We apply our model to the Omniglot challenge of human-level concept learning, using a background set of alphabets to learn an expressive prior distribution over character drawings. In a subsequent evaluation, our GNS model uses probabilistic inference to learn rich conceptual representations from a single training image that generalize to four unique tasks, succeeding where previous work has fallen short.
Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision ; The black-box nature of neural models has motivated a line of research that aims to generate natural language rationales to explain why a model made certain predictions. Such rationale generation models, to date, have been trained on dataset-specific crowdsourced rationales, but this approach is costly and does not generalize to new tasks and domains. In this paper, we investigate the extent to which neural models can reason about natural language rationales that explain model predictions, relying only on distant supervision with no additional annotation cost for human-written rationales. We investigate multiple ways to automatically generate rationales using pretrained language models, neural knowledge models, and distant supervision from related tasks, and we train generative models capable of composing explanatory rationales for unseen instances. We demonstrate our approach on the defeasible inference task, a nonmonotonic reasoning task in which an inference may be strengthened or weakened when new information (an update) is introduced. Our model shows promise at generating post-hoc rationales explaining why an inference is more or less likely given the additional information; however, it mostly generates trivial rationales, reflecting the fundamental limitations of neural language models. Conversely, the more realistic setup of jointly predicting the update (or its type) and generating the rationale is more challenging, suggesting an important future direction.
How Do Your Biomedical Named Entity Recognition Models Generalize to Novel Entities? ; The amount of biomedical literature on new biomedical concepts is rapidly increasing, which necessitates reliable biomedical named entity recognition (BioNER) models for identifying new and unseen entity mentions. However, it is questionable whether existing models can effectively handle them. In this work, we systematically analyze three types of recognition abilities of BioNER models: memorization, synonym generalization, and concept generalization. We find that although current best models achieve state-of-the-art results on benchmarks in terms of overall performance, they have limitations in identifying synonyms and new biomedical concepts, indicating that their generalization abilities are overestimated. We also investigate failure cases of the models and identify several difficulties in recognizing unseen mentions in the biomedical literature: (1) models tend to exploit dataset biases, which hinders their ability to generalize, and (2) several biomedical names have novel morphological patterns with weak name regularity, and models fail to recognize them. We apply a statistics-based debiasing method to our problem as a simple remedy and show an improvement in generalization to unseen mentions. We hope that our analyses and findings will facilitate further research into the generalization capabilities of NER models in a domain where their reliability is of utmost importance.
On Training Sample Memorization: Lessons from Benchmarking Generative Modeling with a Large-scale Competition ; Many recent developments in generative models for natural images have relied on heuristically motivated metrics that can be easily gamed by memorizing a small sample from the true distribution or by training a model directly to improve the metric. In this work, we critically evaluate the gameability of these metrics by designing and deploying a generative modeling competition. Our competition received over 11,000 submitted models. The competitiveness between participants allowed us to investigate both intentional and unintentional memorization in generative modeling. To detect intentional memorization, we propose the "Memorization-Informed Fréchet Inception Distance" (MiFID) as a new memorization-aware metric and design benchmark procedures to ensure that winning submissions made genuine improvements in perceptual quality. Furthermore, we manually inspect the code of the 1,000 top-performing models to understand and label different forms of memorization. Our analysis reveals that unintentional memorization is a serious and common issue in popular generative models. The generated images and our memorization labels for those models, as well as code to compute MiFID, are released to facilitate future studies on benchmarking generative models.
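A simplified sketch of a memorization-aware score in the spirit of MiFID: compute FID from feature statistics, measure how close generated features lie to their nearest training features (cosine distance), and inflate the score when that distance drops below a threshold. The exact MiFID penalty differs in its details; the threshold and penalty form below are chosen purely for exposition.

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_gen):
    """Fréchet distance between Gaussians fitted to two feature sets."""
    mu1, mu2 = feats_real.mean(0), feats_gen.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2).real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2 * covmean))

def memorization_distance(feats_train, feats_gen):
    """Mean over generated samples of the minimum cosine distance to training features."""
    a = feats_gen / np.linalg.norm(feats_gen, axis=1, keepdims=True)
    b = feats_train / np.linalg.norm(feats_train, axis=1, keepdims=True)
    cos_dist = 1.0 - a @ b.T
    return float(cos_dist.min(axis=1).mean())

def memorization_aware_fid(feats_train, feats_real, feats_gen, tau=0.1):
    """Illustrative MiFID-style score: penalise FID when generated samples
    sit suspiciously close to training samples."""
    d = memorization_distance(feats_train, feats_gen)
    penalty = 1.0 if d > tau else tau / max(d, 1e-8)   # inflate score when d < tau
    return fid(feats_real, feats_gen) * penalty

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 64))
real = rng.normal(size=(500, 64))
honest = rng.normal(size=(500, 64))                         # independent "generated" samples
copied = train + 1e-3 * rng.normal(size=(500, 64))          # near-duplicates of training data
print("honest submission:", round(memorization_aware_fid(train, real, honest), 2))
print("copied submission:", round(memorization_aware_fid(train, real, copied), 2))
```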