Parameters of NJL models for a generic representation of the gauge group ; We generalize a nonlocal Nambu-Jona-Lasinio model to a generic representation of the gauge group. The critical temperature is given in a closed form as a function of the parameters of the theory and the cutoff. This result is generally useful in the understanding of QCD-like theories and their thermodynamical behavior.
The Chern-Ricci flow on smooth minimal models of general type ; We show that on a smooth Hermitian minimal model of general type the Chern-Ricci flow converges to a closed positive current on M. Moreover, the flow converges smoothly to a Kahler-Einstein metric on compact sets away from the null locus of K_M. This generalizes work of Tsuji and Tian-Zhang to Hermitian manifolds, providing further evidence that the Chern-Ricci flow is a natural generalization of the Kahler-Ricci flow.
Energy-momentum distribution of a general plane symmetric spacetime in metric f(R) gravity ; In this paper, the exact vacuum solution of a general plane symmetric spacetime is investigated in metric f(R) gravity with the assumption of constant Ricci scalar. For this solution, we have studied the generalized Landau-Lifshitz energy-momentum complex in this theory to determine the energy distribution expressions for some specific f(R) models. Also, we show that these models satisfy the constant curvature condition.
StrokeCoder: Path-Based Image Generation from Single Examples using Transformers ; This paper demonstrates how a Transformer Neural Network can be used to learn a Generative Model from a single path-based example image. We further show how a data set can be generated from the example image and how the model can be used to generate a large set of deviated images, which still represent the original image's style and concept.
Automatic Evaluation of Neural Personality-based Chatbots ; Stylistic variation is critical to render the utterances generated by conversational agents natural and engaging. In this paper, we focus on sequence-to-sequence models for open-domain dialogue response generation and propose a new method to evaluate the extent to which such models are able to generate responses that reflect different personality traits.
Unsupervised Quantum Circuit Learning in High Energy Physics ; Unsupervised training of generative models is a machine learning task that has many applications in scientific computing. In this work we evaluate the efficacy of using quantum circuit-based generative models to generate synthetic data of high energy physics processes. We use non-adversarial, gradient-based training of quantum circuit Born machines to generate joint distributions over 2 and 3 variables.
When to Trust Your Model: Model-Based Policy Optimization ; Designing effective model-based reinforcement learning algorithms is difficult because the ease of data generation must be weighed against the bias of model-generated data. In this paper, we study the role of model usage in policy optimization both theoretically and empirically. We first formulate and analyze a model-based reinforcement learning algorithm with a guarantee of monotonic improvement at each step. In practice, this analysis is overly pessimistic and suggests that real off-policy data is always preferable to model-generated on-policy data, but we show that an empirical estimate of model generalization can be incorporated into such analysis to justify model usage. Motivated by this analysis, we then demonstrate that a simple procedure of using short model-generated rollouts branched from real data has the benefits of more complicated model-based algorithms without the usual pitfalls. In particular, this approach surpasses the sample efficiency of prior model-based methods, matches the asymptotic performance of the best model-free algorithms, and scales to horizons that cause other model-based methods to fail entirely.
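A minimal illustrative sketch of the branched short-rollout procedure described above is given below in Python. It is not the authors' implementation: the learned dynamics model, the policy, and the real-data buffer are replaced by placeholder stubs, and the policy-optimization step is omitted.

```python
import numpy as np

class DummyModel:
    """Placeholder for a learned dynamics model p(s', r | s, a)."""
    def step(self, s, a):
        s_next = s + 0.1 * a + 0.01 * np.random.randn(*s.shape)
        return s_next, -float(np.sum(s ** 2))  # (next state, reward)

def branched_rollouts(model, policy, env_buffer, k=1, n_starts=64):
    """Generate k-step model rollouts branched from states in the real buffer."""
    model_buffer = []
    idx = np.random.randint(len(env_buffer), size=n_starts)
    for s in (env_buffer[i] for i in idx):
        for _ in range(k):  # a short horizon limits compounding model error
            a = policy(s)
            s_next, r = model.step(s, a)
            model_buffer.append((s, a, r, s_next))
            s = s_next
    return model_buffer

# Usage: mix these synthetic transitions with real ones when updating the policy.
env_buffer = [np.random.randn(3) for _ in range(100)]  # placeholder real states
policy = lambda s: -0.5 * s                            # placeholder policy
print(len(branched_rollouts(DummyModel(), policy, env_buffer, k=1)))
```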
Evaluating Generative Patent Language Models ; Generative language models are promising for assisting human writing in various domains. This manuscript aims to build generative language models in the patent domain and evaluate model performance from a human-centric perspective. The perspective is to measure the ratio of keystrokes that can be saved by autocompletion based on generative patent language models. A higher ratio means a more effective model which can save more keystrokes. This metric can be used to benchmark model performance. The metric is different from conventional machine-centric metrics that are token-based instead of keystroke-based. In terms of model size, the largest model built in this manuscript is 6B, which is state-of-the-art in the patent domain. Based on the metric, it is found that the largest model is not necessarily the best for the human-centric metric. This finding means that continuing to increase model size in the patent domain might be unnecessary if the purpose is to assist human writing with autocompletion. Several patent language models are pretrained from scratch in this research. The pretrained models are released for future researchers. Several visualization tools are also provided. The importance of building a generative language model in the patent domain is the potential to facilitate creativity and innovation in the future.
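As an illustration of the human-centric metric described above, the sketch below computes the fraction of keystrokes saved when a user accepts correct autocompletion suggestions at the cost of one keystroke each. The exact definition in the manuscript may differ; the suggestion function here is a hypothetical stand-in for a patent language model.

```python
def keystrokes_saved(target: str, suggest, accept_cost: int = 1) -> float:
    """Fraction of keystrokes saved by accepting correct suggestions.

    `suggest(prefix)` is assumed to return the model's best completion of the
    prefix (a stand-in for a generative patent language model).
    """
    typed, keystrokes = "", 0
    while typed != target:
        completion = suggest(typed)
        if completion and target.startswith(typed + completion):
            typed += completion
            keystrokes += accept_cost      # one key to accept the suggestion
        else:
            typed += target[len(typed)]    # type the next character manually
            keystrokes += 1
    return 1.0 - keystrokes / len(target)

# Toy suggestion function: always proposes "claim" after the prefix "the ".
toy_suggest = lambda prefix: "claim" if prefix.endswith("the ") else ""
print(keystrokes_saved("the claim is novel", toy_suggest))  # ~0.22
```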
The Robustness Limits of SoTA Vision Models to Natural Variation ; Recent state-of-the-art vision models introduced new architectures, learning paradigms, and larger pretraining data, leading to impressive performance on tasks such as classification. While previous generations of vision models were shown to lack robustness to factors such as pose, it is unclear to what extent this next generation of models is more robust. To study this question, we develop a dataset of more than 7 million images with controlled changes in pose, position, background, lighting, and size. We study not only how robust recent state-of-the-art models are, but also the extent to which models can generalize to variation in factors when such variation is present during training. We consider a catalog of recent vision models, including vision transformers (ViT), self-supervised models such as masked autoencoders (MAE), and models trained on larger datasets such as CLIP. We find that, out of the box, even today's best models are not robust to common changes in pose, size, and background. When some samples are varied during training, we find that models require a significant amount of diversity to generalize, though eventually robustness does improve. When diversity is only seen for some classes, however, models do not generalize to other classes unless the classes are very similar to those seen varying during training. We hope our work will shed further light on the blind spots of SoTA models and spur the development of more robust vision models.
Optimization of coarse-grained models matching probability density in conformational space ; Coarse-grained (CG) models are low-resolution approximations of high-resolution models, such as all-atom (AA) models. An effective CG model is expected to reproduce the equilibrium values of a sufficient set of physical quantities of its AA model, which requires matching the equilibrium probability density of the CG model to that of the AA model in conformational space. The present work proposes a novel methodology for constructing effective CG models that aims at minimizing the distance between the CG model and the AA model. The distance is defined as a functional of the conformational probability densities of the CG and AA models and is further expanded in terms of ensemble averages of a set of sufficient and independent basis functions. An orthogonalization strategy is adopted to obtain the independent basis functions from sufficiently many preselected physical quantities of interest. Two variational methods are developed to optimize the parameters of the effective CG force field by minimizing the functional of probability densities; they are then generalized so that the CG model also reproduces the pressure of the AA model. The general CG framework is verified by constructing a one-site CG water model from the TIP3P water model.
A generalized configuration model with triadic closure ; In this paper we present a generalized configuration model with random triadic closure (GCTC). This model possesses five fundamental properties: a large clustering coefficient, a power-law degree distribution, short path lengths, nonzero Pearson degree correlation, and the existence of community structures. We analytically derive the Pearson degree correlation coefficient and the clustering coefficient of the proposed model. We select a few datasets of real-world networks. By simulation, we show that the GCTC model matches the datasets very well in terms of Pearson degree correlations and clustering coefficients. We also test three well-known community detection algorithms on our model, the datasets, and three other prevalent benchmark models. We show that the GCTC model performs as well as the other three benchmark models. Finally, we perform influence diffusion on the GCTC model using the independent cascade model and the linear threshold model. We show that the influence spreads of the GCTC model are much closer to those of the datasets than those of the other benchmark models. This suggests that the GCTC model is a suitable tool for studying network science problems where degree correlation or clustering plays an important role.
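The generative idea can be illustrated with a toy networkx sketch of the same flavor (not the authors' exact GCTC construction): build a configuration model from a heavy-tailed degree sequence and then close open triads at random, which raises clustering while keeping the degree distribution heavy-tailed.

```python
import random
import networkx as nx
import numpy as np

def toy_gctc(n=2000, exponent=2.5, n_closures=3000, seed=0):
    rng = np.random.default_rng(seed)
    random.seed(seed)
    degrees = np.clip(rng.zipf(exponent, size=n), 1, n - 1)  # heavy-tailed degrees
    if degrees.sum() % 2:
        degrees[0] += 1                 # configuration model needs an even degree sum
    G = nx.Graph(nx.configuration_model(degrees.tolist(), seed=seed))
    G.remove_edges_from(list(nx.selfloop_edges(G)))
    candidates = [v for v in G if G.degree(v) >= 2]
    for _ in range(n_closures):         # random triadic closure
        v = random.choice(candidates)
        u, w = random.sample(list(G[v]), 2)
        G.add_edge(u, w)
    return G

G = toy_gctc()
print("clustering:", nx.average_clustering(G))
print("degree correlation:", nx.degree_assortativity_coefficient(G))
```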
Classifying Emails into Human vs Machine Category ; It is an essential product requirement of Yahoo Mail to distinguish between personal and machine-generated emails. The old production classifier in Yahoo Mail was based on a simple logistic regression model. That model was trained by aggregating features at the SMTP address level. We propose building deep learning models at the message level. We built and trained four individual CNN models: (1) a content model with subject and content as input; (2) a sender model with sender email address and name as input; (3) an action model, built by analyzing email recipients' action patterns and correspondingly generating target labels based on senders' opening/deleting behaviors; (4) a salutation model, built by utilizing senders' explicit salutation signals as positive labels. Next, we built a final full model after exploring different combinations of the above four models. Experimental results on editorial data show that our full model improves the adjusted-recall from 70.5% to 78.8% compared to the old production model, while at the same time lifting the precision from 94.7% to 96.0%. Our full model also significantly beats the state-of-the-art BERT model at this task. This full model has been deployed into the current production system, Yahoo Mail 6.
Out-of-Domain Semantics to the Rescue! Zero-Shot Hybrid Retrieval Models ; Pretrained language model (e.g., BERT) based deep retrieval models have achieved superior performance over lexical retrieval models (e.g., BM25) in many passage retrieval tasks. However, limited work has been done to generalize a deep retrieval model to other tasks and domains. In this work, we carefully select five datasets, including two in-domain datasets and three out-of-domain datasets with different levels of domain shift, and study the generalization of a deep model in a zero-shot setting. Our findings show that the performance of a deep retrieval model deteriorates significantly when the target domain is very different from the source domain that the model was trained on. In contrast, lexical models are more robust across domains. We thus propose a simple yet effective framework to integrate lexical and deep retrieval models. Our experiments demonstrate that these two models are complementary, even when the deep model is weaker in the out-of-domain setting. The hybrid model obtains an average of 20.4% relative gain over the deep retrieval model, and an average of 9.54% over the lexical model, on the three out-of-domain datasets.
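One simple way to realize such a lexical-deep hybrid, shown below as an illustrative sketch, is to min-max normalize each model's scores per query and interpolate them; the interpolation weight and the normalization are assumptions here, not necessarily the exact scheme used in the paper.

```python
import numpy as np

def minmax(scores: np.ndarray) -> np.ndarray:
    lo, hi = scores.min(), scores.max()
    return np.zeros_like(scores) if hi == lo else (scores - lo) / (hi - lo)

def hybrid_scores(bm25: np.ndarray, dense: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Interpolate lexical (BM25) and deep (dense) retrieval scores for one query."""
    return alpha * minmax(bm25) + (1.0 - alpha) * minmax(dense)

# Toy example: scores of four candidate passages under each model.
bm25 = np.array([12.3, 7.1, 0.4, 9.9])
dense = np.array([0.62, 0.81, 0.15, 0.58])
print(np.argsort(-hybrid_scores(bm25, dense)))  # hybrid ranking, best first
```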
Multifold Cross-Validation Model Averaging for Generalized Additive Partial Linear Models ; Generalized additive partial linear models (GAPLMs) are appealing for model interpretation and prediction. However, for GAPLMs, the covariates and the degree of smoothing in the nonparametric parts are often difficult to determine in practice. To address this model selection uncertainty, we develop a computationally feasible model averaging (MA) procedure. The model weights are data-driven and selected based on multifold cross-validation (CV) instead of leave-one-out, for computational savings. When all the candidate models are misspecified, we show that the proposed MA estimator for GAPLMs is asymptotically optimal in the sense of achieving the lowest possible Kullback-Leibler loss. In the other scenario, where the candidate model set contains at least one correct model, the weights chosen by multifold CV are asymptotically concentrated on the correct models. As a by-product, we propose a variable importance measure to quantify the importance of the predictors in GAPLMs based on the MA weights. It is shown to be able to asymptotically identify the variables in the true model. Moreover, when the number of candidate models is very large, a model screening method is provided. Numerical experiments show the superiority of the proposed MA method over some existing model averaging and selection methods.
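For orientation, a generic form of a multifold (J-fold) cross-validation criterion for choosing model-averaging weights is sketched below in our own notation, which may differ from the paper's: with candidate estimators $\hat{\mu}_m$ and folds $\mathcal{I}_1,\dots,\mathcal{I}_J$, the weights are chosen on the simplex by minimizing the out-of-fold squared error.

```latex
\hat{w} \;=\; \operatorname*{arg\,min}_{w \in \mathcal{W}}
  \sum_{j=1}^{J} \sum_{i \in \mathcal{I}_j}
  \Bigl( y_i - \sum_{m=1}^{M} w_m\, \hat{\mu}_m^{(-j)}(x_i) \Bigr)^{2},
\qquad
\mathcal{W} = \Bigl\{ w \in [0,1]^{M} : \textstyle\sum_{m=1}^{M} w_m = 1 \Bigr\},
```

where $\hat{\mu}_m^{(-j)}$ denotes the m-th candidate GAPLM fitted without fold j.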
ZipIt! Merging Models from Different Tasks without Training ; Typical deep visual recognition models are capable of performing the one task they were trained on. In this paper, we tackle the extremely difficult problem of combining completely distinct models with different initializations, each solving a separate task, into one multi-task model without any additional training. Prior work in model merging permutes one model to the space of the other and then adds them together. While this works for models trained on the same task, we find that it fails to account for the differences in models trained on disjoint tasks. Thus, we introduce ZipIt!, a general method for merging two arbitrary models of the same architecture that incorporates two simple strategies. First, in order to account for features that aren't shared between models, we expand the model merging problem to additionally allow for merging features within each model by defining a general "zip" operation. Second, we add support for partially zipping the models up until a specified layer, naturally creating a multi-head model. We find that these two changes combined account for a staggering 20-60% improvement over prior work, making the merging of models trained on disjoint tasks feasible.
A Model Structure on the Category of Topological Categories ; In this article, we construct a cofibrantly generated Quillen model structure on the category of small topological categories $\mathbf{Cat}_{\mathbf{Top}}$. It is Quillen equivalent to the Joyal model structure of $(\infty,1)$-categories and the Bergner model structure on $\mathbf{Cat}_{\mathbf{sSet}}$.
Phylogenetic complexity of the Kimura 3-parameter model ; In algebraic statistics, the Kimura 3-parameter model is one of the most interesting and classical phylogenetic models. We prove that the ideals associated to this model are generated in degree four, confirming a conjecture by Sturmfels and Sullivant.
A generalized lattice Boltzmann model for fluid flow system and its application in two-phase flows ; In this paper, a generalized lattice Boltzmann (LB) model with a mass source is proposed to solve both incompressible and nearly incompressible Navier-Stokes (NS) equations. This model can be used to deal with single-phase and two-phase flow problems with a mass source term. From this generalized model, we can not only recover some existing models, but also derive new ones. Moreover, for the derived incompressible model, a modified pressure scheme is introduced to calculate the pressure and thereby ensure the accuracy of the model. In this work, we focus on a two-phase flow system, and within the framework of our generalized LB model, a new phase-field-based LB model is developed for incompressible and quasi-incompressible two-phase flows. A series of numerical simulations of some classic physical problems, including spinodal decomposition, a static droplet, layered Poiseuille flow, and a bubble rising under buoyancy, are performed to validate the developed model. Besides, some comparisons with previous quasi-incompressible and incompressible LB models are also carried out, and the results show that the present model is accurate in the study of two-phase flows. Finally, we also conduct a comparison between quasi-incompressible and incompressible LB models for two-phase flow problems, and find that in some cases the proposed quasi-incompressible LB model performs better than incompressible LB models.
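To indicate where a mass source enters such schemes, a generic single-relaxation-time lattice Boltzmann equation with a source term is shown below (a textbook form, not the paper's specific generalized model):

```latex
f_i(\mathbf{x} + \mathbf{c}_i\,\delta t,\; t + \delta t) - f_i(\mathbf{x}, t)
  = -\frac{1}{\tau}\Bigl[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \Bigr]
    + \delta t\, S_i(\mathbf{x}, t),
\qquad
\rho = \sum_i f_i,
```

where the discrete terms $S_i$ are constructed so that $\sum_i S_i$ recovers the prescribed mass source in the continuum limit.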
Time-dependent Heston model ; This work presents an exact solution to the generalized Heston model, where the model parameters are assumed to have a linear time dependence. The solution for the model is expressed in terms of confluent hypergeometric functions.
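For reference, the Heston dynamics with time-dependent coefficients take the form below (our notation); the linear time dependence assumed in the abstract means $\kappa(t)$, $\theta(t)$, and $\sigma(t)$ are affine functions of $t$.

```latex
dS_t = \mu S_t\, dt + \sqrt{v_t}\, S_t\, dW_t^{S}, \qquad
dv_t = \kappa(t)\bigl(\theta(t) - v_t\bigr)\, dt + \sigma(t)\sqrt{v_t}\, dW_t^{v}, \qquad
d\langle W^{S}, W^{v} \rangle_t = \rho\, dt.
```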
Symbolic Knowledge Distillation: from General Language Models to Commonsense Models ; The common practice for training commonsense models has gone from human to corpus to machine: humans author commonsense knowledge graphs in order to train commonsense models. In this work, we investigate an alternative, from machine to corpus to machine: general language models author these commonsense knowledge graphs to train commonsense models. Our study leads to a new framework, Symbolic Knowledge Distillation. As with prior art in Knowledge Distillation (Hinton et al., 2015), our approach uses larger models to teach smaller models. A key difference is that we distill knowledge symbolically, as text, in addition to the neural model. We also distill only one aspect, the commonsense of a general language model teacher, allowing the student to be a different type of model, a commonsense model. Altogether, we show that careful prompt engineering and a separately trained critic model allow us to selectively distill high-quality causal commonsense from GPT-3, a general language model. Empirical results demonstrate that, for the first time, a human-authored commonsense knowledge graph is surpassed by our automatically distilled variant in all three criteria: quantity, quality, and diversity. In addition, it results in a neural commonsense model that surpasses the teacher model's commonsense capabilities despite its 100x smaller size. We apply this to the ATOMIC resource, and share our new symbolic knowledge graph and commonsense models.
Artificial Interrogation for Attributing Language Models ; This paper presents solutions to the Machine Learning Model Attribution Challenge (MLMAC), collectively organized by MITRE, Microsoft, Schmidt Futures, Robust Intelligence, Lincoln Network, and the Hugging Face community. The challenge provides twelve open-sourced base versions of popular language models developed by well-known organizations and twelve fine-tuned language models for text generation. The names and architecture details of the fine-tuned models were kept hidden, and participants could access these models only through the REST APIs developed by the organizers. Given these constraints, the goal of the contest is to identify which fine-tuned models originated from which base model. To solve this challenge, we assume that fine-tuned models and their corresponding base versions must share a similar vocabulary set with a matching syntactical writing style that resonates in their generated outputs. Our strategy is to develop a set of queries to interrogate the base and fine-tuned models, and then perform one-to-many pairing between them based on similarities in their generated responses, where more than one fine-tuned model can pair with a base model but not vice versa. We employ four distinct approaches for measuring the resemblance between the responses generated from the models of both sets. The first approach uses evaluation metrics from machine translation, and the second uses a vector space model. The third approach uses state-of-the-art multi-class text classification with Transformer models. Lastly, the fourth approach uses a set of Transformer-based binary text classifiers, one for each provided base model, to perform multi-class text classification in a one-vs-all fashion. This paper reports implementation details, comparisons, and experimental studies of these approaches, along with the final results obtained.
On an inferential model construction using generalized associations ; The inferential model (IM) approach, like fiducial inference and its generalizations, depends on a representation of the data-generating process. Here, a particular variation on the IM construction is considered, one based on generalized associations. The resulting generalized IM is more flexible than the basic IM in that it does not require a complete specification of the data-generating process and is provably valid under mild conditions. Computation and marginalization strategies are discussed, and two applications of this generalized IM approach are presented.
Synthesizing Tabular Data using Generative Adversarial Networks ; Generative adversarial networks (GANs) implicitly learn the probability distribution of a dataset and can draw samples from the distribution. This paper presents Tabular GAN (TGAN), a generative adversarial network which can generate tabular data like medical or educational records. Using the power of deep neural networks, TGAN generates high-quality and fully synthetic tables while simultaneously generating discrete and continuous variables. When we evaluate our model on three datasets, we find that TGAN outperforms conventional statistical generative models in both capturing the correlation between columns and scaling up for large datasets.
Controllable Text Generation with Focused Variation ; This work introduces Focused-Variation Network (FVN), a novel model to control language generation. The main problems in previous controlled language generation models range from the difficulty of generating text according to the given attributes, to the lack of diversity of the generated texts. FVN addresses these issues by learning disjoint discrete latent spaces for each attribute inside codebooks, which allows for both controllability and diversity, while at the same time generating fluent text. We evaluate FVN on two text generation datasets with annotated content and style, and show state-of-the-art performance as assessed by automatic and human evaluations.
Improving Compositional Generalization in Classification Tasks via Structure Annotations ; Compositional generalization is the ability to generalize systematically to a new data distribution by combining known components. Although humans seem to have a great ability to generalize compositionally, state-of-the-art neural models struggle to do so. In this work, we study compositional generalization in classification tasks and present two main contributions. First, we study ways to convert a natural language sequence-to-sequence dataset to a classification dataset that also requires compositional generalization. Second, we show that providing structural hints (specifically, providing parse trees and entity links as attention masks for a Transformer model) helps compositional generalization.
Training-Free Location-Aware Text-to-Image Synthesis ; Current large-scale generative models have impressive efficiency in generating high-quality images based on text prompts. However, they lack the ability to precisely control the size and position of objects in the generated image. In this study, we analyze the generative mechanism of the stable diffusion model and propose a new interactive generation paradigm that allows users to specify the position of generated objects without additional training. Moreover, we propose an object detection-based evaluation metric to assess the control capability of the location-aware generation task. Our experimental results show that our method outperforms state-of-the-art methods on both control capacity and image quality.
Towards Diverse and Consistent Typography Generation ; In this work, we consider the typography generation task that aims at producing diverse typographic styling for the given graphic document. We formulate typography generation as a fine-grained attribute generation for multiple text elements and build an autoregressive model to generate diverse typography that matches the input design context. We further propose a simple yet effective sampling approach that respects the consistency and distinction principle of typography, so that generated examples share consistent typographic styling across text elements. Our empirical study shows that our model successfully generates diverse typographic designs while preserving a consistent typographic structure.
Dilated Spatial Generative Adversarial Networks for Ergodic Image Generation ; Generative models have recently received renewed attention as a result of adversarial learning. Generative adversarial networks consist of a sample generation model and a discrimination model able to distinguish between genuine and synthetic samples. In combination with convolutional (for the discriminator) and deconvolutional (for the generator) layers, they are particularly suitable for image generation, especially of natural scenes. However, the presence of fully connected layers adds global dependencies in the generated images. This may lead to high and global variations in the generated sample for small local variations in the input noise. In this work we propose to use architectures based on fully convolutional networks, including among others dilated layers, which are specifically designed to generate globally ergodic images, that is, images without global dependencies. Conducted experiments reveal that these architectures are well suited for generating natural textures such as geologic structures.
Generating Pertinent and Diversified Comments with Topic-aware Pointer-Generator Networks ; Comment generation, a new and challenging task in Natural Language Generation (NLG), has attracted a lot of attention in recent years. However, comments generated by previous work tend to lack pertinence and diversity. In this paper, we propose a novel generation model based on Topic-aware Pointer-Generator Networks (TPGN), which can utilize the topic information hidden in the articles to guide the generation of pertinent and diversified comments. Firstly, we design a keyword-level and topic-level encoder attention mechanism to capture topic information in the articles. Next, we integrate the topic information into pointer-generator networks to guide comment generation. Experiments on a large-scale comment generation dataset show that our model produces valuable comments and outperforms competitive baseline models significantly.
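For context, the standard pointer-generator output distribution that such models build on mixes vocabulary generation with copying from the source article via attention; the topic-aware variant in the paper additionally conditions this on keyword- and topic-level information.

```latex
P(w) \;=\; p_{\mathrm{gen}}\, P_{\mathrm{vocab}}(w)
       \;+\; \bigl(1 - p_{\mathrm{gen}}\bigr) \sum_{i :\, w_i = w} a_i,
```

where $a_i$ are attention weights over source tokens and $p_{\mathrm{gen}} \in [0,1]$ is a learned switching probability.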
GraphStega: Semantic Controllable Steganographic Text Generation Guided by Knowledge Graph ; Most of the existing text generative steganographic methods are based on coding the conditional probability distribution of each word during the generation process, and then selecting specific words according to the secret information, so as to achieve information hiding. Such methods have limitations which may bring potential security risks. Firstly, with the increase of the embedding rate, these models will choose words with lower conditional probability, which will reduce the quality of the generated steganographic texts; secondly, they cannot control the semantic expression of the final generated steganographic text. This paper proposes a new text generative steganography method which is quite different from the existing models. We use a Knowledge Graph (KG) to guide the generation of steganographic sentences. On the one hand, we hide the secret information by coding the path in the knowledge graph, rather than the conditional probability of each generated word; on the other hand, we can control the semantic expression of the generated steganographic text to a certain extent. The experimental results show that the proposed model can guarantee both the quality of the generated text and its semantic expression, which is a supplement and improvement to current text generation steganography.
Exploiting Pretrained Feature Networks for Generative Adversarial Networks in Audio-domain Loop Generation ; While generative adversarial networks (GANs) have been widely used in research on audio generation, the training of a GAN model is known to be unstable, time-consuming, and data-inefficient. Among the attempts to ameliorate the training process of GANs, the idea of Projected GAN emerges as an effective solution for GAN-based image generation, establishing the state-of-the-art in different image applications. The core idea is to use a pretrained classifier to constrain the feature space of the discriminator to stabilize and improve GAN training. This paper investigates whether Projected GAN can similarly improve audio generation, by evaluating the performance of a StyleGAN2-based audio-domain loop generation model with and without using a pretrained feature space in the discriminator. Moreover, we compare the performance of using a general versus a domain-specific classifier as the pretrained audio classifier. With experiments on both drum loop and synth loop generation, we show that a general audio classifier works better, and that with Projected GAN our loop generation models can converge around 5 times faster without performance degradation.
Learning to Rank in Generative Retrieval ; Generative retrieval is a promising new paradigm in text retrieval that generates identifier strings of relevant passages as the retrieval target. This paradigm leverages powerful generation models and represents a new paradigm distinct from traditional learning-to-rank methods. However, despite its rapid development, current generative retrieval methods are still limited. They typically rely on a heuristic function to transform predicted identifiers into a passage rank list, which creates a gap between the learning objective of generative retrieval and the desired passage ranking target. Moreover, the inherent exposure bias problem of text generation also persists in generative retrieval. To address these issues, we propose a novel framework, called LTRGR, that combines generative retrieval with the classical learning-to-rank paradigm. Our approach involves training an autoregressive model using a passage rank loss, which directly optimizes the autoregressive model toward the optimal passage ranking. This framework only requires an additional training step to enhance current generative retrieval systems and does not add any burden to the inference stage. We conducted experiments on three public datasets, and our results demonstrate that LTRGR achieves state-of-the-art performance among generative retrieval methods, indicating its effectiveness and robustness.
Sharpness Minimization Algorithms Do Not Only Minimize Sharpness To Achieve Better Generalization ; Despite extensive studies, the underlying reason as to why overparameterized neural networks can generalize remains elusive. Existing theory shows that common stochastic optimizers prefer flatter minimizers of the training loss, and thus a natural potential explanation is that flatness implies generalization. This work critically examines this explanation. Through theoretical and empirical investigation, we identify the following three scenarios for two-layer ReLU networks: (1) flatness provably implies generalization; (2) there exist non-generalizing flattest models and sharpness minimization algorithms fail to generalize; and (3) perhaps most surprisingly, there exist non-generalizing flattest models, but sharpness minimization algorithms still generalize. Our results suggest that the relationship between sharpness and generalization subtly depends on the data distributions and the model architectures, and that sharpness minimization algorithms do not only minimize sharpness to achieve better generalization. This calls for the search for other explanations for the generalization of overparameterized neural networks.
General Canonical Quantum Gravity Theory and that of the Universe and General Black Hole ; This paper gives both a general canonical quantum gravity theory and the general canonical quantum gravity theories of the Universe and a general black hole, and discovers relations reflecting symmetric properties of the standard nonlinear gravitational Lagrangian, which are not relevant to any concrete metric models. This paper concretely shows the general commutation relations of the general gravitational field operators and their zeroth, first, second and third styles, respectively, of high-order canonical momentum operators for the general nonlinear system of the standard gravitational Lagrangian, and then completes all four styles of the canonical quantization of the standard gravity.
A Survey on Retrieval-Augmented Text Generation ; Recently, retrieval-augmented text generation has attracted increasing attention from the computational linguistics community. Compared with conventional generation models, retrieval-augmented text generation has remarkable advantages and, in particular, has achieved state-of-the-art performance in many NLP tasks. This paper aims to conduct a survey of retrieval-augmented text generation. It first highlights the generic paradigm of retrieval-augmented generation, and then reviews notable approaches according to different tasks, including dialogue response generation, machine translation, and other generation tasks. Finally, it points out some important directions on top of recent methods to facilitate future research.
Composing Ensembles of Pretrained Models via Iterative Consensus ; Large pretrained models exhibit distinct and complementary capabilities dependent on the data they are trained on. Language models such as GPT-3 are capable of textual reasoning but cannot understand visual information, while vision models such as DALL-E can generate photorealistic photos but fail to understand complex language descriptions. In this work, we propose a unified framework for composing ensembles of different pretrained models, combining the strengths of each individual model to solve various multimodal problems in a zero-shot manner. We use pretrained models as generators or scorers and compose them via closed-loop iterative consensus optimization. The generator constructs proposals and the scorers iteratively provide feedback to refine the generated result. Such closed-loop communication enables models to correct errors caused by other models, significantly boosting performance on downstream tasks, e.g. improving accuracy on grade school math problems by 7.5%, without requiring any model finetuning. We demonstrate that consensus achieved by an ensemble of scorers outperforms the feedback of a single scorer, by leveraging the strengths of each expert model. Results show that the proposed method can be used as a general-purpose framework for a wide range of zero-shot multimodal tasks, such as image generation, video question answering, mathematical reasoning, and robotic manipulation. Project page: https://energy-based-model.github.io/composing-pretrained-models/
Approximation-Generalization Tradeoffs under Approximate Group Equivariance ; The explicit incorporation of task-specific inductive biases through symmetry has emerged as a general design precept in the development of high-performance machine learning models. For example, group equivariant neural networks have demonstrated impressive performance across various domains and applications such as protein and drug design. A prevalent intuition about such models is that the integration of relevant symmetry results in enhanced generalization. Moreover, it is posited that when the data and/or the model may only exhibit approximate or partial symmetry, the optimal or best-performing model is one where the model symmetry aligns with the data symmetry. In this paper, we conduct a formal unified investigation of these intuitions. To begin, we present general quantitative bounds that demonstrate how models capturing task-specific symmetries lead to improved generalization. In fact, our results do not require the transformations to be finite or even form a group and can work with partial or approximate equivariance. Utilizing this quantification, we examine the more general question of model misspecification, i.e. when the model symmetries don't align with the data symmetries. We establish, for a given symmetry group, a quantitative comparison between the approximate/partial equivariance of the model and that of the data distribution, precisely connecting model equivariance error and data equivariance error. Our result delineates conditions under which the model equivariance error is optimal, thereby yielding the best-performing model for the given task and data.
Learning Joint Latent Space EBM Prior Model for Multi-layer Generator ; This paper studies the fundamental problem of learning multi-layer generator models. The multi-layer generator model builds multiple layers of latent variables as a prior model on top of the generator, which benefits learning complex data distributions and hierarchical representations. However, such a prior model usually focuses on modeling inter-layer relations between latent variables by assuming non-informative conditional Gaussian distributions, which can be limited in model expressivity. To tackle this issue and learn more expressive prior models, we propose an energy-based model (EBM) on the joint latent space over all layers of latent variables with the multi-layer generator as its backbone. Such a joint latent space EBM prior model captures the intra-layer contextual relations at each layer through layer-wise energy terms, and latent variables across different layers are jointly corrected. We develop a joint training scheme via maximum likelihood estimation (MLE), which involves Markov Chain Monte Carlo (MCMC) sampling for both prior and posterior distributions of the latent variables from different layers. To ensure efficient inference and learning, we further propose a variational training scheme where an inference model is used to amortize the costly posterior MCMC sampling. Our experiments demonstrate that the learned model can be expressive in generating high-quality images and capturing hierarchical features for better outlier detection.
Distilling Model Knowledge ; Top-performing machine learning systems, such as deep neural networks, large ensembles and complex probabilistic graphical models, can be expensive to store, slow to evaluate and hard to integrate into larger systems. Ideally, we would like to replace such cumbersome models with simpler models that perform equally well. In this thesis, we study knowledge distillation, the idea of extracting the knowledge contained in a complex model and injecting it into a more convenient model. We present a general framework for knowledge distillation, whereby a convenient model of our choosing learns how to mimic a complex model, by observing the latter's behaviour and being penalized whenever it fails to reproduce it. We develop our framework within the context of three distinct machine learning applications: (a) model compression, where we compress large discriminative models, such as ensembles of neural networks, into models of much smaller size; (b) compact predictive distributions for Bayesian inference, where we distil large bags of MCMC samples into compact predictive distributions in closed form; (c) intractable generative models, where we distil unnormalizable models such as RBMs into tractable models such as NADEs. We contribute to the state of the art with novel techniques and ideas. In model compression, we describe and implement derivative matching, which allows for better distillation when data is scarce. In compact predictive distributions, we introduce online distillation, which allows for significant savings in memory. Finally, in intractable generative models, we show how to use distilled models to robustly estimate intractable quantities of the original model, such as its intractable partition function.
The generalized stochastic preference choice model ; We propose a new discrete choice model, called the generalized stochastic preference (GSP) model, that incorporates non-rationality into the stochastic preference (SP) choice model, also known as the rank-based choice model. Our model can explain several choice phenomena that cannot be represented by any SP model, such as the compromise and attraction effects, but still subsumes the SP model class. The GSP model is defined as a distribution over consumer types, where each type extends the choice behavior of rational types in the SP model. We build on existing methods for estimating the SP model and propose an iterative estimation algorithm for the GSP model that finds new types by solving an integer linear program in each iteration. We further show that our proposed notion of non-rationality can be incorporated into other choice models, like the random utility maximization (RUM) model class as well as any of its subclasses. As a concrete example, we introduce the non-rational extension of the classical MNL model, which we term the generalized MNL (GMNL) model, and present an efficient expectation-maximization (EM) algorithm for estimating the GMNL model. Numerical evaluation on real choice data shows that the GMNL and GSP models can outperform their rational counterparts in out-of-sample prediction accuracy.
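As a reminder of the base model being generalized, the SP (rank-based) choice model represents a population as a distribution $\lambda$ over preference rankings $\sigma$, and the probability of choosing item $i$ from offer set $S$ is the mass of rankings under which $i$ is the most preferred available option (a standard formulation in our notation; the GSP extension modifies the per-type choice behavior).

```latex
\mathbb{P}(i \mid S) \;=\; \sum_{\sigma} \lambda(\sigma)\,
  \mathbf{1}\bigl\{\, \sigma(i) < \sigma(j) \ \ \forall\, j \in S \setminus \{i\} \,\bigr\},
```

where $\sigma(i)$ denotes the rank of item $i$ under preference list $\sigma$ (lower is better).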
Forecasting Spatio-Temporal Renewable Scenarios: a Deep Generative Approach ; The operation and planning of large-scale power systems are becoming more challenging with the increasing penetration of stochastic renewable generation. In order to minimize the decision risks in power systems with a large amount of renewable resources, there is a growing need to model the short-term generation uncertainty. By producing a group of possible future realizations for a certain set of renewable generation plants, the scenario approach has become one popular way for renewables uncertainty modeling. However, due to the complex spatial and temporal correlations underlying renewable generation, traditional model-based approaches for forecasting future scenarios often require extensive knowledge, while fitted models are often hard to scale. To address such modeling burdens, we propose a learning-based, data-driven scenario forecasting method based on generative adversarial networks (GANs), a class of deep-learning generative algorithms used for modeling unknown distributions. We first utilize an improved GAN with convergence guarantees to learn the intrinsic patterns and model the unknown distributions of multiple-site renewable generation time series. Then, by solving an optimization problem, we are able to generate forecasted scenarios without any restrictions on the number of scenarios or the forecasting horizon. Our method is totally model-free, and can forecast scenarios under different levels of forecast uncertainty. Extensive numerical simulations using real-world data from NREL wind and solar integration datasets validate the performance of the proposed method in forecasting both wind and solar power scenarios.
A Meta-Learning Framework for Generalized Zero-Shot Learning ; Learning to classify unseen class samples at test time is popularly referred to as zero-shot learning (ZSL). If test samples can be from training (seen) as well as unseen classes, it is a more challenging problem due to the existence of strong bias towards seen classes. This problem is generally known as generalized zero-shot learning (GZSL). Thanks to the recent advances in generative models such as VAEs and GANs, sample synthesis based approaches have gained considerable attention for solving this problem. These approaches are able to handle the problem of class bias by synthesizing unseen class samples. However, these ZSL/GZSL models suffer from the following key limitations: (i) their training stage learns a class-conditioned generator using only seen class data and does not explicitly learn to generate unseen class samples; (ii) they do not learn a generic optimal parameter which can easily generalize for both seen and unseen class generation; and (iii) if we only have access to very few samples per seen class, these models tend to perform poorly. In this paper, we propose a meta-learning based generative model that naturally handles these limitations. The proposed model is based on integrating model-agnostic meta-learning with a Wasserstein GAN (WGAN) to handle (i) and (iii), and uses a novel task distribution to handle (ii). Our proposed model yields significant improvements in the standard ZSL as well as the more challenging GZSL setting. In the ZSL setting, our model yields 4.5%, 6.0%, 9.8%, and 27.9% relative improvements over the current state-of-the-art on the CUB, AWA1, AWA2, and aPY datasets, respectively.
Jointly Trained Image and Video Generation using Residual Vectors ; In this work, we propose a modeling technique for jointly training image and video generation models by simultaneously learning to map latent variables with a fixed prior onto real images and interpolate over images to generate videos. The proposed approach models the variations in representations using residual vectors encoding the change at each time step over a summary vector for the entire video. We utilize the technique to jointly train an image generation model with a fixed prior along with a video generation model lacking constraints such as disentanglement. The joint training enables the image generator to exploit temporal information while the video generation model learns to flexibly share information across frames. Moreover, experimental results verify our approach's compatibility with pretraining on videos or images and training on datasets containing a mixture of both. A comprehensive set of quantitative and qualitative evaluations reveals the improvements in sample quality and diversity over both video generation and image generation baselines. We further demonstrate the technique's capability of exploiting similarity in features across frames by applying it to a model based on decomposing the video into motion and content. The proposed model allows minor variations in content across frames while maintaining the temporal dependence through latent vectors encoding the pose or motion features.
Generative models for sampling and phase transition indication in spin systems ; Recently, generative machine-learning models have gained popularity in physics, driven by the goal of improving the efficiency of Markov chain Monte Carlo techniques and of exploring their potential in capturing experimental data distributions. Motivated by their ability to generate images that look realistic to the human eye, we here study generative adversarial networks (GANs) as tools to learn the distribution of spin configurations and to generate samples, conditioned on external tuning parameters such as temperature. We propose ways to efficiently represent the physical states, e.g., by exploiting symmetries, and to minimize the correlations between generated samples. We present a detailed evaluation of the various modifications, using the two-dimensional XY model as an example, and find considerable improvements in our proposed implicit generative model. It is also shown that the model can reliably generate samples in the vicinity of the phase transition, even when it has not been trained in the critical region. On top of using the samples generated by the model to capture the phase transition via evaluation of observables, we show how the model itself can be employed as an unsupervised indicator of transitions, by constructing measures of the model's susceptibility to changes in tuning parameters.
Score-Based Generative Modeling through Stochastic Differential Equations ; Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a. score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and an FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.
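The forward and reverse-time SDEs at the core of this framework can be written compactly as follows, where $\mathbf{w}$ and $\bar{\mathbf{w}}$ are forward- and reverse-time Wiener processes and $\nabla_{\mathbf{x}} \log p_t(\mathbf{x})$ is the score estimated by a neural network:

```latex
d\mathbf{x} = \mathbf{f}(\mathbf{x}, t)\, dt + g(t)\, d\mathbf{w}
  \qquad \text{(forward: data to prior)},
\\
d\mathbf{x} = \bigl[\mathbf{f}(\mathbf{x}, t) - g(t)^{2}\, \nabla_{\mathbf{x}} \log p_t(\mathbf{x})\bigr]\, dt
  + g(t)\, d\bar{\mathbf{w}}
  \qquad \text{(reverse: prior to data)}.
```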
DISCO: Distilling Counterfactuals with Large Language Models ; Models trained with counterfactually augmented data learn representations of the causal structure of tasks, enabling robust generalization. However, high-quality counterfactual data is scarce for most tasks and not easily generated at scale. When crowdsourced, such data is typically limited in scale and diversity; when generated using supervised methods, it is computationally expensive to extend to new counterfactual dimensions. In this work, we introduce DISCO (DIStilled COunterfactual Data), a new method for automatically generating high-quality counterfactual data at scale. DISCO engineers prompts to generate phrasal perturbations with a large general language model. Then, a task-specific teacher model filters these generations to distill high-quality counterfactual data. While task-agnostic, we apply our pipeline to the task of natural language inference (NLI) and find that on challenging evaluations such as the NLI stress test, comparatively smaller student models trained with DISCO-generated counterfactuals are more robust (6% absolute) and generalize better across distributions (2%) compared to models trained without data augmentation. Furthermore, DISCO-augmented models are 10% more consistent between counterfactual pairs on three evaluation sets, demonstrating that DISCO augmentation enables models to more reliably learn causal representations. Our repository is available at https://github.com/eric11eca/disco
q2d: Turning Questions into Dialogs to Teach Models How to Search ; One of the exciting capabilities of recent language models for dialog is their ability to independently search for relevant information to ground a given dialog response. However, obtaining training data to teach models how to issue search queries is time- and resource-consuming. In this work, we propose q2d: an automatic data generation pipeline that generates information-seeking dialogs from questions. We prompt a large language model (PaLM) to create conversational versions of question answering datasets, and use it to improve query generation models that communicate with external search APIs to ground dialog responses. Unlike previous approaches which relied on human-written dialogs with search queries, our method allows us to automatically generate query-based grounded dialogs with better control and scale. Our experiments demonstrate that: (1) for query generation on the QReCC dataset, models trained on our synthetically generated data achieve 90%-97% of the performance of models trained on the human-generated data; (2) we can successfully generate data for training dialog models in new domains without any existing dialog data, as demonstrated on the multi-hop MuSiQue and Bamboogle QA datasets; (3) we perform a thorough analysis of the generated dialogs, showing that humans find them of high quality and struggle to distinguish them from human-written dialogs.
Smaller Language Models are Better Black-box Machine-Generated Text Detectors ; With the advent of fluent generative language models that can produce convincing utterances very similar to those written by humans, distinguishing whether a piece of text is machine-generated or human-written becomes more challenging and more important, as such models could be used to spread misinformation, fake news, fake reviews and to mimic certain authors and figures. To this end, there have been a slew of methods proposed to detect machine-generated text. Most of these methods need access to the logits of the target model or need the ability to sample from the target. One such black-box detection method relies on the observation that generated text is locally optimal under the likelihood function of the generator, while human-written text is not. We find that, overall, smaller and partially-trained models are better universal text detectors: they can more precisely detect text generated from both small and larger models. Interestingly, we find that whether the detector and generator were trained on the same data is not critically important to the detection success. For instance, the OPT-125M model has an AUC of 0.81 in detecting ChatGPT generations, whereas a larger model from the GPT family, GPT-J-6B, has an AUC of 0.45.
A Federated Channel Modeling System using Generative Neural Networks ; The paper proposes a data-driven approach to air-to-ground channel estimation in a millimeter-wave wireless network on an unmanned aerial vehicle. Unlike traditional centralized learning methods that are specific to certain geographical areas and inappropriate for others, we propose a generalized model that uses Federated Learning (FL) for channel estimation and can predict the air-to-ground path loss between a low-altitude platform and a terrestrial terminal. To this end, our proposed FL-based Generative Adversarial Network (FL-GAN) is designed to function as a generative data model that can learn different types of data distributions and generate realistic patterns from the same distributions without requiring prior data analysis before the training phase. To evaluate the effectiveness of the proposed model, we evaluate its performance using the Kullback-Leibler (KL) divergence and the Wasserstein distance between the synthetic data distribution generated by the model and the actual data distribution. We also compare the proposed technique with other generative models, such as FL-Variational Autoencoder (FL-VAE) and standalone VAE and GAN models. The results of the study show that the synthetic data generated by FL-GAN has the highest similarity in distribution with the real data. This shows the effectiveness of the proposed approach in generating data-driven channel models that can be used in different regions.
A General Method for Robust Bayesian Modeling ; Robust Bayesian models are appealing alternatives to standard models, providing protection from data that contains outliers or other departures from the model assumptions. Historically, robust models were mostly developed on a case-by-case basis; examples include robust linear regression, robust mixture models, and bursty topic models. In this paper we develop a general approach to robust Bayesian modeling. We show how to turn an existing Bayesian model into a robust model, and then develop a generic strategy for computing with it. We use our method to study robust variants of several models, including linear regression, Poisson regression, logistic regression, and probabilistic topic models. We discuss the connections between our methods and existing approaches, especially empirical Bayes and James-Stein estimation.
A Collapsed Generalized Aw-Rascle-Zhang Model and Its Model Accuracy ; This work presents a collapsed generalized Aw-Rascle-Zhang (CGARZ) model, which fits into a generic second order model (GSOM) framework. GSOMs augment the evolution of the traffic density by a second state variable characterizing a property of vehicles or drivers. A cell transmission model for the numerical solution of GSOMs is derived, which is based on analyzing the sending and receiving functions of the traffic density and total property. The predictive accuracy of the CGARZ model is then compared to the classical first-order LWR model and four second-order models. To that end, a systematic approach to calibrate model parameters from sensor flow-density data is introduced and applied to all models studied. The comparative model validation is conducted using two types of field data: vehicle trajectory data and loop detector data. It is shown that the CGARZ model provides an intriguing combination of simple model dynamics in the free-flow regime and a representation of the spread of flow-density data in the congested regime, while possessing competitive prediction accuracy.
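For orientation, a generic second order model in this family augments the LWR conservation law with an advected property $w$ (our notation, following the standard GSOM form); the CGARZ model corresponds to a particular choice of the velocity function $V$:

```latex
\partial_t \rho + \partial_x (\rho\, v) = 0, \qquad
\partial_t (\rho\, w) + \partial_x (\rho\, w\, v) = 0, \qquad
v = V(\rho, w),
```

where $\rho$ is the traffic density and $w$ the advected driver or vehicle property.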
One evaluation of modelbased testing and its automation ; Modelbased testing relies on behavior models for the generation of model traces, i.e., input and expected output, which serve as test cases for an implementation. We use the case study of an automotive network controller to assess different test suites in terms of error detection, model coverage, and implementation coverage. Some of these suites were generated automatically with and without models, purely at random, and with dedicated functional test selection criteria. Other suites were derived manually, with and without the model at hand. Both automatically and manually derived modelbased test suites detected significantly more requirements errors than handcrafted test suites that were directly derived from the requirements. The number of detected programming errors did not depend on the use of models. Automatically generated modelbased test suites detected as many errors as handcrafted modelbased suites with the same number of tests. A sixfold increase in the number of modelbased tests led to an 11% increase in detected errors.
Extendability in the Sheaftheoretic Approach Construction of Bell Models from KochenSpecker Models ; Extendability of an empirical model was shown by Abramsky and Brandenburger to correspond in a unified manner to both locality and noncontextuality. We develop their approach by presenting a refinement of the notion of extendability that can also be useful in characterising the properties of submodels. The refinement is found to have another useful application: it is shown that a particular canonical extension, when welldefined, may be used for the construction of Belltype models from models of more general kinds in such a way that the constructed model is equivalent to the original in terms of nonlocality/contextuality. This is important since on practical and foundational levels, the notion of locality in Belltype models can more easily be motivated than the corresponding general notion of contextuality. We consider examples of Belltype models generated from some standard examples of contextual models, including an entire class of KochenSpeckerlike models. This exposes an intriguing relationship between the simplest possible contextual model, the contextual triangle, and PopescuRohrlich nosignalling correlations.
Automatic Generation of Adaptive Network Models based on Similarity to the Desired Complex Network ; Complex networks have become powerful mechanisms for studying a variety of realworld systems. Consequently, many humandesigned network models are proposed that reproduce nontrivial properties of complex networks, such as longtail degree distribution or high clustering coefficient. Therefore, we may utilize network models in order to generate graphs similar to desired networks. However, a desired network structure may deviate from the emerging structure of any generative model, because no selected single model may support all the needed properties of the target graph and, instead, each network model reflects a subset of the required features. In contrast to the classical approach of network modeling, an appropriate modern network model should adapt to the desired features of the target network. In this paper, we propose an automatic approach for constructing network models that are adapted to the desired network features. We employ Genetic Algorithms in order to evolve network models based on the characteristics of the target networks. The experimental evaluations show that our proposed framework, called NetMix, yields network models that outperform baseline models according to their compliance with the desired features of the target networks.
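A toy sketch of the evolutionary idea, assuming networkx and a Watts-Strogatz generator as a stand-in for the candidate network models; the fitness function, operators, and budgets are illustrative, not NetMix's actual design.

```python
# Sketch: evolve the parameters of a hand-designed network model so that the
# generated graph matches a target network's average clustering and mean
# degree. All constants are illustrative.
import random
import networkx as nx

target = nx.karate_club_graph()                       # placeholder target network
t_clust = nx.average_clustering(target)
t_deg = sum(dict(target.degree()).values()) / target.number_of_nodes()
n = target.number_of_nodes()

def fitness(params):
    k, p = params
    g = nx.watts_strogatz_graph(n, k, p, seed=0)
    clust = nx.average_clustering(g)
    deg = sum(dict(g.degree()).values()) / n
    return -(abs(clust - t_clust) + 0.1 * abs(deg - t_deg))   # higher is better

def mutate(params):
    k, p = params
    k = min(n - 1, max(2, k + random.choice([-2, 0, 2])))     # keep k even and valid
    p = min(1.0, max(0.0, p + random.gauss(0, 0.05)))
    return (k, p)

population = [(random.randrange(2, 10, 2), random.random()) for _ in range(20)]
for _ in range(30):                                   # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                          # simple truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
print("best (k, p):", best, "fitness:", fitness(best))
```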
A Bidomain Model for Lens Microcirculation ; There exists a large body of research on the lens of the mammalian eye over the past several decades. The objective of the current work is to provide a link between the most recent computational models and some of the pioneering work in the 1970s and 80s. We introduce a general nonelectroneutral model to study the microcirculation in the lens of the eye. It describes the steady state relationships among ion fluxes, water flow and electric field inside cells, and in the narrow extracellular spaces between cells in the lens. Using asymptotic analysis, we derive a simplified model based on physiological data and compare our results with those in the literature. We show that our simplified model can be reduced further to the first generation models while our full model is consistent with the most recent computational models. In addition, our simplified model captures the main features of the full model. Our results serve as a useful intermediate link between the computational models and the first generation analytical models.
A comparison of streaming models and data augmentation methods for robust speech recognition ; In this paper, we present a comparative study on the robustness of two different online streaming speech recognition models: Monotonic Chunkwise Attention MoChA and Recurrent Neural NetworkTransducer RNNT. We explore three recently proposed data augmentation techniques, namely, multiconditioned training using an acoustic simulator, Vocal Tract Length Perturbation VTLP for speaker variability, and SpecAugment. Experimental results show that unidirectional models are in general more sensitive to noisy examples in the training set. It is observed that the final performance of the model depends on the proportion of training examples processed by data augmentation techniques. MoChA models generally perform better than RNNT models. However, we observe that training of MoChA models seems to be more sensitive to various factors such as the characteristics of training sets and the incorporation of additional augmentation techniques. On the other hand, RNNT models perform better than MoChA models in terms of latency, inference time, and the stability of training. Additionally, RNNT models are generally more robust against noise and reverberation. All these advantages make RNNT models a better choice for streaming ondevice speech recognition compared to MoChA models.
DST Dynamic Substitute Training for Datafree Blackbox Attack ; With the wide applications of deep neural network models in various computer vision tasks, more and more works study the model vulnerability to adversarial examples. For the datafree blackbox attack scenario, existing methods are inspired by knowledge distillation, and thus usually train a substitute model to learn knowledge from the target model using generated data as input. However, the substitute model always has a static network structure, which limits the attack ability for various target models and tasks. In this paper, we propose a novel dynamic substitute training attack method to encourage the substitute model to learn better and faster from the target model. Specifically, a dynamic substitute structure learning strategy is proposed to adaptively generate the optimal substitute model structure via a dynamic gate according to different target models and tasks. Moreover, we introduce a taskdriven graphbased structure information learning constraint to improve the quality of generated training data, and facilitate the substitute model in learning structural relationships from the target model's multiple outputs. Extensive experiments have been conducted to verify the efficacy of the proposed attack method, which can achieve better performance compared with the stateoftheart competitors on several datasets.
METRO Efficient Denoising Pretraining of Large Scale Autoencoding Language Models with Model Generated Signals ; We present an efficient method of pretraining largescale autoencoding language models using training signals generated by an auxiliary model. Originated in ELECTRA, this training strategy has demonstrated sampleefficiency to pretrain models at the scale of hundreds of millions of parameters. In this work, we conduct a comprehensive empirical study, and propose a recipe, namely Model generated dEnoising TRaining Objective METRO, which incorporates some of the best modeling techniques developed recently to speed up, stabilize, and enhance pretrained language models without compromising model effectiveness. The resultant models, METROLM, consisting of up to 5.4 billion parameters, achieve new stateoftheart on the GLUE, SuperGLUE, and SQuAD benchmarks. More importantly, METROLM are efficient in that they often outperform previous large models with significantly smaller model sizes and lower pretraining cost.
Minimizing Maximum Model Discrepancy for Transferable Blackbox Targeted Attacks ; In this work, we study the blackbox targeted attack problem from the model discrepancy perspective. On the theoretical side, we present a generalization error bound for blackbox targeted attacks, which gives a rigorous theoretical analysis for guaranteeing the success of the attack. We reveal that the attack error on a target model mainly depends on the empirical attack error on the substitute model and the maximum model discrepancy among substitute models. On the algorithmic side, we derive a new algorithm for blackbox targeted attacks based on our theoretical analysis, in which we additionally minimize the maximum model discrepancy (M3D) of the substitute models when training the generator to generate adversarial examples. In this way, our model is capable of crafting highly transferable adversarial examples that are robust to model variation, thus improving the success rate for attacking the blackbox model. We conduct extensive experiments on the ImageNet dataset with different classification models, and our proposed approach outperforms existing stateoftheart methods by a significant margin. Our code will be released.
Matching Pairs Attributing FineTuned Models to their PreTrained Large Language Models ; The wide applicability and adaptability of generative large language models LLMs has enabled their rapid adoption. While the pretrained models can perform many tasks, such models are often finetuned to improve their performance on various downstream applications. However, this leads to issues over violation of model licenses, model theft, and copyright infringement. Moreover, recent advances show that generative technology is capable of producing harmful content which exacerbates the problems of accountability within model supply chains. Thus, we need a method to investigate how a model was trained or a piece of text was generated and what their pretrained base model was. In this paper we take the first step to address this open problem by tracing back the origin of a given finetuned LLM to its corresponding pretrained base model. We consider different knowledge levels and attribution strategies, and find that we can correctly trace back 8 out of the 10 fine tuned models with our best method.
A generalization of Quillen's small object argument ; We generalize the small object argument in order to allow for its application to proper classes of maps, as opposed to the sets of maps in Quillen's small object argument. The necessity of such a generalization arose with the appearance of several important examples of model categories which were proven to be noncofibrantly generated. Our current approach allows for the construction of functorial factorizations and localizations in the equivariant model structures on diagrams of spaces and diagrams of chain complexes. We also formulate a nonfunctorial version of the argument, which applies in two different model structures on the category of prospaces. The examples above suggest a natural extension of the framework of cofibrantly generated model categories. We introduce the concept of a classcofibrantly generated model category, which is a model category generated by classes of cofibrations and trivial cofibrations satisfying some reasonable assumptions.
2nDimensional Models with Topological Mass Generation ; The 4dimensional model with topological mass generation that has recently been presented by Dvali, Jackiw and Pi [G. Dvali, R. Jackiw, and S.Y. Pi, Phys. Rev. Lett. 96, 081602 (2006), hep-th/0610228] is generalized to any even number of dimensions. As in the 4dimensional model, the 2ndimensional model describes a massgeneration phenomenon due to the presence of the chiral anomaly. In addition to this model, new 2ndimensional models with topological mass generation are proposed, in which a Stueckelbergtype mass term plays a crucial role in the mass generation. The mass generation of a pseudoscalar field such as the etaprime meson is discussed within this framework.
Finding Optimal Bayesian Networks ; In this paper, we derive optimality results for greedy Bayesiannetwork search algorithms that perform singleedge modifications at each step and use asymptotically consistent scoring criteria. Our results extend those of Meek (1997) and Chickering (2002), who demonstrate that in the limit of large datasets, if the generative distribution is perfect with respect to a DAG defined over the observable variables, such search algorithms will identify this optimal (i.e. generative) DAG model. We relax their assumption about the generative distribution, and assume only that this distribution satisfies the composition property over the observable variables, which is a more realistic assumption for real domains. Under this assumption, we guarantee that the search algorithms identify an inclusionoptimal model; that is, a model that (1) contains the generative distribution and (2) has no submodel that contains this distribution. In addition, we show that the composition property is guaranteed to hold whenever the dependence relationships in the generative distribution can be characterized by paths between singleton elements in some generative graphical model (e.g. a DAG, a chain graph, or a Markov network), even when the generative model includes unobserved variables, and even when the observed data is subject to selection bias.
A Generative Model of People in Clothing ; We present the first imagebased generative model of people in clothing for the full body. We sidestep the commonly used complex graphics rendering pipeline and the need for highquality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, pure imagebased approaches have not been considered so far. We show that this challenge can be overcome by splitting the generating process in two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The results are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results that suggest an entirely datadriven approach to people generation is possible.
Depth Structure Preserving Scene Image Generation ; Key to automatically generating natural scene images is to properly arrange the various spatial elements, especially in the depth direction. To this end, we introduce a novel depth structure preserving scene image generation network DSPGAN, which favors a hierarchical and heterogeneous architecture, for the purpose of depth structure preserving scene generation. The main trunk of the proposed infrastructure is built on a Hawkes point process that models the spatial dependency between different depth layers. Within each layer, generative adversarial subnetworks are trained collaboratively to generate realistic scene components, conditioned on the layer information produced by the point process. We evaluate our model on a subset of the SUN dataset with annotated scene images and demonstrate that our model is capable of generating depthrealistic natural scene images.
Generative learning for deep networks ; Learning that takes into account the full distribution of the data, referred to as generative learning, is not feasible with deep neural networks DNNs because they model only the conditional distribution of the outputs given the inputs. Current solutions are either based on joint probability models facing difficult estimation problems, or learn two separate networks, mapping inputs to outputs (recognition) and vice versa (generation). We propose an intermediate approach. First, we show that forward computation in DNNs with logistic sigmoid activations corresponds to a simplified approximate Bayesian inference in a directed probabilistic multilayer model. This connection allows us to interpret a DNN as a probabilistic model of the output and all hidden units given the input. Second, we propose that in order for the recognition and generation networks to be more consistent with the joint model of the data, the weights of the recognition and generator networks should be related by transposition. We demonstrate in a tentative experiment that such a coupled pair can be learned generatively, modelling the full distribution of the data, and has enough capacity to perform well in both recognition and generation.
Simulating Multichannel Wind Noise Based on the Corcos Model ; A novel multichannel artificial wind noise generator based on a fluid dynamics model, namely the Corcos model, is proposed. In particular, the model is used to approximate the complex coherence function of wind noise signals measured with closelyspaced microphones in the freefield and for timeinvariant wind stream direction and speed. Preliminary experiments focus on a spatial analysis of recorded wind noise signals and the validation of the Corcos model for diverse measurement setups. Subsequently, the Corcos model is used to synthetically generate wind noise signals exhibiting the desired complex coherence. The multichannel generator is designed by extending an existing singlechannel generator to create N mutually uncorrelated signals, while the predefined complex coherence function is obtained by exploiting an algorithm developed to generate multichannel nonstationary noise signals under a complex coherence constraint. Temporal, spectral and spatial characteristics of the synthetic signals match those observed in measured wind noise. The artificial generation overcomes the timeconsuming challenge of collecting pure wind noise samples for noise reduction evaluations and provides flexibility in the number of generated signals used in the simulations.
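The coherence-constrained synthesis can be sketched roughly as follows; the exponential decay used for the target coherence is a simple stand-in for the Corcos model, and the array geometry and wind parameters are illustrative assumptions.

```python
# Sketch: generate M microphone noise signals whose pairwise coherence follows
# a prescribed frequency-dependent target, by mixing independent noises with a
# per-frequency Cholesky factor of the coherence matrix. All constants are
# illustrative; the decay law below is not the exact Corcos formula.
import numpy as np

fs, n, M = 16000, 4 * 16000, 3          # sample rate, signal length, number of mics
d = np.array([[0.00, 0.02, 0.04],       # pairwise mic distances in metres
              [0.02, 0.00, 0.02],
              [0.04, 0.02, 0.00]])
U, alpha = 5.0, 0.1                      # wind speed (m/s) and decay constant

freqs = np.fft.rfftfreq(n, 1.0 / fs)
X = np.random.randn(M, freqs.size) + 1j * np.random.randn(M, freqs.size)

Y = np.empty_like(X)
for i, f in enumerate(freqs):
    gamma = np.exp(-f * d / (alpha * U + 1e-9))        # target coherence at frequency f
    L = np.linalg.cholesky(gamma + 1e-9 * np.eye(M))   # mixing matrix imposing gamma
    Y[:, i] = L @ X[:, i]

signals = np.fft.irfft(Y, n=n, axis=1)                 # M mutually coherent noise channels
print(signals.shape)
```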
Improving Sampling from Generative Autoencoders with Markov Chains ; We focus on generative autoencoders, such as variational or adversarial autoencoders, which jointly learn a generative model alongside an inference model. Generative autoencoders are those which are trained to softly enforce a prior on the latent distribution learned by the inference model. We call the distribution to which the inference model maps observed samples the learned latent distribution, which may not be consistent with the prior. We formulate a Markov chain Monte Carlo MCMC sampling process, equivalent to iteratively decoding and encoding, which allows us to sample from the learned latent distribution. Since the generative model learns to map from the learned latent distribution, rather than the prior, we may use MCMC to improve the quality of samples drawn from the generative model, especially when the learned latent distribution is far from the prior. Using MCMC sampling, we are able to reveal previously unseen differences between generative autoencoders trained either with or without a denoising criterion.
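A minimal sketch of the decode-encode Markov chain, written against placeholder encode/decode callables from a trained generative autoencoder; the step count and the deterministic use of the encoder output are illustrative assumptions.

```python
# Sketch: the iterative decode/encode chain described in the abstract.
# `decode` maps a latent code to a sample and `encode` maps a sample back to a
# latent code (e.g., the mean of a trained VAE's approximate posterior); both
# are assumed to come from an already-trained model.
import numpy as np

def mcmc_refine(z0, decode, encode, n_steps=50):
    """Run the decode/encode Markov chain starting from latent code z0."""
    z = np.asarray(z0, dtype=float)
    for _ in range(n_steps):
        x = decode(z)      # draw a sample from the generative model
        z = encode(x)      # map it back into the learned latent distribution
    return decode(z)       # final sample from the refined latent code

# Usage with a trained generative autoencoder (placeholders):
#   z_prior = np.random.randn(latent_dim)
#   sample = mcmc_refine(z_prior, model.decode, model.encode)
```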
Augmenting Neural Response Generation with ContextAware Topical Attention ; SequencetoSequence Seq2Seq models have witnessed a notable success in generating natural conversational exchanges. Notwithstanding the syntactically wellformed responses generated by these neural network models, they are prone to be acontextual, short and generic. In this work, we introduce a Topical Hierarchical Recurrent Encoder Decoder THRED, a novel, fully datadriven, multiturn response generation system intended to produce contextual and topicaware responses. Our model is built upon the basic Seq2Seq model by augmenting it with a hierarchical joint attention mechanism that incorporates topical concepts and previous interactions into the response generation. To train our model, we provide a clean and highquality conversational dataset mined from Reddit comments. We evaluate THRED on two novel automated metrics, dubbed Semantic Similarity and Response Echo Index, as well as with human evaluation. Our experiments demonstrate that the proposed model is able to generate more diverse and contextually relevant responses compared to the strong baselines.
Generative Deep Neural Networks for Dialogue A Short Review ; Researchers have recently started investigating deep neural networks for dialogue applications. In particular, generative sequencetosequence Seq2Seq models have shown promising results for unstructured tasks, such as wordlevel dialogue response generation. The hope is that such models will be able to leverage massive amounts of data to learn meaningful natural language representations and response generation strategies, while requiring a minimum amount of domain knowledge and handcrafting. An important challenge is to develop models that can effectively incorporate dialogue context and generate meaningful and diverse responses. In support of this goal, we review recently proposed models based on generative encoderdecoder neural network architectures, and show that these models have better ability to incorporate longterm dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with highlevel compositional structure.
Learning Deep Generative Models of Graphs ; Graphs are fundamental data structures which concisely capture the relational structure in many important realworld domains, such as knowledge graphs, physical and social interactions, language, and chemistry. Here we introduce a powerful new approach for learning generative models over graphs, which can capture both their structure and attributes. Our approach uses graph neural networks to express probabilistic dependencies among a graph's nodes and edges, and can, in principle, learn distributions over any arbitrary graph. In a series of experiments our results show that once trained, our models can generate good quality samples of both synthetic graphs as well as real molecular graphs, both unconditionally and conditioned on data. Compared to baselines that do not use graphstructured representations, our models often perform far better. We also explore key challenges of learning generative models of graphs, such as how to handle symmetries and ordering of elements during the graph generation process, and offer possible solutions. Our work is the first and most general approach for learning generative models over arbitrary graphs, and opens new directions for moving away from restrictions of vector and sequencelike knowledge representations, toward more expressive and flexible relational data structures.
Latent Adversarial Defence with Boundaryguided Generation ; Deep Neural Networks DNNs have recently achieved great success in many tasks, which encourages DNNs to be widely used as a machine learning service in model sharing scenarios. However, attackers can easily generate adversarial examples with a small perturbation to fool the DNN models into predicting wrong labels. To improve the robustness of shared DNN models against adversarial attacks, we propose a novel method called Latent Adversarial Defence LAD. The proposed LAD method improves the robustness of a DNN model through adversarial training on generated adversarial examples. Different from popular attack methods, which are carried out in the input space and only generate adversarial examples with repeating patterns, LAD generates a myriad of adversarial examples by adding perturbations to latent features along the normal of the decision boundary, which is constructed by an SVM with an attention mechanism. Once adversarial examples are generated, we adversarially train the model by augmenting the training data with the generated adversarial examples. Extensive experiments on the MNIST, SVHN, and CelebA datasets demonstrate the effectiveness of our model in defending against different types of adversarial attacks.
CPgeneric expansions of models of Peano Arithmetic ; We study notions of genericity in models of PA, inspired by lines of inquiry initiated by Chatzidakis and Pillay and continued by Dolich, Miller and Steinhorn in general modeltheoretic contexts. These papers studied the theories obtained by adding a random predicate to a class of structures. Chatzidakis and Pillay axiomatized the theories obtained in this way. In this article, we look at the subsets of models of PA which satisfy the axiomatization given by Chatzidakis and Pillay; we refer to these subsets in models of PA as CPgenerics. We study a more natural property, called strong CPgenericity, which implies CPgenericity. We use an arithmetic version of Cohen forcing to construct strong CPgenerics with various properties, including ones in which every element of the model is definable in the expansion, and, on the other extreme, ones in which the definable closure relation is unchanged.
LatentVariable Generative Models for DataEfficient Text Classification ; Generative classifiers offer potential advantages over their discriminative counterparts, namely in the areas of data efficiency, robustness to data shift and adversarial examples, and zeroshot learning (Ng and Jordan, 2002; Yogatama et al., 2017; Lewis and Fan, 2019). In this paper, we improve generative text classifiers by introducing discrete latent variables into the generative story, and explore several graphical model configurations. We parameterize the distributions using standard neural architectures used in conditional language modeling and perform learning by directly maximizing the log marginal likelihood via gradientbased optimization, which avoids the need to do expectationmaximization. We empirically characterize the performance of our models on six text classification datasets. The choice of where to include the latent variable has a significant impact on performance, with the strongest results obtained when using the latent variable as an auxiliary conditioning variable in the generation of the textual input. This model consistently outperforms both the generative and discriminative classifiers in smalldata settings. We analyze our model by using it for controlled generation, finding that the latent variable captures interpretable properties of the data, even with very small training sets.
Stochastic DC Optimal Power Flow With Reserve Saturation ; We propose an optimization framework for stochastic optimal power flow with uncertain loads and renewable generator capacity. Our model follows previous work in assuming that generator outputs respond to load imbalances according to an affine control policy, but introduces a model of saturation of generator reserves by assuming that when a generator's target level hits its limit, it abandons the affine policy and produces at that limit. This is a particularly interesting feature in models where wind power plants, which have uncertain upper generation limits, are scheduled to provide reserves to balance load fluctuations. The resulting model is a nonsmooth nonconvex twostage stochastic program, and we use a stochastic approximation method to find stationary solutions to a smooth approximation. Computational results on 6bus and 118bus test instances demonstrate that, by considering the effects of saturation, our model can yield solutions with lower expected generation costs at the same target line violation probability level than those obtained from a model that enforces the affine policy to stay within generator limits with high probability.
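The saturated affine recourse policy can be illustrated with a few lines of numpy; the setpoints, participation factors, and limits below are made-up numbers, not data from the paper's test cases.

```python
# Sketch: the affine reserve policy with saturation described in the abstract.
# Each generator targets p0_g + alpha_g * imbalance and is clipped to its
# limits once the target saturates.
import numpy as np

def dispatch_with_saturation(p0, alpha, p_min, p_max, imbalance):
    """Return actual generator outputs under the saturated affine policy."""
    target = p0 + alpha * imbalance          # nominal affine response
    return np.clip(target, p_min, p_max)     # saturate at generator limits

p0 = np.array([100.0, 80.0, 50.0])           # scheduled setpoints (MW)
alpha = np.array([0.5, 0.3, 0.2])            # participation factors (sum to 1)
p_min = np.array([20.0, 20.0, 10.0])
p_max = np.array([120.0, 90.0, 60.0])

delta = 60.0                                 # realized system imbalance (MW)
p = dispatch_with_saturation(p0, alpha, p_min, p_max, delta)
residual = delta - np.sum(p - p0)            # imbalance left uncovered by saturated units
print(p, residual)
```

With these illustrative numbers all three units saturate, leaving 20 MW of the imbalance uncovered, which is exactly the effect the nonsmooth model has to account for.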
Generalization Error of Generalized Linear Models in High Dimensions ; At the heart of machine learning lies the question of generalizability of learned rules over previously unseen data. While overparameterized models based on neural networks are now ubiquitous in machine learning applications, our understanding of their generalization capabilities is incomplete. This task is made harder by the nonconvexity of the underlying learning problems. We provide a general framework to characterize the asymptotic generalization error for singlelayer neural networks (i.e., generalized linear models) with arbitrary nonlinearities, making it applicable to regression as well as classification problems. This framework enables analyzing the effect of (i) overparameterization and nonlinearity during modeling, and (ii) choices of loss function, initialization, and regularizer during learning. Our model also captures mismatch between training and test distributions. As examples, we analyze a few special cases, namely linear regression and logistic regression. We are also able to rigorously and analytically explain the double descent phenomenon in generalized linear models.
Exponential Tilting of Generative Models Improving Sample Quality by Training and Sampling from Latent Energy ; In this paper, we present a general method that can improve the sample quality of pretrained likelihood based generative models. Our method constructs an energy function on the latent variable space that yields an energy function on samples produced by the pretrained generative model. The energy based model is efficiently trained by maximizing the data likelihood, and after training, new samples in the latent space are generated from the energy based model and passed through the generator to produce samples in observation space. We show that using our proposed method, we can greatly improve the sample quality of popular likelihood based generative models, such as normalizing flows and VAEs, with very little computational overhead.
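A rough sketch of the sampling stage, assuming a trained latent energy network and generator as placeholder torch modules; the unadjusted Langevin update, step size, and step count are illustrative choices, not necessarily the sampler used in the paper.

```python
# Sketch: Langevin sampling in the latent space of a pretrained generator under
# a learned latent energy function, then decoding to observation space.
# `energy` and `generator` are placeholders for trained networks.
import torch

def sample_tilted(generator, energy, latent_dim, n_steps=100, step=0.01):
    z = torch.randn(1, latent_dim, requires_grad=True)
    for _ in range(n_steps):
        grad, = torch.autograd.grad(energy(z).sum(), z)
        with torch.no_grad():
            # Langevin update: gradient descent on the energy plus Gaussian noise.
            z += -0.5 * step * grad + (step ** 0.5) * torch.randn_like(z)
    with torch.no_grad():
        return generator(z)   # push the tilted latent sample through the generator

# Usage (placeholders): x = sample_tilted(vae.decoder, latent_ebm, latent_dim=64)
```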
Efficient Training Data Generation for PhaseBased DOA Estimation ; Deep learning DL based direction of arrival DOA estimation is an active research topic and currently represents the stateoftheart. Usually, DLbased DOA estimators are trained with recorded data or computationally expensive generated data. Both data types require significant storage and excessive time to, respectively, record or generate. We propose a low complexity online data generation method to train DL models with a phasebased feature input. The data generation method models the phases of the microphone signals in the frequency domain by employing a deterministic model for the direct path and a statistical model for the late reverberation of the room transfer function. By an evaluation using data from measured room impulse responses, we demonstrate that a model trained with the proposed training data generation method performs comparably to models trained with data generated based on the sourceimage method.
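One way such phase features could be generated on the fly is sketched below for a two-microphone array; the wrapped-Gaussian treatment of late reverberation and all constants are assumptions for illustration, not the paper's exact deterministic-plus-statistical model.

```python
# Sketch: cheap online generation of phase-based training features for a
# two-microphone DOA estimator. The inter-channel phase of the direct path is
# deterministic given the array geometry; late reverberation is approximated
# here by additive Gaussian phase noise. All constants are illustrative.
import numpy as np

c, fs, n_fft, d = 343.0, 16000, 512, 0.08       # speed of sound, mic spacing (m)
freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)

def phase_features(doa_deg, reverb_std=0.5, rng=np.random.default_rng()):
    tau = d * np.sin(np.deg2rad(doa_deg)) / c    # inter-mic delay of the direct path
    ipd = 2.0 * np.pi * freqs * tau              # deterministic direct-path phase
    ipd_noisy = ipd + rng.normal(0.0, reverb_std, size=freqs.shape)
    # Represent the wrapped phase as (cos, sin) so the model sees no 2*pi jumps.
    return np.stack([np.cos(ipd_noisy), np.sin(ipd_noisy)], axis=0), doa_deg

features, label = phase_features(doa_deg=30.0)
print(features.shape)   # (2, n_fft // 2 + 1)
```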
Integration of Renewable Generators in Synthetic Electric Grids for Dynamic Analysis ; This paper presents a method to better integrate dynamic models for renewable resources into synthetic electric grids. An automated dynamic models assignment process is proposed for wind and solar generators. A realistic composition ratio for different types of wind turbine generators WTG is assigned to each wind generator. Statistics summarized from real electric grid data form the bases in assigning proper models with reasonable parameters to each WTG. A similar process is used to assign appropriate models and parameters to each photovoltaic PV generator. Multiple control strategies of the renewable resources are considered and tested in case studies. Two largescale synthetic network test cases are used as examples of modeling the dynamics of renewable generators. Several transient stability metrics are adopted to assess the stability level after being subject to N1 contingency event. Representative contingency events are given to demonstrate the performance of the synthetic renewable generator models.
Test case generation for agentbased models A systematic literature review ; Agentbased models play an important role in simulating complex emergent phenomena and supporting critical decisions. In this context, a software fault may result in poorly informed decisions that lead to disastrous consequences. The ability to rigorously test these models is therefore essential. In this systematic literature review, we answer five research questions related to the key aspects of test case generation in agentbased models: What are the information artifacts used to generate tests? How are these tests generated? How is a verdict assigned to a generated test? How is the adequacy of a generated test suite measured? What level of abstraction of an agentbased model is targeted by a generated test? Our results show that whilst the majority of techniques are effective for testing functional requirements at the agent and integration levels of abstraction, there are comparatively few techniques capable of testing societylevel behaviour. Additionally, we identify a need for more thorough evaluation using realistic case studies that feature challenging properties associated with a typical agentbased model.
Estimating Subjective CrowdEvaluations as an Additional Objective to Improve Natural Language Generation ; Human ratings are one of the most prevalent methods to evaluate the performance of natural language processing algorithms. Similarly, it is common to measure the quality of sentences generated by a natural language generation model using human raters. In this paper, we argue for exploring the use of subjective evaluations within the process of training language generation models in a multitask learning setting. As a case study, we use a crowdauthored dialogue corpus to finetune six different language generation models. Two of these models incorporate multitask learning and use subjective ratings of lines as part of an explicit learning goal. A human evaluation of the generated dialogue lines reveals that utterances generated by the multitasking models were subjectively rated as the most typical, most moving the conversation forward, and least offensive. Based on these promising first results, we discuss future research directions for incorporating subjective human evaluations into language model training and to hence keep the human user in the loop during the development process.
Augmenting Molecular Deep Generative Models with Topological Data Analysis Representations ; Deep generative models have emerged as a powerful tool for learning useful molecular representations and designing novel molecules with desired properties, with applications in drug discovery and material design. However, most existing deep generative models are restricted due to lack of spatial information. Here we propose augmentation of deep generative models with topological data analysis TDA representations, known as persistence images, for robust encoding of 3D molecular geometry. We show that the TDA augmentation of a characterbased Variational AutoEncoder VAE outperforms stateoftheart generative neural nets in accurately modeling the structural composition of the QM9 benchmark. Generated molecules are valid, novel, and diverse, while exhibiting distinct electronic property distribution, namely higher sample population with small HOMOLUMO gap. These results demonstrate that TDA features indeed provide crucial geometric signal for learning abstract structures, which is nontrivial for existing generative models operating on string, graph, or 3D point sets to capture.
View Generalization for Single Image Textured 3D Models ; Humans can easily infer the underlying 3D geometry and texture of an object only from a single 2D image. Current computer vision methods can do this, too, but suffer from view generalization problems: the models inferred tend to make poor predictions of appearance in novel views. As for generalization problems in machine learning, the difficulty is balancing singleview accuracy (cf. training error; bias) with novel view accuracy (cf. test error; variance). We describe a class of models whose geometric rigidity is easily controlled to manage this tradeoff. We describe a cycle consistency loss that improves view generalization (roughly, a model from a generated view should predict the original view well). View generalization of textures requires that models share texture information, so a car seen from the back still has headlights because other cars have headlights. We describe a cycle consistency loss that encourages model textures to be aligned, so as to encourage sharing. We compare our method against the stateoftheart method and show both qualitative and quantitative improvements.
Latent Space EnergyBased Model of SymbolVector Coupling for Text Generation and Classification ; We propose a latent space energybased prior model for text generation and classification. The model stands on a generator network that generates the text sequence based on a continuous latent vector. The energy term of the prior model couples a continuous latent vector and a symbolic onehot vector, so that discrete category can be inferred from the observed example based on the continuous latent vector. Such a latent space coupling naturally enables incorporation of information bottleneck regularization to encourage the continuous latent vector to extract information from the observed example that is informative of the underlying category. In our learning method, the symbolvector coupling, the generator network and the inference network are learned jointly. Our model can be learned in an unsupervised setting where no category labels are provided. It can also be learned in semisupervised setting where category labels are provided for a subset of training examples. Our experiments demonstrate that the proposed model learns wellstructured and meaningful latent space, which 1 guides the generator to generate text with high quality, diversity, and interpretability, and 2 effectively classifies text.
StackGAN Facial Image Generation Optimizations ; Current stateoftheart photorealistic generators are computationally expensive, involve unstable training processes, and have real and synthetic distributions that are dissimilar in higherdimensional spaces. To solve these issues, we propose a variant of the StackGAN architecture. The new architecture incorporates conditional generators to construct an image in many stages. In our model, we generate grayscale facial images in two different stages: noise to edges (stage one) and edges to grayscale (stage two). Our model is trained with the CelebA facial image dataset and achieved a Fréchet Inception Distance FID score of 73 for edge images and a score of 59 for grayscale images generated using the synthetic edge images. Although our model achieved subpar results in relation to stateoftheart models, dropout layers could reduce the overfitting in our conditional mapping. Additionally, since most images can be broken down into important features, improvements to our model can generalize to other datasets. Therefore, our model can potentially serve as a superior alternative to traditional means of generating photorealistic images.
MetaGeneralization for Multiparty Privacy Learning to Identify Anomaly Multimedia Traffic in Graynet ; Identifying anomaly multimedia traffic in cyberspace is a big challenge in distributed service systems, multiple generation networks and future internet of everything. This letter explores metageneralization for a multiparty privacy learning model in graynet to improve the performance of anomaly multimedia traffic identification. The multiparty privacy learning model in graynet is a globally shared model that is partitioned, distributed and trained by exchanging multiparty parameters updates with preserving private data. The metageneralization refers to discovering the inherent attributes of a learning model to reduce its generalization error. In experiments, three metageneralization principles are tested as follows. The generalization error of the multiparty privacy learning model in graynet is reduced by changing the dimension of bytelevel imbedding. Following that, the error is reduced by adapting the depth for extracting packetlevel features. Finally, the error is reduced by adjusting the size of support set for preprocessing trafficlevel data. Experimental results demonstrate that the proposal outperforms the stateoftheart learning models for identifying anomaly multimedia traffic.
Texture Generation Using DualDomain Feature Flow with MultiView Hallucinations ; We propose a dualdomain generative model to estimate a texture map from a single image for colorizing a 3D human model. When estimating a texture map, a single image is insufficient as it reveals only one facet of a 3D object. To provide sufficient information for estimating a complete texture map, the proposed model simultaneously generates multiview hallucinations in the image domain and an estimated texture map in the texture domain. During the generating process, each domain generator exchanges features to the other by a flowbased local attention mechanism. In this manner, the proposed model can estimate a texture map utilizing abundant multiview image features from which multiview hallucinations are generated. As a result, the estimated texture map contains consistent colors and patterns over the entire region. Experiments show the superiority of our model for estimating a directly renderable texture map, which is applicable to 3D animation rendering. Furthermore, our model also improves an overall generation quality in the image domain for pose and viewpoint transfer tasks.
3D pride without 2D prejudice Biascontrolled multilevel generative models for structurebased ligand design ; Generative models for structurebased molecular design hold significant promise for drug discovery, with the potential to speed up the hittolead development cycle, while improving the quality of drug candidates and reducing costs. Data sparsity and bias are, however, two main roadblocks to the development of 3Daware models. Here we propose a firstinkind training protocol based on multilevel contrastive learning for improved bias control and data efficiency. The framework leverages the large data resources available for 2D generative modelling with datasets of ligandprotein complexes. The results are hierarchical generative models that are topologically unbiased, explainable and customizable. We show how, by deconvolving the generative posterior into chemical, topological and structural context factors, we not only avoid common pitfalls in the design and evaluation of generative models, but furthermore gain detailed insight into the generative process itself. This improved transparency significantly aids method development, besides allowing finegrained control over novelty vs familiarity.
Exploring Length Generalization in Large Language Models ; The ability to extrapolate from short problem instances to longer ones is an important form of outofdistribution generalization in reasoning tasks, and is crucial when learning from datasets where longer problem instances are rare. These include theorem proving, solving quantitative mathematics problems, and reading or summarizing novels. In this paper, we run careful empirical studies exploring the length generalization capabilities of transformerbased language models. We first establish that naively finetuning transformers on length generalization tasks shows significant generalization deficiencies independent of model scale. We then show that combining pretrained large language models' incontext learning abilities with scratchpad prompting (asking the model to output solution steps before producing an answer) results in a dramatic improvement in length generalization. We run careful failure analyses on each of the learning modalities and identify common sources of mistakes that highlight opportunities in equipping language models with the ability to generalize to longer problems.
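The scratchpad prompting format referred to above can be illustrated with a toy multi-digit addition task; the template and few-shot example are illustrative, not the exact prompts used in the paper.

```python
# Sketch: build a scratchpad-style few-shot prompt in which the model is asked
# to write out intermediate solution steps before the final answer.
FEW_SHOT = """Q: 367 + 485
Scratchpad:
7 + 5 = 12, write 2 carry 1
6 + 8 + 1 = 15, write 5 carry 1
3 + 4 + 1 = 8, write 8
A: 852

"""

def scratchpad_prompt(a: int, b: int) -> str:
    return FEW_SHOT + f"Q: {a} + {b}\nScratchpad:\n"

# The longer test instance is then sent to the LLM of choice, e.g.:
# completion = llm.generate(scratchpad_prompt(93214, 48807))
print(scratchpad_prompt(93214, 48807))
```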
Democratizing Ethical Assessment of Natural Language Generation Models ; Natural language generation models are computer systems that generate coherent language when prompted with a sequence of words as context. Despite their ubiquity and many beneficial applications, language generation models also have the potential to inflict social harms by generating discriminatory language, hateful speech, profane content, and other harmful material. Ethical assessment of these models is therefore critical. But it is also a challenging task, requiring an expertise in several specialized domains, such as computational linguistics and social justice. While significant strides have been made by the research community in this domain, accessibility of such ethical assessments to the wider population is limited due to the high entry barriers. This article introduces a new tool to democratize and standardize ethical assessment of natural language generation models Tool for Ethical Assessment of Language generation models TEAL, a component of Credo AI Lens, an opensource assessment framework.
Learning to Generate 3D Shapes from a Single Example ; Existing generative models for 3D shapes are typically trained on a large 3D dataset, often of a specific object category. In this paper, we investigate the deep generative model that learns from only a single reference 3D shape. Specifically, we present a multiscale GANbased model designed to capture the input shape's geometric features across a range of spatial scales. To avoid large memory and computational cost induced by operating on the 3D volume, we build our generator atop the triplane hybrid representation, which requires only 2D convolutions. We train our generative model on a voxel pyramid of the reference shape, without the need of any external supervision or manual annotation. Once trained, our model can generate diverse and highquality 3D shapes possibly of different sizes and aspect ratios. The resulting shapes present variations across different scales, and at the same time retain the global structure of the reference shape. Through extensive evaluation, both qualitative and quantitative, we demonstrate that our model can generate 3D shapes of various types.
Machine Generated Text A Comprehensive Survey of Threat Models and Detection Methods ; Machine generated text is increasingly difficult to distinguish from human authored text. Powerful opensource models are freely available, and userfriendly tools that democratize access to generative models are proliferating. ChatGPT, which was released shortly after the first edition of this survey, epitomizes these trends. The great potential of stateoftheart natural language generation NLG systems is tempered by the multitude of avenues for abuse. Detection of machine generated text is a key countermeasure for reducing abuse of NLG models, with significant technical challenges and numerous open problems. We provide a survey that includes both 1 an extensive analysis of threat models posed by contemporary NLG systems, and 2 the most complete review of machine generated text detection methods to date. This survey places machine generated text within its cybersecurity and social context, and provides strong guidance for future work addressing the most critical threat models, and ensuring detection systems themselves demonstrate trustworthiness through fairness, robustness, and accountability.
Is synthetic data from generative models ready for image recognition ; Recent texttoimage generation models have shown promising results in generating highfidelity photorealistic images. Though the results are astonishing to human eyes, how applicable these generated images are for recognition tasks remains underexplored. In this work, we extensively study whether and how synthetic images generated from stateoftheart texttoimage generation models can be used for image recognition tasks, and focus on two perspectives: synthetic data for improving classification models in datascarce settings (i.e. zeroshot and fewshot), and synthetic data for largescale model pretraining for transfer learning. We showcase the strengths and shortcomings of synthetic data from existing generative models, and propose strategies for better applying synthetic data for recognition tasks. Code: https://github.com/CVMI-Lab/SyntheticData.
Comparing Synthetic Tabular Data Generation Between a Probabilistic Model and a Deep Learning Model for Education Use Cases ; The ability to generate synthetic data has a variety of use cases across different domains. In education research, there is a growing need to have access to synthetic data to test certain concepts and ideas. In recent years, several deep learning architectures were used to aid in the generation of synthetic data, but with varying results. In the education context, the sophistication of implementing different models requiring large datasets is becoming very important. This study aims to compare the application of synthetic tabular data generation between a probabilistic model, specifically a Bayesian Network, and a deep learning model, specifically a Generative Adversarial Network, using a classification task. The results of this study indicate that synthetic tabular data generation is better suited for the education context using probabilistic models (overall accuracy of 75%) than deep learning architectures (overall accuracy of 38%) because of probabilistic interdependence. Lastly, we recommend that other data types should be explored and evaluated for their application in generating synthetic data for education use cases.
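A common way to operationalize such a comparison is a train-on-synthetic, test-on-real (TSTR) check; the sketch below assumes scikit-learn and placeholder arrays, and is not the exact experimental protocol of the study.

```python
# Sketch: score synthetic tabular data for a downstream classification task by
# training on synthetic and testing on real data, against a real-data baseline.
# The classifier choice and data arrays are placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def tstr_accuracy(X_synth, y_synth, X_real, y_real, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_real, y_real, test_size=0.3, random_state=seed)
    clf_synth = RandomForestClassifier(random_state=seed).fit(X_synth, y_synth)
    clf_real = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr)
    return (accuracy_score(y_te, clf_synth.predict(X_te)),   # synthetic-trained
            accuracy_score(y_te, clf_real.predict(X_te)))    # real-trained baseline

# Usage (placeholders): call tstr_accuracy once per generator (Bayesian network
# vs GAN) and compare the synthetic-trained accuracies against the baseline.
```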
Controllable Text Generation with Language Constraints ; We consider the task of text generation in language models with constraints specified in natural language. To this end, we first create a challenging benchmark Cognac that provides as input to the model a topic with example text, along with a constraint on text to be avoided. Unlike prior work, our benchmark contains knowledgeintensive constraints sourced from databases like Wordnet and Wikidata, which allows for straightforward evaluation while striking a balance between broad attributelevel and narrow lexicallevel controls. We find that even stateoftheart language models like GPT3 fail often on this task, and propose a solution to leverage a language model's own internal knowledge to guide generation. Our method, called CognacGen, first queries the language model to generate guidance terms for a specified topic or constraint, and uses the guidance to modify the model's token generation probabilities. We propose three forms of guidance (binary verifier, topk tokens, textual example), and employ prefixtuning approaches to distill the guidance to tackle diverse natural language constraints. Through extensive empirical evaluations, we demonstrate that CognacGen can successfully generalize to unseen instructions and outperform competitive baselines in generating constraint conforming text.
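The core mechanism of modifying token generation probabilities with guidance can be sketched generically; the flat logit penalty and placeholder banned-token set below stand in for CognacGen's learned guidance models.

```python
# Sketch: steer generation by down-weighting token ids that guidance flags as
# violating the constraint, then renormalizing and sampling. The penalty scheme
# is a generic stand-in, not the paper's exact guidance mechanism.
import numpy as np

def guided_sampling_step(logits, banned_token_ids, penalty=8.0,
                         rng=np.random.default_rng()):
    """Sample the next token after penalizing constraint-violating tokens."""
    adjusted = logits.copy()
    adjusted[list(banned_token_ids)] -= penalty          # suppress flagged tokens
    probs = np.exp(adjusted - adjusted.max())
    probs /= probs.sum()                                  # renormalize to a distribution
    return rng.choice(len(probs), p=probs)

# Usage (placeholders): `logits` comes from the language model at each decoding
# step and `banned_token_ids` from querying the model for guidance terms.
logits = np.random.randn(50)
print(guided_sampling_step(logits, banned_token_ids={3, 7, 11}))
```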
Learning to Generate Questions by Enhancing Text Generation with Sentence Selection ; We introduce an approach for the answeraware question generation problem. Instead of only relying on the capability of strong pretrained language models, we observe that the information of answers and questions can be found in some relevant sentences in the context. Based on that, we design a model which includes two modules: a selector and a generator. The selector forces the model to focus more on relevant sentences regarding an answer to provide implicit local information. The generator generates questions by implicitly combining local information from the selector and global information from the whole context encoded by the encoder. The model is trained jointly to take advantage of latent interactions between the two modules. Experimental results on two benchmark datasets show that our model is better than strong pretrained models for the question generation task. The code is also available at shorturl.at/lV567.
Deep Image Fingerprint Towards Low Budget Synthetic Image Detection and Model Lineage Analysis ; The generation of highquality images has become widely accessible and is a rapidly evolving process. As a result, anyone can generate images that are indistinguishable from real ones. This leads to a wide range of applications, including malicious usage with deceptive intentions. Despite advances in detection techniques for generated images, a robust detection method still eludes us. Furthermore, model personalization techniques might affect the detection capabilities of existing methods. In this work, we utilize the architectural properties of convolutional neural networks CNNs to develop a new detection method. Our method can detect images from a known generative model and enables us to establish relationships between finetuned generative models. We tested the method on images produced by both Generative Adversarial Networks GANs and recent large texttoimage models LTIMs that rely on Diffusion Models. Our approach outperforms others trained under identical conditions and achieves comparable performance to stateoftheart pretrained detection methods on images generated by Stable Diffusion and MidJourney, with significantly fewer required training samples.
Understanding Deep Generative Models with Generalized Empirical Likelihoods ; Understanding how well a deep generative model captures a distribution of highdimensional data remains an important open challenge. It is especially difficult for certain model classes, such as Generative Adversarial Networks and Diffusion Models, whose models do not admit exact likelihoods. In this work, we demonstrate that generalized empirical likelihood GEL methods offer a family of diagnostic tools that can identify many deficiencies of deep generative models DGMs. We show, with appropriate specification of moment conditions, that the proposed method can identify which modes have been dropped, the degree to which DGMs are mode imbalanced, and whether DGMs sufficiently capture intraclass diversity. We show how to combine techniques from Maximum Mean Discrepancy and Generalized Empirical Likelihood to create not only distribution tests that retain persample interpretability, but also metrics that include label information. We find that such tests predict the degree of mode dropping and mode imbalance up to 60% better than metrics such as improved precisionrecall. We provide an implementation at httpsgithub.comdeepmindunderstandingdeepgenerativemodelswithgeneralizedempiricallikelihood.
Solving and Generating NPR Sunday Puzzles with Large Language Models ; We explore the ability of large language models to solve and generate puzzles from the NPR Sunday Puzzle game show using PUZZLEQA, a dataset comprising 15 years of onair puzzles. We evaluate four large language models using PUZZLEQA, in both multiple choice and free response formats, and explore two prompt engineering techniques to improve free response performance: chainofthought reasoning and prompt summarization. We find that stateoftheart large language models can solve many PUZZLEQA puzzles: the best model, GPT3.5, achieves 50.2% loose accuracy. However, in our fewshot puzzle generation experiment, we find no evidence that models can generate puzzles: GPT3.5 generates puzzles with answers that do not conform to the generated rules. Puzzle generation remains a challenging task for future work.
MultiBERT for Embeddings for Recommendation System ; In this paper, we propose a novel approach for generating document embeddings using a combination of SentenceBERT SBERT and RoBERTa, two stateoftheart natural language processing models. Our approach treats sentences as tokens and generates embeddings for them, allowing the model to capture both intrasentence and intersentence relations within a document. We evaluate our model on a book recommendation task and demonstrate its effectiveness in generating more semantically rich and accurate document embeddings. To assess the performance of our approach, we conducted experiments on a book recommendation task using the Goodreads dataset. We compared the document embeddings generated using our MultiBERT model to those generated using SBERT alone. We used precision as our evaluation metric to compare the quality of the generated embeddings. Our results showed that our model consistently outperformed SBERT in terms of the quality of the generated embeddings. Furthermore, we found that our model was able to capture more nuanced semantic relations within documents, leading to more accurate recommendations. Overall, our results demonstrate the effectiveness of our approach and suggest that it is a promising direction for improving the performance of recommendation systems.
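The sentence-level embedding idea can be sketched with the sentence-transformers package alone (the RoBERTa combination is omitted); the model name, naive sentence splitting, mean pooling, and toy catalogue are illustrative assumptions.

```python
# Sketch: build a document embedding from sentence embeddings and recommend
# nearest neighbours by cosine similarity. Only the SBERT half of the approach
# is shown; the catalogue is a placeholder dict of {title: description}.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def doc_embedding(document: str) -> np.ndarray:
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return model.encode(sentences).mean(axis=0)   # mean-pool sentence vectors

def recommend(query_doc: str, catalogue: dict, k: int = 3):
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    q = doc_embedding(query_doc)
    scores = {title: cosine(q, doc_embedding(text)) for title, text in catalogue.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Usage (placeholder data): recommend(book_blurbs["some title"], book_blurbs)
```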