Model Elicitation through Direct Questioning ; The future will be replete with scenarios where humans and robots work together in complex environments. Teammates interact, and the robot's interactions should be aimed at getting useful information about the human teammate's model. There are many challenges before a robot can interact in this way, such as incorporating the structural differences in the human's model, ensuring simpler responses, etc. In this paper, we investigate how a robot can interact to localize the human's model from a set of candidate models. We show how to generate questions to refine the robot's understanding of the teammate's model. We evaluate the method in various planning domains. The evaluation shows that these questions can be generated offline and can help refine the model through simple answers.
|
Proposed evolution in the Marolf-Maxfield toy model obtained through correspondence to spontaneous collapse theory ; The Marolf-Maxfield topological toy model for 2D gravity gives the full spectrum of boundary theories, but cannot describe any evolution. In order to obtain the expected evolution from the Hartle-Hawking state to one of the superselection sectors, we suggest considering a correspondence between models of evaporating black holes and models of collapsing wave functions. This note explores this correspondence by relating the Marolf-Maxfield topological toy model to the Bonifacio model of spontaneous collapse theory. The expected evolution of a matrix element under a generator of a parameter in the Marolf-Maxfield model is obtained.
|
The Iterated Local Transitivity Model for Tournaments ; A key generative principle within social and other complex networks is transitivity, where friends of friends are more likely to be friends. We propose a new model for highly dense complex networks based on transitivity, called the Iterated Local Transitivity Tournament (ILTT) model. In ILTT and a dual version of the model, we iteratively apply the principle of transitivity to form new tournaments. The resulting models generate tournaments with small average distances, as observed in real-world complex networks. We explore properties of small subtournaments, or motifs, in the ILTT model and study its graph-theoretic properties, such as Hamilton cycles, spectral properties, and domination numbers. We finish with a set of open problems and next steps for the ILTT model.
|
A General Approach to Modeling Covid-19 ; The present work shows that it is possible to analytically solve a general model to explain the transmission dynamics of SARS-CoV-2. First, the within-host model is described, and later a between-host model, where the coupling between them is the viral load of SARS-CoV-2. The within-host model describes the equations involved in the life cycle of SARS-CoV-2, as well as the immune response, while the between-host model analyzes the dynamics of virus spread from the original source of contagion associated with bats, subsequently transmitted to a host, then reaching the reservoir (the Huanan Seafood Wholesale Market in Wuhan), until finally infecting the human population.
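As an illustration of the within-host component, the following is a minimal sketch of a generic target-cell-limited viral dynamics model; the equations are a standard textbook form, and the parameter values are illustrative assumptions, not the paper's actual system.

```python
import numpy as np
from scipy.integrate import odeint

# T: uninfected target cells, I: infected cells, V: viral load
def within_host(y, t, beta, delta, p, c):
    T, I, V = y
    dT = -beta * T * V              # infection of target cells
    dI = beta * T * V - delta * I   # clearance of infected cells
    dV = p * I - c * V              # virion production and clearance
    return [dT, dI, dV]

t = np.linspace(0, 30, 300)   # days
y0 = [1e7, 0.0, 1.0]          # initial cells and a small inoculum
sol = odeint(within_host, y0, t, args=(3e-8, 1.0, 100.0, 5.0))
V = sol[:, 2]                 # viral-load trajectory: the quantity that
                              # couples to the between-host model
print(f"peak viral load: {V.max():.3g}")
```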
|
Generative AI for Business Strategy: Using Foundation Models to Create Business Strategy Tools ; Generative (foundation) models such as LLMs (large language models) are having a large impact on multiple fields. In this work, we propose the use of such models for business decision making. In particular, we combine unstructured textual data sources (e.g., news data) with multiple foundation models (namely, GPT-4, transformer-based Named Entity Recognition (NER) models, and Entailment-based Zero-shot Classifiers (ZSC)) to derive IT (information technology) artifacts in the form of a sequence of signed business networks. We posit that such artifacts can inform business stakeholders about the state of the market and their own positioning, as well as provide quantitative insights into improving their future outlook.
|
A General Formalism for Inhomogeneous Random Graphs ; We present and investigate an extension of the classical random graph to a general class of inhomogeneous random graph models, where vertices come in different types, and the probability of realizing an edge depends on the types of its terminal vertices. This approach provides a general framework for the analysis of a large class of models. The generic phase structure is derived using generating function techniques, and relations to other classes of models are pointed out.
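For concreteness, here is a minimal sketch of sampling from such a model with two vertex types and an assumed symmetric kernel of type-dependent edge probabilities; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500
type_probs = [0.6, 0.4]               # two vertex types
kernel = np.array([[0.02, 0.005],     # P(edge | endpoint types i, j)
                   [0.005, 0.03]])

types = rng.choice(len(type_probs), size=n, p=type_probs)
adj = np.zeros((n, n), dtype=bool)
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < kernel[types[i], types[j]]:
            adj[i, j] = adj[j, i] = True

print("mean degree:", adj.sum() / n)
```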
|
A Generalized Disjunctive Paraconsistent Data Model for Negative and Disjunctive Information ; This paper presents a generalization of the disjunctive paraconsistent relational data model in which disjunctive positive and negative information can be represented explicitly and manipulated. There are situations where the closed world assumption used to infer negative facts is not valid or is undesirable, and there is a need to represent and reason with negation explicitly. We consider explicit disjunctive negation in the context of disjunctive databases, as there is an interesting interplay between these two types of information. The generalized disjunctive paraconsistent relation is introduced as the main structure in this model. The relational algebra is appropriately generalized to work on generalized disjunctive paraconsistent relations, and its correctness is established.
|
An Extended Generalized Disjunctive Paraconsistent Data Model for Disjunctive Information ; This paper presents an extension of the generalized disjunctive paraconsistent relational data model in which pure disjunctive positive and negative information, as well as mixed disjunctive positive and negative information, can be represented explicitly and manipulated. We consider explicit mixed disjunctive information in the context of disjunctive databases, as there is an interesting interplay between these two types of information. The extended generalized disjunctive paraconsistent relation is introduced as the main structure in this model. The relational algebra is appropriately generalized to work on extended generalized disjunctive paraconsistent relations, and its correctness is established.
|
Symmetries in two-dimensional dilaton gravity with matter ; The symmetries of generic 2D dilaton models of gravity with and without matter are studied in some detail. It is shown that delta2, one of the symmetries of the matterless models, can be generalized to the case where matter fields of any kind are present. The general classical solution for some of these models, in particular those coupled to chiral matter, which generalizes the Vaidya solution of Einstein Gravity, is also given.
|
Mass Generation Mechanism in Supersymmetric Composite Model with Three Generations ; We propose a supersymmetric composite model with three generations in which supersymmetry and electroweak symmetry are broken dynamically, and the masses of quarks and leptons are generated without introducing any mass scales by hand. All the mass scales in the model are expected to be generated dynamically. The mechanism that produces the mass hierarchy is explicitly described, although the roughly estimated mass spectrum of quarks and leptons does not exactly coincide with the realistic one.
|
Mass formula for leptons and quarks as suggested by generalized Dirac's square root ; Based on the formalism of the generalized Dirac equation introduced by the author many years ago, we suggest a model of formal intrinsic interactions within leptons and quarks of three generations. In this model, the leptons and quarks are treated as intrinsic composites of algebraic partons, the latter being identified with Dirac bispinor indices appearing in the generalized Dirac equation. This intrinsic model implies a universal mass formula for leptons and quarks of three generations that has recently been proposed on an essentially empirical level.
|
Hierarchic Models of Turbulence, Superfluidity and Superconductivity ; New models of turbulence, superfluidity and superconductivity, based on a new hierarchic theory, general for liquids and solids (physics/0102086), are proposed. CONTENTS: 1. Turbulence: general description; 2. Mesoscopic mechanism of turbulence; 3. Superfluidity: general description; 4. Mesoscopic scenario of fluidity; 5. Superfluidity as a hierarchic self-organization process; 6. Superfluidity in 3He; 7. Superconductivity: general properties of metals and semiconductors; plasma oscillations; cyclotron resonance; electroconductivity; 8. Microscopic theory of superconductivity (BCS); 9. Mesoscopic scenario of superconductivity: interpretation of experimental data in the framework of the mesoscopic model of superconductivity.
|
Continuous Spectra of Generalized Kronig-Penney Model ; The standard Kronig-Penney model with periodic delta potentials is extended to cases with generalized contact interactions. The eigenvalue equation which determines the dispersion relation for a one-dimensional periodic array of generalized contact interactions is deduced with the transfer-matrix formalism. Numerical results are presented which reveal unexpected band spectra with broader band gaps in the higher energy region for a generic model with generalized contact interactions.
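To make the band-structure computation concrete, here is a minimal sketch for the standard delta-potential Kronig-Penney chain in units hbar = m = 1; the generalized contact interactions of the paper would replace the single delta-function transfer matrix, and all parameter values are illustrative.

```python
import numpy as np

# Band condition for a periodic chain of delta potentials
# (strength c, period a):
#   cos(k a) = cos(q a) + (c / q) sin(q a),  q = sqrt(2 E),
# with allowed bands where the right-hand side lies in [-1, 1].
a, c = 1.0, 5.0
E = np.linspace(1e-4, 60.0, 4000)
q = np.sqrt(2.0 * E)
rhs = np.cos(q * a) + (c / q) * np.sin(q * a)
allowed = np.abs(rhs) <= 1.0

# Energies where the allowed/forbidden character switches: band edges.
edges = E[1:][np.diff(allowed.astype(int)) != 0]
print("band edges:", np.round(edges, 2))
```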
|
Genericity of black-hole formation in the gravitational collapse of homogeneous self-interacting scalar fields ; The gravitational collapse of a wide class of self-interacting homogeneous scalar field models is analyzed. The class is characterized by certain general conditions on the scalar field potential, which, in particular, include both asymptotically polynomial and exponential behaviors. Within this class, we show that the generic evolution is always divergent in a finite time, and then make use of this result to construct radiating star models of the Vaidya type. It turns out that black holes are generically formed in such models.
|
Constraints on Mass Spectrum of Fourth Generation Fermions and Higgs Bosons ; We reanalyze the constraints on the mass spectrum of the chiral fourth-generation fermions and the Higgs bosons for the standard model (SM4) and the two-Higgs-doublet model (THDM). We find that the Higgs mass in the SM4 should be larger than roughly the fourth-generation up-type quark mass, while the light CP-even Higgs mass in the THDM can be smaller. Various mass spectra of the fourth-generation fermions and the Higgs bosons are allowed. The phenomenology of fourth-generation models is still rich.
|
Provably Secure Identity-Based Generalized Signcryption Scheme ; According to actual needs, a generalized signcryption scheme can flexibly work as an encryption scheme, a signature scheme, or a signcryption scheme. In this paper, firstly, we give a security model for identity-based generalized signcryption which is more complete than the existing model. Secondly, we propose an identity-based generalized signcryption scheme. Thirdly, we give the security proof of the new scheme in this complete model. Compared with existing identity-based generalized signcryption schemes, the new scheme has less implementation complexity. Moreover, the new scheme has computation complexity comparable to that of existing normal signcryption schemes.
|
Robust Regulation of MIMO Systems: A Reformulation of the Internal Model Principle ; The internal model principle is a fundamental result stating a necessary and sufficient condition for a stabilizing controller to be robustly regulating. Its classical formulation is given in terms of coprime factorizations and the largest invariant factor of the signal generator, which sets unnecessary restrictions on the theory and its applicability. In this article, the internal model principle is formulated using a general factorization approach and the generators of the fractional ideals generated by the elements of the signal generator. The proposed results are related to the classical ones.
|
Izergin-Korepin analysis on the projected wavefunctions of the generalized free-fermion model ; We apply the Izergin-Korepin analysis to the study of the projected wavefunctions of the generalized free-fermion model. We introduce a generalization of the L-operator of the six-vertex model by Bump-Brubaker-Friedberg and Bump-McNamara-Nakasuji. We carry out the Izergin-Korepin analysis to characterize the projected wavefunctions and show that they can be expressed as a product of factors and certain symmetric functions which generalize the factorial Schur functions. This result can be seen as a generalization of the Tokuyama formula for the factorial Schur functions.
|
Slightly generalized Generalized Contagion: Unifying simple models of biological and social spreading ; We motivate and explore the basic features of generalized contagion, a model mechanism that unifies fundamental models of biological and social contagion. Generalized contagion builds on the elementary observation that spreading and contagion of all kinds involve some form of system memory. We discuss the three main classes of systems that generalized contagion affords, resembling: simple biological contagion; critical mass contagion of social phenomena; and an intermediate, and explosive, vanishing critical mass contagion. We also present a simple explanation of the global spreading condition in the context of a small seed of infected individuals.
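A minimal simulation sketch of dose-memory contagion in this spirit; the update rule and all parameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each step, every individual receives a unit dose with probability
# p times the currently infected fraction, remembers doses from the
# last T steps, and is infected while the remembered dose count meets
# its threshold (threshold 1 resembles simple biological contagion;
# higher thresholds give critical-mass behavior).
n, T, p, threshold, steps = 10_000, 10, 0.3, 1, 200
doses = np.zeros((n, T))
seed = rng.choice(n, size=100, replace=False)
doses[seed, 0] = 1                            # small seed of infecteds

for t in range(steps):
    infected = doses.sum(axis=1) >= threshold
    exposure = rng.random(n) < p * infected.mean()
    doses[:, t % T] = exposure                # ring buffer = memory

print(f"final infected fraction: {infected.mean():.3f}")
```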
|
Bidirectional Generative Modeling Using Adversarial Gradient Estimation ; This paper considers the general f-divergence formulation of bidirectional generative modeling, which includes VAE and BiGAN as special cases. We present a new optimization method for this formulation, where the gradient is computed using an adversarially learned discriminator. In our framework, we show that different divergences induce similar algorithms in terms of gradient evaluation, except with different scaling. This paper therefore gives a general recipe for a class of principled f-divergence based generative modeling methods. Theoretical justifications and extensive empirical studies are provided to demonstrate the advantage of our approach over existing methods.
|
Metaphoric Paraphrase Generation ; This work describes the task of metaphoric paraphrase generation, in which we are given a literal sentence and are charged with generating a metaphoric paraphrase. We propose two different models for this task: a lexical replacement baseline and a novel sequence-to-sequence model, 'metaphor masking', that generates free metaphoric paraphrases. We use crowdsourcing to evaluate our results, and we also develop an automatic metric for evaluating metaphoric paraphrases. We show that while the lexical replacement baseline is capable of producing accurate paraphrases, they often lack metaphoricity, whereas our metaphor-masking model excels at generating metaphoric sentences while performing nearly as well with regard to fluency and paraphrase quality.
|
Smarr formula for BTZ black holes in general three-dimensional gravity models ; Recent studies have presented the interpretation of thermodynamic enthalpy for the mass of BTZ black holes and the corresponding Smarr formula. All of this has been done in the setting of three-dimensional (3D) general relativity. In this paper, we extend this interpretation to general 3D gravity models. It is found that the direct extension is unfeasible and some extra conditions are required to preserve both the Smarr formula and the first law of black hole thermodynamics. Thus, BTZ black hole thermodynamics enforces some constraints on general 3D gravity models, and these constraints are consistent with all previous discussions.
|
Content preserving text generation with attribute controls ; In this work, we address the problem of modifying textual attributes of sentences. Given an input sentence and a set of attribute labels, we attempt to generate sentences that are compatible with the conditioning information. To ensure that the model generates content-compatible sentences, we introduce a reconstruction loss which interpolates between autoencoding and back-translation loss components. We propose an adversarial loss to enforce generated samples to be attribute-compatible and realistic. Through quantitative, qualitative, and human evaluations, we demonstrate that our model is capable of generating fluent sentences that better reflect the conditioning information compared to prior methods. We further demonstrate that the model is capable of simultaneously controlling multiple attributes.
|
Adversarial Learning of Label Dependency: A Novel Framework for Multiclass Classification ; Recent work has shown that exploiting relations between labels improves the performance of multilabel classification. We propose a novel framework based on generative adversarial networks (GANs) to model label dependency. The discriminator learns to model label dependency by discriminating real and generated label sets. To fool the discriminator, the classifier, or generator, learns to generate label sets with dependencies close to those of real data. Extensive experiments and comparisons on two large-scale image classification benchmark datasets (MS-COCO and NUS-WIDE) show that the discriminator improves generalization ability for different kinds of models.
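A minimal PyTorch sketch of this adversarial setup; the dimensions, stand-in features, and architectures are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

n_labels, feat_dim, batch = 20, 64, 32

# The classifier plays the generator: it maps image features to a
# label-set probability vector; the discriminator judges whether a
# label set has realistic co-occurrence structure.
classifier = nn.Sequential(
    nn.Linear(feat_dim, 128), nn.ReLU(),
    nn.Linear(128, n_labels), nn.Sigmoid())
discriminator = nn.Sequential(
    nn.Linear(n_labels, 128), nn.ReLU(), nn.Linear(128, 1))

bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(classifier.parameters(), lr=1e-4)

features = torch.randn(batch, feat_dim)        # stand-in image features
real_labels = (torch.rand(batch, n_labels) < 0.1).float()

fake = classifier(features)
# Discriminator step: real label sets -> 1, generated -> 0.
loss_d = bce(discriminator(real_labels), torch.ones(batch, 1)) + \
         bce(discriminator(fake.detach()), torch.zeros(batch, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()
# Classifier step: fool the discriminator (the usual multi-label BCE
# against ground-truth labels would be added on top in training).
loss_g = bce(discriminator(fake), torch.ones(batch, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```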
|
Java Generics: An Order-Theoretic Approach (Abridged Outline) ; The mathematical modeling of generics in Java and other similar nominally-typed object-oriented programming languages is a challenge. In this short paper we present the outline of a novel order-theoretic approach to modeling generics, in which we also make elementary use of some concepts and tools from category theory. We believe a combined order-theoretic and category-theoretic approach to modeling generics holds the keys to overcoming much of the adversity found when analyzing features of generic OO type systems.
|
Modeling the Dispersion and Polarization Content of Gravitational Waves for Tests of General Relativity ; We propose a generic, phenomenological approach to modifying the dispersion of gravitational waves, independent of corrections to the generation mechanism. This model-independent approach encapsulates all previously proposed parametrizations, including Lorentz violation in the Standard-Model Extension, and provides a roadmap for additional theories. Furthermore, we present a general approach to including modulations of the gravitational-wave polarization content. The framework developed here can be implemented in existing data analysis pipelines for future gravitational-wave observation runs.
|
Gravitational wave generation by interaction of high power lasers with matter. Part II: Ablation and Piston models ; We analyze theoretical models of gravitational wave generation in the interaction of high-intensity lasers with matter, namely the ablation and piston models. We analyze the generated gravitational waves in the linear approximation of gravitational theory. We derive analytical formulas and estimates for the metric perturbations and the radiated power of the generated gravitational waves. Furthermore, we investigate the polarization characteristics and the behaviour of test particles in the presence of a gravitational wave, which will be important for detection.
|
Learning to Generate Questions with Adaptive Copying Neural Networks ; Automatic question generation is an important problem in natural language processing. In this paper we propose a novel adaptive copying recurrent neural network model to tackle the problem of question generation from sentences and paragraphs. The proposed model adds a copying mechanism component onto a bidirectional LSTM architecture to generate more suitable questions adaptively from the input data. Our experimental results show the proposed model can outperform the state-of-the-art question generation methods in terms of BLEU and ROUGE evaluation scores.
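The copying idea can be sketched independently of the paper's exact architecture: the final output distribution mixes a vocabulary softmax with attention weights over source tokens, so rare input words can be copied directly. A toy numpy illustration with stub distributions follows; all values are assumptions for demonstration.

```python
import numpy as np

vocab = ["<unk>", "what", "is", "the", "capital", "of", "france"]
source_tokens = ["france", "capital"]          # toy input sentence
src_ids = [vocab.index(w) for w in source_tokens]

rng = np.random.default_rng(0)
p_vocab = rng.dirichlet(np.ones(len(vocab)))   # decoder softmax (stub)
attention = rng.dirichlet(np.ones(len(source_tokens)))
p_gen = 0.7                                    # learned copy/generate gate

p_final = p_gen * p_vocab
for attn, idx in zip(attention, src_ids):
    p_final[idx] += (1 - p_gen) * attn         # copy mass from the source

print("argmax token:", vocab[int(p_final.argmax())])
print("sums to one:", np.isclose(p_final.sum(), 1.0))
```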
|
Graph Generation with Variational Recurrent Neural Network ; Generating graph structures is a challenging problem due to the diverse representations and complex dependencies among nodes. In this paper, we introduce the Graph Variational Recurrent Neural Network (GraphVRNN), a probabilistic autoregressive model for graph generation. Through modeling the latent variables of graph data, GraphVRNN can capture the joint distributions of graph structures and the underlying node attributes. We conduct experiments on the proposed GraphVRNN in both graph structure learning and attribute generation tasks. The evaluation results show that the variational component allows our network to model complicated distributions, as well as generate plausible structures and node attributes.
|
VITON-GAN: Virtual Try-on Image Generator Trained with Adversarial Loss ; Generating a virtual try-on image from in-shop clothing images and a model person's snapshot is a challenging task because the human body and clothes have high flexibility in their shapes. In this paper, we develop a Virtual Try-on Generative Adversarial Network (VITON-GAN) that generates virtual try-on images using images of in-shop clothing and a model person. This method enhances the quality of the generated image when occlusion is present in a model person's image (e.g., arms crossed in front of the clothes) by adding an adversarial mechanism in the training pipeline.
|
Non-autoregressive Transformer by Position Learning ; Non-autoregressive models are promising on various text generation tasks. Previous work hardly considers explicitly modeling the positions of generated words. However, position modeling is an essential problem in non-autoregressive text generation. In this study, we propose PNAT, which incorporates positions as a latent variable into the text generative process. Experimental results show that PNAT achieves top results on machine translation and paraphrase generation tasks, outperforming several strong baselines.
|
Russian Natural Language Generation: Creation of a Language Modelling Dataset and Evaluation with Modern Neural Architectures ; Generating coherent, grammatically correct, and meaningful text is very challenging; however, it is crucial to many modern NLP systems. So far, research has mostly focused on the English language; for other languages, both standardized datasets and experiments with state-of-the-art models are rare. In this work, we (i) provide a novel reference dataset for Russian language modeling, and (ii) experiment with popular modern methods for text generation, namely variational autoencoders and generative adversarial networks, which we trained on the new dataset. We evaluate the generated text with respect to metrics such as perplexity, grammatical correctness, and lexical diversity.
|
Unsupervised Paraphrase Generation using Pretrained Language Models ; Large-scale pretrained language models have proven to be a very powerful approach in various natural language tasks. OpenAI's GPT-2 (Radford et al., 2019) is notable for its capability to generate fluent, well-formulated, grammatically consistent text and phrase completions. In this paper we leverage this generation capability of GPT-2 to generate paraphrases without any supervision from labelled data. We examine how the results compare with other supervised and unsupervised approaches, and the effect of using paraphrases for data augmentation on downstream tasks such as classification. Our experiments show that paraphrases generated with our model are of good quality, are diverse, and improve downstream task performance when used for data augmentation.
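A minimal sketch of this zero-supervision setup using the off-the-shelf Hugging Face transformers API; the prompt format here is an illustrative assumption, not the paper's exact scheme.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

sentence = "The quick brown fox jumps over the lazy dog."
prompt = f"{sentence} Paraphrase:"   # assumed prompting scheme
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    do_sample=True,            # sampling encourages diverse paraphrases
    top_p=0.9,
    max_new_tokens=30,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for out in outputs:
    text = tokenizer.decode(out, skip_special_tokens=True)
    print(text[len(prompt):].strip())
```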
|
Anisotropic generalization of the Vaidya-Tikekar superdense star ; We study superdense relativistic stars with anisotropic matter distributions and spheroidal spatial hypersurfaces. We propose a methodology for making an anisotropic generalization of the Vaidya-Tikekar superdense star model. The anisotropic Einstein field equations can be solved in terms of hypergeometric functions for our choice of gravitational potential and anisotropy. Particular parameter choices allow us to generate models of anisotropic stars in terms of elementary functions. Also, isotropic stars can be generated in the limit of vanishing anisotropy. In particular, we obtain the well-known superdense models of Tikekar, which are isotropic and have specific spheroidal geometries. The impact of anisotropy on the gross physical behaviour of a compact star is studied.
|
A Geometry-Inspired Attack for Generating Natural Language Adversarial Examples ; Generating adversarial examples for natural language is hard, as natural language consists of discrete symbols, and examples are often of variable lengths. In this paper, we propose a geometry-inspired attack for generating natural language adversarial examples. Our attack generates adversarial examples by iteratively approximating the decision boundary of Deep Neural Networks (DNNs). Experiments on two datasets with two different models show that our attack fools natural language models with high success rates, while only replacing a few words. Human evaluation shows that adversarial examples generated by our attack are hard for humans to recognize. Further experiments show that adversarial training can improve model robustness against our attack.
|
Melody Classifier with Stacked-LSTM ; Attempts to use generative models for music generation have been common in recent years, and some of them have achieved good results. Pieces generated by some of these models are almost indistinguishable from those composed by human composers. However, research on evaluation systems for machine-generated music is still at a relatively early stage, and there is no uniform standard for such tasks. This paper proposes a stacked-LSTM binary classifier based on a language model, which can be used to distinguish a human composer's work from a machine-generated melody by learning the MIDI file's pitch, position, and duration.
|
The Role of Syntactic Planning in Compositional Image Captioning ; Image captioning has focused on generalizing to images drawn from the same distribution as the training set, and not to the more challenging problem of generalizing to different distributions of images. Recently, Nikolaus et al. (2019) introduced a dataset to assess compositional generalization in image captioning, where models are evaluated on their ability to describe images with unseen adjective-noun and noun-verb compositions. In this work, we investigate different methods to improve compositional generalization by planning the syntactic structure of a caption. Our experiments show that jointly modeling tokens and syntactic tags enhances generalization in both RNN- and Transformer-based models, while also improving performance on standard metrics.
|
PixelTransformer: Sample Conditioned Signal Generation ; We propose a generative model that can infer a distribution for the underlying spatial signal conditioned on sparse samples, e.g., plausible images given a few observed pixels. In contrast to sequential autoregressive generative models, our model allows conditioning on arbitrary samples and can answer distributional queries for any location. We empirically validate our approach across three image datasets and show that we learn to generate diverse and meaningful samples, with the distribution variance reducing given more observed pixels. We also show that our approach is applicable beyond images and can allow generating other types of spatial outputs, e.g., polynomials, 3D shapes, and videos.
|
Quantization of Generalized Abelian Gauge Field Theory under Rotor Model ; This paper is a follow-up to the previous study of the generalized abelian gauge field theory under the rotor model of order n of higher-order derivatives. We study the quantization of this theory using the path integral approach and find the Feynman propagator (2-point correlation function) of this generalized theory. We also investigate the generalized Proca action under the rotor model and derive the Feynman propagator for the massive case.
|
Trisections in colored tensor models ; We give a procedure to construct quasi-trisection diagrams for closed pseudomanifolds generated by colored tensor models without restrictions on the number of simplices in the triangulation, thereby generalizing previous works in the context of crystallizations and PL-manifolds. We further speculate on generalizations of similar constructions for a class of pseudomanifolds generated by simplicial colored tensor models.
|
Does Structure Matter? Leveraging Data-to-Text Generation for Answering Complex Information Needs ; In this work, our aim is to provide a structured answer in natural language to a complex information need. In particular, we envision using generative models from the perspective of data-to-text generation. We propose the use of a content selection and planning pipeline which aims at structuring the answer by generating intermediate plans. The experimental evaluation is performed using the TREC Complex Answer Retrieval (CAR) dataset. We evaluate both the generated answer and its corresponding structure and show the effectiveness of planning-based models in comparison to a text-to-text model.
|
Contextual road lane and symbol generation for autonomous driving ; In this paper, we present a novel approach for lane detection and segmentation using generative models. Traditionally, discriminative models have been employed to classify pixels semantically on a road. We model the probability distribution of lanes and road symbols by training a generative adversarial network. Based on the learned probability distribution, context-aware lanes and road signs are generated for a given image, and are further quantized to the nearest class label. The proposed method has been tested on the BDD100K and Baidu ApolloScape datasets, performs better than the state of the art, and exhibits robustness to adverse conditions by generating lanes in faded-out and occluded scenarios.
|
Generalization Gap in Amortized Inference ; The ability of likelihood-based probabilistic models to generalize to unseen data is central to many machine learning applications such as lossless compression. In this work, we study the generalization of a popular class of probabilistic model, the Variational Auto-Encoder (VAE). We discuss the two generalization gaps that affect VAEs and show that overfitting is usually dominated by amortized inference. Based on this observation, we propose a new training objective that improves the generalization of amortized inference. We demonstrate how our method can improve performance in the context of image modeling and lossless compression.
|
Model Generalization: A Sharpness-Aware Optimization Perspective ; Sharpness-Aware Minimization (SAM) and adaptive sharpness-aware minimization (ASAM) aim to improve model generalization. In this project, we propose three experiments to validate their generalization benefits from the sharpness-aware perspective. Our experiments show that sharpness-aware optimization techniques can help provide models with strong generalization ability. Our experiments also show that ASAM can improve generalization performance on unnormalized data, but further research is needed to confirm this.
|
AARGH End-to-end Retrieval-Generation for Task-Oriented Dialog ; We introduce AARGH, an end-to-end task-oriented dialog system combining retrieval and generative approaches in a single model, aiming at improving dialog management and the lexical diversity of outputs. The model features a new response selection method based on an action-aware training objective and a simplified single-encoder retrieval architecture, which allow us to build an end-to-end retrieval-enhanced generation model where retrieval and generation share most of the parameters. On the MultiWOZ dataset, we show that our approach produces more diverse outputs while maintaining or improving state tracking and context-to-response generation performance, compared to state-of-the-art baselines.
|
Vine copula based knockoff generation for high-dimensional controlled variable selection ; Vine copulas are a flexible tool for high-dimensional dependence modeling. In this article, we discuss the generation of approximate model-X knockoffs with vine copulas. It is shown how Gaussian knockoffs can be generalized to Gaussian copula knockoffs. A convenient way to parametrize Gaussian copulas is via partial correlation vines. We discuss how completion problems for partial correlation vines are related to Gaussian knockoffs. A natural generalization of partial correlation vines is vine copulas, which are well suited for the generation of approximate model-X knockoffs. We discuss a specific D-vine structure which is advantageous for obtaining vine copula knockoff models. In a simulation study, we demonstrate that vine copula knockoff models are effective and powerful for high-dimensional controlled variable selection.
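For reference, here is a minimal sketch of the Gaussian model-X knockoff construction that the vine copula approach generalizes (the standard equicorrelated construction; dimensions and the random covariance are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

# For X ~ N(0, Sigma) with standardized Sigma, equicorrelated knockoffs
# use S = s*I with s = min(2*lambda_min(Sigma), 1), and
#   Xk | X ~ N(X (I - Sigma^{-1} S), 2S - S Sigma^{-1} S).
p, n = 5, 1000
A = rng.normal(size=(p, p))
Sigma = A @ A.T
d = np.sqrt(np.diag(Sigma))
Sigma = Sigma / np.outer(d, d)                       # correlation matrix

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
lmin = np.linalg.eigvalsh(Sigma).min()
S = np.diag(np.full(p, 0.999 * min(2 * lmin, 1.0)))  # slack keeps PSD

Sinv_S = np.linalg.solve(Sigma, S)                   # Sigma^{-1} S
cond_cov = 2 * S - S @ Sinv_S
Xk = X @ (np.eye(p) - Sinv_S) + rng.multivariate_normal(
    np.zeros(p), cond_cov, size=n)

# Sanity check: Cov(Xk) should match Sigma up to sampling noise.
print(np.round(np.cov(Xk.T) - Sigma, 2))
```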
|
PAC-Bayesian Generalization Bounds for Adversarial Generative Models ; We extend PAC-Bayesian theory to generative models and develop generalization bounds for models based on the Wasserstein distance and the total variation distance. Our first result on the Wasserstein distance assumes the instance space is bounded, while our second result takes advantage of dimensionality reduction. Our results naturally apply to Wasserstein GANs and Energy-Based GANs, and our bounds provide new training objectives for these two. Although our work is mainly theoretical, we perform numerical experiments showing non-vacuous generalization bounds for Wasserstein GANs on synthetic datasets.
|
Prototype Generation: Robust Feature Visualisation for Data Independent Interpretability ; We introduce Prototype Generation, a stricter and more robust form of feature visualisation for model-agnostic, data-independent interpretability of image classification models. We demonstrate its ability to generate inputs that result in natural activation paths, countering previous claims that feature visualisation algorithms are untrustworthy due to unnatural internal activations. We substantiate these claims by quantitatively measuring the similarity between the internal activations of our generated prototypes and natural images. We also demonstrate how the interpretation of generated prototypes yields important insights, highlighting spurious correlations and biases learned by models which quantitative methods over test sets cannot identify.
|
Level Generation for Angry Birds with Sequential VAE and Latent Variable Evolution ; Video game level generation based on machine learning (ML), in particular deep generative models, has attracted attention as a technique to automate level generation. However, applications of existing ML-based level generation are mostly limited to tile-based level representations. When ML techniques are applied to game domains with non-tile-based level representations, such as Angry Birds, where objects in a level are specified by real-valued parameters, ML often fails to generate playable levels. In this study, we develop a deep-generative-model-based level generation method for the game domain of Angry Birds. To overcome the above drawbacks, we propose a sequential encoding of a level and process it as text data, whereas existing approaches employ a tile-based encoding and process it as an image. Experiments show that the proposed level generator drastically improves the stability and diversity of generated levels compared with existing approaches. We apply latent variable evolution with the proposed generator to control features of a generated level, computed through an AI agent's play, while keeping the level stable and natural.
|
Contrastive Multi-document Question Generation ; Multi-document question generation focuses on generating a question that covers the common aspects of multiple documents. Such a model is useful in generating clarifying options. However, a naive model trained only using the targeted positive document set may generate questions that are too generic, covering a larger scope than delineated by the document set. To address this challenge, we introduce a contrastive learning strategy where, given positive and negative sets of documents, we generate a question that is closely related to the positive set but far away from the negative set. This setting allows generated questions to be more specific and related to the target document set. To generate such specific questions, we propose the Multi-Source Coordinated Question Generator (MSCQG), a novel framework that includes a supervised learning (SL) stage and a reinforcement learning (RL) stage. In the SL stage, a single-document question generator is trained. In the RL stage, a coordinator model is trained to find optimal attention weights to align multiple single-document generators, by optimizing a reward designed to promote the specificity of generated questions. We also develop an effective auxiliary objective, named Set-induced Contrastive Regularization (SCR), that improves the coordinator's contrastive learning during the RL stage. We show that our model significantly outperforms several strong baselines, as measured by automatic metrics and human evaluation. The source repository is publicly available at www.github.com/woonsangcho/contrastqgen.
|
Generative Max-Mahalanobis Classifiers for Image Classification, Generation and More ; The Joint Energy-based Model (JEM) of Grathwohl et al. shows that a standard softmax classifier can be reinterpreted as an energy-based model (EBM) for the joint distribution p(x,y); the resulting model can be optimized to improve calibration, robustness, and out-of-distribution detection, while generating samples rivaling the quality of recent GAN-based approaches. However, the softmax classifier that JEM exploits is inherently discriminative, and its latent feature space is not well formulated as a probabilistic distribution, which may hinder its potential for image generation and incur training instability. We hypothesize that generative classifiers, such as Linear Discriminant Analysis (LDA), might be more suitable for image generation since generative classifiers model the data generation process explicitly. This paper therefore investigates an LDA classifier for image classification and generation. In particular, the Max-Mahalanobis Classifier (MMC), a special case of LDA, fits our goal very well. We show that our Generative MMC (GMMC) can be trained discriminatively, generatively, or jointly for image classification and generation. Extensive experiments on multiple datasets show that GMMC achieves state-of-the-art discriminative and generative performance, while outperforming JEM in calibration, adversarial robustness, and out-of-distribution detection by a significant margin. Our source code is available at https://github.com/sndnyang/GMMC.
|
How Important are Good Method Names in Neural Code Generation? A Model Robustness Perspective ; Pre-trained code generation models (PCGMs) have been widely applied in neural code generation; they can generate executable code from functional descriptions in natural language, possibly together with signatures. Despite the substantial performance improvements of PCGMs, the role of method names in neural code generation has not been thoroughly investigated. In this paper, we study and demonstrate the potential of benefiting from method names to enhance the performance of PCGMs, from a model robustness perspective. Specifically, we propose a novel approach, named RADAR (neuRAl coDe generAtor Robustifier). RADAR consists of two components: RADAR-Attack and RADAR-Defense. The former attacks a PCGM by generating adversarial method names as part of the input, which are semantically and visually similar to the original input, but may trick the PCGM into generating completely unrelated code snippets. As a countermeasure to such attacks, RADAR-Defense synthesizes a new method name from the functional description and supplies it to the PCGM. Evaluation results show that RADAR-Attack can reduce the CodeBLEU of generated code by 19.72 to 38.74 in three state-of-the-art PCGMs (i.e., CodeGPT, PLBART, and CodeT5) in the fine-tuning code generation task, and reduce the Pass@1 of generated code by 32.28 to 44.42 in three state-of-the-art PCGMs (i.e., Replit, CodeGen, and CodeT5) in the zero-shot code generation task. Moreover, RADAR-Defense is able to reinstate the performance of PCGMs with synthesized method names. These results highlight the importance of good method names in neural code generation and implicate the benefits of studying model robustness in software engineering.
|
Generating Synthetic Documents for Cross-Encoder Re-Rankers: A Comparative Study of ChatGPT and Human Experts ; We investigate the usefulness of generative Large Language Models (LLMs) in generating training data for cross-encoder re-rankers in a novel direction: generating synthetic documents instead of synthetic queries. We introduce a new dataset, ChatGPT-RetrievalQA, and compare the effectiveness of models fine-tuned on LLM-generated and human-generated data. Data generated with generative LLMs can be used to augment training data, especially in domains with smaller amounts of labeled data. We build ChatGPT-RetrievalQA based on an existing dataset, the Human ChatGPT Comparison Corpus (HC3), consisting of public question collections with human responses and answers from ChatGPT. We fine-tune a range of cross-encoder re-rankers on either human-generated or ChatGPT-generated data. Our evaluation on MS MARCO DEV, TREC DL'19, and TREC DL'20 demonstrates that cross-encoder re-ranking models trained on ChatGPT responses are statistically significantly more effective zero-shot re-rankers than those trained on human responses. In a supervised setting, the human-trained re-rankers outperform the LLM-trained re-rankers. Our novel findings suggest that generative LLMs have high potential in generating training data for neural retrieval models. Further work is needed to determine the effect of factually wrong information in the generated responses and to test our findings' generalizability with open-source LLMs. We release our data, code, and cross-encoder checkpoints for future work.
|
Towards Language-guided Interactive 3D Generation: LLMs as Layout Interpreter with Generative Feedback ; Generating and editing a 3D scene guided by natural language poses a challenge, primarily due to the complexity of specifying positional relations and volumetric changes within the 3D space. Recent advancements in Large Language Models (LLMs) have demonstrated impressive reasoning, conversational, and zero-shot generation abilities across various domains. Surprisingly, these models also show great potential in realizing and interpreting the 3D space. In light of this, we propose a novel language-guided interactive 3D generation system, dubbed LI3D, that integrates LLMs as a 3D layout interpreter into off-the-shelf layout-to-3D generative models, allowing users to flexibly and interactively generate visual content. Specifically, we design a versatile layout structure based on bounding boxes and semantics to prompt the LLMs to model spatial generation and reasoning from language. Our system also incorporates LLaVA, a large language and vision assistant, to provide generative feedback from the visual aspect for improving the visual quality of the generated content. We validate the effectiveness of LI3D, primarily in 3D generation and editing through multi-round interactions, which can be flexibly extended to 2D generation and editing. Various experiments demonstrate the potential benefits of incorporating LLMs in generative AI for applications, e.g., the metaverse. Moreover, we benchmark the layout reasoning performance of LLMs with neural visual artist tasks, revealing their emergent ability in the spatial layout domain.
|
Controllable Lyrics-to-Melody Generation ; Lyrics-to-melody generation is an interesting and challenging topic in the AI music research field. Due to the difficulty of learning the correlations between lyrics and melody, previous methods suffer from low generation quality and lack of controllability. Controllability of generative models enables human interaction with models to generate desired contents, which is especially important in music generation tasks towards human-centered AI that can facilitate musicians in creative activities. To address these issues, we propose a controllable lyrics-to-melody generation network, ConL2M, which is able to generate realistic melodies from lyrics in a user-desired musical style. Our work contains three main novelties: (1) to model the dependencies of music attributes across multiple sequences, inter-branch memory fusion (Memofu) is proposed to enable information flow between the multi-branch stacked LSTM architecture; (2) reference style embedding (RSE) is proposed to improve the quality of generation as well as control the musical style of generated melodies; (3) sequence-level statistical loss (SeqLoss) is proposed to help the model learn sequence-level features of melodies given lyrics. Verified by evaluation metrics for music quality and controllability, this initial study of controllable lyrics-to-melody generation shows better generation quality and the feasibility of interacting with users to generate melodies in desired musical styles when given lyrics.
|
Towards Understanding the Interplay of Generative Artificial Intelligence and the Internet ; The rapid adoption of generative Artificial Intelligence (AI) tools that can generate realistic images or text, such as DALL-E, MidJourney, or ChatGPT, has put the societal impacts of these technologies at the center of public debate. These tools are possible due to the massive amount of data (text and images) that is publicly available through the Internet. At the same time, these generative AI tools become content creators that are already contributing to the data that is available to train future models. Therefore, future versions of generative AI tools will be trained with a mix of human-created and AI-generated content, causing a potential feedback loop between generative AI and public data repositories. This interaction raises many questions: how will future versions of generative AI tools behave when trained on a mixture of real and AI-generated data? Will they evolve and improve with the new data sets, or will they, on the contrary, degrade? Will evolution introduce biases or reduce diversity in subsequent generations of generative AI tools? What are the societal implications of the possible degradation of these models? Can we mitigate the effects of this feedback loop? In this document, we explore the effect of this interaction and report some initial results using simple diffusion models trained with various image datasets. Our results show that the quality and diversity of the generated images can degrade over time, suggesting that incorporating AI-created data can have undesired effects on future versions of generative models.
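The flavor of this feedback loop can be sketched with a toy model far simpler than a diffusion model: refit a categorical "generator" on its own samples each generation and watch modes disappear. Everything below is an illustrative stand-in for the paper's experiments, not a reproduction of them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each generation is "trained" (refit by empirical frequencies) on
# samples from the previous generation. Modes that draw zero samples
# vanish permanently, so diversity can only decay.
k, n = 50, 100          # number of modes, training-set size per round
probs = np.full(k, 1 / k)

for generation in range(20):
    data = rng.choice(k, size=n, p=probs)     # sample from current model
    counts = np.bincount(data, minlength=k)
    probs = counts / n                        # refit on own output
    print(f"gen {generation:2d}: surviving modes = {(probs > 0).sum()}")
# Mixing real data back into each round's training set (as the paper
# discusses) slows or stops this collapse.
```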
|
GenCO: Generating Diverse Solutions to Design Problems with Combinatorial Nature ; Generating diverse objects (e.g., images) using generative models such as GAN or VAE has achieved impressive results in recent years, helping to solve many design problems that are traditionally done by humans. Going beyond image generation, we aim to find solutions to more general design problems, in which both the diversity of the design and conformity to constraints are important. Such a setting has applications in computer graphics, animation, industrial design, material science, etc., in which we may want the output of the generator to follow discrete combinatorial constraints and penalize any deviation, which is non-trivial with existing generative models and optimization solvers. To address this, we propose GenCO, a novel framework that conducts end-to-end training of deep generative models integrated with embedded combinatorial solvers, aiming to uncover high-quality solutions aligned with nonlinear objectives. While structurally akin to conventional generative models, GenCO diverges in its role: it focuses on generating instances of combinatorial optimization problems rather than final objects (e.g., images). This shift allows finer control over the generated outputs, enabling assessments of their feasibility and introducing an additional combinatorial loss component. We demonstrate the effectiveness of our approach on a variety of generative tasks characterized by combinatorial intricacies, including game level generation and map creation for path planning, consistently demonstrating its capability to yield diverse, high-quality solutions that reliably adhere to user-specified combinatorial properties.
|
Compositional Visual Generation with Composable Diffusion Models ; Large text-guided diffusion models, such as DALL-E 2, are able to generate stunning photorealistic images given natural language descriptions. While such models are highly flexible, they struggle to understand the composition of certain concepts, such as confusing the attributes of different objects or the relations between objects. In this paper, we propose an alternative structured approach for compositional generation using diffusion models. An image is generated by composing a set of diffusion models, with each of them modeling a certain component of the image. To do this, we interpret diffusion models as energy-based models in which the data distributions defined by the energy functions may be explicitly combined. The proposed method can generate scenes at test time that are substantially more complex than those seen in training, composing sentence descriptions, object relations, human facial attributes, and even generalizing to new combinations that are rarely seen in the real world. We further illustrate how our approach may be used to compose pre-trained text-guided diffusion models and generate photorealistic images containing all the details described in the input descriptions, including the binding of certain object attributes that have been shown to be difficult for DALL-E 2. These results point to the effectiveness of the proposed method in promoting structured generalization for visual generation. Project page: https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models
|
Improving Neural Machine Translation with Conditional Sequence Generative Adversarial Nets ; This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises two adversarial sub-models, a generator and a discriminator. The generator aims to generate sentences which are hard to discriminate from human-translated sentences (i.e., the golden target sentences), and the discriminator makes efforts to discriminate the machine-generated sentences from human-translated ones. The two sub-models play a minimax game and achieve a win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feed the evaluations back to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-of-the-art Transformer on English-German and Chinese-English translation tasks.
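A minimal sketch of the mixed reward driving the generator update, with a stub discriminator and NLTK's sentence-level BLEU; the mixing weight and stub value are illustrative assumptions, not the paper's settings.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def discriminator_score(sentence):
    # Stand-in for the trained discriminator D(y): probability that
    # the sentence is human-translated.
    return 0.42

def generator_reward(candidate, reference, lam=0.7):
    # Combine the dynamic discriminator signal with static BLEU.
    smooth = SmoothingFunction().method1
    bleu = sentence_bleu([reference], candidate,
                         smoothing_function=smooth)
    return lam * discriminator_score(candidate) + (1 - lam) * bleu

cand = "the cat sits on the mat".split()
ref = "the cat sat on the mat".split()
print(f"mixed reward: {generator_reward(cand, ref):.3f}")
# In training, the generator would be updated with REINFORCE,
# i.e., loss = -reward * log p(y | x).
```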
|
On the Statistical Settings of Generation and Load in Synthetic Grid Modeling ; This paper investigates the problem of generation and load settings in synthetic power grid modeling of high-voltage transmission networks, considering both electrical parameters and topology measures. Our previous study indicated that the relative locations of generation and load buses in a realistic grid are not random but correlated, and an entropy-based optimization approach was proposed to determine a set of correlated sitings for generation and load buses in a synthetic grid model. Using the exponential distribution of individual generation capacities or load settings in a grid, and the non-trivial correlation between the generation capacity or load setting and the nodal degree of a generation or load bus, we develop an approach to generate a statistically correct random set of generation capacities and load settings, and then assign them to each generation or load bus in a grid.
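A minimal sketch of the assignment idea, with an assumed degree sequence standing in for a real grid topology: draw exponentially distributed capacities and rank-match them to bus degrees so that high-degree buses receive large capacities, inducing the stated correlation. The scale parameter and degree range are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_gen = 40
degrees = rng.integers(1, 8, size=n_gen)      # nodal degree per gen bus
capacities = rng.exponential(scale=200.0, size=n_gen)   # capacities (MW)

# Rank-match: assign the sorted capacities along the degree ordering.
order = np.argsort(degrees)
assigned = np.empty(n_gen)
assigned[order] = np.sort(capacities)

corr = np.corrcoef(degrees, assigned)[0, 1]
print(f"degree-capacity correlation: {corr:.2f}")
```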
|
Inferring Semantic Layout for Hierarchical Text-to-Image Synthesis ; We propose a novel hierarchical approach for text-to-image synthesis by inferring semantic layout. Instead of learning a direct mapping from text to image, our algorithm decomposes the generation process into multiple steps, in which it first constructs a semantic layout from the text with a layout generator and converts the layout to an image with an image generator. The proposed layout generator progressively constructs a semantic layout in a coarse-to-fine manner by generating object bounding boxes and refining each box by estimating object shapes inside it. The image generator synthesizes an image conditioned on the inferred semantic layout, which provides a useful semantic structure of an image matching the text description. Our model not only generates semantically more meaningful images, but also allows automatic annotation of generated images and a user-controlled generation process by modifying the generated scene layout. We demonstrate the capability of the proposed model on the challenging MS-COCO dataset and show that the model can substantially improve the image quality, interpretability of output, and semantic alignment to input text over existing approaches.
|
Conditional Hybrid GAN for Sequence Generation ; Conditional sequence generation aims to instruct the generation procedure by conditioning the model on additional context information, which is a self-supervised learning problem (a form of unsupervised learning with supervision information from the data itself). Unfortunately, the current state-of-the-art generative models have limitations in sequence generation with multiple attributes. In this paper, we propose a novel conditional hybrid GAN (CHybridGAN) to solve this issue. Discrete sequences with triplet attributes are generated separately when conditioned on the same context. Most importantly, a relational reasoning technique is exploited to model not only the dependency inside each sequence of an attribute during the training of the generator but also the consistency among the sequences of attributes during the training of the discriminator. To avoid the non-differentiability problem in GANs encountered during discrete data generation, we exploit the Gumbel-Softmax technique to approximate the distribution of discrete-valued sequences. Through evaluating the task of generating melody (associated with note, duration, and rest) from lyrics, we demonstrate that the proposed CHybridGAN outperforms the existing methods in context-conditioned discrete-valued sequence generation.
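The Gumbel-Softmax trick mentioned above can be sketched in a few lines; the logits and temperature here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Differentiable relaxation of sampling from a categorical
# distribution over discrete tokens.
def gumbel_softmax(logits, tau=0.5):
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max())        # numerically stable softmax
    return y / y.sum()

logits = np.array([2.0, 0.5, 0.1, -1.0])   # unnormalized token scores
sample = gumbel_softmax(logits, tau=0.5)
print("relaxed one-hot sample:", np.round(sample, 3))
# As tau -> 0 the output approaches a one-hot sample, while gradients
# can still flow through the softmax during GAN training.
```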
|
Generative Image Modeling using Style and Structure Adversarial Networks ; Current generative frameworks use end-to-end learning and generate images by sampling from a uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are a product of (a) Structure: the underlying 3D model; and (b) Style: the texture mapped onto the structure. In this paper, we factorize the image generation process and propose the Style and Structure Generative Adversarial Network (S2-GAN). Our S2-GAN has two components: the Structure-GAN generates a surface normal map, and the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with surface normals computed from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our S2-GAN model is interpretable, generates more realistic images, and can be used to learn unsupervised RGBD representations.
|
Sample-Efficient Generation of Novel Photoacid Generator Molecules using a Deep Generative Model ; Photoacid generators (PAGs) are compounds that release acids (H+ ions) when exposed to light. These compounds are critical components of the photolithography processes that are used in the manufacture of semiconductor logic and memory chips. The exponential increase in the demand for semiconductors has highlighted the need for discovering novel photoacid generators. While de novo molecule design using deep generative models has been widely employed for drug discovery and material design, its application to the creation of novel photoacid generators poses several unique challenges, such as a lack of property labels. In this paper, we highlight these challenges and propose a generative modeling approach that utilizes conditional generation from a pretrained deep autoencoder and expert-in-the-loop techniques. The validity of the proposed approach was evaluated with the help of subject matter experts, indicating the promise of such an approach for applications beyond the creation of novel photoacid generators.
|
Improving Logical-Level Natural Language Generation with Topic-Conditioned Data Augmentation and Logical Form Generation ; Logical natural language generation, i.e., generating textual descriptions that can be logically entailed by a structured table, has been a challenge due to the low fidelity of the generation. Chen et al. (2020) have addressed this problem by annotating interim logical programs to control the generation contents and semantics, and presented the task of table-aware logical form to text (Logic2text) generation. However, although table instances are abundant in the real world, logical forms paired with textual descriptions require costly human annotation work, which limits the performance of neural models. To mitigate this, we propose topic-conditioned data augmentation (TopicDA), which utilizes GPT-2 to generate unpaired logical forms and textual descriptions directly from tables. We further introduce logical form generation (LG), a dual task of Logic2text that requires generating a valid logical form based on a text description of a table. We also propose a semi-supervised learning approach to jointly train a Logic2text and an LG model with both labeled and augmented data. The two models benefit from each other by providing extra supervision signals through back-translation. Experimental results on the Logic2text dataset and the LG task demonstrate that our approach can effectively utilize the augmented data and outperforms supervised baselines by a substantial margin.
|
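A schematic sketch of the dual-task back-translation loop the abstract describes. The two "models" are stand-in callables (in practice seq2seq networks), and the function names are assumptions for illustration only.

```python
def logic2text(logical_form):        # Logic2text model: form -> description
    return f"text({logical_form})"   # placeholder generation

def text2logic(description):         # LG model: description -> logical form
    return f"form({description})"    # placeholder generation

labeled = [("eq(max(points), 12)", "the highest score is 12")]
augmented_forms = ["eq(min(age), 7)"]   # unpaired forms produced by GPT-2

# Supervised step: train both directions on labeled pairs
# (loss computation and parameter updates omitted in this sketch).
for form, text in labeled:
    pred_text, pred_form = logic2text(form), text2logic(text)

# Back-translation step: each model labels the other's unpaired inputs,
# providing the extra supervision signals mentioned above.
for form in augmented_forms:
    pseudo_text = logic2text(form)        # Logic2text labels the form
    recovered = text2logic(pseudo_text)   # LG learns to map it back
    print(form, "->", pseudo_text, "->", recovered)
```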
DU-VLG: Unifying Vision-and-Language Generation via Dual Sequence-to-Sequence Pre-training ; Due to the limitations of the model structure and pre-training objectives, existing vision-and-language generation models cannot utilize paired images and text through bidirectional generation. In this paper, we propose DU-VLG, a framework which unifies vision-and-language generation as sequence generation problems. DU-VLG is trained with novel dual pre-training tasks: multi-modal denoising autoencoder tasks and modality translation tasks. To bridge the gap between image understanding and generation, we further design a novel commitment loss. We compare pre-training objectives on image captioning and text-to-image generation datasets. Results show that DU-VLG yields better performance than variants trained with uni-directional generation objectives or the variant without the commitment loss. We also obtain higher scores compared to previous state-of-the-art systems on three vision-and-language generation tasks. In addition, human judges further confirm that our model generates real and relevant images as well as faithful and informative captions.
|
Detecting ChatGPT: A Survey of the State of Detecting ChatGPT-Generated Text ; While recent advancements in the capabilities and widespread accessibility of generative language models, such as ChatGPT (OpenAI, 2022), have brought about various benefits by generating fluent human-like text, the task of distinguishing between human- and large language model (LLM) generated text has emerged as a crucial problem. These models can potentially deceive by generating artificial text that appears to be human-generated. This issue is particularly significant in domains such as law, education, and science, where ensuring the integrity of text is of the utmost importance. This survey provides an overview of the current approaches employed to differentiate between texts generated by humans and ChatGPT. We present an account of the different datasets constructed for detecting ChatGPT-generated text, the various methods utilized, what qualitative analyses into the characteristics of human versus ChatGPT-generated text have been performed, and finally, we summarize our findings into general insights.
|
Looking through the past: better knowledge retention for generative replay in continual learning ; In this work, we improve generative replay in a continual learning setting to perform well on challenging scenarios. Current generative rehearsal methods are usually benchmarked on small and simple datasets as they are not powerful enough to generate more complex data with a greater number of classes. We notice that in VAE-based generative replay, this could be attributed to the fact that the generated features are far from the original ones when mapped to the latent space. Therefore, we propose three modifications that allow the model to learn and generate complex data. More specifically, we incorporate distillation in the latent space between the current and previous models to reduce feature drift. Additionally, latent matching for the reconstructed and original data is proposed to improve the alignment of generated features. Further, based on the observation that the reconstructions are better at preserving knowledge, we add the cycling of generations through the previously trained model to make them closer to the original data. Our method outperforms other generative replay methods in various scenarios. Code is available at https://github.com/valeriyakhan/looking-through-the-past.
|
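A minimal sketch of the latent-space distillation idea from the abstract above: the current encoder is regularized toward the frozen previous-task encoder's latents to reduce feature drift. Encoder shapes and the single-term loss are illustrative assumptions; a full VAE objective would add reconstruction and KL terms.

```python
import torch
import torch.nn as nn

encoder_prev = nn.Linear(784, 32).eval()   # frozen copy from the previous task
encoder_curr = nn.Linear(784, 32)          # encoder being trained
for p in encoder_prev.parameters():
    p.requires_grad_(False)

x = torch.randn(16, 784)                   # batch of replayed / current data
z_prev = encoder_prev(x)                   # reference latents
z_curr = encoder_curr(x)

# feature-drift penalty: keep current latents close to previous ones
distill_loss = nn.functional.mse_loss(z_curr, z_prev)
total_loss = distill_loss                  # + reconstruction and KL in a full VAE
total_loss.backward()
```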
Notes on certain other (0,2) correlation functions ; In this paper we shall describe some correlation function computations in perturbative heterotic strings that generalize B model computations. On the (2,2) locus, correlation functions in the B model receive no quantum corrections, but off the (2,2) locus, that can change. Classically, the (0,2) analogue of the B model is equivalent to the previously-discussed (0,2) analogue of the A model, but with the gauge bundle dualized; thus our generalization of the A model also simultaneously generalizes the B model. The A and B analogues sometimes have different regularizations, however, which distinguish them quantum-mechanically. We discuss how properties of the (2,2) B model, such as the lack of quantum corrections, are realized in (0,2) A model language. In an appendix, we also extensively discuss how the Calabi-Yau condition for the closed string B model (uncoupled to topological gravity) can be weakened slightly, a detail which does not seem to have been covered in the literature previously. That weakening also manifests in the description of the (2,2) B model as a (0,2) A model.
|
Complete Non-diagonal Reflection Matrices of RSOS/SOS and Hard Hexagon Models ; In this paper we compute the most general non-diagonal reflection matrices of the RSOS/SOS models and the hard hexagon model using the boundary Yang-Baxter equations. We find a new one-parameter family of reflection matrices for the RSOS model in addition to the previous result without any parameter. We also find three classes of reflection matrices for the SOS model, which have one or two parameters. For the hard hexagon model, which can be mapped to the RSOS(5) model by folding four RSOS heights into two, the solutions can be obtained similarly, with a main difference in the boundary unitarity conditions. Due to this, the reflection matrices can have two free parameters. We show that these extra terms can be identified with the 'decorated' solutions. We also generalize the hard hexagon model by 'folding' the RSOS heights of the general RSOS(p) model and show that they satisfy the integrability conditions such as the Yang-Baxter and boundary Yang-Baxter equations. These models can be solved using the results for the RSOS models.
|
Comparative model accuracy of a data-fitted generalized Aw-Rascle-Zhang model ; The Aw-Rascle-Zhang (ARZ) model can be interpreted as a generalization of the Lighthill-Whitham-Richards (LWR) model, possessing a family of fundamental diagram curves, each of which represents a class of drivers with a different empty road velocity. A weakness of this approach is that different drivers possess vastly different densities at which traffic flow stagnates. This drawback can be overcome by modifying the pressure relation in the ARZ model, leading to the generalized Aw-Rascle-Zhang (GARZ) model. We present an approach to determine the parameter functions of the GARZ model from fundamental diagram measurement data. The predictive accuracy of the resulting data-fitted GARZ model is compared to other traffic models by means of a three-detector test setup, employing two types of data: vehicle trajectory data and sensor data. This work also considers the extension of the ARZ and GARZ models to models with a relaxation term, and conducts an investigation of the optimal relaxation time.
|
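A small sketch of the data-fitting step described above: each driver class in the (G)ARZ setting carries its own fundamental-diagram curve indexed by an empty-road velocity. Here a toy Greenshields-type family Q(rho) = v0 * rho * (1 - rho/rho_max) is fitted to synthetic flow-density measurements; this functional form and all parameter values are illustrative stand-ins for the paper's calibrated curves.

```python
import numpy as np
from scipy.optimize import curve_fit

rho_max = 120.0                      # assumed jam density (vehicles/km)

def flow(rho, v0):
    """Toy fundamental diagram: flow as a function of density."""
    return v0 * rho * (1.0 - rho / rho_max)

rng = np.random.default_rng(0)
rho = rng.uniform(5, 110, size=200)                # measured densities
q = flow(rho, v0=95.0) + rng.normal(0, 40, 200)    # noisy flows (veh/h)

(v0_hat,), _ = curve_fit(flow, rho, q, p0=[80.0])  # least-squares fit
print(f"fitted empty-road velocity: {v0_hat:.1f} km/h")
```

In the actual GARZ calibration, whole parameter functions (not a single scalar) are fitted to the measured fundamental diagram cloud; the sketch only shows the shape of the procedure.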
A Note on How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It ; King and Roberts (2015; KR) claim that a disagreement between robust and classical standard errors exposes model misspecification. We emphasize that KR's claim only generally applies to parametric models, i.e., models that assume a restrictive form of the distribution of the outcome. Many common models in use in political science, including the linear model, are not necessarily parametric; rather, they may be semiparametric. Common estimators of model parameters, such as ordinary least squares, have both robust (corresponding to a semiparametric model) and classical (corresponding to a more restrictive model) standard error estimates. Given a properly specified semiparametric model and mild regularity conditions, the classical standard errors are not generally consistent, but the robust standard errors are. To illustrate this point, we consider the case of the regression estimate of a semiparametric linear model with no model misspecification, and show that robust standard errors may nevertheless systematically differ from classical standard errors. We show that a disagreement between robust and classical standard errors is not generally suitable as a diagnostic for regression estimators, and that KR's reanalyses of Neumayer (2003) and Büthe and Milner (2008) are predicated on strong assumptions that the original authors neither invoked nor required.
|
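A short numerical illustration of the point above: for the very same OLS fit, classical and heteroskedasticity-robust standard errors can differ systematically even though the linear (semiparametric) model is correctly specified, once the error variance depends on the regressor. The data are synthetic; statsmodels is used for the estimation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 5, 1000)
# correctly specified mean, but error variance grows with x
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 + x, 1000)

X = sm.add_constant(x)
model = sm.OLS(y, X)
classical = model.fit()                 # assumes homoskedastic errors
robust = model.fit(cov_type="HC1")      # heteroskedasticity-robust
print("classical SE:", classical.bse)
print("robust    SE:", robust.bse)      # systematically larger here
```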
Bivariate Covariance Functions of Pólya Type ; We provide sufficient conditions of Pólya type which guarantee the positive definiteness of a 2×2 matrix-valued function in ℝ and ℝ³. Several bivariate covariance models have been proposed in the literature, where all components of the covariance matrix are of the same parametric family, such as the bivariate Matérn model. Based on the Pólya-type conditions, we introduce two novel bivariate parametric covariance models of this class, the powered exponential (or stable) covariance model and the generalized Cauchy covariance model. Both models allow for flexible smoothness, variance, scale, and cross-correlation parameters. The smoothness parameters are in (0, 1]. Additionally, the bivariate generalized Cauchy model allows for distinct long-range parameters. We also show that the univariate spherical model can be generalized to the bivariate case within the above class only in a trivial way. In a data example on the content of copper and zinc in the top soil of the Swiss Jura, we compare the bivariate powered exponential model to the traditional linear model of coregionalization and the bivariate Matérn model.
|
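A numerical sketch of the bivariate powered-exponential model named above, with entries C_ij(h) = sigma_i * sigma_j * R_ij * exp(-(|h|/a_ij)^alpha_ij). The parameter values are illustrative; whether a combination yields a valid model is exactly what the paper's Pólya-type conditions decide, and the code checks one combination empirically.

```python
import numpy as np

sigma = np.array([1.0, 0.8])                  # marginal standard deviations
R = np.array([[1.0, 0.4], [0.4, 1.0]])        # collocated cross-correlation
a = np.array([[2.0, 2.5], [2.5, 3.0]])        # scale parameters a_ij
alpha = np.array([[0.8, 0.7], [0.7, 0.9]])    # smoothness parameters in (0, 1]

def C(h):
    """2x2 covariance matrix at scalar lag h."""
    return np.outer(sigma, sigma) * R * np.exp(-(abs(h) / a) ** alpha)

# Empirical validity check: on n sites in R, the 2n x 2n block covariance
# matrix must be positive semidefinite (nonnegative eigenvalues, up to
# numerical error) for the parameters to define a valid bivariate model.
sites = np.linspace(0.0, 10.0, 30)
n = len(sites)
big = np.zeros((2 * n, 2 * n))
for i, s in enumerate(sites):
    for j, t in enumerate(sites):
        big[2 * i:2 * i + 2, 2 * j:2 * j + 2] = C(s - t)
print("min eigenvalue:", np.linalg.eigvalsh(big).min())
```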
Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design ; Flow-based generative models are powerful exact likelihood models with efficient sampling and inference. Despite their computational efficiency, flow-based models generally have much worse density modeling performance compared to state-of-the-art autoregressive models. In this paper, we investigate and improve upon three limiting design choices employed by flow-based models in prior work: the use of uniform noise for dequantization, the use of inexpressive affine flows, and the use of purely convolutional conditioning networks in coupling layers. Based on our findings, we propose Flow++, a new flow-based model that is now the state-of-the-art non-autoregressive model for unconditional density estimation on standard image benchmarks. Our work has begun to close the significant performance gap that has so far existed between autoregressive models and flow-based models. Our implementation is available at https://github.com/aravindsrinivas/flowpp.
|
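A compact sketch contrasting the two dequantization schemes discussed above. Uniform dequantization adds u ~ U[0,1) to integer pixels with log q(u|x) = 0; variational dequantization instead draws u from a learned conditional density and subtracts log q(u|x) from the objective. The toy factorized Gaussian q used here is an illustrative stand-in for the conditional flow the paper uses.

```python
import math
import torch

x = torch.randint(0, 256, (4, 1, 8, 8)).float()    # integer-valued pixels

# (a) uniform dequantization: u ~ U[0,1), contributes log q(u|x) = 0
x_uniform = (x + torch.rand_like(x)) / 256.0

# (b) variational dequantization: u = sigmoid(v) with v ~ q(v|x); here q
# is a toy zero-mean Gaussian with learnable log-scale (a real model uses
# a conditional flow). log q(u|x) enters the training objective.
log_sigma = torch.zeros_like(x).requires_grad_()
eps = torch.randn_like(x)
v = eps * log_sigma.exp()
u = torch.sigmoid(v)
# log q(u|x) by change of variables through the sigmoid (du/dv = u(1-u)):
log_q = (-0.5 * eps.pow(2) - log_sigma - 0.5 * math.log(2 * math.pi)
         - torch.log(u) - torch.log(1 - u)).sum()
x_var = (x + u) / 256.0
# per-batch objective: log p_model(x_var) - log_q; the flow density term
# p_model is omitted in this sketch.
```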
A Unified Stochastic Hybrid System Approach to Aggregate Modeling of Responsive Loads ; Aggregate load modeling is of fundamental importance for the systematic analysis and design of various demand response strategies. Instead of keeping track of the trajectories of individual loads, the aggregate modeling problem focuses on characterizing the density evolution of the load population. Most existing models are only applicable to Thermostatically Controlled Loads (TCLs) with first-order linear dynamics. This paper develops a unified aggregate modeling approach that can be used for general TCLs as well as deferrable loads. We propose a deterministic hybrid system model to describe individual load dynamics under demand response rules, and develop a general stochastic hybrid system (SHS) model to capture the population dynamics. We also derive a set of partial differential equations (PDEs) that governs the probability density evolution of the SHS. Our results cannot be obtained using the existing SHS tools in the literature, as the proposed SHS model involves both random and deterministic switchings with general switching surfaces in multi-dimensional domains. The derived PDE model includes many existing aggregate load modeling results as special cases and can be used in many other realistic modeling scenarios that have not been studied in the literature.
|
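A small Monte Carlo sketch of the individual-load dynamics whose population density the SHS/PDE machinery above summarizes: first-order cooling TCLs switching ON/OFF at a thermostat deadband. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 5000, 10.0, 360          # loads, step (s), one hour
theta = rng.uniform(19.5, 20.5, N)      # initial temperatures (deg C)
on = rng.random(N) < 0.5                # initial compressor states
a, theta_air, gain = 1 / 3600.0, 32.0, 14.0   # thermal parameters
lo, hi = 19.5, 20.5                     # thermostat deadband

for _ in range(steps):
    # first-order TCL: relax toward ambient, minus cooling when ON
    theta += dt * a * (theta_air - theta - gain * on)
    # deterministic switching at the deadband boundaries
    on = np.where(theta > hi, True, np.where(theta < lo, False, on))

print("fraction ON:", on.mean())        # proxy for aggregate power
```

The aggregate model in the paper tracks the probability density of (theta, on) directly via PDEs rather than simulating every load as done here.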
Affine forward variance models ; We introduce the class of affine forward variance (AFV) models, of which both the conventional Heston model and the rough Heston model are special cases. We show that AFV models can be characterized by the affine form of their cumulant generating function, which can be obtained as the solution of a convolution Riccati equation. We further introduce the class of affine forward order flow intensity (AFI) models, which are structurally similar to AFV models, but driven by jump processes, and which include Hawkes-type models. We show that the cumulant generating function of an AFI model satisfies a generalized convolution Riccati equation and that a high-frequency limit of AFI models converges in distribution to the AFV model.
|
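A numerical sketch of the convolution Riccati equation that, per the abstract, characterizes the cumulant generating function of an AFV model, specialized to the rough-Heston kernel K(t) = t^(alpha-1)/Gamma(alpha). An explicit Euler discretization of psi(t) = int_0^t K(t-s) F(u, psi(s)) ds is used with a real argument u and a flat forward variance curve; all parameter values and the crude scheme are illustrative assumptions.

```python
import numpy as np
from math import gamma

alpha, lam, nu, rho = 0.6, 1.0, 0.3, -0.7   # rough-Heston-style parameters
u = 0.5                                     # real argument of the CGF

def F(psi):
    """Riccati right-hand side F(u, psi)."""
    return 0.5 * (u * u - u) + (rho * nu * u - lam) * psi + 0.5 * nu**2 * psi**2

T, n = 1.0, 1000
dt = T / n
t = np.linspace(0, T, n + 1)
K = t[1:] ** (alpha - 1) / gamma(alpha)     # kernel on the grid (t=0 excluded)

psi = np.zeros(n + 1)
for k in range(1, n + 1):
    # psi(t_k) ~ sum_{j<k} K(t_k - t_j) F(psi(t_j)) dt
    psi[k] = np.sum(K[k - 1::-1] * F(psi[:k])) * dt

xi0 = 0.04                                  # flat forward variance curve
cgf = xi0 * np.sum(F(psi[:-1])) * dt        # log E[exp(u X_T)] approximation
print("psi(T) =", psi[-1], " CGF ~", cgf)
```

With an exponential kernel instead of the fractional one, the same equation collapses to the classical Heston Riccati ODE, matching the special cases named in the abstract.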
Dissipative models of swell propagation across the Pacific ; Ocean swell plays an important role in the transport of energy across the ocean, yet its evolution is still not well understood. In the late 1960s, the nonlinear Schrödinger (NLS) equation was derived as a model for the propagation of ocean swell over large distances. More recently, a number of dissipative generalizations of the NLS equation based on a simple dissipation assumption have been proposed. These models have been shown to accurately model wave evolution in the laboratory setting, but their validity in modeling ocean swell has not previously been examined. We study the efficacy of the NLS equation and four of its generalizations in modeling the evolution of swell in the ocean. The dissipative generalizations perform significantly better than conservative models and are overall reasonable models for swell amplitudes, indicating dissipation is an important physical effect in ocean swell evolution. The nonlinear models did not outperform their linearizations, indicating linear models may be sufficient in modeling ocean swell evolution.
|
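A minimal split-step Fourier sketch of a dissipative NLS-type model of the kind compared above, i u_t + u_xx + 2|u|^2 u = -i delta u, where the right-hand side is the simple linear dissipation. The coefficients, sign conventions, and initial condition are illustrative, not those of any specific model in the study.

```python
import numpy as np

n, L, delta = 256, 50.0, 0.01
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
u = np.exp(-x**2).astype(complex)        # initial wave envelope
dt, steps = 1e-3, 2000

for _ in range(steps):
    u = np.fft.ifft(np.exp(-1j * k**2 * dt) * np.fft.fft(u))  # dispersion
    u *= np.exp(2j * np.abs(u)**2 * dt)                       # nonlinearity
    u *= np.exp(-delta * dt)                                  # dissipation

energy = np.sum(np.abs(u)**2) * (L / n)      # L2 norm of the envelope
print("energy fraction retained:", energy / np.sqrt(np.pi / 2))
```

Setting delta = 0 recovers the conservative NLS equation, so the same solver can be used to compare the conservative and dissipative variants as the study does.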
Control-Oriented Modeling of Pipe Flow in Gas Processing Facilities ; Pipe flow models are developed with a focus on their eventual use for feedback control design at the process control level, as opposed to the unit level, in gas processing facilities. Accordingly, linearized facility-scale models are generated to describe pressures, mass flows, and temperatures based on sets of nonlinear partial differential equations from fluid dynamics and thermodynamics, together with constraints associated with their interconnection. As part of the treatment, the divergence of these simplified models from the physics is assessed, since robustness to these errors will be an objective for the eventual control system. The approach commences with a thorough analysis of pipe flow models and then proceeds to study their automated interconnection into network models, which subsume the algebraic constraints of bond graph or standard fluid modeling. The models are validated and their errors quantified by referring them to operational data from a commercial gas compressor test facility. For linear time-invariant models, the interconnection method to generate network models is shown to coincide with automation of Mason's Gain Formula. These pipe network models based on engineering data are the first part of the development of general facility process control tools.
|
Push-pull direct modeling of solid CAD models ; Direct modeling is a very recent CAD modeling paradigm featuring direct, intuitive push-pull interactions with the geometry of the model to greatly increase model editing flexibility. The major issue for push-pull direct modeling is the possible inconsistency between the altered geometry of the model and its unchanged topology. The challenge of resolving this geometry-topology inconsistency lies in ensuring that the resulting model remains a valid solid model and that the model shape follows a continuous change pattern. Although push-pull direct modeling has been implemented in several CAD software packages, robustness towards generating valid modeling results and continuous shape changes still remains an open issue. This paper proposes a systematic method to handle the resolution of any possible inconsistent situation. The method formulates the continuous shape change requirement as successive Boolean operations on the model volume, thereby guaranteeing valid solid models and continuous shape changes. Further, this formulation allows for an easy implementation of push-pull direct modeling using existing CAD research and engineering results. In order to show the effectiveness of the proposed method, a software prototype is developed and the modeling results are compared with those of five leading CAD software packages.
|
Shot Noise Neuron Model ; In this paper, we propose a shot-noise-based leaky integrate-and-fire neuron model and provide a detailed analysis of the performance of this model compared to the traditional diffusion-approximated model. In theoretical neuroscience, there are three general classes of neuron models. The compartmental neuron model is a conductance-based model that views the biological neuron as a large circuit; the problem with this model comes from its structural complexity and the number of its free parameters. The leaky integrate-and-fire model is a more flexible model due to the special design called threshold-resetting, in which the voltage of the neuron is reset after reaching the threshold; this model was proposed as an alternative to the compartmental model to provide a more biologically realistic model that can capture the spike-timing behaviors observed in experiments, and it is also more computationally efficient since it has far fewer free parameters. The firing-rate-based neuron model uses the firing rate of the neuron as the coupling term in coupled neuronal network models; models of this kind can be easily analyzed at the network level, and theoretical analysis for this model proves the existence of a phase transition and shows that the computational capacity is boosted in the chaotic regime.
|
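A small simulation sketch of a shot-noise leaky integrate-and-fire neuron as described above: the membrane voltage decays with time constant tau, jumps by a fixed weight at Poisson-distributed input times, and fires with threshold-resetting. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 1e-4, 5.0                  # time step and duration (s)
tau, v_th, v_reset = 0.02, 1.0, 0.0
rate_in, w = 800.0, 0.1            # input spike rate (Hz) and jump size

v, spikes = 0.0, []
for i in range(int(T / dt)):
    n_in = rng.poisson(rate_in * dt)   # shot-noise input arrivals this bin
    v += -v / tau * dt + w * n_in      # leak plus discrete jumps
    if v >= v_th:
        spikes.append(i * dt)
        v = v_reset                    # threshold-resetting

print(f"output rate: {len(spikes) / T:.1f} Hz")
```

The diffusion-approximated model replaces the discrete jumps with Gaussian white noise of matched mean and variance; the abstract's comparison is between these two input descriptions.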
Temporal Difference Models: Model-Free Deep RL for Model-Based Control ; Model-free reinforcement learning (RL) is a powerful, general tool for learning complex behaviors. However, its sample efficiency is often impractically large for solving challenging real-world problems, even with off-policy algorithms such as Q-learning. A limiting factor in classic model-free RL is that the learning signal consists only of scalar rewards, ignoring much of the rich information contained in state transition tuples. Model-based RL uses this information, by training a predictive model, but often does not achieve the same asymptotic performance as model-free RL due to model bias. We introduce temporal difference models (TDMs), a family of goal-conditioned value functions that can be trained with model-free learning and used for model-based control. TDMs combine the benefits of model-free and model-based RL: they leverage the rich information in state transitions to learn very efficiently, while still attaining asymptotic performance that exceeds that of direct model-based RL methods. Our experimental results show that, on a range of continuous control tasks, TDMs provide a substantial improvement in efficiency compared to state-of-the-art model-based and model-free methods.
|
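A compact sketch of the TDM idea from the abstract above: a goal- and horizon-conditioned Q-function Q(s, a, g, tau), trained model-free with a terminal distance reward when the horizon tau hits zero and bootstrapped with tau - 1 otherwise. Network sizes, shapes, and the relabeling scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

s_dim, a_dim = 4, 2
q = nn.Sequential(nn.Linear(s_dim + a_dim + s_dim + 1, 64), nn.ReLU(),
                  nn.Linear(64, 1))

def tdm_target(s_next, g, tau, policy_action):
    """TDM Bellman target: -||s' - g|| at tau = 0, else bootstrap at tau - 1."""
    terminal = -torch.norm(s_next - g, dim=-1, keepdim=True)
    boot = q(torch.cat([s_next, policy_action, g, tau - 1], dim=-1))
    return torch.where(tau == 0, terminal, boot)

s = torch.randn(32, s_dim)
a = torch.randn(32, a_dim)
s_next = torch.randn(32, s_dim)
g = torch.randn(32, s_dim)                   # relabeled goals
tau = torch.randint(0, 5, (32, 1)).float()   # relabeled horizons

target = tdm_target(s_next, g, tau, torch.randn(32, a_dim)).detach()
pred = q(torch.cat([s, a, g, tau], dim=-1))
loss = nn.functional.mse_loss(pred, target)  # standard TD regression
loss.backward()
```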
Dynamic Fusion: Attentional Language Model for Neural Machine Translation ; Neural Machine Translation (NMT) can be used to generate fluent output. As such, language models have been investigated for incorporation with NMT. In prior investigations, two models have been used: a translation model and a language model. The translation model's predictions are weighted by the language model with a hand-crafted ratio fixed in advance. However, these approaches fail to adapt the language model weighting with regard to the translation history. In another line of approach, language model prediction is incorporated into the translation model by jointly considering source and target information. However, this line of approach is limited because it largely ignores the adequacy of the translation output. Accordingly, this work employs two mechanisms, the translation model and the language model, with an attentive architecture attaching the language model as an auxiliary element of the translation model. Compared with previous work in English-Japanese machine translation using a language model, the experimental results obtained with the proposed Dynamic Fusion mechanism improve BLEU and Rank-based Intuitive Bilingual Evaluation Score (RIBES) scores. Additionally, in analyses of the attention and the predictivity of the language model, the Dynamic Fusion mechanism is shown to allow predictive language modeling that conforms to the appropriate grammatical structure.
|
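A schematic sketch of the general gated-fusion idea the abstract contrasts with a fixed hand-crafted ratio: at each decoding step, a learned gate computed from the decoder state mixes the translation model's distribution with the language model's. The projections are illustrative stand-ins for the full attentive models.

```python
import torch
import torch.nn as nn

vocab, hid = 1000, 256
tm_logits = torch.randn(8, vocab)        # translation-model scores
lm_logits = torch.randn(8, vocab)        # language-model scores
dec_state = torch.randn(8, hid)          # decoder hidden state

gate = torch.sigmoid(nn.Linear(hid, 1)(dec_state))   # lambda_t in (0, 1)
p_tm = torch.softmax(tm_logits, dim=-1)
p_lm = torch.softmax(lm_logits, dim=-1)
p_out = gate * p_tm + (1 - gate) * p_lm  # context-dependent mixture
print(p_out.sum(dim=-1))                 # each row still sums to 1
```

Because the gate depends on the decoder state, the language model's influence can vary with the translation history, which is exactly what a fixed ratio cannot do.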
GIF: Generative Interpretable Faces ; Photorealistic visualization and animation of expressive human faces have been a long-standing challenge. 3D face modeling methods provide parametric control but generate unrealistic images; on the other hand, generative 2D models like GANs (Generative Adversarial Networks) output photorealistic face images, but lack explicit control. Recent methods gain partial control, either by attempting to disentangle different factors in an unsupervised manner, or by adding control post hoc to a pretrained model. Unconditional GANs, however, may entangle factors that are hard to undo later. We condition our generative model on predefined control parameters to encourage disentanglement in the generation process. Specifically, we condition StyleGAN2 on FLAME, a generative 3D face model. While conditioning on FLAME parameters yields unsatisfactory results, we find that conditioning on rendered FLAME geometry and photometric details works well. This gives us a generative 2D face model named GIF (Generative Interpretable Faces) that offers FLAME's parametric control. Here, interpretable refers to the semantic meaning of different parameters. Given FLAME parameters for shape, pose, and expressions, parameters for appearance and lighting, and an additional style vector, GIF outputs photorealistic face images. We perform an AMT-based perceptual study to quantitatively and qualitatively evaluate how well GIF follows its conditioning. The code, data, and trained model are publicly available for research purposes at http://gif.is.tue.mpg.de.
|
Towards a Neural Graphics Pipeline for Controllable Image Generation ; In this paper, we leverage advances in neural networks towards forming a neural rendering pipeline for controllable image generation, thereby bypassing the need for detailed modeling in the conventional graphics pipeline. To this end, we present the Neural Graphics Pipeline (NGP), a hybrid generative model that brings together neural and traditional image formation models. NGP decomposes the image into a set of interpretable appearance feature maps, uncovering direct control handles for controllable image generation. To form an image, NGP generates coarse 3D models that are fed into neural rendering modules to produce view-specific interpretable 2D maps, which are then composited into the final output image using a traditional image formation model. Our approach offers control over image generation by providing direct handles controlling illumination and camera parameters, in addition to control over shape and appearance variations. The key challenge is to learn these controls through unsupervised training that links generated coarse 3D models with unpaired real images via neural and traditional (e.g., Blinn-Phong) rendering functions, without establishing an explicit correspondence between them. We demonstrate the effectiveness of our approach on controllable image generation of single-object scenes. We evaluate our hybrid modeling framework, compare with neural-only generation methods (namely, DCGAN, LSGAN, WGAN-GP, VON, and SRNs), report improvement in FID scores against real images, and demonstrate that NGP supports direct controls common in traditional forward rendering. Code is available at http://geometry.cs.ucl.ac.uk/projects/2021/ngp.
|
The Connection between Out-of-Distribution Generalization and Privacy of ML Models ; With the goal of generalizing to out-of-distribution (OOD) data, recent domain generalization methods aim to learn stable feature representations whose effect on the output remains invariant across domains. Given the theoretical connection between generalization and privacy, we ask whether better OOD generalization leads to better privacy for machine learning models, where privacy is measured through robustness to membership inference (MI) attacks. In general, we find that the relationship does not hold. Through extensive evaluation on a synthetic dataset and image datasets like MNIST, Fashion-MNIST, and Chest X-rays, we show that a lower OOD generalization gap does not imply better robustness to MI attacks. Instead, privacy benefits are based on the extent to which a model captures the stable features. A model that captures stable features is more robust to MI attacks than models that exhibit better OOD generalization but do not learn stable features. Further, for the same provable differential privacy guarantees, a model that learns stable features provides higher utility as compared to others. Our results offer the first extensive empirical study connecting stable features and privacy, and also have a takeaway for the domain generalization community: MI attacks can be used as a complementary metric to measure model quality.
|
Controlled Text Generation using T5-based Encoder-Decoder Soft Prompt Tuning and Analysis of the Utility of Generated Text in AI ; Controlled text generation is a very important task in natural language processing due to its promising applications. To achieve this task, we introduce a novel soft prompt tuning method that uses soft prompts at both the encoder and decoder levels together in a T5 model, and we investigate its performance, as the behaviour of an additional soft prompt attached to the decoder of a T5 model in controlled text generation has remained unexplored. We then investigate the feasibility of steering the output of this extended soft-prompted T5 model at the decoder level. Finally, we analyse the utility of the generated text for AI-related tasks such as training AI models, with an interpretability analysis of a classifier trained on the synthetic text, as there has been a lack of proper analysis of methodologies for generating properly labelled data to be utilized in AI tasks. Through in-depth intrinsic and extrinsic evaluations of this generation model and the artificially generated data, we find that the model produces better results than a T5 model with a single soft prompt at the encoder level, and that a sentiment classifier trained on this artificially generated data can produce classification results comparable to those of a classifier trained with real labelled data, with classifier decisions that are interpretable with respect to the input text content.
|
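A minimal sketch of encoder- and decoder-side soft prompt tuning on T5 in the spirit of the abstract above: learnable prompt vectors are prepended to the token embeddings on both sides while the T5 weights stay frozen. It uses Hugging Face transformers; the prompt length, model size, and the use of the internal `_shift_right` helper are assumptions of this sketch, not the paper's exact setup.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("t5-small")
tok = T5Tokenizer.from_pretrained("t5-small")
for p in model.parameters():
    p.requires_grad_(False)             # freeze the backbone

n_prompt, d = 20, model.config.d_model
enc_prompt = torch.nn.Parameter(torch.randn(1, n_prompt, d) * 0.02)
dec_prompt = torch.nn.Parameter(torch.randn(1, n_prompt, d) * 0.02)

src = tok("a positive movie review:", return_tensors="pt")
tgt = tok("a truly delightful film.", return_tensors="pt")

embed = model.get_input_embeddings()
enc_in = torch.cat([enc_prompt, embed(src.input_ids)], dim=1)
dec_ids = model._shift_right(tgt.input_ids)
dec_in = torch.cat([dec_prompt, embed(dec_ids)], dim=1)

out = model(inputs_embeds=enc_in, decoder_inputs_embeds=dec_in)
logits = out.logits[:, n_prompt:, :]    # drop decoder-prompt positions
loss = torch.nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tgt.input_ids.reshape(-1))
loss.backward()                         # gradients flow only into the prompts
```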
Noise2Music: Text-conditioned Music Generation with Diffusion Models ; We introduce Noise2Music, where a series of diffusion models is trained to generate high-quality 30-second music clips from text prompts. Two types of diffusion models, a generator model, which generates an intermediate representation conditioned on text, and a cascader model, which generates high-fidelity audio conditioned on the intermediate representation and possibly the text, are trained and utilized in succession to generate high-fidelity music. We explore two options for the intermediate representation, one using a spectrogram and the other using audio with lower fidelity. We find that the generated audio is not only able to faithfully reflect key elements of the text prompt such as genre, tempo, instruments, mood, and era, but goes beyond to ground fine-grained semantics of the prompt. Pretrained large language models play a key role in this story: they are used to generate paired text for the audio of the training set and to extract embeddings of the text prompts ingested by the diffusion models. Generated examples: https://google-research.github.io/noise2music
|
Generative Model Based Noise Robust Training for Unsupervised Domain Adaptation ; Target domain pseudo-labelling has shown effectiveness in unsupervised domain adaptation (UDA). However, pseudo-labels of unlabeled target domain data are inevitably noisy due to the distribution shift between source and target domains. This paper proposes a Generative model-based Noise-Robust Training method (GeNRT), which eliminates domain shift while mitigating label noise. GeNRT incorporates a Distribution-based Class-wise Feature Augmentation (DCFA) and a Generative-Discriminative classifier Consistency (GDC), both based on the class-wise target distributions modelled by generative models. DCFA minimizes the domain gap by augmenting the source data with distribution-sampled target features, and trains a noise-robust discriminative classifier by using target domain knowledge from the generative models. GDC regards all the class-wise generative models as generative classifiers and enforces a consistency regularization between the generative and discriminative classifiers. It exploits an ensemble of target knowledge from all the generative models to train a noise-robust discriminative classifier, and is eventually theoretically linked to the Ben-David domain adaptation theorem for reducing the domain gap. Extensive experiments on Office-Home, PACS, and Digit-Five show that our GeNRT achieves comparable performance to state-of-the-art methods under single-source and multi-source UDA settings.
|
Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy ; Large language models are powerful text processors and reasoners, but are still subject to limitations including outdated knowledge and hallucinations, which necessitates connecting them to the world. Retrieval-augmented large language models have raised extensive attention for grounding model generation on external knowledge. However, retrievers struggle to capture relevance, especially for queries with complex information needs. Recent work has proposed to improve relevance modeling by having large language models actively involved in retrieval, i.e., to improve retrieval with generation. In this paper, we show that strong performance can be achieved by a method we call Iter-RetGen, which synergizes retrieval and generation in an iterative manner. A model output shows what might be needed to finish a task, and thus provides an informative context for retrieving more relevant knowledge, which in turn helps generate a better output in the next iteration. Compared with recent work which interleaves retrieval with generation when producing an output, Iter-RetGen processes all retrieved knowledge as a whole and largely preserves the flexibility in generation without structural constraints. We evaluate Iter-RetGen on multi-hop question answering, fact verification, and commonsense reasoning, and show that it can flexibly leverage parametric knowledge and non-parametric knowledge, and is superior to or competitive with state-of-the-art retrieval-augmented baselines while causing fewer overheads of retrieval and generation. We can further improve performance via generation-augmented retrieval adaptation.
|
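A schematic sketch of the iterative retrieve-then-generate loop the abstract describes: each iteration retrieves with the previous output appended to the query, then regenerates. `retrieve` and `generate` are placeholder stand-ins for a real retriever and LLM.

```python
def retrieve(query, k=3):
    """Placeholder retriever: returns the top-k passages for a query."""
    corpus = ["doc about A", "doc about B", "doc about C"]
    return corpus[:k]                     # a real system would rank by relevance

def generate(question, passages):
    """Placeholder LLM: answers a question given retrieved passages."""
    return f"answer({question} | {len(passages)} passages)"

def iter_retgen(question, iterations=3):
    output = ""
    for _ in range(iterations):
        # the previous generation enriches the retrieval query
        passages = retrieve(question + " " + output)
        output = generate(question, passages)
    return output

print(iter_retgen("who directed the film that won best picture in 1998?"))
```

The key design choice, per the abstract, is that each iteration sees all retrieved knowledge at once rather than interleaving retrieval inside a single structured generation.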
Reverse Stable Diffusion: What prompt was used to generate this image? ; Text-to-image diffusion models such as Stable Diffusion have recently attracted the interest of many researchers, and inverting the diffusion process can play an important role in better understanding the generative process and in how to engineer prompts in order to obtain the desired images. To this end, we introduce the new task of predicting the text prompt given an image generated by a generative diffusion model. We combine a series of white-box and black-box models (with and without access to the weights of the diffusion network) to deal with the proposed task. We propose a novel learning framework comprising a joint prompt regression and multi-label vocabulary classification objective that generates improved prompts. To further improve our method, we employ a curriculum learning procedure that promotes the learning of image-prompt pairs with lower labeling noise (i.e., that are better aligned), and an unsupervised domain-adaptive kernel learning method that uses the similarities between samples in the source and target domains as extra features. We conduct experiments on the DiffusionDB data set, predicting text prompts from images generated by Stable Diffusion. Our novel learning framework produces excellent results on the aforementioned task, yielding the highest gains when applied to the white-box model. In addition, we make an interesting discovery: training a diffusion model on the prompt generation task can make the model generate images that are much better aligned with the input prompts, when the model is directly reused for text-to-image generation.
|
Generating Faithful Text From a Knowledge Graph with Noisy Reference Text ; Knowledge Graph (KG)-to-Text generation aims at generating fluent natural-language text that accurately represents the information of a given knowledge graph. While significant progress has been made in this task by exploiting the power of pretrained language models (PLMs) with appropriate graph structure-aware modules, existing models still fall short of generating faithful text, especially when the ground-truth natural-language text contains additional information that is not present in the graph. In this paper, we develop a KG-to-text generation model that can generate faithful natural-language text from a given graph, in the presence of noisy reference text. Our framework incorporates two core ideas: First, we utilize contrastive learning to enhance the model's ability to differentiate between faithful and hallucinated information in the text, thereby encouraging the decoder to generate text that aligns with the input graph. Second, we empower the decoder to control the level of hallucination in the generated text by employing a controllable text generation technique. We evaluate our model's performance through the standard quantitative metrics as well as a ChatGPT-based quantitative and qualitative analysis. Our evaluation demonstrates the superior performance of our model over state-of-the-art KG-to-text models on faithfulness.
|
Quantum-Noise-driven Generative Diffusion Models ; Generative models realized with machine learning techniques are powerful tools to infer complex and unknown data distributions from a finite number of training samples in order to produce new synthetic data. Diffusion models are an emerging framework that has recently overcome the performance of generative adversarial networks in creating synthetic text and high-quality images. Here, we propose and discuss the quantum generalization of diffusion models, i.e., three quantum-noise-driven generative diffusion models that could be experimentally tested on real quantum systems. The idea is to harness unique quantum features, in particular the non-trivial interplay among coherence, entanglement, and noise that the currently available noisy quantum processors unavoidably suffer from, in order to overcome the main computational burdens of classical diffusion models during inference. Hence, we suggest exploiting quantum noise not as an issue to be detected and solved but instead as a remarkably beneficial key ingredient to generate much more complex probability distributions that would be difficult or even impossible to express classically, and from which a quantum processor might sample more efficiently than a classical one. An example of numerical simulations for a hybrid classical-quantum generative diffusion model is also included. Therefore, our results are expected to pave the way for new quantum-inspired or quantum-based generative diffusion algorithms addressing classical tasks more powerfully, with widespread real-world applications ranging from climate forecasting to neuroscience, from traffic flow analysis to financial forecasting.
|
Possible Chandra-Mechanism for Generation Bound ; In the generation structure, the quark mass increases extremely rapidly with the generation index, and there is a bound on the generation number. The ground for this bound is investigated on the basis of a certain kind of composite model of leptons and quarks, in which they are supposed to be composed of subconstituents obeying Fermi statistics. A possible Chandrasekhar-like mechanism for the generation bound is proposed.
|
Turbulence Generation from a Stochastic Wavelet Model ; This research presents a new turbulence generation method based on stochastic wavelets and tests its various properties in both homogeneous and inhomogeneous turbulence. A turbulence field can be generated with fewer basis functions compared to previous synthetic Fourier methods. Adaptive generation of inhomogeneous turbulence is achieved by a scale-reduction algorithm, leading to smaller computational cost. The generated turbulence shows good agreement with input data and theoretical results.
|
Detecting parity violation from axion inflation with third-generation detectors ; A gravitational wave background is expected to emerge from the superposition of numerous gravitational wave sources of both astrophysical and cosmological origin. A number of cosmological models can exhibit parity violation, resulting in the generation of circularly polarised gravitational waves. We investigate the constraining power of the third-generation Einstein Telescope and Cosmic Explorer detectors for a gravitational wave background generated by early-universe axion inflation.
|
Testing a quintessence model with CMBR peaks location ; We show that a model of quintessence with an exponential potential, which allows one to obtain general exact solutions, can generate locations of CMBR peaks that are fully compatible with present observational data.
|
Generality of inflation in closed FRW models ; We investigate the generality of inflation in closed FRW models for a wide class of quintessence potentials. It is shown that inflation is not suppressed for most of them across a wide range of their parameters. This allows us to decide whether inflation is common even in the case of a closed universe.
|
Clifford bundle formulation of BF gravity generalized to the standard model ; The structure and dynamics of the standard model and gravity are described by a Clifford-valued connection and its curvature.
|
Three-generation neutrino mixing is compatible with all experiments ; We consider the minimal extension of the Standard Model with three generations of massive neutrinos that mix. We then determine the parameters of the model that satisfy all experimental constraints.
|
Evaluation of the coincidence probabilities in a generalized Gaussian model of multiple particle production ; Coincidence probabilities, which yield Rényi entropies, are investigated in a generalized Gaussian model, which includes interparticle correlations.
|