Stability in Generalized Modified Gravity ; The stability issue of generalized modified gravitational models is discussed, with particular emphasis on de Sitter solutions. Two approaches are briefly presented.
|
A generalization of random self-decomposability ; The notion of random self-decomposability is generalized here. Its relation to self-decomposability and Harris infinite divisibility, and its connection with a stationary first-order generalized autoregressive model, are presented. The notion is then extended to Z-valued distributions.
|
Monotonicity of the polaron energy II: General theory of operator monotonicity ; We construct a general theory of operator monotonicity and apply it to the Fröhlich polaron Hamiltonian. This general theory provides a consistent viewpoint on the Fröhlich model.
|
Semantics and Compilation of Answer Set Programming with Generalized Atoms ; Answer Set Programming (ASP) is logic programming under the stable model, or answer set, semantics. During the last decade, this paradigm has seen several extensions that generalize the notion of atom used in these programs: aggregate atoms, HEX atoms, generalized quantifiers, and abstract constraints. In this paper we refer to these constructs collectively as generalized atoms. The idea common to all of them is that their satisfaction depends on the truth values of a set of non-generalized atoms, rather than on the truth value of a single non-generalized atom. Motivated by several examples, we argue that for some of the more intricate generalized atoms the previously suggested semantics provide unintuitive results, and we propose an alternative semantics, which we call supportedly stable or SFLP answer sets. We show that it is equivalent to the major previously proposed semantics for programs with convex generalized atoms, and that in general it admits more intended models than other semantics in the presence of non-convex generalized atoms. We show that the complexity of supportedly stable models is on the second level of the polynomial hierarchy, similar to previous proposals and to stable models of disjunctive logic programs. Given these complexity results, we provide a compilation method that compactly transforms programs with generalized atoms in disjunctive normal form into programs without generalized atoms. Variants are given for the new supportedly stable semantics and for the existing FLP semantics, for which a similar compilation technique has not been known so far.
|
Three-dimensional homogeneous generalized Ricci solitons ; We study three-dimensional generalized Ricci solitons, in both Riemannian and Lorentzian settings. We determine their homogeneous models, classifying left-invariant generalized Ricci solitons on three-dimensional Lie groups.
|
Java Generics: An Order-Theoretic Approach (Detailed Outline) ; Generics have been added to Java so as to increase the expressiveness of its type system. Generics in Java, however, include some features, such as Java wildcards, F-bounded generics, and Java erasure, that have been hard to analyze and reason about so far, reflecting the fact that the mathematical modeling of generics in Java and other similar nominally-typed object-oriented programming (OOP) languages is a challenge. As a result, the type systems of mainstream nominally-typed OOP languages, which are built on current models of generics, are overly complex, which hinders the progress of these type systems. In this paper we present a detailed outline of a new approach to modeling Java generics that uses concepts and tools from order theory, and we report on our progress in developing this approach. Fundamentally, we use the nominal subclassing relation, as a partial order, together with some standard and novel order-theoretic tools to construct the generic nominal subtyping relation, as a second partial order, and the containment relation between generic type arguments, a third partial order. We further analyze the relation between these three ordering relations, which lie at the heart of mainstream generic OO type systems, using order-theoretic tools, and accordingly we explore extensions of OO type systems suggested by such analysis. In our approach we also make use of some concepts and tools from category theory. We believe a combined order-theoretic and category-theoretic approach to modeling generics holds the keys to overcoming much of the adversity found when analyzing features of generic OO type systems.
|
The phase symmetry of general relativity ; It is shown that general relativity has a one-parameter compact symmetry, and that this symmetry is analogous to the phase symmetry of quantum mechanics.
|
Classify and Generate: Using Classification Latent Space Representations for Image Generation ; Utilization of classification latent space information for downstream reconstruction and generation is an intriguing and relatively unexplored area. In general, discriminative representations are rich in class-specific features but are too sparse for reconstruction, whereas autoencoder representations are dense but have limited, indistinguishable class-specific features, making them less suitable for classification. In this work, we propose a discriminative modeling framework that employs manipulated supervised latent representations to reconstruct and generate new samples belonging to a given class. Unlike generative modeling approaches such as GANs and VAEs that aim to model the data manifold distribution, Representation-based Generations (ReGene) directly represent the given data manifold in the classification space. Such supervised representations, under certain constraints, allow for reconstructions and controlled generations using an appropriate decoder, without enforcing any prior distribution. Theoretically, given a class, we show that these representations, when smartly manipulated using convex combinations, retain the same class label. Furthermore, they also lead to the novel generation of visually realistic images. Extensive experiments on datasets of varying resolutions demonstrate that ReGene has higher classification accuracy than existing conditional generative models while being competitive in terms of FID.
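The convex-combination property described above can be sketched in a few lines. The centroids, latent vectors, and nearest-centroid classifier below are illustrative stand-ins, not the paper's trained models:

```python
import math

# Hypothetical centroids of two classes in a 3-D classification latent
# space (illustrative numbers, not trained representations).
CENTROIDS = {"cat": [1.0, 0.0, 0.2], "dog": [-1.0, 0.1, -0.3]}

def classify(z):
    """Nearest-centroid stand-in for the classifier head."""
    return min(CENTROIDS, key=lambda c: math.dist(z, CENTROIDS[c]))

def convex_combination(z1, z2, alpha):
    """alpha*z1 + (1-alpha)*z2: the manipulation ReGene relies on."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(z1, z2)]

# Mixing two latents of the same class stays inside that class.
z_a = [1.1, 0.05, 0.25]   # a "cat" latent
z_b = [0.9, -0.05, 0.15]  # another "cat" latent
print(classify(convex_combination(z_a, z_b, 0.5)))  # cat
```

In the paper, such a mixed latent is then passed to a decoder to generate a new image of the same class.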
|
Compositional Generalization in Unsupervised Compositional Representation Learning: A Study on Disentanglement and Emergent Language ; Deep learning models struggle with compositional generalization, i.e. the ability to recognize or generate novel combinations of observed elementary concepts. In hopes of enabling compositional generalization, various unsupervised learning algorithms have been proposed with inductive biases that aim to induce compositional structure in learned representations (e.g. disentangled representations and emergent language learning). In this work, we evaluate these unsupervised learning algorithms in terms of how well they enable compositional generalization. Specifically, our evaluation protocol focuses on whether or not it is easy to train a simple model on top of the learned representation that generalizes to new combinations of compositional factors. We systematically study three unsupervised representation learning algorithms, beta-VAE, beta-TCVAE, and emergent language (EL) autoencoders, on two datasets that allow directly testing compositional generalization. We find that directly using the bottleneck representation with simple models and few labels may lead to worse generalization than using representations from layers before or after the learned representation itself. In addition, we find that the previously proposed metrics for evaluating the levels of compositionality are not correlated with actual compositional generalization in our framework. Surprisingly, we find that increasing pressure to produce a disentangled representation produces representations with worse generalization, while representations from EL models show strong compositional generalization. Taken together, our results shed new light on the compositional generalization behavior of different unsupervised learning algorithms, provide a new setting to rigorously test this behavior, and suggest the potential benefits of developing EL learning algorithms for more generalizable representations.
|
IC3D: Image-Conditioned 3D Diffusion for Shape Generation ; In recent years, Denoising Diffusion Probabilistic Models (DDPMs) have demonstrated exceptional performance in various 2D generative tasks. Following this success, DDPMs have been extended to 3D shape generation, surpassing previous methodologies in this domain. While many of these models are unconditional, some have explored the potential of using guidance from different modalities. In particular, image guidance for 3D generation has been explored through the utilization of CLIP embeddings. However, these embeddings are designed to align images and text, and do not necessarily capture the specific details needed for shape generation. To address this limitation and enhance image-guided 3D DDPMs with augmented 3D understanding, we introduce CISP (Contrastive Image-Shape Pre-training), obtaining a well-structured image-shape joint embedding space. Building upon CISP, we then introduce IC3D, a DDPM that harnesses CISP's guidance for 3D shape generation from single-view images. This generative diffusion model outperforms existing benchmarks in both quality and diversity of generated 3D shapes. Moreover, despite IC3D's generative nature, its generated shapes are preferred by human evaluators over those of a competitive single-view 3D reconstruction model. These properties contribute to a coherent embedding space, enabling latent interpolation and conditioned generation also from out-of-distribution images. We find IC3D able to generate coherent and diverse completions even when presented with occluded views, rendering it applicable in controlled real-world scenarios.
|
Model Selection Confidence Sets by Likelihood Ratio Testing ; The traditional activity of model selection aims at discovering a single model superior to other candidate models. In the presence of pronounced noise, however, multiple models are often found to explain the same data equally well. To resolve this model selection ambiguity, we introduce the general approach of model selection confidence sets (MSCSs) based on likelihood ratio testing. An MSCS is defined as a list of models statistically indistinguishable from the true model at a user-specified level of confidence, which extends the familiar notion of confidence intervals to the model-selection framework. Our approach guarantees asymptotically correct coverage probability of the true model when both sample size and model dimension increase. We derive conditions under which the MSCS contains all the relevant information about the true model structure. In addition, we propose natural statistics based on the MSCS to measure the importance of variables in a principled way that accounts for the overall model uncertainty. When the space of feasible models is large, the MSCS is implemented by an adaptive stochastic search algorithm which samples MSCS models with high probability. The MSCS methodology is illustrated through numerical experiments on synthetic data and real data examples.
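The core construction can be sketched as follows. The candidate models, log-likelihoods, and hard-coded chi-square quantiles below are illustrative; the actual method also handles growing model dimension and stochastic search over large model spaces:

```python
# Hypothetical nested candidates: name -> (max log-likelihood, #params).
CANDIDATES = {
    "intercept": (-130.5, 1),
    "x1":        (-121.0, 2),
    "x1+x2":     (-119.8, 3),
    "x1+x2+x3":  (-119.5, 4),
}
CHI2_95 = {1: 3.841, 2: 5.991, 3: 7.815}  # 95% chi-square quantiles

def mscs(candidates, full="x1+x2+x3"):
    """Models not rejected by a likelihood ratio test against the full
    model at the 5% level form the confidence set."""
    ll_full, k_full = candidates[full]
    kept = []
    for name, (ll, k) in candidates.items():
        df = k_full - k
        stat = 2.0 * (ll_full - ll)   # likelihood ratio statistic
        if df == 0 or stat <= CHI2_95[df]:
            kept.append(name)
    return kept

print(mscs(CANDIDATES))  # the intercept-only model is rejected
```

Here the set is read off directly: any model whose LRT statistic falls below the corresponding chi-square quantile stays in.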
|
Metabolite-mediated modeling of microbial community dynamics captures emergent behavior more effectively than species-species modeling ; Personalized models of the gut microbiome are valuable for disease prevention and treatment. This requires a mathematical model that predicts microbial community composition and the emergent behavior of microbial communities. We seek a modeling strategy that can capture emergent behavior when built from sets of universal individual interactions. Our investigation reveals that species-metabolite interaction modeling is better able to capture emergent behavior in community composition dynamics than direct species-species modeling. Using publicly available data, we examine the ability of species-species models and species-metabolite models to predict trio growth experiments from the outcomes of pair growth experiments. We compare quadratic species-species interaction models and quadratic species-metabolite interaction models, and conclude that only species-metabolite models have the necessary complexity to explain a wide variety of interdependent growth outcomes. We also show that general species-species interaction models cannot match patterns observed in community growth dynamics, whereas species-metabolite models can. We conclude that species-metabolite modeling will be important in the development of accurate, clinically useful models of microbial communities.
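A minimal sketch of the species-metabolite idea, with invented rates (not fitted to the paper's data): species interact only through a shared metabolite pool, so competition emerges from metabolite depletion rather than being encoded as direct species-species terms.

```python
# Quadratic species-metabolite model: growth of each species is
# proportional to (its biomass) * (metabolite level); rates illustrative.
def step(x, m, uptake, yields, dt=0.01):
    """One Euler step: x = species biomasses, m = metabolite level."""
    growth = [y * u * m * xi for xi, u, y in zip(x, uptake, yields)]
    consumed = sum(u * m * xi for xi, u in zip(x, uptake))
    x_new = [xi + dt * g for xi, g in zip(x, growth)]
    m_new = m - dt * consumed
    return x_new, m_new

x, m = [0.1, 0.1], 1.0   # two species sharing one metabolite
for _ in range(200):
    x, m = step(x, m, uptake=[1.0, 0.5], yields=[0.8, 1.2])
# Both species grow while the shared metabolite is depleted; apparent
# species-species competition is an emergent, mediated effect.
```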
|
Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks ; An off-the-shelf model offered as a commercial service can be stolen by model stealing attacks, posing great threats to the rights of the model owner. Model fingerprinting aims to verify whether a suspect model is stolen from the victim model, and is gaining more and more attention nowadays. Previous methods typically leverage transferable adversarial examples as the model fingerprint, which is sensitive to adversarial defense or transfer learning scenarios. To address this issue, we instead consider the pairwise relationship between samples and propose a novel yet simple model stealing detection method based on SAmple Correlation (SAC). Specifically, we present SAC-w, which selects wrongly classified normal samples as model inputs and calculates the mean correlation among their model outputs. To reduce the training time, we further develop SAC-m, which selects CutMix-augmented samples as model inputs, without the need for training surrogate models or generating adversarial examples. Extensive results validate that SAC successfully defends against various model stealing attacks, even including adversarial training or transfer learning, and detects the stolen models with the best performance in terms of AUC across different datasets and model architectures. The code is available at https://github.com/guanjiyang/SAC.
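The sample-correlation idea can be sketched as follows, using hypothetical output vectors rather than real model outputs: a near-copy of the victim yields a pairwise-correlation fingerprint much closer to the victim's than an unrelated model does.

```python
import math

def pearson(a, b):
    """Pearson correlation between two output vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def fingerprint(outputs):
    """Pairwise correlation matrix of a model's per-sample outputs."""
    n = len(outputs)
    return [[pearson(outputs[i], outputs[j]) for j in range(n)]
            for i in range(n)]

def distance(f1, f2):
    """Mean absolute difference between two fingerprints."""
    return sum(abs(a - b) for r1, r2 in zip(f1, f2)
               for a, b in zip(r1, r2)) / len(f1) ** 2

# Hypothetical softmax-like outputs on three probe samples.
victim  = [[0.9, 0.05, 0.05], [0.1, 0.8, 0.1], [0.2, 0.1, 0.7]]
stolen  = [[0.88, 0.07, 0.05], [0.12, 0.78, 0.1], [0.21, 0.09, 0.7]]
foreign = [[0.3, 0.3, 0.4], [0.5, 0.2, 0.3], [0.1, 0.6, 0.3]]
# The stolen model's fingerprint is far closer to the victim's.
```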
|
Mimicking Dark Energy with Lemaitre-Tolman-Bondi Models: Weak Central Singularities and Critical Points ; There has been much debate over whether or not one could explain the observed acceleration of the Universe with inhomogeneous cosmological models, such as the spherically-symmetric Lemaitre-Tolman-Bondi (LTB) models. It has been claimed that the central observer in these models can observe a local acceleration, which would contradict general theorems. We resolve the contradiction by noting that many of the models that have been explored contain a weak singularity at the location of the observer which makes them unphysical. In the absence of this singularity, we show that LTB models must have a positive central deceleration parameter q0, in agreement with the general theorems. We also show that it is possible to achieve a negative apparent deceleration parameter at nonzero redshifts in LTB models that do not contain this singularity. However, we find other singularities that tend to arise in LTB models when attempting to match luminosity distance data, and these generally limit the range of redshifts for which these models can mimic observations of an accelerating Universe. Exceptional models do exist that can extend to arbitrarily large redshift without encountering these pathologies, and we show how these may be constructed. These special models exhibit regions with a negative effective equation-of-state parameter, which may fall below negative one, but we have failed to find any singularity-free models that agree with observations. Moreover, models based on dust-filled LTB metrics probably fail to reproduce observed properties of large-scale structure.
|
A Cosmological Model of a Thermodynamic Open Universe ; In this paper we generalise the earlier work of Prigogine et al., who constructed a phenomenological model of entropy production via particle creation in a very early universe generated out of the vacuum rather than from a singularity, by including radiation as an energy source, and we develop an alternative cosmological model in which particle creation prevents the big bang. Our radiation-dominated model of the universe shows the following general tendencies: (i) it originates from an instability of the vacuum rather than from a singularity; (ii) up to a characteristic time, cosmological quantities such as density, pressure, the Hubble constant and the expansion parameter vary rapidly with time; (iii) after the characteristic time, these quantities settle down and the model turns into a de Sitter-type model with uniform matter, radiation and creation densities and Hubble constant H. The de Sitter regime survives for a decay time and then connects continuously to a usual adiabatic matter-radiation Robertson-Walker universe. Notably, we relate the phenomenological radiation-dominated model to a macroscopic model of quantum particle creation in the early universe giving rise to the presently observed value of the cosmic background radiation. We also find that the dust-filled model tallies exactly with Prigogine's, which justifies regarding our model as a generalization of Prigogine's. Although the model originates from an instability of the vacuum rather than from a singularity, a couple of unavoidable singularities still remain in the model.
|
A General Age-Specific Mortality Model with an Example Indexed by Child or Child/Adult Mortality ; BACKGROUND: The majority of countries in Africa and nearly one third of all countries require mortality models to infer complete age schedules of mortality, which are required for population estimates, projections/forecasts and many other tasks in demography and epidemiology. Models that relate child mortality to mortality at other ages are important because all countries have measures of child mortality. OBJECTIVE: (1) Design a general model for age-specific mortality that provides a standard way to relate covariates to age-specific mortality. (2) Calibrate that model using the relationship between child or child/adult mortality and mortality at other ages. (3) Validate the calibrated model and compare its performance to existing models. METHODS: A general, parametrizable component model of mortality is designed using the singular value decomposition (SVD-Comp) and calibrated to the relationship between child or child/adult mortality and mortality at other ages in the observed mortality schedules of the Human Mortality Database. Cross-validation is used to validate the model, and the predictive performance of the model is compared to that of the Log-Quad model, designed to do the same thing. RESULTS: Prediction and cross-validation tests indicate that the child-mortality-calibrated SVD-Comp is able to accurately represent the observed mortality schedules in the Human Mortality Database, is robust to the selection of mortality schedules used to calibrate it, and performs better than the Log-Quad model. CONCLUSIONS: The child-mortality-calibrated SVD-Comp is a useful tool that can be used where child mortality is available but mortality at other ages is unknown. Together with earlier work on an HIV-prevalence-calibrated version of SVD-Comp, this work suggests that the approach is truly general and could be used to develop a wide range of additional useful models.
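A toy sketch of the SVD-Comp idea: a log-mortality schedule is modeled as a weighted sum of fixed components, with the weights driven by child mortality. The components and weight functions below are invented for illustration; the real model derives the components from an SVD of Human Mortality Database schedules and calibrates the weight relationships there.

```python
import math

# Toy stand-ins for the first two SVD components of log-mortality
# schedules over ages 0, 15, 45 and 75 (invented values, not HMD output).
U1 = [-3.5, -6.0, -4.5, -2.0]   # overall shape of log-mortality
U2 = [1.0, -0.3, -0.2, 0.1]     # contrast that raises child mortality

def predict_log_mortality(child_mortality):
    """Component weights as simple functions of child mortality (5q0);
    SVD-Comp instead calibrates these relationships on observed data."""
    w1 = 1.0 - 0.2 * math.log(child_mortality / 0.005)  # illustrative
    w2 = 2.0 * (child_mortality - 0.005)                # illustrative
    return [w1 * u1 + w2 * u2 for u1, u2 in zip(U1, U2)]

low = predict_log_mortality(0.005)    # low child mortality
high = predict_log_mortality(0.050)   # high child mortality
# In this sketch, higher child mortality raises mortality at every age,
# yielding a full age schedule from a single input index.
```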
|
Simultaneous Inference for Pairwise Graphical Models with Generalized Score Matching ; Probabilistic graphical models provide a flexible yet parsimonious framework for modeling dependencies among nodes in networks. There is a vast literature on parameter estimation and consistent model selection for graphical models. However, in many applications, scientists are also interested in quantifying the uncertainty associated with the estimated parameters and selected models, which the current literature has not addressed thoroughly. In this paper, we propose a novel estimator for statistical inference on edge parameters in pairwise graphical models based on a generalized Hyvärinen scoring rule. The Hyvärinen scoring rule is especially useful in cases where the normalizing constant cannot be obtained efficiently in a closed form, which is a common problem for graphical models, including Ising models and truncated Gaussian graphical models. Our estimator allows us to perform statistical inference for general graphical models, whereas existing works mostly focus on statistical inference for Gaussian graphical models, where finding the normalizing constant is computationally tractable. Under mild conditions that are typically assumed in the literature for consistent estimation, we prove that our proposed estimator is root-n consistent and asymptotically normal, which allows us to construct confidence intervals and build hypothesis tests for edge parameters. Moreover, we show how our proposed method can be applied to test hypotheses that involve a large number of model parameters simultaneously. We illustrate the validity of our estimator through extensive simulation studies on a diverse collection of data-generating processes.
|
Generalized Multivariate Extreme Value Models for Explicit Route Choice Sets ; This paper analyses a class of route choice models with closed-form probability expressions, namely Generalized Multivariate Extreme Value (GMEV) models. A large group of these models emerge from different utility formulas that combine systematic utility and random error terms. Twelve models are captured in a single discrete choice framework. The additive utility formula leads to the known logit family, being multinomial, path-size, paired combinatorial and link-nested. For the multiplicative formulation, only the multinomial and path-size weibit models have been identified; this study also identifies the paired combinatorial and link-nested variations, and generalizes the path-size variant. Furthermore, a new traveller's decision rule based on the multiplicative utility formula with a reference route is presented. Here the traveller chooses exclusively based on the differences between routes. This leads to four new GMEV models. We assess the models qualitatively based on a generic structure of route utility with random foreseen travel times, for which we empirically identify that the variance of utility should be different from what has thus far been assumed for multinomial probit and logit-kernel models. The expected travellers' behaviour and model behaviour under simple network changes are analysed. Furthermore, all models are estimated and validated on an illustrative network example with long-distance and short-distance origin-destination pairs. The new multiplicative models based on differences outperform the additive models in both tests.
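The additive/multiplicative contrast can be illustrated with the simplest member of each family, the multinomial logit and weibit models (parameter values are arbitrary): the logit probability depends only on absolute cost differences, while the weibit probability depends on cost ratios, so the same 5-minute saving matters less on a long trip.

```python
import math

def logit(costs, theta=0.5):
    """Additive MNL: probabilities depend on absolute cost differences."""
    w = [math.exp(-theta * c) for c in costs]
    s = sum(w)
    return [x / s for x in w]

def weibit(costs, beta=4.0):
    """Multiplicative MNW (weibit): probabilities depend on cost ratios."""
    w = [c ** -beta for c in costs]
    s = sum(w)
    return [x / s for x in w]

# The same 5-minute difference on a short trip and on a long trip:
print(logit([10, 15]))    # identical to logit([110, 115])
print(weibit([10, 15]))   # much sharper split than weibit([110, 115])
```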
|
Thinging-Based Conceptual Modeling: Case Study of a Tendering System ; In computer science, models are made explicit to provide formality and a precise understanding of small, contingent universes (e.g., an organization), as constructed from stakeholder requirements. Conceptual modeling is a fundamental discipline in this context, whose main concerns are identifying, analyzing and describing the critical concepts of a universe of discourse. In the information systems field, one of the reasons why projects fail is an inability to capture requirements in a way that can be technically used to configure a system. This problem of requirements specification is considered to reflect deficiencies in theory. We apply a recently developed model, called the Thinging Machine (TM) model, which uniformly integrates static and dynamic modeling features, to this problem of requirements specification. The object-oriented (OO) approach to modeling, as applied in the Unified Modeling Language, is by far the most applied and accepted standard in software engineering; nevertheless, new notions in the field may enhance and facilitate a supplementary understanding of the OO model itself. We aim to contribute to the field of conceptual modeling by introducing the TM model's philosophical foundation of requirements analysis. The TM model has only five generic processes of things (e.g., objects), in which genericity indicates generality, as in the generic Aristotelian concepts based on abstraction. We show the TM model's viability by applying it to a real business system.
|
NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks ; Natural language explanation (NLE) models aim at explaining the decision-making process of a black box system via generating natural language sentences which are human-friendly, high-level and fine-grained. Current NLE models explain the decision-making process of a vision or vision-language model (a.k.a. the task model), e.g., a VQA model, via a language model (a.k.a. the explanation model), e.g., GPT. Beyond the additional memory resources and inference time required by the task model, the task and explanation models are completely independent, which disassociates the explanation from the reasoning process made to predict the answer. We introduce NLX-GPT, a general, compact and faithful language model that can simultaneously predict an answer and explain it. We first conduct pre-training on large-scale data of image-caption pairs for general understanding of images, and then formulate the answer as a text prediction task along with the explanation. Without region proposals or a task model, our resulting overall framework attains better evaluation scores, contains far fewer parameters and is 15 times faster than the current state-of-the-art model. We then address the problem of evaluating the explanations, which can often be generic, data-biased and can come in several forms. We therefore design two new evaluation measures: (1) explain-predict and (2) retrieval-based attack, a self-evaluation framework that requires no labels. Code is at https://github.com/fawazsammani/nlxgpt.
|
Improve Model Testing by Integrating Bounded Model Checking and Coverage-Guided Fuzzing ; The control logic models built by Simulink or Ptolemy are widely used in industry, and there is an urgent need to ensure their safety and security. Test case generation technologies are widely used to this end. State-of-the-art model testing tools employ model checking techniques or search-based methods to generate test cases. Traditional search-based techniques based on Simulink simulation are plagued by problems such as low speed and high overhead. Traditional model checking techniques such as symbolic execution have limited performance when dealing with nonlinear elements and complex loops. Recently, coverage-guided fuzzing technologies have proven effective for test case generation, due to their high efficiency and impressive effect on complex branches of loops. In this paper, we apply fuzzing methods to improve model testing and demonstrate their effectiveness. Fuzzing methods aim to cover more program branches by mutating valuable seeds. Inspired by this feature, we propose a novel integration technology, SPsCGF, which leverages bounded model checking for symbolic execution to generate test cases as initial seeds and then conducts fuzzing based upon these worthy seeds. In this manner, our work combines the advantages of model checking methods and fuzzing techniques in a novel way. Since control logic models always receive signal inputs, we specifically design novel mutation operators for signals to improve the existing fuzzing method in model testing. Over the evaluated benchmarks, which consist of industrial cases, SPsCGF achieves 8% to 38% higher model coverage and 3x to 10x better time efficiency compared with state-of-the-art works.
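A hedged sketch of what a signal-aware mutation operator might look like: instead of flipping raw bytes, a whole input signal (a list of samples) is mutated with structure-preserving operations. The operator set and parameters below are invented for illustration and are not taken from the SPsCGF paper.

```python
import random

def mutate_signal(signal, rng):
    """Mutate an input signal while keeping it a valid signal:
    shift it, rescale it, or inject a single spike."""
    op = rng.choice(["offset", "scale", "spike"])
    if op == "offset":                       # shift the whole signal
        d = rng.uniform(-1.0, 1.0)
        return [s + d for s in signal]
    if op == "scale":                        # amplify or attenuate it
        k = rng.uniform(0.5, 2.0)
        return [s * k for s in signal]
    i = rng.randrange(len(signal))           # inject one spike
    out = list(signal)
    out[i] += rng.uniform(-5.0, 5.0)
    return out

rng = random.Random(0)
seed = [0.0, 0.5, 1.0, 0.5, 0.0]   # e.g. a seed from bounded model checking
child = mutate_signal(seed, rng)   # same length, structurally valid
```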
|
PFGM++: Unlocking the Potential of Physics-Inspired Generative Models ; We introduce a new family of physics-inspired generative models, termed PFGM++, that unifies diffusion models and Poisson Flow Generative Models (PFGM). These models realize generative trajectories for N-dimensional data by embedding paths in an (N+D)-dimensional space, while still controlling the progression with a simple scalar norm of the D additional variables. The new models reduce to PFGM when D=1 and to diffusion models when D→∞. The flexibility of choosing D allows us to trade off robustness against rigidity, as increasing D results in more concentrated coupling between the data and the additional variable norms. We dispense with the biased large-batch field targets used in PFGM and instead provide an unbiased perturbation-based objective, similar to diffusion models. To explore different choices of D, we provide a direct alignment method for transferring well-tuned hyperparameters from diffusion models (D→∞) to any finite D value. Our experiments show that models with finite D can be superior to previous state-of-the-art diffusion models on the CIFAR-10/FFHQ 64x64 datasets, with FID scores of 1.91/2.43 when D=2048/128. In the class-conditional setting, D=2048 yields a current state-of-the-art FID of 1.74 on CIFAR-10. In addition, we demonstrate that models with smaller D exhibit improved robustness against modeling errors. Code is available at https://github.com/Newbeeer/pfgmpp.
|
Model-based standardization using multiple imputation ; When studying the association between treatment and a clinical outcome, a parametric multivariable model of the conditional outcome expectation is often used to adjust for covariates. The treatment coefficient of the outcome model targets a conditional treatment effect. Model-based standardization is typically applied to average the model predictions over the target covariate distribution, and to generate a covariate-adjusted estimate of the marginal treatment effect. The standard approach to model-based standardization involves maximum-likelihood estimation and use of the non-parametric bootstrap. We introduce a novel, general-purpose, model-based standardization method based on multiple imputation that is easily applicable when the outcome model is a generalized linear model. We term our proposed approach multiple imputation marginalization (MIM). MIM consists of two main stages: the generation of synthetic datasets and their analysis. MIM accommodates a Bayesian statistical framework, which naturally allows for the principled propagation of uncertainty, integrates the analysis into a probabilistic framework, and allows for the incorporation of prior evidence. We conduct a simulation study to benchmark the finite-sample performance of MIM in conjunction with a parametric outcome model. The simulations provide proof-of-principle in scenarios with binary outcomes, continuous-valued covariates, a logistic outcome model and the marginal log odds ratio as the target effect measure. When parametric modeling assumptions hold, MIM yields unbiased estimation in the target covariate distribution, valid coverage rates, and precision and efficiency similar to the standard approach to model-based standardization.
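The standardization step itself can be sketched as follows, assuming an already-fitted logistic outcome model with illustrative coefficients: predictions are averaged over the target covariate distribution under each treatment, and the marginal log odds ratio is formed from the averages. Note that the marginal estimate differs from the conditional treatment coefficient (non-collapsibility of the odds ratio), which is exactly why standardization is needed.

```python
import math

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative fitted coefficients of a logistic outcome model:
# logit P(Y=1) = B0 + B_TRT*treatment + B_X*x  (not from real data).
B0, B_TRT, B_X = -1.0, 0.8, 0.5

def marginal_log_or(covariates):
    """Average model predictions over the target covariate distribution
    under treatment and control, then form the marginal log odds ratio."""
    n = len(covariates)
    p1 = sum(inv_logit(B0 + B_TRT + B_X * x) for x in covariates) / n
    p0 = sum(inv_logit(B0 + B_X * x) for x in covariates) / n
    return math.log(p1 / (1 - p1)) - math.log(p0 / (1 - p0))

target = [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5]   # target covariate values
print(marginal_log_or(target))  # ~0.767, below the conditional 0.8
```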
|
Characterizing and Avoiding Problematic Global Optima of Variational Autoencoders ; Variational Autoencoders (VAEs) are deep generative latent variable models consisting of two components: a generative model that captures a data distribution p(x) by transforming a distribution p(z) over latent space, and an inference model that infers likely latent codes for each data point (Kingma and Welling, 2013). Recent work shows that traditional training methods tend to yield solutions that violate modeling desiderata: (1) the learned generative model captures the observed data distribution but does so while ignoring the latent codes, resulting in codes that do not represent the data (e.g. van den Oord et al., 2017; Kim et al., 2018); (2) the aggregate of the learned latent codes does not match the prior p(z). This mismatch means that the learned generative model will be unable to generate realistic data with samples from p(z) (e.g. Makhzani et al., 2015; Tomczak and Welling, 2017). In this paper, we demonstrate that both issues stem from the fact that the global optima of the VAE training objective often correspond to undesirable solutions. Our analysis builds on two observations: (1) the generative model is unidentifiable, i.e. there exist many generative models that explain the data equally well, each with different and potentially unwanted properties; and (2) there is bias in the VAE objective, i.e. the VAE objective may prefer generative models that explain the data poorly but have posteriors that are easy to approximate. We present a novel inference method, LiBI, mitigating the problems identified in our analysis. On synthetic datasets, we show that LiBI can learn generative models that capture the data distribution and inference models that better satisfy modeling assumptions when traditional methods struggle to do so.
|
Generalized Ginflation Inflation with the most general secondorder field equations ; We study generalized Galileons as a framework to develop the most general singlefield inflation models ever, Generalized Ginflation, containing yet further generalization of Ginflation, as well as previous examples such as kinflation, extended inflation, and new Higgs inflation as special cases. We investigate the background and perturbation evolution in this model, calculating the most general quadratic actions for tensor and scalar cosmological perturbations to give the stability criteria and the power spectra of primordial fluctuations. It is pointed out in the Appendix that the Horndeski theory and the generalized Galileons are equivalent. In particular, even the nonminimal coupling to the GaussBonnet term is included in the generalized Galileons in a nontrivial manner.
|
Non-Monotonic Sequential Text Generation ; Standard sequential generation methods assume a pre-specified generation order, such as text generation methods which generate words from left to right. In this work, we propose a framework for training models of text generation that operate in non-monotonic orders; the model directly learns good orders, without any additional annotation. Our framework operates by generating a word at an arbitrary position, and then recursively generating words to its left and then words to its right, yielding a binary tree. Learning is framed as imitation learning, including a coaching method which moves from imitating an oracle to reinforcing the policy's own preferences. Experimental results demonstrate that using the proposed method, it is possible to learn policies which generate text without pre-specifying a generation order, while achieving competitive performance with conventional left-to-right generation.
|
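The recursive left/right generation described above can be illustrated with a toy sketch (not the paper's learned policy): a hand-scripted policy emits a word for each tree position, and an in-order traversal of the resulting binary tree recovers the left-to-right sentence. The `script` dictionary is a hypothetical stand-in for a trained model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    word: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None

END = "<end>"

def generate(policy, context=()):
    """Recursively build a binary tree of words; policy maps a context to a word."""
    word = policy(context)
    if word == END:
        return None
    node = Node(word)
    node.left = generate(policy, context + (word, "L"))   # fill in words to the left
    node.right = generate(policy, context + (word, "R"))  # then words to the right
    return node

def inorder(node):
    """In-order traversal of the tree recovers the left-to-right sentence."""
    if node is None:
        return []
    return inorder(node.left) + [node.word] + inorder(node.right)

# A hypothetical hand-crafted "policy" that emits the middle word first.
script = {
    (): "sat",
    ("sat", "L"): "cat",
    ("sat", "L", "cat", "L"): "the",
    ("sat", "R"): "down",
}
policy = lambda ctx: script.get(ctx, END)
sentence = inorder(generate(policy))
```

The point of the sketch is that any generation order over the tree positions yields a valid sentence once flattened in-order, which is what lets the model learn the order itself.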
A Comprehensive Survey on Deep Music Generation: Multi-level Representations, Algorithms, Evaluations, and Future Directions ; The utilization of deep learning techniques in generating various contents, such as images and text, has become a trend. Music in particular, the topic of this paper, has attracted the widespread attention of countless researchers. The whole process of producing music can be divided into three stages, corresponding to the three levels of music generation: score generation produces scores, performance generation adds performance characteristics to the scores, and audio generation converts scores with performance characteristics into audio by assigning timbre, or generates music in audio format directly. Previous surveys have explored the network models employed in the field of automatic music generation. However, the development history, the model evolution, as well as the pros and cons of the same music generation tasks have not been clearly illustrated. This paper attempts to provide an overview of various composition tasks under different music generation levels, covering most of the currently popular music generation tasks using deep learning. In addition, we summarize the datasets suitable for diverse tasks, discuss the music representations, the evaluation methods as well as the challenges under different levels, and finally point out several future directions.
|
Causal-TGAN: Generating Tabular Data Using Causal Generative Adversarial Networks ; Synthetic data generation is becoming prevalent as a solution to privacy leakage and data shortage. Generative models are designed to generate a realistic synthetic dataset, which can precisely express the data distribution of the real dataset. Generative adversarial networks (GANs), which have achieved great success in the computer vision field, are naturally used for synthetic data generation. Though prior works have demonstrated great progress, most of them learn the correlations in the data distributions rather than the true processes by which the datasets are naturally generated. Correlation is not reliable, for it is a statistical technique that only captures linear dependencies and is easily affected by the dataset's bias. Causality, which encodes all the underlying factors of how the real data are naturally generated, is more reliable than correlation. In this work, we propose a causal model named Causal Tabular Generative Neural Network (Causal-TGAN) to generate synthetic tabular data using the tabular data's causal information. Extensive experiments on both simulated and real datasets demonstrate the better performance of our method when given the true causal graph, and a comparable performance when using the estimated causal graph.
|
Deconstructed Generation-Based Zero-Shot Model ; Recent research on Generalized Zero-Shot Learning (GZSL) has focused primarily on generation-based methods. However, current literature has overlooked the fundamental principles of these methods, and progress has been limited and increasingly complex. In this paper, we aim to deconstruct the generator-classifier framework and provide guidance for its improvement and extension. We begin by breaking down the generator-learned unseen-class distribution into class-level and instance-level distributions. Through our analysis of the role of these two types of distributions in solving the GZSL problem, we generalize the focus of the generation-based approach, emphasizing the importance of (i) attribute generalization in generator learning and (ii) independent classifier learning with partially biased data. We present a simple method based on this analysis that outperforms state-of-the-art methods on four public GZSL datasets, demonstrating the validity of our deconstruction. Furthermore, our proposed method remains effective even without a generative model, representing a step towards simplifying the generator-classifier structure. Our code is available at https://github.com/cdb342/DGZ.
|
Why is constrained neural language generation particularly challenging ; Recent advances in deep neural language models combined with the capacity of large scale datasets have accelerated the development of natural language generation systems that produce fluent and coherent texts to various degrees of success in a multitude of tasks and application contexts. However, controlling the output of these models for desired user and task needs is still an open challenge. This is crucial not only to customizing the content and style of the generated language, but also to their safe and reliable deployment in the real world. We present an extensive survey on the emerging topic of constrained neural language generation in which we formally define and categorize the problems of natural language generation by distinguishing between conditions and constraints the latter being testable conditions on the output text instead of the input, present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation. Our aim is to highlight recent progress and trends in this emerging field, informing on the most promising directions and limitations towards advancing the stateoftheart of constrained neural language generation research.
|
SDMuse Stochastic Differential Music Editing and Generation via Hybrid Representation ; While deep generative models have empowered music generation, it remains a challenging and underexplored problem to edit an existing musical piece at fine granularity. In this paper, we propose SDMuse, a unified Stochastic Differential Music editing and generation framework, which can not only compose a whole musical piece from scratch, but also modify existing musical pieces in many ways, such as combination, continuation, inpainting, and style transferring. The proposed SDMuse follows a twostage pipeline to achieve music generation and editing on top of a hybrid representation including pianoroll and MIDIevent. In particular, SDMuse first generatesedits pianoroll by iteratively denoising through a stochastic differential equation SDE based on a diffusion model generative prior, and then refines the generated pianoroll and predicts MIDIevent tokens autoregressively. We evaluate the generated music of our method on ailabs1k7 pop music dataset in terms of quality and controllability on various music editing and generation tasks. Experimental results demonstrate the effectiveness of our proposed stochastic differential music editing and generation process, as well as the hybrid representations.
|
Relational Inductive Biases for ObjectCentric Image Generation ; Conditioning image generation on specific features of the desired output is a key ingredient of modern generative models. Most existing approaches focus on conditioning the generation based on freeform text, while some niche studies use scene graphs to describe the content of the image to be generated. This paper explores novel methods to condition image generation that are based on objectcentric relational representations. In particular, we propose a methodology to condition the generation of a particular object in an image on the attributed graph representing its structure and associated style. We show that such architectural biases entail properties that facilitate the manipulation and conditioning of the generative process and allow for regularizing the training procedure. The proposed framework is implemented by means of a neural network architecture combining convolutional operators that operate on both the underlying graph and the 2D grid that becomes the output image. The resulting model learns to generate multichannel masks of the object that can be used as a soft inductive bias in the downstream generative task. Empirical results show that the proposed approach compares favorably against relevant baselines on image generation conditioned on human poses.
|
PatchWise Point Cloud Generation A DivideandConquer Approach ; A generative model for highfidelity point clouds is of great importance in synthesizing 3d environments for applications such as autonomous driving and robotics. Despite the recent success of deep generative models for 2d images, it is nontrivial to generate 3d point clouds without a comprehensive understanding of both local and global geometric structures. In this paper, we devise a new 3d point cloud generation framework using a divideandconquer approach, where the whole generation process can be divided into a set of patchwise generation tasks. Specifically, all patch generators are based on learnable priors, which aim to capture the information of geometry primitives. We introduce point and patchwise transformers to enable the interactions between points and patches. Therefore, the proposed divideandconquer approach contributes to a new understanding of point cloud generation from the geometry constitution of 3d shapes. Experimental results on a variety of object categories from the most popular point cloud dataset, ShapeNet, show the effectiveness of the proposed patchwise point cloud generation, where it clearly outperforms recent stateoftheart methods for highfidelity point cloud generation.
|
Diversifying Question Generation over Knowledge Base via External Natural Questions ; Previous methods on knowledge base question generation KBQG primarily focus on enhancing the quality of a single generated question. Recognizing the remarkable paraphrasing ability of humans, we contend that diverse texts should convey the same semantics through varied expressions. The above insights make diversifying question generation an intriguing task, where the first challenge is evaluation metrics for diversity. Current metrics inadequately assess the above diversity since they calculate the ratio of unique ngrams in the generated question itself, which leans more towards measuring duplication rather than true diversity. Accordingly, we devise a new diversity evaluation metric, which measures the diversity among topk generated questions for each instance while ensuring their relevance to the ground truth. Clearly, the second challenge is how to enhance diversifying question generation. To address this challenge, we introduce a dual model framework interwoven by two selection strategies to generate diverse questions leveraging external natural questions. The main idea of our dual framework is to extract more diverse expressions and integrate them into the generation model to enhance diversifying question generation. Extensive experiments on widely used benchmarks for KBQG demonstrate that our proposed approach generates highly diverse questions and improves the performance of question answering tasks.
|
MAR A structurebased search engine for models ; The availability of shared software models provides opportunities for reusing, adapting and learning from them. Public models are typically stored in a variety of locations, including model repositories, regular source code repositories, web pages, etc. To profit from them developers need effective search mechanisms to locate the models relevant for their tasks. However, to date, there has been little success in creating a generic and efficient search engine specially tailored to the modelling domain. In this paper we present MAR, a search engine for models. MAR is generic in the sense that it can index any type of model if its metamodel is known. MAR uses a querybyexample approach, that is, it uses example models as queries. The search takes the model structure into account using the notion of bag of paths, which encodes the structure of a model using paths between model elements and is a representation amenable for indexing. MAR is built over HBase using a specific design to deal with large repositories. Our benchmarks show that the engine is efficient and has fast response times in most cases. We have also evaluated the precision of the search engine by creating model mutants which simulate user queries. A REST API is available to perform queries and an Eclipse plugin allows end users to connect to the search engine from model editors. We have currently indexed more than 50.000 models of different kinds, including Ecore metamodels, BPMN diagrams and UML models. MAR is available at httpmarsearch.org.
|
Robust Mathematical Formulation and Probabilistic Description of Agent-Based Computational Economic Market Models ; In science, and especially in economics, agent-based modeling has become a widely used modeling approach. These models are often formulated as a large system of difference equations. In this study, we discuss two aspects, numerical modeling and the probabilistic description, for two agent-based computational economic market models: the Levy-Levy-Solomon model and the Franke-Westerhoff model. We derive time-continuous formulations of both models, and in particular we discuss the impact of the time scaling on the model behavior for the Levy-Levy-Solomon model. For the Franke-Westerhoff model, we prove that a constraint required in the original model is not necessary for stability of the time-continuous model. It is shown that a semi-implicit discretization of the time-continuous system preserves this unconditional stability. In addition, this semi-implicit discretization can be computed at cost comparable to the original model. Furthermore, we discuss possible probabilistic descriptions of time-continuous agent-based computational economic market models. In particular, we present the potential advantages of kinetic theory in deriving mesoscopic descriptions of agent-based models. As examples, we show two probabilistic descriptions of the Levy-Levy-Solomon and Franke-Westerhoff models.
|
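The unconditional-stability claim for semi-implicit discretizations can be illustrated on a toy linear test equation dx/dt = -λx (not the actual Franke-Westerhoff system): the explicit Euler scheme diverges once λΔt > 2, while the semi-implicit update, which treats the stiff term implicitly, remains stable for any step size.

```python
def explicit_euler(lam, dt, steps, x0=1.0):
    """Explicit Euler for dx/dt = -lam*x: x_{n+1} = x_n + dt*(-lam*x_n)."""
    x = x0
    for _ in range(steps):
        x = x + dt * (-lam * x)
    return x

def semi_implicit_euler(lam, dt, steps, x0=1.0):
    """Stiff term treated implicitly: x_{n+1} = x_n + dt*(-lam*x_{n+1}),
    i.e. x_{n+1} = x_n / (1 + lam*dt), stable for every dt > 0."""
    x = x0
    for _ in range(steps):
        x = x / (1.0 + lam * dt)
    return x

lam, dt, steps = 10.0, 0.5, 100   # lam*dt = 5 > 2: the explicit scheme is unstable
x_exp = explicit_euler(lam, dt, steps)
x_imp = semi_implicit_euler(lam, dt, steps)
```

With these parameters the explicit iterate grows like 4^n while the semi-implicit iterate decays like 6^(-n), matching the qualitative behavior of the true solution, which decays to zero.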
Model Zoos: A Dataset of Diverse Populations of Neural Network Models ; In recent years, neural networks (NNs) have evolved from laboratory environments to the state-of-the-art for many real-world problems. It was shown that NN models (i.e., their weights and biases) evolve on unique trajectories in weight space during training. Consequently, a population of such neural network models (referred to as a model zoo) would form structures in weight space. We think that the geometry, curvature and smoothness of these structures contain information about the state of training and can reveal latent properties of individual models. With such model zoos, one could investigate novel approaches for (i) model analysis, (ii) discovering unknown learning dynamics, (iii) learning rich representations of such populations, or (iv) exploiting the model zoos for generative modelling of NN weights and biases. Unfortunately, the lack of standardized model zoos and available benchmarks significantly increases the friction for further research about populations of NNs. With this work, we publish a novel dataset of model zoos containing systematically generated and diverse populations of NN models for further research. In total, the proposed model zoo dataset is based on eight image datasets, consists of 27 model zoos trained with varying hyperparameter combinations, and includes 50,360 unique NN models as well as their sparsified twins, resulting in over 3,844,360 collected model states. In addition to the model zoo data, we provide an in-depth analysis of the zoos and benchmarks for multiple downstream tasks. The dataset can be found at www.modelzoos.cc.
|
The Bayesian Context Trees State Space Model for time series modelling and forecasting ; A hierarchical Bayesian framework is introduced for developing rich mixture models for realvalued time series, along with a collection of effective tools for learning and inference. At the top level, meaningful discrete states are identified as appropriately quantised values of some of the most recent samples. This collection of observable states is described as a discrete contexttree model. Then, at the bottom level, a different, arbitrary model for realvalued time series a base model is associated with each state. This defines a very general framework that can be used in conjunction with any existing model class to build flexible and interpretable mixture models. We call this the Bayesian Context Trees State Space Model, or the BCTX framework. Efficient algorithms are introduced that allow for effective, exact Bayesian inference; in particular, the maximum a posteriori probability MAP contexttree model can be identified. These algorithms can be updated sequentially, facilitating efficient online forecasting. The utility of the general framework is illustrated in two particular instances When autoregressive AR models are used as base models, resulting in a nonlinear AR mixture model, and when conditional heteroscedastic ARCH models are used, resulting in a mixture model that offers a powerful and systematic way of modelling the wellknown volatility asymmetries in financial data. In forecasting, the BCTX methods are found to outperform stateoftheart techniques on simulated and realworld data, both in terms of accuracy and computational requirements. In modelling, the BCTX structure finds natural structure present in the data. In particular, the BCTARCH model reveals a novel, important feature of stock market index data, in the form of an enhanced leverage effect.
|
Integrating economic and psychological insights in binary choice models with social interactions ; We investigate a class of binary choice models with social interactions. We propose a unifying perspective that integrates economic models using a utility function and psychological models using an impact function. A general approach for analyzing the equilibrium structure of these models within meanfield approximation is developed. It is shown that within a meanfield approach both the utility function and the impact function models are equivalent to threshold models. The interplay between heterogeneity and randomness in model formulation is discussed. A general framework is applied in a number of examples leading to some wellknown models but also showing the possibility of more complex dynamics related to multiple equilibria. Our synthesis can provide a basis for many practical applications extending the scope of binary choice models.
|
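Within the mean-field approximation discussed in the abstract above, equilibria are roots of a self-consistency equation. A minimal sketch, assuming the standard form m = tanh(β(Jm + h)) (one common parametrization, not the paper's general framework), finds them by fixed-point iteration and shows the onset of multiple equilibria above the critical point βJ = 1.

```python
import math

def mean_field_equilibrium(beta, J, h=0.0, m0=0.5, iters=1000):
    """Iterate the self-consistency equation m = tanh(beta*(J*m + h))."""
    m = m0
    for _ in range(iters):
        m = math.tanh(beta * (J * m + h))
    return m

# Below the critical point (beta*J < 1): a unique equilibrium at m = 0.
m_low = mean_field_equilibrium(beta=0.5, J=1.0)

# Above it (beta*J > 1): two symmetric ordered equilibria, selected by the start.
m_up = mean_field_equilibrium(beta=2.0, J=1.0, m0=0.5)
m_dn = mean_field_equilibrium(beta=2.0, J=1.0, m0=-0.5)
```

The dependence of the reached equilibrium on the initial condition in the ordered regime is the fixed-point analogue of the multiple-equilibria dynamics mentioned in the abstract.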
Controlling for individual heterogeneity in longitudinal models, with applications to student achievement ; Longitudinal data tracking repeated measurements on individuals are highly valued for research because they offer controls for unmeasured individual heterogeneity that might otherwise bias results. Random effects or mixed model approaches, which treat individual heterogeneity as part of the model error term and use generalized least squares to estimate model parameters, are often criticized because correlation between unobserved individual effects and other model variables can lead to biased and inconsistent parameter estimates. Starting with an examination of the relationship between random effects and fixed effects estimators in the standard unobserved effects model, this article demonstrates through analysis and simulation that the mixed model approach has a "bias compression" property under a general model for individual heterogeneity that can mitigate bias due to uncontrolled differences among individuals. The general model is motivated by the complexities of longitudinal student achievement measures, but the results have broad applicability to longitudinal modeling.
|
Immigrated urn models: asymptotic properties and applications ; Urn models have been widely studied and applied in both scientific and social science disciplines. In clinical studies, the adoption of urn models in treatment allocation schemes has proved to be beneficial to both researchers, by providing more efficient clinical trials, and patients, by increasing the likelihood of receiving the better treatment. In this paper, we propose a new and general class of immigrated urn (IMU) models that incorporates the immigration mechanism into the urn process. Theoretical properties are developed and the advantages of the IMU models are discussed. In general, the IMU models have smaller variabilities than the classical urn models, yielding more powerful statistical inferences in applications. Illustrative examples are presented to demonstrate the wide applicability of the IMU models. The proposed IMU framework, which includes many popular classical urn models, not only offers a unifying perspective from which to comprehend the urn process, but also enables us to generate several novel urn models with desirable properties.
|
A new distance between DNA sequences ; We propose a new distance metric for DNA sequences, which can be defined on any evolutionary Markov model with infinitesimal generator matrix Q. That is, the new metric can be defined under existing models such as the Jukes-Cantor model, the Kimura 2-parameter model, the F84 model, the GTR model, etc. Since our metric does not depend on the form of the generator matrix Q, it can be defined for very general models, including those with varying nucleotide substitution rates among lineages. This makes our metric widely applicable. The simulation experiments carried out show that the new metric, when defined under classical models such as the JC, F84 and Kimura 2-parameter models, performs better than existing metrics in recovering phylogenetic trees from sequence data. Our simulation experiments also show that the new metric, under a model that allows varying nucleotide substitution rates among lineages, performs as well as or better than its other forms studied.
|
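For context, the classical Jukes-Cantor special case mentioned above (not the paper's new metric) can be computed in a few lines: the raw mismatch proportion p between two aligned sequences is corrected to an evolutionary distance d = -(3/4) ln(1 - 4p/3). The example sequences are hypothetical.

```python
import math

def p_distance(seq1, seq2):
    """Proportion of mismatched sites between two aligned sequences."""
    assert len(seq1) == len(seq2)
    diffs = sum(a != b for a, b in zip(seq1, seq2))
    return diffs / len(seq1)

def jukes_cantor_distance(seq1, seq2):
    """Classical Jukes-Cantor correction of the raw mismatch proportion."""
    p = p_distance(seq1, seq2)
    if p >= 0.75:
        return float("inf")   # saturation: the correction is undefined
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

d = jukes_cantor_distance("ACGTACGTAC", "ACGTACGAAC")  # 1 mismatch in 10 sites
```

The correction accounts for multiple substitutions at the same site, so d slightly exceeds the raw proportion p = 0.1 here; model-based distances like this are what tree-building methods such as neighbor joining consume.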
Topographic Mapping of astronomical light curves via a physically inspired probabilistic model ; We present a probabilistic generative approach for constructing topographic maps of light curves from eclipsing binary stars. The model defines a low-dimensional manifold of local noise models induced by a smooth nonlinear mapping from a low-dimensional latent space into the space of probabilistic models of the observed light curves. The local noise models are physical models that describe how such light curves are generated. Due to the principled probabilistic nature of the model, a cost function arises naturally and the model parameters are fitted via MAP estimation using the Expectation-Maximisation algorithm. Once the model has been trained, each light curve may be projected to the latent space as the mean posterior probability over the local noise models. We demonstrate our approach on a dataset of artificially generated light curves and on a dataset comprised of light curves from real observations.
|
Some classes of renormalizable tensor models ; We identify new families of renormalizable tensor models obtained from previously known renormalizable tensor models via a mapping capable of reducing or increasing the rank of the theory without affecting the renormalizability property. Mainly, a version of the rank-3 tensor model as defined in arXiv:1201.0176 [hep-th], and the Grosse-Wulkenhaar model in 4D and 2D, generate three different classes of renormalizable models. The proof of renormalizability is fully performed for the first reduced model. The same procedure can be applied for the remaining cases. Interestingly, we find that, due to the peculiar behavior of anisotropic wave function renormalizations, the rank-3 tensor model reduced to a matrix model generates a simple super-renormalizable vector model.
|
Modeling Waveform Shapes with Random Effects Segmental Hidden Markov Models ; In this paper we describe a general probabilistic framework for modeling waveforms such as heartbeats from ECG data. The model is based on segmental hidden Markov models, as used in speech recognition, with the addition of random effects to the generative model. The random-effects component of the model handles shape variability across different waveforms within a general class of waveforms of similar shape. We show that this probabilistic model provides a unified framework for learning these models from sets of waveform data, as well as for parsing, classification, and prediction of new waveforms. We derive a computationally efficient EM algorithm to fit the model on multiple waveforms, and introduce a scoring method that evaluates a test waveform based on its shape. Results on two real-world data sets demonstrate that the random effects methodology leads to improved accuracy compared to alternative approaches on classification and segmentation of real-world waveforms.
|
Tensor models from the viewpoint of matrix models: the case of loop models on random surfaces ; We study a connection between random tensors and random matrices through U(τ) matrix models which generate fully packed, oriented loops on random surfaces. The latter are found to be in bijection with a set of regular edge-colored graphs typically found in tensor models. It is shown that the expansion in the number of loops is organized like the 1/N expansion of rank-three tensor models. Recent results on tensor models are reviewed and applied in this context. For example, configurations which maximize the number of loops are precisely the melonic graphs of tensor models, and a scaling limit which projects onto the melonic sector is found. We also reinterpret the double scaling limit of tensor models from the point of view of loops on random surfaces. This approach is eventually generalized to higher-rank tensor models, which generate loops with fugacity τ on triangulations in dimension d-1.
|
Review on exact and perturbative deformations of the Einstein-Straus model: uniqueness and rigidity results ; The Einstein-Straus model consists of a Schwarzschild spherical vacuole in a Friedmann-Lemaître-Robertson-Walker (FLRW) dust spacetime (with or without Λ). It constitutes the most widely accepted model to answer the question of the influence of large-scale cosmological dynamics on local systems. The conclusion drawn from the model is that there is no influence from the cosmic background, since the spherical vacuole is static. Spherical generalizations to other interior matter models are commonly used in the construction of lumpy inhomogeneous cosmological models. On the other hand, the model has proven to be reluctant to admit non-spherical generalizations. In this review, we summarize the known uniqueness results for this model. These seem to indicate that the only reasonable and realistic non-spherical deformations of the Einstein-Straus model require perturbing the FLRW background. We review results about linear perturbations of the Einstein-Straus model, where the perturbations in the vacuole are assumed to be stationary and axially symmetric, so as to describe regions (voids in particular) in which the matter has reached an equilibrium regime.
|
Restricted Likelihood Ratio Tests for Linearity in Scalar-on-Function Regression ; We propose a procedure for testing the linearity of a scalar-on-function regression relationship. To do so, we use the functional generalized additive model (FGAM), a recently developed extension of the functional linear model. For a functional covariate X(t), the FGAM models the mean response as the integral with respect to t of F(X(t), t), where F is an unknown bivariate function. The FGAM can be viewed as the natural functional extension of generalized additive models. We show how the functional linear model can be represented as a simple mixed model nested within the FGAM. Using this representation, we then consider restricted likelihood ratio tests for zero variance components in mixed models to test the null hypothesis that the functional linear model holds. The methods are general and can also be applied to testing for interactions in a multivariate additive model or for testing for no effect in the functional linear model. The performance of the proposed tests is assessed on simulated data and in an application to measuring diesel truck emissions, where strong evidence of nonlinearities in the relationship between the functional predictor and the response is found.
|
Towards wellspecified semisupervised modelbased classifiers via structural adaptation ; Semisupervised learning plays an important role in largescale machine learning. Properly using additional unlabeled data largely available nowadays often can improve the machine learning accuracy. However, if the machine learning model is misspecified for the underlying true data distribution, the model performance could be seriously jeopardized. This issue is known as model misspecification. To address this issue, we focus on generative models and propose a criterion to detect the onset of model misspecification by measuring the performance difference between models obtained using supervised and semisupervised learning. Then, we propose to automatically modify the generative models during model training to achieve an unbiased generative model. Rigorous experiments were carried out to evaluate the proposed method using two image classification data sets PASCAL VOC'07 and MIR Flickr. Our proposed method has been demonstrated to outperform a number of stateoftheart semisupervised learning approaches for the classification task.
|
Fidelity Susceptibility in One-dimensional Disordered Lattice Models ; We investigate quantum phase transitions in one-dimensional quantum disordered lattice models, the Anderson model and the Aubry-André model, from the fidelity susceptibility approach. First, we find that the fidelity susceptibility and the generalized adiabatic susceptibility are maximal at the quantum critical points of the disordered models, through which one can locate the quantum critical point in disordered lattice models. Second, finite-size scaling analysis of the fidelity susceptibility and of the generalized adiabatic susceptibility shows that the correlation length critical exponent and the dynamical critical exponent at the quantum critical point of the one-dimensional Anderson model are respectively 2/3 and 2, and those of the Aubry-André model are respectively 1 and 2.375. Thus the quantum phase transitions in the Anderson model and in the Aubry-André model are of different universality classes. Because the fidelity susceptibility and the generalized adiabatic susceptibility are directly connected to the dynamical structure factor, which is experimentally accessible in the linear response regime, the fidelity susceptibility in quantum disordered systems may be observed experimentally in the near future.
|
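The fidelity susceptibility used in the abstract above, chi_F = 2(1 - |<psi(lambda)|psi(lambda + delta)>|)/delta^2, can be illustrated on a minimal two-level toy model (not the Anderson or Aubry-André Hamiltonians themselves, which require full lattice diagonalization); the sketch below is an assumption-laden illustration of the definition only, checked against the closed form that holds for this toy:

```python
import math

def ground_state(lam, gap):
    """Ground state of the 2x2 Hamiltonian H = lam*sigma_z + gap*sigma_x,
    a minimal stand-in for a parameter-driven quantum model."""
    theta = math.atan2(gap, lam)  # mixing angle of the two-level system
    return (math.sin(theta / 2.0), -math.cos(theta / 2.0))

def fidelity_susceptibility(lam, gap, eps=1e-4):
    """chi_F = 2 * (1 - |<psi(lam)|psi(lam + eps)>|) / eps**2."""
    a = ground_state(lam, gap)
    b = ground_state(lam + eps, gap)
    overlap = abs(a[0] * b[0] + a[1] * b[1])
    return 2.0 * (1.0 - overlap) / eps ** 2

# For this toy model chi_F has the closed form gap^2 / (4 (lam^2 + gap^2)^2),
# so it peaks at the avoided crossing lam = 0, mirroring the maximum of
# chi_F at the quantum critical point discussed in the abstract.
chi_numeric = fidelity_susceptibility(0.5, 1.0)
chi_exact = 1.0 / (4.0 * (0.5 ** 2 + 1.0 ** 2) ** 2)
```

The finite-difference estimate agrees with the closed form, and the susceptibility is indeed largest at the "critical" parameter value of the toy model.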
Interpretability of Black-box Machine Learning Models through Data-view Extraction and Shadow Model Creation ; Deep learning models trained using massive amounts of data tend to capture one view of the data and its associated mapping. Different deep learning models built on the same training data may capture different views of the data based on the underlying techniques used. For explaining the decisions arrived at by black-box deep learning models, we argue that it is essential to reproduce that model's view of the training data faithfully. This faithful reproduction can then be used for explanation generation. We investigate two methods for data-view extraction: a hill-climbing approach and a GAN-driven approach. We then use this synthesized data for creating shadow models for explanation generation: a decision-tree model and a Formal Concept Analysis based model. We evaluate these approaches on a black-box model trained on public datasets and show their usefulness in explanation generation.
|
Uniformly valid confidence intervals post-model-selection ; We suggest general methods to construct asymptotically uniformly valid confidence intervals post-model-selection. The constructions are based on principles recently proposed by Berk et al. (2013). In particular, the candidate models used can be misspecified, the target of inference is model-specific, and coverage is guaranteed for any data-driven model selection procedure. After developing a general theory, we apply our methods to practically important situations where the candidate set of models, from which a working model is selected, consists of fixed design homoskedastic or heteroskedastic linear models, or of binary regression models with general link functions. In an extensive simulation study, we find that the proposed confidence intervals perform remarkably well, even when compared to existing methods that are tailored only for specific model selection procedures.
|
Rigorous results for the distribution of money on connected graphs ; This paper is concerned with general spatially explicit versions of three stochastic models for the dynamics of money that have been introduced and studied numerically by statistical physicists: the uniform reshuffling model, the immediate exchange model and the model with saving propensity. All three models consist of systems of economic agents that consecutively engage in pairwise monetary transactions. Computer simulations performed in the physics literature suggest that, when the number of agents and the average amount of money per agent are large, the distribution of money at equilibrium approaches the exponential distribution for the first model, the gamma distribution with shape parameter two for the second model, and a gamma distribution whose shape parameter depends on the saving propensity for the third model. The main objective of this paper is to give rigorous proofs of, and extend, these conjectures to generalizations of the first two models and a variant of the third model that include local rather than global interactions; i.e., instead of choosing the two interacting agents uniformly at random from the system, the agents are located on the vertex set of a general connected graph and can only interact with their neighbors.
|
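The convergence to the exponential distribution conjectured for the uniform reshuffling model above is easy to reproduce numerically. A minimal sketch (mean-field version, i.e. the complete graph; parameter values are illustrative choices, not from the paper):

```python
import random

def uniform_reshuffling(n_agents=2000, avg_money=1.0, steps=200000, seed=0):
    """Uniform reshuffling model: at each step two agents pool their money
    and split the pot uniformly at random. Total money is conserved."""
    rng = random.Random(seed)
    money = [avg_money] * n_agents
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pot = money[i] + money[j]
        u = rng.random()
        money[i], money[j] = u * pot, (1.0 - u) * pot
    return money

money = uniform_reshuffling()
mean = sum(money) / len(money)
# For an Exponential(mean) equilibrium, P(X < mean) = 1 - 1/e ~ 0.632
frac_below_mean = sum(m < mean for m in money) / len(money)
```

With roughly 100 interactions per agent the empirical distribution is already close to exponential, which is one of the conjectures the paper proves rigorously (and extends to general connected graphs).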
Adaptive Generation Model: A New Ensemble Method ; As a common method in machine learning, the ensemble method is used to train multiple models from a data set and obtain better results through certain combination strategies. The stacking method, a representative ensemble learning method, is often used in machine learning competitions such as Kaggle. This paper proposes a variant of the stacking model based on the idea of gcForest, namely the Adaptive Generation Model (AGM). Adaptive generation is performed not only in the horizontal direction, to expand the width of each layer's model, but also in the vertical direction, to expand the depth of the model. The base models of AGM all come from preset basic machine learning models. In addition, a feature augmentation method is added between layers to further improve the overall accuracy of the model. Finally, through comparative experiments on 7 data sets, the results show that the accuracy of AGM is better than that of its previous models.
|
LocalGLMnet: interpretable deep learning for tabular data ; Deep learning models have gained great popularity in statistical modeling because they lead to very competitive regression models, often outperforming classical statistical models such as generalized linear models. The disadvantage of deep learning models is that their solutions are difficult to interpret and explain, and variable selection is not easily possible because deep learning models solve feature engineering and variable selection internally in a non-transparent way. Inspired by the appealing structure of generalized linear models, we propose a new network architecture that shares similar features with generalized linear models but provides superior predictive power benefiting from the art of representation learning. This new architecture allows for variable selection of tabular data and for interpretation of the calibrated deep learning model; in fact, our approach provides an additive decomposition in the spirit of Shapley values and integrated gradients.
|
Tensor Models: extending the matrix models structures and methods ; In this text we review a few structural properties of matrix models that should, at least partly, generalize to random tensor models. We review some aspects of the loop equations for matrix models and their algebraic counterpart for tensor models. Despite the generic title of this review, we in particular invoke the topological recursion. We explain its appearance in matrix models. Then we state that a family of tensor models provides a natural example which satisfies a version of the most general form of the topological recursion, named the blobbed topological recursion. We discuss the difficulties of extending the technical solutions existing for matrix models to tensor models. Some proofs are not published yet but will be given in a coming paper; the rest of the results are well known in the literature.
|
Exact Solution to a Class of Generalized Kitaev Spin-1/2 Models in Arbitrary Dimensions ; We construct a class of exactly solvable generalized Kitaev spin-1/2 models in arbitrary dimensions, which is beyond the category of quantum compass models. The Jordan-Wigner transformation is employed to prove the exact solvability. An exactly solvable quantum spin-1/2 model can be mapped to a gas of free Majorana fermions coupled to static Z2 gauge fields. We classify these exactly solvable models according to their parent models. Any model belonging to this class can be generated by one of the parent models. For illustration, a two-dimensional (2D) tetragon-octagon model and a three-dimensional (3D) xy bond model are studied.
|
Deep Verifier Networks: Verification of Deep Discriminative Models with Deep Generative Models ; AI safety is a major concern in many deep learning applications such as autonomous driving. Given a trained deep learning model, an important natural problem is how to reliably verify the model's prediction. In this paper, we propose a novel framework, deep verifier networks (DVN), to verify the inputs and outputs of deep discriminative models with deep generative models. Our proposed model is based on conditional variational autoencoders with disentanglement constraints. We give both intuitive and theoretical justifications for the model. Our verifier network is trained independently of the prediction model, which eliminates the need to retrain the verifier network for a new model. We test the verifier network on out-of-distribution detection and adversarial example detection problems, as well as anomaly detection problems in structured prediction tasks such as image caption generation. We achieve state-of-the-art results in all of these problems.
|
Automatic Backward Filtering Forward Guiding for Markov processes and graphical models ; We incorporate discrete and continuous time Markov processes as building blocks into probabilistic graphical models with latent and observed variables. We introduce the automatic Backward Filtering Forward Guiding (BFFG) paradigm (Mider et al., 2021) for programmable inference on latent states and model parameters. Our starting point is a generative model, a forward description of the probabilistic process dynamics. We backpropagate the information provided by observations through the model to transform the generative (forward) model into a pre-conditional model guided by the data. It approximates the actual conditional model, with known likelihood-ratio between the two. The backward filter and the forward change of measure are suitable to be incorporated into a probabilistic programming context because they can be formulated as a set of transformation rules. The guided generative model can be incorporated in different approaches to efficiently sample latent states and parameters conditional on observations. We show applicability in a variety of settings, including Markov chains with discrete state space, interacting particle systems, state space models, branching diffusions and Gamma processes.
|
Generating Diverse Translation from Model Distribution with Dropout ; Despite the improvement of translation quality, neural machine translation (NMT) often suffers from a lack of diversity in its generation. In this paper, we propose to generate diverse translations by deriving a large number of possible models with Bayesian modelling and sampling models from them for inference. The possible models are obtained by applying concrete dropout to the NMT model, and each of them has specific confidence for its prediction, which corresponds to a posterior model distribution under specific training data in the principle of Bayesian modelling. With variational inference, the posterior model distribution can be approximated with a variational distribution, from which the final models for inference are sampled. We conducted experiments on Chinese-English and English-German translation tasks, and the results show that our method makes a better trade-off between diversity and accuracy.
|
Hypergeometric viable models in f(R) gravity ; A cosmologically viable hypergeometric model in the modified gravity theory f(R) is found from the need for asymptoticity towards Lambda-CDM, the existence of an inflection point in the f(R) curve, and the conditions of viability given by the phase space curves (m, r), where m and r are characteristic functions of the model. To analyze the constraints associated with the viability requirements, the models were expressed in terms of a dimensionless variable, i.e. R -> x and f(R) -> y(x) = x + h(x)lambda, where h(x) represents the deviation of the model from General Relativity. Using the geometric properties imposed by the inflection point, differential equations were constructed to relate h'(x) and h''(x), and the solutions found were Starobinsky (2007) and Hu-Sawicki type models; nonetheless, it was found that these differential equations are particular cases of a hypergeometric differential equation, so that these models can be obtained from a general hypergeometric model. The parameter domains of this model were analyzed to make the model viable.
|
ROBY: Evaluating the Robustness of a Deep Model by its Decision Boundaries ; With the successful application of deep learning models in many real-world tasks, model robustness becomes more and more critical. Often, we evaluate the robustness of deep models by attacking them with purposely generated adversarial samples, which is computationally costly and dependent on the specific attackers and the model types. This work proposes a generic evaluation metric, ROBY, a novel attack-independent robustness measure based on the model's decision boundaries. Independent of adversarial samples, ROBY uses inter-class and intra-class statistical features to capture the features of the model's decision boundaries. We experimented on ten state-of-the-art deep models and showed that ROBY matches the robustness gold standard of attack success rate (ASR) by a strong first-order generic attacker, with only 1% of the time cost. To the best of our knowledge, ROBY is the first lightweight attack-independent robustness evaluation metric that can be applied to a wide range of deep models. The code of ROBY is open sourced at https://github.com/baaaad/ROBY-Evaluating-the-Robustness-of-a-Deep-Model-by-its-Decision-Boundaries.
|
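The abstract above does not spell out ROBY's statistics, but the idea of an attack-independent score built from inter-class and intra-class feature statistics can be sketched as follows. This is a hypothetical illustration, not the published ROBY formula: it scores a model's feature embeddings by inter-class centroid separation relative to intra-class spread:

```python
import math

def _centroid(points):
    n = len(points)
    return [sum(p[d] for p in points) / n for d in range(len(points[0]))]

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def decision_boundary_score(features_by_class):
    """ROBY-style robustness proxy (a sketch, not the paper's exact metric):
    inter-class centroid separation divided by mean intra-class spread.
    Larger values suggest decision boundaries farther from the data."""
    centroids = {c: _centroid(pts) for c, pts in features_by_class.items()}
    n_points = sum(len(pts) for pts in features_by_class.values())
    intra = sum(_dist(p, centroids[c])
                for c, pts in features_by_class.items() for p in pts) / n_points
    classes = list(centroids)
    pairs = [(a, b) for i, a in enumerate(classes) for b in classes[i + 1:]]
    inter = sum(_dist(centroids[a], centroids[b]) for a, b in pairs) / len(pairs)
    return inter / (intra + 1e-12)

# Well-separated classes should score higher than overlapping ones.
well_separated = decision_boundary_score(
    {"a": [[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]],
     "b": [[10.0, 10.0], [10.5, 10.0], [10.0, 10.5]]})
overlapping = decision_boundary_score(
    {"a": [[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]],
     "b": [[1.0, 1.0], [1.5, 1.0], [1.0, 1.5]]})
```

No adversarial samples are needed, which is the efficiency argument made in the abstract.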
Structural risk minimization for quantum linear classifiers ; Quantum machine learning (QML) models based on parameterized quantum circuits are often highlighted as candidates for quantum computing's near-term "killer application". However, the understanding of the empirical and generalization performance of these models is still in its infancy. In this paper we study how to balance between training accuracy and generalization performance (also called structural risk minimization) for two prominent QML models introduced by Havlíček et al. (Nature, 2019) and Schuld and Killoran (PRL, 2019). Firstly, using relationships to well-understood classical models, we prove that two model parameters, i.e., the dimension of the sum of the images and the Frobenius norm of the observables used by the model, closely control the models' complexity and therefore their generalization performance. Secondly, using ideas inspired by process tomography, we prove that these model parameters also closely control the models' ability to capture correlations in sets of training examples. In summary, our results give rise to new options for structural risk minimization for QML models.
|
GAN Cocktail: mixing GANs without dataset access ; Today's generative models are capable of synthesizing high-fidelity images, but each model specializes on a specific target domain. This raises the need for model merging: combining two or more pretrained generative models into a single unified one. In this work we tackle the problem of model merging, given two constraints that often come up in the real world: (1) no access to the original training data, and (2) without increasing the size of the neural network. To the best of our knowledge, model merging under these constraints has not been studied thus far. We propose a novel, two-stage solution. In the first stage, we transform the weights of all the models to the same parameter space by a technique we term model rooting. In the second stage, we merge the rooted models by averaging their weights and fine-tuning them for each specific domain, using only data generated by the original trained models. We demonstrate that our approach is superior to baseline methods and to existing transfer learning techniques, and investigate several applications.
|
On-Policy Model Errors in Reinforcement Learning ; Model-free reinforcement learning algorithms can compute policy gradients given sampled environment transitions, but require large amounts of data. In contrast, model-based methods can use the learned model to generate new data, but model errors and bias can render learning unstable or suboptimal. In this paper, we present a novel method that combines real-world data and a learned model in order to get the best of both worlds. The core idea is to exploit the real-world data for on-policy predictions and use the learned model only to generalize to different actions. Specifically, we use the data as time-dependent on-policy correction terms on top of a learned model, to retain the ability to generate data without accumulating errors over long prediction horizons. We motivate this method theoretically and show that it counteracts an error term for model-based policy improvement. Experiments on MuJoCo and PyBullet benchmarks show that our method can drastically improve existing model-based approaches without introducing additional tuning parameters.
|
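The on-policy correction idea above can be made concrete with a one-dimensional toy dynamics (a sketch under assumed scalar dynamics, not the paper's MuJoCo setup): the residual between a real logged transition and the model's prediction cancels the model's bias at that state, and the model is used only to extrapolate to a different action:

```python
def model(s, a):
    """A deliberately biased 'learned' dynamics model.
    True dynamics (assumed for this toy): s' = s + a; the model adds a 0.3 bias."""
    return s + a + 0.3

def on_policy_corrected(s, a_new, s_real_next, a_logged):
    """Predict the next state for a new action a_new at state s, using a real
    logged transition (s, a_logged) -> s_real_next as an on-policy correction."""
    correction = s_real_next - model(s, a_logged)  # model residual on real data
    return model(s, a_new) + correction

# Logged real transition: s=1.0, a=0.5 -> s'=1.5. Query a different action a=0.2:
pred = on_policy_corrected(1.0, 0.2, 1.5, 0.5)
```

The corrected prediction recovers the true next state 1.2 exactly because the additive model bias cancels, whereas the raw model alone would be off by 0.3.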
OSOA: One-Shot Online Adaptation of Deep Generative Models for Lossless Compression ; Explicit deep generative models (DGMs), e.g., VAEs and Normalizing Flows, have been shown to offer an effective data modelling alternative for lossless compression. However, DGMs themselves normally require large storage space and thus contaminate the advantage brought by accurate data density estimation. To eliminate the requirement of saving separate models for different target datasets, we propose a novel setting that starts from a pretrained deep generative model and compresses the data batches while adapting the model with a dynamical system for only one epoch. We formalise this setting as One-Shot Online Adaptation (OSOA) of DGMs for lossless compression and propose a vanilla algorithm under this setting. Experimental results show that vanilla OSOA can save significant time versus training bespoke models and space versus using one model for all targets. With the same adaptation step number or adaptation time, vanilla OSOA can exhibit better space efficiency, e.g., 47% less space, than fine-tuning the pretrained model and saving the fine-tuned model. Moreover, we showcase the potential of OSOA and motivate more sophisticated OSOA algorithms by showing further space or time efficiency with multiple updates per batch and early stopping.
|
RoBERTuito: a pre-trained language model for social media text in Spanish ; Since BERT appeared, Transformer language models and transfer learning have become state-of-the-art for natural language understanding tasks. Recently, some works geared towards pre-training specially-crafted models for particular domains, such as scientific papers, medical documents, and user-generated texts, among others. These domain-specific models have been shown to improve performance significantly in most tasks. However, for languages other than English, such models are not widely available. In this work, we present RoBERTuito, a pre-trained language model for user-generated text in Spanish, trained on over 500 million tweets. Experiments on a benchmark of tasks involving user-generated text showed that RoBERTuito outperformed other pre-trained language models in Spanish. In addition to this, our model achieves top results for some English-Spanish tasks of the Linguistic Code-Switching Evaluation benchmark (LinCE) and has also competitive performance against monolingual models in English tasks. To facilitate further research, we make RoBERTuito publicly available at the HuggingFace model hub together with the dataset used to pre-train it.
|
Self-Training Vision Language BERTs with a Unified Conditional Model ; Natural language BERTs are trained with a language corpus in a self-supervised manner. Unlike natural language BERTs, vision language BERTs need paired data to train, which restricts the scale of VL-BERT pretraining. We propose a self-training approach that allows training VL-BERTs from unlabeled image data. The proposed method starts with our unified conditional model, a vision language BERT model that can perform zero-shot conditional generation. Given different conditions, the unified conditional model can generate captions, dense captions, and even questions. We use the labeled image data to train a teacher model and use the trained model to generate pseudo captions on unlabeled image data. We then combine the labeled data and pseudo-labeled data to train a student model. The process is iterated by putting the student model as a new teacher. By using the proposed self-training approach and only 300k unlabeled extra data, we are able to get competitive or even better performance compared to models of similar size trained with 3 million extra image data.
|
A Comparative Study on Language Models for Task-Oriented Dialogue Systems ; The recent development of language models has shown promising results, achieving state-of-the-art performance on various natural language tasks by fine-tuning pretrained models. In task-oriented dialogue (ToD) systems, language models can be used for end-to-end training without relying on dialogue state tracking to track the dialogue history, instead allowing the language models to generate responses according to the context given as input. This paper conducts a comparative study to show the effectiveness and strength of using recent pretrained models, such as BART and T5, for fine-tuning on end-to-end ToD systems. The experimental results show substantial performance improvements after language model fine-tuning. The models produce more fluent responses after adding knowledge to the context, which guides the model to avoid hallucination and generate accurate entities in the generated responses. Furthermore, we found that BART and T5 outperform GPT-based models in BLEU and F1 scores and achieve state-of-the-art performance in a ToD system.
|
Improving Macroeconomic Model Validity and Forecasting Performance with Pooled Country Data using Structural, Reduced Form, and Neural Network Models ; We show that pooling countries across a panel dimension of macroeconomic data can improve, by a statistically significant margin, the generalization ability of structural, reduced-form, and machine learning (ML) methods to produce state-of-the-art results. Using GDP forecasts evaluated on an out-of-sample test set, this procedure reduces root mean squared error by 12% across horizons and models for certain reduced-form models, and by 24% across horizons for dynamic structural general equilibrium models. Removing US data from the training set and forecasting out-of-sample country-wise, we show that reduced-form and structural models are more policy-invariant when trained on pooled data, and outperform a baseline that uses US data only. Given the comparative advantage of ML models in a data-rich regime, we demonstrate that our recurrent neural network model and automated ML approach outperform all tested baseline economic models. Robustness checks indicate that our outperformance is reproducible, numerically stable, and generalizable across models.
|
NICGSlowDown: Evaluating the Efficiency Robustness of Neural Image Caption Generation Models ; Neural image caption generation (NICG) models have received massive attention from the research community due to their excellent performance in visual understanding. Existing work focuses on improving NICG model accuracy, while efficiency is less explored. However, many real-world applications require real-time feedback, which highly relies on the efficiency of NICG models. Recent research observed that the efficiency of NICG models could vary for different inputs. This observation brings in a new attack surface for NICG models: an adversary might be able to slightly change inputs to cause the NICG models to consume more computational resources. To further understand such efficiency-oriented threats, we propose a new attack approach, NICGSlowDown, to evaluate the efficiency robustness of NICG models. Our experimental results show that NICGSlowDown can generate images with human-unnoticeable perturbations that will increase NICG model latency by up to 483.86%. We hope this research could raise the community's concern about the efficiency robustness of NICG models.
|
Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination ; The learned policy of model-free offline reinforcement learning (RL) methods is often constrained to stay within the support of datasets to avoid possible dangerous out-of-distribution actions or states, making it challenging to handle out-of-support regions. Model-based RL methods offer a richer dataset and benefit generalization by generating imaginary trajectories with either trained forward or reverse dynamics models. However, the imagined transitions may be inaccurate, thus downgrading the performance of the underlying offline RL method. In this paper, we propose to augment the offline dataset by using trained bidirectional dynamics models and rollout policies with double check. We introduce conservatism by trusting samples that the forward model and backward model agree on. Our method, confidence-aware bidirectional offline model-based imagination, generates reliable samples and can be combined with any model-free offline RL method. Experimental results on the D4RL benchmarks demonstrate that our method significantly boosts the performance of existing model-free offline RL algorithms and achieves competitive or better scores against baseline methods.
|
Improving mean-field network percolation models with neighbourhood information ; Mean-field theory models of percolation on networks provide analytic estimates of network robustness under node or edge removal. We introduce a new mean-field theory model based on generating functions that includes information about the tree-likeness of each node's local neighbourhood. We show that our new model outperforms all other generating function models in prediction accuracy when testing their estimates on a wide range of real-world network data. We compare the new model's performance against the recently introduced message passing models and provide evidence that the standard version is also outperformed, while the 'loopy' version is only outperformed on a targeted attack strategy. As we show, however, the computational complexity of our model implementation is much lower than that of message passing algorithms. We provide evidence that all discussed models are poor at predicting networks with highly modular structure and dispersed modules, which are also characterised by high mixing times, identifying this as a general limitation of percolation prediction models.
|
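The baseline generating-function machinery mentioned above reduces, in the simplest case, to a one-line self-consistency equation. As a minimal sketch (for bond percolation on an Erdős–Rényi graph with mean degree c, where G0(x) = G1(x) = exp(c(x - 1)), not the neighbourhood-aware model of the paper), the giant-component fraction S solves S = 1 - exp(-c * phi * S):

```python
import math

def giant_component_fraction(c, phi, iters=1000):
    """Mean-field (generating-function) estimate of the giant-component
    fraction under bond percolation with occupation probability phi on an
    Erdos-Renyi graph with mean degree c. Solves S = 1 - exp(-c*phi*S)
    by fixed-point iteration from S = 1."""
    s = 1.0
    for _ in range(iters):
        s = 1.0 - math.exp(-c * phi * s)
    return s

# Percolation threshold is phi_c = 1/c; below it only S = 0 solves the equation.
s_super = giant_component_fraction(4.0, 1.0)   # ~ 0.980
s_sub = giant_component_fraction(4.0, 0.2)     # below phi_c = 0.25, vanishes
```

The paper's contribution is to replace this tree-based estimate with one that also encodes how tree-like each node's local neighbourhood actually is.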
Separate And Diffuse: Using a Pretrained Diffusion Model for Improving Source Separation ; The problem of speech separation, also known as the cocktail party problem, refers to the task of isolating a single speech signal from a mixture of speech signals. Previous work on source separation derived an upper bound for the source separation task in the domain of human speech. This bound is derived for deterministic models. Recent advancements in generative models challenge this bound. We show how the upper bound can be generalized to the case of random generative models. Applying a diffusion model vocoder that was pretrained to model single-speaker voices on the output of a deterministic separation model leads to state-of-the-art separation results. It is shown that this requires one to combine the output of the separation model with that of the diffusion model. In our method, a linear combination is performed, in the frequency domain, using weights that are inferred by a learned model. We show state-of-the-art results on 2, 3, 5, 10, and 20 speakers on multiple benchmarks. In particular, for two speakers, our method is able to surpass what was previously considered the upper performance bound.
|
Edit at your own risk: evaluating the robustness of edited models to distribution shifts ; The current trend toward ever-larger models makes standard retraining procedures an ever more expensive burden. For this reason, there is growing interest in model editing, which enables computationally inexpensive, interpretable, post-hoc model modifications. While many model editing techniques are promising, research on the properties of edited models is largely limited to evaluation of validation accuracy. The robustness of edited models is an important and yet mostly unexplored topic. In this paper, we employ recently developed techniques from the field of deep learning robustness to investigate both how model editing affects the general robustness of a model, as well as the robustness of the specific behavior targeted by the edit. We find that edits tend to reduce general robustness, but that the degree of degradation depends on the editing algorithm and layers chosen. Motivated by these observations we introduce a new model editing algorithm, 1-layer interpolation (1-LI), which uses weight-space interpolation to navigate the trade-off between editing task accuracy and general robustness.
|
Combining General and Personalized Models for Epilepsy Detection with Hyperdimensional Computing ; Epilepsy is a chronic neurological disorder with a significant prevalence. However, there is still no adequate technological support to enable epilepsy detection and continuous outpatient monitoring in everyday life. Hyperdimensional (HD) computing is an interesting alternative for wearable devices, characterized by a much simpler learning process and also lower memory requirements. In this work, we demonstrate a few additional aspects in which HD computing, and the way its models are built and stored, can be used for further understanding, comparing, and creating more advanced machine learning models for epilepsy detection. These possibilities are not feasible with other state-of-the-art models, such as random forests or neural networks. We compare the inter-subject similarity of models per class (seizure and non-seizure), then study the process of creating generalized models from personalized ones, and, in the end, how to combine personalized and generalized models to create hybrid models. This results in improved epilepsy detection performance. We also test knowledge transfer between models created on two different datasets. Finally, all these examples could be highly interesting not only from an engineering perspective, to create better models for wearables, but also from a neurological perspective, to better understand individual epilepsy patterns.
|
HoloDiffusion: Training a 3D Diffusion Model using 2D Images ; Diffusion models have emerged as the best approach for generative modeling of 2D images. Part of their success is due to the possibility of training them on millions if not billions of images with a stable learning objective. However, extending these models to 3D remains difficult for two reasons. First, finding a large quantity of 3D training data is much more complex than for 2D images. Second, while it is conceptually trivial to extend the models to operate on 3D rather than 2D grids, the associated cubic growth in memory and compute complexity makes this infeasible. We address the first challenge by introducing a new diffusion setup that can be trained, end-to-end, with only posed 2D images for supervision; and the second challenge by proposing an image formation model that decouples model memory from spatial memory. We evaluate our method on real-world data, using the CO3D dataset, which has not been used to train 3D generative models before. We show that our diffusion models are scalable, train robustly, and are competitive in terms of sample quality and fidelity with existing approaches for 3D generative modeling.
|
Exact solution for quantum strong long-range models via a generalized Hubbard-Stratonovich transformation ; We present an exact analytical solution for quantum strong long-range models in the canonical ensemble by extending the classical solution proposed in Campa et al., J. Phys. A 36, 6897 (2003). Specifically, we utilize the equivalence between generalized Dicke models and interacting quantum models as a generalization of the Hubbard-Stratonovich transformation. To demonstrate our method, we apply it to the Ising chain in transverse field and discuss its potential application to other models, such as the Fermi-Hubbard model, combined short- and long-range models, and models with antiferromagnetic interactions. Our findings indicate that the critical behaviour of a model is independent of the range of interactions, within the strong long-range regime, and of the dimensionality of the model. Moreover, we show that the order parameter expression is equivalent to that provided by mean-field theory, thus confirming the exactness of the latter. Finally, we examine the algebraic decay of correlations and characterize its dependence on the range of interactions in the full phase diagram.
|
Performance Modeling of Data Storage Systems using Generative Models ; High-precision modeling of systems is one of the main areas of industrial data analysis. Models of systems, their digital twins, are used to predict their behavior under various conditions. We have developed several models of a storage system using machine-learning-based generative models. The system consists of several components: hard disk drive (HDD) and solid-state drive (SSD) storage pools with different RAID schemes and cache. Each storage component is represented by a probabilistic model that describes the probability distribution of the component performance in terms of IOPS and latency, depending on their configuration and external data load parameters. The results of the experiments demonstrate errors of 4-10% for IOPS and 3-16% for latency predictions, depending on the components and models of the system. The predictions show up to 0.99 Pearson correlation with Little's law, which can be used for unsupervised reliability checks of the models. In addition, we present novel data sets that can be used for benchmarking regression algorithms, conditional generative models, and uncertainty estimation methods in machine learning.
|
Multilevel Large Language Models for Everyone ; Large language models have made significant progress in the past few years. However, they are either generic or field-specific, splitting the community into different groups. In this paper, we unify these large language models into a larger map, where the generic and specific models are linked together and can improve each other, based on the user's personal input and information from the internet. The idea of linking several large language models together is inspired by the functionality of the human brain. Specific regions of the brain cortex handle certain low-level functions, and these regions can jointly work together to achieve more complex, high-level functions. This behavior of the human brain cortex motivates the design of multilevel large language models that contain global-level, field-level, and user-level models. The user-level models run on local machines to achieve efficient response times and protect the user's privacy. Such multilevel models reduce redundancy and perform better than single-level models. The proposed multilevel idea can be applied in various applications, such as natural language processing, computer vision tasks, professional assistance, business, and healthcare.
|
Seismic Foundation Model (SFM): a new-generation deep learning model in geophysics ; While computer science has seen remarkable advancements in foundation models, they remain underexplored in geoscience. Addressing this gap, we introduce a workflow to develop geophysical foundation models, including data preparation, model pretraining, and adaptation to downstream tasks. From 192 globally collected 3D seismic volumes, we create a carefully curated dataset of 2,286,422 2D seismic images. Fully using these unlabeled images, we employ self-supervised learning to pretrain a Transformer-based Seismic Foundation Model (SFM) for producing all-purpose seismic features that work across various tasks and surveys. Through experiments on seismic facies classification, geobody identification, interpolation, denoising, and inversion, our pretrained model demonstrates versatility, generalization, scalability, and superior performance over baseline models. In conclusion, we provide a foundation model and a vast dataset to advance AI in geophysics, addressing the key challenges of applying AI in geophysics (poor generalization, a lack of labels, and repetitive training of task-specific models) and paving the way for future innovations in geoscience.
|
A Cognitive Model of an Epistemic Community: Mapping the Dynamics of Shallow Lake Ecosystems ; We used fuzzy cognitive mapping (FCM) to develop a generic shallow lake ecosystem model by augmenting the individual cognitive maps drawn by 8 scientists working in the area of shallow lake ecology. We calculated graph-theoretical indices of the individual cognitive maps and of the collective cognitive map produced by augmentation. The graph-theoretical indices revealed internal cycles showing nonlinear dynamics in the shallow lake ecosystem. The ecological processes were organized democratically, without a top-down hierarchical structure. The steady-state condition of the generic model was a characteristic turbid shallow lake ecosystem, since there were no dynamic environmental changes that could cause shifts between a turbid and a clear-water state, and the generic model indicated that only a dynamic disturbance regime could maintain the clear-water state. The model developed herein captured the empirical behavior of shallow lakes and contained the basic model of the Alternative Stable States Theory. In addition, our model expanded the basic model by quantifying the relative effects of connections and by extending it. In our expanded model we ran 4 simulations: harvesting submerged plants, nutrient reduction, fish removal without nutrient reduction, and biomanipulation. Only biomanipulation, which included fish removal and nutrient reduction, had the potential to shift the turbid state into the clear-water state. The structure and relationships in the generic model, as well as the outcomes of the management simulations, were supported by actual field studies in shallow lake ecosystems. Thus, the fuzzy cognitive mapping methodology enabled us to understand the complex structure of shallow lake ecosystems as a whole and to obtain a valid generic model based on the tacit knowledge of experts in the field.
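The inference step of a fuzzy cognitive map of the kind described above is a simple fixed-point iteration: each concept's activation becomes the squashed weighted sum of its causes. A minimal sketch, not the authors' actual lake map; the toy weights and the sigmoid squashing function (a common FCM choice) are illustrative assumptions:

```python
import math

def fcm_step(state, weights, lam=1.0):
    """One fuzzy-cognitive-map update: concept i takes the sigmoid-squashed
    weighted sum of all concepts j, using edge weights[j][i] (cause -> effect)."""
    n = len(state)
    return [
        1.0 / (1.0 + math.exp(-lam * sum(weights[j][i] * state[j] for j in range(n))))
        for i in range(n)
    ]

def fcm_steady_state(state, weights, tol=1e-6, max_iter=1000):
    """Iterate the map until activations settle (or give up after max_iter)."""
    for _ in range(max_iter):
        nxt = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(nxt, state)) < tol:
            return nxt
        state = nxt
    return state
```

With an expert-elicited weight matrix in place of a toy one, iterating to a steady state is how management scenarios such as biomanipulation are simulated: clamp the managed concepts and read off the equilibrium of the rest.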
|
Volumes of logistic regression models with applications to model selection ; Logistic regression models with n observations and q linearly independent covariates are shown to have Fisher information volumes which are bounded below by π^q and above by (n choose q) π^q. This is proved with a novel generalization of the classical theorems of Pythagoras and de Gua, which is of independent interest. The finding that the volume is always finite is new, and it implies that the volume can be directly interpreted as a measure of model complexity. The volume is shown to be a continuous function of the design matrix X at generic X, but to be discontinuous in general. This means that models with sparse design matrices can be significantly less complex than nearby models, so the resulting model-selection criterion prefers sparse models. This is analogous to the way that ℓ1-regularisation tends to prefer sparse model fits, though in our case this behaviour arises spontaneously from general principles. Lastly, an unusual topological duality is shown to exist between the ideal boundaries of the natural and expectation parameter spaces of logistic regression models.
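The stated bounds are directly computable, which is what makes the volume usable as a model-complexity penalty. A minimal sketch of evaluating π^q ≤ volume ≤ C(n, q)·π^q for given n and q (the function name is ours):

```python
from math import comb, pi

def volume_bounds(n: int, q: int):
    """Bounds on the Fisher-information volume of a logistic regression model
    with n observations and q linearly independent covariates:
    pi^q <= volume <= C(n, q) * pi^q."""
    lower = pi ** q
    upper = comb(n, q) * pi ** q
    return lower, upper
```

For n = 10 observations and q = 2 covariates, the volume lies between π² ≈ 9.87 and 45π² ≈ 444.1; even the worst case is finite, so its logarithm can enter a model-selection criterion.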
|
Variational Deep Semantic Hashing for Text Documents ; As the amount of textual data has been rapidly increasing over the past decade, efficient similarity search methods have become a crucial component of large-scale information retrieval systems. A popular strategy is to represent original data samples by compact binary codes through hashing. A spectrum of machine learning methods have been utilized, but they often lack the expressiveness and flexibility in modeling needed to learn effective representations. The recent advances of deep learning in a wide range of applications have demonstrated its capability to learn robust and powerful feature representations for complex data. In particular, deep generative models naturally combine the expressiveness of probabilistic generative models with the high capacity of deep neural networks, which is very suitable for text modeling. However, little work has leveraged the recent progress in deep learning for text hashing. In this paper, we propose a series of novel deep document generative models for text hashing. The first proposed model is unsupervised, while the second one is supervised by utilizing document labels/tags for hashing. The third model further considers document-specific factors that affect the generation of words. The probabilistic generative formulation of the proposed models provides a principled framework for model extension, uncertainty estimation, simulation, and interpretability. Based on variational inference and reparameterization, the proposed models can be interpreted as encoder-decoder deep neural networks and thus they are capable of learning complex nonlinear distributed representations of the original documents. We conduct a comprehensive set of experiments on four public testbeds. The experimental results demonstrate the effectiveness of the proposed supervised learning models for text hashing.
|
DeepFlow: History Matching in the Space of Deep Generative Models ; The calibration of a reservoir model with observed transient data of fluid pressures and rates is a key task in obtaining a predictive model of the flow and transport behaviour of the earth's subsurface. The model calibration task, commonly referred to as history matching, can be formalised as an ill-posed inverse problem where we aim to find the underlying spatial distribution of petrophysical properties that explain the observed dynamic data. We use a generative adversarial network pretrained on geostatistical object-based models to represent the distribution of rock properties for a synthetic model of a hydrocarbon reservoir. The dynamic behaviour of the reservoir fluids is modelled using a transient two-phase incompressible Darcy formulation. We invert for the underlying reservoir properties by first modeling property distributions using the pretrained generative model, then using the adjoint equations of the forward problem to perform gradient descent on the latent variables that control the output of the generative model. In addition to the dynamic observation data, we include well rock-type constraints by introducing an additional objective function. Our contribution shows that for a synthetic test case, we are able to obtain solutions to the inverse problem by optimising in the latent variable space of a deep generative model, given a set of transient observations of a nonlinear forward problem.
|
Generation of Consistent Sets of Multi-Label Classification Rules with a Multi-Objective Evolutionary Algorithm ; Multi-label classification consists in classifying an instance into two or more classes simultaneously. It is a very challenging task present in many real-world applications, such as classification of biology, image, video, audio, and text. Recently, the interest in interpretable classification models has grown, partially as a consequence of regulations such as the General Data Protection Regulation. In this context, we propose a multi-objective evolutionary algorithm that generates multiple rule-based multi-label classification models, allowing users to choose among models that offer different compromises between predictive power and interpretability. An important contribution of this work is that, different from most algorithms, which usually generate models based on lists (ordered collections) of rules, our algorithm generates models based on sets (unordered collections) of rules, increasing interpretability. Also, by employing a conflict-avoidance algorithm during rule creation, every rule within a given model is guaranteed to be consistent with every other rule in the same model. Thus, no conflict resolution strategy is required, evolving simpler models. We conducted experiments on synthetic and real-world datasets and compared our results with state-of-the-art algorithms in terms of predictive performance (F-Score) and interpretability (model size), and demonstrate that our best models had comparable F-Score and smaller model sizes.
|
Short Literature Review for a General Player Model Based on Behavlets ; We present the first in a series of three academic essays which deal with the question of how to build a generalized player model. We begin with a proposition: a general model of players requires parameters for the subjective experience of play, including at least player psychology, game structure, and actions of play. Based on this proposition, we pose three linked research questions, which make incomplete progress toward a generalised player model: RQ1: what is a necessary and sufficient foundation to a general player model?; RQ2: can such a foundation improve performance of a computational-intelligence-based player model?; and RQ3: can such a player model improve efficacy of adaptive artificial intelligence in games? We set out the arguments behind these research questions in each of the three essays, presented as three preprints. The first essay, in this preprint, reviews the literature for the core foundations of a general player model. We then propose a plan for future work to systematically extend the review and thus provide an empirical answer to RQ1 above. This work will directly support the proposed approach to addressing RQ2 and RQ3 above. This review was developed to support our 'Behavlets' approach to player modelling; therefore, if citing this work, please use the relevant citation: Cowley B, Charles D. Behavlets: a Method for Practical Player Modelling using Psychology-Based Player Traits and Domain Specific Features. User Modelling and User-Adapted Interaction. 2016 Feb 8; online Special Issue on Personality in Personalized Systems150.
|
Attention Forcing for Sequence-to-Sequence Model Training ; Autoregressive sequence-to-sequence models with an attention mechanism have achieved state-of-the-art performance in many tasks such as machine translation and speech synthesis. These models can be difficult to train. The standard approach, teacher forcing, guides a model with the reference output history during training. The problem is that the model is unlikely to recover from its mistakes during inference, where the reference output is replaced by generated output. Several approaches deal with this problem, largely by guiding the model with generated output history. To make training stable, these approaches often require a heuristic schedule or an auxiliary classifier. This paper introduces attention forcing, which guides the model with generated output history and reference attention. This approach can train the model to recover from its mistakes, in a stable fashion, without the need for a schedule or a classifier. In addition, it allows the model to generate output sequences aligned with the references, which can be important for cascaded systems like many speech synthesis systems. Experiments on speech synthesis show that attention forcing yields significant performance gain. Experiments on machine translation show that for tasks where various reorderings of the output are valid, guiding the model with generated output history is challenging, while guiding the model with reference attention is beneficial.
|
Minimum Excess Risk in Bayesian Learning ; We analyze the best achievable performance of Bayesian learning under generative models by defining and upper-bounding the minimum excess risk (MER): the gap between the minimum expected loss attainable by learning from data and the minimum expected loss that could be achieved if the model realization were known. The definition of MER provides a principled way to define different notions of uncertainty in Bayesian learning, including the aleatoric uncertainty and the minimum epistemic uncertainty. Two methods for deriving upper bounds for the MER are presented. The first method, generally suitable for Bayesian learning with a parametric generative model, upper-bounds the MER by the conditional mutual information between the model parameters and the quantity being predicted given the observed data. It allows us to quantify the rate at which the MER decays to zero as more data becomes available. Under realizable models, this method also relates the MER to the richness of the generative function class, notably the VC dimension in binary classification. The second method, particularly suitable for Bayesian learning with a parametric predictive model, relates the MER to the minimum estimation error of the model parameters from data. It explicitly shows how the uncertainty in model parameter estimation translates to the MER and to the final prediction uncertainty. We also extend the definition and analysis of the MER to the setting with multiple model families and the setting with nonparametric models. Along the way, we draw some comparisons between the MER in Bayesian learning and the excess risk in frequentist learning.
|
Population and Inequality Dynamics in Simple Economies ; While the use of spatial agent-based and individual-based models has flourished across many scientific disciplines, the complexities these models generate are often difficult to manage and quantify. This research reduces population-driven, spatial modeling of individuals to the simplest configurations and parameters: an equal-resource-opportunity landscape with equally capable individuals; and asks the question, "Will valid complex population and inequality dynamics emerge from this simple economic model?" Two foraging economies are modeled: subsistence and surplus. The resulting, emergent population dynamics are characterized by their sensitivities to agent and landscape parameters. The various steady and oscillating regimes of single-species population dynamics are generated by appropriate selection of model growth parameters. These emergent dynamics are shown to be consistent with the equation-based, continuum modeling of single-species populations in biology and ecology. The intrinsic growth rates, carrying capacities, and delay parameters of these models are implied for these simple economies. Aggregate measures of individual distributions are used to understand the sensitivities to model parameters. New local measures are defined to describe complex behaviors driven by spatial effects, especially extinctions. This simple economic model is shown to generate significantly complex population and inequality dynamics. Model parameters generating the intrinsic growth rate have strong effects on these dynamics, including large variations in inequality. Significant inequality effects are shown to be caused by birth costs above and beyond their contribution to the intrinsic growth rate. The highest levels of inequality are found during the initial non-equilibrium period and are driven by factors different from those driving steady-state inequality.
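The steady and oscillating single-species regimes mentioned above can be reproduced by the classical discrete logistic equation, the equation-based continuum counterpart that the emergent dynamics are compared against. A minimal sketch (parameter values are illustrative):

```python
def simulate_population(r, K, x0, steps):
    """Discrete logistic growth: x_{t+1} = x_t + r * x_t * (1 - x_t / K).
    Small intrinsic growth rate r gives a steady approach to the carrying
    capacity K; larger r (beyond 2) destabilizes the fixed point and
    produces oscillating regimes."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + r * x * (1.0 - x / K))
    return xs
```

With r = 0.5 the population settles at the carrying capacity K; with r = 2.5 the same equation cycles, mirroring the steady and oscillating regimes the agent-based model produces emergently.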
|
A Hybrid Model for Combining Neural Image Caption and k-Nearest Neighbor Approach for Image Captioning ; A hybrid model is proposed that integrates two popular image captioning methods to generate a text-based summary describing the contents of the image. The two image captioning models are the Neural Image Caption (NIC) and the k-nearest neighbor approach. These are trained individually on the training set. We extract a set of five features from the validation set for evaluating the results of the two models, which in turn is used to train a logistic regression classifier. The BLEU-4 scores of the two models are compared for generating the binary-valued ground truth for the logistic regression classifier. For the test set, the input images are first passed separately through the two models to generate the individual captions. The five-dimensional feature set extracted from the two models is passed to the logistic regression classifier to take a decision regarding the final caption generated, which is the better of the two captions generated by the models. Our implementation of the k-nearest neighbor model achieves a BLEU-4 score of 15.95 and the NIC model achieves a BLEU-4 score of 16.01 on the benchmark Flickr8k dataset. The proposed hybrid model is able to achieve a BLEU-4 score of 18.20, proving the validity of our approach.
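The decision stage described above reduces to a binary logistic regression over the five extracted features. A minimal sketch of the selection step only; the feature layout, the weights, and the "probability above 0.5 means prefer NIC" convention are our illustrative assumptions, not the paper's exact setup:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def choose_caption(features, weights, bias, caption_nic, caption_knn):
    """Score the five-dimensional feature vector with a trained logistic
    regression and return the caption predicted to score higher on BLEU-4.
    Convention (ours): probability > 0.5 means 'prefer the NIC caption'."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return caption_nic if sigmoid(z) > 0.5 else caption_knn
```

In the full pipeline the weights and bias come from fitting on the validation set against the binary-valued BLEU-4 ground truth.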
|
Semi-supervised teacher-student deep neural network for materials discovery ; Data-driven generative machine learning models have recently emerged as one of the most promising approaches for new materials discovery. While the generator models can generate millions of candidates, it is critical to train fast and accurate machine learning models to filter out stable, synthesizable materials with desired properties. However, such efforts to build supervised regression or classification screening models have been severely hindered by the lack of unstable or unsynthesizable samples, which usually are not collected and deposited in materials databases such as ICSD and the Materials Project (MP). At the same time, there is a significant amount of unlabelled data available in these databases. Here we propose a semi-supervised deep neural network (TSDNN) model for high-performance formation energy and synthesizability prediction, which is achieved via its unique teacher-student dual network architecture and its effective exploitation of the large amount of unlabeled data. For formation-energy-based stability screening, our semi-supervised classifier achieves an absolute 10.3% accuracy improvement compared to the baseline CGCNN regression model. For synthesizability prediction, our model significantly increases the baseline PU learning's true positive rate from 87.9% to 97.9% using 1/49 of the model parameters. To further prove the effectiveness of our models, we combined our TSDNN-energy and TSDNN-synthesizability models with our CubicGAN generator to discover novel stable cubic structures. Out of 1000 candidate samples recommended by our models, 512 have negative formation energies as validated by our DFT formation energy calculations. Our experimental results show that our semi-supervised deep neural networks can significantly improve the screening accuracy in large-scale generative materials design.
|
DiffusionBERT: Improving Generative Masked Language Models with Diffusion Models ; We present DiffusionBERT, a new generative masked language model based on discrete diffusion models. Diffusion models and many pretrained language models have a shared training objective, i.e., denoising, making it possible to combine the two powerful models and enjoy the best of both worlds. On the one hand, diffusion models offer a promising training strategy that helps improve the generation quality. On the other hand, pretrained denoising language models (e.g., BERT) can be used as a good initialization that accelerates convergence. We explore training BERT to learn the reverse process of a discrete diffusion process with an absorbing state and elucidate several designs to improve it. First, we propose a new noise schedule for the forward diffusion process that controls the degree of noise added at each step based on the information of each token. Second, we investigate several designs of incorporating the time step into BERT. Experiments on unconditional text generation demonstrate that DiffusionBERT achieves significant improvement over existing diffusion models for text (e.g., D3PM and Diffusion-LM) and previous generative masked language models in terms of perplexity and BLEU score.
|
Smart Contract Generation for Inter-Organizational Process Collaboration ; Currently, inter-organizational process collaboration (IOPC) has been widely used in the design and development of distributed systems that support business process execution. Blockchain-based IOPC can establish trusted data sharing among participants, attracting more and more attention. The core of such study is to translate the graphical model (e.g., BPMN) into program code, called a smart contract, that can be executed in the blockchain environment. In this context, a proper smart contract plays a vital role in the correct implementation of blockchain-based IOPC. In fact, the quality of the graphical model affects the smart contract generation. Problematic models (e.g., with deadlock) will result in incorrect contracts causing unexpected behaviours. To avoid this undesired implementation, this paper explores generating smart contracts by using the verified formal model as input instead of the graphical model. Specifically, we introduce a prototype framework that supports the automatic generation of smart contracts, providing an end-to-end solution from modeling, verification, and translation to implementation. One of the cores of this framework is to provide a CSP-based formalization for the BPMN collaboration model from the perspective of message interaction. This formalization provides precise execution semantics and model verification for graphical models, and a verified formal model for smart contract generation. Another novelty is that it introduces a syntax-tree-based translation algorithm to directly map the formal model into a smart contract. The required formalism, verification, and translation techniques are transparent to users without imposing additional burdens. Finally, a set of experiments shows the effectiveness of the framework.
|
Correcting Model Misspecification via Generative Adversarial Networks ; Machine learning models are often misspecified in the likelihood, which leads to a lack of robustness in the predictions. In this paper, we introduce a framework for correcting likelihood misspecifications in several paradigm-agnostic noisy prior models and test the model's ability to remove the misspecification. The ABC-GAN framework introduced is a novel generative modeling paradigm, which combines Generative Adversarial Networks (GANs) and Approximate Bayesian Computation (ABC). This new paradigm assists the existing GANs by incorporating any subjective knowledge available about the modeling process via ABC, as a regularizer, resulting in a partially interpretable model that operates well under low data regimes. At the same time, unlike any Bayesian analysis, the explicit knowledge need not be perfect, since the generator in the GAN can be made arbitrarily complex. ABC-GAN eliminates the need for summary statistics and distance metrics, as the discriminator implicitly learns them, and enables simultaneous specification of multiple generative models. The model misspecification is simulated in our experiments by introducing noise of various biases and variances. The correction term is learnt via the ABC-GAN with skip connections, referred to as skip-GAN. The strength of the skip connection indicates the amount of correction needed, i.e., how misspecified the prior model is. Based on a simple experimental setup, we show that the ABC-GAN models not only correct the misspecification of the prior, but also perform as well as or better than the respective priors under noisier conditions. In this proposal, we show that ABC-GANs get the best of both worlds.
|
Some General Aspects of Coset Models and Topological Kazama-Suzuki Models ; We study global aspects of N=2 Kazama-Suzuki coset models by investigating topological G/H Kazama-Suzuki models in a Lagrangian framework based on gauged Wess-Zumino-Witten models. We first generalize Witten's analysis of the holomorphic factorization of bosonic G/H models to models with N=1 and N=2 supersymmetry. We also find some new anomaly-free and supersymmetric models based on non-diagonal embeddings of the gauge group. We then explain the basic properties (action, symmetries, metric independence, ...) of the topologically twisted G/H Kazama-Suzuki models. We explain how all of the above generalizes to non-trivial gauge bundles. We employ the path integral methods of localization and abelianization (shown to be valid also for non-trivial bundles) to establish that the twisted G/H models can be localized to bosonic H/H models (with certain quantum corrections), and can hence be reduced to an Abelian bosonic T/T model, T a maximal torus of H. We also present the action and the symmetries of the coupling of these models to topological gravity. We determine the bosonic observables for all the models based on classical flag manifolds, and the bosonic observables and their fermionic descendants for models based on complex Grassmannians.
|
Controversial stimuli: pitting neural networks against each other as models of human recognition ; Distinct scientific theories can make similar predictions. To adjudicate between theories, we must design experiments for which the theories make distinct predictions. Here we consider the problem of comparing deep neural networks as models of human visual recognition. To efficiently compare models' ability to predict human responses, we synthesize controversial stimuli: images for which different models produce distinct responses. We applied this approach to two visual recognition tasks, handwritten digits (MNIST) and objects in small natural images (CIFAR-10). For each task, we synthesized controversial stimuli to maximize the disagreement among models which employed different architectures and recognition algorithms. Human subjects viewed hundreds of these stimuli, as well as natural examples, and judged the probability of presence of each digit/object category in each image. We quantified how accurately each model predicted the human judgments. The best-performing models were a generative Analysis-by-Synthesis model (based on variational autoencoders) for MNIST and a hybrid discriminative-generative Joint Energy Model for CIFAR-10. These DNNs, which model the distribution of images, performed better than purely discriminative DNNs, which learn only to map images to labels. None of the candidate models fully explained the human responses. Controversial stimuli generalize the concept of adversarial examples, obviating the need to assume a ground-truth model. Unlike natural images, controversial stimuli are not constrained to the stimulus distribution models are trained on, thus providing severe out-of-distribution tests that reveal the models' inductive biases. Controversial stimuli therefore provide powerful probes of discrepancies between models and human perception.
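The synthesis idea (optimize an input so that two models disagree as strongly as possible) can be illustrated with a toy gradient-ascent loop on two linear "models". Everything here, the linear classifiers, the difference-of-sigmoids objective, and the learning rate, is an illustrative assumption standing in for the paper's DNNs:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def synthesize_controversial(w1, w2, steps=500, lr=0.5):
    """Toy 2-D controversial-stimulus synthesis: gradient ascent on x to push
    model 1 (weights w1) toward class 1 while pushing model 2 (weights w2)
    toward class 0, i.e. maximize sigma(w1.x) - sigma(w2.x)."""
    x = [0.0, 0.0]
    for _ in range(steps):
        p1 = sigmoid(sum(a * b for a, b in zip(w1, x)))
        p2 = sigmoid(sum(a * b for a, b in zip(w2, x)))
        # gradient of sigma(w.x) w.r.t. x is sigma * (1 - sigma) * w
        grad = [p1 * (1 - p1) * a - p2 * (1 - p2) * b for a, b in zip(w1, w2)]
        x = [xi + lr * gi for xi, gi in zip(x, grad)]
    return x
```

The resulting x is "controversial": model 1 assigns it to class 1 with high confidence while model 2 rejects it, so a human judgment on x discriminates between the two models.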
|
To what extent do human explanations of model behavior align with actual model behavior? ; Given the increasingly prominent role NLP models will play in our lives, it is important for human expectations of model behavior to align with actual model behavior. Using Natural Language Inference (NLI) as a case study, we investigate the extent to which human-generated explanations of models' inference decisions align with how models actually make these decisions. More specifically, we define three alignment metrics that quantify how well natural language explanations align with model sensitivity to input words, as measured by integrated gradients. Then, we evaluate eight different models (the base and large versions of BERT, RoBERTa and ELECTRA, as well as an RNN and a bag-of-words model), and find that the BERT-base model has the highest alignment with human-generated explanations, for all alignment metrics. Focusing in on transformers, we find that the base versions tend to have higher alignment with human-generated explanations than their larger counterparts, suggesting that increasing the number of model parameters leads, in some cases, to worse alignment with human explanations. Finally, we find that a model's alignment with human explanations is not predicted by the model's accuracy, suggesting that accuracy and alignment are complementary ways to evaluate models.
|
MEGA: Model Stealing via Collaborative Generator-Substitute Networks ; Deep machine learning models are increasingly deployed in the wild for providing services to users. Adversaries may steal the knowledge of these valuable models by training substitute models according to the inference results of the targeted deployed models. Recent data-free model stealing methods are shown effective to extract the knowledge of the target model without using real query examples, but they assume rich inference information, e.g., class probabilities and logits. However, they are all based on competing generator-substitute networks and hence encounter training instability. In this paper we propose a data-free model stealing framework, MEGA, which is based on collaborative generator-substitute networks and only requires the target model to provide label prediction for synthetic query examples. The core of our method is a model stealing optimization consisting of two collaborative models: (i) the substitute model, which imitates the target model through the synthetic query examples and their inferred labels, and (ii) the generator, which synthesizes images such that the confidence of the substitute model over each query example is maximized. We propose a novel coordinate descent training procedure and analyze its convergence. We also empirically evaluate the trained substitute model on three datasets and its application to black-box adversarial attacks. Our results show that the accuracy of our trained substitute model and the adversarial attack success rate over it can be up to 33% and 40% higher than state-of-the-art data-free black-box attacks.
|
Unified Einstein-Virasoro Master Equation in the General Non-Linear Sigma Model ; The Virasoro master equation (VME) describes the general affine-Virasoro construction T = L^{ab} J_a J_b + i D^a ∂J_a in the operator algebra of the WZW model, where L^{ab} is the inverse inertia tensor and D^a is the improvement vector. In this paper, we generalize this construction to find the general one-loop Virasoro construction in the operator algebra of the general non-linear sigma model. The result is a unified Einstein-Virasoro master equation which couples the spacetime spin-two field L^{ab} to the background fields of the sigma model. For a particular solution L_G^{ab}, the unified system reduces to the canonical stress tensors and conventional Einstein equations of the sigma model, and the system reduces to the general affine-Virasoro construction and the VME when the sigma model is taken to be the WZW action. More generally, the unified system describes a space of conformal field theories which is presumably much larger than the sum of the general affine-Virasoro construction and the sigma model with its canonical stress tensors. We also discuss a number of algebraic and geometrical properties of the system, including its relation to an unsolved problem in the theory of G-structures on manifolds with torsion.
|