Explanation-based Finetuning Makes Models More Robust to Spurious Cues ; Large Language Models (LLMs) are so powerful that they sometimes learn correlations between labels and features that are irrelevant to the task, leading to poor generalization on out-of-distribution data. We propose explanation-based finetuning as a general approach to mitigate LLMs' reliance on spurious correlations. Unlike standard finetuning where the model only predicts the answer given the input, we finetune the model to additionally generate a free-text explanation supporting its answer. To evaluate our method, we finetune the model on artificially constructed training sets containing different types of spurious cues, and test it on a test set without these cues. Compared to standard finetuning, our method makes GPT-3 (davinci) remarkably more robust against spurious cues in terms of accuracy drop across four classification tasks: ComVE (1.2), CREAK (9.1), e-SNLI (15.4), and SBIC (6.5). The efficacy generalizes across multiple model families and scales, with greater gains for larger models. Finally, our method also works well with explanations generated by the model, implying its applicability to more datasets without human-written explanations.
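As a concrete illustration of the setup described above, here is a minimal, hypothetical sketch of how a training example might be formatted for standard versus explanation-based finetuning; the field names and prompt template are assumptions for illustration, not the paper's exact format.

```python
# Hypothetical formatting sketch: standard finetuning targets only the label,
# explanation-based finetuning targets the label followed by a free-text
# explanation. Prompt/field layout is assumed, not taken from the paper.

def make_standard_example(premise, hypothesis, label):
    # Standard finetuning: the model only predicts the answer.
    return {"prompt": f"Premise: {premise}\nHypothesis: {hypothesis}\nLabel:",
            "completion": f" {label}"}

def make_explained_example(premise, hypothesis, label, explanation):
    # Explanation-based finetuning: the model predicts the answer and then
    # generates a supporting free-text explanation.
    return {"prompt": f"Premise: {premise}\nHypothesis: {hypothesis}\nLabel:",
            "completion": f" {label} because {explanation}"}

if __name__ == "__main__":
    ex = make_explained_example(
        "A man is playing a guitar on stage.",
        "A person is performing music.",
        "entailment",
        "playing a guitar on stage is a form of musical performance.")
    print(ex)
```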
A Comprehensive Survey on Generative Diffusion Models for Structured Data ; In recent years, generative diffusion models have achieved a rapid paradigm shift in deep generative models by showing groundbreaking performance across various applications. Meanwhile, structured data, encompassing tabular and time series data, has received comparatively limited attention from the deep learning research community, despite its omnipresence and extensive applications. Thus, there is still a lack of literature and reviews on structured data modelling via diffusion models, compared to other data modalities such as visual and textual data. To address this gap, we present a comprehensive review of recently proposed diffusion models in the field of structured data. First, this survey provides a concise overview of the score-based diffusion model theory, subsequently proceeding to the technical descriptions of the majority of pioneering works that used structured data in both data-driven general tasks and domain-specific applications. Thereafter, we analyse and discuss the limitations and challenges shown in existing works and suggest potential research directions. We hope this review serves as a catalyst for the research community, promoting developments in generative diffusion models for structured data.
Variational Positive-incentive Noise: How Noise Benefits Models ; A large number of works aim to alleviate the impact of noise due to an underlying conventional assumption of the negative role of noise. However, some existing works show that the assumption does not always hold. In this paper, we investigate how to benefit the classical models by random noise under the framework of Positive-incentive Noise (Pi-Noise). Since the ideal objective of Pi-Noise is intractable, we propose to optimize its variational bound instead, namely variational Pi-Noise (VPN). With the variational inference, a VPN generator implemented by neural networks is designed for enhancing base models and simplifying the inference of base models, without changing the architecture of base models. Benefiting from the independent design of base models and VPN generators, the VPN generator can work with most existing models. From the experiments, it is shown that the proposed VPN generator can improve the base models. It is appealing that the trained variational VPN generator prefers to blur the irrelevant ingredients in complicated images, which meets our expectations.
Continual Learners are Incremental Model Generalizers ; Motivated by the efficiency and rapid convergence of pre-trained models for solving downstream tasks, this paper extensively studies the impact of Continual Learning (CL) models as pre-trainers. In both supervised and unsupervised CL, we find that the transfer quality of the representation often increases gradually without noticeable degradation in fine-tuning performance. This is because CL models can learn improved task-general features when easily forgetting task-specific knowledge. Based on this observation, we suggest a new unsupervised CL framework with masked modeling, which aims to capture fluent task-generic representation during training. Furthermore, we propose a new fine-tuning scheme, GLobal Attention Discretization (GLAD), that preserves rich task-generic representation while solving downstream tasks. The model fine-tuned with GLAD achieves competitive performance and can also be used as a good pre-trained model itself. We believe this paper breaks the barriers between pre-training and fine-tuning steps and leads to a sustainable learning framework in which the continual learner incrementally improves model generalization, yielding better transfer to unseen tasks.
A generative model for surrogates of spatial-temporal wildfire nowcasting ; A recent increase in wildfires worldwide has led to the need for real-time fire nowcasting. Physics-driven models, such as cellular automata and computational fluid dynamics, can provide high-fidelity fire spread simulations but they are computationally expensive and time-consuming. Much effort has been put into developing machine learning models for fire prediction. However, these models are often region-specific and require a substantial quantity of simulation data for training purposes. This results in a significant amount of computational effort for different ecoregions. In this work, a generative model is proposed using a three-dimensional Vector-Quantized Variational Autoencoder to generate spatial-temporal sequences of unseen wildfire burned areas in a given ecoregion. The model is tested in the ecoregion of a recent massive wildfire event in California, known as the Chimney fire. Numerical results show that the model succeeds in generating coherent and structured fire scenarios, taking into account the impact of geophysical variables, such as vegetation and slope. Generated data are also used to train a surrogate model for predicting wildfire dissemination, which has been tested on both simulation data and the real Chimney fire event.
FinDiff: Diffusion Models for Financial Tabular Data Generation ; The sharing of microdata, such as fund holdings and derivative instruments, by regulatory institutions presents a unique challenge due to strict data confidentiality and privacy regulations. These challenges often hinder the ability of both academics and practitioners to conduct collaborative research effectively. The emergence of generative models, particularly diffusion models, capable of synthesizing data mimicking the underlying distributions of real-world data presents a compelling solution. This work introduces 'FinDiff', a diffusion model designed to generate real-world financial tabular data for a variety of regulatory downstream tasks, for example economic scenario modeling, stress tests, and fraud detection. The model uses embedding encodings to model mixed-modality financial data, comprising both categorical and numeric attributes. The performance of FinDiff in generating synthetic tabular financial data is evaluated against state-of-the-art baseline models using three real-world financial datasets, including two publicly available datasets and one proprietary dataset. Empirical results demonstrate that FinDiff excels in generating synthetic tabular financial data with high fidelity, privacy, and utility.
Towards Non-Invertible Anomalies from Generalized Ising Models ; The 1d transverse-field Ising model, when projected to the Z2 symmetric sector, is known to have a non-invertible gravitational anomaly that can be compensated by the Z2 toric model in 2d. In this paper, we study the generalization of this type of bulk-boundary correspondence in a large class of qubit lattice models in arbitrary dimensions, called the generalized Ising (GI) models. We provide a systematic construction of exactly solvable bulk models, where the GI models can terminate on their boundaries. In each bulk model, any ground state is robust against local perturbations. If the model has degenerate ground states with periodic boundary condition, the phase is topological and/or fracton ordered. The construction generates abundant examples, including not only prototype ones such as Z2 toric code models in any dimensions no less than two, and the X-cube fracton model, but also more diverse ones such as the Z2 x Z2 topological order, the 4d Z2 topological order with pure-loop excitations, etc. The boundary of the solvable model is potentially anomalous and corresponds to precisely only sectors of the GI model that host certain total symmetry charges and/or satisfy certain boundary conditions. We derive a concrete condition for such bulk-boundary correspondence. The condition is violated only when the bulk model is either trivial or fracton ordered. A generalized notion of Kramers-Wannier duality plays an important role in the construction. Also, utilizing the duality, we find an example where a single anomalous theory can be realized on the boundaries of two distinct bulk fracton models, a phenomenon not expected in the case of topological orders. More generally, topological orders may also be generated starting with qubit lattice models beyond the GI models, such as those with SPT orders, through a variant bulk construction, which we provide in an appendix.
Comparison of Strong Gravitational Lens Model Software II. HydraLens: Computer-Assisted Strong Gravitational Lens Model Generation and Translation ; The behavior of strong gravitational lens model software in the analysis of lens models is not necessarily consistent among the various software available, suggesting that the use of several models may enhance the understanding of the system being studied. Among the publicly available codes, the model input files are heterogeneous, making the creation of multiple models tedious. An enhanced method of creating model files, and a method to easily create multiple models, may increase the number of comparison studies. HydraLens simplifies the creation of model files for four strong gravitational lens model software packages, including Lenstool, Gravlens/Lensmodel, glafic and PixeLens, using a custom-designed GUI for each of the four codes that simplifies the entry of the model for each of these codes, obviating the need for user manuals to set the values of the many flags and data fields. HydraLens is designed in a modular fashion, which simplifies the addition of other strong gravitational lens codes in the future. HydraLens can also translate a model generated for any of these four software packages into any of the other three. Models created using HydraLens may require slight modifications, since some information may be lost in the translation process. However, the computer-generated model greatly simplifies the process of developing multiple lens models. HydraLens may enhance the number of direct software comparison studies, and also assist in the education of young investigators in gravitational lens modeling. Future development of HydraLens will further enhance its capabilities.
Comparison of dark energy models after Planck 2015 ; We make a comparison for ten typical, popular dark energy models according to their capabilities of fitting the current observational data. The observational data we use in this work include the JLA sample of the type Ia supernovae observation, the Planck 2015 distance priors of the cosmic microwave background observation, the baryon acoustic oscillations measurements, and the direct measurement of the Hubble constant. Since the models have different numbers of parameters, in order to make a fair comparison, we employ the Akaike and Bayesian information criteria to assess the worth of the models. The analysis results show that, according to the capability of explaining observations, the cosmological constant model is still the best one among all the dark energy models. The generalized Chaplygin gas model, the constant w model, and the alpha dark energy model are worse than the cosmological constant model, but still are good models compared to others. The holographic dark energy model, the new generalized Chaplygin gas model, and the Chevallier-Polarski-Linder model can still fit the current observations well, but from an economically feasible perspective, they are not so good. The new agegraphic dark energy model, the Dvali-Gabadadze-Porrati model, and the Ricci dark energy model are excluded by the current observations.
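For reference, the Akaike and Bayesian information criteria invoked above take their standard forms (k is the number of free model parameters, N the number of data points, and chi^2_min = -2 ln L_max the best-fit chi-square):

\[
\mathrm{AIC} = \chi^2_{\min} + 2k, \qquad
\mathrm{BIC} = \chi^2_{\min} + k \ln N .
\]

Models with more parameters are thus penalized, which is what makes the comparison between models of different dimensionality fair.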
A Survey of Diffusion Models in Natural Language Processing ; This survey paper provides a comprehensive review of the use of diffusion models in natural language processing (NLP). Diffusion models are a class of mathematical models that aim to capture the diffusion of information or signals across a network or manifold. In NLP, diffusion models have been used in a variety of applications, such as natural language generation, sentiment analysis, topic modeling, and machine translation. This paper discusses the different formulations of diffusion models used in NLP, their strengths and limitations, and their applications. We also perform a thorough comparison between diffusion models and alternative generative models, specifically highlighting the autoregressive (AR) models, while also examining how diverse architectures incorporate the Transformer in conjunction with diffusion models. Compared to AR models, diffusion models have significant advantages for parallel generation, text interpolation, token-level controls such as syntactic structures and semantic contents, and robustness. Exploring further permutations of integrating Transformers into diffusion models would be a valuable pursuit. Also, the development of multimodal diffusion models and large-scale diffusion language models with notable capabilities for few-shot learning would be important directions for the future advance of diffusion models in NLP.
A Conceptual Approach to Complex Model Management with Generalized Modelling Patterns and Evolutionary Identification ; Complex systems' modeling and simulation are powerful ways to investigate a multitude of natural phenomena, providing extended knowledge of their structure and behavior. However, enhanced modeling and simulation require the integration of various data and knowledge sources, models of various kinds (data-driven models, numerical models, simulation models, etc.), and intelligent components into one composite solution. The growing complexity of such a composite model leads to the need for specific approaches to its management. This need extends to where the model itself becomes a complex system. One of the important aspects of complex model management is dealing with uncertainty of various kinds (context, parametric, structural, input/output) to control the model. In situations where the system being modeled, or the modeling requirements, change over time, specific methods and tools are needed to carry out modeling and application procedures (meta-modeling operations) in an automatic manner. To support automatic building and management of complex models, we propose a general evolutionary computation approach which enables managing the complexity and uncertainty of various kinds. The approach is based on an evolutionary investigation of the model phase space to identify the best model structure and parameters. Examples from different areas (healthcare, hydrometeorology, social network analysis) were elaborated with the proposed approach and solutions.
Bicycle Longitudinal Motion Modeling ; This research effort uses vehicular traffic flow techniques to model bicyclist longitudinal motion while accounting for bicycle interactions. Specifically, an existing car-following model, the Fadhloun-Rakha (FR) model, is reparametrized to model bicyclists. Initially, the study evaluates the performance of the proposed model formulation using experimental datasets collected from two ring-road bicycle experiments; one conducted in Germany in 2012, and the second in China in 2016. The validation of the model is achieved through investigating and comparing the proposed model outputs against those obtained from two state-of-the-art models, namely the Necessary Deceleration Model (NDM), which is a model specifically designed to capture the longitudinal motion of bicyclists, and the Intelligent Driver Model, which is a car-following model that was demonstrated to be suitable for single-file bicycle traffic. Through a quantitative and qualitative evaluation, the proposed model formulation is demonstrated to produce modeling errors that are consistent with the other two models. While all three models generate trajectories that are consistent with empirically observed bicycle-following behavior, only the proposed model allows for an explicit and straightforward tuning of the bicyclist physical characteristics and the road environment. A sensitivity analysis demonstrates the effect of varying the different model parameters on the produced trajectories, highlighting the robustness and generality of the proposed model.
Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models ; In recent years, various watermarking methods were suggested to detect computer vision models obtained illegitimately from their owners; however, they fail to demonstrate satisfactory robustness against model extraction attacks. In this paper, we present an adaptive framework to watermark a protected model, leveraging the unique behavior present in the model due to a unique random seed initialized during the model training. This watermark is used to detect extracted models, which have the same unique behavior, indicating an unauthorized usage of the protected model's intellectual property (IP). First, we show how an initial seed for random number generation as part of model training produces distinct characteristics in the model's decision boundaries, which are inherited by extracted models and present in their decision boundaries, but are not present in non-extracted models trained on the same dataset with a different seed. Based on our findings, we suggest the Robust Adaptive Watermarking (RAW) Framework, which utilizes the unique behavior present in the protected and extracted models to generate a watermark key-set and verification model. We show that the framework is robust to (1) unseen model extraction attacks, and (2) extracted models which undergo a blurring method (e.g., weight pruning). We evaluate the framework's robustness against a naive attacker (unaware that the model is watermarked) and an informed attacker (who employs blurring strategies to remove watermarked behavior from an extracted model), and achieve outstanding (i.e., above 0.9) AUC values. Finally, we show that the framework is robust to model extraction attacks with a different structure and/or architecture than the protected model.
Avoiding Latent Variable Collapse With Generative Skip Models ; Variational autoencoders learn distributions of high-dimensional data. They model data with a deep latent-variable model and then fit the model by maximizing a lower bound of the log marginal likelihood. VAEs can capture complex distributions, but they can also suffer from an issue known as latent variable collapse, especially if the likelihood model is powerful. Specifically, the lower bound involves an approximate posterior of the latent variables; this posterior collapses when it is set equal to the prior, i.e., when the approximate posterior is independent of the data. While VAEs learn good generative models, latent variable collapse prevents them from learning useful representations. In this paper, we propose a simple new way to avoid latent variable collapse by including skip connections in our generative model; these connections enforce strong links between the latent variables and the likelihood function. We study generative skip models both theoretically and empirically. Theoretically, we prove that skip models increase the mutual information between the observations and the inferred latent variables. Empirically, we study images (MNIST and Omniglot) and text (Yahoo). Compared to existing VAE architectures, we show that generative skip models maintain similar predictive performance but lead to less collapse and provide more meaningful representations of the data.
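The skip-connection idea above is easy to picture as a decoder in which the latent code is re-injected at every layer. The following is a minimal sketch under assumed layer sizes and architecture choices, not the paper's exact model.

```python
# Minimal sketch of a "generative skip" decoder: the latent code z is
# concatenated into every hidden layer, enforcing a direct path from z to the
# likelihood. Architecture and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class SkipDecoder(nn.Module):
    def __init__(self, z_dim=32, h_dim=256, x_dim=784, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_dim = z_dim
        for _ in range(n_layers):
            # Each layer sees its input plus a skip connection from z.
            self.layers.append(nn.Linear(in_dim + z_dim, h_dim))
            in_dim = h_dim
        self.out = nn.Linear(h_dim + z_dim, x_dim)

    def forward(self, z):
        h = z
        for layer in self.layers:
            h = torch.relu(layer(torch.cat([h, z], dim=-1)))
        return torch.sigmoid(self.out(torch.cat([h, z], dim=-1)))

decoder = SkipDecoder()
x_probs = decoder(torch.randn(8, 32))  # Bernoulli means for 8 samples
```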
Distillation-based Multi-task Learning: A Candidate Generation Model for Improving Reading Duration ; In feeds recommendation, the first step is candidate generation. Most candidate generation models are based on CTR estimation, which does not consider the user's satisfaction with the clicked item. Items with low quality but an attractive title (i.e., clickbait) may be recommended to the user, which worsens the user experience. One solution to this problem is to model the click and the reading duration simultaneously under the multi-task learning (MTL) framework. There are two challenges in the modeling. The first one is how to deal with the zero duration of the negative samples, which does not necessarily indicate dislike. The second one is how to perform multi-task learning in the candidate generation model with a double-tower structure that can only model one single task. In this paper, we propose a distillation-based multi-task learning (DMTL) approach to tackle these two challenges. We model duration by considering its dependency on click in the MTL, and then transfer the knowledge learned from the MTL teacher model to the student candidate generation model by distillation. Experiments conducted on a dataset gathered from traffic logs of Tencent Kandian's recommender system show that the proposed approach outperforms the competitors significantly in modeling duration, which demonstrates the effectiveness of the proposed candidate generation model.
BPLF: A Bi-Parallel Linear Flow Model for Facial Expression Generation from Emotion Set Images ; The flow-based generative model is a deep learning generative model which obtains the ability to generate data by explicitly learning the data distribution. Theoretically, its ability to restore data is stronger than that of other generative models. However, its implementation has many limitations, including limited model design, too many model parameters and tedious calculation. In this paper, a bi-parallel linear flow model for facial emotion generation from emotion set images is constructed, and a series of improvements have been made in terms of the expression ability of the model and the convergence speed in training. The model is mainly composed of several coupling layers superimposed to form a multi-scale structure, in which each coupling layer contains 1x1 reversible convolution and linear operation modules. Furthermore, this paper sorted out the current public datasets of facial emotion images, built a new emotion dataset, and verified the model through this dataset. The experimental results show that, under the traditional convolutional neural network, a 3-layer 3x3 convolution kernel is more conducive to extracting the features of face images. The introduction of principal component decomposition can improve the convergence speed of the model.
Offline Reinforcement Learning with Reverse Model-based Imagination ; In offline reinforcement learning (offline RL), one of the main challenges is to deal with the distributional shift between the learning policy and the given dataset. To address this problem, recent offline RL methods attempt to introduce conservatism bias to encourage learning in high-confidence areas. Model-free approaches directly encode such bias into policy or value function learning using conservative regularizations or special network structures, but their constrained policy search limits the generalization beyond the offline dataset. Model-based approaches learn forward dynamics models with conservatism quantifications and then generate imaginary trajectories to extend the offline datasets. However, due to limited samples in offline datasets, conservatism quantifications often suffer from over-generalization in out-of-support regions. The unreliable conservative measures will mislead forward model-based imaginations to undesired areas, leading to over-aggressive behaviors. To encourage more conservatism, we propose a novel model-based offline RL framework, called Reverse Offline Model-based Imagination (ROMI). We learn a reverse dynamics model in conjunction with a novel reverse policy, which can generate rollouts leading to the target goal states within the offline dataset. These reverse imaginations provide informed data augmentation for model-free policy learning and enable conservative generalization beyond the offline dataset. ROMI can effectively combine with off-the-shelf model-free algorithms to enable model-based generalization with proper conservatism. Empirical results show that our method can generate more conservative behaviors and achieve state-of-the-art performance on offline RL benchmark tasks.
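The reverse-imagination step lends itself to a short pseudocode-style sketch. The interfaces below (reverse_policy, reverse_dynamics) and the dummy stand-ins are assumptions for illustration, not the paper's implementation.

```python
# Sketch of reverse imagination: start from goal states sampled inside the
# offline dataset, roll *backward* with a learned reverse dynamics model and
# reverse policy, then add the time-reversed transitions to the data used by a
# model-free learner. Interfaces and parameters are illustrative assumptions.
import numpy as np

def reverse_rollouts(dataset_states, reverse_policy, reverse_dynamics,
                     horizon=5, n_rollouts=10, rng=None):
    rng = rng or np.random.default_rng(0)
    augmented = []
    for _ in range(n_rollouts):
        s_next = dataset_states[rng.integers(len(dataset_states))]
        traj = []
        for _ in range(horizon):
            a = reverse_policy(s_next)            # action that could lead into s_next
            s_prev = reverse_dynamics(s_next, a)  # predicted predecessor state
            traj.append((s_prev, a, s_next))
            s_next = s_prev
        augmented.extend(reversed(traj))          # forward-ordered transitions
    return augmented

# Dummy usage with stand-in models:
states = np.random.randn(100, 3)
rollouts = reverse_rollouts(states,
                            reverse_policy=lambda s: np.tanh(s[:1]),
                            reverse_dynamics=lambda s, a: s - 0.1 * np.r_[a, 0, 0])
print(len(rollouts), "augmented transitions")
```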
Adversarial Training of Denoising Diffusion Model Using Dual Discriminators for High-Fidelity Multi-Speaker TTS ; The diffusion model is capable of generating high-quality data through a probabilistic approach. However, it suffers from the drawback of slow generation speed due to the requirement of a large number of time steps. To address this limitation, recent models such as denoising diffusion implicit models (DDIM) focus on generating samples without directly modeling the probability distribution, while models like denoising diffusion generative adversarial networks (GAN) combine diffusion processes with GANs. In the field of speech synthesis, a recent diffusion speech synthesis model called DiffGAN-TTS, utilizing the structure of GANs, has been introduced and demonstrates superior performance in both speech quality and generation speed. In this paper, to further enhance the performance of DiffGAN-TTS, we propose a speech synthesis model with two discriminators: a diffusion discriminator for learning the distribution of the reverse process and a spectrogram discriminator for learning the distribution of the generated data. Objective metrics such as structural similarity index measure (SSIM), mel-cepstral distortion (MCD), F0 root mean squared error (F0 RMSE), short-time objective intelligibility (STOI), and perceptual evaluation of speech quality (PESQ), as well as subjective metrics like mean opinion score (MOS), are used to evaluate the performance of the proposed model. The evaluation results show that the proposed model outperforms recent state-of-the-art models such as FastSpeech2 and DiffGAN-TTS in various metrics. Our implementation and audio samples are located on GitHub.
Total Generate: Cycle in Cycle Generative Adversarial Networks for Generating Human Faces, Hands, Bodies, and Natural Scenes ; We propose a novel and unified Cycle in Cycle Generative Adversarial Network (C2GAN) for generating human faces, hands, bodies, and natural scenes. Our proposed C2GAN is a cross-modal model exploring the joint exploitation of the input image data and guidance data in an interactive manner. C2GAN contains two different generators, i.e., an image-generation generator and a guidance-generation generator. Both generators are mutually connected and trained in an end-to-end fashion and explicitly form three cycled subnets, i.e., one image generation cycle and two guidance generation cycles. Each cycle aims at reconstructing the input domain and simultaneously produces a useful output involved in the generation of another cycle. In this way, the cycles constrain each other implicitly, providing complementary information from both image and guidance modalities and bringing an extra supervision gradient across the cycles, facilitating a more robust optimization of the whole model. Extensive results on four guided image-to-image translation subtasks demonstrate that the proposed C2GAN is effective in generating more realistic images compared with state-of-the-art models. The code is available at https://github.com/Ha0Tang/C2GAN.
Leveraging Conditional Generative Models in a General Explanation Framework of Classifier Decisions ; Providing a human-understandable explanation of classifiers' decisions has become imperative to generate trust in their use for day-to-day tasks. Although many works have addressed this problem by generating visual explanation maps, they often provide noisy and inaccurate results, forcing the use of heuristic regularization unrelated to the classifier in question. In this paper, we propose a new general perspective of the visual explanation problem overcoming these limitations. We show that visual explanation can be produced as the difference between two generated images obtained via two specific conditional generative models. Both generative models are trained using the classifier to explain and a database to enforce the following properties: (i) all images generated by the first generator are classified similarly to the input image, whereas the second generator's outputs are classified oppositely; (ii) generated images belong to the distribution of real images; (iii) the distances between the input image and the corresponding generated images are minimal, so that the difference between the generated elements only reveals relevant information for the studied classifier. Using symmetrical and cyclic constraints, we present two different approximations and implementations of the general formulation. Experimentally, we demonstrate significant improvements w.r.t. the state-of-the-art on three different public datasets. In particular, the localization of regions influencing the classifier is consistent with human annotations.
ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation ; In this work, we make the first attempt to evaluate LLMs in a more challenging code generation scenario, i.e., class-level code generation. We first manually construct the first class-level code generation benchmark ClassEval of 100 class-level Python code generation tasks with approximately 500 person-hours. Based on it, we then perform the first study of 11 state-of-the-art LLMs on class-level code generation. Based on our results, we have the following main findings. First, we find that all existing LLMs show much worse performance on class-level code generation compared to on standalone method-level code generation benchmarks like HumanEval, and the method-level coding ability cannot equivalently reflect the class-level coding ability among LLMs. Second, we find that GPT-4 and GPT-3.5 still exhibit dominant superiority over other LLMs on class-level code generation, and the second-tier models include Instruct-Starcoder, Instruct-Codegen, and Wizardcoder, with very similar performance. Third, we find that generating the entire class all at once (i.e., holistic generation) is the best generation strategy only for GPT-4 and GPT-3.5, while method-by-method generation (i.e., incremental and compositional) is a better strategy for the other models with limited ability to understand long instructions and utilize the middle information. Lastly, we find the limited model ability of generating method-dependent code and discuss the frequent error types in generated classes. Our benchmark is available at https://github.com/FudanSELab/ClassEval.
Generalized Lotka-Volterra (GLV) Models and Generic Emergence of Scaling Laws in Stock Markets ; This is a pedagogical review of the Generalized Lotka-Volterra (GLV) model w_i(t+1) = lambda w_i(t) + a W(t) - c W(t) w_i(t), where i = 1, ..., N and W(t) = (w_1 + w_2 + ... + w_N)/N is the average of the w_i's. The GLV models provide a generic method to simulate, analyze and understand a wide class of phenomena which are characterized by truncated power-law probability distributions P(w) dw ~ w^(-1-alpha) dw and truncated Levy flight fluctuations L_alpha(W). The implications and the interpretation of the model in the stock markets are discussed.
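A toy numerical sketch of the GLV recursion above is given here; the parameters and the random choice of lambda are illustrative assumptions (not taken from the review), but iterating the map drives the relative wealths w_i toward a distribution with a heavy power-law tail.

```python
# Toy simulation of the GLV recursion w_i(t+1) = lam*w_i + a*W - c*W*w_i,
# with W the population average; lam drawn at random per agent and step.
import numpy as np

rng = np.random.default_rng(1)
N, T, a, c = 1000, 5000, 0.0005, 0.0005
w = np.ones(N)
for _ in range(T):
    W = w.mean()
    lam = rng.lognormal(mean=0.0, sigma=0.1, size=N)  # random multiplicative factor
    w = lam * w + a * W - c * W * w

# crude Hill-type estimate of the tail index from the largest values
tail = np.sort(w)[-100:]
alpha_hat = 1.0 / np.mean(np.log(tail / tail[0]))
print(f"estimated tail exponent ~ {alpha_hat:.2f}")
```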
A lattice worldsheet sum for 4d Euclidean general relativity ; A lattice model for four-dimensional Euclidean quantum general relativity is proposed for a simplicial spacetime. It is shown how this model can be expressed in terms of a sum over worldsheets of spin networks, and an interpretation of these worldsheets as spacetime geometries is given, based on the geometry defined by spin networks in canonical loop quantized GR. The spacetime geometry has a Planck scale discreteness which arises naturally from the discrete spectrum of spins of SU(2) representations and not from the use of a spacetime lattice. The lattice model of the dynamics is a formal quantization of the classical lattice model of [Rei97a], which reproduces, in a continuum limit, Euclidean general relativity.
Comparing self-interacting scalar fields and R + R^3 cosmological models ; We generalize the well-known analogies between m^2 phi^2 and R + R^2 theories to include the self-interaction lambda phi^4 term for the scalar field. It turns out to be the R + R^3 Lagrangian which gives an appropriate model for it. Considering a spatially flat Friedmann cosmological model, common and different properties of these models are discussed, e.g., by linearizing around a ground state the masses of the resp. spin-0 parts coincide. Finally, we prove a general conformal equivalence theorem between a Lagrangian L = L(R), L'L'' ≠ 0, and a minimally coupled scalar field in a general potential.
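For context, the conformal equivalence referred to above is usually written in the following textbook form (conventions and normalizations vary; units with 8 pi G = 1 are assumed here, and this is the standard result rather than the paper's specific statement):

\[
\tilde g_{\mu\nu} = L'(R)\, g_{\mu\nu}, \qquad
\phi = \sqrt{\tfrac{3}{2}}\, \ln L'(R), \qquad
V(\phi) = \frac{R\, L'(R) - L(R)}{2\, L'(R)^{2}},
\]

so that the higher-order Lagrangian L(R) is mapped to Einstein gravity minimally coupled to a scalar field phi with potential V, with the R + R^2 and R + R^3 cases obtained by the corresponding choices of L(R).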
Generalized Calogero models through reductions by discrete symmetries ; We construct generalizations of the Calogero-Sutherland-Moser system by appropriately reducing a classical Calogero model by a subset of its discrete symmetries. Such reductions reproduce all known variants of these systems, including some recently obtained generalizations of the spin-Sutherland model, and lead to further generalizations of the elliptic model involving spins with SU(n) non-invariant couplings.
Standard Model bundles of the heterotic string ; We show how to construct supersymmetric three-generation models with gauge group and matter content of the Standard Model in the framework of non-simply-connected elliptically fibered Calabi-Yau manifolds Z. The elliptic fibration on a cover Calabi-Yau, where the model has 6 generations of SU(5) and the bundle is given via the spectral cover description, has a second section leading to the needed free involution. The relevant involution on the defining spectral data of the bundle is identified for a general Calabi-Yau of this type and invariant bundles are generally constructible.
Generalizations of Ho-Lee's binomial interest rate model I: from one to multi-factor ; In this paper a multi-factor generalization of the Ho-Lee model is proposed. In sharp contrast to the classical Ho-Lee, this generalization allows for movements other than parallel shifts, while it is still described by a recombining tree and is stationary, so as to be compatible with principal component analysis. Based on the model, generalizations of duration-based hedging are proposed. A continuous-time limit of the model is also discussed.
An Alternative Topological Field Theory of Generalized Complex Geometry ; We propose a new topological field theory on generalized complex geometry in two dimensions using the AKSZ formulation. Zucchini's model is the A model in the case that the generalized complex structure depends only on a symplectic structure. Our new model is the B model in the case that the generalized complex structure depends only on a complex structure.
Errors-in-variables models: a generalized functions approach ; Identification in errors-in-variables regression models was recently extended to wide model classes by S. Schennach (Econometrica, 2007) (S) via the use of generalized functions. In this paper the problems of non- and semi-parametric identification in such models are re-examined. Non-parametric identification holds under weaker assumptions than in S; the proof here does not rely on decomposition of generalized functions into ordinary and singular parts, which may not hold. A consistent non-parametric plug-in estimator for regression functions in the space of absolutely integrable functions is constructed. Semi-parametric identification via a finite set of moments is shown to hold for classes of functions that are explicitly characterized; unlike S, existence of a moment generating function for the measurement error is not required.
Diverse Beliefs ; This paper presents a general framework for studying diverse beliefs in dynamic economies. Within this general framework, the characterization of a central-planner general equilibrium turns out to be very easy to derive, and leads to a range of interesting applications. We show how, for an economy with log investors holding diverse beliefs, rational overconfidence is to be expected; volume-of-trade effects are effectively modelled; the Keynesian 'beauty contest' can be modelled and analysed; and bubbles and crashes arise naturally. We remark that models where agents receive private information can formally be considered as models of diverse beliefs.
Magnetic field generation in Higgs inflation model ; We study the generation of magnetic field in Higgs inflation models where the Standard Model Higgs boson has a large coupling to the Ricci scalar. We couple the Higgs field to the electromagnetic fields via a non-renormalizable dimension-six operator suppressed by the Planck scale in the Jordan frame. We show that during Higgs inflation, magnetic fields with present value 10^-6 Gauss and comoving coherence length of 100 kpc can be generated in the Einstein frame. The problem of large backreaction, which is generic in the usual inflation models of magnetogenesis, is avoided as the backreaction is suppressed by the large Higgs-curvature coupling.
Linear Cellular Automata as Discrete Models for Generating Cryptographic Sequences ; In this paper, we develop a new cellular automata-based linear model for several nonlinear pseudorandom number generators with practical applications in symmetric cryptography. Such a model generates all the solutions of linear binary difference equations, and many of these solutions are pseudorandom keystream sequences. In this way, a linear structure based on cellular automata may be used to generate not only difference equation solutions but also cryptographic sequences. The proposed model is very simple since it is based exclusively on successive concatenations of a basic linear automaton.
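To make the idea of a linear automaton producing binary keystreams concrete, here is an illustrative sketch of a one-dimensional linear (XOR-rule) cellular automaton; the specific rule, boundary handling, and output tap are assumptions for illustration, not the paper's exact construction.

```python
# Illustrative linear CA: rule 102, i.e. s_i(t+1) = s_i(t) XOR s_{i+1}(t),
# with a null boundary on the right; the sequence read from one cell column
# can serve as a binary keystream.
def linear_ca_keystream(seed_bits, steps, tap=0):
    state = list(seed_bits)
    stream = []
    for _ in range(steps):
        stream.append(state[tap])
        state = [(state[i] ^ state[i + 1]) if i + 1 < len(state) else state[i]
                 for i in range(len(state))]
    return stream

print(linear_ca_keystream([1, 0, 0, 1, 0, 1, 1, 0], steps=16))
```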
A Simple Computational Model for Acceptance/Rejection of Binary Sequence Generators ; A simple binary model to compute the degree of balancedness in the output sequence of LFSR-combinational generators has been developed. The computational method is based exclusively on the handling of binary strings by means of logic operations. The proposed model can serve as a deterministic alternative to existing probabilistic methods for checking balancedness in binary sequence generators. The procedure here described can be devised as a first selective criterion for acceptance/rejection of this type of generators.
Disentangling Factors of Variation via Generative Entangling ; Here we propose a novel model family with the objective of learning to disentangle the factors of variation in data. Our approach is based on the spike-and-slab restricted Boltzmann machine, which we generalize to include higher-order interactions among multiple latent variables. Seen from a generative perspective, the multiplicative interactions emulate the entangling of factors of variation. Inference in the model can be seen as disentangling these generative factors. Unlike previous attempts at disentangling latent factors, the proposed model is trained using no supervised information regarding the latent factors. We apply our model to the task of facial expression classification.
Generalized Minkowski space with changing shape ; In earlier papers we changed the concept of the inner product to a more general one, the so-called Minkowski product. This product changes on the tangent space, hence we could investigate a more general structure than a Riemannian manifold. Such a model is particularly interesting when the negative direct component has dimension one and the model shows a certain space-time character. We discuss this case here. We give a deterministic and a non-deterministic (random) variant of such a model. As we showed, the deterministic model can also be defined with a shape function.
Generalized sine-Gordon models and quantum braided groups ; We determine the quantized function algebras associated with various examples of generalized sine-Gordon models. These are quadratic algebras of the general Freidel-Maillet type, the classical limits of which reproduce the lattice Poisson algebra recently obtained for these models, defined by a gauged Wess-Zumino-Witten action plus an integrable potential. More specifically, we argue based on these examples that the natural framework for constructing quantum lattice integrable versions of generalized sine-Gordon models is that of affine quantum braided groups.
Two New Definitions of Stable Models of Logic Programs with Generalized Quantifiers ; We present alternative definitions of the first-order stable model semantics and its extension to incorporate generalized quantifiers by referring to the familiar notion of a reduct instead of referring to the SM operator in the original definitions. Also, we extend the FLP stable model semantics to allow generalized quantifiers by referring to an operator that is similar to the SM operator. For a reasonable syntactic class of logic programs, we show that the two stable model semantics of generalized quantifiers are interchangeable.
Lloyd-Topor Completion and General Stable Models ; We investigate the relationship between the generalization of program completion defined in 1984 by Lloyd and Topor and the generalization of the stable model semantics introduced recently by Ferraris et al. The main theorem can be used to characterize, in some cases, the general stable models of a logic program by a first-order formula. The proof uses Truszczynski's stable model semantics of infinitary propositional formulas.
DBI analog of a decaying vacuum cosmology ; In this work I discuss the dynamical and thermodynamical equivalence between a general k-essence scalar field cosmology and an arbitrary cosmological model with a decaying vacuum, thus generalizing the approach proposed by Maia and Lima [Phys. Rev. D 65, 083513 (2002)]. The formalism obtained is quite general and holds for any non-canonical scalar field model. As a special case I derive a Dirac-Born-Infeld (DBI) model with an exponential potential and constant speed of sound, and show that it is equivalent to a cosmological model with decay law Lambda(H) = 3 beta H^2.
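To make the decay law concrete, in a spatially flat FRW background the usual decaying-vacuum bookkeeping (assumed standard conventions, not the paper's derivation) reads:

\[
3H^{2} = 8\pi G\,(\rho_{m} + \rho_{\Lambda}), \qquad
\rho_{\Lambda} = \frac{\Lambda(H)}{8\pi G} = \frac{3\beta H^{2}}{8\pi G}
\quad\Longrightarrow\quad
3(1-\beta)\,H^{2} = 8\pi G\,\rho_{m},
\]

so the parameter beta measures the fraction of the critical density carried by the decaying vacuum component.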
Automated Test Case Generation using Petri Nets ; Software testing is the process of determining the precision, quality, completeness and security of the software systems. An important step in testing software is the generation of test cases, whose quality plays a vital role in determining the time for testing and subsequently its cost. In this research, it is shown that both structural and behavioural diagrams can be used to represent specifications in a single model using High Level Petri Nets (HLPN). This research focuses on automated generation of test models from Petri nets. Moreover, generating consistent formal models (HLPN) from informal models (UML) is the highlight of this research.
Option pricing in affine generalized Merton models ; In this article we consider affine generalizations of the Merton jump diffusion model (Merton, J. Fin. Econ., 1976) and the respective pricing of European options. On the one hand, the Brownian motion part in the Merton model may be generalized to a log-Heston model, and on the other hand, the jump part may be generalized to an affine process with possibly state-dependent jumps. While the characteristic function of the log-Heston component is known in closed form, the characteristic function of the second component may be unknown explicitly. For the latter component we propose an approximation procedure based on the method introduced in (Belomestny et al., J. Func. Anal., 2009). We conclude with some numerical examples.
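For orientation, the characteristic-function object that such Fourier-based pricing builds on is, in the classical Merton jump diffusion itself (risk-neutral drift, jump sizes Y ~ N(mu_J, sigma_J^2) arriving at rate lambda), the standard expression:

\[
\varphi_T(u) = \mathbb{E}\bigl[e^{iu\ln(S_T/S_0)}\bigr]
= \exp\!\Bigl( iu\bigl(r - \tfrac{\sigma^2}{2} - \lambda\bar k\bigr)T
 - \tfrac{\sigma^2 u^2}{2}T
 + \lambda T\bigl(e^{iu\mu_J - \sigma_J^2 u^2/2} - 1\bigr) \Bigr),
\qquad \bar k = e^{\mu_J + \sigma_J^2/2} - 1 .
\]

In the affine generalizations above, the Gaussian part is replaced by the log-Heston characteristic function and the jump part by that of an affine jump process, which may only be known approximately.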
Categorical generalization of spinfoam models ; We give a brief review of the problem of quantum gravity. After the discussion of the non-renormalizability of general relativity, we briefly mention the main research directions which aim to resolve this problem. Our attention then focuses on the approach of Loop Quantum Gravity, specifically spinfoam models. These models have some issues concerning the semiclassical limit and coupling of matter fields. The recent developments in category theory provide us with the necessary formalism to introduce a new action for general relativity and perform covariant quantization so that the issues of spinfoam models are successfully resolved.
Warm intermediate inflationary Universe model in the presence of a Generalized Chaplygin Gas ; A warm intermediate inflationary model in the context of Generalized Chaplygin Gas is investigated. We study this model in the weak and strong dissipative regimes, considering a generalized form of the dissipative coefficient Gamma = Gamma(T, phi), and we describe the inflationary dynamics in the slow-roll approximation. We find constraints on the parameters in our model considering the Planck 2015 data, together with the condition for warm inflation T > H, and the conditions for the weak and strong dissipative regimes.
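As background, the standard warm-inflation field equation and a commonly used generalized parametrization of the dissipative coefficient are (the specific exponent m adopted in the paper is not fixed here; this is the generic textbook form):

\[
\ddot\phi + (3H + \Gamma)\,\dot\phi + V'(\phi) = 0, \qquad
\Gamma(T,\phi) = C_\phi \,\frac{T^{m}}{\phi^{\,m-1}},
\]

with the weak and strong dissipative regimes corresponding to Gamma << 3H and Gamma >> 3H, respectively, and warm inflation requiring T > H.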
General solutions of integrable cosmological models with non-minimal coupling ; We study the integrable model with minimally and non-minimally coupled scalar fields and the correspondence of their general solutions. Using the model with a minimally coupled scalar field and the constant potential as an example, we demonstrate the difference between the general solutions of the corresponding models in the Jordan and the Einstein frames.
Adaptive multiscale model reduction with Generalized Multiscale Finite Element Methods ; In this paper, we discuss a general multiscale model reduction framework based on multiscale finite element methods. We give a brief overview of related multiscale methods. Due to page limitations, the overview focuses on a few related methods and is not intended to be comprehensive. We present a general adaptive multiscale model reduction framework, the Generalized Multiscale Finite Element Method. Besides the method's basic outline, we discuss some important ingredients needed for the method's success. We also discuss several applications. The proposed method allows performing local model reduction in the presence of high contrast and no scale separation.
Generating physically realizable stellar structures via embedding ; In this work we present an exact solution of the Einstein-Maxwell field equations describing compact, charged objects within the framework of classical general relativity. Our model is constructed by embedding a four-dimensional spherically symmetric static metric into a five-dimensional flat metric. The source term for the matter field is composed of a perfect fluid distribution with charge. We show that our model obeys all the physical requirements and stability conditions necessary for a realistic stellar model. Our theoretical model approximates observations of neutron stars and pulsars to a very good degree of accuracy.
Self-Attention-Based Message-Relevant Response Generation for Neural Conversation Model ; Using a sequence-to-sequence framework, many neural conversation models for chit-chat succeed in naturalness of the response. Nevertheless, the neural conversation models tend to give generic responses which are not specific to given messages, and it still remains a challenge. To alleviate the tendency, we propose a method to promote message-relevant and diverse responses for neural conversation models by using self-attention, which is time-efficient as well as effective. Furthermore, we present an investigation of why and how effective self-attention is, in deep comparison with the standard dialogue generation. The experiment results show that the proposed method improves the standard dialogue generation in various evaluation metrics.
Graph Deconvolutional Generation ; Graph generation is an extremely important task, as graphs are found throughout different areas of science and engineering. In this work, we focus on the modern equivalent of the Erdos-Renyi random graph model: the graph variational autoencoder (GVAE). This model assumes edges and nodes are independent in order to generate entire graphs at a time using a multi-layer perceptron decoder. As a result of these assumptions, GVAE has difficulty matching the training distribution and relies on an expensive graph matching procedure. We improve this class of models by building a message passing neural network into GVAE's encoder and decoder. We demonstrate our model on the specific task of generating small organic molecules.
A generalization of Kingman's model of selection and mutation and the Lenski experiment ; Kingman's model of selection and mutation studies the limiting type-value distribution in an asexual population of discrete generations and infinite size undergoing selection and mutation. This paper generalizes the model to analyse the long-term evolution of Escherichia coli in the Lenski experiment. Weak assumptions for fitness functions are proposed, and the mutation mechanism is the same as in Kingman's model. General macroscopic epistasis is designable through fitness functions. Convergence to the unique limiting type distribution is obtained.
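For readers unfamiliar with Kingman's recursion, here is a minimal numerical sketch of the original model (a fitness grid on [0,1], selection by size-biasing with a linear fitness function, and mutation to a fixed distribution q with probability beta); the paper's generalization replaces the linear fitness with more general fitness functions, which is not reproduced here.

```python
# Kingman's selection-mutation recursion on a discretized fitness grid:
# each generation, a fraction (1 - beta) of the population is re-weighted by
# fitness (selection) and a fraction beta is redrawn from the mutant
# distribution q (mutation).
import numpy as np

x = np.linspace(0.01, 1.0, 200)          # fitness (type) values
p = np.full_like(x, 1.0 / len(x))        # initial type distribution
q = np.full_like(x, 1.0 / len(x))        # mutant distribution
beta = 0.1                                # mutation probability

for _ in range(200):
    selected = x * p / np.sum(x * p)      # size-biased (selection) step
    p = (1 - beta) * selected + beta * q  # mutation step

print("mean fitness after iteration:", np.sum(x * p))
```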
Ways of Conditioning Generative Adversarial Networks ; GANs are generative models whose random samples realistically reflect natural images. They can also generate samples with specific attributes by concatenating a condition vector into the input, yet research on this field is not well studied. We propose novel methods of conditioning generative adversarial networks (GANs) that achieve state-of-the-art results on MNIST and CIFAR-10. We mainly introduce two models: an information-retrieving model that extracts conditional information from the samples, and a spatial bilinear pooling model that forms bilinear features derived from the spatial cross product of an image and a condition vector. These methods significantly enhance the log-likelihood of test data under the conditional distributions compared to the methods of concatenation.
Algebraic Bethe Ansatz for the XXZ Gaudin Models with Generic Boundary ; We solve the XXZ Gaudin model with generic boundary using the modified algebraic Bethe ansatz. The diagonal and triangular cases have been recovered in this general framework. We show that the model for odd or even lengths has two different behaviors. The corresponding Bethe equations are computed for all the cases. For the chain with even length, inhomogeneous Bethe equations are necessary. The higher spin Gaudin models with generic boundary are also treated.
Using Category Theory in Modeling Generics in OOP (Outline) ; Modeling generics in object-oriented programming languages such as Java and C# is a challenge. Recently we proposed a new order-theoretic approach to modeling generics. Given the strong relation between order theory and category theory, in this extended abstract we present how also some tools from category theory, such as adjunctions, monads and operads, are used in our approach to modeling generics.
Persistent spin squeezing of a dissipative Lipkin-Meshkov-Glick model embedded in a general thermal environment ; We investigate spin squeezing for a Lipkin-Meshkov-Glick (LMG) model coupled to a general non-Markovian environment in a finite-temperature regime. Using the non-Markovian quantum state diffusion and master equation approach, we numerically study non-Markovian spin squeezing generation in the LMG model. Our results show that the total spin number N, the energy k_B T, and certain coefficients in the LMG model can play a crucial role in generating spin squeezing. In particular, it shows that the maximum spin squeezing can be significantly enhanced when the participating environment has a relatively long memory time.
GraphDF: A Discrete Flow Model for Molecular Graph Generation ; We consider the problem of molecular graph generation using deep models. While graphs are discrete, most existing methods use continuous latent variables, resulting in inaccurate modeling of discrete graph structures. In this work, we propose GraphDF, a novel discrete latent variable model for molecular graph generation based on normalizing flow methods. GraphDF uses invertible modulo shift transforms to map discrete latent variables to graph nodes and edges. We show that the use of discrete latent variables reduces computational costs and eliminates the negative effect of dequantization. Comprehensive experimental results show that GraphDF outperforms prior methods on random generation, property optimization, and constrained optimization tasks.
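The modulo-shift transform mentioned above is simple to illustrate in isolation. The sketch below shows only the generic idea (a shift taken modulo the number of categories is exactly invertible, so no dequantization is needed); the conditioning network that would produce the shift, and the category count, are stand-in assumptions rather than GraphDF's actual components.

```python
# Toy modulo-shift discrete flow: a discrete latent symbol z in {0, ..., K-1}
# is mapped to an observed symbol x = (z + mu) mod K, invertible by construction.
import numpy as np

K = 5                                    # number of node/edge categories (assumed)

def forward(z, mu):
    return (z + mu) % K                  # latent -> observed

def inverse(x, mu):
    return (x - mu) % K                  # observed -> latent (exact)

z = np.array([0, 1, 2, 3, 4])
mu = np.array([3, 1, 4, 0, 2])           # would come from a conditioning network
x = forward(z, mu)
assert np.all(inverse(x, mu) == z)       # bijective, no dequantization needed
print(x)
```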
Exact Solution for the Partition Function of the General Ising Model in Magnetic Fields and Bayesian Networks ; We propose a method for generalizing the Ising model in magnetic fields and calculating the partition function (exact solution) for the Ising model of an arbitrary shape. Specifically, the partition function is calculated using matrices that are created automatically based on the structure of the system. By generalizing this method, it becomes possible to calculate the partition function of various crystal systems (network shapes) in magnetic fields when the scale N is infinite. Furthermore, we also connect this method for finding the solution to the Ising model in magnetic fields to a method for finding the solution to Bayesian networks in information statistical mechanics, applied to data mining, machine learning, and combinatorial optimization.
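As a simple reference point for the matrix-based calculation described above, the one-dimensional Ising chain in a field with periodic boundary already admits an exact 2x2 transfer-matrix solution; the sketch below (a standard textbook example, not the paper's general construction) checks it against brute-force enumeration.

```python
# 1D Ising chain in a field, periodic boundary: Z = Tr(T^N) with the standard
# symmetric transfer matrix T(s, s') = exp(beta*(J*s*s' + h*(s + s')/2)).
import itertools
import numpy as np

def Z_transfer(N, J, h, beta):
    T = np.array([[np.exp(beta * (J + h)), np.exp(-beta * J)],
                  [np.exp(-beta * J), np.exp(beta * (J - h))]])
    return np.trace(np.linalg.matrix_power(T, N))

def Z_bruteforce(N, J, h, beta):
    Z = 0.0
    for spins in itertools.product([1, -1], repeat=N):
        E = -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        E -= h * sum(spins)
        Z += np.exp(-beta * E)
    return Z

print(Z_transfer(8, 1.0, 0.3, 0.5), Z_bruteforce(8, 1.0, 0.3, 0.5))  # should agree
```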
Coexistence and duality in competing species models ; We discuss some stochastic spatial generalizations of the Lotka-Volterra model for competing species. The generalizations take the forms of spin systems on general discrete sets and interacting diffusions on integer lattices. Methods for proving coexistence in these generalizations and some related open questions are discussed. We use duality as the central point of view. It relates coexistence of the models to survival of their dual processes.
Using Generative Models to Simulate Cosmogenic Radiation ; We introduce HAWCgen, a set of deep generative neural network models, which are designed to supplement, or in some cases replace, parts of the simulation pipeline for the High Altitude Water Cherenkov (HAWC) observatory. We show that simple deep generative models replicate sampling of the reconstruction at a near-arbitrary speedup compared to the current simulation. Furthermore, we show that generative models can offer a replacement to the detector simulation at a comparable rate and quality to current methods. This work was done as part of an undergraduate summer intern project at NVIDIA during the month of June, 2018.
Topp-Leone generated q-exponential distribution and its applications ; The Topp-Leone distribution is a continuous model distribution used for modelling lifetime phenomena. The main purpose of this paper is to introduce a new framework for generating lifetime distributions, called the Topp-Leone generated q-exponential family of distributions. Parameter estimation using the maximum likelihood method and simulation results to assess the effectiveness of the distribution are discussed. Different informative and non-informative priors are used to estimate the shape parameter of the q-extended Topp-Leone generated exponential distribution under the normal approximation technique. We prove empirically the importance and flexibility of the new model in model building by using a real data set.
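For orientation, the Topp-Leone generated ("Topp-Leone-G") family referred to above is usually defined, for a baseline distribution function G, by the CDF below (standard form of the family; the q-exponential enters as the baseline G, whose exact parametrization is not reproduced here):

\[
F(x) = \bigl[\,1 - \bigl(1 - G(x)\bigr)^{2}\,\bigr]^{\alpha}
     = \bigl[\,G(x)\bigl(2 - G(x)\bigr)\bigr]^{\alpha}, \qquad \alpha > 0,
\]

where alpha is the additional shape parameter that the priors mentioned above are placed on.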
DeepObfusCode: Source Code Obfuscation Through Sequence-to-Sequence Networks ; The paper explores a novel methodology in source code obfuscation through the application of text-based recurrent neural network (RNN) encoder-decoder models in ciphertext generation and key generation. Sequence-to-sequence models are incorporated into the model architecture to generate obfuscated code, generate the deobfuscation key, and support live execution. Quantitative benchmark comparison to existing obfuscation methods indicates significant improvement in stealth and execution cost for the proposed solution, and experiments regarding the model's properties yield positive results regarding its character variation, dissimilarity to the original codebase, and consistent length of obfuscated code.
Graph Embedding VAE: A Permutation Invariant Model of Graph Structure ; Generative models of graph structure have applications in biology and social sciences. The state of the art is GraphRNN, which decomposes the graph generation process into a series of sequential steps. While effective for modest sizes, it loses its permutation invariance for larger graphs. Instead, we present a permutation invariant latent-variable generative model relying on graph embeddings to encode structure. Using tools from the random graph literature, our model is highly scalable to large graphs with likelihood evaluation and generation in O(V + E).
Solvable Models of Magnetic Skyrmions ; We give a succinct summary of the recently discovered solvable models of magnetic skyrmions in two dimensions, and of their general solutions. The models contain the standard Heisenberg term, the most general translation invariant DzyaloshinskiiMoriya DM interaction term and, for each DM term, a particular combination of anisotropy and Zeeman potentials. We argue that simple mathematical features of the explicit solutions help understand general qualitative properties of magnetic skyrmion configurations in more generic models.
DagoBERT Generating Derivational Morphology with a Pretrained Language Model ; Can pretrained language models PLMs generate derivationally complex words? We present the first study investigating this question, taking BERT as the example PLM. We examine BERT's derivational capabilities in different settings, ranging from using the unmodified pretrained model to full finetuning. Our best model, DagoBERT Derivationally and generatively optimized BERT, clearly outperforms the previous state of the art in derivation generation DG. Furthermore, our experiments show that the input segmentation crucially impacts BERT's derivational knowledge, suggesting that the performance of PLMs could be further improved if a morphologically informed vocabulary of units were used.
Model Theory for Realvalued Structures ; We consider general structures where formulas have truth values in the real unit interval as in continuous model theory, but whose predicates and functions need not be uniformly continuous with respect to a distance predicate. Every general structure can be expanded to a premetric structure by adding a distance predicate that is a uniform limit of formulas. Moreover, that distance predicate is unique up to uniform equivalence. We use this to extend the central notions in the model theory of metric structures to general structures, and show that many modeltheoretic results from the literature about metric structures have natural analogues for general structures.
On Lagrange multiplier theorems for nonsmooth optimization for a large class of variational models in Banach spaces ; This article develops optimality conditions for a large class of nonsmooth variational models. The main results are based on standard tools of functional analysis and the calculus of variations. Firstly we address a model with equality constraints and, in a second step, a more general model with equality and inequality constraints, always in a general Banach space context. We highlight that the results are in general well known; however, some novelties are introduced in the proof procedures, which are in general softer than those in the existing literature.
KnowledgeGrounded Dialogue Generation with Pretrained Language Models ; We study knowledgegrounded dialogue generation with pretrained language models. To leverage the redundant external knowledge under capacity constraint, we propose equipping response generation defined by a pretrained language model with a knowledge selection module, and an unsupervised approach to jointly optimizing knowledge selection and response generation with unlabeled dialogues. Empirical results on two benchmarks indicate that our model can significantly outperform stateoftheart methods in both automatic evaluation and human judgment.
Problems using deep generative models for probabilistic audio source separation ; Recent advancements in deep generative modeling make it possible to learn prior distributions from complex data that subsequently can be used for Bayesian inference. However, we find that distributions learned by deep generative models for audio signals do not exhibit the right properties that are necessary for tasks like audio source separation using a probabilistic approach. We observe that the learned prior distributions are either discriminative and extremely peaked or smooth and nondiscriminative. We quantify this behavior for two types of deep generative models on two audio datasets.
Building LEGO Using Deep Generative Models of Graphs ; Generative models are now used to create a variety of highquality digital artifacts. Yet their use in designing physical objects has received far less attention. In this paper, we advocate for the construction toy, LEGO, as a platform for developing generative models of sequential assembly. We develop a generative model based on graphstructured neural networks that can learn from humanbuilt structures and produce visually compelling designs. Our code is released at https://github.com/uoguelph-mlrg/GenerativeLEGO.
An Emotioncontrolled Dialog Response Generation Model with Dynamic Vocabulary ; In the response generation task, proper sentimental expressions can obviously improve the humanlike level of the responses. However, for real application in online systems, a high QPS (queries per second, an indicator of the flow capacity of online systems) is required, and a dynamic vocabulary mechanism has proved effective in improving the speed of generative models. In this paper, we propose an emotioncontrolled dialog response generation model based on the dynamic vocabulary mechanism, and the experimental results show the benefit of this model.
Keyphrase Generation for Scientific Document Retrieval ; Sequencetosequence models have led to significant progress in keyphrase generation, but it remains unknown whether they are reliable enough to be beneficial for document retrieval. This study provides empirical evidence that such models can significantly improve retrieval performance, and introduces a new extrinsic evaluation framework that allows for a better understanding of the limitations of keyphrase generation models. Using this framework, we point out and discuss the difficulties encountered when supplementing documents with keyphrases that are not present in the text, and when generalizing models across domains. Our code is available at https://github.com/boudinfl/ir-using-kg
Modeling hadronization using machine learning ; We present the first steps in the development of a new class of hadronization models utilizing machine learning techniques. We successfully implement, validate, and train a conditional slicedWasserstein autoencoder to replicate the Pythia generated kinematic distributions of firsthadron emissions, when the Lund string model of hadronization implemented in Pythia is restricted to the emissions of pions only. The trained models are then used to generate the full hadronization chains, with an IR cutoff energy imposed externally. The hadron multiplicities and cumulative kinematic distributions are shown to match the Pythia generated ones. We also discuss possible future generalizations of our results.
Agegraphic Model based on the Generalized Uncertainty Principle ; Many models of dark energy have been proposed to describe the universe since the beginning of the Big Bang. In this study, we present a new agegraphic dark energy NADE model based on three generalized uncertainty principles: KMM (Kempf, Mangano, Mann), Nouicer, and the higher-order generalized uncertainty principle GUP. Using the relations obtained from these three types of GUP, in the form of three scenarios (Emergent, Intermediate, Logamediate), we consider three different eras of the evolution of the universe. We also describe the evolution and expansion of the universe in each subsection. We plot the obtained relations in these models for better comparison.
Finegrained Contrastive Learning for Definition Generation ; Recently, pretrained transformerbased models have achieved great success in the task of definition generation DG. However, previous encoderdecoder models lack effective representation learning to contain full semantic components of the given word, which leads to generating underspecific definitions. To address this problem, we propose a novel contrastive learning method, encouraging the model to capture more detailed semantic representations from the definition sequence encoding. According to both automatic and manual evaluation, the experimental results on three mainstream benchmarks demonstrate that the proposed method could generate more specific and highquality definitions compared with several stateoftheart models.
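The abstract does not spell out its fine-grained contrastive objective, but methods of this kind typically build on an InfoNCE-style loss over paired representations; the sketch below is a generic version of that template under our own naming, not the paper's exact loss.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Generic InfoNCE loss: each anchor should score highest against its own
    positive among all positives in the batch. anchors/positives: (B, d)."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # matched pairs on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
print(info_nce(z, z + 0.01 * rng.normal(size=z.shape)))  # small loss for aligned pairs
```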
ECGAN Selfsupervised generative adversarial network for electrocardiography ; Highquality synthetic data can support the development of effective predictive models for biomedical tasks, especially in rare diseases or when subject to compelling privacy constraints. These limitations, for instance, negatively impact open access to electrocardiography datasets about arrhythmias. This work introduces a selfsupervised approach to the generation of synthetic electrocardiography time series which is shown to promote morphological plausibility. Our model ECGAN allows conditioning the generative process for specific rhythm abnormalities, enhancing synchronization and diversity across samples with respect to literature models. A dedicated sample quality assessment framework is also defined, leveraging arrhythmia classifiers. The empirical results highlight a substantial improvement against stateoftheart generative models for sequences and audio synthesis.
AlignGraph A Group of Generative Models for Graphs ; It is challenging for generative models to learn a distribution over graphs because of the lack of permutation invariance nodes may be ordered arbitrarily across graphs, and standard graph alignment is combinatorial and notoriously expensive. We propose AlignGraph, a group of generative models that combine fast and efficient graph alignment methods with a family of deep generative models that are invariant to node permutations. Our experiments demonstrate that our framework successfully learns graph distributions, outperforming competitors by 25 560 in relevant performance scores.
Generating High Fidelity Synthetic Data via Coreset selection and Entropic Regularization ; Generative models have the ability to synthesize data points drawn from the data distribution; however, not all generated samples are of high quality. In this paper, we propose using a combination of coreset selection methods and entropic regularization to select the highest fidelity samples. We leverage an EnergyBased Model which resembles a variational autoencoder with an inference and generator model, for which the latent prior is complexified by an energybased model. In a semisupervised learning scenario, we show that augmenting the labeled dataset by adding our selected subset of samples leads to a better accuracy improvement than using all the synthetic samples.
Conditional Generative Models are Provably Robust Pointwise Guarantees for Bayesian Inverse Problems ; Conditional generative models have become a very powerful tool for sampling from Bayesian inverse problem posteriors. It is wellknown in the classical Bayesian literature that posterior measures are quite robust with respect to perturbations of both the prior measure and the negative loglikelihood, which includes perturbations of the observations. However, to the best of our knowledge, the robustness of conditional generative models with respect to perturbations of the observations has not been investigated yet. In this paper, we prove for the first time that appropriately learned conditional generative models provide robust results for single observations.
Hidden Layer Interaction A CoCreative Design Fiction for Generative Models ; This paper presents a speculation on a fictive cocreation scenario that extends classical interaction patterns with generative models. While existing interfaces are restricted to the input and output layers, we suggest hidden layer interaction to extend the horizontal relation at play when cocreating with a generative model's design space. We speculate on applying feature visualization to manipulate neurons corresponding to features ranging from edges over textures to objects. By integrating visual representations of a neural network's hidden layers into cocreation, we aim to provide humans with a new means of interaction, contributing to a phenomenological account of the model's inner workings during generation.
ScoreBased Multimodal Autoencoders ; Multimodal Variational Autoencoders VAEs represent a promising group of generative models that facilitate the construction of a tractable posterior within the latent space, given multiple modalities. Daunhawer et al. 2022 demonstrate that as the number of modalities increases, the generative quality of each modality declines. In this study, we explore an alternative approach to enhance the generative performance of multimodal VAEs by jointly modeling the latent space of unimodal VAEs using scorebased models SBMs. The role of the SBM is to enforce multimodal coherence by learning the correlation among the latent variables. Consequently, our model combines the superior generative quality of unimodal VAEs with coherent integration across different modalities.
On multivariate orderings of some general ordered random vectors ; Ordered random vectors are frequently encountered in many problems. The generalized order statistics GOS and sequential order statistics SOS are two general models for ordered random vectors. However, these two models do not capture the dependency structures that are present in the underlying random variables. In this paper, we study the developed sequential order statistics DSOS and developed generalized order statistics DGOS models that describe the dependency structures of ordered random vectors. We then study various univariate and multivariate ordering properties of DSOS and DGOS models under Archimedean copula. We consider both onesample and twosample scenarios and develop corresponding results.
Global smooth solution for the 3D generalized tropical climate model with partial viscosity and damping ; The threedimensional generalized tropical climate model with partial viscosity and damping is considered in this paper. Global wellposedness of solutions of the threedimensional generalized tropical climate model with partial viscosity and damping is proved for $\alpha\geq\frac{3}{2}$ and $\beta\geq 4$. Global smooth solution of the threedimensional generalized tropical climate model with partial viscosity and damping is proved in $H^s(\mathbb{R}^3)$, $s>2$, for $\alpha\geq\frac{3}{2}$ and $4\leq\beta\leq 5$.
Quantum Money from Abelian Group Actions ; We give a construction of public key quantum money, and even a strengthened version called quantum lightning, from abelian group actions, which can in turn be constructed from suitable isogenies over elliptic curves. We prove security in the generic group model for group actions under a plausible computational assumption, and develop a general toolkit for proving quantum security in this model. Along the way, we explore knowledge assumptions and algebraic group actions in the quantum setting, finding significant limitations of these assumptionsmodels compared to generic group actions.
Inflation models and observation ; We consider smallfield models which invoke the usual framework for the effective field theory, and largefield models which go beyond that. Present and future possibilities for discriminating between the models are assessed, on the assumption that the primordial curvature perturbation is generated during inflation. With PLANCK data, the theoretical and observational uncertainties on the spectral index will be comparable, providing useful discrimination between smallfield models. Further discrimination between models may come later through the tensor fraction, the running of the spectral index and nongaussianity. The prediction for the trispectrum in a generic multifield inflation model is given for the first time.
Quantum Symmetry of Hubbard Model Unraveled ; Superconducting quantum symmetries in extended singleband 1dimensional Hubbard models are shown to originate from the classical pseudospin SO4 symmetry of a class of models of which the standard Hubbard model is a special case. Extending the notion of symmetry to include quantum groups allows us to introduce extra parameters but the corresponding quantum symmetric models are restricted to one dimension. All models discussed are related by generalized LangFirsov transformations, some have symmetries away from half filling. The most general model with symmetric nextneighbour interaction terms and classical SO4 symmetry is given explicitly.
Extended Color Models with a Heavy Top Quark ; We present a class of models in which the top quark, by mixing with new physics at a higher energy scale, is naturally heavier than the other standard model particles. We take this new physics to be extended color. Our models contain new particles with masses between 100 GeV and 1 TeV, some of which may be just within the reach of the next generation of experiments. In particular one of our models implies the existence of two right handed top quarks. These models demonstrate the existence of a standard modellike theory consistent with experiment, and leading to new physics below the TeV scale, in which the third generation is treated differently than the first two.
New hysteresis operators with applications to counterterrorism ; We define two models of hysteresis that generalize the Preisach model. The first model is deterministic, the second model is stochastic and it utilizes discontinuous transition probabilities that satisfy impulsive differential equations. For the first model we prove, among other things, a local version of the wiping out property; for the stochastic model, we give methods for the construction of solutions of impulsive differential equations that determine the discontinuous transition probabilities. We also present a gametheoretic problem utilizing a generalized hysteresis operator. These hysteresis operators are motivated by questions of modelling the dynamics of decision making processes of networks of loosely knit terrorist groups.
Modelling recorded crime a full search for cointegrated models ; A modelgenerator is developed that searches for cointegrated models among a potentially large group of candidate models. The generator employs the first step of the EngleGranger procedure and orders cointegrated models according to the information criteria AIC and BIC. Assisted by the generator, a cointegrated relation is established between recorded violent crime in the Netherlands, the number of males aged 15-25 years (split into Western and nonWestern background) and deflated consumption. Insample forecasts reveal that the cointegrated model outperforms the best shortrun models.
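As a hedged sketch of the first Engle-Granger step such a model generator would automate (variable names and the simulated series are ours; statsmodels' coint implements the augmented Engle-Granger test):

```python
import numpy as np
from statsmodels.tsa.stattools import coint

# Two I(1) series that share a common stochastic trend, hence are cointegrated.
rng = np.random.default_rng(0)
trend = np.cumsum(rng.normal(size=500))
y = trend + rng.normal(scale=0.5, size=500)
x = 0.7 * trend + rng.normal(scale=0.5, size=500)

# Augmented Engle-Granger test: a small p-value rejects "no cointegration".
t_stat, p_value, crit_values = coint(y, x)
print(t_stat, p_value)
```

A generator in the spirit of the abstract would loop this test over candidate regressor sets and then rank the cointegrated survivors by AIC and BIC.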
Nonhermitian models in higher dimensions ; It is known that PTsymmetric models have real spectra provided the symmetry is not spontaneously broken. Even pseudohermitian models have real spectra, which enlarges the class of nonhermitian models possessing real spectra. We however consider nonhermitian models in higher dimensions which are not necessarily explicitly PTsymmetric nor pseudohermitian. We show that the models may generate real spectra depending upon the coupling constants of the Hamiltonian. Our models thus further generalize the class of nonhermitian systems which possess real spectra.
Comparison of multiscale analysis models applied to zonal flow generation in iontemperaturegradient mode turbulence ; During the past years the understanding of multiscale interaction problems has increased significantly. However, at present there exists a range of different analytical models for investigating multiscale interactions and hardly any specific comparisons have been performed among these models. In this work, two different models for the generation of zonal flows from iontemperaturegradient ITG background turbulence are discussed and compared. The methods used are the coherent mode coupling model and the wave kinetic equation model WKE. It is shown that the two models give qualitatively the same results even though the assumption on the spectral difference is used in the WKE approach.
Kuramoto model with coupling through an external medium ; Synchronization of coupled oscillators is often described using the Kuramoto model. Here we study a generalization of the Kuramoto model where oscillators communicate with each other through an external medium. This generalized model exhibits interesting new phenomena such as bistability between synchronization and incoherence and a qualitatively new form of synchronization where the external medium exhibits smallamplitude oscillations. We conclude by discussing the relationship of the model to other variations of the Kuramoto model including the Kuramoto model with a bimodal frequency distribution and the Millennium Bridge problem.
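One simple way to realize such indirect coupling, given here purely as an illustration and not necessarily the paper's exact equations, is to let a damped external mode be driven by the population mean field while each oscillator is pulled toward the phase of that mode:

```python
import numpy as np

def simulate(N=200, K=2.0, gamma=1.0, Omega=0.0, c=1.0,
             dt=0.01, steps=20000, seed=0):
    """Oscillators that interact only through a damped external mode z,
    itself forced by the population mean field (an assumed toy model)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, N)
    omega = rng.normal(0.0, 0.5, N)              # natural frequencies
    z = 0.0 + 0.0j                               # the external medium
    for _ in range(steps):
        mean_field = np.mean(np.exp(1j * theta))
        dz = (-(gamma + 1j * Omega) * z + c * mean_field) * dt
        dtheta = (omega + K * np.imag(z * np.exp(-1j * theta))) * dt
        z, theta = z + dz, theta + dtheta        # forward Euler step
    return np.abs(np.mean(np.exp(1j * theta)))   # Kuramoto order parameter

print(simulate())  # values close to 1 indicate synchronization
```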
Stability of nonhomogeneous models and fine tuning of initial state ; We apply phase space analysis to the inhomogeneous cosmological model given by the LemaitreTolman model. We describe some general conditions required to interpret the model as stable enough and, in the present paper, apply them to two special cases: the dust filled homogeneous model with and without a cosmological constant. We find that such stability, explaining all present astrophysical observations, cannot be achieved due to instabilities in phase space. This hints that nonhomogeneous models are not likely to be physically viable, although any conclusive analysis requires more realistic modeling of the nonhomogeneous universe.
Spincube Models of Quantum Gravity ; We study the statesum models of quantum gravity based on a representation 2category of the Poincare 2group. We call them spincube models, since they are categorical generalizations of spinfoam models. A spincube state sum can be considered as a path integral for a constrained 2BF theory, and depending on how the constraints are imposed, a spincube state sum can be reduced to a path integral for the areaRegge model with the edgelength constraints, or to a path integral for the Regge model. We also show that the effective actions for these spincube models have the correct classical limit.
Kinklike structures in scalar field theories from onefield to twofield models ; In this paper we study the possibility of constructing twofield models from onefield models. The idea is to start with a given onefield model and use the deformation procedure to generate another onefield model, and then couple the two onefield models nontrivially, to get to a twofield model, together with some explicit topological solutions. We show with several distinct examples that the procedure works nicely and can be used generically.
NonGaussian Matern fields with an application to precipitation modeling ; The recently proposed nonGaussian Matérn random field models, generated through stochastic partial differential equations SPDEs, are extended by considering the class of Generalized Hyperbolic processes as noise forcings. The models are also extended to the standard geostatistical setting where irregularly spaced observations are modeled using measurement errors and covariates. A maximum likelihood estimation technique based on the Monte Carlo Expectation Maximization MCEM algorithm is presented, and it is shown how the model can be used to do predictions at unobserved locations. Finally, an application to precipitation data over the United States for two months in 1997 is presented, and the performance of the nonGaussian models is compared with standard Gaussian and transformed Gaussian models through crossvalidation.
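For reference, one standard SPDE formulation underlying Matérn fields, which the non-Gaussian extension modifies by changing the driving noise, reads:

```latex
% Gaussian Matern field as the stationary solution of an SPDE on R^d:
(\kappa^{2} - \Delta)^{\alpha/2}\, x(s) = \mathcal{W}(s),
\qquad \alpha = \nu + d/2,
% where \kappa controls the spatial range, \nu the smoothness, and \mathcal{W}
% is Gaussian white noise; replacing \mathcal{W} with a non-Gaussian noise
% (here of generalized hyperbolic type) gives the extended models of the abstract.
```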
Bayesian Model Selection in Complex Linear Systems, as Illustrated in Genetic Association Studies ; Motivated by examples from genetic association studies, this paper considers the model selection problem in a general complex linear model system and in a Bayesian framework. We discuss formulating model selection problems and incorporating contextdependent a priori information through different levels of prior specifications. We also derive analytic Bayes factors and their approximations to facilitate model selection and discuss their theoretical and computational properties. We demonstrate our Bayesian approach based on an implemented Markov Chain Monte Carlo MCMC algorithm in simulations and a real data application of mapping tissuespecific eQTLs. Our novel results on Bayes factors provide a general framework to perform efficient model comparisons in complex linear model systems.
The Information Theoretically Efficient Model ITEM A model for computerized analysis of large datasets ; This document discusses the Information Theoretically Efficient Model ITEM, a computerized system to generate an information theoretically efficient multinomial logistic regression from a general dataset. More specifically, this model is designed to succeed even where the logit transform of the dependent variable is not necessarily linear in the independent variables. This research shows that for large datasets, the resulting models can be produced on modern computers in a tractable amount of time. These models are also resistant to overfitting, and as such they tend to produce interpretable models with only a limited number of features, all of which are designed to be well behaved.
FuturePredictionBased Model for Neural Machine Translation ; We propose a novel model for Neural Machine Translation NMT. Different from the conventional method, our model can predict the future text length and words at each decoding time step so that the generation can be helped with the information from the future prediction. With such information, the model does not stop generation without having translated enough content. Experimental results demonstrate that our model can significantly outperform the baseline models. Besides, our analysis reflects that our model is effective in the prediction of the length and words of the untranslated content.
Generalized Phase Representation of IntegrateandFire Models ; The quadratic integrateandfire QIF model captures the normal form bifurcation dynamics of TypeI neurons found in cortex. Remarkably, this model is known to have a dual equivalent representation in terms of phase, called the thetamodel, which has advantages for numerical simulations and analysis over the QIF model. Here, I investigate the nature of the dual representation and derive the general phase model expression for all integrateandfire models. Moreover, I show the condition for which the phase models exhibit discontinuous onset firing rate, the hallmark of TypeII spiking neurons.
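For concreteness, the duality mentioned in the abstract is the classical change of variables between the QIF model and the Ermentrout-Kopell theta model; normalization constants differ between texts, so the form below is one common convention rather than the paper's exact expression.

```latex
% QIF dynamics and the phase substitution V = tan(theta/2):
\frac{dV}{dt} = V^{2} + I,
\qquad V = \tan\!\left(\tfrac{\theta}{2}\right)
\;\Longrightarrow\;
\frac{d\theta}{dt} = (1 - \cos\theta) + (1 + \cos\theta)\, I.
% The spike (V -> +infinity, reset to -infinity) becomes a smooth passage of
% theta through pi, which is what makes the phase form convenient numerically.
```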
Evaluating Contextualized Language Models for Hungarian ; We present an extended comparison of contextualized language models for Hungarian. We compare huBERT, a Hungarian model against 4 multilingual models including the multilingual BERT model. We evaluate these models through three tasks, morphological probing, POS tagging and NER. We find that huBERT works better than the other models, often by a large margin, particularly near the global optimum typically at the middle layers. We also find that huBERT tends to generate fewer subwords for one word and that using the last subword for tokenlevel tasks is generally a better choice than using the first one.
Relational models for contingency tables ; The paper considers general multiplicative models for complete and incomplete contingency tables that generalize loglinear and several other models and are entirely coordinate free. Sufficient conditions of the existence of maximum likelihood estimates under these models are given, and it is shown that the usual equivalence between multinomial and Poisson likelihoods holds if and only if an overall effect is present in the model. If such an effect is not assumed, the model becomes a curved exponential family and a related mixed parameterization is given that relies on nonhomogeneous odds ratios. Several examples are presented to illustrate the properties and use of such models.
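The relational models in the abstract generalize log-linear models; purely as a point of reference (counts and coding below are made up), the snippet fits the ordinary independence log-linear model to a small table via a Poisson GLM. Note that the design matrix includes the overall effect, which is exactly the case in which the abstract's multinomial/Poisson equivalence applies.

```python
import numpy as np
import statsmodels.api as sm

# Counts of a 2x3 contingency table, flattened row by row (made-up data).
counts = np.array([12, 20, 8, 15, 25, 10])
rows = np.repeat([0, 1], 3)
cols = np.tile([0, 1, 2], 2)

# Design matrix for the independence (main-effects-only) log-linear model:
# overall effect + row effect + column effects, with dummy coding.
X = np.column_stack([
    np.ones(6),                  # overall effect (intercept)
    rows,                        # row main effect
    (cols == 1).astype(float),   # column level 2
    (cols == 2).astype(float),   # column level 3
])
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.params)                # log-linear parameters
print(fit.fittedvalues)          # expected counts under independence
```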
Modeling disease progression in longitudinal EHR data using continuoustime hidden Markov models ; Modeling disease progression in healthcare administrative databases is complicated by the fact that patients are observed only at irregular intervals when they seek healthcare services. In a longitudinal cohort of 76,888 patients with chronic obstructive pulmonary disease COPD, we used a continuoustime hidden Markov model with a generalized linear model to model healthcare utilization events. We found that the fitted model provides interpretable results suitable for summarization and hypothesis generation.
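A hedged sketch of the core ingredient that makes continuous-time hidden Markov models suit irregularly timed visits; the three-state structure and rate values below are made up, and the paper's full model additionally attaches a generalized linear model to healthcare utilization events.

```python
import numpy as np
from scipy.linalg import expm

# Generator (rate) matrix of a 3-state progressive disease process:
# rows sum to zero; off-diagonal entries are transition intensities per unit time.
Q = np.array([
    [-0.10,  0.08,  0.02],
    [ 0.00, -0.05,  0.05],
    [ 0.00,  0.00,  0.00],   # absorbing final state
])

def transition_probs(Q, dt):
    """State-transition probabilities over an observation gap of length dt:
    P(dt) = expm(Q * dt), the quantity that handles irregular visit spacing."""
    return expm(Q * dt)

print(transition_probs(Q, dt=1.0))
print(transition_probs(Q, dt=6.0))   # a longer gap shifts more mass forward
```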