Counterfactual Recipe Generation: Exploring Compositional Generalization in a Realistic Scenario ; People can acquire knowledge in an unsupervised manner by reading, and compose the knowledge to make novel combinations. In this paper, we investigate whether pretrained language models can perform compositional generalization in a realistic setting: recipe generation. We design the counterfactual recipe generation task, which asks models to modify a base recipe according to the change of an ingredient. This task requires compositional generalization at two levels: the surface level of incorporating the new ingredient into the base recipe, and the deeper level of adjusting actions related to the changing ingredient. We collect a large-scale recipe dataset in Chinese for models to learn culinary knowledge, and a subset of action-level fine-grained annotations for evaluation. We fine-tune pretrained language models on the recipe corpus, and use unsupervised counterfactual generation methods to generate modified recipes. Results show that existing models have difficulties in modifying the ingredients while preserving the original text style, and often miss actions that need to be adjusted. Although pretrained language models can generate fluent recipe texts, they fail to truly learn and use the culinary knowledge in a compositional way. Code and data are available at https://github.com/xxxiaol/counterfactual-recipe-generation.
Intriguing Property and Counterfactual Explanation of GAN for Remote Sensing Image Generation ; Generative adversarial networks (GANs) have achieved remarkable progress in the natural image field. However, when applying GANs to the remote sensing (RS) image generation task, an extraordinary phenomenon is observed: the GAN model is more sensitive to the size of training data for RS image generation than for natural image generation. In other words, the generation quality of RS images will change significantly with the number of training categories or samples per category. In this paper, we first analyze this phenomenon through two kinds of toy experiments and conclude that the amount of feature information contained in the GAN model decreases with reduced training data. Then we establish a structural causal model (SCM) of the data generation process and interpret the generated data as the counterfactuals. Based on this SCM, we theoretically prove that the quality of generated images is positively correlated with the amount of feature information. This provides insights for enriching the feature information learned by the GAN model during training. Consequently, we propose two innovative adjustment schemes, namely Uniformity Regularization (UR) and Entropy Regularization (ER), to increase the information learned by the GAN model at the distributional and sample levels, respectively. We theoretically and empirically demonstrate the effectiveness and versatility of our methods. Extensive experiments on three RS datasets and two natural datasets show that our methods outperform the well-established models on RS image generation tasks. The source code is available at https://github.com/rootSue/Causal-RSGAN.
4D Facial Expression Diffusion Model ; Facial expression generation is one of the most challenging and long-sought aspects of character animation, with many interesting applications. The challenging task, traditionally having relied heavily on digital craftspersons, remains yet to be explored. In this paper, we introduce a generative framework for generating 3D facial expression sequences (i.e., 4D faces) that can be conditioned on different inputs to animate an arbitrary 3D face mesh. It is composed of two tasks: (1) learning the generative model, which is trained over a set of 3D landmark sequences, and (2) generating 3D mesh sequences of an input facial mesh driven by the generated landmark sequences. The generative model is based on a Denoising Diffusion Probabilistic Model (DDPM), which has achieved remarkable success in generative tasks in other domains. While it can be trained unconditionally, its reverse process can still be conditioned by various condition signals. This allows us to efficiently develop several downstream tasks involving various conditional generation, using expression labels, text, partial sequences, or simply a facial geometry. To obtain the full mesh deformation, we then develop a landmark-guided encoder-decoder to apply the geometrical deformation embedded in landmarks to a given facial mesh. Experiments show that our model has learned to generate realistic, high-quality expressions solely from a dataset of relatively small size, improving over the state-of-the-art methods. Videos and qualitative comparisons with other methods can be found at https://github.com/ZOUKaifeng/4DFM. Code and models will be made available upon acceptance.
Improving Out-of-Distribution Robustness of Classifiers via Generative Interpolation ; Deep neural networks achieve superior performance for learning from independent and identically distributed (i.i.d.) data. However, their performance deteriorates significantly when handling out-of-distribution (OoD) data, where the training and test data are drawn from different distributions. In this paper, we explore utilizing generative models as a data augmentation source for improving the out-of-distribution robustness of neural classifiers. Specifically, we develop a simple yet effective method called Generative Interpolation that fuses generative models trained on multiple domains to synthesize diverse OoD samples. Training a generative model directly on the source domains tends to suffer from mode collapse and sometimes amplifies the data bias. Instead, we first train a StyleGAN model on one source domain and then fine-tune it on the other domains, resulting in many correlated generators whose model parameters have the same initialization and are thus aligned. We then linearly interpolate the model parameters of the generators to spawn new sets of generators. Such interpolated generators are used as an extra data augmentation source to train the classifiers. The interpolation coefficients can flexibly control the augmentation direction and strength. In addition, a style-mixing mechanism is applied to further improve the diversity of the generated OoD samples. Our experiments show that the proposed method explicitly increases the diversity of training domains and achieves consistent improvements over baselines across datasets and multiple different distribution shifts.
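The core augmentation step described above, linearly interpolating the parameters of aligned generators, is easy to sketch. Below is a minimal PyTorch sketch under stated assumptions: the two generators share an architecture and a common initialization, as the abstract requires; all function and variable names are hypothetical, and the style-mixing step is omitted.

```python
# Minimal sketch of parameter interpolation between two aligned generators.
# Assumes floating-point parameters/buffers; names are illustrative.
import copy
import torch

def interpolate_generators(gen_a, gen_b, alpha=0.5):
    """Spawn a new generator with weights theta = (1-alpha)*theta_a + alpha*theta_b."""
    gen_new = copy.deepcopy(gen_a)
    state_a, state_b = gen_a.state_dict(), gen_b.state_dict()
    mixed = {k: (1 - alpha) * state_a[k] + alpha * state_b[k] for k in state_a}
    gen_new.load_state_dict(mixed)
    return gen_new

# Usage sketch: sample OoD augmentation images from a randomly interpolated generator.
# alpha = torch.rand(1).item()
# aug_gen = interpolate_generators(gen_domain1, gen_domain2, alpha)
# x_aug = aug_gen(torch.randn(16, latent_dim))
```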
Symmetric competition as a general model for single-species adaptive dynamics ; Adaptive dynamics is a widely used framework for modeling long-term evolution of continuous phenotypes. It is based on invasion fitness functions, which determine selection gradients and the canonical equation of adaptive dynamics. Even though the derivation of the adaptive dynamics from a given invasion fitness function is general and model-independent, the derivation of the invasion fitness function itself requires specification of an underlying ecological model. Therefore, evolutionary insights gained from adaptive dynamics models are generally model-dependent. Logistic models for symmetric, frequency-dependent competition are widely used in this context. Such models have the property that the selection gradients derived from them are gradients of scalar functions, which reflects a certain gradient property of the corresponding invasion fitness function. We show that any adaptive dynamics model that is based on an invasion fitness function with this gradient property can be transformed into a generalized symmetric competition model. This provides a precise delineation of the generality of results derived from competition models. Roughly speaking, to understand the adaptive dynamics of the class of models satisfying a certain gradient condition, one only needs a complete understanding of the adaptive dynamics of symmetric, frequency-dependent competition. We show how this result can be applied to a number of basic issues in evolutionary theory.
Generalized coupled wake boundary layer model: applications and comparisons with field and LES data for two wind farms ; We describe a generalization of the Coupled Wake Boundary Layer (CWBL) model for wind farms that can be used to evaluate the performance of wind farms under arbitrary wind inflow directions, whereas the original CWBL model (Stevens et al., J. Renewable and Sustainable Energy 7, 023115 (2015)) focused on aligned or staggered wind farms. The generalized CWBL approach combines an analytical Jensen wake model with a top-down boundary layer model, coupled through an iterative determination of the wake expansion coefficient and an effective wake coverage area for which the velocity at hub height obtained using both models converges in the deep-array portion (fully developed region) of the wind farm. The approach accounts for the effect of the wind direction by enforcing the coupling for each wind direction. Here we present detailed comparisons of model predictions with LES results and field measurements for the Horns Rev and Nysted wind farms operating over a wide range of wind inflow directions. Our results demonstrate that two-way coupling between the Jensen wake model and a top-down model enables the generalized CWBL model to predict the deep-array performance of a wind farm better than the Jensen wake model alone. The results also show that the new generalization allows us to study a much larger class of wind farms than the original CWBL model, which increases the utility of the approach for wind farm designers.
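For readers unfamiliar with the analytical ingredient of the CWBL approach, the classical Jensen (top-hat) wake model can be sketched in a few lines. This is a generic illustration, not the paper's coupled implementation: the thrust coefficient Ct and wake expansion coefficient k below are placeholder values, and the iterative tuning of k against the top-down boundary layer model is not shown.

```python
# Sketch of the single-turbine Jensen wake model: hub-height velocity a
# distance x downstream, assuming a top-hat deficit that decays as the
# wake expands linearly with rate k. Values of Ct and k are illustrative.
import numpy as np

def jensen_wake_velocity(U_inf, x, D, Ct=0.75, k=0.05):
    a = 0.5 * (1.0 - np.sqrt(1.0 - Ct))            # axial induction factor
    deficit = 2.0 * a / (1.0 + 2.0 * k * x / D) ** 2
    return U_inf * (1.0 - deficit)

print(jensen_wake_velocity(U_inf=8.0, x=560.0, D=80.0))  # 7 diameters downstream
```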
An Evaluation of Generative Pre-Training Model-based Therapy Chatbot for Caregivers ; With the advent of off-the-shelf intelligent home products and broader internet adoption, researchers increasingly explore smart computing applications that provide easier access to health and wellness resources. AI-based systems like chatbots have the potential to provide services that offer mental health support. However, existing therapy chatbots are often retrieval-based, requiring users to respond with a constrained set of answers, which may not be appropriate given that such predetermined inquiries may not reflect each patient's unique circumstances. Generative approaches, such as the OpenAI GPT models, could allow for more dynamic conversations in therapy chatbot contexts than previous approaches. To investigate the generative model's potential in therapy chatbot contexts, we built a chatbot using the GPT-2 model. We fine-tuned it with 306 therapy session transcripts between family caregivers of individuals with dementia and therapists conducting Problem Solving Therapy. We then evaluated the pretrained model and the fine-tuned model in terms of basic qualities using three meta-information measurements: the proportion of non-word outputs, the length of responses, and sentiment components. Results showed that (1) the fine-tuned model created more non-word outputs than the pretrained model; (2) the fine-tuned model generated outputs whose length was more similar to that of the therapists compared to the pretrained model; (3) both the pretrained and fine-tuned models were likely to generate more negative and fewer positive outputs than the therapists. We discuss potential reasons for these problems, the implications, and solutions for developing therapy chatbots, and call for investigations of AI-based system applications.
Open Problem: Approximate Planning of POMDPs in the Class of Memoryless Policies ; Planning plays an important role in the broad class of decision theory. Planning has drawn much attention in recent work in the robotics and sequential decision making areas. Recently, Reinforcement Learning (RL), as an agent-environment interaction problem, has brought further attention to planning methods. Generally in RL, one can assume a generative model, e.g. graphical models, for the environment, and the task for the RL agent is then to learn the model parameters and find the optimal strategy based on these learnt parameters. Based on environment behavior, the agent can assume various types of generative models, e.g. a Multi-Armed Bandit for a static environment, or a Markov Decision Process (MDP) for a dynamic environment. The advantage of these popular models is their simplicity, which results in tractable methods for learning the parameters and finding the optimal policy. The drawback of these models is again their simplicity: these models usually underfit and underestimate the actual environment behavior. For example, in robotics, the agent usually has noisy observations of the environment's inner state, and an MDP is not a suitable model. More complex models like the Partially Observable Markov Decision Process (POMDP) can compensate for this drawback. Fitting this model to the environment, where partial observations are given to the agent, generally gives a dramatic performance improvement, sometimes unbounded, compared to an MDP. In general, finding the optimal policy for the POMDP model is computationally intractable and fully non-convex, even for the class of memoryless policies. The open problem is to come up with a method to find an exact or approximate optimal stochastic memoryless policy for POMDP models.
A Tale of Two Flows: Cooperative Learning of Langevin Flow and Normalizing Flow Toward Energy-Based Model ; This paper studies the cooperative learning of two generative flow models, in which the two models are iteratively updated based on jointly synthesized examples. The first flow model is a normalizing flow that transforms an initial simple density into a target density by applying a sequence of invertible transformations. The second flow model is a Langevin flow that runs finite steps of gradient-based MCMC toward an energy-based model. We start by proposing a generative framework that trains an energy-based model with a normalizing flow as an amortized sampler to initialize the MCMC chains of the energy-based model. In each learning iteration, we generate synthesized examples by using a normalizing flow initialization followed by a short-run Langevin flow revision toward the current energy-based model. Then we treat the synthesized examples as fair samples from the energy-based model and update the model parameters with the maximum likelihood learning gradient, while the normalizing flow directly learns from the synthesized examples by maximizing the tractable likelihood. Under the short-run non-mixing MCMC scenario, the estimation of the energy-based model is shown to follow the perturbation of maximum likelihood, and the short-run Langevin flow and the normalizing flow form a two-flow generator that we call CoopFlow. We provide an understanding of the CoopFlow algorithm via information geometry and show that it is a valid generator as it converges to a moment matching estimator. We demonstrate that the trained CoopFlow is capable of synthesizing realistic images, reconstructing images, and interpolating between images.
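The short-run Langevin flow revision described above is a standard few-step Langevin update toward low energy. A minimal PyTorch sketch follows, assuming a scalar-valued energy_fn and flow-initialized samples x_init; names and step sizes are illustrative, not the paper's settings.

```python
# Sketch of a short-run Langevin flow: a few gradient-based MCMC steps that
# revise flow-initialized samples toward low energy under the current EBM.
import torch

def short_run_langevin(energy_fn, x_init, n_steps=30, step_size=0.01):
    x = x_init.clone().requires_grad_(True)
    for _ in range(n_steps):
        energy = energy_fn(x).sum()
        grad, = torch.autograd.grad(energy, x)
        noise = torch.randn_like(x)
        # Langevin update: drift down the energy gradient plus Gaussian noise.
        x = (x - 0.5 * step_size**2 * grad + step_size * noise)
        x = x.detach().requires_grad_(True)
    return x.detach()
```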
Deep Equilibrium Approaches to Diffusion Models ; Diffusion-based generative models are extremely effective in generating high-quality images, with generated samples often surpassing the quality of those produced by other models under several metrics. One distinguishing feature of these models, however, is that they typically require long sampling chains to produce high-fidelity images. This presents a challenge not only from the lens of sampling time, but also from the inherent difficulty in backpropagating through these chains in order to accomplish tasks such as model inversion, i.e. approximately finding latent states that generate known images. In this paper, we look at diffusion models through a different perspective, that of a deep equilibrium (DEQ) fixed point model. Specifically, we extend the recent denoising diffusion implicit model (DDIM; Song et al., 2020) and model the entire sampling chain as a joint, multivariate fixed point system. This setup provides an elegant unification of diffusion and equilibrium models, and shows benefits in (1) single image sampling, as it replaces the fully serial typical sampling process with a parallel one; and (2) model inversion, where we can leverage fast gradients in the DEQ setting to much more quickly find the noise that generates a given image. The approach is also orthogonal and thus complementary to other methods used to reduce the sampling time, or improve model inversion. We demonstrate our method's strong performance across several datasets, including CIFAR-10, CelebA, and LSUN Bedrooms and Churches.
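To make the joint fixed-point view concrete, a naive (Jacobi-style) parallel relaxation of a deterministic sampling chain can be sketched as follows. This is an illustration of the idea only: step_fns stands for the per-timestep DDIM-like updates (hypothetical names), and the paper's actual solver would replace the plain sweeps with an accelerated fixed-point solver (e.g. Anderson acceleration) to converge in far fewer iterations.

```python
# Sketch: treat the whole chain x_{T-1},...,x_0 as one joint fixed point
# X = F(X) and relax it with parallel sweeps instead of running serially.
import numpy as np

def parallel_sample(step_fns, x_T, n_sweeps=None):
    T = len(step_fns)                      # step_fns[t]: state t-1 -> state t
    n_sweeps = n_sweeps or T               # T plain sweeps reproduce the serial chain
    X = [np.copy(x_T) for _ in range(T)]   # initialize every state from x_T
    for _ in range(n_sweeps):
        # Jacobi sweep: all states are updated simultaneously from the old iterate.
        X = [step_fns[0](x_T)] + [step_fns[t](X[t - 1]) for t in range(1, T)]
    return X[-1]                           # x_0, the generated sample
```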
The "Code" of Ethics: A Holistic Audit of AI Code Generators ; AI-powered programming language generation (PLG) models have gained increasing attention due to their ability to generate source code of programs in a few seconds from a plain program description. Despite their remarkable performance, many concerns are raised over the potential risks of their development and deployment, such as legal issues of copyright infringement induced by training usage of licensed code, and malicious consequences due to the unregulated use of these models. In this paper, we present the first-of-its-kind study to systematically investigate the accountability of PLG models from the perspectives of both model development and deployment. In particular, we develop a holistic framework not only to audit the training data usage of PLG models, but also to identify neural code generated by PLG models and determine its attribution to a source model. To this end, we propose using membership inference to audit whether a code snippet was used in the PLG model's training data. In addition, we propose a learning-based method to distinguish between human-written code and neural code. For neural code attribution, through both empirical and theoretical analysis, we show that it is impossible to reliably attribute the generation of one code snippet to one model. We then propose two feasible alternative methods: one is to attribute one neural code snippet to one of the candidate PLG models, and the other is to verify whether a set of neural code snippets can be attributed to a given PLG model. The proposed framework thoroughly examines the accountability of PLG models, which is verified by extensive experiments. The implementations of our proposed framework are also encapsulated into a new artifact, named CodeForensic, to foster further research.
A Generic Dynamical Model of Gamma-ray Burst Remnants ; The conventional generic model is deemed to explain the dynamics of gamma-ray burst remnants very well, no matter whether they are adiabatic or highly radiative. However, we find that for adiabatic expansion, the model cannot reproduce the Sedov solution in the non-relativistic phase; thus the model needs to be revised. In the present paper, a new differential equation is derived. The generic model based on this equation is shown to be correct for both radiative and adiabatic fireballs, and in both the ultra-relativistic and non-relativistic phases.
Anderson-Yuval approach to the multichannel Kondo problem ; We analyze the structure of the perturbation expansion of the general multichannel Kondo model with channel-anisotropic exchange couplings and in the presence of an external magnetic field, generalizing the Anderson-Yuval technique to this case. For two channels, we are able to map the Kondo model onto a generalized resonant level model. Limiting cases in which the equivalent resonant level model is solvable are identified. The solution correctly captures the properties of the two-channel Kondo model, and also allows an analytic description of the crossover from non-Fermi-liquid to Fermi-liquid behavior caused by the channel anisotropy.
Chemical Potential of the Generalized Hubbard Model with Correlated Hopping ; In the present paper we study the chemical potential of the generalized Hubbard model with correlated hopping. The peculiarity of the model, in comparison with similar generalized Hubbard models, is the concentration dependence of the hopping integrals. The chemical potential as a function of the model energy parameters, electron concentration and temperature is found. It is shown that correlated hopping and temperature essentially change the location of the chemical potential; these dependencies differ strongly at different values of the electron concentration.
q-linear approximants: Scaling functions for polygon models ; The perimeter and area generating functions of exactly solvable polygon models satisfy q-functional equations, where q is the area variable. The behaviour in the vicinity of the point where the perimeter generating function diverges can often be described by a scaling function. We develop the method of q-linear approximants in order to extract the approximate scaling behaviour of polygon models when an exact solution is not known. We test the validity of our method by approximating exactly solvable q-linear polygon models. This leads to scaling functions for a number of q-linear polygon models, notably generalized rectangles, Ferrers diagrams, and stacks.
Generation of uncorrelated random scale-free networks ; Uncorrelated random scale-free networks are useful null models to check the accuracy of the analytical solutions of dynamical processes defined on complex networks. We propose and analyze a model capable of generating random uncorrelated scale-free networks with no multiple or self-connections. The model is based on the classical configuration model, with an additional restriction on the maximum possible degree of the vertices. We check numerically that the proposed model indeed generates scale-free networks with no two- and three-vertex correlations, as measured by the average degree of the nearest neighbors and the clustering coefficient of the vertices of degree k, respectively.
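A rough sketch of such a construction, assuming the standard structural cutoff k_max = sqrt(N) and using the configuration model from networkx, might look as follows; note that collapsing multi-edges after the fact is a simplification of the rejection procedure the abstract implies.

```python
# Sketch of an uncorrelated configuration model: a power-law degree sequence
# capped at sqrt(n), wired by stub matching, with self-loops and multi-edges
# then removed. Exponent gamma is illustrative.
import networkx as nx
import numpy as np

def uncorrelated_scale_free(n, gamma=2.5, seed=0):
    rng = np.random.default_rng(seed)
    k_max = int(np.sqrt(n))                        # structural cutoff
    degrees = np.clip(rng.zipf(gamma, size=n), 1, k_max)
    if degrees.sum() % 2:                          # degree sum must be even
        degrees[0] += 1                            # (may exceed cutoff by 1; fine for a sketch)
    g = nx.configuration_model(degrees.tolist(), seed=seed)
    g = nx.Graph(g)                                # collapse multi-edges
    g.remove_edges_from(nx.selfloop_edges(g))
    return g

print(uncorrelated_scale_free(10_000).number_of_edges())
```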
Logic programs with monotone abstract constraint atoms ; We introduce and study logic programs whose clauses are built out of monotone constraint atoms. We show that the operational concept of the one-step provability operator generalizes to programs with monotone constraint atoms, but the generalization involves nondeterminism. Our main results demonstrate that our formalism is a common generalization of (1) normal logic programming with its semantics of models, supported models and stable models, (2) logic programming with weight atoms (lparse programs) with the semantics of stable models, as defined by Niemela, Simons and Soininen, and (3) disjunctive logic programming with the possible-model semantics of Sakama and Inoue.
Two-dimensional dilaton black holes ; The two-dimensional CGHS model provides an interesting toy model for the study of black hole evaporation. For this model, a quantum effective action, which incorporates Hawking radiation and backreaction, can be explicitly constructed. In this paper, we study a generalization of this effective action. In our extended model, it is possible to remove certain curvature singularities arising in the original theory. We also find that the flux of Hawking radiation is identical to that encountered in other two-dimensional models.
3+1 spinfoam model of quantum gravity with spacelike and timelike components ; We present a spinfoam formulation of Lorentzian quantum General Relativity. The theory is based on a simple generalization of a Euclidean model defined in terms of a field theory over a group. The model is an extension of a recently introduced Lorentzian model, in which both timelike and spacelike components are included. The spinfoams in the model, corresponding to quantized 4-geometries, carry a natural non-perturbative local causal structure induced by the geometry of the algebra of the internal gauge sl(2,C). Amplitudes can be expressed as integrals over the hyperboloid of spacelike unit vectors in Minkowski space, or over the imaginary Lobachevskian space.
Cosmological Models Generalising Robertson-Walker Models ; Considering the physical 3-space (t = constant) of the spacetime metrics as spheroidal and pseudo-spheroidal, cosmological models which are generalizations of Robertson-Walker models are obtained. Specific forms of these general models, as solutions of Einstein's field equations, are also discussed in the radiation- and matter-dominated eras of the universe.
Astrophysical and Terrestrial Constraints on Singlet Majoron Models ; The general Lagrangian containing the couplings of the Higgs scalars to Majorana neutrinos is presented in the context of singlet Majoron models with intergenerational mixings. The analytical expressions for the coupling of the Majoron field to fermions are derived within these models. Astrophysical considerations imply severe restrictions on the parameters of the model if the singlet Majoron model with three generations is assumed to be embedded in grand unified theories. Bounds that originate from analyzing possible charged-lepton-violating decays in terrestrial experiments are also discussed. In particular, we find that experimental searches for muon decays with Majoron emission cannot generally be precluded by astrophysical requirements.
Logarithmic Potential Model of Quigg and Rosner as a Generalization of the Naive Quark Model ; Exploiting the explicit mass formulae for the logarithmic potential model of Quigg and Rosner, it is shown that, at least at the level of mass relations, this model reproduces the naive quark model relations and generalizes the latter in the case of a highly nontrivial potential. The generalization includes relations for higher values of orbital quantum numbers. In particular, predictions for the recently discovered atom-like P-states are no worse than those of any other potential model. The advantage lies in the simplicity of the approach.
More on generalized simplicial chiral models ; By generalizing the auxiliary field term in the Lagrangian of simplicial chiral models on a (d-1)-dimensional simplex, generalized simplicial chiral models were introduced in earlier work. These models can be solved analytically only in the d=0 and d=2 cases in the large-N limit. In the d=0 case, we calculate the eigenvalue density function in the strong regime and show that the partition function computed from this density function is consistent with the one calculated by direct path integration. In the d=2 case, it is shown that all V = Tr(AA†)^n models have a third-order phase transition, the same as the 2-dimensional Yang-Mills theory.
Brane World in Generalized Gravity ; We consider the Randall-Sundrum (RS) model in generalized gravities and see that the localization of gravity happens in generic situations, though its effectiveness depends on the details of the configuration. It is shown that the RS picture is robust against quantum gravity corrections (of the form phi R) as long as the correction is reasonably small. We extend our consideration to the model of scalar-dilaton coupled gravity, which leads us to a specific comparison between the RS model and inflation models. The exponential and power-law hierarchies in the RS model are shown to correspond to exponential and power-law inflation, respectively.
Noncommutative Integrable Field Theories in 2d ; We study the noncommutative generalization of euclidean integrable models in two dimensions, specifically the sine- and sinh-Gordon models and the U(N) principal chiral models. By looking at tree-level amplitudes for the sinh-Gordon model, we show that its naive noncommutative generalization is not integrable. On the other hand, the addition of extra constraints, obtained through the generalization of the zero-curvature method, renders the model integrable. We construct explicit nonlocal nontrivial conserved charges for the U(N) principal chiral model using the Brezin-Itzykson-Zinn-Justin-Zuber method.
On local coefficients for non-generic representations of some classical groups ; This paper is concerned with representations of split orthogonal and quasi-split unitary groups over a non-archimedean local field which are not generic, but which support a unique model of a different kind, the generalized Bessel model. The properties of the Bessel models under induction are studied, and an analogue of Rodier's theorem concerning the induction of Whittaker models is proved for Bessel models which are minimal in a suitable sense. The holomorphicity in the induction parameter of the Bessel functional is established. Lastly, local coefficients are defined for each irreducible supercuspidal representation which carries a Bessel functional, and also for a certain component of each representation parabolically induced from such a supercuspidal.
Operads, Algebras and Modules in General Model Categories ; In this paper we develop the theory of operads, algebras and modules in cofibrantly generated symmetric monoidal model categories. We give J-semi model structures, which are a slightly weaker version of model structures, for operads and algebras, and model structures for modules. In a second part we develop the theory of S-modules of EKMM, which allows a general homotopy theory for commutative algebras and pseudo-unital symmetric monoidal categories of modules over them. Finally we prove a base change and projection formula.
No-Free-Lunch equivalences for exponential Lévy models ; We provide the equivalence of numerous no-free-lunch type conditions for financial markets where the asset prices are modeled as exponential Lévy processes, under possible convex constraints on the use of investment strategies. The general message is the following: if any kind of free lunch exists in these models, it has to be of the most egregious type, generating an increasing wealth. Furthermore, we connect the previous results to the existence of the numeraire portfolio, both for its particular expositional clarity in exponential Lévy models and as a first step in obtaining analogues of the no-free-lunch equivalences in general semimartingale models.
Algebraic Structure of Lepton and Quark Flavor Invariants and CP Violation ; Lepton and quark flavor invariants are studied, both in the Standard Model with a dimension-five Majorana neutrino mass operator, and in the seesaw model. The ring of invariants in the lepton sector is highly nontrivial, with nonlinear relations among the basic invariants. The invariants are classified for the Standard Model with two and three generations, and for the seesaw model with two generations, and the Hilbert series is computed. The seesaw model with three generations proved computationally too difficult for a complete solution. We give an invariant definition of the CP-violating angle theta in the electroweak sector.
A local stochastic Lipschitz condition with application to Lasso for high-dimensional generalized linear models ; For regularized estimation, the upper tail behavior of the random Lipschitz coefficient associated with empirical loss functions is known to play an important role in the error bound of the Lasso for high-dimensional generalized linear models. The upper tail behavior is known for linear models but much less so for nonlinear models. We establish exponential-type inequalities for the upper tail of the coefficient and illustrate an application of the results to Lasso likelihood estimation for high-dimensional generalized linear models.
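As a concrete (if generic) instance of L1-penalized likelihood estimation for a high-dimensional GLM, here is a small scikit-learn sketch with simulated sparse logistic data; it illustrates the estimator the theory bounds, not the paper's experiments.

```python
# Sketch: Lasso-type (L1-penalized) likelihood estimation for a logistic GLM
# with p >> n-style sparsity. All sizes and the penalty C are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 200, 1000
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 1.5                                    # only 5 active coefficients
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta))).astype(int)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print("nonzero coefficients:", int(np.sum(model.coef_ != 0)))
```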
Homotopy limits of model categories and more general homotopy theories ; Generalizing a definition of homotopy fiber products of model categories, we give a definition of the homotopy limit of a diagram of left Quillen functors between model categories. As has been previously shown for homotopy fiber products, we prove that such a homotopy limit does in fact correspond to the usual homotopy limit, when we work in a more general model for homotopy theories in which they can be regarded as objects of a model category.
The generalized evolution of linear bias: a tool to test gravity ; We derive an exact analytical solution for the redshift evolution of linear and scale-independent bias, by solving a second-order differential equation based on linear perturbation theory. This bias evolution model is applicable to all different types of dark energy and modified gravity models. We propose that the combination of the current bias evolution model with data on the bias of extragalactic mass tracers could provide an efficient way to discriminate between geometrical dark energy models and dark energy models that adhere to general relativity.
On generalized Pólya urn models ; We study an urn model introduced in the paper of Chen and Wei, where at each discrete time step m balls are drawn at random from an urn containing the colors white and black. Balls are added to the urn according to the inspected colors, generalizing the well-known Pólya-Eggenberger urn model, the case m=1. We provide exact expressions for the expectation and the variance of the number of white balls after n draws, and determine the structure of higher moments. Moreover, we discuss extensions to more than two colors. Furthermore, we introduce and discuss a new urn model where the sampling of the m balls is carried out in a step-by-step fashion, and also introduce a generalized Friedman's urn model.
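The exact reinforcement rule of the Chen-Wei urn is not spelled out in the abstract, so the following Monte Carlo sketch assumes one plausible rule: each of the m drawn balls is returned together with `add` extra balls of its own color, which reduces to the Pólya-Eggenberger urn for m=1, add=1. All names are illustrative.

```python
# Monte Carlo sketch of an m-draw Pólya-type urn (assumed reinforcement rule).
import random

def polya_urn_m_draws(white, black, m, n_steps, add=1, seed=0):
    rng = random.Random(seed)
    for _ in range(n_steps):
        drawn_white = sum(rng.sample([1] * white + [0] * black, m))
        white += add * drawn_white          # reinforce each drawn ball's color
        black += add * (m - drawn_white)
    return white, black

# Usage: estimate E[#white after 100 draws] by averaging independent runs.
runs = [polya_urn_m_draws(5, 5, m=3, n_steps=100, seed=s)[0] for s in range(2000)]
print(sum(runs) / len(runs))
```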
A Model-Driven Probabilistic Parser Generator ; Existing probabilistic scanners and parsers impose hard constraints on the way lexical and syntactic ambiguities can be resolved. Furthermore, traditional grammar-based parsing tools are limited in the mechanisms they allow for taking context into account. In this paper, we propose a model-driven tool that allows for statistical language models with arbitrary probability estimators. Our work on model-driven probabilistic parsing is built on top of ModelCC, a model-based parser generator, and enables the probabilistic interpretation and resolution of anaphoric, cataphoric, and recursive references in the disambiguation of abstract syntax graphs. In order to demonstrate the expressive power of ModelCC, we describe the design of a general-purpose natural language parser.
A Generalization of the Noisy-Or Model ; The Noisy-Or model is convenient for describing a class of uncertain relationships in Bayesian networks (Pearl, 1988). Pearl describes the Noisy-Or model for Boolean variables. Here we generalize the model to n-ary input and output variables and to arbitrary functions other than the Boolean OR function. This generalization is a useful modeling aid for the construction of Bayesian networks. We illustrate with some examples, including digital circuit diagnosis and network reliability analysis.
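One way to read the generalization is: each input is independently subject to noise (passed through with some probability, suppressed otherwise), and an arbitrary deterministic function replaces the Boolean OR. A small enumeration-based sketch, with illustrative probabilities, follows.

```python
# Sketch: distribution of Y = f(z), where each input x_i is independently
# passed through with probability p[i] and suppressed to 0 otherwise.
# With f = any (Boolean OR) and binary inputs this recovers the Noisy-Or.
from itertools import product

def generalized_noisy(f, p, x):
    prob_y = {}
    for keep in product([0, 1], repeat=len(x)):
        w = 1.0
        z = []
        for i, k in enumerate(keep):
            w *= p[i] if k else (1.0 - p[i])
            z.append(x[i] if k else 0)
        y = f(z)
        prob_y[y] = prob_y.get(y, 0.0) + w
    return prob_y

print(generalized_noisy(any, [0.9, 0.8], [1, 1]))  # P(Y=True) = 1-0.1*0.2 = 0.98
print(generalized_noisy(max, [0.9, 0.8], [2, 1]))  # n-ary inputs, MAX replaces OR
```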
Optimal dividend problem for a generalized compound Poisson risk model ; In this note we study the optimal dividend problem for a company whose surplus process, in the absence of dividend payments, evolves as a generalized compound Poisson model in which the counting process is a generalized Poisson process. This model includes the classical risk model and the Pólya-Aeppli risk model as special cases. The objective is to find a dividend policy that maximizes the expected discounted value of the dividends which are paid to the shareholders until the company is ruined. We show that under some conditions the optimal dividend strategy is a barrier strategy.
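A barrier strategy is simple to simulate: premium income that would push the surplus above the barrier b is paid out as dividends instead, until ruin. The Monte Carlo sketch below uses the classical compound Poisson special case (ordinary Poisson counting, initial surplus u <= b assumed) rather than the paper's generalized counting process; all names are illustrative.

```python
# Sketch: discounted dividends under a barrier strategy for the classical
# compound Poisson surplus U(t) = u + c*t - S(t), with claim rate lam.
import numpy as np

def simulate_barrier_dividends(u, c, lam, claim_sampler, b, horizon, delta, seed=0):
    rng = np.random.default_rng(seed)
    t, surplus, pv_dividends = 0.0, u, 0.0
    while t < horizon:
        wait = rng.exponential(1.0 / lam)          # time to next claim
        room = max(b - surplus, 0.0)               # headroom below the barrier
        t_hit = wait if c * wait <= room else room / c
        surplus = min(surplus + c * wait, b)
        if wait > t_hit:                           # at the barrier: premiums become dividends
            pv_dividends += c * (np.exp(-delta * (t + t_hit))
                                 - np.exp(-delta * (t + wait))) / delta
        t += wait
        surplus -= claim_sampler(rng)              # claim arrives
        if surplus < 0:
            break                                  # ruin: dividends stop
    return pv_dividends

# Usage: unit-mean exponential claims, averaged over many paths.
vals = [simulate_barrier_dividends(5.0, 1.5, 1.0, lambda r: r.exponential(1.0),
                                   b=10.0, horizon=200.0, delta=0.05, seed=s)
        for s in range(1000)]
print(np.mean(vals))
```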
A general parametric model for the dynamic dark energy ; In the present work we suggest new and more generalized parameterizations for the Equation of State (EoS) of dark energy, maintaining the basic structure of the two-parameter CPL model, but covering both the past and the future of the cosmic history, without divergences and consistently with the current observational data. We propose two generalizations, starting from the extended MZp model of Ma and Zhang (2011): the xi-MZp model and the DFp model. The potential advantage of using these new formulations is their extended range of validity, mainly in the future, to determine possible future scenarios of the cosmic evolution.
Warm Gauge-Flation ; Non-abelian gauge field inflation is studied in the context of the warm inflation scenario. We introduce this scenario as a mechanism that provides an end for the gauge-flation model. Slow-roll parameters and perturbation parameters are presented for this model. We find the general conditions which are required for this model to be realizable in the slow-roll approximation. We also develop our model in the context of the intermediate and logamediate scenarios, which are exact solutions of the inflationary field equation in the Einstein theory. General expressions for the slow-roll parameters, tensor-scalar ratio and scalar spectral index are presented in terms of the inflaton field for these two cases. Our model is compatible with recent observational data from the Planck satellite.
Kitaev models based on unitary quantum groupoids ; We establish a generalization of Kitaev models based on unitary quantum groupoids. In particular, when inputting a Kitaev-Kong quantum groupoid H_C, we show that the ground state manifold of the generalized model is canonically isomorphic to that of the Levin-Wen model based on a unitary fusion category C. Therefore the generalized Kitaev models provide realizations of the target space of the Turaev-Viro TQFT based on C.
Generalized Teleparallel Gravity Via Some Scalar Field Dark Energy Models ; We consider generalized teleparallel gravity in the flat FRW universe with a viable power-law f(T) model. We construct its equation of state and deceleration parameters, which give accelerated expansion of the universe in the quintessence era for the obtained scale factor. Further, we develop a correspondence of the f(T) model with scalar field models such as quintessence, tachyon, K-essence and dilaton. The dynamics of the scalar field as well as the scalar potentials of these models indicate the expansion of the universe with acceleration in the f(T) gravity scenario.
Stacks and sheaves of categories as fibrant objects ; We show that the category of categories fibred over a site is a generalized Quillen model category in which the weak equivalences are the local equivalences and the fibrant objects are the stacks, as they were defined by J. Giraud. The generalized model category restricts to one on the full subcategory whose objects are the categories fibred in groupoids. We show that the category of sheaves of categories is a model category that is Quillen equivalent to the generalized model category for stacks and to the model category for strong stacks due to A. Joyal and M. Tierney.
Bi-Connection Gauss-Bonnet Gravity ; We consider a bi-connection model in the presence of a four-dimensional Gauss-Bonnet term added to the Einstein-Hilbert action. This generalization solves the dynamics issue which exists in the pure Einstein-Hilbert formalism of the bi-connection model. As an example we study the Weyl-inspired bi-connection model and show there is a self-accelerating solution in this model. To compare it with previous results we try to find an appropriate generalization of the Weyl geometrical bi-connection model to arrive at de Rham-Gabadadze-Tolley massive gravity. In this formalism, mixing terms between the potential and kinetic terms appear automatically.
Partitioned conditional generalized linear models for categorical data ; In categorical data analysis, several regression models have been proposed for hierarchically-structured response variables, e.g. the nested logit model. But they have been formally defined for only two or three levels in the hierarchy. Here, we introduce the class of partitioned conditional generalized linear models (PCGLMs) defined for any number of levels. The hierarchical structure of these models is fully specified by a partition tree of categories. Using the genericity of the (r,F,Z) specification, the PCGLM can handle nominal, ordinal, but also partially-ordered response variables.
Scalar perturbation in warm tachyon inflation in LQC in light of Planck and BICEP2 ; We study the warm tachyon inflationary universe model in the context of the effective field theory of loop quantum cosmology. In the slow-roll approximation, the primordial perturbation spectra for this model are calculated. We also obtain general expressions for the tensor-to-scalar ratio and the scalar spectral index. We develop this model using an exponential potential; the characteristics of this model are calculated in great detail. The parameters of the model are restricted by recent observational data from Planck, WMAP9 and BICEP2.
General f(R) and conformal inflation from minimal supergravity plus matter ; We embed general f(R) inflationary models in minimal supergravity plus matter, a single chiral superfield Phi, with or without another superfield S, via a Jordan-frame Einstein-scalar description. In particular, inflationary models like a generalized Starobinsky one are analyzed and constraints on them are found. We also embed the related models of conformal inflation, also described as Jordan-frame Einstein-scalar models, in particular conformal inflation from the Higgs model, and analyze the inflationary constraints on them.
Generalized multicritical one-matrix models ; We show that there exists a simple generalization of Kazakov's multicritical one-matrix model, which interpolates between the various multicritical points of the model. The associated multicritical potential takes the form of a power series with a heavy tail, leading to a cut of the potential and its derivative at the real axis, and reduces to a polynomial at Kazakov's multicritical points. From the combinatorial point of view, the generalized model allows polygons of arbitrarily large degree, or vertices of arbitrarily large degree when considering the dual graphs, and it is the weight assigned to these large-order polygons which brings about the interpolation between the multicritical points in the one-matrix model.
Dualities in CHL-Models ; We define a very general class of CHL-models associated with any string theory (bosonic or supersymmetric) compactified on an internal CFT C x T^d. We take the orbifold by a pair (g, delta), where g is a (possibly non-geometric) symmetry of C and delta is a translation along T^d. We analyze the T-dualities of these models and show that in general they contain Atkin-Lehner type symmetries. This generalizes our previous work on N=4 CHL-models based on heterotic string theory on T^6 or type II on K3 x T^2, as well as the 'monstrous' CHL-models based on a compactification of heterotic string theory on the Frenkel-Lepowsky-Meurman CFT V^natural.
Tilted Two Fluids Cosmological Models with Variable G and Lambda in General Relativity ; Tilted two fluids cosmological models with variable G and Lambda in General Relativity are presented. Here one fluid is a matter field modelling the material content of the universe and the other fluid is a radiation field modelling the cosmic microwave background (CMB). The tiltedness is also considered. To get a deterministic model, we have assumed a supplementary condition in which s and n are constants. We have also discussed the behaviour of some physical parameters.
Bayesian Semi-supervised Learning with Deep Generative Models ; Neural network based generative models with discriminative components are a powerful approach for semi-supervised learning. However, these techniques (a) cannot account for model uncertainty in the estimation of the model's discriminative component and (b) lack flexibility to capture complex stochastic patterns in the label generation process. To avoid these problems, we first propose to use a discriminative component with stochastic inputs for increased noise flexibility. We show how an efficient Gibbs sampling procedure can marginalize the stochastic inputs when inferring missing labels in this model. Following this, we extend the discriminative component to be fully Bayesian and produce estimates of uncertainty in its parameter values. This opens the door for semi-supervised Bayesian active learning.
A Deep Generative Model for Semi-Supervised Classification with Noisy Labels ; Class labels are often imperfectly observed, due to mistakes and to genuine ambiguity among classes. We propose a new semi-supervised deep generative model that explicitly models noisy labels, called the Mislabeled VAE (M-VAE). The M-VAE can perform better than existing deep generative models which do not account for label noise. Additionally, the derivation of the M-VAE gives new theoretical insights into the popular M1+M2 semi-supervised model.
A heat engine model exhibiting superuniversal features and capturing the efficiencies of different power plants ; We propose a generalized model of a heat engine and calculate the minimum and maximum bounds on the efficiency at maximum power. We obtain a universal form of the generalized extreme bounds on the efficiency at maximum power. Our model unifies the bounds on the efficiency, and the universality features are observed for various heat engine models. Even though our model is a direct generalization of low-dissipation heat engines, the bounds on the efficiency obtained for a single target function capture those observed in actual power plants working at different dissipation levels.
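For context, the well-known low-dissipation bounds that such models generalize can be computed directly: the efficiency at maximum power lies between eta_C/2 and eta_C/(2 - eta_C), with the Curzon-Ahlborn value in between. A small sketch with illustrative temperatures (not the paper's generalized bounds):

```python
# Sketch: classical low-dissipation bounds on efficiency at maximum power,
# compared with the Carnot and Curzon-Ahlborn efficiencies.
import numpy as np

for T_c, T_h in [(300.0, 600.0), (300.0, 900.0)]:
    eta_C = 1.0 - T_c / T_h                        # Carnot efficiency
    lo, hi = eta_C / 2.0, eta_C / (2.0 - eta_C)    # low-dissipation extremes
    eta_CA = 1.0 - np.sqrt(T_c / T_h)              # Curzon-Ahlborn efficiency
    print(f"eta_C={eta_C:.3f}  bounds=[{lo:.3f}, {hi:.3f}]  Curzon-Ahlborn={eta_CA:.3f}")
```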
Perturbed generalized multicritical one-matrix models ; We study perturbations around the generalized Kazakov multicritical one-matrix model. The multicritical matrix model has a potential whose coefficients of z^n fall off only as a power 1/n^(s+1). This implies that the potential and its derivatives have a cut along the real axis, leading to technical problems when one performs perturbations away from the generalized Kazakov model. Nevertheless, it is possible to relate the perturbed partition function to the tau-function of a KdV hierarchy and solve the model by a genus expansion in the double scaling limit.
The Bi-Spinor Standard Model with 3 Generations ; We show that if two of the four generations of the bi-spinor Standard Model are mass-degenerate or sufficiently close in mass, only three generations can be observed. We argue that the Standard Model and its bi-spinor analog are indistinguishable at the level of the electroweak precision variables S, T, U. As a result, the bi-spinor Standard Model, which describes the experimentally observed textures of the flavor mixing matrices, is a better fit to the data than the Standard Model, where the textures are arbitrary.
Variational f-divergence Minimization ; Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific f-divergence between the model and data distribution. In light of recent successes in training Generative Adversarial Networks, alternative non-likelihood training criteria have been proposed. Whilst not necessarily statistically efficient, these alternatives may better match user requirements such as sharp image generation. A general variational method for training probabilistic latent variable models using maximum likelihood is well established; however, how to train latent variable models using other f-divergences is comparatively unknown. We discuss a variational approach that, when combined with the recently introduced Spread Divergence, can be applied to train a large class of latent variable models using any f-divergence.
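To unpack the premise: an f-divergence is D_f(P||Q) = E_{x~Q}[f(p(x)/q(x))] for convex f with f(1) = 0, and the choice f(t) = t log t gives the forward KL that maximum likelihood minimizes. A minimal Monte Carlo sketch with Gaussians follows; it illustrates the divergence family only, not the paper's variational estimator.

```python
# Sketch: Monte Carlo estimate of D_f(P||Q) = E_{x~Q}[ f(p(x)/q(x)) ].
import numpy as np
from scipy.stats import norm

def f_divergence(f, logpdf_p, logpdf_q, samples_q):
    ratio = np.exp(logpdf_p(samples_q) - logpdf_q(samples_q))
    return np.mean(f(ratio))

rng = np.random.default_rng(0)
p, q = norm(0.0, 1.0), norm(0.5, 1.0)
x_q = rng.normal(0.5, 1.0, 100_000)               # samples from Q

kl_fwd = f_divergence(lambda t: t * np.log(t), p.logpdf, q.logpdf, x_q)  # KL(P||Q)
kl_rev = f_divergence(lambda t: -np.log(t), p.logpdf, q.logpdf, x_q)     # KL(Q||P)
print(kl_fwd, kl_rev)   # both ~0.125 here (equal variances, mean shift 0.5)
```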
Parametrized post-Newtonian limit of the Nieh-Yan modified teleparallel gravity ; The recently proposed Nieh-Yan modified teleparallel gravity is a parity-violating gravity model that modifies the general relativity equivalent teleparallel gravity by a Nieh-Yan term. This model is healthy and simple in form. In this paper, we consider the application of this model to the Solar System and investigate its slow-motion and weak-field approximation in terms of the parametrized post-Newtonian formalism. We find that all the post-Newtonian parameters of the model are the same as those of general relativity, which makes the model compatible with the Solar System experiments.
The Generalized Metastable Switch Memristor Model ; Memristor device modeling is currently a heavily researched topic and is becoming ever more important as memristor devices make their way into CMOS circuit designs, necessitating accurate and efficient memristor circuit simulations. In this paper, the Generalized Metastable Switch (MSS) memristor model is presented. The Generalized MSS model consists of a voltage-dependent stochastic component and a voltage-dependent exponential diode current component, and is designed to be easy to implement, computationally efficient, and amenable to modeling a wide range of different memristor devices.
Gaussian mixture models with Wasserstein distance ; Generative models with both discrete and continuous latent variables are highly motivated by the structure of many real-world data sets. They present, however, subtleties in training, often manifesting in the discrete latent variable being under-leveraged. In this paper, we show that such models are more amenable to training when using the Optimal Transport framework of Wasserstein Autoencoders. We find our discrete latent variable to be fully leveraged by the model when trained, without any modifications to the objective function or significant fine-tuning. Our model generates samples comparable to other approaches while using relatively simple neural networks, since the discrete latent variable carries much of the descriptive burden. Furthermore, the discrete latent provides significant control over generation.
Robust Neural Abstractive Summarization Systems and Evaluation against Adversarial Information ; Sequence-to-sequence (seq2seq) neural models have been actively investigated for abstractive summarization. Nevertheless, existing neural abstractive systems frequently generate factually incorrect summaries and are vulnerable to adversarial information, suggesting a crucial lack of semantic understanding. In this paper, we propose a novel semantic-aware neural abstractive summarization model that learns to generate high-quality summaries through semantic interpretation over salient content. A novel evaluation scheme with adversarial samples is introduced to measure how well a model identifies off-topic information, where our model yields significantly better performance than the popular pointer-generator summarizer. Human evaluation also confirms that our system summaries are uniformly more informative and faithful, as well as less redundant, than the seq2seq model.
Generalized inverse xgamma distribution: A non-monotone hazard rate model ; In this article, a generalized inverse xgamma distribution (GIXGD) is introduced as the generalized version of the inverse xgamma distribution. The proposed model exhibits a non-monotone hazard rate pattern and belongs to the family of positively skewed models. Explicit expressions of some distributional properties, such as moments, inverse moments, conditional moments, mean deviation, and the quantile function, have been derived. The maximum likelihood estimation procedure has been used to estimate the unknown model parameters as well as the survival characteristics of the GIXGD. The practical applicability of the proposed model has been illustrated using survival data of guinea pigs.
Majorana neutrino masses in gauge-Higgs unification ; The theory in which the extra-space component of the gauge field is identified with the Standard Model Higgs boson is called the gauge-Higgs unification (GHU) scenario. We examine how small neutrino masses are naturally generated in the GHU framework. We find two model classes in which the following matter multiplets are introduced: (1) an adjoint-representation lepton Psi_A; (2) a fundamental-representation lepton Psi_F and a scalar Sigma_F. We present a concrete model in each class. In the model of class 1, the neutrino masses are generated by an admixture of the type-I and type-III seesaw mechanisms. In the model of class 2, the masses are generated by the inverse seesaw mechanism.
A general model for vegetation patterns including rhizome growth ; Vegetation patterns, a natural phenomenon observed worldwide, are typically driven by spatially distributed feedback. However, the spatial colonization mechanisms of clonal plants, driven by the growth of a rhizome, are usually not considered in prototypical models. Here we propose a general equation for the vegetation density that includes all the main clonal-growth features as well as the essential ingredients leading to spatial self-organization. This generic model reproduces the phase diagram of a fully detailed model of clonal growth. The relation of each term of the model to the mechanisms of clonal growth is discussed.
Parameter-Conditioned Sequential Generative Modeling of Fluid Flows ; The computational cost associated with simulating fluid flows can make it infeasible to run many simulations across multiple flow conditions. Building upon concepts from generative modeling, we introduce a new method for learning neural network models capable of performing efficient parameterized simulations of fluid flows. Evaluated on their ability to simulate both two-dimensional and three-dimensional fluid flows, trained models are shown to capture local and global properties of the flow fields at a wide array of flow conditions. Furthermore, flow simulations generated by the trained models are shown to be orders of magnitude faster than the corresponding computational fluid dynamics simulations.
Modeling Long Context for Task-Oriented Dialogue State Generation ; Based on the recently proposed transferable dialogue state generator (TRADE) that predicts dialogue states from utterance-concatenated dialogue context, we propose a multi-task learning model with a simple yet effective utterance tagging technique and a bidirectional language model as an auxiliary task for task-oriented dialogue state generation. By enabling the model to learn a better representation of the long dialogue context, our approaches attempt to solve the problem that the performance of the baseline significantly drops when the input dialogue context sequence is long. In our experiments, our proposed model achieves a 7.03% relative improvement over the baseline, establishing a new state-of-the-art joint goal accuracy of 52.04% on the MultiWOZ 2.0 dataset.
Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering ; Generative models for open domain question answering have proven to be competitive, without resorting to external knowledge. While promising, this approach requires using models with billions of parameters, which are expensive to train and query. In this paper, we investigate how much these models can benefit from retrieving text passages, potentially containing evidence. We obtain state-of-the-art results on the Natural Questions and TriviaQA open benchmarks. Interestingly, we observe that the performance of this method significantly improves when increasing the number of retrieved passages. This is evidence that generative models are good at aggregating and combining evidence from multiple passages.
A Note on Data Simulations for Voting by Evaluation ; Voting rules based on evaluation inputs rather than preference orders have recently been proposed, like majority judgement, range voting or approval voting. Traditionally, the probabilistic analysis of voting rules supposes the use of simulation models to generate preference data, like the Impartial Culture (IC) or Impartial and Anonymous Culture (IAC) models. But these simulation models are not suitable for the analysis of evaluation-based voting rules, as they generate preference orders instead of the needed evaluations. We propose in this paper several simulation models for generating evaluation-based voting inputs. These models, inspired by classical ones, are defined, tested and compared for recommendation purposes.
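To illustrate the mismatch, the sketch below generates an IC profile of preference orders next to one simple evaluation-culture analogue (i.i.d. uniform grades); the paper proposes and compares several such evaluation models, of which this is only a generic stand-in.

```python
# Sketch: Impartial Culture profile (orders) vs. a simple evaluation culture
# (grades), as needed by rules like range voting or majority judgement.
import numpy as np

rng = np.random.default_rng(0)
n_voters, n_candidates = 100, 4

# IC: each voter's preference *order* is an independent uniform permutation.
ic_profile = np.array([rng.permutation(n_candidates) for _ in range(n_voters)])

# Evaluation analogue: i.i.d. uniform grades in [0, 1] for every candidate.
grades = rng.random((n_voters, n_candidates))

print("range-voting winner:", grades.mean(axis=0).argmax())
print("median-grade (majority-judgement-style) winner:",
      np.median(grades, axis=0).argmax())
```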
MolGrow: A Graph Normalizing Flow for Hierarchical Molecular Generation ; We propose a hierarchical normalizing flow model for generating molecular graphs. The model produces new molecular structures from a single-node graph by recursively splitting every node into two. All operations are invertible and can be used as plug-and-play modules. The hierarchical nature of the latent codes allows for precise changes in the resulting graph: perturbations in the top layer cause global structural changes, while perturbations in the subsequent layers change the resulting molecule only marginally. The proposed model outperforms existing generative graph models on the distribution learning task. We also show successful experiments on global and constrained optimization of chemical properties using the latent codes of the model.
Complementarity of generative models for road networks ; Understanding the dynamics of road networks has theoretical implications for urban science and practical applications for sustainable long-term planning. Various generative models to explain road network growth have been introduced in the literature. We propose in this paper a systematic benchmark of such models integrating different paradigms (spatial interactions, cost-benefit compromises, biological network growth), focusing on the feasible space of generated network measures. We find a quantitatively high complementarity between the different models. This confirms the necessity of a plurality of urban models, in line with integrative approaches to urban systems.
A generalization of the Scotogenic model ; The Scotogenic model is a radiative neutrino mass model able to induce Majorana neutrino masses at the 1-loop level and simultaneously include a dark matter candidate. In this work, we generalize the original Scotogenic model to arbitrary numbers of generations of the Scotogenic states. After that, we present the light neutrino mass matrix, with some details of its derivation, and provide a useful approximate expression as well. Finally, we numerically solve the Renormalization Group Equations to explore the high-energy behavior of the model.
Spanish Legalese Language Model and Corpora ; There are many Language Models for the English language, in line with its worldwide relevance. However, for the Spanish language, even though it is widely spoken, there are very few Spanish Language Models, and those that exist are small and too general. Legal language could be thought of as a Spanish variant in its own right, as it is very complicated in vocabulary, semantics, and phrase understanding. For this work we gathered legal-domain corpora from different sources, trained a model, and evaluated it on Spanish general-domain tasks. The model provides reasonable results on those tasks.
A mathematical model of the vowel space ; The articulatory-acoustic relationship is many-to-one and nonlinear, which is a major limitation for studying speech production. A simplification is proposed to set up a bijection between the vowel space (f1, f2) and the parametric space of different vocal tract models. The generic area function model is based on mixtures of cosines, allowing the generation of the main vowels with two formulas. The mixture function is then transformed into a coordination function able to deal with articulatory parameters. The coordination function is shown to behave similarly with Fant's model and with the 4-Tube DRM derived from the generic model.
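To make the cosine-mixture idea concrete, here is a toy sketch; the function name, mode count, baseline area, and tract length are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

def area_function(x, weights, length=17.0, a0=2.0):
    """Toy vocal-tract area function A(x) on [0, length] (cm), built as a
    baseline area plus a mixture of cosine modes. The specific form used
    in the paper may differ; this only illustrates the idea."""
    modes = np.array([np.cos((k + 1) * np.pi * x / length)
                      for k in range(len(weights))])
    return np.maximum(a0 + np.asarray(weights) @ modes, 0.05)  # keep area positive

x = np.linspace(0.0, 17.0, 100)
vowel_a_like = area_function(x, weights=[1.5, -0.8])  # two cosine terms: "two formulas"
vowel_i_like = area_function(x, weights=[-1.5, 0.6])
```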
Effective dimension of machine learning models ; Making statements about the performance of trained models on tasks involving new data is one of the primary goals of machine learning, that is, understanding the generalization power of a model. Various capacity measures try to capture this ability, but usually fall short in explaining important characteristics of models that we observe in practice. In this study, we propose the local effective dimension as a capacity measure which seems to correlate well with generalization error on standard data sets. Importantly, we prove that the local effective dimension bounds the generalization error and discuss the aptness of this capacity measure for machine learning models.
Gauge embedding procedure: classical and quantum equivalence between dual models ; In this paper the gauge embedding procedure of dualization is reassessed through a deeper analysis of the mutual equivalence of vector field models of more generic forms; explicitly, a general modified massive gauge-breaking extension of electrodynamics and its dual gauge-invariant model, which we derive in the paper. General relations between the vector field propagators and interaction terms of these models are obtained. Further, these models are shown to be equivalent in tree-level and one-loop physical calculations. Finally, we discuss the extension of this equivalence to all loop orders.
Noncommutative spaces and superspaces from Snyder and Yang type models ; The relativistic D=4 Snyder model is formulated in terms of D=4 dS algebra $\mathfrak{o}(4,1)$ generators, with noncommutative Lorentz-invariant Snyder quantum space-time provided by $\frac{O(4,1)}{O(3,1)}$ coset generators. Analogously, in relativistic D=4 Yang models the quantum-deformed relativistic phase space is described by the algebras of coset generators $\frac{O(5,1)}{O(3,1)}$ or $\frac{O(4,2)}{O(3,1)}$. We extend these algebraic considerations by using the respective dS superalgebras, which provide Lorentz-covariant quantum superspaces (SUSY Snyder model) as well as relativistic quantum phase (super)spaces (SUSY Yang model).
Modelling macroparasitic disease dynamics ; In this work we present a general framework for modeling the transmission dynamics of macroparasites which do not reproduce within the host, like Ascaris lumbricoides, Trichuris trichiura, Necator americanus and Ancylostoma duodenale. The basic models are derived from general probabilistic models for the parasite density-dependent mating probability. Here we consider the particular, and common, case of a negative binomial distribution for the number of parasites in hosts. We find the basic reproduction number and show that the system exhibits a saddle-node bifurcation at some value of the basic reproduction number. We also find the equilibria and the basic reproduction number of a model for the more general case of heterogeneous host populations.
Concrete categories and higher-order recursion: With applications including probability, differentiability, and full abstraction ; We study concrete sheaf models for a call-by-value higher-order language with recursion. Our family of sheaf models is a generalization of many examples from the literature, such as models for probabilistic and differentiable programming, and fully abstract logical relations models. We treat recursion in the spirit of synthetic domain theory. We provide a general construction of a lifting monad starting from a class of admissible monomorphisms in the site of the sheaf category. In this way, we obtain a family of models parametrized by a concrete site and a class of monomorphisms, for which we prove a general computational adequacy theorem.
MultiEarth 2022: The Champion Solution for the Image-to-Image Translation Challenge via Generation Models ; The MultiEarth 2022 Image-to-Image Translation challenge provides a well-constrained test bed for generating the corresponding RGB Sentinel-2 imagery from given Sentinel-1 VV & VH imagery. In this challenge, we designed various generation models and found that the SPADE [1] and pix2pixHD [2] models achieved our best results. In our self-evaluation, the SPADE [1] model with L1 loss achieves a 0.02194 MAE score and 31.092 dB PSNR. In our final submission, the best model achieves a 0.02795 MAE score, ranking No. 1 on the leaderboard.
A Neural Model of Number Comparison with Surprisingly Robust Generalization ; We propose a relatively simple computational neural-network model of number comparison. Training on comparisons of the integers 1-9 enables the model to efficiently and accurately simulate a wide range of phenomena, including distance and ratio effects, and to generalize robustly to multi-digit integers, negative numbers, and decimal numbers. An accompanying logical model of number comparison provides further insights into the workings of number comparison and its relation to the Arabic number system. These models provide a rational basis for the psychology of number comparison and the ability of neural networks to efficiently learn a powerful system with robust generalization.
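A toy version of the setup can be sketched as follows; the architecture, input scaling, and training details are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

# Toy number-comparison net (illustrative; architecture and encoding are assumed).
net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

# Train only on pairs drawn from 1..9, label = 1 if a > b.
pairs = torch.tensor([[a, b] for a in range(1, 10)
                             for b in range(1, 10) if a != b], dtype=torch.float)
labels = (pairs[:, 0] > pairs[:, 1]).float().unsqueeze(1)

for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(net(pairs / 9.0), labels)  # scale inputs to roughly [0, 1]
    loss.backward()
    opt.step()

# Probe generalization beyond the training range (multi-digit, negative, decimal).
test = torch.tensor([[37.0, 42.0], [-3.0, 2.0], [0.25, 0.5]])
print(torch.sigmoid(net(test / 9.0)) > 0.5)  # expect all False: a < b in each pair
```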
Study of weak-basis invariants in the universal seesaw model using Hilbert series ; The Universal Seesaw Model explains the mass hierarchy of the quark sector by introducing vector-like quarks. The top quark mass is generated at the electroweak scale, while the other quark masses are generated through a seesaw-like mechanism. Invariant theory helps construct weak-basis invariants. We study the weak-basis invariants (WBIs) using the Hilbert series (HS) and apply the approach to the Universal Seesaw Model, particularly the one-generation case of the quark sector.
Computation of partition functions of free fermionic solvable lattice models via permutation graphs ; In this paper, we introduce a novel and general method for computing partition functions of solvable lattice models with free fermionic Boltzmann weights. The method is based on the "permutation graph" and the "F-matrix": the permutation graph is a generalization of the R-matrix, and the F-matrix is constructed from the permutation graph. The method allows generalizations to lattice models related to Cartan types B and C. Two applications are presented: they involve an ice model related to Tokuyama's formula and another ice model representing a Whittaker function on the metaplectic double cover of $\mathrm{Sp}(2r, F)$, with $F$ a nonarchimedean local field.
Linear Interpolation In Parameter Space is Good Enough for Fine-Tuned Language Models ; The simplest way to obtain a continuous interpolation between two points in a high-dimensional space is to draw a line between them. While previous works focused on the general connectivity between model parameters, we explore linear interpolation for the parameters of pretrained models after fine-tuning. Surprisingly, we can perform linear interpolation without a performance drop at intermediate points for fine-tuned models. For controllable text generation, such interpolation can be seen as moving a model towards or against a desired text attribute (e.g., positive sentiment), which could serve as grounds for further methods for controllable text generation without inference speed overhead.
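The core operation is a one-liner over checkpoints; here is a minimal sketch, assuming both checkpoints share the same architecture (the evaluation hook is left as a comment).

```python
import torch
import torch.nn as nn

def interpolate_state_dicts(sd_a, sd_b, t):
    """theta(t) = (1 - t) * theta_a + t * theta_b, elementwise over all parameters.
    Assumes both checkpoints come from the same architecture (matching keys/shapes)."""
    return {k: torch.lerp(sd_a[k].float(), sd_b[k].float(), t) for k in sd_a}

# Tiny stand-in for two fine-tuned checkpoints of one pretrained model.
model_a, model_b, model = nn.Linear(4, 2), nn.Linear(4, 2), nn.Linear(4, 2)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    model.load_state_dict(interpolate_state_dicts(model_a.state_dict(),
                                                  model_b.state_dict(), t))
    # evaluate(model) would go here, e.g. measuring sentiment of generations
```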
Diffusion Models in NLP: A Survey ; Diffusion models have become a powerful family of deep generative models, with record-breaking performance in many applications. This paper first gives an overview and derivation of the basic theory of diffusion models, then reviews research on diffusion models in the field of natural language processing along four aspects, including text generation and text-driven image generation. It analyzes and summarizes the surveyed literature, and concludes with reflections from conducting this literature review.
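For reference, the standard DDPM forward process and simplified training objective that such a survey typically derives (standard notation, not quoted from this paper):

```latex
q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar\alpha_t}\,x_0,\ (1-\bar\alpha_t)\,\mathbf{I}\right),
\qquad \bar\alpha_t = \prod_{s=1}^{t} (1-\beta_s)

L_{\mathrm{simple}} = \mathbb{E}_{t,\,x_0,\,\epsilon}\!\left[\left\| \epsilon - \epsilon_\theta\!\left(\sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon,\ t\right) \right\|^2\right]
```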
Scotogenic models with a general lepton flavor dependent U(1) gauge symmetry ; We discuss radiative neutrino mass models with a general lepton-flavor-dependent U(1) gauge symmetry. The scotogenic model is adopted for neutrino mass generation, in which $Z_2$-odd singlet fermions and an inert scalar doublet are introduced. A lepton-flavor-dependent local U(1) symmetry is applied to realize a two-zero texture of the Majorana mass matrix of the $Z_2$-odd singlet fermions, where we explore minimal constructions choosing the U(1) charges of the standard model fermions and the singlet fermions. We then investigate the neutrino mass matrix, show some predictions of the constructed models, and discuss some phenomenological implications.
Cofibrantly generated model structures for functor calculus ; Model structures for many different kinds of functor calculus can be obtained by applying a theorem of Bousfield to a suitable category of functors. In this paper, we give a general criterion for when model categories obtained via this approach are cofibrantly generated. Our examples recover the homotopy functor and n-excisive model structures of Biedermann and Röndigs, with different proofs, but also include a model structure for the discrete functor calculus of Bauer, Johnson, and McCarthy.
Maximum likelihood thresholds of generic linear concentration models ; The maximum likelihood threshold of a statistical model is the minimum number of datapoints required to fit the model via maximum likelihood estimation. In this paper we determine the maximum likelihood thresholds of generic linear concentration models. This turns out to be the number one would expect from a naive dimension count, which is surprising and nontrivial to prove given that the maximum likelihood threshold is a semialgebraic concept. We also describe geometrically how a linear concentration model can fail to exhibit this generic behavior and briefly discuss connections to rigidity theory.
Examining the Emergence of Deductive Reasoning in Generative Language Models ; We conduct a preliminary inquiry into the ability of generative transformer models to reason deductively from provided premises. We observe notable differences in the performance of models coming from different training setups, and find that deductive reasoning ability increases with scale. Further, we discover that performance generally does not decrease with the length of the deductive chain needed to reach the conclusion, with the exception of the OpenAI GPT-3 and GPT-3.5 models. Our study considers a wide variety of transformer-decoder models, ranging from 117 million to 175 billion parameters in size.
High Fidelity Image Counterfactuals with Probabilistic Causal Models ; We present a general causal generative modelling framework for accurate estimation of high-fidelity image counterfactuals with deep structural causal models. Estimation of interventional and counterfactual queries for high-dimensional structured variables, such as images, remains a challenging task. We leverage ideas from causal mediation analysis and advances in generative modelling to design new deep causal mechanisms for structured variables in causal models. Our experiments demonstrate that our proposed mechanisms are capable of accurate abduction and estimation of direct, indirect and total effects, as measured by the axiomatic soundness of counterfactuals.
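The abduction step mentioned above follows the standard three-step counterfactual recipe, which a toy (non-deep) SCM makes explicit; the structural equations below are invented for illustration.

```python
# Toy SCM: Z exogenous; X = Z + U_x; Y = 2*X + Z + U_y.
# Counterfactual "what would Y have been had X been x'?" via the standard
# abduction-action-prediction recipe (a stand-in for the paper's deep mechanisms).
z, x, y = 0.5, 1.2, 3.4                 # one observed datapoint
u_x = x - z                             # abduction: invert mechanisms to get noise
u_y = y - 2 * x - z
x_cf = 0.0                              # action: intervene do(X = 0)
y_cf = 2 * x_cf + z + u_y               # prediction: push forward with same noise
print(y_cf)
```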
CaloScore v2: Single-shot Calorimeter Shower Simulation with Diffusion Models ; Diffusion generative models are promising alternatives for fast surrogate models, producing high-fidelity physics simulations. However, the generation time often requires an expensive denoising process with hundreds of function evaluations, restricting the current applicability of these models in a realistic setting. In this work, we report updates on the CaloScore architecture, detailing the changes in the diffusion process, which produce higher-quality samples, and the use of progressive distillation, resulting in a diffusion model capable of generating new samples with a single function evaluation. We demonstrate these improvements using the Calorimeter Simulation Challenge 2022 dataset.
Generating observation-guided ensembles for data assimilation with a denoising diffusion probabilistic model ; This paper presents an ensemble data assimilation method using pseudo-ensembles generated by a denoising diffusion probabilistic model. Since the model is trained against noisy and sparse observation data, it can produce divergent ensembles close to observations. Thanks to the variance in the generated ensembles, our proposed method displays better performance than a well-established ensemble data assimilation method when the simulation model is imperfect.
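A minimal sketch of how such a pseudo-ensemble could feed a standard analysis step, here a stochastic ensemble Kalman filter update; the observation operator, noise level, and the Gaussian stand-in for the DDPM samples are all illustrative assumptions.

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, obs_var):
    """Stochastic EnKF analysis step. ensemble: (n_members, state_dim);
    H: (obs_dim, state_dim) linear observation operator."""
    rng = np.random.default_rng(0)
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)                 # anomalies
    P = X.T @ X / (n - 1)                                # sample covariance
    R = obs_var * np.eye(len(y_obs))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain
    perturbed = y_obs + rng.normal(0, np.sqrt(obs_var), size=(n, len(y_obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

# Pseudo-ensemble: in the paper's setting these members would come from the
# trained diffusion model; here we fake them with Gaussian samples.
ens = np.random.default_rng(1).normal(0.0, 1.0, size=(64, 10))
H = np.eye(3, 10)                                        # observe first 3 components
analysis = enkf_update(ens, y_obs=np.array([0.5, -0.2, 0.1]), H=H, obs_var=0.1)
```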
Dynamical models with a general anisotropy profile ; Both numerical simulations and observational evidence indicate that the outer regions of galaxies and dark matter haloes are typically mildly to significantly radially anisotropic. The inner regions can be significantly non-isotropic, depending on the dynamical formation and evolution processes. In an attempt to remedy the lack of simple dynamical models that can reproduce this behaviour, we explore a technique to construct dynamical models with an arbitrary density and an arbitrary anisotropy profile. We outline a general construction method and propose a more practical approach based on a parameterized anisotropy profile. This approach consists of fitting the density of the model with a set of dynamical components, each of which has the same anisotropy profile. Using this approach we avoid the delicate fine-tuning difficulties other fitting techniques typically encounter when constructing radially anisotropic models. We present a model anisotropy profile that generalizes the Osipkov-Merritt profile and can represent any smooth monotonic anisotropy profile. Based on this model anisotropy profile, we construct a very general seven-parameter set of dynamical components for which the most important dynamical properties can be calculated analytically. We use the results to look for simple one-component dynamical models that generate simple potential-density pairs while still supporting a flexible anisotropy profile. We present families of Plummer and Hernquist models in which the anisotropy at small and large radii can be chosen as free parameters. We also generalize these two families to a three-parameter family that self-consistently generates the set of Veltmann potential-density pairs. (Abridged)
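For reference, the Osipkov-Merritt profile that is being generalized, together with one plausible generalized form with free inner and outer anisotropies (a hedged reconstruction; the paper's exact parameterization may differ):

```latex
% Osipkov-Merritt anisotropy: isotropic at the centre, radial at large radii
\beta_{\mathrm{OM}}(r) = \frac{r^2}{r^2 + r_a^2}

% A generalized profile with free inner and outer anisotropies
% (illustrative form; the paper's exact parameterization may differ):
\beta(r) = \frac{\beta_0 + \beta_\infty\,(r/r_a)^{2\delta}}{1 + (r/r_a)^{2\delta}}
```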
Why are deep nets reversible? A simple theory, with implications for training ; Generative models for deep learning are promising both for improving understanding of the model and for yielding training methods requiring fewer labeled samples. Recent works use generative model approaches to produce the deep net's input given the value of a hidden layer several levels above. However, there is no accompanying proof of correctness for the generative model, showing that the feedforward deep net is the correct inference method for recovering the hidden layer given the input. Furthermore, these models are complicated. The current paper takes a more theoretical tack. It presents a very simple generative model for ReLU deep nets, with the following characteristics: (i) The generative model is just the reverse of the feedforward net: if the forward transformation at a layer is $A$ then the reverse transformation is $A^T$. This can be seen as an explanation of the old weight-tying idea for denoising autoencoders. (ii) Its correctness can be proven under a clean theoretical assumption: the edge weights in real-life deep nets behave like random numbers. Under this assumption, which is experimentally tested on real-life nets like AlexNet, it is formally proved that the feedforward net is a correct inference method for recovering the hidden layer. The generative model suggests a simple modification for training: use the generative model to produce synthetic data with labels and include it in the training set. Experiments are shown to support this theory of random-like deep nets, and show that it helps training.
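The claim in (i) can be checked numerically in a few lines; the sizes, sparsity level, and the factor of 2 (compensating for ReLU discarding roughly half the coordinates) are illustrative choices in this sketch, not the paper's exact experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2000, 600                                 # layer sizes, n > m
A = rng.normal(0, 1 / np.sqrt(m), size=(m, n))   # random-like weights, as assumed

x = (rng.random(n) < 0.02) * rng.random(n)       # sparse nonnegative "hidden layer"
h = np.maximum(A @ x, 0)                         # feedforward: h = ReLU(A x)
x_hat = np.maximum(2 * A.T @ h, 0)               # reverse/generative pass uses A^T

# With random-like weights, A^T approximately inverts A on sparse inputs.
corr = np.corrcoef(x, x_hat)[0, 1]
print(f"correlation between x and reconstruction: {corr:.2f}")
```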
Combining Recurrent Neural Networks and Adversarial Training for Human Motion Synthesis and Control ; This paper introduces a new generative deep learning network for human motion synthesis and control. Our key idea is to combine recurrent neural networks (RNNs) and adversarial training for human motion modeling. We first describe an efficient method for training an RNN model from prerecorded motion data. We implement recurrent neural networks with long short-term memory (LSTM) cells because they are capable of handling the nonlinear dynamics and long-term temporal dependencies present in human motions. Next, we train a refiner network using an adversarial loss, similar to Generative Adversarial Networks (GANs), such that the refined motion sequences are indistinguishable from real motion capture data by a discriminative network. We embed contact information into the generative deep learning model to further improve the performance of our generative model. The resulting model is appealing for motion synthesis and control because it is compact and contact-aware, and can generate an unlimited number of natural-looking motions of arbitrary length. Our experiments show that motions generated by our deep learning model are always highly realistic and comparable to high-quality motion capture data. We demonstrate the power and effectiveness of our models by exploring a variety of applications, ranging from random motion synthesis and online/offline motion control to motion filtering. We show the superiority of our generative model by comparison against baseline models.
Semantic Object Accuracy for Generative Text-to-Image Synthesis ; Generative adversarial networks conditioned on textual image descriptions are capable of generating realistic-looking images. However, current methods still struggle to generate images based on complex image captions from a heterogeneous domain. Furthermore, quantitatively evaluating these text-to-image models is challenging, as most evaluation metrics only judge image quality but not the conformity between the image and its caption. To address these challenges we introduce a new model that explicitly models individual objects within an image, and a new evaluation metric called Semantic Object Accuracy (SOA) that specifically evaluates images given an image caption. The SOA uses a pretrained object detector to evaluate whether a generated image contains the objects that are mentioned in the image caption, e.g. whether an image generated from "a car driving down the street" contains a car. We perform a user study comparing several text-to-image models and show that our SOA metric ranks the models the same way as humans, whereas other metrics such as the Inception Score do not. Our evaluation also shows that models which explicitly model objects outperform models which only model global image characteristics.
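In sketch form, SOA reduces to detector-verified recall of caption-mentioned objects; `detect_objects` below is a stand-in for the pretrained detector, and the keyword-matching caption parser is a simplifying assumption.

```python
def semantic_object_accuracy(samples, detect_objects, vocab):
    """samples: list of (caption, generated_image) pairs.
    detect_objects(image) -> set of detected class names (stand-in for a
    pretrained detector). vocab: detector class names to look for in captions."""
    hits, total = 0, 0
    for caption, image in samples:
        mentioned = {c for c in vocab if c in caption.lower()}
        if not mentioned:
            continue                      # caption mentions no detectable object
        detected = detect_objects(image)
        hits += len(mentioned & detected)
        total += len(mentioned)
    return hits / total if total else 0.0

# Toy usage with a fake detector:
fake_detect = lambda image: {"car"}
samples = [("A car driving down the street", None),
           ("A dog on a sofa", None)]
print(semantic_object_accuracy(samples, fake_detect, vocab={"car", "dog", "sofa"}))
# -> 1 of 3 mentioned objects recovered, about 0.33
```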
No slow-roll inflation à la Generalized Chaplygin Gas in General Relativity ; The Generalized Chaplygin Gas (GCG) model is characterized by the equation of state $P = -A/\rho^{\alpha}$, where $A > 0$ and $\alpha > -1$. The model has been extensively studied due to its interesting properties and applicability in several contexts, from late-time acceleration to primordial inflation. Nonetheless, we show that the inflationary slow-roll regime cannot be satisfied in most of the parameter space of the GCG model when General Relativity (GR) is considered. In particular, although the model has been applied to inflation with $0 < \alpha < 1$, we show that for $-1 < \alpha \le 1$ there is no expansion of the Universe but an accelerated contraction. For $\alpha \le 5/3$, the second slow-roll parameter $\eta_H$ is larger than unity, so there is no sustained period of inflation. Only for $\alpha$ very close to 1 does the model produce enough e-folds, thus greatly reducing its parameter space. Moreover, we show that the model is ruled out by the Planck 2018 results. Finally, we extend our analysis to the Generalized Chaplygin-Jacobi Gas (GCJG) model. We find that the introduction of a new parameter does not change the previous results. We thus conclude that the violation of the slow-roll conditions is a generic feature of the GCG and GCJG models during inflation when GR is considered, and that the models are ruled out by the Planck 2018 results.
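For context, the Hubble slow-roll parameters referenced above, in one standard convention (assumed here, not quoted from the paper); slow roll requires $\epsilon_H < 1$ and $|\eta_H| < 1$:

```latex
\epsilon_H \equiv 2 M_{\mathrm{Pl}}^2 \left(\frac{H'(\phi)}{H(\phi)}\right)^{\!2},
\qquad
\eta_H \equiv 2 M_{\mathrm{Pl}}^2 \, \frac{H''(\phi)}{H(\phi)}
```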
Model Generation with Provable Coverability for Offline Reinforcement Learning ; Model-based offline optimization with dynamics-aware policy provides a new perspective for policy learning and out-of-distribution generalization, where the learned policy can adapt to different dynamics enumerated at the training stage. But due to the limitations of the offline setting, the learned model cannot mimic real dynamics well enough to support reliable out-of-distribution exploration, which still hinders policy generalization. To narrow the gap, previous works roughly ensemble randomly initialized models to better approximate the real dynamics. However, such practice is costly and inefficient, and provides no guarantee of how well the real dynamics can be approximated by the learned models, which we name coverability in this paper. We actively address this issue by generating models with a provable ability to cover real dynamics in an efficient and controllable way. To that end, we design a distance metric for dynamics models based on the occupancy of policies under the dynamics, and propose an algorithm to generate models optimizing their coverage of the real dynamics. We give a theoretical analysis of the model generation process and prove that our algorithm can provide enhanced coverability. As a downstream task, we train a dynamics-aware policy with minor or no conservative penalty, and experiments demonstrate that our algorithm outperforms prior offline methods on existing offline RL benchmarks. We also discover that policies learned by our method have better zero-shot transfer performance, implying better generalization.
Can segmentation models be trained with fully synthetically generated data? ; In order to achieve good performance and generalisability, medical image segmentation models should be trained on sizeable datasets with sufficient variability. Due to ethics and governance restrictions, and the costs associated with labelling data, scientific development is often stifled, with models trained and tested on limited data. Data augmentation is often used to artificially increase the variability in the data distribution and improve model generalisability. Recent works have explored deep generative models for image synthesis, as such an approach would enable the generation of an effectively infinite amount of varied data, addressing the generalisability and data access problems. However, many proposed solutions limit the user's control over what is generated. In this work, we propose brainSPADE, a model which combines a synthetic diffusion-based label generator with a semantic image generator. Our model can produce fully synthetic brain labels on demand, with or without pathology of interest, and then generate a corresponding MRI image in an arbitrary guided style. Experiments show that brainSPADE synthetic data can be used to train segmentation models with performance comparable to that of models trained on real data.
Execution-based Evaluation for Data Science Code Generation Models ; Code generation models can benefit data scientists' productivity by automatically generating code from context and text descriptions. An important measure of modeling progress is whether a model can generate code that executes correctly to solve the task. However, due to the lack of an evaluation dataset that directly supports execution-based model evaluation, existing work relies on code surface-form similarity metrics (e.g., BLEU, CodeBLEU) for model selection, which can be inaccurate. To remedy this, we introduce ExeDS, an evaluation dataset for execution evaluation of data science code generation tasks. ExeDS contains a set of 534 problems from Jupyter Notebooks, each consisting of code context, task description, reference program, and the desired execution output. With ExeDS, we evaluate the execution performance of five state-of-the-art code generation models that have achieved high surface-form evaluation scores. Our experiments show that models with high surface-form scores do not necessarily perform well on execution metrics, and that execution-based metrics can better capture model code generation errors. Source code and data can be found at httpsgithub.comJunjieHuangExeDS
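A minimal sketch of execution-based scoring: run the candidate program and compare its stdout against the reference output. The bare-subprocess harness and exact-match rule below are simplifying assumptions, not ExeDS's actual evaluation code.

```python
import subprocess
import sys

def execution_match(generated_code: str, expected_output: str,
                    timeout: float = 10.0) -> bool:
    """Run generated code in a fresh interpreter and compare stdout with the
    reference. Real harnesses need sandboxing and richer output comparison."""
    try:
        result = subprocess.run([sys.executable, "-c", generated_code],
                                capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False
    if result.returncode != 0:
        return False                     # crashed: execution error
    return result.stdout.strip() == expected_output.strip()

# Surface-form metrics would score these two candidates similarly; only
# execution reveals that the second one is wrong.
print(execution_match("print(sum(range(10)))", "45"))   # True
print(execution_match("print(sum(range(9)))", "45"))    # False
```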
Your Diffusion Model is Secretly a Zero-Shot Classifier ; The recent wave of large-scale text-to-image diffusion models has dramatically increased our text-based image generation abilities. These models can generate realistic images for a staggering variety of prompts and exhibit impressive compositional generalization abilities. Almost all use cases thus far have solely focused on sampling; however, diffusion models can also provide conditional density estimates, which are useful for tasks beyond image generation. In this paper, we show that the density estimates from large-scale text-to-image diffusion models like Stable Diffusion can be leveraged to perform zero-shot classification without any additional training. Our generative approach to classification, which we call Diffusion Classifier, attains strong results on a variety of benchmarks and outperforms alternative methods of extracting knowledge from diffusion models. Although a gap remains between generative and discriminative approaches on zero-shot recognition tasks, our diffusion-based approach has significantly stronger multimodal compositional reasoning ability than competing discriminative approaches. Finally, we use Diffusion Classifier to extract standard classifiers from class-conditional diffusion models trained on ImageNet. Our models achieve strong classification performance using only weak augmentations and exhibit qualitatively better effective robustness to distribution shift. Overall, our results are a step toward using generative over discriminative models for downstream tasks. Results and visualizations at httpsdiffusionclassifier.github.io
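The classification rule described above can be sketched compactly: score each candidate conditioning by its expected noise-prediction error and take the argmin. `eps_model` is a stand-in for a conditional denoiser (e.g. a Stable Diffusion UNet); the Monte Carlo details are simplified relative to the paper, which also shares (t, eps) draws across classes to reduce variance.

```python
import torch

@torch.no_grad()
def diffusion_classify(x0, class_embeddings, eps_model, alphas_cumprod, n_trials=64):
    """Pick the class whose conditioning best explains the image:
    argmin_c E_{t, eps} || eps - eps_model(x_t, t, c) ||^2.
    eps_model is a stand-in for a conditional denoiser (e.g. an SD UNet)."""
    T = len(alphas_cumprod)
    errors = torch.zeros(len(class_embeddings))
    for _ in range(n_trials):
        t = torch.randint(0, T, (1,)).item()
        eps = torch.randn_like(x0)
        a = alphas_cumprod[t]
        x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps      # forward diffusion
        for i, c in enumerate(class_embeddings):        # same (t, eps) for all classes
            errors[i] += ((eps - eps_model(x_t, t, c)) ** 2).mean()
    return int(errors.argmin())
```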
UniDiff: Advancing Vision-Language Models with Generative and Discriminative Learning ; Recent advances in vision-language pre-training have enabled machines to perform better in multimodal object discrimination (e.g., image-text semantic alignment) and image synthesis (e.g., text-to-image generation). On the other hand, fine-tuning pretrained models with discriminative or generative capabilities, such as CLIP and Stable Diffusion, on domain-specific datasets has shown to be effective in various tasks by adapting to specific domains. However, few studies have explored the possibility of learning both discriminative and generative capabilities and leveraging their synergistic effects to create a powerful and personalized multimodal model during fine-tuning. This paper presents UniDiff, a unified multimodal model that integrates image-text contrastive learning (ITC), text-conditioned image synthesis learning (IS), and reciprocal semantic consistency modeling (RSC). UniDiff effectively learns aligned semantics and mitigates the issue of semantic collapse during fine-tuning on small datasets by leveraging RSC on visual features from CLIP and diffusion models, without altering the pretrained model's basic architecture. UniDiff demonstrates versatility in both multimodal understanding and generative tasks. Experimental results on three datasets (Fashion-man, Fashion-woman, and E-commercial Product) showcase substantial enhancements in vision-language retrieval and text-to-image generation, illustrating the advantages of combining discriminative and generative fine-tuning. The proposed UniDiff model establishes a robust pipeline for personalized modeling and serves as a benchmark for future comparisons in the field.
Improving Image Captioning Descriptiveness by Ranking and LLM-based Fusion ; State-of-the-Art (SoTA) image captioning models often rely on the Microsoft COCO (MS-COCO) dataset for training. This dataset contains annotations provided by human annotators, who typically produce captions averaging around ten tokens. However, this constraint presents a challenge in effectively capturing complex scenes and conveying detailed information. Furthermore, captioning models tend to exhibit bias towards the "average" caption, which captures only the more general aspects. What would happen if we were able to automatically generate longer captions, thereby making them more detailed? Would these captions, evaluated by humans, be more or less representative of the image content compared to the original MS-COCO captions? In this paper, we present a novel approach to address previous challenges by showcasing how captions generated from different SoTA models can be effectively fused, resulting in richer captions. Our proposed method leverages existing models from the literature, eliminating the need for additional training. Instead, it utilizes an image-text-based metric to rank the captions generated by SoTA models for a given image. Subsequently, the top two captions are fused using a Large Language Model (LLM). Experimental results demonstrate the effectiveness of our approach, as the captions generated by our model exhibit higher consistency with human judgment when evaluated on the MS-COCO test set. By combining the strengths of various SoTA models, our method enhances the quality and appeal of image captions, bridging the gap between automated systems and the rich, informative nature of human-generated descriptions. This advance opens up new possibilities for generating captions that are more suitable for the training of both vision-language and captioning models.
A General Framework For Frequentist Model Averaging ; Model selection strategies have been routinely employed to determine a model for data analysis in statistics, and further study and inference then often proceed as though the selected model were the true model, known a priori. This practice does not account for the uncertainty introduced by the selection process and the fact that the selected model can possibly be a wrong one. Model averaging approaches try to remedy this issue by combining estimators for a set of candidate models. Specifically, instead of deciding which model is the "right" one, a model averaging approach suggests fitting a set of candidate models and averaging over the estimators using certain data-adaptive weights. In this paper we establish a general frequentist model averaging framework that does not set any restrictions on the set of candidate models. It greatly broadens the scope of the existing methodologies under the frequentist model averaging development. Assuming the data come from an unknown model, we derive the model averaging estimator and study its limiting distributions and related predictions while taking possible modeling biases into account. We propose a set of optimal weights to combine the individual estimators so that the expected mean squared error of the averaged estimator is minimized. Simulation studies are conducted to compare the performance of the estimator with that of existing methods. The results show the benefits of the proposed approach over traditional model selection approaches as well as existing model averaging methods.
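A toy sketch of the idea: fit several candidate models and combine their predictions with simplex-constrained weights. The weight-selection criterion below (held-out MSE over a grid) is an illustrative stand-in for the paper's optimal weighting, which minimizes expected mean squared error analytically.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
y = 1.5 * X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=n)
train, val = slice(0, 150), slice(150, None)

# Candidate models: linear fits on nested subsets of the covariates,
# all of them misspecified to some degree (as the framework allows).
def fit_predict(cols):
    A = np.column_stack([np.ones(n), X[:, cols]])
    beta, *_ = np.linalg.lstsq(A[train], y[train], rcond=None)
    return A @ beta

preds = np.column_stack([fit_predict(c) for c in ([0], [0, 1], [0, 1, 2])])

# Grid search over simplex weights, minimizing validation MSE.
grid = [w for w in product(np.linspace(0, 1, 21), repeat=3)
        if abs(sum(w) - 1) < 1e-9]
mse = lambda w: np.mean((y[val] - preds[val] @ np.array(w)) ** 2)
w_best = min(grid, key=mse)
print("weights:", w_best, "val MSE:", round(mse(w_best), 3))
```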