An Alternative Derivation of Johannisson's Regular Perturbation Model ; We provide here an alternative derivation of the generalization of the nonlinear Turin model for dispersion-unmanaged coherent optical links provided in Johannisson's report arXiv:1205.2193.
|
On calculus of functors in model categories ; We present an analysis of some constructions and arguments from the universe of T. G. Goodwillie's Calculus, in a general model-theoretic setting.
|
Field localization and Nambu-Jona-Lasinio mass generation mechanism in an alternative 5-dimensional brane model ; We consider a 5-dimensional brane world model with a single brane which is distinct from the well-known Randall-Sundrum model. We discuss the similarities and differences between our brane model and the Randall-Sundrum brane model. In particular we focus on the localization of 5D fields with different spins (spin 0, spin 1/2, spin 1) to the brane, and a self-consistent mass generation mechanism. We find that the brane model studied here has different, and in some cases superior, localization properties for fields/particles with different spins to the brane, as compared to the original 5-dimensional brane models. In addition this alternative 5D brane model exhibits a self-generation mechanism which recalls the self-consistent approach of Nambu and Jona-Lasinio.
|
Mathematical analysis of a marine ecosystem model with nonlinear coupling terms and nonlocal boundary conditions ; We investigate the weak solvability of initial-boundary value problems associated with an ecosystem model of the marine phosphorus cycle. The analysis covers the model equations themselves as well as their linearization, which is important in model calibration via parameter identification. We treat both cases simultaneously by investigating a system of advection-diffusion-reaction equations coupled by general reaction terms and boundary conditions. We derive a weak formulation of the generalized equations and prove two theorems about its unique solvability, provided that the reaction terms consist of Lipschitz-continuous and monotone operators. In the proofs, we adapt different techniques (Galerkin approximation, Banach's Fixed Point Theorem) to the multidimensional model equation. By applying the general theorems to the problems associated with the phosphorus model, we obtain results about existence and uniqueness of their solutions. Moreover, by assuming a generalized setting, the theorems establish the basis for the mathematical analysis of the whole model class to which the investigated phosphorus model belongs.
|
Generative models versus underlying symmetries to explain biological pattern ; Mathematical models play an increasingly important role in the interpretation of biological experiments. Studies often present a model that generates the observations, connecting hypothesized process to an observed pattern. Such generative models confirm the plausibility of an explanation and make testable hypotheses for further experiments. However, studies rarely consider the broad family of alternative models that match the same observed pattern. The symmetries that define the broad class of matching models are in fact the only aspects of information truly revealed by observed pattern. Commonly observed patterns derive from simple underlying symmetries. This article illustrates the problem by showing the symmetry associated with the observed rate of increase in fitness in a constant environment. That underlying symmetry reveals how each particular generative model defines a single example within the broad class of matching models. Further progress on the relation between pattern and process requires deeper consideration of the underlying symmetries.
|
Symbolic regression of generative network models ; Networks are a powerful abstraction with applicability to a variety of scientific fields. Models explaining their morphology and growth processes permit a wide range of phenomena to be more systematically analysed and understood. At the same time, creating such models is often challenging and requires insights that may be counterintuitive. Yet there currently exists no general method to arrive at better models. We have developed an approach to automatically detect realistic decentralised network growth models from empirical data, employing a machine learning technique inspired by natural selection and defining a unified formalism to describe such models as computer programs. As the proposed method is completely general and does not assume any pre-existing models, it can be applied out of the box to any given network. To validate our approach empirically, we systematically rediscover predefined growth laws underlying several canonical network generation models and credible laws for diverse real-world networks. We were able to find programs that are simple enough to lead to an actual understanding of the mechanisms proposed, namely for a simple brain and a social network.
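The search described above can be caricatured in a few lines of Python. Everything below is a hedged illustration, not the paper's formalism: three hand-written attachment kernels stand in for evolved programs, growth tracks only node degrees, and the dissimilarity measure crudely compares maximum and mean degree.

```python
import random

# Candidate "growth programs": each maps a node's degree to an attachment
# weight. These kernels are illustrative stand-ins for evolved programs.
KERNELS = {
    "uniform":   lambda d: 1.0,
    "linear":    lambda d: float(d),
    "quadratic": lambda d: float(d * d),
}

def grow(kernel, n_nodes, rng):
    """Grow a graph, tracking only degrees: each new node attaches to one
    existing node sampled with probability proportional to kernel(degree)."""
    deg = [1, 1]  # start from a single edge
    for _ in range(n_nodes - 2):
        weights = [kernel(d) for d in deg]
        target = rng.choices(range(len(deg)), weights=weights)[0]
        deg[target] += 1
        deg.append(1)
    return deg

def score(deg, target_deg):
    """Crude distance between two degree sequences (a stand-in for the
    paper's dissimilarity measure): compare max degree and mean degree."""
    return abs(max(deg) - max(target_deg)) + abs(
        sum(deg) / len(deg) - sum(target_deg) / len(target_deg))

rng = random.Random(0)
target = grow(KERNELS["linear"], 300, rng)  # pretend this is empirical data
best = min(KERNELS,
           key=lambda k: score(grow(KERNELS[k], 300, random.Random(1)), target))
print("best matching kernel:", best)
```

A real symbolic-regression loop would mutate and recombine whole programs rather than pick from a fixed menu, but the fit-generate-score cycle is the same.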
|
Model selection and minimax estimation in generalized linear models ; We consider model selection in generalized linear models (GLM) for high-dimensional data and propose a wide class of model selection criteria based on penalized maximum likelihood with a complexity penalty on the model size. We derive a general nonasymptotic upper bound for the expected Kullback-Leibler divergence between the true distribution of the data and that generated by a selected model, and establish the corresponding minimax lower bounds for sparse GLM. For a properly chosen nonlinear penalty, the resulting penalized maximum likelihood estimator is shown to be asymptotically minimax and adaptive to the unknown sparsity. We also discuss possible extensions of the proposed approach to model selection in GLM under additional structural constraints and aggregation.
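A minimal sketch of model selection with a complexity penalty on model size, in the Gaussian linear special case of a GLM (a simplifying assumption; the paper treats general GLM and nonlinear penalties): enumerate candidate supports and minimize a penalized negative log-likelihood with penalty proportional to k log p.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 6
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]                       # sparse ground truth
y = X @ beta_true + 0.01 * rng.standard_normal(n)

def penalized_score(support, c=2.0):
    """Gaussian negative log-likelihood (up to constants) plus a
    complexity penalty c * k * log(p) on the model size k."""
    k = len(support)
    if k == 0:
        rss = float(y @ y)
    else:
        Xs = X[:, support]
        beta_hat, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = float(np.sum((y - Xs @ beta_hat) ** 2))
    return n / 2 * np.log(rss / n) + c * k * np.log(p)

supports = [s for r in range(p + 1)
            for s in itertools.combinations(range(p), r)]
best = min(supports, key=lambda s: penalized_score(list(s)))
print("selected support:", best)
```

Exhaustive enumeration is only feasible for small p; the point is the shape of the criterion, not the search.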
|
Generalized Model of VSCbased Energy Storage Systems for Transient Stability Analysis ; This paper presents a generalized energy storage system model for voltage and angle stability analysis. The proposed solution allows modeling most common energy storage technologies through a given set of linear differential algebraic equations DAEs. In particular, the paper considers, but is not limited to, compressed air, superconducting magnetic, electrochemical capacitor and battery energy storage devices. While able to cope with a variety of different technologies, the proposed generalized model proves to be accurate for angle and voltage stability analysis, as it includes a balanced, fundamentalfrequency model of the voltage source converter VSC and the dynamics of the dc link. Regulators with inclusion of hard limits are also taken into account. The transient behavior of the generalized model is compared with detailed fundamentalfrequency balanced models as well as commonlyused simplified models of energy storage devices. A comprehensive case study based on the WSCC 9bus test system is presented and discussed.
|
Neural Variational Inference for Text Processing ; Recent advances in neural variational inference have spawned a renaissance in deep latent variable models. In this paper we introduce a generic variational inference framework for generative and conditional models of text. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here we construct an inference network conditioned on the discrete text input to provide the variational distribution. We validate this framework on two very different text modelling applications, generative document modelling and supervised question answering. Our neural variational document model combines a continuous stochastic document representation with a bag-of-words generative model and achieves the lowest reported perplexities on two standard test corpora. The neural answer selection model employs a stochastic representation layer within an attention mechanism to extract the semantics between a question and answer pair. On two question answering benchmarks this model exceeds all previously published benchmarks.
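A single forward pass of a bag-of-words latent variable model of this kind can be sketched in NumPy. All architecture details below (tanh encoder, one latent sample, randomly initialized weights) are illustrative assumptions, not the paper's model; the sketch only shows the inference network, the reparameterised sample, and the one-sample ELBO.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H, Z = 50, 32, 8                 # vocab size, hidden size, latent size

# Toy bag-of-words document: counts over the vocabulary.
x = rng.poisson(0.5, size=V).astype(float)

# Inference network q(z|x): one hidden layer producing mean and log-variance.
W1 = 0.1 * rng.standard_normal((V, H))
Wmu = 0.1 * rng.standard_normal((H, Z))
Wlv = 0.1 * rng.standard_normal((H, Z))
h = np.tanh(x @ W1)
mu, logvar = h @ Wmu, h @ Wlv

# Reparameterised sample z = mu + sigma * eps.
eps = rng.standard_normal(Z)
z = mu + np.exp(0.5 * logvar) * eps

# Generative network p(x|z): softmax over the vocabulary (bag-of-words).
Wd = 0.1 * rng.standard_normal((Z, V))
logits = z @ Wd
logp = logits - np.log(np.sum(np.exp(logits)))    # log-softmax
log_lik = float(x @ logp)                         # log p(x|z) up to a constant

# KL(q(z|x) || N(0, I)) in closed form.
kl = 0.5 * float(np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar))

elbo = log_lik - kl
print("one-sample ELBO estimate:", elbo)
```

Training would backpropagate through this estimate; the closed-form KL and the reparameterisation trick are what make that tractable.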
|
Exact solvability and asymptotic aspects of generalized XX0 spin chains ; Building on our earlier work [SaZa], we introduce and study generalized XX0 models. We explicitly construct a long-range interacting spin chain, referred to as the Selberg model, and study the correlation functions of the Selberg and XX0 models. Using a matrix integral representation of the generalized XX0 model and applying asymptotic analysis of non-intersecting Brownian motion, the phase structure of the Selberg model is determined. We find that the tails of the Tracy-Widom distribution of the Gaussian unitary ensemble govern a discrete-to-continuous third-order phase transition in the Selberg model. The same method also reproduces the Gross-Witten phase transition of the original XX0 model. Finally, we conjecture universal features for the phase structure of the generalized XX0 model.
|
General phase spaces from discrete variables to rotor and continuum limits ; We provide a basic introduction to discrete-variable, rotor, and continuous-variable quantum phase spaces, explaining how the latter two can be understood as limiting cases of the first. We extend the limit-taking procedures used to travel between phase spaces to a general class of Hamiltonians including many local stabilizer codes, and provide six examples: the Harper equation, the Baxter parafermionic spin chain, the Rabi model, the Kitaev toric code, the Haah cubic code (which we generalize to qudits), and the Kitaev honeycomb model. We obtain continuous-variable generalizations of all models, some of which are novel. The Baxter model is mapped to a chain of coupled oscillators and the Rabi model to the optomechanical radiation pressure Hamiltonian. The procedures also yield rotor versions of all models, five of which are novel many-body extensions of the almost Mathieu equation. The toric and cubic codes are mapped to lattice models of rotors, with the toric code case related to U(1) lattice gauge theory.
|
Chest X-ray Inpainting with Deep Generative Models ; Generative adversarial networks have been successfully applied to inpainting in natural images. However, the current state-of-the-art models have not yet been widely adopted in the medical imaging domain. In this paper, we investigate the performance of three recently published deep-learning-based inpainting models (context encoders, semantic image inpainting, and the contextual attention model) applied to chest X-rays, as the chest exam is the most commonly performed radiological procedure. We train these generative models on 1.2M 128x128 patches from 60K healthy X-rays, and learn to predict the center 64x64 region in each patch. We test the models on both healthy and abnormal radiographs. We evaluate the results by visual inspection and by comparing PSNR scores. The outputs of the models are in most cases highly realistic. We show that the methods have potential to enhance and detect abnormalities. In addition, we perform a 2AFC observer study and show that an experienced human observer performs poorly in detecting inpainted regions, particularly those generated by the contextual attention model.
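PSNR, the quantitative metric mentioned above, is straightforward to compute. A small reference implementation, assuming 8-bit images with peak value 255 (an assumption of this sketch, not a detail taken from the paper):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((64, 64))
b = np.full((64, 64), 10.0)      # constant error of 10 gray levels
print(round(psnr(a, b), 2))      # 10*log10(255^2/100) ≈ 28.13
```

For inpainting evaluation, `reference` would be the held-out center region and `test` the model's prediction.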
|
Training Question Answering Models From Synthetic Data ; Question and answer generation is a data augmentation method that aims to improve question answering (QA) models given the limited amount of human-labeled data. However, a considerable gap remains between synthetic and human-generated question-answer pairs. This work aims to narrow this gap by taking advantage of large language models and explores several factors such as model size, quality of pretrained models, scale of data synthesized, and algorithmic choices. On the SQuAD1.1 question answering task, we achieve higher accuracy using solely synthetic questions and answers than when using the SQuAD1.1 training set questions alone. Removing access to real Wikipedia data, we synthesize questions and answers from a synthetic corpus generated by an 8.3 billion parameter GPT-2 model. With no access to human supervision and only access to other models, we are able to train state-of-the-art question answering networks on entirely model-generated data that achieve 88.4 Exact Match (EM) and 93.9 F1 score on the SQuAD1.1 dev set. We further apply our methodology to SQuAD2.0 and show a 2.8 absolute gain in EM score compared to prior work using synthetic data.
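The EM and F1 numbers quoted are the standard SQuAD metrics. A sketch of how they are conventionally computed (this mirrors the usual SQuAD evaluation recipe of lowercasing and stripping punctuation and articles; it is not code from this paper):

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, truth):
    return normalize(prediction) == normalize(truth)

def f1(prediction, truth):
    pred, gold = normalize(prediction).split(), normalize(truth).split()
    common = Counter(pred) & Counter(gold)       # token-level overlap
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The cat sat", "cat sat"))   # True: "the" is stripped
print(f1("green eggs", "eggs and ham"))
```

Corpus-level scores are the averages of these per-example values, taking the maximum over reference answers when several are available.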
|
Continuous spin models on annealed generalized random graphs ; We study Gibbs distributions of spins taking values in a general compact Polish space, interacting via a pair potential along the edges of a generalized random graph with a given asymptotic weight distribution P, obtained by annealing over the random graph distribution. First we prove a variational formula for the corresponding annealed pressure and provide criteria for absence of phase transitions in the general case. We furthermore study classes of models with second-order phase transitions, which include rotation-invariant models on spheres and models on intervals, and classify their critical exponents. We find critical exponents which are modified relative to the corresponding mean-field values when P becomes too heavy-tailed, in which case they move continuously with the tail exponent of P. For large classes of models they are the same as for the Ising model treated in [DomGiaGibHofPri16]. On the other hand, we provide conditions under which the model is in a different universality class, and construct an explicit example of such a model on the interval.
|
Disentangling Latent Factors of Variational AutoEncoder with Whitening ; After deep generative models were successfully applied to image generation tasks, learning disentangled latent variables of data has become a crucial part of deep generative model research. Many models have been proposed to learn an interpretable and factorized representation of the latent variable by modifying their objective function or model architecture. To disentangle the latent variable, some models show lower quality of reconstructed images and others increase the model complexity, which makes them hard to train. In this paper, we propose a simple disentangling method based on a traditional whitening process. The proposed method is applied to the latent variables of a variational autoencoder (VAE), although it can be applied to any generative model with latent variables. In experiments, we apply the proposed method to simple VAE models, and the results confirm that our method finds more interpretable factors from the latent space while keeping the reconstruction error the same as the conventional VAE's error.
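The traditional whitening process the method builds on can be sketched as PCA whitening of a matrix of latent codes. This is illustrative only: the snippet whitens a batch of sampled codes, whereas the paper integrates whitening into the VAE itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are latent codes from a trained encoder: correlated Gaussians.
n, d = 500, 4
mixing = rng.standard_normal((d, d))
Z = rng.standard_normal((n, d)) @ mixing

def whiten(Z):
    """PCA whitening: decorrelate the latent dimensions and rescale each
    to unit variance, so the empirical covariance becomes the identity."""
    Zc = Z - Z.mean(axis=0)
    cov = np.cov(Zc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return Zc @ eigvecs / np.sqrt(eigvals)

Zw = whiten(Z)
print(np.allclose(np.cov(Zw, rowvar=False), np.eye(d), atol=1e-6))  # True
```

After whitening, each latent coordinate carries uncorrelated, unit-variance information, which is the property exploited for disentanglement.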
|
Regularized adversarial examples for model interpretability ; As machine learning algorithms continue to improve, there is an increasing need for explaining why a model produces a certain prediction for a certain input. In recent years, several methods for model interpretability have been developed, aiming to explain which subset regions of the model input are the main reason for the model prediction. In parallel, a significant research community effort has occurred in recent years to develop adversarial example generation methods for fooling models, while not altering the true label of the input, as it would have been classified by a human annotator. In this paper, we bridge the gap between adversarial example generation and model interpretability, and introduce a modification to the adversarial example generation process which encourages better interpretability. We analyze the proposed method on a public medical imaging dataset, both quantitatively and qualitatively, and show that it significantly outperforms the leading known alternative method. Our suggested method is simple to implement, and can be easily plugged into most common adversarial example generation frameworks. Additionally, we propose an explanation quality metric, APE (Adversarial Perturbative Explanation), which measures how well an explanation describes model decisions.
|
Model-based RL in Contextual Decision Processes: PAC bounds and Exponential Improvements over Model-free Approaches ; We study the sample complexity of model-based reinforcement learning (henceforth RL) in general contextual decision processes that require strategic exploration to find a near-optimal policy. We design new algorithms for RL with a generic model class and analyze their statistical properties. Our algorithms have sample complexity governed by a new structural parameter called the witness rank, which we show to be small in several settings of interest, including factored MDPs. We also show that the witness rank is never larger than the recently proposed Bellman rank parameter governing the sample complexity of the model-free algorithm OLIVE (Jiang et al., 2017), the only other provably sample-efficient algorithm for global exploration at this level of generality. Focusing on the special case of factored MDPs, we prove an exponential lower bound for a general class of model-free approaches, including OLIVE, which, when combined with our algorithmic results, demonstrates exponential separation between model-based and model-free RL in some rich-observation settings.
|
Bounds from ISW-galaxy cross-correlations on generalized covariant Galileon models ; Several modified cosmological models exist which also try to address the tensions between data and the predictions of the Lambda-CDM model. Galileon models are particular scalar-tensor theories that represent one such possibility. While it is commonly understood that there may be inconsistencies between the predictions of some Galileon models and observations, in particular concerning ISW-galaxy cross-correlations, there is no proof yet that these models are completely ruled out. Indeed, by using a specific background in the generalized covariant Galileon theory known as the tracker solution, here we show that, after imposing all standard theoretical stability constraints, it is still possible to identify a region in the parameter space of the model that allows for positive ISW-galaxy cross-correlations. By a physical interpretation in terms of a chi-square analysis, we confirm the expectation that in this viable region the predictions of the generalized covariant Galileon theory on the tracker solution background have higher likelihood when they approach the physics of the Lambda-CDM model.
|
Modeling the respiratory Central Pattern Generator with resonate-and-fire Izhikevich Neurons ; Computational models of the respiratory central pattern generator (rCPG) are usually based on biologically plausible Hodgkin-Huxley neuron models. Such models require numerous parameters and thus are prone to overfitting. The HH approach is motivated by the assumption that the biophysical properties of neurons determine the network dynamics. Here, we implement the rCPG using simpler Izhikevich resonate-and-fire neurons. Our rCPG model generates a 3-phase respiratory motor pattern based on established connectivities and can reproduce previous experimental and theoretical observations. Further, we demonstrate the flexibility of the model by testing whether intrinsic bursting properties are necessary for rhythmogenesis. Our simulations demonstrate that replacing the predicted mandatory bursting properties of pre-inspiratory neurons with spike-adapting properties yields a model that generates comparable respiratory activity patterns. The latter supports our view that the importance of the exact modeling parameters of specific respiratory neurons is overestimated.
|
A Generative Model for Exploring Structure Regularities in Attributed Networks ; Many real-world networks, known as attributed networks, contain two types of information: topology information and node attributes. It is a challenging task to use these two types of information to explore structural regularities. In this paper, by characterizing the potential relationship between link communities and node attributes, a principled statistical model named PSBPG that generates link topology and node attributes is proposed. Its model for generating links is based on stochastic blockmodels following a Poisson distribution. Therefore, it is capable of detecting a wide range of network structures, including community structures, bipartite structures and other mixture structures. The model for generating node attributes assumes that node attributes are high-dimensional and sparse, and also follow a Poisson distribution. This makes the model uniform, and the model parameters can be directly estimated by the expectation-maximization (EM) algorithm. Experimental results on artificial networks and real networks containing various structures have shown that the proposed model PSBPG is not only competitive with the state-of-the-art models, but also provides good semantic interpretation for each community via the learned relationship between the community and its related attributes.
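The Poisson link-generation component can be sketched directly: draw edge counts from a Poisson whose rate depends only on the endpoints' communities. The rate matrix, sizes, and assignments below are made-up illustrations, not learned PSBPG parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 200, 2
z = rng.integers(0, K, size=n)               # community assignment per node
omega = np.array([[1.0, 0.1],
                  [0.1, 1.0]])               # Poisson rate per block pair

# Multigraph adjacency: edge counts drawn from Poisson(omega[z_i, z_j]).
rates = omega[z[:, None], z[None, :]]
A = rng.poisson(rates)
A = np.triu(A, 1)                            # keep upper triangle only
A = A + A.T                                  # symmetrise, no self-loops

within = A[z[:, None] == z[None, :]].mean()
between = A[z[:, None] != z[None, :]].mean()
print(f"mean edge count within: {within:.2f}, between: {between:.2f}")
```

Attribute generation in the paper follows the same Poisson pattern over a node-attribute rate matrix, which is what lets one EM procedure estimate both parts.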
|
Mathematical Models in Isotropic Cosmology ; An axiomatic approach to the mathematical models of the isotropic cosmology.
|
Linear dynamical neural population models through nonlinear embeddings ; A body of recent work in modeling neural activity focuses on recovering low-dimensional latent features that capture the statistical structure of large-scale neural populations. Most such approaches have focused on linear generative models, where inference is computationally tractable. Here, we propose fLDS, a general class of nonlinear generative models that permits the firing rate of each neuron to vary as an arbitrary smooth function of a latent, linear dynamical state. This extra flexibility allows the model to capture a richer set of neural variability than a purely linear model, but retains an easily visualizable low-dimensional latent space. To fit this class of non-conjugate models we propose a variational inference scheme, along with a novel approximate posterior capable of capturing rich temporal correlations across time. We show that our techniques permit inference in a wide class of generative models. We also show in application to two neural datasets that, compared to state-of-the-art neural population models, fLDS captures a much larger proportion of neural variability with a small number of latent dimensions, providing superior predictive performance and interpretability.
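A generative model of the fLDS type is easy to simulate forward: linear-Gaussian latent dynamics, an arbitrary smooth link mapping the latent state to per-neuron firing rates (exp is used here as one admissible choice), and Poisson spike counts. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, N = 200, 2, 30                    # time bins, latent dim, neurons

theta = 0.1                             # latent: slowly damped rotation
A = 0.98 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
C = 0.5 * rng.standard_normal((N, d))   # per-neuron loading of the latent
b = -1.0                                # baseline log-rate per bin

x = np.zeros((T, d))
x[0] = [2.0, 0.0]
for t in range(1, T):
    x[t] = A @ x[t - 1] + 0.05 * rng.standard_normal(d)  # linear dynamics

rates = np.exp(x @ C.T + b)             # smooth nonlinear embedding
spikes = rng.poisson(rates)             # Poisson spike counts per bin
print(spikes.shape, spikes.sum())
```

Inference reverses this direction: given `spikes`, the variational scheme recovers a posterior over the low-dimensional trajectory `x`.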
|
Standard Model Extension with Flipped Generations ; An extension of the Standard Model is presented that leads to the possible existence of new gauge bosons with masses in the range of a few TeV. Due to the fact that their couplings to Standard Model fermions are strongly suppressed, it is possible for them to be hidden from current searches. The model contains additional generations of fermions with quantum numbers resembling those of the Standard Model fermion generations, but with a twist: their charge assignments are such that their electric charges and chiralities are flipped with respect to those of their corresponding Standard Model counterparts. This feature provides a way to obtain potential dark matter candidates and the interesting possibility of a Lepton-number-conserving dimension-five operator for Dirac neutrino masses. The model implications associated with electroweak precision parameters, flavor changing neutral currents, and diphoton rate contributions are briefly discussed. The general assumptions of this setup are also used to sketch a couple of variants of the model with peculiar features that could motivate further study.
|
Explanation of Reinforcement Learning Model in Dynamic Multi-Agent System ; Recently, there has been increasing interest in transparency and interpretability in Deep Reinforcement Learning (DRL) systems. Verbal explanations, as the most natural way of communication in our daily life, deserve more attention, since they allow users to gain a better understanding of the system, which ultimately could lead to a high level of trust and smooth collaboration. This paper reports novel work in generating verbal explanations for the behaviors of DRL agents. A rule-based model is designed to construct explanations using a series of rules which are predefined with prior knowledge. A learning model is then proposed to extend the implicit logic of generating verbal explanations to general situations by employing rule-based explanations as training data. The learning model is shown to have better flexibility and generalizability than the static rule-based model. The performance of both models is evaluated quantitatively through objective metrics. The results show that verbal explanations generated by both models improve subjective satisfaction of users towards the interpretability of DRL systems. Additionally, seven variants of the learning model are designed to illustrate the contribution of input channels, the attention mechanism, and the proposed encoder in improving the quality of verbal explanations.
|
A Neural Generative Model for Joint Learning Topics and Topic-Specific Word Embeddings ; We propose a novel generative model to explore both local and global context for joint learning topics and topic-specific word embeddings. In particular, we assume that global latent topics are shared across documents, a word is generated by a hidden semantic vector encoding its contextual semantic meaning, and its context words are generated conditional on both the hidden semantic vector and global latent topics. Topics are trained jointly with the word embeddings. The trained model maps words to topic-dependent embeddings, which naturally addresses the issue of word polysemy. Experimental results show that the proposed model outperforms the word-level embedding methods in both word similarity evaluation and word sense disambiguation. Furthermore, the model also extracts more coherent topics compared with existing neural topic models or other models for joint learning of topics and word embeddings. Finally, the model can be easily integrated with existing deep contextualized word embedding learning methods to further improve the performance of downstream tasks such as sentiment classification.
|
Quantifying Bounds of Model Gap for Synchronous Generators ; In practice, uncertainties in parameters and model structures always cause a gap between a model and the corresponding physical entity. Hence, to evaluate the performance of a model, the bounds of this gap must be assessed. In this paper, we propose a trajectory-sensitivity-based approach to quantify the bounds of the gap. The trajectory sensitivity is expressed as a linear time-varying system. We thus first derive several bounds for a general linear time-varying system in different scenarios. The derived bounds are then applied to obtain bounds of the model gap for generator plant models with different types of structural information, e.g., models of different orders. Case studies are carried out to show the efficacy of the bounds through synchronous generator models at different accuracy levels.
|
Non-Autoregressive Text Generation with Pretrained Language Models ; Non-autoregressive generation (NAG) has recently attracted great attention due to its fast inference speed. However, the generation quality of existing NAG models still lags behind their autoregressive counterparts. In this work, we show that BERT can be employed as the backbone of a NAG model to greatly improve performance. Additionally, we devise mechanisms to alleviate the two common problems of vanilla NAG models: the inflexibility of the pre-fixed output length and the conditional independence of individual token predictions. Lastly, to further increase the speed advantage of the proposed model, we propose a new decoding strategy, ratio-first, for applications where the output lengths can be approximately estimated beforehand. For a comprehensive evaluation, we test the proposed model on three text generation tasks, including text summarization, sentence compression and machine translation. Experimental results show that our model significantly outperforms existing non-autoregressive baselines and achieves competitive performance with many strong autoregressive models. In addition, we also conduct extensive analysis experiments to reveal the effect of each proposed component.
|
Calibrating generalized predictive distributions ; In prediction problems, it is common to model the data-generating process and then use a model-based procedure, such as a Bayesian predictive distribution, to quantify uncertainty about the next observation. However, if the posited model is misspecified, then its predictions may not be calibrated; that is, the predictive distribution's quantiles may not be nominal frequentist prediction upper limits, even asymptotically. Rather than abandoning the comfort of a model-based formulation for a more complicated non-model-based approach, here we propose a strategy in which the data itself helps determine whether the assumed model-based solution should be adjusted to account for model misspecification. This is achieved through a generalized Bayes formulation in which a learning rate parameter is tuned, via the proposed generalized predictive calibration (GPrC) algorithm, to make the predictive distribution calibrated, even under model misspecification. Extensive numerical experiments are presented, under a variety of settings, demonstrating the proposed GPrC algorithm's validity, efficiency, and robustness.
|
GA and ILS for optimizing the size of NFA models ; Grammatical inference consists in learning a formal grammar, as a set of rewrite rules or a finite state machine. We are concerned with learning Nondeterministic Finite Automata (NFA) of a given size from samples of positive and negative words. NFA can naturally be modeled in SAT. The standard model [1] being enormous, we also try a model based on prefixes [2] which generates smaller instances. We also propose a new model based on suffixes and a hybrid model based on prefixes and suffixes. We then focus on optimizing the size of the generated SAT instances issued from the hybrid models. We present two techniques to optimize this combination, one based on Iterated Local Search (ILS), the second one based on a Genetic Algorithm (GA). Optimizing the combination significantly reduces the SAT instances and their solving time, but at the cost of a longer generation time. We therefore study the balance between generation time and solving time thanks to some experimental comparisons, and we analyze our various model improvements.
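The ILS side of the optimization can be sketched abstractly: each sample gets a binary choice (encode via prefix or suffix), and the objective is the resulting instance size. The cost function below is synthetic (a real implementation would count the SAT clauses each choice generates); nothing here is taken from the paper.

```python
import random

rng = random.Random(0)
n_samples = 40

# Mock per-sample instance sizes: cost[i][0] if sample i is encoded via its
# prefix, cost[i][1] via its suffix. These numbers are purely synthetic.
cost = [(rng.randint(5, 50), rng.randint(5, 50)) for _ in range(n_samples)]

def size(choice):
    """Synthetic instance size: per-sample cost plus a small coupling term
    (to make the objective non-separable, as a real encoding would be)."""
    base = sum(cost[i][c] for i, c in enumerate(choice))
    clash = sum(3 for i in range(n_samples - 1) if choice[i] != choice[i + 1])
    return base + clash

def local_search(choice):
    """Flip single bits while doing so shrinks the instance."""
    improved = True
    while improved:
        improved = False
        for i in range(n_samples):
            flipped = choice[:i] + [1 - choice[i]] + choice[i + 1:]
            if size(flipped) < size(choice):
                choice, improved = flipped, True
    return choice

def ils(iterations=20):
    """Iterated Local Search: local search, perturb, keep the best."""
    best = local_search([0] * n_samples)
    for _ in range(iterations):
        perturbed = [1 - c if rng.random() < 0.1 else c for c in best]
        candidate = local_search(perturbed)
        if size(candidate) < size(best):
            best = candidate
    return best

best = ils()
print("optimized instance size:", size(best))
```

The GA variant would instead maintain a population of such bit vectors and recombine them; both search the same space of prefix/suffix combinations.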
|
Non-Randomness Stock Market Price Model ; A new model for stock market price analysis is proposed. It is suggested to view price as an everywhere-discontinuous function of time of bounded variation.
|
Interacting two-component fluid models with varying EoS parameter ; In this paper, we consider a Universe filled with a two-component fluid. We study two different models. In the first model we assume a barotropic fluid with a linear equation of state as the first component of the total fluid. In the second model we assume a Van der Waals gas as the first component of the total fluid. In both models, the second component is assumed to be generalized ghost dark energy. We also consider interaction between the components and discuss, numerically, cosmological quantities for two different parametrizations of the EoS which vary with time. We consider this as a toy model of our Universe. We fix the parameters of the model by using the generalized second law of thermodynamics. Comparing our results with some observational data suggests an interacting barotropic fluid with EoS parameter $\omega_t=\omega_0\cos(tH)+\omega_1 t\,\frac{\dot{H}}{H}$ and generalized ghost dark energy as an appropriate model to describe our Universe.
|
The method Model Elimination of D. W. Loveland explained ; We concisely present the Model Elimination method of D. W. Loveland. In particular, we explain and prove the correctness of the lemmas generated by this method.
|
Cortical Microcircuits from a Generative Vision Model ; Understanding the information processing roles of cortical circuits is an outstanding problem in neuroscience and artificial intelligence. The theoretical setting of Bayesian inference has been suggested as a framework for understanding cortical computation. Based on a recently published generative model for visual inference (George et al., 2017), we derive a family of anatomically instantiated and functional cortical circuit models. In contrast to simplistic models of Bayesian inference, the underlying generative model's representational choices are validated with real-world tasks that required efficient inference and strong generalization. The cortical circuit model is derived by systematically comparing the computational requirements of this model with known anatomical constraints. The derived model suggests precise functional roles for the feedforward, feedback and lateral connections observed in different laminae and columns, and assigns a computational role for the path through the thalamus.
|
Unsupervised Neural Word Segmentation for Chinese via Segmental Language Modeling ; Traditional approaches to unsupervised Chinese word segmentation (CWS) can be roughly classified into discriminative and generative models. The former use carefully designed goodness measures for candidate segmentations, while the latter focus on finding the segmentation with the highest generative probability. However, while discriminative models extend trivially to a neural version via neural language models, the neural extension of generative models is nontrivial. In this paper, we propose segmental language models (SLMs) for CWS. Our approach explicitly focuses on the segmental nature of Chinese while preserving several properties of language models. In SLMs, a context encoder encodes the previous context and a segment decoder generates each segment incrementally. To the best of our knowledge, we are the first to propose a neural model for unsupervised CWS, and we achieve performance competitive with state-of-the-art statistical models on four different datasets from the SIGHAN 2005 bakeoff.
|
Question Generation from Paragraphs A Tale of Two Hierarchical Models ; Automatic question generation from paragraphs is an important and challenging problem, particularly due to the long context of a paragraph. In this paper, we propose and study two hierarchical models for the task of question generation from paragraphs. Specifically, we propose (a) a novel hierarchical BiLSTM model with selective attention and (b) a novel hierarchical Transformer architecture, both of which learn hierarchical representations of paragraphs. We model a paragraph in terms of its constituent sentences, and a sentence in terms of its constituent words. While the introduction of the attention mechanism benefits the hierarchical BiLSTM model, the hierarchical Transformer, with its inherent attention and positional-encoding mechanisms, also performs better than the flat Transformer model. We conducted an empirical evaluation on the widely used SQuAD and MS MARCO datasets using standard metrics. The results demonstrate the overall effectiveness of the hierarchical models over their flat counterparts. Qualitatively, our hierarchical models generate fluent and relevant questions.
|
Doublet-Triplet Splitting in Fertile Left-Right Symmetric Heterotic String Vacua ; Classification of Left-Right Symmetric (LRS) heterotic-string vacua in the free fermionic formulation, using random generation of generalised GSO (GGSO) projection coefficients, produced phenomenologically viable models with probability $4\times 10^{-11}$. Extracting a substantial number of phenomenologically viable models therefore requires modifying the classification method. This is achieved by identifying phenomenologically amenable conditions on the GGSO projection coefficients that are randomly generated at the SO(10) level. Around each of these fertile cores we perform a complete LRS classification, generating viable models with probability $1.4\times 10^{-2}$, hence increasing the probability of generating phenomenologically viable models by nine orders of magnitude and producing some $1.4\times 10^{5}$ such models. In the process we identify a doublet-triplet selection mechanism that operates in twisted sectors of the string models that break the SO(10) symmetry to the Pati-Salam subgroup. This mechanism therefore also operates in free fermionic models with Pati-Salam and Standard-like Model SO(10) subgroups.
|
Black-Box Saliency Map Generation Using Bayesian Optimisation ; Saliency maps are often used in computer vision to provide intuitive interpretations of which input regions a model has used to produce a specific prediction. A number of approaches to saliency map generation are available, but most require access to model parameters. This work proposes an approach to saliency map generation for black-box models, where no access to model parameters is available, using a Bayesian optimisation sampling method. The approach aims to find the global salient image region responsible for a particular black-box model's prediction. This is achieved by a sampling-based approach to model perturbations that seeks to localise the salient regions of an image for the black-box model. Results show that the proposed approach to saliency map generation outperforms grid-based perturbation approaches and performs similarly to gradient-based approaches, which require access to model parameters.
|
Cross-Linguistic Syntactic Evaluation of Word Prediction Models ; A range of studies have concluded that neural word prediction models can distinguish grammatical from ungrammatical sentences with high accuracy. However, these studies are based primarily on monolingual evidence from English. To investigate how these models' ability to learn syntax varies across languages, we introduce CLAMS (Cross-Linguistic Assessment of Models on Syntax), a syntactic evaluation suite for monolingual and multilingual models. CLAMS includes subject-verb agreement challenge sets for English, French, German, Hebrew, and Russian, generated from grammars we develop. We use CLAMS to evaluate LSTM language models as well as monolingual and multilingual BERT. Across languages, monolingual LSTMs achieved high accuracy on dependencies without attractors and generally poor accuracy on agreement across object relative clauses. On other constructions, agreement accuracy was generally higher in languages with richer morphology. Multilingual models generally underperformed monolingual models. Multilingual BERT showed high syntactic accuracy on English but noticeable deficiencies in other languages.
|
Validation of Abstract Side-Channel Models for Computer Architectures ; Observational models make the analysis of information-flow properties tractable by providing an abstraction of side channels. We introduce a methodology and a tool, ScamV, to validate observational models for modern computer architectures. We combine symbolic execution, relational analysis, and different program generation techniques to generate experiments and validate the models. An experiment consists of a randomly generated program together with two inputs that are observationally equivalent according to the model under test. Validation is done by checking the indistinguishability of the two inputs on real hardware, by executing the program and analyzing the side channel. We have evaluated our framework by validating models that abstract the data-cache side channel of a Raspberry Pi 3 board with a processor implementing the ARMv8-A architecture. Our results show that ScamV can identify bugs in the implementation of the models and generate test programs that invalidate the models due to hidden microarchitectural behavior.
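As a toy analogue of such an observational model, a data-cache abstraction might expose only the sequence of cache-line indices touched by a memory trace; two inputs are then observationally equivalent when their traces induce the same observation. The 64-byte line size and list-based traces are illustrative assumptions, not ScamV's actual interface:

```python
def observation(address_trace, line_size=64):
    """Abstract data-cache observation: the model assumes an attacker
    sees only which cache lines are touched, not full addresses."""
    return [addr // line_size for addr in address_trace]

def observationally_equivalent(trace1, trace2, line_size=64):
    """Two inputs are equivalent under the model iff their traces
    produce identical observations."""
    return observation(trace1, line_size) == observation(trace2, line_size)

# Traces touching the same cache lines are indistinguishable to the model,
# even though the concrete addresses differ.
print(observationally_equivalent([0, 64, 130], [8, 100, 190]))
```

Validation then amounts to checking whether such model-equivalent input pairs really are indistinguishable on hardware.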
|
Speaker Sensitive Response Evaluation Model ; Automatic evaluation of open-domain dialogue response generation is very challenging because there are many appropriate responses for a given context. Existing evaluation models merely compare the generated response with the ground-truth response and rate many appropriate responses as inappropriate if they deviate from the ground truth. One approach to resolving this problem is to consider the similarity of the generated response to the conversational context. In this paper, we propose an automatic evaluation model based on that idea and learn the model parameters from an unlabeled conversation corpus. Our approach considers the speakers in defining the different levels of similar context. We use a Twitter conversation corpus that contains many speakers and conversations to test our evaluation model. Experiments show that our model outperforms other existing evaluation metrics in terms of high correlation with human annotation scores. We also show that our model trained on Twitter can be applied to movie dialogues without any additional training. We provide our code and the learned parameters so that they can be used for automatic evaluation of dialogue response generation models.
|
Using Human Psychophysics to Evaluate Generalization in Scene Text Recognition Models ; Scene text recognition models have advanced greatly in recent years. Inspired by human reading, we characterize two important scene text recognition models by measuring their domains, i.e., the range of stimulus images that they can read. The domain specifies the ability of readers to generalize to different word lengths, fonts, and amounts of occlusion. These metrics identify strengths and weaknesses of existing models. Relative to the attention-based (Attn) model, we discover that the connectionist temporal classification (CTC) model is more robust to noise and occlusion and better at generalizing to different word lengths. Further, we show that in both models, adding noise to training images yields better generalization to occlusion. These results demonstrate the value of testing models until they break, complementing the traditional data-science focus on optimizing performance.
|
Well-posedness and Stability Analysis of Two Classes of Generalized Stochastic Volatility Models ; In this paper, to address the shortage of theoretical support resulting from fast-growing quantitative financial modeling, we investigate two classes of generalized stochastic volatility models, establish the well-posedness of their strong solutions, and conduct a stability analysis with respect to small perturbations. In the first class, a multidimensional path-dependent process is driven by another multidimensional path-dependent process. The second class is a generalized one-dimensional stochastic volatility model with Hölder continuous coefficients. What greatly differentiates these two classes of models is that both the process and its correlated driving process have their own subdifferential operators, one special case of which is the general reflection operator for multi-sided barriers. Hence, the models investigated fully cover various newly explored variants of stochastic volatility models whose well-posedness was unknown, and naturally serve as a rigorous mathematical foundation for new stochastic volatility model development in terms of multi-dimensionality, path-dependence, and multi-sided barrier reflection.
|
Posterior Differential Regularization with f-divergence for Improving Model Robustness ; We address the problem of enhancing model robustness through regularization. Specifically, we focus on methods that regularize the difference between the model posteriors on clean and noisy inputs. Theoretically, we connect two recent methods, Jacobian Regularization and Virtual Adversarial Training, under this framework. Additionally, we generalize posterior differential regularization to the family of f-divergences and characterize the overall regularization framework in terms of the Jacobian matrix. Empirically, we systematically compare these regularizations and standard BERT training on a diverse set of tasks to provide a comprehensive profile of their effect on model in-domain and out-of-domain generalization. For both fully supervised and semi-supervised settings, our experiments show that regularizing the posterior differential with an f-divergence can result in substantially improved model robustness. In particular, with a proper f-divergence, a BERT-base model can achieve generalization comparable to its BERT-large counterpart for in-domain, adversarial, and domain-shift scenarios, indicating the great potential of the proposed framework for boosting model generalization for NLP models.
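The core idea can be sketched with KL divergence, one member of the f-divergence family: the regularizer penalizes how much the model posterior changes between a clean input and its noisy counterpart. This is a minimal sketch with discrete toy posteriors, not the paper's BERT training loop:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions; one member of the
    f-divergence family used to penalize posterior differences."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def posterior_differential_penalty(clean_posterior, noisy_posterior):
    """Regularizer added to the task loss: divergence between the model
    posterior on a clean input and on its noisy counterpart."""
    return kl_divergence(clean_posterior, noisy_posterior)

# Identical posteriors incur no penalty; diverging posteriors are penalized.
print(posterior_differential_penalty([0.7, 0.3], [0.7, 0.3]))
print(posterior_differential_penalty([0.9, 0.1], [0.5, 0.5]) > 0)
```

Swapping `kl_divergence` for another f-divergence (e.g., Jensen-Shannon) changes only the penalty's shape, which is the generalization the abstract describes.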
|
Decentralized Attribution of Generative Models ; Growing applications of generative models have led to new threats such as malicious impersonation and digital copyright infringement. One solution to these threats is model attribution, i.e., the identification of the user-end model from which the contents in question were generated. Existing studies showed the empirical feasibility of attribution through a centralized classifier trained on all user-end models. However, this approach is not scalable in practice as the number of models continues to grow. Nor does it provide an attributability guarantee. To this end, this paper studies decentralized attribution, which relies on binary classifiers associated with each user-end model. Each binary classifier is parameterized by a user-specific key and distinguishes its associated model distribution from the authentic data distribution. We develop sufficient conditions on the keys that guarantee an attributability lower bound. Our method is validated on the MNIST, CelebA, and FFHQ datasets. We also examine the trade-off between generation quality and robustness of attribution against adversarial post-processing.
|
Generative Temporal Difference Learning for Infinite-Horizon Prediction ; We introduce the gamma-model, a predictive model of environment dynamics with an infinite probabilistic horizon. Replacing standard single-step models with gamma-models leads to generalizations of the procedures central to model-based control, including model rollouts and model-based value estimation. The gamma-model, trained with a generative reinterpretation of temporal difference learning, is a natural continuous analogue of the successor representation and a hybrid between model-free and model-based mechanisms. Like a value function, it contains information about the long-term future; like a standard predictive model, it is independent of task reward. We instantiate the gamma-model as both a generative adversarial network and a normalizing flow, discuss how its training reflects an inescapable trade-off between training-time and testing-time compounding errors, and empirically investigate its utility for prediction and control.
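In tabular form the gamma-model's continuous analogue, the normalized successor representation, can be trained with a simple TD update: each row is nudged toward a mixture of the one-step outcome (weight 1-gamma) and the bootstrapped prediction from the next state (weight gamma). The 2-state chain below is a toy illustration under assumed values, not an experiment from the paper:

```python
def successor_td_update(M, s, s_next, gamma=0.9, lr=0.1):
    """One temporal-difference update of a tabular (normalized) successor
    representation: target = (1-gamma)*one_step + gamma*bootstrap."""
    for sp in range(len(M[s])):
        one_step = 1.0 if sp == s_next else 0.0
        target = (1 - gamma) * one_step + gamma * M[s_next][sp]
        M[s][sp] += lr * (target - M[s][sp])

# Toy deterministic chain 0 -> 1 -> 1 -> ...: row M[s] converges to the
# discounted distribution over future states visited after s.
M = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(1000):
    successor_td_update(M, 0, 1)
    successor_td_update(M, 1, 1)
```

The generative reinterpretation in the abstract replaces the tabular rows with a trained sampler (GAN or normalizing flow) over continuous states.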
|
Forecasting and Analyzing the Military Expenditure of India Using the Box-Jenkins ARIMA Model ; The application of statistical methodologies to economic data has paved the way towards designing efficient military management policies. India was ranked the third-largest military spender in the world for the year 2019. This study therefore utilizes the Box-Jenkins ARIMA model for time series forecasting of India's military expenditure in forthcoming years. The model was built on the SIPRI dataset of Indian military expenditure over the 60 years from 1960 to 2019. The trend was analysed to generate the model that best fitted the forecast. The study selects the model with the minimum AIC value and uses ADF (Augmented Dickey-Fuller) testing to transform the expenditure data into stationary form for model generation. It also examines the residual error distribution for efficient forecasting. This research proposes an ARIMA(0,1,6) model for optimal forecasting of India's military expenditure, with an accuracy of 95.7%. The model thus acts as a moving average (MA) model and predicts steady exponential growth of 36.94% in India's military expenditure by 2024.
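A minimal stdlib sketch of the d=1 ("I") step in an ARIMA(0,1,6) model: first differencing makes a trending series stationary (the property the ADF test checks), and forecast differences are cumulated back onto the original scale. A full fit would use a library such as statsmodels; the numbers below are illustrative, not SIPRI data:

```python
def difference(series):
    """First differencing (the d=1 step of ARIMA(0,1,6)): model the
    year-over-year changes instead of the non-stationary levels."""
    return [b - a for a, b in zip(series, series[1:])]

def invert_difference(last_value, diffs):
    """Cumulate (forecast) differences back onto the original scale."""
    out = []
    for d in diffs:
        last_value += d
        out.append(last_value)
    return out

# Round-trip check on a toy expenditure-like series:
series = [10.0, 12.0, 15.0, 19.0]
print(difference(series))
print(invert_difference(series[0], difference(series)))
```

The MA(6) part would then regress each difference on the six most recent forecast errors.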
|
A Simple Mathematical Model of Politics I ; Politics is everywhere. In this paper, I propose a simple model to demonstrate political behavior in human society.
|
The overtopos at a model ; With a model of a geometric theory in an arbitrary topos, we associate a site obtained by endowing a category of generalized elements of the model with a Grothendieck topology, which we call the antecedent topology. We then show that the associated sheaf topos, which we call the overtopos at the given model, admits a canonical totally connected morphism to the given base topos and satisfies a universal property generalizing that of the colocalization of a topos at a point. We first treat the case of the base topos of sets, where global elements suffice to describe our site of definition; in this context, we also introduce a geometric theory classified by the overtopos, whose models can be identified with model homomorphisms towards internalizations of the model. We then formulate and prove the general statement over an arbitrary topos, which involves the stack of generalized elements of the model. Lastly, we investigate the geometric and 2-categorical aspects of the overtopos construction, exhibiting it as a bilimit in the bicategory of Grothendieck toposes.
|
Generalized Circular One-Way Jumping Finite Automata ; A discontinuous model of computation called one-way jumping finite automata was defined by H. Chigahara et al. This model is a restricted version of jumping finite automata. One-way jumping finite automata change their states after deleting a letter of the input and jump only in one direction. By allowing a state to delete a subword instead of a single letter, we define a new model: generalized circular one-way jumping finite automata. These automata process an input in a circular manner. Like one-way jumping finite automata, generalized circular one-way jumping finite automata also jump only in one direction. We show that this newly defined model is more powerful than one-way jumping finite automata. We define new variants, right and left, of generalized circular one-way jumping finite automata and compare them. We also compare the new model with the Chomsky hierarchy. Finally, we explore closure properties of the model.
|
Cascaded Diffusion Models for High Fidelity Image Generation ; We show that cascaded diffusion models are capable of generating high-fidelity images on the class-conditional ImageNet generation benchmark, without any assistance from auxiliary image classifiers to boost sample quality. A cascaded diffusion model comprises a pipeline of multiple diffusion models that generate images of increasing resolution, beginning with a standard diffusion model at the lowest resolution, followed by one or more super-resolution diffusion models that successively upsample the image and add higher-resolution details. We find that the sample quality of a cascading pipeline relies crucially on conditioning augmentation, our proposed method of data augmentation of the lower-resolution conditioning inputs to the super-resolution models. Our experiments show that conditioning augmentation prevents compounding error during sampling in a cascaded model, helping us to train cascading pipelines achieving FID scores of 1.48 at 64x64, 3.52 at 128x128, and 4.88 at 256x256 resolutions, outperforming BigGAN-deep, and classification accuracy scores of 63.02% (top-1) and 84.06% (top-5) at 256x256, outperforming VQ-VAE-2.
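Conditioning augmentation can be sketched as corrupting the low-resolution conditioning signal before it enters a super-resolution stage, so the stage learns to tolerate imperfect inputs from the previous model in the cascade. Gaussian noise is one such corruption; the sigma value and flat-list "image" here are illustrative assumptions:

```python
import random

def conditioning_augmentation(low_res, sigma=0.1, seed=0):
    """Perturb the low-resolution conditioning input with Gaussian noise
    before feeding it to a super-resolution stage, so that train-time
    conditioning matches the imperfect samples seen at generation time.
    (sigma is a made-up value; pixels are a flat list for simplicity.)"""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sigma) for x in low_res]

# During cascade training, the super-resolution model would condition on
# the augmented version rather than the clean downsampled ground truth.
clean = [0.1, 0.5, 0.9]
print(conditioning_augmentation(clean, sigma=0.2, seed=1))
```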
|
Analysis of ODE2VAE with Examples ; Deep generative models aim to learn the underlying distributions that generate the observed data. Given that the generative distribution may be complex and intractable, deep latent variable models use probabilistic frameworks to learn more expressive joint probability distributions over the data and their low-dimensional hidden variables. Learning complex probability distributions over sequential data without any supervision is a difficult task for deep generative models. The Ordinary Differential Equation Variational Auto-Encoder (ODE2VAE) is a deep latent variable model that aims to learn complex distributions over high-dimensional sequential data and their low-dimensional representations. ODE2VAE infers continuous latent dynamics of the high-dimensional input in a low-dimensional hierarchical latent space. The hierarchical organization of the continuous latent space embeds a physics-guided inductive bias in the model. In this paper, we analyze the latent representations inferred by the ODE2VAE model on three different physical motion datasets: bouncing balls, projectile motion, and the simple pendulum. Through our experiments, we explore the effects of the physics-guided inductive bias of the ODE2VAE model on the learned dynamical latent representations. We show that the model is able to learn meaningful latent representations, to an extent, without any supervision.
|
NumGPT Improving Numeracy Ability of Generative Pretrained Models ; Existing generative pretrained language models (e.g., GPT) focus on modeling the language structure and semantics of general texts. However, these models do not consider the numerical properties of numbers and cannot perform robustly on numerical reasoning tasks (e.g., math word problems and measurement estimation). In this paper, we propose NumGPT, a generative pretrained model that explicitly models the numerical properties of numbers in texts. Specifically, it leverages a prototype-based numeral embedding to encode the mantissa of a number and an individual embedding to encode its exponent. A numeral-aware loss function is designed to integrate numerals into the pretraining objective of NumGPT. We conduct extensive experiments on four different datasets to evaluate the numeracy ability of NumGPT. The experimental results show that NumGPT outperforms baseline models (e.g., GPT and GPT with DICE) on a range of numerical reasoning tasks such as measurement estimation, number comparison, math word problems, and magnitude classification. Ablation studies are also conducted to evaluate the impact of pretraining and model hyperparameters on performance.
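The mantissa/exponent split that NumGPT embeds separately can be sketched as a plain base-10 decomposition; the embedding layers and prototype scheme themselves are omitted, and the zero-handling convention below is an assumption:

```python
import math

def mantissa_exponent(x):
    """Decompose a number into base-10 mantissa and exponent,
    x = mantissa * 10**exponent with 1 <= |mantissa| < 10 (x != 0).
    These are the two parts NumGPT encodes with separate embeddings."""
    if x == 0:
        return 0.0, 0  # convention assumed here, not from the paper
    exponent = math.floor(math.log10(abs(x)))
    mantissa = x / 10 ** exponent
    return mantissa, exponent

print(mantissa_exponent(1234.0))  # -> (1.234, 3)
```

Representing 1234 and 1.234 with the same mantissa but different exponents is what lets the model treat magnitude as an explicit feature.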
|
Adversarial robustness for latent models Revisiting the robust-standard accuracies tradeoff ; Over the past few years, several adversarial training methods have been proposed to improve the robustness of machine learning models against adversarial perturbations of the input. Despite remarkable progress in this regard, adversarial training is often observed to reduce the standard test accuracy. This phenomenon has intrigued the research community to investigate the potential tradeoff between standard accuracy (a.k.a. generalization) and robust accuracy (a.k.a. robust generalization) as two performance measures. In this paper, we revisit this tradeoff for latent models and argue that it is mitigated when the data enjoy a low-dimensional structure. In particular, we consider binary classification under two data generative models, namely the Gaussian mixture model and the generalized linear model, where the features lie on a low-dimensional manifold. We develop a theory to show that the low-dimensional manifold structure allows one to obtain models that are nearly optimal with respect to both the standard accuracy and the robust accuracy measures. We further corroborate our theory with several numerical experiments, including a Mixture of Factor Analyzers (MFA) model trained on the MNIST dataset.
|
Explaining Documents' Relevance to Search Queries ; We present GenEx, a generative model that explains search results to users beyond simply showing matches between query and document words. Adding GenEx explanations to search results greatly impacts user satisfaction and search performance. Search engines mostly provide document titles, URLs, and snippets for each result. Existing model-agnostic explanation methods similarly focus on word matching or content-based features. However, a recent user study shows that word-matching features are quite obvious to users and thus of little value. GenEx explains a search result by providing a terse description of the query aspect covered by that result. We cast the task as a sequence transduction problem and propose a novel model based on the Transformer architecture. To represent documents with respect to the given queries, and yet not generate the queries themselves as explanations, two query-attention layers and masked-query decoding are added to the Transformer architecture. The model is trained without any human-generated explanations; training data are instead constructed automatically to ensure a tolerable noise level and a generalizable learned model. Experimental evaluation shows that our explanation models significantly outperform the baseline models. Evaluation through user studies also demonstrates that our explanation model generates short yet useful explanations.
|
Exploration of the Parameter Space in Macroeconomic Agent-Based Models ; Agent-Based Models (ABMs) are computational scenario generators, which can be used to predict the possible future outcomes of the complex systems they represent. To better understand the robustness of these predictions, it is necessary to understand the full scope of the possible phenomena the model can generate. Most often, due to high-dimensional parameter spaces, this is a computationally expensive task. Inspired by ideas from systems biology, we show that for multiple macroeconomic models, including an agent-based model and several Dynamic Stochastic General Equilibrium (DSGE) models, there are only a few stiff parameter combinations that have strong effects, while the other, sloppy directions are irrelevant. This suggests an algorithm that efficiently explores the parameter space by moving primarily along the stiff directions. We apply our algorithm to a medium-sized agent-based model and show that it recovers all possible dynamics of the unemployment rate. Applying this method to agent-based models may lead to a more thorough and robust understanding of their features and provide enhanced parameter sensitivity analyses. Several promising paths for future research are discussed.
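The stiff/sloppy distinction can be illustrated with one-at-a-time finite-difference sensitivities: parameters whose small perturbations move the model output strongly are stiff, the rest sloppy. The paper works with full eigen-directions of a sensitivity matrix rather than single-parameter bumps, so this is only a crude sketch on a made-up toy model:

```python
def parameter_sensitivities(model, params, eps=1e-6):
    """Finite-difference sensitivity of the model output to each
    parameter; large values flag stiff directions worth exploring."""
    base = model(params)
    sens = []
    for i, p in enumerate(params):
        bumped = list(params)
        bumped[i] = p + eps
        sens.append(abs(model(bumped) - base) / eps)
    return sens

# Toy model: output depends strongly on params[0] (stiff) and only
# weakly on params[1] (sloppy).
toy = lambda p: 100.0 * p[0] + 0.01 * p[1]
print(parameter_sensitivities(toy, [1.0, 1.0]))
```

An exploration budget would then be allocated mostly to the high-sensitivity directions.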
|
Differentiable and Scalable Generative Adversarial Models for Data Imputation ; Data imputation has been extensively explored to solve the missing data problem. The dramatically increasing volume of incomplete data makes imputation models computationally infeasible in many real-life applications. In this paper, we propose an effective, scalable imputation system named SCIS to significantly speed up the training of differentiable generative adversarial imputation models under accuracy guarantees for large-scale incomplete data. SCIS consists of two modules: differentiable imputation modeling (DIM) and sample size estimation (SSE). DIM leverages a new masking Sinkhorn divergence function to make an arbitrary generative adversarial imputation model differentiable, while for such a differentiable imputation model, SSE can estimate an appropriate sample size to ensure the user-specified imputation accuracy of the final model. Extensive experiments on several real-life large-scale datasets demonstrate that our proposed system can accelerate generative adversarial model training by 7.1x. Using around 7.6% of the samples, SCIS yields accuracy competitive with state-of-the-art imputation methods in a much shorter computation time.
|
Fundamental limits to learning closed-form mathematical models from data ; Given a finite and noisy dataset generated with a closed-form mathematical model, when is it possible to learn the true generating model from the data alone? This is the question we investigate here. We show that this model-learning problem displays a transition from a low-noise phase, in which the true model can be learned, to a phase in which the observation noise is too high for the true model to be learned by any method. In both the low-noise and high-noise phases, probabilistic model selection leads to optimal generalization to unseen data. This is in contrast to standard machine learning approaches, including artificial neural networks, which in this particular problem are limited, in the low-noise phase, by their ability to interpolate. In the transition region between the learnable and unlearnable phases, generalization is hard for all approaches, including probabilistic model selection.
|
Conditional set generation using Seq2seq models ; Conditional set generation learns a mapping from an input sequence of tokens to a set. Several NLP tasks, such as entity typing and dialogue emotion tagging, are instances of set generation. Seq2Seq models, a popular choice for set generation, treat a set as a sequence and do not fully leverage its key properties, namely order-invariance and cardinality. We propose a novel algorithm for effectively sampling informative orders over the combinatorial space of label orders. We jointly model the set cardinality and output by prepending the set size, taking advantage of the autoregressive factorization used by Seq2Seq models. Our method is a model-independent data augmentation approach that endows any Seq2Seq model with signals of order-invariance and cardinality. Training a Seq2Seq model on this augmented data, without any additional annotations, yields an average relative improvement of 20% on four benchmark datasets across various models (BART, T5, and GPT-3). Code for SETAUG is available at https://setgen.structgen.com.
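The augmentation can be sketched as follows: each label set becomes several training sequences, each a sampled order of the labels with the set size prepended. Sampling orders uniformly here is a simplification of the paper's informative-order sampling, and the label names are hypothetical:

```python
import random

def augment_set_for_seq2seq(labels, k=2, seed=0):
    """Turn a label *set* into k target sequences for a Seq2Seq model:
    each sequence is one sampled ordering of the labels with the set
    cardinality prepended, exposing the model to order-invariance and
    set size. (Uniform order sampling is a simplification of SETAUG.)"""
    rng = random.Random(seed)
    out = []
    for _ in range(k):
        order = sorted(labels)
        rng.shuffle(order)
        out.append([str(len(order))] + order)
    return out

# Hypothetical entity-typing labels for one input mention:
print(augment_set_for_seq2seq({"person", "artist"}, k=2))
```

Because the augmentation only rewrites the targets, it plugs into any Seq2Seq model without architectural changes, as the abstract states.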
|
Learning to Generate Prompts for Dialogue Generation through Reinforcement Learning ; Much literature has shown that prompt-based learning is an efficient way to make use of large pretrained language models. Recent works also exhibit the possibility of steering a chatbot's output by plugging in an appropriate prompt. Gradient-based methods are often used to perturb the prompts; however, some language models are not even available to the public. In this work, we first explore the combination of prompting and reinforcement learning (RL) to steer models' generation without accessing any of the models' parameters. Second, to reduce the training effort and enhance generalizability to unseen tasks, we apply multi-task learning to make the model learn to generalize to new tasks better. The experimental results show that our proposed method can successfully control several state-of-the-art (SOTA) dialogue models without accessing their parameters. Furthermore, the model demonstrates a strong ability to quickly adapt to an unseen task in fewer steps than the baseline model.
|
DIRECTOR Generator-Classifiers For Supervised Language Modeling ; Current language models achieve low perplexity, but their generations still suffer from toxic responses, repetitiveness, and contradictions. The standard language modeling setup fails to address these issues. In this paper, we introduce a new architecture, Director, that consists of a unified generator-classifier with both a language modeling head and a classification head for each output token. Training is conducted jointly using both standard language modeling data and data labeled with desirable and undesirable sequences. Experiments in several settings show that the model has competitive training and decoding speed compared to standard language models while yielding superior results, alleviating known issues while maintaining generation quality. It also outperforms existing model-guiding approaches in terms of both accuracy and efficiency.
|
Model-Based Imitation Learning Using Entropy Regularization of Model and Policy ; Approaches based on generative adversarial networks for imitation learning are promising because they are sample-efficient in terms of expert demonstrations. However, training a generator requires many interactions with the actual environment because model-free reinforcement learning is adopted to update the policy. To improve the sample efficiency using model-based reinforcement learning, we propose Model-Based Entropy-Regularized Imitation Learning (MB-ERIL) under the entropy-regularized Markov decision process, to reduce the number of interactions with the actual environment. MB-ERIL uses two discriminators: a policy discriminator distinguishes the actions generated by the robot from expert ones, and a model discriminator distinguishes the counterfactual state transitions generated by the model from actual ones. We derive structured discriminators so that learning of the policy and the model is efficient. Computer simulations and real robot experiments show that MB-ERIL achieves competitive performance and significantly improves sample efficiency compared to baseline methods.
|
Adversarial Multi-Task Learning for Disentangling Timbre and Pitch in Singing Voice Synthesis ; Recently, deep learning-based generative models have been introduced to generate singing voices. One approach is to predict parametric vocoder features consisting of explicit speech parameters; this approach has the advantage that the meaning of each feature is explicitly distinguished. Another approach is to predict mel-spectrograms for a neural vocoder. However, parametric vocoders limit voice quality, and mel-spectrogram features are difficult to model because the timbre and pitch information are entangled. In this study, we propose a singing voice synthesis model with multi-task learning that uses both approaches: acoustic features for a parametric vocoder and mel-spectrograms for a neural vocoder. By using the parametric vocoder features as auxiliary features, the proposed model can efficiently disentangle and control the timbre and pitch components of the mel-spectrogram. Moreover, a generative adversarial network framework is applied to improve the quality of singing voices in a multi-singer model. Experimental results demonstrate that our proposed model generates more natural singing voices than the single-task models, while performing better than the conventional parametric-vocoder-based model.
|
Word Play for Playing Othello (Reverses) ; Language models like OpenAI's Generative Pre-Trained Transformers (GPT-2/3) capture the long-term correlations needed to generate text in a variety of domains such as language translators and recently in gameplay (chess, Go, and checkers). The present research applies both the larger GPT-3 and smaller GPT-2 language models to explore the complex strategies for the game of Othello (or Reverses). Given the game rules for rapid reversals of fortune, the language model not only represents a candidate predictor of the next move based on previous game moves but also avoids sparse rewards in gameplay. The language model automatically captures or emulates championship-level strategies. The fine-tuned GPT-2 model generates Othello games ranging from 13% to 71% completion, while the larger GPT-3 model reaches 41% of a complete game. Like previous work with chess and Go, these language models offer a novel way to generate plausible game archives, particularly for comparing opening moves across a larger sample than humanly possible to explore. A primary contribution of these models is to roughly double the previous record for player archives (120,000 human games over 45 years, 1977-2022), thus supplying the research community with more diverse and original strategies for sampling with other reinforcement learning techniques.
|
Advanced Conditional Variational Autoencoders ACVAE Towards interpreting opendomain conversation generation via disentangling latent feature representation ; Currently endtoend deep learning based opendomain dialogue systems remain black box models, making it easy to generate irrelevant contents with datadriven models. Specifically, latent variables are highly entangled with different semantics in the latent space due to the lack of priori knowledge to guide the training. To address this problem, this paper proposes to harness the generative model with a priori knowledge through a cognitive approach involving mesoscopic scale feature disentanglement. Particularly, the model integrates the macrolevel guidedcategory knowledge and microlevel opendomain dialogue data for the training, leveraging the priori knowledge into the latent space, which enables the model to disentangle the latent variables within the mesoscopic scale. Besides, we propose a new metric for opendomain dialogues, which can objectively evaluate the interpretability of the latent space distribution. Finally, we validate our model on different datasets and experimentally demonstrate that our model is able to generate higher quality and more interpretable dialogues than other models.
|
Relating Regularization and Generalization through the Intrinsic Dimension of Activations ; Given a pair of models with similar training set performance, it is natural to assume that the model that possesses simpler internal representations would exhibit better generalization. In this work, we provide empirical evidence for this intuition through an analysis of the intrinsic dimension (ID) of model activations, which can be thought of as the minimal number of factors of variation in the model's representation of the data. First, we show that common regularization techniques uniformly decrease the last-layer ID (LLID) of validation set activations for image classification models and show how this strongly affects generalization performance. We also investigate how excessive regularization decreases a model's ability to extract features from data in earlier layers, leading to a negative effect on validation accuracy even while LLID continues to decrease and training accuracy remains near-perfect. Finally, we examine the LLID over the course of training of models that exhibit grokking. We observe that well after training accuracy saturates, when models "grok" and validation accuracy suddenly improves from random to perfect, there is a co-occurrent sudden drop in LLID, thus providing more insight into the dynamics of sudden generalization.
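The abstract does not name a particular ID estimator; one standard choice consistent with the description ("minimal number of factors of variation") is TwoNN. Below is a minimal sketch assuming a TwoNN-style estimator; the dataset, function name, and constants are illustrative, not from the paper, and it runs on synthetic points rather than classifier activations:

```python
import numpy as np

def two_nn_id(points):
    """TwoNN intrinsic-dimension estimate: the ratio mu = r2/r1 of each
    point's second- to first-nearest-neighbor distance is Pareto-distributed
    with shape equal to the ID, giving the MLE  d = N / sum(log mu)."""
    X = np.asarray(points, dtype=float)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)   # exclude each point from its own neighbors
    dist.sort(axis=1)
    mu = dist[:, 1] / dist[:, 0]     # r2 / r1 for every point
    return len(X) / np.log(mu).sum()

rng = np.random.default_rng(0)
t = rng.uniform(size=400)
# one latent factor (t), five ambient coordinates -> ID should come out near 1
curve = np.stack([t, np.sin(3 * t), np.cos(3 * t), t ** 2, 2 * t], axis=1)
estimated_id = two_nn_id(curve)
```

In the paper's setting the same quantity would be computed on last-layer validation activations and tracked as regularization strength or training time varies.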
|
DAG DepthAware Guidance with Denoising Diffusion Probabilistic Models ; Generative models have recently undergone significant advancement due to the diffusion models. The success of these models can be often attributed to their use of guidance techniques, such as classifier or classifierfree guidance, which provide effective mechanisms to tradeoff between fidelity and diversity. However, these methods are not capable of guiding a generated image to be aware of its geometric configuration, e.g., depth, which hinders their application to areas that require a certain level of depth awareness. To address this limitation, we propose a novel guidance method for diffusion models that uses estimated depth information derived from the rich intermediate representations of diffusion models. We first present labelefficient depth estimation framework using internal representations of diffusion models. Subsequently, we propose the incorporation of two guidance techniques based on pseudolabeling and depthdomain diffusion prior during the sampling phase to selfcondition the generated image using the estimated depth map. Experiments and comprehensive ablation studies demonstrate the effectiveness of our method in guiding the diffusion models towards the generation of geometrically plausible images.
|
Extracting Training Data from Diffusion Models ; Image diffusion models such as DALL-E 2, Imagen, and Stable Diffusion have attracted significant attention due to their ability to generate high-quality synthetic images. In this work, we show that diffusion models memorize individual images from their training data and emit them at generation time. With a generate-and-filter pipeline, we extract over a thousand training examples from state-of-the-art models, ranging from photographs of individual people to trademarked company logos. We also train hundreds of diffusion models in various settings to analyze how different modeling and data decisions affect privacy. Overall, our results show that diffusion models are much less private than prior generative models such as GANs, and that mitigating these vulnerabilities may require new advances in privacy-preserving training.
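The generate-and-filter idea can be sketched abstractly: generate many samples, then flag those suspiciously close to some training point. Everything below (vector "images", the threshold tau, names) is a made-up toy, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
train = rng.normal(size=(200, 8))      # stand-in "training images" as vectors

# "generations": mostly novel samples, plus three near-copies of training rows
novel = rng.normal(size=(50, 8))
memorized = train[:3] + 0.01 * rng.normal(size=(3, 8))
generated = np.vstack([novel, memorized])

def flag_memorized(gen, train, tau=0.1):
    """Flag generations whose nearest-neighbor distance to the training
    set falls below tau -- the 'filter' half of generate-and-filter."""
    d = np.linalg.norm(gen[:, None, :] - train[None, :, :], axis=-1)
    return np.where(d.min(axis=1) < tau)[0]

flags = flag_memorized(generated, train)   # indices of the planted near-copies
```

For real images a perceptual or embedding-space distance would replace the raw L2 distance, but the filtering logic is the same.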
|
Language Model Behavior A Comprehensive Survey ; Transformer language models have received widespread public attention, yet their generated text is often surprising even to NLP researchers. In this survey, we discuss over 250 recent studies of English language model behavior before taskspecific finetuning. Language models possess basic capabilities in syntax, semantics, pragmatics, world knowledge, and reasoning, but these capabilities are sensitive to specific inputs and surface features. Despite dramatic increases in generated text quality as models scale to hundreds of billions of parameters, the models are still prone to unfactual responses, commonsense errors, memorized text, and social biases. Many of these weaknesses can be framed as overgeneralizations or undergeneralizations of learned patterns in text. We synthesize recent results to highlight what is currently known about large language model capabilities, thus providing a resource for applied work and for research in adjacent fields that use language models.
|
A Full Quantum Generative Adversarial Network Model for High Energy Physics Simulations ; The prospect of quantum computing with a potential exponential speedup compared to classical computing identifies it as a promising method in the search for alternative future High Energy Physics HEP simulation approaches. HEP simulations, such as employed at the Large Hadron Collider at CERN, are extraordinarily complex and require an immense amount of computing resources in hardware and time. For some HEP simulations, classical machine learning models have already been successfully developed and tested, resulting in several orders of magnitude speedup. In this research, we proceed to the next step and explore whether quantum computing can provide sufficient accuracy, and further improvements, suggesting it as an exciting direction of future investigations. With a small prototype model, we demonstrate a full quantum Generative Adversarial Network GAN model for generating downsized eightpixel calorimeter shower images. The advantage over previous quantum models is that the model generates real individual images containing pixel energy values instead of simple probability distributions averaged over a test sample. To complete the picture, the results of the full quantum GAN model are compared to hybrid quantumclassical models using a classical discriminator neural network.
|
Selective Amnesia A Continual Learning Approach to Forgetting in Deep Generative Models ; The recent proliferation of large-scale text-to-image models has led to growing concerns that such models may be misused to generate harmful, misleading, and inappropriate content. Motivated by this issue, we derive a technique inspired by continual learning to selectively forget concepts in pretrained deep generative models. Our method, dubbed Selective Amnesia, enables controllable forgetting where a user can specify how a concept should be forgotten. Selective Amnesia can be applied to conditional variational likelihood models, which encompass a variety of popular deep generative frameworks, including variational autoencoders and large-scale text-to-image diffusion models. Experiments across different models demonstrate that our approach induces forgetting on a variety of concepts, from entire classes in standard datasets to celebrity and nudity prompts in text-to-image models. Our code is publicly available at https://github.com/clear-nus/selective-amnesia.
|
Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers ; This paper explores the effectiveness of model-generated signals in improving zero-shot generalization of text-to-text Transformers such as T5. We study various designs to pretrain T5 using an auxiliary model to construct more challenging token replacements for the main model to denoise. Key aspects under study include the decoding target, the location of the RTD (replaced token detection) head, and the masking pattern. Based on these studies, we develop a new model, METRO-T0, which is pretrained using the redesigned ELECTRA-style pretraining strategies and then prompt-finetuned on a mixture of NLP tasks. METRO-T0 outperforms all similar-sized baselines on prompted NLP benchmarks, such as T0 Eval and MMLU, and rivals the state-of-the-art T0-11B model with only 8% of its parameters. Our analysis of the model's neural activation and parameter sensitivity reveals that the effectiveness of METRO-T0 stems from a more balanced contribution of parameters and better utilization of their capacity. The code and model checkpoints are available at https://github.com/gonglinyuan/metro_t0.
|
Matcher Segment Anything with One Shot Using All-Purpose Feature Matching ; Powered by large-scale pre-training, vision foundation models exhibit significant potential in open-world image understanding. Even though individual models have limited capabilities, combining multiple such models properly can lead to positive synergies and unleash their full potential. In this work, we present Matcher, which segments anything with one shot by integrating an all-purpose feature extraction model and a class-agnostic segmentation model. Naively connecting the models results in unsatisfying performance, e.g., the models tend to generate matching outliers and false-positive mask fragments. To address these issues, we design a bidirectional matching strategy for accurate cross-image semantic dense matching and a robust prompt sampler for mask proposal generation. In addition, we propose a novel instance-level matching strategy for controllable mask merging. The proposed Matcher method delivers impressive generalization performance across various segmentation tasks, all without training. For example, it achieves 52.7% mIoU on COCO-20i for one-shot semantic segmentation, surpassing the state-of-the-art specialist model by 1.6%. In addition, our visualization results show open-world generality and flexibility on images in the wild. The code shall be released at https://github.com/aim-uofa/Matcher.
|
Topology-aware Piecewise Linearization of the AC Power Flow through Generative Modeling ; Effective power flow modeling critically affects the ability to efficiently solve large-scale grid optimization problems, especially those with topology-related decision variables. In this work, we put forth a generative modeling approach to obtain a piecewise linear (PWL) approximation of AC power flow by training a simple neural network (NN) model from actual data samples. By using the ReLU activation, the NN models can produce a PWL mapping from the input voltage magnitudes and angles to the output power flow and injection. Our proposed generative PWL model uniquely accounts for the nonlinear and topology-related couplings of power flow models, and thus it can greatly improve the accuracy and consistency of output power variables. Most importantly, it enables reformulating the nonlinear power flow and line status-related constraints into mixed-integer linear ones, such that one can efficiently solve grid topology optimization tasks like the AC optimal transmission switching (OTS) problem. Numerical tests using the IEEE 14- and 118-bus test systems have demonstrated the modeling accuracy of the proposed PWL approximation using a generative approach, as well as its ability to enable competitive OTS solutions at very low computation cost.
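The property the abstract relies on is that a ReLU network is exactly piecewise linear: within one activation pattern the map is affine, which is what makes the power-flow constraints mixed-integer-linear representable. A toy sketch with hand-picked (not trained) weights, standing in for a mapping from (voltage magnitude, angle) to a flow quantity:

```python
import numpy as np

# hand-picked weights for a tiny ReLU net: R^2 -> R
W1 = np.array([[1.0, 0.5], [-0.3, 1.2], [0.8, -0.7]])
b1 = np.array([0.1, 0.2, -0.1])
W2 = np.array([0.6, -0.4, 1.1])
b2 = 0.05

def flow(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# two nearby operating points that share the same ReLU activation pattern;
# within that pattern the network is exactly affine, so the value at the
# midpoint equals the average of the endpoint values
a = np.array([1.0, 0.2])
b = np.array([1.02, 0.25])
mid = 0.5 * (a + b)
gap = abs(flow(mid) - 0.5 * (flow(a) + flow(b)))   # ~0 inside one linear region
```

Crossing a ReLU breakpoint switches to a different affine piece; in an OTS-style formulation each piece is selected by binary variables, giving a mixed-integer linear program.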
|
Hybrid RetrievalAugmented Generation for Realtime Composition Assistance ; Retrieval augmented models show promise in enhancing traditional language models by improving their contextual understanding, integrating private data, and reducing hallucination. However, the processing time required for retrieval augmented large language models poses a challenge when applying them to tasks that require realtime responses, such as composition assistance. To overcome this limitation, we propose the Hybrid RetrievalAugmented Generation HybridRAG framework that leverages a hybrid setting that combines both client and cloud models. HybridRAG incorporates retrievalaugmented memory generated asynchronously by a Large Language Model LLM in the cloud. By integrating this retrieval augmented memory, the client model acquires the capability to generate highly effective responses, benefiting from the LLM's capabilities. Furthermore, through asynchronous memory integration, the client model is capable of delivering realtime responses to user requests without the need to wait for memory synchronization from the cloud. Our experiments on Wikitext and Pile subsets show that HybridRAG achieves lower latency than a cloudbased retrievalaugmented LLM, while outperforming clientonly models in utility.
|
Channel Estimation for Quantized Systems based on Conditionally Gaussian Latent Models ; This work introduces a novel class of channel estimators tailored for coarse quantization systems. The proposed estimators are founded on conditionally Gaussian latent generative models, specifically Gaussian mixture models GMMs, mixture of factor analyzers MFAs, and variational autoencoders VAEs. These models effectively learn the unknown channel distribution inherent in radio propagation scenarios, providing valuable prior information. Conditioning on the latent variable of these generative models yields a locally Gaussian channel distribution, thus enabling the application of the wellknown Bussgang decomposition. By exploiting the resulting conditional Bussgang decomposition, we derive parameterized linear minimum mean square error MMSE estimators for the considered generative latent variable models. In this context, we explore leveraging modelbased structural features to reduce memory and complexity overhead associated with the proposed estimators. Furthermore, we devise necessary training adaptations, enabling direct learning of the generative models from quantized pilot observations without requiring groundtruth channel samples during the training phase. Through extensive simulations, we demonstrate the superiority of our introduced estimators over existing stateoftheart methods for coarsely quantized systems, as evidenced by significant improvements in mean square error MSE and achievable rate metrics.
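The core "conditionally Gaussian" idea can be sketched in one dimension, without the paper's quantization and Bussgang machinery (all numbers below are illustrative): conditioning on the mixture component makes the channel Gaussian, so the MMSE estimate is a responsibility-weighted sum of per-component linear estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
# two-component GMM prior for a scalar channel coefficient
pis, mus, sig2 = np.array([0.5, 0.5]), np.array([-2.0, 2.0]), 0.25
noise_var = 1.0

k = rng.choice(2, size=5000, p=pis)
h = rng.normal(mus[k], np.sqrt(sig2))                      # true channel draws
y = h + rng.normal(0.0, np.sqrt(noise_var), size=h.shape)  # noisy observations

def gmm_mmse(y):
    # responsibilities under the marginal  y | k ~ N(mu_k, sig2 + noise_var)
    var_y = sig2 + noise_var
    w = pis * np.exp(-((y[:, None] - mus) ** 2) / (2 * var_y))
    w /= w.sum(axis=1, keepdims=True)
    # conditioning on k makes the model Gaussian, so each component's MMSE
    # estimate is linear in y; mix the per-component estimates
    cond_mean = mus + (sig2 / var_y) * (y[:, None] - mus)
    return (w * cond_mean).sum(axis=1)

mse_gmm = np.mean((gmm_mmse(y) - h) ** 2)
mse_raw = np.mean((y - h) ** 2)   # baseline: use y itself as the estimate
```

The paper's estimators extend this pattern to vector channels, learned latent models (GMM/MFA/VAE), and coarsely quantized observations via the Bussgang decomposition.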
|
Natural Language Supervision for GeneralPurpose Audio Representations ; AudioLanguage models jointly learn multimodal text and audio representations that enable ZeroShot inference. Models rely on the encoders to create powerful representations of the input and generalize to multiple tasks ranging from sounds, music, and speech. Although models have achieved remarkable performance, there is still a performance gap with taskspecific models. In this paper, we propose a Contrastive LanguageAudio Pretraining model that is pretrained with a diverse collection of 4.6M audiotext pairs employing two innovative encoders for ZeroShot inference. To learn audio representations, we trained an audio encoder on 22 audio tasks, instead of the standard training of sound event classification. To learn language representations, we trained an autoregressive decoderonly model instead of the standard encoderonly models. Then, the audio and language representations are brought into a joint multimodal space using Contrastive Learning. We used our encoders to improve the downstream performance by a margin. We extensively evaluated the generalization of our representations on 26 downstream tasks, the largest in the literature. Our model achieves state of the art results in several tasks leading the way towards generalpurpose audio representations.
|
Multimodal Foundation Models From Specialists to GeneralPurpose Assistants ; This paper presents a comprehensive survey of the taxonomy and evolution of multimodal foundation models that demonstrate vision and visionlanguage capabilities, focusing on the transition from specialist models to generalpurpose assistants. The research landscape encompasses five core topics, categorized into two classes. i We start with a survey of wellestablished research areas multimodal foundation models pretrained for specific purposes, including two topics methods of learning vision backbones for visual understanding and texttoimage generation. ii Then, we present recent advances in exploratory, open research areas multimodal foundation models that aim to play the role of generalpurpose assistants, including three topics unified vision models inspired by large language models LLMs, endtoend training of multimodal LLMs, and chaining multimodal tools with LLMs. The target audiences of the paper are researchers, graduate students, and professionals in computer vision and visionlanguage multimodal communities who are eager to learn the basics and recent advances in multimodal foundation models.
|
Automated Enterprise Applications Generation from Requirements Model ; Enterprise applications can be automatically generated from a sophisticated OO design model based on a model-driven approach. The design model contains information about how to decompose the system into components, how to encapsulate the system operations into classes, and how the objects of classes collaborate to fulfill the functionality of the system operations. However, the effort to build the design model from a validated requirements model is not proportional to the return. In practice, it is very desirable to have an approach that can automatically generate standardized enterprise applications directly from the validated requirements models. In this paper, we propose an approach named RM2EA, which can reach this goal based on the contract-based requirements model. We demonstrate the proposed approach through 13 case studies. The evaluation result shows that the quality and efficiency of the generated applications are almost equal to those of applications implemented by developers: firstly, we demonstrate that a popular type of enterprise application (i.e., a Jakarta EE application) can be successfully generated by customizing and improving the set of rules; secondly, RM2EA can generate more readable and maintainable code; thirdly, the enterprise applications generated by RM2EA achieve similar performance in test results. Overall, the result is satisfactory, and the implementation of the proposed approach can be further enhanced and applied to software development in industry.
|
Cosmological perturbation theory in Generalized Einstein-Aether models ; We investigate the evolution of cosmological perturbations in models of dark energy described by a timelike unit-normalized vector field specified by a general function F(K), the so-called Generalized Einstein-Aether models. First we study the background dynamics of such models via a designer approach in an attempt to model this theory as dark energy. We find that only one specific form of this designer approach matches ΛCDM at background order, and we also obtain a differential equation which F(K) must satisfy for general wCDM cosmologies. We also present the equations of state for perturbations in Generalized Einstein-Aether models, which completely parametrize these models at the level of linear perturbations. A generic feature of modified gravity models is that they introduce new degrees of freedom. By fully eliminating these we are able to express the gauge-invariant entropy perturbation and the scalar, vector, and tensor anisotropic stresses in terms of the perturbed fluid variables and metric perturbations only. These can then be used to study the evolution of perturbations in the scalar, vector, and tensor sectors, and we use these to evolve the Newtonian gravitational potentials.
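For context, this model class is usually defined through an action of the following form (a hedged reconstruction following the standard Generalized Einstein-Aether literature, e.g. Zlosnik, Ferreira and Starkman; the abstract itself does not quote it):

```latex
S = \int d^{4}x\,\sqrt{-g}\left[\frac{R}{16\pi G}
      + \frac{M^{2}}{16\pi G}\,\mathcal{F}(\mathcal{K})
      + \frac{\lambda}{16\pi G}\left(A^{\mu}A_{\mu}+1\right)\right]
    + S_{\mathrm{matter}},
\qquad
\mathcal{K} = M^{-2}\,\mathcal{K}^{\alpha\beta}{}_{\gamma\sigma}\,
              \nabla_{\alpha}A^{\gamma}\,\nabla_{\beta}A^{\sigma},
\qquad
\mathcal{K}^{\alpha\beta}{}_{\gamma\sigma}
  = c_{1}\,g^{\alpha\beta}g_{\gamma\sigma}
  + c_{2}\,\delta^{\alpha}_{\gamma}\delta^{\beta}_{\sigma}
  + c_{3}\,\delta^{\alpha}_{\sigma}\delta^{\beta}_{\gamma}.
```

Here A^mu is the aether vector, λ is a Lagrange multiplier enforcing the timelike unit norm, and the free function F(K) is what the designer approach constrains.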
|
Identity Preserving Generative Adversarial Network for Cross-Domain Person Re-identification ; Person re-identification aims to retrieve pedestrian images, detected by pedestrian detectors, across non-overlapping camera views. Most existing person re-identification (re-ID) models often fail to generalize well from the source domain, where the models are trained, to a new target domain without labels, because of the bias between the source and target domains. This issue significantly limits the scalability and usability of the models in the real world. Given a labeled source training set and an unlabeled target training set, the aim of this paper is to improve the generalization ability of re-ID models to the target domain. To this end, we propose an image generative network named identity preserving generative adversarial network (IPGAN). The proposed method has two excellent properties: 1) only a single model is employed to translate the labeled images from the source domain to the target camera domains in an unsupervised manner; 2) the identity information of images from the source domain is preserved before and after translation. Furthermore, we propose the IBN-reID model for the person re-identification task. It has better generalization ability than baseline models, especially in cases without any domain adaptation. The IBN-reID model is trained on the translated images by supervised methods. Experimental results on Market-1501 and DukeMTMC-reID show that the images generated by IPGAN are more suitable for cross-domain person re-identification. Very competitive re-ID accuracy is achieved by our method.
|
Learning better generative models for dexterous, single-view grasping of novel objects ; This paper concerns the problem of how to learn to grasp dexterously, so as to be able to then grasp novel objects seen only from a single viewpoint. Recently, progress has been made in data-efficient learning of generative grasp models which transfer well to novel objects. These generative grasp models are learned from demonstration (LfD). One weakness is that, as this paper shall show, grasp transfer under challenging single-view conditions is unreliable. Second, the number of generative model elements rises linearly in the number of training examples. This, in turn, limits the potential of these generative models for generalisation and continual improvement. In this paper, it is shown how to address these problems. Several technical contributions are made: (i) a view-based model of a grasp; (ii) a method for combining and compressing multiple grasp models; (iii) a new way of evaluating contacts that is used both to generate and to score grasps. These, together, improve grasp performance and reduce the number of models learned for grasp transfer. These advances, in turn, also allow the introduction of autonomous training, in which the robot learns from self-generated grasps. Evaluation on a challenging test set shows that, with innovations (i)-(iii) deployed, grasp transfer success rises from 55.1% to 81.6%. By adding autonomous training this rises to 87.8%. These differences are statistically significant. In total, across all experiments, 539 test grasps were executed on real objects.
|
Domain Generalizer A Fewshot Meta Learning Framework for Domain Generalization in Medical Imaging ; Deep learning models perform best when tested on target test data domains whose distribution is similar to the set of source train domains. However, model generalization can be hindered when there is significant difference in the underlying statistics between the target and source domains. In this work, we adapt a domain generalization method based on a modelagnostic metalearning framework to biomedical imaging. The method learns a domainagnostic feature representation to improve generalization of models to the unseen test distribution. The method can be used for any imaging task, as it does not depend on the underlying model architecture. We validate the approach through a computed tomography CT vertebrae segmentation task across healthy and pathological cases on three datasets. Next, we employ fewshot learning, i.e. training the generalized model using very few examples from the unseen domain, to quickly adapt the model to new unseen data distribution. Our results suggest that the method could help generalize models across different medical centers, image acquisition protocols, anatomies, different regions in a given scan, healthy and diseased populations across varied imaging modalities.
|
Material absorptionbased carrier generation model for modeling optoelectronic devices ; The generation rate of photocarriers in optoelectronic materials is commonly calculated using the Poynting vector in the frequency domain. In timedomain approaches where the nonlinear coupling between electromagnetic EM waves and photocarriers can be accounted for, the Poynting vector model is no longer applicable. One main reason is that the photocurrent radiates lowfrequency EM waves out of the spectrum of the source, e.g., terahertz THz waves are generated in THz photoconductive antennas. These frequency components do not contribute to the photocarrier generation since the corresponding photon energy is smaller than the optoelectronic material's bandgap energy. However, the instantaneous Poynting vector does not distinguish the power flux of different frequency components. This work proposes a material absorptionbased model capable of calculating the carrier generation rate accurately in the time domain. Using the Lorentz dispersion model with poles reside in the optical frequency region, the instantaneous optical absorption, which corresponds to the power dissipation in the polarization, is calculated and used to calculate the generation rate. The Lorentz model is formulated with an auxiliary differential equation method that updates the polarization current density, from which the absorbed optical power corresponding to each Lorentz pole is directly calculated in the time domain. Examples show that the proposed model is more accurate than the Poynting vectorbased model and is stable even when the generated lowfrequency component is strong.
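The time-domain model described above can be summarized in standard Lorentz-dispersion notation (a hedged reconstruction using generic textbook forms, not symbols copied from the paper): each optical Lorentz pole k contributes a polarization P_k obeying an auxiliary differential equation, the power dissipated in those poles is accumulated, and the generation rate divides that absorbed power by the optical photon energy:

```latex
\frac{\partial^{2}\mathbf{P}_{k}}{\partial t^{2}}
  + \gamma_{k}\,\frac{\partial \mathbf{P}_{k}}{\partial t}
  + \omega_{k}^{2}\,\mathbf{P}_{k}
  = \varepsilon_{0}\,\Delta\varepsilon_{k}\,\omega_{k}^{2}\,\mathbf{E},
\qquad
p_{\mathrm{abs}}(t) = \mathbf{E}\cdot\sum_{k\,\in\,\mathrm{optical}}
  \frac{\partial \mathbf{P}_{k}}{\partial t},
\qquad
G(t) = \frac{p_{\mathrm{abs}}(t)}{\hbar\,\omega_{\mathrm{opt}}}.
```

Because only poles whose resonance lies in the optical band enter the sum, low-frequency (e.g., THz) field components dissipate no power here and correctly contribute nothing to G, unlike an instantaneous Poynting-vector accounting.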
|
Texttoimage Synthesis via Symmetrical Distillation Networks ; Texttoimage synthesis aims to automatically generate images according to text descriptions given by users, which is a highly challenging task. The main issues of texttoimage synthesis lie in two gaps the heterogeneous and homogeneous gaps. The heterogeneous gap is between the highlevel concepts of text descriptions and the pixellevel contents of images, while the homogeneous gap exists between synthetic image distributions and real image distributions. For addressing these problems, we exploit the excellent capability of generic discriminative models e.g. VGG19, which can guide the training process of a new generative model on multiple levels to bridge the two gaps. The highlevel representations can teach the generative model to extract necessary visual information from text descriptions, which can bridge the heterogeneous gap. The midlevel and lowlevel representations can lead it to learn structures and details of images respectively, which relieves the homogeneous gap. Therefore, we propose Symmetrical Distillation Networks SDN composed of a source discriminative model as teacher and a target generative model as student. The target generative model has a symmetrical structure with the source discriminative model, in order to transfer hierarchical knowledge accessibly. Moreover, we decompose the training process into two stages with different distillation paradigms for promoting the performance of the target generative model. Experiments on two widelyused datasets are conducted to verify the effectiveness of our proposed SDN.
|
Generation of 3D Brain MRI Using AutoEncoding Generative Adversarial Networks ; As deep learning is showing unprecedented success in medical image analysis tasks, the lack of sufficient medical data is emerging as a critical problem. While recent attempts to solve the limited data problem using Generative Adversarial Networks GAN have been successful in generating realistic images with diversity, most of them are based on imagetoimage translation and thus require extensive datasets from different domains. Here, we propose a novel model that can successfully generate 3D brain MRI data from random vectors by learning the data distribution. Our 3D GAN model solves both image blurriness and mode collapse problems by leveraging alphaGAN that combines the advantages of Variational AutoEncoder VAE and GAN with an additional code discriminator network. We also use the Wasserstein GAN with Gradient Penalty WGANGP loss to lower the training instability. To demonstrate the effectiveness of our model, we generate new images of normal brain MRI and show that our model outperforms baseline models in both quantitative and qualitative measurements. We also train the model to synthesize brain disorder MRI data to demonstrate the wide applicability of our model. Our results suggest that the proposed model can successfully generate various types and modalities of 3D whole brain volumes from a small set of training data.
|
Plug and Play Language Models A Simple Approach to Controlled Text Generation ; Large transformerbased language models LMs trained on huge text corpora have shown unparalleled generation capabilities. However, controlling attributes of the generated language e.g. switching topic or sentiment is difficult without modifying the model architecture or finetuning on attributespecific data and entailing the significant cost of retraining. We propose a simple alternative the Plug and Play Language Model PPLM for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. In the canonical scenario we present, the attribute models are simple classifiers consisting of a userspecified bag of words or a single learned layer with 100,000 times fewer parameters than the LM. Sampling entails a forward and backward pass in which gradients from the attribute model push the LM's hidden activations and thus guide the generation. Model samples demonstrate control over a range of topics and sentiment styles, and extensive automated and human annotated evaluations show attribute alignment and fluency. PPLMs are flexible in that any combination of differentiable attribute models may be used to steer text generation, which will allow for diverse and creative applications beyond the examples given in this paper.
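The PPLM update can be caricatured in a few lines: treat the output head as frozen and push a hidden activation along the gradient of the log-probability mass the model assigns to an attribute bag-of-words. The vocabulary size, dimensions, and token ids below are arbitrary toys, and the analytic gradient replaces backprop through a real LM:

```python
import numpy as np

rng = np.random.default_rng(3)
V, d = 20, 8                     # toy vocabulary size and hidden width
W = rng.normal(size=(V, d))      # frozen LM output head (logits = W @ h)
h = rng.normal(size=d)           # hidden activation to be steered
bag = [2, 5, 11]                 # token ids of the attribute bag-of-words

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def bag_logprob(h):
    return np.log(softmax(W @ h)[bag].sum())

before = bag_logprob(h)
for _ in range(30):
    p = softmax(W @ h)
    q = np.zeros(V)
    q[bag] = p[bag] / p[bag].sum()   # bag distribution, renormalized
    h += 0.1 * W.T @ (q - p)         # analytic grad of log sum_{w in bag} p_w
after = bag_logprob(h)               # attribute mass should have increased
```

In the full method this push is applied to the transformer's key-value history at each decoding step, balanced against a KL term that keeps generations fluent.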
|
Variational Hierarchical Dialog Autoencoder for Dialog State Tracking Data Augmentation ; Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models complement the training dataset, benefits NLP tasks. In this work, we extend this approach to the task of dialog state tracking for goal-oriented dialogs. Due to the inherent hierarchical structure of goal-oriented dialogs over utterances and related annotations, the deep generative model must be capable of capturing the coherence among different hierarchies and types of dialog features. We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling the complete aspects of goal-oriented dialogs, including linguistic features and underlying structured annotations, namely speaker information, dialog acts, and goals. The proposed architecture is designed to model each aspect of goal-oriented dialogs using interconnected latent variables and learns to generate coherent goal-oriented dialogs from the latent spaces. To overcome issues that arise from training complex variational models, we propose appropriate training strategies. Experiments on various dialog datasets show that our model improves the downstream dialog trackers' robustness via generative data augmentation. We also discover additional benefits of our unified approach to modeling goal-oriented dialogs: dialog response generation and user simulation, where our model outperforms previous strong baselines.
|
Responsible Disclosure of Generative Models Using Scalable Fingerprinting ; Over the past years, deep generative models have achieved a new level of performance. Generated data has become difficult, if not impossible, to distinguish from real data. While there are plenty of use cases that benefit from this technology, there are also strong concerns about how this new technology can be misused to generate deep fakes and enable misinformation at scale. Unfortunately, current deep fake detection methods are not sustainable, as the gap between real and fake continues to close. In contrast, our work enables a responsible disclosure of such state-of-the-art generative models, which allows model inventors to fingerprint their models, so that the generated samples containing a fingerprint can be accurately detected and attributed to a source. Our technique achieves this by an efficient and scalable ad hoc generation of a large population of models with distinct fingerprints. Our recommended operation point uses a 128-bit fingerprint, which in principle results in more than 10^38 identifiable models. Experiments show that our method fulfills key properties of a fingerprinting mechanism and achieves effectiveness in deep fake detection and attribution. Code and models are available at https://github.com/ningyu1991/ScalableGANFingerprints .
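The "more than 10^38 identifiable models" figure follows from simple counting: a 128-bit fingerprint admits 2^128 distinct codes. A one-line check:

```python
# Capacity of a 128-bit fingerprint space: 2**128 distinct codes.
capacity = 2 ** 128
print(capacity)               # 340282366920938463463374607431768211456
print(capacity > 10 ** 38)    # True: roughly 3.4 * 10^38 identifiable models
```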
|
Generalized string-net models: A thorough exposition ; We describe how to construct generalized string-net models, a class of exactly solvable lattice models that realize a large family of 2D topologically ordered phases of matter. The ground states of these models can be thought of as superpositions of different string-net configurations, where each string-net configuration is a trivalent graph with labeled edges, drawn in the xy plane. What makes this construction more general than the original string-net construction is that, unlike the original construction, tetrahedral reflection symmetry is not assumed, nor is it assumed that the ground state wave function Phi is isotropic: i.e. in the generalized setup, two string-net configurations X1, X2 that can be continuously deformed into one another can have different ground state amplitudes, Phi(X1) ≠ Phi(X2). As a result, generalized string-net models can realize topological phases that are inaccessible to the original construction. In this paper, we provide a more detailed discussion of ground state wave functions, Hamiltonians, and minimal self-consistency conditions for generalized string-net models than what exists in the previous literature. We also show how to construct string operators that create anyon excitations in these models, and we show how to compute the braiding statistics of these excitations. Finally, we derive necessary and sufficient conditions for generalized string-net models to have isotropic ground state wave functions on the plane or the sphere, a property that may be useful in some applications.
|
Deep learning-based synthetic CT generation from MR images: comparison of generative adversarial and residual neural networks ; Currently, MRI-only radiotherapy (RT) eliminates some of the concerns about using CT images in RT chains, such as the registration of MR images to a separate CT, extra dose delivery, and the additional cost of repeated imaging. However, one remaining challenge is that the signal intensities of MRI are not related to the attenuation coefficient of the biological tissue. This work compares the performance of two state-of-the-art deep learning models, a generative adversarial network (GAN) and a residual network (ResNet), for synthetic CT (sCT) generation from MR images. The brain MR and CT images of 86 participants were analyzed. GAN and ResNet models were implemented for the generation of synthetic CTs from the 3D T1-weighted MR images using a six-fold cross-validation scheme. The resulting sCTs were compared, considering the CT images as a reference, using standard metrics such as the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM). Overall, the ResNet model exhibited higher accuracy in relation to the delineation of brain tissues. The ResNet model estimated the CT values for the entire head region with an MAE of 114.1 HU, compared to an MAE of 10.9 HU obtained from the GAN model. Moreover, both models offered comparable SSIM and PSNR values, although the ResNet method exhibited a slightly superior performance over the GAN method. We compared two state-of-the-art deep learning models for the task of MR-based sCT generation. The ResNet model exhibited superior results, thus demonstrating its potential to be used for the challenge of synthetic CT generation in PET/MR AC and MR-only RT planning.
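The three metrics used in this comparison are straightforward to compute. Below is a minimal NumPy sketch; note that the SSIM here is a single-window global variant rather than the usual sliding-window implementation, and the HU data range and noise level in the demo are assumed values, not the paper's:

```python
import numpy as np

def mae(ct, sct):
    """Mean absolute error between reference CT and synthetic CT."""
    return np.mean(np.abs(ct - sct))

def psnr(ct, sct, data_range):
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    mse = np.mean((ct - sct) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range, k1=0.01, k2=0.03):
    """Structural similarity computed over the whole image (no sliding window)."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
ct = rng.uniform(-1000, 2000, size=(64, 64))     # HU-like reference values
sct = ct + rng.normal(0, 20, size=ct.shape)      # "synthetic CT" with noise
print(mae(ct, sct), psnr(ct, sct, 3000), ssim_global(ct, sct, 3000))
```

Published comparisons typically use the windowed SSIM (e.g. scikit-image's implementation); the global form above only conveys the formula.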
|
Sketch and Customize: A Counterfactual Story Generator ; Recent text generation models easily generate relevant and fluent text for a given input, but lack causal reasoning ability when some parts of the given text are changed. Counterfactual story rewriting is a recently proposed task to test the causal reasoning ability of text generation models, which requires a model to predict the corresponding story ending when the condition is modified to a counterfactual one. Previous works have shown that the traditional sequence-to-sequence model cannot handle this problem well, as it often captures spurious correlations between the original and counterfactual endings, instead of the causal relations between conditions and endings. To address this issue, we propose a sketch-and-customize generation model guided by the causality implicated in the conditions and endings. In the sketch stage, a skeleton is extracted from the original ending by removing words that conflict with the counterfactual condition. In the customize stage, a generation model is used to fill proper words into the skeleton under the guidance of the counterfactual condition. In this way, the obtained counterfactual ending is both relevant to the original ending and consistent with the counterfactual condition. Experimental results show that the proposed model generates much better endings than the traditional sequence-to-sequence model.
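A toy illustration of the sketch stage, assuming the set of conflicting words is already known (the paper derives it from the counterfactual condition; the `<blank>` token and word set below are invented for this sketch):

```python
def extract_skeleton(original_ending, conflict_words):
    """Sketch stage: replace words of the original ending that conflict
    with the counterfactual condition by blanks, keeping the rest as a
    skeleton for the customize stage to fill in."""
    return [w if w.lower() not in conflict_words else "<blank>"
            for w in original_ending.split()]

ending = "She took the bus to the beach"
print(extract_skeleton(ending, {"bus", "beach"}))
# -> ['She', 'took', 'the', '<blank>', 'to', '<blank>', '<blank>'] ... see test
```

The customize stage would then condition a generator on both the skeleton and the counterfactual condition to fill each `<blank>`.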
|
CM3: A Causal Masked Multimodal Model of the Internet ; We introduce CM3, a family of causally masked generative models trained over a large corpus of structured multimodal documents that can contain both text and image tokens. Our new causally masked approach generates tokens left to right while also masking out a small number of long token spans that are generated at the end of the string, instead of at their original positions. The causal masking objective provides a type of hybrid of the more common causal and masked language models, enabling full generative modeling while also providing bidirectional context when generating the masked spans. We train causally masked language-image models on large-scale web and Wikipedia articles, where each document contains all of the text, hypertext markup, hyperlinks, and image tokens (from a VQ-VAE-GAN), provided in the order they appear in the original HTML source before masking. The resulting CM3 models can generate rich structured, multimodal outputs while conditioning on arbitrary masked document contexts, and thereby implicitly learn a wide range of text, image, and cross-modal tasks. They can be prompted to recover, in a zero-shot fashion, the functionality of models such as DALL-E, GENRE, and HTLM. We set a new state of the art in zero-shot summarization, entity linking, and entity disambiguation while maintaining competitive performance in the fine-tuning setting. We can generate images unconditionally, conditioned on text (like DALL-E), and do captioning, all in a zero-shot setting with a single model.
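The move-spans-to-the-end transform at the heart of causal masking can be sketched in a few lines. The sentinel format and token strings below are illustrative, not CM3's actual vocabulary:

```python
SENTINEL = "<mask:{}>"

def causal_mask(tokens, spans):
    """CM3-style transform: cut the given (start, end) spans out of the
    sequence, leave a sentinel in each gap, and append the cut spans at
    the end, so a left-to-right model sees them with full left context."""
    out, tail = [], []
    prev = 0
    for i, (start, end) in enumerate(sorted(spans)):
        out.extend(tokens[prev:start])
        out.append(SENTINEL.format(i))
        tail.extend([SENTINEL.format(i)] + tokens[start:end])
        prev = end
    out.extend(tokens[prev:])
    return out + tail

doc = ["<img>", "tok1", "tok2", "</img>", "some", "caption", "text"]
print(causal_mask(doc, [(4, 6)]))
```

At inference time, placing a sentinel where content is wanted (e.g. after an `<img>` tag, or around an entity mention) prompts the model to infill it, which is how the DALL-E- and GENRE-like behaviors are recovered.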
|
Personalized Filled-pause Generation with Group-wise Prediction Models ; In this paper, we propose a method to generate personalized filled pauses (FPs) with group-wise prediction models. Compared with fluent text generation, disfluent text generation has not been widely explored. To generate more human-like texts, we addressed disfluent text generation. The usage of disfluencies, such as FPs, rephrases, and word fragments, differs from speaker to speaker, and thus, the generation of personalized FPs is required. However, it is difficult to predict them because of the sparsity of positions and the frequency difference between more and less frequently used FPs. Moreover, it is sometimes difficult to adapt FP prediction models to each speaker because of the large variation of the tendency within each speaker. To address these issues, we propose a method to build group-dependent prediction models by grouping speakers on the basis of their tendency to use FPs. This method does not require a large amount of data and time to train each speaker model. We further introduce a loss function and a word embedding model suitable for FP prediction. Our experimental results demonstrate that group-dependent models can predict FPs with higher scores than a non-personalized one, and that the introduced loss function and word embedding model improve the prediction performance.
|
A 3D Generative Model for Structure-Based Drug Design ; We study a fundamental problem in structure-based drug design: generating molecules that bind to specific protein binding sites. While we have witnessed the great success of deep generative models in drug design, the existing methods are mostly string-based or graph-based. They are limited by the lack of spatial information and thus cannot be applied to structure-based design tasks. In particular, such models have little or no knowledge of how molecules interact with their target proteins in 3D space. In this paper, we propose a 3D generative model that generates molecules given a designated 3D protein binding site. Specifically, given a binding site as the 3D context, our model estimates the probability density of atoms' occurrences in 3D space: positions that are more likely to have atoms will be assigned higher probability. To generate 3D molecules, we propose an autoregressive sampling scheme: atoms are sampled sequentially from the learned distribution until there is no room for new atoms. Combined with this sampling scheme, our model can generate valid and diverse molecules, and is applicable to various structure-based molecular design tasks such as molecule sampling and linker design. Experimental results demonstrate that molecules sampled from our model exhibit high binding affinity to specific targets and good drug properties such as drug-likeness, even though the model is not explicitly optimized for them.
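The "sample until there is no room" loop can be sketched with a stand-in occupancy model. The Gaussian-like density, exclusion radius, and stopping threshold below are invented for illustration; the real model is a learned network conditioned on the binding site and the atoms placed so far:

```python
import numpy as np

def sample_molecule(density_fn, grid, threshold=0.5, max_atoms=30):
    """Autoregressive sampling sketch: repeatedly place an atom at the
    grid point with the highest predicted occupancy given the atoms so
    far; stop when no point clears the threshold (no room for new atoms)."""
    atoms = []
    for _ in range(max_atoms):
        probs = density_fn(grid, atoms)
        best = int(np.argmax(probs))
        if probs[best] < threshold:
            break
        atoms.append(grid[best])
    return atoms

def toy_density(grid, atoms):
    """Stand-in for the learned model: occupancy decays with distance from
    the origin and is zeroed within 1.0 of any already-placed atom."""
    p = np.exp(-np.linalg.norm(grid, axis=1))
    for a in atoms:
        p[np.linalg.norm(grid - a, axis=1) < 1.0] = 0.0
    return p

grid = np.stack(np.meshgrid(*[np.linspace(-2, 2, 9)] * 3), axis=-1).reshape(-1, 3)
mol = sample_molecule(toy_density, grid, threshold=0.3)
print(len(mol))
```

Conditioning the density on placed atoms is what makes the scheme autoregressive: each placement reshapes the distribution the next atom is drawn from.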
|
Summarize and Generate to Back-translate: Unsupervised Translation of Programming Languages ; Back-translation is widely known for its effectiveness in neural machine translation when there is little to no parallel data. In this approach, a source-to-target model is coupled with a target-to-source model trained in parallel. The target-to-source model generates noisy sources, while the source-to-target model is trained to reconstruct the targets, and vice versa. Recent developments of multilingual pretrained sequence-to-sequence models for programming languages have been very effective for a broad spectrum of downstream software engineering tasks. Hence, training them to build programming language translation systems via back-translation is compelling. However, these models cannot be further trained via back-translation since they learn to output sequences in the same language as the inputs during pretraining. As an alternative, we propose performing back-translation via code summarization and generation. In code summarization, a model learns to generate natural language (NL) summaries given code snippets. In code generation, the model learns to do the opposite. Therefore, target-to-source generation in back-translation can be viewed as target-to-NL-to-source generation. We show that our proposed approach performs competitively with state-of-the-art methods. We have made the code publicly available.
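The target-to-NL-to-source round described here reduces to a short loop once the three components exist. The lambdas below are placeholders for the pretrained summarization, generation, and training models, not real implementations:

```python
def backtranslate_step(code_batch, summarize, generate, train_step):
    """One round of summarize-and-generate back-translation: target-side
    code is summarized to natural language, a generator produces noisy
    source-language code from that summary, and the translation model is
    trained to reconstruct the original target from the noisy source."""
    pairs = []
    for tgt_code in code_batch:
        nl = summarize(tgt_code)                 # target code -> NL summary
        noisy_src = generate(nl, lang="source")  # NL -> noisy source code
        pairs.append((noisy_src, tgt_code))
    return train_step(pairs)                     # supervised update on pairs

# Toy stand-ins (the real components are pretrained seq2seq networks):
summarize = lambda code: f"summary of {code}"
generate = lambda nl, lang: f"{lang} code for ({nl})"
train_step = lambda pairs: len(pairs)
print(backtranslate_step(["def f(): pass"], summarize, generate, train_step))
```

Swapping the roles of source and target in alternate rounds gives the "vice versa" direction of standard back-translation.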
|
General-to-Specific Transfer Labeling for Domain Adaptable Keyphrase Generation ; Training keyphrase generation (KPG) models requires a large amount of annotated data, which can be prohibitively expensive and often limited to specific domains. In this study, we first demonstrate that large distribution shifts among different domains severely hinder the transferability of KPG models. We then propose a three-stage pipeline, which gradually guides KPG models' learning focus from general syntactical features to domain-related semantics, in a data-efficient manner. With Domain-general Phrase pre-training, we pre-train sequence-to-sequence models with generic phrase annotations that are widely available on the web, which enables the models to generate phrases in a wide range of domains. The resulting model is then applied in the Transfer Labeling stage to produce domain-specific pseudo keyphrases, which help adapt models to a new domain. Finally, we fine-tune the model with limited data with true labels to fully adapt it to the target domain. Our experimental results show that the proposed process can produce good-quality keyphrases in new domains and achieve consistent improvements after adaptation with limited in-domain annotated data. All code and datasets are available at https://github.com/memray/OpenNMT-kpg-release.
|
Generative Language Models for Paragraph-Level Question Generation ; Powerful generative models have led to recent progress in question generation (QG). However, it is difficult to measure advances in QG research since there are no standardized resources that allow a uniform comparison among approaches. In this paper, we introduce QG-Bench, a multilingual and multidomain benchmark for QG that unifies existing question answering datasets by converting them to a standard QG setting. It includes general-purpose datasets such as SQuAD for English, datasets from ten domains and two styles, as well as datasets in eight different languages. Using QG-Bench as a reference, we perform an extensive analysis of the capabilities of language models for the task. First, we propose robust QG baselines based on fine-tuning generative language models. Then, we complement automatic evaluation based on standard metrics with an extensive manual evaluation, which in turn sheds light on the difficulty of evaluating QG models. Finally, we analyse both the domain adaptability of these models as well as the effectiveness of multilingual models in languages other than English. QG-Bench is released along with the fine-tuned models presented in the paper (https://github.com/asahi417/lm-question-generation), which are also available as a demo (https://autoqg.net).
|
Domain Generalization via Ensemble Stacking for Face Presentation Attack Detection ; Face Presentation Attack Detection (PAD) plays a pivotal role in securing face recognition systems against spoofing attacks. Although great progress has been made in designing face PAD methods, developing a model that can generalize well to unseen test domains remains a significant challenge. Moreover, due to the different types of spoofing attacks, creating a dataset with a sufficient number of samples for training deep neural networks is a laborious task. This work proposes a comprehensive solution that combines synthetic data generation and deep ensemble learning to enhance the generalization capabilities of face PAD. Specifically, synthetic data is generated by blending a static image with spatiotemporal encoded images using alpha composition and video distillation. This way, we simulate motion blur with varying alpha values, thereby generating diverse subsets of synthetic data that contribute to a more enriched training set. Furthermore, multiple base models are trained on each subset of synthetic data using stacked ensemble learning. This allows the models to learn complementary features and representations from the different synthetic subsets. The meta-features generated by the base models are used as input to a new model called the meta-model, which combines the predictions from the base models, leveraging their complementary information to better handle unseen target domains and enhance the overall performance. Experimental results on four datasets demonstrate low half total error rates (HTERs) on three benchmark datasets: CASIA-MFSD (8.92%), MSU-MFSD (4.81%), and OULU-NPU (6.70%). The approach shows potential for advancing presentation attack detection by utilizing large-scale synthetic data and the meta-model.
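The alpha-composition step used to build the synthetic subsets is a per-pixel convex combination of two images. A pure-Python sketch on 2x2 "images" (real inputs would be frames from the video distillation step):

```python
def alpha_blend(static_img, encoded_img, alpha):
    """Alpha composition: out = alpha * A + (1 - alpha) * B per pixel.
    Sweeping alpha simulates varying degrees of motion blur, yielding
    diverse synthetic training subsets."""
    return [[alpha * a + (1.0 - alpha) * b
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(static_img, encoded_img)]

still = [[1.0, 0.0], [0.0, 1.0]]    # static image (toy 2x2)
motion = [[0.0, 1.0], [1.0, 0.0]]   # spatiotemporal encoded image (toy 2x2)
print(alpha_blend(still, motion, 0.25))   # -> [[0.25, 0.75], [0.75, 0.25]]
```

Each alpha value defines one synthetic subset; a base model is then trained per subset, and the stacked meta-model combines their predictions.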
|
A Data-Driven Framework for Designing Microstructures of Multifunctional Composites with Deep-Learned Diffusion-Based Generative Models ; This paper puts forward an integrated microstructure design methodology that replaces the common existing design approaches, namely (1) reconstruction of microstructures, (2) analyzing and quantifying material properties, and (3) inverse design of materials, using deep-learned generative and surrogate models. The longstanding issue of microstructure reconstruction is well addressed in this study using a new class of state-of-the-art generative model, the diffusion-based generative model (DGM). Moreover, the conditional formulation of the DGM for guidance toward the embedded desired material properties, with a transformer-based attention mechanism, enables the inverse design of multifunctional composites. A convolutional neural network (CNN)-based surrogate model is utilized to analyze the nonlinear material behavior to facilitate the prediction of material properties for building microstructure-property linkages. Combined, these generative and surrogate models enable large data processing and database construction that is often not affordable with resource-intensive finite element method (FEM)-based direct numerical simulation (DNS) and iterative reconstruction methods. An example case is presented to demonstrate the effectiveness of the proposed approach: designing mechanoluminescence (ML) particulate composites made of europium and dysprosium ions. The results show that the multiple inversely designed ML microstructure candidates obtained with the proposed generative and surrogate models meet the multiple design requirements (e.g., volume fraction, elastic constant, and light sensitivity). The evaluation of the generated samples' quality and the surrogate models' performance using appropriate metrics is also included. This assessment demonstrates that the proposed integrated methodology offers an end-to-end solution for practical material design applications.
|
A Bayesian Generative Adversarial Network (GAN) to Generate Synthetic Time-Series Data, with Application to Combined Sewer Flow Prediction ; Despite various breakthroughs in machine learning and data analysis techniques for improving smart operation and management of urban water infrastructures, some key limitations obstruct this progress. Among these shortcomings, the absence of freely available data, due to data privacy or the high costs of data gathering, and the non-existence of adequate rare or extreme events in the available data play a crucial role. Here, Generative Adversarial Networks (GANs) can help overcome these challenges. In machine learning, generative models are a class of methods capable of learning the data distribution in order to generate artificial data. In this study, we developed a GAN model to generate synthetic time series to balance our limited recorded time series data and improve the accuracy of a data-driven model for combined sewer flow prediction. We considered the sewer system of a small town in Germany as the test case. Precipitation and inflow to the storage tanks are used for the data-driven model development. The aim is to predict the flow using precipitation data and examine the impact of data augmentation using synthetic data on model performance. Results show that the GAN can successfully generate synthetic time series from the real data distribution, which helps achieve more accurate peak flow prediction. However, the model without data augmentation works better for dry weather prediction. Therefore, an ensemble model is suggested to combine the advantages of both models.
|