Metabolic reactions enable cellular function by converting nutrients into energy and by assembling macromolecules that sustain the cellular machinery. Cellular metabolism is usually thought of as a collection of pathways comprising enzymatic reactions associated with broad functional categories. Yet metabolic reactions are highly interconnected: enzymes convert multiple reactants into products with other metabolites acting as co-factors; enzymes can catalyse several reactions, whereas some reactions are catalysed by multiple enzymes, and so on. This enmeshed web of reactions is thus naturally amenable to network analysis, an approach that has been successfully applied to different aspects of cellular and molecular biology, e.g., protein-protein interactions, transcriptional regulation, or protein structure. Tools from network theory have previously been applied to the analysis of structural properties of metabolic networks, including their degree distribution, the presence of metabolic roles, and their community structure. A central challenge, however, is that there are multiple ways to construct a (mathematical) graph from a metabolic network. For example, one can create a graph with metabolites as nodes and edges representing the reactions that transform one metabolite into another; a graph with reactions as nodes and edges corresponding to the metabolites shared among them; or even a bipartite graph with both reactions and metabolites as nodes. Importantly, the conclusions of graph-theoretical analyses depend strongly on the chosen graph construction.

A key feature of metabolic reactions is the directionality of flux: metabolic networks contain both irreversible and reversible reactions, and reversible reactions can change their direction depending on the cellular and environmental context. Many of the existing graph constructions, however, lead to undirected graphs that disregard such directional information, which is central to metabolic function. Furthermore, current graph constructions are usually derived from the whole set of metabolic reactions in an organism, and thus correspond to a generic metabolic `blueprint' of the cell. However, cells switch specific pathways on and off to sustain their energetic budget in different environments. Hence, such blueprint graphs might not capture the specific metabolic connectivity in a given environment, thus limiting their ability to provide biological insight across growth conditions. In this paper, we present a flux-based approach to construct metabolic graphs that encapsulate the directional flow of metabolites produced or consumed through enzymatic reactions. The proposed graphs can be tailored to incorporate flux distributions under different environmental conditions.
To introduce our approach, we proceed in two steps. We first define the _probabilistic flux graph_ (PFG), a weighted, directed graph with reactions as nodes, edges that represent supplier-consumer relationships between reactions, and weights given by the probability that a metabolite chosen at random is produced/consumed by the source/target reaction. This graph can be used to carry out graph-theoretical analyses of metabolic organisation independent of cellular context or environmental conditions. We then show that this formalism can be adapted seamlessly to construct the _metabolic flux graph_ (MFG), a directed, environment-dependent graph with weights computed from flux balance analysis (FBA), the most widespread method to study genome-scale metabolic networks. Our formulation overcomes several drawbacks of current constructions of metabolic graphs. Firstly, in our flux graphs, an edge indicates that metabolites are produced by the source reaction and consumed by the target reaction, thus accounting for metabolic directionality and the natural flow of chemical mass from reactants to products. Secondly, the probabilistic flux graph naturally discounts the over-representation of pool metabolites (e.g., adenosine triphosphate (ATP), nicotinamide adenine dinucleotide (NADH), protons, water, and other co-factors) that appear in many reactions and tend to obfuscate the graph connectivity. Our construction thus avoids the removal of pool metabolites from the network, which can change the graph structure drastically. Finally, the metabolic flux graph incorporates additional biological information reflecting the effect of the environmental context into the graph construction. In particular, since the weights in the MFG correspond directly to fluxes (in units of mass per time), different biological scenarios can be analysed using fluxes from FBA solutions obtained under different carbon sources and other environmental perturbations. After introducing the mathematical framework, we showcase our approach on the core model of _Escherichia coli_ metabolism. In the absence of environmental context, our analysis of the PFG reveals the importance of including directionality and appropriate edge weights in the graph to understand the modular organisation of metabolic sub-systems. We then use FBA solutions for several relevant growth conditions, and show that the structure of the MFG changes dramatically in each case, thus capturing the environment-dependent nature of metabolism.

Consider a metabolic network composed of $n$ metabolites $x_i$ with concentrations $[x_i]$ ($i=1,\ldots,n$) that participate in $m$ reactions

\[ R_j: \quad \sum_{i=1}^{n} \alpha_{ij}\, x_i \;\longrightarrow\; \sum_{i=1}^{n} \beta_{ij}\, x_i, \qquad j=1,2,\ldots,m, \label{eq:reac} \]

where $\alpha_{ij}$ and $\beta_{ij}$ are the stoichiometric coefficients of species $x_i$ in reaction $R_j$. Each reaction takes place at a time-dependent rate $v_j(t)$, measured in units of concentration per time. The dynamics of the metabolite concentrations is represented compactly by the system of differential equations

\[ \frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} = S\,\mathbf{v}, \]

where $\mathbf{x}$ is the $n$-dimensional vector of metabolite concentrations and $\mathbf{v}$ is the $m$-dimensional vector of reaction rates. The matrix $S$ is the $n \times m$ stoichiometric matrix with entries $S_{ij} = \beta_{ij} - \alpha_{ij}$, i.e., the net number of molecules of metabolite $x_i$ produced (positive) or consumed (negative) by the $j$-th reaction.
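As a concrete illustration of this notation, the following minimal numpy sketch uses a hypothetical three-metabolite, four-reaction chain (not the toy model introduced in the next section) to show how the stoichiometric matrix encodes the dynamics:

```python
import numpy as np

# Hypothetical toy network with 3 metabolites (rows) and 4 reactions (columns).
# Entry S[i, j] = beta_ij - alpha_ij: net molecules of metabolite i made by reaction j.
S = np.array([
    [ 1, -1,  0,  0],   # x1: produced by R1 (uptake), consumed by R2
    [ 0,  1, -1,  0],   # x2: produced by R2, consumed by R3
    [ 0,  0,  1, -1],   # x3: produced by R3, consumed by R4 (secretion)
])

v = np.array([1.0, 1.0, 1.0, 1.0])   # reaction rates (flux vector)
dxdt = S @ v                          # time derivative of the metabolite concentrations
print(dxdt)                           # all zeros: this particular flux vector is at steady state
```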
To illustrate the different schemes and graphs described in this paper, we use a toy example of a metabolic network including nutrient uptake, biosynthesis of metabolic intermediates, secretion of waste products, and biomass production (Figure [fig:toymodel_networks]a).

[Figure [fig:toymodel_networks]: (a) Toy metabolic network. (b) Bipartite graph associated with the Boolean stoichiometric matrix, and the standard reaction adjacency graph (RAG); the undirected edges of the RAG indicate the number of shared metabolites among reactions. (c) The probabilistic flux graph (PFG) and two metabolic flux graphs (MFGs) constructed from the unfolded consumption and production stoichiometric matrices; the reversible reaction is unfolded into two nodes. The PFG is a directed graph with weights representing the probability that the source reaction produces a metabolite consumed by the target reaction. The MFGs are constructed from two flux balance analysis solutions obtained by optimising a biomass objective function under different flux constraints representing different environmental or cellular contexts (see Sec. [sec:toy_model] for details). The edges of the MFGs represent mass flow from source to target reactions, with weights in units of metabolic flux. The computed FBA solutions translate into different connectivity in the resulting MFGs.]

Starting from the stoichiometry, there are several ways to construct a graph for a given set of metabolic reactions. A common approach is to define the _unipartite graph_ with reactions as nodes and the adjacency matrix

\[ A = B^{T} B, \]

where $B$ is the Boolean version of the stoichiometric matrix $S$ (i.e., $B_{ij} = 1$ when $S_{ij} \neq 0$ and $B_{ij} = 0$ otherwise). In this _reaction adjacency graph_ (RAG), two reactions (nodes) are connected if they share metabolites, either as reactants or products, and self-loops represent the total number of metabolites that participate in a reaction (Fig. [fig:toymodel_networks]b). Though widely studied, the RAG has important known limitations, as it overlooks key aspects of the connectivity of metabolic networks. In particular, the RAG does not distinguish between forward and backward fluxes, nor does it incorporate information on the irreversibility of reactions ($A$ is a symmetric matrix). Furthermore, the structure of $A$ is dominated by the large number of connections introduced by pool metabolites (e.g., water, ions or enzymatic cofactors) that appear in many reactions. Although computational schemes have been designed to ameliorate the pool metabolite bias, their justification does not follow from biophysical considerations. Finally, a graph constructed from rigid topological criteria is not easily extended to incorporate the effect of varying environmental contexts.

To address the limitations of the standard reaction adjacency graph defined above, and aiming at enhanced biophysical and biological interpretability, we propose a graph formulation that follows from a flux-based perspective. To construct our graph, we unfold each reaction into separate forward and reverse directions, and redefine the presence of links between reaction nodes to reflect producer-consumer relationships, i.e., two reactions are connected if one produces a metabolite that is consumed by the other. As shown below, this definition leads to graphs that naturally account for the reversibility of reactions and allows for a seamless integration of different biological contexts modelled through FBA.
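For readers who want to reproduce the standard construction, a minimal sketch of the RAG adjacency (reusing the hypothetical matrix S from the previous snippet) is:

```python
import numpy as np

# Boolean stoichiometry: 1 if metabolite i participates in reaction j (as reactant or product),
# reusing the hypothetical matrix S from the previous snippet.
B = (S != 0).astype(int)

# Reaction adjacency graph (RAG): entry (j, k) counts the metabolites shared by reactions j and k;
# the diagonal (self-loops) counts how many metabolites each reaction involves.
A_rag = B.T @ B
```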
Inspired by matrix formulations of chemical reaction network kinetics, we unfold the reactions into forward and backward components. Specifically, let us rewrite the reaction rate vector $\mathbf{v}$ introduced above as

\[ \mathbf{v} = \mathbf{v}^{+} - R\, \mathbf{v}^{-}, \]

where $\mathbf{v}^{+}$ and $\mathbf{v}^{-}$ are non-negative vectors containing the forward and backward reaction rates, respectively, and $\mathbf{r}$ is the $m$-dimensional reversibility vector with components $r_j = 1$ if reaction $R_j$ is reversible and $r_j = 0$ if it is irreversible. The matrix $R = \mathrm{diag}(\mathbf{r})$ contains $\mathbf{r}$ in its main diagonal. With these definitions, we rewrite the metabolic model above in terms of the unfolded $2m$-dimensional vector of reaction rates, $\hat{\mathbf{v}} = [\mathbf{v}^{+T},\, \mathbf{v}^{-T}]^{T}$, to obtain

\[ \frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} = S \begin{bmatrix} I_m & -R \end{bmatrix} \hat{\mathbf{v}} = \hat{S}\, \hat{\mathbf{v}}, \]

where $I_m$ is the $m \times m$ identity matrix, and $\hat{S} = [\, S \;\; -SR \,]$ is an unfolded version of the stoichiometric matrix corresponding to the forward and reverse reactions. The unfolding into forward and backward fluxes leads us to the definition of _production_ and _consumption_ stoichiometric matrices:

\[ \hat{S}^{P} = \tfrac{1}{2}\left( |\hat{S}| + \hat{S} \right), \qquad \hat{S}^{C} = \tfrac{1}{2}\left( |\hat{S}| - \hat{S} \right), \]

where $|\hat{S}|$ is the matrix whose entries are the absolute values of the corresponding entries of $\hat{S}$. Note that each entry of $\hat{S}^{P}$, denoted $\hat{S}^{P}_{ki}$, gives the number of molecules of metabolite $x_k$ produced by reaction $i$. Conversely, the entries of $\hat{S}^{C}$, denoted $\hat{S}^{C}_{kj}$, correspond to the number of molecules of metabolite $x_k$ consumed by reaction $j$.

Within our directional flux framework, it is natural to consider a probabilistic description of producer-consumer relationships between reactions, as follows. Given a stoichiometric matrix, and in the absence of further biological information, the probability that metabolite $x_k$ is produced by reaction $i$ and consumed by reaction $j$ is

\[ \frac{\hat{S}^{P}_{ki}}{w^{P}_{k}} \,\frac{\hat{S}^{C}_{kj}}{w^{C}_{k}}, \]

where $w^{P}_{k} = \sum_{i'} \hat{S}^{P}_{ki'}$ and $w^{C}_{k} = \sum_{j'} \hat{S}^{C}_{kj'}$ are the total number of molecules of $x_k$ produced and consumed by all reactions. We thus define the weight of the edge between reaction nodes $i$ and $j$ as the probability that _any_ metabolite chosen at random is produced by $i$ and consumed by $j$. Summing over all metabolites and normalising, the edge weight is defined as

\[ \left[ M^{\mathrm{PFG}} \right]_{ij} = \frac{1}{n} \sum_{k=1}^{n} \frac{\hat{S}^{P}_{ki}\, \hat{S}^{C}_{kj}}{w^{P}_{k}\, w^{C}_{k}} . \]

These edge weights are the entries of the adjacency matrix of the probabilistic flux graph,

\[ M^{\mathrm{PFG}} = \frac{1}{n} \left( \hat{S}^{P} \right)^{T} \mathrm{diag}\!\left( \hat{S}^{P} \mathbf{1} \right)^{+} \mathrm{diag}\!\left( \hat{S}^{C} \mathbf{1} \right)^{+} \hat{S}^{C}, \]

where $\mathbf{1}$ is a vector of ones and $(\cdot)^{+}$ denotes the Moore-Penrose pseudoinverse. The PFG is a weighted, directed graph whose edge weights define a probability distribution over ordered pairs of reactions. It provides a directional blueprint of the whole metabolic model, and naturally scales the contribution of pool metabolites to flux transfer. In Figure [fig:toymodel_networks]c we illustrate the creation of the PFG for a toy network. Note that our PFG is distinct from a directed graph directly comparable to the RAG (and with similar shortcomings), which can be constructed from Boolean versions of the production and consumption stoichiometric matrices, as shown in Sec. [sec:adir_vs_d_si].
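A minimal numpy sketch of the PFG construction described above is given below; the function name, the guard for metabolites that are never produced or consumed, and the overall $1/n$ normalisation are our own illustrative choices:

```python
import numpy as np

def pfg_adjacency(S, reversible):
    """Probabilistic flux graph (PFG) adjacency matrix, following the construction in the
    text (an illustrative sketch; the exact normalisation is our reconstruction)."""
    R = np.diag(reversible.astype(float))        # reversibility matrix diag(r)
    S_hat = np.hstack([S, -S @ R])               # unfolded stoichiometry [S, -SR]
    S_prod = (np.abs(S_hat) + S_hat) / 2         # production stoichiometric matrix
    S_cons = (np.abs(S_hat) - S_hat) / 2         # consumption stoichiometric matrix
    w_prod = S_prod.sum(axis=1, keepdims=True)   # total molecules of each metabolite produced
    w_cons = S_cons.sum(axis=1, keepdims=True)   # total molecules of each metabolite consumed
    P = np.divide(S_prod, w_prod, out=np.zeros_like(S_prod), where=w_prod > 0)  # production shares
    C = np.divide(S_cons, w_cons, out=np.zeros_like(S_cons), where=w_cons > 0)  # consumption shares
    n = S.shape[0]
    return (P.T @ C) / n                         # (2m x 2m) PFG adjacency matrix

# Using the hypothetical matrix S from the first snippet, with reaction R2 taken as reversible:
W_pfg = pfg_adjacency(S, reversible=np.array([0, 1, 0, 0]))
```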
We now extend the idea behind the construction of the PFG to account for a specific environmental context or growth conditions. Cells adjust their metabolic fluxes to respond to the availability of nutrients and environmental requirements. Flux balance analysis (FBA) is a widely used method to predict environment-specific flux distributions. FBA computes a vector of metabolic fluxes that maximise a cellular objective (e.g., biomass, growth or ATP production). The FBA solution is obtained assuming steady-state conditions and subject to constraints that describe the availability of nutrients and other extracellular compounds. The key elements of FBA are briefly summarised in Section [sec:fba]. To incorporate the biological information afforded by FBA solutions into the structure of a metabolic graph, we again define the graph edges in terms of production and consumption fluxes.

Similarly to the unfolding above, we split the FBA solution vector $\mathbf{v}^{*}$ into forward and backward components, $\hat{\mathbf{v}}^{*}$, whence we compute the vector of production and consumption fluxes as

\[ \mathbf{J} = \hat{S}^{P} \hat{\mathbf{v}}^{*} = \hat{S}^{C} \hat{\mathbf{v}}^{*} . \]

The $k$-th entry of $\mathbf{J}$ is the flux at which metabolite $x_k$ is produced and consumed. Note that production and consumption fluxes are identical because of the steady-state condition ($S\mathbf{v}^{*} = \mathbf{0}$). We now construct the flux graph by defining the weight of the edge between reactions $i$ and $j$ as the _total flux of metabolites produced by $i$ that are consumed by $j$_. Assuming that the amount of metabolite produced by one reaction is distributed among the reactions that consume it in proportion to their flux, the flux of metabolite $x_k$ from reaction $i$ to $j$ is given by

\[ J_{k}^{\,i \to j} = \frac{\left( \hat{S}^{P}_{ki}\, \hat{v}^{*}_{i} \right) \left( \hat{S}^{C}_{kj}\, \hat{v}^{*}_{j} \right)}{J_{k}} . \]

For example, if the total flux of metabolite $x_k$ is $J_k$, with reaction $i$ producing $x_k$ at a rate $p$ and reaction $j$ consuming it at a rate $c$, then the flux of $x_k$ from $i$ to $j$ is $pc/J_k$. Summing over all metabolites, we then obtain the edge weight relating reactions $i$ and $j$ as

\[ \left[ M^{\mathrm{MFG}} \right]_{ij} = \sum_{k=1}^{n} J_{k}^{\,i \to j} . \]

The edge weights are collected into the adjacency matrix of the metabolic flux graph,

\[ M^{\mathrm{MFG}} = \left( \hat{S}^{P} \mathrm{diag}(\hat{\mathbf{v}}^{*}) \right)^{T} \mathrm{diag}(\mathbf{J})^{+} \left( \hat{S}^{C} \mathrm{diag}(\hat{\mathbf{v}}^{*}) \right), \]

where $(\cdot)^{+}$ denotes the matrix pseudoinverse. The MFG is a directed graph with weights in units of flux (mass per unit time). Self-loops describe the metabolic flux of autocatalytic reactions, i.e., those in which products are also reactants. The MFG provides a versatile framework to create environment-specific metabolic graphs from FBA solutions. In Figure [fig:toymodel_networks]c we illustrate the creation of MFGs for a toy network. In each case we compute FBA solutions under a fixed uptake flux and constrain the remaining fluxes to account for different biological scenarios. In scenario 1 the fluxes are constrained to be strictly positive and no larger than the nutrient uptake flux, while in scenario 2 we additionally impose a positive lower bound on one of the reactions. The graph in scenario 2 displays an extra edge and distinct edge weights as compared to scenario 1 (see Sec. [sec:toy_model]). The results thus illustrate how changes in the FBA solutions translate into different graph connectivities and edge weights.

[Figure [fig:model_a_dnorm]: (a) The core metabolic model of _E. coli_ in its traditional pathway representation. (b) The reaction adjacency graph: nodes represent reactions; two reactions are linked by an undirected edge if they share reactants or products; nodes are coloured according to their PageRank score, a measure of their centrality in the graph. (c) The directed probabilistic flux graph: reversible reactions are unfolded into two overlapping nodes (one for the forward reaction, one for the backward); directed links indicate flow of metabolites produced by the source node and consumed by the target node; nodes are coloured according to their PageRank score. (d) Comparison of PageRank percentiles of reactions in the RAG and the PFG: reversible reactions are represented by two triangles connected by a line; both share the same PageRank in the RAG, but each has its own PageRank in the PFG; reactions above (below) the diagonal have increased (decreased) PageRank in the PFG as compared to the RAG.]

To illustrate our framework, we construct and analyse flux graphs (PFG and MFGs) of the core metabolic model of _E. coli_. This model (Fig. [fig:model_a_dnorm]a) contains 72 metabolites and 95 reactions, grouped into 11 pathways, which describe the main biochemical routes in central carbon metabolism.
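Returning to the metabolic flux graph defined above, the analogous sketch takes an FBA solution vector and returns the flux-weighted adjacency matrix; names and conventions (e.g., how the solution is split into forward and backward parts) are illustrative assumptions rather than the authors' code:

```python
import numpy as np

def mfg_adjacency(S, reversible, v_star):
    """Metabolic flux graph (MFG) adjacency built from an FBA solution v_star.
    An illustrative sketch of the construction described in the text."""
    R = np.diag(reversible.astype(float))
    S_hat = np.hstack([S, -S @ R])                         # unfolded stoichiometry [S, -SR]
    S_prod = (np.abs(S_hat) + S_hat) / 2                   # production stoichiometry
    S_cons = (np.abs(S_hat) - S_hat) / 2                   # consumption stoichiometry
    v_hat = np.concatenate([np.maximum(v_star, 0.0),       # forward fluxes
                            np.maximum(-v_star, 0.0)])     # backward fluxes
    P = S_prod * v_hat                                     # flux of each metabolite produced by each reaction
    C = S_cons * v_hat                                     # flux of each metabolite consumed by each reaction
    J = P.sum(axis=1, keepdims=True)                       # total production flux of each metabolite
    J_inv = np.divide(1.0, J, out=np.zeros_like(J), where=J > 0)
    return P.T @ (J_inv * C)                               # (2m x 2m) MFG adjacency, in flux units
```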
To highlight the effect of flux directionality on the constructed graphs, Figure [fig:model_a_dnorm] compares the standard undirected reaction adjacency graph (RAG) and our proposed probabilistic flux graph (PFG) for the same metabolic model. The RAG has 95 nodes and 1,158 undirected edges, while the PFG has 154 nodes and 1,604 directed and weighted edges. The increase in node count is due to the unfolding of reversible reactions into separate forward and backward nodes. Unlike the RAG, where the edges represent shared metabolites between two reactions, the directed edges of the PFG represent the flow of metabolites from a source to a target reaction. A salient feature of both graphs is their high connectivity, which is not apparent from the traditional pathway representation in Figure [fig:model_a_dnorm]a.

The impact of directionality becomes apparent when comparing the importance of reaction nodes for the overall connectivity of each graph, as measured by the PageRank score introduced in the original Google algorithm. Figure [fig:model_a_dnorm]b-d shows that the PageRank of reactions is substantially different in the RAG and the PFG. The overall ordering is maintained: exchange reactions tend to have low PageRank, whereas core metabolic reactions have high PageRank in both graphs; indeed, the biomass reaction has the highest rank in both cases. However, we observe a dramatic change in the importance of many individual reactions. For example, the reactions for ATP maintenance (ATPM), phosphoenolpyruvate synthase (PPS) and ABC-mediated transport of L-glutamine (GLNabc) drop from being among the top 10% most important reactions in the RAG to the bottom percentiles in the PFG. Conversely, other reactions such as aconitase A (ACONTa), transaldolase (TALA), succinyl-CoA synthetase (SUCOAS), and formate transport via diffusion (FORti) gain substantial importance in the PFG. For instance, FORti is the sole consumer of formate, which is produced by pyruvate formate lyase (PFL), a reaction that is highly connected to the rest of the network. Importantly, for most reversible reactions, such as ATP synthase (ATPS4r), there is a wide gap between the PageRank of the forward and backward reactions, suggesting a marked asymmetry in the importance of metabolic flows.
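Before turning to directionality-aware community detection, the PageRank comparison discussed above can be sketched with standard tools (networkx is assumed here; A_rag and W_pfg are the matrices from the earlier hypothetical snippets):

```python
import networkx as nx

# Compare reaction importance in the undirected RAG and the directed, weighted PFG.
G_rag = nx.from_numpy_array(A_rag)                              # undirected graph
G_pfg = nx.from_numpy_array(W_pfg, create_using=nx.DiGraph)     # directed, weighted graph

pr_rag = nx.pagerank(G_rag, alpha=0.85, weight="weight")
pr_pfg = nx.pagerank(G_pfg, alpha=0.85, weight="weight")

# Nodes sorted by PageRank in each graph; comparing the two orderings shows which
# reactions gain or lose importance once directionality is taken into account.
ranking_rag = sorted(pr_rag, key=pr_rag.get, reverse=True)
ranking_pfg = sorted(pr_pfg, key=pr_pfg.get, reverse=True)
```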
[Figure [fig:avsd_comms]: The RAG and PFG from Fig. [fig:model_a_dnorm]b-c partitioned into communities computed with the Markov Stability method; for clarity, the graph edges are not shown. The Sankey diagrams show the correspondence between biochemical pathways and the communities found in each graph. The word clouds contain the metabolites that participate in the reactions of each community, with word size proportional to the number of reactions in which each metabolite participates.]

Community detection is a technique frequently used in the analysis of complex graphs: nodes are clustered into tightly related communities in order to reveal the coarse-grained structure of the graph, potentially at different levels of resolution. Indeed, the community structure of graphs derived from metabolic networks has been the subject of several analyses. However, most existing community detection methods are only applicable to undirected graphs and fail to capture the directionality of the edges, a key feature in metabolism.

In order to account for directionality, we use the Markov Stability community detection framework, which employs diffusions on graphs to detect groups of nodes where flows are retained persistently across time scales. Due to its use of diffusive dynamics, Markov Stability is ideally suited to find multi-resolution community structure, and it naturally incorporates edge directionality, if present (see Sec. [sec:markov]). When applied to metabolic graphs, Markov Stability can thus reveal groups of reactions that are tightly linked by the flow of metabolites they produce and consume. Figure [fig:avsd_comms] highlights the strong differences between the community structure of the undirected RAG and the directed PFG of the core metabolism of _E. coli_, underscoring the importance of directionality in these graphs.

When applied to the RAG, Markov Stability reveals a robust partition into seven communities (Figure [fig:avsd_comms]b, see also Sec. [sec:avsd_comms_si]). The reaction communities obtained are largely determined by the edges created by abundant pool metabolites. For example, community C1 is mainly composed of reactions that consume or produce ATP and water. Note, however, that the biomass reaction (the largest consumer of ATP) is not a member of C1 because, in this graph construction, any connection involving ATP has equal weight. Other communities in the RAG are also determined by pool metabolites; e.g., C2 is dominated by protons, and C3 by NAD and NADP, as shown by the word clouds representing the relative frequency of metabolites that appear in the reactions contained in each community. The community structure of the RAG thus reflects the limitations of this graph construction due to the absence of biological context and the large number of uninformative links introduced by pool metabolites.

In contrast, we found a robust partition into five communities for the PFG (Figure [fig:avsd_comms]c, see also Sec. [sec:avsd_comms_si]). These communities comprise reactions related consistently through biochemical pathways. Community C1 contains the reactions of the pentose phosphate pathway together with the first steps of glycolysis involving D-fructose, D-glucose, or D-ribulose. Community C2 contains the main reactions that produce ATP, from substrate-level as well as oxidative phosphorylation, together with the biomass reaction. Community C3 includes the core of the citric acid cycle, anaplerotic reactions related to malate synthesis, as well as the intake of cofactors such as CO2. Community C4 contains reactions that are secondary sources of carbon (such as malate and succinate), as well as oxidative phosphorylation reactions. Finally, community C5 contains reactions that are part of the pyruvate metabolism subsystem, as well as transport reactions for the most common secondary carbon metabolites such as lactate, formate, acetaldehyde and ethanol. Altogether, the communities of the PFG reflect metabolite flows associated with specific cellular functions, a key benefit and consequence of including flux directionality in the graph construction. As seen in Fig. [fig:avsd_comms]c, the communities are no longer exclusively determined by pool metabolites (e.g., water is no longer dominant and protons are spread among all communities).
For a more detailed explanation and comparison of the communities found in the RAG and PFG, see Section [sec:avsd_comms_si].

To incorporate the impact of environmental context in our graphs, we construct different metabolic flux graphs using flux distributions obtained from flux balance analysis applied to the core model of _E. coli_ metabolism under several growth conditions: aerobic growth in rich media with glucose or ethanol, aerobic growth in glucose but phosphate- and ammonium-limited, and anaerobic growth in glucose. The results, summarised in Figure [fig:networksm], reveal how changes in metabolite flows induced by different biological contexts are reflected in our graph construction. In all cases, the MFGs have fewer nodes than the blueprint graph because the FBA solutions contain numerous reactions with zero flux. The different environments also affect the graph connectivity and the relative node importance, as measured by their PageRank score. Furthermore, the community structure of the MFGs for the four environmental conditions, as obtained with the Markov Stability framework, reflects the distinct usage of functional pathways by the cell in response to growth requirements under specific environments. We briefly describe the salient features of the analysis; a more detailed discussion can be found in Section [sec:m_comms_si] and Fig. [fig:markov_stability_fba_networks] in the SI.

Aerobic growth in D-glucose ($\mathbf{M}_{\mathrm{glc}}$). We observe a robust partition into three communities with an intuitive biological interpretation (Fig. [fig:networksm]a and Fig. [fig:markov_stability_fba_networks]a). Firstly, C1 can be thought of as a carbon-processing community, comprising reactions that process carbon from D-glucose to pyruvate, including most of the glycolysis and pentose phosphate pathways, together with related transport and exchange reactions. Secondly, C2 harbours the bulk of reactions related to oxidative phosphorylation and the production of energy in the cell, including the electron transport chain of NADH dehydrogenase, cytochrome oxidase, and ATP synthase, as well as transport reactions for phosphate and oxygen intake and proton balance. The growth reaction is also included in community C2, consistent with ATP being the main substrate for both the ATP maintenance (ATPM) requirement and the biomass reaction in this biological scenario. Finally, C3 contains reactions related to the citric acid (TCA) cycle and the production of NADH and NADPH (i.e., the cell's reductive power), as well as routes that take phosphoenolpyruvic acid (PEP) as a starting point, thus highlighting carbon intake routes that are strongly linked to the TCA cycle.
Aerobic growth in ethanol ($\mathbf{M}_{\mathrm{etoh}}$). We found a robust partition into three communities that resemble those found in $\mathbf{M}_{\mathrm{glc}}$, with subtle yet important differences (Fig. [fig:networksm]b and Fig. [fig:markov_stability_fba_networks]b). The most salient differences are observed in the carbon-processing community C1, which clearly reflects the switch of carbon source from D-glucose to ethanol. This community contains gluconeogenic reactions (instead of glycolytic ones), due to the reversal of flux induced by the change of carbon source, as well as anaplerotic reactions and reactions related to glutamate metabolism. The main role of the reactions in this community is the production of bioprecursors such as PEP, pyruvate, 3-phospho-D-glycerate (3PG), glyceraldehyde-3-phosphate (G3P), D-fructose-6-phosphate (F6P), and D-glucose-6-phosphate, all of which are substrates for growth. Consequently, the biomass reaction is also grouped within C1, due to the increased metabolic flux of precursors relative to ATP production in this biological scenario. The other two reaction communities (energy generation, C2, and citric acid cycle, C3) display less prominent differences relative to the $\mathbf{M}_{\mathrm{glc}}$ graph, with additional pyruvate metabolism and anaplerotic reactions, as well as subtle changes in the assignment of reactions involved in the NADH/NADPH balance and in the source of acetyl-CoA.

Anaerobic growth in D-glucose ($\mathbf{M}_{\mathrm{anaero}}$). The absence of oxygen has a profound impact on the metabolic balance of the cell, and the MFG captures the drastic changes of this new regime effectively (Fig. [fig:networksm]c and Fig. [fig:markov_stability_fba_networks]c). Both the connectivity and the reaction communities in this MFG are different from the aerobic scenarios, with a much diminished presence of oxidative phosphorylation pathways and the absence of the first two steps of the electron transport chain (CYTBD and NADH16). We found that $\mathbf{M}_{\mathrm{anaero}}$ has a robust partition into four communities. C1 still contains carbon processing (glucose intake and glycolysis), yet these reactions are decoupled from the pentose phosphate pathway, which is now part of community C3, grouped with the citric acid cycle (now incomplete) and the biomass reaction. C3 includes the growth precursors in this scenario, including alpha-D-ribose-5-phosphate (R5P), D-erythrose-4-phosphate (E4P), oxaloacetate and NADPH. The other two communities are specific to the anaerobic context: C2 contains the conversion of PEP into formate (more than half of the carbon secreted by the cell becomes formate); and C4 includes NADH production and consumption via reactions linked to glyceraldehyde-3-phosphate dehydrogenase (GAPD).
Aerobic growth in D-glucose but limited phosphate and ammonium ($\mathbf{M}_{\mathrm{lim}}$). Under these growth-limiting conditions, we found a robust partition into three communities (Fig. [fig:networksm]d and Fig. [fig:markov_stability_fba_networks]d). The community structure reflects _overflow metabolism_, which occurs when the cell takes in more carbon than it can process. As a consequence, the excess carbon is secreted from the cell, leading to a strong decrease in growth and a partial shutdown of the citric acid cycle. This is reflected in the reduced weight of the TCA pathway in C3 and its grouping with the secretion routes of acetate and formate. Hence, C3 comprises reactions that would not be strongly coupled in more favourable growth conditions, yet are linked together by metabolic responses appearing due to the limited availability of ammonium and phosphate. Furthermore, the carbon-processing community C1 contains the glycolytic pathway, but detached from the pentose phosphate pathway, as in the $\mathbf{M}_{\mathrm{anaero}}$ graph, highlighting its role in precursor formation. The bioenergetic machinery is contained in community C2, including the pentose phosphate pathway, with a smaller role for the electron transport chain (21.8% of the total ATP, as compared to 66.5% in $\mathbf{M}_{\mathrm{glc}}$).

Another advantage of flux-based MFGs is the possibility of applying network-theoretic tools to detect natural groupings of reactions at different levels of resolution, as well as their hierarchical relationship across scales. The Markov Stability framework can be used to detect multi-resolution community structure in directed graphs (Sec. [sec:markov]), thus allowing the exploration of the modular multiscale organisation of metabolic reaction networks.

[Figure [fig:vi_alluvialdiagrams]: Multiscale community structure of the MFG for aerobic growth in glucose ($\mathbf{M}_{\mathrm{glc}}$) across levels of resolution. The top panel shows the number of communities of the optimal partition (blue line) and two measures of its robustness (green line and colour map) as a function of the Markov time (see text and Methods section). The five Markov times selected correspond to robust partitions of the graph into 11, 7, 5, 3, and 2 communities, as signalled by extended low values of the cross-time variation of information and low values (or pronounced dips) of the variation of information within the optimisation ensemble. The Sankey diagram (middle panel) visualises the multiscale organisation of the communities of the flux graph across Markov times, and the relationship of the communities with the biochemical pathways. The bottom panel shows the five partitions at the selected Markov times. The partition into 3 communities corresponds to that in Figure [fig:networksm]a.]

Figure [fig:vi_alluvialdiagrams] illustrates this multiscale analysis on the metabolic flux graph of _E. coli_ under aerobic growth in glucose ($\mathbf{M}_{\mathrm{glc}}$). By varying the Markov time, a resolution parameter in the Markov Stability method, we scanned the community structures at different resolutions. Our results show that, as we move from finer to coarser resolutions, the MFG can be partitioned into 11, 7, 5, 3, and 2 communities, which have high robustness across Markov time (extended plateaux of optimality, as shown by low values of the cross-time variation of information) and are highly robust within the optimisation ensemble (as shown by dips in the average variation of information of the optimised solutions). For further details, see Section [sec:markov] and the references therein.
The Sankey diagram in Fig. [fig:vi_alluvialdiagrams] allows us to visualise the pathway composition of the graph partitions and their relationships across different resolutions. As we decrease the resolution (longer Markov times), the reactions in different pathways assemble and split into different groupings, reflecting both specific relationships and general organisational principles associated with this growth condition. A general observation is that glycolysis is grouped together with oxidative phosphorylation across most scales, underlining the fact that those two pathways function as cohesive metabolic sub-units in aerobic conditions. In contrast, the exchange and transport pathways appear spread among multiple communities across all resolutions. This is expected, as these are enabling functional pathways whose reactions do not interact amongst themselves but rather feed substrates to other pathways. Other reaction groupings reflect more specific relationships. For example, the citric acid cycle (always linked to anaplerotic reactions) appears as a cohesive unit across most scales, and only splits in two in the very final flux grouping, reflecting the global role of the TCA cycle in linking to both glycolysis and oxidative phosphorylation. The pentose phosphate pathway, on the other hand, is split into two groups (one linked to glutamate metabolism and another linked to glycolysis) at the finer scales, only merging into a single community towards the final groupings. This suggests a more interconnected flux relationship of the different steps of the pentose phosphate pathway with the rest of metabolism. Figure [fig:markov_stability_fba_networks] contains a multiscale analysis of the communities for the other three growth scenarios.

Metabolic reactions are commonly understood in terms of functional pathways that are heavily interconnected to form metabolic networks, i.e., metabolites linked by arrows representing enzymatic reactions between them (Figs. [fig:model_a_dnorm] and [fig:networksm]). However, such standard representations are not amenable to rigorous graph-theoretic analysis. Importantly, there are fundamentally different graphs that can be constructed from the metabolic reaction information depending on the chosen representation of species/interactions as nodes/edges, e.g., reactions as nodes; metabolites as nodes; or both reactions and metabolites as nodes. Each one of these graphs can be directed or undirected, and with weighted links computed according to different rules. The choices and subtleties in graph construction are crucial both to capture the relevant metabolic information and to interpret the resulting topological properties.
Here we have presented a flux-based strategy to build graphs for metabolic networks. Our graphs have reactions as nodes and directed edges representing the flux of metabolites produced by a source reaction and consumed by a target reaction. This principle can be applied to build both `blueprint' graphs (PFG), which summarise the probabilistic fluxes of the whole metabolism of an organism, and context-specific graphs (MFGs), which reflect specific environmental conditions. The blueprint probabilistic flux graph, with edge weights equal to the probability that the source/target reactions produce/consume a molecule of a metabolite chosen at random, naturally tames the over-representation of pool metabolites without the need to remove them from the graph arbitrarily, as is often done in the literature. The context-specific metabolic flux graphs (MFGs) incorporate the effect of the environment, as edge weights correspond to the total flux of metabolites between reactions as calculated by flux balance analysis (FBA). Computing FBA solutions for different environments allows us to build metabolic graphs systematically for different growth media.

To exemplify our approach, we built and analysed the PFG and MFGs of the core metabolism of _E. coli_. Through the analysis of topological properties and community structure of these graphs, we highlighted the importance of weighted directionality in metabolic graph construction and revealed the flux-mediated relationships between functional pathways under different environments. In particular, the MFGs capture specific metabolic adaptations such as the glycolytic-gluconeogenic switch, overflow metabolism, and the effects of anoxia. We note that although we have illustrated our analysis on the core metabolism of _E. coli_, the proposed graph construction can be readily applied to large genome-scale metabolic networks.

Our flux graphs provide a systematic connection between network theory and the constraint-based methods widely employed in metabolic modelling, thus opening avenues towards environment-dependent, graph-based analyses of cell metabolism. An area of interest would be to use MFGs to study how the community structure of flux graphs across scales can help characterise metabolic conditions that maximise the efficacy of drug treatments or disease-related distortions, e.g., cancer-related metabolic signatures. In particular, MFGs can quantify metabolic robustness via graph statistics upon removal of reaction nodes.

The proposed graph construction framework can be extended in different directions. The core idea behind our framework is the distinction between production and consumption fluxes, and how to encode both in the links of a graph. This general principle can also be used to build other potentially useful graphs. For example, two other graphs that describe relationships between reactions are the competition graph, built from pairs of consumption relationships, and the synergy graph, built from pairs of production relationships. The competition and synergy graphs are undirected, and their edge weights represent the probability that two reactions consume (competition) or produce (synergy) the same metabolite picked at random. There exist corresponding FBA versions of the competition and synergy flux graphs, which follow from the production and consumption flux definitions above. These graphs could help reveal further relationships between metabolic reactions in the cell and will be the subject of future studies.
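As a rough illustration of the competition and synergy constructions sketched above (whose exact normalisation we have only reconstructed by analogy with the PFG), one possible implementation is:

```python
import numpy as np

def competition_and_synergy(S_prod, S_cons):
    """Undirected competition and synergy graphs built by analogy with the PFG.
    A sketch only; the precise definition and normalisation in the paper may differ."""
    n = S_prod.shape[0]
    def shares(M):
        tot = M.sum(axis=1, keepdims=True)
        return np.divide(M, tot, out=np.zeros_like(M), where=tot > 0)
    P = shares(S_prod)               # per-metabolite production shares
    C = shares(S_cons)               # per-metabolite consumption shares
    competition = C.T @ C / n        # prob. two reactions consume the same random metabolite
    synergy = P.T @ P / n            # prob. two reactions produce the same random metabolite
    return competition, synergy
```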
Our approach could also be extended to include dynamic adaptations of metabolic activity, e.g., by using dynamic extensions of FBA or by incorporating static and time-varying enzyme concentrations. Of particular interest to metabolic modelling, we envision that MFGs could provide a novel route to evaluate the robustness of FBA solutions, by exploiting the non-uniqueness of FBA solutions and comparing the corresponding MFGs in the space of graphs. Such results could enhance the interface between network science and metabolic analysis, allowing for the systematic exploration of the system-level organisation of metabolism in response to environmental constraints and disease states.

Flux balance analysis (FBA) is a widely adopted approach to analyse metabolism and cellular growth. FBA calculates the reaction fluxes that optimise growth in specific biological contexts. The main hypothesis behind FBA is that cells adapt their metabolism to maximise growth in different biological conditions. The conditions are encoded as constraints on the fluxes of certain reactions; for example, constraints on reactions that import nutrients and other necessary compounds from the exterior. Mathematically, FBA is the following constrained optimisation problem:

\[ \max_{\mathbf{v}} \; \boldsymbol{\phi}^{T} \mathbf{v} \quad \text{subject to} \quad S\mathbf{v} = \mathbf{0}, \quad \mathbf{v}_{\min} \leq \mathbf{v} \leq \mathbf{v}_{\max}, \]

where $S$ is the stoichiometric matrix of the model, $\mathbf{v}$ is the vector of fluxes, and $\boldsymbol{\phi}$ is an indicator vector (i.e., $\phi_j = 1$ when $R_j$ is the biomass reaction and zero everywhere else), so that $\boldsymbol{\phi}^{T}\mathbf{v}$ is the flux of the biomass reaction. The constraint $S\mathbf{v} = \mathbf{0}$ enforces mass conservation at stationarity, and $\mathbf{v}_{\min}$ and $\mathbf{v}_{\max}$ are the lower and upper bounds of each reaction's flux. Through these vectors one can encode a variety of different scenarios. The biomass reaction is the most widely used objective flux, although others can be used as well. In our simulations, we set the individual carbon intake rate to 18.5 for every source available in each scenario. We allowed oxygen intake to reach the maximum needed to consume all the carbon, except in the anaerobic scenario, in which the upper bound for oxygen intake was set to zero. In the scenario with limited phosphate and ammonium intake, the NH4+ and phosphate intakes were fixed at values 50% below those of the unrestricted glucose-fed aerobic scenario.
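A minimal sketch of the FBA linear programme above, using a generic LP solver rather than a dedicated package such as COBRApy, could look as follows (function and argument names are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def fba(S, lb, ub, biomass_index):
    """Minimal FBA sketch: maximise the biomass flux subject to S v = 0 and flux bounds.
    Illustrative only; genome-scale models are usually handled with dedicated tools."""
    m = S.shape[1]
    c = np.zeros(m)
    c[biomass_index] = -1.0                      # linprog minimises, so negate the objective
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lb, ub)), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    return res.x                                  # flux vector v*, an FBA solution
```

Different growth conditions correspond to different choices of the bound vectors lb and ub (e.g., closing the oxygen exchange reaction for the anaerobic scenario); the resulting flux vector can then be passed to the MFG construction sketched earlier.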
We extract the communities in each network using the Markov Stability community detection framework. This framework uses diffusion processes on the network to find groups of nodes (i.e., communities) that retain flows for longer than one would expect in a comparable random network; in addition, Markov Stability incorporates directed flows seamlessly into the analysis. The diffusion process we use is a continuous-time Markov process on the network, constructed from the adjacency matrix $A$ and the diagonal matrix of out-strengths $D$, with $D_{ii} = \sum_j A_{ij}$; nodes with no outgoing edges (dangling nodes) are handled through teleportation, as described below. In general, a directed network will not be strongly connected, and thus a Markov process on it will not have a unique steady state. To ensure the uniqueness of the steady state, we add a _teleportation_ component to the dynamics, by which a random walker visiting a node follows an outgoing edge with probability $\lambda$ or jumps (teleports) uniformly to any other node in the network with probability $1-\lambda$. The transition matrix of the Markov process with teleportation is

\[ M = \lambda \left( D^{+}A + \mathrm{diag}(\mathbf{a})\,\frac{1}{N}\,\mathbf{1}\mathbf{1}^{T} \right) + (1-\lambda)\,\frac{1}{N}\,\mathbf{1}\mathbf{1}^{T}, \]

where $N$ is the number of nodes and the vector $\mathbf{a}$ is an indicator for dangling nodes: $a_i = 1$ if node $i$ has no outgoing edges, and $a_i = 0$ otherwise. Here we use a standard value of $\lambda$. The Markov process is described by the rate matrix $-(I_N - M)$ and the ODE

\[ \frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t} = -\mathbf{p}\,(I_N - M), \]

where $\mathbf{p}(t)$ is the row vector of node probabilities. The solution is $\mathbf{p}(t) = \mathbf{p}(0)\, e^{-t(I_N - M)}$, and its stationary state (i.e., $\mathrm{d}\mathbf{p}/\mathrm{d}t = \mathbf{0}$) is $\boldsymbol{\pi}$, the leading left eigenvector of $M$.

A hard partition of the graph into $c$ communities can be encoded into the $N \times c$ membership matrix $H$, where $H_{ij} = 1$ if node $i$ belongs to community $j$ and zero otherwise. The _clustered autocovariance matrix_ of the process is

\[ R(t, H) = H^{T} \left( \Pi\, e^{-t(I_N - M)} - \boldsymbol{\pi}^{T} \boldsymbol{\pi} \right) H, \qquad \Pi = \mathrm{diag}(\boldsymbol{\pi}), \]

and its $(\alpha, \beta)$ entry measures how likely it is that a random walker that started the process in community $\alpha$ finds itself in community $\beta$ after time $t$, when at stationarity. The diagonal elements of $R(t,H)$ thus record how good the communities in $H$ are at retaining flows. The _Markov Stability of the partition_ is then defined as

\[ r(t, H) = \mathrm{trace}\; R(t, H). \]

The optimised communities are obtained by maximising this cost function over the space of all partitions for every time $t$, to obtain an optimised partition $H^{*}(t)$. This optimisation is NP-hard; hence there are no guarantees of optimality. Here we use the Louvain greedy optimisation heuristic, which is known to give high-quality solutions in an efficient manner. The value of the Markov time $t$, i.e., the duration of the Markov process, can be understood as a resolution parameter for the partition into communities. In the limit $t \to 0$, Markov Stability assigns each node to its own community; as $t$ grows, we obtain larger communities because the random walkers have more time to explore the network. We scan through a range of values of $t$ to explore the multiscale community structure of the network. The code for Markov Stability can be found at github.com/michaelschaub/partitionstability.

To identify the important partitions across time, we use two criteria of robustness. Firstly, we run the optimisation 100 times for each value of $t$ and assess the consistency of the solutions found. A relevant partition should be a robust outcome of the optimisation, i.e., the ensemble of optimised solutions should be similar, as measured with the normalised variation of information between two partitions $H$ and $H'$:

\[ VI(H, H') = \frac{2\,\mathcal{H}(H, H') - \mathcal{H}(H) - \mathcal{H}(H')}{\log N}, \]

where $\mathcal{H}(H)$ is the Shannon entropy of the relative frequency of finding a node in each community of partition $H$, and $\mathcal{H}(H, H')$ is the corresponding joint entropy. We then compute the average variation of information of the ensemble of solutions from the Louvain optimisations at each Markov time, $\langle VI(t) \rangle$, taken over all pairs of optimised partitions. If all Louvain runs return similar partitions, then $\langle VI(t) \rangle$ is small, indicating robustness of the partition to the optimisation. Hence we select partitions with low values (or dips) of $\langle VI(t) \rangle$. Secondly, relevant partitions should also be optimal across Markov time, as indicated by low values of the cross-time variation of information, $VI(t, t') = VI\!\left( H^{*}(t), H^{*}(t') \right)$. Therefore, we also search for partitions with extended plateaux of low $VI(t, t')$.
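For completeness, a compact sketch of the Markov Stability computation described above (teleportation transition matrix, stationary distribution, clustered autocovariance) is given below; the teleportation value and all names are assumptions, and the public implementation linked above should be preferred in practice:

```python
import numpy as np
from scipy.linalg import expm

def markov_stability(A, H, t, lam=0.85):
    """Markov Stability r(t, H) of a hard partition (membership matrix H, size N x c) of a
    directed, weighted graph with adjacency A. A sketch of the formulas in the text; the
    teleportation value lam is an assumption, not necessarily the one used in the paper."""
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    out_strength = A.sum(axis=1)
    dangling = (out_strength == 0).astype(float)
    d_inv = np.divide(1.0, out_strength, out=np.zeros_like(out_strength), where=out_strength > 0)
    M = lam * (d_inv[:, None] * A + np.outer(dangling, np.ones(N)) / N) \
        + (1.0 - lam) * np.ones((N, N)) / N                 # teleportation transition matrix
    evals, evecs = np.linalg.eig(M.T)                       # left eigenvectors of M
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()                                      # stationary distribution
    P_t = expm(-t * (np.eye(N) - M))                        # transition kernel at Markov time t
    R_t = H.T @ (np.diag(pi) @ P_t - np.outer(pi, pi)) @ H  # clustered autocovariance matrix
    return np.trace(R_t)                                    # Markov Stability of the partition
```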
M.B.D. acknowledges support from the James S. McDonnell Foundation Postdoctoral Program in Complexity Science/Complex Systems Fellowship Award (#220020349-CS/PD Fellow) and from the Oxford-Emirates Data Science Lab. G.B. acknowledges support from the Spanish Ministry of Economy FPI programme (BES-2012-053772). D.O. acknowledges support from an Imperial College Research Fellowship and from the Human Frontier Science Program through a Young Investigator Grant (RGY0076-2015). J.P. acknowledges support from the Spanish Ministry of Economy and EU FEDER through the Synbiofactory project (CICYT DPI2014-55276-C5-1). M.B. acknowledges funding from the EPSRC through grants EP/I017267/1 and EP/N014529/1.

Data statement: No new data were generated during the course of this research.

A directed version of the RAG could in principle be obtained from the Boolean production/consumption matrices as follows. Projecting onto the space of reactions gives an asymmetric adjacency matrix whose entries count the number of metabolites _produced_ by the source reaction that are _consumed_ by the target reaction. Folding the forward and backward components back onto the original reactions gives a directed version of the reaction adjacency graph on $m$ nodes, directly comparable to the standard RAG. Clearly, when the metabolic model contains only reversible reactions (i.e., the reversibility vector is all ones, $\mathbf{r} = \mathbf{1}$), this directed construction carries the same connectivity information as the RAG. Although the directed version does not include spurious edges introduced by non-existent backward reactions, its structure is still obscured by the effect of uninformative connections created by pool metabolites.

As an illustration of the graph construction, the toy metabolic network in Fig. [fig:toymodel_networks] was taken from the literature. The graph matrices for this model are:
* the reaction adjacency graph (Fig. [fig:toymodel_networks]b);
* the probabilistic flux graph (Fig. [fig:toymodel_networks]c);
* the metabolic flux graph for FBA scenario 1 (Fig. [fig:toymodel_networks]c);
* the metabolic flux graph for FBA scenario 2 (Fig. [fig:toymodel_networks]c).

A robust partition of the RAG into seven communities was found at the selected Markov time (Fig. [fig:markov_stability_template_networks]a). The communities at this resolution (Fig. [fig:avsd_comms]e) are:
* Community C1 contains all the reactions that consume or produce ATP and water (two pool metabolites). Production of ATP comes mostly from oxidative phosphorylation (ATPS4r) and substrate-level phosphorylation reactions such as phosphofructokinase (PFK), phosphoglycerate kinase (PGK) and succinyl-CoA synthetase (SUCOAS). Reactions that consume ATP include glutamine synthetase (GLNS) and the ATP maintenance equivalent reaction (ATPM). The reactions L-glutamine transport via ABC system (GLNabc), acetate transport in the form of phosphotransacetylase (PTAr), and acetate kinase (ACKr) are also part of this community. Additionally, C1 (green) also contains reactions that involve water.
Under normal conditions water is assumed to be abundant in the cell, thus the biological link that groups these reactions together is tenuous.
* Community C2 includes the reactions NADH dehydrogenase (NADH16) and cytochrome oxidase (CYTBD), together with transport and exchange reactions. These two reactions involve pool metabolites (such as protons) which create a large number of connections. Other members include fumarate reductase (FRD7) and succinate dehydrogenase (SUCDi), which couple the TCA cycle with the electron transport chain (through ubiquinone-8 reduction and ubiquinol-8 oxidation). Reactions for the export and transport of most secondary carbon sources (such as pyruvate, ethanol, lactate, acetate, malate, fumarate, succinate or glutamate) are included in the community as well, because of their influence on the proton balance of the cell. Most of these reactions do not occur under normal circumstances. This community highlights the fact that, in the absence of biological context, many reactions that do not normally interact can be grouped together.
* Community C3 contains reactions that produce or consume nicotinamide adenine dinucleotide (NAD), nicotinamide adenine dinucleotide phosphate (NADP), or their reduced variants NADH and NADPH. The main two reactions of the community are NAD(P) transhydrogenase (THD2) and NAD transhydrogenase (NADTRHD). There are also reactions related to the production of NADH or NADPH in the TCA cycle, such as isocitrate dehydrogenase (ICDHyr), 2-oxoglutarate dehydrogenase (AKGDH) and malate dehydrogenase (MDH). The community also includes reactions that are not frequently active, such as malic enzyme NAD (ME1), malic enzyme NADP (ME2), acetaldehyde dehydrogenase (ACALD) and ethanol dehydrogenase (ALCD2x).
* Community C4 contains the main carbon intake of the cell (glucose), the initial steps of glycolysis, and most of the pentose phosphate shunt. These reactions are found in this community because the metabolites they involve (e.g., alpha-D-ribose-5-phosphate (R5P) or D-erythrose-4-phosphate (E4P)) are only found in these reactions. This community also includes the biomass reaction due to the number of connections created by growth precursors.
* Communities C5, C6 and C7 are small communities that contain the oxygen intake, ammonium intake and acetaldehyde secretion reactions, respectively.

[Figure [fig:markov_stability_template_networks]: Markov Stability analysis of the RAG and the PFG. (a) Communities in the RAG. Top plot: variation of information (VI) of the best partition found at each Markov time with every other partition across time. Bottom plot: number of communities and VI of the ensemble of solutions found at each Markov time. A robust partition into seven communities is found. (b) Communities and VI in the PFG. A robust partition into five communities is found.]
A robust partition into five communities in the PFG was found at the selected Markov time (Fig. [fig:markov_stability_template_networks]b). The communities at this resolution (Fig. [fig:avsd_comms]c) are:
* Community C1 includes the first half of glycolysis and the complete pentose phosphate pathway. The metabolites that create the connections among these reactions include D-fructose, D-glucose, and D-ribulose.
* Community C2 contains the main reactions that produce ATP through substrate-level phosphorylation (PGK, PYK, ACKr) and oxidative phosphorylation (ATPS4r). The flow of metabolites among the reactions in this community includes some pool metabolites such as ATP, ADP, protons, and phosphate. However, there are also connections created by metabolites that appear in only a handful of reactions, such as adenosine monophosphate (AMP), whose sole producer is phosphoenolpyruvate synthase (PPS) and whose sole consumer is ATPS4r. This community also contains the biomass reaction.
* Community C3 includes the core of the citric acid (TCA) cycle, such as citrate synthase (CS) and aconitase A/B (ACONTa/b), and anaplerotic reactions such as malate synthase (MALS), malic enzyme NAD (ME1), and malic enzyme NADP (ME2). This community also includes the intake of cofactors such as CO2.
* Community C4 contains reactions that are secondary sources of carbon, such as malate and succinate, as well as oxidative phosphorylation reactions.
* Community C5 contains reactions that are part of the pyruvate metabolism subsystem, such as D-lactate dehydrogenase (LDH_D), pyruvate formate lyase (PFL) and acetaldehyde dehydrogenase (ACALD). In addition, it also includes the transport reactions for the most common secondary carbon metabolites such as lactate, formate, acetaldehyde and ethanol.

The graph for aerobic growth in D-glucose has 48 reactions with nonzero flux and 227 edges. At the selected Markov time (Fig. [fig:markov_stability_fba_networks]a) this graph has a partition into three communities (Fig. [fig:networksm]a):
* Community C1 comprises the intake of glucose and most of the glycolysis and pentose phosphate pathways. The function of the reactions in this community is carbon intake and the processing of glucose into phosphoenolpyruvate (PEP). This community produces essential biocomponents for the cell such as alpha-D-ribose-5-phosphate (R5P), D-erythrose-4-phosphate (E4P), D-fructose-6-phosphate (F6P), glyceraldehyde-3-phosphate (G3P) and 3-phospho-D-glycerate (3PG). Other reactions in the community produce ATP and provide reductive capability for catabolism.
* Community C2 contains the electron transport chain, which produces the majority of the energy of the cell. In the core _E. coli_ metabolic model the chain is represented by the reactions NADH dehydrogenase (NADH16), cytochrome oxidase bd (CYTBD) and ATP synthase (ATPS4r). This community also contains reactions associated with electron transport, such as phosphate intake (EX_pi(e), PIt2r), oxygen intake (EX_o2(e), O2t) and proton balance (EX_h(e)). It also includes the two reactions that represent energy maintenance costs (ATPM) and growth (biomass); this is consistent with the biological scenario because ATP is the main substrate for both ATPM and the biomass reaction.
* Community C3 contains the TCA cycle at its core. The reactions in this community convert PEP into ATP, NADH and NADPH. In contrast with C1, there is no precursor formation here.
Beyond the TCA cycle, pyruvate kinase (PYK), phosphoenolpyruvate carboxylase (PPC) and pyruvate dehydrogenase (PDH) appear in this community. These reactions highlight the two main carbon intake routes into the cycle: oxaloacetate from PEP through phosphoenolpyruvate carboxylase (PPC), and citrate from acetyl coenzyme A (acetyl-CoA) via citrate synthase (CS). Furthermore, both routes begin with PEP, so it is natural for them to belong to the same community along with the rest of the TCA cycle. Likewise, the production of L-glutamate from 2-oxoglutarate (AKG) by glutamate dehydrogenase (GLUDy) is strongly coupled to the TCA cycle.

For aerobic growth in ethanol, the communities are:
* Community C1 in this graph is similar to its counterpart in the glucose-fed aerobic graph, but with important differences. For example, the reactions in charge of glucose intake (EX_glc(e) and GLCpts) are no longer part of the network (i.e., they have zero flux), and reactions such as malic enzyme NADP (ME2) and phosphoenolpyruvate carboxykinase (PPCK), which now appear in the network, belong to this community. This change in the network reflects the cell's response to a new biological situation. The carbon intake through ethanol has changed the direction of glycolysis into gluconeogenesis (the reactions in C1 in Fig. [fig:networksm]a are now operating in the reverse direction in Fig. [fig:networksm]b). The main role of the reactions in this community is the production of bioprecursors such as PEP, pyruvate, 3-phospho-D-glycerate (3PG), glyceraldehyde-3-phosphate (G3P), D-fructose-6-phosphate (F6P), and D-glucose-6-phosphate, all of which are substrates for growth. Reactions ME2 and PPCK also belong to this community due to their production of pyruvate and PEP. Reactions that were in a different community in the glucose scenario, such as GLUDy and ICDHyr, which produce the precursors L-glutamate and NADPH respectively, are now part of C1. This community also includes the reactions that produce inorganic substrates of growth such as NH4+, CO2 and protons.
* Community C2 contains the electron transport chain and the bulk of ATP production, similar to its counterpart in the glucose scenario. However, there are subtle differences that reflect changes in this new scenario. Ethanol intake and transport reactions (EX_etoh(e) and ETOHt2r) appear in this community due to their influence on the proton balance of the cell. In addition, C2 contains NADP transhydrogenase (THD2), which is in charge of the NADH/NADPH balance. This reaction is present here due to the NAD consumption involved in the reactions ACALD and ethanol dehydrogenase (ALCD2x), which belong to this community as well.
* Community C3 contains most of the TCA cycle. The main difference between this community and its glucose-scenario counterpart is that here acetyl-CoA is obtained from acetaldehyde (which comes from ethanol) via the acetaldehyde dehydrogenase reaction (ACALD), instead of the classical route from glycolytic pyruvate. The glyoxylate cycle reactions isocitrate lyase (ICL) and malate synthase (MALS), which now appear in the network, also belong to this community. These reactions are tightly linked to the TCA cycle and appear when the carbon intake is acetate or ethanol, to prevent the loss of carbon as CO2.

For anaerobic growth in D-glucose, the communities are:
* Community C1 contains the reactions responsible for D-glucose intake (EX_glc(e)) and most of the glycolysis.
the reaction that represents the cellular maintenance energy cost, atp maintenance requirement (atpm), is included in this community because of the increased strength of its connection to the substrate-level phosphorylation reaction phosphoglycerate kinase (pgk). also note that the reactions in the pentose phosphate pathway do not belong to the same community as the glycolysis reactions (unlike in the two previous scenarios). * community c2 contains the conversion of pep into formate through the sequence of reactions pyk, pfl, forti and exfor(e). more than half of the carbon secreted by the cell becomes formate. * community c3 includes the biomass reaction and the reactions in charge of supplying it with substrates. these reactions include the pentose phosphate pathway (now detached from c1), which produces essential growth precursors such as alpha-d-ribose-5-phosphate (r5p) and d-erythrose-4-phosphate (e4p). the tca cycle is present as well because of its production of two growth precursors: oxaloacetate and nadph. finally, the reactions in charge of acetate production (ackr, act2r and exac(e)) are also members of this community through the ability of ackr to produce atp. the glutamate metabolism reaction gludy is also included in this community. it is worth mentioning that the reverse of atp synthase (atps4r) is present in this community because here, unlike in the previous scenarios, atps4r consumes atp instead of producing it. when this flux is reversed, atps4r is in part responsible for ph homeostasis. * community c4 includes the main reactions involved in nadh production and consumption; nadh production occurs via glyceraldehyde-3-phosphate dehydrogenase (gapd), and nadh consumption occurs in two consecutive steps of ethanol production, acald and alcd2x. the phosphate intake and transport reactions expi(e) and pit2r belong to this community because most of the phosphate consumption takes place at gapd. interestingly, the core reaction around which the community forms (gapd) is not itself a member of the community at this resolution. it is included at earlier markov times, but when communities become larger the role of gapd as part of glycolysis becomes more relevant than its role as a nadh hub. this is a good example of how the graph structure and the clustering method are able to capture two different roles of the same reaction. [figure: (a) a robust partition into three communities; (b) three communities; (c) four communities; (d) three communities.] * community c1 contains the glycolysis pathway (detached from the pentose phosphate pathway). this community is involved in precursor formation, atp production, substrate-level phosphorylation and the processing of d-glucose into pep. * community c2 contains the bioenergetic machinery of the cell; the main difference with respect to the previous scenarios is that the electron transport chain has a smaller role in atp production (atps4r), and substrate-level phosphorylation (pgk, pyk, sucoas, ackr) becomes more important.
here the electron transport chain is responsible for 21.8% of the total atp produced in the cell, compared with 66.5% in the earlier scenario. the reactions in charge of the intake and transport of inorganic ions, such as phosphate (expi(e) and pit2r), oxygen (exo2(e) and o2t) and protons (exh(e)), belong to this community as well. this community includes the reactions in the pentose phosphate pathway that produce precursors for growth: transketolase (tkt2) produces e4p, and ribose-5-phosphate isomerase (rpi) produces r5p. * community c3 is the community that differs the most from those in the other aerobic growth networks. this community gathers reactions that under normal circumstances would not be so strongly related, but which the limited availability of ammonium and phosphate has forced together; its members include reactions from the tca cycle, the pentose phosphate pathway, nitrogen metabolism and by-product secretion. the core feature of the community is carbon secretion as formate and acetate. the reactions ppc, malate dehydrogenase (mdh, in reverse) and me2 channel most of the carbon to the secretion routes in the form of formate and acetate. the production of l-glutamine seems to be attached to this subsystem through the production of nadph by me2 and its consumption by the nadp-dependent glutamate dehydrogenase (gludy).
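the flux graphs analysed above are assembled from a stoichiometric matrix and a flux vector. the following numpy sketch shows one plausible way to build such a reaction-to-reaction weight matrix; the normalisation used here (splitting the production of each metabolite among its consumers in proportion to their consumption) is an illustrative choice, and the function and variable names are not taken from the original model.

```python
import numpy as np

def flux_graph(S, v, eps=1e-12):
    """Reaction-to-reaction flux weights from stoichiometry S and fluxes v.

    S : (n_metabolites, n_reactions) stoichiometric matrix
    v : (n_reactions,) flux vector; reversible reactions are assumed to be
        split beforehand so that every entry of v is non-negative
    """
    P = np.maximum(S * v, 0.0)    # production of metabolite i by reaction j
    C = np.maximum(-S * v, 0.0)   # consumption of metabolite i by reaction j
    n_rxn = S.shape[1]
    M = np.zeros((n_rxn, n_rxn))
    for i in range(S.shape[0]):
        total = P[i].sum()        # total production of metabolite i
        if total > eps:
            # route the flux of metabolite i from its producers to its
            # consumers, proportionally to how much each consumer uses
            M += np.outer(P[i], C[i]) / total
    return M                      # M[j, k]: mass flow from reaction j to k
```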
cells adapt their metabolic fluxes in response to changes in the environment . we present a systematic flux - based framework for the construction of graphs to represent organism - wide metabolic networks . our graphs encode the directionality of metabolic fluxes via links that represent the flow of metabolites from source to target reactions . the methodology can be applied in the absence of a specific biological context by modelling fluxes as probabilities , or tailored to different environmental conditions by incorporating flux distributions computed from constraint - based modelling such as flux balance analysis . we illustrate our approach on the central carbon metabolism of _ escherichia coli _ and study the derived graphs under various growth conditions . the results reveal drastic changes in the topological and community structure of the metabolic graphs , which capture the re - routing of metabolic fluxes under each growth condition . by integrating constraint - based models and tools from network science , our framework allows for the interrogation of environment - specific metabolic responses beyond fixed , standard pathway descriptions .
ice crystals with a variety of shape , size and mass are present in clouds .the properties of these crystals are markedly dependent on the temperature and other properties of the atmosphere .a classification of ice crystals with a description of crystal shapes , size and mass can be found in .certain atmospheric and cloud behaviors are characterized by parameters related to the ice particle dynamics for different shapes and sizes .a precise estimation of ice crystals falling velocity is required to quantitatively determine their evolution in the atmosphere .the knowledge of the falling velocity is necessary for the simulation of ice water paths and for the determination of cloud boundaries .also it is used for the study of microphysical process in clouds and for climate modeling . the need for more accurate theoretical models to simulate cloudshas increased the requirement of more detailed measurements of relationships between the settling velocity , masses , and dimensions for a large spectrum of ice crystal types .a precise determination of these relationships allow us to obtain accurate parameterizations of the settling velocity of cloud particles .these parametrizations are essential to have an accurate simulation of cloud in general circulation models ( gcms ) of precipitation amount , cloud dissipation and cloud optical properties .although there have been many proposals in literature , the ice crystal sedimentation in the atmosphere has not been completely characterized .there are analytical solutions that precisely determine the sedimentation falling velocity for spheres particles . due to the large variety of shapes , sizes and masses of ice crystals , and the range of reynolds numbers involved in these problems , there is no precise analytical estimation to predict the falling velocity for shapes other than spheres .many works in literature provide schemes to parameterize the ice crystal masses , shapes and size to predict the settling velocity .ice particle terminal velocities are often calculated theoretically or experimentally by determining a relationship between the reynolds number ( ) , and the best ( or davis ) number , ( ) .there are a number of experimental works in which the most important variables are measured .terminal velocity , mass and size have been measured for various ice particle types .these datasets are obtained from laboratory measurement and observations of real ice particles falling through the atmosphere .some well known experimental datasets can be found in .the proposals by have shown quite good approximations to the falling speed of ice particles for .these proposals proved to be in good agreement with experimental data for a variety of particle types .however showed that for viscous flow regimes ( ) these formulations overestimate the crystal falling velocity . , using the approximation of with results from , give an estimate for the sedimentation rate of small ice crystals whose maximum dimension is smaller than m .this estimate for columnar ice crystals is in agreement with most experimental data ( within 20% ) . 
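as a concrete illustration of the reynolds-number/best-number route to a terminal velocity mentioned above, the sketch below evaluates the best number of a particle and converts it into a fall speed; the stokes-regime relation re = x/24 is exact only for spheres, and power-law fits re = a x^b with shape-dependent coefficients play the same role for ice crystals, so all numbers here are purely illustrative.

```python
import numpy as np

# illustrative values for a small ice particle in a cold atmosphere
m   = 1.0e-10               # particle mass [kg]
D   = 100e-6                # maximum dimension [m]
A   = np.pi * (D / 2)**2    # projected area [m^2] (sphere-like assumption)
rho = 0.8                   # air density [kg/m^3]
nu  = 1.2e-5                # air kinematic viscosity [m^2/s]
g   = 9.81

# best (davis) number: X = C_D * Re^2, independent of the unknown velocity
X = 2.0 * m * g * D**2 / (rho * A * nu**2)

# viscous regime: for a sphere C_D = 24/Re, hence Re = X/24;
# for crystals one would use a fitted relation Re = a * X**b instead
Re = X / 24.0
v_t = Re * nu / D           # terminal velocity [m/s]
print(f"X = {X:.1f}, Re = {Re:.3f}, v_t = {100 * v_t:.2f} cm/s")
```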
a complete review of the main theoretical approximation that have been proposed and many experimental results can be found in ; also a lots of relevant references are presented in these works .the sedimentation of an ice crystal in the atmosphere is a fluid mechanical problem that can be modeled as a rigid body moving immersed in a fluid flow .this rigid body moves under the action of its own weight , buoyancy force and interacting with others crystals and with the fluid that surrounds it . given a characterization for the shape , size and mass density of the crystals , together with its atmospheric habitat, it is possible to completely determine the dynamical behavior of the crystals by using some adequate computational fluid dynamic ( cfd ) method .having an accurate numerical method to solve these problems allows us in turn to test and to improve the parametrization laws , and to compute the settling velocities for sizes , shapes and masses for which experimental data are not available .also , the sensitivity of the problem to different parameters can be studied numerically . to the knowledge of the author ,there are no numerical results that describe appropriately the dynamics of ice crystals in the atmosphere .the lbm is a cfd method that proved to be successful to treat multiple problems involving both compressible and incompressible flows on simple and complex geometrical settings .in particular the lbm provide a simple way for treating accurately the flow surrounding an immersed body , in arbitrary movement , with no regular geometry .for a complete modern review of this topic see .the behavior of particles in sedimentation have been analyzed using lbm in a variety of problems . in this paperwe use lbm to study the dynamical behavior of ice crystals with two different shapes : simple columns and six bullet policrystal ( a combination of columns or bullets ) .in particular , the ice crystal settling velocity is obtained numerically for both shapes in a range of masses and sizes ( characteristic lengths of ) .the lbm results for the fluid mechanical problems are obtained in a pure viscous regime ( ) .this is the flow regime of the smallest particles falling in a cloud .the accuracy in the lbm to treat this problem is evaluated by comparison with some well known experimental data in literature , and with theoretical proposal from and . in order to prove the correctness and accuracy of the proposed lbm algorithm , two fluid mechanical problems at low reynolds numberwere tested .the idea of these benchmarks is to test our algorithm in translational and rotational problems , within the regime that include the ice crystal sedimentation problems .the paper is organized as follows : in section [ sec : lbm ] we present the basic equations of the lbm , introduce notation and some details about the boundary conditions methods , force evaluation , and grid refinement techniques . in section [ sec : lbm_benchmarks ] we evaluate the correctness of the proposed lbm algorithm to solve two benchmarks , _ sphere sedimentation in a square duct _( section [ subsec : spheresedimentation ] ) and _ pure rotation of rigid bodies in couette flow _ ( section [ subsec : couetteflow ] ) , well known in the literature . 
in section [ sec : iceparticlesedimentation ] the sedimentation of ice crystals in the atmosphere is solved using lbm .numerical results for _ columnar ice crystals _ and _ six bullet - rosette ice policrystals _ are showed in sections [ subsec : columnariceparticles ] and [ subsec : rosetteiceparticles ] respectively . in section [ sec : conclusionanddiscussion ] conclusions and discussions are presented .in this section we present the basic equations of the lbm , introduce notation and the main concepts we use along the paper .in addition to the lattice boltzmann equation that govern the physics of the bulk fluid ; one needs to prescribe a method to apply boundary conditions , to evaluate the fluid force on a body and to implement grid refinement where necessary . in the next sections we briefly review these topics .the numerical results in this paper are obtained by solving the lattice boltzmann equation ( _ lbe _ ) , a particular phase - space and temporal discretization of the boltzmann equation ( _ be _ ) . the be governs the time evolution of the single - particle distribution function where and are the position and velocity of the particle in phase space .the lbe is a discretized version of the be , where takes values on a uniform grid ( the lattice ) , and is not only discretized , but also restricted to a finite number ( the number of discrete velocities in the model ) of values . in an isothermal situation and in the absence of external forces , like gravity, the lbe can be written as : here is the -th component of the discretized distribution function at the lattice site time and discrete velocity .the function is a discrete version of the equilibrium distribution function and is a linearized collision operator . in our simulations we use a simple relaxation time model ( _ srt _ ) , a simplified approximation of that follows from the bhatnagar , gross , and krook ( _ bgk _ ) approximation with relaxation time . in compressible - flow modelsthe lattice constant that separate two nearest neighbor nodes , and the time step are related with the speed of sound by . in incompressible - flow models ,the same relation between and holds , but the constant is no longer related to the speed of sound .the coordinates of a lattice node are denoted by , where the integer multi index ( or , in the two - dimensional case ) denotes a particular site in the lattice .the macroscopic quantities such as the fluid mass density and velocity , are obtained , in boltzmann theory , as marginal distributions of and when integrating over . in lbmthis integrals are approximated by proper quadratures .specific values of s and s , are made so that these quadratures give exact results for the -moments of order 0 , 1 and 2 .we have and in the simulations we present in this paper , we are interested in incompressible flow problems , where we modify eq .[ eq : lbmmacroscopics_1 ] according to the quasi - incompressible approximation presented in . 
in this approximation replaced by , a constant fluid mass density .a single time step of the discrete evolution equation [ eq : lbe ] is frequently written as a two - stage process and the computation of on the whole lattice , eq .[ eq : collision ] , is called the _ collision step _ , while the computation of at , eq .[ eq : streaming ] , on the whole lattice is called _ streaming step ._ we refer to the lattice boltzmann models with the standard notation , where is the number of space dimensions of the problem , and is the number of discrete velocities .also we add an at the end to indicate that we use a quasi - incompressible approximation , .the problems we are interested in are those in which rigid bodies move inside an unbounded fluid domain . because the impossibility to model an infinite fluid domain, we have to restrict the problem to a finite computational fluid domain .the size of the computational fluid domain has to be a compromise between minimizing the computational work the smaller the size the better , and minimizing the undesirable effect of the boundary conditions the larger the domain the better . the computational fluid domain is a block of fluid bounded by regular borders .the rigid bodies that move inside the domain are described by geometries as required .the flow in the interior of the domain is computed by solving the lbe .close to the boundaries a special treatment is used so that the flow obeys the physical boundary conditions . in the present work ,we use both dirichlet and outflow open - boundary conditions ( convective boundary conditions or sommerfeld like ) . the correct imposition of the boundary conditions on arbitrary boundary geometries , like the boundary of rigid bodies , has been one of the main issues in lbm . to impose dirichlet boundary conditions , for the velocity or pressure , on a regular boundary which is coincident with the grid ,we use the method proposed in .these regular boundaries are adequately represented by linked lattice nodes . to impose dirichlet conditions for the velocity on boundaries of arbitrary shape we use the method proposed by .we also use the method shown in and obtain no significant differences in the results . in order to reduce the length of the numerical flow domainwe apply a convective , or sommerfeld like , boundary conditions on the open boundaries of the domain .these boundary conditions allow us to model a long or quasi - infinite physical domain with a reduced computational domain .these type of conditions have been extensively applied in computational fluid mechanics .we make a particular implementation of convective boundary condition in the lbm context .more details about our implementation are given in section [ subsec : lbm_cbc ] .the outflow open - boundary conditions allow us to represent a long or quasi - infinite physical domain by a finite computational domain . 
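a minimal numpy sketch of the two-stage update described above (collision followed by streaming) for a d2q9 srt model is given below; the boundary and refinement treatments discussed in the following paragraphs are omitted, and the quasi-incompressible variant simply replaces the local density by a constant in the velocity moment.

```python
import numpy as np

# d2q9 lattice: discrete velocities c_i and weights w_i
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    # f_i^eq = w_i * rho * [1 + 3 c.u + 4.5 (c.u)^2 - 1.5 u.u]
    cu = np.einsum('id,xyd->xyi', c, u)
    uu = np.einsum('xyd,xyd->xy', u, u)
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*uu[..., None])

def lbm_step(f, tau):
    rho = f.sum(axis=-1)                                  # mass density
    u = np.einsum('xyi,id->xyd', f, c) / rho[..., None]   # velocity moment
    f += -(f - equilibrium(rho, u)) / tau                 # collision (bgk/srt)
    for i, ci in enumerate(c):                            # streaming
        f[..., i] = np.roll(f[..., i], shift=tuple(ci), axis=(0, 1))
    return f
```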
in many fluid mechanical problems on a large , quasi - infinite fluid domain ,the region in which the velocity gradients , viscosity and inertial forces are significant is rather small .it is a challenge to adequately represent such a fluid mechanical problem on a bounded , as small as possible , computational domain in such a way that the interesting physics is captured accurately and at the same time the computational cost is reduced significantly .strictly speaking , one needs to apply convenient boundary conditions at the finitely close computational boundaries so that these conditions do not affect the physical behavior of the flow in the interior or ruin the intrinsic lbm accuracy .there are different approaches in the literature to treat the outflow open - boundary conditions in the lbm context .we can divide these approaches in at least two categories , the ones based on mesoscopic variables and the ones based on macroscopic variables .the latter are generally extensions of boundary conditions extensively applied in classical methods ( finite difference ( _ fd _ ) , finite volume ( _ fv _ ) and finite element ( _ fe _ ) methods ) of computational fluid mechanics ( _ cfd _ ) to solve the navier - stokes ( _ ns _ ) equations .we are mainly interested in non - stationary quasi - incompressible problems . in the lbm contextthe convective boundary condition ( cbc ) proposed in to treat outflow open - boundaries has shown acceptable results in these kind of problems .also , in it is shown that cbc gives better results than neumann boundary conditions ( _ nbc _ ) when using fv methods to solve the incompressible ns equation .the nbc are based on macroscopic variables , and were also tested in lbm .the results presented in show that cbc are a better option than nbc in non - stationary problems .these works show that nbc introduce undesirable perturbations in the fluid domain , specially in non - stationary problems . in our numerical tests we use cbc to fix the velocity in the outflow open - boundaries .to complete the cbc method we use some dirichlet boundary condition method to fix the unknown mesoscopic ( ) variables that have to satisfy the macroscopic cbc .the cbc is defined as : \ ] ] where is a velocity or pressure function , and is a reference velocity that have to be defined adequately . is the boundary of the fluid domain , and is the outward normal vector in . in our numerical implementation cbc are applied on straight , grid - coincident boundaries only . to show the implementation of the cbc , we consider as an example a boundary with normal vector .given the lbe solution at time [ eq : cbcgeneral ] gives , for where we have approximated the space derivative by a second order accurate , backward difference operator and the time derivative by a first order accurate , forward difference operator .from equation [ eq : u_incbcborder ] the velocity that satisfy the cbc can be determined .knowing the boundary velocity at some dirichlet boundary condition can be used to determine the corresponding mesoscopic variables .we implement the cbc method coupled with the proposal of to impose the boundary velocity determined from equation [ eq : u_incbcborder ] .equation [ eq : u_incbcborder ] is incomplete until we provide a prescription for the value of .various criteria can be found in the literature to do this . sets this quantity equal to the mean velocity on the outflow open convective boundary . 
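independently of how the reference velocity is chosen (several criteria are discussed in this paragraph), the discretized update implied by eq. [eq:u_incbcborder] can be sketched as follows for a boundary whose outward normal points along +x; here the reference velocity is simply frozen to a constant upstream value, and the resulting boundary velocity is then imposed through one of the dirichlet schemes mentioned above. the names are illustrative.

```python
def convective_outflow(u, u_ref, dt, dx):
    """Update the boundary velocity with du/dt + u_ref * du/dx = 0.

    u : (nx, ny, 2) velocity field; the outflow boundary is the last x-slice
    """
    # second-order backward difference for du/dx at the boundary nodes
    dudx = (3.0 * u[-1] - 4.0 * u[-2] + u[-3]) / (2.0 * dx)
    # first-order forward (explicit euler) step in time
    u_boundary = u[-1] - u_ref * dt * dudx
    return u_boundary   # to be imposed through a dirichlet velocity scheme
```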
sets equal to the upstream free velocity with some additional conditions to ensure mass conservation in the fluid domain .mass conservation problems are also observed in the case of .in our implementation we set as a constant : the upstream mean velocity .we have observed , by performing typical benchmark tests , that mass is acceptably conserved by the resulting scheme .it is of crucial importance , in many applications that involve moving bodies surrounded by a fluid flow , to have a good method or algorithm to compute the flow force and torque acting on the bodies . by good we mean a method that is simple to apply , that is accurate and fast , so as not to spoil the efficiency of the flow computing method .the accuracy in the determination of the force and torque acting on a moving body directly affects the body s movement . for a review of lbm methods that involve flow force evaluation on suspended particles we refer to section 6 of and references therein . the classical way to compute forces , andso torque , on submerged bodies is via the computation and integration of the stress tensor on the surface of the body . in lbmthe stress tensor is a local variable , its computation and extrapolation from the lattice to the surface is computationally expensive , which ruins the efficiency of the lbm .however , this method is widely used in lbm . an standard method to evaluate forces on submerged bodies in lbm is the _ momentum exchange _ ( _ me _ ) , introduced firstly by in lbm applications . the me algorithm is specifically designed and adapted to lbm; it is therefore more efficient than stress integration from the computational point of view .some improvements to ladd proposal have been introduced in , and different approaches to improve the methods in problems with moving bodies were made . in this work ,force and torque are evaluated by using the methods presented in .the motion of each body is determined by solving the newton s equations of motion with rotations formulated using quaternions .the forces acting over bodies are given by the fluid flow forces , weight and buoyancy forces .to integrate in time we use euler forward numerical scheme , which is first order accurate as the lbm method itself .many problems in fluid mechanics are such that big gradients of the fluid variables appear only in regions which are small as compared with the whole computational domain .to resolve well the space variations of the fluid variables one needs a grid size which is small enough . in lbma simple lattice is a cartesian grid of equispaced nodes .the distance between two nearest neighbor nodes , the grid size , is for a real problem , the computational domain is covered by an arrangement of grids. this arrangement can be as simple as a unique lattice or block grid with a single size , or a complex arrangement of grids with different grid sizes . in a problem with more or less uniform space variations all over , a single block grid that covers the whole computational domain may be suitable . in a problem where high space variations occur in a small region , a small grid size needs to be used in that region . but using this small grid size on the whole computational domain would be a waste of computational effort .the right thing to do is to use an arrangement of grids with different grid sizes .the methods to integrate various grid blocks with different grid sizes into a single computational domain are known as grid refinement methods . 
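before describing the grid refinement strategies, the rigid-body update outlined above can be sketched as an euler-forward step for the translational state plus a quaternion attitude update for the rotation; the force and the body-frame angular velocity are assumed to have been obtained already (e.g. by the momentum-exchange method), and all names here are illustrative.

```python
import numpy as np

def quat_mult(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def body_step(x, v, q, omega_body, force, mass, dt):
    # translational euler-forward step (force = fluid + weight + buoyancy)
    v_new = v + dt * force / mass
    x_new = x + dt * v
    # attitude kinematics: dq/dt = 0.5 * q (x) [0, omega_body]
    dq = 0.5 * quat_mult(q, np.concatenate(([0.0], omega_body)))
    q_new = q + dt * dq
    q_new /= np.linalg.norm(q_new)   # keep the quaternion normalized
    # the angular velocity update from the torque (through the body-frame
    # inertia tensor) proceeds analogously and is omitted here
    return x_new, v_new, q_new
```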
in lbmthere are at least two grid refinement methods : _ multi - grid method _ ( mg ) and _ multi - domain method _( md ) ( or multi - block ) . in the mg method , a grid block with small grid size is always superimposed to a grid block with larger grid size .several layers of grids can be superimposed in this way . in md methodthe grids with different grid sizes overlap just in a selected set of lattice nodes .this overlapping occurs only on a small region with two adjacent grid blocks of different grid sizes , see . in this workwe use md methods .we select this method because it has better numerical performance and lesser memory requirement than mg method .a disadvantage of the md method , though , is that its implementation is more complex than that of mg where some additional grids are used as interface to interchange data between different levels of grid size .we use a priori refinement method .this means that we chose the arrangement of refined grids in the domain before solving the fluid problem , and this arrangement is fixed in time .this last characteristic becomes the main numerical performance limitation of our method . in non - stationary fluid problems the smallest scales to solve andits position within the fluid domain are not easy to predict .a priori refinement methods are not the most efficient option in non - stationary fluid problems , one frequently needs to implement _ adaptive refinement methods _ .this adaptive methods are able to change the refined grids arrangement at run time .in this section we show lbm results for two fluid mechanical problems well known in the literature . our purpose is to test the correctness of the proposed lbm algorithm to solve problems at low reynolds numbers as those in ice crystals sedimentation .the rigid body equations of movement are independently solved for translations and rotations , they are only coupled through the fluid forces .therefore we chose two particular problems , with well known results , to test pure translations and pure rotations .we are interested in the problem of an sphere sedimenting inside a square section duct full of a viscous fluid .we are particularly interested in the determination of the sphere s terminal falling velocity in figure [ fig : schematicfallingsphereandresults] we show a scheme of the problem configuration .we compute for different values of the relation between the sphere s diameter and the length of the square cross section .we are looking for the relation , as a function of , where is the sphere s terminal falling velocity in an unbounded fluid domain .the ratio is commonly named _wall correction factor_. the terminal falling velocity is the stationary velocity that a body ( sphere ) reaches in a sedimentation process ; the body and viscous forces are equal in magnitude in this stationary regime . is obtained from a stokes flow regime defined by with , the sphere s material density , the fluid s density and the fluid s kinematic viscosity .the value of given by equation [ eq : terminalvelocity ] is an adequate estimation if the fluid flow is consistent with stoke problem hypothesis .the sphere sedimenting in a square section duct is a problem analyzed in the literature . numerically determine the wall correction factor using the lbm , and experimentally analyze this problem . evaluate the wall correction factor for different configurations and compare their results with the experimental results presented in . 
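for orientation, the unbounded-domain reference velocity of eq. [eq:terminalvelocity] and the corresponding wall correction factor can be evaluated as in the sketch below; the numerical values are illustrative, and the factor is reported here as the ratio between the in-duct terminal velocity and the unbounded stokes velocity (some references use the inverse convention).

```python
# stokes terminal velocity of a sphere in an unbounded fluid:
#   u_inf = (rho_s - rho_f) * g * d**2 / (18 * mu)
rho_s = 1200.0   # sphere density [kg/m^3]
rho_f = 1000.0   # fluid density [kg/m^3]
mu    = 0.1      # dynamic viscosity [Pa s] (viscous liquid, stokes regime)
d     = 2.0e-3   # sphere diameter [m]
g     = 9.81

u_inf = (rho_s - rho_f) * g * d**2 / (18.0 * mu)

# wall correction factor from a simulated (or measured) in-duct velocity
u_duct = 0.8 * u_inf          # placeholder value for illustration
K = u_duct / u_inf
print(f"u_inf = {u_inf:.4f} m/s, wall correction factor = {K:.2f}")
```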
use a quasi - incompressible lbm and some particular boundary conditions and forces evaluation method . experimentally study the wall interference effects over the sphere terminal velocity .they test the wall correction factor for different sphere diameters and duct cross section configuration . as a main resultthey show a curve fitting to the obtained experimental data in those problems .we will use this fitting equation to compare with our lbm results . in figure [fig : schematicfallingsphereandresults] we show an schematic arrangement of grids refinement and the sphere placed in its initial position at .the refinement region is placed from to , a bounded fluid region where will take place the rigid body displacement .we use a computational domain of length to represent a quasi - infinite physical one .we made several numerical test for different domain configurations .we look for the best combination of length domains ( , , and ) and sphere placement ( ) that produce acceptable results and minimize the computational cost .we refer as acceptable results , those results uninfluenced by the computational domain length . from the numerical test we found that acceptable results require . in figure[ fig : schematicfallingsphereandresults] we show the obtained lbm numerical results along with the polynomial fitting from the experimental data presented in .we use in our computations m and .the refinement region with length is placed at from the bottom wall .the results showed in figure [ fig : schematicfallingsphereandresults] are obtained using the lattice boltzmann model with a domain discretization of for the fine grid .dirichlet boundary conditions are set over all the domain s walls . in the horizontal top wall ( see figure [ fig : schematicfallingsphereandresults] ) the boundary velocity is set according to the convective boundary condition presented in section [ subsec : lbm_cbc ] . over the sphere surface we set the velocity boundary condition using methods for non regular boundaries as presented in .we evaluate the wall correction factor for eight relations ranging from to , with diameters mm to mm respectively . the sphere is initially placed in the cross sectional center at from the bottom wall domain . as initial condition we set an equal and homogeneous velocity in the fluid flow domain .the numerical tests consist in releasing a sphere in a particular way , as explained below , from its initial position with velocity at .the initial velocity is approximately equal to and opposite in direction to the terminal falling velocity of the rigid body inside the square duct .there are at least two main reasons to analyze the problem in a constant velocity frame . on the one hand, being the rigid body at an approximately constant position , relative to the computational fluid domain , reduces the region of refinement and simplifies the refinement algorithm , since we can keep the refinement region fixed ( no dynamical refinement process is necessary ) . 
on the other hand, the constant-velocity fluid domain with adequate boundary conditions allows us to reduce the length of the domain. the rigid body is roughly static in the computational fluid domain, and we can fix the minimal appropriate relation so that the rigid body stays in the refined region. we apply a particular initialization procedure so that the body stays in a prefixed gap (the refined region). we initially impose a fictitious external force during some time interval so that the relative velocity between the rigid body and the fluid domain is approximately zero. after this time interval the rigid body is left to move freely in the fluid domain. because of the application of the artificial initial force there is a fictitious transient in which we are not interested. once the sphere reaches the stationary regime (after this transient), the terminal falling velocity is taken to be the sphere's stationary falling velocity. [figure: (a) scheme of the refined region, of given length, starting at a given height measured from the bottom domain wall; (b) wall correction factor vs. the sphere-diameter-to-duct-width ratio for a sphere falling inside a square cross-section duct; the continuous line is the polynomial fit of experimental data from .] the reynolds number in our tests is in the range , while the ranges in and are and , respectively. the results we have obtained, presented in figure [fig:schematicfallingsphereandresults], have an acceptable correlation with those experimentally obtained in . our results are close to the experimentally adjusted curve in the whole range. the largest differences between the experimental and lbm results are observed for , which is particularly noticeable for spheres of small radii. the reason for these differences may be that the discretization is not fine enough, since the domain discretization is kept constant for all relations. our numerical results show a better approximation to the experimental results than those presented in for the same problem. the pure-translation assumption is justified by the angular displacement results. if flow instabilities are present, as is typical in high-reynolds-number problems, this statement is no longer true. in this section we want to test our lbm algorithm for describing the pure rotation of rigid bodies immersed in a fluid flow. in particular, two rigid bodies (2d and 3d) immersed in a couette flow are analyzed. our main interest is to benchmark our lbm results against those presented by and . we test two rigid body configurations: an ellipse and an ellipsoid, as shown in the problem scheme in figure [fig:esquemaellipsoid]. the rigid body geometry is defined by: where , and are the three principal semi-axis lengths of the rigid body, and are the cartesian coordinates in a frame fixed to the body. the ellipse description from equation [eq:ellipsoidalequation] is obtained for . the rigid body spatial orientation is defined by the euler angles and , following the rotational order . [figure: scheme of a freely rotating ellipsoid in a couette flow, where the fluid domain is a rectangular prism; and scheme of a freely rotating ellipse in a couette flow in a rectangular domain.]
the fluid domain in the ellipsoidal test is a rectangular prism with , and side lengths. the ellipse test has a rectangular fluid domain with and side lengths, as shown in figure [fig:esquemaellipsoid]. the rigid body and fluid domain centroids are initially coincident. the fluid and rigid body mass densities are set equal, with no resultant buoyancy force. on two parallel walls or edges we impose a uniform velocity, as shown in figure [fig:esquemaellipsoid]. on all the remaining walls or edges, periodic boundary conditions are applied. the dirichlet velocity boundary conditions on non-regular geometries are imposed with the method for boundaries of arbitrary shape used earlier. the rigid body dynamics in a pure rotational problem is strongly influenced by the problem reynolds number, defined as: where is the shear rate and is the major rigid body semi-axis. some caution should be taken when these results are compared with those in the literature, because there are different definitions of . the dynamics of a rigid body immersed in a couette flow has, as shown by jeffery, an analytical solution in the limit . the rigid body rotational velocity about the axis is determined from by: where the rotational angle is: this solution, valid for both the ellipse and the ellipsoid rotational problems in the limit , helps us validate our lbm numerical results, which we present in the next sections. in this test case we analyze the rotational behavior of an ellipse immersed in a couette flow. in figure [fig:esquemaellipsoid] we show a schematic configuration of the ellipse and the fluid domain we use. we test the problem at two reynolds numbers, and , and evaluate the convergence of the results to the problem's analytical solution. the obtained results are presented in figure [fig:resellipseellipsoid], along with the jeffery analytical solution (equation [eq:angvelanaliticajeffery]). [figure: rotational velocity in fractions of the shear rate at low reynolds numbers; ellipse and ellipsoid rotational angles as a function of time at the two reynolds numbers, compared with the analytical solution; the problem configuration is shown schematically in figure [fig:esquemaellipsoid].] the presented results are obtained with parameters m and m, and a rigid body geometry defined by m, and . we impose a dirichlet velocity boundary condition at the walls, where we set m/s, as shown in figure [fig:esquemaellipsoid]. to achieve the desired reynolds number we choose an appropriate kinematic viscosity.
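for reference, jeffery's solution used for comparison in figure [fig:resellipseellipsoid] can be evaluated directly; the sketch below writes it for an ellipse of semi-axes a >= b tumbling in a shear flow of rate gdot, with phi measured from the flow direction, which is one common convention (signs and angle origins differ between references).

```python
import numpy as np

def jeffery_rate(phi, a, b, gdot):
    # angular velocity of the major axis in the zero-reynolds-number limit:
    # dphi/dt = gdot/(r**2 + 1) * (cos(phi)**2 + r**2 * sin(phi)**2), r = a/b
    r = a / b
    return gdot / (r**2 + 1.0) * (np.cos(phi)**2 + r**2 * np.sin(phi)**2)

def jeffery_period(a, b, gdot):
    # full rotation period: T = 2*pi*(a/b + b/a)/gdot
    return 2.0 * np.pi * (a / b + b / a) / gdot

# example: an ellipse with aspect ratio 2 rotates slowest when aligned
# with the flow (phi = 0) and fastest when perpendicular to it
print(jeffery_rate(0.0, 2.0, 1.0, 1.0), jeffery_rate(np.pi / 2, 2.0, 1.0, 1.0))
print(jeffery_period(2.0, 1.0, 1.0))
```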
for the test at we set and a domain discretization of lattice grid points ; while for the test at we set with a domain discretization of .the fluid flow problems are solved using a lbm with simple relaxation time ( srt ) .the results presented in figure [ fig : resellipseellipsoid] show convergence to the analytical solution when .it is possible to observe , from figure [ fig : resellipseellipsoid] , a highly sensitive dynamical rigid body behavior when the number increase . for the results show considerable changes with respect to the results at .in particular there is an increase in the rotational period and a decrease in the lower rotational velocity for in comparison with those at . in both problemswe observe a periodic dynamical behavior as expected .the obtained results for the rotational behavior of an ellipse in couette flow are thus in agreement with those presented in which shows the correctness of our algorithm to solve 2d pure rotational problems .we study here the rotational behavior of an ellipsoid immersed in a couette flow , and compare the results with the analytical solution of .figure [ fig : esquemaellipsoid] shows a scheme of the geometry and fluid domain configuration for this problem .our lbm results are showed in figure [ fig : resellipseellipsoid] , obtained for reynolds numbers and , along with the jeffery s analytical solution ( see equation [ eq : angvelanaliticajeffery ] ) .the parameters in our computations are m ( see figure [ fig : esquemaellipsoid] ) .the ellipsoidal geometry is defined in equation [ eq : ellipsoidalequation ] with m , , and the initial rigid body orientation is .the fluid domain is discretized by grid lattice points .we impose a constant velocity in the walls parallel to planes and as we shown in figure [ fig : esquemaellipsoid] .the kinematic viscosity is set to get the desired reynolds numbers , and we use the simple relaxation time lattice boltzmann model .we also test the problem using a lattice boltzmann model and we do not find appreciable differences with the results obtained using . from the results in figure [ fig : resellipseellipsoid] we can see that the lbm results converge to the analytical solution when .our results for the rotational behavior of an ellipsoid in couette flow agree with those presented in ; which shows the correctness of our algorithm to solve 3d pure rotational problems .in this section we study the main problem of this paper ; we solve and analyze the dynamics of ice crystals sedimentation in the atmosphere .in particular , we are interested in evaluate the ice crystal settling velocity by using lbm for different ice crystal shapes in a range of size and mass .the sedimentation of an ice crystal in the atmosphere is a fluid mechanical problem we model as follows .the crystal is considered a rigid body that moves under the action of its own weight , the buoyancy force and interacting only with the fluid that surrounds it .a simplifying assumption is adopted : no interactions between rigid bodies is considered .we are only interested in the dynamics of isolated rigid bodies in the atmosphere .this assumption is a good approximation to the movement of ice crystals in a cloud , since the concentration of ice particles in cirrus typically ranges between 50 and 500 , while the maximum ice particle concentrations in cumulonimbus clouds reached 300 .the dynamics of ice crystals of two shapes is analyzed , simple columns and six bullet - rosette policrystals . 
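as a small worked example of the quantities that define each sedimentation case, the sketch below computes the mass and net gravitational force on a hexagonal ice column from its length and cross-section size; the values are illustrative and the regular-hexagon cross section is an idealisation of the crystal habit.

```python
import numpy as np

rho_ice = 917.0   # ice density [kg/m^3]
rho_air = 0.8     # air density [kg/m^3] (cold, high-altitude value)
g = 9.81

def hex_column(L, a):
    """Mass and net weight of a hexagonal column.

    L : column length [m]
    a : side (circumradius) of the regular hexagonal cross section [m]
    """
    area = 3.0 * np.sqrt(3.0) / 2.0 * a**2        # regular hexagon area
    volume = area * L
    mass = rho_ice * volume
    net_force = (rho_ice - rho_air) * volume * g  # weight minus buoyancy
    return mass, net_force

m, F = hex_column(L=100e-6, a=20e-6)
print(f"mass = {m:.2e} kg, net force = {F:.2e} N")
```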
in both cases the settling velocity is determined for several characteristic lengths. the obtained lbm results are compared with well-known experimental data from the literature, as well as with the theoretical proposals from and . for clarity, in the next sections we present separately the lbm results obtained for the two geometrical configurations. in this section we present the settling velocity results for the sedimentation of columnar ice crystals in the atmosphere obtained by using lbm. we compare these results with those obtained by experimental and theoretical methods well known in the literature. columnar ice crystals with quasi-hexagonal cross section and needle ice crystals, typically grown at temperatures between and , have also been observed below . in our simulations the ice crystals are modeled by columns of hexagonal cross section, and their sedimentation is studied in fluid flow regimes with . this is approximately the flow regime in which the smallest particles fall in a cloud. we adopt as the reference length to evaluate the reynolds number, where is the ice crystal length and is the semi-length of its cross section. we perform numerical tests for a variety of aspect ratios. [figures: settling velocity for columnar ice crystals as a function of the length, for aspect ratios between 1 and 2 and between 2 and 3; velocity is expressed in centimeters per second and lengths in micrometers; the lbm results are shown as squares and the experimental data as triangles, hollow circles and filled circles.] as can be observed in figures [fig:rescolumnarexperimental_ar=1_2] and [fig:rescolumnarexperimental_ar=2_3], the lbm results are in accordance with the laboratory measurements. the results from present some dispersion, as expected for a set of experimental data. for the length range and aspect ratios analyzed, all the numerical results lie within the data dispersion. in figure [fig:results_inc] the settling velocities obtained by lbm are shown in comparison with the laboratory measurements presented in , as a function of the crystal capacitance. this parameter depends on the particle geometry and is obtained in for different geometries; for hexagonal columns an approximate expression for the capacitance in terms of the column dimensions is used. the numerical results are in the regime that allows us to compare with the measurements presented in , where is chosen as the characteristic length to evaluate the reynolds number. the reynolds number regime of the lbm results is if we take as the characteristic length. [figure: settling velocity for columnar ice crystals as a function of the capacitance; velocity is expressed in centimeters per second and distances in micrometers; the lbm results are shown as squares; the experimental data are represented as triangles for aspect ratios between 1 and 2, and diamonds for aspect ratios between 2 and 3.]
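the normalisation used in the following comparison divides the computed settling velocity by a stokes-type reference velocity for a sphere of equal mass; the sketch below reproduces that bookkeeping, neglecting buoyancy. taking the equivalent-sphere diameter as the crystal maximum dimension, and the capacitance-based variant with its proportionality constant left as a free parameter, are assumptions made here for illustration only.

```python
import numpy as np

mu = 1.7e-5   # dynamic viscosity of air [Pa s]
g  = 9.81

def v_equivalent_sphere(mass, D):
    # stokes balance m*g = 3*pi*mu*D*v for a sphere with the same mass
    # and diameter D taken as the crystal maximum dimension (no buoyancy)
    return mass * g / (3.0 * np.pi * mu * D)

def v_capacitance(mass, C, k=1.0):
    # stokes-like estimate with an effective hydrodynamic radius k*C,
    # where C is the crystal capacitance; k is an illustrative constant
    return mass * g / (6.0 * np.pi * mu * k * C)

# normalized settling velocity of an lbm result v_lbm for a crystal of
# mass m and maximum dimension D:
#   v_norm = v_lbm / v_equivalent_sphere(m, D)
```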
it is possible to observe from figure [fig:results_inc] that the dispersion of the lbm results decreases notably when the capacitance is used as the variable. the same observation was pointed out in for the experimental results. we can also observe from figure [fig:results_inc] that the results obtained with lbm are not uniformly distributed within the region containing the experimental data: a bias towards the lower part of this region can be seen. a larger amount of computation would allow us to fit a curve to the lbm results, though this is beyond the scope of this work. in figure [fig:rescolumnariceparticle] we show the normalized falling velocity obtained by lbm in comparison with some well-known theoretical and experimental results from the literature. these results are the same as those presented above for hexagonal columns, but in normalized form. the normalized velocity is computed as proposed by , that is, the crystal settling velocity is divided by the sedimentation velocity of an "equivalent sphere". here, equivalent sphere means a sphere with mass equal to that of the ice crystal and diameter equal to the crystal reference length. , based on results from , propose an expression for using the stokes solution for a sphere in a viscous flow in which the sphere radius is replaced by an effective hydrodynamic radius proportional to the capacitance. [figure: normalized falling velocity for hexagonal columns as a function of the ice crystal aspect ratio; the dashed line shows the normalized terminal velocity proposed for randomly oriented ice crystals; the experimental results are presented as in the cited work.] the dashed line in figure [fig:rescolumnariceparticle] is the theoretical proposal presented by for the normalized falling velocity. this proposal is formulated for columnar ice crystals with hexagonal cross section in random orientation. the results labeled mhkc in figure [fig:rescolumnariceparticle] correspond to the proposals from in random orientation; mhkc is a common nomenclature in the literature to refer to this group of methods, which are considered identical for . we also select from the literature and show in figure [fig:rescolumnariceparticle] some experimental data presented by . the lbm results at (with as characteristic length) in figure [fig:rescolumnariceparticle] are close to and slightly above the theoretical proposal for hexagonal columns in random orientation. as expected from the previous comparison with experimental data, the lbm results in figure [fig:rescolumnariceparticle] are below the mhkc proposals. the lbm results placed furthest above the proposal belong to ice crystals falling in vertical or approximately vertical orientation. the lbm results are obtained using a computational fluid domain as shown in figure [fig:schematicfallingsphereandresults], with a relation . a lattice boltzmann model was used to obtain the numerical results. using models with more velocities ( or ), more accuracy can be obtained at a higher computational cost. nevertheless, as shown in section [sec:lbm_benchmarks], the model gives acceptable (precise enough) results with a moderate computational cost. we use a grid refinement scheme with three to five grid scales along the longitudinal axis of the fluid domain. the boundary conditions in the fluid domain are assigned as follows: free slip on the vertical walls; dirichlet constant velocity on the bottom wall; and a convective boundary condition (as presented in section [subsec:lbm_cbc]) on the top wall. the fluid
dynamics was computed in a fluid domain that moves with constant velocity .we use an initialization procedure as explained in section [ subsec : spheresedimentation ] .therefore , there is a fictitious transient movement which we are not interested in .the presence of near free slip walls may have a minor but not negligible influence in the numerical falling velocity .these weak blockage effects are similar to those tested in section [ subsec : spheresedimentation ] .for blockage ratios ( defined here as ) bigger than certain value , corrections should be applied to the results of numerical simulations .we observe that for blockage ratios smaller than the influence of the walls is negligible .this maximum acceptable blockage value was obtained by evaluating the interference effects on an sphere in sedimentation .these interference effects are quantified by the relation between the lbm obtained settling velocity and that obtained from theoretical estimations in an unrestricted domain .differences less than between these velocities were observed for .the computed results shown in this paper were obtained with blockage ratios smaller than .this configuration allow us to get by numerical evaluation of and with obtained from stokes equation . for particles in the analyzed reynolds number regimewe have not observed a preferential orientation .the observed crystal behaviors are in accordance with the results presented by , and with the experimental observations in . has pointed out some behaviors about orientation of falling bodies at low . from experiments and theoretical predictions, he notes that needles will fall with indeterminate orientation in the limit .also , at small but finite there will be an aligning torque tending to make its long axes horizontal . to prove these results at small but finite , a number of numerical test were made in the regime . for cylindrical particles in this regimeits long axes keep approximately horizontal or a few degrees around this orientation .to check this preference orientation , we also made numerical tests with particles whose long axes are initially in a vertical orientation .the result we observe is that the particles always turn its long axes to an approximately horizontal orientation .this section presents the obtained falling velocity results for ice policrystals sedimentation .the geometry type of the analyzed ice policrystals is the six bullet - rosette .a schematic geometrical configuration is shown in the figure [ fig : res6bulletrosetteicepolicrystal ] .bullets and bullet - rosettes are policrystals found in a variety of configurations and orientations in cirrus cloud . presented policrystals habit percentages and a classification in shape and size in cirrus clouds as a function of temperature . found that bullet rosettes were the predominant form above in a mid - latitude cirrus cloud . 
noted that bullet rosettes were typically observed between and . there are different bullet-rosette models; a 19-bullet rosette configuration, for instance, is used in . we are interested in polycrystals with six hexagonal bullets. these polycrystal geometries are known as six bullet-rosettes (or 6-rosettes) in the literature. the geometrical models used to represent the ice polycrystals are built from six arms of given length and diameter. the characteristic lengths of the polycrystals we analyze are in a range . the arms are represented by hexagonal columns; they are joined at one of their ends and arranged symmetrically, as shown in figure [fig:res6bulletrosetteicepolicrystal]. the polycrystal aspect ratio is defined relative to the columnar length, . the analyzed models are a slight variation of those presented in . a bullet density of is used, as recommended in . the fluid properties are those at in an international standard atmosphere (isa). [figure: schematic model used to represent the 6 bullet-rosette ice polycrystal, and normalized settling velocity as a function of the crystal aspect ratio; the figure also shows the normalized terminal velocity proposed for the 6 bullet-rosette ice polycrystal, and a proposal for the settling velocity of bullet-rosettes.] the obtained lbm results and the normalized velocity, as proposed by , are shown in figure [fig:res6bulletrosetteicepolicrystal]. the normalized velocity is obtained by dividing the six bullet-rosette settling velocity by the terminal falling velocity of an equivalent sphere, where equivalent sphere means a sphere with mass equal to that of the ice polycrystal and diameter equal to its reference length. figure [fig:res6bulletrosetteicepolicrystal] shows the lbm results for aspect ratios between 1 and 2.5. the analyzed fluid mechanical problems are in a pure viscous regime at . to compare with our results, we show as a dashed line a theoretical proposal by for the six bullet-rosette polycrystal. we also show an estimation from for the falling velocity of bullet-rosettes with . to the knowledge of the author, there are no experimental measurements of the falling velocity available for six-bullet ice polycrystals. as can be observed in the figure, the numerical results are close to and slightly below those from the proposal. one of the reasons for the gap between the lbm results and the proposal may be the geometrical differences in the polycrystal model. it can also be observed from figure [fig:res6bulletrosetteicepolicrystal] that the proposal overestimates the falling velocity for in comparison with the lbm results and . the lbm normalized vertical velocities in figure [fig:res6bulletrosetteicepolicrystal] were obtained as in section [subsec:columnariceparticles] for columnar ice crystals. we present in this work a lattice-boltzmann method to determine the dynamics of ice crystals in the atmosphere.
given a characterization for the shape , size and mass density of the crystals , together with its atmospheric habitat , it is possible to completely determine the dynamical behavior of the crystals .the numerical method proposed provides good results for the sedimenting velocity for the geometries , sizes and range of reynolds number analyzed . for the hexagonal column crystals , the results obtained by lbm are completely inside the dispersion region of the experimental , laboratory , measurements .when the capacitance ( eq . [ eq : columnarcapacitance ] , ) is used as variable , the dispersion of both experimental and lbm results decreases noticeable . in this case a small bias of the lbm results towards the lower end of the dispersion region can be observed . by direct comparison , we see that the lbm computed sedimenting velocity turns out to be a little higher than the proposal , and much lower than the proposal of mhkc ( proposals from ) .this is observed for all the aspect ratios analyzed .the bigger differences we find with the proposal of occur for columns with vertical or near vertical alignment . in the problem of the six bullet rosette crystals the lbm results turn out to be very close and slightly below to the proposal of .the difference in the models used to represent the geometry of the crystals could be the reason for this slight difference . to the best of the author s knowledge , there are no reported experimental results for polycrystals of the size , shape and in the reynolds regime analyzed .we want to emphasize that a great deal of problems , with different geometries and values of parameters ( like density , aspect ratio , etc . )can be analyzed by using the lbm methods . in this wayone could get statistical characterizations , as those obtained via laboratory experiments , for new crystal shapes ; one could also study the sensitivity models parameters , etc .the author wants to thank nesvit e. castellano , rodrigo e. brgesser and omar e. ortiz for useful discussions and contributions .n. e. castellano and r. e. brgesser brought the author s attention to this subject .j. p. giovacchini is a fellowship holder of conicet ( argentina ) .this work was supported in part by grants 05-b454 of secyt , unc and piddef 35/12 ( ministry of defense , argentina ) .
the lattice boltzmann method (lbm) is used to simulate and analyze the sedimentation of small ( ) ice particles in the atmosphere. we are especially interested in evaluating the terminal falling velocity for two ice particle shapes: columnar ice crystals and six bullet-rosette ice polycrystals. the main objective of this paper is to investigate the suitability of the lbm for solving ice crystal sedimentation problems, as well as to evaluate these numerical methods as a powerful tool for solving such problems for arbitrary ice crystal shapes and sizes. lbm results are presented in comparison with laboratory experimental results and theoretical proposals that are well known in the literature. the numerical results show good agreement with experimental and theoretical results for both geometrical configurations.
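as a complement to the settling-velocity results above, the following short script sketches the equivalent-sphere normalization used there: the polycrystal settling velocity is divided by the terminal velocity of a sphere whose mass equals the crystal mass. the numerical values below (bullet density, air properties, crystal mass) and the use of a stokes-drag terminal velocity are illustrative assumptions consistent with the purely viscous regime discussed, not values taken from the text.

```python
import numpy as np

# Illustrative constants (assumed, not taken from the paper)
RHO_ICE = 800.0   # bullet density [kg/m^3], placeholder value
RHO_AIR = 0.82    # air density [kg/m^3] at an assumed ISA level
MU_AIR = 1.5e-5   # dynamic viscosity of air [Pa s], assumed
G = 9.81          # gravitational acceleration [m/s^2]

def equivalent_sphere_diameter(mass, rho_ice=RHO_ICE):
    """Diameter of a solid-ice sphere with the same mass as the polycrystal."""
    return (6.0 * mass / (np.pi * rho_ice)) ** (1.0 / 3.0)

def stokes_terminal_velocity(diameter, rho_p=RHO_ICE, rho_f=RHO_AIR, mu=MU_AIR):
    """Terminal velocity of the equivalent sphere in the viscous (Stokes) regime,
    valid only at very small Reynolds numbers, consistent with the regime analyzed."""
    return (rho_p - rho_f) * G * diameter**2 / (18.0 * mu)

def normalized_settling_velocity(v_lbm, crystal_mass):
    """Normalize an LBM-computed settling velocity by the equivalent-sphere value."""
    d_eq = equivalent_sphere_diameter(crystal_mass)
    return v_lbm / stokes_terminal_velocity(d_eq)

# Example usage with made-up numbers
if __name__ == "__main__":
    v_star = normalized_settling_velocity(v_lbm=0.12, crystal_mass=2.0e-9)
    print(f"normalized settling velocity: {v_star:.3f}")
```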
it is well known that for a cross - section data set , the ordinary least squares ( ols ) estimator is typically consistent for the coefficient vector of the best linear approximation to the conditional mean , even if the conditional mean is not necessarily linear .such a `` robust '' nature of ols is one of the reasons why ols is popular in empirical studies ( * ? ? ?* chapter 3 ) .suppose now that a panel data set is available .in such a case , we are typically interested in estimating the partial effects of the observed explanatory variables , , on the conditional mean of the dependent variable , , conditional on and the unobservable individual effect , where denotes the index for individuals and denotes the index for time . for that purpose ,a popular strategy is to model the conditional mean ] is truly additive in and , and linear in .the goals of this paper are twofold .the first objective is to study the asymptotic properties of the fe estimator under possible model misspecification when both and are large .the asymptotics used is the joint asymptotics where and jointly go to infinity ( more precisely , we index by and let as ) .assume that is independent and identically distributed ( i.i.d . ) , and for each , conditional on , is stationary and weakly dependent .suppose that ] , which gives some rational to use the fe estimator ( and its variant ) even when the model is possibly misspecified .we regard this probability limit as a pseudo - true parameter ( see section [ sec : interpretation ] for the discussion on interpretation or plausibility of this probability limit ; especially if ] ) .we then establish the asymptotic distributional properties of the fe estimator around the pseudo - true parameter when and jointly go to infinity .we demonstrate that , as in the correct specification case , the fe estimator suffers from the incidental parameters bias of which the top order is .moreover , we show that , after the incidental parameters bias is completely removed , the rate of convergence of the fe estimator depends on the degree of model misspecification and is either or .the second goal of the paper is to establish asymptotically valid inference on the ( pseudo - true ) parameter vector .since the fe estimator has the bias of order , the first step is to reduce the bias to . for that purpose , one can use existing general - purpose bias reduction methods proposed in the recent nonlinear panel data literature .for example , one can use the half - panel jackknife ( hpj ) proposed by .we refer to and for alternative approaches on bias correction for fixed effects estimators in panel data models .after the bias is properly reduced , the fe estimator has the centered limiting normal distribution provided that or depending on the degree of model misspecification .we are then interested in estimating the covariance matrix or quantiles of the centered limiting normal distribution . to this end , we study the asymptotic properties of the clustered covariance matrix ( ccm ) estimator and the cross section bootstrap under the prescribed setting . 
we show that the ccm estimator (with an appropriate estimator of the parameter vector) is consistent in a suitable sense and can be used to make asymptotically valid inference on the parameter vector provided that or . this shows that inference using the ccm estimator is ``robust'' to model misspecification. the cross section bootstrap can also consistently estimate the centered limiting distribution of the fe estimator without any knowledge of the degree of model misspecification and hence is robust to model misspecification, and moreover, interestingly, without any growth restriction on . the second feature of the cross section bootstrap is notable and shows (in a sense) that the incidental parameters bias does not appear in the bootstrap distribution. allowing for potential model misspecification is important in practice. this paper is of practical relevance because it provides an interpretation for the fe estimator under potential misspecification and, additionally, proposes methods for inference in linear panel data models with large and that are robust to model misspecification. however, the literature on estimation and inference for linear panel data models that are robust to model misspecification is scarce. an exception is , who considered lag order misspecification of panel ar models and established the asymptotic properties of the fe estimator under possible misspecification of the lag order. however, his focus is on the incidental parameters bias, and he did not study the inference problem for the pseudo-true parameter. moreover, he did not cover a general form of model misspecification. this paper fills this void. furthermore, the asymptotic properties of the ccm estimator and the cross section bootstrap when model misspecification and the incidental parameters bias (in the coefficient estimate) are present have not been studied in a systematic form and hence are under-developed. investigated the asymptotic properties of the ccm estimator when and are large, but did not allow for the case where the incidental parameters bias appears, nor did he cover model misspecification.
studied the asymptotic properties of the cross section bootstrap when and are large, but ruled out the case where the incidental parameters bias appears, and did not cover model misspecification either. hence we believe that this paper is the first one that establishes a rigorous theoretical ground for the use of the ccm estimator and the cross section bootstrap when model misspecification and the incidental parameters bias are present. it is important to note that, even without model misspecification, these asymptotic properties of the ccm estimator and the cross section bootstrap when the incidental parameters bias is present are new. we conduct monte carlo simulations to evaluate the finite sample performance of the estimators and inference methods under misspecification. we are particularly interested in the empirical coverage of the 95% nominal confidence interval. the empirical coverage probability using the ccm and the cross section bootstrap, especially the cross section bootstrap applied to pivotal statistics, is good. we also apply the procedures discussed in this paper to a model of unemployment dynamics at the u.s. state level. the results yield a speed of adjustment of the unemployment rate towards the state-specific equilibrium of about 17%. in addition, the analysis of the estimates indicates that increments in economic growth are associated with smaller unemployment rates. the organization of this paper is as follows. in section [sec:interpretation], we discuss the interpretation of the fe estimator under misspecification. in section [sec:asymptotics], we present the theoretical results on the asymptotic properties of the fe estimator under misspecification. in section [sec:inference], we present the results on the inference methods. in section [sec:mc], we report a monte carlo study to assess the finite sample performance of the estimators and inference methods, together with a simple application to real data. section [sec:conclusion] concludes. we place all the technical proofs in the appendix. we also include additional theoretical and simulation results in the appendix. * notation *: for a generic vector , let denote the -th element of .
for a generic matrix , let denote its -th element .for a generic vector with index , .let denote the euclidean norm .for any matrix , let denote the operator norm of .for any symmetric matrix , let denote the minimum eigenvalue of .we also use the notation for a generic vector .* note on asymptotics * : in what follows , we consider the asymptotic framework in which as , so that if we write , it automatically means that .the limit is always taken as .this asymptotic scheme is used to capture the situation where and are both large .in this section we clarify the probability limit , which we will regard as a pseudo - true parameter , of the fe estimator under the joint asymptotics and discuss interpretation ( or plausibility ) of the pseudo - true parameter .the discussion is to some extent parallel to the linear regression case but there is a subtle difference due to the appearance of individual effects .suppose that we have a panel data set , where is an unobservable individual - specific random variable taking values in an abstract ( polish ) space , is a scalar dependent variable and is a vector of explanatory variables .typically , the random variable , which is constant over time , represents an individual characteristic such as ability or firm s managerial quality which we would include in the analysis if it were observable ( see * ? ? ?* chapter 10 ) .assume that is i.i.d ., and for each , conditional on , is a realization of a stationary weakly dependent process . herethe marginal distribution of is invariant with respect to .typically , we are interested in estimating the partial effects of on the conditional mean ] is written in the form , and `` model misspecification '' signifies any violation of this condition . for instance, this can happen if there are omitted variables or if nonlinearity occurs in the model .we discuss more details below ( see examples 1 and 2 below for concrete examples ) .the fe estimator defined by is consistent for the coefficient vector on as goes to infinity and is fixed if the specification is correct ( for the moment , assuming that exists ) and additionally the strict exogoneity assumption = { \mathbb{e}}[y_{it } \mid { \bm{x}}_{it},c_{i}] ] may not be written in the form , and consider the probability limit of the fe estimator when and jointly go to infinity .proposition [ prop1 ] ahead shows that , subject to some technical conditions , we have , as and , ^{-1 } { \mathbb{e } } [ { \widetilde}{{\bm{x}}}_{it } { \widetilde}{y}_{it } ] = : { \bm{\beta}}_{0 } , \label{ptrue}\ ] ] where ] ( for a moment , assume that some moments exist ) . to gain some insight, we provide a heuristic derivation of this probability limit under the sequential asymptotics where first and then . by definition ,we have since is weakly dependent conditional on , as first , we have , \ \bar{{\widetilde}{{\bm{x}}}}_{i } \stackrel{{\mathbb{p}}}{\to } \bm{0 } , \ t^{-1 } \sum_{t=1}^{t } { \widetilde}{{\bm{x}}}_{it } { \widetilde}{y}_{it } \stackrel{{\mathbb{p}}}{\to } { \mathbb{e } } [ { \widetilde}{{\bm{x}}}_{i1 } { \widetilde}{y}_{i1 } \mid c_{i}] ] . by the law of large numbers, the right side converges in probability to as . in what follows ,we discuss an interpretation of defined in ( [ ptrue ] ) .a direct interpretation is that is the coefficient vector of the best linear approximation to ] .however , the link emerges from the following discussion .let be the best partial linear predictor of on , i.e. 
, = \min_{g \in l_{2}(c_{1 } ) , \bm{b } \in \mathbb{r}^{p } } { \mathbb{e } } [ ( y_{it } - g(c_{i})-{\bm{x}}_{it}'\bm{b})^{2}],\ ] ] where < \infty \} ] , i.e. , - g_{0}(c_{i } ) - { \bm{x}}_{it}'\bm{b}_{0})^{2 } ] = \min_{g \in l_{2}(c_{1 } ) , \bm{b } \in \mathbb{r}^{p } } { \mathbb{e } } [ ( { \mathbb{e}}[y_{it } \mid { \bm{x}}_{it},c_{i } ] - g(c_{i})-{\bm{x}}_{it}'\bm{b})^{2 } ] .\label{papprox}\ ] ] therefore , the vector defined by ( [ ptrue ] ) is identical to the coefficient vector on of the best partial linear approximation to ] is indeed additive in and , and linear in , then coincides with the `` true '' coefficient on in ] due to possible model misspecification . in ( [ regression ] ), there are two scenarios on violation of the conditional mean restriction = 0 ] with positive probability but = \bm{0} ] with positive probability . depending on these two cases ,the asymptotic properties of the fe estimator _ do _ change drastically ( see section 3 ) .generally , both cases can happen .we give three simple examples to fix the idea . _example 1_. panel ar model with misspecified lag order .suppose that the true data generating process follows a panel ar(2 ) model = 0,\ ] ] where is such that and .conditional on , is stationary and typically weakly dependent . by a simple calculation, we have = c_{i}/(1-\phi_{1}-\phi_{2}) ] , we have . hence is independent of .suppose now that we incorrectly fit a panel ar(1 ) model .note that in this case , we have = c_{i } + \phi_{1 } y_{i , t-1 } + \phi_{2 } { \mathbb{e } } [ y_{i , t-2 } \mid y_{i , t-1},c_{i } ]. \end{aligned}\ ] ] hence ] is the first order autocorrelation coefficient of , i.e. , . letting , we have , i.e. , . by the independence of from , we have = { \mathbb{e } } [ { \widetilde}{y}_{i , t-1 } \epsilon_{it } ] = 0 ] , and = \phi { \widetilde}{x}_{it}^ { * } + u_{it} ] is given by ^{-1 } { \mathbb{e } } [ { \widetilde}{y}_{it } { \widetilde}{x}_{it } ] = \frac{{\mathbb{e } } [ ( { \widetilde}{x}_{it}^{*})^{2}]}{{\mathbb{e } } [ ( { \widetilde}{x}_{it}^{*})^{2 } ] + { \mathbb{e } } [ v_{it}^{2 } ] } \phi.\ ] ] finally , by , we have = { \mathbb{e } } [ ( { \widetilde}{x}_{it}^{*})^{2 } \mid c_{i } ] \phi - ( { \mathbb{e } } [ ( { \widetilde}{x}_{it}^{*})^{2 } \mid c_{i } ] + { \mathbb{e } } [ v_{it}^{2 } ] ) \beta_{0} ] .it is routine to verify that suppose that we incorrectly fit a panel ar(1 ) model . here= y_{it} ] is given by }{{\mathbb{e } } [ y_{i , t-1}^{2 } ] } = \frac{{\mathbb{e } } [ c_{i}/(1-c_{i}^{2})]}{{\mathbb{e } } [ 1/(1-c_{i}^{2})]}.\ ] ] here , and = { \mathbb{e } } [ c_{i } y_{i , t-1}^{2 } - \beta_{0 } y_{i , t-1}^{2 } \mid c_{i } ] = \frac{c_{i}-\beta_{0}}{1-c_{i}^{2}},\ ] ] which is non - zero a.s .if obeys a continuous distribution .the results in this paper could be interpreted as corresponding to the pseudo - likelihood model . under the additional assumptions of independence and normality ,the resulting ( conditional ) maximum likelihood estimator ( mle ) of ( given ) is identical to the fe estimator .thus , the fe estimator defined in ( [ fe ] ) can be viewed as a pseudo - mle . 
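a small simulation makes example 1 concrete: data are generated from a panel ar(2) with individual effects, a panel ar(1) is fitted by the within (fe) estimator, and the estimate settles near the pseudo-true value (the first-order autocorrelation, which for a stationary ar(2) equals the first ar coefficient divided by one minus the second) rather than near the first ar coefficient itself. the parameter values, sample sizes and burn-in below are arbitrary illustrative choices, not the monte carlo design used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_panel_ar2(n, t, phi1=0.4, phi2=0.3, burn=200):
    """Generate y_{it} = c_i + phi1*y_{i,t-1} + phi2*y_{i,t-2} + eps_{it}."""
    c = rng.normal(size=n)
    y = np.zeros((n, t + burn))
    for s in range(2, t + burn):
        y[:, s] = c + phi1 * y[:, s - 1] + phi2 * y[:, s - 2] + rng.normal(size=n)
    return y[:, burn:]                       # drop burn-in so the series is (near) stationary

def fe_ar1(y):
    """Within (FE) estimator of a misspecified panel AR(1) fitted to y."""
    x, z = y[:, :-1], y[:, 1:]               # regressor y_{i,t-1} and dependent variable y_{it}
    xd = x - x.mean(axis=1, keepdims=True)   # within transformation (demeaning by individual)
    zd = z - z.mean(axis=1, keepdims=True)
    return (xd * zd).sum() / (xd * xd).sum()

if __name__ == "__main__":
    est = fe_ar1(simulate_panel_ar2(n=500, t=400))
    # With phi1=0.4, phi2=0.3 the pseudo-true AR(1) slope is the first-order
    # autocorrelation phi1/(1-phi2) = 0.571..., not phi1 itself; the finite-T
    # estimate also carries a small incidental-parameters (1/T) bias.
    print(f"FE AR(1) estimate: {est:.3f}  (pseudo-true approx {0.4/(1-0.3):.3f})")
```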
made an interesting observation about the pseudo - true parameter when the link function is misspecified for the generalized linear model ( see their corollary 1 ) .that is , the pseudo - true parameter is proportional to the true one up to nonzero scalar .however , their setting is significantly different from ours ; first of all in corollary 1 they assumed that the true model satisfies a generalized linear model but only the link function is misspecified , and only cross section data are available . in our case basicallyno `` model '' is assumed ( and hence the `` true parameter '' is not well - defined in general ) , so that their result does not extend to our setting . in this paperwe focus on the fe estimator .this is because the fe estimator is widely used in practice and , as we have shown , the probability limit of the fe estimator under misspecification admits a natural and plausible interpretation , parallel to the linear regression case .there could be alternative estimators ; for example , we could consider the average of the individual - wise ols estimators , i.e. , let denote the ols estimator obtained by regressing on with each fixed , and consider the estimator . however , this estimator does not share the interpretation that the fe estimator possesses . in some casesthe probability limit of happens to be identical to , but not in general . to keep tight focus , we only consider the fe estimator in what follows .in this section , we study the asymptotic properties of the fe estimator under possible model misspecification ( i.e. , ] and ] and ] a.s . and \leq m ] .assume that the matrix is nonsingular . in condition ( a1 ), is an unobservable , individual - specific random variable allowed to be dependent with in an arbitrary form .condition ( a1 ) assumes that the observations are independent in the cross section dimension , but allows for dependence in the time dimension conditional on individual effects .we refer to section 2.6 of for some basic properties of mixing processes .condition ( a1 ) is similar to condition 1 of , and allows for a flexible time series dependence .the mixing condition is also similar to assumption 1 ( iii ) of , while she allowed for cross section dependence .the mixing condition is used only to bound covariances and moments of sums of random variables , and not crucial for the central limit theorem .therefore , in principle , it could be replaced by assuming directly such bounds .we assume the mixing condition to make the paper clear .note that because of stationarity assumption made in ( a1 ) , the marginal distribution of is invariant with respect to , i.e. , ., is drawn from the stationary distribution , conditional on . ]the stationary assumption rules out time trends , but is needed to well - define the pseudo - true parameter , and maintained in this paper .extensions to non - stationary cases will need different analysis and are not covered in this paper .condition ( a2 ) is a moment condition . as usual, there is a trade - off between the mixing condition and the moment condition .condition ( a2 ) implies that \leq m' ] for are understood as ] for . we first note that under conditions ( a1)-(a3 ) , both and are well behaved in the following sense .[ lem1 ] under conditions ( a1)-(a3 ) , we have \| < \infty ] . 
all the technical proofs for section 2 are gathered in appendix [ appendix a ] .we now state the asymptotic properties of the fe estimator .define = \bm{0} ] in .another source , which only contributes to higher order terms , is conditional correlation between and for conditional on , which arises from using instead of ] a.s . and-consistent otherwise .in fact , the rate of convergence of the fe estimator ( after the incidental parameters bias is removed ) depends on the order of the covariance matrix of the term ) + \frac{1}{n } \sum_{i=1}^{n } { \mathbb{e } } [ { \widetilde}{{\bm{x}}}_{i1 } \epsilon_{i1 } \mid c_{i}].\ ] ] by this decomposition , ) \right \}^{\otimes 2 } \right ] + { \mathbb{e}}\left [ \left ( \frac{1}{n } \sum_{i=1}^{n } { \mathbb{e } } [ { \widetilde}{{\bm{x}}}_{i1 } \epsilon_{i1 } \mid c_{i } ] \right ) ^{\otimes 2 } \right ] \\ & = \frac{1}{nt } { \mathbb{e}}\left [ \left \ { \frac{1}{\sqrt{t } } \sum_{t=1}^{t }( { \widetilde}{{\bm{x}}}_{1 t } \epsilon_{1 t } - { \mathbb{e}}[{\widetilde}{{\bm{x}}}_{i1 } \epsilon_{i1 } \mid c_{1 } ] ) \right \}^{\otimes 2 } \right ] + \frac{1}{n } { \mathbb{e}}[{\mathbb{e}}[{\widetilde}{{\bm{x}}}_{1,1 } \epsilon_{1,1 } \mid c_{1}]^{\otimes 2}].\end{aligned}\ ] ] since is weakly dependent conditional on , the first term is . on the other hand ,the second term is zero if = \bm{0} ] a.s . and otherwise .intuitively , unless = \bm{0} ] a.s . , by condition ( a1 ) , the covariance between and converges to zero sufficiently fast as , so has the faster rate ) . in some cases , at least theoretically , it may happen that \neq \bm{0} ] a.s . , which corresponds to the case where the matrix ^{\otimes 2}] ] .this shows that , after subtracting the incidental parameters bias , the fe estimator may have different rates of convergence within its linear combinations .moreover , the remainder term in the expansion ( [ expansion2 ] ) has the constant term of order , so that the extra bias term of order appears in such a case ., it is shown that the remainder term in the expansion ( [ expansion2 ] ) is in fact further expanded as a^{-1}{\mathbb{e } } [ { \widetilde}{{\bm{x}}}_{1,1}\epsilon_{1,1 } ' \mid c_{1 } ] ] + o_{{\mathbb{p } } } [ n^{-1/2 } \max \ { n^{-1},t^{-1 } \}] ] a.s . or not ) .it is possible to have an alternative expression of the bias term of order .[ cor1 ] suppose that conditions ( a1)-(a3 ) are satisfied .then we have : + o(t^{-1 } ) = : b+o(t^{-1}).\ ] ] in particular , the bias term of order in the expansion ( [ bahadur ] ) is rewritten as .finally , we provide some comments on the relation to the previous work .proposition [ prop1 ] is a nontrivial extension of theorem 2 of , in which he established the asymptotic properties of the fe estimator for panel ar models with exogenous variables allowing for lag order misspecification .proposition [ prop1 ] allows for a more general form of model misspecification , including lag order misspecification as a special case , and exhausts the incidental parameters bias of any order .[ rem : hansen ] proposition [ prop1 ] is related to . 
considered a model with =0 ] , and showed that the ols estimator is -consistent if there is no condition on time series dependence and -consistent if a mixing condition is satisfied for time series dependence .what matters for the rate of convergence of the ols estimator in is the order of the covariance matrix of the term , which is in the `` no mixing '' case and in the mixing case .while there is a similarity , proposition [ prop1 ] is not nested to his results in several aspects .first , in proposition [ prop1 ] , the rate of convergence of the fe estimator ( after the incidental parameters bias is removed ) depends on the degree of model misspecification ( i.e. , = \bm{0} ] for all under our setting ( if we think of as transformed variables ) , so that his results do not cover the case where the incidental parameters bias appears . on the other hand , while we exclusively assume that is mixing conditional on , covered the case where no such mixing condition is satisfied .therefore , the two papers are complementary in nature . obtained a general incidental parameters bias formula for nonlinear panel data models , allowing for potential model misspecification , when is going to some constant .their general result could be applied to the present setting in the case where =\bm 0 ] .this expansion is also the key for studying the properties of cross section bootstrap .by proposition [ prop1 ] , the fe estimator has the bias of order . in many econometric applications , is typically smaller than , so that the normal approximation neglecting the bias may not be accurate in either case of = \bm{0} ] .therefore , the first step to make inference on is to remove the bias of order and reduce the order of the bias to . under model misspecification ,bias reduction methods that depend on specific models ( such as panel ar models ) may not work properly .instead , we can use general - purpose bias reduction methods proposed in the recent nonlinear panel data literature .for example , the half - panel jackknife ( hpj ) proposed by is able to remove the bias of order even under model misspecification .suppose that is even .let and .for , construct the fe estimator based on the split sample .then the hpj estimator is defined by . using the expansion ( [ bahadur ] ) and corollary [ cor1 ] , as , we have the expansion , \label{bahadurh}\ ] ] so that the bias is reduced to in either case .therefore , we have provided that . proposed other automatic bias reduction methods applicable to general nonlinear panel data models .their bias reduction methods are basically applicable to the model misspecification case .is large . ] alternatively , a direct approach to bias correction is to analytically estimate the bias term . in this case, we typically estimate the first order bias term by using the technique of hac covariance matrix estimation ( see * ? ? ?moreover , another alternative approach is to use bias reducing priors on individual effects ( see , for example , * ? ? ? * and references therein ) .see also for a review on bias correction for fixed effects estimators in nonlinear panel data models . by the previous discussion ,a bias corrected estimator typically has the expansion . \label{expansion3}\ ] ] given this expansion , provided that , the distribution of can be approximated by . statistical inference on be implemented by using this normal approximation . 
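the half-panel jackknife described above admits a very short implementation. the sketch below assumes a balanced panel with an even number of periods and uses the usual half-panel combination (twice the full-panel estimate minus the average of the two half-panel estimates); if the paper's definition differs in details, only the combination line needs changing.

```python
import numpy as np

def fe_within(y, x):
    """Within (fixed effects) OLS of y on x for a balanced panel.
    y: (n, t) array; x: (n, t, p) array. Returns the p-vector of slopes."""
    yd = y - y.mean(axis=1, keepdims=True)
    xd = x - x.mean(axis=1, keepdims=True)
    xx = np.einsum('itp,itq->pq', xd, xd)
    xy = np.einsum('itp,it->p', xd, yd)
    return np.linalg.solve(xx, xy)

def fe_hpj(y, x):
    """Half-panel jackknife bias correction of the FE estimator.
    Assumes T is even; combines full- and half-panel estimates."""
    t = y.shape[1]
    if t % 2 != 0:
        raise ValueError("half-panel jackknife sketch assumes an even number of periods")
    h = t // 2
    full = fe_within(y, x)
    first = fe_within(y[:, :h], x[:, :h])
    second = fe_within(y[:, h:], x[:, h:])
    return 2.0 * full - 0.5 * (first + second)
```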
as usual , since the matrices and are unknown , we have to replace them by suitable estimators .a natural estimator of is defined in ( [ fe2 ] ) , which is in fact consistent ( i.e. , ) as long as ( see lemma [ lem2 ] in appendix [ appendix a ] ) .we thus focus on the problem of estimating the matrix .we consider the estimator suggested by : where and is a suitable estimator of .proposition [ prop2 ] establishes the rate of convergence of .all the technical proofs of this section are gathered in appendix [ appendix b ] .[ prop2 ] suppose that conditions ( a1)-(a3 ) are satisfied .let be any estimator of such that ] .this bias appears even if we could use in place of . however , for inference purposes , the rate of convergence given in proposition [ prop2 ] is sufficient , and we do not consider the bias correction to .assume now that is nonsingular in either case of = \bm{0} ] denote the expectation under . given a vector valued statistic depending on both and , and a deterministic sequence , we write `` in probability '' if for every and , as , and `` in probability '' if for every and , there exists a constant such that for all ( recall that ) .we are now in position to state the main result of this section .[ prop3 ] suppose that conditions ( a1)-(a3 ) are satisfied .letting as , we have : , \label{bahadurb}\ ] ] in probability .therefore , provided that is nonsingular , we have : where the inequalities are interpreted coordinatewise .interestingly , proposition [ prop3 ] shows that despite the fact that the original fe estimator has the incidental parameters bias of which the top order is , as shown in proposition [ prop1 ] , the bootstrap distribution made by applying the cross section bootstrap to the fe estimator does not have the incidental parameters bias . as a consequence , the bootstrap distribution approaching the centered normal distribution holds without any specific growth condition on .in fact , this is not surprising .the main source of the incidental parameters bias comes from the term , which is linear in the cross section dimension .the bootstrap analogue of this term is thus , so that the difference of these terms has mean zero with respect to .the same machinery applies to the term , so that the incidental parameters bias is completely removed in the bootstrap distribution .the previous discussion also has the following implication : the cross section bootstrap can not be used as a way to correct the incidental parameter bias .recall that in the cross section case , the bootstrap can be used to correct the second order bias coming from the quadratic term ; here the incidental parameters bias comes from the terms linear in the cross section dimension , so that the cross section bootstrap does not work as a way to correct the bias .proposition [ prop3 ] shows that , for fixed and , we have where is the distribution function of the standard normal distribution ( recall that is the -th element of a vector and denotes the -element of a matrix ) .suppose that we have a bias corrected estimator having the expansion ( [ expansion3 ] ) . 
then by a standard argument , we can deduce that provided that .note that can be computed with any precision by using simulation .moreover , the computation of does not require any knowledge on the speed of , and in this sense the cross section bootstrap is robust to model misspecification .an analogous result holds for the hpj estimator ( see section 4.1 ) .[ cor4 ] suppose that conditions ( a1)-(a3 ) are satisfied .let denote the hpj estimator based on the bootstrap sample .letting as , we have : ,\ ] ] in probability .therefore , provided that is nonsingular , we have : this corollary follows directly from the definition of the hpj estimator and proposition [ prop3 ] , and hence we omit the proof . basically , the conclusion of corollary [ cor4 ] holds for other reasonable bias corrected estimators .we do not attempt to encompass generality in this direction .analogous results hold for pivotal statistics .because of the space limitation , we push the formal results on pivotal statistics to the appendix ( appendix [ appendix ] ). the higher order properties of the cross section bootstrap will be very complicated in this setting and we do not attempt to study them here .however , it is of interest to quantify the order of the convergence in , say , ( [ bootb ] ) in appendix [ appendix ] , which is left to future research .there are some earlier works on the bootstrap for panel data . called the cross section bootstrap in this paper the `` block bootstrap '' and studied its numerical properties by using simulations , but did not study its theoretical properties . developed the asymptotic properties of the cross section bootstrap when the strict exogeneity is met , hence excluding the possibility that the incidental parameters bias appears . studies the asymptotic properties of the moving block bootstrap for panel data , which resamples the data in the time series dimension and hence is different from the cross section bootstrap .importantly , while allowed for cross section dependence which we exclude here , she assumed that the number of time periods , , is sufficiently large ( typically ) so that the incidental parameters bias does not appear .lastly , proposed to use the cross section bootstrap for inference for nonlinear panel data models such as panel probit models , but did not give any theoretical result .the asymptotic properties of the cross section bootstrap were largely unknown when the incidental parameters bias appears , even without model misspecification , and the results in this section contribute to filling this void and give useful suggestions to empirical studies . in ( [ bfe ] ) ,the weights are multinomially distributed .it is possible to consider other weights .a perhaps simplest variation is to draw independent weights from a common distribution with mean and variance , which corresponds to the _ weighted bootstrap _ ( see , for example , * ? ? ?let denote ( [ bfe ] ) with replaced by these independent weights .then the conclusion of proposition [ prop3 ] holds with replaced by and replaced by .since the proof is completely analogous , we omit the details for brevity .so far , we have discussed the distributional properties of the cross section bootstrap . 
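both inference routes discussed in this section can be sketched compactly: a clustered covariance matrix built from individual-level scores, and the cross-section bootstrap that resamples whole individuals with replacement while keeping each individual's time series intact. the code below reuses the fe_within helper from the half-panel-jackknife sketch; the exact normalization of the ccm, the omission of degrees-of-freedom corrections and the number of bootstrap draws are illustrative choices and may differ from the paper's definitions.

```python
import numpy as np

def ccm_vcov(y, x, beta):
    """Clustered-by-individual covariance matrix of the within estimator
    in the usual sandwich form (degrees-of-freedom corrections omitted)."""
    n, t = y.shape
    yd = y - y.mean(axis=1, keepdims=True)
    xd = x - x.mean(axis=1, keepdims=True)
    resid = yd - np.einsum('itp,p->it', xd, beta)
    a = np.einsum('itp,itq->pq', xd, xd) / (n * t)      # Hessian-type matrix
    scores = np.einsum('itp,it->ip', xd, resid) / t     # per-individual averaged score
    sigma = scores.T @ scores / n                       # clustered middle matrix
    a_inv = np.linalg.inv(a)
    return a_inv @ sigma @ a_inv / n

def cross_section_bootstrap(y, x, estimator, b=499, seed=0):
    """Cross-section (pairs) bootstrap: resample individuals with replacement.
    `estimator` can be, e.g., fe_within or fe_hpj from the previous sketch."""
    rng = np.random.default_rng(seed)
    n = y.shape[0]
    draws = []
    for _ in range(b):
        idx = rng.integers(0, n, size=n)
        draws.append(estimator(y[idx], x[idx]))
    return np.asarray(draws)   # e.g. np.percentile(draws, [2.5, 97.5], axis=0) for CIs
```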
given proposition [ prop3 ] ,it is natural to estimate the asymptotic covariance matrix by the conditional covariance matrix of .however , since convergence in distribution does not imply moment convergence , proposition [ prop3 ] does not guarantee that \to a^{-1}\sigma a^{-1} ] '' case and the last case corresponds to `` \neq \bm{0} ] may not be additive in and , nor linear in , when both the number of individuals , , and the number of time periods , , are large .we make several contributions to the literature .first , we have shown that the probability limit of the fe estimator is identical to the coefficient vector on of the best partial linear approximation to ] in probability .letting as , we have : ,\ ] ] in probability .in particular , as long as , we have in probability .proposition [ prop4 ] establishes the rate of convergence of .this result is parallel to that in proposition [ prop2 ] .note that the condition that ] a.s . or not .consider , for the sake of simplicity , testing the null hypothesis , where is fixed , and consider the -statistic and its bootstrap version based on either the fe or hpj estimate : where is the vector such that and for . here because ( see lemma [ lem3 ] ) and in probability , we can deduce that under conditions ( a1)-(a3 ) , this holds as long as and does not require any specific growth restriction on .moreover , when , under conditions ( a1)-(a3 ) , we have provided that .in order to shed more light on the performance of the proposed methods , figure [ fig1 ] presets the bias and rmse of the fe and hpj estimators from a fixed cross - section when varying the time series . in these simulations , we only considered the ar(2 ) model given in the second example with .we fix the cross - section dimension at .the left panel shows the bias of the estimators as a function of .the results show that the hpj estimator has a small bias for small , but the bias disappears even for relatively small . on the contrary , the bias in the fe is large , and it remains relatively substantial even for large time series .the right panel displays the rmse for the estimators .it shows a good performance of the hpj estimator for moderate time dimensions .increases for . _ [ fig1 ] ] here we consider the case where the true dgp is where , , and . this model is a panel - data version of exponential ar ( expar ) models ( see * ? ? ?note that by ( * ? ? ?* example 3.2 ) , for each , the process ( which is independent of ) is geometrically ergodic , so that condition ( a1 ) is satisfied . as in the previous case ,we incorrectly fit a panel ar(1 ) model and estimate the slope parameter .the value of the pseudo - true parameter is .the simulation results for this case are presented in table [ table.mc.3 ] .* table [ table.mc.3 ] * : the fe estimate is severely biased and the fe - ccm option performs poorly because of the presence of large bias . on the other hand , the hpj estimate is able to largely remove the bias at the cost of slight variance inflation relative to the fe estimate in the finite sample . in this case , the hk and gmm estimates are able to reduce the bias to some extent , although gmm still presenting larger bias .the empirical coverage of the hk option is reasonably well ( which is partly due to the fact that the panel expar model is `` close '' to the panel ar(1 ) model ) .gmm is under coverage .in addition , the empirical coverage of the hk and gmm options worsen as becomes large . 
among the other options , hpj - hpjpb works particularly well .we shall recall here the notational convention . for a generic vector , denotes the -th element of , and for a generic matrix , denotes the -th element of .moreover , for a sequence indexed by , we write . in this section , we introduce some inequalities for -mixing processes , which will be used in the proofs below .let denote a stationary process taking values in some polish space , and let denote its -mixing coefficients .[ thma1 ] let denote the -field generated by ( ) .pick any integer .let and be real - valued random variables measurable with respect to and , respectively . if < \infty ] for some and such that , then we have - { \mathbb{e } } [ \xi ] { \mathbb{e } } [ \eta ] | \leq 12( { \mathbb{e } } [ | \xi |^{q } ] ) ^{q^{-1}}({\mathbb{e } } [ | \eta |^{r } ] ) ^{r^{-1 } } \alpha(k)^{1-q^{-1}-r^{-1}}.\ ] ] to illustrate an application of davidov s inequality , suppose that , and assume that < \infty ] . for bounding higher order moments ,we make use of yokoyama s ( 1980 ) theorem 3 .[ thma2 ] suppose that .assume that = 0 ] .if , then there exists a constant independent of such that \leq c t^{r/2}.\ ] ] [ lem2 ] suppose that conditions ( a1)-(a3 ) are satisfied . as ( and automatically ) , we have : ( i ) is nonsingular with probability approaching one , and + ; ( ii ) = t^{-1 } b_{t } = o(t^{-1}) ] ; ( iv ) ) \stackrel{d}{\to } n(\bm{0},v_{1}) ] ( the right side is absolutely convergent in ) . * part ( i ) * : recall that ] and ) \\& \quad + \frac{1}{n } \sum_{i=1}^{n } ( { \mathbb{e}}[{\widetilde}{{\bm{x}}}_{i1 } { \widetilde}{{\bm{x}}}_{i1 } ' \mid c_{i } ] - { \mathbb{e}}[{\widetilde}{{\bm{x}}}_{1,1 } { \widetilde}{{\bm{x}}}_{1,1 } ' ] ) - \frac{1}{n } \sum_{i=1}^{n } ( \bar{{\widetilde}{{\bm{x}}}}_{i}\bar{{\widetilde}{{\bm{x}}}}_{i } ' - { \mathbb{e } } [ \bar{{\widetilde}{{\bm{x}}}}_{1}\bar{{\widetilde}{{\bm{x}}}}_{1 } ' ] ) \\ & = : { \widehat}{d}_{1 } + { \widehat}{d}_{2 } - { \widehat}{d}_{3}.\end{aligned}\ ] ] fix any .we wish to show that , and .we make use of theorems [ thma1 ] and [ thma2 ] .put ] , we have , which implies that .the fact that is deduced from a direct evaluation of the variance .we next show that . by the cross section independence and the cauchy - schwarz inequality, we have = \frac{1}{n^{2 } } \sum_{i=1}^{n } \operatorname{var}(\bar{{\widetilde}{x}}_{i}^{a } \bar{{\widetilde}{x}}_{i}^{b } ) = n^{-1 } \operatorname{var}(\bar{{\widetilde}{x}}_{1}^{a } \bar{{\widetilde}{x}}_{1}^{b } ) \leq n^{-1 } ( { \mathbb{e } } [ { \mathbb{e } } [ ( \bar{{\widetilde}{x}}_{1}^{a})^{4 } \mid c_{1}]])^{1/2 } ( { \mathbb{e } } [ { \mathbb{e } } [ ( \bar{{\widetilde}{x}}_{1}^{b})^{4 } \mid c_{1 } ] ] ) ^{1/2}.\ ] ] using theorem [ thma2 ] to bound ] , we have = o(n^{-1}t^{-2}) ] and ] .define ) ] . here , using theorem [ thma2 ] to bound ]. 
therefore , we obtain the desired result .the expansion ( [ bahadur ] ) follows from lemma [ lem2 ] ( note that we use the fact that ) .suppose that = \bm{0} ] , so that by lemma [ lem2 ] ( iv ) , .suppose now that \neq \bm{0} ] .the asymptotic normality follows from the fact that if = \bm{0} ] .observe that \| & = \sum_{|k| \geq t } |k|^{-1 } \cdot |k|\| { \mathbb{e}}[{\widetilde}{{\bm{x}}}_{1,1 } \epsilon_{1,1+k } ] \| \\ & \leq \frac{1}{t } \sum_{|k| \geq t } |k| \| { \mathbb{e}}[{\widetilde}{{\bm{x}}}_{1,1 } \epsilon_{1,1+k } ] \| \\ & \leq \frac{1}{t } \sum_{k=-\infty}^{\infty } |k| \| { \mathbb{e}}[{\widetilde}{{\bm{x}}}_{1,1 } \epsilon_{1,1+k } ] \| = o(t^{-1}).\end{aligned}\ ] ] therefore , we have , which implies the desired result . since , we have where by definition, we have by theorems [ thma1 ] and [ thma2 ] , we can show that \leq c nd_{nt}^{-1} ] and \leq c ] and ] for some constant .when , so that , the assertion directly follows from the proof of ( * ? ?* theorem 1 ) with .the proof for the case is almost the same as that for the case . ]therefore , we have . if = \bm{0} ] with positive probability . clearly , .likewise , we have ^{2 } { \mathbb{e}}\left [ \left ( \frac{1}{\sqrt{t } } \sum_{t=1}^{t } \eta_{1 t } \right ) ^{2 } \mid c_{1 } \right ] \right ] .\ ] ] by theorem [ thma1 ] , we have \leq c ] ; ( iii ) = o_{{\mathbb{p } } } ( n^{-1/2 } t^{-1 } ) ] and , for any fixed , = \sum_{i=1}^{n } a_{i}^{2 } - n(n^{-1}\sum_{i=1}^{n } a_{i})^{2} ] , we have & = { \widehat}{s}_{2}^ { * } -{\widehat}{s}_{2 } + { \widehat}{s}_{2 } - { \mathbb{e } } [ { \widehat}{s}_{2 } ] \\ & = \frac{1}{n } \sum_{i=1}^{n } ( w_{ni}-1 ) \bar{{\widetilde}{{\bm{x}}}}_{i } \bar{\epsilon}_{i } + ( { \widehat}{s}_{2 } - { \mathbb{e } } [ { \widehat}{s}_{2 } ] ) .\end{aligned}\ ] ] by lemma [ lem2 ] ( iii ) , the second term is . it remains to show that the first term is .fix any . observe that \leq \frac{1}{n^{2}}\sum_{i=1}^{n } ( \bar{{\widetilde}{x}}^{a}_{i } \bar{\epsilon}_{i})^{2}.\ ] ] by the proof of lemma [ lem2 ] ( iii ) , the expectation of the right side is , which implies the desired result .[ lem4 ] under conditions ( a1)-(a3 ) , we have : ) \leq { \bm{x}}\ } - { \mathbb{p}}\ { n(\bm{0 } , v_{1 } ) \leq { \bm{x}}\ } | \stackrel{{\mathbb{p}}}{\to } 0,\ ] ] where )({\widetilde}{{\bm{x}}}_{1,1+k}\epsilon_{1,1+k}-{\mathbb{e}}[{\widetilde}{{\bm{x}}}_{1,1+k}\epsilon_{1,1+k } \mid c_{1}])'] ] .thus , it suffices to show that as . by the cross section independence , .\label{moment}\ ] ] as in the proof of proposition [ prop2 ] , we can show that = o(1) ] for any , which in turn implies the desired result .we are now in position to prove the lemma .define in such a way that if for some for .letting , we have ) = n^{-1/2 } \sum_{i=1}^{n}{\widetilde}{\bm{u}}_{ni}^{*} ] for almost every realization of . by the above fact, we obtain the desired result .we wish to verify that .observe that ) \notag \\ & \quad + \frac{1}{n } \sum_{i=1}^{n } ( w_{ni}-1 ) { \mathbb{e } } [ { \widetilde}{{\bm{x}}}_{i1 } \epsilon_{i1 } \mid c_{i } ] .\label{expansion}\end{aligned}\ ] ] here the first term is and the second term is zero if = \bm{0} ] .then the second term in ( [ expansion ] ) vanishes , and the assertion ( [ bootd ] ) follows from lemma [ lem4 ] .suppose that \neq \bm{0} ] .this completes the proof .the proof is similar to that of proposition [ prop2 ] . 
since , we have where by the proof of proposition [ prop2 ] , together with the assumption that ] , and when \neq \bm{0} ] and = o(n^{-3}) ] & ] & ] + ci & ] & ] & ] & ] & ] + ci & ] & ] & ] & ] & ] + ci & ] & ] & ] & ] & ] + ci & ] & ] & ] & ] & ] + ci & ] & ] & ] & ] & ] + ci & ] & ] & $ ] +
this paper considers fixed effects ( fe ) estimation for linear panel data models under possible model misspecification when both the number of individuals , , and the number of time periods , , are large . we first clarify the probability limit of the fe estimator and argue that this probability limit can be regarded as a pseudo - true parameter . we then establish the asymptotic distributional properties of the fe estimator around the pseudo - true parameter when and jointly go to infinity . notably , we show that the fe estimator suffers from the incidental parameters bias of which the top order is , and even after the incidental parameters bias is completely removed , the rate of convergence of the fe estimator depends on the degree of model misspecification and is either or . second , we establish asymptotically valid inference on the ( pseudo - true ) parameter . specifically , we derive the asymptotic properties of the clustered covariance matrix ( ccm ) estimator and the cross section bootstrap , and show that they are robust to model misspecification . this establishes a rigorous theoretical ground for the use of the ccm estimator and the cross section bootstrap when model misspecification and the incidental parameters bias ( in the coefficient estimate ) are present . we conduct monte carlo simulations to evaluate the finite sample performance of the estimators and inference methods , together with a simple application to the unemployment dynamics in the u.s .
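the pivotal (t-statistic) version of the cross-section bootstrap, whose coverage is examined in the monte carlo section, can be sketched by combining the pieces above: a clustered standard error for studentization and resampling of whole individuals for the bootstrap distribution of the absolute t-statistic. the routine below relies on the fe_within and ccm_vcov helpers from the earlier sketches; the number of draws and the nominal level are arbitrary illustrative choices.

```python
import numpy as np

def bootstrap_t_ci(y, x, j=0, b=499, level=0.95, seed=0):
    """Symmetric bootstrap-t confidence interval for the j-th slope, using the
    within estimator and individual-clustered standard errors.
    Relies on fe_within and ccm_vcov defined in the earlier sketches."""
    rng = np.random.default_rng(seed)
    n = y.shape[0]
    beta = fe_within(y, x)
    se = np.sqrt(ccm_vcov(y, x, beta)[j, j])
    t_abs = []
    for _ in range(b):
        idx = rng.integers(0, n, size=n)        # resample whole individuals
        yb, xb = y[idx], x[idx]
        bb = fe_within(yb, xb)
        seb = np.sqrt(ccm_vcov(yb, xb, bb)[j, j])
        t_abs.append(abs(bb[j] - beta[j]) / seb)
    crit = np.quantile(t_abs, level)            # bootstrap critical value of |t*|
    return beta[j] - crit * se, beta[j] + crit * se
```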
cooperation is a key aspect of the real world, ranging from biological systems to human behavior. therefore, people resort to game theory to study the emergence and maintenance of cooperation in biology, psychology, computer science, and economics. in particular, the prisoner's dilemma game (pdg) has become a metaphor for approaching the emergence of cooperation and altruistic behavior. in the traditional pdg, each of two players simultaneously chooses a strategy, cooperation ( ) or defection ( ), and receives a payoff. both players receive upon mutual and upon mutual . a defector gets when it plays against a cooperator, who gets . in the pdg we have and . because mutual yields the highest total income, is the better choice than no matter what the other player's strategy is. without any mechanism for the evolution of cooperation, natural selection favors defection. other widely studied games include the snowdrift game, the public goods game, the rock-paper-scissors game, and so on. complex networks have also attracted a lot of attention in the past few years. complex networks are ubiquitous in nature, and human society can also be described as a system composed of interacting agents. the classical social network maps each individual onto a node, and each connection between individuals onto a link. evolutionary game theory on spatial structures has become a unifying paradigm to study how cooperation may be sustained in a structured population. it was found that spatial extension is one of several natural mechanisms enforcing cooperation. the network structure affects the behavior of the strategy density. in lattice networks, cooperators usually cluster together to support each other and resist defection. santos and pacheco found that in scale-free networks the strong correlations make cooperation the dominating trait throughout the entire range of parameters of both games. there is also a large amount of research on other networks, such as small-world and random networks. when a player on a structured network chooses the better strategy to play, in fact it is not so much that the players select the proper strategy, but rather that a player's strategy is determined by the network structure. for example, in scale-free networks, the large-degree nodes (hubs) and the nodes connected to hubs tend to be occupied by . the networks used in most papers in this field are static: a connection never changes once it is built. this is not realistic enough, as the interactions themselves help shape the network. what is more, in the real world the relationships between people are not constant. sometimes people cannot cut off relationships with their relatives, neighbors or colleagues, but they can end an old relationship and build a new one. sometimes this change is caused by the results of the game, because people like to make friends on a reciprocal basis. for example, people often like to make friends with rich people in pursuit of fortune. so, when we study social models on networks, like the pdg, the network structure should be a dynamical entity: the nodes can remove or sustain their links in the network according to the game results. until now, few models have studied cooperative behavior in groups with adaptive connections. besides some early work, arne built a coevolution model of strategy and structure.
in this model, the probability of forming or cutting a link between nodes and is based on their strategies. the change of network structure results from strategy changes in the network, and in turn it also affects the strategy density. however, in their model a link can change even if the nodes' strategies do not change; the rewiring of a link is not the player's own decision. et al. also built a coevolution model in which a node rewires its link only when it changes its strategy. moreover, in that model the node rewires its long-range link based on the existing network structure, not on the results of the game. in our opinion, a rational model for the coevolution of game and network structure should contain two features: (1) the nodes rewire their links only when agents change their status; (2) the rewiring should be based on the results of playing the game. in this paper, we present a coevolution model of the pdg and the network. we use the pdg as a metaphor for studying cooperation between unrelated individuals and consider a social network with four fixed local links and one adjustable long-range link (lrl). the agents in the network play the game with their network neighbors. they change their strategies and adjust their lrls according to the results of the game; the change of network structure in turn affects the cooperation density. we set up a system of players arranged at the nodes of a ring lattice network. each node is connected with four local nodes. these local interactions do not change during the whole process of the evolution. besides the four fixed links, every node in this lattice has an adjustable lrl which connects to another node; self-connections and duplicate links are excluded. we call the lrl an out-link for the node to which it belongs and an in-link for the node to which it connects. a node can select another node to which its out-link wires, but it cannot give up the lrl. therefore, each node has at least one out-link and many possible in-links. when a node changes its strategy, it will also rewire its lrl. we will discuss when and how lrls are rewired later. as suggested by nowak and may, we adopt , , and . then can be considered as the temptation to against . every player plays the pdg with its neighbors on the network and with itself and gets the total payoff . after each round of the game, players are allowed to inspect their neighbors' total payoffs and change their strategies in the next round. a player updates its strategy by selecting one of its neighbors with probability , where is the community composed of the nearest neighbors of the player, and is the degree of node at time . in the spirit of the preferential attachment proposed by a.-l. barabasi and r. albert, we incorporate this preferential selection rule to model social behavior. in eq. [eq1], a player with large degree has a higher probability of impacting its neighbors. this is true in society: people who have great influence often have many social relations and are also paid attention to by their friends. node will adopt node 's strategy with the probability given in eq. [eq2], where and are the total payoffs of nodes and , and indicates the noise generated by the players, allowing irrational choices. if node has the same strategy as , or does not mimic 's strategy, node does nothing. otherwise, it will rewire its lrl to a new node. there are two rewiring rules in our model: random rewiring and preferential rewiring.
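before turning to the two rewiring rules, the imitation move just described (eqs. [eq1] and [eq2]) can be sketched in code. this is a minimal sketch: the payoff values, the noise strength and the sign convention inside the fermi-type function are assumptions for illustration, since those symbols are not fully legible in this extraction; the rewiring move that accompanies a successful strategy change is handled in the next sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def pd_payoff(s_x, s_y, b=1.2):
    """Weak prisoner's dilemma payoff to player x against y.
    R=1, P=S=0, T=b is an assumed (common) parametrization, not taken from the text."""
    if s_x == 1 and s_y == 1:
        return 1.0
    if s_x == 0 and s_y == 1:
        return b
    return 0.0

def total_payoff(node, strategy, neighbors):
    # The model includes self-interaction in addition to the network neighbors.
    return pd_payoff(strategy[node], strategy[node]) + sum(
        pd_payoff(strategy[node], strategy[v]) for v in neighbors[node])

def update_strategy(node, strategy, neighbors, degree, payoff, kappa=0.1):
    """One imitation attempt for `node`. Returns True if its strategy changed."""
    neigh = list(neighbors[node])
    weights = np.array([degree[v] for v in neigh], dtype=float)
    target = neigh[rng.choice(len(neigh), p=weights / weights.sum())]   # Eq. (1)
    if strategy[target] == strategy[node]:
        return False
    # Fermi-type imitation probability (Eq. (2)); sign convention assumed
    w = 1.0 / (1.0 + np.exp((payoff[node] - payoff[target]) / kappa))
    if rng.random() < w:
        strategy[node] = strategy[target]
        return True                       # caller should now rewire this node's LRL
    return False
```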
with probability , the density of cooperation in the network, node chooses a new node randomly. with the remaining probability, node chooses a new node according to the nodes' payoffs. in the preferential rewiring rule, the node rewires its link according to the payoffs of all nodes in the network, where is the probability of node rewiring its link to and denotes all nodes in the graph. is used to tune the effect of the payoff: indicates that the payoff has no effect and the nodes rewire their links randomly, while for the node prefers to connect to nodes with larger payoffs, so it also looks like a kind of preferential selection rule. we run our simulations with varying and for fixed and system size . all the results in this paper are obtained by averaging over different monte carlo (mc) simulation trials. we start with each node linking its lrl to another node randomly with equal probability, and with a random initial state with as the initial cooperation density. the players update their strategies in random sequence. in every mc step, all nodes have one chance to change their strategies and rewire their links. figure [fig1] (color online): frequency of cooperators for different as functions of the advantage of defectors . figure [fig2] (color online): frequency of cooperators evolving with for systems with different parameters in pdgs. figure [fig1] shows the frequency of cooperators in our model as a function of for different . similar to evolutionary games on regular networks, we also find two thresholds in our model. full cooperation is achieved if does not exceed the threshold . for , cannot resist the temptation of and cannot survive in the network. in the region , and can coexist in the network. compared with the case of , the position of does not change with . however, affects conspicuously. the probability of a node using preferential selection to rewire its lrl is . therefore does not work for close to or close to . when , the qualitative results remain unaffected in that decreases monotonically with . when , there exists a region where cooperation is obviously promoted. this promotion starts at ( ) and the region enlarges with increasing , but the effect of the promotion does not increase with . we observe that does not change at for , , and . actually, the transition is caused by the change of network structure; we will discuss it in the next subsection. in order to discuss how promotes cooperation in the promotion region, we present the time evolution of in fig. [fig2] for fixed with different values. the red, blue, and black lines are the averages of trials for , , and respectively. the green one is the time series of a single trial contributing to the black line. for and , decreases with time to its stationary state quickly. as shown in fig. [fig1], for it is a little higher than for . however, for , first decreases as for , and then the evolution of the network drives to increase with time to . considering that the black line is the average of trials, we believe the green line in fig. [fig2] contains more details of the evolution. in the early stage of the green line, decreases to a temporarily stable state in a manner similar to, but a little larger than, the case . however, at , there is a sharp increase in the green line from about to , which is also the final level of the average result (the black line). it means that the gradual increase of the black line is caused by averaging over similar sharp increases occurring at different times.
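the rewiring move described at the beginning of this subsection (eq. [eq3]) can be sketched in the same spirit. the handling of excluded self-connections and duplicate links, and the treatment of zero payoffs, are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def rewire_lrl(node, lrl_target, payoff, neighbors, rho_c, alpha=1.0):
    """Rewire `node`'s single adjustable long-range link.
    payoff: numpy array of current total payoffs; lrl_target: mutable array/list."""
    n = len(payoff)
    forbidden = {node, lrl_target[node]} | set(neighbors[node])
    candidates = np.array([v for v in range(n) if v not in forbidden])
    if rng.random() < rho_c:
        # random rewiring, chosen with probability equal to the cooperator density
        new_target = rng.choice(candidates)
    else:
        # preferential rewiring: probability proportional to payoff**alpha (Eq. (3));
        # alpha = 0 reduces to uniform choice, as stated in the text
        w = np.maximum(payoff[candidates], 0.0) ** alpha
        if w.sum() == 0.0:
            new_target = rng.choice(candidates)
        else:
            new_target = rng.choice(candidates, p=w / w.sum())
    lrl_target[node] = new_target
```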
in this model the behavior of and the evolution of the network structure are equally important: the evolution of the network structure results in the transition of . in order to describe the network structure, we first present the degree distribution in fig. [fig3]. panel (a) corresponds to the stable state of the red line in fig. [fig2]. here the preferential rewiring does not operate and all lrls select their target nodes randomly. considering that self-connections are forbidden, and that here is usually large enough, one can get . figure [fig3](b) is for the stable state of the blue line in fig. [fig2]; in (b) is similar to that of (a), but the largest degree is . figure [fig3](c) is for the stable state of the green line in fig. [fig2], and (d) is for the green line after the sharp increase. both (c) and (d) in fig. [fig3] are degree distributions of a single trial, not the cumulative stationary degree distribution over different trials. comparing (c) with (d) helps to uncover the reason for the sharp increase in fig. [fig2]. in fig. [fig3](d), there is only one node whose degree is larger than half of the other nodes connected to it. we name this node, which has the largest degree in the network, the hub node (hn). as presented in fig. [figbc], the other nodes can be divided into two types: the nodes that connect their lrls to the hn and the nodes that do not. we name the first type an and the second type bn; their numbers are and , respectively. figure [fig3] (color online): the cumulative stationary degree distributions in pdgs. figure [figbc] (color online): illustration of hn, an, and bn. each node in the network has four fixed links, and five red nodes wire their lrls to the blue one. in order to make the ans and the hn prominent, we do not draw the lrls of the other nodes. the blue node has the largest degree in this network, so the blue node is the hn, the red ones are ans, and the others are bns. the arrows indicate that these lrls are out-links for the ans and in-links for the hn. now, we examine the details of the network after the sharp increase in the green line ( , ) of fig. [fig2]. note that the strategy of the hn is always and the strategy of most ans is also . before the sharp increase, or in the case of other parameters without a sharp increase, the hns also prefer . this phenomenon is also observed in some other networks with hub nodes. more detailed information about our model is listed in table [tab1]. in table [tab1], is the cooperation density of the ans and is that of the bns. almost all ans choose the strategy , so we do not need to present the mean payoff of ans with . what is more, it is found that is close to the case of ( for , for , and for ). it means that the existence of the ans does not affect the strategy density of the bns. as discussed in ref. , an an can resist the temptation of by mimicking the strategy of the hn. after the sharp increase, the probability that an an mimics the strategy of the hn is much larger than that of mimicking other neighbors. the hn's payoff is also larger because it has a lot of in-link lrls. we will discuss the details of these probabilities in the next subsection. on the other hand, only a node with strategy can grow into an hn. if the hn is occupied by , the hn will get a higher payoff temporarily. however, as we discussed above, the ans will follow the hn's strategy and the strategy of the ans will become .
then the hn cannot earn payoff from its in-link lrls. once the hn cannot earn enough payoff, both preferential and random rewiring will drive the ans to rewire their lrls to other nodes, and a new hn with strategy will appear in the network. so it seems that strategy is the better choice for the hn, because it can earn a stably higher payoff. from table [tab1], we also find that the bns with earn the highest payoff, and the payoff of the bns with is close to the payoff of the ans. however, although the mean payoff of the bns with is the highest, in fact the density of cooperators does not decrease with time; it shows that the probability of mimicking strategy and that of mimicking strategy are the same. table [tab1]: the detailed information of prisoner's dilemma games ( ). each horizontal line in fig. [fig4] presents a snapshot of the network. we arrange these snapshots in time from top to bottom to show how the ans evolve with time, so we can depict every player's strategy in the network and observe the evolution of these strategies. the right panel corresponds to the same time as the left one. there is also a sharp increase of at the same time, like the green line in fig. [fig2]. at , about mc steps before the transition, increases gradually from about to . after the sharp increase, still increases gradually to the final stable state. moreover, before the sharp increase happened, one can observe many black blocks (the upper part of the left panel in figure [fig4]). it means the model has a feature similar to the pdg on regular networks: the nodes tend to gather into blocks to resist the . these blocks start from a few , maybe three or more, that happen to be close to each other in the network. a block is then established and grows by changing its neighbors' strategies. after some mc steps, the block shrinks and finally disappears. after the sharp increase of , there are very few red dots ( in an). the green strip ( in an) indicates that the ans and bns are very stable in the network: the probability of an an changing into a bn is very small, and vice versa. based on the above results, the effect of differs for various and . after the sharp increase, the nodes in the network can be divided into an and bn. almost all ans are and the density of among the bns is close to the case of . so we can use mean field theory and some basic features of the stable state to explain why the sharp increase happened. after the sharp increase in fig. [fig3], the system reaches the stable state gradually. then we have or , where is the average number of nodes changing from an to bn in one mc step and is that changing from bn to an. considering that there are very few in an, we assume that is only caused by and random rewiring; we neglect the preferential rewiring here, because the contribution of preferential rewiring is only about of that of random rewiring. then we get , where denotes the change that happens through random rewiring, and is the mean degree of nodes in the network. we neglect the exclusion of self-connections and multi-connections and we have here . because the ans have the same strategy as the hn, an an only mimics the strategy of its other neighbors. the big fraction is the probability that an an does not choose the hn to mimic.
the last is the probability that the mimicked target has strategy . we assume is the probability of success in the mimicking . then is more complicated . we assume that bns change into ans because they use the preferential rewiring . the contribution of the random rewiring is about of that of the preferential rewiring , so we neglect it and derive the following formula ,

q_{b\rightarrow a } ( 1-p_c ) n_b \, \frac{(n_a+5)^{\alpha}}{(n_a+5)^{\alpha}+n_a ( 2 + 4p_c)^{\alpha}+ p_{bc } n_b \left ( 1+(5+\frac{n - n_a}{n})p_c \right ) ^{\alpha } + n_b ( 1-p_{bc } ) \left ( b(5+\frac{n - n_a}{n})p_c \right ) ^{\alpha } } , \label{eq5}

where denotes the preferential rewiring , and is the probability that a bn with strategy mimics its neighbor and that it tries to mimic its neighbor . the fraction here is the probability that a node rewires to the hn using the preferential rewiring . we assume is the probability of success in the mimicking . now , one can get from the simulations of and , and then we know how evolves with time by using . if , will not change with time , and means that will increase in the next mc step . however , we do not know and yet . given that the mean payoff of with is larger than the mean payoff of in table [ tab1 ] , we conjecture and . indeed , we find that fits our model . we will take with and as examples . for and , we get from the simulation . fig . [ fig5 ] plots for different . for , there are two stable points , at and , and one unstable point at . for , there is only one stable state at . the unstable point decreases as increases and coincides with the first stable point at . however , even for , the maximal degree in the network is only about , so the first stable point can be discarded . when the unstable point crosses , or when there is only one stable point , the system will reach the second stable point . ( color online ) with various . panel ( b ) is an enlargement of ( a ) . the coevolution of dynamics and network structure is rapidly becoming an important topic in evolutionary games . it captures more details of the social interactions in the real world . in this paper , we have built a coevolution model of the pdg and the network structure . each node in the network has four fixed local links and one adjustable lrl . when a node changes its strategy , it rewires its lrl to another node according to the nodes ' payoffs and the density of cooperation , and we introduce a parameter to denote the effect of the payoff . many earlier works have also shown that adaptive networks can enhance cooperation . all these enhancements are caused by the emergence of cooperators with large degree in the network . in , the cooperation is very sensitive to the plasticity parameter , and only the adaptive network can enhance the cooperation . in our model , the players rewire their lrls for any , but the cooperation is enhanced only in the case of , and this enhancement is obvious for in a certain region of . however , our results show that the enhancement of cooperation only happens when the network structure properties change .
in our model , for , the node will also rewire its lrl , but the network properties will not change and the cooperation level will not be enhanced . the cooperation is enhanced only when the node rewires its lrl according to the payoff . similar phenomena were also observed in our simulations of the snowdrift game ( sg ) . we found that the sg is more sensitive to than the pdg and that the obvious enhancement occurs for a smaller . so we conjecture that the coevolution of network structure and game is an important mechanism for maintaining cooperation in real society . this is different from the results in , in which cooperation always dominates in the adaptive network , while the increase of cooperation in our model is limited . this is caused by two reasons : ( 1 ) with probability , the player uses the random rewiring ; ( 2 ) the existence of the four fixed links in the network can be regarded as a noise that hinders the preferential selection . in , the authors discussed leaders and global cascades . if every node could change its strategy with a smaller probability , global cascades of cooperation would also be observed in our model . the analysis in this paper is based on the balance of ans and bns . however , when the sharp increase does not happen , there may exist more than one hn , and the hn may change from one node to another frequently . because of the absence of information about the spatial structure in eq . [ eq2 ] , the presented analysis of this model is then no longer very precise . actually , it is impossible to include all the details in the analysis . we only retain the main factors of the model , and this works well enough to explain its main features . this work was supported by the national natural science foundation of china under grant no . and by the fundamental research fund for physics and mathematics of lanzhou university . this study was also supported by the high - performance computing program of lanzhou university .
_ most papers about evolutionary games on graphs assume a static network structure . however , social interactions can change the relationships between people , and the changing social structure will in turn affect people 's strategies . we build a coevolutionary model of the prisoner 's dilemma game and the network structure to study this dynamic interaction in the real world . based on an asynchronous update rule and monte carlo simulations , we find that , when players prefer to rewire their links to richer nodes , the cooperation density increases . the reason for this is analyzed . _
the starting point for the theory proposed here is simple but somewhat abstract . instead of the five equations and five unknowns that the navier - stokes - fourier formulation employs to describe a single - component fluid in three spatial dimensions , this formulation requires six ( although the number of equations / unknowns drops back down to five for the linearized problem ) .the extra equation governs a new quantity , which i call the mechanical mass density and label as .it is a book - keeping variable to be used when defining inertial quantities such as linear and angular momentum and kinetic energy .the mechanical mass density is assumed to satisfy a continuity equation with nothing but convection to move it , while the ( actual ) mass density is governed by a balance law that includes a mass diffusion term in addition to a convection term . behind the equations of motion that follow ,a philosophy has been adopted to keep thermodynamics and mechanics separate in some key respects . the non - convective mass flux , , is treated as one would the non - convective heat or internal energy flux , not , in other words , as a momentum involving a diffusion velocity .it is assumed to be dissipative and , consequently , its form is chosen based on non - equilibrium thermodynamic principles .on the other hand , the mechanical mass density may be viewed as the background mass density the material would have if it were somehow in a state of thermodynamic equilibrium with only the velocity field directly affecting it through the laws of continuum mechanics .the balance laws stemming from these ideas are proposed in [ secbl ] . in them , there appears a local continuum mechanical velocity , , which is used in the convection terms and in the definitions of the momentum and kinetic energy densities .also , in these inertial density definitions , the mechanical mass density is employed instead of the actual mass density . finally , when postulating a form for the total pressure tensor appearing in the momentum equation , there is no allowance for mass diffusion effects , nor are mass diffusion and external body forces allowed to affect one another directly .the above statements warrant clarification : i am not claiming there to be an absence of inertial effects arising from the mass diffusion . after all , one may define momentum and kinetic energy densities involving the actual mass density or the total mass flux and then postulate balance laws for the time rate of change of those quantities .i prefer not to work with these , however , as their balance laws are complicated by cross - effects . within my formulation ,the velocity is not responsible for the overall mass flow . instead, it is seen to govern , through convection , only the mechanical part of the flow , e.g. that which is caused by pressure disturbances , moving boundaries , gravity , etc . , whereas the diffusional part of the mass flow is handled separately in accordance with non - equilibrium thermodynamics . with this in mind ,the balance laws given in [ secbl ] for the linear momentum , angular momentum , and kinetic energy densities are understood to be laws for the _ mechanical part _ of the linear momentum , angular momentum , and kinetic energy densities .there are some interesting benefits to this approach .( 1 ) it circumvents the angular momentum conservation issues and other concerns raised in ttinger et al . 
, ( 2 ) because the diffusional part of the mass flux can be made to cancel with the convective part , it allows us to model an impermeable boundary with a non - zero normal velocity at that boundary see [ soundbc]as arising in the case of non - infinite acoustic impedance at a sound barrier see morse and ingard , and ( 3 ) after appropriate constitutive equations and transport coefficients are chosen , it leads to the case of pure diffusion via fick s law when mass density gradients are present yet conditions have been chosen to eliminate all mechanical forces see [ diffusion ] . in [ secce ] and appendix [ applc ] ,the principles of irreversible thermodynamics are used to select general forms for the dissipative fluxes appearing in the balance laws .the navier - stokes - fourier formulation possesses two dissipative fluxes , the heat flux and the viscous pressure tensor , whereas mine at the outset possesses four : the aforementioned two , plus a non - convective mass flux and an energy flux due to chemical work .expressions for the chemical energy and heat fluxes are chosen in strict analogy to the first law of equilibrium thermodynamics . with standard techniques ,my formulation produces linear constitutive laws that are expanded in terms of the usual affinities involving gradients of the velocity and temperature , as in the case of the navier - stokes - fourier formulation .however , there arise additional terms involving gradients of the chemical potential .since the non - convective internal energy flux may be expressed as the sum of those due to heat and chemical work , my formulation is found to have three independent constitutive laws and six transport coefficients ( two for each law ) , five of which are independent if onsager reciprocity is enforced .it may be difficult , if not impossible , to devise experiments for measuring each of my general transport coefficients individually .therefore , the proposed framework would not be very convenient or useful without a transport model to guide specific choices for the formulas of these transport coefficients .many thermodynamics textbooks , e.g. reif ( * ? ? ?* ch.12 ) , present a simple particle diffusion model as an introduction to non - equilibrium transport processes , the general idea being that the particles of a gas diffuse via random collisions with one another , and they carry with them extensive quantities , such as momentum and energy , which are exchanged during the collisions .these types of models , constructed for a classical monatomic ideal gas , are found to be qualitatively useful but quantitatively inaccurate , and thus they are immediately discarded for more complicated procedures based on the boltzmann equation . however , within the setting of my general formulation , it is of interest to revisit these ideas of particle diffusion .i claim that my construction makes such an elementary picture a viable way of describing non - equilibrium transport in fluids .moreover , the ideas behind this model are not constrained to the very restrictive case of a classical monatomic ideal gas . in [ sectc ], a transport model is proposed in which diffusing groups of particles have random encounters with other groups , whereby they exchange not only momentum and energy , but also mass .this gives rise to formulas for all six transport coefficients involving just one diffusion parameter , ., there is a little bit of flexibility regarding this issue as discussed in appendix [ tmd ] . 
furthermore , onsager reciprocity and the second law of thermodynamics are both automatically satisfied , the latter requiring only that the fluid have intrinsic equilibrium thermodynamic stability and . the heat flux law arising from such a model turns out to be different from fourier 's , and the diffusion transport assumption results in a new form for the viscous pressure as well . of course , it is important to be able to relate to the typical transport parameters , the bulk and shear viscosities and the heat conductivity , in problems for which the navier - stokes - fourier formulation is known to give the right answer . this is easily done by using sound attenuation data , for example , as shown in [ sound ] . an equally important issue is that there must be a way of experimentally distinguishing between the two theories for a problem where the navier - stokes - fourier equations can be shown to fail . light scattering , addressed in [ ls ] , may prove to be such an experiment , and it is currently being conducted . when considering the dimensionless knudsen number , , defined as the ratio of the fluid 's mean free path length to some characteristic length scale in a particular problem , it should be emphasized that , like navier - stokes - fourier , mine is a formulation intended to be used in the hydrodynamic , or small , regime . i do not make any general claims that my continuum formulation works well into the more rarefied gas regime , although experimental data for sound propagation in the noble gases show that mine performs significantly better than navier - stokes - fourier there ( see figure [ fig1 ] ) . the previously mentioned light scattering experiment is being designed to study gases fully in the hydrodynamic regime which , up until now , has either been ignored or not adequately resolved in all other such experiments . sound propagation , light scattering , and several other problems are explored in [ secprobs ] . as a general rule , one finds that the formulas derived from my equations tend to be , in many ways , much simpler and easier to interpret than those obtained from the navier - stokes - fourier theory . in fact , for some problems , when only approximate answers may be computed with the navier - stokes - fourier equations , exact solutions result from my theory . another convenience is that , as demonstrated in appendix [ difp ] , it is quite easy to obtain values for the diffusion parameter , especially when compared to the task of measuring the three transport parameters required for the navier - stokes - fourier formulation . despite these appealing features , there are two aspects of my formulation that may appear troubling : ( 1 ) the mechanical mass density is not a directly measurable quantity , and ( 2 ) my formulation is incompatible with the kinetic theory of gases .
regarding the first item ,it should be observed that there are many instances of well - accepted macroscopic theories that involve computationally useful , but not directly measurable , quantities .the admitted difference here is that one is accustomed to conceiving of the mass density as an absolute concept with no ambiguity built into its definition and , therefore , seeming to present no need for alternate definitions .it is typical for more licence to be granted when defining new velocities and energies , for example .although the mechanical mass density is an abstract concept , my theory uses it not in place of but rather in conjunction with the actual mass density , which is a quantity that is measurable and tied , in an averaged sense , to the masses of the molecules comprising the fluid .the second of the concerns mentioned above carries more significance .the incompatibility is evident from the fact that the navier - stokes - fourier equations may be derived from the boltzmann equation via the chapman - enskog procedure for the special case of a classical monatomic ideal gas ( see huang ( * ? ? ?* ch.6 ) , for example ) . with this procedure ,the navier - stokes - fourier formulation is the first - order approximation in the parameter .it must be emphasized that the difference between navier - stokes - fourier and my theory is _ not _ a higher - order effect .therefore , mine may not be developed from the burnett equations or grad s moment method , for example .the connection between navier - stokes - fourier and kinetic gas theory is generally interpreted to mean that both are correct , but perhaps both are missing the same concept .i realize that without proof , this seems like a heretical claim .however , if the results of the aforementioned light scattering experiment support my theory , then this would necessitate a reconception of kinetic gas theory in the hydrodynamic regime .i have not been alone in my efforts to include an extra diffusive mechanism as part of the fluid equations of motion . for early examples ,see slezkin , and vallander , and for later examples , see klimontovich , , brenner , , , , and dadzie et al .my formulation proves to be significantly different from each of these treatments .to facilitate the subsequent development , let us first mention a few notational items .the number of lines under a symbol indicates its tensor order : a scalar ( zeroth - order tensor ) has no underlines ( e.g. temperature , ) , a first - order tensor has one underline ( e.g. velocity , ) , a second - order tensor has two underlines ( e.g. pressure tensor , ) , etc .a tensor of arbitrary order is indicated in bold .the symbol is used to represent the second - order identity tensor and and to denote the zero vector and zero second - order tensor , respectively .all of the tensor operators used in this paper are defined in appendix [ appto ] .extensive quantities , i.e. 
ones that are additive over composite subsystems , are denoted using capital letters . below is a list of the extensive quantities considered here : \overline{M} = mechanical mass , M = mass , V = volume , \underline{P} = linear momentum , \underline{L} = angular momentum , E = total energy , U = internal energy , K = kinetic energy , W = potential energy , S = entropy . the densities ( amounts per volume ) corresponding to the extensive quantities above are denoted using lower - case letters . for example , is the angular momentum density , is the mechanical mass density , etc . for any extensive quantity , , its amount per unit mechanical mass is denoted as . for example , is the internal energy per mechanical mass . note that , and the intensive quantities considered in this paper are : T = absolute temperature , p = thermodynamic pressure , \mu = chemical potential ( per mass ) . the remaining symbols to be defined are : \underline{x} = position , t = time , \underline{v} = continuum mechanical velocity , together with a few others that are defined as they arise in the text and listed in appendix [ symb ] . the general balance law for an extensive quantity , , is given by , where is the local -density which is assumed to depend on and , is the total -flux , and denotes the volumetric -production / destruction rate . let us decompose the total flux into its non - convective and convective parts . in my proposed continuum formulation for a single - component fluid , i begin by assuming the local balance laws for the mechanical mass , mass , momentum , total energy , and entropy to be given respectively by equations ( [ bl1])-([bl5 ] ) . note that equation ( [ bl1 ] ) is a continuity equation for the mechanical mass density and that equation ( [ bl2 ] ) governing the actual mass density contains an additional non - convective mass flux . in momentum equation ( [ bl3 ] ) , the momentum density is assumed to be , denotes the total pressure tensor , i.e. the non - convective momentum flux , , and is defined to be the external body force per mechanical mass , assumed conservative , which in this setting means to satisfy with . in the total energy equation ( [ bl4 ] ) , there appear , as usual , the heat flux , , and the energy flux arising from mechanical work , , but i have also included an energy flux due to chemical work , .
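for orientation , a schematic rendering of these balance laws , consistent with the verbal description above , is the following ; the flux and production symbols \underline{j}_{a} , \sigma_{a} , and \underline{j}_{m} are placeholders chosen here for illustration and need not coincide with the labels used in the displayed equations :

\begin{aligned}
\frac{\partial a}{\partial t} &= -\nabla\cdot\left(\underline{j}_{a} + a\,\underline{v}\right) + \sigma_{a} && \text{(generic balance of an extensive density } a\text{)} , \\
\frac{\partial \overline{m}}{\partial t} &= -\nabla\cdot\left(\overline{m}\,\underline{v}\right) && \text{(purely convective continuity equation, cf. ( [ bl1 ] ))} , \\
\frac{\partial m}{\partial t} &= -\nabla\cdot\left(m\,\underline{v} + \underline{j}_{m}\right) && \text{(actual mass balance with a non - convective flux, cf. ( [ bl2 ] ))} .
\end{aligned}

the second and third lines make explicit the point emphasized in the text : only the velocity field convects the mechanical mass density , whereas the actual mass density additionally diffuses through the non - convective flux \underline{j}_{m} .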
based on equations ( [ bl1])-([bl5 ] ) ,there are a few others which may be derived .defining the kinetic energy density as it is shown in appendix [ appco ] that equations ( [ bl1 ] ) and ( [ bl3 ] ) may be used along with a few tensor identities to compute the following kinetic energy balance law: or using ( [ bl6 ] ) in the last term, also , since ( [ nt2 ] ) implies that the potential energy density may be expressed as one may employ ( [ bl1 ] ) and assumption ( [ bl6.5 ] ) , to obtain the following equation for the potential energy: therefore , by defining the internal energy density , , via one finds that ( [ bl4 ] ) , ( [ bl8 ] ) , and ( [ bl9 ] ) , together with identity ( [ to25 ] ) , yield the internal energy balance law, in view of the above , the non - convective internal energy flux is given by next , let us define the angular momentum density to be and use ( [ bl3 ] ) to compute the angular momentum balance as } { \partial t } & = \underline{x}\times\frac{\partial\left ( \overline{m}\underline{v}\right ) } { \partial t}\nonumber\\ & = -\underline{x}\times\left [ \nabla\cdot\left ( \underline{\underline{p}}+\overline{m}\underline{v}\,\underline{v}\right ) \right ] + \underline{x}\times\left ( \overline{m}\underline{f}^{\left ( \overline{m}\right ) } \right ) .\label{bl13}\ ] ] for the angular momentum to be conserved, } { \partial t}=-\nabla\cdot\left [ \underline{x}\times\left ( \underline{\underline{p}}+\overline{m}\underline{v}\,\underline{v}\right ) \right ] + \underline{x}\times\left ( \overline{m}\underline{f}^{\left ( \overline{m}\right ) } \right ) \label{bl14}\ ] ] must be satisfied , and in appendix [ appco ] , this is demonstrated to be true if and only if the pressure tensor is symmetric: henceforth , let us assume ( [ bl15 ] ) to hold . in preparation for the next section , definition ( [ to21 ] ) of the convective derivative may be used to rewrite some of these balance laws . employing the relation ( [ nt1 ] ) and identity ( [ to25 ] ) , one may write ( [ bl1 ] ) as also , using ( [ nt2 ] ) , ( [ bl1 ] ) , and ( [ to25 ] ), one may express equations ( [ bl2 ] ) , ( [ bl11 ] ) , and ( [ bl5 ] ) as and where ( [ bl15 ] ) has been assumed in equation ( [ bl19 ] ) . in order to compare the foregoing balance laws with those typically postulated when deriving the navier - stokes - fourier formulation ,see de groot and mazur ( * ? ? ?* ch.ii ) , for example .let us study phenomena close enough to equilibrium so that classical irreversible thermodynamics is applicable . below , this formalism , which is detailed in de groot and mazur ( * ? ? ?* ch.iii and iv ) , is used to obtain linear non - convective constitutive equations for the fluxes . for a single - component fluid ,i propose an equilibrium fundamental relation for the entropy per mechanical mass of the form, note the following equilibrium thermodynamic relationships, let us assume the local equilibrium hypothesis , which implies that if represents any of the thermodynamic parameters mentioned above , then its corresponding local variable can be defined as in particular , the fundamental equation ( [ ce1 ] ) may be written locally as which , upon taking its total differential , yields provided that the differentiation is performed in a reference frame for which is constant . 
therefore , we may take the convective derivative of the above and multiply through by to obtain substituting equations ( [ bl17])-([bl20 ] ) into the above and using identity ( [ to28 ] ) , one arrives at the following expression for the volumetric rate of entropy production: in the differential form of the first law of equilibrium thermodynamics , the change in energy due to heat and chemical work are represented by and , respectively . with this in mind , let us take and by employing these in equation ( [ ce9 ] ) and using ( [ bl12 ] ) and ( [ to25 ] ) , one finds the second law of thermodynamics requires that is always satisfied , and as shown in appendix [ applc ] , this motivates the following linear constitutive equations: and in which case , ( [ ce9.3 ] ) becomes \left ( \nabla t\right ) \cdot\left ( \nabla\mu\right ) + \frac{d_{m}}{t}\left\vert \nabla\mu\right\vert ^{2}.\label{ce15}\ ] ] also in appendix [ applc ] , it is shown that onsager reciprocity requires the coefficients to satisfy the relation, assuming this , one may eliminate , say , in equation ( [ ce15 ] ) to find from the above expression , it may be shown that condition ( [ ce10 ] ) for the second law of thermodynamics is always satisfied if let us summarize these findings so far .my general formulation , which i refer to as the -formulation , may be written as equations ( [ bl1])-([bl3 ] ) and ( [ bl11])/([bl12]), with constitutive equations ( [ ce12])-([ce14]), where the transport coefficients , , , , , , and are assumed to satisfy ( [ ce16 ] ) , ( [ ce19 ] ) , and ( [ ce20 ] ) . in the next section ,i introduce a simple diffusion transport model , which yields equations for all six transport coefficients that depend on only one diffusion coefficient . the -formulation may be shown to satisfy galilean invariance and the conservation of angular momentum .furthermore , the constitutive laws each satisfy the principle of objectivity required for material frame indifference . that is to say , all of the fluxes in ( [ smf2 ] ) transform in the expected way under time - dependent rigid body translation and rotation . using the notation from [ secnt ], the navier - stokes - fourier formulation may be written as with constitutive equations, where is the fourier heat conductivity , and are the navier - stokes bulk and shear viscosities , respectively , and represents the specific body force .it is clear that if one were to assume , , , and in the -formulation , then and would be the same quantity , resulting in the navier - stokes - fourier formulation above .therefore , the navier - stokes - fourier formulation may be considered as a special case of the general -formulation .in this section , let us make the following assumption , which i refer to as the diffusion transport assumption : the extensive quantities , mass , momentum , and internal energy , which undergo dissipative transport , do so objectively via diffusion .that is to say their corresponding constitutive equations are each assumed to have the form , or its objective part if this quantity is not material frame indifferent . for the mass , the above assumption implies detailed in callen , for example , legendre transformations may be used to recast this in the grand canonical description , whereupon , ( [ tc2 ] ) becomes , \label{tc3}\ ] ] with specific enthalpy , isothermal compressibility , and thermal expansion coefficient . 
comparing ( [ tc3 ] ) with ( [ ce12 ] ) ,one obtains the following expressions for the mass transport coefficients: and similarly , for the internal energy , the diffusion transport assumption yields which , in the grand canonical description , becomes{c}\left [ c_{v}+\frac{m\kappa_{t}}{t}\left ( h^{\left ( m\right ) } -\frac{t\alpha_{p}}{m\kappa_{t}}\right ) \left ( h^{\left ( m\right ) } -\frac{t\alpha_{p}}{m\kappa_{t}}-\mu\right ) \right ] \nabla t+\\ m\kappa_{t}\left ( h^{\left ( m\right ) } -\frac{t\alpha_{p}}{m\kappa_{t}}\right ) \nabla\mu \end{array } \right\ } , \label{tc7}\ ] ] where represents the isochoric specific heat per mass . comparison of ( [ tc7 ] ) and ( [ ce14 ] ) gives the internal energy transport coefficients, d\label{tc8}\ ] ] and next , let us examine the momentum under the diffusion transport assumption . using ( [ tc1 ] ) with , one finds or employing identity ( [ to25 ] ) and definitions ( [ to14 ] ) and ( [ to14.1]), with the same procedure as described in jou et al . , for example , one may demonstrate that the first term on the right - hand side of ( [ tc11 ] ) is objective . however , the second and third terms are not . requiring all constitutive equations to be material frame indifferent, one chooses the viscous pressure to be only the objective part of , i.e. which , upon comparison with ( [ ce13 ] ) , yields the following expressions for the viscosity coefficients: to summarize , the diffusion transport assumption gives rise to equations ( [ tc4 ] ) , ( [ tc5 ] ) , ( [ tc13 ] ) , ( [ tc14 ] ) , ( [ tc8 ] ) , and ( [ tc9 ] ) for the transport coefficients appearing in the -formulation: d\nonumber\\ d_{u } & = \frac{m^{2}\kappa_{t}}{\mu}\left ( h^{\left ( m\right ) } -\frac{t\alpha_{p}}{m\kappa_{t}}\right ) d.\nonumber\end{aligned}\ ] ] i refer to the above as the `` diffusion transport coefficients . ''it is easily shown that these satisfy onsager reciprocity condition ( [ ce16 ] ) and that if and there is intrinsic equilibrium thermodynamic stability , i.e. , then conditions ( [ ce19 ] ) and ( [ ce20 ] ) for the second law of thermodynamics automatically hold .substitution of ( [ tc15 ] ) into constitutive equations ( [ ce12])-([ce14 ] ) yields \label{tc16}\\ \underline{\underline{p } } & = p\underline{\underline{1}}-\overline{m}d\left ( \nabla\underline{v}\right ) ^{sym}\label{tc17}\\ \underline{q}_{u } & = -d\left\ { \begin{array } [ c]{c}\left [ mc_{v}+\frac{m^{2}\kappa_{t}}{t}\left ( h^{\left ( m\right ) } -\frac{t\alpha_{p}}{m\kappa_{t}}\right ) \left ( h^{\left ( m\right ) } -\frac{t\alpha_{p}}{m\kappa_{t}}-\mu\right ) \right ] \nabla t+\\ m^{2}\kappa_{t}\left ( h^{\left ( m\right ) } -\frac{t\alpha_{p}}{m\kappa_{t}}\right ) \nabla\mu \end{array } \right\ } .\label{tc18}\ ] ] of course , one may use the grand canonical formalism to transform constitutive equations ( [ tc16 ] ) and ( [ tc18 ] ) back so that they are , again , written in terms of the variables and . in doing so , we have our original postulations, forms ( [ tc19])-([tc21 ] ) display the fact that we have modeled the dissipative fluxes via diffusing groups of particles which transfer mass , momentum , and energy .notice that only one transport parameter appears . is not the self - diffusion coefficient , although these quantities are the same order of magnitude for gases ( see appendix [ difp ] ) . 
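for orientation , the single - parameter structure of the diffusion transport constitutive forms just summarized , in the spirit of the original postulations ( [ tc19])-([tc21 ] ) , can be sketched as follows ; \underline{j}_{m} is used here only as a placeholder for the non - convective mass flux , while the middle line reproduces the viscous pressure form visible in ( [ tc17 ] ) :

\begin{aligned}
\underline{j}_{m} &= -d\,\nabla m , \\
\underline{\underline{p}} &= p\,\underline{\underline{1}} - \overline{m}d\left(\nabla\underline{v}\right)^{sym} , \\
\underline{q}_{u} &= -d\,\nabla u ,
\end{aligned}

i.e. each dissipative flux is the single diffusion coefficient times minus the gradient of the corresponding density , with the momentum contribution restricted to its objective ( symmetrized ) part .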
the most prevalent transport parameter in the navier - stokes - fourier formulation is the shear viscosity . therefore , for convenience , let us define a dimensionless coefficient ( possibly a function of thermodynamic variables ) via . in order to match greenspan 's sound attenuation data in the hydrodynamic regime for classical monatomic ideal gases at room temperature , for example , we take ( see [ sound ] ) . in appendix [ difp ] , values of and for various types of gases and liquids are computed using ideas from [ sound ] , comparisons between and the self - diffusion coefficient are made , and the temperature , pressure , and mass density dependence of is examined . although it upsets the underlying simplicity of the diffusion transport assumption , one may be curious to see what happens when each of the diffusion processes involves a different coefficient . this idea is explored in appendix [ tmd ] . we have arrived at the following equations of motion for a single - component fluid near equilibrium and in the hydrodynamic regime : the -formulation balance laws ( [ smf1 ] ) together with the diffusion transport constitutive equations ( [ tc19])-([tc21 ] ) . substituting the constitutive equations into the balance laws , we have a system of four partial differential equations in the four unknowns , , , , and . in the above expression for the total pressure tensor , i have indicated that the equilibrium thermodynamic pressure , , should be written in terms of the variables and . for example , to model a classical monatomic ideal gas , one takes . if the fluid has intrinsic equilibrium thermodynamic stability , then it is a straightforward matter to use legendre transformations in order to replace the and -dependency in the above formulation with other thermodynamic variables . for example , using the helmholtz free energy formalism , we can recast the above in terms of the four unknowns , , , , and ; using the gibbs free energy formalism , we can recast the above in terms of , , , and ; and , as we have already seen , using the grand canonical formalism , we can recast the above in terms of , , , and . due to the nature of certain equations of state and/or boundary conditions , it is often more convenient to work with one of these other forms . i hope to have motivated the proposed formulation by the simple transport mechanism underlying its constitutive equations , one that is easily generalized to model more complicated problems such as multicomponent fluid mixtures ( see appendix [ appmc ] ) . of course , its utility nonetheless hinges on the answers to two important questions : ( 1 ) how adequately does this formulation perform in problems of fluid mechanics , and ( 2 ) are its predictions in the hydrodynamic regime quantifiably different from those found with the navier - stokes - fourier formulation ? next , let us briefly highlight some of the results that have been obtained so far . in all of the following sections but [ shock ] , problems that may be examined with low mach number linearizations are considered . defining the mach number to be the ratio of the characteristic speed in a particular problem to the fluid 's sound speed , the low mach number regime is taken to be that in which . a one - dimensional linear stability analysis is carried out in [ stability ] to show that my formulation is unconditionally stable provided that the diffusion parameter , , is positive .
in [ diffusion ] , a mass equilibration problem with no mechanical forces is studied in order to show that my formulation reduces to one of pure diffusion governed by fick s law .next , low amplitude sound propagation , which is the subject of [ sound ] , is demonstrated to be a way of relating my diffusion parameter , , to the transport parameters of the navier - stokes - fourier formulation by matching attenuations in the hydrodynamic ( small ) regime , thereby connecting the two theories in this sense .obviously , therefore , sound is not a good way to distinguish between my theory and navier - stokes - fourier .however , in [ ls ] , light scattering is shown to provide a way of testing the difference . the shifted brillouin peaks , which correspond to the sound ( phonon ) part of the spectrum , are virtually indistinguishable , as one would expect , but the central rayleigh peak predictions are significantly different . for example , in a classical monatomic ideal gas in the hydrodynamic regime , my theory predicts a rayleigh peak that is about taller and narrower than the navier - stokes - fourier theory ( but with the same area so that the well - verified landau - placzek ratio remains intact ) .previous experimenters , e.g. clark and fookson et al . , did not study gases fully in the hydrodynamic regime , , choosing instead to focus on more rarefied gases with knudsen numbers in the range , , which extends into the slip flow and transition regimes .therefore , in order to verify my theory , it is important to conduct a high - resolution light scattering experiment for a gas in the hydrodynamic regime .another striking difference in predictions arises from the non - linear steady - state shock wave problem considered in [ shock ] .it is well - known that for this problem , the navier - stokes - fourier formulation and all other accepted theories , for that matter produces normalized density , velocity , and temperature shock wave profiles that are appreciably displaced from one another with the temperature profile in the leading edge of the shock , velocity behind it , and density trailing in the back .my formulation , however , predicts virtually no displacement between these three profiles in the leading edge of the shock and much less pronounced displacements in the middle and trailing end of the shock .if an experiment can be devised to measure both the structure and position of the steady shock profiles of the mass density and , say , the temperature , then this would be another decisive test between my theory and all others .thermophoresis , or the down - temperature - gradient motion of a macroscopic particle in a resting fluid , is discussed in [ therm ] . for an idealized problem, it is shown that , through the steady - state balancing of the mechanical and diffusive terms of the mass flow , my theory provides a mechanism for thermophoresis that is not present in the navier - stokes - fourier formulation . in the latter type, a thermal slip boundary condition at the particle s surface is needed in order to model thermophoretic motion , whereas the former type may be used to describe this motion with a no - slip boundary condition , i.e. one in which the tangential velocity of the particle at its surface equals that of the fluid .the concept of thermal slip is based on kinetic gas theory arguments in the slip flow regime , appropriate for , yet thermophoresis is observed in gases for much smaller knudsen numbers in the hydrodynamic regime and also in liquids . 
therefore , there are obvious advantages to being able to model this problem without the thermal slip condition .another example in which we cancel the diffusional and convective parts of the mass flux at an impermeable boundary is considered in [ soundbc ] . here, we discuss acoustic impedance at a sound barrier , as presented in morse and ingard .in particular , it is demonstrated that my formulation allows an impermeable boundary with a non - zero normal velocity at that boundary as that which occurs in the case of non - infinite impedance .there is one topic that remains to be introduced . as a step towards bridging the gap between microscopic and macroscopic scales in my theory ,the subject of fluctuating hydrodynamics is addressed in [ flh ] , where it is shown that the thermodynamic interpretation of a continuum mechanical point ( i.e. a point occupied by matter and moving with velocity ) stemming from my formulation is fundamentally different from that of navier - stokes - fourier . that is , in the navier - stokes - fourier theory , a continuum mechanical point is viewed as a thermodynamic subsystem in contact with its surrounding material which acts as a heat reservoir ; whereas in my theory , the surrounding material acts not only as a heat but also a particle reservoir .although i did not explicitly demonstrate it in this paper , the correlation formulas in [ flh ] may be used to derive the light scattering spectra which follow in [ ls ] .let us consider the cartesian one - dimensional problem in which variation is assumed to be in the -direction only with as the only non - zero component of the velocity and there are assumed to be no body forces .if we use the helmholtz free energy formalism to recast the -formulation ( [ smf1 ] ) with diffusion transport constitutive equations ( [ tc19])-([tc21 ] ) in terms of the variables , , , , and , and then linearize about the constant equilibrium state, via the expansions, where is a small dimensionless parameter proportional to the mach number , then we arrive at the following system of linear equations: where the subscript `` '' indicates that the parameter is evaluated at the constant equilibrium state ( [ c1 ] ) . note that for this linearized problem , the mechanical mass equation ( [ c6 ] ) may be uncoupled from the rest of the system , ( [ c7])-([c9 ] ) . postulating a solution,{c}m_{1}\\ v_{x,1}\\ t_{1}\end{array }\right ] , \ ] ] to ( [ c7])-([c9 ] ) that is proportional to for real and complex , one obtains the dispersion relation, in the above , we have employed the equilibrium thermodynamic relationships, where is the adiabatic sound speed and represents the isobaric to isochoric ratio of specific heats . equation ( [ c11 ] ) may be solved for to obtain the three exact roots, clearly , if is satisfied , then the real parts of , , and , are negative for all , resulting in the unconditional stability of linearized system ( [ c7])-([c9 ] ) . 
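a quick numerical way to check a statement of this kind is to evaluate the roots of the cubic dispersion polynomial over a range of wavenumbers and confirm that no root acquires a positive real part . the sketch below does this for placeholder coefficients of a generic damped - acoustic cubic with sound speed c0 , heat - capacity ratio gamma , and diffusivity d ; it does not reproduce the exact coefficients of ( [ c11 ] ) .

```python
# illustrative check that a cubic dispersion relation
#   omega**3 + a2(kappa)*omega**2 + a1(kappa)*omega + a0(kappa) = 0
# has no root with positive real part for any wavenumber kappa.
# the coefficient functions are placeholders for a generic damped-acoustic fluid;
# they are not the coefficients of equation (c11).
import numpy as np

c0 = 320.0         # adiabatic sound speed, m/s (placeholder value)
gamma = 5.0 / 3.0  # ratio of specific heats (placeholder value)
d = 2.0e-5         # diffusion parameter, m^2/s (placeholder value)

def dispersion_coefficients(kappa):
    """coefficients [1, a2, a1, a0] of the placeholder cubic in omega."""
    a2 = 2.0 * d * kappa**2
    a1 = c0**2 * kappa**2 + (d * kappa**2) ** 2
    a0 = (c0**2 / gamma) * d * kappa**4
    return [1.0, a2, a1, a0]

def max_growth_rate(kappas):
    """largest real part of any dispersion root over the supplied wavenumbers."""
    return max(np.roots(dispersion_coefficients(k)).real.max() for k in kappas)

kappas = np.logspace(0, 6, 200)        # wavenumbers from 1 to 1e6 rad/m
print(max_growth_rate(kappas) <= 0.0)  # expect True whenever d > 0
```

the placeholder coefficients satisfy the routh - hurwitz conditions for every positive wavenumber , so the printed result is true ; the same kind of sweep can be repeated with the actual coefficients of ( [ c11 ] ) once they are written out .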
for comparison ,carrying out a similar procedure with the navier - stokes - fourier formulation ( [ smf3])/([smf4 ] ) yields the linearization,{c}-\frac{1}{m_{0}}\left ( \frac{1}{m_{0}\kappa_{t,0}}\frac{\partial m_{1}}{\partial x}+\frac{\alpha_{p,0}}{\kappa_{t,0}}\frac{\partial t_{1}}{\partial x}\right ) + \\\frac{1}{m_{0}}\left ( \zeta_{ns,0}+\frac{4}{3}\eta_{ns,0}\right ) \frac{\partial^{2}v_{x,1}}{\partial x^{2}}\end{array } \right ] \label{c16.4}\\ \frac{\partial t_{1}}{\partial t } & = \frac{k_{f,0}}{m_{0}c_{v,0}}\frac{\partial^{2}t_{1}}{\partial x^{2}}-\frac{t_{0}\alpha_{p,0}}{m_{0}\kappa_{t,0}c_{v,0}}\frac{\partial v_{x,1}}{\partial x},\label{c16.5}\ ] ] with the dispersion relation,{c}\omega^{3}+\left [ \frac{\zeta_{ns,0}}{m_{0}}+\left ( \frac{4}{3}+e\!u_{0}\right ) \frac{\eta_{ns,0}}{m_{0}}\right ] \kappa^{2}\omega^{2}+\\ \left [ c_{0}^{2}+e\!u_{0}\left ( \frac{\zeta_{ns,0}}{m_{0}}+\frac{4}{3}\frac{\eta_{ns,0}}{m_{0}}\right ) \frac{\eta_{ns,0}}{m_{0}}\kappa^{2}\right ] \kappa^{2}\omega+\\ \frac{e\!u_{0}}{\gamma_{0}}\frac{\eta_{ns,0}}{m_{0}}c_{0}^{2}\kappa_{st}^{4}\end{array } \right ) = 0,\label{c17}\ ] ] where is the euken ratio defined to be let us also define the quantities, and \frac{\eta_{ns,0}}{m_{0}}\right\ } .\label{c18.2}\ ] ] using these , equation ( [ c17 ] ) yields the following three approximate roots, in the limit , which corresponds to the low knudsen number , or hydrodynamic , regime . for an equilibrium thermodynamically stable fluid, satisfied , and therefore , if , , and are all chosen to be non - negative , then we arrive at the well - known result that the one - dimensional linearized navier - stokes - fourier formulation does not go unstable in the hydrodynamic regime .if we take the limit as the transport parameters go to zero ( in my formulation or in navier - stokes - fourier ) , we are left with the euler equations of pure wave motion for which , on an infinite domain , perturbations travel at the adiabatic sound speed and never decay . in the present section ,we study a problem on the other end of the dichotomy , one of pure diffusion . for this , it is instructive to bear the following example of thermodynamic equilibration in mind .let us consider a cartesian one - dimensional problem on the infinite domain in which the initial conditions are given by{c}m_{0}+\delta_{m}\text { for } x<\left\vert a\right\vert \\ m_{0}\text { for } x>\left\vert a\right\vert \end{array } \right .\label{c23.01}\\ v_{x}\left ( x,0\right ) & = 0\text { for all } x\label{c23.02}\\ t\left ( x,0\right ) & = \left\ { \begin{array } [ c]{c}t_{0}+\delta_{t}\text { for } x<\left\vert a\right\vert \\ t_{0}\text { for } x>\left\vert a\right\vert \end{array } \right ..\label{c23.03}\ ] ] this describes a perturbed subsystem initially shaped like an infinite slab , centered at with width , in contact with two identical infinite reservoirs on either side of it .as time approaches infinity , one expects the subsystem to equilibrate with the reservoirs until the the whole system has uniform mass density and temperature .let us further suppose ( 1 ) that the perturbations and are small compared their respective equilibrium values so that it is appropriate to use the linearized equations presented in [ stability ] and ( 2 ) that and are chosen to satisfy such that the initial pressure is in equilibrium , i.e. 
note that conditions ( [ c23.02 ] ) and ( [ c23.05 ] ) mean that there are initially no mechanical forces acting on the system , and so it is of interest to see , in a problem such as this , the consequences of assuming a solution . to this end , let us examine the zero velocity solution, to my linearized equations ( [ c7])-([c9 ] ) . substituting ( [ c23.1 ] ) into the equations yields and equation ( [ c23.3 ] )is a constraint that there are no pressure gradients and it forces the mass density and temperature perturbations to be related by as long as there exists any point at which attain their equilibrium values , and , thus causing any additive function of that may appear in ( [ c23.5 ] ) to be zero .( note that at , ( [ c23.5 ] ) satisfies ( [ c23.04 ] ) for the problem discussed above . ) as we can see , the two diffusion equations ( [ c23.2 ] ) and ( [ c23.4 ] ) are perfectly consistent with the constraint ( [ c23.5 ] ) . furthermore , they imply that in the absence of mechanical forces , a non - equilibrium problem , such as the one described in the previous paragraph , equilibrates by pure diffusion. equation ( [ c23.2 ] ) is fick s law which describes diffusion driven by gradients in mass density . by comparison ,if we substitute the zero velocity solution ( [ c23.1 ] ) into the linearized navier - stokes - fourier equations ( [ c16.3])-([c16.5 ] ) , then we find only the trivial solution, unless we examine the special case , , which yields and either way , it is clear that for the initial value problem described previously , the mass density does not equilibrate via pure diffusion when the navier - stokes - fourier formulation is used . in fact , a non - zero velocity solution is required .next , let us employ linearized system ( [ c7])-([c9 ] ) to study one - dimensional cartesian sound propagation for low amplitude sound waves of angular frequency . for this problem ,one postulates a solution proportional to where is real and is complex , leading to the following dispersion relation: equation ( [ c24 ] ) may be solved to obtain six -roots : the propagational pair, the thermal pair,.] and an extra pair, the thermal roots ( [ c26 ] ) are exact solutions to ( [ c24 ] ) , but ( [ c25 ] ) and ( [ c27 ] ) represent approximate solutions in the hydrodynamic regime , requiring note that ( [ c25 ] ) suggests that sound attenuation experiments , such as greenspan s and , may be used to measure the diffusion parameter of my formulation for various fluids under a wide variety of thermodynamic conditions .commonly , one finds and data tabulated from such experiments at given temperatures and pressures , where \text { and } f\equiv\frac{\omega}{2\pi}.\label{c28.5}\ ] ] in the hydrodynamic regime , equation ( [ c25 ] ) implies the formula, and in appendix [ difp ] , this formula is used together with sound propagation data to compute the diffusion parameter for a variety of selected gases and liquids . for the navier - stokes - fourier formulation, one may use the foregoing procedure to obtain the dispersion relation,{c}\left [ -ic_{0}^{2}\sigma_{0}+e\!u_{0}\left ( \frac{\zeta_{ns,0}}{m_{0}}+\frac{4}{3}\frac{\eta_{ns,0}}{m_{0}}\right ) \frac{\eta_{ns,0}}{m_{0}}\omega\right ] \kappa^{4}+\\ \left [ -i\left ( 2\gamma_{0}+\sigma_{0}\right ) \omega - c_{0}^{2}\right ] \omega\kappa^{2}-\omega^{3}\end{array } \right\ } = 0,\label{c29}\ ] ] where and are defined in equations ( [ c18.1 ] ) and ( [ c18.2 ] ) . 
solving equation ( [ c29 ] ) ,one finds four -roots : the propagational pair, and the thermal pair, , \label{c31}\ ] ] where we have defined , \label{c31.1}\ ] ] and all four roots are approximations in the hydrodynamic regime, note that a comparison of equations ( [ c25 ] ) and ( [ c30])/([c18.2 ] ) gives a convenient way of relating the diffusion coefficient , , of my formulation to the bulk and shear viscosities , and , and the thermal conductivity appearing in the navier - stokes - fourier formulation , i.e. to match these predictions for sound attenuation , one chooses \frac { \eta_{ns,0}}{m_{0}}\right\ } .\label{c33}\ ] ] for example , in a classical monatomic ideal gas for which one typically chooses equation ( [ c33 ] ) implies which in view of equation ( [ tc22 ] ) , written for this linearized problem as yields next , instead of the hydrodynamic regime approximations , ( [ c25 ] ) and ( [ c30 ] ) , let us consider the exact propagational roots: corresponding to my formulation and{c}c_{0}^{2}+i\omega\left [ \left ( \frac{4}{3}+e\!u_{0}\right ) \frac { \eta_{ns,0}}{m_{0}}+\frac{\zeta_{ns,0}}{m_{0}}\right ] -\\ \sqrt{\begin{array } [ c]{c}\left\ { c_{0}^{2}+i\omega\left [ \left ( \frac{4}{3}+e\!u_{0}\right ) \frac{\eta_{ns,0}}{m_{0}}+\frac{\zeta_{ns,0}}{m_{0}}\right ] \right\ } ^{2}+\\ 4e\!u_{0}\frac{\eta_{ns,0}}{m_{0}}\omega\left [ -i\frac{c_{0}^{2}}{\gamma_{0}}+\left ( \frac{4}{3}\frac{\eta_{ns,0}}{m_{0}}+\frac{\zeta_{ns,0}}{m_{0}}\right ) \omega\right ] \end{array } } \end{array } \right ) } { 2e\!u_{0}\frac{\eta_{ns,0}}{m_{0}}\left [ -i\frac{c_{0}^{2}}{\gamma_{0}\omega}+\left ( \frac{4}{3}\frac{\eta_{ns,0}}{m_{0}}+\frac { \zeta_{ns,0}}{m_{0}}\right ) \right ] } } , \label{c38}\ ] ] corresponding to the navier - stokes - fourier formulation . figure [ fig1 ] is a plot of the dimensionless real and imaginary parts, and of the above roots versus the dimensionless parameter, which is inversely proportional to the knudsen number for a gas .the and parameters characterize the sound attenuation and the inverse sound speed , respectively .the curves ( blue for my formulation and red for navier - stokes - fourier ) correspond to a classical monatomic ideal gas for which ( [ c34 ] ) and ( [ c35 ] ) are chosen , as well as the equilibrium thermodynamic formula for the adiabatic sound speed, where is the universal gas constant and represents the atomic weight of the gas .the points are greenspan s sound data for the noble gases , helium , neon , argon , krypton , and xenon , at room temperature .both axes are logarithmic scale , and the upper and lower curves correspond to and , respectively .fig1c.eps from figure [ fig1 ] , it is clear that my theory s sound predictions match those of the navier - stokes - fourier formulation in the hydrodynamic ( high , or low ) regime , as intended .as i have emphasized throughout , my formulation is a continuum theory intended to be used in the regime , and so i make no general claims that it works well into more rarefied higher knudsen number regimes .however , figure [ fig1 ] provides an intriguing demonstration that for this problem , my theory does significantly better than navier - stokes - fourier there . of course , until i can provide a formal explanation , this should be viewed as strictly fortuitous .morse and ingard ( * ? ? ?* chapter 6 ) use the navier - stokes - fourier equations in order to study low amplitude sound waves hitting a wall . 
for easier comparison with their treatment and for general convenience , as well, let us use the gibbs free energy formalism to recast my formulation ( [ smf1])/([tc19])-([tc21 ] ) with no body forces in terms of the variables , , , , and , and then linearize about the constant equilibrium state, via the expansions, with , as before , a small dimensionless parameter proportional to the mach number .doing so , one obtains the following system: where is the isobaric specific heat per mass , `` '' subscripts are used to indicate evaluation at the equilibrium state , and the equilibrium thermodynamic relationships ( [ c12 ] ) and have been used .again , we see that the mechanical mass equation ( [ c106 ] ) may be uncoupled from the rest of the system and postulating a solution{c}p_{1}\\ v_{x,1}\\ t_{1}\end{array } \right]\ ] ] to the remaining system ( [ c107])-([c109 ] ) that is proportional to with real and complex , unsurprisingly leads to the same dispersion relation ( [ c24 ] ) and the same six roots ( [ c25])-([c27 ] ) as before .however , when my equations are cast in the present form , one notes the following interesting fact : the temperature equation ( [ c109 ] ) may be uncoupled from the pressure and velocity equations , so that the dispersion relation arising from the reduced system , ( [ c107 ] ) and ( [ c108 ] ) , yields only the propagational roots ( [ c25 ] ) and the extra roots ( [ c27 ] ) .the thermal roots ( [ c26 ] ) arise only when the temperature equation ( [ c109 ] ) is coupled back into the system .this means that within my formulation , thermal modes of the solution may contribute only to the temperature variable and not the pressure or velocity variables , i.e. the mechanical variables . on the other hand, propagational and extra modes may contribute to all three .next , let us examine the length scales associated with each of these three types of roots .if we compute the length scale as the distance it takes for a mode to attenuate to of its amplitude , then we find the propagational , thermal , and extra mode lengths to be and respectively , where ( [ c25])-([c27 ] ) and ( [ c28.5 ] ) have been used in the above .for example , if we consider sound waves at frequency mhz as in greenspan and argon gas at normal temperature and pressure ( k and pa ) with m/s(see appendix [ difp ] ) and m / s ( from the classical monatomic ideal gas formula , ) , then the above length scales are computed to be and in this case and in general , for all ideal gases in the hydrodynamic regime the extra mode has a length scale roughly equal to the mean free path length of the gas in its equilibrium state , regardless of sound frequency , and away from boundaries , the propagational modes of sound waves dominate .however , depending on the boundary conditions , the thermal and extra modes may play an important role near walls , causing the formation of boundary layers with lengths and , respectively .when the navier - stokes - fourier equations are recast in the gibbs free energy description and linearized as above , we arrive at the following system: which again yields dispersion relation ( [ c29 ] ) and the four roots ( [ c30 ] ) and ( [ c31 ] ) . 
unlike in my formulation , none of the above equations uncouple from one another .therefore , in the navier - stokes - fourier formulation the two possible modes , propagational and thermal , may contribute to each of the three variables : pressure , velocity , and temperature .using ( [ c30 ] ) and ( [ c31 ] ) , the length scales associated with the propagational and thermal modes of the navier - stokes - fourier formulation are computed to be and in view of ( [ c33 ] ) , equation ( [ c123n ] ) yields the same propagation length as ( [ c111n ] ) .also , in the hydrodynamic regime , the thermal length given by ( [ c124n ] ) is the same order of magnitude as the thermal length ( [ c112n ] ) from my formulation .let us assume the -plane forms a sound barrier at and , as in morse and ingard , define the acoustic impedance of the wall as infinite impedance corresponds to a perfectly reflected sound wave in which its outgoing velocity is equal in magnitude and opposite in direction to its incoming velocity , resulting in for non - infinite wall impedances , however , there is a non - zero normal fluid velocity at the wall . using my formulation, we may still consider the wall to be both stationary and impermeable by enforcing the no total mass flux condition, which becomes \right\vert _ { x=0}=0\label{c118n}\ ] ] when we assume the diffusion transport constitutive law ( [ tc19 ] ) , recast in the gibbs free energy description , and linearize .on the other hand , if ( [ c116n ] ) is not satisfied in the navier - stokes - fourier formulation , there arises the curious situation of a non - zero normal mass flux at .morse and ingard explain this phenomenon as follows : `` the acoustic pressure acts on the surface and tends to make it move , or else tends to force more fluid into the pores of the surface . ''wall movement and/or fluid lost to pores in the wall are yet never treated explicitly .the procedure for investigating stationary gaussian - markov processes , as described by fox and uhlenbeck , may be used to derive the formulas of fluctuating hydrodynamics for my theory .this procedure involves linearizing the three - dimensional cartesian version of the hydrodynamic equations , interpreting the variables as fluctuating variables , and introducing zero - mean fluctuating hydrodynamic forces .this yields the following langevin type of system: in the above , the mechanical mass density equation has been uncoupled , and inverted hats are used to distinguish the fluctuating variables from their average macroscopic continuum counterparts , e.g. 
, where denotes a conditional average for a given initial condition .also , , , and represent the fluctuating mass , momentum , and internal energy fluxes , respectively , with assumed to be symmetric and following the steps outlined in fox and uhlenbeck leads to a generalized fluctuation - dissipation theorem for stationary gaussian - markov processes which , when expressed for the problem at hand , yields the following correlation formulas for the cartesian components of the fluctuating fluxes:{l}2k_{b}m_{0}t_{0}d_{0}\delta\left ( \underline{x}-\underline{x}^{\prime } \right ) \delta\left ( t - t^{\prime}\right ) \text { if } i = j = k = l\\ k_{b}m_{0}t_{0}d_{0}\delta\left ( \underline{x}-\underline{x}^{\prime}\right ) \delta\left ( t - t^{\prime}\right ) \text { if } i = k\text { , } j = l\text { , } i\neq j\\ k_{b}m_{0}t_{0}d_{0}\delta\left ( \underline{x}-\underline{x}^{\prime}\right ) \delta\left ( t - t^{\prime}\right ) \text { if } i = l\text { , } j = k\text { , } i\neq j\\ 0\text { otherwise}\end{array } \right . ,\nonumber\end{gathered}\ ] ] and in the above formulas , the indices , , , and , may equal , , or ( the three cartesian directions ) , is used to represent dirac delta distributions , is the kronecker delta , denotes the boltzmann constant , and there appear the following equilibrium thermodynamic fluctuations: \label{c53}\\ \left ( \delta m\delta u\right ) & = \frac{k_{b}m_{0}^{2}t_{0}\kappa_{t,0}}{v}\left ( h_{0}^{\left ( m\right ) } -\frac{t_{0}\alpha_{p,0}}{m_{0}\kappa_{t,0}}\right ) , \nonumber\end{aligned}\ ] ] which apply to a thermodynamic system of fixed volume in contact with a heat / particle reservoir .note the similarity in form of equations ( [ c45])-([c50 ] ) , involving the above fluctuations . for comparison , the langevin system corresponding to the navier - stokes - fourier formulation is{c}-\nabla\left ( \varepsilon\check{p}_{1}\right ) + \left ( \zeta_{ns,0}+\frac{\eta_{ns,0}}{m_{0}}\right ) \nabla\left [ \nabla\cdot\left ( \varepsilon\underline{\check{v}}_{1}\right ) \right ] + \\\eta_{ns,0}\nabla^{2}\left ( \varepsilon\underline{\check{v}}_{1}\right ) + \nabla\cdot\underline{\underline{r}}_{\underline{p}}\end{array } \right ] \label{c54}\\ \frac{\partial\left ( \varepsilon\check{u}_{1}\right ) } { \partial t } & = -m_{0}h_{0}^{\left ( m\right ) } \nabla\cdot\left ( \varepsilon \underline{\check{v}}_{1}\right ) + k_{f,0}\nabla^{2}\left ( \varepsilon \check{t}_{1}\right ) + \nabla\cdot\underline{r}_{u}\nonumber\end{aligned}\ ] ] from which one may derive fox and uhlenbeck s correlations,{l}2k_{b}t_{0}\left ( \frac{4\eta_{ns,0}}{3}+\zeta_{ns,0}\right ) \delta\left ( \underline{x}-\underline{x}^{\prime}\right ) \delta\left ( t - t^{\prime } \right ) \text { if } i = j = k = l\\ 2k_{b}t_{0}\eta_{ns,0}\delta\left ( \underline{x}-\underline{x}^{\prime } \right ) \delta\left ( t - t^{\prime}\right ) \text { if } i = k\text { , } j = l\text { , } i\neq j\\ 2k_{b}t_{0}\eta_{ns,0}\delta\left ( \underline{x}-\underline{x}^{\prime } \right ) \delta\left ( t - t^{\prime}\right ) \text { if } i = l\text { , } j = k\text { , } i\neq j\\ 2k_{b}t_{0}\left ( \zeta_{ns,0}-\frac{2\eta_{ns,0}}{3}\right ) \delta\left ( \underline{x}-\underline{x}^{\prime}\right ) \delta\left ( t - t^{\prime } \right ) \text { if } i = j\text { , } k = l\text { , } i\neq k\\ 0\text { otherwise}\end{array } \right . 
, \nonumber\end{gathered}\ ] ]and therefore , in the navier - stokes - fourier formulation, whereas my theory predicts a non - zero fluctuating mass flux .this exercise illuminates certain ideas about the thermodynamic nature of a continuum mechanical point , i.e. a point occupied by matter whose motion is tracked by its local velocity , , and whose infinitesimal volume is determined by its continuum mechanical deformation . as per the local equilibrium hypothesis ,near - equilibrium continuum theories are constructed under the assumption that any given continuum mechanical point may be viewed as an equilibrium thermodynamic subsystem in contact with the rest of the material which acts as a reservoir .the question as to whether or not the fluctuating mass flux is zero , decides the very nature of this reservoir .all continuum theories view the surrounding material as a reservoir for heat ( so that the point exchanges energy with the surrounding fluid to maintain a temperature determined by thermodynamics ) , but is it also a particle reservoir ( so that it exchanges mass with the surrounding fluid to maintain a thermodynamically determined chemical potential ) ? my theory , with its non - zero , asserts that it is , whereas the navier - stokes - fourier theory , in view of its ( [ c63 ] ) prediction , argues that it is not , meaning that each continuum mechanical point should be conceived of as having an impermeable wall surrounding it .although subsystems and reservoirs are abstract constructs in this setting , i believe that it makes sense to allow the mass of a continuum mechanical point to fluctuate .this view is reinforced by the natural appearance of fluctuation formulas ( [ c53 ] ) for a heat / particle reservoir in my theory s correlations , ( [ c45])-([c50 ] ) .another conceptual difference between the two formulations is seen by comparing the case of equations ( [ c47 ] ) and ( [ c57 ] ) : my theory predicts no cross - correlations between the diagonal elements of the fluctuating momentum flux , whereas navier - stokes - fourier , in general , does .furthermore , for a classical monatomic ideal gas or any other fluid for which one assumes navier - stokes - fourier theory predicts these cross - correlations to be negative , or in other words , for the diagonal elements of the fluctuating momentum flux to be inversely correlated with one another . one may derive the light scattering spectra corresponding to my formulation by using the procedure in berne and pecora .the main steps involved are to ( 1 ) linearize the cartesian three - dimensional equations expressed in terms of the variables , mechanical mass , particle number density , divergence of the velocity , and temperature the constant equilibrium state, ( 2 ) uncouple the mechanical mass density equation , ( 3 ) take the fourier - laplace transform of the remaining linear system , ( 4 ) solve for the fourier - laplace transformed variables , and ( 5 ) use this solution to construct time - correlation functions and their spectral densities . 
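Anticipating the exact result written out next, which is a sum of three Lorentzian line shapes, the following minimal Python sketch (all parameter values are illustrative assumptions) assembles a central Rayleigh line of half-width D_0 k^2 and two Brillouin lines shifted by +/- omega(k), with the weights (1 - 1/gamma_0) and 1/(2 gamma_0) that appear in the formula below, and checks that the ratio of the integrated Rayleigh area to the total Brillouin area comes out close to gamma_0 - 1, the usual Landau-Placzek value.

```python
import numpy as np

def lorentzian(omega, center, half_width):
    # Lorentzian line shape: half_width / ((omega - center)^2 + half_width^2)
    return half_width / ((omega - center) ** 2 + half_width ** 2)

def density_density_spectrum(omega, gamma0, half_width, omega_k):
    # Central Rayleigh line plus two Brillouin lines shifted by +/- omega_k,
    # with the (1 - 1/gamma0) and 1/(2*gamma0) weights quoted in the text.
    rayleigh = (1.0 - 1.0 / gamma0) * lorentzian(omega, 0.0, half_width)
    brillouin = (0.5 / gamma0) * (lorentzian(omega, +omega_k, half_width)
                                  + lorentzian(omega, -omega_k, half_width))
    return rayleigh, brillouin

# Illustrative dimensionless parameters (assumed): frequencies measured in units
# of the Brillouin shift omega(k); gamma0 = 5/3 for a classical monatomic ideal gas.
gamma0, omega_k, half_width = 5.0 / 3.0, 1.0, 0.05
omega = np.linspace(-20.0, 20.0, 400001)
rayleigh, brillouin = density_density_spectrum(omega, gamma0, half_width, omega_k)

# Ratio of the Rayleigh area to the total Brillouin area: should approach
# gamma0 - 1 (about 0.67 here), the Landau-Placzek value.
print(np.trapz(rayleigh, omega) / np.trapz(brillouin, omega))
```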
of particular interest is the density - density spectrum , computed for my formulation to be exactly{c}\left ( 1-\frac{1}{\gamma_{0}}\right ) \frac{d_{0}k^{2}}{\omega^{2}+\left ( d_{0}k^{2}\right ) ^{2}}+\\ \frac{1}{2\gamma_{0}}\left [ \frac{d_{0}k^{2}}{\left ( \omega+\omega\left ( k\right ) \right ) ^{2}+\left ( d_{0}k^{2}\right ) ^{2}}+\frac{d_{0}k^{2}}{\left ( \omega-\omega\left ( k\right ) \right ) ^{2}+\left ( d_{0}k^{2}\right ) ^{2}}\right ] \end{array } \right\ } .\label{c65}\ ] ] in the above , is the magnitude of the wave vector , ; is the angular frequency ; is the structure factor defined as where represents the scattering volume ; and is the frequency shift of the brillouin doublets, the right - hand side of ( [ c65 ] ) is the sum of three lorentzian line shapes , the first of which is the central rayleigh peak , and the other two , the negatively and positively -shifted brillouin peaks . the brillouin peaks represent light scattering due to the sound modes , or phonons , generated by pressure fluctuations , and the rayleigh peak is caused by light being scattered from specific entropy fluctuations .the navier - stokes - fourier density - density spectrum is given by the more complicated expression,{c}-\omega^{2}+i\omega\left [ \frac{\zeta_{ns,0}}{m_{0}}+\left ( \frac{4}{3}+e\!u_{0}\right ) \frac{\eta_{ns,0}}{m_{0}}\right ] k^{2}+\\ \left ( 1-\frac{1}{\gamma_{0}}\right ) \omega^{2}\left ( k\right ) + e\!u_{0}\left ( \frac{\zeta_{ns,0}}{m_{0}}+\frac{4}{3}\frac{\eta_{ns,0}}{m_{0}}\right ) \frac{\eta_{ns,0}}{m_{0}}k^{4}\end{array } \right\ } } { \left\ { \begin{array } [ c]{c}-i\omega^{3}-\omega^{2}\left [ \frac{\zeta_{ns,0}}{m_{0}}+\left ( \frac{4}{3}+e\!u_{0}\right ) \frac{\eta_{ns,0}}{m_{0}}\right ] k^{2}+\\ i\omega\left [ \omega^{2}\left ( k\right ) + e\!u_{0}\left ( \frac{\zeta _ { ns,0}}{m_{0}}+\frac{4}{3}\frac{\eta_{ns,0}}{m_{0}}\right ) \frac{\eta _ { ns,0}}{m_{0}}k^{4}\right ] + \frac{e\!u_{0}}{\gamma_{0}}\frac{\eta_{ns,0}}{m_{0}}\omega^{2}\left ( k\right ) k^{2}\end{array } \right\ } } \right ) , \nonumber\end{gathered}\ ] ] where is the euken ratio defined in ( [ c18 ] ) .in the hydrodynamic regime for which equation ( [ c68 ] ) becomes approximately{c}\left ( 1-\frac{1}{\gamma_{0}}\right ) \frac{\sigma_{0}k^{2}}{\omega ^{2}+\left ( \sigma_{0}k^{2}\right ) ^{2}}+\\ \frac{1}{2\gamma_{0}}\left [ \frac{\gamma_{0}k^{2}}{\left ( \omega + \omega\left ( k\right ) \right ) ^{2}+\left ( \gamma_{0}k^{2}\right ) ^{2}}+\frac{\gamma_{0}k^{2}}{\left ( \omega-\omega\left ( k\right ) \right ) ^{2}+\left ( \gamma_{0}k^{2}\right ) ^{2}}\right ] + \\\frac{b\left ( k\right ) } { 2}\left [ \frac{\left ( \omega+\omega\left ( k\right ) \right ) } { \left ( \omega+\omega\left ( k\right )\right ) ^{2}+\left ( \gamma_{0}k^{2}\right ) ^{2}}-\frac{\left ( \omega-\omega\left ( k\right ) \right ) } { \left ( \omega-\omega\left ( k\right ) \right ) ^{2}+\left ( \gamma_{0}k^{2}\right ) ^{2}}\right ] \end{array } \right\ } , \label{c70}\ ] ] where and are defined in ( [ c18.1 ] ) and ( [ c18.2 ] ) and \frac{\eta_{ns,0}}{m_{0}}\right\ } .\label{c71}\ ] ] in approximate expression ( [ c70 ] ) , the first three terms represent the lorentzian rayleigh and brillouin peaks and the last two terms give a fairly small , but not negligible , non - lorentzian contribution . next , as in clark and fookson et al . 
, let us define a dimensionless angular frequency, the dimensionless uniformity parameter, which is inversely proportional to the knudsen number for a gas , and the dimensionless density - density spectrum, which has an area of when written as a function of and and integrated over all . figure [ fig2 ] is a plot of the dimensionless density - density spectra , , predicted by my formulation ( blue ) and the navier - stokes - fourier formulation ( red ) for a classical monatomic ideal gas with and equations ( [ c34 ] ) , ( [ c35 ] ) , and ( [ c42 ] ) assumed .the arrows indicate the widths at half - height .( since the spectra are symmetric about the frequency , only the right brillouin peaks are shown . )fig2c.eps both theories predict similar brillouin peaks , , parameters were chosen such that my formulation and the navier - stokes - fourier formulation gave the same sound predictions in the hydrodynamic regime . ] but mine has a rayleigh peak with a maximum about 29% higher and width at half - height about 29% narrower than that of navier - stokes - fourier .on the other hand , the areas under both of the rayleigh peak predictions are the same , leading to identical landau - placzek ratios , i.e. which for a classical monatomic ideal gas is .note that in my theory , the brillouin peaks and the rayleigh peak all have the same width at half - height , a quantity that is inversely proportional to the lifetime of the excitation corresponding to the peak .this means that my theory predicts the same lifetime for phonons as it does for excitations caused by specific entropy fluctuations , whereas the navier - stokes - fourier formulation predicts that excitations caused by specific entropy fluctuations die out sooner than the phonons ( 29% sooner in the case of a classical monatomic ideal gas ) .one other thing to note is that the brillouin portion of my exact spectrum ( [ c65 ] ) predicts phonons corresponding exactly to the adiabatic sound speed , , whereas the approximate navier - stokes - fourier spectrum ( [ c70 ] ) predicts a small amount of dispersion due to the non - lorentzian contributions . in figure[ fig2 ] , it can be observed that for a classical monatomic ideal gas , this dispersion acts to shift the navier - stokes - fourier brillouin peaks slightly toward the center frequency .the problem examined below strains the limits of what one should expect from the navier - stokes - fourier formulation and my theory in two ways : ( 1 ) it is not near - equilibrium , which is counter to the assumption of [ secce ] and appendix [ applc ] used to justify linear constitutive laws and ( 2 ) it is a problem and , therefore , not in the hydrodynamic regime .nonetheless , the navier - stokes - fourier equations are commonly employed as a useful tool for investigating the internal structure of shock waves , and so it is of interest to see how predictions from my theory compare with these .let us consider a cartesian one - dimensional steady - state shock wave like the one described in zeldovich and raizer and depicted schematically in figure [ mffig3 ] in a fixed reference frame .the subscript `` '' is used to indicate undisturbed initial values ahead of the shock , and the subscript `` '' is used for final values after the shock moves through .the initial and final fluid velocities are taken to be where is the constant speed of the piston generating the shock wave . 
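Since the shock speed and final values obtained below are stated to coincide with those of the Navier-Stokes-Fourier equations for a classical monatomic ideal gas, they can be evaluated from the standard textbook Rankine-Hugoniot jump relations. The sketch below assumes exactly those ideal-gas relations (it does not quote the equations derived in the next paragraphs); the input values in the example call are illustrative, loosely argon-like.

```python
import math

R = 8.314462618          # J / (mol K), universal gas constant

def piston_driven_shock(rho_i, T_i, v_p, molar_mass, gamma=5.0 / 3.0):
    """Standard ideal-gas Rankine-Hugoniot relations for a piston-driven shock.

    rho_i, T_i : undisturbed density (kg/m^3) and temperature (K) ahead of the shock
    v_p        : piston speed (m/s); gamma = 5/3 for a classical monatomic ideal gas
    Returns the shock speed and the final (post-shock) density, temperature, pressure.
    """
    c_i = math.sqrt(gamma * R * T_i / molar_mass)          # adiabatic sound speed
    a = 0.25 * (gamma + 1.0) * v_p
    v_s = a + math.sqrt(a * a + c_i * c_i)                 # positive root for the shock speed
    Ms = v_s / c_i                                         # shock Mach number
    density_ratio = (gamma + 1.0) * Ms**2 / ((gamma - 1.0) * Ms**2 + 2.0)
    pressure_ratio = 1.0 + 2.0 * gamma * (Ms**2 - 1.0) / (gamma + 1.0)
    p_i = rho_i * R * T_i / molar_mass
    rho_f = density_ratio * rho_i
    T_f = T_i * pressure_ratio / density_ratio
    # Consistency check: mass conservation across the shock gives the post-shock
    # fluid speed in the laboratory frame, which must equal the piston speed.
    assert abs(v_s * (1.0 - 1.0 / density_ratio) - v_p) < 1e-6 * max(v_p, 1.0)
    return v_s, rho_f, T_f, pressure_ratio * p_i

# Illustrative call with argon-like initial conditions (assumed values).
print(piston_driven_shock(rho_i=1.6, T_i=300.0, v_p=1000.0, molar_mass=0.039948))
```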
for convenience ,one transforms the problem into a coordinate system moving with the shock wave front as illustrated in figure [ mffig4 ] , where primes denote variables in the moving coordinate system and represents the constant speed of the shock front .fig3c.eps fig4c.eps expressing balance laws ( [ bl1])-([bl4])/([bl12 ] ) with no body forces and diffusion transport constitutive equations ( [ tc19])-([tc21 ] ) in the one - dimensional moving coordinate system , making the steady - state assumption that the time derivatives vanish , using the helmholtz free energy formalism to cast the system in terms of the variables , , , , and , and employing the equilibrium thermodynamic relationships, for a classical monatomic ideal gas , yields the following system of ordinary differential equations: & = 0.\label{c81}\ ] ] for boundary conditions , let us assume that and and that all gradients , , vanish ahead of and behind the shock wave front , i.e. where may represent any of the variables , , , , or .then , by integrating ( [ c78])-([c81 ] ) from to , one finds{c}-d^{\prime}\frac{dv_{x}^{\prime}}{dx^{\prime}}+\\ \left ( \frac{5rt^{\prime}m^{\prime}}{2a\overline{m}^{\prime}}-\frac{5rt_{i}}{2a}\right ) + \frac{1}{2}\left ( v_{x}^{\prime2}-v_{s}^{2}\right ) \end{array } \right ] m_{i}v_{s},\label{c88}\ ] ] where ( [ c85 ] ) has been employed in writing equations ( [ c86])-([c88 ] ) , and by taking the limit of the above equations as , one arrives at the relations where ( [ c89 ] ) and ( [ c90 ] ) have been used to write ( [ c91 ] ) , and ( [ c90 ] ) and ( [ c91 ] ) have been used to obtain ( [ c92 ] ) .one solves ( [ c92 ] ) for the positive root to find the following expression for the shock speed: therefore , if the initial mass density and temperature , and , and the piston speed , , are known , then all of the relevant final values , , , , and , may be computed via ( [ c93 ] ) and ( [ c89])-([c91 ] ) for a classical monatomic ideal gas . note that these are identical to the shock speed and final values found with the navier - stokes - fourier equations .using the same assumptions and carrying out the foregoing procedure for the navier - stokes - fourier formulation , yields the following system:{c}-\frac{4\eta_{ns}^{\prime}}{3m^{\prime}}\frac{dv_{x}^{\prime}}{dx^{\prime}}+\\ \left ( \frac{5rt^{\prime}}{2a}-\frac{5rt_{i}}{2a}\right ) + \frac{1}{2}\left ( v_{x}^{\prime2}-v_{s}^{2}\right ) \end{array } \right ] m_{i}v_{s}\label{c96}\ ] ] with the same boundary conditions as for my theory . in the above , classical monatomic ideal gas assumptions given in ( [ c34 ] ) have been used . for this non - linear problem , my diffusion coefficient , , and the navier - stokes shear viscosity , , may depend on state variables , e.g. 
it is common to express the viscosity with a temperature power law of the form ( [ dc1 ] ) : where is some reference temperature and is the viscosity measured at ( see appendix [ difp ] ) .also , in appendix [ difp ] , it is shown that in a classical monatomic ideal gas one might reasonably take ( [ dc4]): figure [ mffig5 ] and figure [ mffig6 ] contain plots of numerical solutions to dimensionless versions of ( [ c85])-([c88 ] ) and ( [ c94])-([c96 ] ) with their appropriate boundary conditions and power laws ( [ dc1 ] ) and ( [ dc4 ] ) describing the transport coefficients .the numerical method employed is iterative newton s method with centered differences approximating the spatial derivatives .the dimensionless variables used are where and are the mean free path length and adiabatic sound speed in the initial state , and the mach number is computed as each of the figures corresponds to with parameters chosen to be those of alsmeyer for his shock wave experiments in argon gas : the undisturbed initial values, and the parameters appearing in ( [ dc1]), the curves plotted in figure [ mffig5 ] and figure [ mffig6 ] are the normalized mass density , ( in blue ) , the normalized fluid speed , ( in green ) , and the normalized temperature , ( in red ) versus the dimensionless position , . since the numerical solution sets are arbitrarily positioned on the -axis , the point is chosen to be the center of the mass density profile ( the point at which the highest slope of occurs ) .figure [ mffig5 ] contains the normalized shock wave profiles predicted by my theory and figure [ mffig6 ] contains those predicted by the navier - stokes - fourier formulation . comparing the curves in these two figures, one observes that the mass density , temperature , and fluid velocity profiles are much closer together in my theory than for the navier - stokes - fourier formulation .this difference is perhaps most obvious in the leading edge of the shock wave ( the side ) where my theory predicts these three profiles to be very near one another , whereas navier - stokes - fourier predicts a pronounced displacement between the three with temperature leading , velocity behind it , and mass density in the back .i am unfamiliar with any direct physical arguments or experimental evidence that would explain why these profiles should be displaced in such a manner. therefore , it would be very interesting to obtain measurements revealing the structure and position of these quantities .experiments , such as alsmeyer s , which have been conducted to probe the internal structure of shock waves in a gas , measure only the mass density profile .fig5c.eps fig6c.eps when a macroscopic particle is placed in a resting fluid with a temperature gradient , the particle tends to move in the cooler direction .this phenomenon is known as thermophoresis , and as brenner points out , it is modelled with the navier - stokes - fourier formulation only by invoking a molecularly based thermal slip boundary condition . my formulation which features a non - convective mass flux , however , can provide a natural mechanism for thermophoresis even when a no - slip boundary condition is applied at the particle surface . the simplest problem that may be associated with thermophoresis is examined below using my theory . in it ,gravity and possible boundary layers are neglected , and only small temperature gradients are considered . 
also , the thermophoretic particle is assumed to be insulated and to have a radius , , much larger than the fluid s mean free path length , , so that the knudsen number , defined for this problem as , is small .let us consider a cartesian one - dimensional problem in which a fluid at average pressure , , lies between two parallel plates , a colder plate at maintained at temperature , , and a hotter plate at kept at temperature , , as shown in figure [ mffig7 ] .also , assume the temperature of the fluid at the plates equals that of the plates , yielding the boundary conditions, and let us define the dimensionless parameter, where is the average temperature, the plates are assumed to be impermeable , i.e. allowing no mass to flow through them , which requires fig7c.eps it is most convenient to study this problem with the linearized gibbs free energy description of my formulation , i.e. equations ( [ c106])-([c109 ] ) , used in [ soundbc ] . here, the linearization is about the constant average state, and is taken to be the parameter defined in ( [ c100.1 ] ) and assumed to be small .note that in equations ( [ c106])-([c109 ] ) , `` '' subscripts are used to indicate evaluation at the average state , .assuming steady - state conditions , the time derivatives appearing on the left - hand side of ( [ c106])-([c109 ] ) vanish , in which case these equations , together with the temperature boundary conditions ( [ c100.01 ] ) and definitions ( [ c100.1 ] ) and ( [ c100.2 ] ) , yield the solutions, one finds an expression for the constant velocity , by enforcing the linearized version of the no mass flow condition ( [ c100.7]): where the diffusion transport constitutive law ( [ tc19 ] ) has been assumed and recast in the gibbs free energy description . substituting ( [ c111 ] ) and ( [ c112 ] ) into the above and using ( [ c104 ] ) and ( [ c100.1 ] ) yields , my theory predicts a constant velocity in the ( colder ) direction that is proportional to the fluid s diffusion coefficient and thermal expansion coefficient and the size of the temperature gradient imposed between the plates .if an insulated macroscopic particle were present in such a velocity field with a no - slip condition applied at its surface , then the particle would simply be carried along at the fluid s velocity , i.e. the particle s thermophoretic velocity , , would equal the fluid s continuum velocity , , computed above .in contrast , when the non - convective mass flux is assumed to be zero , as in the case of the navier - stokes - fourier formulation , the no mass flow condition ( [ c100.7 ] ) yields consequently , thermophoretic motion does not arise in the same sense that it does for my theory . to account for thermophoresis with the navier - stokes - fourier equations ,one then finds it necessary to employ a thermal slip boundary condition at the particle s surface at which there is assumed to be a non - zero tangential velocity proportional to the temperature gradient and in the down - gradient direction . under this assumption ,various researchers have been led , theoretically and experimentally , to the following expression for the thermophoretic velocity in the case of an insulated particle in a gas: where is the thermal slip coefficient , dimensionless and theoretically believed to lie in the interval , , with being a widely accepted theoretical estimate for gases . 
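Before specialising to particular fluids, the two predictions can be compared numerically. The sketch below assumes the form v = -D * alpha_p * dT/dx for the present formulation (the text above only states proportionality to the diffusion coefficient, the thermal expansion coefficient and the imposed gradient, so the order-one prefactor is taken as unity) and the standard thermal-slip form v = -c_s * (eta / rho) * (1/T) * dT/dx for the Navier-Stokes-Fourier route; every numerical value is an illustrative assumption.

```python
def thermophoretic_velocity_diffusive(D, alpha_p, dT_dx):
    # Present formulation (assumed form): proportional to the diffusion coefficient,
    # the thermal expansion coefficient and the imposed temperature gradient,
    # with the order-one prefactor taken as unity for illustration.
    return -D * alpha_p * dT_dx

def thermophoretic_velocity_thermal_slip(c_s, eta, rho, T, dT_dx):
    # Standard thermal-slip expression used with the Navier-Stokes-Fourier equations:
    # a velocity proportional to the kinematic viscosity and the temperature
    # gradient, directed down-gradient.
    return -c_s * (eta / rho) * dT_dx / T

# Illustrative (assumed) numbers, roughly representative of a gas at room conditions.
D, alpha_p, T = 2.0e-5, 1.0 / 300.0, 300.0          # m^2/s, 1/K (ideal gas), K
eta, rho, c_s = 1.8e-5, 1.2, 1.1                    # Pa*s, kg/m^3, dimensionless
dT_dx = 100.0                                       # K/m, hotter plate toward +x

print(thermophoretic_velocity_diffusive(D, alpha_p, dT_dx))
print(thermophoretic_velocity_thermal_slip(c_s, eta, rho, T, dT_dx))
```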
for the problem considered here , ( [ c116 ] )becomes employing ( [ c35.5 ] ) and assuming an ideal gas for which the thermophoretic velocity predicted by my theory in equation ( [ c114 ] ) may be expressed as therefore , by identifying with , one finds this to be _ the same as the thermal slip formula _ ( [ c117 ] ) .also , recall that in section [ sound ] , the value was obtained for a classical monatomic ideal gas . pertaining to a few other types of gasesare provided in appendix [ difp ] . ] as discussed in brenner , there is no real theory for thermal slip in a liquid .therefore , researchers , e.g. mcnab and meisen , typically apply the gas formulas to liquids , even though they admit that there is no theoretical justification for doing so . for the idealized problem considered here ,mcnab and meisen s data corresponds to a thermal slip coefficient for water and -hexane near room temperature of , an order of magnitude smaller than that of a gas . again using ( [ c35.5 ] ) , but no longer assuming the ideal gas formula ( [ c117.5 ] ) , my theory s thermophoretic velocity ( [ c114 ] ) becomes which when compared with the thermal slip formula ( [ c117 ] ) , yields the relationship, for water at k , we take ( see appendix [ difp ] ) and k ( from the crc handbook of chemistry and physics ) , giving a slip coefficient value of , which is _ the same as the experimental value mentioned above_. it is interesting to note that there is , in general , no positive definite restriction on the thermal expansion coefficient .there exist equilibrium thermodynamic states in water and liquid helium , for example , in which may vanish or become negative . in view of equation ( [ c120 ] ), this means that such states could give rise to the occurrence of no thermophoresis or negative thermophoresis .in this appendix , indicial notation is used in which sums are taken over repeated indices .the indices may assume the values 1 , 2 , or 3 , representing the three spatial directions .* dots and contraction operators: _ { j } & = d_{iji}\label{to6}\ ] ] if and are tensors of order and , respectively , with , then _ { i_{1}\ldots i_{m - n}}=a_{i_{1}\ldots i_{m - n}j_{1}\ldots j_{n}}b_{j_{1}\ldots j_{n}}.\label{to6.5}\ ] ] * norms: * tensor product: * cross products: where is the alternating symbol,{l}1\text { if } ( i , j , k)=(1,2,3),(2,3,1),\text { or } ( 3,1,2)\\ -1\text { if } ( i , j , k)=(1,3,2),(2,1,3),\text { or } ( 3,2,1)\\ 0\text { if any of the indices are repeated}\end{array } \right\ } \label{to12}\ ] ] the transpose , symmetric part , skew - symmetric part , spherical part , and deviatoric part of a tensor are defined by and the second - order identity tensor is defined as where is the kronecker delta,{l}1\text { if } i = j\\ 0\text { if } i\neq j \end{array } \right\ } .\label{to16.2}\ ] ] * gradients ( cartesian coordinate definitions): * divergences ( cartesian coordinate definitions): * convective derivative: * generalized partial derivative : if and are tensors of order and , respectively , then \label{to27}\ ] ] the partial time derivative of as defined by ( [ bl7 ] ) and using the product rule , we find or , substituting ( [ bl3 ] ) and ( [ bl1 ] ) into the above, + \overline{m}\underline{f}^{\left ( \overline{m}\right ) } \cdot\underline{v}+\frac{1}{2}\left\vert \underline{v}\right\vert ^{2}\nabla\cdot\left ( \overline{m}\underline{v}\right ) .\ ] ] one can then use tensor identities ( [ to22 ] ) and ( [ to25 ] ) to obtain + \underline{\underline{p}}^{t}\colon\nabla 
\underline{v}+\overline{m}\left ( \underline{v}\,\underline{v}\right ) \cdot\cdot\nabla\underline{v}+\\ & \overline{m}\underline{v}\cdot\underline{f}^{\left ( \overline{m}\right ) } + \nabla\cdot\left ( \frac{1}{2}\overline{m}\left\vert \underline{v}\right\vert ^{2}\underline{v}\right ) -\frac{1}{2}\overline{m}\underline{v}\cdot\nabla\left ( \underline{v}\cdot\underline{v}\right ) .\end{aligned}\ ] ] finally , if we employ tensor identities ( [ to23 ] ) and ( [ to26 ] ) , then we arrive at equation ( [ bl7.5 ] ) . with the aid of tensor identity ( [ to27 ] ) , the first term on the right - hand side of ( [ bl13 ] ) may be written as & = -\nabla \cdot\left [ \underline{x}\times\left ( \underline{\underline{p}}+\overline { m}\underline{v}\,\underline{v}\right ) \right ] + c_{1,3}\left [ \underline{\underline{1}}\times\underline{\underline{p}}\right ] + \nonumber\\ & c_{1,3}\left [ \underline{\underline{1}}\times\left ( \overline { m}\underline{v}\,\underline{v}\right ) \right ] , \label{am1}\ ] ] using the fact that and the linearity of the contraction operator , . next , by writing out its components , it is observed that = \underline{0}\ ] ] is satisfied if and only if is symmetric . therefore , the symmetry of causes the third term on the right - hand side of ( [ am1 ] ) to vanish .furthermore , the second term also vanishes , implying conservation of momentum ( [ bl14 ] ) , if and only if is symmetric .here , classical irreversible thermodynamics , as described in de groot and mazur ( * ? ? ?* ch.iii and iv ) and jou et al . , is used to derive linear constitutive laws ( [ ce12])-([ce14 ] ) .recall equation ( [ ce9.3 ] ) for the volumetric entropy production rate and note that the second term on the right - hand side suggests the following decomposition for the pressure tensor of a fluid: where represents the viscous part of the pressure , assumed symmetric , i.e. so that condition ( [ bl15 ] ) is satisfied , and , as before , denotes the equilibrium thermodynamic pressure . consequently , equation ( [ ce9.3 ] ) becomes one can further decompose the viscous pressure into its spherical and deviatoric parts via with substituting ( [ l4 ] ) and ( [ l5 ] ) into ( [ l3 ] ) and using identity ( [ to28 ] ) yields + \nonumber\\ & \underline{\underline{p}}_{visc}^{dev}\cdot\cdot\left [ -\frac{1}{t}\left ( \nabla\underline{v}\right ) ^{sym , dev}\right ] + \underline{q}_{m}\cdot\left [ -\nabla\left ( \frac{\mu}{t}\right ) \right ] , \label{l6}\ ] ] where property ( [ l2 ] ) has been employed in conjunction with tensor identity ( [ to30 ] ) .the terms have been grouped as in expression ( [ l6 ] ) so that has the form, where and represent fluxes and the affinities , respectively .one requires the affinities to be objective tensor quantities and for there to exist equilibrium states for which the affinities and fluxes simultaneously vanish .comparing ( [ l7 ] ) and ( [ l6 ] ) suggests the following identifications:{|l|l|l|}\hline & ( flux ) & ( affinity)\\\hline & & \\\hline & & \\\hline & & \\\hline & & \\\hline \end{tabular } .\label{l8}\ ] ] note that , , , and are each indeed objective tensor quantities .next , let us assume that the fluxes are written as where represents a list of state parameters , each evaluated at , e.g. 
or .the assumption that the affinities and fluxes vanish in a state of equilibrium implies so that we obtain the following expansions for the fluxes about the equilibrium state: assuming the magnitudes of the affinities are small enough to neglect the higher - order terms and defining the phenomenological coefficients as equation ( [ l11 ] ) becomes the linear constitutive law, with the use of table ( [ l8 ] ) , the linear constitutive laws for our single - component viscous fluid are given by + \nonumber\\ & \underline{\underline{l}}^{\left ( 14\right ) } \cdot\left [ -\nabla\left ( \frac{\mu}{t}\right ) \right ] , \label{l14}\]] + \nonumber\\ & \underline{l}^{\left ( 24\right ) } \cdot\left [ -\nabla\left ( \frac{\mu}{t}\right ) \right ] , \label{l15}\]] + \nonumber\\ & \underline{\underline{\underline{l}}}^{\left ( 34\right ) } \cdot\left [ -\nabla\left ( \frac{\mu}{t}\right ) \right ] , \label{l16}\ ] ] and + \nonumber\\ & \underline{\underline{l}}^{\left ( 44\right ) } \cdot\left [ -\nabla\left ( \frac{\mu}{t}\right ) \right ] .\label{l17}\ ] ] as discussed in de groot and mazur ( * ? ? ?* ch.vi ) , in the absence of external magnetic fields and coriolis forces , onsager reciprocity requires that the phenomenological coefficients appearing in the above satisfy{c}\mathbf{l}^{\left ( kj\right ) } \text { for } \left ( j , k\right ) = ( 1,1),(1,4),(2,2),(2,3),(3,2),(3,3),(4,1),(4,4)\\ -\mathbf{l}^{\left ( kj\right ) } \text { for } \left ( j , k\right ) = ( 1,2),(1,3),(2,1),(2,4),(3,1),(3,4),(4,2),(4,3 ) \end{array } \right\ } .\label{l18}\ ] ] furthermore , as discussed in segel , isotropic materials like fluids possess isotropic tensors for each of their phenomenological coefficients , , and this reduces equations ( [ l14])-([l17 ] ) to , \label{l24}\]] , \label{l26}\ ] ] and \label{l27}\ ] ] with and \label{l32}\ ] ] for scalars , , , , , , and where represents the kronecker delta . substituting ( [ l29 ] ) and ( [ l30 ] ) into equation ( [ l24 ] ) , one finds whereby defining and one obtains equation ( [ ce14 ] ) .similarly , if one employs ( [ l30.2 ] ) and ( [ l31 ] ) , then equation ( [ l27 ] ) becomes and , therefore , defining and yields equation ( [ ce12 ] ) .note that equations ( [ l35.5 ] ) , ( [ l38 ] ) , and ( [ l39 ] ) imply the relation, which is a consequence of the onsager reciprocity .next , let us substitute ( [ l32 ] ) into ( [ l26 ] ) to find \left ( \nabla\underline{v}\right ) _ { kl}^{sym , dev}\\ & = \frac{2\mathcal{l}^{\left ( 33\right ) } } { 3t}tr\left [ \left ( \nabla\underline{v}\right ) ^{sym , dev}\right ] \delta_{ij}-\frac { \mathcal{l}^{\left ( 33\right ) } } { t}\left [ \left ( \nabla\underline{v}\right ) _ { ij}^{sym , dev}+\left ( \nabla\underline{v}\right ) _ { ji}^{sym , dev}\right ] , \end{aligned}\ ] ] which implies since the trace of any deviatoric tensor is zero . if one defines and then ( [ l43 ] ) and ( [ l25 ] ) become and finally , using the two equations above , together with ( [ l5 ] ) , ( [ l4 ] ) , and ( [ l1 ] ) , one arrives at equation ( [ ce13 ] ) for the total pressure .it is easily shown that if the mass and internal energy fluxes are chosen as then onsager reciprocity condition ( [ ce16 ] ) can not be satisfied unless furthermore , constitutive equations ( [ tc16 ] ) and ( [ tc18 ] ) may be used to prove that similar flux laws hold for other thermodynamic extensive quantities , as well , e.g. 
the entropy , for which consequently , i call ( [ tc4 ] ) , ( [ tc5 ] ) , ( [ tc8 ] ) , and ( [ tc9 ] ) , `` thermodynamic diffusion transport coefficients , '' and ( [ tc16 ] ) and ( [ tc18 ] ) , or the more compactly expressed , ( [ tc19 ] ) and ( [ tc21 ] ) , `` thermodynamic diffusion flux laws . '' on the other hand , there is nothing obvious to prevent the choice of a different diffusion coefficient , , to be associated with the viscous pressure . in this case , the `` momentum diffusion transport coefficients , '' ( [ tc13 ] ) and ( [ tc14 ] ) , may be written as and and the `` momentum diffusion flux law '' ( [ tc20 ] ) as of course , the diffusion transport fluxes are found by choosing thermodynamic and momentum diffusion fluxes with .to model a multicomponent mixture of fluids , one may use the same ideas as those used to obtain ( [ smf1])/([tc19])-([tc21 ] ) . let represent the mass of component in an -component mixture . for an -component fluid near equilibrium and in the continuum regime ,i propose the -formulation balance laws, with diffusion transport constitutive equations, notice that there still appears only one transport coefficient , .also , note that since one of the partial mass equations in the above may be replaced by the total mass equation, with in this setting , the continuum velocity is interpreted as a center of mechanical mass velocity , and in this center of mechanical mass frame , there is a non - zero total mass flux due to diffusion . also , note that if the equilibrium thermodynamic pressure , , may be expressed as a function of and only as in the case of a homogeneous mixture the partial mass density equations may be uncoupled from the rest of the system , leaving formulation ( [ smf1])/([tc19])-([tc21 ] ) derived for a single - component fluid .this means that homogeneous mixtures such as air may be treated in the same way as a single - component fluid .here , measured sound propagation data is used together with formulas from [ sound ] to compute the diffusion coefficient , , of my formulation for various types of fluids .the temperature , pressure , and mass density dependence of this parameter is explored where data is available .table [ tab1 ] provides values of the diffusion coefficient , , and the self - diffusion coefficient , , ( where available ) for several gases and liquids at k ( unless indicated otherwise ) and pa . [ c]|llll|**fluid * * & & & + * gases * & & & + helium & & & + neon & & & + argon & & & + krypton & & & + xenon & & & + nitrogen & & & + oxygen & & & + carbon dioxide & & & + methane & & & + air & & & + * liquids * & & & + water & & & + mercury & & & + glycerol & & & + benzene & & & + ethyl alcohol & & & + castor oil & & & + [ tab1 ] for the noble gases , helium , neon , argon , krypton , and xenon , is given by ( [ c36 ] ) and the diffusion coefficient is computed via ( [ c35 ] ) with shear viscosities from kestin et al . and mass densities determined by the classical ideal gas relationship, which is appropriate for each of the table [ tab1 ] gases in the normal temperature and pressure regime . for the diatomic gases , nitrogen and oxygen , and the polyatomic gases ,carbon dioxide and methane , and are computed by equations ( [ c33 ] ) and ( [ c35.5 ] ) , respectively , using data presented in marques , viscosity data from cole and wakeham and trengrove and wakeham , and densities computed via relation ( [ dc.1 ] ) . for air ) . ] and the liquids , is computed via equation ( [ c28.6 ] ) and by relation ( [ c35.5 ] ) . 
the sound propagation quantities , and , are calculated for air using with mhz and humidity , given for mercury in hunter et al . , and tabulated for the rest of the liquids in .the viscosities and mass densities are taken from for air , for water , for mercury , for glycerol , and for benzene , for ethyl alcohol , and and for castor oil .the self - diffusion coefficients are taken from kestin et al . for the noble gases , from winn for the diatomic and polyatomic gases , from holz et al . for water , from nachtrieb and petit for mercury , from tomlinson for glycerol , from kim and lee for benzene , and from meckl and zeidler for ethyl alcohol . from the values presented in table [ tab1 ], one makes the following observations . * in each of the gases , the diffusion and self - diffusion coefficients are observed to be the same order of magnitude .however in liquids , the self - diffusion coefficients are several orders of magnitude smaller than the values .this indicates that although in gases the diffusion coefficient may be roughly approximated by self - diffusion , the same may not be said of liquids . * the diffusion parameters of the noble gasesare seen to decrease with increasing mass density . *the value for in air is approximately a weighted average of the values for nitrogen and oxygen based on their fractional compositions in air ( nitrogen and oxygen ) .* with the exception of very light gases like helium and very heavy gases like krypton and xenon , diffusion coefficients for gases at normal temperature and pressure are typically on the order of m .* the values of water and ethyl alcohol which fall into the category of medium viscosity , medium density liquids are similar and on the order of m , an order of magnitude smaller than that of a typical gas .the high viscosity , medium density liquids , glycerol and castor oil , possess similar values on the higher order of m .mercury s viscosity is medium range like water and ethyl alcohol , but its higher mass density results in a smaller on the order of m .[ [ gases ] ] gases + + + + + in general , it is observed from sound attenuation data for gases in the hydrodynamic regime that the product , , may vary with temperature but has negligible pressure or density dependence at a fixed temperature . using this observation ,together with relationship ( [ tc22 ] ) and the fact that the navier - stokes shear viscosity also has negligible pressure or density dependence , leads us to conclude that although is possibly a function of , it does not tend to vary much with or for gases .typically , may be expressed as a temperature power law of the form, where is some reference temperature and is the viscosity measured at . using this in equation ( [ tc22 ] ) , one finds equations ( [ c33 ] ) and ( [ c35.5 ] ) together imply \right\ } .\label{dc3}\ ] ] for noble gases , such as argon , one expects the values for , , and given in ( [ c34 ] ) to hold over a broad temperature range . therefore ,in the noble gases , we take to be constant and , substituting this into ( [ dc2]), on the other hand , as observed in tables [ tab2]-[tab3.5 ] , for non - monatomic gases such as air , nitrogen , and methane , displays a tendency to increase with temperature . 
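Power-law fits of this kind, for either the viscosity or the diffusion coefficient, amount to an ordinary least-squares straight-line fit in log-log space. The sketch below is generic; the data points are placeholders, not the tabulated values from this appendix.

```python
import numpy as np

def fit_power_law(T, D, T_ref):
    """Least-squares fit of an assumed power law D(T) = D_ref * (T / T_ref)**s.

    The fit is done in log-log space, so the exponent s is the slope and
    log(D_ref) is the intercept of a straight-line fit.
    """
    x = np.log(np.asarray(T) / T_ref)
    y = np.log(np.asarray(D))
    s, logD_ref = np.polyfit(x, y, 1)
    return np.exp(logD_ref), s

# Placeholder data (assumed, for illustration only; not the tabulated values):
T = [250.0, 300.0, 350.0, 400.0, 450.0]                  # K
D = [1.3e-5, 1.8e-5, 2.4e-5, 3.0e-5, 3.7e-5]             # m^2/s
D_ref, s = fit_power_law(T, D, T_ref=300.0)
print(f"D(T) ~ {D_ref:.3e} * (T/300 K)^{s:.2f}")
```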
in tables[ tab2]-[tab3.5 ] , the values are computed by equation ( [ c28.6 ] ) with calculated using for air and taken from for nitrogen and methane and with calculated from the classical ideal gas formula, where the values are taken from for air and approximated to be constant over the studied temperature range for nitrogen and methane with respective values and .the values appearing in tables [ tab2]-[tab3.5 ] are computed via equation ( [ c35.5 ] ) with navier - stokes shear viscosities taken from for air and from for nitrogen and methane and mass densities taken from for air and computed by ideal gas formula ( [ dc.1 ] ) for nitrogen and methane .[ c]|l|l|l| ( k ) & ( m ) & + & & + & & + & & + & & + & & + & & + & & + & & + [ tab2 ] [ c]|l|l|l| ( k ) & ( m ) & + & & + & & + & & + & & + [ tab3 ] [ c]|l|l|l| ( k ) & ( m ) & + & & + & & + & & + & & + [ tab3.5 ] [ [ liquids ] ] liquids + + + + + + + using equations ( [ c28.6 ] ) and ( [ c35.5 ] ) with sound speed and attenuation data from and mass density and shear viscosity data from , one obtains the estimates of and for water at atmospheric pressure ( pa ) and various temperatures between the freezing and boiling point appearing in table [ tab4 ] .[ c]|c|c|c|c| ( k ) & ( m/s ) & ( m/s ) & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + [ tab4 ] as we can see , the parameter does not vary much over the entire temperature range , but the diffusion coefficient decreases with temperature .an excellent least squares fit is made to the table [ tab4 ] data with in the third column of table [ tab4 ] , measured self - diffusion coefficients from holz et al . are tabulated for water at atmospheric pressure . unlike our observation for gases ,these self - diffusions are two to three orders of magnitude less than the corresponding values .also , unlike , the self - diffusion _ increases _ with temperature . in table[ tab5 ] , and are computed for water at two fixed temperatures , k and k , and various pressures roughly between and atmospheres .the values are obtained using equations ( [ c28.6 ] ) and ( [ c35.5 ] ) together with data presented in litovitz and carnevale .[ c]|llll| ( pa ) & & & + k & & & + & & & + & & & + & & & + & & & + & & & + k & & & + & & & + & & & + & & & + & & & + & & & + [ tab5 ] as we can see in the fourth column of table [ tab5 ] , the product of the mass density and the diffusion coefficient does not vary much over the studied pressure range ( for k and for k ) .therefore , as in gases , it is reasonable to assume that the quantity is essentially pressure independent for water and perhaps for other liquids , as well , although this remains to be experimentally verified . using equations ( [ c28.6 ] ) and ( [ c35.5 ] ) with data from hunter et al . 
yields the estimates of and for liquid mercury at atmospheric pressure ( pa ) and various temperatures between k and k appearing in table [ tab6 ] .[ c]|l|l|l| ( k ) & ( m ) & + & & + & & + & & + & & + & & + & & + & & + & & + & & + & & + [ tab6 ] unlike our observation for water in table [ tab4 ] , the values for mercury are seen to increase with increasing temperature .a fairly good fit to the table [ tab6 ] data is given by addition to those introduced in [ secnt ] , this paper utilizes the following symbols and notational conventions : the constants,{ll} & universal gas constant\\ & boltzmann constant\\ & atomic weight\\ & avagadro 's number , \end{tabular}\ ] ] the equilibrium thermodynamic quantities,{ll} & fundamental relation for entropy per mechanical mass\\ & particle number density\\ & specific enthalpy\\ & coefficient of thermal expansion\\ & isothermal compressibility\\ & isochoric specific heat per mass\\ & isobaric specific heat per mass\\ & ratio of specific heats , \\ & adiabatic sound speed , \end{tabular}\ ] ] the non - equilibrium thermodynamic quantities,{ll} & total -flux\\ & non - convective -flux\\ & diffusion transport -flux\\ & volumetric -production / destruction rate \end{tabular}\ ] ] and{ll} & total pressure tensor\\ & viscous part of the pressure tensor\\ & spherical part of \\ & external body force per mechanical mass\\ & heat flux\\ & energy flux due to chemical work , \end{tabular}\ ] ] the transport coefficients,{ll} & coefficients in general mass flux law\\ & coefficients in general momentum flux law\\ & coefficients in general internal energy flux law \end{tabular}\ ] ] and{ll} & navier - stokes shear viscosity\\ & navier - stokes bulk viscosity\\ & fourier heat conductivity\\ & my diffusion transport coefficient\\ & self - diffusion coefficient , \end{tabular}\ ] ] and the dimensionless parameters,{ll} & knudsen number\\ & mach number\\ & linearization parameter \\ & euken ratio\\ & coefficient relating to \end{tabular}\ ] ] furthermore , the subscript `` '' is used to denote evaluation at the equilibrium state and the subscript `` '' to indicate first - order ( linear ) terms in the small expansion about equilibrium .
The continuum equations of fluid mechanics are rederived with the intention of keeping certain mechanical and thermodynamic concepts separate. A new "mechanical" mass density is introduced for computing inertial quantities, whereas the actual mass density is treated as a thermodynamic variable. A new set of balance laws is proposed, including a mass balance equation with a non-convective flux. The basic principles of irreversible thermodynamics are used to obtain linear constitutive equations that are expansions in not only the usual affinities involving gradients of temperature and velocity but also the gradient of the chemical potential. Transport coefficients are then chosen on the basis of an elementary diffusion model, which yields simple constitutive laws featuring just one diffusion transport parameter. The resulting formulation differs from the Navier-Stokes-Fourier equations of fluid motion. In order to highlight key similarities and differences between the two formulations, several fluid mechanics problems are examined, including sound propagation, light scattering, steady-state shock waves, and thermophoresis.
agent - based modeling can be used to study systems exhibiting emergent properties which can not be explained by aggregating the properties of the system s components. statistical mechanics and economics share the property to analyze big ensembles where the collective behaviour is found out as a result of interactions at the microscopic level and where agent - based simulations can be applied .many systems are studied in terms of the nature that defines their inner components while others are considered from the point of view of the interactions among the agents that can be pictured through a complex network .plenty of information is encoded in connectivity patterns .hierarchical structures appear in a natural way when we study societies and , according to many authors, one of the milestones is to understand why and how from individuals with initial identical status , inequalities emerge .this is related to the question of hierarchy formation as a self - organization phenomenon due to social dynamics. elitarian distributions can arise starting from a society where people initially own an equal share of economic resources , _ e.g. : _ the exponential distribution for the low and medium income classes in western societies. we consider dragulescu - yakovenko gas - like models in economic exchanges so , let our system be composed of economic agents , being and constant .each agent owns an amount of money so the state of the system at a given time is defined by the values that every variable takes at that moment , money distribution among the agents should never be confused with the notion of wealth distribution .money is only one part inside the whole concept of wealth .transfer of money represents payment for goods and services in a market economy .we study simplified models which keep track of that money flux but do not keep track of what goods or services are delivered . at each interacting step, agents trade by pairs and local conservation of money is sustained , transactions result in some part of the money involved in the interaction changing its owner .for simplicity , we do not consider models where debts are allowed .it is deeply established in the common knowledge that highly - ranked individuals in societies have easier access to resources and better chances to compete .this is a motivation to look for internal correlations between money and surrounding environment .we wonder if the exchange rules that define simple gas - like models for random markets , when implemented on networks , are capable of depicting correlations between purchasing power of an agent inside a social network and the influence of the agent on the rest of the system .we associate the purchasing power concept to the mean money per economic agent computed as a function of the connectivity degree of each agent in a network . at this level ,influence of an agent is only related to the degree of the node representing the agent .we implement the exchange rules on two type of networks : uniform random spatial graph and barabsi - albert model , and then examine the relationship between the former econo - social agent indicators for the different underlying architectures . in section [sec : randommarkets ] , we review two well - known random undirected exchange rules : general and uniform savings models . 
in section [ sec : rde ] , we introduce a new family of interactions : random but directed ones .the main property of this simple exchange rule is that it is a real inspired model where social inequalities in money distribution emerge in a natural way . in section [ sec : ns ] , we show the relation between mean money per economic agent and the connectivity degree of the agent . for the models with undirected exchange rules , we observe no correlation between money and the degree of the nodes .linear dependence is found for the new random exchange model we propose .section [ sec : conc ] is devoted to gather the most relevant conclusions .for some random economic systems where money is a conserved quantity , the asymptotic distribution of money among the agents is given by the boltzmann - gibbs distribution ( bg ) , where the role of the effective temperature is played by the average amount of money per agent , this feature was first shown by dragulescu and yakovenko in 2000 by means of numerical simulations. subsequently , analytical justification was given by lpez - ruiz _ et al ._ in 2008 and 2012 .bg can be geometrically deduced under the assumption of equiprobability of the possible economic microstates .we also know that an asymptotic evolution towards bg is obtained regardless of the initial distribution for those systems with total money fixed and when considering random symmetric interactions between pairs of components. this comes from bg being the stable fixed point of the distributions space , under the iterated action of the integral operator given by (z)=\iint_{s(z)}\frac{p(x)\,p(y)}{x+y}\;dx\,dy,\label{integral}\ ] ] where is the integration domain .+ let us now consider the gas - like model originally proposed by dragulescu and yakovenko so , at each computational step , we randomly choose a pair of agents and then , one -the labeled as - is chosen to be the winner in the interaction process and the other one -labeled as - becomes the loser and , according to the previously stated rule ( [ transactiondelta ] ) , an amount of money is transferred from the loser to the winner . assuming , it is obviousthat if the loser does not have enough money to pay , which is nothing but the local condition , the transaction is forbidden and we should proceed with a different pair of agents . instead of considering the restriction in the interaction , we state the exchange rule considering that gives rise to the completely random case given by where ] , which is called a propensity factor. this means that each agent saves a fraction of its money when an interaction occurs and trades randomly with the other part : we consider the model with uniform savings which means that is fixed to be constant among the agents and with no dependence on the time .the statistically stationary distribution decays rapidly on both sides of the most probable value for the money per agent which , in this case , is shifted from the poorest part of the system to when see figure [ sg ] .this behaviour was already described as a self - organising feature of the market induced by self - interest of saving by each agent without any global perspective in an analogous way to the self - organisation in markets with restricted comodities. first attempt to give a quantitative description for the steady distribution towards this model evolves is due to patriarca _et al . _ in 2004 .they stated that numerical simulations of ( [ transactionsavings ] ) could be fitted to a standard gamma distribution. 
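Both exchange rules are straightforward to simulate. The sketch below assumes the usual statement of the two rules reviewed in this section: in the completely random case the pooled money of the selected pair is redistributed by a uniform random fraction, and in the uniform-savings case each agent first retains the fraction lambda of its own money; agent numbers and step counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_exchange(n_agents=500, mean_money=1.0, lam=0.0, n_steps=500_000):
    """Pairwise gas-like exchange with uniform savings propensity lam.

    lam = 0 reproduces the completely random case, which relaxes to the
    Boltzmann-Gibbs (exponential) distribution; lam > 0 gives the uniform-savings
    rule whose stationary distribution is close to a gamma shape.
    """
    m = np.full(n_agents, mean_money)            # everyone starts with the same amount
    for _ in range(n_steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        eps = rng.random()
        pool = (1.0 - lam) * (m[i] + m[j])       # the part of the pair's money that is traded
        m[i], m[j] = lam * m[i] + eps * pool, lam * m[j] + (1.0 - eps) * pool
    return m

money = simulate_exchange(lam=0.0)
# Total money is conserved; the histogram of `money` should be close to
# exp(-m / <m>) / <m>, with the average money <m> playing the role of temperature.
print(money.sum(), money.mean())
```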
subsequently ( 2007 ) , chatterjee and chakrabarti offered a brief study of the consequences that this modelization implies and stated that as increases , effectively the agents retain more of its money in any trading , which can be taken as implying that with increasing , temperature of the scattering process changes. according to their study , fourth and higher order moments of the distributions are in discrepancy with those of the gamma family so , the actual form of the distribution for this model still remains to be found out .( 2011 ) gave an iterative recipe to derive an analytical expression solving an integral equation. a similar expression was derived in a different way by lallouache _we have shown how the propensity factor is introduced ( [ transactionsavings ] ) as a variation for the general random undirected exchange rule ( [ transactionrandom ] ) showing how , from individual responsible decissions -such as saving a fraction of your money when entering an exchange market- , self - organized distributions where the mean and the mode are close arise so , richness would be quite balanced distributed among the group .a completely different scheme that modifies the general rule ( [ transactiondelta ] ) proposes a random sharing of an amount ( instead of ) only when , trading at the level of the lowest economic class in the trade .this model leads to an extreme situation in which all the money in the market drifts to one agent and the rest become truely pauper .from this idea , we give a new and more general exchange rule reflecting this directed or biased orientation for the interaction and including this particular result .we propose an integral operator which is the analytical approach to this new rule in the mean - field or gas - like case and , in section [ sec : ns ] , we implement this rule on networks to study how it is affected when we mix directed interactions with undirected networks .the directed exchange can be understood as a first approach to microeconomic activities where money is transferred only in one direction , similar to payments for goods .we consider the most general family of interations , where ] so , the probability of obtaining a certain value is given by .the interaction of pairs in the first configuration of the system gives rise to the evolution of to the following probability of obtaining : for the second copy , we should consider that money of the second agent after the interaction , should be a value between the initial money it has , , and the maximum possible amount of money it can have after the interaction , which will be associated to get all the money from the first agentso , again , the length of the segment $ ] is so , the probability of is uniformly distributed in that segment and so , the probability to have will be .the expression for the probability of having in the second copy of the system results in we explicitly compute the norm , and the expected value , =\frac{3}{2}\langle u\rangle.\nonumber\end{aligned}\ ] ] from results ( [ normop ] ) and ( [ expop ] ) , together with definition ( [ op1 ] ) , the norm and mean value of the distributions are conserved when we define the operator for the directed random interaction by (x)=\frac{1}{2}\int_{u > x}\frac{p(u)}{u}\,du+\frac{1}{2}\iint_{v < x < u+v}\frac{p(u)\,p(v)}{u}\,du\,dv.\label{opdef}\ ] ] although we can not give a proof for the infinite iteration of this operator , we see that it piles up the distribution at the lower values for which gives rise to a very impoverished 
population and a very slightest fraction of too opulent agents .even with such a very narrow initial distribution we choose , the effect of this operator is very strong in only a couple of iterations .this result is in perfect agreement with the first model of biased interaction we mentioned in our previous section .other models with asymetric rules that establish a transition between boltzmann - gibbs and pareto distributions can be found in the literature. see figure [ operador ] .we stated in the introduction of this work the desire of finding a first model which , when implemented on networks , is able to show a relation between money and influence of an agent on the system . for this purpose, we simulate the two models we have already studied in section [ sec : randommarkets ] and the new one we have presented in section [ sec : rde ] .we choose two representative cases : the random uniform spatial network ( sp ) and the barabsi - albert model ( ba ) .sp is an easy way to build networks with poissonian degree distributions and ba is an algorithm for generating random scale - free networks following the preferential attachment prescription. at the beginning , every agent is given the same amount of money so , the initial distribution of money among the agents is written and we obtain the steady state distribution for the exchange rules ( [ transactionrandom ] ) , ( [ transactionsavings ] ) and ( [ transactiondirected ] ) implemented on these two different topologies . from the histogram related to the distribution of money among the agents , we can consider the entropy associated to that distribution in a discrete form , where just recalls the coarse graining when computig as a discrete histogram . for simplicity, we measure the entropy as a fraction of the maximum which is given for the exponential distribution . if the money distribution is affected by the topology of the underlying network , it should show some kind of dependence on the degree of the agents . for our purposesit is enough for the reader to associate the degree of a node to its number of neighbouring agents it can interact with . as we will only use for now undirected and static networks ,the degree of an agent will be a distinctive feature .we define the _ mean money per economic agent as a function of k _ , given by note that we compute the mean money of the nodes inside each class of connectivity at every step of our simulation and then , we consider the time - averaged mean money per node according to the different degrees . denotes how many nodes have degree and our simulations run for steps .there is no need to be worried about the transitory regime disturbing the computed results for because it is negligible : less than of total computed steps in the worst case ( see figure [ dnet ] ) .we also compute the standard deviation , , given by ^ 2}\label{sigmamk}\\\nonumber\;\;\forall\,i\;:\;k_i = k.\end{aligned}\ ] ] in figures [ rnet ] , [ snet ] and [ dnet ] we show , for the rules ( [ transactionrandom ] ) , ( [ transactionsavings ] ) and ( [ transactiondirected ] ) , respectively , the stationary probability distribution with entropy - evolution of for the whole simulation with inset detail of the transitory regime when it is required . 
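A minimal sketch of this network implementation for the directed rule is given below. The pair-selection protocol is not spelled out in this section, so the sketch makes one natural choice, namely that a uniformly chosen giver hands a uniformly random fraction of its own money to a randomly chosen neighbour (the transfer suggested by the operator of the previous section), and it accumulates the time-averaged mean money per degree class m(k) as well as the entropy of the coarse-grained money histogram; graph size and sampling choices are illustrative.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

def directed_market_on_graph(G, mean_money=1.0, n_steps=200_000,
                             sample_every=1_000, n_bins=50):
    """Directed exchange restricted to the edges of a fixed graph.

    At each step a giver is drawn uniformly at random, a receiver is drawn
    uniformly among its neighbours, and the giver hands over a uniformly
    random fraction of its own money.  The routine returns the final money
    array, the time-averaged mean money per degree class m(k), and the
    entropy trace of the coarse-grained money histogram.
    """
    n = G.number_of_nodes()
    neighbours = [list(G.neighbors(v)) for v in range(n)]
    degree = np.array([G.degree(v) for v in range(n)])
    money = np.full(n, mean_money)
    sums_k, counts_k, entropy_trace = {}, {}, []
    for step in range(1, n_steps + 1):
        giver = rng.integers(n)
        receiver = neighbours[giver][rng.integers(degree[giver])]
        eps = rng.random()
        money[receiver] += eps * money[giver]
        money[giver] *= 1.0 - eps
        if step % sample_every == 0:
            for k in np.unique(degree):
                sums_k[k] = sums_k.get(k, 0.0) + money[degree == k].mean()
                counts_k[k] = counts_k.get(k, 0) + 1
            p, _ = np.histogram(money, bins=n_bins)
            p = p[p > 0] / p.sum()
            entropy_trace.append(float(-np.sum(p * np.log(p))))
    m_of_k = {int(k): sums_k[k] / counts_k[k] for k in sums_k}
    return money, m_of_k, entropy_trace

G = nx.barabasi_albert_graph(n=1000, m=3, seed=7)
money, m_of_k, entropy = directed_market_on_graph(G)
print(sorted(m_of_k.items())[:5])   # mean money per degree class, lowest degrees first
```

With this choice of protocol, high-degree nodes end up on the receiving side of transfers more often than low-degree ones simply because they are neighbours of more agents, which is the qualitative mechanism behind a mean money per agent that grows with connectivity.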
when we plot the distributions and , it is also shown the characteristic degree distribution of the networks we implement .we clearly see in these figures how rules ( [ transactionrandom ] ) and ( [ transactionsavings ] ) are transparent to the underlying topology , provoking always the decay of the economic system to the bg or gamma - like distributions , respectively , indistinctly of the different connectivities of the agents .we can say that for these types of interactions the economic classes are blind respect to the social influence of the agents . by contrary , the new directed rule ( [ transactiondirected ] ) separates the agents in economic classes correlated with their connectivities , in such a way that more connected agents show a bigger propensity to accumulate more money , in this case with a linear relationship between money and connectivity .this is a characteristic that , in general and independently of the political system installed in the power , seems to be more likely to be found in the reality .\(a ) spatial network : \(b ) barabsi - albert model : \(a ) spatial network : \(b ) barabsi - albert model : \(a ) spatial network : \(b ) barabsi - albert model :we have found that topology does not determine the final equilibrium distribution for the family of undirected random markets , both with or without uniform savings , as can be seen by comparing the distributions from figures [ rnet ] and [ snet ] to those from figures [ rg ] and [ sg ] .thus , from the uniform value of and , we can conclude that the connectivity of an agent immersed in undirected random markets does not determine if the agent will have more or less money , that is , the degree of an agent does not decide its richness .when we consider the directed random market given by the operator ( [ opdef ] ) , simulations plotted in the figure [ dnet ] suggest that this model reproduces very clearly the real insight based on the impression that in certain societies the richness is owned by a very small fraction of the population and poverty is extended among the majority of the agents . for this type of economies, we also discover that the underlying topology determines the stationary distribution of money among the agents , , and that the connectivity of each agent proportionally determines its average richness . assuming that we can apply this rule for any quantity that can be exchanged , andnot only money , this result introduces the interesting idea of how we can create systems with a property shared by its inner components with an steady distribution essentially determined by the topology defined by the connections between the agents , although evidently this statistical equilibrium is dynamical and presents a continuous flow between those agents .it is also interesting to highlight how this system evolves from the initial zero - entropy state to a state with maximum entropy and then how it relaxes towards the asymptotic equilibrium state .let us conclude by saying that we have introduced a new directed random market model in the context of economic gas - like models , which can be understood as a first and simple model characterized by _ more connectivity implies more money_. 00 a. namatame , t. kaizouji and y. aruka , editors , _ the complex networks of economic interactions .essays in agent - based economics and econophysics _( springer , 2006 ) . c. castellano , s. fortunato and v. loreto , _ rev .* 81 * , 591 ( 2009 ) .c.tovey , d. spangler - martin , i.d . 
chase and m.manfredonia , _ pnas _ * 99 * , 5744 ( 2002 ) .a. dragulescu and v.m .yakovenko , _ eur .j. b _ * 20 * , 585 ( 2001 ) .a. dragulescu and v.m .yakovenko , _ physica a _ * 299 * , 213 ( 2001 ) .a.c . silva and v.m .yakovenko , _ europhys .lett . _ * 69 * , 304 ( 2005 ) .yakovenko , in _ encyclopedia of complexity and system science _ , r. a. meyers ( ed . ) ( springer 2009 ) .a. dragulescu and v.m .yakovenko , _ eur .j. b _ * 17 * , 723 ( 2000 ) .r. lpez - ruiz , j. saudo and x. calbet , _ am .j. phys . _* 76 * , 780 ( 2008 ) .lpez , r. lpez - ruiz and x. calbet , _ j. mathappl . _ * 386 * , 195 ( 2012 ) . r.h .frank , _ microeconomics and behavior _ ( mcgraw - hill / irwin , 2009 ) .a. chakraborti and b.k .chakrabarti , _ eur .phys . j. b _ * 17 * , 167 ( 2000 ) .a. chakraborti , s. pradham and b.k .chakrabarti , _ physica a _ * 297 * , 253 ( 2001 ) .m. patriarca , a. chakraborti and k. kaski , _ phys .e _ * 70 * , 016104 ( 2004 ) .a. chatterjee and b.k .chakrabarti , _ eur .j. b _ * 60 * , 135 ( 2007 ) .x. calbet , j .- l .lpez and r. lpez - ruiz , _ phys .e _ * 83 * , 036108 ( 2011 ) .m. lallouache , a. jedidi and a. chakraborti , _ science and culture _ * 76 * , 478 ( 2010 ) .a. chakraborti , _ int .c _ * 13 * , 1315 ( 2002 ) . c. pellicer - lostao and r. lpez - ruiz , _ int .c _ * 22 * , 21 ( 2011 ) .m. barthelemy , _ physics reports _ * 499 * , 1 ( 2011 ) .r. albert and a .-barabsi , _ revphys . _ * 74 * , 47 ( 2002 ) .
The Boltzmann-Gibbs distribution arises as the statistical equilibrium distribution of money among the agents of a closed economic system in which random, undirected exchanges are allowed. When a model with uniform savings in the exchanges is considered, the final distribution is close to the gamma family. In this work we implement these exchange rules on networks and find that the stationary probability distributions are robust: they are not affected by the topology of the underlying network. We then introduce a new family of interactions, random but directed. In this case the topology turns out to be determinant, and the mean money per economic agent is related to the degree of the node representing the agent in the network; the relation between the mean money per agent and its degree is shown to be linear.
wetting and spreading are very common everyday phenomena , but there are still a lot of open problems in this field .a simple example is - what happens when a fluid is forced to spread under an externally impressed force ?there is no fully satisfactory answer to be found in the otherwise comprehensive literature on the topic , though such experiments are routinely used to characterize viscoelastic materials . in the present paper ,we report a set of very simple experiments on the so - called squeeze film and try to find a theoretical framework to explain our results .the problem is briefly described as follows - a small droplet of a liquid , placed on a solid plate will take the shape of a section of a sphere . under equilibrium, the well known young - laplace relation holds , involving an equilibrium angle of contact .the dynamic condition while the fluid is spreading , i.e. the area of contact of the fluid and solid is increasing in time , is also well studied .there are a number of excellent reviews summarising studies on - the rate of spreading , the forces responsible and other interesting features .we now look at a simple extension of the problem - what happens if the fluid is forced to spread beyond its equilibrium area , under a constant weight placed on top of it .a preliminary report was published by our group in . in this paperwe study four different fluids on two different solids , so as to have different combinations of viscosities of fluids as well as different degrees of hydrophilicity of the fluids and solids .the results are analyzed on the basis of viscous energy dissipation with and without slip .we find that while the no - slip result gives a fair quantitative estimate of the contact area as function of time for only one case , it is possible to get a better agreement in 5 of the 8 cases , on introducing a slip coefficient .of course , this involves introduction of a free parameter ( the slip coefficient ) whereas the no - slip analysis did not .the slip coefficient appears to depend more on the solid surface than the fluid , and is an order of magnitude lower for the perspex plate .the results for spreading of the same fluid on different surfaces is quite different .there are 3 cases where the agreement between theory and experiment is unsatisfactory and can not be improved by introducing slip , so other effects are also at play .we describe the experimental setup followed by the theory and present detailed results in the third section .finally we compare experimental results with theory and discuss the implications .the experimental fluids are castor oil , linseed oil , glycerol and ethylene glycol .the choice of fluids is motivated by the following idea . while castor oil and linseed oil are insoluble in water , with a low dielectric constant the other two fluids ethylene glycol and glycerol having high dielectric constants , are soluble in water .( = 4.67 , 3.35 , 47.70 and 43.0 respectively for castor oil , linseed oil , glycol and glycerol ) . again, one fluid belonging to each type according to hydrophilicity , has high viscosity( ) , namely castor oil and glycerol( for castor oil=986 - 451 cp for temp 20 - 30c , glycerol is 1490 - 629 cp for temp 20 - 30c ) , while the others have much lower viscosity( for glycol is 19.9 - 9.13 cp for temp 20 - 40c , linseed oil=33.1 - 17.6 cp for temp 20 - 50c ) . 
a constant volume of the fluid is measured with a micropipette and placed on the lower plate .the upper plate is then placed on the drop and a load of m kg .( m = 1 to 5 ) is placed on it .the pair of plates are 1 cm thick , one pair is made of float glass and the other of perspex . a video camera , placed below the lower glass plate records the slow increase in area .the video clips are analyzed using image pro - plus software .we find that the area of the drop in contact with the plate increases with time at first .then after some time it reaches a saturation value and does not increase significantly with time any more .the time of saturation and area of saturation are different for different fluid - solid combinations .we plotted the area vs. time graph for all fluid - solid combinations .fig([tacp ] ) and fig([tagp ] ) show respectively area vs. time graphs for castor oil and glycol respectively on perspex for loads from 1 - 5 kg .interestingly when the data for area are divided by the load mass m , the curves come very close to each other , almost collapsing to a single master curve . again , for castor oil on perspex area increases for a long time .it takes several minutes to saturate approximately and continue to increase very slowly even after 5 minutes.for glycol on both the substrates , on the other hand , the area increases very fast and saturates in a few seconds .video photos are taken initially , at the rate of 10 frames / sec , in this case .we discuss in this section standard formulations for this kind of problem .the squeeze flow usually discussed in chemical engineering literature is different from our experiment , ours is a constant volume squeeze , where whole mass of fluid taken is always confined within the two plates . in the usual experiment a _ constant area _setup is used , where both plates are immersed in the fluid , the distance between the plates as function of time is measured as a force is applied on the upper plate .first we outline the steps worked out for the no - slip case , i.e. assuming that the layer of fluid next to the static plate has zero velocity .the analysis roughly follows .we assume the fluid to be newtonian and incompressible with constant volume .ours is a constant force set up , the force acts on the fluid layer. the velocity first builds up , reaches a maximum and then starts to reduce as it squeezes the liquid film .the details are presented in , here we consider the velocity decreasing regime only .we equate the loss of potential energy of the loaded plate with the work done against the viscous resistance . with and representing the instantaneous thickness and radius of the film respectively , the pressure and the atmospheric pressure . in moving over a distance dhthe work done is given by the loss of potential energy is now the pressure p at any point r from the center of the upper plate for any given distance h , where the velocity of the upper plate can be calculated as therefore , we can write , substituting for , we get , which , on integration gives , \ ] ] the right hand expression depends on the impressed force through the area of contact .if we divide both sides by , the lhs is independent of .the rhs is now a function of and .we call this function .we calculate the rhs from experimental data and compare with the lhs .if the principles on which the derivation is based are correct , the rhs calculated from experimental results should be independent of .that is the curves vs. 
should collapse to a single curve and be numerically equal .so , we test the result by plotting the lhs and rhs separately on the same graph and check how well they agree .of course the agreement can be expected only for the time interval before the area of contact saturates , since the lhs goes on increasing linearly .the results for glycerol on glass are shown in fig([gmt_glcrg ] ) .we see that for this combination assuming no slip gives an acceptable quantitative agreement between the lhs and rhs of eq([gmt ] ) .this calculation involves no adjustable parameter .it may further be noted that the data for different loads collapse almost to a single line , on scaling by .however , for the other fluid - solid combinations , such quantitative agreement is not found .we try to see now if this discrepancy may be attributed to the effect of slip at the fluid and solid interface .we follow the equation given by listed in for this situation .relative motion between a solid surface and material contacting it ( wall slip ) is a phenomenon observed with many materials . in the last section we assumed that the layer of the liquid touching the lower fixed plate has zero velocity . so that the applied force is entirely carried away by viscous dissipation . but experiments show that there may be cases where the layer of the liquid having contact with the substrate may have finite velocities , which may affect the spreading of the liquid drop . in these cases , shear stresses at the sample - plate interfaces may vanish ( for perfect slip ) or reduce ( for partial slip ) .the amount of slip depends not only on the liquid tested , but also on the material and roughness of the substrates .we introduce partial slip in the analysis as follows .no slip and perfect slip are two limiting cases of slip phenomenon .to explain our results , we take partial slip into consideration . stefan gave a slip law of the form , where , is the slip coefficient and is the wall shear stress .the value of is zero for no slip and is infinity for perfect slip cases .it may have any value in between , for partial slip cases , depending on the liquid and the substrate .with the introduction of ,the force equation becomes , dividing by . where , the integral can be evaluated as {h1}^{h2}\ ] ] with the above force equation , on integration , gives us the function similar to for the no - slip case .we fit the slip coefficient so as to get the best fit between the left and right sides of equation [ rmt ] .the results are shown in the next section .table([tabl ] ) shows values of for different fluid - solid combinations .we find that the values of ranges from to for glass and from to for perspex .so glass has a larger than perspex . for a definite substratethe values of are different for different liquids . in table 1we also show the initial contact area ( before loading ) and the viscosities and dielectric constants of the fluids and substrates . figures ( [ rmt_gg ] ) and ( [ rmt_cp ] ) show respectively the graphs comparing and for ethylene glycol on glass and for castor oil on perspex .the coefficients for the best fit are given in table 1 .while five of the eight curves can be fit quite well with a finite , in the remaining three cases a discrepancy remains . 
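For orientation, the no-slip consistency check described above can be summarised with the classical constant-volume Stefan-Reynolds relation for a Newtonian film squeezed by a constant load, \(F\,t = \tfrac{3\eta}{8\pi V^2}\,(A(t)^4 - A_0^4)\). This is a textbook expression rather than the paper's own equations (which are not reproduced here), so the prefactors should be treated as assumptions; the function and variable names below are ours.

```python
import numpy as np

def no_slip_area(t, A0, V, eta, F):
    """Contact area vs time for a constant-volume Newtonian squeeze film under a
    constant load F, assuming no slip (classical Stefan-Reynolds result):
        F * t = (3*eta / (8*pi*V**2)) * (A(t)**4 - A0**4)."""
    return (A0**4 + 8.0 * np.pi * V**2 * F * t / (3.0 * eta)) ** 0.25

def effective_viscosity(t, A, V, F):
    """Least-squares slope of (A^4 - A0^4) against t, forced through the origin,
    converted into an effective viscosity; a systematic misfit of this straight
    line is the kind of discrepancy that motivates adding slip."""
    y = A**4 - A[0]**4
    slope = np.dot(t, y) / np.dot(t, t)
    return 8.0 * np.pi * V**2 * F / (3.0 * slope)
```

In the partial-slip version the effective resistance is reduced by the slip coefficient, which is then adjusted by the same kind of least-squares comparison between the load-scaled data and the model curve.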
herethe fit becomes worse for any finite .this is shown for glycol on perspex in figure([rmt_gp ] ) , the other two cases are glycol on glass and linseed oil on perspex .out of eight cases studied , we find that the theoretical framework considered here can account for five cases fairly well with one free parameter , the slip coefficient .this is not bad considering the complexity of the problem and the simplifications introduced .scrutiny of the table 1 , reveals several interesting features .let us look at , the area of contact with only the plate on the fluid drop and no extra weights .we consider this as the reference condition .the mass of the perspex plate is 250 gm and the glass plate is about 700 gms .it is seen from spreading vs. load data ( see ) that extrapolating to zero load does not change this reference point significantly .we see that for glass ranges from 2 - 5 cm , while for perspex we have values less than 1 and the highest is 1.4 cm .so equilibrium spreading on glass is higher in all cases .however , _ under load _ fluids spread to a much greater extent on perspex , compared to glass .looking at the percentage ( of ) spreading for the 5 kg .load , we see that for glass it is always less than 50 % , whatever the fluid , while for perspex the area increases by 100 to 400 % .so , where the initial spreading is large , the effect of load is less and vice versa .the highest spreading with load is observed for castor oil on perspex , moreover castor oil shows a creep like behavior and continues to spread for very long times , though the rate slows down .possibly visco - elastic effects also play important role and should be taken into consideration .the three cases which can not be explained by the present theory are all for the low viscosity fluids ethylene glycol and linseed oil .this treatment focussing on the viscous effect works better in case of high viscosity .we have tabulated the dielectric constants along with other data in table 1 , because earlier work showed that the polarizabilties of the fluid and substrate are important in spreading phenomena .but here we find no significant correlation except that the maximum spreading ( under load ) is for the low dielectric fluid on the low dielectric substrate . to conclude , spreading of different types of fluids on different solids is a very complex and interesting problem , still open for exploration .very simple experimental apparatus provides a lot of food for thought in this area .the spreading and flow characteristics are routinely used for classifying soft materials in food technology and chemical engineering , so a deeper understanding will be useful .further experiments and theoretical study taking into account effects of rheology , surface roughness and ambient conditions is required .the table shows properties of the fluids and substrates used , represents the viscosity , and , the solid and liquid dielectric constants , s the surface tension .results from the experiment are - the initial area of contact , the percentile spreading of area under loads of 1 and 5 kg . , the slip coefficient which gives a best fit for the spreading master curve . [ cols="^,^,^,^,^,^,^,^,^,^,^ " , ]this work is supported by ugc , govt . of india .ss is grateful to ugc for award of a research fellowship .99 p. g. de gennes , wetting : statics and dynamics , rev .57(1985 ) 827 - 862 william a. zisman , elaine g. shaffin , j.phys.chem.,1960,64(5),pp 519 - 524 .d. bonn , j. eggers , j. meunier , e. 
rolley , wetting and spreading , to appear in rev .mod . phys .2009 j. engmann , c. servais , a.s .burbridge , squeeze flow theory and applications to rheometry : a review , j. non - newt .fluid mech .132(2005)1 - 27 r. asthana , n. sobczak , wettability , spreading and interfacial phenomena in high - temperature coatings , jom - e 52(2000 ) soma nag , suparna dutta , sujata tarafdar , applied surface science 256 ( 2009 ) 353 - 355 f.r .eirich , d. tabor , collisions through liquid films , proc .( 1948 ) 566 - 581 j stefan , akad.wiss.math.nat.wien 69(2)(1874)713 - 735 s. sinha , t. dutta , s. tarafdar , eur .j. e 25(2008 ) 267 - 275
Spreading of different types of fluid on substrates under an impressed force is an interesting problem. Here we study the spreading of four fluids of different hydrophilicity and viscosity on two substrates, glass and perspex, under an external force. The area of contact between fluid and solid is video-photographed and its increase with time is measured. The results for different external forces can be scaled onto a common curve. We try to explain the nature of this curve on the basis of existing theoretical treatments in which either the no-slip condition is used or slip between fluid and substrate is introduced. We find that, of the eight cases under study, quantitative agreement is obtained in five cases by using a slip coefficient.

Condensed Matter Physics Research Centre, Physics Department, Jadavpur University, Kolkata 700032, India; Physics Department, St. Xavier's College, Kolkata 700016, India. Corresponding author: email: sujata.com, phone: +91 33 2414 6666 (extn. 2760).

Keywords: spreading, squeeze film, viscous flow, slip. PACS nos.: 68.08.Bc (wetting in liquid-solid interfaces); 66.20.Ej (viscosity, experimental study).
genomic dna is packaged into chromatin in eukaryotic cells .the building block of chromatin is the nucleosome , a 147 base pair ( bp ) dna segment wrapped in superhelical coils on the surface of a histone octamer . the unstructured tails of the histones are the targets of numerous covalent modifications and may influence how the ordered nucleosome array folds into higher order chromatin structures .chromatin can both block access to dna and juxtapose sites far apart on the linear sequence . the regulation of gene expression by dna - bound factors can be partially understood from their binding energies and thermodynamics .nucleosomes formation energies vary by about 5 kcal / mol _ in vitro _ , comparable to the variation of dna - binding energies in other factors .it is then natural to ask whether the locations of nucleosomes _ in vivo _ can be predicted thermodynamically , either in isolation or in competition with other dna - binding factors . under this scenario ,the role of chromatin and its remodeling enzymes would be catalytic , modifying the rate of assembly but not the final disposition of factors on dna .the alternative is that chromatin remodeling within a regulatory unit actively positions nucleosomes to control access to the dna , in analogy with motor proteins . it has not been possible to quantify by genetics where living cells fall between these extremes .recent computational approaches have used collections of dna positioning sequences isolated _ in vivo _ to train pattern matching tools that were then applied genome - wide. however , the training data may not be representative of direct histone - dna binding if other factors reposition nucleosomes _ in vivo _ by exclusion .furthermore , models based on alignments of nucleosome positioning sequences require a choice of background or reference sequence and it is known that nucleotide composition varies among functional categories of dna .* biophysical model of a nucleosome core particle . * for these reasons we developed a biophysical model for nucleosome formation that resolves the energy into the sum of two potentials modeling the core histone - dna attraction and the intra - dna bending energy . the first potential is assumed to be independent of the dna sequence since there are few direct contacts between the histone side chains and dna bases . for the dna bending , we constructed an empirical quadratic potential using a database of 101 non - homologous , non - histone protein - dna crystal structures .more specifically , we model dna base stacking energies by defining three displacements ( rise , shift , and slide ) and three rotation angles ( twist , roll and tilt ) for each dinucleotide ( fig . 1a ) . together the six degrees of freedom completely specify the spatial position of bp in the coordinate frame of bp ( supporting information ( si ) fig .geometric transformations based on these degrees of freedom can be applied recursively to reconstruct an arbitrary dna conformation in cartesian coordinates .conversely , the atomic coordinates in a crystal structure allow the inference of the displacements and rotations for each dinucleotide ( si methods ) .the elastic force constants in the quadratic dna bending energy are inferred from the protein - dna structural data by inverting a covariance matrix of deviations of dinucleotide geometric parameters from their average values . 
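As a rough illustration of this inference step, the sketch below estimates per-dinucleotide means and force constants by inverting the empirical covariance of step parameters, and evaluates the resulting quadratic energy for a given conformation. The iterative pruning of strongly bent steps described next is omitted for brevity, and all names and array layouts are assumptions, not the authors' code.

```python
import numpy as np

def stiffness_from_structures(steps_by_type):
    """Mean step geometry and elastic force constants per dinucleotide type.

    steps_by_type : dict mapping a dinucleotide (e.g. 'CA') to an (n, 6) array of
    observed step parameters (twist, roll, tilt, shift, slide, rise) harvested
    from protein-DNA crystal structures (n is assumed large enough for a
    well-conditioned covariance)."""
    params = {}
    for step, x in steps_by_type.items():
        x = np.asarray(x, dtype=float)
        mean = x.mean(axis=0)                 # <alpha> for this step type
        cov = np.cov(x, rowvar=False)         # 6x6 covariance of deviations
        params[step] = (mean, np.linalg.inv(cov))   # force-constant matrix
    return params

def elastic_energy(alphas, steps, params):
    """Quadratic elastic energy of a conformation given per-step geometries
    `alphas` (shape (m, 6)) and the list of dinucleotide types `steps`."""
    E = 0.0
    for a, s in zip(alphas, steps):
        mean, F = params[s]
        d = a - mean
        E += 0.5 * d @ F @ d
    return E
```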
strongly bent dinucleotides ( with one or more geometric parameters further than 3 standard deviations from the mean ) are iteratively excluded from the data set ( si methods ) .our model does not use any higher order moments of empirical geometry distributions , which would lead to a non - quadratic elastic potential ; nor are there sufficient data to model more than successive base pairs .we assume that the histone - dna potential is at minimum along an ideal superhelix whose pitch and radius are taken from the nucleosome crystal structure , and varies quadratically when the dna deviates from the superhelix .this sequence - independent term represents the balance between the average attractive electrostatic ( and other ) interactions between the histones and the dna phosphate backbone , and steric exclusion between the proteins and dna .the sum of the bending and histone - dna potentials is minimized to yield the elastic energy and the dinucleotide geometries for each nucleosomal sequence ( see methods ) . because the total energy is quadratic ,energy minimization is equivalent to solving a system of linear equations and thus the algorithm can be applied on a genome - wide scale .there is no background in this model , but the results may depend on the choice of the protein - dna structural data set .since our bending energy is empirical and inferred from co - crystal structures , it lacks a physical energy scale . by comparison with the worm - like chain model our units can be converted to kcal / mol through multiplication by 0.26 ( si methods ) .we position multiple nucleosomes and proteins bound to dna using standard thermodynamics and enforce steric exclusion between bound entities in any given configuration ( si methods ) .our program dnabend can compute the sequence - specific energy of dna restrained to follow an arbitrary spatial curve . * analysis of dna geometries from the nucleosome crystal structure .* one way to validate dnabend is to predict the dna conformation in the high - resolution ( 1.9 ) nucleosome crystal structure ( pdb code 1kx5 ) using only the dna sequence .the dnabend predictions are significantly correlated with the experimental geometries for twist , roll , tilt and slide ( ) , but are less successful for shift and rise ( si fig .10 ) , although the peak positions are generally correct .dnabend does not reproduce rapid shift oscillations in the region between bps 35 and 105 , and underestimates the magnitude of observed deviations in rise and slide .we under - predict slide since for certain key base pairs our training structures imply a much smaller mean value of slide ( _ e.g. _ 0.18 for ca steps ) than that observed in the nucleosome structures ( 0.91 for ca steps ) . changing just this one mean valuegives more reasonable magnitudes of slide ( si fig .10 , black curve ) .et al . _ also observed that slide is large and positive in the nucleosome crystal structure , and makes a significant contribution to the superhelical trajectory . 
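One way to quantify this kind of agreement is a per-degree-of-freedom correlation between the predicted minimum-energy geometry and the crystallographic one along the sequence; a minimal sketch (the array layout is assumed, not prescribed by the paper):

```python
import numpy as np

DOF_NAMES = ['twist', 'roll', 'tilt', 'shift', 'slide', 'rise']

def per_dof_correlation(predicted, observed):
    """Pearson correlation along the sequence for each of the six step
    parameters; `predicted` and `observed` are (n_steps, 6) arrays, e.g. a
    model's minimum-energy geometry and the 1kx5 crystal geometry."""
    return {name: np.corrcoef(predicted[:, j], observed[:, j])[0, 1]
            for j, name in enumerate(DOF_NAMES)}
```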
because nucleosomal dna is highly bent , different degrees of freedom are strongly coupled ( si fig .11a ) : for example , base pairs tend to tilt and shift simultaneously to avoid a steric clash .these couplings are much less pronounced in the non - histone protein - dna complexes used to derive the elastic energy model ( si fig .11b ) , but nonetheless appear prominently when the 1kx5 dna is positioned by dnabend ( si fig .analysis of available nucleosome crystal structures shows that the tight dna wrapping is facilitated by sharp dna kinks if flexible dinucleotides ( _ e.g. _ 5-ca / tg-3 or 5-ta-3 ) are introduced into the region where the minor groove faces the histone surface . for other sequences and other structural regions the bendingis distributed over several dinucleotides . we substituted all possible dinucleotides into the 1kx5 atomic structure ( keeping dna conformation fixed ) , and computed the elastic energy for each sequence variant .the most sequence specific regions are those where the minor groove faces the histone octamer ( si fig .the specificity is especially dramatic if dna is kinked ( _ e.g. _ at positions 109 , 121 and 131 ; si fig . although these positions are occupied by ca / tg dinucleotides in the crystal structure , the model assigns the lowest energy to ta dinucleotides , consistent with the periodic ta signal previously observed in good nucleosome positioning sequences ( si fig .. * comparison with _ in vitro _ measurements .* dnabend accurately predicts experimental free energies of nucleosome formation ( fig .2a , si fig .in contrast , the alignment model of segal _ et al . _ trained on yeast nucleosomal sequences has little predictive power for the sequences in this set , most of which were chemically synthesized .dnabend also correctly ranks sequences selected _ in vitro _ for their ability to form stable nucleosomes , or to be occupied by nucleosomes _ in vivo _ because the alignment model is constructed using the latter set it assigns anomalously low ( more favorable ) energies to some of the sequences from it , and higher ( less favorable ) energies to the artificial sequences known from experiment to have excellent binding affinities . finally , dnabend correctly ranks mouse genome sequences selected _ in vitro _ on the basis of their high or low binding affinity ( ) , whereas the alignment model has less resolution ( ) , though on average it does assign better scores to the high affinity set ( si fig .12b ) . a further test of how dnabend actually positions nucleosomes on dna can be provided by a collection of sequences where _ in vitro _positions are known with 1 - 2 bp accuracy .we have determined nucleosome positions on synthetic high - affinity sequences 601 , 603 , and 605 using hydroxyl radical footprinting ( si figs .13 and 14 ) , and obtained 3 more sequences from the literature . 
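Comparing such mapped positions with the model essentially reduces to scanning the computed per-start-position energy profile for local minima near each experimentally determined dyad. A toy sketch of that comparison (function names and the comparison window are hypothetical):

```python
import numpy as np

def local_minima(profile, window=5):
    """Indices of local minima of a per-start-position binding-energy profile
    (values lower than everything within +/- `window` bp)."""
    idx = []
    for i in range(len(profile)):
        lo, hi = max(0, i - window), min(len(profile), i + window + 1)
        if profile[i] == profile[lo:hi].min():
            idx.append(i)
    return np.array(idx)

def offset_to_nearest_minimum(profile, mapped_start, window=5):
    """Distance (in bp) between an experimentally mapped nucleosome start and
    the nearest local minimum of the computed energy profile."""
    minima = local_minima(profile, window)
    return int(np.abs(minima - mapped_start).min())
```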
the measured position is always within 1 - 2 bp of a local minimum in our energy , and that energy minimum in 5 out of 6 cases is within 0.5 - 1.0 kcal / mol of the global energy minimum ( si fig .15 ; note that the total range of sequence - dependent binding energies is kcal / mol ) .we also asked if dnabend could be used to design dna sequences with high and low binding affinities for the histone octamer .free energies of computationally designed sequences were measured _ in vitro _ using salt gradient dialysis ( si table 1 , si methods ) .the free energy of the designed best sequence was lower than the free energy of the designed worst sequence , although only by 1.6 kcal / mol , which is less than the experimentally known range of free energies ( si table 1 , si fig .these results underscore both the ranking power and the limitations of our current dna mechanics model . * periodic dinucleotide distributions in high and low energy sequences .* dnabend - selected nucleosome sequences exhibit periodic dinucleotide patterns that are consistent with those determined experimentally : for example , with lowest energy sequences , 5-aa / tt-3 and 5-ta-3 dinucleotide frequencies are highest in the negative roll regions ( where the minor groove faces inward ) , while 5-gc-3 frequencies are shifted by bp ( si fig .surprisingly , the distributions of at and aa / tt , ta dinucleotides are in phase , despite a very low flexibility of the former ( si fig . 8b ) .it is possible that at steps are used to flank a more flexible kinked dinucleotide .we estimate the energy difference between the best and the worst 147 bp nucleosome forming sequences to be 15.2 kcal / mol , with the energies of 95% of genomic sequences separated byless than 6.4 kcal / mol ( si methods ) .this is larger than the experimental range ( si fig .12a ) because nucleosomes can not be forced in experiments to occupy the worst - possible location on a dna , but instead tend to find the most favorable locations with respect to the 10 bp helical twist .* prediction of nucleosome positions in the yeast genome .* we use nucleosome energies ( computed using the 147 bp superhelix ) and the binding energies of other regulatory factors to construct a thermodynamic model in which nucleosomes form while competing with other proteins for access to genomic sequence .a typical configuration thus contains multiple dna - bound molecules of several types and explicitly respects steric exclusion .we take all such configurations into account using dynamic programming methods that enable us to compute a boltzmann - weighted statistical sum and thus the probability for each factor to bind dna starting at every possible position along the genome ( fig .1b , si methods ) . we also compute the occupancy of each genomic bp , defined as its probability to be covered by any protein of a given type ( fig .there are two adjustable parameters for each dna - binding factor : the temperature , which determines how many factors are bound with high probability , and the free protein concentration , which determines the average occupancy . 
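The statistical sum over non-overlapping configurations can be evaluated with a standard forward-backward (transfer-matrix) recursion. The sketch below does this for a single particle species of footprint w; it is a generic hard-core lattice calculation under an assumed inverse temperature and chemical potential, not the authors' implementation.

```python
import numpy as np

def occupancy(energy, beta=1.0, mu=0.0, w=147):
    """Start probabilities and per-bp occupancy for one species of footprint w
    with hard-core (steric) exclusion.

    energy : array of length L-w+1, binding energy of a particle starting at
             each position (e.g. a per-position nucleosome energy profile)."""
    energy = np.asarray(energy, dtype=float)
    L = len(energy) + w - 1
    K = np.exp(-beta * (energy - mu))       # statistical weight of each placement

    Zf = np.ones(L + 1)                     # Zf[i]: partition function of bases [0, i)
    for i in range(1, L + 1):
        Zf[i] = Zf[i - 1]
        if i >= w:
            Zf[i] += Zf[i - w] * K[i - w]

    Zb = np.ones(L + 1)                     # Zb[i]: partition function of bases [i, L)
    for i in range(L - 1, -1, -1):
        Zb[i] = Zb[i + 1]
        if i + w <= L:
            Zb[i] += K[i] * Zb[i + w]

    p_start = Zf[:L - w + 1] * K * Zb[w:] / Zf[L]   # P(particle starts at s)
    occ = np.convolve(p_start, np.ones(w))          # P(bp j covered by any particle)
    return p_start, occ
```

Additional DNA-binding factors are accommodated by adding one term per species (with its own footprint and weights) to each recursion, which is the form needed for the TF-nucleosome competition discussed below.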
with our choice of parameters ( which have not been optimized , rather , were fixed to allow for comparison with previous studies - si methods ) , the average nucleosome occupancy is 0.797 , and stable , non - overlapping nucleosomes ( with ) cover 16.3% of the yeast genome .* bioinformatic models based on alignments of nucleosome positioning sequences .* several existing analyses of nucleosome positions are based on alignment models , and these in turn explicitly or implicitly include a background model .we examined the sensitivity of one alignment - based approach to the choice of background by implementing it in two different ways : alignment model i is identical to segal et al . and uses genomic background for one strand and uniform background for the other , while alignment model ii employs genomic background for both strands ( si methods ) . comparing two models allows us to separate more robust predictions from those that depend strongly on the implementation details . although the alignment models correlate well with each other ( ) , we find a small _ negative _ correlation between dnabend - predicted occupancies and energies and those from the alignment model i ( si fig .dnabend energies are strongly correlated if two nucleosomes are separated by a multiple of 10 bp , and anti - correlated if the nucleosomes are separated by a multiple of 5 bp , which puts the helical twist out of phase ( si fig .log scores predicted by the alignment model exhibit weaker oscillations ( si fig .17d ) , probably due to ambiguities in aligning the training sequences . because all three models make somewhat different predictions of average nucleosome occupancies in broad genomic regions ( si fig .19 ) , additional experimental data are required to establish which model is the most accurate .in contrast , all models predict that the distribution of center - to - center distances between proximal stable nucleosomes ( with ) exhibits strong periodic oscillations ( si fig .20 ) . however , similar oscillations occur in a non - specific model where we first create a regular nucleosomal array and then randomly label a given fraction of nucleosomes as stable ( data not shown ) . because steric exclusion is respected by every model , nucleosomes form regular arrays ( si figs .17a , b ) which help induce the observed periodicity in the positions of stable nucleosomes . *nucleosome depletion upstream of the orfs . * microarray - based maps of _ in vivo _ nucleosome positions show striking depletion of nucleosomes from the promoter regions ( fig . we find that this depletion is difficult to explain using nucleosome models alone .however , the nucleosome - free region upstream of the open reading frames ( orfs ) becomes much more pronounced when tata box - binding proteins ( tbps ) and other factors bind their cognate sites ( fig .3a , si fig .displaced nucleosomes are re - arranged in regular arrays on both sides of factor - occluded sites , creating the characteristic oscillatory structure around the nucleosome - free region which includes the occupancy peak over the translation start observed in both regular and h2a.z nucleosomes ( fig .3a and si fig .similar oscillations occur when non - specific nucleosomes compete with tbp ( si fig .21 ) , making intrinsic nucleosome sequence preferences hard to disentangle from the larger phasing effects in this data set .* nucleosome occupancy of tata boxes . 
* nonetheless , nucleosome stabilities can play a crucial role in gene activation : for example , dnabend predicts that both tata boxes in the promoter of the yeast _ mel1 _ ( -galactosidase ) gene are occupied by a stable nucleosome , in agreement with the extremely low level of background gene expression observed in _ mel1 _promoter - based reporter plasmids ( fig .in contrast , the tata elements of the _ cyc1 _ promoter were shown to be intrinsically accessible _ in vivo _ , resulting in high background expression levels .consistent with these findings , we predict that one of the _ cyc1 _ tata boxes has intrinsically low nucleosome occupancy , and moreover that the nucleosome is easily displaced in competition with tbp ( fig .it has been suggested that tata box - containing genes may be repressed through steric exclusion of tbp by a nucleosome placed over the tata box . in agreement with this hypothesis , in the absence of other factors dnabend predicts slightly higher nucleosome occupancy over tata boxes ( fig .stress - induced genes are tata - rich and may be nucleosome - repressed under non - stress conditions . thus , this prediction of dnabend can be tested experimentally by measuring nucleosome occupancy over tata boxes of stress - induced genes in non - inducing conditions , genome - wide ( to ensure adequate statistics which were not provided by the data sets available to us , although it was found in a very recent genome - wide study of nucleosome occupancy in yeast that promoters of stress - induced genes tend to be covered by nucleosomes when cells are grown in ypd media , whereas the transcription start site for the typical gene was nucleosome - free ) . * nucleosome - induced tf cooperativity and nucleosome depletion over tf binding sites . *it has been known from chromatin immunoprecipitation experiments that some near - consensus dna sequences are not occupied by their cognate transcription factors ( tfs ) , while poorer sites may be occupied . thus it is natural to ask whether histone binding preferences can help distinguish functional and non - functional tf binding sites ._ found that nucleosome occupancy was intrinsically smaller at functional sites ( si fig .24a ) , while dnabend predicts the opposite ( fig . 4a , upper panel ) ( butsee contrary low resolution data from liu _however , if tfs are allowed to compete with nucleosomes at all sites , the nucleosomes become preferentially depleted ( and the tf occupancy becomes higher ) at the functional sites ( fig .4a , lower panel ) .this depletion is not due to the slightly more favorable binding energies of the functional sites ( si fig .25a ) , because it is reversed when their positions are randomized within intergenic regions ( si fig .furthermore , dnabend - predicted nucleosomes are not depleted over functional sites simply because they are less stable - fig .4a showed the opposite .in fact , functional sites tend to occur just upstream of the orfs ( si fig .25b ) , where dnabend - predicted nucleosomes exhibit enhanced stability ( si fig . 23 ) .the only remaining possibility is that the depletion of dnabend - predicted nucleosomes is caused by the spatial clustering of functional binding sites ( fig .4b , si fig .25c ) . in this scenario several dna - binding proteinscooperate to evict the nucleosome and thus enhance their own binding , in a phenomenon known as the nucleosome - induced cooperativity ( fig .4c ) . 
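This collaborative-competition effect is easy to reproduce with the same hard-core thermodynamics as in the earlier occupancy sketch, extended to several species. The toy parameters below are arbitrary, chosen only to make the effect visible, and are not taken from the paper.

```python
import numpy as np

def multi_species_occupancy(weights):
    """weights: dict name -> (footprint w, array K of Boltzmann weights per start
    position).  Returns per-bp occupancy of each species under steric exclusion."""
    L = max(w + len(K) - 1 for w, K in weights.values())
    Zf = np.ones(L + 1)
    for i in range(1, L + 1):
        Zf[i] = Zf[i - 1]
        for w, K in weights.values():
            if i >= w and i - w < len(K):
                Zf[i] += Zf[i - w] * K[i - w]
    Zb = np.ones(L + 1)
    for i in range(L - 1, -1, -1):
        Zb[i] = Zb[i + 1]
        for w, K in weights.values():
            if i + w <= L and i < len(K):
                Zb[i] += K[i] * Zb[i + w]
    occ = {}
    for name, (w, K) in weights.items():
        p = Zf[:len(K)] * K * Zb[w:w + len(K)] / Zf[L]
        occ[name] = np.convolve(p, np.ones(w))
    return occ

# Toy promoter: flat nucleosome preference, one or two strong TF sites 60 bp apart.
L, w_nuc, w_tf = 600, 147, 10
K_nuc = np.full(L - w_nuc + 1, 50.0)                 # nucleosomes favoured everywhere
K_tf_one = np.full(L - w_tf + 1, 1e-3); K_tf_one[300] = 20.0
K_tf_two = K_tf_one.copy();             K_tf_two[360] = 20.0
occ1 = multi_species_occupancy({'nuc': (w_nuc, K_nuc), 'tf': (w_tf, K_tf_one)})
occ2 = multi_species_occupancy({'nuc': (w_nuc, K_nuc), 'tf': (w_tf, K_tf_two)})
# TF occupancy at the first site is typically higher when the partner site exists.
print(occ1['tf'][300:310].mean(), occ2['tf'][300:310].mean())
```

In this toy setting the two TFs never interact directly; the rise in occupancy at the first site comes entirely from jointly out-competing the nucleosome that would otherwise cover both sites.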
the cooperativity does not depend on direct interactions between tfs and requires only that two or more tf sites occur within a 147 bp nucleosomal footprint ( fig .4d ) . a model in which tfs compete with nucleosomes for their cognate sites provides the best explanation for the strong nucleosome depletion over functional sites observed in microarray experiments ( fig .although it is not yet conventional to do so , it would be trivial to reward clustering within 147 bp in codes that predict regulatory sites . * predictions of experimentally mapped nucleosome positions .* we used a set of 110 nucleosome positions reported in the literature for 13 genomic loci to see how well dnabend predictions match _ in vivo _ chromatin structure ( fig .5 , si figs .26a - n ) .the predictive power of nucleosome models improves considerably when sequence - specific factors are included ( fig .5 ) , though not in genome - wide sets ( si fig . 27 ) ; this result may be due to the lack of accurate genome - wide knowledge of tf binding sites and energies .for example , at , , and recombination enhancer loci , nucleosomes are positioned in regular arrays whose boundaries are determined by the origin recognition complex , abf1 , rap1 , and mcm1/mat bound to their cognate sites ( si figs .26f - h ) . both intrinsic sequence preferences and boundaries created by other factors contribute to positioning nucleosomes : on one hand , at the _ gal1 - 10 _ locus a nucleosome covering a cluster of gal4 sites is evicted , making bp of promoter sequence accessible to tbp and other factors ( si fig .26a ) , as observed _ in vivo_. on the other hand , gcn4 and abf1 sites at the _his3-pet56-ded1 _ locus are intrinsically nucleosome - depleted , because histones have lower affinity for _his3-pet56 _ and _ ded1 _ promoters ( fig .6 , si fig .26k ) . correctly predicts chromatin re - organization caused by sequence deletions in the _promoter region : sequence - dependent nucleosome positions are refined through the action of gcn4 and tbp to improve the agreement with experiment ( si figs .26l - n ) .we have developed a dna mechanics model capable of accurately predicting _ in vitro _free energies of nucleosome formation , optimal base pairs in the minor and major grooves of nucleosomal dna , and dna geometries specific to each base step .we only get an agreement with the available genome scale data sets for nucleosome depletion from the tata box and functional tf binding sites when we include competition with the relevant factors . in the absence of dna - binding factors dnabend predicts a weak enhancement of nucleosome occupancy over their sites and thus agrees with the conjecture , not yet demonstrated on a genome - wide scale , that nucleosomes provide a default repression . the two alignment models we examined do not support this conjecture ; it is conceivable that they have implicitly captured a hybrid signal from both nucleosomes and dna - bound factors .the highest quality data on generic nucleosome occupancy come from the specific loci summarized in fig .5 . 
there is a relatively weak signal from all models in the absence of other factors , originating from a few correct predictions at short distances .because dnabend predicts correct binding energies , the weak signal suggests that nucleosomes positioned by thermodynamics on bare dna have relatively weak correlation with _ in vivo _positions , while inclusion of other factors substantially improves the picture .dnabend presents a useful biophysical framework for analysis of _ in vivo _ nucleosome locations and tf - nucleosome competition. it will be interesting to examine its predictions for metazoan genomes , and to modulate gene expression levels in model systems through computational design of nucleosome occupancy profiles .for full details , see si methods . dnabend software and additional supporting data ( including a _ s.cerevisiae_ nucleosome viewer ) are available on the nucleosome explorer website : _the total energy of a nucleosomal dna is given by a weighted sum of two quadratic potentials : where is the sequence - specific dna elastic energy and is the histone - dna interaction energy ( see si methods ) .the potentials are quadratic with respect to deviations of global displacements and local angles from their ideal superhelical values .the weight is fit to maximize the average correlation coefficient between the distributions of geometric parameters observed in 1kx5 and the dnabend predictions ( si fig .the conformation adopted by the dna molecule is the one that minimizes its total energy , yielding a system of linear equations : where and is the number of dinucleotides .the nucleosome energies ( assumed to be given by ) and the energies of other dna - binding factors at each genomic position are then used as input to a dynamic programming algorithm which outputs binding probabilities and bp occupancies for each dna element .a.v.m . was supported by the lehman brothers foundation through a leukemia and lymphoma society fellowship .e.d.s . was funded by the nsf grant dmr-0129848 . j.w .acknowledges support from nigms grants r01 gm054692 and r01 gm058617 , and the use of instruments in the keck biophysics facility at northwestern university .v.m.s . was supported by nsf ( 0549593 ) and nih ( r01 gm58650 ) grants .we thank eran segal , michael y. tolstorukov , wilma k. olson , and victor b. zhurkin for useful discussions and for sharing nucleosome data .99 khorasanizadeh s ( 2004 ) the nucleosome : from genomic organization to genomic regulation ._ cell _ 116:259272 .richmond tj , davey ca ( 2003 ) the structure of dna in the nucleosome core ._ nature _ 423:145150 .boeger h , griesenbeck j , strattan js , kornberg rd ( 2003 ) nucleosomes unfold completely at a transcriptionally active promoter ._ mol cell _ 11:15871598 .wallrath ll , lu q , granok h , elgin scr ( 1994 ) architectural variations of inducible eukaryotic promoters : preset and remodeling chromatin structures . _ bioessays _ 16:165170 . thastrom a , lowary pt , widlund hr , cao h , kubista m , widom j ( 1999 ) sequence motifs and free energies of selected natural and non - natural nucleosome positioning dna sequences . 
_j mol biol _ 288:213229 .segal e , fondufe - mittendorf y , chen l , thastrom a , field y , moore ik , wang jz , widom j ( 2006 ) a genomic code for nucleosome positioning ._ nature _ 442:772778 .ioshikhes ip , albert i , zanton sj , pugh bf ( 2006 ) nucleosome positions predicted through comparative genomics ._ nat genet _ 38:12101215 .peckham he , thurman re , fu y , stamatoyannopoulos ja , noble ws , struhl k , weng z ( 2007 ) nucleosome positioning signals in genomic dna ._ genome res _ 17:11701177 .luger k , mder aw , richmond rk , sargent df , richmond tj ( 1997 ) crystal structure of the nucleosome core particle at 2.8 resolution ._ nature _ 389:251260 .olson wk , gorin aa , lu x , hock lm , zhurkin vb ( 1998 ) dna sequence - dependent deformability deduced from protein - dna crystal complexes ._ proc nat acad sci _ 95:1116311168 .morozov av , havranek jj , baker d , siggia ed ( 2005 ) protein - dna binding specificity predictions with structural models ._ nucl acids res _ 33:57815798 .arents g , moudrianakis en ( 1993 ) topography of the histone octamer surface : repeating structural motifs utilized in the docking of nucleosomal dna ._ proc nat acad sci _ 90:1048910493 .tolstorukov my , colasanti av , mccandlish dm , olson wk , zhurkin vb ( 2007 ) a novel roll - and - slide mechanism of dna folding in chromatin : implications for nucleosome positioning ._ j mol biol _ 371:725738 .widom j ( 2001 ) role of dna sequence in nucleosome stability and dynamics ._ q rev biophys _ 34:269324 .shrader te , crothers dm ( 1989 ) artificial nucleosome positioning sequences ._ proc nat acad sci _ 86:74187422 .shrader te , crothers dm ( 1990 ) effects of dna sequence and histone - histone interactions on nucleosome placement ._ j mol biol _ 216:6984 .lowary pt , widom j ( 1998 ) new dna sequence rules for high affinity binding to histone octamer and sequence - directed nucleosome positioning ._ j mol biol _ 276:1942 .widlund hr , cao h , simonsson s , magnusson e , simonsson t , nielsen pe , kahn jd , crothers dm , kubista m ( 1998 ) identification and characterization of genomic nucleosome - positioning sequences . _j mol biol _ 267:807817 .cao h , widlund hr , simonsson t , kubista m ( 1998 ) tgga repeats impair nucleosome formation ._ j mol biol _ 281:253260 .thastrom a , bingham lm , widom j ( 2004 ) nucleosomal locations of dominant dna sequence motifs for histone - dna interactions and nucleosome positioning . _j mol biol _ 338:695709 .durbin r , eddy sr , krogh a , mitchison g ( 1998 ) _ biological sequence analysis : probabilistic models of proteins and nucleic acids _( cambridge university press , cambridge , ma ) .yuan g , liu y , dion mf , slack md , wu lf , altschuler sj , rando oj ( 2005 ) genome - scale identification of nucleosome positions in_ s.cerevisiae_. 
_ science _ 309:626630 .ozsolak f , song js , liu xs , fisher de ( 2007 ) high - throughput mapping of the chromatin structure of human promoters ._ nat biotech _ 25:244248 .kornberg rd , stryer l ( 1988 ) statistical distributions of nucleosomes : nonrandom locations by a stochastic mechanism ._ nucl acids res _ 16:66776690 .fedor mj , lue nf , kornberg rd ( 1988 ) statistical positioning of nucleosomes by specific protein - binding to an upstream activating sequence in yeast ._ j mol biol _ 204:109127 .albert i , mavrich tn , tomsho lp , qi j , zanton sj , schuster sc , pugh bf ( 2007 ) translational and rotational settings of h2a.z nucleosomes across the _ saccharomyces cerevisiae _ genome ._ nature _ 446:572576 .ligr m , siddharthan r , cross fr , siggia ed ( 2006 ) gene expression from random libraries of yeast promoters ._ genetics _ 172:21132122 .melcher k , sharma b , ding wv , nolden m ( 2000 ) zero background yeast reporter plasmids ._ gene _ 247:5361 .chen j , ding m , pederson ds ( 1994 ) binding of tfiid to the cyc1 tata boxes in yeast occurs independently of upstream activating sequences ._ proc nat acad sci _ 91:1190911913 .kuras l , struhl k ( 1999 ) binding of tbp to promoters _ in vivo _ is stimulated by activators and requires pol ii holoenzyme ._ nature _ 399:609613 .struhl k ( 1999 ) fundamentally different logic of gene regulation in eukaryotes and prokaryotes ._ cell _ 98:14 .lee w , tillo d , bray n , morse rh , davis rw , hughes tr , nislow c ( 2007 ) a high - resolution atlas of nucleosome occupancy in yeast ._ nat genet _39:12351244 .macisaac kd , wang t , gordon db , gifford dk , stormo gd , fraenkel e ( 2006 ) an improved map of conserved regulatory sites for _ saccharomyces cerevisiae_. _ bmc bioinformatics _ 7:113 .adams cc , workman jl ( 1995 ) binding of disparate transcriptional activators to nucleosomal dna is inherently cooperative ._ mol cell biol _ 15:14051421 .liu x , lee c - k , granek ja , clarke nd , lieb jd ( 2006 ) whole - genome comparison of leu3 binding in vitro and in vivo reveals the importance of nucleosome occupancy in target site selection . _genome res _16:15171528 .miller ja , widom j ( 2003 ) collaborative competition mechanism for gene activation _ in vivo_._ mol cell biol _ 23:16231632 .li s , smerdon mj ( 2002 ) nucleosome structure and repair of n - methylpurines in the gal1 - 10 genes of _saccharomyces cerevisiae_. _ j biol chem _ 277:4465144659 .sekinger ea , moqtaderi z , struhl k ( 2005 ) intrinsic histone - dna interactions and low nucleosome density are important for preferential accessibility of promoter regions in yeast ._ mol cell _ 18:735748 .basehoar ad , zanton sj , pugh bf ( 2004 ) identification and distinct regulation of yeast tata box - containing genes ._ cell _ 116:699709 .morozov av , siggia ed ( 2007 ) connecting protein structure with predictions of regulatory sites ._ proc nat acad sci _ 104:70687073 . only for the _pho5 _ , , and _deletion loci with dnabend ( si fig .26 ) , and for the _ gal1 - 10 _ , _ ste6 _ loci with alignment model i , for distances bp .see si fig .27 for details of implementation , including the definition of the null ( random ) model with steric exclusion .results from the alignment model ii are similar ( data not shown ) . ]we model each dna basepair as a rigid body and specify its position in space using a local coordinate frame attached to each basepair and defined using $ ] and ( ) . 
here are the orthonormal basis vectors of the local frame , and is its origin in the fixed global coordinate frame .both the basis vectors and the vector to the origin can be expressed in terms of 6 independent parameters that are sufficient for reconstructing the spatial position of any rigid body . thus apart from a single global translation and rotation an arbitrary dna conformation with basepairs is uniquely specified if sets of 6 geometric parameters are known . by convention , the geometric parametersare chosen to be the three angular degrees of freedom which define unit vectors of the local frame attached to the basepair in the local frame attached to the basepair , and the displacement vector which gives the origin of frame with respect to origin of frame : . here are the helical twist , roll and tilt angles , and is the displacement vector with the x , y , z components called slide , shift and rise ( fig since the geometric parameters which specify the spatial position of the basepair are defined with respect to the frame rigidly attached to the previous basepair , their values capture local deviations in the dna conformation .however , they can also be used to recursively construct the rotation matrix for the basepair in the global frame : where each matrix is the product of three rotations : here , note that and are the rotation matrices around the y and z axes , respectively . the middle term on the right - hand side of equation ( [ ti ] ) introduces both roll and tilt with a single rotation through angle around a roll - tilt axis .the roll - tilt axis lies in the x - y plane of the mid - step coordinate frame ( mid - step triad , or mst : supplementary fig .7 ) , and is in general inclined at an angle to the y - axis . in this scheme , it is the mid - step triad that is used to transform the displacement vector into the fixed global frame : . introducing a single roll - tilt rotation axisis preferable to separate roll and tilt rotations because it eliminates the non - commutativity problem associated with the roll and tilt rotation operations . similarly to the basepair rotation matrices , the mid - step triads are also constructed recursively : where thus having a complete set of local geometric parameters is equivalent to knowing the global dna conformation : first , the recursive relation ( [ ri ] ) is employed to determine the orientations of all basepair coordinate frames ( except for the first one which has to be fixed in space independently and thus provides the overall orientation and position of the dna molecule ) .second , equation ( [ ri : mst ] ) is used to construct the mst frames , which are then employed to transform all local displacements into the global frame . finally , all displacement vectors are added up vectorially to determine the origins of the basepair coordinate frames . if necessary , the basepair frames can then be used to reconstruct the positions of all dna basepair atoms ( using an idealized representation which neglects flexibility of bases in a basepair ) . note that knowing a basepair coordinate frame is in general insufficient for predicting the positions of the phosphate backbone and sugar ring atoms because of their additional degrees of freedom. 
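A stripped-down version of this frame propagation is sketched below. It composes twist, roll and tilt as three separate rotations and transfers the displacement with the upstream frame, so it deliberately ignores the single roll-tilt-axis construction and the mid-step triads described above; it is meant only to make the recursion concrete, not to reproduce the exact convention.

```python
import numpy as np

def _rot(axis, angle):
    """Rotation matrix about a principal axis ('x', 'y' or 'z')."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == 'y':
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def propagate_frames(steps):
    """Rebuild base-pair frames from local step parameters.

    steps : (m, 6) array of (twist, roll, tilt, shift, slide, rise); angles in
    radians, displacements in Angstrom.  Returns origins and rotation matrices
    of the m+1 base-pair frames in the global coordinate system."""
    R = [np.eye(3)]                 # the first frame fixes the global orientation
    r = [np.zeros(3)]
    for twist, roll, tilt, shift, slide, rise in np.asarray(steps, dtype=float):
        # Simplified step rotation: separate twist/roll/tilt rotations instead of
        # the single roll-tilt-axis rotation used in the full scheme.
        T = _rot('z', twist) @ _rot('y', roll) @ _rot('x', tilt)
        d = np.array([shift, slide, rise])
        # The full scheme expresses d in the mid-step triad; the upstream frame
        # is used here, which is adequate for illustration only.
        r.append(r[-1] + R[-1] @ d)
        R.append(R[-1] @ T)
    return np.array(r), np.array(R)
```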
finally we remark without showing the details of the calculations that the inverse problem is also well - defined : a full set of basepair and mst rotation matrices in the global frame is sufficient to reconstruct all local degrees of freedom .we have implemented the solution to the inverse problem following a previously published description . while the helical twist serves to rotate the consecutive dna basepairs with respect to one another , introducing non - zero roll and tilt angles imposes curvature onto the dna conformation . indeed ,if both roll and tilt are negligible is simply a rotation through around the z - axis ( which coincides with the helical axis in this case ) , and is a rotation through around the same axis . the action of roll and tilt can be seen more clearly by considering the curvature vector , defined as the difference between the tangent vectors for two consecutive basepairs : .the expression for curvature is simplified in the limit of small roll and tilt : since the magnitude of roll ( ) and tilt ( ) are typically much smaller than the helical twist per basepair ( ) , can be expanded to : because we are primarily interested in the magnitude of the curvature vector , we can compute it in the local frame of basepair : where is the z - axis unit vector . using equation ( [ ti_linear ] )we obtain : thus in this limit the magnitude of the curvature vector contains equal contributions from both roll and tilt : .furthermore , the roll and tilt contributions to curvature are shifted by with respect to one another .transformations between local and global descriptions of the dna molecule are important because the sequence - specific dna elastic energy is best described in terms of the local degrees of freedom , whereas it is the global coordinates that can be most naturally restrained to follow an arbitrary spatial curve . in nucleosome core particles the spatial curve used to restraindna conformation is an ideal left - handed superhelix with pitch and radius , described by the following parametric equation : using the arc length to parameterize the curve we obtain : where .a local frame at position is given by a set of three orthonormal frenet vectors ( tangent , normal , and binormal ) : we position sets of frenet basis vectors equidistantly along the superhelical curve : ( ) , where is the number of nucleosomal superhelical turns .fitting an ideal superhelix to the high - resolution crystal structure of the nucleosome core particle reveals that in order to transform the slowly rotating frenet basis vectors into the helically twisted local frames associated with dna basepairs , we impose an additional helical rotation around the tangent vector : here is the cumulative helical twist , is the average helical twist from the structure , and the rotation specified by equations ( [ nb : rot ] ) is performed in the basis .similarly , the mst frames are located at ( ) , and are rotated through to bring them into register with the helical twist .a superhelix described by equation ( [ ideal : t ] ) has constant curvature which is manifested by the constant difference between the consecutive tangent vectors : .this allows us to determine the length of the roll - tilt vector : ( which also gives the maximum value of roll and tilt ) .in this idealized picture , roll and tilt can be assumed to make equal contributions to the curvature . 
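For reference, the ideal superhelix and its Frenet frames can be generated directly from the parametric form above. The radius, pitch and number of turns below are nominal nucleosome-like values used only for illustration, and the extra helical-twist rotation of the frames about the tangent is omitted.

```python
import numpy as np

def superhelix(n_points, radius=41.8, pitch=25.9, turns=1.65):
    """Points and Frenet frames (tangent, normal, binormal) spaced uniformly in
    arc length along an ideal left-handed superhelix (lengths in Angstrom)."""
    c = pitch / (2.0 * np.pi)
    a = np.sqrt(radius**2 + c**2)                # ds = a * d(phi)
    s = np.linspace(0.0, 2.0 * np.pi * turns * a, n_points)
    phi = s / a
    x = np.stack([radius * np.cos(phi), -radius * np.sin(phi), c * phi], axis=1)
    t = np.stack([-radius * np.sin(phi), -radius * np.cos(phi),
                  np.full_like(phi, c)], axis=1) / a
    n = np.stack([-np.cos(phi), np.sin(phi), np.zeros_like(phi)], axis=1)
    b = np.cross(t, n)
    curvature = radius / a**2                    # constant along the curve
    return x, t, n, b, curvature
```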
indeed , reconstruction of the local geometric parameters from the full set of helically twisted frenet frames results in and , where and is the initial phase determined by the location of the first basepair .thus twist and rise are constant for every basepair in the ideal superhelix , slide and shift are zero , whereas roll and tilt exhibit oscillations resulting from the superhelical curvature , and shifted by with respect to one another ( supplementary fig .10 ) .the total dna conformation energy is a weighted sum of two terms : the sequence - specific dna elastic energy designed to penalize deviations of the local geometric parameters from their average values observed in the protein - dna structural database , and the restraint energy which penalizes deviations of the dna molecule from the nucleosomal superhelix .thus all interactions ( primarily of electrostatic nature ) between the dna molecule and the histone octamer are modeled implicitly by imposing the superhelical restraint .* dna elastic energy . * using the local degrees of freedom , we represent the sequence - specific dna elastic energy by a quadratic potential : ^t f^{n(s ) } [ \alpha^{s } - \langle \alpha^{n(s ) } \rangle],\ ] ] where the sum runs over all consecutive dinucleotides ( basesteps ) and are the average values computed for the basestep type using a collection of oligonucleotides extracted from a set of 101 non - homologous protein - dna structures . the matrix of force constants is evaluated by inverting the covariance matrix of deviations of local geometric parameters from their average values ( ) : note that our elastic energy model utilizes only and first and second moments of the empirical geometry distributions , and thus disregards all non - gaussian effects such as skewness and bimodality . including higher order moments amounts to introducing non - quadratic corrections to the elastic energy model , which makes the computation much less tractable .this problem is partially alleviated however by removing dinucleotides from the data set if one or more of their geometric parameters are further than 3 standard deviations away from the mean .the mean is then recomputed and the procedure is repeated until convergence . dna elastic energy is a function of 10 basestep type - dependent average values for each of the 6 local degrees of freedom ( all averages and covariances for the dinucleotides related by reverse complementarity are equal by construction ) , and of 15 independent force constants in each symmetric 6d matrix . in order to be able to combine the dna elastic term with the superhelical restraint term defined in the global frame , we apply the following coordinate transformation to equation ( [ e_wo ] ) : ( and denote 3d unit and zero matrices , respectively ) .applying to the local degrees of freedom leaves the angles ( twist , roll , and tilt ) invariant , but transforms the local displacements ( shift , slide , and rise ) into the global frame . 
to keep the elastic energy invariant , the force constants have to be transformed as well : .equation ( [ e_wo ] ) then becomes : ^t \mathbf{f}^{n(s ) } [ { \mathbf r}_s ( \alpha^{s } - \langle \alpha^{n(s ) } \rangle ) ] , \ ] ] finally , it is convenient to change variables so that all degrees of freedom are expressed in terms of their deviations from the ideal superhelix : note that all the transformations described in this section involve no additional approximations to the original dna elastic energy ( equation ( [ e_wo ] ) ) .thus we are free to choose the most convenient rotation matrix in equation ( [ rs ] ) , and use from the ideal superhelix in equation ( [ e_wo_global ] ) .* superhelical restraint energy . *superhelical restraint energy is used to bend nucleosomal dna into the superhelical shape .we define the restraint energy as the quadratic potential : where and are the nucleosomal and the ideal superhelix radius - vectors in the global frame ( ) : then to the lowest order the difference between the radius vectors is given by : ,\ ] ] where label the vector components , and are the connectivity coefficients constructed as a product of the first derivative of the rotation matrix with respect to the frame angles and the ideal displacements in the local frame . the first term in equation ( [ d : exp ] )represents the net change in the global radius vector caused by the changes in the displacements up to and including the basestep , while the second term reflects the change in the global radius vector resulting from modifying one of the rotation angles .note that changing a single rotation angle at position affects every downstream basepair position linearly by introducing a kink into the dna chain , whereas making a displacement change at that position simply shifts all downstream coordinates by a constant amount .the first derivative of the rotation matrix is evaluated as : upon substitution of the expansion ( [ d : exp ] ) into the restraint energy we obtain an effective quadratic potential : where , and the 6x6 matrix of force constants is given by three distinct 3x3 submatrices : here , where is the theta function , and is the kronecker delta . couples displacements with displacements , couples displacements with angles ( and thus has one connectivity coefficient ) , and couples angles with angles through two connectivity coefficients . summing over some of the theta functions we obtain : where .* total energy and dna conformation minimization . 
* the total energy of nucleosomal dna is given by : where is the fitting weight introduced to capture the balance between favorable histone - dna interactions and the unfavorable energy of bending dna into the nucleosomal superhelix .we fit to maximize the average correlation coefficient between the distributions of geometric parameters observed in the high - resolution crystal structure of the nucleosome core particle ( 1kx5 ) , and the corresponding dnabend predictions ( supplementary fig .this procedure yields for the 147 bp superhelix and for the 71 bp superhelix bound by the tetramer ( bps 39 through 109 in the 147 bp superhelix ) .dnabend is not very sensitive to the exact value of : we found a correlation of 0.99 between the free energies computed using and and the 71 bp superhelix ( data not shown ) .note that the total energy is a sum of two quadratic potentials written out in terms of the deviations of the global displacements and the local frame angles from their ideal superhelical values .given the value of the fitting weight , the conformation adopted by the dna molecule is the one that minimizes its total energy .since the energy is quadratic , finding the energy minimum is equivalent to solving a system of linear equations : the numerical solution of the system of equations ( [ ederiv ] ) provides a set of geometric parameters corresponding to the minimum dna energy : .these can be used to find the original geometric parameters in the local frame : .* worm - like chain . * according to the worm - like chain model , the energy required to bend a dna molecule is sequence - independent and given by : where is the contour length of the molecule , is the persistence length ( estimated to be ) , kcal / mol , and is the unit tangent vector .the contour length of the ideal 147 bp superhelix is given by . from equations ( [ ideal :s ] ) and ( [ tnb ] ) we obtain , and thus in fig. 2b , the mean and the standard deviation of dnabend energies computed for chromosome iii are . equating the worm - like chain model estimate with the mean dnabend energy ,we obtain a scaling coefficient of 0.26 which can be used to express dnabend energies in kcal / mol .this gives the difference of 15.2 kcal / mol between the best and the worst chromosome iii sequences , and kcal / mol .most sequences differ by or less in fig .2b and are thus separated by kcal / mol .a similar value of the scaling coefficient ( 0.21 ) arises from a linear model fit between experimental and dnabend - predicted free energies in supplementary fig .12a ( red circles ) .chromatin structure plays an important role in regulating eukaryotic gene expression . eukaryotic dna is packaged by histones into chains of nucleosomes that are subsequently folded into nm chromatin fibers . because arrays of nucleosomes form on chromosomal dna under physiological conditions , in order to predict genomic nucleosomal occupancies we need to describe dna packaged into multiple non - overlapping nucleosomes .furthermore , nucleosomes may compete with other dna binding proteins for genomic sequence .for example , several closely spaced tf binding sites may serve to displace the nucleosomes from the promoter region upon tf binding . 
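as a brief numerical aside before turning to configurations with many nucleosomes : the worm - like chain estimate discussed above can be reproduced with a few lines of code . all parameter values below are generic placeholders ( persistence length of roughly 50 nm , a nominal rise per basepair , and an illustrative superhelix radius and pitch ) ; the fitted values actually used in the text are not reproduced here .
....
import numpy as np

kT   = 0.59          # kcal/mol at room temperature (assumed)
xi_p = 500.0         # persistence length in angstrom (~50 nm, assumed)
R, P = 41.9, 25.9    # illustrative superhelix radius and pitch, angstrom
nbp, rise = 147, 3.4 # basepairs and assumed rise per basepair, angstrom

c = P / (2 * np.pi)
kappa = R / (R**2 + c**2)      # constant curvature of the ideal superhelix, 1/angstrom
L = nbp * rise                 # approximate contour length, angstrom

# sequence-independent worm-like chain bending energy: E = (xi_p * kT / 2) * kappa^2 * L
E_wlc = 0.5 * xi_p * kT * kappa**2 * L
print(f"wlc bending energy ~ {E_wlc:.1f} kcal/mol")
....
with these placeholder numbers the estimate comes out in the tens of kcal / mol , the same order of magnitude as the scaled dnabend energies discussed above ; the precise scaling coefficient depends on the contour length and elastic parameters actually used .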
to take the competition between nucleosomes and other factors into accountwe need to consider configurations with multiple regularly spaced nucleosomes ( accomodating a 20 - 25 bp linker between the end of one nucleosome and the beginning of the next ) as well as other dna - binding proteins .the probability of any such configuration can be computed if we can construct a statistical sum ( partition function ) over all possible configurations .though the sum has exponentially many terms and is thus impossible to evaluate by brute force for all but very short dna sequences , it can nonetheless be efficiently computed with a dynamic programming algorithm . here we develop the dynamic programming approach for a general case of objects of length that are placed on genomic dna ( _ i.e. _ occupy dna basepairs ) .the objects could represent nucleosomes , tfs , or any other dna - binding proteins .the binding energy of object with dna at each allowed position is assumed to be known : , where is the number of basepairs . for nucleosomes we will carry out a genome - wide computation of dna elastic energies as described above , while for other factorsthe energy landscapes will be constructed based on their binding preferences inferred from footprinting experiments , selex assays , bioinformatics predictions , etc .we assign index to the background which can be formally considered to be an object of length 1 : . in the simplest case which we consider here the background energy is zero everywhere , but more sophisticated models could incorporate a global bias by making positions near dna ends less favorable , etc .we wish to compute a statistical sum over all possible configurations in which object overlap is not allowed ( including the background `` object '' ) : where is the total energy of an arbitrary configuration of non - overlapping objects , is the number of objects of type , and is the pre - computed energy of the object of type which occupies positions through .it is possible to evaluate ( or the free energy ) efficiently by recursively computing the partial statistical sums : with the initial condition .the theta function is defined as : it is computationally more efficient to transform equation ( [ z : forward ] ) into log space by defining the partial free energies : with the initial condition . for numerical stability ,we rewrite equation ( [ f : forward ] ) as an explicit update of the partial free energy computed at the previous step : the free energy computed in the final step is the full free energy where all possible configurations are taken into account : . since the algorithm proceeds by computing partial sums from 1 to n it is often called the forward pass .similar equations can be constructed for the backward pass which proceeds from n to 1 : with the initial condition .we transform equation ( [ z : backward ] ) into log space by defining the backward partial free energies : with the initial condition , or equivalently : note that by construction . with the full set of forward and backward partial free energies we can evaluate any statistical quantity of interest . for example, the probability of finding an object of type at positions is given by : another quantity of interest is the occupancy of the basepair by object , defined as the probability that basepair is covered by any object of type : ( note that for ) .if the object is composite ( _ i.e. 
_ consists of basepairs extended symmetrically on both sides by to take the nonzero linker lengths into account : ) , we may be interested in the partial occupancy by basepairs at the center of the object . equation ( [ occ : def ] ) then becomes : recursively , here we assume that all quantities on the right - hand side are set to zero if their indices are outside their definition domains ( such as ) .finally , we need to take into account the fact that the objects can bind dna in both directions , and thus there are two binding energies for each position : ( object starts at and extends in the 5 to 3 direction ) and ( object starts at and extends in the 3 to 5 direction ) .it is easy to show that the formalism developed above applies without change if the binding energies are replaced by the free energies which take both binding orientations into account : .\ ] ] in the case of a single type of dna - binding object ( such as the nucleosome energies predicted with dnabend ) there are two free parameters : the mean energy of nucleosomes over a chromosome or a given genomic region which affects overall nucleosome occupancy , and the standard deviation which plays the role of inverse temperature . note that dnabend nucleosomes are 157 bp long because a 147 bp histone binding site is flanked on both sides by 5 bp long linkers which do not contribute to the energy . because the background energy is set to zero , making more negative results in fewer bases being devoid of nucleosomes ( this corresponds to increasing free concentration of histone octamers in the histone - dna solution ) , and vice versa .the inverse temperature is related to the nucleosome stability : making smaller results in fewer stable nucleosomes ( _ e.g. _ those present in half or more of all the configurations in equation ( [ z : part ] ) ) , while allowing for a bigger difference between favorable and unfavorable nucleosomal energies results in more configurations with frozen , stable nucleosomes .for all calculations in this paper we have chosen to set , for every chromosome .the nucleosome energies with zero mean result in the average nucleosome occupancy of 0.797 over all chromosomes , and 12538 stable , non - overlapping nucleosomes ( with ) covering 16.3% of the yeast genome .in addition , this value of is close to the scale of dnabend elastic energies .the free parameters described here are also present in the alignment model ( albeit in a more implicit manner , see section [ aln : methods ] ) , and are required in general for each type of dna - binding object .all tf binding energies were computed using position - specific weight matrices ( pwms ) from morozov _ et al. _ and macisaac _ et al . _: where is the length of the binding site , is the nucleotide at position , is the frequency of base at position in the pwm , and is the background frequency of base : ( the same background frequencies as used in the alignment - based nucleosome model , see section [ aln : methods ] ) . 
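a minimal software sketch of the recursive algorithm of section [ dynapro ] referred to above is given below ( python ) . it implements the forward and backward passes in log space for numerical stability and derives per - basepair occupancies ; the indexing conventions and toy energies are illustrative , and the two binding orientations , composite objects and linker extensions are omitted for brevity .
....
import numpy as np

def occupancy(E, N):
    """
    E: dict {object_type: (length L, energies e[i] for 0-based start position i = 0 .. N-L)};
    the background is an implicit length-1 object of zero energy.
    returns per-basepair occupancies per object type and the log partition function.
    """
    NEG = -np.inf
    # forward pass: F[i] = log statistical sum over the first i basepairs
    F = np.full(N + 1, NEG); F[0] = 0.0
    for i in range(1, N + 1):
        terms = [F[i - 1]]                         # basepair i covered by background
        for L, e in E.values():
            if i - L >= 0:                         # an object ends exactly at basepair i
                terms.append(F[i - L] - e[i - L])
        F[i] = np.logaddexp.reduce(terms)
    # backward pass: B[i] = log statistical sum over basepairs i .. N
    B = np.full(N + 2, NEG); B[N + 1] = 0.0
    for i in range(N, 0, -1):
        terms = [B[i + 1]]
        for L, e in E.values():
            if i - 1 + L <= N:                     # an object starts at basepair i
                terms.append(-e[i - 1] + B[i + L])
        B[i] = np.logaddexp.reduce(terms)
    logZ = F[N]
    occ = {}
    for name, (L, e) in E.items():
        # probability that an object of this type starts at each position
        p_start = np.array([np.exp(F[i] - e[i] + B[i + L + 1] - logZ)
                            for i in range(N - L + 1)])
        o = np.zeros(N)
        for i, p in enumerate(p_start):
            o[i:i + L] += p                        # every covered basepair accumulates the probability
        occ[name] = o
    return occ, logZ

# toy example: one 5-bp "nucleosome"-like object on a 20-bp sequence with random energies
rng = np.random.default_rng(1)
occ, logZ = occupancy({"nuc": (5, rng.normal(0.0, 1.0, 16))}, 20)
....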
is an energy scale , set to 1 for all tfs .the distribution of energies is bounded from below by the energy of the consensus sequence which is a free parameter of the model since it depends on the tf concentration in solution .thus the energy of a tf bound to site is given by : , where is typically set to -6.0 ( relative to the 0.0 average energy of dnabend nucleosomes ) , but can be also varied through a range of energies ( cf .finally , the two energies of binding the same site in both directions are combined using : ,\ ] ] where is the strand bias .a set of tf binding energies computed in this way ( together with the nucleosome binding energy profile ) is used as input to the recursive algorithm from section [ dynapro ] . in practice , we restrict a set of sites to which tfs are allowed to bind by assigning a highly unfavorable energy to all tf positions that are not listed as predicted sites by macisaac _ tbp binding energies were obtained in the same way , except that the tbp pwm was derived using an alignment of tata box sites from basehoar _ et al . _ tbps were only allowed to bind the sites listed in basehoar __ as tata boxes .yuan _ et al ._ have developed a hidden markov model - based approach for inferring nucleosome positions from high resolution tiled microarray data . the data was collected for most of _chromosome iii , plus additional regulatory regions .we have used yuan _`` hmm calls '' listed in the supplementary data to identify probes occupied by either a well - positioned or a fuzzy nucleosome ( manually added `` hand calls '' were not considered ) .we then extended nucleosomal regions from the mid - point of the first occupied probe to the mid - point of the last occupied probe , plus additional 10 bp on each side ( making it consistent with the definition of nucleosome coverage of tf binding sites employed in fig . 3 of yuan _ et al . _ ) . thus the nucleosomal length is given by , where is the number of contiguous probes occupied by a single nucleosome .we find that most nucleosomes occupy 6 contiguous probes and thus cover 120 bp , somewhat less than 147 bp expected from the structural point of view .the total number of nucleosomes placed on chromosome iii is 1045 ; every basepair covered by either well - positioned or fuzzy nucleosome was assigned an occupancy of 1.0 , resulting in the overall average occupancy of 0.488 ( that is , almost half of the genome is covered by the non - overlapping nucleosomes ) .the resulting occupancy profile and the corresponding nucleosome positions were used to compile all yuan _ et al ._ results in this work .albert _ et al ._ have used high throughput dna sequencing to identify genomic positions of h2a.z nucleosomes in _ s.cerevisiae _ with a median 4 bp resolution . we converted their nucleosome positioning data into the occupancy profile by using the `` fine - grain '' nucleosome center coordinates ( reported for both strands in albert _ et al . _supplementary materials ) to place 147 bp nucleosomes on the genome . since many `` fine - grain '' nucleosomes overlap, we end up with longer nucleosome - occupied regions containing multiple hits and surrounded by regions with zero nucleosome coverage . 
to each base pair in these occupied regionswe then assign an integer equal to the number of nucleosomes that cover this particular base pair , thus producing an unnormalized nucleosome occupancy profile .finally , the occupancy profile is normalized by rescaling the integer counts separately in each occupied region so that the highest occupancy is assigned a value of 1.0 .the total number of overlapping h2a.z nucleosome reads is 34796 ; with the conventions described above the average occupancy is 0.118 over all chromosomes . the resulting occupancy profile and the corresponding nucleosome positions were used to produce albert _ et al ._ results for supplementary figs . 22 and 27b .segal _ et al ._ have recently developed a nucleosome positioning model based on the alignment of 199 sequences occupied by nucleosomes _ in vivo _ in the yeast genome . the main assumption of the model is that it is possible to infer nucleosome formation energies using dinucleotide distributions observed in aligned nucleosomal sequences . to this end, experimentally determined mononucleosome sequences of length bp and their reverse complements were aligned around their centers as described in segal __ the rules for choosing the center of the alignment should be clear from the following example : .... x aagcgttaaacgc seq1 gcgtttaacgctt seq1_rc atgcgacg seq2 cgtcgcat seq2_rc .... here x labels the center of the alignment , which is chosen in an obvious way for sequences with an odd number of nucleotides ( seq1 ) , and by making the number of nucleotides to the left of the central nucleotide one less than the number of nucleotides to the right of the central nucleotide for sequences with an even number of nucleotides ( seq2 ) .a local average over neighboring dinucleotide positions was carried out by shifting the original alignment of by 1 bp to the right as well as 1 bp to the left , and adding the shifted sequences to the original alignment .thus the final alignment has sequences .next , the conditional probability of a nucleotide of type at position ( ) given a nucleotide of type at position ( ) is computed at every position within 67 bp of the center of the alignment : where , is the number of times the dinucleotide occurs in the augmented alignment , and marks the alignment center .keeping conditional probabilities only for 135 central positions disregards dinucleotide counts from the outer edges of the alignment to which fewer sequences contribute .the final model is 157 bp long because 135 conditional probabilities are flanked on each side by 11 background nucleotide frequencies , to approximate steric replusion between nucleosomes : where are the background frequencies from the yeast genome .finally , the model assigns a negative log score ( interpreted as the energy of nucleosome formation ) to a 157 bp long nucleotide sequence : where denotes the nucleotide type at position in the sequence , and are either the genomic background frequencies defined above or uniform background frequencies : ( _ i.e. _ ) .note that by construction the flanking nucleotides do not contribute to equation ( [ logscore ] ) ( apart from a possible difference in background frequencies ) , and thus play the role of embedding linker regions . 
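a simplified sketch of the alignment - based scoring described above is given below . it estimates position - specific dinucleotide conditional probabilities from an already centre - aligned set of sequences ( with the 1 bp shift augmentation ) and evaluates the negative log score against uniform background frequencies ; the reverse complements , the restriction to the 135 central positions , the flanking linker positions and the exact pseudocount handling are omitted or replaced by assumptions here .
....
import numpy as np
from collections import defaultdict

BASES = "ACGT"
BG = {b: 0.25 for b in BASES}              # placeholder background frequencies

def conditional_probs(aligned):
    """
    aligned: list of equal-length, centre-aligned sequences.
    returns probs[i][a][b] ~ P(base b at position i+1 | base a at position i),
    with 1 bp shifted copies of each sequence added to smooth the counts.
    """
    aug = []
    for s in aligned:
        aug += [s, "N" + s[:-1], s[1:] + "N"]      # shift right and left by one basepair
    L = len(aligned[0])
    probs = []
    for i in range(L - 1):
        counts = defaultdict(lambda: defaultdict(float))
        for s in aug:
            a, b = s[i], s[i + 1]
            if a in BASES and b in BASES:
                counts[a][b] += 1.0
        table = {}
        for a in BASES:
            tot = sum(counts[a][b] + 0.5 for b in BASES)   # crude pseudocount (assumption)
            table[a] = {b: (counts[a][b] + 0.5) / tot for b in BASES}
        probs.append(table)
    return probs

def log_score(seq, probs):
    """negative log score of a sequence under the position-specific dinucleotide model."""
    s = 0.0
    for i in range(len(probs)):
        a, b = seq[i], seq[i + 1]
        s += -np.log(probs[i][a][b] / BG[b])
    return s

seqs = ["ACGTACGTAC", "TTGCACGTGG"]   # toy stand-ins for centre-aligned nucleosomal reads
p = conditional_probs(seqs)
print(log_score("ACGTACGTAC", p))
....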
finally , to reproduce results from segal _ et al ._ the energies of both strands are combined using an empirical formula which implicitly sets the temperature , resulting in 13978 stable , non - overlapping nucleosomes ( with probability ) that cover 18.2% of the yeast genome , and the average occupancy of 0.844 over all chromosomes : where is a 157 bp sequence starting at position in the genome , and is its reverse complement .note that genomic background frequencies are used for one strand while uniform background frequencies are used for the other .we refer to the log scores computed in this way , and the corresponding nucleosome probabilities and occupancies as alignment model i. alternatively , in alignment model ii we use genomic background frequencies for both dna strands and rescale the temperature explicitly to match dnabend and alignment model i : + \langle l^{old}(s_k ) \rangle,\ ] ] where and signifies an average over the chromosome .note that while the log scores are rescaled in equation ( [ logscore : rescaled ] ) , their mean stays the same ( and close to zero because the background scores have been subtracted , cf .equation ( [ logscore ] ) ) .we treat the log scores from equations ( [ logscore : combined ] ) and ( [ logscore : rescaled ] ) as nucleosome energies , and use them to compute nucleosome probabilities and occupancies as described in section [ dynapro ] .we find that alignment model ii predictions do not strongly depend on the exact value of the scaling coefficient .alignment model ii results in the average nucleosome occupancy of 0.810 over all chromosomes ; 26670 stable , non - overlapping nucleosomes ( with ) are placed , covering 34.7% of the yeast genome .the overall correlation between alignment models i and ii is 0.66 for log scores and 0.60 for occupancies .we used a standard competitive nucleosome reconstitution procedure to measure the relative affinity of different dna sequences for binding to histones in nucleosomes . in this method , differing tracer dna molecules compete with an excess of unlabeled competitor dna for binding to a limiting pool of histone octamer .the competition is established in elevated [ nacl ] , such that histone - dna interaction affinities are suppressed and the system equilibrates freely .the [ nacl ] is then slowly reduced by dialysis , allowing nucleosomes to form ; further reduction in [ nacl ] to physiological concentrations or below then `` freezes - in '' the resulting equilibrium , allowing subsequent analysis , by native gel electrophoresis , of the partitioning of each tracer between free dna and nucleosomes .the distribution of a given tracer between free dna and nucleosomes defines an equilibrium constant and a corresponding free energy , valid for that competitive environment .comparison of the results for a given pair of tracer dnas ( in the identical competitive environment ) eliminates the dependence on the details of the competitive environment , yielding the free energy difference ( ) of histone - interaction between the two tracer dnas . to allow for comparison with other work , we include additional tracer dnas as reference molecules : a derivative of the 5s rdna natural nucleosome positioning sequence and the 147 bp nucleosome - wrapped region of the selected high affinity non - natural dna sequence 601 . 
the 5s and 601 reference sequences were prepared by pcr using plasmid clones as template .146 and 147 bp long dnas analyzed in x - ray crystallographic studies of nucleosomes ( pdb entries 1aoi and 1kx5 , respectively ) were prepared as described using clones supplied by professors k. luger and t.j .richmond , respectively .new 147 bp long dna sequences designed in the present study were prepared in a two - step pcr - based procedure using chemically synthesized oligonucleotide primers .all synthetic oligonucleotides were gel purified prior to use .the central 71 bp were prepared by annealing the two strands .the resulting duplex was gel purified and used as template in a second stage pcr reaction to extend the length on each end creating the final desired 147 bp long dna .the resulting dna was again purified by gel electrophoresis .dna sequences to be analyzed were 5 end - labeled with , and added in tracer quantities to competitive nucleosome reconstitution reactions .reconstitution reactions were carried out as described except that each reaction included 10 g purified histone octamer and 30 g unlabeled competitor dna ( from chicken erythrocyte nucleosome core particles ) in the 50 microdialysis button .* dna templates . *plasmids pgem-3z/601 , pgem-3z/603 and pgem-3z/605 containing nucleosome positioning sequences 601 , 603 and 605 , respectively , were described previously . to obtain templates for hydroxyl radical footprinting experiments the desired -bp dna fragments were pcr - amplified using various pairs of primers and taq dna polymerase ( new england biolabs ) .the sequences of the primers will be provided on request . to selectively label either upper or lower dna strands one of the primers in each pcr reaction was 5-end radioactively labeled with polynucleotide kinase and - atp . the single - end - labeled dna templates were gel - purified and single nucleosomes were assembled on the templates by dialysis from 2 m nacl . nucleosome positioning was unique on at least 95% of the templates .* hydroxyl radical footprinting . *hydroxyl radicals introduce non - sequence - specific single - nucleotide gaps in dna , unless dna is protected by dna - bound proteins . hydroxyl radical footprinting was conducted using single - end - labeled histone - free dna or nucleosomal templates as previously described . in short , 20 - 100 ng of single - end - labeled dna or nucleosomal templates were incubated in 10 mm hepes buffer ( ph 8.0 ) in the presence of hydroxyl radical - generating reagents present at the following final concentrations ( 2 mm fe(ii)-edta , 0.6% h2o2 , 20 mm na - ascorbate ) for 2 min at c. reaction was stopped by adding thiourea to 10 mm final concentration .dna was extracted with phe : chl ( 1:1 ) , precipitated with ethanol , dissolved in a loading buffer and analyzed by 8% denaturing page .* data analysis and sequence alignment . 
* the denaturing gels were dried on whatman 3 mm paper , exposed to a cyclone screen , scanned using a cyclone and quantified using optiquant software ( perklin elmer ) .positions of nucleotides that are sensitive to or protected from hydroxyl radicals were identified by comparison with the sequence - specific dna markers ( supplementary fig .the dyad was localized by comparison of the obtained footprints with the footprints of the nucleosome assembled on human -satellite dna .the latter footprints were modeled based on the available 2.8 resolution x - ray nucleosome structure .* table 1 .* predicted ( ) and measured ( ) free energies of computationally designed sequences .experimental free energies are shown relative to the reference sequence from the _ l.variegatus _5s rrna gene .histone binding is dominated by the contribution from the tetramer with the 71 bp binding site . the best binder is created by using simulated annealing to introduce mutations and thus minimize the energy of a 71 bp dna molecule . b71s1 and b71s2 have different sequences flanking the 71 bp designed site ( whose contribution dominates the total free energy ) . 601s1 and 601s2 consist of the 71 bp site from the center of the 601 sequence and flanking sequences from b71s1 and b71s2 , respectively .w147s is a 147 bp sequence whose free energy ( with contributions from multiple binding sites ) was maximized by simulated annealing . x146 and x147 are 146 bp and 147 bp dna sequences from nucleosome crystal structures 1aoi and 1kx5. + twist & 0.130 & 0.041 & .046 & .159 & -0.305 & 0.739 + roll & 0.041 & 0.069 & .026 & .053 & -0.213 & -0.135 + tilt & .046 & .026 & 0.406 & -0.408 & .073 & .403 + shift & .159 & .053 & -0.408 & 11.948 & .162 & .511 + slide & -0.305 & -0.213 & .073 & .162 & 8.241 & -5.353 + rise & 0.739 & -0.135 & .403 & .511 & -5.353 & 35.715 + + twist & 0.179 & 0.053 & .015 & .083 & -0.118 & 1.244 + roll & 0.053 & 0.085 & .003 & .048 & 0.049 & 0.089 + tilt & .015 & .003 & 0.323 & -0.312 & .298 & .219 + shift & .083 & .048 & -0.312 & 4.912 & .672 & .872 + slide & -0.118 & 0.049 & .298 & .672 & 10.089 & -3.277 + rise & 1.244 & 0.089 & .219 & .872 & -3.277 & 35.709 + + twist & 0.113 & 0.038 & .001 & .168 & 0.040 & 0.850 + roll & 0.038 & 0.077 & .022 & .006 & -0.023 & 0.067 + tilt & .001 & .022 & 0.280 & -0.365 & .252 & .995 + shift & .168 & .006 & -0.365 & 4.954 & .527 & .093 + slide & 0.040 & -0.023 & .252 & .527 & 4.516 & -2.966 + rise & 0.850 & 0.067 & .995 & .093 & -2.966 & 29.330 + + twist & 0.098 & 0.027 & .034 & .061 & -0.254 & 0.799 + roll & 0.027 & 0.059 & .046 & .165 & -0.016 & 0.202 + tilt & .034 & .046 & 0.393 & -0.965 & .174 & .593 + shift & .061 & .165 & -0.965 & 5.740 & .117 & .963 + slide & -0.254 & -0.016 & .174 & .117 & 2.772 & -4.449 + rise & 0.799 & 0.202 & .593 & .963 & -4.449 & 23.870 + + twist & 0.114 & 0.045 & .008 & .139 & -0.328 & 1.587 + roll & 0.045 & 0.075 & .014 & .005 & -0.083 & 0.797 + tilt & .008 & .014 & 0.218 & -0.537 & .215 & .066 + shift & .139 & .005 & -0.537 & 3.917 & .085 & .401 + slide & -0.328 & -0.083 & .215 & .085 & 3.795 & -7.972 + rise & 1.587 & 0.797 & .066 & .401 & -7.972 & 41.678 + + twist & 0.133 & 0.055 & .027 & .073 & -0.350 & 1.301 + roll & 0.055 & 0.097 & .042 & .099 & -0.158 & 0.336 + tilt & .027 & .042 & 0.408 & -1.012 & .016 & .139 + shift & .073 & .099 & -1.012 & 10.434 & .814 & .105 + slide & -0.350 & -0.158 & .016 & .814 & 6.278 & -7.676 + rise & 1.301 & 0.336 & .139 & .105 & -7.676 & 41.988 + + twist & 0.101 & 0.020 & 0.000 & 0.000 & -0.278 & 0.806 
+ roll & 0.020 & 0.040 & 0.000 & 0.000 & 0.001 & -0.046 + tilt & 0.000 & 0.000 & 0.255 & -0.510 & 0.000 & 0.000 + shift & 0.000 & 0.000 & -0.510 & 3.104 & 0.000 & 0.000 + slide & -0.278 & 0.001 & 0.000 & 0.000 & 3.991 & -2.936 + rise & 0.806 & -0.046 & 0.000 & 0.000 & -2.936 & 20.174 + + twist & 0.069 & 0.006 & 0.000 & 0.000 & -0.217 & 0.646 + roll & 0.006 & 0.057 & 0.000 & 0.000 & 0.094 & 0.168 + tilt & 0.000 & 0.000 & 0.256 & -0.542 & 0.000 & 0.000 + shift & 0.000 & 0.000 & -0.542 & 3.473 & 0.000 & 0.000 + slide & -0.217 & 0.094 & 0.000 & 0.000 & 4.030 & 1.322 + rise & 0.646 & 0.168 & 0.000 & 0.000 & 1.322 & 34.392 + + twist & 0.189 & 0.068 & 0.000 & 0.000 & 0.111 & 1.195 + roll & 0.068 & 0.124 & 0.000 & 0.000 & 0.438 & -0.397 + tilt & 0.000 & 0.000 & 0.641 & -0.043 & 0.000 & 0.000 + shift & 0.000 & 0.000 & -0.043 & 6.670 & 0.000 & 0.000 + slide & 0.111 & 0.438 & 0.000 & 0.000 & 15.942 & -2.611 + rise & 1.195 & -0.397 & 0.000 & 0.000 & -2.611 & 47.789 + and are shown in blue , and the mst coordinate frameis shown in green .for illustrative purposes , only rise and twist are set to non - zero values .the origin of the mst frame is at the midpoint of the line connecting the origins of two base pair frames ( which are separated by along the z - axis ) ; the mst frame is rotated through with respect to the frame . ] ( ) .black curve in the slide panel is a dnabend prediction based on a modified elastic potential in which the mean slide for the ca dinucleotide was increased from 0.18 ( inferred from a set of non - nucleosome protein - dna complexes and used in this work ) to 0.91 ( inferred from a limited set of available nucleosome structures ) .note that the ca dinucleotides occur at positions ( 3,10,16,26,50,53,60,66,77,85,102,109,115,133 ) in the 1kx5 crystal structure . whereas the slide correlation coefficient between the native and minimized structures remained essentially the same at 0.533 , the absolute magnitude of predicted slide peaks is in better correspondence with the observed values .thus an elastic potential trained on non - nucleosome complexes may not reproduce all structural aspects of highly bent nucleosomal dna . ] . ( b ) the 183 bp sequence from the pgub plasmid [ bps 11,31 ] . ( c ) the 215 bp fragment from the sequence of the chicken gene [ bp 52 ] . ( d , e , f ) synthetic high - affinity sequences 601 [ bp 61 ] , 603 [ bp 81 ] , and 605 [ bp 59 ] .experimentally known nucleosome starting positions are listed in a one - based coordinate system with consecutively numbered base pairs .nucleosomes on sequences 601 , 603 , and 605 were mapped by hydroxyl radical footprinting ( supplementary figs .13 and 14 ) .all dna sequences used in this calculation are available on the nucleosome explorer website : _http://nucleosome.rockefeller.edu_. ] ) predicted using dnabend ( a ) and alignment model i ( b ) .red : null model in which the same number of stable nucleosomes is randomly positioned on the genome without overlap .solid red lines are mean values for 100 random placements , dashed red lines are one standard deviation away .alignment model ii exhibits similar oscillations ( data not shown ) . ]( with pwm constructed using a combination of a transfac mcm1 pwm and a structure - based 1mnm pwm ) .note that the experimentally mapped nucleosomes are phased off the region occupied by ; however , in the dnabend simulation nucleosomes move to the left instead , perhaps because we overestimate the stability of the nucleosome starting at bp 322200 . 
]( same pwm as in supplementary fig .note that in contrast to the _ bar1 _ locus ( supplementary fig .26d ) , nucleosomes are shifted in the right direction when binds , phasing off the tf - occupied region . ]locus , with experimental nucleosome positions mapped by weiss _ dark blue : origin recognition complex ( pwm based on the consensus sequence from ref . ) , dark orange : abf1 , pink : rap1 .note that two prominent gaps in the otherwise regularly spaced array of experimentally mapped nucleosomes coincide with the regions of high tf occupancy . ]locus , with experimental nucleosome positions mapped by ravindra _ dark blue : origin recognition complex ( pwm based on the consensus sequence from ref . ) , dark orange : abf1 , pink : rap1 .similar to the locus ( supplementary fig .26f ) , two prominent gaps in the otherwise regularly spaced array of experimentally mapped nucleosomes coincide with the regions of high tf occupancy . ]( same pwm as in supplementary fig .similar to the and loci ( supplementary figs .26f , g ) , two prominent gaps in the otherwise regularly spaced array of experimentally mapped nucleosomes coincide with the regions of high tf occupancy . ]. distances between neighboring nucleosomes are sampled from a gaussian distribution with the mean chosen to reproduce the average occupancy predicted by the actual model , and the standard deviation set to 0.5 of the mean ( negative linker lengths are not allowed ) .thus in the null models nucleosomes are positioned non - specifically but on average form a regular array .the only parameter borrowed from the actual model is the predicted average occupancy .the center - to - center nucleosome distance is determined by locating in the periodic occupancy profile the nearest region with occupancy of at least 0.35 over the nucleosomal length .error bars that correspond to 100 random realizations of the null models are too small to be shown .note that in both ( a ) and ( b ) red and cyan curves nearly overlap ; adding tf and tbp binding to the alignment models does not improve their performance ( data not shown ) . ] bouchiat , c. , wang , m.d . ,allemand , j .- f . , strick , t. , block , s.m . andcroquette , v. ( 1999 ) estimating the persistence length of a worm - like chain molecule from force - extension measurements ._ biophys.j ._ , * 76 * , 409413 .macisaac , k.d . , wang , t. , gordon , d.b . , gifford , d.k . , stormo , g.d . andfraenkel , e. ( 2006 ) an improved map of conserved regulatory sites for _ saccharomyces cerevisiae_. _ bmc bioinformatics _ , * 7 * , 113 .albert , i. , mavrich , t.n . ,tomsho , l.p . , qi , j. , zanton , s.j . , schuster , s.c . andpugh , b.f .( 2007 ) translational and rotational settings of h2a.z nucleosomes across the _ saccharomyces cerevisiae _ genome ._ nature _ , * 446 * , 572576 .dyer , p.n . ,edayathumangalam , r.s . , white , c.l . , bao , y. , chakravarthy , s. , muthurajan , u.m . and luger , k. ( 2004 ) reconstitution of nucleosome core particles from recombinant histones and dna ._ methods enzymol ._ , * 375 * , 23-44 .thastrom , a. , lowary , p.t ., widlund , h.r . ,cao , h. , kubista , m. and widom , j. ( 1999 ) sequence motifs and free energies of selected natural and non - natural nucleosome positioning dna sequences ._ j.mol.biol._ , * 288 * , 213229 .widlund , h.r . ,cao , h. , simonsson , s. , magnusson , e. , simonsson , t. , nielsen , p.e . ,kahn , j.d . , crothers , d.m . and kubista , m. 
( 1998 ) identification and characterization of genomic nucleosome - positioning sequences ._ j.mol.biol._ , * 267 * , 807817 .kassabov , s.r . ,henry , n.m . ,zofall , m. , tsukiyama , t. and bartholomew , b. ( 2002 ) high - resolution mapping of changes in histone - dna contacts of nucleosomes remodeled by isw2 . _ mol.cell.biol._ , * 22 * , 75247534 .moreira , j.m . andholmberg , s. ( 1998 ) nucleosome structure of the yeast cha1 promoter : analysis of activation - dependent chromatin remodeling of an rna - polymerase - ii - transcribed gene in tbp and rna pol ii mutants defective in vivo in response to acidic activators . _embo j. _ , * 17 * , 60286038. almer , a. , rudolph , h. , hinnen , a. and hrz , w. ( 1986 ) removal of positioned nucleosomes from the yeast pho5 promoter upon pho5 induction releases additional upstream activating dna elements . _ embo j. _ , * 5 * , 26892696 .venter , u. , svaren , j. , schmitz , j. , schmid , a. and hrz , w. ( 1994 ) a nucleosome precludes binding of the transcription factor pho4 _ in vivo _ to a critical target site in the pho5 promoter ._ embo j. _ , * 13 * , 48484852 .shimizu , m. , roth , s.y ., szent - gyorgyi , c. and simpson , r.t .( 1991 ) nucleosomes are positioned with base pair precision adjacent to the operator in _saccharomyces cerevisiae_. _ embo j. _ , * 10 * , 30333041 .ravindra , a. , weiss , k. and simpson , r.t .( 1999 ) high - resolution structural analysis of chromatin at specific loci : _ saccharomyces cerevisiae _ silent mating - type locus ._ mol.cell.biol._ , * 19 * , 79447950 .sekinger , e.a ., moqtaderi , z. and struhl , k. ( 2005 ) intrinsic histone - dna interactions and low nucleosome density are important for preferential accessibility of promoter regions in yeast ._ mol.cell_ , * 18 * , 735748 .
in eukaryotic genomes , nucleosomes function to compact dna and to regulate access to it both by simple physical occlusion and by providing the substrate for numerous covalent epigenetic tags . while nucleosome positions _ in vitro _ are determined by sequence alone , _ in vivo _ competition with other dna - binding factors and action of chromatin remodeling enzymes play a role that needs to be quantified . we developed a biophysical model for the sequence dependence of dna bending energies , and validated it against a collection of _ in vitro _ free energies of nucleosome formation and a nucleosome crystal structure ; we also successfully designed both strong and poor histone binding sequences _ ab initio_. for _ in vivo _ data from _ s.cerevisiae_ , the strongest positioning signal came from the competition with other factors . based on sequence alone , our model predicts that functional transcription factor binding sites have a tendency to be covered by nucleosomes , but are uncovered _ in vivo _ because functional sites cluster within a single nucleosome footprint , making transcription factors bind cooperatively . similarly a weak enhancement of nucleosome binding in the tata region for naked dna becomes a strong depletion when the tata - binding protein is included , in quantitative agreement with experiment . predictions at specific loci were also greatly enhanced by including competing factors . our physically grounded model distinguishes multiple ways in which genomic sequence can influence nucleosome positions and thus provides an alternative explanation for several important experimental findings . = 1
modern high - speed optical communication systems require high - performing implementations that support throughputs of 100 gbit / s or multiples thereof , that have low power consumption , that realize close to the theoretical limits at a target of , and that are preferably adapted to the peculiarities of the optical channel . especially with the advent of coherent transmission schemes and the utilization of high resolution , soft - decision decoding has become an attractive means of reliably increasing the transmission reach of lightwave systems . currently , there are two popular classes of codes for soft - decision decoding that are attractive for implementation in optical receivers at decoding throughputs of 100 gbit / s and above : codes and .the latter can be decoded with a highly parallelizable , rapidly converging soft - decision decoding algorithm , usually have a large minimum distance , but require large block lengths of more than to realize codes with small overheads , leading to decoding latencies that can be detrimental in certain applications . with overheads of more than 15% to 20%, these codes no longer perform well , at least under hard - decision decoding .codes are understood and are suited to realize codes with lengths of a few and overheads above % .recently , the class of codes has gained widespread interest due to the fact that these codes are asymptotically capacity - achieving , have appealing encoding and decoding complexity and show outstanding practical decoding performance .codes are an extension of existing coding schemes by a superimposed convolutional structure .the technique of spatial coupling can be applied to most existing codes , the most popular are however codes and , which have found use in optical communications ( _ staircase codes _ ) and show outstanding performance , operating within of the capacity of the hard - decision awgn channel . in this paper , we discuss the use of codes in optical communications and especially focus on - codes .we summarize some recent advances and design guidelines for - codes and show by means of an -based decoding platform that large gains at low bit error rates can be realized with relatively small codes when compared with state - of - the - art ldpc codes .the aim of this paper is to show that - codes are mature channel codes that are viable candidates for future optical communication systems with large .furthermore , their universality makes them attractive for flexible transceivers with adaptive modulation .an code is defined by the null space of a _ sparse _ parity - check matrix of size where the code contains all binary code words of length such that , i.e. , .each row of is considered to be a _ check node _ , while each column of is usually termed _variable node_. we say that the _ variable degree _ ( or _ variable node degree _ ) of a code is _ regular _ with degree if the number of `` 1 ' 's in each column is constant and amounts to .we say that the _ check degree _ ( or _ check node degree _ ) of a code is _ regular _ with degree if the number of `` 1 ' 's in each row of is constant and amounts to .the class of _ irregular _ codes has the property that the number of `` 1 ' 's in each column and/or row is not constant .the _ degree profile _ of an irregular code indicates the fraction of columns / rows of a certain degree .more precisely , represents the fraction of columns with `` 1 ' 's ( e.g. 
, if , half the columns of have three `` 1 ' 's ) .note that has to hold .similarly , represents the fraction of rows ( i.e. , checks ) with `` 1 ' 's .codes form an important class of codes in optical communications .codes with soft - decision decoding are currently being deployed in systems operating at 100gbit / s and , e.g. , utilizing 16 iterations .modern high - performance systems in optical communications are sometimes constructed using a soft - decision inner code which reduces the to a level of to and a hard - decision algebraic outer cleanup code which pushes the system to levels below .the outer cleanup code is used to combat the _ error floor _ that is present in most codes . note that the implementation of a coding system with an outer cleanup code requires a thorough understanding of the code and a properly designed interleaver between the and the outer code .recently , there has been some interest to avoid the use of an outer cleanup code and to use only soft - decision codes with very low error floors , leading to coding schemes with less rate loss and less latency . with increasing computational resources , it is now also feasible to evaluate very low target of codes and optimize the codes to have very low error floors below the system s target .although the internal data flow of an ldpc decoder may be larger by more than an order of magnitude than that of a btc , several techniques can be used to lower the data - flow , e.g. , the use of layered decoding and min - sum decoding , requiring only two -ary , binary and one -ary message per check node. - codes were introduced more than a decade ago but their outstanding properties have only been fully realized recently , when lentmaier et al .noticed that the estimated decoding performance of a certain class of terminated protograph - based - codes with a simple message passing decoder is close to the performance of the underlying code ensemble under decoding as grows , which was subsequently proven rigorously in , if certain particular conditions on the code structure are fulfilled .a left - terminated - code is basically an code with a structured ,infinitely extended parity - check matrix \bm{h}_1(1 ) & \bm{h}_0(1 ) & & & \\[-0.3em ] \vdots & \bm{h}_{1}(2 ) & & & \\[-0.3em ] \bm{h}_\mu(\mu ) & \vdots & \ddots & & \\[-0.7em ] & \bm{h}_{\mu}(\mu+1 ) & \ddots & & \\[-0.5em ] & & & \bm{h}_0(t ) & \\[-0.5em ] & & \ddots & \bm{h}_1(t+1 ) & \ddots \\[-0.3em ] & & & \vdots & \ddots \\[-0.3em ] & & & \bm{h}_\mu(t+\mu ) & \\[-0.5em ] & & & & \ddots \end{array } \right)}\label{eq : timeinvarmatrix}\end{aligned}\ ] ] with being sparse binary parity - check matrices with and denoting the _ syndrome former memory _ of the code .every code word of the code has to fulfill .one advantage of - codes is that the infinitely long code words can conveniently be decoded with acceptable latency using a simple windowed decoder . in practice , in order to construct codes of finite length , e.g. , to adhere to certain framing structures in the communication system at hand , the infinitely extended matrix is _ terminated _ resulting in finite length code .one example of termination is _ zero - termination _ , where the matrix is cut off after parts , resulting in a code of length and a parity - check matrix } ] .note that this termination leads to a _ rate loss _, which can however be kept small if is chosen large enough . 
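a minimal sketch of how such a zero - terminated parity - check matrix can be assembled from the component matrices is given below ; the toy component matrices , the syndrome former memory of one and the coupling lengths are placeholders , and practical constructions would use large ( circulant ) sub - matrices instead .
....
import numpy as np

def terminate(H_comp, L):
    """
    build the parity-check matrix of a zero-terminated spatially coupled code.
    H_comp: list [H_0, H_1, ..., H_mu] of binary matrices of identical size (m x n).
    L: number of coupled positions before termination.
    """
    mu = len(H_comp) - 1
    m, n = H_comp[0].shape
    H = np.zeros(((L + mu) * m, L * n), dtype=int)
    for t in range(L):                      # column block t
        for k in range(mu + 1):             # H_k sits k block-rows below the diagonal
            H[(t + k) * m:(t + k + 1) * m, t * n:(t + 1) * n] = H_comp[k]
    return H

# toy component matrices with memory mu = 1 (a (2,4)-like example, placeholders only)
H0 = np.array([[1, 0, 1, 0],
               [0, 1, 0, 1]])
H1 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0]])
uncoupled = 1 - H0.shape[0] / H0.shape[1]
for L in (10, 50, 200):
    H = terminate([H0, H1], L)
    rate = 1.0 - H.shape[0] / H.shape[1]    # design rate (actual rate >= this if rows are dependent)
    print(L, H.shape, f"design rate {rate:.3f}  (uncoupled rate {uncoupled:.3f})")
....
the printed design rates illustrate the termination rate loss shrinking as the number of coupled positions grows .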
for a discussion of termination schemes ,we refer the interested reader to .codes are now emerging in various applications .two examples of product codes are the staircase code and the braided bch codes , for hard - decision decoding in optical communications . sc - ldpc codes may also be viable for pragmatic coded modulation schemes . in order to simplify the design of hardware , we first drop the time dependency and only consider the time - independent ( left - terminated ) parity - check matrix with , , which is attractive for implementation as the sub - matrices can be easily reused in the encoder and decoder hardware . in this time - invariant construction with , we can give the following upper bound on the minimum distance of the code ( * ? ? ?* eq . ( 7 ) ) to construct codes with large enough minimum distances , we maximize the size of the sub - matrices , i.e. , , which has a quadratic influence on .in order to keep the complexity of the so - constructed code small , we restrict ourselves to small values of the syndrome former memory with either or .we call such codes _ weakly coupled _ codes .in the past , irregular block codes have been used to design codes that perform very well for low snrs , but these schemes do sometimes suffer from relatively high error floors requiring the use of an outer code that leads to inherent rate losses . in the case of - codes , we can use the irregularity to control the propagation speed of the decoding wave of a windowed decoder , i.e. , we can minimize the number of iterations that are necessary until a windowed decoder can advance by one step . to simplify the code construction and to illustrate the concept, we only use the most simple form of irregularity and construct slightly irregular - codes with and additionally with either degree-4 or degree-6 variable nodes .we avoid degree-2 variables nodes due to their potentially detrimental effect on the error floor .also , in contrast to block ldpc codes , degree-2 nodes are _ not _ of the same importance for - codes .we vary the fraction of degree-4 or degree-6 nodes between 0 and 1 and select the check nodes such that a rate ( 25% overhead ) code is constructed .we perform full density evolution using the irregular version of kudekar s ensemble for random spatial coupling with using an awgn channel and measure the required values to advance the decoding wave by steps .the density evolution results are shown in fig .[ fig : speed ] for varying and .we can see that using additionally degree-4 ( besides degree-3 ) variables does not lead to noteworthy gains , which is why we focus on additional degree-6 nodes in this paper .the convergence speed improves by selecting a proper value of leading to a smaller required .the selection depends however on . 
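the rate bookkeeping behind this slightly irregular construction can be sketched as follows ; the variable degrees and the target rate follow the construction described above , while the rounding of non - integer average check degrees and the actual check - node selection are left open .
....
import numpy as np

def degree_profile(nu6, rate=0.8, dv_low=3, dv_high=6):
    """
    node-perspective variable degrees: a fraction (1 - nu6) of degree dv_low nodes
    and nu6 of degree dv_high nodes. returns the average variable and check degrees
    needed for the design rate, and the edge-perspective variable degree fractions.
    """
    dv_avg = (1 - nu6) * dv_low + nu6 * dv_high
    dc_avg = dv_avg / (1 - rate)              # average check degree for the design rate
    # edge-perspective fractions lambda_i = i * nu_i / dv_avg, as used in density evolution
    lam = {dv_low: dv_low * (1 - nu6) / dv_avg, dv_high: dv_high * nu6 / dv_avg}
    return dv_avg, dc_avg, lam

for nu6 in (0.0, 0.2, 0.5):
    dv, dc, lam = degree_profile(nu6)
    print(f"nu6={nu6:.1f}: avg dv={dv:.2f}, avg dc={dc:.2f}, lambda={lam}")
....
for 20% degree-6 nodes and rate 4/5 this yields an average variable degree of 3.6 and an integer check degree of 18 ; other fractions generally require a mixture of check degrees .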
as we intend to construct low complexity decoders with , we can see that in this case , the optimum is achieved with ( 20% of degree-6 variable nodes , 80% of degree-3 variable nodes ) .we can see that by proper selection of , we can obtain codes that have an improved decoding convergence , however , we also see that depending on the selection , a worse convergence behavior than for the regular case can result .we also observe that if we want a code that operates extremely close to capacity , the optimum value of is larger ( around ) than for the more practical case , where the optimum lies at .note that although we use kudekar s ensemble for density evolution , the codes we construct in the next section are generated from protographs , similar to those in , as these exhibit better finite length performance . in order to verify the performance of the rapidly converging weakly coupled - codes ,we use a platform , whose high - level diagram is illustrated in fig .[ fig : fpga_schematic ] .this platform is similar to other platforms reported in the literature and consists of three parts : a gaussian noise generator , an decoder and an error detecting circuit .the gaussian noise generator generates gaussian distributed , stemming from bpsk transmission over an awgn channel , using uniform random number generators and the box - muller transform .these are then fed to the decoder after quantization to 15 levels .the decoder is based on the layered decoding algorithm and uses a scaled - minsum check node computation rule with constant scaling factor .the windowed decoder that is implemented can be sub - divided into three steps . in the first step, a new sub - block of quantized is received from the random number generator and put into the vacant position of the decoder s memory .decoding takes place by considering copies of .the windowed decoded considers an equivalent matrix of size which it processes before shifting in new values . in order to maximize the hardware utilization , within a window , we use two parallel decoders that operate on non - overlapping portions of that matrix . in a first step ,the first decoding engine operates on the first check nodes of the matrix under consideration while the second engine operates in parallel on the check nodes starting at position . in general , the first engine processes the check nodes at position $ ] while the second engine processes the check node .note that only a single iteration is carried out to guarantee the required throughput , corresponding effectively to iterations per bit ( due to the use of two engines ) . ' '' '' the output of the decoder is connected to the evaluation unit , which counts the bit errors and reports the error positions .we use virtex-7 allowing for a throughput of several gbit / s to evaluate the ber performance of several coding schemes of rate , i.e. , of 25% coding overhead .we select this particular rate due to its importance in today s systems .current and future / s ( with qpsk ) or / s ( with 16-qam ) systems are often operated in channels with an exploitable bandwidth of roughly due to with non - flat frequency characteristic . with almost rectangular pulse shapes ( root - raised cosine with small roll - off ) and today s generation of , symbol rates of can be realized . with dual - polarization qpsk transmission ,gross bit rates of 128gbit / s can be realized .assuming signaling and protocol overheads of 3gbit / s , this leads to a code that adds 25gbit / s parity overhead ( i.e. 
, of rate ) .we compare three codes : * as reference , we consider a regular block qc - ldpc code ( marker ) with variable node degree and check node degree .the code is a quasi - cyclic code of girth 10 and block length , constructed using cyclically shifted identity matrices of size and decoded with row - layered iterations .* - code a ( ) is the rapidly converging irregular code with syndrome former memory , and and check node degree .the sub - block size is ( ) .* - code b ( ) is a regular code with and syndrome former memory .the size of the sub - matrices is identical to those of - code a , however , we select . both codes are constructed from cyclic permutation matrices of size and are terminated after subblocks .the simulation results are shown in fig . [fig : fpga_simres ] .the block code , which has a matrix that has been optimized for low error floors , is outperformed by both - codes .- code a offers a coding gain of around 0.3db at a ber of compared to the conventional block ldpc code , but an error floor starts to manifest .this error floor is not due to any trapping sets , but due to a few uncorrected bits after windowed decoding , which can be recovered with a few - error correcting outer code .code b has a ber curve that starts to decay at worse channels , but the ber curves cross at .for the next simulated point , we did not observe any bit errors , and hence we conjecture a lower error error floor than for code a. note that no special measures have been taken to combat an error floor : only a plain scaled min - sum decoder has been used . with the block code, post - processing may be necessary to combat the error floor .another advantage of - codes is that they are future - proof : while the block code does not benefit from further decoding iterations , as its performance is already close to its decoding threshold , the scaling behavior of the sc - ldpc code allows to carry out further iterations and achieve still larger coding gains , as the gap to the decoding threshold is still non - negligible .this makes these codes attractive for standardization .as future optical networks tend to become increasingly flexible and _ elastic _ , transceivers that integrate a certain amount of flexibility with respect to coding and modulation formats are required . especially the modulation format is expected to change when transceivers are designed for long - haul or short - haul applications , where the latter require high spectral efficiencies ( e.g. , data center interconnects ) . in this section ,we show that codes are perfectly suited to be combined with varying modulation formats due to their _ universality _ properties .we combine - codes with a modulator and use _ density evolution _ to show how the detector front - end influences the performance of the codes . in conventional ( block ) code design ,usually the code needs to be `` matched '' to the transfer curve of the detection front - end . if the code is not well matched to the front - end , a performance loss occurs .if the detector front - end has highly varying characteristics , due to , e.g. 
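a small software model of the check - node update and llr quantization used on the evaluation platform above can be sketched as follows ; the scaling factor , the clipping range and the uniform 15 - level quantizer are assumptions , since the exact fixed - point format of the fpga decoder is not specified in the text .
....
import numpy as np

def scaled_minsum_check(llrs_in, alpha=0.75):
    """
    scaled min-sum update for one check node.
    llrs_in: variable-to-check messages; returns the check-to-variable messages.
    only the two smallest magnitudes, the sign bits and the index of the minimum
    are needed per check node, which keeps the internal data flow small.
    """
    llrs_in = np.asarray(llrs_in, dtype=float)
    signs = np.sign(llrs_in); signs[signs == 0] = 1.0
    total_sign = np.prod(signs)
    mag = np.abs(llrs_in)
    order = np.argsort(mag)
    min1, min2 = mag[order[0]], mag[order[1]]      # two smallest magnitudes
    out = np.empty_like(llrs_in)
    for i in range(len(llrs_in)):
        m = min2 if i == order[0] else min1        # exclude the edge's own message
        out[i] = alpha * (total_sign * signs[i]) * m
    return out

def quantize_llr(llr, levels=15, clip=7.0):
    """uniform quantizer with an odd number of levels (assumed quantizer model)."""
    step = 2 * clip / (levels - 1)
    return np.clip(np.round(llr / step), -(levels // 2), levels // 2) * step

# awgn llrs for bpsk transmission of the all-zero codeword (box-muller noise via numpy)
rng = np.random.default_rng(2)
sigma = 0.8
y = 1.0 + sigma * rng.standard_normal(6)
llr = quantize_llr(2 * y / sigma**2)
print(scaled_minsum_check(llr))
....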
, varying modulation formats or channels , several codes would need to be implemented and selected depending on the conditions , which is not feasible in optical networks , where feedback is usually difficult to realize and where different codes can not be implemented due to hardware constraints .in contrast to many block ldpc codes , spatially coupled ldpc codes can converge below the pinch - off in the exit chart due to the effect of threshold saturation .hence , even if the code is not well matched to the demodulator / detector from a traditional point of view , we can hope to successfully decode .we can hence use a single code which is _ universally _ good in all scenarios and the code design can stay _ agnostic _ to the channel / detector behavior . in order to illustrate the concept , we model the detector by a linear exit characteristic } = f_d(i_a^{[d ] } ) = a\cdot i_a^{[d ] } + i_c - \frac{a}{2}\ ] ] where controls the slope of the characteristic and describes the mutual information of the communication channel .the slope models the effect of e.g. , different modulation formats , different bit labelings in higher order modulation and different detectors .we assume that the output of the detector can be modeled using a. there therefore also use message passing .we compare two different code approaches ; first we use the spatially coupled ensemble presented in with the density evolution equation for iterative detection given by where denotes the node - perspective degree distribution polynomial , the edge - perspective degree distribution , and the edge message erasure probability of spatial position at iteration . additionally , we generate protograph based codes end employ density evolution including iterative detection .we consider two code families of rate : the first family is the rapidly converging code from sec .[ sec : weakly ] with and where we use and in kudekar s ensemble and with in the protograph ensemble .the second code is a regular code where we use kudekar s ensemble and a protograph ensemble with and with .figure [ fig : det_thresholds ] shows the de results where we use solid lines ( ' '' '' ) to show the decoding thresholds for kudekar s ensemble and dashed lines ( ) for the protograph - based ensemble .all sc codes have decoding thresholds close to the theoretical limit of and the decoding threshold is almost independent of the detector characteristic s slope .a regular block ldpc code has a highly varying threshold for different slopes .the flat threshold behavior for codes indicates a _ universal _ , _ channel - agnostic _ behavior .even an optimized irregular ldpc code will only be good for a single slope parameter . in order to improve the decoding threshold, we may deliberately select a precoder that has an exit characteristic with slope , however , as the inset of fig .[ fig : det_thresholds ] shows , the slope affects the decoding speed ( measured at ) , i.e. , the number of iterations required to advance the decoding wave by one step , so that the complexity will grow alongside . 
for the case of the rapidly converging code, further increases the decoding speed .we have presented an example of such a system with differential detection ( ) that is adapted to a channel with varying phase noise in .therein , a single spatially coupled code was able to outperform two different ldpc codes optimized for different channel characteristics .in this paper , we have highlighted -ldpc codes as potential candidates for future lightwave transmission systems .we have optimized - codes for convergence speed and shown by means of an fpga - based simulation that very low error rates can be obtained . finally , we have shown that - can be good candidates if they employed in a system with iterative decoding and detection : a single code can be used in various channel conditions .a. r. iyengar , m. papaleo , p. h. siegel , j. k. wolf , a. vanelli - coralli , and g. e. corazza , `` windowed decoding of protograph - based ldpc convolutional codes over erasure channels , '' _ ieee trans .inf . theory _58 , no . 4 ,pp . 23032320 , april 2012 .c. hger , a. graell i amat , f. brnnstrm , a. alvarado , and e. agrell , `` comparison of terminated and tailbiting spatially coupled ldpc codes with optimized bit mapping for pm-64-qam , '' in _ proc .ecoc _ , cannes , france , 2014 , paper th.1.3.1 .jian , h. d. pfister , k. r. narayanan , r. rao , and r. mazareh , `` iterative hard - decision decoding of braided bch codes for high - speed optical communication , '' in _ proc .globecom _ ,atlanta , usa , 2013 .v. aref , l. schmalen , and s. ten brink , `` on the convergence speed of spatially coupled ldpc ensembles , '' in _ proc .allerton conference on communications , control , and computing _ , oct .2013 , arxiv:1307.3780 .l. schmalen , s. ten brink , and a. leven , `` spatially - coupled ldpc protograph codes for universal phase slip - tolerant differential decoding , '' in _ proc .ofc _ , los angeles , ca , usa , mar .2015 , paper th3e.6 .
in this paper , we highlight the class of spatially coupled codes and discuss their applicability to long - haul and submarine optical communication systems . we first demonstrate how to optimize irregular spatially coupled ldpc codes for their use in optical communications with limited decoding hardware complexity and then present simulation results with an fpga - based decoder where we show that very low error rates can be achieved and that conventional block - based ldpc codes can be outperformed . in the second part of the paper , we focus on the combination of spatially coupled ldpc codes with different demodulators and detectors , important for future systems with adaptive modulation and for varying channel characteristics . we demonstrate that sc codes can be employed as universal , channel - agnostic coding schemes .
keywords : error correction codes , low - density parity - check codes , spatial coupling , optical communications
what is free - will for a physicist ? this is a very personal question .most physicists pretend they do nt care , that it is not important to them , at least not in their professional life .but if pressed during some evening free discussions , after a few beers , surprising answers come out .everything from `` obviously i enjoy free - will '' to `` obviously i do nt have any free - will '' can be heard .similarly , questions about time lead to vastly different , though general quite lean discussions : `` time is a mere evolution parameter '' , `` time is geometrical '' are standard claims that illustrate how poorly today s physics understands time .consequently , a theory of quantum gravity that will have to incorporate time in a much more subtle and rich way will remain a dream as long as we do nt elaborate deeper notions of time .i like to argue that some relevant aspect of time is not independent of free - will and that free - will is necessary for rational thinking , hence for science .consequently , this aspect of time , that i ll name creative time - or heraclitus - time - is necessary for science . for different arguments in favor of the passage of time ,see , e.g. , .the identification of time with ( classical ) clocks is likely to be misleading ( sorry einstein ) .clocks do not describe our internal feeling of the passage of time , nor the objective chance events that characterize disruptive times - the creative time - when something beyond the mere unfolding of a symmetry happens .indeed , clocks describe only one aspect of time , the geometric , boring , parmenides - time . but let s start from the beginning . before thinking of time and even before physics and philosophy , we need the possibility to decide what we ll consider as correct statements that we trust and believe and which statements we do nt trust and thus do nt buy .hence : + free - will comes first , in the logical order ; and all the rest follows from this premise .+ free - will is the possibility to choose between several possible futures , the possibility to choose what to believe and what to do ( and thus what not to believe and not to do ) .this is in tension with scientific determinism , according to which , all of today s facts were necessary given the past and the laws of nature .notice that the past could be yesterday or the big - bang billions of years ago .indeed , according to scientific determinism , nothing truly new ever happens , everything was set and determined at the big - bang .this is the view today s physics offers and i always found it amazing that many people , including clever people , do really believe in this .time would merely be an enormous illusion , nothing but a parameter labeling an extraordinary unraveling of some pre - existing initial ( or final ) conditions , i.e. the unfolding of some symmetry .what is the explanatory power of such a view ?what is the explanatory power of the claim that everything was set at the beginning - including our present day feelings about free - will - and that there is nothing more to add because there is no possibility to add anything .clearly , i am not a compatibilist , i.e. not among those who believe that free - will is merely the fact that we always happen to `` choose '' what was already pre - determined to occur , hence that nothing goes against our apparently free choices .i strongly believe that we truly make choices among several possible futures . 
before elaborating on all this ,let me summarize my argument .the following sections do then develop the successive points of my reasoning .1 . free - will comes first in the logical order .indeed , without free - will there is no way to make sense of anything , no way to decide which arguments to buy and which to reject .hence , there would be no rational thinking and no science .in particular , there would be no understanding .since free - will is the possibility to choose between several possible futures , point 1 implies that the world is not entirely deterministic .non - determinism implies that time really exists and really passes : today there are facts that were not necessary yesterday , i.e. the future is open .4 . in addition to the geometrical time , there is also creative time .one may like to call the first one parmenides - time , and the second concept of time heraclitus - time . both exist .the tension between free - will and creative time on one side and scientific determinism on the other side dissolves once one realizes that the so - called real numbers are not really real : there is no infinite amount of information in any finite space volume , hence initial conditions and parameters defining evolution laws are not ultimately defined , i.e. the real numbers that theories use as inital conditions and parameters are not physically real .hence , neither newtonian , nor relativity , nor quantum physics are ultimately deterministic . 6 .consequently , neither philosophy nor science nor any rational argument can ever disprove the existence of free - will , hence of the passage of time .as already mentioned in the introduction , free - will comes first . indeed ,free - will is the possibility to choose between several possible futures , like the possibility to choose what to believe and what to do , hence also to choose what not to believe and not to do .accordingly , without free - will one could not distinguish truth from false , one could not choose between different views .for example , how could one decide between creationism and darwinism , if we could not use our free - will to choose among these possibilities ? without free - will all supporters of any opinion would be equally determined ( programmed ) to believe in their views . in summary , without free - will there would be no way to make sense of anything , there would be no rational thinking and no science . in particular , there would be no understanding .furthermore , without free - will one could not decide when and how to test scientific theories .hence , one could not falsify theories and science , in the sense of popper , would be impossible .i was very pleased to learn that my basic intuition , expressed above , was shared and anticipated by a poorly known french philosopher , jules lequyer in the 19th century , who wanted to simultaneously validate science and free - will . as lequyer emphasized : `` without free - will the certainty of scientific truths would become illusory '' . and ( my addition ) the consistency of rational arguments would equally become illusory .lequyer continues : `` instead of asking whether free - will is certain , let s realize that certainty requires free - will '' .lequyer also emphasized that free - will does nt create any new possibilities , it only makes some pre - existing potentialities become actual , a view very reminiscent of heisenberg s interpretation of quantum theory. however , lequyer continues , free - will is also the rejection of chance . 
for lequyer - and for me - our acts offree - will are beginnings of chains of consequences .hence , the future is open , determinism is wrong ; a point on which i ll elaborate in the next two sections .lequyer did nt publish anything .but , fortunately , had an enormous influence on another french philosopher , a close friend , charles renouvier who wrote about lequyer s ideas and published some of lequyer s notes . in turn, renouvier had a great influence on the famous american philosopher and psychologist william james who is considered as one of the most influential american psychologists .william james wrote `` after reading renouvier , my first act of free - will shall be to believe in free - will '' .this may sound bizarre , but , in fact , is perfectly coherent : once one realizes that everthing rests on free - will , then one acts accordingly .the existence of genuine free - will , i.e. the possibility to choose among several possible futures , naturally implies that the world is not entirely deterministic . in other worlds , today there are facts that were not necessary , i.e. facts that were not predetermined from yesterday , and even less from the big - bang .recall that according to scientific determinism everything was set at the beginning , let s say at the big - bang , and since then everything merely unfolds by necessity , without any possible choice .philosophers include in the initial state not only the physical state of the universe , but possibly also the character of humans - and living beings .hence , let s recall that according to physical determinism everything is fully determined by the initial state of all the atoms and quanta at any time ( or time - like hypersurface ) and the laws of physics .for example , given the state of the universe a nanosecond after the big - bang , everything that ever happened and will ever happen - including the characters , desires and reasons of all humans - was entirely determined by this initial condition . in other words , nothing truly new happens , as everything was already necessary a nanosecond after the big - bang . but how can one reconcile ideas about free - will such as summarized in the previous sections with scientific determinism ? or even with quantum randomness ?this difficulty led many philosophers and scientists to doubt the very existence of free - will .these so - called compatibilist changed the definition of free - will in order to make it compatible with determinism .free - will , they argue , is merely the fact that we are determined to never choose anything that does nt necessary happen .nevertheless , compatibilists argue , we have the feeling that our `` necessary choices '' are free .this sounds to me like a game of words , some desperate tentative to save our inner feeling of free - will and scientific determinism .but , as lequyer anticipated , free - will comes first , hence there is no way to rationally argue against its existence , for rational arguing requires that one can freely buy or not buy the argument : genuine compatibilists must freely decide to buy the compatibilists argument , hence compatibilists must enjoy free - will in lequyer s sense .moreover , and this is my main point , scientific determinism is wrong , hence there is no need to squeeze free - will in a deterministic world - view .let me emphasize that since free - will comes first , i.e. 
the possibility to choose between several possible futures comes first , and since this is incompatible with scientific determinism , the latter is necessarily wrong : the future has to be open , as we show in the next section . before explaining why physics , including classical newtonian physics , is not deterministic ,let me address first two related questions : when do random ( undetermined ) events happen ? what triggers random events ?already when i was a high school student , long before thinking seriously about free - will , the concept of randomness and indeterminism puzzled me a lot .when can a random event happen ? what triggers its occurrence ?if randomness is only a characteristic of long sequences , as my teachers told me , then what characterizes individual random events ?what is the probability of a singular event ?are nt long sequences merely the accumulation of individual events ?the only interesting answer to the question `` when do random events happen ? '' i could find was given by yet another 19th century french philosopher ( there is no way to escape from one s cultural environment ) , antoine a. cournot .his idea was that chance happens when causal chains meet .this is a nice idea , illustrated , e.g. , by quantum chance which happens when a quantum system encounters a measuring device .this idea can be illustrated by everyday chance events .imagine that two persons , alice and bob meet up by chance in the street ( taken from ) .this might happen , for example , because alice was going to the restaurant further down the same street and bob to see a friend who lives in the next street . from the moment they decide to go on foot , by the shortest possible path , to the restaurant for alice and to see his friend for bob , their meeting was predictable .this is an example of two causal chains of events , the paths followed by alice and bob , which cross one another and thus produce what looks like a chance encounter to each of them .but that encounter was predictable for someone with a sufficiently global view .the apparently chance - like nature of the meeting was thus only due to ignorance : bob did not know where alice was going , and conversely .but what was the situation before alice decided to go to the restaurant ? if we agree that she enjoys the benefits of free - will , then before she made this decision , the meeting was truly unpredictable .true chance is like this .true chance does not therefore have a cause in the same sense as events in a deterministic world .a result subject to true chance is not predetermined in any way .but we need to qualify this assertion , because a truly chance like event may have a cause .it is just that this cause does not determine the result , only the probabilities of a range of different possible results are determined . 
in other words , it is only the propensity of a certain event to be realised that is actually predetermined , not which event obtains .let s have a more physicist look at that .first , consider two colliding classical particles , see fig .[ figcollidingpart ] .next , consider a unitary quantum evolution in an arbitrary hilbert space , see fig .[ figunitaryevol ] .look for a while at the latter one ; it is especially boring , nothing happens , it is just a symmetry that displays itself .possibly the symmetry is complex and the hilbert space very large , but frankly , nothing happens as the equivalence between the schrdinger and the heisenberg pictures clearly demonstrates .likewise , for a bunch of classical harmonic oscillators nothing happens .somehow , there is no time ( or only the boring geometric time that merely labels the evolution ) .similarly , as long as the classical particles of fig .[ figcollidingpart ] merely move straight at a constant speed , nothing happens : in another reference frame they are at rest .it is only when the classical particles collide , or when the quantum system meets a measuring apparatus , that something happens , as cournot claimed .but one may object that in phase space the point that represents the 2 particles does nt meet anything . in phase space, there is no collision , as collisions require at least two objects and in phase space there is only one object , i.e. one point .moreover , the collision in real space and the consequence of that collision is already entirely determined by the initial conditions : in phase space it s only a symplectic symmetry that displays itself . and even if one assumes that each particle is initially `` independent '' , whatever that could mean , after colliding the 2 particles get correlated. hence , for cournot s idea to work , one would need a `` correlation sink '' .this is a bit similar to the collapse postulate of quantum theory which breaks correlations , i.e. resets independence ( separability ) . in summary, cournot s idea is attractive , but not entirely satisfactory ; it does nt seem to fit with scientific determinism .it took me a very long time to realize what is wrong with that claim .consider a finite volume of space , e.g. a one millimeter radius ball containing finitely many particles .can this finite volume of space hold infinitely many bits of information ?classical and quantum theories answer is a clear `` yes '' . but why should we buy this assertion ?the idea that a finite volume of space can hold but a finite amount of information is quite intuitive . however , theoretical physics uses real numbers ( and complex numbers , but let s concentrate on the reals , this suffices for my argument ) .hence the question : are so - called real numbers really real ? are they physically real ? 
for sure, it is not because descartes ( yet another french philosopher , but this time a well - known one ) named the so - called real numbers `` real '' that they are really real .actually , the idea that real numbers are truly real is absurd : a single real number contains an infinite number of bits and could thus , for example , contain all the answers to all questions one could possibly formulate in any human language .indeed , there are only finitely many languages , each with finitely many letters or symbols , hence there are only countably many sequences of letters .most of them do nt make any sense , but one could enumerate all sequences of letters as successive bits of one real number , first the sequences of length 1 , next of length 2 and so on . the first bit after each sequence tells whether the sequence corresponds to a binary question and , if so , the following bit provides the answer .such a single real number would contain an infinite amount of information , in particular , as said , it would contain the answer to all possible questions one can formulate in any human language . no doubt , real numbers are true monsters !moreover , almost all so - called real numbers are uncomputable . indeed , there are only countably many computer programs , hence real numbers are uncomputable with probability one . in other words ,almost all real numbers are random in the sense that their sequences of digits ( or bits ) are random .let me emphasize that they are as random as the outcome of measurements on half a singlet maximally entangled with another spin . ] . andthese random numbers ( a better name for `` real '' numbers ) should be at the basis of scientific determinism ?come on , that s just not serious ! +imagine that at school you would have learned to name the so - called _ real numbers _ using the more appropriate terminology of _ random numbers_. would you believe that these numbers are at the basis of scientific determinism ? to name `` random numbers ''`` real numbers '' is the greatest scam and trickery of science ; it is also a great source of confusion in the philosophy of science .+ note that not all real numbers are random .some , but only countably many , are computable , like all rational numbers and numbers like and .actually , all numbers one may explicitly encounter are computable , i.e. are exceptional .the use of real numbers in physics , and other sciences , is an extremely efficient and useful idealization , e.g. to allow for differential equations .but one should not make the confusion of believing that this idealization implies that nature is deterministic . a deterministic theoretical model of physicsdoes nt imply that nature is deterministic . again , real numbers are extremely useful to do theoretical physics and calculations , but they are not physically real .the fact that so - called real numbers have in fact random digits , after the few first ones , has especially important consequences in chaotic dynamical systems . after a pretty short time, the future evolution would depend on the thousandth digit of the initial condition .but that digit does nt really exist .consequently , the future of classical chaotic systems is open and newtonian dynamics is not deterministic .actually most classical systems are chaotic , at least the interesting ones , i.e. all those that are not equivalent to a bunch of harmonic oscillators . 
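The sensitivity to far-away digits is easy to demonstrate numerically. The following is a toy illustration of my own (not from the text), using the logistic map x -> 4x(1-x), whose Lyapunov exponent is ln 2: two initial conditions that agree to fifteen decimal digits end up on completely different trajectories after a few dozen iterations, so the "non-existing" digits of the initial condition decide the outcome.

x = 0.123456789012345          # some initial condition
y = x + 1e-15                  # same condition, perturbed in the 15th decimal digit

for n in range(1, 81):
    x = 4.0 * x * (1.0 - x)    # chaotic logistic map, Lyapunov exponent ln 2
    y = 4.0 * y * (1.0 - y)
    if abs(x - y) > 0.1:
        print("trajectories differ by more than 0.1 after", n, "iterations")
        break
# the separation grows roughly like 2**n * 1e-15, so after about 50 steps the
# unresolved digits of the initial condition dominate the result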
hence ,classical mechanics is not deterministic , contrary to standard claims and widely held beliefs .note that the non - deterministic nature of physics may leave room for emerging phenomena , like e.g. phenomena that could produce top - down causes , in contrast to the usual down - top causes we are used to in physics .a well - known example of a set of phenomena that emerges from classical mechanics is thermodynamics which can be deduced in the so - called thermodynamical limit .but , rather than going to infinite systems , it suffices to merely understand that classical mechanics is not ultimately deterministic , neither in the initial condition , nor in the set of boundary conditions and potentials required to define the evolution equations .what about quantum theory ?well , if one accepts that the measurement problem is a real physics problem - as i do , then this theory is also clearly not deterministic .if , on the contrary , one believes in some form of a many worlds view , then the details of the enormously entangled wave function of the universe depends again on infinitesimal details , as in classical chaotic systems . note that although quantum dynamics has no hyper - sensitivity to initial conditions , it shares with classical chaotic systems hyper - sensitivity to the parameters that characterize that dynamics , e.g. the hamiltonian .furthermore , open quantum systems recover classical trajectories also in the case of chaotic systems , see fig .[ figqchaos ] . hence ,quantum dynamics is not deterministic .finally , bohmian quantum mechanics is again hyper - sensitive to the initial condition of the positions of the bohmian particles ; hence , like chaotic classical systems , bohmian mechanics is not deterministic .admittedly , one may object that now we have an analog of the measurement problem in classical physics , as it is unclear when and how the non - existing digits necessary to define the future of chaotic systems get determined .this is correct and , in my opinion , inevitable .first , because free - will comes first , next because mathematical real numbers are physical random numbers . finally , because physics - and science in general - is the human activity aiming at describing and understanding how nature does it . for this purpose one needs to describe also how humans interact with nature , how we question nature . including the observer inside the description results , at best , in a tautology without any possible understanding : there would result no way to freely decide which description provides explanations , which argument to buy or not to buy . 
to summarize this section , claiming that classical mechanics is deterministic , or that quantum theory implies a many - world view , is like elevating `` real '' numbers , the determinism of newton s equations and the linearity of the schrdinger equation , to some sort of ultimate religious truth .it is confusing mathematics with physics .it is a common but profound epistemological mistake , see fig .[ figphysreal ] .so far we saw that free - will comes first in the logical order , hence all its consequences are necessary .in particular one ca nt argue rationally against free - will and its natural consequence , namely that time really passes .we also saw that this is not in contradiction with any scientific fact .actually , quite the opposite , it is in accordance with the natural assumption that no finite region of space can contain more than a finite amount of information .the widely held faith in scientific determinism is nothing but excessive scientism .this can be summarized with the simple chain of implications : + _ free - will _ _ non - determinism _ _ time really passes_ + let us look closer at the implications for time .there is no doubt that time as an evolution parameter exists . to get convinced it suffices to look at a bunch of classical harmonic oscillators ( like classical clocks ) , or the unitary evolution of a closed quantum system , or at the inertial motion of a classical particle as in fig .[ figcollidingpart ] .this time is the boring time , the time when nothing truly new happens , the time when things merely are , time when what matters is _ being _ ,i.e. parmenides - time .one could also name this einstein s time . but let s look at the collision between the two particles of fig .[ figcollidingpart ] .the detail of the consequences of such a collision depends on non - existing infinitesimal digits , i.e. on mathematically real but physically random numbers . to get convinced just imagine a series of such collisions , this leads to chaos; hence each collision is the place of some indeterminism , that is of some creative time , time when what matters is _change_. hence we call this creative time heraclitus - time .this creative time is extraordinarily poorly understood by today s science , in particular by today s physics .this does nt mean that it does nt exist , or that it is not important .on the contrary , it means that there are huge and fascinating open problems in front of us , scientists , physicists and philosophers .notice that this is closely related to cournot s idea that random events happen when independent causal chains meet , e.g. when two classical particles meet .the two particles are independent , at least not fully correlated , because their initial conditions are not fully determined . and their future , after the collision, is not predetermined , but contains a bit of chance .similarly , quantum chance happens when a quantum system meets a measurement apparatus , as described by standard textbooks .admittedly , we do nt know what a measurement apparatus is , i.e. we do nt know which configurations of atoms constitute a measurement apparatus .this is the so - called quantum measurement problem . 
according to what we saw ,there is a similar problem in classical mechanics : despite the indeterminism in the initial conditions and evolution parameters , things get determined as time passes ( as discussed near the end of the previous section ) .neither philosophy nor science can ever disprove the existence of free - will .indeed , free - will is a prerequisite for rational thinking and for understanding , as emphasized by jules lequyer .consequently , neither philosophy nor science can ever disprove that time really passes .indeed , the fact that time really passes is a necessary consequence of the existence of free - will . the fact that today s science - including classical newtonian mechanics - is not deterministic may come as a huge surprise to many readers ( including the myself of 20 years ago ) .indeed , the fact that descartes named _ real _ numbers that are actually physically _ random _ had enormous consequences .this together with the tendency of many scientists to elevate their findings to some sort of quasi - religious ultimate truth - i.e. scientism - lead to great confusion , as illustrated by laplace famous claim about determinism and by believers in some form of the many - world interpretation of quantum mechanics , based respectively on the determinism of newton s equation and on the linearity of schdinger s equation .once one realizes that science is not entirely deterministic , though it clearly contains deterministic causal chains , one faces formidable opportunities .this might seem frightening , though i always prefer challenges and open problems to the claim that everything is solved .non - determinism implies that time really passes , most likely at the junction of causal chains , i.e. when creative time is at work .this leaves room for emerging phenomena , like thermodynamics of finite systems .it may also leave room for top - down causality : the initial indeterminism must become determined before indeterminism hits large scale , much in the spirit of quantum measurements . as a side conclusion ,note that robots based on digital electronics will never leave room for free - will , hence the central thesis of hard artificial intelligence ( the claim that enough sophisticated robots will automatically become conscious and human - like ) is fundamentally wrong .so , am i a dualist ? possibly , though it depends what is meant by that .for sure i am not a materialist . 
note that today s physics already includes systems that are not material in the sense that they have no mass , like electro - magnetic radiation , light and photons .what about physicalism ?if this means that everything can be described and understand by today s physics , then physicalism is trivially wrong , as today s theories describe at best 5% of the content of the universe .more interestingly , if physicalism means that everything can be understood using the tools of physics , then i adhere to this view , though the fact that free - will comes first implies that physics can make endless progress , but without ever reaching a final point .we will understand much more about time and about free - will , though we ll never get a full rational description and understanding of free - will .just imagine this debate a century ago .how naive anyone claiming at that time that physics provides a fairly complete description of nature would appear today .similarly , for anyone making today similar claims .let me make a last comment , a bit off - track .free - will is often analyzed in a context involving human responsibility , `` how could we be responsible for our actions if we do nt enjoy free - will ? '' .there is another side to this aspect of the free - will question : `` how could we prevent humans from destroying humanity if we claim we are nothing more than sophisticated robots ? '' , and `` how could one argue that human life has some superior value if we pretend we are nothing but sophisticated robots ? '' .this work profited from numerous discussions , mostly with myself over many decades during long and pleasant walks .i should also thank my old friend jean - claude zambrini for introducing me to cournot s idea , when we were both students in geneva .thanks are due to chris fuchs who introduced me to jules lequyer and to many participants to the workshop on _ time in physics_ organized at the eth - zurich by sandra rankovic , daniela frauchiger and renato renner .j. norton , www.pitt.edu/ jdnorton / goodies / passage l. smolin , _ time reborn _ , houghton mifflin harcourt , 2013 . j. barbour , _ the end of time _ , oxford univ .press , 1999 .r. kane , _ a contemporary introduction to free will _ , oxford univ .press , 2005 .ch.h.s temann and ch.h .stemann , _ heraclitus and parmenides - an ontic perspective _ , grin verlag gmbh , 2013 ; k. jaspers , _ anaximander , heraclitus , parmenides , plotinus , laotzu , nagarjuna _ , mariner books , 1974 .popper , _ logik der forschung _, 1934 ; _ the logic of scientific discovery _ , hutchinson , london , 1959 .j. lequier , _ comment trouver , comment chercher une vrit premire _ , edition de leclat , 1985 ; see also j. grenier , _ la philosophie de jules lequier _ , calligrammes , 1983 ( isbn 2 903258 30 9 ) ; j. wahl , _ jules lequier _ , editions des trois collines , genve - paris , 1948. w. logue _ charles renouvier , philosopher of liberty _ , louisiana state univ . press , 1993 .n. gisin , _ propensities in a non - deterministic physics _ , synthese * 98 * , 287 ( 1991 ) .a. cournot , _ exposition de la thorie des chances et des probabilits _ , librairie hachette , 1843 .reprinted in part in _etudes pour le centenaire de la mort de cournot _ , ed .a. robinet , edition economica , 1978 .n. gisin , _ quantum chance , nonlocality , teleportation and other quantum marvels _ , springer , 2014 .g. chaitin , the labyrinth of the continuum , in meta math ! 
,vintage , 2008 .ellis , in _ downward causation and the neurobiology of free will _ , eds murphy , ellis and oconnor , springer , 2009 .brun , n. gisin et al .lett . a 229 , 267 ( 1997 ) .e. schrdinger , mind and matter , cambridge univ . press , 1958 .
today s science provides quite a lean picture of time as a mere geometric evolution parameter . i argue that time is much richer . in particular , i argue that besides the geometric time , there is creative time , when objective chance events happen . the existence of the latter follows straight from the existence of free - will . following the french philosopher lequyer , i argue that free - will is a prerequisite for the possibility to have rational argumentations , hence ca nt be denied . consequently , science ca nt deny the existence of creative time and thus that time really passes .
regge calculus has been proposed as an approach to classical and quantum general relativity .it consists in approximating space - time by a simplicial decomposition .the fundamental variables of the theory are the lengths of the edges of the simplices .this approach has been demonstrated in numerical simulations of classical general relativity and also has inspired attractive ideas for the quantization of gravity .for instance an extension of this framework led to the successful quantization of dimensional euclidean gravity through the ponzano regge model , which can also be seen as one of the key motivations for the `` spin - foam '' approaches to dimensional quantum gravity .there has been quite a bit of work devoted over the years to regge calculus , for a recent review including related formulations see loll , and for a earlier pedagogical presentations see misner , thorne and wheeler .a canonical formulation for regge calculus has nevertheless , remained elusive ( for a review see williams and tuckey ) .we have recently introduced a methodology to treat discrete constrained theories in a canonical fashion , which has been usually called `` consistent discretizations '' .the purpose of this paper is to show that this methodology can be successfully applied to regge calculus without any need for modifications of the regge action .the resulting theory is a proper canonical theory that is consistent , in the sense that all its equations can be solved simultaneously .as is usually the case in `` consistent discretizations '' the theory is constraint - free ( although as is usual in regge calculus there are triangle inequalities to be satisfied among the variables ) .we will see that the treatment can be applied in both the euclidean and lorentzian case . in the latter casethere is an added bonus : in order to have a well defined canonical structure one naturally eliminates `` spikes '' that have been a problem in regge formulations in the past at the time of considering the continuum limit .this is due to the fact that our simplices only have one time - like hinge .it is therefore not possible to construct simplices with infinitesimal volume and arbitrary length .if one lengthens the time - like hinge one necessarily has to lengthen the space - like hinges and therefore increase the volume .therefore one will not see the quantum amplitude dominated by long simplices of vanishing volume .to make the calculations and illustrations simpler , we will concentrate on three dimensional gravity , but the reader will readily notice that there is no obstruction to applying the same reasonings in dimensions . given a simplicial approximation to a three dimensional manifold , one can approximate the einstein action ( with a cosmological term ) , as a sum over the edges ( `` hinges '' ) of the decomposition plus a sum over the simplices , where the first sum is over all hinges and the second over all simplices , is the length of the hinge and is the deficit angle around the hinge , i.e. where is the angle formed by the two faces of the simplices that end in the hinge . is the volume of the simplex ( in our three dimensional case , a tetrahedron ) .the constants and are related to newton s constant and the cosmological constant . 
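To make the deficit angle concrete, here is a minimal Euclidean sketch of my own (hypothetical helper names) that computes the deficit around an interior hinge purely from edge lengths: the face angles of each tetrahedron follow from the planar law of cosines, the dihedral angle at the hinge from the spherical law of cosines at a vertex of the hinge, and the deficit is 2 pi minus the sum of the dihedral angles of the tetrahedra sharing the hinge.

from math import acos, cos, sin, pi

def face_angle(a, b, c):
    # angle between the sides of lengths a and b (opposite the side of length c)
    return acos((a * a + b * b - c * c) / (2.0 * a * b))

def dihedral_at_AB(AB, AC, AD, BC, BD, CD):
    # dihedral angle of tetrahedron ABCD along the edge AB, obtained from the
    # three face angles at vertex A via the spherical law of cosines
    bac = face_angle(AB, AC, BC)
    bad = face_angle(AB, AD, BD)
    cad = face_angle(AC, AD, CD)
    return acos((cos(cad) - cos(bac) * cos(bad)) / (sin(bac) * sin(bad)))

# hinge shared by five regular unit tetrahedra: the deficit is positive,
# reflecting the familiar fact that regular tetrahedra do not tile flat space
theta = dihedral_at_AB(*([1.0] * 6))               # arccos(1/3), about 70.53 degrees
deficit = 2.0 * pi - 5.0 * theta
print("dihedral angle per tetrahedron:", theta)
print("deficit angle around the hinge:", deficit)  # about 0.128 rad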
a more explicit expression ( see for instance david ) can be given involving the values of the volumes of the two ( in three dimensions ) faces which share the hinge , , where and are the areas of the two triangles adjacent to in the simplex .this in turn can be used to give an expression that is purely a function of the lengths of the hinges , using the cayley menger determinants .we do not quote its explicit expression for brevity . in order to have a formulation that is amenable to a canonical treatment that is uniform , in the sense that one has the same treatment at all points on the lattice, one needs to make certain assumptions about the regularity of the simplicial decomposition chosen .this requirement can be somewhat relaxed and our method still applies , but in a first approach we will consider a regular decomposition as shown in figure 1 .we have divided space - time in prisms ( 1 and 2 in the figure , for example ) , and each prism in turn can be decomposed into three tetrahedra ( in the case of prism 2 the tetrahedra would be given by vertices , , ) .are assigned to the hinges in the following way : , , , , , .,title="fig:",height=124 ] are assigned to the hinges in the following way : , , , , , .,title="fig:",height=124 ] are assigned to the hinges in the following way : , , , , , .,title="fig:",height=124 ] to construct a lagrangian picture for the previous action we consider two generic `` instants of time '' and , as indicated by the direction labeled in figure 1 .we wish to construct an action of the form where the lagrangian depends on variables only at instants and .we choose one of the fundamental cubes ( union of prisms 1 and 2 in the figure ) , choose a conventional vertex in the cube labeled by in the lattice .notice that the use of the cubes is just for convenience , the framework is based on prisms that have a triangular spatial basis and therefore can tile any bidimensional spatial manifold .the variables we will consider are the lengths emanating from the vertex , as designated in the figure . a similar construction is repeated for each fundamental cube .the lagrangian that reproduces the regge action is given by a function that includes step functions that enforce the triangle inequalities between the hinge length variables .up to now we have kept the discussion generic , but we should now make things more precise , in dealing with either the euclidean or the lorentzian case . in the former ,all angles and quantities involved are real . in the lorentzian case, angles can become complex . moreover lengths can be time - like or space - like .null intervals can also be considered , but make the formulas more complicated , so for simplicity we do not consider them here .we will take all lengths as positive numbers , irrespective of the space - like or time - like character of the underlying hinge . in the above constructionwe have chosen the decomposition in such a way that the hinge is time - like and all other hinges are space - like .the formulas presented above ( for the angles , for instance ) are valid in both the euclidean and lorentzian case , but in the latter volumes , areas and length may have to be considered as imaginary numbers .all volumes involving a time - like direction are real , and in the construction these are the only ones involved .areas are imaginary if they involve one time - like direction and real if they do not .lengths are real if they are time - like and purely imaginary if they are space - like . 
with these conventions , dihedralangles around time - like directions are real ( for instance around ) , and dihedral angles around space - like directions are complex. some can be purely imaginary ( for instance rotation around in tetrahedron ) which correspond to lorentz boosts , or complex ( for instance rotation around in the aforementioned tetrahedron ) which does not correspond to a lorentz transformation ( it traverses the light cone ) .there is one further point to consider . in the expression for the deficit anglethe term is present for hinges that span from the base of the elementary cube to the top cover .for hinges lying entirely within the base or the top cover the term is . with these conventions the lagrangian turns out to be real and the sum yields the correct action avoiding over counting . for a more detailed discussion of angles in the lorentzian case see sorkin .we now proceed to treat this action with the `` consistent discretization '' approach .we consider as configuration variables and define their canonical momenta , here one is faced with several constraints .notice that variables are `` lagrange multipliers '' since the lagrangian does not depend on their value at instant and therefore their canonical momenta vanish .the only depend on links at level and therefore are constraints among the variables .the system of equations determines variables and their momenta in terms of the other variables so they can be eliminated .the resulting canonical pairs are .the remaining equations are evolution equations for these variables and there are no constraints left ( in the sense of dynamical constraints , the variables are still constrained by the usual triangle inequalities ) .the evolution equations are a true canonical transformation from the variables at instant to the variables at instant .this canonical transformation has as generating function , viewed as a type 1 canonical transformation , where in the lagrangian the variables have been replaced via the equations that determine them .the reader unfamiliar with the `` consistent discretization '' approach may question the legitimacy of this procedure in the sense of yielding a true canonical structure , however it was discussed how the canonical structure arises in detail through a generalization of the dirac procedure for discrete systems .this concludes the classical discussion . we have reduced the regge formulation to a well defined , unconstrained canonical system where the discrete time evolution is implemented as a canonical transformation .some of the original dynamical variables are eliminated from the formulation using the equations of motion . in the usual `` consistent discretizations '' the variables that are eliminated are the lagrange multipliers . herethe links that get determined can be viewed as playing a similar role .the equations that determine these variables are a complicated non - linear system . as in the usual `` consistent discretization '' approach ,one may have concerns that the solutions of the non - linear system could fail to be real , or could become unstable .we now have experience with consistent discretizations of mechanical systems and of gowdy cosmologies , which have field theoretic degrees of freedom and the evidence suggests that one can approximate the continuum theory well in spite of potential complex solutions and multi - valued branches . 
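For readers unfamiliar with the momentum definitions just used, here is a minimal sketch of the same recipe on the simplest possible example, a discretized harmonic oscillator; unlike the Regge case there are no constraints and no multiplier-like variables to eliminate, and the signs follow the usual discrete Legendre transforms (the paper's conventions may differ). Solving the momentum relations for the next step yields an explicit evolution map whose Jacobian has unit determinant, i.e. a genuine canonical transformation.

import numpy as np

# discrete Lagrangian with time step eps:
#   L(q_n, q_{n+1}) = (q_{n+1} - q_n)**2 / (2*eps) - eps * omega**2 * q_n**2 / 2
# discrete momenta (usual discrete Legendre transforms):
#   p_n     = -dL/dq_n     = (q_{n+1} - q_n)/eps + eps * omega**2 * q_n
#   p_{n+1} =  dL/dq_{n+1} = (q_{n+1} - q_n)/eps
# solving the first relation for q_{n+1} gives the constraint-free map below
# (which here reduces to the symplectic Euler scheme)

omega, eps = 1.0, 0.1

def step(q, p):
    q_next = q + eps * (p - eps * omega**2 * q)
    p_next = p - eps * omega**2 * q
    return q_next, p_next

# the evolution is a canonical transformation: the Jacobian of the map has
# determinant exactly one
J = np.array([[1.0 - eps**2 * omega**2, eps],
              [-eps * omega**2, 1.0]])
print("det of the evolution Jacobian:", np.linalg.det(J))

# as expected of a canonical map, the energy stays bounded over many steps
q, p = 1.0, 0.0
for _ in range(10000):
    q, p = step(q, p)
print("energy after 10**4 steps:", 0.5 * p**2 + 0.5 * omega**2 * q**2)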
we expect a similar picture to occur in regge calculus .turning our attention to quantization , as usual in the `` consistent discretization '' approach , the hard conceptual issues are sidestepped since the theory is constraint - free . the task at hand is to implement the canonical transformation that yields the discrete time evolution as a unitary quantum operator that implements the discrete classical equations of motion as quantum operatorial equations .this will in a generic situation be computationally intensive , but conceptually clear .it should be noted that the resulting unitary operator differs significantly from the ones that have been historically proposed in path integral approaches based on regge calculus .the usual approach to a path integral would be to compute , with a measure that presumably should enforce the constraints of the theory .on the other hand , in our approach one would have something like where is obtained by substituting in the values of the `` lagrange multipliers '' obtained from their equations of motion . is uniquely determined when one determines the unitary transformation that implements the dynamics ( examples of this in cosmological situations can be seen in our paper ) .so we see that we have eliminated some of the variables and the constraints of the theory and the path integral is uniquely defined by the consistent discretization approach .some concerns might be raised about the limitations imposed on our framework by the choice of initial lattice .we have chosen to use a lattice that is topologically cubical .this sets a well defined framework in which to construct a lagrangian evolution between two spatial hypersurfaces .the cubic lattice is not a strict requirement. it would be enough to have two `` close in time '' space - like hypersurfaces with the same simplicial decomposition in both for us to be able to set up our framework and start evolving .this can encompass quite a range of geometrical situations .it is however , inevitable that one should give up some arbitrariness in the space - time simplicial decomposition if one wishes to have a canonical structure .it is interesting that the structure imposed is such that it automatically eliminates the `` spikes '' ( thin simplices arbitrarily large in the time - like direction ) .it also worthwhile emphasizing that the framework can , with relatively simple additions , incorporate topology change .the idea is depicted in figure 2 . therewe see a point where there is topology change where the legs of the pair of pants separate . for that to happen one would need to modify at the hypersurface the explicit form of the lagrangian .it is interesting that from there on one can continue without further altering the framework and that at all times the number of variables involved has not changed .the picture also shows how one would handle an initial `` no boundary '' type singularity . hereone would have to `` by hand '' add links as the time evolution progresses forward .these variables are free as long as one does not wish to match some final end state for the evolution .if one has , however , specified initial and final data for the evolution , one finds that constraints appear that determine the values of the lengths of the extra links added . as an example of the framework, one can work out explicitly the evolution of a a dimensional space - time consistent of a four adjacent `` unit cubes '' of the type we have considered with fixed outer boundary conditions . 
in this case the initial data consists of eight lengths , the other initial lengths are determined by the boundary conditions . as we discussed , having the data at level and one can determine the `` lagrange multiplier '' links that have to be substituted in the lagrangian to generate the canonical transformation between the initial and final data .this transformation is later to be implemented unitarily upon quantization . in our formalismit can happen that the `` lagrange multipliers '' are not entirely determined by this procedure ( for other examples where this happens , including bf theory , see our paper ) .in such case the resulting theory has true constraints and true lagrange multipliers . in this examplethis happens .one is finally left with a canonical transformation dependent on 3 parameters ( this result is also true if one considers an adjacent unit cube system ) .the presence of free parameters also requires modifications in the path - integral formulas listed above as well .it should be emphasized that the equations determining the lagrange multipliers , even in this simplified case , are complicated coupled non - linear equations that have a complexity not unlike those in dimensions .what makes them easy to solve is the knowledge that the regge equations of motion correspond in this case to flat space - time .the canonical transformation can be implemented unitarily and the quantization completed .we will discuss the example in detail in a separate publication .we have applied the `` consistent discretization '' approach to regge calculus .we see that it leads to a well defined constraint - free canonical formulation , that is well suited for quantization .the approach can incorporate topology change .although we have limited the equations to the three dimensional case for simplicity , we have never used any of the special properties of three dimensional gravity and it is clear that the construction can be carried out in an arbitrary number of dimensions .it is interesting to notice that one of the original motivations for the construction of the `` consistent discretization '' approach was the observation by friedman and jack that in canonical regge calculus the lagrange multipliers failed to be free .it can be viewed as if this point of view has now been exploited to its fullest potential , offering a well defined computational avenue to handle classical and quantum gravity .we dedicate this paper to rafael sorkin on the occassion of his 60th birthday . we wish to thank luis lehner and jos a. zapata for comments .this work was supported by grant nsf - phy0244335 , nasa - nag5 - 13430 and funds from the horace hearne jr .laboratory for theoretical physics and cct - lsu .00 t. regge , phys . rev . * 108 * 558 ( 1961 ) . g. ponzano and t. regge , `` semiclassical limit of racah coefficients '' in `` spectroscopic and group theoretical methods in physics '' , ed . f. bloch , north holland , amsterdam , ( 1968 ) .r. loll , living rev .* 1 * , 13 ( 1998 ) [ arxiv : gr - qc/9805049 ] . c. misner , k. thorne , j. wheeler , `` gravitation '' , w. h. freeman , new york ( 1973 ) .p. a. tuckey and r. m. williams , class .* 7 * , 2055 ( 1990 ) .j. ambjorn , j. l. nielsen , j. rolf and g. k. savvidy , class . quant. grav .* 14 * , 3225 ( 1997 ) [ arxiv : gr - qc/9704079 ] .f. david , `` simplicial quantum gravity and random lattices '' in `` gravitation and quantizations : proceedings : les houches summer school on gravitation and quantizations '' , j. zinn - justin and b. 
julia ( editors ) , north - holland , amsterdam ( 1995 ) [ arxiv : hep - th/9303127 ] .r. sorkin , phys .* d12 * , 385 ( 1975 ) ; see also j. barret et al .phys * 36 * , 815 ( 1997 ) [ arxiv : gr - qc/9411008 ] . c. di bartolo , r. gambini , r. porto and j. pullin , j.math .phys . * 46 * , 012901 ( 2005 ) [ arxiv : gr - qc/0405131 ] .r. gambini , m. ponce and j. pullin , phys .d * 72 * , 024031 ( 2005 ) [ arxiv : gr - qc/0505043 ] .r. gambini and j. pullin , class .grav . * 20 * , 3341 ( 2003 ) [ arxiv : gr - qc/0212033 ] . c. di bartolo , r. gambini and j. pullin , class .* 19 * , 5275 ( 2002 ) [ arxiv : gr - qc/0205123 ] .j. l. friedman and i. jack , j. math .* 27 * , 2973 ( 1986 ) .
we apply the `` consistent discretization '' technique to the regge action for ( euclidean and lorentzian ) general relativity in arbitrary number of dimensions . the result is a well defined canonical theory that is free of constraints and where the dynamics is implemented as a canonical transformation . in the lorentzian case , the framework appears to be naturally free of the `` spikes '' that plague traditional formulations . it also provides a well defined recipe for determining the integration measure for quantum regge calculus .
three dimensional low - thrust optimal orbit transfer has attracted much inquiry focused on trajectory optimization using orbital elements . due to strong nonlinearity of differential equations in gaussian form with orbital elements , it is often difficult to obtain the optimal solution numerically in this system .some of the earliest works on the orbit transfer between neighboring elliptic orbits and on the transfer between coplanar and coaxial ellipses were presented by edelbaum .however , his elements as well as the keplerian elements all contain singularity .to avoid the singularity , arsenault firstly introduced the equinoctial elements .broucke and cefola developed nonsingular equinoctial orbital elements using the lagrange and poisson brackets of keplerian elements .kechichian presented an application of these nonsingular elements to solve the minimum time rendezvous problem with constant acceleration .chobotov considered more cases in minimum time transfer , including the comparison between exact solutions and approximate solutions obtained by the averaging technique .gerffroy and epenoy made further progress in both minimum time and fuel transfer problems using the averaging technique with the constraints of earth s oblateness and shadow effect taken into account .more recent works using the numerical averaging technique were presented by tarzi and speyer et al. , who provided a quick and accurate numerical approach for a wide range of transfers , including orbital perturbations such as earth s oblateness and shadow effect .besides the strong nonlinearity , an additional disadvantage in using the equinoctial elements is the complexity of the equations in this coordinate system . with this system ,kepler s equation must be solved by iteration to get the eccentric longitude at each integration step .hence , equinoctial elements present challenges for trajectory optimization .walker put forth another important development in the study of equinoctial elements when he used the stroboscopic method to modify orbital elements .he also altered differential equations into an approximative form , containing five dependent variables and one independent variable , as a means of achieving faster computation performance without solving kepler s equation .roth introduced the stroboscopic method to obtain a higher order approximation for small perturbed dynamical systems , which depends on several slow variables and one fast variable .haberkorn and gergaud investigated the application of the modified orbital elements by using the homotopy method .cui et al . used sequential quadratic programming under their lyapunov feedback control law , which was based on a function made up of modified elements , to obtain the optimal - lyapunov solution without optimal transfer .gao and li made the similar work to optimize the lyapunov function but never reached the optimal solution based on their lyapunov control law .an advanced technique using the lyapunov - based controller to solve the low - thrust orbit transfer problem in cartesian coordinates was introduced and rigorously proved by chang et al .. 
this technique is based on the fact that a non - degenerate keplerian orbit is uniquely described by its specific angular momentum and laplace vectors .the resulting lyapunov function provides an asymptotically stabilizing feedback controller , such that the target elliptic keplerian orbit becomes a locally asymptotically stable periodic orbit .however , the lyapunov - based transfer trajectory is not optimal in every sense . in this paper , the lyapunov - based transfer presented in be called chang - chichka - marsden ( ccm ) transfer to distinguish it from any other lyapunov - based transfers .the motivation behind this paper is to use the ccm transfer trajectory as an initial guess for optimization in order to obtain the optimal transfer solution utilizing cartesian coordinates . using this method avoids the numerical disadvantages due to strong nonlinearity and complexity in the use of orbital elements .this paper reviews the ccm transfer method in section 2 . in section 3 , a means to translate keplerian elements into specific angular momentum and laplace vectorsis presented .section 4 presents the proposed approach and the optimality ( kkt ) conditions for the minimum fuel consumption problem in cartesian coordinates .specifically , the chebyshev - gauss pseudospectral method is introduced to illustrate how the continuous optimal control problem can be reduced to a discretized nonlinear programming problem .finally , in section 5 , numerical simulations are carried out to make detailed comparisons between the optimal results using cartesian coordinates and those using orbital elements with the same initial guess .it shows that the use of cartesian coordinates makes it easier to obtain the correct optimal solution .this section summarizes and reviews the chang - chichka - marsden ( ccm ) transfer in .this transfer employs lyapunov - based controllers to achieve asymptotically stable transfers between elliptic orbits in a two - body problem .this paper assumes that the configuration space is .let be the tangent space of , and be the coordinates on .the equations of motion are given by of which the solutions are regarded as the _ keplerian orbits _ , where is the gravitational constant .the specific energy is defined by define by where is the specific angular momentum vector and is the laplace vector .the laplace vector is related to the eccentricity vector as follows : the three quantities , and satisfy the following two identities : define the sets the following proposition is from . the following hold : + 1 . is the union of all non - degenerate elliptic keplerian orbits . and .the fiber consists of a unique ( oriented ) non - degenerate elliptic keplerian orbit for each .[ prop1 ] the equation of the motion with a specific control force is given by where is the control force .define a metric on by with an arbitrary parameter , and .let be the open ball of radius centered at in the -metric and its closure .let be the pair of the angular momentum and the eccentricity vector of a target elliptic orbit .define a lyapunov function on by along the trajectory of ( [ eq : newton ] ) there is hence , where take a controller as follows : with an arbitrary function .then , the following proposition is proven in using lasalle s invariance principle . 
[ prop:2 ]let be the pair of the specific angular momentum and laplace vectors of the target elliptic orbit .take any closed ball of radius centered at contained in the set defined in ( [ set : d ] ) .then every trajectory starting in the subset of remains in that subset and asymptotically converges to the target elliptic orbit in the closed - loop dynamics ( [ eq : newton ] ) with the control law in ( [ eq : control ] ) the choice of the parameter in the lyapunov function plays a crucial role in determining the transfer trajectory .it determines the relative weighting between the two quadratic terms in the function in ( [ eq : lyapunov ] ) . with a small the shape of the trajectory will adjust to that of the target orbit first because a more weight is on . on the other hand , with a large , the normal direction of the trajectory plane will adjust to that of the target orbit plane first because a more weight is on .the parameter also determines the shape of the region of attraction since determines the shape of the ball in the metric .additionally , the ccm transfer works well for parabolic transfer , although the success of the transfer is proven exclusively for elliptic orbits only in proposition [ prop:2 ] .this section presents the transform of the six keplerian elements to specific angular momentum and laplace vectors for convenient reference .the state vector at periapsis to derive the transform in cartesian coordinates with the earth at the origin .let the periapsis of the orbit in the geocentric equatorial frame is determined by in the perifocal frame , the transformation matrix from the perifocal frame into the geocentric equatorial frame is given by {pe}=\left [ \begin{array}{ccc } \cos\omega \cos\omega - \sin\omega \sin\omega\cos i & -\cos\omega \sin\omega -\sin\omega \cos i \cos\omega & \sin\omega \sin i \\\sin\omega \cos\omega + \cos\omega \cos i \sin\omega & -\sin\omega \sin\omega+\cos\omega \cos i \cos\omega & -\cos\omega \sin i \\ \sin i \sin\omega & \sin i \cos\omega & \cos i \end{array } \right].\ ] ] the state vector in the geocentric equatorial frame is found by carrying out the matrix multiplications {pe}\{\mathbf r\}_p , \{\dot{\mathbf r}\}_e=[\mathbf q]_{pe}\{\dot{\mathbf r}\}_p.\ ] ] thus , the components of and are derived . then using the identity , the specific angular momentum and laplace vectors in the geocentric equatorial frameare computed as follows : on equatorial orbits , they simplify to on circular orbits , they simplify to ccm transfer trajectory is used as an initial guess to support the trajectory optimization in the open - loop system using the direct chebyshev - gauss pseudospectral transcription method and a nonlinear programming solver .the cartesian coordinates and the modified orbital elements in optimization are compared .let denote the cartesian coordinates in the geocentric equatorial frame .the minimum fuel consumption problem is given as follows : \\ & \| \mathbf u(t ) \|\leq f_{\rm max } \ , , \ , \forall t \in [ t_0,t_f ] \\ & \mathbf x(t_0 ) \ , \text { fixed}\\ & \mathbf l_t , \mathbf a_t \ , \text { fixed}\\ \end{array } \right.\ ] ] with {6\times 6 } , \quad \quad d=-\frac{\mu}{\| \mathbf r \|^3 } , \quad \quad \mathbf u=[0,0,0,f_1,f_2,f_3]^{t},\ ] ] where denotes the state vector , is the identity matrix and the control vector .the boundary conditions are given by for the minimum time problem , the cost function shall be used . 
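as a concrete companion to the element - to - vector transform of section 3 and the lyapunov controller reviewed in section 2, the following python sketch converts keplerian elements to the specific angular momentum and laplace vectors and evaluates a descent - type feedback. the gravitational parameter, the weighting k, the gain and the thrust bound are placeholders, and the controller is written as a steepest - descent step on V = 0.5|h - h_t|^2 + 0.5 k |A - A_t|^2; the paper's exact control expression is elided above, so this gradient form is an assumption rather than the ccm law itself.

```python
import numpy as np

MU = 1.0  # gravitational parameter in canonical units (placeholder)

def kepler_to_h_and_laplace(a, e, i, raan, argp, mu=MU):
    """convert (a, e, i, raan, argument of periapsis) to the specific angular
    momentum vector h and laplace vector A in the geocentric equatorial frame,
    via the periapsis state and the perifocal-to-equatorial rotation [Q]_pe."""
    p = a * (1.0 - e**2)                                    # semi-latus rectum
    r_pf = np.array([a * (1.0 - e), 0.0, 0.0])              # periapsis position (perifocal)
    v_pf = np.array([0.0, np.sqrt(mu / p) * (1.0 + e), 0.0])  # periapsis velocity (perifocal)
    cO, sO, cw, sw = np.cos(raan), np.sin(raan), np.cos(argp), np.sin(argp)
    ci, si = np.cos(i), np.sin(i)
    Q = np.array([[cO*cw - sO*sw*ci, -cO*sw - sO*ci*cw,  sO*si],
                  [sO*cw + cO*ci*sw, -sO*sw + cO*ci*cw, -cO*si],
                  [si*sw,             si*cw,             ci]])
    r, v = Q @ r_pf, Q @ v_pf
    h = np.cross(r, v)
    A = np.cross(v, h) - mu * r / np.linalg.norm(r)         # laplace vector = mu * eccentricity vector
    return h, A

def lyapunov_descent_control(r, v, h_t, A_t, k=1.0, gain=1e-2, f_max=1e-3, mu=MU):
    """descent-type feedback on V = 0.5|h-h_t|^2 + 0.5*k*|A-A_t|^2, saturated
    at f_max; an assumed stand-in for the (elided) ccm control law."""
    h = np.cross(r, v)
    A = np.cross(v, h) - mu * r / np.linalg.norm(r)
    dh, dA = h - h_t, A - A_t
    grad_v = np.cross(dh, r) + k * (np.cross(h, dA) + np.dot(dA, r) * v - np.dot(v, r) * dA)
    u = -gain * grad_v
    n = np.linalg.norm(u)
    return u if n <= f_max else u * (f_max / n)

# consistency check: |h| = sqrt(mu * p) and |A| = mu * e
h_t, A_t = kepler_to_h_and_laplace(a=2.0, e=0.3, i=0.2, raan=0.5, argp=1.0)
assert np.isclose(np.linalg.norm(h_t), np.sqrt(MU * 2.0 * (1 - 0.3**2)))
assert np.isclose(np.linalg.norm(A_t), MU * 0.3)
```

closing the loop with r'' = -mu r/|r|^3 + u and any standard integrator then produces, qualitatively, the kind of spiral transfer trajectory that serves as the initial guess for the optimization in section 5.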
to reduce the continuous optimal control problem ( ocp ) into a discretized non - linear programming ( nlp ) problem ,the pseudospectral method is used with second - kind chebyshev points . the transformation to express the ocps in the time interval $ ] is given by use lagrange interpolation polynomials with points as follows : where is the initial boundary point , and , , are the collocation points , which are the zeros of the second - kind chebyshev polynomial as expressed below : the weights of the chebyshev - gauss quadrature in this case are given by then the nlp problem can be obtained as ( see , ) =\mathbf 0 , \\ & \| \mathbf u(\tau_k)\|-f_{\rm max } \leq 0 , \\ & \mathbf s_0(\mathbf x(\tau_0))= \mathbf 0 , \\ & \mathbf s_{l}(\mathbf x(\tau_f))= \mathbf 0 , \\ & \mathbf s_{a}(\mathbf x(\tau_f))=\mathbf 0.\end{aligned}\ ] ] where indicates the collocating point . the augmented cost function with the constraints combined via lagrange multipliersis given by \right ) .\end{split}\end{aligned}\ ] ] the remaining karush - kuhn - tucker ( kkt ) conditions at the collocating points are given by now , the optimal control problem can be solved by using well - established nlp algorithms .the same notations as are utilized here to describe the modified equinoctial orbit elements .the state variables are defined as where the true longitude is the fast independent variable and the other five are slow dependent variables .the control variables in rtn ( radial - tangential - normal ) coordinates are defined with then with the system equation is given by the minimum fuel consumption orbit transfer problem can be written as \\ & \| \mathbf f(t ) \| \leq f_{\rm max } \ , , \ , \forall t \in [ t_0,t_f ] \\ & \mathbf x_m(t_0 ) \ , \text { fixed}\\ & \mathbf x_m(t_f ) \ , \text { fixed}\\ \end{array } \right.\ ] ] the optimality conditions expressed in terms of modified elements are not provided here since those presented above in cartesian coordinates also apply to modified elements .using an example , this section illustrates transfers from a low - earth orbit ( leo ) to a geosynchronous orbit ( geo ) in terms of minimum transfer time and minimum fuel consumption .the numerical data used are from , with the exception of the final time in the minimum fuel case .the initial point is given by the target orbit is given by with the control constraint km / sec and the initial time . in the minimum fuel case , the fixed final time is hr . canonical units are used in simulations , where 806.812 sec = 1 canonical time unit ; 6378.140 km = 1 canonical distance unit ; km / sec = 1 canonical acceleration unit ; and the gravitational parameter . in the following ,all units are canonical unless otherwise indicated .the initial and final conditions in cartesian coordinates are given by the initial and final conditions in modified elements are given by with . in the minimum fuel case ,the fixed final time is , but in the minimum time case the final time is free , so it must first be adjusted according to the final time guess of .the ccm controller given in ( [ eq : control ] ) is applied with the weighting and the function to obtain a transfer trajectory that provides an initial guess for optimization .then tomlab / propt is utilized together with the pseudospectral method and snopt solver to optimize the trajectory on the matlab platform . 
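since the expressions for the collocation points and quadrature weights are elided above, the sketch below shows one standard choice consistent with the description: the zeros of the second - kind chebyshev polynomial u_n, weights rewritten for a plain integral on [-1, 1], and the affine map to the physical time interval. whether these are exactly the weights used in the paper is an assumption.

```python
import numpy as np

def cheb2_nodes_weights(N):
    """collocation points tau_k (zeros of the second-kind chebyshev polynomial
    U_N) and weights for approximating a plain integral on [-1, 1]; the weight
    form below absorbs the 1/sqrt(1 - tau^2) density and is an assumption."""
    k = np.arange(1, N + 1)
    theta = k * np.pi / (N + 1)
    tau = np.cos(theta)                       # zeros of U_N
    w = (np.pi / (N + 1)) * np.sin(theta)     # plain-integral quadrature weights
    return tau, w

def to_physical_time(tau, t0, tf):
    """affine map from the computational variable tau in [-1, 1] to t in [t0, tf]."""
    return 0.5 * (tf - t0) * tau + 0.5 * (tf + t0)

# sanity check: integrate exp(x) on [-1, 1]
tau, w = cheb2_nodes_weights(40)
assert abs(np.sum(w * np.exp(tau)) - (np.e - 1.0 / np.e)) < 1e-2
```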
to compare the two coordinate systems , all the collocation points are placed in a single phase . the optimal results are listed in table [ table : c ] , and the minimum fuel and minimum time transfer trajectories are shown in fig . [ fig : transfer_c ] and fig . [ fig : transfer_time_c ] , respectively . for the minimum fuel case , fig . [ fig : force comparison c ] compares the force magnitudes obtained with the two coordinate systems presented above . fig . [ fig : la c ] and fig . [ fig : r_dr_c ] show the time history of , and , and fig . [ fig : force_c ] details the time history of the control force in the cartesian coordinate system . in the time history plots , the dotted lines represent the initial guesses and the solid lines represent the final trajectories .

this paper presents a simple approach to low - thrust minimum - fuel and minimum - time transfer problems between two elliptic orbits in the cartesian coordinate system . in this setting , an orbit is described by its specific angular momentum and laplace vectors , with a free injection point . trajectory optimization with the pseudospectral method and nonlinear programming is initialized by the guess generated from the chang - chichka - marsden lyapunov - based transfer controller . this approach successfully solves several low - thrust optimal transfer problems . numerical results show that the lyapunov - based initial guess overcomes the difficulty in optimization caused by the strong oscillation of variables in the cartesian coordinate system . furthermore , a comparison of the results shows that obtaining the optimal transfer solution through polynomial approximation is easier in cartesian coordinates than with orbital elements , which normally produce strongly nonlinear equations of motion . the earth s oblateness and shadow effect are not taken into account in this paper . chang - chichka - marsden lyapunov - based transfer , trajectory optimization , cartesian coordinates
the libra toolkit is a collection of algorithms for learning and inference with probabilistic models in discrete domains .what distinguishes libra from other toolkits is the types of methods and models it supports .libra includes a number of algorithms for _ structure learning for tractable probabilistic models _ in which exact inference can be done efficiently .such models include sum - product networks ( spn ) , mixtures of trees ( mt ) , and bayesian and markov networks with compact arithmetic circuits ( ac ) .these learning algorithms are not available in any other open - source toolkit .libra also supports _ structure learning for graphical models _ , such as bayesian networks ( bn ) , markov networks ( mn ) , and dependency networks ( dn ) , in which inference is not necessarily tractable . some of these methods are unique to libra as well , such as using dependency networks to learn markov networks .libra provides a variety of exact and approximate inference algorithms for answering probabilistic queries in learned or manually specified models .many of these are designed to exploit local structure , such as conjunctive feature functions or tree - structured conditional probability distributions .the overall goal of libra is to make these methods available to researchers , practitioners , and students for use in experiments , applications , and education .each algorithm in libra is implemented in a command - line program suitable for interactive use or scripting , with consistent options and file formats throughout the toolkit .libra also supports the development of new algorithms through modular code organization , including shared libraries for different representations and file formats .libra is available under a modified ( 2-clause ) bsd license , which allows modification and reuse in both academia and industry .libra includes a variety of learning and inference algorithms , many of which are not available in any other open - source toolkit .see table [ tab : functionality ] for a brief overview . & bn structure with tree cpds & + & dn structure with tree / boosted tree / lr cpds & + & mn structure from dns & + & mn parameters ( pseudo - likelihood ) & + + & tractable bn / ac structure & + & tractable mn / ac structure & + & mixture of trees ( mt ) & + & spn structure ( id - spn algorithm ) & + & chow - liu algorithm & + & ac parameters ( maximum likelihood ) & + + & gibbs sampling ( bn , mn, ) & ( dn ) + & mean field ( bn , mn, ) & ( dn ) + & loopy belief propagation ( bn , mn ) & + & max - product ( bn , mn ) & + & iterated conditional modes ( bn , mn, ) & + & variational optimization of acs & + + & ac variable elimination ( bn , mn ) & + & marginal and map inference ( ac , spn , mt ) & + libra s command - line syntax is designed to be simple . for example , to learn a tractable bn , run the command : `` libra acbn -i train.data -mo model.bn -o model.ac '' where train.data is the input data , model.bn is the filename for saving the learned bn , and model.ac is the filename for the corresponding ac representation , which allows for efficient , exact inference . to compute exact conditional marginals in the learned model : `` libra acquery -m model.ac -ev test.ev -marg '' . 
to compute approximate marginals in the bn with loopy belief propagation : `` libra bp -m model.bn -ev test.ev '' .additional command - line parameters can be used to specify other options , such as the priors and heuristics used by acbn or the maximum number of iterations for bp .these are just three of more than twenty commands included in libra .libra supports a variety of file formats . for data instances, libra uses comma separated values , where each value is a zero - based index indicating the discrete value of the corresponding variable . for evidence and query files , unknown or missing valuesare represented with the special value `` ` * ` '' . for model files, libra supports the xmod representation from the winmine toolkit , the bayesian interchange format ( bif ) , and the simple representation from the uai inference competition .libra converts among these different formats using the provided mconvert utility , as well as to its own internal formats for bns , mns , and dns ( .bn , .mn , .dn ) .libra has additional representations for acs and spns ( .ac , .spn ) .these formats are designed to be easy for humans to read and programs to parse .libra is implemented in ocaml .ocaml is a statically typed language that supports functional and imperative programming styles , compiles to native machine code on multiple platforms , and uses type inference and garbage collection to reduce programmer errors and effort .ocaml has a good foreign function interface , which libra uses for linking to c libraries and implementing some memory - intensive subroutines more efficiently in c. the code to libra includes nine support libraries , which provide modules for input , output , and representation of different types of models , as well as commonly used algorithms and utility methods .in table [ tab : comparison ] , we compare libra to other toolkits in terms of representation , learning , and inference . .comparison of libra to several other probabilistic inference and learning toolkits . [ cols= "< , < , < , < , < , < , < " , ] in terms of representation , libra is the only open - source software package that supports acs and one of a very small number that support dns or spns .libra does not currently support dynamic bayesian networks ( dbn ) or influence diagrams ( i d ) . for factors, libra supports tables , trees , and arbitrary conjunctive feature functions .bnt and openmarkov ( cisiad , 2013 ) also support additional types of cpds , such as logistic regression , noisy - or , neural networks , and algebraic decision diagrams , but they only support tabular cpds for structure learning . opengm2 supports sparse factors , but iterates through all factor states during inference .libra is unique in its ability to learn models with local structure and exploit that structure in inference . for exact inference , the most common algorithms are junction tree ( jt ) , enumeration ( e ) , and variable elimination ( ve ) .libra provides acve , which is similar to building a junction tree , but it can exploit structured factors to run inference in many high - treewidth models . for approximate inference , libra provides gibbs sampling ( g ) , loopy belief propagation ( bp ) , and mean field ( mf ) , all of which are optimized for structured factors .a few learning toolkits offer likelihood weighting ( lw ) or additional sampling algorithms for bns .fastinf , libdai , and opengm2 offer the most algorithms but only support tables . 
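putting the command examples and file formats together, a minimal scripted session might look as follows. the snippet writes a toy data file (comma - separated, zero - based values) and an evidence file (with `*` for unknowns, assuming the same comma - separated layout), then calls the three commands quoted above through subprocess; it assumes the `libra` executable is on the path, and the tiny data set is purely illustrative.

```python
import csv
import subprocess

# toy data set over three discrete variables (zero-based value indices)
with open("train.data", "w", newline="") as f:
    csv.writer(f).writerows([[0, 1, 1], [1, 0, 1], [0, 0, 0], [1, 1, 0]])

# evidence file: '*' marks an unknown value (layout assumed comma-separated as well)
with open("test.ev", "w", newline="") as f:
    csv.writer(f).writerows([[0, "*", 1], ["*", "*", 0]])

def run(args):
    """run one libra command, echoing it and failing on a nonzero exit status."""
    print("$ libra " + " ".join(args))
    subprocess.run(["libra"] + args, check=True)

run(["acbn", "-i", "train.data", "-mo", "model.bn", "-o", "model.ac"])  # learn tractable BN + AC
run(["acquery", "-m", "model.ac", "-ev", "test.ev", "-marg"])           # exact conditional marginals
run(["bp", "-m", "model.bn", "-ev", "test.ev"])                         # approximate marginals via loopy BP
```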
for learning ,libra supports maximum likelihood ( ml ) parameter learning for bns , acs , and spns , and pseudo - likelihood ( pl ) optimization for mns and dns .libra does not yet support expectation maximization ( em ) for learning with missing values .structure learning is one of libra s greatest strengths .most toolkits only provide algorithms for learning bns with tabular cpds or mns using the pc algorithm .libra includes methods for learning bns , mns , dns , spns , and acs , and all of its algorithms support learning with local structure . in experiments on grid - structured mns ,libra s implementations of bp and gibbs sampling were at least as fast as libdai , a popular c++ implementation of many inference algorithms .the accuracy of both toolkits was equivalent .parameter settings , such as the number of iterations , were identical .see figure [ fig : libdai ] for more details .the libra toolkit provides algorithms for learning and inference in a variety of probabilistic models , including bns , mns , dns , spns , and acs .many of these algorithms are not available in any other open - source software .libra s greatest strength is its support for tractable probabilistic models , for which very little other software exists .libra makes it easy to use these state - of - the - art methods in experiments and applications , which we hope will accelerate the development and deployment of probabilistic methods .
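as a reference point for the grid - structured comparison just described, a single - site gibbs sampler for a small pairwise binary markov network can be written in a few lines; this is a generic sketch with arbitrary coupling and field values, not libra's or libdai's implementation, and the grid size and iteration counts are placeholders.

```python
import numpy as np

def gibbs_grid_marginals(n=5, coupling=0.5, field=0.1, iters=3000, burn=500, seed=0):
    """single-site gibbs sampling on an n x n binary (0/1) pairwise MN with a
    uniform edge coupling and unary field; returns estimated P(x_ij = 1)."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n, n))
    counts = np.zeros((n, n))
    for t in range(iters):
        for i in range(n):
            for j in range(n):
                nb = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = i + di, j + dj
                    if 0 <= a < n and 0 <= b < n:
                        nb += 2 * x[a, b] - 1          # neighbour value mapped to {-1,+1}
                logodds = 2 * (coupling * nb + field)  # conditional log-odds of x_ij = 1
                x[i, j] = rng.random() < 1.0 / (1.0 + np.exp(-logodds))
        if t >= burn:
            counts += x
    return counts / (iters - burn)

print(gibbs_grid_marginals().round(2))
```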
the libra toolkit is a collection of algorithms for learning and inference with discrete probabilistic models , including bayesian networks , markov networks , dependency networks , and sum - product networks . compared to other toolkits , libra places a greater emphasis on learning the structure of tractable models in which exact inference is efficient . it also includes a variety of algorithms for learning graphical models in which inference is potentially intractable , and for performing exact and approximate inference . libra is released under a 2-clause bsd license to encourage broad use in academia and industry . probabilistic graphical models , structure learning , inference
* background . *the graph theory indeed presents relevance with abstract relation among objects .since author wrote the bots algorithm for graph traversal at april , 2012 , author still studies this problem .we find the fact that an instance can be computed by bots algorithm without concern the graph classes .we guess that there is a logic model support this phenomenon , which leads to the approach possess the capacity of data - oriented .the strategy of equivalent visiting is posed so that we can quantify the process of graph traversal with monotone decreasing function .the uniform data structure of graph is set up with the method of partition of set , and this core idea may be used to solve other problems of graph , such as graph partition and graph coloring .it makes these problems may present a quantified model for the abstract relation . and this abstract logic model can guarantee algorithms possess much more general and stronger .+ * related * work .we formally state the binary relation on graph. this relation contribution can be underlying basis for this model .we repeatedly abstract the basic relation for our new logical mode , and obtain new classes or new properties of relation in new model .these algorithms we given always may have no associated with weights to each edge or arc .it makes the new relation is easy to present a practical instance . the theoretical proof and computing of relationare transformed to algebra of sets .furthermore , these algorithms present more reliable , intuitive , simple and high precision , although they are heuristic approaches .first , we introduce bost and obots algorithms , which runtime complexity both are . they can exactly compute any connected graph classes , including difficult mixed graph .all connected instances may be explored by these algorithms without recursive method as __ dynamic programming__ , such that greatly reduce the complexity of program .we can really and easily achieve the aim of parallel and distributed computing for graph traversal .graph partition may be independent of weight not like kernighan lin algorithm , although its runs on time complexity . in this thesis ,the graph partition actually is a method of cutting graph .you can arbitrarily choose the nodes on instance for your research of ai , network flow , graph color , physical problems and etc .it makes the abstract relation among nodes be partition on a sequence of domains for your model of problem .graph coloring is not a simple labeling each vertex on instance .it becomes a logical problem for how to cut graph and let those vertices be partitioned to two classes .author gives two speed - up algorithms bogpc and boerc , which can run in and respectively .we prove that their precision can be less than and equal to a constant on an instance .+ * overview .* author will follow this format : defining objects , exploring the features of objects , proving algorithm , giving pseudocode , computing runtime complexity of approach and finally present exp .on instance . in these process , we give the discussion or summary to express author s viewpoint with problem .then this paper is organized as follows .firstly preliminary knowledge is in section 2 .section 3 introduces basic definition , properties , method and proofs for graph , including pseudocode of bots and obots , experiments . at the endgive the solving problem of natural number bocps .the definition and method of graph partition are stated in section 4 . 
similarly , there are pseudocode , algorithmic complexity and experiment .section 5 proposes definition of edge and model of graph coloring .finally we give the formula of graph coloring . of cause, there show the algorithms bogpc and boerc with concerning complexity .we will evaluate those algorithms about probabilistic of exploring minimum chromatic value .the paper is concluded by a summary , a conjecture of _ russell paradox _ and future work in section 6 .in this paper , we are interested in the connected graph . for each vertex on instance, there can be at least a path between and the others .we set each vertex can be labeled with number .let be a collection of vertices having .we reserve the letter _n _ and the term for the number of vertices on an instance. + * * partition of a set.** given a no - empty universal set , there exists a family of sets , which is the partition of , if and only if these following conditions hold : 1 . 2 . 3 . * * equivalent class partition.** if there is a binary relation on set , then there is a unique partition set of the set . for each component ,such that there are properties of reflexivity , symmetry and transitivity among all elements in set with respect to .+ * * cartesian product.** given sets , there exists a multiplying sets and return _n - ordered _ vectors set , in which for these members such that .given a no - empty and connected graph and vertices set .consider a pair . if there is a binary relation to characterize a behavior of traversal from vertex to vertex .we define the binary relation as traversal relation , denote by .we write the form and to represent this relation on pair .the ordered pair denotes a direction of left to right .we reserve the notation equal to the first member in ordered pair , and then one equals to .[ t1 ] if there is a no - empty traversal relation on an instance , then .given an instance .let be a traversal relation on instance . as described in definition of cartesian product , for each pair such that there is .observe if pair then .assume that there is a pair .then we have , a contradiction to definition of traversal relation .+ it is obviously that there are some properties in traversal relation as follow : 1 .reflexivity : if , then there may be .anti - symmetry : if , then there may be .anti - transitivity : if , then there may be .let be a traversal relation on instance . for each vertex such that there may be with definition of cartesian product . then observe that there may be pair with as theorem[t1 ] .hence , may have property of reflexivity .consider a pair and there is .observe that we can not say hence , there is no property of symmetry in .when there are three vertices and pair , similarly we can not say pair with pairs .we prove there is no transitive relation in and finish this proof .+ let be a subset of set . if , for two arbitrary components such that .we call set _ unit subgraph_. denotes the collection of unit subgraphs .we reserve the subscript of set equal to the one of first member of each component in set .+ [ t2 ] let be a collection of unit subgraphs on a traversal relation .then set is the partition of set .let be a collection of unit subgraphs on traversal relation .we aim to prove three conditions hold for set on set .hence , first we can let set .consider there is an isolated vertex .it is certainly that pairs with the definition of traversal relation .hence for each component , there is no such case with by the definition of unit subgraph .consider a pair . 
as the definition of unit subgraph, there naturally may have a component such that , which subscript is .hence , we have that and .assume and having a pair .there certainly may exist a component and introduce pair to , thus observe there may be , a contradiction . hence and .consider two components with such that .assume to .set a pair .as the definition of unit subgraph , observe there can be , contradicts the given condition of .hence . to sum up above ,set is the partition of set .+ [ t3 ] unit subgraph is a cartesian product set .let be a unit subgraph .we have a term to characterize it as follow the form can be written as follow observe that set is a cartesian product set . + as the form, we call term _ root set _ , denote by . call the right set _ leaf set _ , denote by . therefore , the unit subgraph can be abbr . by . * claim . *the cardinality of a multiple set is the number of difference members , not be the quantity of members .we reserve the notation or to represent a component in a multi - set , the is the count of element and .we call _ group _ for a component containing same and repeated elements .we define the group minus as that , if and only if .then the difference value is 0 if . if there is a multiple set and , i.e. for each pairs such that , then we call set _ multiple traversal relation_. for convenience , we use denote each group in set . the notation is the count of the pairs . + let be a subset of multiple traversal relation . if , then for two groups such that .we call set _ weighted unit subgraph _ , and reserve the subscript of set equals to the one of first element of each pair in set . the collection of weighted unit subgraphs we denote by . + [ t6 ] let be a multiple traversal relation and be a collection of weighted unit subgraphs on set .then set is the partition of set .let be a collection of weighted unit subgraphs on multiple traversal relation .as the definition of weighted unit subgraph , for a group and , there may be such that group may be introduced to set . hence observe if .if there is , then at least we can have a group . summarizing above ,it is certainly that there may be a component such that group can be introduced to hence , and .consider two components with such that .assume to and at least a pair .then , we have a contradiction to given condition as described in definition of weighted unit subgraph . hence . for satisfying three conditions for set on set , we understand set is the partition of set .+ [ d1 ] there is a no - empty traversal relation on graph .consider each pair and a trail on instance . if the pair lies on trail with a constraint of direction respect to the traversal relation , then we call this constraint _ traversal visiting_.+ [ t4 ] there is a no - empty multiple traversal relation on an instance g(v ) .then set characterizes the traversal visiting among all vertices on instance .given an instance .let be a multiple traversal relation on it .consider each pair . if there is a group and no group , it is obviously that there is no traversal visiting on direction , i.e. there is impossible for ordered pair to lie on each trail on instance .we call this pair _ directed graph_. let .when , we say there exists a bidirected traversal visiting between pair ; the case is a _simple graph_. if and , then there are several bidirected and equal visiting to each other. observe the case is a _ multi - graph_. for and , then there may be unequal visiting opportunities between the pair .this instance is usually called _mixed graph_. 
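the case analysis above maps directly onto arc multiplicities; the following sketch classifies a vertex pair from the counts of its two opposite arcs. the function and variable names are chosen here rather than taken from the paper, and the thresholds follow the reading that a simple graph has weight one in both directions, a multi - graph equal weights, and a mixed graph unequal ones.

```python
from collections import Counter

def classify_pair(arc_count, u, v):
    """classify the connection between u and v from the multiplicities of the
    two opposite arcs, following the directed / simple / multi / mixed cases."""
    a, b = arc_count[(u, v)], arc_count[(v, u)]
    if a == 0 or b == 0:
        return "directed"
    if a == b == 1:
        return "simple"
    if a == b:
        return "multi"
    return "mixed"

arcs = Counter({(1, 2): 1, (2, 1): 1, (2, 3): 2, (3, 2): 2, (3, 4): 1, (4, 3): 2})
print(classify_pair(arcs, 1, 2), classify_pair(arcs, 2, 3), classify_pair(arcs, 3, 4))
# -> simple multi mixed
```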
+ [ t5 ] there is and on an instance .let be multiple traversal relation and be traversal relation on an instance .as the described in definition of multiple traversal relation , we have . for each group , there is , if and only if .then , similarly prove .+ with theorem[t4 ] , consider given a connected graph , there can be these data structures on it as follow : * claim .* we reserve the abbr . or to represent a connected instance with no - empty set or respectively. + * section summary . * in this section , author constructs the basic logic for graph , that is with the property of reflexive . in following, author will gradually abstract the subset of traversal relation to construct new relation for problems . the new glossary _ unit subgraph_ indeed is an equivalent class partition in set , because there are three properties of reflexivity , symmetry and transitivity in relation of equal first element of each pair in unit subgraph . in settheory , the equivalent class also is the partition of set , but it is the unique partition of set respect to certain relation . therefore it is the essential data structure for our research in this paper with the feature of uniqueness .lemma[t5 ] states a fact that all connected graphs can be viewed as an instance of mixed graph on traversal visiting , such that the exact graph traversal algorithms we will show has to cover all connected graph , which method is data - oriented only .+ there exist two demands for graph traversal , traversing vertices and traversing edges . in this paper, author only introduces the problem of traversing vertices .because the data of traversing edges are huge , and the method is similar to traversing vertices too . + as the definition[d1 ] of traversal visiting , we define a characteristic function as follow because of the case with lemma[t5 ] , therefore we must consider the group weight . then the characteristic function will be converted to map the group with weight to the binary set as follow now when the program enumerates the possible vertices , the program only needs to scan the leaf sets and checks the weights .the characteristic function provides a method of judgment to ensure enumerating valid vertices .let be the subset of leaf set .the weighted unit subgraph is entry parameter .we define the enumerating operator as follow let be a subset of set . if . for two arbitrary pairs such that , then we call set _visiting set_. the collection of visiting sets we denote by ; and reserve the subscript of component in set equals to the one of then element in each pair in native component .use to denote the set of all first elements in pairs and the then elements set is .it is easy to prove visiting set is a cartesian product set like unit subgraph .+ [ a1 ] let be a collection of visiting sets on set . then set is the partition of set .there is a collection of visiting sets on set . as the described in definition of visiting set , for a pair , then there may exist a visiting set and .hence , observe that for each component , there may be and .assume to and at least a pair .there may be a visiting set such that pair can be introduced to set .hence there is a contradiction of then , .consider two components with such that .assume to and a pair .with definition of visiting set , we can have a contradiction to given condition of .hence summarizing , we can understand set is the partition of set . indeed , the proof is in same fashion with unit subgraph , because they both are two equivalent classes on set . 
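the table of weighted unit subgraphs, the characteristic check on group weights, and the enumerating operator that scans a leaf set can be represented very directly with nested counters; the sketch below is one such representation, with all names chosen here rather than taken from the paper.

```python
from collections import defaultdict, Counter

def build_table(arcs):
    """table[u][v] = multiplicity of the arc u -> v; each table[u] plays the
    role of the weighted unit subgraph with root u and its leaf set."""
    table = defaultdict(Counter)
    for u, v in arcs:
        if u != v:                      # self-cycles never contribute a valid visit
            table[u][v] += 1
    return table

def characteristic(table, u, v):
    """1 if the arc u -> v still has positive weight, else 0."""
    return 1 if table[u][v] > 0 else 0

def enumerate_valid(table, u):
    """enumerating operator: scan the leaf set of the unit subgraph rooted at u
    and return the vertices reachable through arcs of positive weight."""
    return [v for v, w in table[u].items() if w > 0]

table = build_table([(1, 2), (2, 1), (2, 3), (2, 3), (3, 2), (3, 2), (1, 1)])
print(enumerate_valid(table, 2))    # -> [1, 3]
print(characteristic(table, 3, 1))  # -> 0
```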
+ let be a subset of set . if .for two arbitrary groups , such that there is .we call the set _ multiple visiting set _ , the collection of multiple visiting sets denote by . we reserve the subscripts of components in set equals to the ones of then elements in each group in native component .+ [ a2 ] let be a collection of multiple visiting sets on set .then set is the partition of set .indeed we aim to prove three conditions hold for set on set . with the same fashion of proof in theorem[t6 ] , it is clearly for us to prove the fact on set .here we need not do the repeated work again .+ let be a collection of cartesian product of set on an instance , having with .let be a member in set .if for each ordered pair lies on sequence , such that there is ordered pair , then we call sequence _ connected path_. + [ p ] let be a connected path on graph and be a collection of multiple traversal visiting sets .if there is such an approach of cutting graph , for each group such that , then we call this approach _ equivalent visiting_. sequence is called _ equivalent visiting path_. + with definition[p ] , we define the equivalent visiting operator as follow [ a3 ] let be an equivalent visiting path on graph .there is an approach of equivalent visiting on path .consider a group and .then the weight may be converged to 0 by invoking equivalent visiting operator if group lies on and .let be an equivalent visiting path on an instance , on which there is an approach of equivalent visiting .we set there is a component with .consider a group .if , we can understand that equivalent visiting operator do nothing inducing from the inner characteristic function , as definitions of these functions .+ if , then the may be self - subtract - one and returned by operator inducing from inner function .consider the vertex group with . as described in definition of equivalentvisiting , this case can lead to , such that there is by iteratively invoking operator .when , the function would return with nothing , thus we can understand that while for , the function can not continue to compute as the inducing from function .hence , the can be converged to 0 .+ [ a4 ] let be a collection of multiple visiting sets and be an equivalent visiting path on instance . if for each component and each group such that , then for each vertex group and , we have .let be a collection of multiple visiting sets and be an equivalent visiting path on instance .for each component , we let represent all weights of groups in set .consider vertex and having group with . as the definition of equivalent visiting, there is . because the , we have that for .it implies the fore equation can be view as an iterative equation with .let . when , for each such that there is . then we have each with lemma[a3 ] , and the number set converges at 0 . for each weight of group is equal to 0 , the enumerating function can not introduce vertex to path as a valid vertex again .then for each vertex , such that at most there are possibilities on path .namely , .if , we can set is a constant , then with formula .hence , the case does not exist with respect to invoking enumerating functiuon .because of this theorem shows the maximum possibility of visiting a vertex , author call it _equivalent visiting maximum value theorem_. + [ a5 ] let be an equivalent visiting path on a finite and no - empty instance . then path can be convergent . 
given a finite and no - empty graph .let be an equivalent visiting path .consider each group with , then there is such that .further we can understand the term equals to with set is partition of set as theorem[a2 ] .let and be a constant .we can see sequence as a discrete point - sequence for iterated function , and have observe the function is iterative and monotone decreasing with input . because of instance being finite , therefore can be convergent at 0. then path converges , to which function can not introduce any vertex with all weights equal to 0 .similarly consider the end - node on current path with .for each group , if each equals to 0 , even nor converges at 0 , then path similarly converges .assume the path is infinite .as the described in theorem[a4 ] , there are infinite weights , and then it implies that at least a number of a pair is infinity .then , this assumption contradicts the condition of finite instance .+ [ a6 ] let be an equivalent visiting path on graph and be a collection of multiple visiting sets .consider each maximum weight in each component . then . given a finite and no - empty graph .let be an equivalent visiting path on graph . with lemma[a5] , the path is convergent .consider each component , in which the maximum weight of group is .with theorem[a4 ] , we understand that each vertex can lie on path for at most possibilities .hence there is , if as described in lemma[a5 ] .+ consider there is such case , of which each weight of group in weighted unit subgraph equals to 0 as input for enumerating function .the function can return with nothing .the path can be forced to converge even nor .its length can be shorter than .hence , it holds .+ [ a7 ] if there are self - cycles on an instance , then they are invalid traversal visiting on equivalent visiting . given a no - empty graph .let and pair .there is pair and its weight equals 1 as the definitions of unit subgraph and visiting set . if there is an approach of equivalent visiting on instance . when the vertex lies on path , as the definition of equivalent visiting , we have . then the weight of pair will be forced to subtract 1 , such that enumerating operator can not introduce vertex again .hence , the approach can not traverse self - cycle , i.e. the self - cycle equals to an empty traversal visiting on instance .+ here author briefly shows the viewpoint about the case of self - cycle on simple graph : the self - cycle at least is an instance of _russell paradox_. look at the term above .when we partition the set with the relations of or , we can not say the self - cycle is arriving or leaving on vertex .thus the method of equivalent visiting on simple graph would filter out self - cycle as the symmetry relation of leaving and arriving . on an instance , if an approach obtains those equivalent visiting paths depend on iteratively and alternately invoking enumerating function and equivalent visiting function to enumerate vertices and modify the traversal relations , we call this approach _ based on table search _ , abbr . by _bots_. 
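one concrete reading of this table - search procedure (the pseudocode table given further below did not survive extraction) is an iterative search that keeps a stack of partial states, spends one unit of arc weight per visit, and enumerates successors only through arcs whose remaining weight is positive; the stack, the current path and the result collection correspond to the stack / p / r organisation described in the next paragraph. the sketch is an interpretation of the definitions above, not the author's code, and stores every maximal path it finds.

```python
from collections import Counter

def bots_paths(arcs, start):
    """iterative, table-driven enumeration of traversal paths. each state on
    the stack carries its own copy of the remaining arc weights; an arc can be
    used only while its remaining weight is positive, which is how the
    'equivalent visiting' operator is read here."""
    table = Counter((u, v) for u, v in arcs if u != v)
    stack = [([start], table)]
    results = []
    while stack:
        path, weights = stack.pop()
        tail = path[-1]
        moves = [v for (u, v), w in weights.items() if u == tail and w > 0]
        if not moves:                       # dead end or exhausted arcs: store the path
            results.append(path)
            continue
        for v in moves:
            w2 = weights.copy()
            w2[(tail, v)] -= 1              # equivalent visiting: spend one unit of weight
            stack.append((path + [v], w2))
    return results

# a 4-cycle as a simple graph (both directions of every edge, weight 1 each)
edges = [(1, 2), (2, 1), (2, 3), (3, 2), (3, 4), (4, 3), (4, 1), (1, 4)]
for p in bots_paths(edges, 1):
    print(p)
```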
[ a8 ]the approach bots can enumerate every equivalent visiting path on a simple graph .given a no - empty graph .let be a connected path on graph .consider a pair with .if vertex lies on path , bots would invoke operator such that with lemma[a5 ] .if having pair , then there can be , moreover its weight equals to 0 .but for pair such that it weight is still 1 .hence , bots can introduce vertex to path by invoking the function to scan the leaf set in subgraph .hence , function can not affect this work of function . with the weight of pair ( )is equal to 0 , while the bots scan the leaf set in subgraph , it can not enumerate vertex as the next valid vertex . the method of equivalentvisiting prevents repeated visiting , but not to block enumerating vertices , which weights are equal to 1 .+ consider the reasons of search terminating .let vertex be current node on path .if , as the described in theorem[a4 ] the enumerating operator can return with nothing , then the length of path may be less than or equal to . if , then the path is a dead - end path . otherwise , the path is a hamiltonian path .namely , the bots may enumerate all connected paths on graph .+ author gave two equivalent classes for traversal relation , so that we can define two operators on these classes .their works can interactively constraint each other , according to the constraint of traversal visiting and method of equivalent visiting .consequently , the approach of bots has no necessary to be a recursive method . because there is not any demand of traversing a mixed graph or multi - graph , therefore authorwill not argument those problems in this paper .hence all works stop at simple graph traversal . herewe will discuss the problem on the level of program .summarize the augments above , we can obtain some conclusions as follow : the traversal relation can be organized as a table , in which unit subgraph can be a unit of data .we can evaluate the longest length of path .when the enumerating function returns an empty set , we know this search work on current path is over .hence , we need define three set for approach as follow : first is the set _ stack _ , in which there are the path waiting for search . set _p _ is second , which is a path containing a sequence of vertices in process of current exploring . andthen set _ r _ stores the final results and returns finally .the following pseudocode for the approach is given as algorithm [ cols="<",options="header " , ] with rough viewpoint , program need do much more works of comparing among arrays to remove the repeated data .such that the price of runtime and memory increase quickly .because of on dodecahedron figure , therefore it must at least enumerate civss for two times .let represent the first loop , and similarly the second one has .then there exists a combination formula such that we have a number of combination , i.e. there is a search breadth of 3969 .we can evaluate the number of combination for a given instance as follow we can let , then complexity is .it is similar to the current approaches .* summary . * in this work we studied how to cut graph actually , with such logic structure .we proved that some problems can be quantified so that it can be a basic relation model for applications .similarly , we proved some axioms in current theory and show why some things are so hard to us .the algorithmic contribution focused on the data - structure such that solve the problem of general . 
in the process , the equivalent class , unit subgraph is the keypoint .it let us freely choose the method to abstract basic relation for construct new logic model .for example we abstract the edge relation from it , and finally construct two classes , with those properties of symmetry and transitivity .in fact , there are more methods to abstract this binary relation for problems such as ai , flow network , tsp . due to limited space of page, author can not continue to do these works .* future * work . a wide range of possible future work exists for present abstract relation among those objects , e.g. tsp .we give the cutting graph by bogpc or boerc . for among each civs, those edges are the bridges between two arbitrary domains of vertices , and for those domains indeed , there exists the relation of inequivalence - color among them .it lets us may have a nice condition to use the _ greedy algorithm _ to exactly solve this problem , so that graph coloring is not a pure problem of graph theory .further for ai , we can use graph partition to characterize the process of solving some problems based on conditions .finally , we pose the conjecture in following. + * conjecture .* there are two binary relation and with on a universal set , which lead to two equivalent classes and on respectively .if for {\rho}\cap [ a_i]_{\bar{\rho}}$ ] such that , then is _russell paradox_. + _ reason. _ self - cycle appears in graph traversal but vanishing in edge relation . and traversal relation justly possesses property of reflexivity without symmetry , to contrary for edge relation . and we can find the equivalent classes for traversal relation in , but not on edge relation .99 bellman , r. ( 1960 ) , `` combinatorial processes and dynamic programming '' , in bellman , r. , hall , m. , jr .( eds . ) , combinatorial analysis , proceedings of symposia in applied mathematics 10 , american mathematical society , pp .kernighan , b. w. ; lin , shen ( 1970 ) .`` an efficient heuristic procedure for partitioning graphs '' .bell systems technical journal 49 .lucas , john f. ( 1990 ) .introduction to abstract mathematics .rowman & littlefield .isbn 9780912675732 .burnstein , ilene ( 2003 ) , practical software testing , springer - verlag , p. 623, isbn 0 - 387 - 95131 - 8 hazewinkel , michiel , ed .( 2001 ) , `` direct product '' , encyclopedia of mathematics , springer , isbn 978 - 1 - 55608 - 010 - 4
in this paper , the author uses set theory to construct a logical model of abstract graphs from binary relations . based on this uniform quantified structure , the author gives two logical systems , one for graph traversal and one for graph coloring , and shows a new method of cutting graphs . around this model , the paper presents six algorithms , including exact graph traversal , an algebraic calculation on natural numbers , graph partition and graph coloring .
we are deeply grateful to wojciech czaja for his generous help on this article . research presented in this paper was supported in part by the laboratory of telecommunication sciences . we gratefully acknowledge this support .
a. barrlund , perturbation bounds for the ldlh and lu decompositions . bit , pp . 358 - 363 , 1991 .
e. bura , r. pfeiffer , on the distribution of the left singular vectors of a random matrix and its applications . statistics and probability letters , vol . 78 , pp . 2275 - 2280 , 2008 .
c. davis , w. m. kahan , the rotation of eigenvectors by a perturbation . siam j. numer . anal . , vol . 7 , pp . 1 - 46 , 1970 .
w. czaja , r. wang , mpsk classification by pca . in preparation .
f. dopico , a note on sin theorems for singular subspace variations . bit , pp . 395 - 403 , 2000 .
c. huang , a. polydoros , likelihood methods for mpsk modulation classification . ieee transactions on communications , vol . 43 , no . 2 - 4 , pp . 1493 - 1504 , 1995 .
s. s. soliman , s .- z . hsue , signal classification using statistical moments . ieee trans . commun . , pp . 908 - 916 , 1992 .
e. m. stein , r. shakarchi , real analysis : measure theory , integration , and hilbert spaces . vol . 3 , p. 91 , 2005 .
a. swami , b. m. sadler , hierarchical digital modulation classification using cumulants . ieee trans . commun . , no . 3 , pp . 416 - 429 , 2000 .
s. j. szarek , condition numbers of random matrices . j. complexity , vol . 7 , pp . 131 - 149 , 1991 .
v. vu , singular vectors under random perturbation . random struct . algorithms , pp . 526 - 538 , 2011 .
p. a. wedin , perturbation bounds in connection with singular value decomposition . bit numer . math . , pp . 99 - 111 , 1972 .
h. weyl , das asymptotische verteilungsgesetz der eigenwerte linearer partieller differentialgleichungen ( mit einer anwendung auf die theorie der hohlraumstrahlung ) , pp . 441 - 479 , 1912 .
we perform a non - asymptotic analysis on the singular vector distribution under gaussian noise . in particular , we provide sufficient conditions of a matrix for its first few singular vectors to have near normal distribution . our result can be used to facilitate the error analysis in pca . startsectionsection1@ -3.5ex plus -1ex minus -.2ex 2.3ex plus.2ex * * [ introduction]introduction the singular value decomposition ( svd ) lies in the heart of various dimension reduction ( dr ) techniques , such as pca , le , lle etc . in real world problems , noise from unknown sourses may largely change the dimension reduction result by changing either the principal directions , or the data distribution along those directions . aside from some unusual cases that involve nonlinear bias in the measurements , and because of the central limit theorem , it is always assumed that the noise vector in the raw data obeys the i.i.d . gaussian distribution , which simplifies both data processing and error analysis . however , when doing dimension reductions , the gaussian distribution is not preserved . this is because the low dimensional data is contained in the first few singular vectors of some predefined kernel , and elements of any singular vector are bounded by one , which prevent them from being gaussian variables . in , it is pointed out in a statistical and asymptotic way that for a fixed data matrix , as the noise level goes to 0 , the distribution of singular vectors would eventually become very close to a multivariate normal distribution . in this paper , we go further in this direction and provide non - asymptotic bound on the distance between the perturbed singular vectors and a multi - variate normal random variable . in applications , our result could be used in two ways : * to predict for a fixed data matrix and noise level , whether or not can one reliably ( within a given confidence interval ) assume that the noise after dimension reduction is gaussian . * for a given noise level , to determine how many samples are necessary for the noise in the embedded space to be close ( e.g. , the distance is less than some given threshold ) to gaussian . we state the mathematical description of our problem as follows . + + * problem * : _ suppose is an noiseless data matrix , and is the observed noisy data . their associated svds are : and where the entries of are i.i.d . and is an absolute constant . due to the randomness of , all the quantities in ( 2 ) are random variables whose measures are induced by those of . under this setting , what is the distribution of ? _ + in this paper , without loss of generality , we focus only on the low rank and square ( i.e. , ) . all our techniques can be carried through in high rank and rectangular cases . the perturbation problem of singular vectors has brought great attention in the field of statistics and numerical analysis . davis and kahan studied the deterministic perturbation on hermitain matrices and provides an upper bound for the rotation of singular vector subspaces caused by the perturbation . they showed that the span of a group of singular vectors as a subspace is changed by an amount which is propotional to the noise power and the reciprocal of eigenvalue gap . ( the change is charaterized in canonical angle between subspaces , denoted by ) . wedin extended this result to non - hermitain cases . dopico proved that the left and right singular vectors is perturbed towards the same direction . van vu considered random perturbations ( i.i.d . 
bernoulli distribution ) and provided a tighter upper bound for the canonical angles , which hold with large probability . the necessity of using the canonical angles to characterize the change is well illustrated in the following example in . consider the following two matrices , both matrices are diagonalizable , the eigenvectors of are and , while those of are and , for any . hence , the difference of these two bases does not go to zero with . + + beside the canonical angle , some authors used another quantity : to describe the changes of singular subspaces . the latter has been proved to be an upper bound of the former . + in this paper , we use the pointwise matrix norm to bound the difference between two matrices . our technique is valid and simpler for the frobenius norm case so we omit it . however , if one tries to use the bound we provide on pointwise norm to derive a bound on frobenius norm using the equivalence of norms , he will get a looser bound than directly running over our technique on frobenius norm , and vice versa . the rest of the paper is organized as follows . in section 2 , we introduce notations and some existing theorems to be used . in section 3 , we state several lemmas and our main theorem . section 4 contains the proof of the main theorem . in section 5 , we apply our result to an audio signal classification problem . startsectionsection1@ -3.5ex plus -1ex minus -.2ex 2.3ex plus.2ex * * [ previous results]previous results throughout this paper , we consider only square data matrices with nontrivial dimensionality and rank . the variables , , , , , , , , , remain the same as defined in ( 1 ) and ( 2 ) . we assume . in addition , we assume that all the diagonal elements of the matrix are bounded away from zero . + for a given matrix , we use and to denote the qr decomposition of . the set of singular values of is denoted by ; the largest one is denoted by ; and the minimum singular value is by . we use to denote the normalized gaussian matrix whose entries are . for any matrix , denotes the frobenius norm , and the spectral norm . in addition , we use to denote the component - wise maximum norm of a matrix , i.e. , . the following theorem is due to dopico , who provided the worst - case bound on the frobenius norm of the deviation of singular vector subspace under perturbation . he proved that this bound is proportional to the frobenius norm of the perturbation as well as the reciprocal of the eigenvalue gap . let and and their svds be as defined in ( 1 ) and ( 2 ) . define if , then where , . moreover , the left hand side of ( 3 ) is minimized for where is any svd of , and in this case , the equality can be attained . in the proof of our main theorem ( theorem 4 ) , we will frequently encounter the spectral norm of gaussian matrices , which is known to be bounded in the following theorem . let w be an matrix with i.i.d . normal entries with mean 0 and variance . then , its largest and smallest singular values obey : in order to estimate the gap of eigenvalues in ( 3 ) , we will make use of the following result . ( ) let be an matrix and be a perturbation of . let and be the largest eigenvalue of and , respectively . then , for , startsectionsection1@ -3.5ex plus -1ex minus -.2ex 2.3ex plus.2ex * * [ main theorem]main theorem we now ready to state our main theorem . let and and their svds be defined as in ( 1 ) and ( 2 ) . assume are small enough such that the following defined quantities satisfies and ( see below ) . 
then with probability ( with respect to the random gaussian noise w ) exceeding , there exists an unitary matrix , such that for , we have where is a gaussian matrix defined by , and where are defined later in formulas ( 19 ) , ( 20 ) , ( 25 ) , ( 29 ) , ( 17 ) , and where , , and . note that the order of in terms of and are : , , , and . if the eigenvalues of are all different , then ( 4 ) becomes where . * remark 1 * : we observe that the perturbation can be approximated by a gaussian term only when it is the leading term in the error . the average magnitude of this term has asymptotic order of as and going to 0 , while the order of the leading terms on the right hand side is either or . therefore , to ensure the gaussian term is higher in order , we need the following condition on the pair : before going into the proof , we first establish three useful lemmas . the first one is an elementary observation in linear algebra , and the last one is a direct application of theorem 2 , so we omit their proofs . if , then let be the distribution of the product of two independent normal random variables . let , , ... , be i.i.d . random variables drawn from . if , we have where . if , one can verify that for all , we apply markov s inequality ( see eg . ) to have : letting in the above formula , we get let be an gaussian matrix whose elements are i.i.d . . let be written as with a matrix . then , with probability exceeding , startsectionsection1@ -3.5ex plus -1ex minus -.2ex 2.3ex plus.2ex * * [ proof of the main theorem]proof of the main theorem the idea of the proof is the following : our goal is to charaterize the perturbed singular vector space , which is a left orthogonal matrix and satisfies , meaning that it , together with , diagonalizes . thus , if we can construct two other orthogonal matrices which also diagonalize with small error , then , by theorem 1 , they are expected to be good approximations to and . + + we start the proof similarly to that in . define the random matrix as follows , where and due to the invariant property of gaussian matrix , has the same distribution as . the asymtotic order of in the error term is not satisfactory , we thus want to further diagonalize by using the following two matrices , with , and with . note that in doing so , we start to differ from by defining and explicitly . the second order terms in and are crucial for obtaining ( 4 ) . + keeping in mind that and are not yet unitary , we multiply the left hand side of ( 6 ) by on the left , and by on the right , to obtain , where the matrix includes all the terms whose order on is greater than or equal to 3 , so is . compare ( 7 ) with ( 6 ) , we see the above operation has changed the order of error term from to . if we denote the eigenvector subspaces corresponding to the two diagonal blocks by , , then they have the following expressions , recall that and denote the qr decomposition of a matrix . now , we want to orthogonalize and . for that purpose , for both sides of ( 7 ) , we multiply them by on the left , and on the right . this way we obtain : where with a little abuse of notation , we continue to use to denote the error term in ( 10 ) , though it was already changed by the left and right multiplication . we denote by the svd of the , with . 
in this case , ( 10 ) becomes on the other hand , the left hand side of ( 11 ) also satisfies combining ( 11 ) with ( 12 ) , we obtain : moving everything but on the left hand side to the right by multiplying the inverse of each matrix and utilizing the fact that and are orthogonal to each other , we derive : ( 13 ) combined with theorem 1 imply that and are the first left singular vectors of two very similar matrices ( different by the err term ) and that they are close . keeping this useful result in mind , we first turn to look at the big picture . our final goal is to approximate by a gaussian variable the difference between and up to a roation : ( we willl define the unitary matrix m explicitly later ) , which can be decomposed as : we insert ( 8) into ( 14 ) to get here , the gaussian term is the second to last term on the right , so we want to prove all other terms are small . for that purpose , we move the gaussian term to the left and take the component - wise matrix norm on both sides , to have : observe that the left hand side of ( 15 ) is exactly what we want to bound in this theorem . the rest of the proof is divided into three parts to bound each term , , on the right hand side . + from ( 2 ) and ( 13 ) , we obtain that and are the left eigenspaces of and , respectively . thus , we want to use theorem 1 to bound . for this purpose , we fisrt calculate the key parameter in that theorem . + + recall . by theorem 2 and theorem 3 , the largest sigular value of and that of obey , with probability over , that equation ( 16 ) implies the following lower bound on : whenever , we can apply theorem 1 to the two svds in ( 2 ) and ( 13 ) , to obtain the matrix , defined in ( 7 ) and modified in ( 10 ) and ( 13 ) , is essentially a sum of several products of gaussian matrices . applying lemma 1 on and utilizing lemma 3 , we obtain the following bound : holds with probability over whenever is small enough such that . here , and where , , and . + now , the right hand side of ( 18 ) has the following bound : we are now ready to define the rotation which first appears in ( 14 ) . comparing ( 18 ) with the first term of ( 14 ) , we have where the explicit form of is given in theorem 1 . + we plug ( 21 ) into ( 18 ) , to obtain : we start with breaking into two parts : we estimate iv and v separately . observe that the entries of are i.i.d . and are independent of those in . therefore we can apply lemma 2 to each entry and those of to get , with probability at least , with , for , we first observe the following random upper bound , using ( 24 ) , the union bound , as well as the following inequality , we can estimate the probability that exceeds the value , where is the first element of . therefore , with probability exceeding , we have we combine the estimates of and to get , with probability greater than , the following calculation shows how far away is from unitary . hence , following a procedure similar to the one we used to obtain the bound in ( 21 ) , we derive the distance between this covariance matrix and the identity matrix in frobenius norm : here , was defined in ( 19 ) . 
when is small enough such that , theorem 2.2 in ] shows that the distance can be bounded by a function of : thus , to estimate ( 27 ) , we first note that from ( 25 ) and the assumption that , we obtain therefore , we insert ( 26 ) and ( 28 ) into ( 27 ) , to arrive at the bound : furthermore , from ( 8) and lemma 3 , it is straightforward to estimate : we combine the above two inequalities to get : we now aggregate the estimates of , and to get ( 4 ) and ( 5 ) . startsectionsection1@ -3.5ex plus -1ex minus -.2ex 2.3ex plus.2ex * * [ application]application we use the m - psk ( phase shift keying ) modulation classification problem as an example to show how our result is used to make a sampling strategy and a new classification method . in the so - called adaptive modulation system , the modulation type is varying with time . when the condition of channels ( such as fading or interference ) changes , the transmitter seeks to find the modulation type which best adapts to the new environment . meanwhile , if the receiver is not informed of these changes , a modulation classification procedure needs to be carried out as soon as the signal is received . here , we consider the classification problem for the widely used mpsk type of modulations which conveys the data by changing the phase of a carrier wave . here stands for the number of available phases and usually takes value 2 , 4 , 8 , 16 or 32 . an mpsk signal has the following mathematical representation : where is called the _ symbol period / duration _ ; is a function supported on ] , for some unknown integer , is unknown , and is chosen unifromly from . suppose we digitize the data in the following way . we take uniform samples from periods of and store them in an matrix , where denote the sample of the period , which has the expression : where is a guassian noise matrix with i.i.d . entries . when noise is absent ( i.e. , ) , each column of is a function of . since has at most choices , the columns of also have only patterns . if we deem each column as an dimensional data point , then it merely is a high dimensional embedding of a zero dimensional parameter space . in other words , the dimensional graph of consists only of points . when noise is added , these points becomes clusters . hence , the classification problem of finding the correct is reduced to a clustering problem of finding the total number of clusters . the complexity of all well known clustering methods , such as k - means and mean shift grows exponentially with dimension . therefore , for large data sets , a preprocessing step of dimension reduction is necessary . the dimension reduction procedure has another two advantages over other well known methods ( e.g. , , , ) : * no carrier removal procedure is needed , relaxing the digitization rate to below the nyquist rate of the carrier frequency . * classification and detection are completed simultaneously . for our problem setting , pca is the most suitable technique for the following reasons : * the signal has gaussian noise and pca is just a linear - gaussian latent variable model . * this is a linear model when we deem as parameters of ( we will provide more discussion on this observation later ) . in telecommunications , the 2-d graph of all evaluations of with is formally called the constellation diagram of the mpsk modulation ( see figure 1 ) , and can be used as an identifier of the modulation type . 
therefore, the idea of our method is to use pca to map the high-dimensional data to the two-dimensional constellation diagram, on which a clustering algorithm is applied to find the true cluster count. now we provide more detail to justify the linear-model observation. by definition, the matrix has the following decomposition: where for , , and , it is immediate to verify that when , we have , and is nearly orthogonal since we have assumed that are chosen uniformly at random from the parameter space (a rigorous calculation of the distance between and an orthogonal matrix can be found in [ 8 ]). this decomposition clearly shows how the noiseless part of depends linearly on . when noise is absent, pca does the job of separating from , where the latter is just a scaled version of the constellation. when noise is added, as explained earlier, each individual point in the graph turns into a cluster (figure 2). even though it is quite obvious to a human observer how many clusters there are in figure 2 without any prior knowledge, the existing clustering algorithms are surprisingly limited in that they require either the number of clusters or the cluster radius as input. when the number of clusters is unknown, as in our setting, many previous works suggest a brute-force search over all the possible numbers (which are 2, 4, 8, 16, 32, maybe also 64, 128), comparing the clustering results in some ad hoc way to decide which is more likely. a more pleasant way to proceed is to find the cluster radius, which is exactly where the results of this paper are used. before applying our results, we first normalize the singular values of by setting , so that the singular values of do not change with the matrix size. in hardware implementations, a larger value of is usually more difficult to realize than a larger value of , because corresponds to the sampling rate and is the sample duration. hence, we assume . a generalized version of theorem 4 for rectangular matrices can be built similarly, by padding zeros to form an matrix. since the factor in ( 34 ) is now a normalized gaussian matrix, the other factor, , denotes the energy of the noise, corresponding to in theorem 4. as , . theorem 4 implies that for a given matrix , and a given confidence level, there exists a threshold such that, as long as , we can assume that the first two eigenvectors of have a multivariate normal distribution. in general, is a function of both and , which makes the inequality difficult to solve. fortunately, in our model, where is uniformly bounded for all , it can be verified from ( 5 ) that is universal for all sizes of . thus solving the simple inequality gives us the feasible region of . with any feasible , we can apply the mean shift clustering method with a gaussian kernel. by some elementary calculations, one can derive that the percentile of the gaussian noise on each data point in the embedded space equals . we set this number to be the radius. since the rank of for bpsk signals is one, trying to invert the second singular value in theorem 4 makes the problem near singular and causes a large error in the second dimension, as shown in figure 2. fortunately, this case is easy to recognize by simply examining whether the second singular value of is much smaller than the first one. in our first experiment, we generate a qpsk signal with a carrier frequency of 1 ghz and a symbol rate of 10 mhz, corrupted by awgn with .
we use a sampling rate of 21 samples per symbol (much lower than the nyquist rate of the carrier frequency, and satisfying the feasibility condition derived above) and sample 200 symbols. in figure 3, the result of pca is plotted, together with circles whose radius is the percentile predicted by the theorem and whose centers are those found by the mean shift algorithm. we can see that the prediction of the radius is quite accurate. in our second experiment, we let the snr decrease and examine the performance of the above algorithm. a classification is deemed successful only when the number of clusters returned by the mean shift algorithm is strictly equal to the true type. the result is plotted in figure 4. as expected, when the noise grows, the eigenvector distribution deviates from the gaussian distribution and the predicted radius becomes too small for the algorithm to find the correct . conclusion: in this paper, we provided a condition under which the perturbation of the principal eigenvectors of a matrix under gaussian noise has a near-gaussian distribution. the condition is nonasymptotic and is useful in applications. we provided a simple example, a modulation classification problem, to illustrate how our theorem can be used to design a sampling strategy and to form a new classification technique. more details about this new classification scheme are discussed in .
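to make the pipeline above concrete, here is a minimal, self-contained python sketch (not the authors' code) of the pca-plus-mean-shift classification of a simulated m-psk signal. the carrier-cycles-per-symbol, noise level, symbol count and mean-shift bandwidth are illustrative assumptions rather than the experimental values, and the bandwidth is simply fixed by hand instead of being derived from the theorem.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import MeanShift

rng = np.random.default_rng(0)

M = 4                 # qpsk: 4 candidate phases, the quantity to be recovered
n_symbols = 200       # number of symbol periods sampled
k = 21                # samples per symbol period (far below the carrier nyquist rate)
fc = 100.0            # carrier cycles per symbol period (illustrative value)
sigma = 0.05          # noise standard deviation (illustrative value)

# one random m-psk phase per symbol, drawn uniformly from the alphabet
phases = 2.0 * np.pi * rng.integers(0, M, size=n_symbols) / M

# column j holds the k samples of symbol j: cos(2*pi*fc*t + phase_j) plus awgn
t = np.arange(k) / k
X = np.cos(2.0 * np.pi * fc * t[:, None] + phases[None, :])
X = X + sigma * rng.standard_normal(X.shape)

# pca embeds the n_symbols columns into 2 dimensions: a noisy constellation diagram
Y = PCA(n_components=2).fit_transform(X.T)

# mean shift with a bandwidth playing the role of the predicted cluster radius;
# here the bandwidth is assumed, whereas the paper derives it from the theorem
labels = MeanShift(bandwidth=0.5).fit_predict(Y)
print("estimated number of phases:", len(np.unique(labels)))

with these toy settings the four clusters are well separated and the script reports the correct number of phases; as in the second experiment above, increasing sigma eventually makes the fixed radius too small and the clustering fails.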
in many multiuser wireless communications scenarios , having sufficient csit is a crucial ingredient that facilitates improved performance . while being useful , perfect csit is also hard and time - consuming to obtain , hence the need for communication schemes that can utilize partial or delayed csit knowledge ( see ) . in this context of multiuser communications ,we here consider the broadcast channel ( bc ) , and specifically focus on the two - user multiple - input single - output ( miso ) bc , where a two - antenna transmitter communicates to two single - antenna receivers . in thissetting , the channel model takes the form where for any time instant , represent the channel vectors for user 1 and 2 respectively , where represent unit power awgn noise , where is the input signal with power constraint , and where in this case , also takes the role of the signal - to - noise ratio ( snr ) .it is well known that in this setting , the presence of full csit allows for the optimal degree - of - freedom ( dof ) per user , whereas the complete absence of csit causes a substantial degradation to just dof per user , the corresponding dof pair is given by the corresponding dof region is then the set of all achievable dof pairs . ] .an interesting scheme that bridges this performance gap by utilizing partial csit knowledge , was recently presented in which showed that delayed csit knowledge can still be useful in improving the dof region of the broadcast channel . in the above described two - user miso bc setting , and under the assumption that at time , the transmitter knows the delayed channel states ( ) up to time , the work in showed that each user can achieve dof , providing a clear improvement over the case of no csit .this result was later generalized in which considered the natural extension where , in addition to the aforementioned perfect knowledge of prior csit , the transmitter also had imperfect knowledge of current csit ; at time the transmitter had estimates of and , with estimation errors having i.i.d .gaussian entries with power for some non - negative parameter that described the quality of the estimate of the current csit . in this setting of` mixed ' csit ( perfect prior csit and imperfect current csit ) , and for denoting the dof for the first and second user over the aforementioned two - user bc , the work in showed the optimal dof region to take the form , corresponding to a polygon with corner points , nicely bridging the gap between the case of explored in , and the case of ( and naturally ) corresponding to perfect csit . throughout this paper , , , ,respectively denote the inverse , transpose , and conjugate transpose of a matrix , while denotes the complex conjugate , and denotes the euclidean norm . denotes the magnitude of a scalar , and denotes a diagonal matrix .logarithms are of base 2 . comes from the standard landau notation , where implies .we also use to denote _ exponential equality _ , i.e. , we write to denote .finally , in the spirit of we consider a unit coherence period , as well as perfect knowledge of channel state information at the receivers ( perfect csir ) .motivated by the fact that in multiuser settings , the quality of csit feedback may vary across different links , we extend the approach in to consider unequal quality of current csit knowledge for and . 
specifically under the same set of assumptions mentioned above , and in the presence of perfect prior csit , we now consider the case where at time , the transmitter has estimates of the current and , with estimation errors having i.i.d .gaussian entries with power for some non - negative parameters that describe the generally unequal quality of the estimates of the current csit for the two users links .we proceed to describe the optimal dof region of the general mixed - csit two - user miso bc ( two - antenna transmitter ) .the optimal schemes are presented in section [ sec : achievability ] , parts of the proof of the schemes performance are presented in appendix [ sec : achievable ] , while the outer bound proof is placed in appendix [ sec : outerb ] . without loss of generality, the rest of this work assumes that [ theorem : bc - outerb - gen ] the dof region of the two - user miso bc with general mixed - csit , is given by where the region is a polygon which , for has corner points and otherwise has corner points the above corner points , and consequently the entire dof inner bound , will be attained by the schemes to be described later on .the result generalizes the results in as well as the result in which considered the case of ( ) , where one user had perfect csit and the other only prior csit .( case 1 ) and when ( case 2 ) . the corner points take the following values : , , and ., width=340 ] figure [ fig : dofcsitouterb ] depicts the general dof region for the case where ( case 1 ) and the case where ( case 2 ) .we proceed to describe the communication schemes .as stated , without loss of generality , we assume that .we describe the three schemes , and that achieve the optimal dof region ( in conjunction with time - division between these same schemes ) .specifically scheme achieves ( case 1 ) , scheme achieves dof points ( case 1 ) and ( case 2 ) , and scheme achieves ( case 1 and case 2 ) .the scheme description is done for , and for rational .the cases where , or , or where are not rational , can be readily handled with minor modifications .we proceed to describe the basic notation and conventions used in our schemes .the schemes are designed with phases ( varies from scheme to scheme ) , where the phase consists of channel uses , . the vectors and will denote the channel vectors seen by the first and second user respectively during timeslot of phase , while and will denote the estimates of these channels at the transmitter during the same time , and , will denote the estimation errors . furthermore and will denote the independent information symbols that may be sent during phase- , timeslot- , and which are meant for user 1 , while symbols and are meant for user 2 .vectors and are the unit - norm beamformers for and respectively , chosen so that is orthogonal to , and so that is orthogonal to .furthermore are the randomly chosen unit - norm beamformers for and respectively .another notation that will be shared between schemes includes that denotes the interference seen by user 1 and user 2 respectively , during timeslot of phase . 
for the accumulated interference to both users during phase , we will let be a quantized version of , and we will consider the mapping where the total information in is split evenly across symbols transmitted during the next phase .in addition we use to denote the randomly chosen unit - norm beamformer of .furthermore , unless stated otherwise , will be the general form of the transmitted vector at timeslot of phase .as noted above under each summand , the average power that is assigned to each symbol , throughout a specific phase , will be denoted as follows : furthermore each of the above symbols carries a certain amount of information , per timeslot , where this amount may vary across different phases .specifically we use to mean that , during phase , each symbol carries bits .similarly we use to describe the prelog factor of the number of bits in respectively , again for phase . finally the received signals during phase for the first and second user ,are respectively denoted as and , where generally the signals take the following form as stated , scheme has phases , where the phase durations are chosen to be integers such that where , , , and where is any constant such that . during phase 1 ( channel uses ) ,the transmit signal is while the power and rate are set as the received signals at the two users then take the form where under each term we noted the order of the summand s average power . at this point , and after the end of the first phase , the transmitter can use its knowledge of delayed csit to reconstruct ( cf . ) , and quantize each term as where are the quantized values , and where are the quantization errors . noting that , we choose a quantization rate that assigns each a total of bits , and each a total of bits , thus allowing for ( ) . at this point the bits representing , are distributed evenly across the set which will be sequentially transmitted during the next phase .this transmission of will help each of the users cancel the interference from the other user , and it will also serve as an extra observation that allows for decoding of all private information of that same user . during phase 2 ( channel uses ) , the transmit signal takes the exact form in where we set power and rate as and where we note that satisfies .the received signals during this phase are given as for , where under each term we noted the order of the summand s average power .at this point , based on , , each user decodes by treating the other signals as noise . after decoding and fully reconstructing , user 1 goes back one phase and subtracts from to remove ( up to bounded noise ) the interference corresponding to .the same user will also use the estimate of as an extra observation which , together with the observation , present the user with a mimo channel that allows for decoding of both and similarly user 2 , after fully reconstructing , subtracts from , to remove ( up to bounded noise ) the interference corresponding to , and also uses the estimate of as an extra observation which , together with the observation , allow for decoding of both and further exposition to the details regarding the achievability of the mentioned rates , can be found in appendix [ sec : achievable ] .consequently after the end of the second phase , the transmitter can use its knowledge of delayed csit to reconstruct , and quantize each term to . 
with ,we choose a quantization rate that assigns each a total of bits , and each a total of bits , thus allowing for .then the bits representing , are split evenly across the set which will be sequentially transmitted in the next phase so that user 1 can eventually decode , and user 2 can decode .we now proceed with the general description of phase .phase ( channel uses ) is almost identical to phase 2 , with one difference being the different relationship between and .the transmit signal takes the same form as in phase 2 ( cf ., ) , the rates and powers of the symbols are the same ( cf . ) and the received signals ( ) take the same form as in , .most of the actions are also the same , where based on , ( corresponding now to phase ) , each user decodes by treating the other signals as noise , and then goes back one phase and reconstructs .as before , user 1 then subtracts from to remove , up to bounded noise , the interference corresponding to .the same user also employs the estimate of as an extra observation which , together with the observation obtained after decoding , allow for decoding of both and .similar actions are performed by user 2 . as before ,after the end of phase , the transmitter can use its knowledge of delayed csit to reconstruct , and quantize each term to with the same rate as in phase 2 ( bits for each , and bits for each ) .finally the accumulated bits representing all the quantized values , are distributed evenly across the set which will be sequentially transmitted in the next phase .more details can be found in appendix [ sec : achievable ] . during the last phase ( channel uses ) , the transmit signal is where we set power and rate as the received signals are for . at this point , as before , the power and rate allocation of the different symbols allow both users to decode by treating the other signals as noise .consequently user 1 can remove from and decode , and similarly user 2 can remove from and decode .finally each user goes back one phase and reconstructs , which allows for decoding of and at user 1 and of and at user 2 , all as described for the previous phases ( see appendix [ sec : achievable ] for more details ) .table [ tab : x1summary ] summarizes the parameters of scheme .the use of symbol is meant to indicate precoding that is orthogonal to the channel estimate ( rather than random ) .the table s last row indicates the prelog factor of the quantization rate ..summary of scheme .[ cols="^,^,^,^,^",options="header " , ] [ tab : x2summary ] [ [ dof - calculation - for - scheme - mathcal - x_2 ] ] dof calculation for scheme + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we proceed to add up the total amount of information transmitted during this scheme . in accordance to the declared pre - log factors and phase durations ( see table [ tab : x2summary ] ) , and irrespective of whether fall under case 1 or case 2 , we have that where is due to . 
regarding the second user and the declared , for case 1 ( ) we see that where we have used to get , where we have used that implies , and where we have considered an asymptotically large .when ( ) , then gives that which , in the high regime , gives when ( ) , then gives that which , for large , gives in conclusion , scheme achieves dof pair ( case 1 ) , else it achieves .this is the simplest of all three schemes , and it consists of a single channel use ( ) during which the transmitter sends where is orthogonal to , is orthogonal to , and where the power and rates are set as resulting in received signals of the form after transmission , both receivers first decode by treating the other signals as noise , and then user 1 utilizes its knowledge of to reconstruct and remove it from , thus being able to decode , while after decoding , user 2 removes from , and decodes .the details for the achievability of follow closely the exposition in appendix [ sec : achievable ] .consequently the dof point can be achieved by associating to information intended entirely for the second user .the work provided analysis and communication schemes for the setting of the two - user miso bc with general mixed csit . the work can be seen as a natural extension of the result in and of the recent results in , to the case where the csit feedback quality varies across different links .we will here focus on achievability details for scheme . the clarifications of the details carry over easily to the other two schemes .regarding ( - see ) , we recall that during phase , both users decode ( from - see , ) by treating all other signals as noise . consequently for , we note that to get similarly for the last phase ( see , , ) , we note that to get regarding achievability for , , and ( see , , ) , we note that each element in has enough bits ( recall that ) , to match the quantization rate of that is necessary in order to have a bounded quantization noise . consequently going back to phase 1 ,user 1 is presented with linearly independent equivalent mimo channels of the form ( ) , where again we note that the described quantization rate results in a bounded equivalent noise , which then immediately gives that and are achievable .similarly for user 2 , the presented linearly independent equivalent mimo channels ( ) , allow for decoding at a rate corresponding to and .regarding achievability for , , and , ( - see , , ) , we note that during phase , both users can decode , and as a result user 1 can remove from , and user 2 can remove from ( ) . as a result user 1is presented with linearly independent equivalent mimo channels of the form ( ) .given that the rate associated to , matches the quantization rate for , allows for a bounded variance of the equivalent noise , and in turn for decoding of at a rate corresponding to and .similarly user 2 is presented with independent mimo channels of the form allowing for decoding of ( ) at rates corresponding to and .regarding achievability for and ( see , , ) , we note that , after decoding , user 1 can remove from , and user 2 can remove from , ( ) . consequently during this phase , user 1 sees linearly independent siso channels of the form ( ) which can be readily shown to support .a similar argument gives achievability for . we here adopt the outer bound approach in to the asymmetric case of .as in , we first linearly convert the original bc in , to an equivalent bc ( see , ) having the same dof region as the original bc ( cf. 
) , and we then consider the degraded version of the equivalent bc in the absence of delayed feedback , which matches in capacity the degraded bc with feedback ( for the memoryless case ) , and which exceeds the capacity of the equivalent bc .the final step considers the compound and degraded version of the equivalent bc without delayed feedback , whose dof region will serve as an outer bound on the dof region of the original bc . where ^{t } { \triangleq}\frac{1}{\sqrt{p } } { { { \uppercase{{\bm{q}}}}}}^{-1}_{t } { { { \lowercase{{\bm{x}}}}}}_{t},\ ] ] where \in \mathbb{c}^{2\times 2} ] , }{\triangleq}\{y^{'(i)}_{t}\}_{t=1}^{n} ] for . using fano s inequality, we have }|{{{\uppercase{{\bm{h}}}}}}_{[n ] } ) + n o(n ) \nonumber \\&\leq\ ! n \log\ ! p \!+\ !n o(\log\ !h(y^{'(1)}_{[n]}|w_1,{{{\uppercase{{\bm{h}}}}}}_{[n ] } ) \!+\ !n o(n ) , \label{eq : r1-bound-2}\end{aligned}\ ] ] as well as }|{{{\uppercase{{\bm{h}}}}}}_{[n ] } ) + n o(n ) \nonumber \\&\leq\ !n o(\log\ !! h(y^{''(1)}_{[n]}|w_1,{{{\uppercase{{\bm{h}}}}}}_{[n ] } ) \!+\ !n o(n ) , \label{eq : r1-bound-12}\end{aligned}\ ] ] which is added to to give }|w_1,{{{\uppercase{{\bm{h}}}}}}_{[n ] } ) \nonumber \\ & \quad -h(y^{''(1)}_{[n]}|w_1,{{{\uppercase{{\bm{h}}}}}}_{[n]})+2n o(n ) \nonumber \\ & \leq 2n\log p+ 2n o(\log p ) \nonumber \\ & \quad -h(y^{'(1)}_{[n]},y^{''(1)}_{[n]}|w_1,{{{\uppercase{{\bm{h}}}}}}_{[n ] } ) + 2n o(n ) .\label{eq:2r1-bound}\end{aligned}\ ] ] let where , , and let }{\triangleq}\{z_{t}\}_{t=1}^{n} ] , implies knowledge of and of , up to bounded noise level .furthermore }\!,y^{''(1)}_{[n]}\!,\ !! z_{[n]},{{{\uppercase{{\bm{h}}}}}}_{[n]}\ ! )\nonumber \\ & = \!\!i(\! w_1;y^{'(1)}_{[n]}\!,y^{''(1)}_{[n]},z_{[n]}|{{{\uppercase{{\bm{h}}}}}}_{[n]}\ !n o(n ) , \label{eq : r1}\end{aligned}\ ] ] since again knowledge of }\}_{t=1}^{n}$ ] provides for up to bounded noise level . now combining and , gives },y^{''(1)}_{[n]},z_{[n]}|{{{\uppercase{{\bm{h}}}}}}_{[n]},w_1 ) \!+\ !n o(\log p)\!+\ !n o(n ) \\ & = i(w_2;y^{'(1)}_{[n]},y^{''(1)}_{[n]}|{{{\uppercase{{\bm{h}}}}}}_{[n]},w_1 ) \\& \quad \!+\ !i(w_2;z_{[n]}|y^{'(1)}_{[n]},y^{''(1)}_{[n]},{{{\uppercase{{\bm{h}}}}}}_{[n]},w_1 ) \!+\ ! n o(\log p)\!+\ !n o(n ) \nonumber\\ & = \! h(y^{'(1)}_{[n]},y^{''(1)}_{[n]}|{{{\uppercase{{\bm{h}}}}}}_{[n]},w_1)\!-\!\underbrace{h(y^{'(1)}_{[n]},y^{''(1)}_{[n]}|{{{\uppercase{{\bm{h}}}}}}_{[n]},w_1,w_2)}_{n o(\log p ) } \\ & \quad -\underbrace{h(z_{[n]}|y^{'(1)}_{[n]},y^{''(1)}_{[n]},{{{\uppercase{{\bm{h}}}}}}_{[n]},w_1,w_2)}_{n o(\log p ) } \\ & \quad + \underbrace{h(z_{[n]}|y^{'(1)}_{[n]},y^{''(1)}_{[n]},{{{\uppercase{{\bm{h}}}}}}_{[n]},w_1)}_{\le h(z_{[n ] } ) } + n o(\log p)+n o(n ) \\ & \leq \ !h(y^{'(1)}_{[n]},y^{''(1)}_{[n]}|{{{\uppercase{{\bm{h}}}}}}_{[n]},w_1 ) \!+\!h(z_{[n ] } ) \!+\ !n o(\log p)\!+\ !n o(n ) \\ & \leq \! h(y^{'(1)}_{[n]},y^{''(1)}_{[n]}|w_1,{{{\uppercase{{\bm{h}}}}}}_{[n ] } ) + n \alpha_1\log p \\ & \quad + n o(\log p)+n o(n ) , \label{eq : r2-bound - final}\end{aligned}\ ] ] which is combined with to give which in turn proves the outer bound as described in .finally interchanging the roles of the two users and of , gives naturally the single antenna constraint gives that . c. s. vaze and m. k. varanasi , `` the degrees of freedom region of two - user and certain three - user mimo broadcast channel with delayed csi , '' dec .2010 , submitted to _ieee trans .inform . theory _, available on arxiv:1101.0306v2 [ cs.it ] .j. xu , j. g. andrews , and s. a. 
jafar, "broadcast channels with delayed finite-rate feedback: predict or observe?", may 2011, submitted to ieee trans. on wireless communications, available on arxiv:1105.3686v1 [cs.it].
m. a. maddah-ali and d. n. c. tse, "completely stale transmitter channel state information is still very useful", sep. 2011, submitted to ieee trans. inform. theory, available on arxiv:1010.1499v2 [cs.it].
m. kobayashi, s. yang, d. gesbert, and x. yi, "on the degrees of freedom of time correlated miso broadcast channel with delayed csit", feb. 2012, submitted to proc. information theory (isit) 2012, available on arxiv:1202.1909v1 [cs.it].
s. yang, m. kobayashi, d. gesbert, and x. yi, "degrees of freedom of time correlated miso broadcast channel with delayed csit", mar. 2012, submitted to ieee trans. inform. theory, available on arxiv:1203.2550v1 [cs.it].
t. gou and s. jafar, "optimal use of current and outdated channel state information - degrees of freedom of the miso bc with mixed csit", mar. 2012, submitted to ieee communications letters, available on arxiv:1203.1301v1 [cs.it].
in the setting of the two-user broadcast channel, recent work by maddah-ali and tse has shown that knowledge of prior channel state information at the transmitter (csit) can be useful, even in the absence of any knowledge of current csit. very recent work by kobayashi et al., yang et al., and gou and jafar extended this to the case where, instead of no current csit knowledge, the transmitter has partial knowledge, and where, under a symmetry assumption, the quality of this knowledge is identical for the different users' channels. motivated by the fact that in multiuser settings the quality of csit feedback may vary across different links, we here generalize the above results to the natural setting where the current csit quality varies across the different users' channels. for this setting we derive the optimal degrees-of-freedom (dof) region, and provide novel multi-phase broadcast schemes that achieve this optimal region. finally, this generalization incorporates and generalizes the corresponding result in maleki et al., which considered the broadcast channel with one user having perfect csit and the other having only prior csit.
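a small monte-carlo sketch (an illustration added here, not a computation from the paper) of the mechanism underlying these schemes: a symbol sent at power of order p along a beamformer chosen orthogonal to a channel estimate of quality alpha leaks interference of order p^(1-alpha) to the corresponding user. the two-antenna complex gaussian channel, the error model with per-entry power p^(-alpha), and all numerical values are assumptions used only for illustration.

import numpy as np

rng = np.random.default_rng(1)

def mean_leakage(P, alpha, trials=20000):
    """average interference power leaked to a user whose channel is known to the
    transmitter only up to an estimation error of power P**(-alpha) per entry."""
    shape = (trials, 2)
    hhat = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2.0)
    herr = np.sqrt(P ** (-alpha) / 2.0) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))
    h = hhat + herr
    # unit-norm beamformer w with hhat^T w = 0 (two-antenna case: w = [-hhat_2, hhat_1])
    w = np.stack([-hhat[:, 1], hhat[:, 0]], axis=1)
    w = w / np.linalg.norm(w, axis=1, keepdims=True)
    # a symbol carried on w at power of order P leaks |h^T w|^2 * P to this user
    return P * np.mean(np.abs(np.sum(h * w, axis=1)) ** 2)

alpha = 0.5
for P in [1e2, 1e3, 1e4]:
    print(P, mean_leakage(P, alpha))   # grows roughly like P**(1 - alpha)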
theoretical studies of the cmb have shown that the accurate measurement of the cmb anisotropy spectrum with future space missions such as planck will allow for tests of cosmological scenarios and the determination of cosmological parameters with unprecedented accuracy .nevertheless , some near degeneracies between sets of cosmological parameters yield very similar cmb temperature anisotropy spectra . the measurement of the cmb polarization and the computation of its power spectrum may lift to some extent some of these degeneracies .it will also provide additional information on the reionization epoch and on the presence of tensor perturbations , and may also help in the identification and removal of polarized astrophysical foregrounds .a successful measurement of the cmb polarization stands as an observational challenge ; the expected polarization level is of the order of of the level of temperature fluctuations ( ) .efforts have thus gone into developing techniques to reduce or eliminate spurious non - astronomical signals and instrumental noise which could otherwise easily wipe out real polarization signals . in a previous paper , we have shown how to configure the polarimeters in the focal plane in order to minimize the errors on the measurement of the stokes parameters . in this paper , we address the problem of low frequency noise . low frequency noise in the data streams can arise due to a wide range of physical processes connected to the detection of radiation . noise in the electronics , gain instabilities , and temperature fluctuations of instrument parts radiatively coupled to the detectors , all produce low frequency drifts of the detector outputs . the spectrum of the total noise can be modeled as a superposition of white noise and components behaving like where , as shown in fig .[ spectrebruit ] .this noise generates stripes after reprojection on maps , whose exact form depends on the scanning strategy . if not properly subtracted , the effect of such stripes is to degrade considerably the sensitivity of an experiment . the elimination of this `` striping '' may be achieved using redundancies in the measurement , which are essentially of two types for the case of planck : * each individual detector s field of view scans the sky on large circles , each of which is covered consecutively many times ( ) at a rate of about rpm .this permits a filtering out of non scan - synchronous fluctuations in the circle constructed from averaging the consecutive scans . 
*a survey of the whole sky ( or a part of it ) involves many such circles that intersect each other ( see fig .[ inter ] ) ; the exact number of intersections depends on the scanning strategy but is of the order of for the planck mission : this will allow to constrain the noise at the intersection points .one of us has proposed to remove low frequency drifts for unpolarized data in the framework of the planck mission by requiring that all measurements of a single point , from all the circles intersecting that point , share a common sky temperature signal .the problem is more complicated in the case of polarized measurements since the orientation of a polarimeter with respect to the sky depends on the scanning circle .thus , a given polarimeter crossing a given point in the sky along two different circles will not measure the same signal , as illustrated in fig .[ deuxpol ] .the rest of the paper is organized as follows : in sect .[ noise ] , we explain how we model the noise and how low frequency drifts transform into offsets when considering the circles instead of individual scans . in sect .[ skypol ] , we explain how polarization is measured .the details of the algorithm for removing low - frequency drifts are given in sect .we present the results of our simulations in sect .[ simul ] and give our conclusions in sect .[ resul ] .as shown in fig . [ spectrebruit ] , the typical noise spectrum expected for the planck high frequency instrument ( hfi ) features a drastic increase of noise power at low frequencies .we model this noise spectrum as : the knee frequency is defined as the frequency at which the power spectrum due to low frequency contributions equals that of the white noise .the noise behaves as pure white noise with variance at high frequencies .the spectral index of each component of the low - frequency noise , , is typically between 1 and 2 , depending on the physical process generating the noise .the fourier spectrum of the noise on the circle obtained by combining consecutive scans depends on the exact method used .the simplest method , setting the circle equal to the average of all its scans , efficiently filters out all frequencies save the harmonics of the spinning frequency . since the noise power mainly resides at low frequencies ( see fig .[ spectrebruit ] ) , the averaging transforms to first order low frequency drifts into constant offsets different for each circle and for each polarimeter .this is illustrated in the comparison between figs .[ stream1sf ] and [ average1sf ] .more sophisticated methods for recombining the data streams into circles can be used , as minimization , wiener filtering , or any map - making method projecting about samples onto a circle of about points . for simplicity, we will work in the following with the circles obtained by simple averaging of all its consecutive scans .we thus model the effect of low frequency drifts as a constant offset for each polarimeter and each circle .this approximation is excellent for .the remaining white noise of the polarimeters is described by one constant matrix .the measurement with one polarimeter of the linear polarization of a wave coming from a direction on the sky , requires at least three measurements with different polarimeter orientations . 
since the stokes parameters and are not invariant under rotations, we define them at each point with respect to a reference frame of tangential vectors .the output signal given by a polarimeter looking at point is : where is the angle between the polarimeter and stokes parameter since no net circular polarization is expected through thomson scattering . ] . in the following , we choose the longitude - latitude reference frame as the fixed reference frame on the sky ( see fig .[ frame ] ) .the destriping method consists in using redundancies at the intersections between circle pairs to estimate , for each circle and each polarimeter , the offsets on polarimeter measurements . for each circle intersection, we require that all three stokes parameters _ in a fixed reference frame _ in that direction of the sky , as measured on each of the intersecting circles , be the same .a minimization leads to a linear system whose solution gives the offsets . by subtracting these offsets, we can recover the stokes parameters corrected for low - frequency noise .we consider a mission involving circles .the set of all circles that intercept circle is denoted by and contains circles .for any pair of circles and , we denote the two points where these two circles intersect ( if any ) by . in this notation the circle currently scanned , the intersecting circle in set , and indexes the two intersections ( indexes the first ( second ) point encountered from the northernmost point on the circle ) so that the points and on the sky are identical .the stokes parameters at point , with respect to a fixed global reference system , are denoted by a , with at intersection , the set of measurements by polarimeters travelling along the scanning circle is a denoted by , and is related to the stokes parameters at this point by ( see eq .[ mesurepol ] ) : where is the matrix : ] . in all these cases ,the results are similar for . for more pessimistic noise cases ,the choice of the scanning strategy may have a strong impact on the quality of the final maps .a quantitative study of this point is deffered to a forthcoming publication .rebolo , r. , gutirrez , c. , watson , r. , and gallegos , j. : 1999 , in _ proceedings of the workshop `` the cmb and the planck mission '' held in santander , to appear in astrophysical letters and communications _
a major problem in cosmic microwave background (cmb) anisotropy mapping, especially in a total-power mode, is the presence of low-frequency noise in the data streams. if improperly processed, such low-frequency noise leads to striping in the maps. to deal with this problem, solutions have already been found for mapping the cmb temperature fluctuations, but no solution has yet been proposed for the measurement of cmb polarization. complications arise from the scan-dependent orientation of the measured polarization. in this paper, we investigate a method for building temperature and polarization maps with minimal striping effects in the case of a circular scanning strategy, such as that of the planck mission.
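a minimal numerical sketch (not the authors' pipeline) of the destriping idea described above: each circle/polarimeter pair carries an unknown offset, each crossing point is observed by the polarimeters of the two circles through a response of the form s = (i + q cos 2ψ + u sin 2ψ)/2, and a single least-squares fit recovers the offsets and the stokes parameters jointly. the number of circles, the three polarimeter orientations per circle, the noise level, and the response convention are assumptions for illustration; one extra constraint fixes the overall offset level, which cannot be determined from crossings alone.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

n_circles, n_pol = 6, 3
pairs = list(combinations(range(n_circles), 2))   # every pair of circles crosses once here
n_points, n_off = len(pairs), n_circles * n_pol

true_off = rng.normal(0.0, 5.0, size=n_off)                         # offsets per (circle, polarimeter)
true_stokes = np.column_stack([rng.normal(10.0, 3.0, n_points),     # I
                               rng.normal(0.0, 1.0, n_points),      # Q
                               rng.normal(0.0, 1.0, n_points)])     # U

rows, rhs = [], []
for k, (ca, cb) in enumerate(pairs):              # point k is the crossing of circles ca and cb
    for c in (ca, cb):
        psi0 = rng.uniform(0.0, np.pi)            # orientation of this circle's polarimeters at point k
        for p in range(n_pol):
            psi = psi0 + p * np.pi / n_pol
            row = np.zeros(n_off + 3 * n_points)
            row[c * n_pol + p] = 1.0                                  # offset of polarimeter p on circle c
            row[n_off + 3 * k: n_off + 3 * k + 3] = 0.5 * np.array([1.0, np.cos(2 * psi), np.sin(2 * psi)])
            signal = row[n_off + 3 * k: n_off + 3 * k + 3] @ true_stokes[k]
            rows.append(row)
            rhs.append(signal + true_off[c * n_pol + p] + rng.normal(0.0, 0.01))

# one extra equation removes the global degeneracy (a constant can trade between offsets and I)
row = np.zeros(n_off + 3 * n_points)
row[:n_off] = 1.0
rows.append(row)
rhs.append(0.0)

A, b = np.array(rows), np.array(rhs)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
err = x[:n_off] - (true_off - true_off.mean())
print("rms error on recovered offsets:", np.sqrt(np.mean(err ** 2)))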
after popper , it is a commonplace opinion in philosophy of science that the ` value ' of definitions besides mathematics is generally low .nevertheless , for many more practical needs , from jurisprudence to health planning , the definitions are necessary to impose theoretical boundaries of a subject despite of their incompleteness and temporariness .this highly applies to definitions of drugs and drug use . following the standard definitions , a _ drug _ is a ` chemical that influences biological function ( other than by providing nutrition or hydration ) ' .a _ psychoactive drug _ is a ` drug whose influence is in a part on mental functions ' .abusable psychoactive drug _ is a ` drug whose mental effects are sufficiently pleasant or interesting or helpful that some people choose to take it for a reason other than to relieve a specific malady ' . in our studywe use the term ` drug ' for abusable psychoactive drug regardless of whether it s illicit or not .drug use is a risk behaviour that does not happen in isolation ; it constitutes an important factor for increasing risk of poor health , along with earlier mortality and morbidity , and has significant consequences for society .drug consumption and addiction constitutes a serious problem globally .it includes numerous risk factors , which are defined as any attribute , characteristic , or event in the life of an individual that increases the probability of drug consumption .a number of factors are correlated with initial drug use including psychological , social , individual , environmental , and economic factors .these factors are likewise associated with a number of personality traits .while legal drugs such as sugar , alcohol and tobacco are probably responsible for far more premature death than illegal recreational drugs , the social and personal consequences of recreational drug use can be highly problematic .psychologists have largely agreed that the personality traits of the five factor model ( ffm ) are the most comprehensive and adaptable system for understanding human individual differences .the ffm comprises neuroticism ( n ) , extraversion ( e ) , openness to experience ( o ) , agreeableness ( a ) , and conscientiousness ( c ) .a number of studies have illustrated that personality traits are associated to drug consumption .roncero et al highlighted the importance of the relationship between high n and the presence of psychotic symptoms following cocaine - induced drug consumption .vollrath & torgersen observed that the personality traits of n , e , and c are highly correlated with hazardous health behaviours .a low score of c , and high score of e or high score of n correlate strongly with multiple risky health behaviours .flory et al found alcohol use to be associated with lower a and c , and higher e. 
they found also that lower a and c , and higher o are associated with marijuana use .sutina et al demonstrated that the relationship between low c and drug consumption is moderated by poverty ; low c is a stronger risk factor for illicit drug usage among those with relatively higher socioeconomic status .they found that high n , and low a and c are associated with higher risk of drug use ( including cocaine , crack , morphine , codeine , and heroin ) .it should be mentioned that high n is positively associated with many other addictions like internet addiction , exercise addiction , compulsive buying , and study addiction .an individual s personality profile plays a role in becoming a drug user .terracciano et al demonstrated that the personality profiles for the users and non - users of nicotine , cannabis , cocaine , and heroin are associated with a ffm of personality samples from different communities .they also highlight the links between the consumption of these drugs and low c. turiano et al found a positive correlation between n and o , and drug use , while , increasing scores for c and a decreases risk of drug use .previous studies demonstrated that participants who use drugs including alcohol and nicotine have a strong positive correlation between a and c and a strong negative correlation for each of these factors with n .three high - order personality traits are proposed as endophenotypes for substance use disorders : positive emotionality , negative emotionality , and constraint .the statistical characteristics of groups of drug users and non - users have been studied by many authors ( see , for example , terracciano et al ) .they found that the personality profile for the users and non - users of tobacco , marijuana , cocaine , and heroin are associated with a higher score on n and a very low score for c. 
sensation seeking is also higher for users of recreational drugs .the problem of risk evaluation for individuals is much more complex .this was explored very recently by yasnitskiy et al , valeroa et al and bulut & bucak .both individual and environmental factors predict substance use and different patterns of interaction among these factors may have different implications .age is a very important attribute for diagnosis and prognosis of substance use disorders .in particular , early adolescent onset of substance use is a robust predictor of future substance use disorders .valeroa et al evaluated the individual risk of drug consumption for alcohol , cocaine , opiates , cannabis , ecstasy and amphetamines .input data were collected using the spanish version of the zuckerman - kuhlman personality questionnaire ( zkpq ) .two samples were used in this study .the first one consisted of 336 drug dependent psychiatric patients of one hospital .the second sample included 486 control individuals .the authors used a decision tree as a tool to identify the most informative attributes .the sensitivity of 40% and the specificity of 94% were achieved for the training set .the main purpose of this research was to test if predicting drug consumption was possible and to identify the most informative attributes using data mining methods .the decision tree methods were applied to explore the differential role of personality profiles in drug consumer and control individuals .the two personality factors , neuroticism and anxiety and the zkpq s impulsivity , were found to be most relevant for drug consumption prediction .low sensitivity ( 40% ) does not provide application of this decision tree to real life problems .in our study we tested the associations with personality traits for different types of drugs separately using the revised neo five - factor inventory ( neo - ffi - r ) , the barratt impulsiveness scale version 11 ( bis-11 ) , and the impulsivity sensation - seeking scale ( impss ) to assess impulsivity and sensation - seeking respectively .bulut & bucak detected a risk rate for teenagers in terms of percentage who are at high risk without focusing on specific addictions .the attributes were collected by an original questionnaire , which included 25 questions .the form was filled in by 671 students .the first 20 questions asked about the teenagers financial situation , temperament type , family and social relations , and cultural preferences .the last five questions were completed by their teachers and concerned the grade point average of the student for the previous semester according to a 5-point grading system , whether the student had been given any disciplinary punishment so far , if the student had alcohol problems , if the student smoked cigarettes or used tobacco products , and whether the student misused substances . 
in bulutet al s study there are five risk classes as outputs .the authors diagnosed teenagers risk to be a drug abuser using seven types of classification algorithms : -nearest neighbor , id3 and c4.5 decision tree based algorithms , nave bayes classifier , nave bayes / decision trees hybrid approach , one - attribute - rule , and projective adaptive resonance theory .the classification accuracy of the best classifier was reported as 98% .yasnitskiy et al , attempted to evaluate the individual s risk of illicit drug consumption and to recommend the most efficient changes in the individual s social environment to reduce this risk .the input and output features were collected by an original questionnaire .the attributes consisted of : level of education , having friends who use drugs , temperament type , number of children in the family , financial situation , alcohol drinking and smoking , family relations ( cases of physical , emotional and psychological abuse , level of trust and happiness in the family ) .there were 72 participants .a neural network model was used to evaluate the importance of attributes for diagnosis of the tendency to drug addiction .a series of virtual experiments was performed for several test patients ( drug users ) to evaluate how it is possible to control the propensity for drug addiction .the most effective change of social environment features was predicted for each patient .the recommended changes depended on the personal profile , and significantly varied for different patients .this approach produced individual bespoke advice for decreasing drug dependence . in our study , the database was collected by an anonymous online survey methodology by elaine fehrman yielding 2051 respondents .the database is available online .twelve attributes are known for each respondent : personality measurements which include n , e , o , a , and c scores from neo - ffi - r , impulsivity ( imp . ) from ( bis-11 ) , sensation seeking ( ss ) from ( impss ) , level of education ( edu . ) , age , gender , country of residence , and ethnicity .the data set contains information on the consumption of 18 central nervous system psychoactive drugs including alcohol , amphetamines , amyl nitrite , benzodiazepines , cannabis , chocolate , cocaine , caffeine , crack , ecstasy , heroin , ketamine , legal highs , lsd , methadone , magic mushrooms ( mmushrooms ) , nicotine , and volatile substance abuse ( vsa ) , and one fictitious drug ( semeron ) which was introduced to identify over - claimers .participants selected for each drug either they never used this drug , used it over a decade ago , or in the last decade , year , month , week , or day .participants were asked about substances , which were classified as central nervous system depressants , stimulants , or hallucinogens .the depressant drugs comprised alcohol , amyl nitrite , benzodiazepines , tranquilizers , gamma - hydroxybutyrate solvents and inhalants , and opiates such as heroin and methadone / prescribed opiates . the stimulants consisted of amphetamines , nicotine , cocaine powder , crack cocaine , caffeine , and chocolate . 
although chocolate contains caffeine , data for chocolatewas measured separately , given that it may induce parallel psychopharmacological and behavioural effects in individuals congruent to other addictive substances .the hallucinogens included cannabis , ecstasy , ketamine , lsd , and magic mushrooms .legal highs such as mephedrone , salvia , and various legal smoking mixtures were also measured .we use four different definitions of ` drug users ' based on the recency of use .firstly , two isolated categories ( ` never used ' and ` used over a decade ago ' ) are placed into the class of non - users , and all other categories are merged to form the class of users . secondly , we merge the categories ` used in last decade ' , ` used over a decade ago ' and ` never used ' into the group of non - users and place four other categories ( ` used in last year - month - week - day ' ) into group of users .this classification is called ` year - based ' one .also ` month - based ' and ` week - based ' user / non - user separations are considered .the objective of the study was to assess the potential effect of big five personality traits , impulsivity , sensation - seeking , and demographic data on drug consumption for different drugs , groups of drugs and for different definitions of drug users .the study had two purposes : ( i ) to identify the association of personality profiles ( i.e. neo - ffi - r ) with drug consumption and ( ii ) to predict the risk of drug consumption for each individual according to their personality profiles .part of the results was presented in the preprint .the sample was created by an anonymous online survey .it was found to be biased when compared with the general population , which was indicated from comparison to the data published by egan , et al and costa jr & mccrae .such a bias is usual for clinical cohorts .our study reveals that the personality profiles are strongly associated with belonging to groups of the users and non - users of the 18 drugs . for analysis , we use the following subdivision of the sample t- : the interval 44 - 49 indicates a moderately low score , , the interval 49 - 51 indicates a neutral score , and the interval 51 - 56 indicates a moderately high score .we found that the n and o scores of drug users of all 18 drugs are moderately high or neutral , except for crack usage for the week - based classification , for which the o score is moderately low .the a and c scores are moderately low or neutral for all groups of drug users and all user / non - user separations . for most groups of illicit drug users the a and c scores are moderately low except two exclusions :the a score is neutral in the year - based classification for lsd users and in the week - based classification for lsd and magic mushrooms users .the a and c scores for groups of legal drugs users ( i.e. alcohol , chocolate , caffeine , and nicotine ) are neutral , apart from nicotine users , whose c score is moderately low for all bases of user / non - user separation .the impact of the e score is drug specific .for example , for the week - based user / non - user separation the e scores are : * the e score of users is moderately low for amphetamines , amyl nitrite , benzodiazepines , heroin , ketamine , legal highs , methadone , and crack ; * the e score of users is moderately high for cocaine , ecstasy , lsd , magic mushrooms , and vsa ; * the e score of users is neutral for alcohol , caffeine , chocolate , cannabis , and nicotine . 
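for concreteness, a short sketch of how one answer per drug is binarized under the four user/non-user separations defined above; the category strings are hypothetical labels standing in for the questionnaire's response options.

# hypothetical category labels standing in for the seven answers in the questionnaire
CATEGORIES = ["never used", "used over a decade ago", "used in last decade",
              "used in last year", "used in last month", "used in last week", "used in last day"]

# index of the first category that counts as a "user" under each definition
THRESHOLD = {"decade": 2, "year": 3, "month": 4, "week": 5}

def is_user(answer: str, definition: str = "decade") -> bool:
    """binarize one respondent's answer for one drug under a given user/non-user separation."""
    return CATEGORIES.index(answer) >= THRESHOLD[definition]

# example: an answer of "used in last year" counts as a user under the decade- and
# year-based separations, but as a non-user under the month- and week-based ones
print([is_user("used in last year", d) for d in ("decade", "year", "month", "week")])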
for more detailssee section ` comparison of personality traits means for drug users and non - users ' in ` results ' .usage of some drugs are significantly correlated .the structure of these correlations is analysed in section ` correlation between usage of different drugs ' .two correlation measures are utilised : the pearson correlation coefficient ( pcc ) and the relative information gain ( rig ) .we found three groups of drugs with highly correlated use .the central element is clearly identified for each group .these centres are : _ heroin , ecstasy , and benzodiazepines_. it means that the drug consumption has a ` modular structure ' .the modular structure has clear reflection in the correlation graph .the idea to merge the correlated attributes into ` modules ' called as _ correlation pleiades _ is popular in biology .the concept of correlation pleiades was introduced in biostatistics in 1931 .they were used for identification of the modular structure in evolutionary physiology . according to berg ,correlation pleiades are clusters of correlated traits . in our approach, we distinguish the core and the peripheral elements of correlation pleiades and allow different pleiads to have small intersections in their periphery . ` soft ' clustering algorithmsrelax the restriction that each data object is assigned to only one cluster ( like probabilistic or fuzzy clustering ) .see the book of r. xu and d. wunsch for the modern review of hard and soft clustering .we refer to for a discussion of clustering in graphs with intersections .the three groups of correlated drugs centered around heroin , ecstasy , and benzodiazepines are defined for the decade- , year- , month- , and week - based classifications : * the heroin pleiad includes crack , cocaine , methadone , and heroin ; * the ecstasy pleiad consists of amphetamines , cannabis , cocaine , ketamine , lsd , magic mushrooms , legal highs , and ecstasy ; * the benzodiazepines pleiad contains methadone , amphetamines , cocaine , and benzodiazepines .analysis of the intersections between correlation pleiads of drugs can generate important question and hypotheses : * why is cocaine a peripherical member of all pleiads ? * why does methadone belong to the periphery of both the heroin and benzodiazepines pleiades ? *do these intersections reflect the structure of individual drug consumption or the structure of the groups of drug consumers ?correlation analysis of the decade - based classification problems demonstrates that the consumption of legal drugs ( i.e. alcohol , chocolate and caffeine ) is not correlated with consumption of other drugs .the consumptions of seven illicit drugs ( i.e. amphetamines , cannabis , cocaine , ecstasy , legal highs , lsd , and mushrooms ) are symmetrically correlated ( when the correlations are measured by relative information gain , which is not symmetric a priori ) .there are also many strongly asymmetric correlations .for example , knowledge of amphetamines , cocaine , ecstasy , legal highs , lsd , and magic mushroom consumption is useful for the evaluation of ketamine consumption , but on the other hand , knowledge of ketamine consumption is significantly less useful for the evaluation of usage of the drugs listed above . 
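the asymmetric correlations quoted above are consistent with a relative information gain computed as the mutual information between two usage variables normalized by the entropy of the target variable; the following sketch implements that common definition (an assumption about the exact convention used in the study, not a statement of it) and shows the asymmetry on a toy example.

import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def relative_information_gain(x, y):
    """rig(x|y): fraction of the uncertainty about x removed by knowing y.
    asymmetric in general, because of the normalization by h(x)."""
    x, y = np.asarray(x), np.asarray(y)
    h_x = entropy(x)
    if h_x == 0.0:
        return 0.0
    h_x_given_y = 0.0
    for value in np.unique(y):
        mask = (y == value)
        h_x_given_y += mask.mean() * entropy(x[mask])
    return (h_x - h_x_given_y) / h_x

# toy illustration that the measure is not symmetric
x = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([0, 0, 0, 0, 0, 0, 1, 1])
print(relative_information_gain(x, y), relative_information_gain(y, x))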
in this study , we evaluated the individual drug consumption risk separately , for each drug and pleiad of drugs .we also analyzed interrelations between the individual drug consumption risks for different drugs .we applied several data mining approaches : decision tree , random forest , -nearest neighbors , linear discriminant analysis , gaussian mixture , probability density function estimation , logistic regression and nave bayes .the quality of classification was surprisingly high .we tested all the classifiers by _ leave - one - out cross validation_. the best results with sensitivity and specificity being greater than 75% were achieved for cannabis , crack , ecstasy , legal highs , lsd , and vsa .sensitivity and specificity greater than 70% were achieved for the following drugs : amphetamines , amyl nitrite , benzodiazepines , chocolate , caffeine , heroin , ketamine , methadone and nicotine .the poorest result was obtained for prediction of alcohol consumption .an exhaustive search was performed to select the most effective subset of input features , and data mining methods to classify users and non - users for each drug .users are defined for each correlation pleiad of drugs as users of any of the drug from the pleiade .we consider the classification problem for drug pleiades for the decade- , year- , month- , and week - based user / non - user separations .these problems are much better balanced for short periods ( the week - based user definition ) than the classification problems for separate drugs .for example , there are 184 users for the heroin pleiad but only 29 heroin users in the database for the week - based definition of users .the quality of classification is high .for example , for the month - based user / non - user separation of the heroin pleiad consumption , the best classifier is a decision tree with five features and sensitivity 74.18% and specificity 74.11% .a decision tree with seven attributes is the best classifier for the year - based classification problem of the ecstasy pleiad users / non - users and has sensitivity 80.65% and specificity 80.72% . in the week - based separation of the benzodiazepinespleiad users / non - users , the best classifier is a decision tree with five features , sensitivity 75.10% , and specificity 75.76% . the creation of classifiers provided the capability to evaluate the risk of drug consumption in relation to individuals .the risk map is a useful tool for data visualisation and for the generation of hypotheses for further study ( see section ` risk evaluation for the decade - based user / non - user separation ' ) .the main results of the work are : * presentation and descriptive analysis of a database with information of 1885 respondents and usage of 18 drugs . *demonstration that the personality traits ( five factor model , impulsivity , and sensation seeking ) together with simple demographic data give the possibility of predicting the risk of consumption of individual drugs with sensitivity and specificity above 70% for most drugs . *the best classifiers and most significant predictors are found for each individual drug in question .* revelation of three correlation pleiads of drugs , that are the clusters of drugs with correlated consumption centered around heroin , ecstasy , and benzodiazepines .* the best robust classifiers and most significant predictors are found for use of pleiads of drugs . 
* the risk map technology is developed for the visualization of the probability of drug consumption.
the database was collected by elaine fehrman between march 2011 and march 2012. in january 2011, the research proposal was approved by the university of leicester's forensic psychology ethical advisory group, and subsequently received a favourable opinion from the university of leicester school of psychology's research ethics committee (prec). the data are available online. an online survey tool from survey gizmo was employed to gather data in a way that maximised anonymity, this being particularly relevant to canvassing respondents' views, given the sensitive nature of drug use. all participants were required to declare themselves at least 18 years of age prior to informed consent being given. the study recruited 2051 participants over a 12-month recruitment period. of these persons, 166 did not respond correctly to a validity check built into the middle of the scale, so were presumed to be inattentive to the questions being asked. nine of these persons were also found to have endorsed using a fictitious recreational drug, which was included precisely to identify respondents who over-claim, as have other studies of this kind. this led to a usable sample of 1885 participants (male/female = 943/942). the snowball sampling methodology recruited a primarily (93.7%) native english-speaking sample, with participants from the uk (1044; 55.4%), the usa (557; 29.5%), canada (87; 4.6%), australia (54; 2.9%), new zealand (5; 0.3%) and ireland (20; 1.1%). a total of 118 (6.3%) came from a diversity of other countries, none of which individually reached 1% of the sample, or did not declare their country of location. further optimizing anonymity, persons reported their age band rather than their exact age: 18-24 years (643; 34.1%), 25-34 years (481; 25.5%), 35-44 years (356; 18.9%), 45-54 years (294; 15.6%), 55-64 years (93; 4.9%), and over 65 (18; 1%). this indicates that although the largest age cohort band was the 18 to 24 range, some 40% of the cohort was 35 or above, which is an age range often missed in studies of this kind. the sample recruited was highly educated, with just under two thirds (59.5%) educated to at least degree or professional certificate level: 14.4% (271) reported holding a professional certificate or diploma, 25.5% (481) an undergraduate degree, 15% (284) a master's degree, and 4.7% (89) a doctorate. approximately 26.8% (506) of the sample had received some college or university tuition although they did not hold any certificates; lastly, 13.6% (257) had left school at the age of 18 or younger. participants were asked to indicate which racial category was broadly representative of their cultural background. an overwhelming majority (91.2%; 1720) reported being white, (1.8%; 33) stated they were black, and (1.4%; 26) asian. the remainder of the sample (5.6%; 106) described themselves as `other' or `mixed' categories. this small number of persons belonging to specific non-white ethnicities precludes any analyses involving racial categories.
in order to assess the personality traits of the sample, the revised neo five-factor inventory (neo-ffi-r) questionnaire was employed. the neo-ffi-r is a highly reliable measure of basic personality domains; internal consistencies are 0.84 (n), 0.78 (e), 0.78 (o), 0.77 (a), and 0.75 (c) (egan et al). the scale is a 60-item inventory comprising five personality domains or factors. the neo-ffi-r is a shortened version of the revised neo personality inventory (neo-pi-r). the five factors are: n, e, o, a, and c, with 12 items per domain. the five traits can be summarized as:
1. _neuroticism_ (*n*) is a long-term tendency to experience negative emotions such as nervousness, tension, anxiety and depression;
2. _extraversion_ (*e*) is manifested in outgoing, warm, active, assertive, talkative, cheerful, and stimulation-seeking characteristics;
3. _openness to experience_ (*o*) is a general appreciation for art, unusual ideas, and imaginative, creative, unconventional, and wide interests;
4. _agreeableness_ (*a*) is a dimension of interpersonal relations, characterized by altruism, trust, modesty, kindness, compassion and cooperativeness;
5. _conscientiousness_ (*c*) is a tendency to be organized and dependable, strong-willed, persistent, reliable, and efficient.
all of these domains are hierarchically defined by specific facets. egan et al observe that the o and e domains of the neo-ffi instrument are less reliable than n, a, and c. participants were asked to read the 60 neo-ffi-r statements and indicate on a five-point likert scale how much a given item applied to them (i.e. 0 = `strongly disagree', 1 = `disagree', 2 = `neutral', 3 = `agree', 4 = `strongly agree'). we expected drug usage to be associated with high n, and low a and c. the darker dimension of personality can be described in terms of high n and low a and c, whereas much of the anti-social behaviour in non-clinical persons appears underpinned by high n and low c. the so-called `_negative urgency_' is the tendency to act rashly when distressed, characterized by high n, low c, and low a. negative urgency is partially confirmed below for users of most of the illegal drugs. in addition, our findings suggest that o is higher for drug users. the second measure used was the barratt impulsiveness scale (bis-11). the bis-11 is a 30-item self-report questionnaire which measures the behavioural construct of impulsiveness and comprises three subscales: motor impulsiveness, attentional impulsiveness, and non-planning. the `motor' aspect reflects acting without thinking, the `attentional' component poor concentration and thought intrusions, and the `non-planning' subscale a lack of consideration for consequences. the scale's items are scored on a four-point likert scale. this study modified the response range to make it compatible with previous related studies. a score of five usually connotes the most impulsive response, although some items are reverse-scored to prevent response bias. items are aggregated, and the higher the bis-11 score, the higher the impulsivity level.
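as a purely illustrative sketch of the aggregation step just described, the snippet below sums 30 likert-coded bis-11 responses into a single impulsiveness score; the positions of the reverse-keyed items and the response coding are placeholders, not the keying of the actual instrument or the scoring code used in the study.

```python
import numpy as np

# placeholders: the real reverse-keyed item positions and response range come from
# the bis-11 manual and the study's modified coding; they are only assumed here.
REVERSE_KEYED = {0, 6, 8}
MIN_SCORE, MAX_SCORE = 1, 5

def bis11_total(responses):
    """aggregate 30 likert responses into a single impulsiveness score.

    reverse-keyed items are flipped so that a higher total always means
    higher impulsivity, as described in the text.
    """
    scored = np.asarray(responses, dtype=float).copy()
    for item in REVERSE_KEYED:
        scored[item] = MIN_SCORE + MAX_SCORE - scored[item]
    return scored.sum()
```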
the bis-11 is regarded as a reliable psychometric instrument with good test-retest reliability (spearman's rho = 0.83) and internal consistency (cronbach's alpha = 0.83). the third measurement tool employed was the impulsiveness sensation-seeking scale (impss). although the impss combines the traits of impulsivity and sensation-seeking, it is regarded as a measure of a general sensation-seeking trait. the scale consists of 19 statements in true-false format, comprising eight items measuring impulsivity (*imp*) and 11 items gauging sensation-seeking (*ss*). the impss is considered a valid and reliable measure of high-risk behavioural correlates such as substance misuse. participants were questioned concerning their use of 18 legal and illegal drugs (alcohol, amphetamines, amyl nitrite, benzodiazepines, cannabis, chocolate, cocaine, caffeine, crack, ecstasy, heroin, ketamine, legal highs, lsd, methadone, magic mushrooms, nicotine and volatile substance abuse (vsa)) and one fictitious drug (semeron) which was introduced to identify over-claimers. it was recognised at the outset of this study that drug use research regularly (and spuriously) dichotomises individuals as users or non-users, without due regard to the frequency or duration/desistance of drug use. in this study, finer distinctions concerning the measurement of drug use have been deployed, due to the potential for the existence of qualitative differences amongst individuals with varying usage levels. in relation to each drug, respondents were asked to indicate whether they never used this drug, used it over a decade ago, or in the last decade, year, month, week, or day. this format captured the breadth of a drug-using career and the specific recency of use. the different categories of drug users are depicted in fig [categoriesfig:1]. analysis of the classes of drug users shows that some of the classes are nested: participants who belong to the category `used in last day' also belong to the categories `used in last week', `used in last month', `used in last year' and `used in last decade'. there are two special categories: `never used' and `used over a decade ago' (see fig [categoriesfig:1]). the data do not contain a definition of the groups of users and non-users. formally, only a participant in the class `never used' can be called a non-user, but this definition is not practical because a participant who used a drug more than a decade ago cannot be considered a drug user for most applications. there are several possible ways to separate participants into groups of users and non-users for binary classification:
1. the two isolated categories (`never used' and `used over a decade ago') are placed into the class of non-users (shown with a green background in fig [categoriesfig:1]) and all other categories into the class of users, as the simplest version of binary classification. this classification problem is called the `_decade-based_' user/non-user separation.
2. the categories `used in last decade', `used over a decade ago' and `never used' are merged to form a group of non-users, and all other categories are placed into the group of users. this classification problem is called `_year-based_'.
3. the categories `used in last year', `used in last decade', `used over a decade ago' and `never used' are combined to form a group of non-users, and all three other categories are placed into the group of users. this classification problem is called `_month-based_'.
4. the categories `used in last week' and `used in last day' are merged to form a group of users, and all other categories are placed into the group of non-users. this classification problem is called `_week-based_'.
(a small labelling sketch illustrating these four definitions is given at the end of this passage.)
we begin our analysis with the decade-based user/non-user separation because it is a relatively well balanced classification problem, that is, there are sufficiently many users in the combined group `used in last decade-year-month-week' for all drugs in the database. if the problem is not explicitly specified, the decade-based classification problem is assumed. we also perform the analysis for the year-, month-, and week-based user/non-user separations. it is useful to group drugs with highly correlated usage for this purpose (see section `pleiades of drugs'). the proportion of drug users among all participants is different for different drugs and for different classification problems. the data set comprises 1885 individuals without any missing data. table [tab:1a] shows the percentage of drug users for each drug and for each problem in the database. it is necessary to mention that the sample is biased towards a higher proportion of drug users. this means that for the general population the fraction of illegal drug users is expected to be significantly lower.
[table [tab:1a]: the number and fraction of drug users for each drug and for each classification problem.]
[table: grouping of the drug pleiades (heroinpl, ecstasypl, benzopl) by their mean t-score profiles for the decade-, year-, month-, and week-based classification problems.]
the introduction of moderate subcategories of t-scores for the pleiades of drugs makes it possible to separate the pleiades into two groups for the decade-, month-, and week-based user/non-user separations, while for the year-based user/non-user separation there is only one group, whose profile includes the users of heroinpl, ecstasypl and benzopl. for the decade-based classification problem, one group includes the users of heroinpl and benzopl, and the other group includes the users of ecstasypl. for the month- and week-based classification problems, one group again includes the users of heroinpl and benzopl, and the other the users of ecstasypl. the personality profiles for the pleiades of drugs are also strongly associated with membership in the groups of users and non-users of each pleiad for the decade-, year-, month-, and week-based classification problems (see fig [heroinpldecade], fig [ecstasyplyear], fig [benzoplmonth] and fig [heroinplweek]).
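the four user/non-user definitions listed above can be expressed as thresholds on an ordinal recency coding of the seven answer categories; the sketch below is one possible implementation for illustration only (the category strings and function names are assumptions, not the authors' code).

```python
# ordinal coding of the seven answer categories, from least to most recent use;
# the category strings are assumptions about how the answers are stored.
RECENCY = {
    "never used": 0,
    "used over a decade ago": 1,
    "used in last decade": 2,
    "used in last year": 3,
    "used in last month": 4,
    "used in last week": 5,
    "used in last day": 6,
}

# minimum recency that counts as a "user" under each of the four definitions
USER_THRESHOLD = {
    "decade-based": 2,   # used within the last decade or more recently
    "year-based": 3,     # used within the last year or more recently
    "month-based": 4,    # used within the last month or more recently
    "week-based": 5,     # used within the last week or more recently
}

def is_user(answer, definition="decade-based"):
    """binarise one respondent's answer for a given drug under a chosen definition."""
    return int(RECENCY[answer] >= USER_THRESHOLD[definition])

# example: an answer of "used in last year" gives a user label under the
# decade- and year-based definitions and a non-user label under the other two.
```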
[four figures: mean personality profiles of the pleiad users and non-users with respect to the sample means (figs [heroinpldecade], [ecstasyplyear], [benzoplmonth] and [heroinplweek]).]
we applied the eight methods described in section `risk evaluation methods' and selected the best one for each pleiad for the decade-, year-, month-, and week-based classification problems. the results of the classifier selection are depicted in table [tab:12a] and the quality of classification is high. the classification results are very satisfactory for each pleiad for the decade-, year-, month-, and week-based problems. we can compare the classifiers for one pleiad and for different problems (see table [tab:12a]). for example,
* the best classifier for ecstasypl for the year-based user/non-user separation is a dt with seven attributes and has sensitivity 80.65% and specificity 80.72%;
* the best classifier for heroinpl for the month-based user/non-user separation is a dt with five attributes and has sensitivity 74.18% and specificity 74.11%;
* the best classifier for benzopl for the week-based user/non-user separation is a dt with five attributes and has sensitivity 75.10% and specificity 75.76%.
the comparison of tables [tab:12] and [tab:12a] shows that the best classifiers for the ecstasy and benzodiazepines pleiades are more accurate than the best classifiers for consumption of the `central' drugs of the pleiades, ecstasy and benzodiazepines, even for the decade-based user definition. classifiers for heroinpl may have slightly worse accuracy, but these classifiers are more robust because they solve classification problems which have more balanced classes. all other classifiers for pleiades of drugs are more robust too, for the same reasons, for all pleiades and definitions of users. tables [tab:12] and [tab:12a] for the decade-based user definition show that most of the classifiers for pleiades use more input features than the classifiers for individual drugs. we can see from these tables that the accuracies of the classifiers for pleiades and for individual drugs do not differ drastically, but the use of a greater number of input features indicates more robust classifiers. it is important to stress that pleiades are usually assumed to be disjoint. we consider pleiades which are named after their central drug and have intersections in the peripheral drugs. for example, the heroin and ecstasy pleiades have cocaine as an intersection. this approach corresponds to the concept of `soft clustering'. the successful construction of a classifier provides an instrument for the evaluation of the risk of drug consumption for each individual, along with the creation of a map of risk. the risk map of ecstasy consumption on the basis of three input features is depicted in fig [riskmapsfig:8].
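to illustrate how a risk map of this kind can be produced from any fitted probabilistic classifier, the sketch below evaluates predicted probabilities on a grid of two input features for a fixed gender; the feature names, value ranges and the use of a decision tree instead of the pdfe method are assumptions made only for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier

def risk_map(clf, age_grid, ss_grid, gender_value):
    """evaluate a fitted classifier's predicted probability of use on a grid of
    (age, sensation seeking) values for one gender, as a 2-d array for plotting."""
    aa, ss = np.meshgrid(age_grid, ss_grid)
    grid = np.column_stack([aa.ravel(), ss.ravel(),
                            np.full(aa.size, gender_value)])
    proba = clf.predict_proba(grid)[:, 1]
    return proba.reshape(aa.shape)

# hypothetical usage: clf fitted on columns [age, ss, gender] for ecstasy consumption
# clf = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)
# ages, sss = np.linspace(18, 65, 50), np.linspace(20, 80, 50)
# z = risk_map(clf, ages, sss, gender_value=1)
# plt.pcolormesh(ages, sss, z, shading="auto")
# plt.xlabel("age"); plt.ylabel("sensation seeking"); plt.colorbar(label="estimated risk")
```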
from the pdfe-based risk maps (fig [riskmapsfig:8]a and [riskmapsfig:8]b) it can be observed that there is a considerable area of high risk (indicated in blue) for men aged between 25 and 34 years, but a significantly smaller one for females. however, young males with the highest ss scores have significantly less risk than females with the same profiles. the dt-based risk maps (fig [riskmapsfig:8]c and [riskmapsfig:8]d) illustrate qualitatively the same shapes. the risk maps provide a tool for the generation of hypotheses for further study. we can create risk maps for pleiades of drugs as well. our study demonstrates strong correlations between personality profiles and the risk of drug use. this result supports observations from some previous works. individuals involved in drug use are more likely to have higher scores for n, and low scores for a and c. we analysed in detail the average differences between the groups of drug users and non-users for 18 drugs (tables [tab:5], [tab:5a], [tab:5b], and [tab:5c]). in addition to this analysis, we obtained much more detailed knowledge about the relationship between the personality traits, biographic data and the use of individual drugs or drug clusters by an individual. the analysed database contained 1885 participants and 12 features (input attributes). these features included the five personality traits (neo-ffi-r), impulsivity (bis-11), sensation seeking (impss), level of education, age, gender, country of residence, and ethnicity. the data set included information on the consumption of 18 central nervous system psychoactive drugs: alcohol, amphetamines, amyl nitrite, benzodiazepines, cannabis, chocolate, cocaine, caffeine, crack, ecstasy, heroin, ketamine, legal highs, lsd, methadone, mushrooms, nicotine, and vsa (output attributes). there were limitations of this study, since the collected sample was biased with respect to the general population, but it remained useful for risk evaluation. we used three different techniques of feature ranking. after input feature ranking we excluded ethnicity and country of residence. it was impossible to completely exclude the possibility that ethnicity and country of residence may be important risk factors, but the dataset does not contain enough data for most ethnicities and countries to prove the value of this information. as a result, 10 input features remained: age, edu., n, e, o, a, c, imp., ss, and gender. our aim was to predict the risk of drug consumption for an individual. all input features are ordinal or nominal. to apply data mining methods which were developed for continuous input features, we applied the catpca technique to quantify the data. we used four different definitions of drug users which differ with regard to the recency of the last drug consumption: the decade-based, year-based, month-based and week-based user/non-user separations (fig [categoriesfig:1]).
the day-based classification problem is also possible, but there is not enough data on drug use during the last day for most drugs. our findings allowed us to draw a number of important conclusions about the associations between personality traits and drug use. all five personality factors are relevant traits to be taken into account when assessing the risk of drug consumption. the mean scores for the groups of users of all 18 drugs are moderately high or neutral for n and o, and moderately low for a and c, except for crack usage in the week-based classification problem, which has a moderately low o score (see table [tab:5c] and fig [averagecrack]). users of legal drugs (alcohol, chocolate, caffeine, and nicotine) have neutral a and c scores, except nicotine users, whose c score is moderately low. for lsd users in the year-based classification problem, and for lsd and magic mushrooms users in the week-based classification problem, the a score is neutral. the impact of the e score is drug specific. for example, for the decade-based user/non-user definition the e score is negatively correlated with consumption of crack, heroin, vsa, and methadone. it has no predictive value for the other drugs for the decade-based classification, whereas in the year-, month-, and week-based classification problems all three possible values of the e score are observed (see tables [tab:5], [tab:5a], [tab:5b] and [tab:5c]). we confirm the previous findings that higher scores for n and o and lower scores for c and a lead to an increased risk of drug use. the o score is marked by curiosity and open-mindedness (and correlated with intelligence), and it is therefore understandable why higher o may sometimes be associated with drug use. flory et al found marijuana use to be associated with lower a and c, and higher o. these findings have been confirmed by our study. our results improve the knowledge concerning the pathways leading to drug consumption. it is known that significant predictors of alcohol, tobacco and marijuana use may vary according to the drug in question. our study demonstrated that different attributes were important for different drugs. we tested eight types of classifiers for each drug for the decade-based user definition. loocv was used to evaluate sensitivity and specificity.
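the paragraph that follows spells out the rule used to pick the best classifier (maximise the minimum of sensitivity and specificity, with ties broken by their sum); the sketch below combines loocv evaluation with an exhaustive search over feature subsets under that rule. it is a schematic illustration only: the feature names follow the text, but the helper functions and the restriction to sklearn-style classifiers are assumptions, and a full run over all 1023 subsets and eight methods would be computationally heavy.

```python
import numpy as np
from itertools import combinations
from sklearn.model_selection import LeaveOneOut
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["age", "edu", "n", "e", "o", "a", "c", "imp", "ss", "gender"]

def loo_sensitivity_specificity(X, y, make_classifier):
    """leave-one-out cross-validation: each participant is held out once,
    the classifier is refitted on the rest, and the held-out label is predicted."""
    y = np.asarray(y)
    pred = np.empty_like(y)
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = make_classifier()
        clf.fit(X[train_idx], y[train_idx])
        pred[test_idx] = clf.predict(X[test_idx])
    tp = np.sum((pred == 1) & (y == 1))
    fn = np.sum((pred == 0) & (y == 1))
    tn = np.sum((pred == 0) & (y == 0))
    fp = np.sum((pred == 1) & (y == 0))
    return tp / (tp + fn), tn / (tn + fp)

def select_best(results):
    """maximise min(sensitivity, specificity); break ties by the larger sum."""
    return max(results, key=lambda r: (min(r["sens"], r["spec"]),
                                       r["sens"] + r["spec"]))

def exhaustive_search(frame, y, classifier_factories):
    """evaluate every non-empty subset of the input features for every method."""
    results = []
    for k in range(1, len(FEATURES) + 1):
        for subset in combinations(FEATURES, k):
            X = frame[list(subset)].to_numpy()
            for name, factory in classifier_factories.items():
                sens, spec = loo_sensitivity_specificity(X, y, factory)
                results.append({"features": subset, "classifier": name,
                                "sens": sens, "spec": spec})
    return select_best(results)

# hypothetical usage for one target drug:
# best = exhaustive_search(df[FEATURES], df["cannabis_user"].to_numpy(),
#                          {"dt": lambda: DecisionTreeClassifier(max_depth=4)})
```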
in this study we chose as the best method the one which provides the maximal value of the minimum of sensitivity and specificity. if two methods have the same minimum of sensitivity and specificity, the method with the maximal sum of sensitivity and specificity is selected as the best one. there were classifiers with sensitivity and specificity greater than 70% for the decade-based user/non-user separation for all drugs except magic mushrooms, alcohol, and cocaine (table [tab:12]). this accuracy was unexpectedly high for this type of problem. the poorest result was obtained for the prediction of alcohol consumption. the best set of input features was defined for each drug (table [tab:12]). an exhaustive search was performed to select the most effective subset of input features and the best data mining methods to classify users and non-users for each drug. there were 10 input features. each of them is an important factor for risk evaluation for the use of some drugs. however, there was no single most effective classifier using all input features. the maximal number of attributes used in the best classifiers is six (out of 10) and the minimal number is two. table [tab:12] shows the best sets of attributes for user/non-user classification for the different drugs and for the decade-based classification problem. this table, together with its analogues for the pleiades of drugs and all decade-, year-, month-, and week-based classification problems (table [tab:12a]), is an important result of the analysis. the dt for crack consumption used only two features, e and c, and provided a sensitivity of 80.63% and a specificity of 78.57%. the dt for vsa consumption used age, edu., e, a, c, and ss, and provided sensitivity 83.48% and specificity 77.64% (table [tab:12]). age was a widely used feature, employed in the best classifiers for 14 drugs for the decade-based classification problem. gender was used in the best methods for 10 drugs. we found some unexpected outcomes. for example, the fraction of females who are alcohol users is greater than the fraction of males, but a greater part of males consume caffeine (coffee). most of the features which are not used in the best classifiers are redundant but are not uninformative. for example, the best classifier for ecstasy consumption used age, ss, and gender and had sensitivity 76.17% and specificity 77.16%. there exists a dt for the prediction of usage of the same drug which utilizes age, edu., o, c, and ss with sensitivity 77.23% and specificity 75.22%, a dt with inputs age, edu., e, o, and a with sensitivity 73.24% and specificity 78.22%, and an advanced classifier with inputs age, edu., n, e, o, c, imp., ss, and gender with sensitivity 75.63% and specificity 75.75%. this means that for evaluating the risk of ecstasy usage all input attributes are informative, but the required information can be extracted from a subset of attributes. we demonstrated that there are three groups of drugs with strongly correlated consumption. that is, drug usage has a `modular structure'. the idea of merging correlated attributes into `modules' is popular in biology. the modules are called `correlation pleiades' (see section `pleiades of drugs'). the modular structure contains three modules: the heroin pleiad, the ecstasy pleiad, and the benzodiazepines pleiad:
* the _heroin pleiad (heroinpl)_ includes crack, cocaine, methadone, and heroin.
* the _ecstasy pleiad (ecstasypl)_ includes amphetamines, cannabis, cocaine, ketamine, lsd, magic mushrooms, legal highs, and ecstasy.
* the _benzodiazepines pleiad (benzopl)_ contains methadone, amphetamines, cocaine, and benzodiazepines.
the modular structure is clearly reflected in the correlation graph, fig [strongdrugusfig:5]. we define groups of users and non-users for each pleiad. the classes of users and non-users in the database are imbalanced for most of the individual drugs (see table [tab:1]), but merging the users of all drugs into one class `drug users' does not seem to be the best solution because of the physiological, psychological and cultural differences between the usage of different drugs. we propose instead to use the correlation pleiades for the analysis of drug usage as a solution to the class imbalance problem, because for all three pleiades the classes of users and non-users are better balanced (table [tab:13a]) and the consumption of different drugs from the same pleiad is correlated. we applied the eight methods described in the `risk evaluation methods' section and selected the best one for each problem for all pleiades. the results of the classifier selection are presented in table [tab:12a] and the quality of the classification is high. the majority of the best classifiers for pleiades of drugs have better accuracy than the classifiers for individual drug usage (see tables [tab:12] and [tab:12a]). the best classifiers for pleiades of drugs use more input features than the best classifiers for the corresponding individual drugs. the classification problems for pleiades of drugs are more balanced. therefore, we can expect that the classifiers for pleiades are more robust than the classifiers for individual drugs. the user/non-user classifiers can also be used to form risk maps. risk maps are useful tools for data visualisation and hypothesis generation. these results are important as they examine the question of the relationship between drug use and personality comprehensively and engage the challenge of untangling correlated personality traits (the ffm, impulsivity, and sensation seeking) and clusters of substance misuse (the correlation pleiades). the work acknowledged the breadth of a common behaviour which may be transient and leave no impact, or may significantly harm an individual. we examined drug use behaviour comprehensively in terms of the many kinds of substances that may be used (from the legal and anodyne to the deeply harmful), as well as the possibility of behavioural over-claiming. we built into our study the wide temporality of the behaviour, indicative of the chronicity of behaviour and of trends and fashions (e.g.
the greater use of lsd in the 1960s and 1970s, the rise of ecstasy in the 1980s, some persons being one-off experimenters with recreational drugs, and others using recreational substances on a daily basis). we defined substance use in terms of behaviour rather than legality, as legislation in the field is variable. our data were gathered before `legal highs' emerged as a health concern, so we did not differentiate, for example, synthetic cannabinoids and cathinone-based stimulants; these substances have since been widely made illegal. we were nevertheless able to accurately classify users of these substances (reciprocally, our data were gathered before cannabis decriminalisation in parts of north america, but again, we were able to accurately classify cannabis users). we included control participants who had never used these substances, those who had used them in the distant past, up to and including persons who had used the drug in the past day, avoiding the procrustean data-gathering and classifying methods which may occlude an accurate picture of drug use behaviour and risk. such rich data and the complex methods used for analysis necessitated a large and substantial sample. the study was atheoretical regarding the morality of the behaviour, and did not medicalise or pathologise participants, optimising engagement by persons with heterogeneous drug-use histories. our study used a rigorous range of data-mining methods beyond those typically used in studies examining the association of drug use and personality in the psychological and psychiatric literature, revealing that decision tree methods were most commonly effective for classifying drug users. we found that high n, low a, and low c are the most common personality correlates of drug use; the inverse combination of these traits is sometimes seen as an indication of the higher-order trait of stability and of behavioural conformity, while the combination itself is associated with the externalisation of distress. low stability is also a marker of negative urgency, whereby persons act rashly when distressed. our research points to the importance of individuals acquiring emotional self-management skills anteceding distress as a means to reduce self-medicating drug-using behaviour, and the risk to health that injudicious or chronic drug use may cause.
s1 appendix. *mean t-scores for groups of users and non-users.*
in this section we present mean t-scores for groups of users and non-users for the decade-, year-, month-, and week-based user definitions respectively. the column `p-value' assesses the significance of the differences of the mean scores for the groups of users and non-users: it is the probability of observing by chance the same or greater differences of the mean scores if both groups have the same mean. the rows `#' contain the numbers of users and non-users for the drugs.
[appendix tables: for each drug and for each of the decade-, year-, month-, and week-based user definitions, the mean t-scores of n, e, o, a and c with 95% confidence intervals for the mean are listed for users and for non-users, together with the p-value for the difference of the group means.]
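for illustration, the comparison reported in each row of these tables could be reproduced along the following lines; the choice of welch's two-sample t-test is an assumption (the excerpt does not state which test the authors used), and the helper names are hypothetical.

```python
import numpy as np
from scipy import stats

def group_summary(scores):
    """mean and 95% confidence interval of the mean for one group of t-scores."""
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    sem = stats.sem(scores)
    lo, hi = stats.t.interval(0.95, df=len(scores) - 1, loc=mean, scale=sem)
    return mean, (lo, hi)

def compare_users_nonusers(users, nonusers):
    """one row of the appendix table for a single trait: group means, cis, and a
    p-value for the difference of means (welch's t-test used here as a stand-in)."""
    p_value = stats.ttest_ind(users, nonusers, equal_var=False).pvalue
    return group_summary(users), group_summary(nonusers), p_value
```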
who. prevention of mental disorders: effective interventions and policy options: summary report. geneva: world health organization; 2004. available: http://www.who.int/mental_health/evidence/en/prevention_of_mental_disorders_sr.pdf
beaglehole r, bonita r, horton r, adams c, alleyne g, asaria p, baugh v, bekedam h, billo n, casswell s, et al. priority actions for the non-communicable disease crisis. the lancet. 2011; 377(9775):1438-1447.
bickel wk, johnson mw, koffarnus mn, mackillop j, murphy jg. the behavioral economics of substance use disorders: reinforcement pathologies and their repair. annual review of clinical psychology. 2014; 10:641-677.
roncero c, daigre c, barral c, ros-cucurull e, grau-lópez l, rodríguez-cintas l, tarifa n, casas m, valero s. neuroticism associated with cocaine-induced psychosis in cocaine-dependent patients: a cross-sectional observational study. plos one. 2014; 9(9):e106111.
flory k, lynam d, milich r, leukefeld c, clayton r. the relations among personality, symptoms of alcohol and marijuana abuse, and symptoms of comorbid psychopathology: results from a community sample. experimental and clinical psychopharmacology. 2002; 10(4):425-434.
andreassen cs, griffiths md, gjertsen sr, krossbakken e, kvam s, pallesen s. the relationships between behavioral addictions and the five-factor model of personality. journal of behavioral addictions. 2013; 2(2):90-99.
turiano na, whiteman sd, hampson se, roberts bw, mroczek dk. personality and substance use in midlife: conscientiousness as a moderator and the effects of trait change. journal of research in personality. 2012; 46(3):295-305.
haider ah, edwin dh, mackenzie ej, bosse mj, castillo rc, travison tg, group ls, et al. the use of the neo-five factor inventory to assess personality in trauma patients: a two-year prospective study. journal of orthopaedic trauma. 2002; 16(9):660-667.
kopstein an, crum rm, celentano dd, martin ss. sensation seeking needs among 8th and 11th graders: characteristics associated with cigarette and marijuana use. drug and alcohol dependence. 2001; 62(3):195-203.
yasnitskiy l, gratsilev v, kulyashova j, cherepanov f. possibilities of artificial intellect in detection of predisposition to drug addiction. perm university herald series "philosophy psychology sociology". 2015; 1(21):61-73.
valero s, daigre c, rodríguez-cintas l, barral c, gomà-i-freixanet m, ferrer m, casas m, roncero c. neuroticism and impulsivity: their hierarchical organization in the personality characterization of drug-dependent patients from a decision tree learning perspective. comprehensive psychiatry. 2014; 55(5):1227-1233.
bulut f, bucak i. an urgent precaution system to detect students at risk of substance abuse through classification algorithms. turkish journal of electrical engineering & computer sciences. 2014; 22(3):690-707.
rioux c, castellanos-ryan n, parent s, séguin jr. the interaction between temperament and the family environment in adolescent substance use and externalizing behaviors: support for diathesis stress or differential susceptibility? developmental review. 2016; 40:117-150.
weissman dg, schriber ra, fassbender c, atherton o, krafft c, robins rw, hastings pd, guyer ae. earlier adolescent substance use onset predicts stronger connectivity between reward and cognitive control brain networks. developmental cognitive neuroscience. 2015; 16:121-129.
fehrman e, egan v. drug consumption, collected online march 2011 to march 2012, english-speaking countries. icpsr36536-v1. ann arbor, mi: inter-university consortium for political and social research [distributor], 2016-09-09. deposited by mirkes e. http://doi.org/10.3886/icpsr36536.v1
stanford ms, mathias cw, dougherty dm, lake sl, anderson ne, patton jh. fifty years of the barratt impulsiveness scale: an update and review. personality and individual differences. 2009; 47(5):385-395.
egan v, deary i, austin e. the neo-ffi: emerging british norms and an item-level analysis suggest n, a and c are more reliable than o and e. personality and individual differences. 2000; 29(5):907-920.
armbruster ws, di stilio vs, tuxill jd, flores tc, runk jl. covariance and decoupling of floral and vegetative traits in nine neotropical plants: a re-evaluation of berg's correlation-pleiades concept. american journal of botany. 1999; 86(1):39-55.
omote h, sugiyama k. method for drawing intersecting clustered graphs and its application to web ontology language. in: proceedings of the 2006 asia-pacific symposium on information visualisation - volume 60; 2006. pp. 89-92. australian computer society, inc. available: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.294.2209&rep=rep1&type=pdf
hoare j, moon d. (eds.) drug misuse declared: findings from the 2009/10 british crime survey. home office statistical bulletin 13/10; 2010. available: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/116321/hosb1310.pdf
settles re, fischer s, cyders ma, combs jl, gunn rl, smith gt. negative urgency: a personality predictor of externalizing behavior characterized by neuroticism, low conscientiousness, and disagreeableness. journal of abnormal psychology. 2012; 121(1):160-172.
negative urgency : a personality predictor of externalizing behavior characterized by neuroticism , low conscientiousness , and disagreeableness .journal of abnormal psychology . 2012; 121(1):160172 .garca - montes jm , zaldvar - basurto f , lpez - ros f , molina - moreno a. the role of personality variables in drug abuse in a spanish university population .international journal of mental health and addiction . 2009 ; 7(3):475487 .fossati p , ergis am , allilaire jf .problem - solving abilities in unipolar depressed patients : comparison of performance on the modified version of the wisconsin and the california sorting tests . psychiatry research . 2001 ; 104(2):145156 . mcdaniel sr , mahan je .an examination of the impss scale as a valid and reliable alternative to the sss - v in optimum stimulation level research . personality and individual differences . 2008 ; 44(7):15281538 .lee sy , poon wy , bentler pm . a two - stage estimation of structural equation models with continuous and polytomous variables .british journal of mathematical and statistical psychology .1995 ; 48(2):339358 .martinson eo , hamdan ma .maximum likelihood and some other asymptotically efficient estimators of correlation in two way contingency tables .journal of statistical computation and simulation . 1971 ; 1(1):4554 .gorban an , zinovyev ay . principal graphs and manifolds . in : olivas es , guerrero jdm , sober mm , benedito jrm , lpez ajs , editors .handbook of research on machine learning applications and trends : algorithms , methods , and techniques .hershey new york .igi global ; 2009 .2859 .naikal n , yang ay , sastry ss .informative feature selection for object recognition via sparse pca . in : computer vision( iccv ) , 2011 ieee international conference on , ieee ; 2011 .doi : 10.1109/iccv.2011.6126321 clarkson kl .nearest - neighbor searching and metric space dimensions . in : shakhnarovich g , darrell t , indyk p , editors .nearest - neighbor methods for learning and vision : theory and practice .mit press ; 2006 .1559 .sofeikov ki , tyukin iy , gorban an , mirkes em , prokhorov dv , romanenko iv .learning optimization for decision tree classification of non - categorical data with information gain impurity criterion . in : neural networks ( ijcnn )joint confe . on , ieee ; 2014 .35483555 .dietterich t , kearns m , mansour y. applying the weak learning framework to understand and improve c4.5 . in : icml , proc .of the 13th int .conf . on machine learning .san francisco : morgan kaufmann ; 1996 .. 96104 .mirkes em , alexandrakis i , slater k , tuli r , gorban an .computational diagnosis of canine lymphoma .journal of physics : conference series . 2014; 490(1):012135 .available : http://stacks.iop.org/1742-6596/490/i=1/a=012135 .
the problem of evaluating an individual's risk of drug consumption and misuse is highly important . an online survey methodology was employed to collect data including big five personality traits ( neo - ffi - r ) , impulsivity ( bis-11 ) , sensation seeking ( impss ) , and demographic information . the data set contained information on the consumption of 18 central nervous system psychoactive drugs . correlation analysis demonstrated the existence of groups of drugs with strongly correlated consumption patterns . three correlation pleiades were identified , named after the central drug in each pleiad : the ecstasy , heroin , and benzodiazepines pleiades . an exhaustive search was performed to select the most effective subset of input features and data mining methods to classify users and non - users for each drug and pleiad . a number of classification methods were employed ( decision tree , random forest , k - nearest neighbors , linear discriminant analysis , gaussian mixture , probability density function estimation , logistic regression and naive bayes ) and the most effective classifier was selected for each drug . the quality of classification was surprisingly high , with sensitivity and specificity ( evaluated by leave - one - out cross - validation ) greater than 70% for almost all classification tasks . the best results , with sensitivity and specificity greater than 75% , were achieved for cannabis , crack , ecstasy , legal highs , lsd , and volatile substance abuse ( vsa ) .
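as a rough illustration of the classification protocol summarized above ( a classifier per drug , scored by leave - one - out sensitivity and specificity ) , the sketch below evaluates a single classifier on synthetic data ; the features , labels and the choice of logistic regression are illustrative assumptions , not the study's actual pipeline :

```python
# illustrative sketch (not the study's pipeline): leave-one-out evaluation of a
# user / non-user classifier, scored by sensitivity and specificity.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))      # placeholder personality / demographic features
y = rng.integers(0, 2, size=200)   # placeholder labels: 1 = user, 0 = non-user

pred = np.empty_like(y)
for train, test in LeaveOneOut().split(X):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    pred[test] = clf.predict(X[test])

tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```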
to monitor high rate beam fluence , devices based on gas ionization chambers can be used . in order to obtain a 2d profile of radiation dose or fluence it is necessary to have a highly segmented anode . this represents a major problem when the size of the chamber is large compared with the anode segmentation pitch . in the present work we describe a new and simple solution developed to give a real two dimensional readout on a gas ionization chamber . the method consists of electronically scanning the detector active area in the direction perpendicular to the orientation of the readout strips ( the strips instrumented with the measuring electrometers ) . gated charge transfer is a well known method which has been used in different radiation detectors , for example in gated gas - filled time projection chambers in high energy physics . [ figure [ puerta ] : simulation of the electric field around the gate strips ; drift field in v / m , dimensions given in the original caption . ] the transfer of electrons drifting from the detector volume to the collector electrode strips can be blocked by means of a local electric field that dominates the drift field in the proximity of the gate electrode strips ( see simulation in figure [ puerta ] ) . in this way we provide a single stage control of the charge transfer from the gas drift region to the collector electrodes . in this work we profit from the great development of micro pattern gas detectors ( mpgd ) in recent years , especially the advances in kapton etching techniques . in our case we integrated a gating grid using a two metal layer circuit based on a kapton foil glued on a fr4 epoxy substrate to provide mechanical rigidity . the use of a 25 μm thick kapton foil to produce a two level readout circuit allows the use of a small gating voltage . to design and optimize the geometry of the gate and collector electrodes we simulated the electric field map of a transfer gap cell with the maxwell3d and matlab programs . for the final design a 1:2:1 ratio was chosen , corresponding to the transfer gap height : gate spacing : gate width . with this geometry we expected to have a low electron transfer transparency ( well below 1% ) using a relatively low positive ( + 5v ) _ `` closed gate '' _ voltage and moderate drift fields ( v / m ) . a wider gate strip spacing would increase the electron transparency for a given negative _ `` open gate '' _ voltage , but higher positive voltages would also be needed to block the electron drift efficiently . on the other hand , the local electric field present around the gate strips , either in open or closed mode , should not affect significantly the electric field in the drift region . this is another important reason to use a small transfer gap between the gate and collector strips . the readout circuit layout of this prototype is described in figure [ esquema ] . we used a 25 μm thick kapton foil metal coated on both sides with 5 μm copper . after etching of the copper , we obtained 35 μm wide strips on the gate plane and 1 mm wide strips on the collector plane . the foil was glued on a 1 mm fr4 epoxy substrate and subsequently the kapton in the gap between gate strips was removed , obtaining the micro - pattern of figure [ esquema ] .
in order to provide a simple connection procedure , a 64 pin 1/10 inch pitch standard connector was included in the board edge design for both gate and collector strips . this choice implies a 1.27 mm pitch in the detector active area . the collector strips were thus made 1 mm wide with a 1.27 mm pitch , while 17 gate strips were joined into a group at the detector edge to achieve the same effective pitch . the drift gas gap was built using a 6 mm high g10 frame glued on the readout board , and this volume was closed with a drift electrode made of 200 μm thick g10 copper clad on one side . [ figure [ conmutacion ] : measured electron transparency as a function of the gate voltage ; drift field in v / m . ] the experimental transparency for electrons obtained as a function of the gate voltage is shown in figure [ conmutacion ] . these measurements were done using pure ar at atmospheric pressure , and correspond to the ratio of the ionization current under x ray irradiation measured at the collector and at the drift electrode . the plot shows that very small transparency values can be obtained for low positive voltages in _ `` closed mode''_. the transparency increases when negative voltages are applied to the gating electrodes , showing a transition region centered on -15v , and can reach values very close to 1 ( _ `` open mode '' _ ) when the gate voltage is below -20v . if the drift field is increased , the gate voltage also has to be increased proportionally to maintain the same transparency value . the fraction of current produced by electrons arriving at the gate electrode is the complementary value of the transparency calculated for the collector . we have chosen -17v as the _ open gate _ voltage , as this value can be commutated using standard cmos analogue switches . if we consider the width of the gate strips ( 35 μm in figure [ esquema ] ) and the corresponding space between gate strips ( 40 μm in figure [ esquema ] ) , the expected electron transparency is equal to an optical limit set by the gate and drift voltages , the height of the gate strips over the collector electrodes , and the gas gap below the drift electrode . the measured maximum transparency deviates significantly from this expectation due to the trapezoidal etching of the kapton layer , which partially covers the collector electrodes in the space between gate strips . to demonstrate the working principle of our position sensitive charge transfer scan readout method , a small area of the prototype was instrumented with charge integration electronics . to drive the detector , two separate electronic boards were used : one with the commutation circuit to sequentially commutate from closed to open gate voltage on the gate strips , and another board with the electronic readout channels needed to integrate the charge collected by each individual collector strip ( integrators board ) . to integrate the collector current we used a precision switched ivc102 integrating amplifier ( with an internal feedback capacitor set to 10 pf ) .
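as a small numerical illustration of the transparency measurement described above ( the ratio of the collector current to the drift - electrode current at each gate voltage ) , the sketch below tabulates that ratio ; all voltage and current values are made - up placeholders rather than measured data :

```python
# hypothetical numbers only: electron transparency as the ratio of the collector
# current to the drift-electrode current, tabulated against the gate voltage.
import numpy as np

gate_voltage = np.array([+5.0, 0.0, -10.0, -15.0, -20.0, -25.0])  # volts (placeholders)
i_collector  = np.array([0.005, 0.02, 0.9, 3.5, 6.8, 6.9])        # pA (placeholders)
i_drift      = np.full(6, 7.0)                                    # pA (placeholders)

transparency = i_collector / i_drift
for v, t in zip(gate_voltage, transparency):
    print(f"v_gate = {v:+6.1f} v   transparency = {t:.3f}")
```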
to sequentially change the voltage value applied to each gate strip , a shift register was used , built from daisy chained d type flip flops ( hef40174b ) whose outputs were connected to double analogue switches ( ad7592di ) . these analogue switches commute from the closed gate voltage value ( which keeps the electron transparency at its lowest values ) to a negative open gate voltage value giving a high electron transparency for the selected gate strip ( typically % in our setup ) . a scheme of the readout circuit is shown in figure [ esquema2 ] . for the synchronous control of the two electronic boards we used a personal computer pci embedded card . the first prototype was instrumented with a total number of 96 effective pixels ( on an active area of 15.2 x 10.2 mm ) , corresponding to 12 gate channels and 8 collector channels . the total capacitance of the detector between the gate and collector planes is 7 nf , meaning a 2 pf capacitance per pixel . this gate - collector capacitance , together with the 3 kohm impedance of the ivc102 ( with switches s1+s2 closed ) , sets the transient time during commutation . this dead time is small compared with the typical integration times used ( of the order of milliseconds ) . the total time required to obtain an image is equal to the integration time needed to integrate the charge transferred by each gate strip times the number of gate strips . figures [ ranura ] and [ anillo ] show two x ray images obtained with this prototype using a chromium x ray tube : a 1.5 mm slit between two 5 mm thick aluminum plates and a 5 mm screw nut were illuminated . the closed and open gate voltage values used were + 5v and -17v respectively , and the drift field applied in the image was 1.7 v / m in pure ar . to obtain these images a scan readout cycle time of less than 10 seconds was required , with a collector current value of 7 pa in the pixels with maximum signal . the charge transparency value when the gate electrode is set to the _ closed _ state should ideally be zero , but the real value differs from zero by a small amount ( 0.1% ) . this causes a leakage current , with a value proportional to the detector area , that can seriously distort the image . nevertheless , if the leakage current per effective pixel is small we can correct this effect by what we have called _ differential readout_. the correction term is measured by integrating the detector current during the same time interval used for the standard readout , with all the gate electrodes _ closed _ . [ figure : prototype irradiated with soft x rays . ] this value is then subtracted as a pedestal from the values obtained with the gate electrodes in the _ opened _ state . in this way the image will not be dramatically affected by the non zero transparency value at the _ closed _ gate state . we have proved the working principle of a simple and reliable readout method for 2d position sensitive gas ionization detectors . this two dimensional charge transfer readout solution allows large detector areas to be covered , giving a high number of effective pixels while minimizing the number of readout electronic channels . by using a kapton insulated two layer readout circuit , charge transfer does not affect the drift field and the control of the charge transfer process can be done with very low voltage values . the detector can be used not only with gases but also with other photo - conductive media like non polar liquids ( isooctane or tetramethylsilane ) .
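the scan bookkeeping and the differential readout correction described above amount to a few lines of arithmetic . in the sketch below the strip counts match the prototype , while the integration time and the simulated charges are placeholders :

```python
# sketch of the scan bookkeeping and the differential-readout correction described
# above; strip counts match the prototype, the other numbers are placeholders.
import numpy as np

n_gate_strips = 12        # gate channels of the prototype
n_collector   = 8         # collector channels of the prototype
t_integration = 0.8       # seconds integrated per gate strip (placeholder)

print("scan cycle time:", n_gate_strips * t_integration, "s")

rng = np.random.default_rng(1)
image    = rng.uniform(0.0, 7.0, size=(n_gate_strips, n_collector))  # open-gate charge per pixel
pedestal = rng.uniform(0.0, 0.01, size=n_collector)                  # closed-gate leakage per collector strip

corrected = image - pedestal   # differential readout: subtract the closed-gate pedestal
```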
for low beam intensity or fast readout cycle applications , an intermediate gas avalanche device , like a gas electron multiplier , can be added to improve the sensitivity . considering moderate gas gains of around 100 , an image could be read out with a cycle period 100 times faster than with an equivalent ionization chamber , leading to a fast beam imaging device . we are grateful to manuel sanchez from the cern est / dem group for his permanent technical support and collaboration .
we have constructed and tested a 2d position sensitive parallel - plate gas ionization chamber with scanned charge transfer readout . the scan readout method described here is based on the development of a new position dependent charge transfer technique . it has been implemented by using gate strips oriented perpendicularly to the collector strips . this solution considerably reduces the number of electronic readout channels needed to cover large detector areas . the use of a 25 μm thick etched kapton circuit allows high charge transfer efficiency with a low gating voltage , consequently needing only a very simple commutating circuit . the present prototype covers 8 with a pixel size of mm . depending on the intended use and beam characteristics , a smaller effective pixel is feasible and larger active areas are possible . this detector can be used for x ray or other continuous beam intensity profile monitoring . keywords : gas ionization chamber , position sensitive detector , scan readout . pacs : 29.40.cs , 29.40.gx , 07.85.fv
in human societies , cooperation is essential for the maintenance of public goods .however , the collapse of cooperation happens often in many public goods dilemmas which we nowadays face , like protecting the global climate or avoiding overfishing of our oceans . for avoiding the tragedy of the commons , we often rely on institutions to enforce public cooperation and acceptable behavior .although institutionalized punishment appears to be more common than institutionalized reward , both concepts are in use throughout the world .recently , ample research efforts have been devoted to the study of the emergence of institutions and their effectiveness in promoting prosocial behavior .it has been shown , for example , that institutional rewarding promotes the evolution of cooperation in the liner public goods game , the nonlinear public goods games , and in structured populations in general .however , institutional punishment is less costly and thus more effective to warrant a given level of public cooperation , especially if participation in the public goods game is optional . besides the obviousstick versus carrot dilemma , the question emerges how to best make use of the available resources , which inevitably are finite . in particular , we wish to make optimal use of different forms of reciprocity to promote human cooperation .one plausible approach appears to be allocating the resources depending on the properties of the interaction network that describes the connections among us .surprisingly , few studies have thus far considered the problem of the optimal allocation of incentives for maximizing public cooperation .traditionally , all groups and all individuals are considered equal , and depending on their strategies thus deserved of the same reward or punishment .this simple assumption , however , does not agree with the fact that in social networks individuals have different roles , which depend significantly on the degree of the node that they occupy . indeed , the prominent role of heterogeneous interaction networks for the successful evolution of cooperation is firmly established and well known , and the reasonable assumption is that the incentives should likely also be distributed accordingly for optimal evolutionary outcomes . in the framework of evolutionary graph theorythe interaction groups are diverse , and naturally thus the provided incentives within each group should also be different .the number of links an individual player has is traditionally assumed to be a good proxy for that player s influence and importance . in this sense , it is interesting and highly relevant to determine how to distribute the incentives in the light of this heterogeneity .here we consider the spatial public goods game with institutional reciprocity on a scale - free network , where the assumption is that the incentives at disposal for rewarding cooperators and punishing defectors are limited .we assume that the budget available to each group is proportional to its size , and that the distribution of the incentives depends on the number of individual links within the group .our aim is to arrive at a thorough understanding of how the incentives should be best distributed to maximize public cooperation . in what follows , we present the results obtained with the model described in the methods section , to where we refer for details .as we will show , if the enhancement factor is small , the level of cooperation can be maximized simply by adopting the simplest `` equal distribution '' scheme . 
if the value of is large , however , it is best to reward high - degree nodes more than low - degree nodes . unlike for institutionalized rewards ,the optimal distribution of resources within the framework of institutional punishment depends on whether absolute or degree - normalized payoffs are used .high - degree nodes should be punished more lenient than low - degree nodes if degree - normalized payoffs apply , while high - degree nodes should be punished stronger than low - degree nodes if absolute payoffs count .we perform monte carlo simulations of the public goods game described in the methods section , whereby we consider separately institutional rewarding with absolute payoffs and institutional punishment with absolute payoffs , as well as institutional rewarding with degree - normalized payoffs and institutional punishment with degree - normalized payoffs . as the key parameters we consider the enhancement factor , the average amount of available incentives , and the distribution strength of incentives ( see methods for details ) .we determine the stationary fractions of cooperators in the stationary state on networks comprising to nodes .the final results are averaged over independent initial conditions to further enhance accuracy .figure [ fig1 ] shows the stationary fraction of cooperators for two different values of the enhancement factor . when the enhancement factor is small [ fig .[ fig1](a ) and ( c ) ] , defectors always dominate if , and this regardless of the value of . for intermediate values of ,the cooperation level can be maximized at an intermediate value of the distribution strength , which ought to be close to zero .this indicates that an equal distribution of positive incentives , regardless of the degree of players within the group , is the optimal distribution scheme for public cooperation . for high values of , the cooperation level increases with increasing the value of .if the enhancement factor increases [ fig .[ fig1](b ) and ( d ) ] , defectors still dominate for small values of and regardless of the value of .however , the nonmonotonous dependence of the fraction of cooperators on the distribution strength disappears for intermediate values of . instead , the highest cooperation level is attainable for large values of . intuitively , it is possible to understand that when the enhancement factor is small , a modest positive incentive is not enough to reverse the doom of cooperators , no matter which distribution scheme is used .conversely , if the incentives are large and targeted preferentially towards influential players , they can have a high payoff even if the part stemming from the public goods game is small . in agreement with the traditional argument of network reciprocity , only cooperators are able to forge a long - term advantage out of this favorable situation and build sizable cooperative groups .thus , for high - enough values of , which favor the distribution of rewards towards high - degree nodes , the evolution of public cooperation is successful . 
to gain an understanding of the optimal intermediate value of the average amount of available incentives requires more effortfirst , we show in fig .[ fig2 ] the payoff differences between cooperators and defectors as well as the fraction of cooperators as a function of degree during different typical evolutionary stages of the game .we observe that for , which implies an `` equal distribution '' scheme irrespective of the degree of players , the payoff of cooperators is higher than that of the corresponding defectors , and this regardless of .thus , cooperators can successfully occupy all nodes of the networks .in contrast , for negative values of , the payoff of cooperators with high or middle degree is less than that of the corresponding defectors , while cooperators with low degree have a higher mean payoff than defectors with small degree . because there are interconnections among different types of nodes , and because the fermi strategy updating rule is adopted , cooperators can coexist with defectors at equilibrium . but defectors can occupy most of the nodes in the network , since low - degree cooperators do not obtain a high enough payoff to spread their strategy across the network . for positive values of , the payoff of cooperators with high and middle degree is larger than that of the corresponding defectors , while cooperators with low degree have a lower mean payoff than defectors with low degree .in addition , high - degree cooperators obtain a sufficiently high payoff through institutional rewarding that enables them to spread the cooperative strategy also towards some of the auxiliary low - degree nodes .accordingly , cooperative behavior prevails over defection , but the stationary state is still a mixed phase cooperators are unable to dominate completely . to further corroborate our arguments ,we show in fig .[ fig3 ] the payoff difference between cooperators and defectors as well as the fraction of cooperators in dependence on degree , as obtained for a two times larger value of than used in fig .[ fig2 ] . in comparison, it can be observed that for the results remain unchanged . for ,on the other hand , the process of evolution is different from what we have presented in fig .[ fig2 ] . during the early stages of evolution [ fig .[ fig3 ] ( a ) and ( b ) ] , cooperators with low degree can have a lower mean payoff than low - degree defectors , while cooperators with middle and high degree can have a higher mean payoff than the corresponding defectors .further in time , cooperators succeed in occupying all high - degree nodes [ fig .[ fig3 ] ( g ) ] , and even low - degree cooperators have a payoff comparable to that of low - degree defectors [ fig . [ fig3 ] ( c ) ] .cooperators can eventually invade the whole network [ fig .[ fig3 ] ( h ) ] , thus giving rise to the absorbing phase at , which emerges for sufficiently large values of and intermediate values of .figure [ fig4 ] shows that when the enhancement factor is small , cooperators are unable to survive for small values of , and this irrespective of the value of . 
for intermediate values of ,the highest cooperation level is attained at an intermediate value of the distribution strength , which is almost equal to zero , like by the consideration of institutional reward in the preceding subsection .this indicates that , for small enhancement factors , in case of institutional punishment too an `` equal distribution '' scheme works best for the evolution of public cooperation .if is large if resources for punishment abound cooperators can always dominate , regardless of the value of . for a two times larger value or cooperators are favored even more , so that the nonmonotonous dependence of the cooperation level on at intermediate values of vanishes .instead , the fraction of cooperators simply increases with increasing values of .thus , the more the high - degree defectors are punished , the better for the evolution of public cooperation .it is understandable that low values of , paired with modest resources for punishing defectors , lead to the dominance of defectors , regardless of the value of .conversely , if the resources abound , defectors are punished severely and cooperators dominate . in this limit example the distribution of fines between low , middle and high degree nodes does not play an important role . if , however , the combination of values of and just barely , or not at all , support the survivability of cooperators , then the value of , and thus the particular distribution of incentives ( in this case fines ) , plays a significant role . with the aim to explain this nontrivial dependence on , we show in fig .[ fig5 ] the payoff difference between cooperators and defectors as well as the fraction of cooperators as a function of degree during different stages of the evolutionary process .it can be observed that for cooperators can always have a higher payoff than the defectors with the same corresponding degree [ fig .[ fig5 ] ( a)-(c ) ] .cooperators can thus rise to complete dominance . while for , however , low - degree cooperators can have a higher mean payoff than low - degree defectors , while cooperators with middle or high degree ca nt match the corresponding defectors during the early stages of the evolution [ fig .[ fig5 ] ( a ) ] .defectors can therefore , over time , occupy the high - degree and middle - degree nodes [ fig .[ fig5 ] ( f ) ] .this invasion can decrease the fraction of cooperators on low - degree nodes .accordingly , the payoff difference between cooperators and defectors on these nodes continues to be negative , although low - degree defectors receive the negative incentives .ultimately cooperators therefore die out .for , on the other hand , defectors with higher degree are punished preferentially they receive a bigger share of fines from the available fond than low - degree defectors . due to the small enhancement factor and the institutional punishment ,both high - degree cooperators and high - degree defectors have negative payoffs .in fact , low - degree players can have a higher payoff than high - degree players , despite of the fact that we use absolute payoffs in this particular case . 
either way, defectors can easily invade low - degree nodes [ fig .[ fig5](f ) ] , and they can spread further towards middle and high degree nodes , although at the beginning of evolution cooperators have a higher payoff than defectors on these nodes .the ultimate consequence is that defectors dominate completely [ fig .[ fig5 ] ( h ) ] .it remains of interest to explain why the dependence on disappears at intermediate values of for .for this purpose , we show in fig .[ fig6 ] the same quantities as in fig .[ fig5 ] , from where it follows that the results do not change for . however , for the differences are clearly inferable . for high - degree cooperators can obtain a positive payoff , and naturally they then have a higher payoff than high - degree defectors , because the latter receive ample negative incentives from the institutional punishment pool [ fig .[ fig6 ] ( a)-(c ) ] .cooperators can therefore occupy high - degree nodes and from there spread across the whole population [ fig .[ fig6 ] ( g ) ] , and this the more effectively the higher the value of .from here onwards we turn to considering degree - normalized payoffs , which can have important negative consequences for the evolution of public cooperation in heterogeneous environments if compared to absolute payoffs . as shown in fig .[ fig7 ] , the fraction of cooperators unsurprisingly increases with increasing for various distribution strength ( top panels ) .furthermore , at small values of , for small values of defectors always dominate , regardless of the value of , while for intermediate values of cooperators recover gradually as increases . for high valuesthere exists an intermediate close - to - zero value of that maximizes the stationary fraction of cooperators [ fig .[ fig7](a ) and ( d ) ] . when the enhancement factor is larger , the extent of the parameter region where the nonmonotonous phenomenon can be observed decreased . instead , for high values of , the cooperation level increases with increasing values of and the area of complete cooperator dominance increases as well [ fig . [ fig7](e ) and( f ) ] . if comparing the results presented in fig .[ fig7 ] with those presented in fig .[ fig1 ] , we find that the nonmonotonous phenomenon still exists , and it can appear even at larger values of because of the consideration of degree - normalized payoffs . in general, however , the explanation of these results and the evolutionary mechanisms behind are the same as those described when considering institutional rewarding with absolute payoffs .lastly , we consider institutional punishment with degree - normalized payoffs . from the results presented in fig . [ fig8 ]it follows that the stationary fraction of cooperators increases with increasing values of , and this regardless of the value of ( top panels ) .when the enhancement factor is small , we can see that the nonmonotonous dependence of the fraction of cooperators on exists at intermediate values of the average incentive [ fig .[ fig8](d ) ] .when the enhancement factor increases , this phenomenon still exists , but the extent of the parameter region where the nonmonotonous dependence can be observed decreases [ fig . [ fig8](e ) ] .surprisingly , when the enhancement factor increases further , the nonmonotonous dependence disappears .instead , in a narrow region of intermediate values , the fraction of cooperators decreases with increasing values of . 
at the same time , the extent of the full cooperation area increases while the full defection region decreases in the considered parameter space [ fig .[ fig8](f ) ] .while the underlying mechanism for the nonmonotonous dependence on is qualitatively identical to that reported before when considering institutional punishment with absolute payoffs , the decrease of the level of cooperation at intermediate values of and as increases requires special attention .we note that when the value of is large , low - degree cooperators can still have a positive payoff . for negative , these low - degree cooperators can even have the highest payoffs because of the consideration of degree - normalized payoffs and sufficiently high values of to weigh heavily on the defectors .therefore , cooperators dominate on all low - degree nodes and from there spread further across the whole network and rise to dominance .this atypical spreading is a unique consequence of the consideration of the optimal distribution of negative incentives from the punishment pool , and it highlights the importance of the parameter . for positive values of , because most of the negative incentives are then assigned to high - degree defectors and there are only a few of those in the entire population , the majority of low - degree defectors is not punished at all .the previously described spreading of cooperators from the low - degree nodes outwards is therefore impaired , which ultimately results in an overall lower stationary fraction of cooperators . instead of cooperation , for larger values of low - degree nodes `` emit '' defection throughout the population .to summarize , we have studied how to best distribute limited institutional incentives in order to maximize public cooperation on scale - free networks .we have considered both institutional rewarding of cooperators and institutional punishment of defectors , and we have also distinguished between absolute and degree - normalized payoffs . our key assumptions was that , since in heterogeneous environments players have a different number of partners , the incentives ought to be distributed by taking this into account. this would be in agreement with the established importance of degree heterogeneity for cooperation in evolutionary games .traditionally , however , previous research has considered the limited budged be distributed equally among all the potential recipients of the incentives , irrespective of the players status and influence within the network .accordingly , how to distribute the incentives to optimize public cooperation was an important open problem . we have found interesting solutions on how to optimally distribute the incentives based on each player s social influence level , the proxy for which are the number of social ties the players have within the interaction network . we have shown that sharing the incentives equally among all regardless of status is optimal only if the social dilemma is strong and the propensity to contribute to the common pool is thus weak , and if in addition the available amount of incentives is intermediate . this result is valid for both institutional punishment and institutional rewarding , and it does not depend on whether absolute or degree - normalized payoffs count towards evolutionary fitness . 
however , if the environment already favors cooperative behavior when the public goods game is characterized by a high enhancement factor then it is best to reward influential players more than low - degree players , and this regardless of whether absolute or degree - normalized payoffs apply . for institutional punishment , on the other hand , the solution of the optimization problem depends on whether absolute or degree - normalized payoffs are used . we have shown that degree - normalized payoffs require high - degree nodes to be punished more leniently than low - degree nodes , while if absolute payoffs count , then high - degree nodes should be punished more strongly than low - degree nodes . in general , rewarding influential cooperators strongly and punishing auxiliary defectors leniently appears to be optimal for the successful evolution of public cooperation . in terms of solving actual common goods problems , our work might have merit in situations with strong diversity in roles and group sizes . one representative example of such a situation is climate change governance , where existing research has shown that local institutions are an effective way to promote the emergence of widespread cooperation . since our results are derived not only from local institutions , but take into account also the heterogeneous interaction environment , they could offer further advice on how to arrive at globally acceptable climate policies . while the evolution of institutions remains a puzzle , their importance for enforcing socially acceptable behavior in human societies can hardly be overstated . although institutionalized punishment appears to be prevailing , recent research concerning the effectiveness of punishment , for example related to antisocial punishment , reciprocity , and reward , is questioning the aptness of sanctioning for elevating collaborative efforts and raising social welfare .
indeed , although the majority of previous studies addressing the `` stick versus carrot '' dilemma concluded that punishment is more effective than reward in sustaining public cooperation , evidence suggesting that rewards may be as effective as punishment and lead to higher total earnings without potential damage to reputation or fear from retaliation is mounting . in particular , rand and nowak argue convincingly that healthy levels of cooperation are likelier to be achieved through less destructive means . we hope that our study will prove to be inspirational for further research aimed at discerning the importance of positive and negative reciprocity for human cooperation , as well as for looking closely at their correlated effects . we consider the evolutionary public goods game on the barabási - albert scale - free network . each player occupies one node of the network , and it can choose between cooperation and defection as the two competing strategies . to each public goods game cooperators contribute the cost , while defectors contribute nothing . the payoff of player who is a member of the group , which is centered on player , depends on the size of the group ( here is also the degree of node ) , on the number of cooperators in the group , and on the enhancement factor . in addition to the payoffs stemming from the public goods game , each group receives institutional incentives to be used either for rewarding cooperators or for punishing defectors , where is the average amount of available incentives . when the incentives are used for rewarding , a cooperator with degree that is a member of the group receives the payoff , while a defector in the same group receives , where is the distribution strength . according to the definition of the payoffs , for high - degree nodes obtain larger rewards than low - degree nodes , while for low - degree nodes receive a larger share from the incentive pool . if the incentives are used for punishing defectors rather than rewarding cooperators , then a cooperator with degree that is a member of the group receives , while a defector in the same group receives . as with institutionalized rewarding , here implies that high - degree nodes are punished more strongly than low - degree nodes , and vice versa for . each player participates in public goods games , which are staged in the groups centered on player itself and on its neighbors , respectively . the total payoff of player is thus the sum of the payoffs from all these games . after playing the games , a player is allowed to learn from one of its randomly chosen neighbors and update its strategy accordingly . the probability of strategy change is given by the fermi function , if we assume that absolute payoffs are considered . however , previous research has also emphasized the importance of degree - normalized payoffs , in which case the same fermi rule is applied to the degree - normalized payoffs . we consider both absolute ( eq . [ eq.1 ] ) and degree - normalized ( eq . [ eq.2 ] ) payoffs to be representative for the evolutionary fitness of individual players . especially for institutional punishment , the solution of the considered optimization problem depends significantly on this difference .
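a minimal sketch of how the degree - dependent sharing of incentives and the fermi imitation rule could be implemented is given below . the power - law weighting of the group budget by degree , and the symbols a ( distribution strength ) and K ( noise ) , are assumptions made for illustration , since the paper's exact expressions are not reproduced in the text above :

```python
# assumed form, for illustration only: split a group's incentive budget among its
# cooperators in proportion to degree**a, and imitate a neighbour via the fermi rule.
import math
import random

def reward_shares(cooperator_degrees, budget, a):
    """cooperator with degree k receives budget * k**a / sum(k**a);
    a > 0 favours high-degree nodes, a < 0 low-degree nodes, a = 0 is equal sharing."""
    weights = [k ** a for k in cooperator_degrees]
    total = sum(weights)
    return [budget * w / total for w in weights]

def imitates(payoff_i, payoff_j, K=0.1):
    """fermi rule: player i adopts the strategy of neighbour j with this probability;
    K is the strategy-adoption noise (the value used in the paper is not shown here)."""
    return random.random() < 1.0 / (1.0 + math.exp((payoff_i - payoff_j) / K))

print(reward_shares([12, 4, 2], budget=3.0, a=1.0))   # most of the budget goes to the hub
print(imitates(1.2, 2.0))                             # likely True: the neighbour does better
```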
without losing generality we set the uncertainty in the strategy adoption process to , so that it is very likely that the better performing players will be imitated , although it is also possible that players will occasionally learn from those performing worse . this research was supported by the national natural science foundation of china ( grant 11161011 ) and the slovenian research agency ( grants j1 - 4055 and p5 - 0027 ) . x. c. would like to thank ulf dieckmann for helpful discussion . [ figure captions : ( i ) fraction of cooperators for different values of the distribution strength ; the bottom row depicts contour plots of the fraction of cooperators as a function of the incentive amount and the distribution strength , for two values of the enhancement factor . ( ii ) payoff difference between cooperators and defectors and fraction of cooperators as a function of degree for three typical values of the distribution strength , with institutional rewarding and absolute payoffs ; the insets show the mean payoff difference for low - degree and middle - degree nodes during the early stages of evolution . ( iii ) as ( i ) , for institutional punishment with absolute payoffs . ( iv ) as ( ii ) , for institutional punishment with absolute payoffs . ( v ) and ( vi ) as ( i ) , for degree - normalized payoffs with institutional rewarding and institutional punishment , respectively , for three values of the enhancement factor . ]
in the framework of evolutionary games with institutional reciprocity , limited incentives are at disposal for rewarding cooperators and punishing defectors . in the simplest case , it can be assumed that , depending on their strategies , all players receive equal incentives from the common pool . the question arises , however : what is the optimal distribution of institutional incentives ? how should we best reward and punish individuals for cooperation to thrive ? we study this problem for the public goods game on a scale - free network . we show that if the synergetic effects of group interactions are weak , the level of cooperation in the population can be maximized simply by adopting the simplest `` equal distribution '' scheme . if synergetic effects are strong , however , it is best to reward high - degree nodes more than low - degree nodes . these distribution schemes for institutional rewards are independent of payoff normalization . for institutional punishment , however , the same optimization problem is more complex , and its solution depends on whether absolute or degree - normalized payoffs are used . we find that degree - normalized payoffs require high - degree nodes to be punished more leniently than low - degree nodes . conversely , if absolute payoffs count , then high - degree nodes should be punished more strongly than low - degree nodes .
ions and electrons trapped in the earth s magnetic field may affect our technology and our daily lives in significant ways .energetic plasma particles may penetrate satellites and disable them temporarily or permanently .they can also pose serious health hazards for astronauts in space .spectacles like the aurora are created by particles that enter the earth s atmosphere at polar regions ; on the other hand , aircraft personnel and frequent flyers may accumulate a significant dose of radiation due to the same particles. all of these effects are enhanced at periods of solar maximum , the next one being expected to happen between 2012 and 2014 .occasional extreme solar events may induce currents in the ionosphere , which in turn induce significant currents on power lines , causing power outages. such events can also disrupt communications , radio and gps .thus , understanding and predicting the processes in the earth s magnetosphere have practical importance .this paper aims to outline one of these processes , charged - particle motion and associated adiabatic invariants , for physics students and instructors who wish to use it in lectures .the emphasis is on numerical computation and visualization of trajectories . for a more comprehensive discussionadvanced texts on plasma physics can be consulted .other authors have also suggested using topics from plasma research to enhance undergraduate curriculum .lopez provides examples of how space physics can be incorporated in undergraduate electromagnetism courses , and mcguire shows how computer algebra systems can be used to follow particle trajectories in electric dipole and ( separately ) in magnetic dipole fields .photographs of plasma experiments , such as those provided by huggins and lelek and by the ucla plasma lab web site are also helpful for understanding space plasma behavior . a schematic view of the earth s magnetosphere .the solar wind comes from the left .( courtesy of kenneth r. lang, reproduced with permission . ) ] figure [ fig : magnetosphere ] shows a schematic description of the earth s magnetosphere , which is the region in space where the magnetic field of the earth is dominant .charged particles trapped in the magnetosphere form the radiation belts , the plasmasphere , and current systems such as the ring current , tail current , and field - aligned currents .the earth radius ( 6378.137 km ) is a natural length scale for the magnetosphere . near the earth ,up to 3 - 4 , the field can be very well approximated with the field of a dipole .however , at larger distances , the effects of the solar wind cause significant deviations from the dipole .the solar wind is a stream of plasma carrying magnetic field from the sun .when the solar wind encounters the earth s magnetosphere , the two systems do not mix .this is because of the `` frozen - in flux '' condition which dictates that plasma particles stay attached to magnetic field lines , except at special locations such as polar cusps .the solar wind influences the magnetosphere by applying mechanical and magnetic pressure on it , compressing it earthward on the side facing the sun ( the `` dayside '' ) .this compression is stronger when the sun is more active . 
on the opposite side ( the `` nightside '' ) , the fieldis extended over a very large distance , forming the magnetotail .wolf provides a review of the complex and time - dependent interactions between magnetic fields , induced electric fields and plasma populations .the van allen radiation belts form a particularly significant plasma population due to their high energy and their proximity to earth .they can be found from 1000 km above the ground up to a distance of 6 .these belts are composed of electrons with energies up to several mevs and of protons with energies up to several hundred mevs .the dynamics of these particles is the main focus of this paper .this paper is organized as follows : section [ sec : nleqs ] introduces the relativistic equation of motion for a particle in an electric and magnetic field and describes the cyclotron , bounce and drift motions .it also shows some typical particle trajectories under the dipolar magnetic field , approximating the earth s field .section [ sec : firstinv ] introduces the concept of adiabatic invariants and derives the first adiabatic invariant associated with the particle motion .section [ sec : gceqs ] gives the approximate equations of motion for the guiding center of a particle , obtained by averaging out the cyclotron motion .section [ sec : secondthirdinv ] presents and derives the second and third invariants associated with the bounce and drift motions , respectively . section [ sec : exercises ] lists some exercises building on the concepts described in the paper .two of these exercises describe non - dipole fields that are used for modeling different regions of the magnetosphere .the motion of a particle with charge and mass in an electric field and magnetic field is described by the newton - lorentz equation : here is the relativistic factor and is the particle speed .suppose that .then , because of the cross product , the acceleration of the particle is perpendicular to the velocity at all times , so the speed of the particle ( and the factor ) remains constant .further suppose that the magnetic field is uniform .then , particles move on helices parallel to the field vector .the circular part of this motion is called the `` cyclotron motion '' or the `` gyromotion '' .the `` cyclotron frequency '' and the `` cyclotron radius '' are respectively given by where is the uniform field strength and is the component of the velocity perpendicular to the field vector .if there are not any other forces , the parallel component of the velocity remains constant .the combined motion traces a helix. if the electric field is not zero , we can write it as , where is parallel to , and is perpendicular to it . if , particles accelerate with along the field line and they are rapidly removed from the region . therefore , the existence of a trapped plasma population implies that the parallel electric field must be negligible .the perpendicular component of the electric field will move particles with an overall drift velocity , known as the e - cross - b - drift , which is perpendicular to both field vectors : the particle will move with with the velocity , plus the cyclotron motion described above .the drift velocity is independent of particle mass and charge .therefore , in an inertial frame moving with , the e - cross - b - drift will vanish for all types of particles . 
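these quantities are straightforward to evaluate numerically . the sketch below uses the standard relativistic expressions for the cyclotron frequency and radius and the e - cross - b drift , with illustrative field and velocity values :

```python
# cyclotron frequency/radius and e-cross-b drift for a proton (illustrative values).
import numpy as np

q, m, c = 1.602e-19, 1.673e-27, 2.998e8      # charge [C], proton mass [kg], c [m/s]

B = np.array([0.0, 0.0, 3.07e-5])            # T, roughly the equatorial surface field
E = np.array([1.0e-3, 0.0, 0.0])             # V/m, illustrative
v_perp = 0.1 * c                             # perpendicular speed (v_parallel = 0 here)

gamma   = 1.0 / np.sqrt(1.0 - (v_perp / c) ** 2)
Bmag    = np.linalg.norm(B)
omega_c = abs(q) * Bmag / (gamma * m)        # cyclotron (gyro) frequency [rad/s]
r_c     = v_perp / omega_c                   # cyclotron radius [m]
v_ExB   = np.cross(E, B) / Bmag**2           # e-cross-b drift velocity [m/s]

print(f"omega_c = {omega_c:.3e} rad/s, r_c = {r_c:.3e} m, v_ExB = {v_ExB}")
```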
for the remainder of this paper we take the electric field to be zero , that is , the acceleration due to the electric field is not taken into consideration . this is not because electric fields are unimportant ; on the contrary , they play an important role in the complex dynamics of plasmas . the first reason for leaving out electric field effects is described above : if the field is uniform and constant , we can transform to another frame that cancels it . even if the field is nonuniform and time - dependent , electric drifts can be vectorially added to magnetic drifts in order to obtain the overall drift . drift velocities due to different fields are independent . the second reason is the need for simplicity ; a static magnetic field provides sufficient real - life context for the discussion of guiding - center and adiabatic invariant concepts in general - purpose lectures . the final reason is that this paper focuses on the region occupied by radiation belts , and in this region the magnetic term of ( [ eq : nleqs ] ) is the dominant force . now we consider motion under the influence of a magnetic dipole . the field of a magnetic dipole with moment vector $\vec m$ at location $\vec r$ is given by : $$\vec b(\vec r ) = \frac{\mu_0}{4\pi}\,\frac{3(\vec m\cdot\hat r)\hat r - \vec m}{r^3},$$ where $r = |\vec r|$ and $\hat r = \vec r / r$ . for earth , we take $\vec m = -m\hat z$ , antiparallel to the $z$-axis , because the magnetic north pole is near the geographic south pole . at the magnetic equator ( $r = r_{\rm e}$ , $z = 0$ ) the field strength is measured to be $b_0 = 3.07\times 10^{-5}$ t . substitution shows that $\mu_0 m / 4\pi = b_0 r_{\rm e}^3$ . then in cartesian coordinates , the field is given by : $$\vec b(x , y , z ) = -\frac{b_0 r_{\rm e}^3}{r^5}\left [ 3xz\,\hat x + 3yz\,\hat y + ( 2z^2 - x^2 - y^2)\,\hat z\right ] .$$ [ figure [ fig : proton_full ] : ( color online ) trajectories of two 10 mev protons in the earth's dipole field ; the dipole moment is in the $-z$ direction ; both panels show the same trajectories from different viewing angles . ] figure [ fig : proton_full ] shows trajectories of two protons with 10 mev kinetic energy , a typical energy for radiation belts . the trajectories are calculated with the scipy module using the python language . one proton is started at a distance of 2 $r_{\rm e}$ and the other at 4 $r_{\rm e}$ . both start with the same equatorial pitch angle ( the angle between the velocity and field vectors ) . both are followed for 120 seconds . the motion is again basically helical , but the nonuniformity of the field introduces two additional modes of motion on large spatial and temporal scales . these are called `` the bounce motion '' and `` the drift motion '' . the bounce motion proceeds along the field line that goes through the helix ( the `` guiding line '' ) . the motion slows down as it moves toward locations with a stronger magnetic field , reflecting back at `` mirror points '' . the bounce motion is much slower than the cyclotron motion . the drift motion takes the particle across field lines ( perpendicular to the bounce motion ) . in general , drift motion is faster at larger distances , as observed in figure [ fig : proton_full ] . particles in dipole - like fields are trapped on closed `` drift shells '' as long as they are not disturbed by collisions or interactions with em waves . the drift motion is much slower than the bounce motion .
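a minimal full - orbit integration in this dipole field can be written with scipy . the sketch below is not the paper's supplement code ; the 30 degree pitch angle is an illustrative choice , and the run covers only a couple of bounce periods :

```python
# sketch of a full-orbit integration in the dipole field above (not the paper's
# supplement code): a 10 MeV proton started at 2 R_E in the equatorial plane.
import numpy as np
from scipy.integrate import solve_ivp

RE, B0 = 6.3712e6, 3.07e-5                   # Earth radius [m], equatorial field [T]
q, m, c = 1.602e-19, 1.673e-27, 2.998e8

def B_dipole(r):
    x, y, z = r
    rr = np.sqrt(x*x + y*y + z*z)
    return -B0 * RE**3 / rr**5 * np.array([3*x*z, 3*y*z, 2*z*z - x*x - y*y])

def rhs(t, s):
    r, v = s[:3], s[3:]
    gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / c**2)
    a = q / (gamma * m) * np.cross(v, B_dipole(r))   # newton-lorentz with E = 0
    return np.concatenate([v, a])

K = 10e6 * 1.602e-19                         # 10 MeV kinetic energy [J]
gamma0 = 1.0 + K / (m * c**2)
v0 = c * np.sqrt(1.0 - 1.0 / gamma0**2)
alpha = np.radians(30.0)                     # equatorial pitch angle (illustrative choice)
state0 = np.array([2*RE, 0.0, 0.0, 0.0, v0*np.sin(alpha), v0*np.cos(alpha)])

sol = solve_ivp(rhs, (0.0, 2.0), state0, max_step=1e-4, rtol=1e-8)  # ~2 bounce periods
print(sol.y[:3, -1] / RE)                    # final position in earth radii
```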
under a dipolar field , the bounce motion period and the drift motion period are approximately given as : $$\begin{aligned } \tau_{\rm b } & \approx & \frac{r_0}{v}\left [ 3.7 - 1.6\sin \alpha_{\rm eq}\right ] \\ \tau_{\rm d } & \approx & \frac{2\pi qb_0r_{\rm e}^3}{mv^2}\frac{1}{r_0}\left[1-\frac{1}{3}(\sin \alpha_{\rm eq})^{0.62}\right],\end{aligned}$$ where $r_0$ is the equatorial distance to the guiding line and $\alpha_{\rm eq}$ is the equatorial pitch angle . both approximations have an error of about 0.5% . if the parameters of an oscillating system are varied very slowly compared to the frequency of oscillations , the system possesses an `` adiabatic invariant '' , a quantity that remains approximately constant . in the hamiltonian formalism , the adiabatic invariant is the same as the action variable : $$j = \oint p\ , dq , $$ where $p$ , $q$ are canonical variables and the integral is evaluated over one cycle of motion at constant energy . the integral should be evaluated at `` frozen time '' , that is , the slowly varying parameter is considered constant during the integration cycle . there are three separate periodic motions of a charged particle in a dipole - like magnetic field . this means there are three adiabatic invariants for the particle's motion . the canonical momentum for a charged particle in a magnetic field with vector potential $\vec a$ is $\vec p = \gamma m\vec v + q\vec a$ . to obtain the first adiabatic invariant , we integrate the canonical momentum over one cycle of the cyclotron orbit : $$j_1 = \oint \left ( \gamma m\vec v + q\vec a\right)\cdot d\vec l , $$ here $d\vec l$ is the line element of the particle trajectory . even though the path does not exactly close , we evaluate the integral as if it does . stokes theorem states that $$\oint \vec a\cdot d\vec l = \int ( \nabla\times\vec a)\cdot d\vec s , $$ where the right - hand side integral is taken over the surface bounded by the closed path of the left - hand side integral . using stokes theorem with $\vec b = \nabla\times\vec a$ , the integral takes the form : the second equation follows from substituting $r_{\rm c}$ from ( [ eq : cycrad ] ) . it is assumed that the cycle is sufficiently small so that $\vec b$ is considered uniform over the area bounded by the gyromotion . this assumption is essential for the existence of adiabatic invariants . the area element is antiparallel to $\vec b$ because of the sense of gyration of the particles ; hence the negative sign of the second term . customarily , one takes the magnetic moment as the first invariant , which differs from this integral only in some constant factors . the parameter is called the `` magnetic moment '' because it is equal to the magnetic moment of the current generated by the particle moving on this circular path . the adiabatic invariance of the magnetic moment explains the existence of mirror points . as the particle moves along the field line toward points with larger field strength , the perpendicular speed must increase in order to keep the magnetic moment constant . however , the perpendicular speed can be at most equal to the constant total speed . the particle stops at the point where this happens and falls back ( see section [ sec : gceqs ] ) . the left panel of figure [ fig : firstinv ] shows the magnetic moment for the two protons shown in figure [ fig : proton_full ] . the values are oscillating with the local cyclotron frequency because instantaneous values of the perpendicular speed and the field strength are used . the actual adiabatic invariant is the average of these oscillations and it is constant in time . the top right panel shows the oscillations of the magnetic moment vs. time for a shorter interval . comparing it with the vs. time plot below , it can be seen that these oscillations are correlated with the bounce motion . oscillations of the magnetic moment have a small amplitude near the mirror point because there the parallel motion slows down and the overall motion becomes more adiabatic .
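the approximate bounce and drift periods quoted above are easy to evaluate for the two example protons ( 10 mev , equatorial distances of 2 and 4 $r_{\rm e}$ ) ; the 30 degree pitch angle used below is an illustrative choice :

```python
# evaluate the approximate bounce and drift periods quoted above for 10 MeV protons
# at equatorial distances of 2 and 4 R_E (30 degree pitch angle, illustrative).
import numpy as np

RE, B0 = 6.3712e6, 3.07e-5
q, m, c = 1.602e-19, 1.673e-27, 2.998e8

K = 10e6 * 1.602e-19                       # 10 MeV kinetic energy [J]
gamma = 1.0 + K / (m * c**2)
v = c * np.sqrt(1.0 - 1.0 / gamma**2)
alpha_eq = np.radians(30.0)

for r0 in (2 * RE, 4 * RE):
    tau_b = (r0 / v) * (3.7 - 1.6 * np.sin(alpha_eq))
    tau_d = (2 * np.pi * q * B0 * RE**3 / (m * v**2)) / r0 * (1 - np.sin(alpha_eq)**0.62 / 3)
    print(f"r0 = {r0/RE:.0f} R_E: tau_bounce ~ {tau_b:.2f} s, tau_drift ~ {tau_d:.0f} s")
```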
nearthe equatorial plane ( ) parallel motion is fastest , the motion is less adiabatic , and oscillates with a larger amplitude .the proper way of calculating the first invariant would remove all oscillations : after following the full trajectory , find the times where ( or ) by searching along discrete path points and by interpolating .the difference between any successive time points is half a gyroperiod .then take the average of over that time interval using the path points . repeating this procedure for all time intervals, we get a constant set of values ( apart from numerical errors ) .the `` guiding center '' is the geometric center of the cyclotron motion .if the magnetic field is uniform the guiding center moves with constant velocity parallel to the field line . in nonuniform field geometries , there is a sideways drift in addition to the motion along the field lines , as seen in the dipole example in figure [ fig : proton_full ] .calculation of the guiding center motion requires that the motion is helical in the smallest scale , and that the field does not change significantly within a cyclotron radius .this condition can be expressed as the magnetic moment is an adiabatic invariant under this condition .northrop and walt give detailed derivations of the equations of guiding - center motion .in order to derive the acceleration of the guiding center , the particle position is substituted with : where is the position of the guiding center .the vector lies on the plane perpendicular to the field , oscillates with the cyclotron frequency , and its length is equal to the cyclotron radius . assuming that the cyclotron radius is much smaller than the length scale of the field, we can expand around to first order in a taylor series : this expansion is substituted into the newton - lorenz equation ( [ eq : nleqs ] ) and the equation is averaged over a cycle , eliminating rapidly oscillating terms containing and its derivatives .the resulting acceleration of the guiding center is given by : taking the dot product of both sides of the equation with , the local magnetic field direction , will yield the equation of motion along the field line .the first term becomes identically zero because it is perpendicular to the field vector .then : where is the speed along the field line . replacing and defining as the distance along the field line, this equation can be written as : here is the field strength along the field line .the factors and can be taken inside the derivative because they are constants .this expression shows that the quantity acts like a potential energy in the parallel direction .the negative sign indicates that the parallel motion is accelerated toward regions with smaller field strength .the motion of the guiding center perpendicular to field lines can be determined by taking the cross product of eq .( [ eq : gcaccel ] ) with the field direction vector .the resulting equation is then iterated to obtain an approximate solution for the drift velocity across field lines . this drift velocity is actually the sum of two separate drift velocities : the gradient drift that arises from the nonuniformity of the magnetic field , and the curvature drift that occurs because the field lines are curved . 
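a minimal numerical sketch of these two pieces of the guiding - center motion is given below ( the field routine and particle parameters repeat the illustrative ones used earlier , to keep the sketch self - contained ) . it evaluates the mirror - force acceleration along the field and the combined gradient - curvature drift at a point . the compact drift expression used here assumes a static , current - free field , so that the curvature term can be folded into the gradient of the field strength ; it is meant only to illustrate the structure of eq . ( [ eq : driftvelocity ] ) , not to reproduce it exactly .

```python
import numpy as np

q, m_p = 1.602e-19, 1.673e-27        # proton charge [c] and mass [kg]
RE, B0 = 6.371e6, 3.07e-5            # earth radius [m]; illustrative equatorial field [t]

def dipole_field(r):
    x, y, z = r
    rr = np.sqrt(x*x + y*y + z*z)
    return (-B0 * RE**3 / rr**5) * np.array([3*x*z, 3*y*z, 2*z*z - x*x - y*y])

def grad_B(r, h=10.0):
    """central-difference gradient of the field magnitude |b| (step h in metres)."""
    g = np.zeros(3)
    for i in range(3):
        dr = np.zeros(3); dr[i] = h
        g[i] = (np.linalg.norm(dipole_field(r + dr)) -
                np.linalg.norm(dipole_field(r - dr))) / (2.0 * h)
    return g

def parallel_accel(r, mu):
    """mirror-force acceleration dv_par/dt = -(mu/m) db/ds along the field direction."""
    B = dipole_field(r)
    b = B / np.linalg.norm(B)
    return -(mu / m_p) * np.dot(b, grad_B(r))

def gc_drift(r, v_par, v_perp):
    """combined gradient + curvature drift; in a static, current-free field the
    curvature term folds into grad|b|, giving
    v_d = m (v_par^2 + v_perp^2 / 2) (b x grad|b|) / (q b^3)."""
    B = dipole_field(r)
    Bmag = np.linalg.norm(B)
    return m_p * (v_par**2 + 0.5 * v_perp**2) * np.cross(B, grad_B(r)) / (q * Bmag**3)

# a 10 mev proton at 3 re on the equator with a 30-degree pitch angle (illustrative)
v = np.sqrt(2.0 * 10e6 * 1.602e-19 / m_p)
r = np.array([3 * RE, 0.0, 0.0])
v_par, v_perp = v * np.cos(np.radians(30)), v * np.sin(np.radians(30))
print(gc_drift(r, v_par, v_perp))    # azimuthal; electrons and ions drift in opposite senses
```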
for an example of pure gradient drift motion ,see exercise [ ex : graddrift ] in section [ sec : exercises ] .equation ( [ eq : driftvelocity ] ) shows that electrons and ions drift in opposite directions .this creates a net current around the earth , called `` the ring current '' .gradient and curvature drifts are the only drifts seen in static magnetic fields .external electric fields , external forces such as gravity , and time dependent fields create additional drift velocities. combining these , we obtain the following equations of motion for the guiding center : these equations are more complicated than the simple newton - lorentz equation , and they require computing and at each integration step .still , they have the advantage that we can follow the overall motion with relatively large time steps because we do not need to resolve the cyclotron motion .this reduces the cumulative error , as well as the total computation time .( color online ) guiding - center trajectories for the particles shown in figure [ fig : proton_full ] . the cyclotron motion is averaged out.,title="fig : " ] ( color online ) guiding - center trajectories for the particles shown in figure [ fig : proton_full ] .the cyclotron motion is averaged out.,title="fig : " ] figure [ fig : proton_gc ] shows the solution of the guiding - center equations for the same protons shown in figure [ fig : proton_full ] under a dipolar magnetic field ( python source code provided in supplement). it should be noted that the guiding - center equations are approximate because only terms first order in cyclotron radius are used in their derivation . for particles with larger cyclotron radii ( higher kinetic energies ) , there may be a noticeable difference between guiding - center and full - particle trajectories ( see exercise [ ex : gcandfull ] in section [ sec : exercises ] ) .the second adiabatic invariant is associated with the bounce motion , and it is calculated by integrating the canonical momentum over a path along the guiding field line : where is the line element along the field line .the adiabatic integrals are evaluated in a `` frozen '' system : it is assumed that the drift is stopped , so the motion moves back and forth along a single guiding field line .using stokes theorem , the second term can be converted to an integral over a surface bounded by the bounce path which is zero because the bounce motion goes along the same path in both parts of the cycle so that the enclosed area vanishes .then , the second adiabatic invariant can be written as where is the path length along the field line , and , are locations of the mirror points where the particle comes back . at the mirror pointthe parallel speed vanishes so that . from the invariance of the magnetic moment it follows that where is the field strength at the mirror point .substituting and solving for gives the integral is an adiabatic invariant in general ( even if there are electric fields or slow time - dependent fields ) .if the speed is constant , can be used as an adiabatic invariant .the integral depends only on the magnetic field , not on the particle velocity , so it can be used to compute the drift path using the field geometry only ( see exercises [ ex : evaluatei]-[ex : driftpath ] in section [ sec : exercises ] ) .the second invariant values in time , calculated using the guiding - center trajectories in fig .[ fig : proton_gc ] .lower curve is for the proton starting at 2 distance , upper curve for the proton starting at 4 . 
]figure [ fig : secinv ] shows that the value of the second invariant , evaluated using the guiding - center trajectories shown in figure [ fig : proton_gc ] , stays constant in time .the integral is evaluated not using the definition of in eq .( [ eq : secinv ] ) , but using the dynamical form where the integral is evaluated over a half period .the limits of the integrals are determined by interpolation between two points where the parallel speed changes sign .the values do not oscillate because the adiabatic invariant is calculated as an average over a cycle .the drift path is found by averaging the bounce motion . in a dipolar fieldall drift paths are circular due to the symmetry of the field .the third invariant , associated with the drift motion , is defined as an integral along the drift path : where is a line element on the drift path .this can be written as in the first term , the drift speed , is the magnitude of the expression given in eq .( [ eq : driftvelocity ] ) .the second term is obtained by using stokes theorem as above .an order of - magnitude comparison shows that the first term of can be neglected because it is much smaller than the second term : from eq .( [ eq : driftvelocity ] ) the order of magnitude of the drift speed can be written as where is the typical field strength at the drift path and is the typical distance from the origin .similarly , from eq .( [ eq : cycrad ] ) , the cyclotron radius has the order of magnitude then , the order - of - magnitude ratio of the terms in eq .( [ eq : thirdinv2 ] ) is according to the adiabaticity condition eq .( [ eq : adbcondition ] ) , must be very small .therefore the first term of eq .( [ eq : thirdinv2 ] ) is ignored and we have where is the magnetic flux through the drift path .the third adiabatic invariant is useful as a conservation law when the magnetosphere changes slowly , i.e. , over longer time scales compared to the drift period .the use of three invariants gives more accurate results for the motion of particles over long periods .numerical solution of equations of motion are less accurate because of accumulated numerical errors .roederer discusses in detail how drift shells can be constructed geometrically using the invariants ( see exercise [ ex : driftpath ] in sec .[ sec : exercises ] ) .furthermore , as three invariants uniquely specify a drift shell , the invariants themselves can be used as dynamical variables when investigating the diffusion of trapped particles. section lists some further programming exercises with varying difficulty . the code given in the supplement be modified to solve some of the exercises . 1. * uniform magnetic field . * follow charged particles under a uniform magnetic field where .verify that the particles follow helices with cyclotron radius and frequency as given in eqs .( [ eq : cycfreq ] , [ eq : cycrad ] ) .experiment with particles with different mass and charge values .2 . [ ex : graddrift ] * gradient drift . * consider a magnetic field given as .the field has a gradient in the -direction , but no curvature .set and .follow the trajectory of a particle with mass and initialized with velocity at the origin .note that the sideways drift arises from the fact that the cyclotron radius is smaller at stronger fields .* equatorial particles . *consider a particle in a dipolar magnetic field , located at the equatorial plane ( ) with zero parallel speed . 
as the field strength is minimum at the equator with respect to the field line, there is no parallel acceleration and the particle stays on the equatorial plane at all times . using the dipole model , follow an equatorial particle and verify that the center of the motion stays on a contour of constant , as implied by the conservation of the first adiabatic invariant .4 . * explore the drift motion .* run the programs in the supplement to trace protons and electrons using the dipole model .initialize particles with different energies , starting positions and pitch angles .verify that electrons and protons drift in opposite directions , and electrons have a much smaller cyclotron radius than protons with the same kinetic energy .estimate the periods of bounce and drift motions and compare them with eqs .( [ eq : bounceperiod ] , [ eq : driftperiod ] ) . 5 .[ ex : gcandfull ] * accuracy of the guiding - center approximation . * simulate the full particle and guiding center trajectories with the same initial conditions and plot them together .shift the initial position of the particle properly so that the guiding center runs through the middle of the helix .+ repeat with protons with 1kev , 10kev , 100kev and 1mev kinetic energies . at higher energies, the guiding - center trajectory lags behind the full particle because the omitted high - order terms become more significant as the cyclotron radius increases .* different numerical methods . *solve the full particle and guiding center equations using different numerical schemes, such as verlet , euler - cromer , runge - kutta and bulirsch - stoer .verify the accuracy of the solution by checking the conservation of kinetic energy and adiabatic invariants .. * field line tracing . * plot the magnetic dipole field line starting at position . for any vector field , a field line can be traced by solving the differential equation where is the arclength along the field line .[ ex : evaluatei ] * compute . *compute the second invariant ( eq . [ eq : secinv ] ) under a dipolar field for a guiding center starting at position and an equatorial pitch angle .the integral should be taken along a field line , which can be traced as described above . from the first adiabatic invariant one finds and the limits of the integralare found by solving .[ ex : ialongdrift ] * second invariant along the drift path . *produce a guiding - center trajectory under the dipole field . by interpolation ,determine the points where the trajectory crosses the plane .compute the second invariant at these equatorial points and plot .verify that the values are constant as shown in fig .[ fig : secinv ] . 10 .[ ex : driftpath ] * drift path tracing using the second invariant . *pick a starting location and mirror field , and evaluate the second invariant as described above . compute the gradient numerically using central differences : \\ \partial_y i & \approx & \frac{1}{2\delta}\left[i(x_0,y_0+\delta , b _ { \rm m } ) - i(x_0,y_0-\delta , b _ { \rm m})\right],\end{aligned}\ ] ] where is a small number ( e.g. 
0.01 ) .+ the second invariant is constant along the drift path , so for a finite step , it holds that .use this relation to trace successive steps along the drift shell .this method is more accurate than following a particle or a guiding center* * the double - dipole model.** the dipole ceases to be a good approximation for the magnetic field of the earth as we go farther in space .the double - dipole model , although unrealistic , introduces a day - night asymmetry that vaguely mimics the deformation of the magnetosphere by the solar wind .it can be used to capture some basic features of particle dynamics in the outer magnetosphere , if only qualitatively .+ the model has one dipole ( earth ) at the origin , pointing in the negative -direction , and an image dipole at . if both dipoles are identical , the magnetic field is given by : where is given by eq .( [ eq : dipolefield ] ) .+ the domains of each dipole are separated by the plane .this plane simulates the magnetopause , the boundary between the magnetosphere and the solar wind . for slightly better realism, the image dipole can be multiplied by a factor larger than 1 so that the magnetopause becomes curved .also , the two dipoles can be tilted by equal and opposite angles with respect to the sun - earth line ( -axis ) , to simulate the fact that the dipole moment of the earth is tilted .1 . starting at various latitudes , plot the magnetic field lines of eq .( [ eq : ddfield ] ) on the plane .observe the compression of field lines on the dayside and extension on the nightside .note that no field line crosses the plane .multiply the image dipole term by 1.5 and repeat .2 . follow several guiding - center trajectories starting position between and , , and pitch angles between and ( smaller pitch angle creates a longer bounce motion ) . with small pitch angles ,the particle should come closer to earth on the day side .repeat with a pitch angle of .now the particle goes away from the earth on the dayside .explain these observations using the conservation of first and second adiabatic invariants .3 . the double - dipole field can break the second invariant for some trajectories .the reason of this breaking is that the field strength has a local maximum on the dayside around the equatorial plane .particles with sufficiently small mirror fields are diverted to one side of the equatorial plane because they can not overcome this field maximum , as seen in fig .[ fig : protondsb ] . + a proton with 200kev kinetic energy , initialized on the right edge at with pitch angle in a double - dipole field . ]+ start an electron guiding center at position , with kinetic energy with an equatorial pitch angle and follow its guiding center for 1000 seconds with time step 0.01 . on the daysidethe trajectory will temporarily move above or below the equatorial plane . using the method used in fig .[ fig : secinv ] , plot the second invariant versus time . the second invariant will be constant between breaking points , but its value will differ from the initial value .the reason is that near the breaking points bounce motion slows down and the adiabaticity condition does not hold .however , the first invariant is not broken .+ this phenomenon , named drift - shell bifurcation, can be one of the causes of particle diffusion in the magnetosphere. * magnetotail current sheet . * on the tail region of the magnetosphere , magnetic field lines are heavily stretched , and a sheet of current is flowing through them. 
the field in the magnetotail can be represented by the simple form : in this problem we set , and .the field lines trace parabolas on the plane , which can be seen by integrating the equation .the parameter is the scale of the current sheet thickness .the truncation of the field at simulates the finite size of the tail region .+ the field vector points to opposite directions on both sides of the equatorial plane .when a charged particle is released from above , it moves toward the weaker region near where the adiabaticity condition does not hold .the helix becomes a `` serpentine orbit '' that moves in and out of the equatorial plane .the chaotic dynamics of these orbits is extensively studied. + ( color online ) types of orbits created by a particle with mass and charge near a current sheet .( a ) speiser orbits of transient particles , ( b ) cucumber orbits of quasitrapped particles and ( c ) ring orbits of trapped particles .note the different scales of axes.,title="fig : " ] ( color online ) types of orbits created by a particle with mass and charge near a current sheet .( a ) speiser orbits of transient particles , ( b ) cucumber orbits of quasitrapped particles and ( c ) ring orbits of trapped particles .note the different scales of axes.,title="fig : " ] + figure [ fig : speiser ] shows the three types of orbits that can exist in such a model. `` speiser orbits '' approach the equatorial plane and later go beyond and leave the tail region .`` cucumber orbits '' alternate between helical and serpentine orbits . these do not form closed orbits because of the breaking of the first invariant at the equatorial plane .`` ring orbits '' alternate between oppositely - directed fields ; they do not have a helical sectionevaluate for and determine the direction of gradient - curvature drift .2 . by trial and error , find initial conditions that create the types of orbits shown in fig .[ fig : speiser ] .space plasmas provide many case studies which , after proper simplification , can be used in the undergraduate physics curriculum .we have presented one such case , the basic theory of charged - particle motion under the dipole .this paper focuses on visualization and concrete computation , with the hope that students will modify or rewrite the code to run their own numerical experiments on particle motion in magnetic fields . in my opinion ,numerical simulations provide at least two important pedagogical benefits : first , even if the required analytical tools are beyond the students level , they can use simulations to obtain a qualitative understanding .second , the process of coding the simulation forces students to understand the problem at a basic and operational level . the main body of this article or the exercises can be incorporated in lectures , or they can be given as advanced assignments to interested students .a natural place for this subject is a course on electromagnetism and/or plasma physics .when the basics are introduced , the instructor can discuss related subjects such as plasma confinement , radiation belts , or space weather . 
in advanced mechanics courses , adiabatic invariants are usually presented with an abstract formalism . charged particle motion provides a natural and concrete case where adiabatic invariants are relevant and indispensable . the subject can also be incorporated in courses on computational physics : the accuracy and stability of different numerical integration schemes can be demonstrated with charged particle motion , and the widely separated time scales of the motion would be a challenge for most of the schemes .

electric potential differences across the magnetosphere are of the order of 100 kv which , over a distance of about , yield an electric field strength of the order of . the magnetic field of the earth has strength on the magnetic equatorial plane , where is the distance measured in . a typical radiation - belt proton with 10 mev energy has speed . then , in si units , . therefore the electric drift can be neglected near the earth where . alternatively , one can say that low - energy particles that could be affected by electric drifts have already drifted away , leaving behind the trapped high - energy particles .

we are using the geocentric solar ecliptic ( gse ) coordinate system : the earth is at the origin , the -axis points to the sun , the -axis is set so that the dipole moment vector is on the -plane , and the -axis is perpendicular to both axes .

see supplementary material at http://sites.google.com/site/mkaanozturk/programs for the python source code that solves the newton - lorentz and the guiding - center equations and displays them .

w. j. hughes , `` the magnetopause , magnetotail and magnetic reconnection , '' in _ introduction to space physics _ , edited by margaret g. kivelson and christopher t. russell ( cambridge university press , 1995 ) , pp . 227 - 287 .

jörg büchner and lev m. zelenyi , `` regular and chaotic charged particle motion in magnetotail - like field reversals 1 . basic theory of trapped motion , '' journal of geophysical research 94 ( a9 ) , 11821 - 11842 ( 1989 ) .
i outline the theory of relativistic charged - particle motion in the magnetosphere in a way suitable for undergraduate courses . i discuss particle and guiding center motion , derive the three adiabatic invariants associated with them , and present particle trajectories in a dipolar field . i provide twelve computational exercises that can be used as classroom assignments or for self - study . two of the exercises , drift - shell bifurcation and speiser orbits , are adapted from active magnetospheric research . the python code provided in the supplement can be used to replicate the trajectories and can be easily extended for different field geometries .
looking around the world that we live in , we will find that there are complex systems everywhere .the study of these systems has become a hot research area in different disciplines such as physics , applied mathematics , biology , engineering , economics , and social sciences . in particular ,agent - based models have become an essential part of research on complex adaptive systems ( cas ) .for example , self - organized phenomena in an evolving population consisting of agents competing for a limited resource , have potential applications in areas such as engineering , economics , biology , and social sciences .the famous el farol bar attendance problem proposed by arthur constitutes a typical example of such a system in which a population of agents decide whether to go to a popular bar having limited seating capacity .the agents are informed of the attendance in the past weeks , and hence share common information , make decisions based on past experience , interact through their actions , and in turn generate this common information collectively .these ingredients are the key characteristics of complex systems .the proposals of the binary versions of models of competing populations , either in the form of the minority game ( mg ) or in a binary - agent - resource ( b - a - r ) game , have led to a deeper understanding in the research in agent - based models . for modest resource levels in which there are more losers than winners , the minority game proposed by challet and zhang represents a simple , yet non - trivial , model that captures many of the essential features of such a competing population .the mg , suitably modified , can be used to model financial markets and reproduce the stylized facts .the b - a - r model , which is a more general model in which the resource level is taken to be a parameter , has much richer behaviour .in particular , we will discuss the model and report the emergence of plateaux - and - jump structures in the average success rates in the population as the resource level is varied .we analyze the results within the ideas of the trail of histories in the history space and the strategy performance ranking patterns .the binary - agent - resource ( b - a - r ) model is a binary version of arthur s el farol bar attendance model , in which a population of agents repeatedly decide whether to go to a bar with limited seating based on the information of the crowd size in recent weeks . in the b - a - r model, there is a global resource level which is not announced to the agents . at each timestep , each agent decides upon two possible options : whether to access resource ( action ` 1 ' ) or not ( action ` 0 ' ) .the two global outcomes at each timestep , ` resource overused ' and ` resource not overused ' , are denoted by and .if the number of agents choosing action ` ' exceeds ( i.e. , resource overused and hence global outcome ) then the abstaining agents win .by contrast if ( i.e. , resource not overused and hence global outcome ) then the agents win .in order to investigate the behaviour of the system as changes , it is sufficient to study the range .the results for the range can be obtained from those in the present work by suitably interchanging the role of ` 0 ' and ` 1 ' . in the special case of ,the b - a - r model reduces to the minority game . in the b - a - r model, each agent shares a common knowledge of the past history of the most recent outcomes , i.e. 
, the winning option in the most recent timesteps .the agents are essential identical , except for the details in the strategies that they are holding .the full strategy space thus consists of strategies , as in the mg .initially , each agent randomly picks strategies from the pool of strategies , with repetitions allowed .the agents use these strategies throughout the game . at each timestep, each agent uses his momentarily best performing strategy with the highest virtual points .the virtual points for each strategy indicate the cumulative performance of that strategy : at each timestep , one * virtual point * ( vp ) is awarded to a strategy that would have predicted the correct outcome after all decisions have been made , whereas it is unaltered for those with incorrect predictions .notice that in the literature , sometimes one vp is deducted for an incorrect prediction .the results reported here , however , come out to be the same .a random coin - toss is used to break ties between strategies . in the b - a - r model , the agents in the population may or may not be connected , by some kind of network . in the case of a networked population ,each agent has access to additional information from his connected neighbours , such as his neighbours strategies and/or performance . here, we will focus our discussion on the b - a - r model in a non - networked population and report some numerical results for networked population .the b - a - r model thus represents a general setting of a competing population in which the resource level can be tuned . from a governmental management point of view , for example , one would like to study how a population may react to a decision on increasing or decreasing a certain resource in a community .will such a change in resource level lead to a large response in the community or the community will be rather insensitive ? to evaluate the performance of an agent , one * ( real ) point *is awarded to each winning agent at a given timestep .a maximum of points per turn can therefore be awarded to the agents per timestep .an agent has a success rate , which is the mean number of points awarded to the agent per turn over a long time window . the mean success rate among the agentsis then defined to be the mean number of points awarded per agent per turn , i.e. , an average of over the agents .we are interested in investigating the details of how the success rates , including the mean success rate and the highest success rate among the agents , change as the resource level varies in the efficient phase , where the number of strategies ( repetitions counted ) in play is larger than the total number of distinct strategies in the strategy space .the effects of varying were first studied by johnson _ et al .these authors reported the dependence of the fluctuations in the number of agents taking a particular option , on the memory size for different values of . for the mg ( i.e. , ) in the efficient phase ( i.e. , small values of ) the number of agents making a particular choice varies from timestep to timestep , with additional stochasticity introduced via the random tie - breaking process .the corresponding period depends on the memory length .the underlying reason is that in the efficient phase for , no strategy is better overall than any other .hence there is a tendency for the system to restore itself after a finite number of timesteps , thereby preventing a given strategy s vps from running away from the others . 
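the game dynamics described above can be simulated in a few dozen lines . the sketch below is not the authors ' code : the population size , resource level and run length are illustrative , and ties between strategies are broken with a small random perturbation as a stand - in for the coin toss .

```python
import numpy as np

rng = np.random.default_rng(0)

def run_bar(N=1001, m=3, s=2, L=700, T=2000):
    """minimal binary-agent-resource game. returns the outcome bit-string and the
    per-agent success rates; l = n/2 recovers the minority game."""
    n_hist = 2 ** m
    # each strategy is a lookup table: m-bit history (as an integer) -> action {0, 1}
    strategies = rng.integers(0, 2, size=(N, s, n_hist))
    vpoints = np.zeros((N, s))              # virtual points, one per strategy
    wins = np.zeros(N)                      # real points, one per agent
    history = int(rng.integers(0, n_hist))  # random initial history
    outcomes = []

    for _ in range(T):
        # each agent plays its momentarily best strategy; a tiny random perturbation
        # stands in for the coin toss that breaks ties between equal virtual points
        best = np.argmax(vpoints + 1e-9 * rng.random((N, s)), axis=1)
        actions = strategies[np.arange(N), best, history]
        n1 = int(actions.sum())
        outcome = 1 if n1 <= L else 0       # resource not overused -> '1' is the winning option
        wins += (actions == outcome)
        # every strategy that would have predicted the winning option gains a virtual point
        vpoints += (strategies[:, :, history] == outcome)
        outcomes.append(outcome)
        history = ((history << 1) | outcome) & (n_hist - 1)   # slide the m-bit window

    return np.array(outcomes), wins / T

outcomes, success = run_bar()
print("mean success rate:", success.mean(), "  highest:", success.max())
```

the restoring tendency discussed above shows up directly in the outcome series returned by such a run .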
as a result ,the outcome bit - string shows the feature of anti - persistency or double periodicity .since a maximum of points can be awarded per turn , the mean success rate over a sufficiently large number of timesteps is bound from above by .we have carried out extensive numerical simulations on the b - a - r model to investigate the dependence of the success rate on for . unless stated otherwise , we consider systems with agents and . figure [ fig : w_m1m3s2 ] shows the results of the mean success rate ( , dark solid line ) as a function of in a typical run for ( a ) and ( b ) , together with the range corresponding to one standard deviation about in the success rates among the agents ( , dotted lines ) and the spread in the success rates given by the highest and the lowest success rates ( and , thin solid lines ) in the population . by taking a larger value of than most studies in the literature , we are able to analyze the dependence on and in greater detail and discover new features . in particular , these quantities all exhibit _ abrupt transitions _( i.e. , jumps ) at particular values of . between the jumps ,the quantities remain essentially constant and hence form steps or ` plateaux ' .we refer to these different plateaux as states or phases , since it turns out that the jump occurs when the system makes a transition from one type of state characterizing the outcome bit - string to another .the origins of the plateaux are due to ( i ) the finite number of states allowed in each , and ( ii ) insensitive to inside each state .the success rates within each plateau ( state ) come out to the the same . for different runs ,the results are almost identical . at most , there are tiny shifts in the values at which jumps arise due to ( i ) different initial strategy distributions among the agents in different runs , and ( ii ) different random initial history bit - strings used to start the runs . in other words ,a uniform distribution of strategies among the agents ( e.g. , in the limit of a large population ) gives stable values of at the jumps .the results indicate that a community does not react to a slight change in resource level that lies within a certain plateau , but the response will be abrupt if the change in resource level swaps through critical values between transitions ..the states characterized by for the b - a - r model with , and .the results are obtained from the numerical data shown in fig .[ fig : w_m1m3s2](a ) .[ cols="^,^,^,^",options="header " , ] as the game proceeds , the system evolves from one history bit - string to another .this can be regarded as transitions between different nodes ( i.e. , different histories ) in the history space . for in the efficient phase ,it has been shown that the conditional probability of an outcome of , say , following a given history is the same for all histories . for ,the result still holds for states characterized by . note that a history bit - string can only make transitions to history bit - strings that differ by the most recent outcome , e.g. , 111 can only be make transitions to either 110 or 111 , and thus many transitions between two chosen nodes in the history space are forbidden . in addition , these allowed transitions do not in general occur with equal probabilitiesthis leads to specific outcome ( and history ) bit - string statistics for a state characterized by . 
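such conditional statistics can be tallied directly from a simulated outcome series , for example the one returned by the game sketch given earlier ( the variable name `outcomes` below is that assumed name ) :

```python
def history_statistics(outcomes, m=3):
    """tally how often each outcome (0 or 1) follows every m-bit history in an
    outcome bit-string; this reproduces the kind of relative-occurrence counts
    listed in the table below."""
    counts = {format(h, f'0{m}b'): [0, 0] for h in range(2 ** m)}
    for t in range(m, len(outcomes)):
        hist = ''.join(str(int(b)) for b in outcomes[t - m:t])
        counts[hist][int(outcomes[t])] += 1
    return counts

# example, using the outcome series from the game sketch above:
# for hist, (n0, n1) in sorted(history_statistics(outcomes).items()):
#     print(hist, n0, n1)
```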
& 000 & 1 & 1 + & 001 & 1 & 1 + & 010 & 1 & 1 + & 100 & 1 & 1 + & 011 & 1 & 1 + &101 & 1 & 1 + & 110 & 1 & 1 + & 111 & 1 & 1 + & 8 & 8 + + 000 & 0 & 0 + 001 & 0 & 1 + 010 & 0 & 1 + 100 & 0 & 1 + 011 & 1 & 2 + 101 & 1 & 2 + 110 & 1 & 2 + 111 & 2 & 3 + & 5 & 12 + + 000 & 0 & 0 + 001 & 0 & 1 + 010 & 0 & 1 + 100 & 0 & 1 + 011 & 1 & 3 + 101 & 1 & 3 + 110 & 1 & 3 + 111 & 3 & 5 + & 6 & 17 + 0.5 cm + 000 & 0 & 0 + 001 & 0 & 0 + 010 & 0 & 0 + 100 & 0 & 0 + 011 & 0 & 1 + 101 & 0 & 1 + 110 & 0 & 1 + 111 & 1 & 2 + & 1 & 5 + + 000 & 0 & 0 + 001 & 0 & 0 + 010 & 0 & 0 + 100 & 0 & 0 + 011 & 0 & 1 + 101 & 0 & 1 + 110 & 0 & 1 + 111 & 1 & 3 + & 1 & 6 + + 000 & 0 & 0 + 001 & 0 & 0 + 010 & 0 & 0 + 100 & 0 & 0 + 011 & 0 & 1 + 101 & 0 & 1 + 110 & 0 & 1 + 111 & 1 & 4 + & 1 & 7 + + 000 & 0 & 0 + 001 & 0 & 0 + 010 & 0 & 0 + 100 & 0 & 0 + 011 & 0 & 0 + 101 & 0 & 0 + 110 & 0 & 0 + 111 & 0 & 1 + & 0 & 1 + we have carried out detailed analysis of the outcomes following a given history bit - string for , and for each of the possible states over the whole range of , i.e. , we obtain the chance of getting 1 ( or 0 ) for every 3-bit history by counting from the outcome bit - string .table [ tab : tm_m3 ] gives the _ relative numbers of occurrences _ of each outcome for every history bit - string .for the state with , for example , the outcomes and occur with equal probability for every history bit - string , as in the mg .for the other states , the results reveal several striking features .it turns out that is simply given by the _ relative frequency _ of an outcome of in the outcome bit - string , which in turn is governed by the resource level .for example , a 0 to 1 ratio of in the outcome bit - strings corresponds to the state with . in table[ tab : tm_m3 ] , we have intentionally grouped the history bit - strings into rows according to the label in fig . [fig : hs_m3 ] .we immediately notice that for every possible state in the b - a - r model , the relative frequency of each outcome is a property of the _ group _ of histories having the same label rather than the individual history bit - string , i.e. , all histories in a group have the same relative fraction of a given outcome .this observation is important in understanding the dynamics in the history space for different states in that it is no longer necessary to consider each of the history bit - strings in the history space .instead , it is sufficient to consider the four groups of histories ( for ) as shown in fig . [fig : hs_m3](a ) .analysis of results for higher values of show the same feature .for the state characterized by , the outcome bit - string is persistently and the path in the history space is repeatedly 111 ( in the format of _ history _ _ outcome _ ) .therefore , the path simply corresponds to an infinite number of loops around the history node 111 .since the path is restricted to the node , we will also refer to this state as state .the system is effectively frozen into one node in the history space . in this case, there are effectively only _ two _ kinds of strategies in the whole strategy set , which differ by their predictions for the particular history 111 .the difference in predictions for the other ( ) history bit - strings become irrelevant . obviously , the ranking in the performance of the two effective groups of strategies is such that the group of strategies that suggest an action ` 1 ' for the history 111 , outperforms the group that suggests an action ` 0 ' . 
for a uniform initial distribution of strategies ,there are agents taking the action ` 0 ' and agents taking the action ` 1 ' , since half of the strategies predict 0 and half of them predict 1 . to sustain a winning outcome of 1, the criterion is that the resource level should be higher than the number of agents taking the action ` 1 ' .therefore , we have for the state with that and these results are in agreement with numerical results . for , for .note that eqs . and are valid for _ any _ values of .table [ tab : tm_m3 ] shows that the states with , , have very similar features in terms of the bit - string statistics .they differ only in the frequency of giving an outcome of following the history of .note that the and histories do not occur .the results imply that as the system evolves , the path in history space for these states is restricted to the two groups of histories labeled by and .the statistics show that the outcome bit - strings for the states with , and exhibit only one -bit in a period of , and bits , respectively .we refer to these states collectively as states , since the portion of allowed history space is bounded by the histories .graphically , the path in history space consists of a few self - loops at the node 111 , i.e. , from 111 to 111 , then passing through the group of histories once and back to 111 , as shown in fig .[ fig : hs_m3](b ) .the states with , , involve the other groups of histories and exhibit complicated looping among the histories .we refer to them collectively as higher ( i.e. , ) states .the observed values of for states can also be derived by following the evolution of the strategy performance ranking pattern as the game proceeds .we summarize the main ideas here .details can be found in ref. .the major result is that for given , the values of in a state can only take on where .taking for example , we have and hence .the values of are , , and , exactly as observed in the numerical simulations .( [ eq : wmax_z1 ] ) is a result of the collective response of the agents stemming from their strategy selection and decision - making processes , which in turn are coupled to the strategy ranking pattern . to close the feedback mechanism ,the strategy ranking pattern must evolve in such a way so as to be consistent with the path in the history space . recall that the states are those in which the system only visits the and histories , as shown in fig .[ fig : hs_m3](b ) .in this situation , only history bits in a strategy are relevant , despite each strategy has entries , i.e. , strategies that only differ in their predictions for the histories which do _ not _ occur are now effectively identical . therefore , many strategies have tied performance .the key point is that a complete path of the states in history space corresponds to one in which there are turns around the history , i.e. , from for turns , then breaks away to visit each of the histories once and returns to the history ( see fig .[ fig : hs_m3](b ) ) .the value of that is consistent with the condition turns out to be .we have argued that for , the system loops around the history indefinitely , since the number of agents ( ) persistently taking the option 1 is smaller than , in the large population limit . 
for ,the situation is as follows .as the system starts to loop around the history , the strategies start to split in performance , with the group of strategies predicting ( ) becomes increasingly better ( worse ) .there is an overlap in performance between these two groups , as a result of their predictions when the system passes through the histories . as the number of loops at the history increases , the overlap in performance decreases and the number of agents taking option 1 increases towards , as a result of the strategy selection process .the condition , thus imposes an upper bound on .when the number of turns at the history exceeds a certain value fixed by , the option 1 becomes the _ losing _ option and the system breaks away to the histories as the system makes transition from the history 111 to the history 110 .the lower bound on is set by the restriction that the strategies in the group predicting must perform better on the average than the group predicting , as . with this understanding on the allowed paths in the history space, we can evaluate the number of winners in each turn of the path and obtain eq .( [ eq : wmax_z1 ] ) for , with being the number of loops that the system makes at the history and .we have also studied the b - a - r model in a connected population .the agents are assumed to be connected in the form of a classical random graph ( crg ) , i.e. , random network . in a crg, each node ( representing an agent here ) has a probability of connecting to another node .the agents interact in the following way . when an agent is linked to other agents , he checks if the strategies of his connected neighbours perform better than his own strategies .if so , he will choose the one with the highest vp among his neigbhours and then use it for decision in that turn .if not , he will use his own best strategy .we have carried out detailed numerical calculations of and .figures [ fig : wmcrg_m3s2 ] and [ fig : wxcrg_m3s2 ] show how and depend on for various connecting probability , with the results corresponding to those in a non - networked population .we observe that the features of plateaux - and - jumps persist in networked populations .there are differences in the details .while takes on the same set of values , the threshold in the resource level ( ) to sustain a particular level of success rate is found to be higher in a networked population than in a non - networked one , as shown in fig .[ fig : wmcrg_m3s2 ] and fig .[ fig : wxcrg_m3s2 ] . for a given resource level ,allowing agents to share information on strategy performance may actually worsen the global performance of the system .this is particularly clear if we inspect the results of in the range of in fig .[ fig : wmcrg_m3s2 ] .the addition of a small number of links will lower .this behaviour is consistent with the crowd - anticrowd theory in that the links enhance the formation of crowds. the situation can be quite different in a high resource limit , where a small number of links may actually be beneficial .the reason is that for high resource level , some resource is left unused as agents will not have access to a strategy that predicts the winning option . 
using the links , some of these agents switch from losers to winners and thus enhance the overall performance of the population have studied numerically and analytically the effects of a varying resource level on the success rate of the agents in a competing population within the b - a - r model .we found that the system passes through different states , characterized either by the mean success rate or by the highest success rate in the population , as decreases from the high resource level limit .transitions between these states occur at specific values of the resource level .we found that different states correspond to different paths covering a subspace within the whole history space . just below the high resource levelis a range of that gives states corresponding to the fractions , with .this result is in excellent agreement with that obtained by numerical simulations .the paths of these states in the history space are restricted to those -bit histories with at most one - bit of 0 and with loops around the 111 history .this result is derived by considering the coupling of the restricted history subspace that the system visits , the strategy performance ranking pattern , and the strategy selection process . while our analysis can also be applied to the states , the dynamics and the results are too complicated to be included here .our analysis also serves to illustrate the sensitivity within multi - agent models of competing populations , to tunable parameters . by tuning an external parameter , which we take as the resource level in the present work, the system is driven through different paths in the history space which can be regarded as a ` phase space ' of the system .the feedback mechanism , which is built - in through the decision making process and the evaluation of the performance of the strategies , makes the system highly sensitive to the resource level in terms of which states the system decides to settle in or around .these features are quite generally found in a wide range of complex systems .the ideas in the analysis carried out in the present work , while specific to the b - a - r model used , should also be applicable to other models of complex systems .while we have focused on the b - a - r model in our discussions , many of the underlying ideas are general to a wider class of complex systems . for example, one may regard the resource level as a handle in controlling a driving force in the system . with ,i.e. , in the mg , and the random initial distribution of strategies and random initial history , the system is allowed to diffuse from an initial node in the history space to visit all the possible histories . a deviation of the resource level from acts like a driving force in the history space .thus , there is always a competition between diffusive and driven behaviour , resulting in the non - trivial behaviour in the b - a - r model and its variations .for this reason , the present b - a - r system provides a fascinating laboratory for studying correlated , non - markovian diffusion on a non - trivial network ( i.e. , history space ) .
we aim to study the effects of controlling the resource level in agent - based models . we study , both numerically and analytically , a binary - agent - resource ( b - a - r ) model in which agents compete for resources described by a resource level , where with being the maximum amount of resource per turn available to the agents . each agent picks the momentarily best - performing strategy for its decision , with the performance of a strategy being a result of the cumulative collective decisions of the agents . the agents may or may not be networked for information sharing . detailed numerical simulations reveal that the system exhibits well - defined plateau regions in the success rate which are separated from each other by abrupt transitions . as increases , the maximum success rate forms a well - defined sequence of simple fractions . we analyze these features by studying the outcome time series , the dynamics of the strategies ' performance ranking pattern , and the dynamics in the history space . while the system tends to explore the whole history space due to its competitive nature , an increasing has the effect of driving the system to a restricted portion of the history space . thus the underlying cause of the observed features is an interesting self - organized phenomenon in which the system , in response to the global resource level , effectively avoids particular patterns of history outcomes . we also compare results in a networked population with those in a non - networked population .

* paper to be presented in the 10th annual workshop on economic heterogeneous interacting agents ( wehia 2005 ) , 13 - 15 june 2005 , university of essex , uk . *
mosaic ccd cameras are now becoming the wide - field imaging norm in astronomy , spurred by the practical device size fabrication limit for a reasonable yield of about 4k x 4k 15 m pixels or less .the scientific prospects for these cameras are being exploited to greater degrees as the cameras and control electronics become simpler to fabricate .programs that are already benefitting from the increased field of view include stellar population studies , galactic structure work , deep galaxy and star counts , searches for low surface brightness galaxies , and searches for gravitational lensing and microlensing .however , inadequately aligned ccds in a mosaic detector can severely compromise the performance in these applications , especially in areas such as gravitational lensing where the lens potential must be reconstructed from the image points .photometry on objects that span two ccds will also be limited in accuracy .the work involved in deconvolving a large format ( 30 ) mosaic image with misaligned ccds can be daunting , injecting some degree of error into the results of the analysis .the largest existing ccd mosaic ( 8k x 8k pixels ) uses a mounting scheme developed by luppino et al. .it uses custom packages constructed for each ccd from a low expansion alloy , which are attached to a machined mounting block .alignment screws provide the micro - adjustments to bring the ccds into position .other mosaics have also been fabricated using a combination of precision machining and delicate assembly steps involving micromanipulation of the detectors under a high power microscope , or use of an alignment template . in the optimal case ,the resulting assembled mosaic can have pixels along rows and columns aligned on the scale of one pixel width ( 15 m ) . in practice , the four element mocam detector ( canada - france - hawaii telescope ) fabricated using the luppino mounting scheme has angular misalignments between ccds of as much as 3 pixels ( greg fahlman , ubc , private communication ) .the precision of alignment achievable using etched alignment sockets in a silicon substrate has been demonstrated previously , where angular misalignments are on the order of 20 ppm ( 1 m variation over a 5 cm substrate ) and column registration is to within a tenth of a pixel . in this technique ,the varying atomic densities associated with the different crystal planes of silicon are exploited to achieve anisotropic chemical etching .the resulting etched sockets have an extremely precise definition with respect to the cad designed masks .this paper presents the first imaging results with a prototype mosaic ccd camera based on such an etched alignment technique .an overview of the technique and construction of the prototype camera are presented in section 2 . in section 3 ,the imaging results are discussed .section 4 describes three techniques for assessing the alignment precision of the mosaic , and section 5 outlines future developments .there are many issues involved , both in the lithographic fabrication of the etched substrate and the assembly of the mosaic , which are discussed in chapman et al .1997 and chapman 1996 . herewe provide an overview as a context for the prototype camera . 
the silicon wafer is first etched with the desired socket layout , followed by the definition of aluminum bond pads and traces around the sockets .semiconductor lithography techniques are used throughout to transfer mask patterns to the wafer and create etch stops .the fabrication process is not without its difficulties .problems with the chemical etching include residue formation , non - linear etch rates and hillocks ( small silicon growths ) , all of which can lead to misaligned or tilted ccds .an optimized edp etchant and careful cleaning steps will alleviate most of these deleterious effects .the large sockets can lead to imprecision with the metallization as they tend to funnel the photoresist used to define the etch stop .this problem can be minimized through adjustments to some of the processing steps .once the substrate has been fabricated and diced from the wafer , the mosaic is assembled by first bonding the fragile silicon substrate to an invar mounting plate with thermally conductive epoxy .the substrate is heated and bonding wax applied .the individual ccd detectors are then easily mounted and aligned against two orthogonal reference edges in the sockets , without micromanipulation under a high power microscope .paraffin wax provides a reversible bond , flows well during application and results in a very even layer , which tends to act as a lubricant during the alignment process .the detectors are then wire - bonded the to the substrate , or external circuit board .the general alignment concept is shown in figure [ a ] .the camera consists of two unthinned loral 3k x 1.5k , 15 m pixel ccds wax bonded to an etched silicon substrate .the flexible silicon substrate is epoxy - bonded to a polished invar36 plate to maintain flatness and prevent breakage .the combination of bond layers , substrate and invar plate add a thermal resistance of approximately 0.1 / w .the devices , substrate and mounting plate are bonded evenly over their entire area to form a solid structure . to minimize thermal expansion mismatchstresses , the cte of the substrate matches that of the detectors ( both silicon ) , and is close to that of the invar mounting plate ( table 1 ) . at an operating temperature of -90 , we estimate the invar silicon thermal expansion mismatch to be about 3 m over the 5 cm substrate length ; this has been modeled using finite element analysis ( ansys 5.3 ) which shows negligible bowing of the assembly . 
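the size of this mismatch follows from the difference in thermal expansion coefficients . the short calculation below uses illustrative cte values rather than the table 1 values , and recovers the same order of magnitude as quoted above .

```python
# illustrative differential-contraction estimate; the cte values are assumptions,
# not the table 1 values (invar alloys span roughly 1 - 2 ppm / k near room temperature)
alpha_si    = 2.6e-6          # silicon cte [1/k]
alpha_invar = 2.0e-6          # invar36 cte [1/k], assumed
length      = 0.05            # substrate length [m]
delta_T     = 20.0 - (-90.0)  # cool-down from room temperature to -90 c [k]

mismatch = abs(alpha_si - alpha_invar) * length * delta_T
print(f"differential contraction ~ {mismatch * 1e6:.1f} micrometres")   # ~3 um, same order as quoted
```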
the precision of alignment is detailed in section 4 .the completed prototype is shown in figure [ c ] .the mosaic detector is mounted to a teflon ` spider ' and screwed to aluminum brackets in the detector head .wiring for this prototype is done directly from pcb to the dewar outputs .cooling is achieved with a cold finger attached to a liquid nitrogen ( ln ) tank .the cold finger quickly cools an aluminum disk , which in turn cools the invar mounting plate .as the thermal conductivity of the invar is not high , less robust cooling schemes ( such as thermo - electric ) may require a more complete invar surface contact with the cold finger to achieve the -80 operating temperatures .the hold time is roughly 9 hours .no measurable increase in the outgassing rate was observed compared to the reference empty dewar during thermal cycling .removal and replacement of ccds has proven extremely straightforward in practice .replacement of marginal or faulty devices can be accomplished without directly handling or disturbing the other ccds .the old wire - bonds are first removed under a microscope .the substrate can then be heated to c and the problem ccd replaced . in practice, it has taken as little as two hours to completely change two ccds and have the mosaic detector operational again .of the seven engineering grade loral ccds were made available to us , most had nonfunctional output amplifiers or substrate shorts .many of the devices had to be mounted and aligned in the mosaic substrate and then tested in order to find two that worked adequately .the success of the modularity of the technique is assured by the fact that 4 ccds were replaced in the bottom socket before a working one was found .the first images taken with this prototype camera are very promising for the continued and successful use of the etch alignment technique .images of various extended objects and star frames were taken in the i - band filter using the ubc 42 cm telescope with the mosaic detector mounted at the cassegrain focus .the f/13.5 focus provides an image scale of 0.5 arcsec / pixel with a total field of view of 30 arcminutes diameter .standard reduction of the images was performed ( dark subtraction and flat fielding ) .an existing ubc controller built by ron johnson was used for initial test purposes .all clock lines were hardwired in parallel on the pc board .the controller only has one channel , thus only the equivalent area of one ccd can be readout at a time .micro - code has been written for readout of the upper , lower , or half of each ccd for any given exposure .two frames are shown in figures [ m13b ] , and [ fielda ] , of m13 and a random star field .the images show that cosmetics of the devices are not very good ( also see test pattern images , section 4 ). numerous dead regions , blocked columns , and hot pixels limit the scientific uses .nonetheless , these ccds provide a very adequate proof of concept and allow precise measurement of the degree of alignment possible .the mechanical stability of the technique is also verified through repeated use of the camera in lab and on telescope .in order to test the accuracy of alignment , test images and star field images were taken .reduction of these images results in viable techniques for measurement of the relative ccd displacement . 
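for reference , the dark subtraction and flat fielding mentioned above amount to the following numpy sketch ; the actual frames were reduced with standard tools , so this is only an equivalent outline .

```python
import numpy as np

def reduce_frame(raw, dark, flat, flat_dark=None):
    """dark subtraction followed by flat fielding.
    `flat_dark` is an optional dark matching the flat exposure; if omitted,
    the science dark is reused."""
    sci = raw.astype(float) - dark.astype(float)
    f = flat.astype(float) - (flat_dark if flat_dark is not None else dark).astype(float)
    f /= np.median(f)                      # normalise the flat to unit median response
    return np.divide(sci, f, out=np.zeros_like(sci), where=f > 0)
```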
in addition, measurements under a microscope were used as an external check on the precision .the results of all three techniques are compared in table 2 .the measurements are all in agreement within error estimates .the star fields and optical measurements are considered more reliable than the test patterns for reasons discussed below . as an initial verification of the ccd angular alignment and assessment of the quality of the ccds, various test patterns were imaged across the mosaic .the camera was fitted with a short focal length lens capable of imaging a target a few feet away . as a test of alignment ,the data has certain limitations : lens aberrations , non - flat test pattern , and test pattern quality .the lens gives a vignetted field , only part of which is distortion free .the central region of each image should give images with small aberration .the data is reduced using the image reduction astronomical facility ( iraf ) .the test pattern line to be reduced is scanned along each row for a well defined drop - off point in pixel intensity of 20 percent .a linear fit to the central line of the test pattern for each ccd is performed .the coefficients correspond to the formula and is the correlation coefficient .the two sloped lines on each graph map the column and row of the test pattern on each ccd .subtraction of the slope parameters , b , gives the angular misalignment of the two ccds in the small angle approximation , as shown in figure [ triangles ] .the known distance between ccd imaging areas ( 2.13 mm ) is compared with the observed step in the y - intercept of the fit .we find the column registration to be within half a pixel .figure [ x ] shows the reduced test pattern data ( corresponding to figures [ xx1 ] , [ xx2 ] .the results are as follows : test1 - 40ppm , test2 - 90ppm , test3 - 50ppm ( 20ppm corresponds to 1 m displacement along the 5 cm substrate ) .the spread in values for the 3 tests are more likely an indication of the tests themselves rather than of the actual misalignment .the test patterns are all different and reduction of the data may not provide the same degree of accuracy in all cases . also , lens aberrations and image flatness may vary between the 3 tests .we estimate the error for these results to be about m .the images of star fields provide a powerful diagnostic of the relative ccd positions .the same telescope and setup are used as described in section 3 . the same star field is first imaged on both ccds by moving the telescope between exposures . by calculating the transformation of one star frame to the other , the angular misalignment of the ccds can be deduced .the column registration ( 1.6 m ) and inter - ccd gap ( 2.11 mm ) are found by comparison of features which overlap both ccds in some frames but not others . in principlethis technique can provide much more information about the relative placement of the ccds ( tip and tilt ) and the ccds themselves ( periodic non - uniformities and fringing across the ccd) .the technique also applies equally well to mosaics of many ccds .figure [ stars ] shows the two star fields in the same plane before mapping .the images were reduced using iraf and the centroids of all stars were found with daophot .a table of common sources in both frames is constructed and a transformation is found between the two frames using the routine geotran in iraf .several transformations are found in this way using different numbers of stars in the two frames . 
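a simplified stand - in for this transformation step is sketched below : given matched ( x , y ) centroid lists from the two frames ( the names `centroids_ccd_a` and `centroids_ccd_b` are assumptions ) , a least - squares rigid fit returns the relative rotation angle . the iraf tasks used for the actual data also fit shifts , scale and higher - order terms , which are omitted here .

```python
import numpy as np

def fit_rotation(xy_ref, xy_in):
    """least-squares rigid fit (rotation + shift) mapping xy_in onto xy_ref for
    matched star centroids; returns the rotation angle in radians."""
    p = np.asarray(xy_ref, float)
    q = np.asarray(xy_in, float)
    pc, qc = p - p.mean(axis=0), q - q.mean(axis=0)
    # 2x2 procrustes / kabsch solution for the best rotation
    u, _, vt = np.linalg.svd(qc.T @ pc)
    R = (u @ vt).T
    if np.linalg.det(R) < 0:               # guard against an accidental reflection
        u[:, -1] *= -1
        R = (u @ vt).T
    return np.arctan2(R[1, 0], R[0, 0])

# theta = fit_rotation(centroids_ccd_a, centroids_ccd_b)   # matched (x, y) lists, assumed names
# misalignment_um = abs(theta) * 50_000                    # displacement over the 5 cm substrate
```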
for the star fields , as the translation between frames is due to movement of the telescope , the angular rotation of the frames gives the ccd misalignment . the average value is 1.3 μm over the 5 cm substrate ( or 26 ppm ) . the iraf routine geomap is used to map one star frame to the other , and the map - back accuracy gives an rms error of 1 μm in the transformation . a high magnification microscope , with a digitally metered movable eyepiece crosshair accurate to within 0.5 μm , was used to make measurements of the alignment between the detectors and the substrate sockets . the etched socket edges were straight and aligned beyond our ability to measure them . the angular misalignment of the ccd rows and columns with respect to the socket edge was measured to be less than 1 μm along the 5 cm long axis of each detector ( 20 parts per million ) . the mechanical accuracy of the technique was also verified with a number of microscope measurements which are detailed in table 3 for the two - element prototype . planar ( x , y ) measurements of the mosaic substrate were made using a photographic plate comparator . at low magnification , these were repeatable to about a micron . height measurements with a calibrated z stage ( μm - level accuracy ) at the four corners of each device indicate a slight tilt to one of the devices along the long axis of the ccds , resulting in a μm - scale overall flatness variation . as this is likely due to excessive butting against the angled socket edge , it may be possible to improve the overall flatness . there was no measurable tip to the ccds along the short axis of the ccds . a 6 meter liquid mirror telescope ( lmt ) is being constructed near vancouver , bc , canada for a dedicated survey of large scale structure in the universe . we are using our mosaic technique to fabricate a 4k x 4k ccd mosaic ( four 2kx2k , 15 μm pixel ccds ) to operate in time - delay and integrate ( tdi ) mode . a widefield corrector lens will be used to correct for the distortion of star trails across the ccds resulting from the high latitude of the telescope . alternatively , ccds can easily be angled with respect to each other using the etched socket technique , thereby minimizing this error . the devices are 2 - side buttable , but we are forced to arrange them all with the same orientation for use as a tdi imager . we can therefore only minimize the ccd gap in the n - s orientation . however , the performance of the mosaic is not reduced by more sizable gaps ( 1 mm ) in the e - w direction ( readout direction ) for tdi readout . although all 4 ccds could be aligned in a common socket , the favored design is to use thin barriers ( 100 μm ) to maintain the modularity of the ccds . maximum gaps between ccds of 100 μm are the same as those proposed for the newest mosaic cameras being fabricated , such as an 8k x 8k pixel mosaic on the 3.5 m telescope at the apache point observatory , new mexico . a ccd spacing of 100 μm is probably close to the fundamental limitation of the etch technique for ccd packing density . the common socket approach would provide excellent alignment if the outer edges of the socket are used as the reference , but at the expense of maximum packing density . alternatively , if the ccds are butted against each other , little is gained except in the wax bond and the reference point of the socket edge ; the rough ccd edge is not ideal for precision alignment .
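the star - frame comparison above amounts to estimating a rigid transformation between two lists of matched star centroids and reading off its rotation angle . a minimal numpy sketch of that step is given below ; it uses a standard least - squares ( kabsch - style ) fit on synthetic data rather than the iraf geomap / geotran machinery used in the text , and all numerical values are illustrative .

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """least-squares rotation + translation mapping src onto dst,
    for (n, 2) arrays of matched star centroids in pixels."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:          # guard against an improper reflection
        vt[-1] *= -1
        r = vt.T @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t

# synthetic matched centroid lists: rotate by 26 ppm and shift by a telescope offset
rng = np.random.default_rng(0)
stars_a = rng.uniform(0, 2048, size=(40, 2))
theta_true = 26e-6
c, s = np.cos(theta_true), np.sin(theta_true)
stars_b = stars_a @ np.array([[c, -s], [s, c]]).T + np.array([120.0, -3.5])

r, t = rigid_transform_2d(stars_a, stars_b)
theta = np.arctan2(r[1, 0], r[0, 0])
residual = stars_b - (stars_a @ r.T + t)
print("rotation (ppm):", theta * 1e6)
print("rms map-back error (pixels):", np.sqrt((residual ** 2).mean()))
```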
for both the common - socket and butted approaches , it is very difficult to remove and replace a faulty ccd without disturbing the remaining ones . interest has also been expressed in using the etch alignment technique to construct mosaic cameras consisting of two 2kx4k pixel ccds . the technique appears to be adaptable to thinned devices and scaleable to the largest silicon wafer sizes . on an 8 " wafer , eight 2kx4k devices can be aligned ; the 12 " wafer awaits 3k x 6k , 15 μm pixel ccds . there are limitations to the technique . both the precision of angular alignment and the registration of rows / columns depend on the wafer dicing process . since the device is butted against the socket edge , any angular misalignment of the device edge , or inconsistency in the imaging area to detector edge distance , will show up as misalignment of the imaging area . based on our measurements of our loral ccds , this appears to be a surmountable problem . height variations of the individual ccd devices will show up as height variations of the composite mosaic . the most realistic solution is to ensure from the manufacturer that the ccds destined for a mosaic all come from the same thickness wafer . our successful construction of a prototype camera using the etch alignment technique provides a sufficient proof of concept to merit further development and use of the technique in future ccd mosaic cameras . the fabrication process is relatively simple and economical using common lithography laboratory equipment . three independent measurements assure that the resulting composite device is flat , aligned and mechanically stable . replacement of faulty ccds is fast and straightforward . as far as we are aware , no other existing mosaic technique is comparable in terms of the alignment achieved . if the ccds can be diced accurately enough , there will be accurate registration between pixels and minimal angular displacement between ccds . the overall flatness and alignment of the mosaic are within the tolerances of most astronomical observing projects . software reduction of images should be fast and straightforward . these devices may be suitable for other applications such as medical imaging or remote sensing where large , flat focal plane detectors are critically important . with pleasure , we acknowledge the assistance and suggestions of mike jackson , paul hickson , and ron johnson . this research is partially supported by operating grants from the natural sciences and engineering research council of canada .

g. luppino and k. miller , a modular dewar design and detector mounting strategy for large format astronomical ccd mosaics , _ publications of the astronomical society of the pacific _ * 104 * , pp . 215 - 222 ( 1992 ) .

c. l. chen , r. w. johnson , r. c. jaeger , m. b. cornelius and w. a. foster , multichip thin - film technology for low temperature packaging , _ ieee 40th electronic components and technology conference _ , pp . 571 - 579 , las vegas ( 1990 ) .

table 2 . comparison of the three alignment measurements :

technique | angular alignment | column registration
test pattern a | 2 ± 3 μm | 5 ± 2 μm
test pattern b | 4.5 μm | 8 μm
test pattern c | 2.5 μm | 6 μm
star transformations | 1.3 ± 1 μm |
microscope | 1 ± 1 μm |
average of 3 measures | 1.75 μm | 3.13 μm

table 3 . microscope measurements of the two - element prototype :

parameter | value | units
device thickness a | 550 ± 1 | μm
device thickness b | 551 | μm
substrate thickness | 550 | μm
socket depth | 80 | μm
socket flatness | | μm
composite device flatness | | μm
device tilt a | | μm
device tilt b | 0 | μm
device tip a , b | | μm
first imaging results are obtained with a new ccd mosaic prototype ( 3k x 3k , 15 μm pixels ) . the ccds are aligned using an etched socket alignment technique . three different measurements of the alignment are made using star images , test pattern images , and microscope analysis . the ccds have an angular misalignment of less than 30 ppm . the composite device is flat at the μm level , with rows / columns oriented to within 20 ppm . the use of an existing technology with built - in precision reduces many of the difficulties and expenses typically encountered in mosaic detector construction . a new camera being built for the ubc liquid mirror telescope is also described .
the research on epidemic spreading has a long history . in the general case , the epidemic system can be represented as a network where nodes stand for individuals and an edge connecting two nodes denotes the interaction between those individuals . in the past , researchers mainly studied disease transmission on conventional networks such as lattices , regular trees , and the er random graph . since the late 1990s , by investigating many real networks including the internet , the www , the scientific web , protein networks and so on , scientists have uncovered a series of complex statistical and topological characteristics such as the small - world ( sw ) phenomenon and the scale - free ( sf ) property . subsequently , the study of dynamical processes on complex networks has also attracted a lot of interest on various subjects , and as one of the typical dynamical processes built on complex networks , epidemic spreading has once more been investigated intensively . the basic conceptual tools for understanding epidemic spreading and the related effective strategies for epidemic control are the epidemiological models . among the numerous possible models , the most investigated and classical ones are the si model , the sis model and the sir model , which can approximately describe the spreading of real viruses such as hiv , encephalitis and influenza in biological networks ; computer viruses and trash mail in technological networks ; and even gossip in social networks . the most valuable result for the standard sir model ( or sis model ) is that the critical threshold of the transmission rate vanishes for scale - free networks in the limit of infinite network size . considering epidemic spreading in real cases , however , some assumptions in the details of the standard sir model are not appropriate . in the classical sir model the transmission rate is a constant , but in the real world it should differ among individuals . based on this observation , jaewook joo et al . proposed an effective transmission rate by introducing an effective coefficient on top of the standard transmission rate for each edge ; similarly , ronen olinky et al . also studied the effectiveness of the transmission rate , and in their work the transmission rate incorporates the probability that a susceptible node actually acquires the epidemic through an edge connecting it to an infected node of a given degree . in these previous studies , the transmission rate on a given edge is treated simply as a function of the degrees of the two connecting nodes , which implies that the transmission rates in the two opposite directions along the same edge are symmetric . moreover , their analytical methods and results , which are based on the stationary state of the sis model , may not be valid for the sir model . thus , in order to make the transmission rate accord better with realistic cases , we take into account the effects of the weights of edges and the strengths of nodes , which are important measures in weighted networks . indeed , the weight ( or the strength ) is one of the most important quantities in many real networks ; for example , in social networks it can represent the intimacy between individuals , in the internet it can reflect the traffic flow or the bandwidths of routers , in the world - wide airport network it can evaluate the importance of an airport , and so on .
particularly , for epidemic spreading , the weight can indicate how frequently two nodes in a scale - free network are in contact : the larger the weight is , the more intensively the two nodes communicate , and at the same time the more likely it is that a susceptible individual will be infected through that edge , where the transmission rate is larger . on the other hand , in the classical sir model , each infected individual can establish contacts with all of his / her acquaintances ( neighbors ) within one time step , that is to say , each infected node 's infectivity equals its degree . but in reality an individual can not contact all of his intimate friends , particularly when he is a patient . in previous work the infectivity was assumed to be a constant , which means each infected individual generates the same fixed number of contacts at each time step . recently , fu et al . proposed a piecewise linear infectivity : if the degree of a node is small , its infectivity is proportional to its degree ; otherwise its infectivity takes a saturated constant value once the degree exceeds a given threshold . with both the constant and the piecewise linear methods , the heterogeneous infectivity of nodes with different degrees is not considered as fully as possible in scale - free networks ; that is to say , there may be nodes with different degrees which have the same infectivity , and there will be a large number of such nodes if the constant is assigned inappropriately or the size of the underlying network is infinite . so , in order to overcome these problems , we introduce a nonlinear infectivity , namely an infectivity exponent \alpha that controls the number of contacts that an infected node generates within one time step ; \alpha lies between 0 and 1 , which makes it convenient to adjust for different scale - free networks . in this paper , we present a modified sir model into which the infectivity exponent \alpha and the weight exponent \beta are introduced ; based on the modified model , the dynamical differential equations for epidemic spreading are proposed . we analyse the equations to investigate the threshold behavior and the propagation behavior of the epidemic spreading , and the analytical results we obtain are verified by the necessary numerical simulations . we show that one can adjust the values of \alpha and \beta to restore the epidemic threshold to a nonzero finite value for different networks , which can prohibit or delay epidemic outbreaks to some extent . and we find that \alpha is more sensitive than \beta in the transformation of the epidemic threshold and the epidemic prevalence , which indicates that the intrinsic factor ( the infectivity exponent ) takes more responsibility than the extrinsic factor ( the weight exponent ) for epidemic outbreaks in large scale - free networks . epidemic modeling has a long research history , and mathematicians have put forward many epidemic models . in the domain of complex networks , the sir model is one of the most investigated and classical epidemic models . in the standard sir model , individuals can be divided into three classes depending on their states : susceptible ( healthy ) , infected and removed ( immunized or dead ) . in order to take into account the heterogeneity induced by the presence of nodes with different degrees , we use s_k ( t ) , i_k ( t ) , r_k ( t ) to denote the densities of susceptible , infected and removed individuals with degree k at time t , respectively . these variables are connected by means of the normalization s_k ( t ) + i_k ( t ) + r_k ( t ) = 1 . the global quantities such as the ( average ) epidemic prevalence are therefore expressed by an average over the various degree classes , i.e.
, r ( t ) = \sum_{k } p ( k ) r_k ( t ) . for the standard sir model , the epidemic evolves by the following rules : at each time step , a susceptible individual acquires the infection with transmission rate \lambda in one contact with any neighboring infected individual ; that is , if a susceptible individual has an edge connecting it to an infected individual , the disease will be transmitted to the susceptible one through that edge with this probability . on the other hand , the infected ones recover and become immune ( they can not be infected any more ) with a recovery rate that can be set to unity without loss of generality . for comparison , we first review some classical results of moreno et al . , who used mean field theory to write down the dynamical differential equations of the sir model , in which p ( k' | k ) denotes the conditional probability for a node with degree k to be connected to a node with degree k' . in the uncorrelated case , moreno et al . obtained the epidemic threshold \lambda_c = \langle k \rangle / \langle k^{2 } \rangle , which implies the absence of an epidemic threshold in a wide range of scale - free networks , where \langle k^{2 } \rangle diverges . this result is bad news for epidemic control and prevention , since the epidemic will prevail in many real networks for any nonzero value of the transmission rate . in this section , we give a detailed investigation of the modified sir model into which we introduce the weighted transmission rate and the nonlinear infectivity . the results we obtain might deliver some useful information for epidemiology . for the analysis , we first describe the general differential equations of the sir model based on mean field theory , in which s_k ( t ) , i_k ( t ) , r_k ( t ) have the same meaning as in the standard sir model ( see section 2 ) , while two additional quantities denote the infectivity of nodes with degree k and the transmission rate \lambda_{kk' } from nodes with degree k to nodes with degree k' , respectively . different from previous studies , in this paper we mainly consider the sir model on weighted networks . among the various weighting patterns in complex networks , using the nodes ' degrees to express the weights of edges is very common , namely , the weight between two nodes with degrees k and k' can be represented as a function of their degrees , i.e. w_{kk' } = w_0 ( k k' )^{\beta } , where the basic parameter w_0 and the exponent \beta depend on the particular network ( e.g. , in the _ e.coli _ metabolic network \beta = 0.5 ; in the us airport network ( usan ) \beta = 0.8 ; in the scientist collaboration networks ( scn ) \beta = 0 ) . note that a weight belongs to an edge ; similarly , a node ( with degree k ) can also be measured by weights through its strength , obtained by summing the weights of the links connected to it , i.e. s ( k ) = k \sum_{k' } p ( k' | k ) w_{kk' } , where s ( k ) is the strength of a node with degree k . in this paper , for simplicity , we focus on uncorrelated ( also called non - assortative mixing ) networks , where the conditional probability satisfies p ( k' | k ) = k' p ( k' ) / \langle k \rangle . thus , one can obtain s ( k ) = w_0 k^{1+\beta } \langle k^{1+\beta } \rangle / \langle k \rangle . here , for each node with degree k we fix a total transmission rate \lambda k , and the transmission rate \lambda_{kk' } on the edge from the k - degree node to the k' - degree node is redistributed in proportion to the share of the k - degree node 's strength that the edge 's weight accounts for , that is , \lambda_{kk' } = \lambda k \, w_{kk' } / s ( k ) , from which we know that the larger the proportion of s ( k ) that the weight of an edge accounts for , the more likely the disease will be transmitted through that edge . in the uncorrelated case , one can obtain \lambda_{kk' } = \lambda k'^{\beta } \langle k \rangle / \langle k^{1+\beta } \rangle .
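as a small illustration of this degree - based redistribution , the sketch below assigns to every directed edge ( i , j ) of a graph the rate obtained by splitting a node 's total rate over its edges in proportion to the weights ( k_i k_j )^\beta . the function and parameter values are illustrative , and the formula simply follows the redistribution rule described above , computed on the actual neighbourhoods rather than in the mean field .

```python
import networkx as nx

def edge_transmission_rates(g, lam, beta):
    """directed per-edge rates: node i carries a total rate lam * k_i,
    split over its edges in proportion to the degree-based weights
    w_ij = (k_i * k_j) ** beta (node strength = sum of its edge weights)."""
    k = dict(g.degree())
    rates = {}
    for i in g:
        weights = {j: (k[i] * k[j]) ** beta for j in g[i]}
        strength = sum(weights.values())
        for j, w in weights.items():
            rates[(i, j)] = lam * k[i] * w / strength
    return rates

g = nx.barabasi_albert_graph(1000, 3, seed=1)
rates = edge_transmission_rates(g, lam=0.1, beta=0.5)
u, v = next(iter(g.edges()))
# the two directions of an edge generally differ, unlike purely degree-symmetric rates
print(rates[(u, v)], rates[(v, u)])
```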
moreover , the total probability that a susceptible node with degree k will be infected at time step t is determined by the degree sequence of the neighboring infected nodes that connect to it at that time step . on the other hand , in the general differential equations of the sir model ( equations ( [ eq4 ] ) - ( [ eq6 ] ) ) the infectivity of nodes with degree k appears , and in the present model we define it as k^{\alpha } , which is to say , each infected individual can establish k^{\alpha } contacts with its neighbors within one time step . the exponent \alpha dominates the infectivity among nodes with different degrees ; since 0 < \alpha \le 1 , it can be adjusted to make the number of contacts fall in a more realistic range , and a node 's infectivity grows nonlinearly with increasing degree . taking the simplified expressions of the infectivity and the weighted transmission rate into equations ( [ eq4 ] ) - ( [ eq6 ] ) , with the recovery rate set to unity ( without loss of generality ) , we obtain the dynamical equations of the modified model . combined with the standard initial conditions , and noting that the initial density of infected nodes is in general very small , equation ( [ eq9 ] ) can be directly integrated to give s_k ( t ) = e^{-\frac{\lambda k^{1+\beta}}{\langle k^{1+\beta}\rangle}\phi ( t ) } , where \phi ( t ) = \sum_{k } k^{\alpha } p ( k ) r_k ( t ) , and in the last equality we have made use of equation ( [ eq11 ] ) . in order to obtain concrete results for the epidemic threshold and the average epidemic prevalence , we first compute the time derivative of the magnitude \phi ( t ) :

\frac{d\phi ( t ) }{dt } = \sum_{k } k^{\alpha } p ( k ) i_{k } ( t ) = \langle k^{\alpha}\rangle-\phi ( t ) -\sum_{k}k^{\alpha}p ( k ) s_{k } ( t ) = \langle k^{\alpha}\rangle-\phi ( t ) -\sum_{k}k^{\alpha}p ( k ) e^{-\frac{\lambda k^{1+\beta}}{\langle k^{1+\beta}\rangle}\phi ( t ) } .

for a general degree distribution p ( k ) , equation ( [ eq13 ] ) can not be solved in a closed form . however , we can still obtain some useful results for the steady state of the epidemics . since in the steady stage , with t sufficiently large , the density of infected nodes vanishes and consequently d\phi / dt = 0 , one gets the self - consistent equation for \phi_{\infty } from equation ( [ eq13 ] ) . the value \phi_{\infty } = 0 is always a ( trivial ) solution . then we compute the second order derivative of the rhs of equation ( [ eq14 ] ) with respect to \phi_{\infty } and note that the rhs of equation ( [ eq14 ] ) is a convex function ; therefore , a nontrivial solution of equation ( [ eq14 ] ) exists only if the slope of the rhs at the origin exceeds unity . this relation implies \lambda \langle k^{1+\alpha+\beta}\rangle / \langle k^{1+\beta}\rangle \ge 1 , and the above inequality defines the epidemic threshold ( equation ( [ eq18 ] ) ) , \lambda_c = \langle k^{1+\beta}\rangle / \langle k^{1+\alpha+\beta}\rangle , below which the average epidemic prevalence will finally be approximately null , and above which it will attain a finite value . one can see that if \alpha = 1 and \beta = 0 , then \lambda_c = \langle k \rangle / \langle k^{2}\rangle , which recovers the absence of the epidemic threshold in a wide range of scale - free networks ; for suitable nonzero choices of \alpha and \beta , however , the threshold becomes a finite value . furthermore , we consider the epidemic threshold in the case of general scale - free networks whose degree distribution is p ( k ) = c k^{-\gamma } , where c is the normalization constant . the two moments \langle k^{1+\beta}\rangle and \langle k^{1+\alpha+\beta}\rangle can then be expressed in terms of the largest ( smallest ) degree in the underlying networks .
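the threshold expression \lambda_c = \langle k^{1+\beta}\rangle / \langle k^{1+\alpha+\beta}\rangle stated above can be evaluated directly from a degree sequence . the short numpy sketch below does this for a ba network ; the network size and the ( \alpha , \beta ) pairs are illustrative only , and the formula is taken exactly as written above .

```python
import numpy as np
import networkx as nx

def epidemic_threshold(degrees, alpha, beta):
    """lambda_c = <k^(1+beta)> / <k^(1+alpha+beta)>, the linearised
    self-consistency condition of the modified sir model."""
    k = np.asarray(degrees, dtype=float)
    return np.mean(k ** (1 + beta)) / np.mean(k ** (1 + alpha + beta))

g = nx.barabasi_albert_graph(10_000, 3, seed=0)
deg = [d for _, d in g.degree()]
for alpha, beta in [(1.0, 0.0), (0.5, 0.0), (0.5, -1.5), (0.5, 0.5)]:
    print(alpha, beta, round(epidemic_threshold(deg, alpha, beta), 4))
```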
substituting p ( k ) = c k^{-\gamma } into equation ( [ eq18 ] ) , one can rewrite the epidemic threshold as in equation ( [ eq19 ] ) . from equation ( [ eq19 ] ) , one can see that the divergence of the largest degree ( or , equivalently , of the network size ) makes the epidemic threshold tend towards zero if \alpha + \beta > \gamma - 2 ; on the other hand , if \alpha + \beta < \gamma - 2 , the epidemic threshold is approximately a finite value . thus , the critical border is \alpha + \beta = \gamma - 2 . although for most real networks , including the internet , the www , the world - wide airport networks and the scientific collaboration networks , the topology exponent \gamma lies between 2 and 3 , which tends to induce the absence of the epidemic threshold , one can adjust the infectivity exponent \alpha and the weight exponent \beta to restore a nonzero threshold for a given network ( a fixed value of \gamma ) .

[ figure threshold12 : ( a ) the epidemic threshold \lambda_c versus the infectivity exponent \alpha with the weight exponent \beta fixed , in ba networks . ( b ) the epidemic threshold \lambda_c versus \beta with \alpha fixed , in ba networks . ]

[ figure threshold3 : the epidemic threshold \lambda_c in a 3d plot over various \alpha and \beta in ba networks . ]

in order to get an intuitive picture of the relation between \lambda_c , \alpha and \beta , we have performed numerical simulations for the epidemic threshold . firstly , on the basis of simulated stochastic realizations of the sir model on ba networks , whose theoretical scale - free exponent is 3 ( \gamma = 3 ) , we fix \beta and adjust \alpha between 0 and 1 to show the transformation of the epidemic threshold . in this case the critical value of \alpha is \gamma - 2 - \beta , below which \lambda_c is a nonzero finite value . as figure [ threshold12](a ) displays , the value of \lambda_c is greater than 0.01 when \alpha lies below this critical value , and vice versa . secondly , we fix \alpha and adjust \beta between -2 and 2 to show the transformation of \lambda_c . in this case the critical value of \beta is \gamma - 2 - \alpha , below which \lambda_c is a nonzero finite value . as figure [ threshold12](b ) displays , around this critical point \lambda_c changes much faster on one side than on the other , and the finiteness of \lambda_c is apparent when \beta lies below the critical value .
from figure [ threshold12 ] , one can see that the simulations are consistent with the analytic results for the critical threshold once the finite size of the underlying networks is taken into account . moreover , it is observed that the decreasing trend of \lambda_c with increasing \alpha is much quicker than that with increasing \beta ; that is to say , \alpha is more sensitive than \beta in the transformation of the epidemic threshold , which means \alpha is the leading factor for the transformation of \lambda_c in the present model . figure [ threshold3 ] displays the epidemic threshold in a 3d plot over various \alpha and \beta ; the critical condition for a nonzero finite threshold is \alpha + \beta < \gamma - 2 ( i.e. , \alpha + \beta < 1 for the ba networks considered here ) . one can see that \lambda_c is small in the blue area , where the bulk of the ( \alpha , \beta ) pairs meet the opposite condition , which is the condition for the threshold to vanish .

[ figure rl1 : the steady epidemic prevalence versus \lambda for the sir model in ba networks , with \beta fixed and , from top to bottom , \alpha = 1.0 , 0.9 , 0.8 , ... , 0.2 , 0.1 . ]

[ figure rl23 : the steady epidemic prevalence versus \lambda for the sir model in ba networks , with \alpha fixed and ( a ) \beta = 0 , 0.5 , 1.0 , 1.5 , 2.0 ( from top to bottom ) ; ( b ) \beta = -0.5 , -1.0 , -1.5 , -2.0 ( from top to bottom ) . ]

[ figure rl4 : the steady epidemic prevalence versus \lambda for the sir model in ba networks , with , from top to bottom , \alpha = 0.9 , 0.7 , 0.5 , 0.3 , 0.1 and accordingly \beta = -1.9 , -1.7 , -1.5 , -1.3 , -1.1 . the inset shows the susceptible density versus \lambda for the same combinations of \alpha and \beta . ]

[ figure rl5 : as above , with \alpha = 0.8 , 0.6 , 0.4 , 0.2 and accordingly \beta = -0.8 , -0.6 , -0.4 , -0.2 . ]

[ figure rl6 : as above , with \alpha = 0.7 , 0.6 , 0.5 , 0.4 , 0.3 and accordingly \beta = 0.3 , 0.4 , 0.5 , 0.6 , 0.7 . ]

[ figure rl7 : as above , with \alpha = 0.9 , 0.8 , 0.7 , 0.5 , 0.3 , 0.1 and accordingly \beta = 1.1 , 1.2 , 1.3 , 1.5 , 1.7 , 1.9 . ]

for further investigation of the epidemic dynamics of the present model , we study the propagation behavior of the epidemic spreading . firstly , we investigate the average epidemic prevalence in the steady stage of the epidemic evolution with different combinations of \alpha and \beta . from the analysis in section 3 , it is easy to conclude that at the epidemic critical point \lambda = \lambda_c the steady prevalence is quite small , and it can be related approximately to the small quantity \phi_{\infty } through their defining relationship .
then , expanding the rhs of equation ( [ eq14 ] ) for small \phi_{\infty } and ignoring the higher - order terms , we obtain equation ( [ eq20 ] ) . next , we compute the derivative of equation ( [ eq20 ] ) with respect to \lambda at the critical point , as in equations ( [ eq21 ] ) - ( [ eq23 ] ) ; for the general scale - free networks considered above , equation ( [ eq23 ] ) can be rewritten as equation ( [ eq24 ] ) . the obtained results show that , for a given topology of the underlying network , the infectivity exponent \alpha makes the primary contribution to the velocity with which the steady epidemic prevalence increases . combining this with the analysis in section 3 , for a fixed sum of \alpha and \beta one can conclude that the larger the ratio of \alpha to \beta is , the larger the initial slope of the steady prevalence is ( since most large - scale real networks are scale - free with 2 < \gamma \le 3 ) . for a better understanding of the epidemic propagation behavior , we perform numerical simulations with various combinations of \alpha and \beta on ba networks ( \gamma = 3 ) . firstly , we investigate the impact of \alpha and \beta separately , through the two particular cases of fixed \beta with different \alpha , and fixed \alpha with different \beta . figure [ rl1 ] displays the effect of \alpha on the steady epidemic prevalence with \beta fixed . as the figure shows , for the same \lambda , the slope of the prevalence grows as \alpha increases ( see the curves for \alpha = 0.9 , 0.8 , ... , 0.2 , 0.1 ) , which is consistent with the analytical results from equation ( [ eq24 ] ) . figure [ rl23 ] displays the prevalence versus \lambda in the case of fixed \alpha , with figure [ rl23](a ) : \beta = 0 , 0.5 , 1.0 , 1.5 , 2.5 ( from top to bottom ) and figure [ rl23](b ) : \beta = -0.5 , -1.0 , -1.5 , -2.5 ( from top to bottom ) . as shown in figure [ rl23 ] , the larger the absolute value of \beta is , the more slowly the prevalence grows . on the other hand , for the general case ( \alpha and \beta both nonzero ) , according to the critical equation and the algebraic sign of the sum \alpha + \beta , we consider four representative combinations of \alpha and \beta , with \alpha + \beta = -1 , 0 , 1 and 2 . in each combination , the curves are further divided into several different configurations by the ratio of \alpha to \beta . as shown in figures ( [ rl4 ] - [ rl7 ] ) , for a fixed sum of \alpha and \beta , one can see that the prevalence grows more quickly as the ratio increases , which is consistent with our analytical result that \alpha is the more sensitive factor . furthermore , one can see that the epidemic threshold is quite small ( it tends to zero once the effects of finite size are considered ) in figure [ rl23](a ) and figure [ rl7 ] , which is also consistent with the critical condition derived above ; otherwise , the threshold becomes a nonzero finite value , as shown in figure [ rl1 ] , figure [ rl23](b ) , figure [ rl4 ] and figure [ rl5 ] . in the early stage of the curves in figures ( [ rl1 ] - [ rl7 ] ) , one can see that the prevalence grows in an exponential form as \lambda increases ; the growth rate then slows down and at last tends to zero , which means the prevalence has attained a steady value .
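a stochastic realization of this modified dynamics is easy to set up . the sketch below is a minimal monte carlo discretisation , assuming the ingredients described above : each infected node of degree k contacts about k^\alpha neighbours per step , chosen with probability proportional to the degree - based edge weights , transmits with probability \lambda per contact , and recovers after one step ( recovery rate unity ) . it is an illustrative scheme , not the simulation code used for the figures , and all parameter values are arbitrary .

```python
import random
import networkx as nx

def modified_sir(g, lam, alpha, beta, i0=0.01, steps=80, seed=0):
    """one run of the sir model with infectivity k**alpha and weighted contacts;
    returns the cumulative prevalence (i + r fraction) per time step."""
    rng = random.Random(seed)
    k = dict(g.degree())
    state = {v: "S" for v in g}
    for v in rng.sample(list(g), max(1, int(i0 * g.number_of_nodes()))):
        state[v] = "I"
    prevalence = []
    for _ in range(steps):
        infected = [v for v in g if state[v] == "I"]
        if not infected:
            break
        new_infections = set()
        for v in infected:
            nbrs = list(g[v])
            if not nbrs:
                continue
            weights = [(k[v] * k[u]) ** beta for u in nbrs]
            n_contacts = max(1, round(k[v] ** alpha))
            for u in rng.choices(nbrs, weights=weights, k=n_contacts):
                if state[u] == "S" and rng.random() < lam:
                    new_infections.add(u)
        for v in infected:                  # recovery rate set to unity
            state[v] = "R"
        for u in new_infections:
            state[u] = "I"
        prevalence.append(sum(s != "S" for s in state.values()) / len(state))
    return prevalence

g = nx.barabasi_albert_graph(5000, 3, seed=1)
curve = modified_sir(g, lam=0.2, alpha=0.5, beta=-0.5)
print("final epidemic size:", curve[-1])
```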
[ figure rt1 : the temporal epidemic prevalence versus t for the sir model in ba networks , with , from top to bottom , \alpha = 0.9 , 0.8 , 0.7 , ... , 0.2 , 0.1 and accordingly \beta = -1.9 , -1.8 , -1.7 , ... , -1.2 , -1.1 . ]

[ figure rt2 : as above , with \alpha = 0.8 , 0.7 , ... , 0.2 , 0.1 and accordingly \beta = -0.8 , -0.7 , ... , -0.2 , -0.1 . ]

[ figure rt3 : as above , with \alpha = 0.9 , 0.8 , 0.7 , ... , 0.2 , 0.1 and accordingly \beta = 0.1 , 0.2 , 0.3 , ... , 0.8 , 0.9 . ]

[ figure rt4 : as above , with \alpha = 0.9 , 0.8 , 0.7 , ... , 0.2 , 0.1 and accordingly \beta = 1.1 , 1.2 , 1.3 , ... , 1.8 , 1.9 . ]

to investigate the temporal propagation behavior , we simulate the time evolution of the prevalence for the sir model on ba networks . as displayed in figure [ rt1 ] ( \alpha + \beta = -1 ) , figure [ rt2 ] ( \alpha + \beta = 0 ) , figure [ rt3 ] ( \alpha + \beta = 1 ) and figure [ rt4 ] ( \alpha + \beta = 2 ) , one can see that the prevalence grows in an exponential form in the early stage and then stabilizes at a nonzero value as time goes on . since \lambda may be smaller than the threshold \lambda_c when the sum \alpha + \beta is very small , the steady value of the prevalence will in that case be quite small , approximately equal to the initial density of infected nodes ; when \lambda is above the threshold , the prevalence is higher as the ratio of \alpha to \beta gets larger for the same sum . moreover , it is observed that the steady value of the prevalence is smaller in figure [ rt4 ] compared with the other figures ( [ rt1 ] - [ rt3 ] ) . although an exact solution in terms of \alpha and \beta which would demonstrate this difference well is difficult to obtain here , from a qualitative perspective we believe it is due to a characteristic of the sir model . in figure [ rt4 ] the sum \alpha + \beta is large , which induces a small threshold ; thus at the early stage of the evolution many susceptible nodes will be infected , and because the recovery rate is set to unity , the old infected nodes simultaneously become removed ones that can not be infected any more . consequently , the susceptible targets available to an infected node decrease as time evolves , and moreover many infected nodes end up surrounded by removed ones . in this situation the epidemic spreading arrives at an equilibrium much more quickly , and thus the epidemic prevalence in the steady stage is also a small value , as figure [ rt4 ] displays . to sum up , in this paper we have investigated the dynamical behavior of the sir model with a weighted transmission rate and a nonlinear infectivity . we have shown that one can adjust the exponents \alpha and \beta to control the epidemic threshold , which is absent for the standard sir model in scale - free networks . the critical value depends only on the exponents \alpha and \beta for a given network topology ( a fixed value of \gamma ) , and \alpha is more sensitive than \beta for the transformation of the epidemic threshold and the epidemic prevalence , which agrees with the numerical simulations very well . the numerical results for the time behavior of the prevalence have also been presented , where the remarkable result is that , for a fixed \lambda , a smaller threshold can induce a smaller epidemic prevalence at the equilibrium .
in a way , epidemic spreading can be regarded as a reaction - diffusion process , which also has a very close relation with information retrieval , peer trust and influence spreading . efficient or inefficient diffusion may each have its merits in various natural and artificial networks . our work might deliver some useful information or new insights for designing data layouts , city layouts and network layouts so that they perform to their best advantage . we benefited from useful discussions with yichao zhang and ming tang . this research was supported by the national basic research program of china under grant no . 2007cb310806 , the national natural science foundation of china under grant nos . 60704044 , 60873040 and 60873070 , shanghai leading academic discipline project no . b114 , and the program for new century excellent talents in university of china ( ncet-06 - 0376 ) .

kermack w o and mckendrick a g 1927 _ proc . r. soc . london . ser . a _ * 115 * 700
in this paper , we investigate epidemic spreading for the sir model in weighted scale - free networks with nonlinear infectivity , where the transmission rate in our analytical model is weighted . concretely , we introduce the infectivity exponent \alpha and the weight exponent \beta into the analytical sir model , and then examine the combined effects of \alpha and \beta on the epidemic threshold and the phase transition . we show that one can adjust the values of \alpha and \beta to restore the epidemic threshold to a finite value , and it is observed that the epidemic prevalence grows in an exponential form in the early stage and then follows hierarchical dynamics . furthermore , we find that \alpha is more sensitive than \beta in the transformation of the epidemic threshold and the epidemic prevalence , which might deliver some useful information or new insights for epidemic spreading and the related immunization schemes .
throughout this paper we assume that is injective ( one - to - one ) .such functions are generic in the space of real functions on .one may associate a _ gradient flow _ of on the graph , as a map which maps a subset of vertices to its immediate neighbors with lower values .more precisely , given , define the neighbor set of with lower energy and for any , we define let , _etc_. we say that is _ reachable _ from , denoted by or , if for some , _ i.e. _ , we can find an energy decreasing path from to .note that our construction of the gradient flow is related to , but different from the gradient network , in which each node is only connected to its neighbor with the lowest energy ( _ i.e. _ the neighbor in the steepest descent direction ) .we also remark that the gradient flow can be viewed as a `` zero temperature '' limit of the stochastic gradient flow introduced in in the study of network communities .the _ local minima _ of are those vertices whose value is no larger than the values of its neighbors . in other words , the set of local minima are precisely the maximal vertex set of _ fixed points _ of the gradient flow . given a local minimum , its _ attraction basin _is defined to be : these are the points that reach the local minimum but not any other local minima . _ boundary or separatrix _ consists of those nodes which can reach more than one local minimum following the gradient flow it is clear by definition that we have the non - overlapping decomposition our next task is to classify the nodes in .we do so according to their role in the pathways connecting the different local minima .in particular , _index- critical nodes ( saddles ) _ are defined as the maxima on _ local minimum energy paths _ connecting different local minima .clearly such a definition relies on the notion of _ local minimal energy paths _ , which depends on the topology of the path space .given two local minima , we examine all the paths connecting them . if a path can be deformed by the gradient flow to another path , we say that is _ deformable _ to .the _ local minimum energy paths _ are paths which can not be deformed by the gradient flow . to be more precise , given two points , we define a _ path _ from to as such that , , and for .we denote the collection of paths from to as .we note the following elementary lemma , whose proof is obvious .[ lem : path ] let , we can then find a path from to such that for .given two paths , we say is _ deformable to _ , if there is a map , such that * ( reaching ) every node in reaches some nodes in , _i.e. _ for any , is not empty and for each , ; * ( onto ) every node in is reachable from , _ i.e. _ for any , there exists , so that , or equivalently , let be two local minima .we call a path _ local minimum energy path _ , if it is not deformable to any other path in .we define the energy of a path the maximal energy traversed by the path , _i.e. _ . from the definition ,if is deformable to , we have , so in terms of energy barrier , is a more preferable path than .given a local minimum energy path , we call the node of maximal energy on the path an _ index- critical node_. the set of all index- critical nodes is denoted by . 
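the gradient flow and its fixed points are straightforward to compute on any graph . the following is a minimal sketch , assuming an injective real - valued function stored as a python dict ; the degree - based function and the tie - breaking perturbation mirror the construction used for the karate club example later in the text .

```python
import networkx as nx

def lower_neighbors(g, f):
    """gradient-flow map: each node points to its neighbours with strictly lower f."""
    return {v: [u for u in g[v] if f[u] < f[v]] for v in g}

def local_minima(g, f):
    """fixed points of the gradient flow: nodes with no lower-valued neighbour."""
    return [v for v, down in lower_neighbors(g, f).items() if not down]

g = nx.karate_club_graph()
f = {v: -d + 1e-6 * v for v, d in g.degree()}   # tiny perturbation keeps f injective
print(local_minima(g, f))   # the two hubs, nodes 0 and 33 in networkx's 0-based labels
```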
we will also call the local minima _ index-0 critical nodes _ , and adopt the corresponding notation . the following fact gives a characterization of index-1 critical nodes ; the proof can be found in the si . [ thm : lower ] all local minima of the restriction to the boundary set are index-1 critical nodes . the other index-1 critical nodes will reach one of those local minima by the gradient flow . we call the index-1 critical nodes that are also local minima on the boundary set the nondegenerate index-1 critical nodes , and denote the set of them accordingly . the other index-1 critical nodes are called degenerate . not every index-1 critical node is a local minimum on the boundary set , for example in some cluster trees ( see figure [ fig : tree ] ) .

[ figure tree : left : an example of a degenerate and a nondegenerate index-1 critical node , where node 5 on top of the tree is a degenerate index-1 saddle while node 3 is a nondegenerate index-1 saddle . right : an example of a node that is both a degenerate and a non - degenerate critical node , where node 7 on top of the tree is a degenerate index-1 saddle , as it lies on the minimum energy path connecting local minima 1 ( or 2 ) and 3 ( or 4 ) , and at the same time a non - degenerate index-2 saddle , as it is on the minimum energy path linking index-1 saddles 5 and 6 . ]

the procedure presented above can be extended to define critical nodes of higher index . to define them , we consider the subgraph with nodes in the boundary set and edges restricted to this subset . the gradient flow on this subgraph is defined similarly as before . we define the attraction basins on this subgraph in the same way , and note that for any nondegenerate index-1 critical node , the attraction basin on this subgraph is nonempty .
while for a degenerate index- critical node , the attraction basin is an empty set .this explains the notion `` degenerate '' for the critical nodes that are not local minima in .we define the boundary set as as shown in proposition [ thm : lower ] , all local minima on are in .therefore , we have the decomposition analogously , we define_ index- critical nodes _ as the maxima on local minimum energy paths connecting different nondegenerate index- critical nodes .it is clear that index- critical nodes , if exist , must be in .we remark that under our definition , a degenerate index- critical node can also be an index- critical node , as shown in figure .this ambiguity is actually quite natural from the network point of view , as these points play multiple roles in the structure of the network .the degenerate index- critical node can lie either in the basin of a nondegenerate critical node or link together two different nondegenerate critical nodes .higher index critical nodes can be defined recursively through further decomposition of .classification for high index critical points can be done following similar arguments as above .combining these , we obtain : [ thm : decomp ] admits the following decomposition where here is the attraction basin of local minima restricted on the -th boundary set and is the set of nondegenerate index- critical nodes .the theorem gives us a hierarchical representation of the network associated to the energy landscape .it actually leads to a hypergraph representation whose hypernodes are made up of critical nodes with their attraction basins .the landscape introduced above can be naturally formulated in terms of a flooding procedure , from low to high values of the height function .flooding starts from local minima , followed by the attraction basins .once the relevant index- saddle is passed , basins of local minima are merged together .this procedure then continues on to critical points of higher indices .more precisely , this procedure can be described in terms of persistent homology .persistent homology , firstly proposed by and developed afterwards largely in , is an algebraic tool for computing the betti numbers and homology groups of a simplicial complex when its faces are added sequentially . to work with persistent homology ,we extend the graph into a simplicial complex up to dimension 2 , and also define a filtration which consists of such simplicial complexes , in a spirit close to for pl - manifolds .an abstract simplicial complex is a collection of subsets of , which is closed under deletion or inclusion , i.e. if , then for any .we define _ the flooding complex of network associated with the function _ , as follows : * -simplex : the vertex set ; * -simplex : the vertex pairs that , _i.e. _ , for some ; * -simplex : collections of triangles , such that and .one can similarly extend the definition above to general -simplex .however for our purpose it suffices to define up to dimension simplices .a filtration of flooding complex is a nested family with which respects the order of deletion or inclusion in , i.e. if and then .assume that is injective or one - to - one , which is generically the case . by taking the maximum over vertices, one can extend from the vertex set to simplicies , and thus to the simplicial complex .for a simplex let .this implies that a face s -value is always no more than that of its associated simplex , i.e. . a filtration respecting the order of can be defined in the following way : 1 . ; 2 . , _ i.e. 
_ there is precisely one node being added into the filtration at each step ; 3 . when a node is added into the filtration , all the simplices of the same energy are added into the filtration simultaneously . note that under this construction the first nonempty complex consists of the global minimum of the function . in this construction , we consider the filtration corresponding to the flooding procedure from low to high values . the change of betti numbers identifies the index-0 and index-1 critical nodes . once the filtration is defined , persistent homology computes the betti numbers of the simplicial complex at each step of the filtration , and draws the barcodes of betti number versus the filtration value , _ e.g. _ using the jplex toolbox . the proof of the following theorem is in the si . [ thm : persist1 ] consider the filtration . a step of the filtration contains an index-0 critical node if and only if the zeroth betti number increases from the previous step ; it contains an index-1 critical node if and only if either the zeroth betti number decreases or the first betti number increases from the previous step . to find higher index saddles , we restrict to the subgraph on the corresponding boundary set , with edges restricted to that set , and analogously construct the filtration corresponding to the flooding procedure on the subgraph . a similar identification holds for higher index saddles . [ thm : persistk ] consider the filtration on the subgraph . a step of this filtration contains a higher - index critical node if either the zeroth betti number decreases or the first betti number increases from the previous step . clearly , our characterization of higher order critical nodes above only exploits simplicial complexes up to dimension 2 , whose persistent homology computation has recently been improved to a complexity that scales mildly with the total number of simplices and the number of nodes . such a complexity does not suffer from the curse of dimensionality , unlike the computation of high order betti numbers in general . since we know from proposition [ thm : lower ] that nondegenerate critical nodes are actually local minima in subgraphs , this leads to an efficient algorithm for finding nondegenerate critical nodes . in fact , all the examples shown in this paper have only nondegenerate critical nodes , which can thus be found efficiently using this algorithm . given an injective function on the vertices , we obtain the local minima and nondegenerate saddles using algorithm [ alg : nondegenerate ] .
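before turning to that algorithm , the component - tracking half of theorem [ thm : persist1 ] ( the zeroth betti number ) can be illustrated with a few lines of union - find : flooding the vertices in order of increasing value , a vertex with no previously added neighbour creates a component ( an index-0 critical node ) , and a vertex joining two or more components is an index-1 saddle on a global minimum energy path . loop - creating saddles ( first betti number events ) are not tracked in this sketch , which is an illustration rather than a full persistence computation ; the degree - based function on the karate club network is the one used in the example below .

```python
import networkx as nx

def flooding_component_events(g, f):
    """flood g in order of increasing f, tracking connected components with
    union-find; returns component creators (index-0 critical nodes) and
    component mergers (index-1 saddles on global minimum energy paths)."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    creators, mergers = [], []
    for v in sorted(g, key=lambda u: f[u]):
        parent[v] = v
        roots = {find(u) for u in g[v] if u in parent}
        if not roots:
            creators.append(v)      # new connected component: local minimum
        elif len(roots) > 1:
            mergers.append(v)       # joins two or more components: index-1 saddle
        for r in roots:
            parent[r] = v
    return creators, mergers

g = nx.karate_club_graph()
f = {v: -d + 1e-6 * v for v, d in g.degree()}
print(flooding_component_events(g, f))   # expect ([33, 0], [2]), i.e. the two hubs and node 3 (1-based)
```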
the bottleneck in this algorithm is in finding the attraction basins of the local minima , whose complexity scales with the number of vertices and the maximum degree of a node ; the total complexity additionally involves the maximum index of the critical points . the algorithm is much faster than the previous algorithm for finding all critical nodes .

algorithm [ alg : nondegenerate ] ( sketch ) : sort the nodes in increasing order of the function ; for each node , find its neighbors with lower energy ; if there are none , add the node to the set of local minima and set its color to its own node index ; if all lower - energy neighbors share a single color , give the node that color ; otherwise leave its color blank . output : ( 1 ) the local minima as nondegenerate critical nodes ; ( 2 ) the attraction basins as the color components ; ( 3 ) the boundary as the blank nodes . then restrict to the subgraph on the blank nodes , with edges restricted to them , and repeat .

zachary 's karate club network consists of 34 nodes , representing 34 members of a karate club , with node 1 being the instructor and node 34 being the president ( figure [ fig : karate ] ) . an edge between two nodes means that the two members join some common activities beyond the normal club classes and meetings . conflicts broke out between the instructor and the president when the instructor sought to raise the fee and the president opposed the proposal . the club eventually split into two , one part formed by the president ( blue nodes in figure [ fig : karate](a ) ) and the other led by the instructor ( red nodes in figure [ fig : karate](a ) ) . a lot of information about this fission can be read off from the graph structure of this social network .

[ figure barcode : top : the zeroth betti number versus the filtration value . node 34 , with the lowest energy , is added first , creating a connected component which never disappears ; node 1 , with the second smallest energy , creates a new connected component which disappears when index-1 saddle 3 is added . bottom : the first betti number versus the filtration value . the loop is created when index-1 saddle 32 is added and then cancelled when index-2 saddle 29 is added . ]

let d_i be the degree of node i , and define the energy so that nodes of higher degree have lower energy . to avoid identical values between neighboring nodes of the same degree , a small enough random perturbation is added so that the function is injective . figure [ fig : karate](b ) shows the gradient flow of this function : the arrows on the edges point from low degree nodes to high degree ones . note that nodes 24 and 25 both have degree 3 , hence a small random perturbation is added , resulting in the arrow from 25 to 26 ; the same is done for nodes 5 and 11 .
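a compact runnable version of algorithm [ alg : nondegenerate ] is sketched below : it finds local minima and attraction basins on the current ( sub ) graph , collects the boundary nodes that descend to more than one minimum , and recurses on the boundary . the recursion is a straightforward rendering of the listing above and it only recovers nondegenerate critical nodes ; applied to the karate club network with the degree - based energy it should reproduce the minima ( nodes 1 and 34 ) and , per the analysis below , the index-1 saddles ( nodes 3 and 32 ) in the paper 's 1 - based labels .

```python
import networkx as nx

def descend_targets(v, down, memo):
    """set of local minima reachable from v along strictly descending paths."""
    if v not in memo:
        memo[v] = {v} if not down[v] else set().union(
            *(descend_targets(u, down, memo) for u in down[v]))
    return memo[v]

def nondegenerate_critical_nodes(g, f, max_index=3):
    """recursive sketch of algorithm 1: per index level, return the local
    minima of the current subgraph and their attraction basins; boundary
    nodes (reaching several minima) form the next subgraph."""
    results, nodes = [], set(g)
    for _ in range(max_index + 1):
        if not nodes:
            break
        sub = g.subgraph(nodes)
        down = {v: [u for u in sub[v] if f[u] < f[v]] for v in sub}
        minima = [v for v in sub if not down[v]]
        basins = {m: set() for m in minima}
        boundary, memo = set(), {}
        for v in sub:
            targets = descend_targets(v, down, memo)
            if len(targets) == 1:
                basins[next(iter(targets))].add(v)
            else:
                boundary.add(v)
        results.append((sorted(minima), basins))
        nodes = boundary
    return results

g = nx.karate_club_graph()
f = {v: -d + 1e-6 * v for v, d in g.degree()}
for level, (crit, _) in enumerate(nondegenerate_critical_nodes(g, f)):
    print("nondegenerate index-%d critical nodes:" % level, crit)
```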
figure [ fig : karate](c ) shows the node decomposition for the karate club network , with each color component corresponding to a critical node and its attraction basin . the two local minima , nodes 1 and 34 , are drawn in oval shape together with their attraction basins , marked in red and blue respectively . the two index-1 saddles , nodes 3 and 32 , are the yellow and green diamond nodes , whose basins are colored in yellow ( node 3 ) and green ( node 32 ) correspondingly . node 3 is the lowest energy node connecting the local minima 1 and 34 via a minimum energy path . node 32 links the two local minima by another local minimum energy path . there are also two index-2 saddles , nodes 25 ( light blue diamond ) and 29 ( cyan diamond ) , which connect the two index-1 saddles via two non - deformable minimal energy paths . figure [ fig : karate](d ) further depicts a transition path analysis of a markov chain induced on the graph ( see si ) from local minimum node 1 to node 34 , which shows that the two index-1 saddles capture most of the transition currents . figure [ fig : barcode ] shows the barcodes for the flooding complex of this network . the social network of les misérables , collected by knuth , consists of the 77 main characters in the novel by victor hugo . the edge weight records the number of co - occurrences of two characters in the same scene . thus it is a weighted graph , and the energy of a node is taken as the negative logarithm of its weighted degree . the original network exhibits a single local ( and global ) minimum , valjean , who is the central character , as the whole novel was written around his experience .

[ figure lesmis : the subnetwork of les misérables obtained after thresholding the edge weights . two local minima , valjean and enjolras , as well as an index-1 saddle , courfeyrac , are identified . ]

however , after dropping those edges whose weights are no more than a threshold value , there appears a subnetwork which is closely associated with the paris uprising of the 5th and 6th of june 1832 , see figure [ fig : lesmis ] . the subnetwork contains two local minima , enjolras and valjean , the former being the leader of the revolutionary students called _ friends of the abc _ , the abaissé . led by enjolras , its other principal members are courfeyrac , combeferre , and laigle ( nicknamed bossuet ) et al . , who fought and died in the insurrection . among them is an index-1 saddle , courfeyrac , a law student often seen as the heart of the group , who introduced marius to the friends of the abc . marius , a descendant of the gillenormands , though badly injured in the battle , was saved by the main character valjean when the barricade fell , and later married cosette , the adopted daughter of valjean . the landscape of this subnetwork highlights these events in the novel . this application examines the binding of the lysine- , arginine- , ornithine - binding ( lao ) protein to its ligand , recently studied in the literature . the critical node analysis provides us with a concise summary of the global structure of the network while preserving important pathways , which enables us to reach a more thorough description than previous approximate analyses .
andindex- saddles are shown in diamonds , circular nodes are regular nodes , and rectangular nodes are solvated states .color components represent the node decomposition ., scaledwidth=80.0% ] in a markov state model was constructed with 54 metastable states , using data obtained from molecular dynamics simulation .more information about these states can be found in si and .now we examine the transition network as a weighted directed graph , where consists of 54 nodes , each representing a metastable state , an edge if transitions from node to are observed in simulations with delays ( the implied time scale for approximate markovian behavior ) , and the number of transitions is recorded as the weight .eleven of the states ( ) are solvated or unbound states .the binding state is node 10 .let be the transition probability from state to state .this defines a markov chain with a unique stationary distribution .we threshold this graph to an undirected graph by keeping those edges such that , _i.e. _ average count number is larger than .one reason for doing this is that small numbers of transitions may be heavily influenced by the noise caused by the way of counting the transition .note that the mean transition count is about , and the qualitative behavior reported below shows certain stability under the variation of the threshold value .the energy function is where is the stationary distribution of metastable states .application of the method above gives rise to a landscape shown in figure [ fig : lao1 ] .isolated states are dropped in this picture .colors in this picture illustrate the node decomposition according to theorem [ thm : decomp ] , where each color component represents the attraction basin of a critical node .below we shall discuss structural properties of these nodes . a complete picture of structural information for all 54 states can be found in si .there are two major local minima in the landscape , nodes 10 and 18 .node 10 is the bound state which is the minimum in the most populated energy basin .its attraction basin is colored in light blue .nodes 11 ( population ) and 5 ( population ) are two encounter complexes in that basin . in these states ,the ligand is in or close to the binding site and conformations in this state have a small twist but large opening angles .the other local minimum is node 18 , a misbound state , where the ligand interacts with the protein outside the binding site and close to the hinge region of two domains of the protein .state 18 , together with state 4 , 8 , 9 , and 20 , forms a misbound basin marked in red . in these states, the ligand interacts with the protein from a distance to the binding site .state 8 and 9 exhibit similar structural properties with a negative twisting angle and a fixed distance to the binding site ( about ) , while state 4 , 18 , and 20 exhibit similar but a different type of structures .node 19 and 14 are two index- saddles connecting the two basins associated with local minima .they are metastable intermediate states between misbound and bound states .but in these saddles ligand interacts with the protein in different ways . in state 19 ( population ) , the ligand is interacting with the protein from one twisting direction ( positive ) and the protein is quite closed . 
in contrast , in state 14 the ligand approaches the protein from the opposite twisting direction ( negative ) and the protein is still quite open ( see si ) . these two saddles actually play different roles in the reactive pathways , as discussed below . node 2 is an index-2 saddle , which is essentially a high energy misbound state . note that high index saddles are unstable with respect to different thresholding values . in the following we shall focus on the index-1 saddles . for a quantitative analysis of the roles of the index-1 saddles , we conduct two kinds of transition path analysis using transition path theory ( see si ) . first , we study the reactive currents from the misbound state 18 to the bound state 10 . this analysis shows that a majority of the flux passes through saddle 19 . therefore , once the ligand and protein fall into the misbound state 18 , the major pathway to escape and enter the bound state is via saddle 19 . the other analysis , as was also done in previous work , studies transition paths from the eleven solvated states , marked from 43 to 53 , to the bound state 10 . in particular , we investigate the reactive currents from each of the solvated states to the bound state , respectively . the results are summarized as follows ; a large part of these details was ignored in the earlier study , which only examined a subset of the transition pathways . 1 . solvated state 52 lies in the basin of the bound state 10 , whence the misbound state 18 has little influence on its pathway . 2 . solvated state 53 only passes through index-1 critical node 19 to enter the bound state 10 , and is thus heavily influenced by the misbound state 18 . 3 . several solvated states lie in the basin of index-1 critical node 14 and enter the bound state 10 directly or via node 14 ; they are not much influenced by the misbound state 18 . 4 . the other solvated states are in the basin of index-2 critical node 2 . transition path analysis further shows that the misbound state 18 has a stronger influence on them than on those in the basin of node 14 ; in particular , state 50 is the most influenced , with a substantial fraction of its transition currents trapped by the misbound state 18 . in summary , the misbound state 18 affects some of the pathways from the solvated states to the bound state . index-1 critical node 14 is a state where the ligand starts to interact with the protein before entering the encounter complex 11 . if we can design mutations to disrupt the stability of this state , or even of the encounter complexes , we may be able to make the binding much more difficult . finally , we note that the critical node analysis here does not rely on the markov model assumption and can thus be applied to the analysis of transition networks in molecular dynamics beyond their markovian time scale . we have introduced a notion of critical points for networks which can be used to reduce a complex network to a coarse - grained representation while preserving structural properties associated with functional gradient flows . examples have shown that the information obtained this way is of great value in capturing the global structure and dynamics of the network , such as diffusive or reactive pathways . moreover , the critical point analysis leads to a hierarchical decomposition which may enable us to perform multiscale analysis of complex networks . these perspectives will be systematically pursued in the future . an interesting question is the stability of these objects against noise .
to answer this question, one has to clarify the source of noise .there are two types of noise one should consider in landscape analysis of networks one associated with the energy function and the other associated with the network structure .the former can be dealt with traditional persistent homology denoising , where critical nodes with shallow basins can be merged with their saddles .the latter is however more challenging as there are no systematic studies yet on perturbation or bootstrapping of networks . in the examples above, we used edge thresholding on the les misrables and the protein binding networks , which is equivalent to modeling such networks as a superposition of a signal graph and some erds - rnyi type random graphs as noise .however there might be better models which lead to different denoising rules .w.e . acknowledges supports from aro grant w911nf-07 - 1 - 0637 and onr grant n00014 - 01 - 1 - 0674 .j.l . is grateful to eric vanden - eijnden for helpful discussions .y.y . thanks xuhui huang for providing figure s-2 in supporting information with helpful discussions , as well as supports from the national basic research program of china ( 973 program 2011cb809105 ) , nsfc ( 61071157 ) , microsoft research asia , and a professorship in the hundred talents program at peking university .we show first that every local minimum in must be an index- critical node .let be a local minimum in . then reaches at least two local minima , say .consider the subgraph with node set clearly , is connected and is the unique maximum node in . by the definition of the attraction basin , the set is not connected .since is connected , it contains at least a path from to .let be the local minimal energy path from to in the subgraph .as is not connected , must pass , so that .we now show by contradiction that is also a local minimal energy path in the original graph .suppose we can find another path from to , called , so that is deformable to . for any , we have .consider the set , which is non - empty .we distinguish two cases : 1 .then , , so that . by construction of , we have ; 2 .if there exists and , we have some point that .it is easy to see that must be , since other points on are in attraction basins of and . using lemma 1, there exists a path from to ordered in energy increase . in particular , consider the point , we have so that .moreover , and .this contradicts with the fact that is a local minimizer in .therefore , is a local minimal energy path , and is an index- critical node . let which is not a local minimum in .then , must reach a local minimum in by the gradient flow . by the first part of the proposition , .the proposition is proved .( necessity ) .we first show that index- and index- critical nodes , when added into the filtration , will change betti numbers in the way above . for index- critical nodes , they are local minima of graph . 
when a local minima is added into the filtration , it must create a new connected component which increases the -th betti number , .index- saddles will play a more complicated role .we have two situations * if an index- saddle lies on top of a global minimal energy path , it will decrease upon being added ; * if an index- saddle lies on top of a local minimal energy path other than the global one , it will increase upon being added .given a pair of index- critical nodes , among all local minimal energy paths connecting them ( if exist ) , there must be a global minimal energy path , so that is less than any other local minimal energy paths between and .we denote the maximal node of the global minimal energy path as .such is an index- critical node .when is added into the filtration , the -th betti number will decrease as connects two components contains and respectively . for the other local minimal energy paths connecting and , the associated index- critical nodes will increase the first betti number when added into the filtration .indeed , let be such an index- critical node .thus is a maximum of a local minimum energy path such that . is not deformable to the global minimal energy path between and .then two paths and forms a loop , and hence the first betti number increases when is added into the filtration .( sufficiency ) .we show next that no other nodes when added into the filtration will change the first two betti numbers in the same way . for any node which lies in the attraction basin of a local minima for some , reaches by gradient flow . for any edge with , reaches and thus the triangle is included in the simplicial complex .this implies that is contractible ( star - shape ) , whence no node in other than local minimum will change betti numbers .it remains to show that any node in boundary will not change betti numbers in the same way .any such node must reach at least two local minima , say and .then by lemma [ lem : path ] there is a path for some such that for and for .moreover implies that is deformable to a local minimal energy path between the same end nodes , for some . can not decreases number of connected components as the path , which appears first in the filtration , already connects and .now we show that the path will not create a loop either .let be the maximal node on .we must have . to see this ,as is deformable to , there is a node which reaches .we may assume ( ) since otherwise we are done .then , by the construction of the path , we have , and hence . note that both and reach both local minima and , node with ( ) reaches ( , respectively ) , and node with ( ) reaches ( , respectively ) .these will create a set of triangles such that is homotopy equivalent to , _ i.e. _ loop - free .the proof is analogous to that of theorem 2 .the energy landscape gives us a global picture for the different attraction basins on the network .to understand the dynamics between the different basins , the transition path theory ( tpt ) provides a natural tool .the transition path theory was originally introduced in the context of continuous - time markov process on continuous state space and discrete state space , see for a review . another description of discrete transition path theory for molecular dynamics can be also found in .here we adapt the theory to the setting of discrete time markov chain with transition probability matrix .we assume reversibility in the following presentation , the extension to non - reversible markov chain is straightforward. 
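Before the committor-based machinery is developed, it may help to fix the basic objects just introduced in code. The short Python sketch below is written for this presentation and is not taken from the original work: it estimates a reversible transition matrix and its stationary distribution from a toy count matrix and assigns each metastable state a landscape energy. The six-state size, the symmetrized-count estimator, the convention E_i = -log(pi_i), and the count threshold are illustrative assumptions standing in for the 54-state binding network discussed in the main text.

```python
import numpy as np

# Toy count matrix C[i, j]: number of observed i -> j transitions at the chosen
# lag time (a stand-in for the 54-state counts discussed in the main text).
rng = np.random.default_rng(0)
C = rng.poisson(lam=3.0, size=(6, 6)).astype(float)

# Reversibility is enforced here by symmetrizing the counts (one possible
# estimator among several); row-normalization then gives the transition matrix.
C_sym = 0.5 * (C + C.T)
P = C_sym / C_sym.sum(axis=1, keepdims=True)

# For a chain built from symmetric counts, the stationary distribution is
# proportional to the total count mass attached to each state.
pi = C_sym.sum(axis=1) / C_sym.sum()

# Landscape energy of each node; E_i = -log(pi_i) is an assumed convention,
# i.e. a free-energy-like assignment defined up to an additive constant.
E = -np.log(pi)

# Keep only edges whose symmetrized count exceeds a threshold, mirroring the
# thresholding step used for the binding network in the main text.
threshold = 2.0
adjacency = (C_sym > threshold) & ~np.eye(len(E), dtype=bool)

print("stationary distribution:", np.round(pi, 3))
print("node energies:", np.round(E, 3))
print("number of retained edges:", int(adjacency.sum() // 2))
```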
given two sets and in the state space , the transition path theory tells how these transitions between the two sets happen ( mechanism , rates , etc . ) .if we view as a reactant state and as a product state , then one transition from to is a reaction event .the reactve trajectories are those part of the equilibrium trajectory that the system is going from to . to make the notion more precise ,define the ordered family of times such that hence , a reaction happens from time to time . given any equilibrium trajectory , we call each portion of the trajectory of between and a _ -reactive trajectory_. we call the time during which the reaction occurs the _ reactive times_ central object in transition path theory is the committor function .its value at gives the probability that a trajectory starting from will hit the set first than , _i.e. _ , the success rate of the transition at . given two sets and in the state space , satisfies the equation the committor function provides natural decomposition of the graph . if is less than , is more likely to reach first than ; so that gives the set of points that are more attached to set .once the committor function is given , the statistical properties of the reaction trajectories between and can be quantified .we state several propositions characterizing transition mechanism from to .the proof of them is an easy adaptation of and will be omitted .the probability distribution of reactive trajectories is given by the distribution gives the equilibrium probability that a reactive trajectory visits .it provides information about the proportion of time the reactive trajectories spend in state along the way from to .the reactive current from to , defined by is given by the reactive current gives the average rate the reactive trajectories jump from state to . from the reactive current, we may define the effective reactive current on an edge and transition current through a node which characterizes the importance of an edge and a node in the transition from to , respectively .the _ effective current _ of an edge is defined as the _ transition current _ through a node is defined as in applications one often examines partial transition current through a node connecting two communities and , _ e.g. _ for , which shows relative importance of the node in bridging communities .the reaction rate , defined as the number of transitions from to happened in a unit time interval , can be obtained from adding up the probability current flowing out of the reactant state .this is stated by the next proposition .the reaction rate is given by finally , the committor functions also give information about the time proportion that an equilibrium trajectory comes from ( the trajectory hits last rather than ) . the proportion of time that the trajectory comes from ( resp . from ) is given by target set as bound state .,scaledwidth=80.0% ] the first figure is the whole co - appearance network of 77 main characters in the novel , les misrables , by victor hugo .it is an undirected weighted graph with edge weights as the number of co - appearances for a pair of characters . without thresholding this networkcontains one local minimum , valjean .however a thresholding with edge weight greater than 7 gives rise to the subnetwork in the main text .the second figure contains a list of structural information on 54 metastable states .it contains a typical crystal structure in each state , and some free energy plots on certain reaction coordinates . 
from these pictures one can read various structural properties of critical nodes in the lao-protein binding transition network discussed in the main text. more information about this system can be found in . the third figure shows the ranking of transition currents out of misbound state 18 over the eleven transition pathways. the experiment selects each of the eleven solvated states as the source set and the bound state 10 as the common target set. in each of the eleven experiments, the relative transition current out of state 18, i.e. the transition current out of state 18 divided by the total transition current from the source, is recorded and plotted in descending order. donoho dl, grimes c (2003) hessian eigenmaps: locally linear embedding techniques for high-dimensional data. _ proceedings of the national academy of sciences of the united states of america _ 100:5591-5596. coifman rr, et al. (2005) geometric diffusions as a tool for harmonic analysis and structure definition of data: diffusion maps i. _ proceedings of the national academy of sciences of the united states of america _ 102:7426-7431. noé f, schütte c, vanden-eijnden e, reich l, weikl tr (2009) constructing the equilibrium ensemble of folding pathways from short off-equilibrium simulations. _ proceedings of the national academy of sciences of the united states of america _ 106:19011-19016.
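To make the transition path analysis used in the binding example concrete, the following sketch, again illustrative code written for this note with a toy six-state chain and arbitrary source and target sets, computes the forward committor, the reactive currents, the effective edge currents, and the reaction rate of a reversible discrete-time chain, following the standard discrete formulas stated in the transition path theory section.

```python
import numpy as np

def committor(P, A, B):
    """Forward committor q_i = prob. of hitting B before A when started from i."""
    n = P.shape[0]
    q = np.zeros(n)
    q[list(B)] = 1.0
    interior = [i for i in range(n) if i not in A and i not in B]
    # (I - P) q = 0 on the interior, with q = 0 on A and q = 1 on B.
    M = np.eye(len(interior)) - P[np.ix_(interior, interior)]
    rhs = P[np.ix_(interior, list(B))].sum(axis=1)
    q[interior] = np.linalg.solve(M, rhs)
    return q

def reactive_current(P, pi, q):
    """Reactive current f_ij = pi_i (1 - q_i) P_ij q_j (reversible-chain form)."""
    f = pi[:, None] * (1.0 - q)[:, None] * P * q[None, :]
    np.fill_diagonal(f, 0.0)
    return f

# Toy reversible chain, built as in the earlier sketch.
rng = np.random.default_rng(1)
C = rng.poisson(3.0, size=(6, 6)).astype(float)
C = 0.5 * (C + C.T)
P = C / C.sum(axis=1, keepdims=True)
pi = C.sum(axis=1) / C.sum()

A, B = {0}, {5}                      # e.g. a "solvated" source and a "bound" target
q = committor(P, A, B)
f = reactive_current(P, pi, q)
f_eff = np.clip(f - f.T, 0.0, None)  # effective current on each edge
rate = sum(pi[i] * P[i, j] * q[j] for i in A for j in range(P.shape[0]))

print("committor:", np.round(q, 3))
print("total effective current:", round(float(f_eff.sum()), 4))
print("reaction rate (reactive transitions per step):", round(float(rate), 4))
```

Applied to the 54-state binding network, with the misbound or solvated states as the source set and the bound state as the target, the same two functions would in principle reproduce the kind of reactive-current bookkeeping summarized above.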
topological landscape is introduced for networks with functions defined on the nodes . by extending the notion of gradient flows to the network setting , critical nodes of different indices are defined . this leads to a concise and hierarchical representation of the network . persistent homology from computational topology is used to design efficient algorithms for performing such analysis . applications to some examples in social and biological networks are demonstrated , which show that critical nodes carry important information about structures and dynamics of such networks . networks have become ubiquitous tools for describing structures that occur in a variety of fields in the past ten or fifteen years , including biology , social sciences , economics and engineering . to study a network , one has to endow it with some mathematical structure . the simplest mathematical structure on a network is the graph structure . this gives rise to notions such as degrees , paths , connectivity , etc . the distinctions between scale - free networks and small - world networks , for example , can be studied by examining this structure , see for example . but one can endow a network with more sophisticated structures , such as geometric structure as in the theory of manifold learning , , or topological structure as in the theory of persistent homology . these structures allow us to probe more deeply into the nature of the network . in this paper , we discuss how one can endow a network with a _ landscape _ when we study a function on the node set . the concept of landscape has been crucial in physics and chemistry in describing complex systems , such as energy landscape . the introduction of such a concept into complex networks may equip us with a concise description of global structures of networks and help explain certain dynamics such as information diffusion and transition pathways . many complex networks in real world carry flows of information , products , power , etc . , which are driven by local gradients of a scalar or energy . for example traffic flows may be driven by congestion function , heat flows are driven by temperature . in biomolecular folding , conformational changes are driven by the free energy of states . on internet , user s attention may be driven by the centrality or significance of websites such as pagerank . in these cases , communities or groups emerge as metastable sets of gradient - based dynamics or energy basins . therefore understanding the landscape of such functions will be crucial to disclose associated dynamics in complex networks . in the core of the landscape lies the notion of critical nodes . in continuous setting this meets the classical morse theory in the study of manifolds , where critical points can be located by vanishing gradients and their indices can be decided by dimensionality of the unstable manifold passing through . however such an approach can not be applied to the graph settings as there is no unambiguous definition of dimensionality in general . precisely , consider an undirected graph with a function defined on the node set . the question we will attempt to address is : _ given a function on its nodes , how can we endow the network with a landscape , so that one can distinguish critical nodes such as the local minima , local maxima , and saddles _ ? there are several studies in the literature which may lead to critical nodes for graphs by carrying morse theory to discrete settings . 
nevertheless , none of them gives a satisfied answer to the question . in computational geometry one may embed the graph into a 2d - surface and then apply morse theory for 2-manifolds . however , such a surface embedding is not natural for general graphs in biological and social networks . another candidate is discrete morse theory , which studies functions defined on all faces of cell complexes and is therefore hard to use in the graph setting above . a related subject is the extension of the poincare - hopf theorem to the graph setting , e.g. in . in this paper we present a purely combinatorial approach which starts from a discrete gradient flow induced by the function on graph nodes . such an approach does not need a surface embedding , and turns out to be closely related to persistent homology in computational topology and discrete morse theory without studying functions on high dimensional cells . in particular , given a function ( often referred to as an energy function ) on a network , we will define a discrete gradient flow associated with that function , as well as minimum energy paths between two disjoint sets of nodes . this allows us to define critical nodes or saddles . roughly speaking , critical nodes are associated with minimum energy paths between node pairs : index- critical nodes are simply local minima ; index- critical nodes are the highest energy transition nodes of minimal energy paths connecting index- critical nodes . such a critical node analysis , as we show by examples in social networks and biological networks , leads to a concise representation of networks while preserving some important structural properties . in short , the local minima or maxima together with their attraction basins can be interpreted as communities or groups in networks ; saddle points act as transition states between different critical points of lower indices . in particular , in social networks index-1 saddles act as hubs in connecting communities ; in biomolecular dynamics , index-1 saddles play roles as intermediate or transition states connecting misfolded and native states . in the latter , such an analysis does not rely on commonly used markov state model , whence can be applied to much more general data analysis . moreover , this approach leads to a hierarchical classification of nodes in the network and a global visualization of networks adaptive to the landscape of given energy function . in algorithmic aspect , critical nodes in this paper can be computed at a polynomial time cost with an algorithm based on computational topology by monitoring topological changes over energy evel sets , and in nondegenerate case an almost linear algorithm exists which is scalable for the analysis of large scale networks .
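As a hedged illustration of the critical-node construction just outlined, the sketch below implements its simplest part in Python: it follows the discrete gradient flow on a toy graph with node energies, returning the index-0 critical nodes (local minima) and their attraction basins, together with a rough min-max estimate of the index-1 transition nodes between adjacent basins. The graph, the energies, and the min-max shortcut, used here in place of the full minimal-energy-path and persistence-based algorithm described in the text, are assumptions of this illustration.

```python
def gradient_flow_landscape(adj, E):
    """Steepest-descent landscape on a graph.

    adj : dict node -> set of neighbours (undirected graph)
    E   : dict node -> energy value
    Returns local minima, basin labels, and approximate index-1 saddles.
    """
    def step(v):
        # Discrete gradient step: jump to the lowest-energy neighbour if it is
        # strictly lower; otherwise v is a local minimum (index-0 critical node).
        if not adj[v]:
            return v
        best = min(adj[v], key=lambda u: E[u])
        return best if E[best] < E[v] else v

    basin = {}
    for v in adj:
        w = v
        while step(w) != w:          # follow the flow until a minimum is reached
            w = step(w)
        basin[v] = w                 # attraction basin label = its local minimum
    minima = sorted(set(basin.values()), key=lambda v: E[v])

    # Approximate index-1 saddles: for every edge crossing between two basins,
    # keep the crossing whose higher endpoint has the smallest energy; that
    # endpoint is reported as the barrier node of the basin pair.
    barrier = {}
    for v in adj:
        for u in adj[v]:
            if basin[u] != basin[v]:
                pair = tuple(sorted((basin[u], basin[v])))
                top = max(u, v, key=lambda x: E[x])
                if pair not in barrier or E[top] < E[barrier[pair]]:
                    barrier[pair] = top
    return minima, basin, barrier

# Small example: a path graph whose energies form two wells separated by a bump.
nodes = list(range(7))
E = dict(zip(nodes, [3.0, 1.0, 2.5, 4.0, 2.0, 0.5, 3.5]))
adj = {v: set() for v in nodes}
for a, b in zip(nodes[:-1], nodes[1:]):
    adj[a].add(b)
    adj[b].add(a)

minima, basin, saddles = gradient_flow_landscape(adj, E)
print("index-0 critical nodes (local minima):", minima)
print("attraction basin of each node:", basin)
print("approximate index-1 saddles between basins:", saddles)
```

On this toy path graph the two wells (nodes 5 and 1) are recovered as index-0 critical nodes and the energy bump between them (node 3) as the index-1 barrier node, which is the qualitative behaviour the full construction formalizes.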
a key challenge for the future wireless networks is the increasing video traffic demand , which reached 70% of total mobile ip traffic in 2015 .classical downlink systems can not meet this demand since they have limited resource blocks , and therefore as the number of simultaneous video transfers increases , the per - video throughput vanishes as .recently it was shown that scalable per - video throughput can be achieved if the communications are synergistically designed with caching at the receivers . indeed , the recent breakthrough of _ coded caching _ has inspired a rethinking of wireless downlink .different video sub - files are cached at the receivers , and video requests are served by coded multicasts . by careful selection of sub - file caching and exploitation of the broadcast wireless channel ,the transmitted signal is simultaneously useful for decoding at users with different video requests .although this scheme theoretically proved to scale well can potentially resolve the future downlink bottleneck , several limitations hinder its applicability in practical systems . in this work ,we take a closer look to the limitations that arise from the fact that _ coded caching was originally designed for a symmetric error - free shared link . _if instead we consider a realistic model for the wireless channel , we observe that a naive application of coded caching faces a _ short - term _ limitation : since the channel qualities of the users fluctuate over time and our transmissions need to reach all users , the transmissions need to be designed for the worst channel quality .this is in stark contrast with standard downlink techniques , like _ opportunistic scheduling _ , which serve the user with the best instantaneous channel quality .thus , a first challenge is to discover a way to allow coded caching technique to opportunistically exploit the fading of the wireless channel .apart from the fast fading consideration , there is also a _ long - term _ limitation due to the network topology .the user locations might vary , which leads to consistently poor channel quality for the ill - positioned users .the classical coded caching scheme is designed to deliver equal video shares to all users , which leads to ill - positioned users consuming most of the air time and hence driving the overall system performance to low efficiency . in the literature , this problem has been resolved by the use of fairness among user throughputs . by allowing poorly located users to receive less throughput than others , precious airtime is saved and the overall system performance is greatly increased . since the sum throughput rate and equalitarian fairness are typically the two extreme cases , past works have proposed the use of alpha - fairness which allows to select the coefficient and drive the system to any desirable tradeoff point in between of the two extremes .previously , the alpha - fair objectives have been studied in the context of ( i ) multiple user activations , ( ii ) multiple antennas and ( iii ) broadcast channels . however ,here the fairness problem is further complicated by the interplay between scheduling and the coded caching operation . in particular, we wish to shed light into the following questions : _ what is the right user grouping and how we should design the codewords to achieve our fairness objective while adapting to changing channel quality ? 
_ to address these questions , we study the content delivery over a realistic block - fading broadcast channel , where the channel quality varies across users and time . in this setting , we design a scheme that decouples transmissions from coding . in the transmission side ,we select the multicast user set dynamically depending on the instantaneous channel quality and user urgency captured by queue lengths . in the coding side, we adapt the codeword construction of depending on how fast the transmission side serves each user set . combining with an appropriate congestion controller , we show that this approach yields our alpha - fair objective .more specifically , our approaches and contributions are summarized below : * we impose a novel queueing structure which decomposes the channel scheduling from the codeword construction .although it is clear that the codeword construction needs to be adaptive to channel variation , our scheme ensures this through our _ backpressure _ that connects the user queues and the codeword queues .hence , we are able to show that this decomposition is without loss of optimality . *we then provide an online policy consisting of ( i ) admission control of new files into the system ; ( ii ) combination of files to perform coded caching ; ( iii ) scheduling and power control of codeword transmissions to subset of users on the wireless channel .we prove that the long - term video delivery rate vector achieved by our scheme is a near optimal solution to the alpha - fair optimization problem under the specific coded caching scheme .* through numerical examples , we demonstrate the superiority of our approach versus ( a ) opportunistic scheduling with unicast transmissions and classical network caching ( storing a fraction of each video ) , ( b ) standard coded caching based on transmitting - to - all .since coded caching was first proposed and its potential was recognized by the community , substantial efforts have been devoted to quantify the gain in realistic scenarios , including decentralized placement , non - uniform popularities , and device - to - device ( d2d ) networks .a number of recent works replace the original perfect shared link with wireless channels .commonly in the works with wireless channels , the performance of coded caching is limited by the user in the worst channel condition because the wireless multicast capacity is determined by the worst user ( * ? ? ?* chapter 7.2 ) .this limitation of coded caching has been recently highlighted in , while similar conclusions and some directions are given in .our work is the first to addresses this aspect by jointly designing the transmissions over the broadcast channel and scheduling appropriate subsets of users .most past works deal with _ offline _ caching in the sense that both cache placement and delivery phases are performed once and do not capture the random and asynchronous nature of video traffic .the papers addressed partly the online nature by studying cache eviction strategies , and delay aspects . 
in this paper, we explore a different online aspect .requests for video files arrive in an online fashion , and transmissions are scheduled over time - varying wireless channels .online transmission scheduling over wireless channels has been extensively studied in the context of opportunistic scheduling and network utility maximization .prior works emphasize two fundamental aspects : ( a ) the balancing of user rates according to fairness and efficiency considerations , and ( b ) the opportunistic exploitation of the time - varying fading channels .related to our work are the studies of wireless downlink with broadcast degraded channels ; gives a maxweight - type of policy and provides a throughput optimal policy based on a fluid limit analysis .our work is the first to our knowledge that studies coded caching in this setting .the new element in our study is the joint consideration of user scheduling with codeword construction for the coded caching delivery phase .we study a wireless downlink consisting of a base station and users .the users are interested in downloading files over the wireless channel .the performance metric is the _ time average delivery rate of files _ to user , denoted by .hence our objective is expressed with respect to the vector of delivery rates .we are interested in the _ fair file delivery _ problem : where denotes the set of all feasible delivery rate vectors clarified in the following subsection and the utility function corresponds to the _ alpha fair _ family of concave functions obtained by choosing : for some arbitrarily small ( used to extend the domain of the functions to ) . tuning the value of changes the shape of the utility function andconsequently drives the system performance to different points : ( i ) yields max sum delivery rate , ( ii ) yields max - min delivery rate , ( iii ) yields proportionally fair delivery rate .choosing leads to a tradeoff between max sum and proportionally fair delivery rates .the optimization is designed to allow us tweak the performance of the system ; we highlight its importance by an example .suppose that for a 2-user system is given by the convex set shown on figure [ fig : example ] .different boundary points are obtained as solutions to .if we choose , the system is operated at the point that maximizes the sum .the choice leads to the maximum such that , while maximizes the sum of logarithms .the operation point a is obtained when we always broadcast to all users at the weakest user rate and use for coded caching transmissions .note that this results in a significant loss of efficiency due to the variations of the fading channel , and consequently a lies in the interior of .we may infer that the point is obtained by avoiding transmissions to users with instantaneous poor channel quality but still balancing their throughputs in the long run . to analyze the set of feasible rate vectors we need to zoom in the detailed model of transmissions . * caching model .* there are equally popular files , each bits long .the files are available to the base station .user is equipped with cache memory of bits , where ] ; are additive white gaussian noises with covariance matrix identity of size , assumed independent of each other ; are channel fading coefficients independently distributed across time and users , with denoting the path - loss parameter of user . *encoding and transmissions . 
*the transmissions aim to contribute information towards the delivery of a specific vector of file requests , where denotes the index of the requested file by user in slot . here is the video library size , typically in the order of 10k .the requests are generated randomly , and whenever a file is delivered to user , the next request of this user will be for another randomly selected file . at each time slot , the base station observes the channel state and the request vector up to , , constructs a transmit symbol using the _ encoding function _ . finally , it transmits a codeword for the channel uses over the fading broadcast channel in slot .the encoding function may be chosen at each slot to contribute information to a selected subset of users .this allows several possibilities , e.g. to send more information to a small set of users with good instantaneous channel qualities , or less information to a large set that includes users with poor quality .* decoding .* at slot , each user observes the local cache contents and the sequence of channel outputs so far and employs a _ decoding function _ to determine the decoded files .let denote the number of files decoded by user after slots .the decoding function is a mapping the decoded files of user at slot are given by , and depend on the channel outputs and states up to , the local cache contents , and the requested files of all users up to .a file is incorrectly decoded if it does not belongs to the set of requested files .the number of incorrectly decoded files are then given by and the number of correctly decoded files at time is : a rate vector is said to be _ feasible _ if there exist functions , [ f_t ] , [ \xi_k]) ] denote the time average number of admitted files for user .lemma [ lem : equivalence ] implies the following corollary .[ cor : equivalentoptimization ] solving is equivalent to finding a policy such that see appendix [ appendix : equivalence_proof ] contrary to the offline coded caching in ,we propose an online delivery scheme consisting of the following three blocks . each block is operated at each slot . 1 .* admission control : * at the beginning of each slot , the controller decides how many requests for each user , should be pulled into the system from the infinite reservoir .* routing : * the cumulative accepted files for user are stored in the admitted demand queue whose size is given by for .the server decides the combinations of files to perform coded caching .the decision at slot for a subset of users , denoted by , refers to the number of combined requests for this subset of users .it is worth noticing that offline coded caching lets for and zero for all the other subsets .the size of the queue evolves as : ^+ + a_k(t ) \label{eq : codewordqueues}\end{aligned}\ ] ] if , the server creates codewords by applying offline coded caching explained in section [ * * ] for this subset of users as a function of the cache contents .* scheduling : * the codewords intended to the subset of users are stored in codeword queue whose size is given by for .given the instantaneous channel realization and the queue state , the server performs scheduling and rate allocation .namely , at slot , it determines the number of bits per channel use to be transmitted for the users in subset . 
by letting the number of bits generated for codeword queue when offline coded caching is performed to the users in , codeword queue evolves as ^+ + \sum_{{{\cal j}}:{{\cal i}}\subseteq{{\cal j}}}b_{{{\cal j}},{{\cal i}}}\sigma_{{{\cal j}}}(t ) \end{aligned}\ ] ] where . inorder determine our proposed policy , namely the set of decisions at each slot , we first characterize the feasible region as a set of arrival rates .we let denote the probability that the channel state at slot is where is the set of all possible channel states .we let denote the capacity region for a fixed channel state .then we have the following [ th : feasibilityregion ] a demand rate vector is feasible , i.e. , if and only if there exist , , \forall { { \cal i}}\subseteq \{1,\dots , k\} ] the message that must be decoded by user ( user decodes all bits that user decodes ) at rate .more explicitly , , , . by fano s inequality ,we have consider since , there exist such that using and we obtain next consider using the conditional entropy power inequality in , we have and imply equivalent to since , there exists such that and using , and it follows superposition coding achieves the upper bound . for , generate random sequences , ] , we have where is a constant that depends only on the parameters of the system . adding the quantity to both hands of and rearranging the right hand side, we have now observe that the control algorithm minimizes right hand side of given the channel state ( for any channel state ) .therefore , taking expectations over the channel state distributions , for every vectors ^k , \bar{\mathbf{\gamma}}\in [ 1,\gamma_{max}]^k , \bar{\mathbf{\sigma}}\in conv(\{0, .. ,\sigma_{max}\}^m ) , \bar{\mu}\in \sum_{{\pmb{h}}\in\mathcal{h}}\pi_{{\pmb{h}}}\gamma({\pmb{h}}) ] be the number of timeslots between the and visit to this set ( we make the convention that ] be the number of demands that arrived and were delivered in this frame , respectively . then , since within this frame the queues start and end empty , we have = \hat{d}_k[n ] , \forall n , \forall k.\ ] ]
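Two ingredients of the scheme above can also be illustrated in code: the alpha-fair utility family and the idea of serving the queue-weighted best subset of users instead of always multicasting to everyone. The sketch below is a heavily simplified, single-slot caricature written for this note; it omits admission control, codeword routing, and power control, and all user names, backlogs, and rates are invented for illustration.

```python
import numpy as np
from itertools import combinations

def alpha_fair_utility(x, alpha, eps=1e-3):
    """Alpha-fair utility family: log for alpha = 1, power form otherwise."""
    x = np.maximum(np.asarray(x, dtype=float), eps)   # small shift near zero
    if np.isclose(alpha, 1.0):
        return np.log(x)
    return x ** (1.0 - alpha) / (1.0 - alpha)

def max_weight_multicast(queues, rates):
    """Pick the user subset maximizing (sum of backlogs) x (multicast rate).

    The multicast rate of a subset is limited by its worst instantaneous user,
    which is exactly why always transmitting to all users is inefficient when
    one of them is in a deep fade.
    """
    users = list(queues)
    best, best_weight = None, -np.inf
    for size in range(1, len(users) + 1):
        for subset in combinations(users, size):
            r = min(rates[u] for u in subset)          # worst user limits the rate
            weight = r * sum(queues[u] for u in subset)
            if weight > best_weight:
                best, best_weight = subset, weight
    return best, best_weight

# Toy slot: queue backlogs (bits) and instantaneous per-user rates (bits/slot).
queues = {"u1": 120.0, "u2": 40.0, "u3": 300.0}
rates = {"u1": 2.0, "u2": 0.3, "u3": 1.5}              # u2 is in a deep fade

subset, weight = max_weight_multicast(queues, rates)
print("scheduled subset:", subset, "weight:", weight)
print("alpha-fair utilities (alpha = 2):",
      np.round(alpha_fair_utility([1.0, 0.2, 1.5], alpha=2.0), 3))
```

In the full scheme described above, this subset selection is only one component: the admitted-demand and codeword queues are coupled through a backpressure-style rule, so that codeword construction adapts to how fast each user subset is actually being served.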
the performance of existing _ coded caching _ schemes is sensitive to the worst channel quality, a problem that is exacerbated when communicating over fading channels. in this paper we address this limitation in the following manner: _ in the short term _, we allow transmissions to subsets of users with good channel quality, avoiding users that are in a fade, while _ in the long term _ we ensure fairness across the different users. our online scheme combines (i) joint scheduling and power control for the fading broadcast channel, and (ii) congestion control for ensuring the optimal long-term average performance. we restrict the caching operations to the decentralized scheme of , and subject to this restriction we prove that our scheme achieves near-optimal overall performance with respect to the convex alpha-fair coded caching optimization. by tuning the coefficient alpha, the operator can differentiate user performance with respect to the video delivery rates achievable by coded caching. we demonstrate via simulations the superiority of our scheme over legacy coded caching and unicast opportunistic scheduling, both of which are identified as special cases of our general framework. keywords: broadcast channel, coded caching, fairness, lyapunov optimization.
the corrosion of metalic surfaces is a subject of increasing interest due to widespread use of corrodable materials with all the economical and environmental implications .when the metal is in contact with an aggressive solution , the formation of a passive layer containing the metal oxide is observed after the corrosion of the topmost layer of the surface .as this layer grows , it more efficiently protects the surface from the attack of the solution , leading to the slowdown of the corrosion rate .for instance , according to the cabrera and mott law , the corrosion front is expected to move as ( growth rate decreasing as ) , which is in agreement with experimental findings .the changes in the corrosion rate are certainly accompanied by changes in the roughness of the front separating the metal and the passive layer . in the beginning of the process ,all the surface is uniformly exposed to the corrosive solution , but after the formation of the passive layer there appear points of easier or more difficult access to the solution .it is particularly important to investigate the scaling properties of the roughness because they are intimately connected to the main dynamical processes during the interface evolution . from the theoretical point of view, additional motivation comes from the renewed interest in systems whose competitive dynamics leads to crossover in roughness scaling . herewe will analyze the crossover in the front evolution and in the roughness scaling of a recently introduced model for the corrosion of a metalic surface by a solution .the basic mechanisms of the model are the corrosion of the mettalic surface exposed to the solution and the formation of the passive layer with the oxides .the layer is assumed to be porous , but the larger molar volumes of the products may block the pores and , consequently , diffusive relaxation of the volume excess is necessary for new corrosion events .this model was not devised to represent a particular real process , but to contain essential ingredients of a large variety of corrosion processes , allowing the investigation of universal features .previous numerical work on this model has shown significant differences in the corrosion fronts for small and large values of the local corrosion rate of the metal in contact with the solution . however , besides these qualitative features , only universal laws for the global growth rates in limiting cases were already derived . in the present paper ,we advance the quantitative study of the model , focusing on the properties of the corrosion front which separates the metal and the passive layer .we show the emergence of a crossover in the global growth rate accompanied by a crossover in surface roughness scaling , from a short time kpz behavior to long time laplacian growth ( or diffusion - limited erosion - dle ) . a scaling picture of the process is derived , along the same lines of those developed for related systems , and the results are confirmed by numerical simulation .the rest of this paper is organized as follows . in sec .ii we present the model for corrosion and passivation . in sec .iii , we present a scaling theory for the front growth and numerical results that confirm it . in sec .iv , we discuss the crossover kpz - dle of the surface roughness . 
in sec .v we summarize our results and present our conclusions .the model represents a system with the structure shown in fig .while the metal is corroded , a porous passive layer grows , with some products filling their pores .these products block the access of the solution to the surface , but they may diffuse and leave the surface of the metal exposed again . for simplicity , the model is defined in a lattice , where the sites may have three different states : m ( metal ) , s ( solution ) or p ( passive ) .the lattice sites represent the dominant chemical species inside one a mesoscopic region . due to computational limitations of memory and time ,a square lattice will be considered in our numerical work . in order to represent the effect of the porosity of the passive layer ,we assume that a site m can be corroded ( passivated ) even when it is in contact only with p sites , thus avoiding the complication of representing the porous structure of the passive layer . the excess of products will be represented by particles w , which can only lay at p sites and may diffuse through the p matrix . atmost one particle w is allowed at each p site .a region of the lattice is illustrated in fig .1b . in the beginning of the process ,all sites above a certain row of the lattice ( height ) are labeled s , and all sites below or at that row are labeled m. an m site is said to be in contact with the solution if it has at least one neighbor s or one neighbor p without a w particle over it . at each time unit , each m satisfying this condition may be changed into p with probability ( reaction ) .it corresponds to the reduction of a chemical species in the solution by the metal and the formation of an insoluble film . a reaction is illustrated for the central site in fig . 2a . on the other hand ,2b shows a case where this reaction is not possible for the central m site due to the blocking effect of the w particle .we assume that the molar volume of the oxide products is larger than that of the metal ( pilling and bedworth factor larger than one ) , and we represent this excess by creation of w particles after the reaction .two w particles are produced immediately after each reaction : the first one lays at the newly created p site and the second one lays at a neighboring p or s site . the creation of the new w particles is also illustrated in fig .some differences between models with creation of one or two w particles per reaction were discussed in refs . . the particles w may also diffuse through the p layer .this represents some type of stress relaxation which leads to the redistribution of the excess of corrosion products . at each time unit , each w attempts to execute random displacements to neighboring sites .the movement is possible only if the target site is p or s and if it does not contain a w particle , otherwise the attempt is rejected .an acceptable movement is illustrated in fig .2c , but an attempt to move the same w particle to any other direction would be rejected in that case . 
when a w particle attempts to move to an s site , or when the w particle is created over an s site ( top of the passive layer ) , this s site is converted into p.this condition is reasonable to represent the expansion of the passive layer towards the solution side .however , it does not have a significant influence on the corrosion front , which separates the metal and the passive layer , and which is the main interest of the present study .here we focus on the case of small corrosion rates and of not very small values of .first we consider the short time behavior of the system . in one time unit ,the number of corroded sites is small compared to the total number of sites in the front , but all w particles attempt to move during the same time interval .the diffusion of w particles removes the restrictions for the corrosion and the situation in fig .2b is rare : the w particle at the neighboring p site will move away long before a successful corrosion attempt occurs at the central site . consequently, we expect that all m sites at the interface are available for corrosion with equal probabilities .the distance of the corrosion front from the initial level , hereafter called the height of the corrosion front , increases as ( is being measured in lattice units ) . in this regime ,the model is equivalent to an inverse of the eden model , in which a new particle is added to a randomly chosen neighbor of the aggregate surface at each time step .now consider the system at long times .since w particles move diffusively , their upwards displacement away from the corrosion front increases as , thus their velocity in the upwards direction evolves as \sim { \left ( d / t\right)}^{1/2}$ ] .this decreasing velocity leads to the accumulation of w particles near the surface , which block new corrosion events .this is schematically illustrated in fig.3 , where a sea of w particles is shown at the bottom of the passive layer .the crossover from the initial regime to the long time one takes place at a characteristic time in which the height of the sea of w particles covers a finite fraction of the passive layer .the height of that sea increases as because it is determined by the diffusion of w particles produced at short times . on the other hand , the height of the passive layer increases as eq .( [ hini ] ) before the crossover . matching these heightswe obtain the crossover time the height of the corrosion front at the crossover , measured in lattice units , is after the crossover , a corrosion event is possible only when an m site has an empty neighbor p , which requires the migration of voids ( holes ) through the sea of w particles and their arrival at the corrosion front .when the m site is corroded ( reaction ) , the hole of the w sea disappears because it is occupied by one of the new w particles .since these holes move diffusively from the top of the w sea to that front ( similarly to the w particles ) , we conclude that that the corrosion front will also move diffusively , with displacement increasing as and velocity decreasing as .the time evolution of the height of the corrosion front is expected to follow a scaling relation that involves and : where is a scaling function . indeed , with for small times , we regain eq .( [ hini ] ) , and with for long times we obtain , in agreement with the above discussion . in order to test the scaling relation ( [ scalingh ] ) , we use simulation data for several values of , ranging from to , and . 
these data were obtained in lattices with lateral length and periodic boundary conditions . in fig .4 we show a scaling plot of versus , which are respectively proportional to and .the excellent data collapse of the results for consecutive values of shows that there is a universal relation between those quantities .lines with slopes and are also included in the plot in order to highlight the crossover from the growth to the growth . in previous numerical works with small and large , this universal behavior was not observed because , for a single value of , it is difficult to span a time interval long enough to show both scaling regimes .numerical studies in three dimensions would certainly be more difficult .however , the scaling picture will be the same in three dimensions because it is derived from diffusion properties and the same stochastic rules for the model .for small times , the equivalence to the eden model ( sec .iii ) indicates that the corrosion model is in the kpz class of growth , i. e. in the continuous limit it is represented by the kpz equation here , is the height at the position in a -dimensional substrate at time , and are constants and is a gaussian noise . in the growth regime ,the lateral correlation length is much smaller than the lattice length and the roughness increases as , where is called the growth exponent ( in , in ) and is an amplitude related to the coefficients of the kpz equation . in the original eden model ,each site of the front can grow with probability at each time unit . however , the corrosion of an m site in our model takes place with probability per time unit .thus , for small times we expect that where is a constant related to eden growth properties . in fig .5 we show a log - log plot of the small time roughness as a function of , for small , confirming the validity of eq .( [ wkpz ] ) .the average slope of this plot is significantly below the asymptotic kpz value , but these short time deviations are expected in the eden model as well as in related kpz models . at long times , a corrosion event takes place when a void of the w sea arrives at the metal surface , after performing a random walk which begins at a distant point .consequently , surface peaks will be more easily subject to corrosion than the valleys of the surface , which favors the production of surfaces with low roughness , as observed in previous numerical works for high values of .these features indicate that the long time regime is equivalent to that of the dle model . in dle ,eroding particles are randomly left at points very distant from the solid surface , they are allowed to diffuse and , when they reach the surface , they are annihilated together with the particle of the solid . in our model , the eroding particles of dleare represented by the holes of the w sea . 
in the continuous description of dle, the stable interface is driven by the gradient of a laplacian field .a logarithmic scaling of the squared roughness is predicted in dimensions : where is the growth velocity , is the time , is the lattice parameter and , where is a lengthscale associated with the noise .our model is equivalent to a dle with a probability of the incident particle to remove a particle of the solid .moreover , changes in the time scale are also expected because the growth rate of our model decreases in time , while it is constant in the original dle .thus , for a connection of the dle roughness in eq .( [ wlaplacian ] ) and that of our model , we use as the height of the corrosion front and .this gives where is measured in lattice units and is a time - independent constant associated with the noise ( and ) .the time scaling of the surface roughness , implicitly given in its relation with the height in eq .( [ wdle ] ) , is tested in fig .6 , where we plot as a function of .large values of were chosen in order to ensure that only data in the long time regime were considered ( - see fig .4 ) , but simulations for extremely long times are feasible only for large .we notice that , for a given value of , oscillates around some constant value , which provides the estimate of the constant in eq .[ wdle ] .the crossover from kpz to dle scaling is expected to take place at a time of order because the changes in the growth rate and in the roughness scaling are caused by the same mechanisms .in other models with crossover between different growth dynamics , it is observed that matching the scaling relations for the roughness of both dynamics leads to correct predictions of . following this reasoning , the values in eqs .( [ wkpz ] ) and ( [ wdle ] ) must be the same at time and for a height .this gives where and are constants . in order to test relation ( [ eqb ] ) , we plot as a function of in fig .we obtain a reasonable linear fit of the data for three orders of magnitude of , which suggests the validity of our approach .the crossover of surface roughness is slightly different in dimensions because the exponent of the kpz class is changed and the roughness of dle saturates at a finite value .however , the model rules indicate that the short and long time dynamics are also kpz and laplacian growth , respectively .we studied the crossover from kardar - parisi - zhang growth to laplacian front growth in a model for corrosion and passivation of a metalic surface . at small times , a regime of constant growth of the corrosion front is observed , with the roughness obeying kpz scaling . at long times ,a large fraction of the layer is blocked , leading to a global corrosion rate that decreases as .the roughness shows logarithmic scaling , with amplitudes depending on the corrosion probability .scaling arguments show that the crossover between these different dynamics takes place at , and that the height of the corrosion front at that time is .universal scaling relations between the height of the corrosion front and the time are obtained , as well as relations for the surface roughness .all results are confirmed by numerical simulations in square lattices , although the scaling arguments are also valid in three - dimensions .as far as we know , the crossover from kpz to dle scaling in statistical growth models was not predicted before .the inverse situation , i. e. 
the crossover from dle ( short times ) to kpz ( long times ) was suggested to appear in an extension of the dle model where the movement of the eroding particles is biased towards the surface .however , recent simulation results suggest a crossover to edwards - wilkinson scaling instead of kpz .we also recall that the crossover of our model must not be confused with the widely studied crossover from kpz to diffusion - limited aggregation ( dla ) - see e. g. ref . .indeed , surface roughness scaling is very different in dle and dla .the universal behavior discussed here may be observed in real systems where the basic features illustrated in fig .1 are expected .such tests are certainly possible with the modern techniques of microscopy , that allow the determination of surface roughness with good accuracy. one of the interesting points for comparison is the time behavior of the roughness , which evolves from a rapid growth , typical of kpz scaling , to a very slow growth , typical of the logarithmic scaling in laplacian front growth .moreover , it would also be interesting to confirm the association of the changes in the growth rate with the changes in roughness scaling .
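The lattice rules described in the model section can be condensed into a short Monte Carlo sketch. The Python code below is a simplified re-implementation written only for illustration and is not the code used to produce the results above; the lattice size, the sequential update order, the single diffusion attempt per w particle per unit time, and the fallback used when no empty neighbour is available for the second w particle are all assumptions of this sketch.

```python
import numpy as np

# States: M = metal, P = passive layer, S = solution; W particles mark the
# volume excess and live on P sites (an S site receiving a W becomes P).
M, P, S = 0, 1, 2
rng = np.random.default_rng(0)
L, H, h0, p, steps = 48, 120, 30, 0.05, 200   # width, height, metal thickness, corrosion prob., time

state = np.full((H, L), S, dtype=np.int8)
state[:h0, :] = M
has_w = np.zeros((H, L), dtype=bool)

def neighbours(i, j):
    out = []
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, (j + dj) % L          # periodic laterally, open vertically
        if 0 <= ni < H:
            out.append((ni, nj))
    return out

def occupy(i, j):
    """Place a W particle on (i, j), converting an S site into P if needed."""
    if state[i, j] == S:
        state[i, j] = P
    has_w[i, j] = True

for t in range(steps):
    # 1) diffusion: each W particle attempts one nearest-neighbour hop.
    for i, j in rng.permutation(np.argwhere(has_w)):
        if not has_w[i, j]:
            continue                            # already moved earlier this sweep
        nbrs = neighbours(i, j)
        ni, nj = nbrs[rng.integers(len(nbrs))]
        if state[ni, nj] in (P, S) and not has_w[ni, nj]:
            has_w[i, j] = False
            occupy(ni, nj)
    # 2) corrosion: an M site touching the solution or an empty P site is
    #    passivated with probability p and produces two W particles.
    for i, j in np.argwhere(state == M):
        nbrs = neighbours(i, j)
        exposed = any(state[a, b] == S or (state[a, b] == P and not has_w[a, b])
                      for a, b in nbrs)
        if exposed and rng.random() < p:
            state[i, j] = P
            has_w[i, j] = True
            free = [(a, b) for a, b in nbrs
                    if state[a, b] in (P, S) and not has_w[a, b]]
            if free:                            # second W particle, if room exists
                occupy(*free[rng.integers(len(free))])

# Corrosion-front statistics: per-column topmost metal row, displacement, roughness.
heights = np.array([np.max(np.where(state[:, c] == M)[0], initial=-1) + 1
                    for c in range(L)])
print(f"front displacement after {steps} steps: {h0 - heights.mean():.2f} lattice units")
print(f"interface roughness w(t): {heights.std():.2f}")
```

Tracking the front displacement and roughness over time for several values of the corrosion probability is enough to observe, at least qualitatively, the crossover from the fast early growth to the slow late regime discussed above.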
we study a model for the corrosion and passivation of a metallic surface in contact with a solution, using scaling arguments and simulation. the passive layer is porous, so that the metal surface remains in contact with the solution. the volume excess of the products may suppress the access of the solution to the metal surface, but access is then restored by a diffusion mechanism. a metallic site in contact with the solution or with the porous layer can be passivated with rate , and the volume excess diffuses with rate . at small times, the corrosion front grows linearly in time, but the growth velocity decreases after a crossover time of order , where the average front height is of order . a universal scaling relation between and is proposed and confirmed by simulation for in square lattices. the roughness of the corrosion front shows a crossover from kardar-parisi-zhang scaling to laplacian growth (diffusion-limited erosion, dle) at . the amplitudes of the roughness scaling are obtained by the same kind of arguments previously applied to other competitive growth models, and the simulation results confirm their validity. since the proposed model captures essential ingredients of a variety of corrosion processes, we also expect these universal features to appear in real systems.
kohn - sham density functional theory ( dft ) has a relatively high accuracy / cost ratio , which makes it a popular electronic structure method for predicting material properties and behavior . in dft ,the system of interacting electrons is replaced with a system of non - interacting electrons moving in an effective potential .the electronic ground - state in dft is typically determined by solving for the kohn - sham orbitals , the number of which is commensurate with the size of the system , i.e. number of electrons . since these orbitals need to be orthonormal , the overall solution procedure scales cubically with the number of atoms . in order to overcome this restrictive scaling ,significant research has focused on the development of linear - scaling methods .nearly all of these approaches , in one form or the other , employ the decay of the density matrix in conjunction with truncation to achieve linear - scaling .however , an efficient linear - scaling algorithm for metallic systems at low temperatures still remains an open problem .orbital - free dft ( of - dft ) represents a simplified version of dft , wherein the electronic kinetic energy is modeled using a functional of the electron density .commonly used kinetic energy functionals include the thomas - fermi - von weizsacker ( tfw ) , wang - teter ( wt ) , and wang , govind & carter ( wgc ) variants . amongst these ,the wt and wgc functionals are designed so as to match the linear - response of a homogeneous electron gas .previous studies have shown that of - dft is able to provide an accurate description of systems whose electronic structure resembles a free - electron gas , e.g. aluminum and magnesium .there have been recent efforts to extend the applicability of of - dft to covalently bonded materials as well as molecular systems .in essence , of - dft can be viewed as a ` single - orbital ' version of dft , wherein the cubic - scaling bottleneck arising from orthogonalization is no longer applicable .in addition , of - dft possesses an extremely favorable scaling with respect to temperature compared to dft .overall , of - dft has the potential to enable electronic structure calculations for system sizes that are intractable for dft .the plane - wave basis is attractive for performing of - dft calculations because of the spectral convergence with increasing basis size and the efficient evaluation of convolutions using the fast fourier transform ( fft ) .however , developing implementations which can efficiently utilize modern large - scale , distributed - memory computer architectures is particularly challenging .further , evaluation of the electrostatic terms within the plane - wave basis typically scales quadratically with the number of atoms . in view of this, recent efforts have been directed towards developing real - space approaches for of - dft , including finite - differences and finite - elements . amongst these, the finite - element method provides the flexibility of an adaptive discretization .this attribute has been employed to perform all - electron calculations and to develop a coarse - grained formulation of of - dft for studying crystal defects .however , higher - order finite - differences shown to be extremely efficient in non - periodic of - dft with the tfw kinetic energy functional remain unexplored in the context of periodic of - dft simulations , particularly when linear - response kinetic energy functionals like wt and wgc are employed . 
the electronic ground state in of - dftcan be expressed as the solution of a non - linear , constrained minimization problem .the approaches which have previously been employed to solve this problem include variants of conjugate - gradient and newton methods . in these approaches , the techniques used to enforce the constraints include lagrange multipliers , the penalty method and the augmented - lagrangian method . in this work ,we present a local real - space formulation and implementation of periodic of - dft .in particular , we develop a fixed - point iteration with respect to the kernel potential for simulations involving linear - response kinetic energy functionals . we develop a parallel implementation of the formulation using higher - order finite - differences .we demonstrate the robustness , efficiency and accuracy of the proposed approach through selected examples , the results of which are compared against existing plane - wave methods . the remainder of this paper is organized as follows .we introduce of - dft in section [ sec : ofdft ] and discuss its real - space formulation in section [ sec : formulation ] .subsequently , we describe the numerical implementation in section [ section : numericalimplementation ] , and validate it through examples in section [ section : examples ] .finally , we conclude in section [ section : conclusions ] .consider a charge neutral system of atoms and electrons in a cuboidal domain under periodic boundary conditions .let denote the positions of the nuclei with charges respectively .the energy of this system as described by of - dft is where , being the electron density . introducing the parameters and so that different variants of the electronic kinetic energy can be encompassed within a single expression ,we can write where is the thomas - fermi energy , is the von weizsacker term and is a non - local kernel energy incorporated to make the kinetic energy satisfy the linear - response of a homogeneous electron gas . they can be represented as where and are parameters , and the constant . on the one hand , the thomas - fermi - von weizsacker ( tfw ) family of functionals with the adjustable parameter obtained by setting . on the other hand , kinetic energy functionals which satisfy the lindhard susceptibility functionare obtained by setting with appropriate choices of , and the kernel . in particular ,the wang & teter ( wt ) functional utilizes a density independent kernel , whereas the wang , govind & carter ( wgc ) functional employs a density dependent kernel .it is common to perform a taylor series expansion of the density dependent kernel about the average electron density .on doing so , we arrive at where is the order of the expansion , the coefficients and the kernels the second term in eqn .[ eqn : energy : ofdft ] is referred to as the exchange - correlation energy .it is generally modeled in of - dft using the local density approximation ( lda ) : where is the sum of the exchange and correlation per particle of a uniform electron gas of density . employing the perdew - zunger parameterization of the correlation energy calculated by ceperley - alder , the exchange and correlation functionals can be represented as where , and the constants , , , , , and .the final three terms in eqn .[ eqn : energy : ofdft ] represent electrostatic energies . 
in periodic systems, they can be expressed as where the summation indices and run over all atoms in and , respectively .the hartree energy is the classical interaction energy of the electron density , is the potential due to the nucleus positioned at , is the interaction energy between the electron density and the nuclei , and is the repulsion energy between the nuclei .the ground state of the system in of - dft is given by the variational problem where is some appropriate space of periodic functions and represents the constraint on the total number of electrons .the inequality constraint is to ensure that is nodeless , i.e. does not change sign . in this work ,we focus on developing a local formulation and higher - order finite - difference implementation for determining the ground - state in periodic of - dft simulations .in this section , we develop a framework for periodic of - dft that is amenable to a linear - scaling real - space implementation .first , we present a local description of the kernel energy and potential in section [ subsec : localreformulationvlr ] .next , we discuss how the electrostatics can be rewritten into local form in section [ subsec : electrostaticreformulation ] . finally , we describe the methodology for determining the of - dft ground - state in section [ subsec : groundstate ] . in simulations where linear - response kinetic energy functionals are employed ,the kernel energy as well as the kernel potential \,,\end{aligned}\ ] ] are inherently non - local in real - space . in order to enable a linear - scaling implementation , we start by defining the potentials after approximating the kernels in fourier space using rational functions , we arrive at where and are solutions of the helmholtz equations under periodic boundary conditions and appropriate choice of complex constants and . above , and thereafter , the kernel potential and the corresponding kernel energy can be calculated in linear - scaling fashion using the expressions \ , , \label{eqn : vlinearresponse : local } \\t_{lr}(u ) & = & \frac{1}{2}c_f \sum_{m=0}^{l } \sum_{n=0}^{l } \sum_{p=0}^m \sum_{q=0}^n \sum_{r=1}^r c_{mnpq } \int_{\omega } \bigg [ u^{2(m - p+\alpha)}(\bx ) v_{mnq\beta r}(\bx ) + u^{2(n - q+\beta)}(\bx ) v_{mnp\alpha r}(\bx ) \bigg ] \ ,\mathrm{d\bx } \nonumber \\ \end{aligned}\ ] ] where and are solutions of the helmholtz equations given in eqns .[ eqn : helmholtz : beta ] and [ eqn : helmholtz : alpha ] , respectively .the electrostatic energies in eqns .[ eqn : eh ] , [ eqn : eext ] and [ eqn : ezz ] are non - local in real - space .moreover , they are individually divergent in periodic systems . to overcome this , we introduce the charge density of the nuclei : where is the charge density of the nucleus , and the summation index runs over all atoms in . in of - dft calculations , it is common to remove the core electrons and replace the singular coulomb potential with an effective potential , referred to as the pseudopotential approximation .the absence of orbitals in of - dft requires that the pseudopotential be local , i.e. depends only on the distance from the nucleus . 
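to make the local pieces of the functional above concrete , the following sketch evaluates the thomas - fermi , ( lambda - scaled ) von weizsacker and lda exchange contributions for a density sampled on a uniform periodic grid . it is only an illustration in python / numpy , not the implementation described in this work ; the constants are the standard atomic - unit values , the integration rule simply multiplies by the grid - cell volume ( the same rule assumed later for the spatial integrations ) , a low - order gradient is used for brevity , and the perdew - zunger correlation term is omitted .
....
import numpy as np

C_F = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0)   # Thomas-Fermi constant (atomic units)
C_X = -0.75 * (3.0 / np.pi) ** (1.0 / 3.0)    # Dirac (LDA) exchange constant

def grad_periodic(f, h):
    """Second-order periodic central-difference gradient (for brevity only;
    the text uses higher-order stencils)."""
    return [(np.roll(f, -1, axis=a) - np.roll(f, 1, axis=a)) / (2.0 * h)
            for a in range(3)]

def tfw_plus_x_energy(rho, h, lam=1.0 / 6.0):
    """Thomas-Fermi, lambda-scaled von Weizsacker and LDA exchange energies for
    a density rho on a uniform periodic grid of spacing h. The value of lam is
    only a placeholder for the adjustable TFW parameter; Perdew-Zunger
    correlation is omitted from this sketch."""
    dV = h**3
    e_tf = C_F * np.sum(rho ** (5.0 / 3.0)) * dV
    grad_sq = sum(g**2 for g in grad_periodic(rho, h))
    e_vw = (lam / 8.0) * np.sum(grad_sq / np.maximum(rho, 1e-12)) * dV
    e_x = C_X * np.sum(rho ** (4.0 / 3.0)) * dV
    return e_tf, e_vw, e_x

# hypothetical uniform density on an 8x8x8 grid (0.03 electrons/bohr^3, h = 0.5 bohr);
# for a uniform density the von Weizsacker term vanishes, a quick sanity check
rho = np.full((8, 8, 8), 0.03)
print(tfw_plus_x_energy(rho, h=0.5))
....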
since the pseudopotential replicates the coulomb potential outside the core cutoff radius , has a compact support within a ball of radius centered at .it follows that using the above definition for the charge densities , we can rewrite the total electrostatic energy as the following variational problem where is the electrostatic potential , is some appropriate space of periodic functions , the second last term accounts for the self energy of the nuclei and the last term corrects for overlapping charge densities .a detailed discussion on the nature of and its evaluation can be found in appendix [ appendix : correct : repulsiveenergy ] . with this reformulation of the total electrostatic energy , we arrive at the variational problem where the functional in the framework described above , the variational problem for determining the ground - state in of - dft can be written as where through this decomposition, the ground - state can be ascertained by solving the electronic structure problem in eqn .[ eqn : groundstateelectronic ] for every configuration of the nuclei encountered during the geometry optimization described by eqn .[ eqn : groundstatesplit ] .below , we discuss the solution strategy for both of these simulation components .consider the variational problem in eqn .[ eqn : groundstateelectronic ] for determining the electronic ground - state .on taking the first variation , we arrive at the euler - lagrange equation where is as given by eqn .[ eqn : vlinearresponse : local ] , and is the lagrange multiplier used to enforce the constraint .further , is the solution of the poisson equation under periodic boundary conditions and the exchange - correlation potential can be decomposed as with and being the exchange and correlation potentials , respectively .even though the notation does not make it explicit , the dependence of on makes eqn .[ eqn : eulerlagrange ] a non - linear problem .it is worth noting that since , the poisson problem defined by eqn .[ eqn : poisson ] with periodic boundary conditions is well - posed .the electronic ground - state can be determined by solving the non - linear eigenvalue problem in eqn .[ eqn : eulerlagrange ] for the eigenfunction corresponding to the lowest eigenvalue .irrespective of the solution technique and choice of kinetic energy functional , needs to be recalculated for every update in .the same is true for when linear - response kinetic energy functionals are employed .therefore , the solution of eqn .[ eqn : eulerlagrange ] requires the repeated solution of the poisson equation in eqn .[ eqn : poisson ] and the complex - valued non - hermitian helmholtz equations in eqns .[ eqn : helmholtz : beta ] and [ eqn : helmholtz : alpha ] . in view of this , the self - consistent field method ( scf ) commonly utilized in dft calculations is an attractive choice because relatively few iterations are typically required for convergence .however , we have found such an approach to be unstable for both the tfw and wgc kinetic energy functionals , especially as the system size is increased . since the number of helmholtz equations that need to be solved can be significantly large in practice ( e.g. fifty - two in this work ) , they are expected to completely dominate the execution time . 
in order to mitigate this , we develop a fixed - point method for determining the electronic ground - state when linear response kinetic energy functionals are employed .this is similar in spirit to the scf method , and is found to converge rapidly , as demonstrated by the examples in section [ section : examples ] .we rewrite the nonlinear eigenvalue problem in eqn .[ eqn : eulerlagrange ] as a fixed - point problem with respect to : \ , , \label{eqn : fixedpoint : map1}\ ] ] where the mappings and & = & c_f \sum_{m=0}^{l } \sum_{n=0}^{l } \sum_{p=0}^m \sum_{q=0}^n \sum_{r=1}^r c_{mnpq } \bigg [ ( m - p+\alpha ) u^{2(m - p+\alpha-1)}(\bx ) v_{mnq\beta r}(\bx ) \ , \nonumber \\ & + & ( n - q+\beta ) u^{2(n - q+\beta-1)}(\bx ) v_{mnp\alpha r}(\bx ) \bigg ] \,.\end{aligned}\ ] ] above , and are solutions to the helmholtz equations given in eqns . [eqn : helmholtz : beta ] and [ eqn : helmholtz : alpha ] , respectively .the mapping corresponds to the solution of the nonlinear eigenvalue problem in eqn .[ eqn : eulerlagrange ] for a fixed kernel potential .the mapping ] coincides with the solution of the euler - lagrange equation ( eqn .[ eqn : eulerlagrange ] ) for the electronic ground - state .in order to solve this fixed - point problem , we treat it as a non - linear equation and adopt an iteration of the form - v_{lr , k } \right)\,,\ ] ] where the index represents the iteration number and is appropriately chosen to ensure / accelerate convergence .once the fixed - point has been determined , can be calculated by solving eqn .[ eqn : mvf ] for . in fig .[ fig : flowchart ] , we present a flowchart that outlines the aforedescribed fixed - point approach .it is worth noting that for the choice of tfw kinetic energy functional ( ) , the solution of eqn .[ eqn : mvf ] coincides with the electronic ground - state . after determining the electronic ground - state ,the corresponding energy can be evaluated using the expression \, \mathrm{d\bx } \nonumber \\ & + & \int_{\omega } \varepsilon_{xc } ( u^*(\bx ) ) u^{*2}(\bx ) \, \mathrm{d \bx } + \frac{1}{2 } \int_{\omega}(u^{*2}(\bx)+ b(\bx,\br ) ) \phi^{*}(\bx ) \ , \mathrm{d\bx } \nonumber \\ & - & \frac{1}{2}\sum_{j } \int_{\omega } b_j(\bx,\br_j ) v_j(\bx,\br_j ) \ , \mathrm{d\bx } + \mathcal{e}_c^*(\br ) \,,\end{aligned}\ ] ] where , and are solutions of eqns .[ eqn : helmholtz : beta ] , [ eqn : helmholtz : alpha ] and [ eqn : poisson ] , respectively , for ..,scaledwidth=100.0% ] consider the minimization problem in eqn .[ eqn : groundstatesplit ] for determining the equilibrium configuration of the atoms . during this geometry optimization, the forces on the nuclei can be calculated using the relation where denotes the force on the nucleus and the summation over signifies the atom and its periodic images .additionally , is the solution of the poisson equation in eqn .[ eqn : poisson ] for and corrects for the error in forces due to overlapping charge density of nuclei .the expression for this correction has been derived in appendix [ appendix : correct : repulsiveenergy ] .the second equality in eqn . [ eqn : force : nuclei ] is obtained by using the fact that the energy is stationary with respect to and at the electronic ground - state , and the last equality is obtained by using the spherical symmetry of ( i.e. , ) .since has compact support in a ball of radius centered at , only a finite number of periodic images of the atom have an overlap with . 
therefore , evaluation of the atomic forces is amenable to a linear - scaling real - space implementation .in this section , we describe a higher - order finite - difference implementation of the formulation presented in the previous section . we restrict our computation to a cuboidal domain of sides , and .we generate a uniform finite - difference grid with spacing such that , and , where , and are natural numbers .we index the grid points by , where , and .we approximate the laplacian of any function at the grid point using higher - order finite - differences where represents the value of the function at the grid point .the weights are given by similarly , we approximate the gradient at the grid point using higher - order finite - differences where , and represent unit vectors along the edges of the cuboidal domain .the weights are given by these finite - difference expressions for the laplacian and gradient represent order accurate approximations , i.e. error is . while performing spatial integrations , we assume that the function is constant in a cube of side around each grid point , i.e. we enforce periodic boundary conditions on by employing the following strategy . in the finite - difference representations of the laplacian and gradient presented in eqns .[ eqn : fd : laplacian ] and [ eqn : gradient : approximate ] respectively , we map any index that does not correspond to a node in the finite - difference grid to its periodic image within .we start with precomputed radially - symmetric and compactly - supported isolated - atom electron densities for each type of atom .we superimpose these isolated - atom electron densities for the initial configuration of the nuclei , and scale the resulting electron density such that the constraint on the total number of electrons is satisfied .we take the pointwise square - root of the electron density so obtained as the starting guess . during the aforedescribed calculation , we only visit atoms whose isolated - atom electron densities have non - zero overlap with .similarly , for every new configuration of atoms encountered during the geometry optimization , we calculate the charge density of the nuclei using the relations where the summation reduces to all atoms whose charge density has non - zero overlap with .the localized nature of the above operations ensures that the evaluation of and scales linearly with the number of atoms .we solve the variational problem in eqn .[ eqn : mvf ] using a conjugate gradient method that was originally developed for dft and later adopted in simplified form for of - dft .specifically , we utilize the polak - ribiere update with brent s method for the line - search .we refer the reader to appendix [ appendix : nlcgteter ] for further details on the implemented algorithm . for every update in the square - root electron density, we solve the poisson equation in eqn .[ eqn : poisson ] under periodic boundary conditions using the generalized minimal residual ( gmres ) method with the block - jacobi preconditioner .since the solution so obtained is accurate to within an indeterminate constant , we enforce the condition for definiteness . in every subsequent poisson equation encountered ,we use the previous solution as starting guess . 
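the two building blocks just described , the periodic higher - order finite - difference laplacian and the matrix - free poisson solve , can be sketched as follows . this is a minimal python / scipy illustration rather than the parallel petsc implementation of this work : the weights are the standard sixth - order central - difference coefficients ( the general weight formula is not reproduced ) , a cubic grid with equal spacing in all directions is assumed , the poisson sign / scaling convention shown is an assumption , and the block - jacobi preconditioner is omitted .
....
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# standard sixth-order central-difference weights for d^2/dx^2
W = np.array([-49.0 / 18.0, 3.0 / 2.0, -3.0 / 20.0, 1.0 / 90.0])

def laplacian_periodic(f, h):
    """Sixth-order finite-difference Laplacian on a uniform grid with spacing h,
    with out-of-domain stencil points wrapped to their periodic images."""
    lap = 3.0 * W[0] * f
    for axis in range(3):
        for p in range(1, W.size):
            lap += W[p] * (np.roll(f, p, axis=axis) + np.roll(f, -p, axis=axis))
    return lap / h**2

def solve_poisson_periodic(rho_plus_b, h):
    """Matrix-free GMRES solve of  laplacian(phi) = -4*pi*(rho + b)  under
    periodic boundary conditions (this sign/scaling convention is assumed here;
    the block-Jacobi preconditioner of the text is omitted). Charge neutrality
    makes the singular system consistent; the undetermined additive constant is
    pinned by enforcing a zero mean."""
    shape, n = rho_plus_b.shape, rho_plus_b.size
    A = LinearOperator((n, n),
                       matvec=lambda x: laplacian_periodic(x.reshape(shape), h).ravel())
    rhs = -4.0 * np.pi * rho_plus_b.ravel()
    rhs -= rhs.mean()            # project out the constant nullspace component
    phi, info = gmres(A, rhs)
    phi -= phi.mean()            # fix the arbitrary constant
    return phi.reshape(shape)
....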
for the complex - valued helmholtz equations in eqns .[ eqn : helmholtz : beta ] and [ eqn : helmholtz : alpha ] , we first separate out each equation into its real and imaginary parts , and then solve the resulting coupled equations simultaneously under periodic boundary conditions using gmres with block - jacobi preconditioners . in every iteration of the fixed - point method, we use the solution of the helmholtz equations from the previous iteration as the starting guess .we accelerate the convergence of the fixed - point iteration by utilizing anderson mixing , details of which can be found in appendix [ appendix : anderson ] .once the electronic ground - state square - root electron density has been determined , the energy and forces are evaluated using eqns .[ eqn : groundstateenergy ] and [ eqn : force : nuclei ] respectively .while doing so , we restrict the summation over the periodic images to atoms whose charge densities have non - zero overlap with . we solve for the equilibrium configuration of the atoms by using the conjugate gradient method with the polak - ribiere update and secant line search .we have developed a parallel implementation of the proposed approach using the portable , extensible toolkit for scientific computations ( petsc ) suite of data structures and routines .within petsc , we have utilized distributed arrays with the star - type stencil option .the communication between the processors is handled via the message passing interface ( mpi ) .in this section , we validate the proposed formulation and higher - order finite - difference implementation of periodic of - dft through selected examples .henceforth , we shall refer to this framework as rs - fd , which is an acronym for real - space finite - differences .in all the simulations , we employ the goodwin - needs - heine pseudopotential .in addition , we choose for the tfw functional , and , , , and for the wgc functional . wherever applicable, we compare our results with the plane - wave code profess . within profess , we utilize a plane - wave energy cutoff of ev , which results in energies and forces that are converged to within ev / atom and ev / bohr respectively . unless specified otherwise , we use sixth - order accurate finite - differences and a mesh size of bohr within rs - fd .we choose a cutoff radius of bohr for the isolated - atom electron densities as well as the charge densities of the nuclei , whereby the enclosed charge for each nucleus is accurate to within .we utilize tolerances of and on the normalized residual as the stopping criterion for the conjugate gradient and gmres methods , respectively .we employ a history of in anderson mixing and a tolerance of on the normalized residual for convergence of the fixed - point method .these parameters and tolerances result in rs - fd energies and forces that are converged to within ev / atom and ev / bohr , respectively .it is worth noting that the aforementioned rs - fd tolerances are highly conservative , i.e. 
chemical accuracies are achieved even when they are significantly relaxed , as discussed in section [ subsec : scalingperformance ] .we perform all simulations on computer cluster wherein each node has the following configuration : altus 1804i server - 4p interlagos node , quad amd opteron 6276 , 16c , 2.3 ghz , 128 gb , ddr3 - 1333 ecc , 80 gb ssd , mlc , 2.5 " hca , mellanox connectx 2 , 1-port qsfp , qdr , memfree , centos , version 5 , and connected through infiniband cable .we start by verifying convergence of the energy computed by rs - fd with respect to the mesh - size ( ) . as the representative example , we choose a -atom face - centered cubic ( fcc ) unit cell of aluminum with lattice constant of bohr , and displace the atom at the corner of the unit cell the origin of the coordinate system to [ bohr .we evaluate the energy of this system as a function of for second and sixth - order accurate finite - difference approximations . in fig .[ fig : convergenceenergy ] , we plot the resulting convergence in energy for the tfw and wgc kinetic energy functionals , with the reference value computed using sixth - order finite - differences and bohr .we observe that sixth - order finite - differences demonstrates significantly higher rates of convergence compared to second - order finite - differences .specifically , the sixth - order scheme obtains convergence rates of and for the tfw and wgc functionals , respectively , whereas the second - order discretization obtains rates of and , respectively .interestingly , the computed convergence rates are not equal to the order of the finite - difference approximation .possible reasons for this include the nonlinearity of the problem , need for finer meshes to obtain the asymptotic convergence rates , the use of trapezoidal rule for integration , and the egg - box " effect .overall , these results indicate that second - order finite - differences are prohibitively expensive for obtaining the chemical accuracies desired in of - dft calculations , thereby motivating higher - order approximations . + next , we verify the convergence of the atomic forces with respect to the mesh - size ( ) .we choose the same example as that used for studying convergence of the energy in section [ subsec : convergenceenergymesh ] .we calculate the force on the displaced atom for the tfw and wgc kinetic energy functionals , and plot the resulting error versus in fig . [fig : convergenceforce ] .the error is defined to be the maximum difference in the force from that obtained using sixth - order finite - differences with mesh - size of bohr .we again observe that sixth - order finite - differences demonstrates significantly larger convergence rates compared to second - order finite - differences .specifically , the convergence rates obtained by the sixth - order scheme for tfw and wgc are and , respectively , whereas the rates for the second - order approximation are and , respectively .notably , the convergence rates for the force are larger than those obtained for the energy when using a sixth - order discretization . 
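the convergence rates quoted in these studies can be extracted from a sequence of runs by a least - squares fit of the logarithm of the error against the logarithm of the mesh size . a small sketch follows ; the data in the example are hypothetical , chosen to decay roughly as the sixth power of the mesh size .
....
import numpy as np

def convergence_rate(h_values, errors):
    """Least-squares slope of log(error) versus log(h): the apparent order of
    convergence with respect to the mesh size."""
    slope, _ = np.polyfit(np.log(np.asarray(h_values, dtype=float)),
                          np.log(np.asarray(errors, dtype=float)), 1)
    return slope

# hypothetical data: errors shrinking roughly as h^6
h = [0.8, 0.6, 0.5, 0.4]
err = [2.6e-3, 4.7e-4, 1.6e-4, 4.1e-5]
print(convergence_rate(h, err))   # approximately 6 for a sixth-order discretization
....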
the possible reasons for the convergence rates not matching the finite - difference order are the need for finer meshes for obtaining asymptotic rates , the non - variational nature of the finite - difference approximation , and the egg - box " effect .+ overall , we conclude from the results presented in the previous and current subsection that higher - order finite - differences are necessary for performing accurate and efficient electronic structure calculations based on of - dft. indeed , larger convergence rates may be possible as the order of the finite - difference approximation is increased .however , this comes at the price of increased computational cost per iteration due to the reduced locality of the discretized operators and larger inter - processor communication .we have found sixth - order finite - differences to be an efficient choice , which is in agreement with our previous conclusions for the non - periodic tfw setting . in view of this, we will employ sixth - order finite - differences for all the remaining simulations in this work .we now demonstrate convergence of the fixed - point method for simulations involving the wgc kinetic energy functional . for this study , we choose ( i ) -atom system consisting of fcc unit cells of aluminum with lattice constant of bohr ( ii ) -atom system consisting of a vacancy in fcc unit cells of aluminum with lattice constant of bohr .for these two examples , we plot in fig .[ fig : andersonmixing : mixparam ] the progression of error during the fixed - point iteration .specifically , we compare the convergence of the basic fixed - point iteration ( i.e. no mixing ) with that accelerated by anderson mixing . within anderson mixing , we choose mixing history size , and mixing parameters and .we observe that anderson mixing significantly accelerates the convergence of the fixed - point iteration , with the mixing parameter demonstrating the best performance .we have found these results to be representative of other calculations utilizing the wgc kinetic energy functional . in view of this, we will utilize anderson mixing with mixing parameter for the remaining simulations in this work . + in fig .[ fig : andersonmixing : mixhist ] , we compare the convergence of the anderson accelerated fixed - point iteration for mixing histories of different sizes . specifically , we choose , , and for this study .we observe that the size of the mixing history does not have any noticeable impact on the fixed - point iteration .in fact , the plots of the error versus iteration number in fig . [fig : andersonmixing : mixhist ] are nearly identical .overall , we conclude that the fixed - point iteration accelerated with anderson mixing is extremely robust and efficient .in particular , the error decreases rapidly , and approximately iterations are sufficient to obtain the desired chemical accuracy in energies and forces .indeed , the energies are converged to within ev / atom and the forces are converged to within ev / bohr for a fixed - point iteration error of in figs . [fig : andersonmixing : mixparam ] and [ fig : andersonmixing : mixhist ] .+ in this work , we have proposed a fixed - point problem with respect to for simulations involving linear - response kinetic energy functionals . however , it is also possible to develop an analogous fixed - point problem with respect to . in table[ table : mixcomparison ] , we compare the performance of the fixed - point iterations with respect to and for the wgc functional . 
in both cases , we accelerate the iteration using anderson mixing with .it is clear that the relative performance of the two fixed - point iterations is system dependent .however , we have found that the iteration with respect to is significantly more robust than the one with .therefore , we employ the fixed - point iteration with respect to for determining the electronic ground - state in simulations involving linear - response kinetic energy functionals ..number of steps required to reduce the error to in the fixed - point iterations with respect to and .anderson mixing with has been employed in both cases .[ cols="^,^,^",options="header " , ]in the local reformulation of the electrostatics presented in section [ subsec : electrostaticreformulation ] , the repulsive energy can be expressed as where the second term accounts for the self energy of the nuclei .using eqn .[ eqn : b : pseudopotential ] , we arrive at above , the summations indices and run over all atoms in .if the charge density of the nuclei do not overlap , eqn .[ eqn : b : ezz ] can be rewritten as which is exactly the expression given in eqn .[ eqn : ezz ] for the repulsive energy prior to reformulation .however , the use of relatively ` soft ' pseudopotentials which are attractive because of the significant reduction in the number of basis functions required for convergence can frequently result in overlapping charge density of the nuclei .even in this situation , the repulsive energy in ab - initio calculations is calculated by treating the nuclei as point charges ( i.e. , eqn .[ eqn : repulsiveenergyappendix ] ) .since the electrostatic reformulation in this work does make this distinction between overlapping and non - overlapping charge density of the nuclei , we present a technique below that reestablishes agrement .we start by generating a ` reference ' charge density which is the superposition of spherically symmetric and compactly supported ` reference ' charge densities .these nuclei - centered charge densities satisfy the relations thereafter , the correction to the repulsive energy can be expressed as a direct computation of this energy correction will scale quadratically with respect to the number of atoms . 
in order to enable linear - scaling , we rewrite eqn . [ eqn : repulsivecorrection ] in terms of a potential obtained as the solution of a poisson equation under periodic boundary conditions . the potential so calculated is accurate to within a constant , which can be determined by evaluating the original expression for the correction at any single point in space . the correction to the forces on the nuclei can be represented by the expression
$$ \frac{1}{2 } \sum_{j ' } \int_{\omega } \bigg [ \nabla \tilde{b}_{j'}(\bx,\br_{j ' } ) \left(v_c(\bx,\br)- \tilde{v}_{j'}(\bx,\br_{j'})\right ) + \nabla b_{j'}(\bx,\br_{j ' } ) \left(v_c(\bx,\br)+v_{j'}(\bx,\br_{j'})\right ) + \nabla v_{c , j'}(\bx,\br_{j ' } ) \left(\tilde{b}(\bx,\br)+b(\bx,\br)\right ) + b_{j'}(\bx,\br_{j ' } ) \nabla v_{j'}(\bx,\br_{j ' } ) - \tilde{b}_{j'}(\bx,\br_{j ' } ) \nabla \tilde{v}_{j'}(\bx,\br_{j ' } ) \bigg ] \,\mathrm{d\bx } \, , $$
where the summation over $j'$ runs over the atom in question and its periodic images . additionally , it is important to note that even with these corrections to the energy and forces , the overall of - dft formulation maintains its linear - scaling nature with respect to the number of atoms .
in algorithm [ algo : nlcg ] , we present the conjugate gradient method implemented in rs - fd to solve the variational problem in eqn . [ eqn : mvf ] . this differs from the standard non - linear conjugate gradient method in that it is able to handle the constraint on the total number of electrons and the non - negativity of the square - root electron density . [ algo : nlcg ] : non - linear conjugate gradient for eqn . [ eqn : mvf ] , using the polak - ribiere update and brent s method for the line search , with the constraints above enforced at every iterate .
the fixed - point problem in eqn . [ eqn : fixedpoint : map1 ] can be rewritten as the root - finding problem $ f(v_{lr } ) = 0 $ , where $ f(v_{lr } ) $ denotes the result of the composite map in eqn . [ eqn : fixedpoint : map1 ] minus $ v_{lr } $ itself . this equation can be solved using a quasi - newton iteration in which the update is obtained by applying an approximation of the inverse jacobian to the residual $ f(v_{lr , k } ) $ . in multi - secant type methods , this approximation is set to the solution of a constrained minimization problem involving the histories of the iterates and of the residuals over the last $ m $ steps , the latter collected in the matrix
$$ y_k = [ f(v_{lr , k - m+1 } ) - f(v_{lr , k - m } ) , \ldots , f(v_{lr , k } ) - f(v_{lr , k-1 } ) ] \, , $$
with a companion matrix holding the corresponding differences of the iterates $ v_{lr } $ . in the above , $ m $ represents the size of the mixing history . in the specific case of anderson mixing , the part of the inverse jacobian that is not fixed by the secant conditions is taken to be a scaled identity matrix , the scaling being the mixing parameter , and this leads to the update formula employed in this work .
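as a concrete sketch of the anderson - accelerated fixed - point iteration described above , the following python / numpy fragment performs the multi - secant update from the stored histories and wraps it in the outer loop on the kernel potential . the function and parameter names are placeholders ( the composite map , the mixing parameter and the history size must be supplied by the surrounding of - dft machinery ) , and the fields are assumed to be flattened into one - dimensional arrays .
....
import numpy as np

def anderson_step(v_hist, f_hist, beta=0.5):
    """One Anderson (multi-secant) update from the stored histories of iterates
    v_k and residuals f(v_k) (most recent entries last); the part of the inverse
    Jacobian not fixed by the secant conditions is -beta times the identity."""
    v_k, f_k = v_hist[-1], f_hist[-1]
    if len(v_hist) == 1:
        return v_k + beta * f_k                       # plain mixing on the first step
    S = np.column_stack([v_hist[i + 1] - v_hist[i] for i in range(len(v_hist) - 1)])
    Y = np.column_stack([f_hist[i + 1] - f_hist[i] for i in range(len(f_hist) - 1)])
    gamma, *_ = np.linalg.lstsq(Y, f_k, rcond=None)   # least-squares secant coefficients
    return v_k + beta * f_k - (S + beta * Y) @ gamma

def fixed_point_kernel_potential(apply_map, v0, m=4, beta=0.5, tol=1e-4, max_iter=50):
    """Anderson-accelerated fixed-point iteration on the kernel potential.
    apply_map(v) is a placeholder for the composite map of eqn. [eqn:fixedpoint:map1],
    i.e. the kernel potential re-evaluated after solving the inner eigenproblem
    at fixed v; v0 and the returned fields are 1-D arrays."""
    v_hist, f_hist = [v0], [apply_map(v0) - v0]
    for _ in range(max_iter):
        if np.linalg.norm(f_hist[-1]) / np.sqrt(f_hist[-1].size) < tol:
            return v_hist[-1]
        v_new = anderson_step(v_hist[-m:], f_hist[-m:], beta)
        v_hist.append(v_new)
        f_hist.append(apply_map(v_new) - v_new)
    raise RuntimeError("fixed-point iteration did not converge")
....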
we present a real - space formulation and higher - order finite - difference implementation of periodic orbital - free density functional theory ( of - dft ) . specifically , utilizing a local reformulation of the electrostatic and kernel terms , we develop a generalized framework for performing of - dft simulations with different variants of the electronic kinetic energy . in particular , we propose a self - consistent field ( scf ) type fixed - point method for calculations involving linear - response kinetic energy functionals . in this framework , evaluation of both the electronic ground - state as well as forces on the nuclei are amenable to computations that scale linearly with the number of atoms . we develop a parallel implementation of this formulation using the finite - difference discretization . we demonstrate that higher - order finite - differences can achieve relatively large convergence rates with respect to mesh - size in both the energies and forces . additionally , we establish that the fixed - point iteration converges rapidly , and that it can be further accelerated using extrapolation techniques like anderson s mixing . we validate the accuracy of the results by comparing the energies and forces with plane - wave methods for selected examples , including the vacancy formation energy in aluminum . overall , the suitability of the proposed formulation for scalable high performance computing makes it an attractive choice for large - scale of - dft calculations consisting of thousands of atoms . finite - differences , real - space , fixed - point , anderson mixing , conjugate gradient , electronic structure .
this is an exciting time for the maturing field of gravitational wave ( gw ) physics .the network of ground - based laser interferometer gw detectors ( gwd ) is making rapid progress towards its goal of producing advanced instruments sensitive enough to make monthly gw detections in the 10hz to 10khz frequency range by the end of the decade .simultaneously , there has recently been a claim by the bicep2 experiment that the measurement of the b - modes in the cosmic microwave background polarization can be the signature of the primordial gws produced by inflation .if confirmed , this would add to the original evidence for gws from 1975 by hulse and taylor who observed that the rate of change of the orbital period of a binary star system precisely agrees with the predictions of gr .moreover , the elisa space - based gravitational wave detector together with the promising pulsar timing technique will be very important in the coming decades for the investigation of very massive objects and other gw sources in the milli- to micro - hz frequency ranges , respectively .gws are dynamic strains in space - time that travel at the speed of light and are generated by non - axisymmetric acceleration of mass .they perturb the flat minkowski metric describing space - time .the effect is the production of a dimensionless strain between two inertial masses located at a proper distance from one another so that their distance changes as : in the late 20th century following the era of the operation of bar detectors for gw detection , large laser interferometers were identified as the promising route forward because of the very high strain sensitivities that could be achieved over a wide frequency band .r. weiss produced the first detailed design study in 1972 of a large scale laser interferometer for gw detection , complete with calculations of fundamental noise sources .then , following the work of forward who built the first laser interferometric gw detector prototype , many groups around the world proceeded to study the benefits of laser interferometry , build new prototypes , perfect the design , and push technology development .the ligo detectors in the u.s . 
, the virgo detector in italy , the geo600 detector in germany and the tama300 detector in japan formed the first generation laser - interferometric gwd network .construction of these projects began in the mid-1990s and then progressed in sequential commissioning and data - taking phases in the 2000s .these first - generation detectors achieved their original instrument sensitivity goals and are now undergoing major hardware upgrades to expand thousand - fold the observable volume of the cosmos .the strain sensitivities achieved by these initial detectors is shown in figure [ fig : sensitivity ] .this paper describes the common design basics of the detectors and provides an overall summary of the current status of the worldwide network of gwds , including a timeline for operations .the next four proceedings focus on the particular features and individual status of each of the advanced versions of these detectors .a direct detection of gravitational waves themselves has not yet been made , but this is also not a surprise .the rate estimates for coalescing binary neutron stars , for instance , predict a detection probability of one event in 100 years for the initial detector sensitivities .about two years worth of double - coincidence data were collected .nonetheless , the ligo and virgo collaborations have already produced astrophysical results from the data collected thus far .they have placed an upper limit that beats previous best estimates of the fraction of spin - down power emitted in gws from the crab pulsar .they have also placed an upper limit at 100hz on the normalized energy density of the stochastic gw background of cosmological and astrophysical origin , a result otherwise inaccessible to standard observational techniques .the last decade brought great advances in demonstrating the experimental feasibility of achieving the strain sensitivities required to witness astrophysical events and informed the design of today s generation of gwds .both ligo and virgo are re - using the infrastructure from the initial generation detectors and are replacing the hardware within the vacuum system .the upgraded detectors are called advanced ligo and advanced virgo , respectively .tama300 as well as clio , a prototype cryogenic laser interferometer , informed the design of a brand new detector called kagra , the first underground , cryogenic laser interferometer .geo600 is keeping its infrastructure and most of its initial generation hardware , but is carrying out upgrades to demonstrate advanced techniques in a program called geo - hf .in addition , a proposal to expand ligo s baseline by building an interferometer in india is moving forward .the typical advanced detector configuration is that of a dual - recycled fabry - perot michelson ( drfpm ) laser interferometer as depicted in figure [ fig : ifoschematic ] . 
a power amplified , and intensity and frequency stabilized nd :yag solid state laser system injects linear - polarized 1064 nm light into a triangular input mode - cleaner ( imc ) cavity .the imc suppresses laser frequency noise and provides spatial filtering of the laser beam to reduce beam jitter that could otherwise couple to the gw readout .a beam splitter ( bs ) sends the beam to the two fabry - perot arms , which are made of an input test mass mirror ( itm ) and an end test mass mirror ( etm ) .both arms are of km - scale lengths and are set to maintain nearly perfect destructive interference of the recombined light at the anti - symmetric ( as ) port which carries the gw information . here , the beam is directed to an output mode - cleaning ( omc ) cavity and then onto a photo - detector .the omc transmits only the signal - carrying light to improve the signal - to - noise ratio .a power recycling mirror at the symmetric port directs the constructively - interfered light back into the interferometer .the transmissivity of the power recycling mirror is set to match the losses of the main optics to create a nearly critically coupled cavity .a signal recycling mirror at the anti - symmetric port creates an additional cavity which can be used to adjust the storage time of the gravitational wave signal in the interferometer and thus the frequency response of the detector .the signal recycling mirror has a transmissivity selected to compromise between high and low frequency sensitivity based on thermal noise and laser power . until a few years ago all detectors used heterodyne readout . since approximately 2008 as part of intermediary upgrades to the initial detectors , homodyne ( _ dc _ ) readout was implemented together with the addition of an omc .nearly the entire interferometer is enclosed in an ultra high vacuum ( uhv ) system to render phase noise of residual gas unimportant and to keep the optics free of dust .the primary interferometer optics are suspended as pendulums to decouple them from ground motion so that they act like free masses in the horizontal plane at the frequencies in the gw detection band . to minimize the impact of thermal noise , the mirror suspensions are designed to minimize dissipation and , in the case of kagra , operated at cryogenic temperatures .each mirror is equipped with actuators for coarse and fine control of the mirror position and orientation .a feedback control system is implemented to hold the system sufficiently near the intended operating point so that the response to residual deviations remains linear .calibration of the detector must take into account the action of the control system and the frequency response of the detector . 
the various length ( illustrated in figure [ fig : ifoschematic ] ) and angular degrees of freedomare sensed through the use of radio - frequency sidebands on the carrier light that are created through phase modulation by electro - optic modulators .the differential arm length signal is sensitive to gravitational waves , and is sensed using homodyne readout in transmission of the omc , where the gw signal is encoded as power variations of the light .the standard technique for locking optical cavities is the pound hall method of laser frequency stabilization .although the interferometer is an analog instrument , it is interfaced through a digital control system which allows complex filters to be implemented and tuned from the control room .the sources of noise that contaminate the detector s output can be grouped into two categories : displacement noise and sensing noise .displacement noises are those that create real motion of the mirrors , while sensing noises are those that arise in the process of measuring the mirrors position . the primary displacement noise that plagues terrestrial laser interferometers at very low frequencies is motion of the ground , i.e. seismic noise .thermal motion of the mirrors , their dielectric coatings and suspensions as well as quantum radiation pressure noise are the other two types of displacement noise which dominate in the low- to mid - frequency range .the primary sensing noise is shot noise that arises from the poisson statistics of photon arrival at the photodetector .figure [ fig : designcurves ] shows the spectra of these ultimate limits to the performance of each of the gw detectors in the advanced detector network .each curve reflects the incoherent sum of fundamental noise sources , which gives a likely best limit to performance .the actual sensitivity will depend also on technical noise sources .the narrow lines in each of the noise curves represent the thermally - excited violin modes of the test mass suspension fibers .quantum noise and thermal noise play the dominant roles in limiting the sensitivity of each of the detectors .the frequency at which there is a dip in the noise together with the shapes of the noise curves are largely affected by the signal recycling parameters , which may be adjusted during the course of operation of the advanced detectors .we postpone the detailed description of each noise curve to the proceedings dedicated to each detector .table [ tab : overview ] provides an overview comparing some of the major properties of each of the detector designs . [ cols="<,<,<,<,<",options="header " , ] [ tab : overview ]the advanced detectors form a four - site network which is crucial for gw signal characterization .the sky coverage depends on the detector locations and orientations .the two ligo detectors are nearly aligned for maximum correlation , but they are relatively close together which results in signals that have largely redundant information about the source direction and character . for this reason ,only the four - site network can provide full sky coverage .another important feature of a detector network is sky localization for electro - magetic follow ups and multi - messenger investigations . 
in the case of signals from coalescing binary neutron star systems , the two advanced ligo detectors , advanced virgo , and kagra would provide a localization accuracy of about 10 square degrees for a fraction of the sources , while a five - site network , including ligo india , would improve the accuracy to about 5 square degrees for a larger fraction of the sources . this network of gw detectors , together with the joint detection of electro - magnetic counterparts , is poised to open a promising window onto gravitational wave astronomy . figure [ fig : timeline ] shows a timeline of the construction , commissioning , and science run stages of each of the gwds . the down times of the detectors are somewhat staggered , with geo600 serving as the sole detector online for the entirety of the period when the other detectors are offline . commissioning and science run stages of the advanced detectors will be interspersed to allow for possible early results once astrophysically interesting sensitivities are reached , but before design sensitivity is reached . the early science runs are likely to start in late 2015 with the advanced ligo detectors , and advanced virgo and kagra will join in due time . together with the promise of continuing to add to the collection of upper limits on gw emission in the era of advanced detectors , the first direct detection of gws is expected to be made only a couple of years from now . a first detection is expected to witness an event such as a binary neutron star coalescence .
the authors gratefully acknowledge the support of the united states national science foundation for the construction and operation of the ligo laboratory ; the science and technology facilities council of the united kingdom , the max - planck - society , and the state of niedersachsen / germany for support of the construction and operation of the geo600 detector ; the italian istituto nazionale di fisica nucleare and the french centre national de la recherche scientifique for the construction and operation of the virgo detector ; and the japan society for the promotion of science ( jsps ) core - to - core program a , advanced research networks , and grant - in - aid for specially promoted research of the kagra project . this document has been assigned ligo document number ligo - p1400153 .
abadie j et al . 2010 classical and quantum gravity 27 173001
geo - hf logbook , p. 60 , https://intranet.aei.uni-hannover.de/geo600/geohflogbook.nsf/f5b2cbf2a827c0198525624b00057d30/4837a612ac990060c12575ce004e70fd
the ligo scientific collaboration and the virgo collaboration 2013 prospects for localization of gravitational wave transients by the advanced ligo and advanced virgo observatories , http://arxiv.org/abs/1304.0670
ground - based laser interferometers for gravitational - wave ( gw ) detection were first constructed starting 20 years ago and as of 2010 collection of several years worth of science data at initial design sensitivities was completed . upgrades to the initial detectors together with construction of brand new detectors are ongoing and feature advanced technologies to improve the sensitivity to gws . this conference proceeding provides an overview of the common design features of ground - based laser interferometric gw detectors and establishes the context for the status updates of each of the four gravitational - wave detectors around the world : advanced ligo , advanced virgo , geo600 and kagra .
_ resource usage analysis _ infers the aggregation of some numerical properties , like memory usage , time spent in computation , or bytes sent over a wire , throughout the execution of a piece of code .such numerical properties are known as _ resources_. the expressions giving the usage of resources are usually given in terms of the sizes of some input arguments to procedures .our starting point is the methodology outlined by and , characterized by the setting up of recurrence equations . in that methodology, the size analysis is the first of several other analysis steps that include cardinality analysis ( that infers lower and upper bounds on the number of solutions computed by a predicate ) , and which ultimately obtain the resource usage bounds .one drawback of these proposals , as well as most of their subsequent derivatives , is that they are only able to cope with size information about subterms in a very limited way .this is an important limitation , which causes the analysis to infer trivial bounds for a large class of programs .for example , consider a predicate which computes the factorials of a list : p6.3cmp6 cm .... listfact ( [ ] , [ ] ) .listfact([e|r],[f|fr ] ) : - fact(e , f ) , listfact(r , fr ) . .... & .... fact(0,1 ) .fact(n , m ) : - n1 is n - 1 , fact(n1 , m1 ) , m is n * m1 ..... intuitively , the best bound for the running time of this program for a list is , where and are constants related to the unification and calling costs . but with no further information , the upper bound for the elements of must be to be on the safe side , and then the returned overall time bound must also be . in a previous paper we focused on a proposal to improve the size analysis based on _sized types_. these sized types are similar to the ones present in for functional programs , but our proposal includes some enhancements to deal with regular types in logic programs , developing solutions to deal with the additional features of logic programming such as non - determinism and backtracking . while in that paper we already hinted at the fact that the application of our sized types in resource analysis could result in considerable improvement , no description was provided of the actual resource analysis .this paper is complementary and fills this gap by describing a new resource usage analysis with two novel aspects .firstly , it can _ take advantage of the new information contained in sized types_. furthermore , this resource analysis is _ fully based on abstract interpretation _ , i.e. , not just the auxiliary analyses but also the resource analysis itself .this allows us to integrate resource analysis within the plai abstract interpretation framework in the ciaopp system , which brings in features such as _ multivariance _ , fixpoints , and assertion - based verification and user interaction for free .we also perform a performance assessment of the resulting global system .in section [ sec : overview ] we give a high - level view of the approach . in the following section we review the abstract interpretation approach to size analysis using sized types .section [ sec : resources ] gets deeper into the resource usage analysis , our main contribution .experimental results are shown in section [ sec : results ] .finally we review some related work and discuss future directions of our resource analysis work .we give now an overview of our approach to resource usage analysis , and present the main ideas in our proposal using the classical ` append/3 ` predicate as a running example : .... 
append ( [ ] , s , s ) .append([e|r ] , s , [ e|t ] ) : - append(r , s , t ) ..... the process starts by performing the regular type analysis present in the ciaopp system . in our example, the system infers that for any call to the predicate ` append(x , y , z ) ` with ` x ` and ` y ` bound to lists of numbers and ` z ` a free variable , if the call succeeds , then ` z ` also gets bound to a list of numbers .the set of `` list of numbers '' is represented by the regular type `` listnum , '' defined as follows : .... listnum - > [ ] | .(num , listnum ) .... from this regular type definition , sized type schemas are derived . in our case , the sized type schema is derived from .this schema corresponds to a list that contains a number of elements between and , and each element is between the bounds and .it is defined as : from now on , in the examples we will use and instead of and for the sake of conciseness .the next phase involves relating the sized types of the different arguments to the ` append/3 ` predicate using recurrence ( in)equations .let denote the sized type schema corresponding to argument ` x ` in a call ` append(x , y , z ) ` ( created from the regular type inferred by a previous analysis ) .we have that denotes .similarly , the sized type schema for the output argument ` z ` is , denoted by .now , we are interested in expressing bounds on the length of the output list ` z ` and the value of its elements as a function of size bounds for the input lists ` x ` and ` y ` ( and their elements ) . for this purpose, we set up a system of inequations .for instance , the inequations that are set up to express a lower bound on the length of the output argument ` z ` , denoted , as a function on the size bounds of the input arguments ` x ` and ` y ` , and their subarguments ( , and ) are : note that in the recurrence inequation set up for the second clause of ` append/3 ` , the expression ( respectively ) represents the size relationship that a lower ( respectively upper ) bound on the length of the list in the first argument of the recursive call to ` append/3 ` is one unit less than the length of the first argument in the clause head .as the number of size variables grows , the set of inequations becomes too large .thus , we propose a compact representation .the first change in our proposal is to write the parameters to size functions directly as sized types .now , the parameters to the function are the sized type schemas corresponding to the arguments ` x ` and ` y ` of the ` append/3 ` predicate : in a second step , we group together all the inequalities of a single sized type .as we always alternate lower and upper bounds , it is always possible to distinguish the type of each inequality .we do not write equalities , so that we do not use the symbol .however , we always write inequalities of both signs ( and ) for each size function , since we compute both lower and upper size bounds .thus , we use a compact representation for the symbols and that are always paired .for example , the expression : represents the conjunction of the following size constraints : after setting up the corresponding system of inequations for the output argument ` z ` of ` append/3 ` , and solving it , we obtain the following expression : that represents , among others , the relation ( resp . ) , expressing that a lower ( resp .upper ) bound on the length of the output list ` z ` , denoted ( resp . 
) , is the addition of the lower ( resp .upper ) bounds on the lengths of ` x ` and ` y ` .it also represents the relation ( resp . ) , which expresses that a lower ( resp .upper ) bound on the size of the elements of the list ` z ` , denoted ( resp . ) , is the minimum ( resp .maximum ) of the lower ( resp .upper ) bounds on the sizes of the elements of the input lists ` x ` and ` y ` .resource analysis builds upon the sized type analysis and adds recurrence equations for each resource we want to analyze .apart from that , when considering logic programs , we have to take into account that they can fail or have multiple solutions when executed , so we need an auxiliary _ cardinality analysis _ to get correct results .let us focus now on cardinality analysis .let and denote lower and upper bounds on the number of solutions respectively that predicate ` append/3 ` can generate .following the program structure we can infer that : the solution to these inequations is , so we have inferred that ` append/3 ` generates at least ( and at most ) one solution .thus , it behaves like a function .when setting up the equations , we have used our knowledge that ` append/3 ` can not fail when given lists as arguments . if not , the lower bound in the number of solutions would be 0 .now we move forward to analyzing the number of resolution steps performed by a call to ` append/3 ` ( we will only focus on upper bounds , , for brevity ) .for the first clause , we know that only one resolution step is needed , so : the second clause performs one resolution step plus all the resolution steps performed by all possible backtrackings over the call in the body of the clause .this number of possible backtrackings is bounded by the number of solutions of the predicate .so the equation reads : solving these equations we infer that an upper bound on the number of resolution steps is the ( upper bound on the length ) of the input list ` x ` plus one .this is expressed as : shown in the ` append ` example , the ( bound ) variables that we relate in our inequations come from sized types , which are ultimately derived from the regular types previously inferred for the program . among several representations of regular types used in the literature, we use one based on _ regular term grammars _ , equivalent to but with some adaptations . a _ type term _ is either a _ base type _ ( taken from a finite set ) , a _ type symbol _ ( taken from an infinite set ) , or a term of the form , where is a -ary function symbol ( taken from an infinite set ) and are _ type terms_. a _ type rule _ has the form , where is a _ type symbol _ and a _ type term_. a _ regular term grammar _ is a set of _ type rules_. to devise the abstract domain we focus specifically on the generic and - or trees procedure of , with the optimizations of .this procedure is _ generic _ and goal dependent : it takes as input a pair representing a predicate along with an abstraction of the call patterns ( in the chosen _ abstract domain _ ) and produces an abstraction which overapproximates the possible outputs .this procedure is the basis of the plai abstract analyzer present in ciaopp , where we have integrated an implementation of the proposed size analysis .the formal concept of _ sized type _ is an abstraction of a set of herbrand terms which are a subset of some regular type and meet some lower- and upper - bound size constraints on the number of _ type rule applications_. 
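as a small illustration of what this abstraction captures , the following python sketch ( a hypothetical helper , not part of the analyzer ) abstracts a set of concrete lists of numbers into the listnum sized - type bounds used in the overview : bounds on the list length and bounds on the values of the elements .
....
def listnum_sized_type(terms):
    """Abstract a set of concrete listnum terms (python lists of numbers,
    standing in for the herbrand terms) into sized-type bounds: bounds on the
    list length and on the values of the elements. This only illustrates the
    information the abstraction keeps, not the analyzer's representation."""
    lengths = [len(t) for t in terms]
    elems = [x for t in terms for x in t]
    length_bounds = (min(lengths), max(lengths))
    elem_bounds = (min(elems), max(elems)) if elems else None
    return {"length": length_bounds, "element": elem_bounds}

# example: abstracting two concrete lists of numbers
print(listnum_sized_type([[1, 4, 2], [7, 0]]))
# {'length': (2, 3), 'element': (0, 7)}
....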
a grammar for the new sized types follows : ' '' '' [ cols=">,^ , < , > " , ]we have constructed a prototype implementation in ciao by defining the abstract operations for sized type and resource analysis that we have described in this paper and plugging them into ciaopp s plai implementation .our objective is to assess the gains in precision in resource consumption analysis .table [ expresults ] shows the results of the comparison between the new lower ( * _ lb _ * ) and upper bound ( * _ ub _ * ) resource analyses implemented in ciaopp , which also use the new size analysis ( columns _ new _ ) , and the previous resource analyses in ciaopp ( columns _ previous _ ) .we also compare ( for upper bounds ) with _ raml _ s analysis ( column _ raml _ ) .although the new resource analysis and the previous one infer concrete resource usage bound functions ( as the ones in ) , for the sake of conciseness and to make the comparison with raml meaningful , table [ expresults ] only shows the complexity orders of such functions , e.g. , if the analysis infers the resource usage bound function , and , table [ expresults ] shows .the parameters of such functions are ( lower or upper ) bounds on input data sizes .the symbols used to name such parameters have been chosen assuming that lists of numbers have size , lists of lists of lists of numbers have size , and numbers have size .table [ expresults ] also includes columns with symbols summarizing whether the new ciaoppresource analysis improves on the previous one and/or _ raml _ s : ( resp . ) indicates more ( resp .less ) precise bounds , and the same bounds .the new size analysis improves on ciaopp s previous resource analysis in most cases .moreover , raml can only infer polynomial costs , while our approach is able to infer other types of cost functions , as is shown for the divide - and - conquer benchmarks ` hanoi ` and ` fib ` , which represent a large and common class of programs . for predicates with polynomial cost ,we get equal or better results than raml .several other analyses for resources have been proposed in the literature . some of themjust focus on one particular resource ( usually execution time or heap consumption ) , but it seems clear that those analyses could be generalized .we already mentioned raml in section [ sec : results ] .their approach differs from ours in the theoretical framework being used : raml uses a type and effect system , whereas our system uses abstract interpretation .another important difference is the use of polynomials in raml , which allows a complete method of resolution but limits the type of closed forms that can be analyzed .in contrast , we use recurrence equations , which have no complete decision procedure , but encompass a much larger class of functions .type systems are also used to guide inference in and . in ,the authors use sparsity information to infer asymptotic complexities .in contrast , we only get closed forms . similarly to ciaopp s previous analysis, the approach of applies the recurrence equation method directly ( i.e. , not within an abstract interpretation framework ) . shows a complexity analysis based on abstract interpretation over a step - counting version of functional programs . 
uses symbolic evaluation graphs to derive termination and complexity properties of logic programs .in this paper we have presented a new formulation of resource analysis as a domain within abstract interpretation and which uses as input information the sized types that we developed in .we have seen how this approach offers benefits both in the quality of the bounds inferred by the analysis , and in the ease of implementation and integration within a framework such as plai / ciaopp . in the future, we would like to study the generalization of this framework to different behaviors regarding aggregation .for example , when running tasks in parallel , the total time is basically the maximum of both tasks , but memory usage is bounded by the sum of them .another future direction is the use of more ancillary analyses to obtain more precise results .also , since we use sized types as a basis , any new research that improves such analysis will directly benefit the resource analysis . finally , another planned enhancement is the use of mutual exclusion analysis ( already present in ciaopp ) to aggregate recurrence equations in a better way .e. albert , s. genaim , and a. n. masud .ore precise yet widely applicable cost analysis . in r.jhala and d. schmidt , editors , _12th verification , model checking , and abstract interpretation ( vmcai11 ) _ , volume 6538 of _ lecture notes in computer science _ , pages 3853 .springer verlag , january 2011 .r. bagnara , a. pescetti , a. zaccagnini , and e. zaffanella . : towards computer algebra support for fully automatic worst - case complexity analysis . technical report , 2005 .available from http://arxiv.org/. f. bueno , p. lpez - garca , and m. hermenegildo .ultivariant non - failure analysis via standard abstract interpretation . in _7th international symposium on functional and logic programming ( flops 2004 ) _ , number 2998 in lncs , pages 100116 , heidelberg , germany , april 2004 .springer - verlag .s. k. debray , n .- w .lin , and m. hermenegildo .ask granularity analysis in logic programs . in _ proc . of the 1990 acm conf. on programming language design and implementation _ , pages 174188 .acm press , june 1990 .s. k. debray , p. lpez - garca , m. hermenegildo , and n .- w .ower bound cost estimation for logic programs . in _ 1997 international logic programming symposium _ , pages 291305 .mit press , cambridge , ma , october 1997 .jrgen giesl , thomas strder , peter schneider - kamp , fabian emmes , and carsten fuhs . symbolic evaluation graphs and term rewriting : a general methodology for analyzing logic programs . in _ ppdp _ , pages 112 .acm , 2012 .p. lpez - garca , l. darmawan , and f. bueno .ramework for verification and debugging of resource usage properties . in m.hermenegildo and t. schaub , editors , _ technical communications of the 26th intl .conference on logic programming ( iclp10 ) _ , volume 7 of _ leibniz international proceedings in informatics ( lipics ) _ , pages 104113 , dagstuhl , germany , july 2010 . leibniz - zentrum fuer informatik .j. navas , e. mera , p. lpez - garca , and m. hermenegildo .ser - definable resource bounds analysis for logic programs . in _ 23rd international conference on logic programming ( iclp07 ) _ , volume 4670 of _ lecture notes in computer science_. springer , 2007 . g. puebla and m. hermenegildo .ptimized algorithms for the incremental analysis of logic programs . 
in _ international static analysis symposium( sas 1996 ) _ , number 1145 in lncs , pages 270284 .springer - verlag , september 1996 .pedro b. vasconcelos and kevin hammond . inferring cost equations for recursive , polymorphic and higher - order functional programs . in philipw. trinder , greg michaelson , and ricardo pena , editors , _ ifl _ , volume 3145 of _ lecture notes in computer science _ , pages 86101 .springer , 2003 . c. vaucheret and f. bueno .ore precise yet efficient type inference for logic programs . in _ international static analysis symposium _ ,volume 2477 of _ lecture notes in computer science _ , pages 102116 .springer - verlag , september 2002 .
we present a novel general resource analysis for logic programs based on sized types . sized types are representations that incorporate structural ( shape ) information and allow expressing both lower and upper bounds on the size of a set of terms and their subterms at any position and depth . they also allow relating the sizes of terms and subterms occurring at different argument positions in logic predicates . using these sized types , the resource analysis can infer both lower and upper bounds on the resources used by all the procedures in a program as functions on input term ( and subterm ) sizes , overcoming limitations of existing analyses and enhancing their precision . our new resource analysis has been developed within the abstract interpretation framework , as an extension of the sized types abstract domain , and has been integrated into the Ciao preprocessor , CiaoPP . the abstract domain operations are integrated with the setting up and solving of recurrence equations for inferring both size and resource usage functions . we show that the analysis is an improvement over the previous resource analysis present in CiaoPP and compares well in power to state-of-the-art systems .
in this paper we develop a method to approximate steady - state solutions of the following one - dimensional thermistor problem : subject to boundary and initial conditions , and coupled with the electric potential equation : the motivation for studying this kind of problem is that - has important implications for a variety of technological processes . for example, it arises in the analytical study of phenomena associated with the occurrence of shear band in metal being deformed at high strain rates ; in the theory of gravitational equilibrium of polytropic stars ; in the investigation of the fully turbulent behavior of flows ; in modelling aggregation of cells via interaction with a chemical substance ( chemotaxis ) ; and specially in modelling electrical heating in a conductor . in this case , is the temperature of the conductor , is the electrical potential .functions and are , respectively , the electrical and thermal conductivities ; is the heat transfer coefficient .the condition is a condition of robin - type . when it is called an adiabatic condition .equation consists in the heat equation with joule heating as a source ; describes conservation of current in the conductor .the thermistor problem has been extensively studied by several authors , where existence and uniqueness of solutions are given .theoretical analysis , consisting in existence of solutions with the required regularity and which ensure error estimates of optimal order of convergence , are done in . to construct a numerical approximation of the steady state solution we use a numerical method to approximate the solution of the parabolic problem .this approach has been used by in the one - dimensional thermistor problem .further , in these last works authors consider the thermal conductivity equal to and a particular electrical conductivity , then they obtain the exact solution of the conservation problem - and so system - of thermistor problem is reduced to the following single heat conduction problem : subject to the boundary conditions - . in this paper, we propose to solve both equations and at the same time by using a finite element method and a fully crank - nicolson approach .the formulation of the finite element method is standard and is based on a variational formulation of the continuous problem . in section [ sec2 ]we give the variational formulation of problem - .an algorithm for solving the problem is then proposed in section [ sec3 ] . in section [ sec4 ] ,numerical results are obtained for an appropriate test - problem .we divide the interval $ ] into equal finite elements .let be a partition of and the step length .by we denote a basis of the usual pyramid functions : ,\\ - \frac{1}{h}x+(1+j ) & \mbox { on } [ x_{j } , x_{j+1}],\\ 0 & otherwise .\end{cases}\ ] ] as indicated above , it is convenient to proceed in two steps with the derivation and analysis of the approximate solution of - . first , we write the problem in weak or variational form . 
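For later use, the element matrices generated by this pyramid basis on a uniform mesh can be assembled as follows. This is our own illustrative sketch, not the authors' code: the mesh is uniform, the interval is taken as [0, 1], and the boundary contributions coming from the Robin condition are omitted. We then return to the weak formulation.

```python
import numpy as np

def p1_matrices(num_elements, length=1.0):
    """Mass and stiffness matrices for the hat (pyramid) basis on a uniform mesh."""
    h = length / num_elements
    n_nodes = num_elements + 1
    mass = np.zeros((n_nodes, n_nodes))
    stiff = np.zeros((n_nodes, n_nodes))
    m_loc = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])   # exact for linear hats
    k_loc = 1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(num_elements):                           # element [x_e, x_{e+1}]
        idx = np.ix_([e, e + 1], [e, e + 1])
        mass[idx] += m_loc
        stiff[idx] += k_loc
    return mass, stiff

mass, stiff = p1_matrices(50)
```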
we multiply the parabolic equation by ( for fixed ) , integrate over , and apply green s formula on the left - hand side , to obtain using the boundary condition we get we also have we now turn our attention to solve this system by discretization with respect to the time variable .we introduce a time step and time levels , where is a nonnegative integer , and denote by the approximation of to be determined .we use the backward euler galerkin method , which is defined by replacing the time derivative in by a backward difference .so the approximations , admit unique representations where , are unknown real coefficients to be determined .then , after decoupling , we have that scheme , we have on the other hand , we have using boundary conditions and initial condition , it follows that then , we have the resulting system of equations : for , for , for , coming back to , the following may be stated in terms of the functions : find the coefficients in such that in matrix notation , this may be expressed as where and since the matrix and are gram matrices , in particular they are positive definite and invertible .thus , the above system of ordinary differential equations has obviously a unique solution .we solve the system for each time level .estimating each term of separately , we have : using the expression of and , we obtain in the same way , we have on other hand , we similarly have it also holds : using together and , we get a system of linear algebraic equations using the boundary conditions , we find from the initial condition we get let and substituting in , we obtain the following system of equations : for , for , for , where this section we give an example of a model of the thermistor problem : the exact solution of the electrical potencial problem is , .then , the diffusion equation can be reduced to the form using the proposed galerkin finite element approach , we get the following system of algebraic equations : for , for , for , where we now show some results from numerical experiments performed using our method and the computer algebra system . according with physical situations , we choose values of and verifying .in particular , we fixed and .the calculation of the steady - state for the thermistor problem is an important issue regarding the applications of the model in the thermistor device .we obtained stable steady - state times for ( see fig .[ fig : graph ] ) .the authors are grateful to the support of the _ portuguese foundation for science and technology _ ( fct ) through the _centre for research in optimization and control _ ( ceoc ) of the university of aveiro , cofinanced by the european community fund feder / poci 2010 , and the project sfrh / bpd/20934/2004 .e. caglioti , p .-lions , c. marchioro , m. pulvirenti , a special class of stationary flows for two - dimensional euler equations : a statistical mechanics description , comm .* 143 * ( 1992 ) , no . 3 , 501525 .a. el hachimi and m. r. sidi ammi , existence of weak solutions for the thermistor problem with degeneracy , in _ proceedings of the 2002 fez conference on partial differential equations_ , 127137 ( electronic ) , electron .j. differ .conf . , 9 , southwest texas state univ . , san marcos , tx , 2002 .
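A possible realization of the resulting time-stepping procedure is sketched below: one backward Euler step in which the Joule-heating source is evaluated at the previous time level, so that each step reduces to a single linear solve. The conductivity law ` sigma ` and the unit factor for the squared potential gradient are placeholders of our own, not values taken from this paper.

```python
import numpy as np

def backward_euler_step(mass, stiff, u_old, tau, source_old):
    """One step of (M / tau + K) u_new = (M / tau) u_old + M source_old."""
    lhs = mass / tau + stiff
    rhs = mass.dot(u_old) / tau + mass.dot(source_old)
    return np.linalg.solve(lhs, rhs)

# Time loop, with `mass` and `stiff` assembled as in the previous sketch and a
# placeholder conductivity law (the Joule factor |phi'|^2 is taken equal to 1):
#
#   sigma = lambda u: 1.0 / (1.0 + u) ** 2
#   u = np.zeros(mass.shape[0])
#   for _ in range(n_steps):
#       u = backward_euler_step(mass, stiff, u, tau, sigma(u))
```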
_ we use a finite element approach based on the Galerkin method to obtain approximate steady-state solutions of the thermistor problem with temperature-dependent electrical conductivity . _ * keywords : * parabolic equation , finite element method , thermistor problem . * 2000 mathematics subject classification : * 35k40 , 74s05 .
as location - based services become widely deployed , the importance of verifying the location information being fed into the location service is becoming a critical security issue .the main difference between a location verification system ( lvs ) and a localization system is that we are confronted by some _ a priori _ information , such as a claimed position in the lvs . in the context of a main target application of our system , namely intelligent transport systems ( its ) ,the issue of location verification has attracted a considerable amount of recent attention .normally , in order to infer whether a network user or node is malicious ( attempting to spoof location ) or legitimate ( actually at the claimed location ) , we have to set a threshold for the lvs .this threshold is set so as to obtain low false positive rates for legitimate users and high detection rates for malicious users .as such , the specific value of the threshold will directly affect the performance of an lvs . one traditional approach to set the threshold of an lvs is to search for a tradeoff between false positive rate and detection rate according to receiver operating characteristic ( roc ) curve .another technique is to obtain the false positive and detection rates through empirical training data and minimize specific functions of the two rates to set the threshold .for example , in , the sum of false positive and false negative rates were minimized .however , although successful in many scenarios , the approaches mentioned above do not specify in any formal sense what the ` optimal ' threshold value of an lvs should be .in addition , in our key target application of our lvs , namely its , it is not practical to collect the required training data due to the variable circumstances .the main point of this paper is to develop for the first time an information theoretic framework that will allow us to formally set the optimal threshold of an lvs . in order to do this ,we first define a threshold based on the squared mahalanobis distance , which utilizes the fisher information matrix ( fim ) associated with the location information metrics utilized by the lvs . to optimize the threshold , the intrusion detection capability ( idc ) proposed by gu for an intrusion detection system ( ids ) will be utilized .the idc is the ratio of the reduction of uncertainty of the ids input given the output . as such, the idc measures the capability of an ids to classify the input events correctly .a larger idc means that the lvs has an improved capability of classifying users as malicious or legitimate accurately . from an information theoretic point of viewthe optimal threshold is the value that maximizes the idc .the rest of this paper is organized as follows .section 2 presents the system model , which details the observation model and the threat model we utilize . 
in section 3, the threshold is defined in terms of the fim associated with the location metrics .section 3 also provides the techniques used to determine the false positive and detection rates , which are utilized to derive the idc .section 4 provides the details of how the idc is used in the optimizing the threshold .simulation results which validate our new analytical lvs framework are presented in section 5 .section 6 concludes and discusses some future directions .let us assume a user could obtain its true position , ] , is exactly the same as its true position .however , a malicious user will falsify ( spoof ) its claimed position in an attempt to fool the lvs .we denote the legitimate and malicious hypothesis as and , respectively , and the _ a priori _ information can be summarized as although the framework we develop can be built on any location information metric , for purposes of illustration in this work we will solely investigate the case where the location information metric is the received signal strength ( rss ) obtained by a base station ( bs ) from a user .the rss of the -th bs from a legitimate user , , is assumed to be given by where is a reference received power , is the reference distance , is the path loss exponent , is a zero - mean normal random variable with variance , the euclidean distance of the -th bs to the user s true position ] is the location of the -th bs , and is the number of bss . for in eq .( [ prior ] ) , in eq .( [ observ ] ) can be replaced by , where is the euclidean distance of the -th bs to the user s claimed position ] .. in this alternative description , an error ellipse is derived directly from the fim , with the scale of the ellipse being set by and the orientation being set by the eigenvectors of the inverse fim . for different values of the threshold the ellipse size scales as , and the detection algorithm decides the user is malicious if the estimated position returned by the location mle lies outside of the ellipse . ]the squared mahalanobis distance can be expressed as where is the mean of and is the covariance matrix of . according to the definition of ,it is a dimensionless scalar and involves not only the euclidean distance but also the geometric information . in an lvs , we are interested in the ` distance ' between a user s estimated position and its claimed position .thus , we will use instead of to calculate .in addition , without any _ a priori _ results from a localization algorithms , we can not obtain any estimate of the covariance matrix .therefore , we will utilize the inverse fim , , to approximate . with this , the squared mahalanobis distance in our lvs can be written as where and is the fim to be calculated as given below . in practice, the lvs works on the observation model based on , and the likelihood function of received powers can be obtained using eq .( [ observ ] ) .let us assume the observations received by different bss are independent , then the log - likelihood function can be expressed as ^ 2 + \log \mathbf{c}.\end{aligned}\ ] ] where is the -dimension observation vector and the constant number is then , we can calculate the terms of the fim through ,\end{aligned}\ ] ] where represents the expectation operation with respect to all observations .after some algebra , the fim can be written as , ,\end{aligned}\ ] ] where after setting a threshold parameter for the squared mahalanobis distance , the decision rule of an lvs ( _ i.e. 
_ a malicious user or not ) can be expressed as follows note that , we are able to transform any covariance matrix into a diagonal matrix by rotating the position vector .thus , the general form of can be expressed as }.\end{aligned}\ ] ] then , the threshold can be encapsulated within the equation for an ellipse as follows therefore , the threshold can also be understood as an ellipse , denoted as , which is determined by extending the error ellipse provided by the fim with the threshold parameter .based on the above analysis , the overall process of an lvs includes four steps * collect observations of the rss received from a user by each bs ; * apply a localization algorithm to obtain an estimated position ; * calculate the squared mahalanobis distance of to the user s claimed position ; * infer if the user is legitimate or malicious according to the decision rule in eq .( [ rule ] ) . in practice ,the above are all the steps of our lvs . however , to evaluate an lvs , false positive and detection rates , which are functions of the threshold parameter and other lvs parameters , are always investigated in theory . in the following subsections , we provide techniques used to determine false positive and detection rates in order to optimize the threshold parameter . the false positive rate is the probability by which legitimate users are judged as malicious ones . for a legitimate user , .then , in the 2-d physical space , the false positive rate can be expressed as .in fact , the true positive rate ( ) is a well known metric that underlies the performance of unbiased localization algorithms .for example , in the 2-d physical space , it states that the probability by which an estimated position lies within the ellipse with is no more than .the detection rate is the probability that malicious users are recognized as malicious ones . in order to calculate , we have to obtain the posterior probability density function ( pdf ) for a location given some rss observation vector , which can be expressed as where ] is the observation vector . of course, if the user is malicious the observed signal vector will be one that has undergone a boost as described by eq .( [ threat2 ] ) .let us denote the average value of this spoofed observation vector as .given this , the likelihood function can be derived from eq .( [ observ ] ) .if we take to be a uniform variable vector , then the detection rate can be calculated as \in \mathbb{t}}}f(\bm{\theta}|\bm{\hat{p}})dx dy = 1-\frac{1}{a_1 } { \mathop{\int\int}\limits_{[x , y]\in \mathbb{t}}}f(\bm{\hat{p}}|\bm{\theta})dx dy,\end{aligned}\ ] ] where is a normalizing constant that can be written as where numerical methods are utilized to solve the above integral equation for since there is no closed form solution .based on the above analysis , is also a function of . as an aside it is worth mentioning that the false positive rate can also be written in a similar form as follows \in \mathbb{t}}}f(\bm{\tilde{p}}|\bm{\theta})dx dy,\end{aligned}\ ] ] where is the average non - spoofed observation vector and where this section we will optimize the value of the threshold by maximizing the idc , which is a function of the false positive rate , detection rate and the base rate ( the _ a priori _ probability of intrusion in the input event data ) . 
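Before turning to that optimization, the ingredients introduced so far can be collected in a small illustrative sketch: the RSS observation model, the Fisher information matrix evaluated at the claimed position (the standard closed form for the log-distance model), and the squared-Mahalanobis-distance decision rule. All numeric parameter values (reference power, path-loss exponent, shadowing standard deviation, number of observations, base-station layout) are assumptions for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
bs_xy = np.array([[0.0, 0.0], [200.0, 0.0], [0.0, 200.0], [200.0, 200.0]])

def rss_observations(user_xy, p0=-30.0, d0=1.0, gamma=3.0, sigma_db=4.0, n_obs=20):
    """RSS (dB) collected by each base station from a user, with log-normal shadowing."""
    d = np.linalg.norm(bs_xy - user_xy, axis=1)
    mean_rss = p0 - 10.0 * gamma * np.log10(d / d0)
    return mean_rss + rng.normal(0.0, sigma_db, size=(n_obs, len(bs_xy)))

def fim_rss(theta, gamma=3.0, sigma_db=4.0, n_obs=20):
    """2x2 Fisher information of the position for i.i.d. RSS observations."""
    diff = theta - bs_xy
    d2 = np.sum(diff ** 2, axis=1)
    coef = n_obs / sigma_db ** 2 * (10.0 * gamma / np.log(10.0)) ** 2
    return coef * np.einsum('i,ij,ik->jk', 1.0 / d2 ** 2, diff, diff)

def is_malicious(theta_hat, theta_claimed, threshold):
    """Decision rule: flag the user when the squared Mahalanobis distance exceeds T."""
    fim = fim_rss(theta_claimed)                 # FIM evaluated at the claimed position
    delta = theta_hat - theta_claimed
    return delta @ fim @ delta > threshold       # FIM plays the role of an inverse covariance
```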
that is , our optimization procedure is to find the value of that maximizes the idc .from an information theoretic point of view , the idc is a metric that measures the capability of an ids to classify the input events correctly and is defined as where is the entropy of the input data , is the mutual information of input data and output data , and is the conditional entropy . mutual information measures the reduction of uncertainty of the input given the output .thus , is the ratio of the reduction of uncertainty of the input given the output .its value range is [ 0 , 1 ] .a larger value means that the ids has an improved capability of classifying input events accurately .our lvs can be modeled as an ids whose input data are the claimed positions , and the output data are the binary decisions . then , represents an actual claimed position from a legitimate user , represents a spoofed claimed position from a malicious user , infers the user is legitimate , and indicates the user is malicious .accordingly , the false positive rate is the probability , and detection rate is the probability .therefore , the optimal value of is the one that maximizes the value of the of the lvs .the realizations of input and output data are denoted as and , respectively .given the base rate , the entropy of the input data can be written as the conditional entropy can be expressed as numerical methods are applied in order to search for the optimal value of since there is no closed form for . in the following we refer to this optimal value as .adopting a maximum likelihood estimator ( mle ) in our location estimation algorithm we now verify , via detailed simulations , our previous analysis .the theoretical and simulated , and , all of which are dependent on , are utilized in order to find the value that maximizes .the simulation settings are as follows : * bss are deployed in a square field and the legitimate and honest users can communicate with all bss ; * the claimed positions of honest and malicious users are the same , denoted ; * observations are collected from each base station ; * the bss are set at fixed positions ( we investigate a range of fixed locations ) ; * the results shown are averaged over 1,000 monte carlo realizations of the estimated position , and where the base rate for all the simulations . as shown in fig.1 ,the solid lines are the theoretical , and while the symbols are the simulated , and .the simulated values of and are calculated directly according to the realizations of estimated positions , and then the simulated is obtained from eq .( [ h7 ] ) .the simulation parameters are shown in the figure caption and the theoretical optimal value can be seen to be ( note that in all the figures explicitly shown in this paper the four bss are fixed at the corners of a 200 m x 200 m grid ) .the comparison between simulation and analysis shows excellent agreement . 
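For reference, the mapping from the base rate and the false positive and detection rates to the IDC that is used throughout these comparisons can be sketched as follows; the one-dimensional search over the threshold is indicated only in a comment, since the rate functions themselves depend on the scenario at hand.

```python
import numpy as np

def _entropy(p):
    """Shannon entropy (base 2) of a finite distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def idc(base_rate, false_positive, detection):
    """C_ID = I(X; Y) / H(X) for binary input X and binary decision Y."""
    b, a, d = base_rate, false_positive, detection
    joint = np.array([[(1 - b) * (1 - a), (1 - b) * a],      # X = 0 (legitimate)
                      [b * (1 - d),       b * d]])            # X = 1 (malicious)
    h_x = _entropy([1 - b, b])
    h_y = _entropy(joint.sum(axis=0))
    h_xy = _entropy(joint.ravel())
    return (h_x + h_y - h_xy) / h_x                           # I(X;Y) = H(X)+H(Y)-H(X,Y)

# The optimal threshold is then a 1-D search, e.g.
#   T_star = max(candidates, key=lambda T: idc(B, alpha(T), beta(T)))
# where alpha(T) and beta(T) are the rates obtained as described above.
```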
beyond the simulations explicitly shown in fig.1, we have investigated a range of other fixed bss positions ( up to 10 bss whose positions are randomly selected ) , and these simulation also show excellent agreement with simulations .collectively , these simulation results verify the analysis we have provided earlier .the simulation results with a malicious user having a certain distance to all bss are shown in fig.2 .the true position of the malicious user in the simulations is set at 10 km away from the claimed position .although the simulation and theoretical values of , and do not match with each other exactly ( the theoretical analysis approximates the user as being at infinity ) , the simulation and theoretical optimal values are effectively the same .we find this result holds down to distance where the malicious user is a few km away from the claimed position .this shows that our framework is tenable when the assumption that malicious user is infinitely far away is relaxed down to the few km range .in order to verify the with the optimal value is correct , we also simulated for a range of .3 shows such results for the case where the user malicious user if effectively at infinity . herethe optimal value is derived from the proposed theoretical analysis , but in the simulations the threshold is set to the other values of t shown ( and ) . from the results shown we can see that these other values do provide simulated false positive and detection rates which result in lower values of ( and therefore sub - optimal performance ) , which once again verifies the robustness of our analytical framework .4 shows the same results except that the malicious user is again set at 10 km away from the claimed position .again we see a validation of our analysis .in this paper , we have proposed a novel and rigorous information theoretic framework for an lvs .the theoretical framework we have developed shows how the value of the threshold used in the detection of a spoofed location can be optimized in terms of the mutual information between the input and output data . 
in order to verify the legitimacy of our frameworkwe have carried out detailed numerical simulations of our framework under the assumption of an idealized threat model in which the malicious user is far enough from the claimed location such that his boosted signal strength results in all bss receiving the same rss ( modulo noise ) .our numerical simulations mimic the practical scenario where a system deployed using our framework must make a binary yes / no malicious decision " to each snapshot of rss values obtained by the bss .the comparison between simulation and analysis shows excellent agreement .other simulations where we modify the approximation of constant rss at bss also showed very good agreement with analysis .the work described in this paper formalises the performance of an optimal lvs system under the simplest ( and perhaps most likely scenario ) , where a single malicious user attempts to spoof his location to a wider wireless network .the practical scenario we had in mind whilst carrying out our simulations was in an its where another vehicle is attempting to provide falsified location information the wider vehicular network .future work related our new framework will include the formal inclusion of more sophisticated threat models , where the malicious user is both closer to the claimed location and has the use of colluding adversaries .it is well known that no lvs can be made foolproof under the colluding adversary scenario , however , we will investigate in a formal information theoretic sense the detailed nature of the vulnerability of an lvs under such different threat models .this work has been supported by the university of new south wales , and the australian research council ( arc ) .s. , k. b. rasmussen , m. , and m. srivastava , secure location verification with hidden and mobile base station , " _ ieee trans .mobile comput ._ , vol . 7 , no .470 - 483 , apr . 2008 .h. buhrman , n. chandran , s. fehr , r. gelles , v. goyal , r. ostrovsky and c. schaffner , position - based quantum cryptography : impossibility and constructions , " in advances in cryptology , vol .6841 of lecture notes in computer science , pp .429 - 446 , springer - verlag , 2011 .
as location - based applications become ubiquitous in emerging wireless networks , location verification systems ( lvs ) are of growing importance . in this paper we propose , for the first time , a rigorous information - theoretic framework for an lvs . the theoretical framework we develop illustrates how the threshold used in the detection of a spoofed location can be optimized in terms of the mutual information between the input and output data of the lvs . in order to verify the legitimacy of our analytical framework we have carried out detailed numerical simulations . our simulations mimic the practical scenario where a system deployed using our framework must make a binary yes / no " malicious decision " for each snapshot of the signal strength values obtained by base stations . the comparison between simulation and analysis shows excellent agreement . our optimized lvs framework provides a defence against location spoofing attacks in emerging wireless networks such as those envisioned for intelligent transport systems , where verification of location information is of paramount importance .
[ sec:1 ] magnetized plasmas are encountered in a wide variety of astrophysical situations , but also in magnetic fusion devices such as tokamaks , where a large external magnetic field needs to be applied in order to keep the particles on the desired tracks . in particle - in - cell ( pic ) simulations of such devices, this large external magnetic field obviously needs to be taken into account when pushing the particles .however , due to the magnitude of the concerned field this often adds a new time scale to the simulation and thus a stringent restriction on the time step . in orderto get rid of this additional time scale , we would like to find approximate equations , where only the gross behavior implied by the external field would be retained and which could be used in a numerical simulation .in the simplest situation , the trajectory of a particle in a constant magnetic field is a helicoid along the magnetic field lines with a radius proportional to the inverse of the magnitude of .hence , when this field becomes very large the particle gets trapped along the magnetic field lines .however due to the fast oscillations around the apparent trajectory , its apparent velocity is smaller than the actual one .this result has been known for some time as the guiding center approximation , and the link between the real and the apparent velocity is well known in terms of . here, we consider a plasma constituted of a large number of charged particles , which is described by the vlasov equation coupled with the maxwell or poisson equations to compute the self - consistent fields .it describes the evolution of a system of particles under the effects of external and self - consistent fields .the unknown , depending on the time , the position , and the velocity , represents the distribution of particles in phase space for each species with , .its behaviour is given by the vlasov equation , where the force field is coupled with the distribution function giving a nonlinear system .we first define the charge density and the current density which are given by where is the elementary charge . for the vlasov - poisson model where represents the mass of one particle . on the other hand for the vlasov - maxwell model ,we have and , are solutions of the maxwell equations with the compatibility condition which is verified by the solutions of the vlasov equation . 
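As a concrete (and deliberately simplified, one-dimensional in both position and velocity) illustration of how the coupling terms are obtained from the distribution function, the sketch below computes the charge and current densities as velocity moments of a gridded distribution. The particular initial distribution, grid sizes and unit charge are arbitrary choices for the example.

```python
import numpy as np

def moments(f, v, dv, q=1.0):
    """Charge density rho(x) = q * int f dv and current J(x) = q * int v f dv."""
    rho = q * np.sum(f, axis=1) * dv
    current = q * np.sum(f * v[None, :], axis=1) * dv
    return rho, current

nx, nv = 64, 128
v = np.linspace(-6.0, 6.0, nv)
dv = v[1] - v[0]
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
# arbitrary example distribution: a Maxwellian modulated by a small density perturbation
f = np.exp(-0.5 * v[None, :] ** 2) / np.sqrt(2.0 * np.pi) * (1.0 + 0.1 * np.cos(x)[:, None])
rho, current = moments(f, v, dv)
```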
herewe will consider an intermediate model where the magnetic field is given , with and we focus on the long time behavior of the plasma in the orthogonal plane to the external magnetic field , that is the two dimensional vlasov - poisson system with an external strong magnetic field here , for simplicity we set all physical constants to one and consider that is a small parameter related to the ratio between the reciprocal larmor frequency and the advection time scale .the term in front of the time derivative of stands for the fact that we want to approximate the solution for large time .we want to construct numerical solutions to the vlasov - poisson system ( [ eq : vlasov2d ] ) by particle methods ( see ) , which consist in approximating the distribution function by a finite number of macro - particles .the trajectories of these particles are computed from the characteristic curves corresponding to the vlasov equation where the electric field is computed from a discretization of the poisson equation in ( [ eq : vlasov2d ] ) on a mesh of the physical space .the main purpose of this work is the construction of efficient numerical methods for stiff transport equations of type ( [ eq : vlasov2d ] ) in the limit .indeed , setting the system ( [ traj:00 ] ) can be re - written for as therefore , we denote by the solution to ( [ traj:01 ] ) , and under some classical smoothness assumptions on the electromagnetic fields , it is well - known at least when or is homogeneous that converges weakly to zero when , and , where corresponds to the guiding center approximation here , we are of course interested in the behavior of the sequence solution to the vlasov - poisson system ( [ eq : vlasov2d ] ) when , which corresponds to the gyro - kinetic approximation of the vlasov - poissons system . following the work of l. saint - raymond ,it can be proved at least when is homogeneous that the charge density converges to the solution to the guiding center approximation -\delta _ { { \mathbf{x } } _\perp}\phi=\rho , \end{array } \right . \label{eq : gc}\ ] ] where the velocity is we observe that the limit system ( [ traj : limit ] ) corresponds to the characteristic curves to the limit equation ( [ eq : gc ] ) .we seek a method that is able to capture these properties , while the numerical parameters may be kept independent of the stiffness degree of these scales .this concept is known and widely studied for dissipative systems in the framework of asymptotic preserving schemes .contrary to collisional kinetic equations in hydrodynamic or diffusion asymptotic , collisionless equations like the vlasov - poisson system ( [ eq : vlasov2d ] ) involve time oscillations . in this context , the situation is more complicated than the one encountered in collisional regimes since we can not expect any dissipative phenomenon . therefore , the notion of two - scale convergence has been introduced both at the theoretical and numerical level in order to derive asymptotic models. however , these asymptotic models , obtained after removing the fast scales , are valid only when is small .we refer to e. frnod and e. sonnendrcker , and f. golse and l. saint - raymond for a theoretical point of view on these questions , and e. frnod , f. salvarani and e. 
sonnendrcker for numerical applications of such techniques .another approach is to combine both disparate scales into one and single model .such a decomposition can be done using a micro - macro approach ( see and the references therein ) .such a model may be used when the small parameter of the equation is not everywhere small .hence , a scheme for a micro - macro model can switch from one regime to another without any treatment of the transition between the different regimes . a different methodconsists in separating fast and slow time scales when such a structure can be identified or .theses techniques work well when the magnetic field is uniform since fast scales can be computed from a formal asymptotic analysis , but for more complicated problems , that is , when the external magnetic field depends on time and position , the generalization of this approach is an open problem . in this paper, we propose an alternative to such methods allowing to make direct simulations of systems ( [ eq : vlasov2d ] ) with large time steps with respect to .we develop numerical schemes that are able to deal with a wide range of values for , so - called asymptotic preserving ( ap ) class , such schemes are consistent with the kinetic model for all positive value of , and degenerate into consistent schemes with the asymptotic model when . before presenting our time discretization technique , let us first briefly review the basic tools of particle - in - cell methods which are widely used for plasma physics simulations .the numerical resolution of the vlasov equation and related models is usually performed by particle - in - cell ( pic ) methods which approximate the plasma by a finite number of particles .trajectories of these particles are computed from characteristic curves ( [ traj:00 ] ) corresponding to the the vlasov equation ( [ eq : vlasov2d ] ) , whereas self - consistent fields are computed on a mesh of the physical space .this method yields satisfying results with a relatively small number of particles but it is sometimes subject to fluctuations , due to the numerical noise , which are difficult to control . to improve the accuracy ,direct numerical simulation techniques have been developed .the vlasov equation is discretized in phase space using either semi - lagrangian , finite difference or discontinuous galerkin schemes . butthese direct methods are very costly , hence several variants of particle methods have been developed over the past decades . in the complex particle kinetic scheme introduced by bateson and hewett , particles have a gaussian shape that is transformed by the local shearing of the flow. moreover they can be fragmented to probe for emerging features , and merged where fine particles are no longer needed . in the cloud in mesh ( cm ) scheme of alard and colombi particlesalso have gaussian shapes , and they are deformed by local linearization of the force field .more recently in , the authors proposed a linearly - transformed particle - in - cell method , that employs linear deformations of the particles . herewe focus on the time discretization technique , hence we will only consider standard particle method even if our approach is completely independant on the choice of the particle method . the particles method consists in approximating the initial condition in ( [ eq : vlasov2d ] ) by the following dirac mass sum where is a beam of particles distributed in the four dimensional phase space according to the density function . 
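In practice this Dirac sum is what gets deposited on the spatial grid when the charge density is computed. A deliberately simplified, one-dimensional sketch with the linear cloud-in-cell shape function (one common choice of cut-off) follows; the uniform initial density, equal weights and grid parameters are our own choices. The smoothing by a shape function is made precise in the next paragraph.

```python
import numpy as np

rng = np.random.default_rng(2)

def deposit_charge(x_p, weights, n_cells, length):
    """Linear (cloud-in-cell) deposition of particle weights on a periodic grid."""
    h = length / n_cells
    s = x_p / h
    left = np.floor(s).astype(int)
    frac = s - left
    rho = np.zeros(n_cells)
    np.add.at(rho, left % n_cells, weights * (1.0 - frac) / h)
    np.add.at(rho, (left + 1) % n_cells, weights * frac / h)
    return rho

n_particles, box = 100_000, 2.0 * np.pi
x_p = rng.uniform(0.0, box, n_particles)        # positions sampled from f0 (uniform here)
w_p = np.full(n_particles, box / n_particles)   # equal particle weights
rho = deposit_charge(x_p, w_p, n_cells=64, length=box)
```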
afterwards , one approximates the solution of ( [ eq : vlasov2d ] ) , by where is the position in phase space of particle moving along the characteristic curves ( [ traj:00 ] ) with the initial data , for .however when the vlasov equation is coupled with the poisson equation for the computation of the electric field , the dirac mass has to be replaced by a smooth function where is a particle shape function with radius proportional to , usually seen as a smooth approximation of the dirac measure obtained by scaling a compactly supported `` cut - off '' function for which common choices include b - splines and smoothing kernels with vanishing moments , see _e.g. _ .particle centers are then pushed forward at each time by following a numerical approximation of the flow ( [ traj:00 ] ) , leading to in the classical error analysis , the above process is seen as * an approximation ( in the distribution sense ) of the initial data by a collection of weighted dirac measures ; * the exact transport of the dirac particles along the flow ; * the smoothing of the resulting distribution with the convolution kernel . the classical error estimate reads then as follows : [ lmm:0 ] consider the vlasov equation with a given electromagnetic field and a smooth initial datum , with .if for some prescribed integers and , the cut - off has -th order smoothness and satisfies a moment condition of order , namely , and then there exists a constant independent of , or , such that we have for all , when and where the ratio .note that following , it is also possible to get explicit order of convergence for the linear vlasov equation .let us also mention related papers where the convergence of a numerical scheme for the vlasov - poisson system is investigated .cottet and p .- a .raviart present a precise mathematical analysis of the particle method for solving the one - dimensional vlasov poisson system .we also mention the papers of s. wollman and e. ozizmir and s. wollman on the topic .k. ganguly and h.d .victory give a convergence result for the vlasov - maxwell system .the rest of the paper is organized as follows . in section [ sec:3 ]we present several time discretization techniques based on high - order semi - implicit schemes for the vlasov - poisson system with a strong external magnetic field , and we prove uniform consistency of the schemes in the limit with preservation of the order of accuracy ( from first to third order accuracy ) . in section [ sec:4 ]we perform a rigorous analysis of the first order scheme for smooth electromagnetic fields .section [ sec:5 ] is then devoted to numerical simulations for one single particle motion and for the vlasov - poisson model for various asymptotics and , which illustrate the advantage of high order schemes .[ sec:3 ] let us now consider the system ( [ eq : vlasov2d ] ) and apply a particle method , where the key issue is to design a uniformly stable scheme with respect to the parameter , which is related to the magnitude of the external magnetic field .assume that at time , the set of particles are located in , we want to solve the following system on the time interval ] .assume that the sequence given by ( [ scheme:0 ] ) is such that for all , is uniformly bounded with respect to and converges in the limit to some .then , for , , as and the limit is a consistent first order approximation with respect to of the guiding center equation provided by the scheme for all , we consider the solution to ( [ scheme:0 ] ) now labeled with respect to . 
since , the sequence is uniformly bounded with respect to , we can extract a subsequence still abusively labeled by and find some such that as goes to zero .then , we observe that the second equation of ( [ scheme:0 ] ) can be written as and that , for any , is uniformly bounded . from thiswe conclude first that for any , is uniformly bounded then that for any .therefore , taking the limit , it yields that for , substituting the limit of in the first equation of ( [ scheme:0 ] ) we prove that the limit satisfies ( [ sch : y0 ] ) . since the limit point is uniquely determined , actually all the sequence converges .the consistency provided by the latter result is far from being uniform with respect to the time step .however we do prove in the next section that the solution to ( [ scheme:0 ] ) is both uniformly stable and consistent with respect to and .of course , such a first order scheme is not accurate enough to describe correctly the long time behavior of the numerical solution , but it has the advantage of the simplicity and we will prove in the next section that it is uniformly stable with respect to the parameter and the sequence converges to a consistent approximation of the guiding center model when .now , let us see how to generalize such an approach to second and third order schemes .now , we consider second order schemes with two stages .a first example of scheme satisfying the second order conditions is given by a combination of heun method ( explicit part ) and an -stable second order singly diagonal implicit runge - kutta sdirk method ( implicit part ) .the first stage corresponds to . }\end{array}\right.\ ] ] then the second stage is given by . } \end{array}\right.\ ] ] finally , the numerical solution at the new time step is a similar numerical scheme has been proposed in the framework of simulation of the vlasov - poisson system . under stability assumptions on the numerical solution to ( [ scheme:2 - 1])-([scheme:2 - 3 ] ) , we get the following consistency result in the limit .[ prop:2 ] let us consider a time step , a final time and set ] .assume that the sequence given by ( [ scheme:3 - 1])-([scheme:3 - 3 ] ) is such that for all , is uniformly bounded with respect to and converges in the limit to some .then , for , , as and the limit is a consistent second order approximation of the guiding center equation , given by where we omit the proof of proposition [ prop:3 ] as almost identical to the one of proposition [ prop:2 ] .the present scheme is - stable , which means uniformly linearly stable with respect to . a third order semi - implicit scheme is given by a four stages runge - kutta method introduced in the framework of hyperbolic systems with stiff source terms .first , we set , and and . then we construct the first stage as with for the second stage , we have with then , for the third stage we set with finally , for the fourth stage we set with and the numerical solution at the new time step is as for the previous schemes , under uniform stability assumptions with respect to , we prove the following proposition [ prop:5 ] let us consider a time step , a final time and set ] . 
by introducing the key quantity for , ,\ ] ] the first equation of reads +\delta t\ { \mathbf{z } } ^n\,,\quad n\geq1\ ] ] while ^{-1}(\eps^{-1 } { \mathbf{v } } ^0-r[{{\mathbf e}}(t^0 , { \mathbf{x } } ^0)]) ] since .this leads to \,,\quad n\geq1\,.\ ] ] assuming and introducing we infer where \right\|\,.\ ] ] for comparison we define solution to ( [ cg:01 ] ) .then , ,\quad n\geq1\,.\end{aligned}\ ] ] hence assuming moreover ( or replacing with some arbitrary positive number if ) \\ & + & \frac{b\,\delta t}{(1+k_x\delta t)[1-\frac{1+\lambda}{1+\lambda^2}]}\,\left[1+\left(\frac{1+\lambda}{1+\lambda^2}\right)^n\right]\,(1+k_x\delta t)^n\,.\end{aligned}\ ] ] as a conclusion , for any , there exists such that if then , it yields \right\|\right]\ e^{k_x\,n\delta t}\,,\ ] ] which concludes the proof of theorem [ th:1 ] when the magnetic field is uniform .we now relax the assumption that is constant and of norm .we set and let be dependent on and .we observe that now so that ^{-1}=({{\rm id}}+\lambda r)/(1+\lambda^2\mathbf{b}_{\rm ext}^2)$ ] .then introducing the drift force we essentially obtain the same estimates with replaced by and indeed introducing for , the scheme is written ^{-1}\left(\frac { { \mathbf{v } } ^0}{\eps}-{{\mathbf f}}(t^{0 } , { \mathbf{x } } ^{0})\right),\ ] ] and then ^{-1 } ( { \mathbf{z } } ^n-({{\mathbf f}}(t^{n } , { \mathbf{x } } ^{n})-{{\mathbf f}}(t^{n-1 } , { \mathbf{x } } ^{n-1})))\,,\quad n\geq1\ ] ] together with mark that if then note that this result can be slightly improved when we modify the initial condition of the asymptotic discrete model . indeed ,consider solving the gain is that now \ \left\|\frac1\eps { \mathbf{v } } ^0-r[{{\mathbf e}}(t^0 , { \mathbf{x } } ^0)]\right\|\ ] ] since \right]\ -\\frac1\lambda r\,[{{\rm id}}-\lambda r]^{-1}\left[\frac1\eps { \mathbf{v } } ^0-r[{{\mathbf e}}(t^0 , { \mathbf{x } } ^0)]\right]\,.\ ] ] this leads , for to \\ & + & \frac{b\,\deltat}{1- \frac{1+\lambda}{1+\lambda^2}}\,\frac{1+\lambda}{1+\lambda^2}\,\left[1+\left(\frac{1+\lambda}{1+\lambda^2}\right)^{n-1}\right]\,(1+k_x\delta t)^{n-1 } \\ & + & { \displaystyle } \frac{\delta t}{\lambda}\ \left[\,k_x\delta t\,+\,\frac{1+\lambda}{1+\lambda^2}\,\right]\ \left\|\frac1\eps { \mathbf{v } } ^0-r[{{\mathbf e}}(t^0 , { \mathbf{x } } ^0)]\right\|\ ( 1+k_x\delta t)^{n-1}\end{aligned}\ ] ] then with notation as above there exists a constant such that \right\|\right]\,.\ ] ] concerning the analysis of high - order schemes , it is not straightforward to adapt directly the strategy of theorem [ th:1 ] .indeed , the use of a semi - implicit scheme does not necessarily guarantee that the particle trajectories are under control .consider the scheme ( [ scheme:2 - 1])-([scheme:2 - 3 ] ) in the simplest situation where the electric field is zero and the magnetic field is , we show that the scheme preserves the kinetic energy and we have hence as goes to zero , the velocity can not converge to the null guiding center velocity except if the initial velocity does .fortunately , for the other high order schemes ( [ scheme:3 - 1])-([scheme:3 - 3 ] ) and ( [ scheme:4 - 1])-([scheme:4 - 5 ] ) , kinetic energy is dissipated and converges to 0 . 
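Summarizing, a sketch of the first-order semi-implicit push analysed above is given below, together with a check that for small values of the parameter it reproduces the guiding-center drift. This is our reading of the scheme rather than a verbatim transcription: the electric field is taken explicitly at the current position, the stiff magnetic rotation is taken implicitly at the new velocity (a 2x2 linear solve per particle), B_ext = 1, and the sign convention for the perpendicular velocity is an assumption; the constant field in the check is an arbitrary example.

```python
import numpy as np

def semi_implicit_push(x, v, dt, eps, e_field):
    """First-order IMEX step for particle arrays x, v of shape (N, 2), with B_ext = 1."""
    lam = dt / eps ** 2
    rhs = v + (dt / eps) * e_field(x)
    det = 1.0 + lam ** 2
    v_new = np.empty_like(v)
    # solve (I - lam R) v_new = rhs, with R v = (v_2, -v_1)
    v_new[:, 0] = (rhs[:, 0] + lam * rhs[:, 1]) / det
    v_new[:, 1] = (rhs[:, 1] - lam * rhs[:, 0]) / det
    x_new = x + (dt / eps) * v_new
    return x_new, v_new

# check against the guiding-center drift x' = E_perp for a constant field
e_field = lambda x: np.broadcast_to(np.array([0.0, 1.0]), x.shape)
e_perp = np.array([1.0, 0.0])                   # (E_2, -E_1) with the same convention
x = np.zeros((1, 2)); v = np.zeros((1, 2))
dt, eps, n_steps = 0.05, 1e-3, 20
for _ in range(n_steps):
    x, v = semi_implicit_push(x, v, dt, eps, e_field)
print(x[0], n_steps * dt * e_perp)              # both close to (1, 0)
```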
[ rem : nc ]in this section , we discuss some examples to validate and to compare the different time discretization schemes .we first consider the single motion of a particle under the effect of a given electromagnetic field .it allows us to illustrate the ability of the semi - implicit schemes to capture the guiding center velocity with large time step in the limit . then we consider the vlasov - poisson system with an external magnetic field .a classical particle - in - cell method is applied with different time discretization techniques to compute the particle trajectories .hence this collection of charged particles move and give rise to a self - consistent electric field , obtained by solving numerically the poisson equation in a space grid . before going to the statistical descriptions ,let us investigate the accuracy and stability properties of the semi - implicit algorithms presented in section [ sec:3 ] on the motion of individual particles in a given electromagnetic field .here we consider an electric field , where with and a magnetic field with .we choose for all simulations and the initial data as and , such that the initial data is bounded with respect to . [ cols="^,^ " , ]in this paper we proposed a class of semi - implicit time discretization techniques for particle - in cell simulations .the main feature of this approach is to guarantee the accuracy and stability when the amplitude of the magnetic field becomes large and to get the correct long time behavior ( guiding center approximation ) .we formally showed that the present schemes preserve the initial order of accuracy when .furthermore , we performed a complete analysis of the first order semi - implicit scheme when we consider a given and smooth electromagnetic field .the time discretization techniques proposed in this paper seem to be a very simple and efficient tool to filter fast oscillations and have nice stability and consistency properties in the limit .however , a complete analysis of high order semi - implicit schemes is still missing .the main issue is to control the space trajectory uniformly with respect to and as we have shown in remark [ rem : nc ] , the use of a semi - implicit scheme does not necessarily guarantee that the particle trajectories are under control .a complete analysis of high order schemes is currently under study . on the other hand ,the present techniques will be applied to more advanced problems as the three dimensional vlasov - poisson system when the magnetic field is non uniform and the particle trajectories become more complicated .ff was supported by the eurofusion consortium and has received funding from the euratom research and training programme 2014 - 2018 under grant agreement no 633053 .the views and opinions expressed herein do not necessarily reflect those of the european commission .positivity preserving semi - lagrangian discontinuous galerkin formulation : theoretical analysis and application to the vlasov - poisson system , journal of computational physics , * 230 * , pp .83868409 ( 2011 ) .
this paper deals with the numerical resolution of the vlasov - poisson system with a strong external magnetic field by particle - in - cell ( pic ) methods . in this regime , classical pic methods are subject to stability constraints on the time and space steps related to the small larmor radius and plasma frequency . here , we propose an asymptotic - preserving pic scheme which is not subject to these limitations . our approach is based on first and higher order semi - implicit numerical schemes already validated on dissipative systems . additionally , when the magnitude of the external magnetic field becomes large , this method provides a consistent pic discretization of the guiding - center equation , that is , the incompressible euler equation in vorticity form . we propose several numerical experiments which provide a solid validation of the method and its underlying concepts . keywords : high order time discretization ; vlasov - poisson system ; guiding - center model ; particle methods .
in the present paper we consider a model where in a prescribed region of the euclidean space an agent ( a central government or a commercial company ) has the possibility to decide the price of a certain product ; this price may vary at each point and the customers density is assumed to be completely known .we assume that all the customers buy the same quantity of the product ; on the counterpart , a customer living at the point knows the pricing function everywhere and may decide to buy the product where he lives , then paying a cost , or in another place , then paying the cost and additionally a transportation cost for a given transportation cost function .the individual strategy of each customer is then to solve the minimization problem of particular importance to our problem is the ( set - valued ) map which associates to every customer living at the point all the locations where it is optimal to purchase the good . given the price pattern , is then defined by without any other constraint , due to the fact that the customers have to buy the product ( for instance gasoline , food , a medical product or cigarettes ) , the pricing strategy for the agent in order to maximize the total income would simply be increasing everywhere the function more and more . to avoid this trivial strategywe assume that on the region some kind of regulations are present , and we study the optimization problems the agent has to solve in order to maximize its total profit .we will study two models according to two different price constraints .we also assume that the supply is unconstrained at any location of the region , which means that whatever the total demand for the product is at a given location , it can be supplied by the agent to the customers .the simplest situation we consider is when the price is constrained to remain below a fixed bound everywhere on , due for instance to some regulatory policy .the only assumption we make is that is a _ proper _ nonnegative function , intending that in the region where no restrictions on are imposed .the goal of the agent is to maximize its total income that , with the notation introduced in and , can be written as under the constraint ( state equation ) that i.e. that is compatible with the customer s individual minimization problem .one may therefore see the previous program as a nonstandard optimal control problem where is the control and the state variable .let us mention that problems with a similar structure naturally arise in the so - called _ principal - agent _ problem in economics ( see for instance rochet and chon and the references therein ) .we consider a second model of pricing strategy : we suppose that in there is a given subregion where the price is fixed as a function that the agent can not control .this is for instance the case of another country if the agent represents a central government , or of a region where for some social reasons that the agent can not modify , the prices of the product are fixed .whenever , then the agent makes no benefit from customers living at .in fact the total profit of the agent is given by under the constraint ( state equation ) that .note that in formula giving the total profit , the integration is now performed only on the set of customers that do shop in the region controlled by the agent and not in the fixed - price region .the problem we are interested in reads again as the maximization of the functional among the admissible choices of state and control variables . 
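To fix ideas, the following toy discretization (our own setup, with a uniform customer density, the transport cost k(x, y) = |x - y|, and the price bound taken as the constant 0.3 purely for illustration) computes the customers' best-response map and the agent's total income for a given price pattern.

```python
import numpy as np

locations = np.linspace(0.0, 1.0, 101)
f = np.ones_like(locations) / locations.size             # customer density (uniform)
k = np.abs(locations[:, None] - locations[None, :])      # transport cost k(x, y) = |x - y|

def agent_income(price):
    total = price[None, :] + k                           # cost for customer x buying at y
    y_opt = np.argmin(total, axis=1)                     # best-response map x -> y(x)
    return np.sum(f * price[y_opt])

# a constant price equal to the bound 0.3 everywhere: every customer buys locally
print(agent_income(np.full_like(locations, 0.3)))        # -> 0.3
```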
for both modelsabove we discuss the mathematical framework which enables us to obtain an existence result for an optimal pricing strategy and we present some particular cases where more detailed computations can be made , as the case of concave costs , the case of a quadratic cost , and the onedimensional case .the last section contains some discussions about possible extensions and developments , as for instance the case of nash equilibria when more agents operate on the same market .in what follows , will be some compact metric space ( the economic region ) , and ] since such costs are in fact metrics .we consider now the case where is some open bounded subset of the euclidean space and where is a nonnegative smooth and strictly convex function . in this framework, a -concave function can be represented as : by the smoothness of , the compactness of and lemma [ unifcont ] ensure that is lipschitz continuous on hence lebesgue a.e .differentiable on by rademacher s theorem . for every point of differentiability of and every ,it is easy to check that from ( [ envcconv ] ) one has : and since is strictly convex this can be rewritten as : where stands for the legendre transform of .this proves that for every -concave function , is in fact single - valued on a set of full lebesgue measure .now further assuming that is absolutely continuous with respect to the lebesgue measure on , we can rewrite the profit functional in a more familiar form : \,df = \int_\omega\left[v+h^*(\nabla v)-\nabla v\cdot\nabla h^*(\nabla v)\right]\,df.\ ] ] if we further restrict our attention to the quadratic case , namely and is convex , it is easy to see that is -concave on if and only if the function defined by is convex and satisfies of course the constraint translates into with .putting everything together , we then see that solves ( [ maxi ] ) if and only if and solves the following : \,df\ ] ] and problems of the calculus of variations subject to a convexity constraint with a very similar structure as ( [ calcvarcvex ] ) arise in the monopoly pricing model of rochet and chon ( ) .note also that by strict convexity , ( [ calcvarcvex ] ) possesses a unique solution .we now consider problem ( [ calcvarcvex ] ) in the special unidimensional case where , and ( which corresponds to the price bound ) .the problem amounts to maximize among convex , nondecreasing and -lipschitz functions .it is obvious that one necessarily has at the optimum , which setting and integrating by parts enables us to write \,dx\ ] ] and the previous integral has to be minimized among nondecreasing functions taking values in ] ( i.e. grows at the maximal rate on the segment ] such that yields : by density , this inequality actually holds for all , and the converse inequality follows immediately from ( [ repwp]).we thus have proved that if is -concave then it is well - known ( see ) that ( [ repwpb ] ) implies that is a _ viscosity _ solution of the eikonal equation on .now , conversely , assume that is -lipschitz on and a viscosity solution of the eikonal equation on and define then on ( in particular on ) and by the same argument as above is a viscosity solution of the eikonal equation on . a standard comparison argument ( e.g. theorem 2.7 in ) yields on so that is -concave .this proves that the set of concave functions is : where the eikonal equation has to be understood in the viscosity sense .let us also remark that the condition a.e . 
in in fact hidden in the definition of a viscosity solution ( equivalently in formula ( [ repwpb ] ) ) .getting back to our optimization problem ( [ pbmeenw ] ) , it is natural to introduce for every and ( the unit sphere of ) the quantity : for , we then have for a.e . assuming that is absolutely continuous with respect to the lebesgue measure on and defining by ( [ defwp ] ) , for , the profit functional is then given by : which has to be maximized over defined by ( [ formw ] ) .now , our aim is to transform the previous problem in terms of the values of on only . of course , because of ( [ repwpb ] ) , the behavior of on is fully determined by its trace on . in order to treat the behavior on , we need the following result .let and define then and . obviously and on hence the integrand in the definition of is larger on for than for ( recall that ) . if is such that , then the same conclusion holdsnow , if is such that , then we write with , if then and if then . since , in both cases we then have which proves that .in particular , a.e . on which proves the desired result .let and let be the trace of on , thanks to the previous lemma we may assume that on so that : because of ( [ repwpb ] ) , the second term only depends on , and the first one is monotone in hence for a given ( -lipschitz and smaller than ) it is maximized by the largest -lipschitz function on which has as trace on and is below i.e. simply since the previous formula also holds for by ( [ repwpb ] ) , we define for every -lipschitz function on such that the state equation the profit maximization ( [ pbmeenw ] ) can thus be reformulated as the following nonstandard optimal control problem where the control is the price on the interface : where the class of admissible boundary controls consists of all -lipschitz functions on such that and the state equation is ( [ state ] ) .for example if is the unit ball of and its boundary , then the maximization problem ( [ pbmeenw ] ) becomes maximizing : in the set of viscosity solutions of the eikonal equation on the unit ball .note that this is a highly nonconvex variational problem , which as previously may be reformulated as maximizing among -lipschitz functions on such that . in the one dimensional case, the eikonal equation has a very simple structure which makes problem ( [ controlform ] ) much simpler .in particular , if is finite then the maximization of ( [ controlform ] ) reduces to a finite dimensional problem , since the control in this case is simply given by the values of on the finite set . for instancelet us take and ,}\\ \beta - x&\mbox{if and ,}\\ 0&\mbox{otherwise , } \end{array}\right.\ ] ] and the corresponding profit can be explicitly computed as a function of : where defining the cumulative function of ( i.e. ) ] , ] , and . as explained in section [ concave ] , given the strategy of the second ( respectively first ) player , the first ( resp .second ) one optimally choses a pricing function of the form ( resp . ) for ( resp . for ) . at a nash equilibriumone must have ( if then makes zero profit as well as makes zero profit if ) .finally , the common value has to be , since if for instance then can charge a slightly lower price at the border point then getting the whole demand and increasing his profit for small enough . in this simple casethere is then a unique nash equilibrium and , no matter what the population distribution is .the equilibrium price is plotted in the next figure .
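the undercutting argument above can also be checked numerically. the sketch below is a toy reading of the one-dimensional example (assumed setup: two agents separated at the origin of [-1, 1], transportation cost |x - y|, each agent posting the cone-shaped price beta_i + |x|, i.e. the largest 1-lipschitz price with value beta_i at the border point); it compares an agent's profit at a common border price with the profit after a small undercut, for an arbitrary customer density.

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 2001)
f = np.exp(-3.0 * (xs - 0.4) ** 2); f /= f.sum()       # an arbitrary non-symmetric density

def profit_agent1(b1, b2):
    """profit of the agent controlling [-1, 0] for border prices (b1, b2)."""
    cost1 = b1 + np.abs(xs)                 # best cost of buying in region 1 (locally or via the border)
    cost2 = b2 + np.abs(xs)                 # best cost of buying in region 2
    paid1 = np.where(xs <= 0, b1 + np.abs(xs), b1)                # price cashed by agent 1
    buys1 = (cost1 < cost2) | ((cost1 == cost2) & (xs <= 0))      # indifferent customers shop locally
    return float(np.sum(paid1 * f * buys1))

eps = 1e-3
for beta in (0.0, 0.1, 0.3, 0.6):
    stay = profit_agent1(beta, beta)
    undercut = profit_agent1(max(beta - eps, 0.0), beta)
    print(f"beta = {beta:.2f}   stay = {stay:.4f}   undercut = {undercut:.4f}")
# undercutting at the border strictly improves the profit for every beta > 0, so the
# only candidate equilibrium has zero margin at the interface: each agent ends up
# charging the transport cost to the border, whatever the customer density is.
```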
we consider an optimization problem in a given region where an agent has to decide the price of a product for every . the customers know the pricing pattern and may shop at any place , paying the cost and additionally a transportation cost for a given transportation cost function . we will study two models : the first one where the agent operates everywhere on and a second one where the agent operates only in a subregion . for both models we discuss the mathematical framework and we obtain an existence result for a pricing strategy which maximizes the total profit of the agent . we also present some particular cases where more detailed computations can be made , as the case of concave costs , the case of quadratic cost , and the onedimensional case . finally we discuss possible extensions and developments , as for instance the case of nash equilibria when more agents operate on the same market .
the trace distance is one of the most natural distance measures used in quantum - information theory .it is one of the main tools used in distinguishability theory and it is connected to the average success probability when distinguishing two states by a measurement .it is also related to quantum fidelity , which provides the measure of similarity of two quantum states .both quantities are particularly important in quantum cryptography , since the security of quantum protocols relies on an ability to measure the distance between two quantum states .the trace distance is also related to other properties of quantum states like the von neumann entropy and relative entropy .the main aim of this work is to provide a lower bound for the trace distance using measurable quantities .we use nonlinear functions of the form , where and are density matrices . for such forms there exist feasible schemes to measure them in an experiment without resorting to state tomography .we give a lower bound based on the superfidelity introduced recently in .proof of this bound gives an answer to the conjecture stated by mendonca __ in .let us denote by the space of density matrices acting on -dimensional hilbert space . for two density matrices the trace distance is defined as in the particular case of pure states we can use bloch vectors and .one can see that the trace distance between such states is equal to half of the euclidean distance between the respective bloch vectors the trace distance can be bounded with the use of the fidelity where the fidelity is defined as ^ 2.\ ] ] the inequality ( [ eqn : fid - dtr - ineq ] ) shows that and are closely related indicators of distinguishability .the main result of this work is a lower bound for the trace distance , which we prove in the next section , where is called the _ superfidelity _ , and was introduced in .for it is defined as from the matrix analytic perspective the inequality ( [ eqn : main - ineq ] ) relates the trace norm on the space to the hilbert - schmidt scalar product on . for the sake of consistency we provide basic information about the superfidelity .the most interesting feature of superfidelity is that it provides an upper bound for quantum fidelity the superfidelity also has properties which make it useful for quantifying the distance between quantum states .in particular we have : ( 1 ) bounds : .( 2 ) symmetry : . ( 3 ) unitary invariance : for any unitary operator , we have . 
( 4 ) concavity : for any and ] .note that the property of joint concavity is obeyed by the square root of fidelity , but not by the fidelity ( [ eqn : fid ] ) .fidelity can be used to define the metric on the space as unfortunately the analog of the bures distance defined using the superfidelity is not a metric , but the quantity provides the metric on the space .finally one should note that the superfidelity is particularly convenient to use as the practical measure of similarity between quantum states .one of the main advantages of superfidelity is that it is possible to design feasible schemes to measure it in an experiment .also , from the computational point of view , calculation of the superfidelity is significantly less resource - consuming .the properties of superfidelity listed above suggest that it would be convenient to use it instead of fidelity to draw conclusions about the distinguishability of quantum states .this section show how this can be done by relating the superfidelity and trace distance .first we can observe that from the inequality and since we have ( [ eqn : fid - dtr - ineq ] ) we get that our main aim is to prove the following inequality , which provides tighter bound .[ conjecture - general ] for any we have or equivalently this inequality was first stated as a conjecture in , where it was verified numerically for small dimensions .clearly it is motivated by the lower bound for trace distance provided by the inequality ( [ eqn : fid - dtr - ineq ] ) . to prove the theorem [ conjecture - general ] we need the following lemma . for any let and be the projectors onto and respectively .we have the following inequalities _ proof . _ because of the similarity we will show only inequality ( [ ineq:1 ] ) .it is easy to prove inequalities ( [ ineq:2 ] ) , ( [ ineq:3 ] ) and ( [ ineq:4 ] ) in a similar manner .we subtract the right- from the left - side of ( [ ineq:1 ] ) to get because is positive semidefinite . _ proof of theorem [ conjecture - general ] ._ adding the inequalities ( [ ineq:1 ] ) and ( [ ineq:2 ] ) we get similarly , by adding inequalities ( [ ineq:3 ] ) and ( [ ineq:4 ] ) we get now we notice that , if two non - negative numbers are greater than the third one , then so is the geometric mean of the first two numbers . using this fact we combine ( [ eqn : l1 ] ) and ( [ eqn : l2 ] ) to get on the other hand we can rewrite the trace distance with the use of projectors and : now finally we can write which is equivalent to ( [ eqn : conjecture - general ] ) . is natural to consider the relation between the lower bounds in ( [ eqn : fid - dtr - ineq ] ) and ( [ eqn : main - ineq ] ) .it is clear that bound ( [ eqn : main - ineq ] ) is better than ( [ eqn : fid - dtr - ineq ] ) whenever . in the one qubit case orif one of the states is pure the bound ( [ eqn : main - ineq ] ) is always better than ( [ eqn : fid - dtr - ineq ] ) .this follows from the equality between fidelity and superfidelity in these situations . on the other hand the inequality ( [ eqn : fid - dtr - ineq ] ) provides a better lower bound if the states and have orthogonal supports . in this casethe fidelity between states vanishes , but the superfidelity is not necessarily equal to zero .to get some feeling about the difference between the bound given by fidelity and the present bound we will consider the following families of states . 1 . 
the family is defined as where is a pure state and in our case we take .the family is defined as where .the family is defined as : calculated for states ( [ eqn : ex - states ] ) and maximally mixed states as a function of the dimension and the parameter [ see eq .( [ eqn : ex - states ] ) ] . ] calculated for states ( [ eqn : ex - states ] ) and ( [ eqn : ex - states - dim4 ] ) as a function of parameters and . ] in fig .[ fig : low - bound - diff ] we consider the family and calculate the difference .one can see that for small dimensions and states close to the pure state the superfidelity gives a much better approximation for the trace distance than the fidelity . for larger dimensionsthis is not the case , but nevertheless we can observe that still the superfidelity provides a better bound .this advantage is lost for states close to the maximally mixed state .a similar situation can be observed in fig .[ fig : low - bound - diff - dim4 ] where the difference between and ( ) is presented . for this particular familythe bound ( [ eqn : main - ineq ] ) is better than ( [ eqn : fid - dtr - ineq ] ) , but the difference vanishes for states close to the maximally mixed state . calculated for states ( [ eqn : ex - states ] ) and ( [ eqn : ex - states - k3 ] ) ( ) as the function of parameters and .solid line is a border that separates two regions . for the parameters in lighter one the inequality ( [ eqn : fid - dtr - ineq ] ) provides better bound , while in darker region the inequality ( [ eqn : main - ineq ] ) is better . ]unfortunately , the bound ( [ eqn : main - ineq ] ) is not always tighter when compared with ( [ eqn : fid - dtr - ineq ] ) .figure [ fig : diff - levels ] shows the difference between and ( ) . as one can see for this family of states there are regions for which the bound ( [ eqn : main - ineq ] ) is better than ( [ eqn : fid - dtr - ineq ] ) but in this case for highly mixed states ( [ eqn : fid - dtr - ineq ] ) is better than ( [ eqn : main - ineq ] ) .we know that the probability of error for distinguishing two density matrices is expressed by the trace distance as .\ ] ] using the inequality ( [ eqn : main - ineq ] ) we can write we have shown the relation between the superfidelity and trace distance , which is analogous to the relation with trace distance and fidelity .this shows that superfidelity can be used to conclude about the distinguishablity of states .our bound provides the relation between the trace distance and the overlap of two operators supplementary to inequality from ( * ? ? ?* th.1 ) . as such it provides neat mathematical tool which can be used in quantum information theory .authors would like to thank k. yczkowski , r. winiarczyk and p. gawron for many interesting , stimulating and inspiring discussions .we acknowledge the financial support by the polish ministry of science and higher education under the grant number n519 012 31/1957 .
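all the quantities used above take only a few lines of numpy/scipy, and the bound can be checked on random density matrices. the sketch below reads the main inequality as the lower bound 1 - G for the trace distance, with 1 - sqrt(F) as the competing fidelity bound, and draws random states by normalizing a a^dagger for a complex gaussian matrix a.

```python
import numpy as np
from scipy.linalg import sqrtm

def trace_distance(rho, sigma):
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

def fidelity(rho, sigma):
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

def superfidelity(rho, sigma):
    p, q = np.real(np.trace(rho @ rho)), np.real(np.trace(sigma @ sigma))
    return np.real(np.trace(rho @ sigma)) + np.sqrt(max(1 - p, 0.0)) * np.sqrt(max(1 - q, 0.0))

rng = np.random.default_rng(0)

def random_state(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.real(np.trace(rho))

worst, sharper = -np.inf, 0
for _ in range(2000):
    d = int(rng.integers(2, 6))
    rho, sigma = random_state(d), random_state(d)
    D, F, G = trace_distance(rho, sigma), fidelity(rho, sigma), superfidelity(rho, sigma)
    worst = max(worst, (1.0 - G) - D)            # stays <= 0 if the bound D >= 1 - G holds
    sharper += (1.0 - G) > (1.0 - np.sqrt(F))    # superfidelity bound tighter than the fidelity one

print("largest value of (1 - G) - D over the sample:", worst)
print("superfidelity bound sharper in", sharper, "of 2000 cases")
```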
we provide a bound for the trace distance between two quantum states . the lower bound is based on the superfidelity , which provides the upper bound on quantum fidelity . one of the advantages of the presented bound is that it can be estimated using a simple measurement procedure . we also compare this bound with the one provided in terms of fidelity .
in recent years , high angular resolution x - ray telescopes make it possible to detect x - ray sources with only a few counts .this is very different from the optical photometry .because of these low counts , the poisson processes in corresponding wavebands can not be approximated to gaussian distribution .therefore the statistics will be very different in some estimations and calculations than used before . in recent years bayesian methodhas gained many applications ( e.g. van dyk et al . 2001 and references therein ) since it has more advantages in low count cases than traditional statistics .hardness ratios are widely used in high energy astrophysics since faint sources with only limited counts can not give any satisfying spectral modeling . in x - ray detection , hardness ratiosare normally used to show spectral properties roughly ( e.g. tennant et al . , 2001 ;sivakoff , sarazin & carlin 2004 ) .hardness ratios are usually defined as the ratio of counts in different wavebands ( ) or the ratio of the difference and sum of counts in two wavebands ( ) , and are counts in two wavebands and . on the other hand , for the spectra of x - ray sources , the photoelectric absorption , quantified with the hydrogen column density , can not be neglected .hydrogen column density contains many kinds of important information , such as the radial distance and the interstellar circumstance of the sources . for low count sources ,hydrogen column density is hard to know since no reliable spectral fitting can be made .however interstellar absorption is energy dependent .consequently the information of hydrogen column density can be drawn from the hardness ratios . in this paper, we first give a new definition of hardness ratio and its estimation method , and then we discuss the procedure to estimate the hydrogen column density accordingly .we begin our discussion with the following problem : suppose that one experiment obtained two counts from two different poisson distributions , we need to : ( 1 ) estimate the ratio of the expectation values of the two poisson distributions , and ( 2 ) construct the confidence interval of the ratio .the expectation values of the poisson processes are just the parameters of the poisson distribution .therefore the above problem may be formulated as follows : suppose and are two counts corresponding to two different poisson processes and with their parameters as and respectively , and its confidence interval needs to be estimated . to solve this problem we first need to derive the distribution of parameter under certain counts , i.e. , the conditional distribution of and , as follows . first we assume , as a pragmatic convention , a uniform prior for the parameter . similarly , this continuous distribution is gamma distribution , as shown in fig . 1 .in addition , we use jeffreys prior ( ) , which may be more advantageous over the uniform prior commonly used , because the inferences derived from jeffreys prior are parameterization - invariant ( see kass & wasserman 1996 for detail of this prior ) . under this prior, the conditional distribution of is and to account for the background contamination , suppose that is the count corresponding to a poisson process with the addition of a poisson background process , the expectation value of the process is assumed to be known as . 
according to the properties of poisson processes , the sum of two poisson processes is also a poisson process with the parameter .so the probability of the total count is .apply the bayesian assumption and the uniform prior distribution assumption , we obtain the conditional distribution of , as follows . when is much smaller than , this result is same as equation(2 ) .also we can obtain the conditional distribution under the jeffreys prior distribution assumption , and it is same as equation(4 ) when is much smaller than .there are two different definitions of hardness ratio , and . in traditional method , the estimate of and are and respectively , and the errors are propagated under the gaussian distribution , i.e. , here we propose a method to estimate the hardness ratio based on the bayesian method . both the uniform prior and the jeffreys prior will be used .when is much smaller than , we use equation(2 ) ( under the uniform prior ) or equation(4 ) ( under the jeffreys prior ) to estimate and .first we assume the uniform prior of .for the conditional distribution function of , for the conditional probability density function of , this distribution is shown in fig .2 when , .it is easy to verify that the distribution is normalized , for the hardness ratio , the probability distribution of this hardness ratio is given by , the conditional probability density function is given by , this distribution is shown in fig .3 when , .it is easy to verify that the distribution is normalized , when , .the solid line represents the distribution derived by our proposed method under the uniform prior , the doted line represents the distribution derived by our proposed method under the jeffreys prior , the dashed line represents the gaussian distribution derived by the traditional method . ] when , .the solid line represents the distribution derived by our proposed method under the uniform prior , the doted line represents the distribution derived by our proposed method under the jeffreys prior , the dashed line represents the gaussian distribution derived by the traditional method . ]similarly , we obtain the conditional probability density function of and under the jeffreys prior assumption as follows , and using the result of and , we can estimate and . under the uniform prior assumption , since we have only one observation , we take the most probable value as the estimate of , denoted as .let we obtain , similarly , we obtain the most probable value as the estimate of hr : similarly , we get the most probable value of and under the jeffreys prior assumption , and the highest posterior density ( hpd ) interval is used to give the error bars .the hpd interval under the confidence level is the range of values which contain a fraction of the probability , and the probability density within this interval is always higher than that outside the interval .there are other point estimates and error estimates .for example , the mean value and the equal tailed interval . since the distributions of and are obtained , these alternative estimates can be easily derived . when can not be ignored , equation ( 6 ) or equation ( 7 ) can be used to estimate and . 
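for the negligible-background case treated above the estimates can also be reproduced by direct posterior sampling instead of the analytic densities: under the uniform prior the posterior of a poisson rate given a count n is a gamma distribution with shape n + 1 (shape n + 1/2 for the jeffreys prior), so one draws rate samples for the two bands, forms either hardness ratio, and reads off the mode and the hpd interval. the sketch below does this for arbitrary illustrative counts; the histogram mode is a crude stand-in for the analytic most probable value.

```python
import numpy as np

rng = np.random.default_rng(1)

def hpd(samples, level=0.90):
    """shortest interval containing a fraction `level` of the samples."""
    s = np.sort(samples)
    k = int(np.ceil(level * len(s)))
    widths = s[k - 1:] - s[:len(s) - k + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

def rate_samples(count, size, prior="uniform"):
    """posterior samples of a poisson rate given `count` (background neglected):
    gamma(count + 1) for the uniform prior, gamma(count + 1/2) for the jeffreys prior."""
    shape = count + 1.0 if prior == "uniform" else count + 0.5
    return rng.gamma(shape, 1.0, size)

S, H = 3, 7                       # soft- and hard-band counts (illustrative values)
n = 200_000
lam_s, lam_h = rate_samples(S, n), rate_samples(H, n)

ratios = {"soft/hard rate ratio": lam_s / lam_h,
          "(hard - soft)/(hard + soft)": (lam_h - lam_s) / (lam_h + lam_s)}

for name, hr in ratios.items():
    hist, edges = np.histogram(hr, bins=400)
    mode = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
    lo, hi = hpd(hr)
    print(f"{name}: mode ~ {mode:.3f}, 90% hpd ~ [{lo:.3f}, {hi:.3f}]")
```

when the background term can not be neglected the gamma posteriors above no longer apply, and the sampling has to follow the distributions of equations ( 6 ) and ( 7 ) instead.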
in this situation ,it is difficult to give a simple analytic distribution function like equation ( 10 ) or equation ( 13 ) ; in this case , one can only use numerical integration to obtain the distribution of and , and then do the estimate .we use the monte carlo simulations to investigate the statistical properties of our result , and compare them with the traditional methods .first we set and .do poisson sampling for times and each time we get and respectively .each time we estimate and using two kinds of methods .finally we obtain that , for two methods , the mean square error of the point estimate , the coverage rate ( the percentage of times during which the confidence interval contains the real value ) , and the mean confidence interval .the simulations contain two cases : low counts and high counts . in case 1 , we first set and , then set and . in case 2 , we first set and , then set and .the confidence level in the simulations is 90% .the results of the simulations are shown in table 1 ..statistical properties of our method and traditional method . [ cols="^,^,^,^,^,^,^ " , ] [ park : tbl : coverage ] from the simulation results , we notice that our proposed method is more reliable than the traditional method when the counts are low .the reason is that the traditional method is based on using the gaussian distribution to approach the poisson distribution , which is not reliable when the counts are low .here we propose a method to estimate the hydrogen column density using data obtained with the _ chandra _ x - ray observatory .because of the high angular resolution of _ chandra _ , a positive detection of a point source during a survey observation only requires several counts .therefore our method will be more reliable when estimating the hardness ratio than the traditional method .the detail procedure of this application can be found in another paper ( wu et al .the basic idea is introduced as follows .the basic procedure consists of the following three steps : ( 1 ) calculate the relationship between the hardness ratios and values under certain spectral model ; ( 2 ) estimate hardness ratios according to observed counts in different wavebands ; and ( 3 ) interpolate the values and error intervals from hardness ratios . according to the most likely physical nature of the sources, we can assume a spectral model ( e.g. power law with photon index for typical x - ray binaries ) . then we can use _ pimms _ ( http://cxc.harvard.edu/toolkit/pimms.jsp ) tool to calculate the relationship between hydrogen column density and the hardness ratio . usingpimms _ , one can get the count rate in certain energy band under a given x - ray spectrum and hydrogen column density .the count rate in the given energy band is just the parameter in this energy band ; this was our original movivation of defining the hardness ratios in terms of the parameter .the calculated ( hydrogen column density ) relationships are shown in fig . 7 for three different energy bands of ( 1 - 3 kev ) , ( 3 - 5 kev ) and ( 5 - 8 kev ) for a _acis - i observation ( wu et al .2006 ) respectively : , . from fig. 7 , we can see that or is more appropriate for or , respectively .having the value and error interval of , we can finally do linear interpolation on curves in fig . 7 to obtain the value and error interval of hydrogen column density .first we give the conditional probability distribution of parameter under certain counts in a poisson process using bayesian statistics . 
according to this resultwe derive the probability density function of two kinds of hardness ratios .we take the most probable values as the estimate of hardness ratios and the hpd intervals as the error intervals .then we use monte carlo simulations to investigate the statistical properties of our results , and find that our method is more reliable than the traditional method when the counts are low .finally we show how to estimate the hydrogen column density using hardness ratios . our method developed in this paperprovides a way to estimate the hydrogen column density of sources which are too faint to do spectral fitting .however the spectral shape for these sources must be assumed _ a prior_. this method is especially convenient for a sample of faint sources with similar spectra .after this paper has been submitted initially on 06 - 03 - 29 , we noticed another submitted paper ( park et al .2006 ) which discusses the same statistical problem as we have done in this paper . in that paperthe authors also used the bayesian method to estimate the hardness ratio , and showed some applications on quiescent low - mass x - ray binaries , the evolution of a flare , etc , therefore justifying the wide range of applications of such a statistical problem . since the strict analytic solution of the hardness ratio distribution does not exist for general situations , the authors suggested methods by monte carlo and numerical integration to obtain the distribution in that paper . in our paperwe find simple analytic solutions of the probability density functions of hardness ratios for the situations in which the background can be ignored. this will be useful and convenient for some applications , such as _ chandra _ data in which background can be ignored for hardness ratio estimation of point sources . finally , we note , under the advise of the referee , that in 1980s some studies have been done on the ratio of poisson means both from a frequentist standpoint ( james & roos 1980 ) and from a bayesian standpoint ( helene 1984 and prosper 1985 ) . in this paperwe used the bayesian method under the uniform prior and the jeffreys prior , made extensive comparisons between this method and traditional method , aiming explicitly at applications in astrophysics .xie chen read the first draft carefully and gave many helpful suggestions , especially on english writing .we are particularly grateful to the referee for his pointing out relevant historical literature in other fields and suggesting using jeffreys prior .this study is supported in part by the special funds for major state basic research projects and by the national natural science foundation of china ( project no.10233030 , 10327301 and 10521001 ) .99 kass , r. e. , & wasserman , l. , 1996 , j. amer .91 , 1343 james , f. , & roos , m. , 1980 , nucl .b172 , 475 helene , o. , 1984 , nucl .instr . and meth .a228 , 120 prosper , h. b. , 1985 , nucl .instr . and meth .a241 , 236 park , t. , kashyap , v. l. , siemiginowska , a. , dyk , d. a. v. , zezas , a. , heinke , c. , & wargelin , b. j. , 2006 , submitted to apj ( arxiv : astro - ph/0606247 ) sivakoff , g. r. , sarazin , c. l. , & carlin , j. l. , 2004 , apj , 617 , 262 tennant , a. f. , wu , k. , ghosh , k. k. , kolodziejcazk , j. j. , & swartz , d. a. , 2001 , apj , 549 , l43 van dyk , d. a. , connors , a. , kashyap , v. l. , & siemiginowska , a. , 2001 , apj , 548 , 224 wu , j. f. , zhang , s. n. , lu , f. j. , & y. k. 
jin , 2006 , accepted for publication in chinese journal of astronomy and astrophysics ( arxiv : astro - ph/0606478 )
hardness ratios are commonly used in x - ray photometry to indicate spectral properties roughly . it is usually defined as the ratio of counts in two different wavebands . this definition , however , is problematic when the counts are very limited . here we instead define hardness ratio using the parameter of poisson processes , and develop an estimation method via bayesian statistics . our monte carlo simulations show the validity of our method . based on this new definition , we can estimate the hydrogen column density for the photoelectric absorption of x - ray spectra in the case of low counting statistics .
simulation is inevitable in studying the evolution of complex cellular systems .large cellular array simulations might require long runs on a serial computer .parallel processing , wherein each cell or a group of cells is hosted by a separate processing element ( pe ) , is a feasible method to speed up the runs .the strategy of a parallel simulation should depend on whether the simulated system is synchronous or asynchronous .a _ synchronous _ system evolves in discrete time .the state of a cell at is determined by the state of the cell and its neighbors at and may explicitly depend on and the result of a random experiment . an obvious and correct way to simulate the system synchrony using a parallel processoris simply to mimic it by the executional synchrony .the simulation is arranged in rounds with one round corresponding to one time step and with no pe processing state changes of its cells for time before all pes have processed state changes of their cells for time .an _ asynchronous _ system evolves in continuous time .state changes at different cells occur asynchronously at unpredictable random times . heretwo questions should be answered : ( a ) how to specify the asynchrony precisely ? and ( b ) how to carry out the parallel simulations for the specified asynchrony ? unlike the synchronous case , simple mimicry does not work well in the asynchronous case .when geman and geman , for example , employ executional _ physical _ asynchrony ( introduced by different speeds of different pes ) to mimic the model asynchrony , the simulation becomes irreproducible with its results depending on executional timing .such dependence may be tolerable in tasks other than simulation ( describes one such task , another example is given in ) .in the task of simulation , however , it is a serious shortcoming as seen in the following example .suppose a simulationist , after observing the results of a program run , wishes to look closer at a certain phenomenon and inserts an additional ` print ' statement into the code . as a result of the insertion , the executional timing changes and the phenomenon under investigation vanishes .ingerson and buvel and hofmann propose various reproducible computational procedures to simulate asynchronies in cellular arrays .however no uniform principle has been proposed , and no special attention to developing parallel algorithms has been paid .it has been observed that the resulting cellular patterns may depend on the computational procedure .two main results of this paper are : ( i ) a definition of a natural class of asynchronies that can be associated with cellular arrays and ( ii ) efficient parallel algorithms to simulate systems in this class .the following properties specify the _ poisson asynchrony _ , a most common member in the introduced class : + + for a particular cell form a poisson point process .+ processes for different cells are independent .+ arrival rate is the same , say , for each cell .+ there is an arrival , the state of the cell instantaneously changes ; the new state is computed based on the states of the cell and its neighbors just before the change ( in the same manner as in the synchronous model ) .the new state may be equal to the old one .+ time of arrival and a random experiment may be involved in the computation . +a familiar example of a cellular system with the poisson asynchrony is the ising model in the continuous time formulation of glauber . 
in this modela cell configuration is defined by the spin variables specified at the cells of a two or three dimensional array .when there is an arrival at a cell , the spin is changed to with probability . with probability ,the spin remains unchanged .the probability is determined using the values of and neighbors just before the update time .it is instructive to review the computational procedures for ising simulations .first , the ising simulationists realized that the standard procedure by metropolis , rosenbluth , rosenbluth , teller , and teller could be applied . in this procedure , the evolution of the configuration is simulated as a sequence of one - spin updates : given a configuration , define the next configuration by choosing a cell uniformly at random and changing or not changing the spin to as required . in the original standard procedure timeis discrete .time continuity could have been simply introduced by letting the consecutive arrivals form the poisson process with rate , where is the total number of spins ( cells ) in the system .the problem of long simulation runs became immediately apparent .bortz , kalos , and lebowitz developed a serial algorithm ( the bkl algorithm ) which avoids processing unsuccessful state change attempts , and reported up to a 10-fold speed - up over the straight - forward implementation of the standard model .ogielski built special purpose hardware for speeding up the processing .the bkl algorithm is serial .attempts were made to speed up the ising simulation by parallel computations ( friedberg and cameron , creutz ) .however , in these computations the original markov chain of the continuous time ising model was modified to satisfy the computational procedure .the modifications do not affect the equilibrium behavior of the chain , and as such are acceptable if one studies only the equilibrium .in the cellular models however , the transient behavior is also of interest , and no model revision should be done .this paper presents efficient methods for parallel simulation of the continuous time asynchronous cellular arrays without changing the model or type of asynchrony in favor of the computational procedure .the methods promise unlimited speed - up when the array and the parallel computer are sufficiently large . for the poisson asynchrony case, it is also shown how the bkl algorithm can be incorporated , further contributing to speed - up .for the ising model , presented algorithms can be viewed as exact parallel counterparts to the standard algorithm by metropolis et al . 
the latter has been known and believed to be inherently serial since 1953 .yet , the presented algorithms are parallel , efficient , and fairly simple .the `` conceptual level '' codes are rather short ( see figures [ fig : a1c1pe ] , [ fig : s1c1pe ] , [ fig : amcgen ] , [ fig : amcpoi ] , and [ fig : genout ] , ) .an implementation in a real programming language given in the appendix is longer , of course , but still rather simple .this paper is organized as follows : section [ sec : model ] presents a class of asynchronies and a comparison with other published proposals .then section [ sec : algo ] describes the new algorithms on the conceptual level .while the presented algorithms are simple , there is no simple theory which predicts speed - up of these algorithms for cellular arrays and parallel processors of large sizes .section [ sec : perf ] contains a simplified computational procedure which predicts speed - ups faster than it takes to run an actual parallel program .the predictions made by this procedure are compared with actual runs and appear to be rather accurate .the procedure predicts speed - up of more than 8000 for the simulation of poisson asynchronous cellular array in parallel by pes .actual speed - ups obtained thus far were : more than 16 on 25 pes of the balance ( tm ) computer and more than 1900 on pes of the connection machine ( r ) .time is continuous .each cell has a state . at random times , a cell is granted a chance to change the state . the changes , if they occur , are instantaneous events .random attempts to change the state of a cell are independent of similar attempts for other cells .the general model consists of two functions : _time_of_next_arrival ( ) _ and _ next_state ( ) _ .they are defined as follows : given the old state of the cell and the states of the neighbors just before time , , the next_state is where the possibility is not excluded ; and the time of the next arrival is where always . in and , denotes the result of a random experiment , e.g. , coin tossing , denotes the indexed set of states of all the neighbors of including itself .thus , if , then .subscript expresses the idea of ` just before ' , e.g. , .according to , the value of instantaneously changes at time from to . at time , the value of is already new .the ` just before ' feature resolves a possible ambiguity if two neighbors attempt to change their states at the same simulated time .compare now the class of asynchronies defined by with the ones proposed in the literature : ( a ) model 1 in reads : `` ... the cells iterate randomly , one at a time . ''let be the probability that cell is chosen .then the following choice of law yields this model where is a random number uniformly distributed on ( 0,1 ) , and is the natural logarithm , . for , the asynchrony was called the _ poisson asynchrony _ in section [ sec : intro ] ; it coincides with the one defined by the standard model , and by glauber s model for the ising spin simulations .( b ) model 2 in assigns `` each cell a period according to a gaussian distribution ...the cells iterate one at a time each having its own definite period . '' while it is not quite clear from what is meant by a `` definite period '' ( is it fixed for a cell over a simulation run ? ) , the following choice of law yields this model in a liberal interpretation : where if , and is the cumulative function for the gaussian probability distribution with mean and variance .the probability of is small when and is ignored in if this interpretation is meant . 
in a less liberal interpretation , for all , and is itself random and distributed according to the gaussian law .this case is even easier to represent in terms of model than the previous one : .( 3 ) model trivially extends to a synchronous simulation , where the initial state changes arrive at time 0 and then always is identical to 1 .the first model in is `` to choose a number of cells at random and change only their values before continuing . ''this is a variant of synchronous simulation ; it is substantially different from both models ( a ) and ( b ) above . in ( a ) and ( b ), the probability is 1 that no two neighbors attempt to change their states at the same time . in contrast , in this model many neighboring cells are simultaneously changing their values . how the cells are chosen for update is not precisely specified in .one way to choose the cells is to assign a probability weight for cell , , and to attempt to update cell at each iteration , with probability , independent of any other decision .such a method conforms with the law because the method is local : a cell does not need to know what is happening at distant cells . the second model in changes states of a fixed number of randomly chosen cells at each iteration .if , this method is not local and does not conform with the law .deterministic computers represent randomness by using pseudo - random number generators .thus , equations and are substituted in the computation by equations and respectively , which do not contain the parameter of randomness .this elimination of symbolizes an obvious but important difference between the simulated system and the simulator : in the simulated system , the observer , being a part of the system , does not know in advance the time of the next arrival .in contrast , the simulationist who is , of course , not a part of the simulated system , can know the time of the next arrival before the next arrival is processed .for example , it is not known in advance when the next event from a poisson stream arrives . however , in the simulation , the time of the next arrival is obtained in a deterministic manner , given the time of the previous arrival : where is the rate , is the -th pseudo - random number in the sequence uniformly distributed on , and is the invocation counter .thus , after the previous arrival is processed , the time of the next arrival is already known .if needed , the entire sequence of arrivals can be precomputed and stored in a table for later use in the simulation , so that all future arrival times would be known in advance . + * asynchronous one - cell - per - one - pe algorithm*. the algorithm in figure [ fig : a1c1pe ] is the shortest of those presented in this paper . to understand this code , imagine a parallel computer which consists of a number of pes running concurrently .one pe is assigned to simulate one cell .the pe which is assigned to simulate cell , pe , executes the code in figure [ fig : a1c1pe ] with .the pes are interconnected by the network which matches the topology of the cellular array .a pe can receive information from its neighbors .pe maintains state and local simulated time .variables and are visible ( accessible for reading only ) by the neighbors of .time has no connection with the physical time in which the parallel computer runs the program except that may not decrease when the physical time increases . 
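the recurrence just described, in which the next arrival is computed deterministically from the previous one and the cell's own pseudo-random sequence, is the only source of randomness the algorithms below need. a minimal sketch (rate and seeds arbitrary):

```python
import numpy as np

def arrival_times(rate, seed, n):
    """first n arrival times of a rate-`rate` poisson stream, reproduced
    deterministically from the cell's own pseudo-random sequence:
    t_k = t_{k-1} - ln(u_k) / rate, with u_k uniform on (0, 1]."""
    rng = np.random.default_rng(seed)
    u = 1.0 - rng.random(n)              # uniform on (0, 1]
    return np.cumsum(-np.log(u) / rate)

print(arrival_times(rate=1.0, seed=42, n=5))
print(arrival_times(rate=1.0, seed=42, n=5))   # identical: the stream does not depend on the run
```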
at a given physical instance of simulation , different cells may have different values of .value is a constant which is known to all pes .the algorithm in figure [ fig : a1c1pe ] is very asynchronous : different pes can execute different steps concurrently and can run at different speeds . a statement ` wait_until _ condition _ ' ,like the one at step 2 in figure [ fig : a1c1pe ] , does not imply that the _ condition _ must be detected immediately after it occurs . to detect the _ condition _ at step 2 involving local times of neighborsa pe can poll its neighbors one at a time , in any order , with arbitrary delays , and without any respect to what these pes are doing meanwhile . + despite being seemingly almost chaotic , the algorithm in figure [ fig : a1c1pe ] is free from deadlock .moreover , it produces a unique simulated trajectory which is independent of executional timing , provided that : + ( i ) for the same cell , the pseudo - random sequence is always the same , + ( ii ) no two neighboring arrival times are equal . + freedom from deadlock follows from the fact that the cell , whose local time is minimal over the entire array , is always able to make progress .( this guaranteed worst case performance , is substantially exceeded in an average case .see section [ sec : perf ] . )the uniqueness of the trajectory can be seen as follows . by ( ii ) , a cell passes the test at step 2 only if its local time is smaller than the local time of any its neighbor .if this is the case , then no neighbor is able to pass the test at step 2 before changes its time at step 4 .this means that processing of the update by is safe : no neighbor changes its state or time before completes the processing . by ( i ) , functions and are independent of the run .therefore , in each program run , no matter what the neighbors of are doing or trying to do , the next arrival time and state for are always the same .it is now clear why assumption ( ii ) is needed .if ( ii ) is violated by two cells and which are neighbors , then the algorithm in figure [ fig : a1c1pe ] does not exclude concurrent updating by and . such concurrent updating introduces an indeterminism and inconsistency .a scenario of the inconsistency can be as follows : at step 3 the _ old _ value of is used to update state , but immediately following step 4 uses the _new _ value of to update time . in practice ,the algorithm in figure [ fig : a1c1pe ] is safe , when for different are independent random samples from a distribution with a continuous density , like an exponential distribution . in this case, ( ii ) holds with probability 1 .unless the pseudo - random number generators are faulty , one may imagine only one reason for violating ( ii ) : finite precision of computer representation of real numbers .+ * synchronous one - cell - per - one - pe algorithm*. if ( ii ) can be violated with a positive probability ( if takes on only integer values , for example ) , then the errors might not be tolerable . in this case the synchronous algorithm in figure [ fig : s1c1pe ] should be used .observe that while the algorithm in figure [ fig : s1c1pe ] is synchronous , it is able to simulate correctly both synchronous and asynchronous systems. 
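both one-cell-per-one-pe algorithms are easy to emulate serially, which is also a convenient way to convince oneself that the produced trajectory does not depend on executional timing. the sketch below (a toy emulation, not production code) runs the asynchronous algorithm of figure [ fig : a1c1pe ] on a small ring with the poisson asynchrony: the cells are swept in a random order that imitates arbitrary executional timing, and a cell is processed only when its local time does not exceed the local times of its two neighbors, exactly as in step 2; the update rule itself is a placeholder.

```python
import numpy as np

N, MU, T_END = 16, 1.0, 5.0
rngs = [np.random.default_rng(1000 + i) for i in range(N)]   # one fixed stream per cell (condition (i))
state = np.random.default_rng(99).integers(0, 2, N)          # arbitrary initial states
t = np.array([-np.log(1.0 - r.random()) / MU for r in rngs]) # first arrival time of each cell

def next_state(i, s):
    """placeholder local rule: majority of the cell and its two neighbours."""
    return int(s[(i - 1) % N] + s[i] + s[(i + 1) % N] >= 2)

events = []
while t.min() < T_END:
    progressed = False
    for i in np.random.permutation(N):        # deliberately unseeded: the sweep order varies run to run
        if t[i] < T_END and t[i] <= min(t[(i - 1) % N], t[(i + 1) % N]):   # step 2
            events.append((float(t[i]), i))                   # record the processed event
            state[i] = next_state(i, state)                   # step 3
            t[i] += -np.log(1.0 - rngs[i].random()) / MU      # step 4
            progressed = True
    assert progressed        # the cell with the globally minimal time can always move: no deadlock

events.sort()
print(len(events), "events; first few:", [(round(tt, 3), i) for tt, i in events[:5]])
# re-running with a different sweep order (different executional timing) gives the same
# sorted event list and final states, because each cell's stream and rule are fixed.
```

the synchronous variant of figure [ fig : s1c1pe ] differs only in buffering the freshly computed states and times and writing them back after a barrier.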
two main additions in the algorithm in figure [ fig : s1c1pe ] are : private variables and for temporal storage of updated and , and synchronization barriers ` synchronize ' .when a pe hits a ` synchronize ' statement it must wait until all the other pes hit a ` synchronize ' statement ; then it may resume .two dummy synchronizations at steps 9 and 10 are executed by idling pes in order to match synchronizations at steps 5 and 8 executed by non - idling pes .when ( ii ) is violated , the synchronous algorithm avoids the ambiguity and indeterminism ( which in this case are possible in the asynchronous algorithm ) as follows : in processing concurrent updates of two neighbors and for the same simulated time , first , and read states and times of each other and compute their private s and ( steps 3 and 4 in figure [ fig : s1c1pe ] ) ; then , after the synchronization barrier at step 5 , and write their states and times at steps 6 and 7 , thus making sure that no write interferes with a read . + + * aggregation .* in the two algorithms presented above , one pe hosts only one cell .such an arrangement may be wasteful if the communication between pes dominates the computation internal to a pe .a more efficient arrangement is to assign several cells to one pe . for concreteness , consider a two - dimensional array with periodic boundary conditions .let be a multiple of and pes be available .pe carries subarray , where .( capital will be used without confusion to represent both the subarray index and the set of cells the subarray comprises , e.g. as in ) a fragment of a square cellular array in an example of such an aggregation is represented in figure [ fig : aggr] , wherein .the neighbors of a cell carried by pe1 are cells carried by pe2 , pe3 , pe4 , or pe5 .pe1 has direct connections with these four pes ( figure [ fig : aggr] ) .given cell in the subarray hosted by pe1 , one can determine with which neighboring pes communication is required in order to learn the states of the neighboring cells .let be the set of these pes .examples in figure [ fig : aggr] : is empty , \{pe5 } , \{pe3 , pe4}. ) mapping of cells to pes , ) the interconnection among the pes which supports the neighborhood topology among the cells , width=556 ] figure [ fig : amcgen ] presents an aggregated variant of the algorithm in figure [ fig : a1c1pe ] .pe , which hosts subarray , maintains the local time register .pe simulates the evolution of its subarray using the algorithm in figure [ fig : amcgen ] with .each cell is represented in the memory of pe by its current state and its next arrival time .note that unlike the one - cell - per - one - pe algorithm , the does not represent the current local time for cell . instead, local times of all cells within subarray are the same , . moves from one to another in the order of increasing value .three successive iterations of this algorithm are shown in figure [ fig : timlin ] , where the subarray consists of four cells : .circles in figure [ fig : timlin ] represent arrival points in the simulated time .a crossed - out circle represents an arrival which has just been processed , i.e. 
, steps 3 , 4 , and 5 of figure [ fig : amcgen ] have just been executed , so that has just taken on the value of the processed old arrival time , while the has taken on a new larger value .this new value is pointed to by an arrow from in figure [ fig : timlin ] .it is obvious that always if .local times maintained by different pe might be different .a wait at step 3 can not deadlock the execution since the pe whose is the minimum over the entire cellular array is always able to make a progress .slides along a sequence of s in successive iterations of the aggregated algorithm , width=595 ] assuming property ( ii ) as above , the algorithm correctly simulates the history of updates .the following example may serve as an informal proof of this statement .suppose pe1 is currently updating the state of cell ( see figure [ fig : aggr] ) and its local time is . since , this update is possible because the local time of pe5 , , is currently larger than . at present, pe1 receives the state of from pe5 in order to perform the update .this state is in time , i.e. , in the future with respect to local time .however , the update is correct , since the state of was the same at time , as it is at time .indeed , suppose the state of were to be changed at simulated local time , . at the moment when this change would have been processed by pe5, the local time of pe1 would have been larger than , and would have been the local time of pe5 . after this processing has supposedly taken place , the local time of pe1 should not decrease . yet at the present it is , which is smaller that .this contradiction proves that the state of can not in fact change in the interval ( ) . in the example in figure [ fig : timlin ] , only one supplies .however , the algorithm in figure [ fig : amcgen ] at step 2 commands to select _ a _ cell not _ the _ cell .this covers the unlikely situation of several cells having the same minimum time .if for different are independent random samples from a distribution with a continuous density , this case occurs with the probability zero . on the other hand ,if several cells can , with positive probability , update simultaneously , a synchronous version of the aggregated algorithm should be used instead . to eliminate indeterminism and inconsistency , the latter would use synchronization and intermediate storage techniques .these techniques were demonstrated in the algorithm in figure [ fig : s1c1pe ] and their discussion is not repeated here . for an important special case of * poisson asynchrony in the aggregated algorithm * ,the algorithm of figure [ fig : amcgen ] is rewritten in figure [ fig : amcpoi ] .this specialization capitalizes on the additive property of poisson streams , specifically , on the fact that sum of independent poisson streams with rate each is a poisson stream with rate . in the algorithm , ; this is equal to in the special case of partitioning into subarrays . unlike the general algorithm of figure [ fig : amcgen ] , in the specialization in figure [ fig : amcpoi ]neither individual streams for different cells are maintained , nor future arrivals for cells are individually computed . instead, a single cumulative stream is simulated and cells are delegated randomly to meet these arrivals . 
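the additive property that this specialization rests on is easy to check numerically: k independent per-cell streams of rate mu and one cumulative stream of rate k*mu whose arrivals are delegated to uniformly chosen cells generate the same arrival process at every cell. a quick sanity check with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
k, mu, t_end = 9, 0.8, 2000.0            # cells per subarray, per-cell rate, time horizon

per_cell = rng.poisson(mu * t_end, size=k)                       # k independent rate-mu streams
total = rng.poisson(k * mu * t_end)                              # one cumulative rate k*mu stream
delegated = np.bincount(rng.integers(0, k, size=total), minlength=k)   # arrivals handed to random cells

print("mean arrivals per cell, individual streams:", per_cell.mean())
print("mean arrivals per cell, delegated stream  :", delegated.mean())
# both are close to mu * t_end = 1600, and in fact the two constructions define the
# same process in law, which is what the algorithm of figure [ fig : amcpoi ] exploits.
```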
at step 5 in figure [ fig : amcpoi ], is an -th pseudo - random number in the sequence uniformly distributed in ( 0,1 ) .it follows from the notation that each pe has its own sequence .if this sequence is independent of the run ( which is condition ( i ) above ) and if updates for neighboring cells never coincide in time ( which is condition ( ii ) above ) , then this algorithm produces a unique reproducible trajectory .the same statement is also true for the algorithm in figure [ fig : amcgen ] .however , uniqueness provided by the algorithm in figure [ fig : amcpoi ] is weaker than the one provided by the algorithm in figure [ fig : amcgen ] : if the same array is partitioned differently and/or executed with different number of pes , a trajectory produced by the algorithm in figure [ fig : amcpoi ] may change ; however , a trajectory produced by the algorithm in figure [ fig : amcgen ] is invariant for such changes given that each cell uses its own fixed pseudo - random sequence . + * efficiency of aggregated algorithms*. both many - cells - per - one - pe algorithms in figure [ fig : amcgen ] and figure [ fig : amcpoi ] are more efficient than the one - cell - per - one - pe counterparts in figure [ fig : a1c1pe ] and figure [ fig : s1c1pe ] .this additional efficiency can be explained in the example of the square array , as follows : in the algorithms in figure [ fig : a1c1pe ] and figure [ fig : s1c1pe ] , a pe may wait for its four neighbors .however , in the algorithms in figure [ fig : amcgen ] and figure [ fig : amcpoi ] , a pe waits for at most two neighbors .for example ,when the state of cell in figure [ fig : aggr] is updated , pe1 might wait for pe3 and pe4 .moreover , for at least cells out of , pe1 does not wait at all , because .the cells such that form the dashed square in figure [ fig : aggr] .this additional efficiency becomes especially large if , instead of set in the original formulation of the model , one uses sets or , more generally , -th degree neighborhood , .the latter is defined for inductively where for a set of cells is defined as .it is easy to rewrite the algorithms in figure [ fig : a1c1pe ] and figure [ fig : s1c1pe ] for the case .the obtained codes have low efficiency however .for example , in the square array case , one has .thus , if , a cell might have to wait for 12 cells in order to update .in the same example , if one pe carries an subarray , and , then the pe waits for at most three other pes no matter how large the is .moreover , if then in cases out of the pe does not wait at all . +* the bkl algorithm * was originally proposed for ising spin simulations .it was noticed that the probability to flip takes on only a finite ( and small ) number of values , each corresponding to one or several combinations of old values of and neighboring spins .thus the algorithm splits the cells into pairwise disjoint classes , , ... .the rates of changes ( not just of the attempts to change ) for all are the same . 
at each iteration, the bkl algorithm does the following : + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \(a ) selects at random according to the weights , , and selects a cell uniformly at random .+ ( b ) flips the state of the selected cell , .+ ( c ) increases the time by , where is a pseudo - random number uniformly distributed in ( 0,1 ) .+ ( d ) updates the membership in the classes ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ if the asynchrony law is poisson , the idea of the bkl algorithm can be applied also to a deterministic update . herethe probability of change takes on just two values : + if , and if .+ accordingly , there are two classes : , the cells which are not going to change and , the cells which are going to change . as with the original bkl algorithm ,a substantial overhead is required for maintaining an account of the membership in the classes ( step ( d ) ) .the bkl algorithm is justified only if a large number of cells are not going to change their states .the latter is often the case .for example , in the conways s synchronous _ game of life _( gardner ) large regions of white cells ( ) remain unchanged for many iterations with very few black cells ( ) .one would expect similar behavior for an asynchronous version of the game of life .the basic bkl algorithm is serial .to use it on a parallel computer , an obvious idea is to run a copy of the serial bkl algorithm in each subarray carried by a pe .such a procedure , however , causes roll - backs , as seen in the following example : suppose pe1 is currently updating the state of cell ( figure [ fig : aggr] ) and its local time is , while the local time of pe5 , , is larger than . since is a nearest neighbor to , s membership might change because of s changed state .suppose s membership were to indeed change .although this change would have been in effect since time , pe5 , which is responsible for , would learn about the change only at time .as the past of pe5 is not , therefore , what pe5 has believed it to be , interval [ must have been simulated by pe5 incorrectly , and must be played again .this original roll - back might cause a cascade of secondary roll - backs , third generation roll - backs etc .+ * a modified bkl algorithm * applies the original bkl procedure only to a subset of the cells , whereas the procedure of the standard model is applied to the remaining cells . 
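steps (a)-(d) translate into a few lines of code. the sketch below illustrates the serial building block on a small one-dimensional ising ring with glauber acceptance probabilities; it is our reading of the stripped formulas (attempt rate mu per spin, acceptance probability 1/(1 + exp(beta * dE)), hence an actual flip rate of mu times that, and an exponential waiting time with the total flip rate in step (c)), and for brevity the class bookkeeping of step (d) is replaced by recomputing all the rates after each event, which leaves the trajectory law unchanged but gives away the speed advantage.

```python
import numpy as np

rng = np.random.default_rng(3)
N, beta, mu, t_end = 64, 0.8, 1.0, 50.0
spin = rng.choice([-1, 1], size=N)

def flip_rates(spin):
    dE = 2.0 * spin * (np.roll(spin, 1) + np.roll(spin, -1))   # energy change of flipping each spin
    return mu / (1.0 + np.exp(beta * dE))                      # rate of actual flips, not attempts

t, flips = 0.0, 0
while True:
    rates = flip_rates(spin)
    total = rates.sum()
    t += -np.log(1.0 - rng.random()) / total       # step (c): exponential wait with the total flip rate
    if t > t_end:
        break
    i = rng.choice(N, p=rates / total)             # steps (a)+(b): pick a spin proportionally to its rate
    spin[i] *= -1
    flips += 1

print(flips, "rejection-free flips up to t =", t_end, "  magnetisation =", spin.mean())
```

it is this serial building block that the modified algorithm keeps running inside each kernel, while the boundary layer is handled with the standard rule.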
more specifically :an additional separate class is defined .unlike other , , class always contains the same cells .steps ( a ) - ( d ) are performed as above with the following modifications : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \1 ) the weight of at step ( a ) is taken to be .+ 2 ) if the selected belongs to , then at step ( b ) the state of may or may not change .the probability of change is determined as in the standard model .+ 3 ) the time at step ( c ) should be increased by , where is a pseudo - random number uniformly distributed in .+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ now consider again the subarray carried by pe1 in figure [ fig : aggr] .the subarray can be subdivided into the `` kernel '' square and the remaining boundary layer . if first degree neighborhood , , is replaced with the -th degree neighborhood , , then the kernel is the central square , and the boundary layer has width . in figure[ fig : aggr] , the cells in the dashed square constitute the kernel with .to apply the modified bkl procedure to the subarray carried by pe1 , the boundary layer is declared to be the special fixed class .similar identification is done in the other subarrays . as a result, the fast concurrent bkl procedures on the kernels are shielded from each other by slower procedures on the layers .the roll - back is avoided , since state change of a cell in a subarray does not constitute state or membership change of a cell in another subarray .unless the performance of pe1 is taken into account , the neighbors of pe1 can not even tell whether pe1 uses the standard or the bkl algorithm to update its kernel . as the size of the subarray increases , so does both the relative weight of the kernel and the fraction of the fast bkl processing . + * generating the output*. consider the task of generating cellular patterns for specified simulated times .a method for performing this task in a serial simulation or a parallel simulation of a synchronous cellular array is obvious : as the global time reaches a specified value , the computer outputs the states of all cells . in an asynchronous simulation , the task becomes more complicated because there is no global time : different pes may have different local times at each physical instance of simulation .suppose for example , one wants to see the cellular patterns at regular time intervals on a screen of a monitor attached to the computer . 
without getting too involved in the details of performing i/o operations and the architecture of the parallel computer, it would be enough to assume that a separate process or processes are associated with the output; these processes scan an output buffer memory space allocated in one or several pes or in the shared memory; the buffer space consists of frames, numbered 0, 1, and so on, each capable of storing a complete image of the cellular array for one time instance. the output processes draw the image for a given time on the screen as soon as the corresponding frame (whose number is the remainder of an integer division by the number of frames) is full and the previous images have been shown. the frame is then freed for the next round, when it will be filled with the image for a later time, and so on. + the algorithm must fill the appropriate frame with the appropriate data as soon as both the data and the frame become available. the modifications that enable the asynchronous algorithm in figure [fig:amcgen] to perform this task are presented in figure [fig:genout]. in this algorithm, some variables are private (i.e., local to a pe) and others are constants whose values are the same for all the pes. note that different pes may fill different frames concurrently. if the slowest pe is presently filling an image for some time, then the fastest pe is allowed to fill images only a bounded number of frames ahead of it. an attempt by the fastest pe to fill an image further ahead will be blocked at step 4, until the required frame becomes available. thus, the finiteness of the output buffer introduces a restriction which is not present in the original algorithm in figure [fig:amcgen]. according to this restriction, the lag between concurrently processed local times cannot exceed a certain constant. the exact value of the constant in each particular instance depends on the relative positions of the update times within the output time slots; in any case, the constant is bounded both from below and from above. however, even with a single output buffer frame, the simulation does not become time-driven. in this case, the concurrently processed local times might still be a considerable distance apart, since the output interval might be relatively large. no precision of update time representation is lost, although efficiency might degrade when both the number of frames and the output interval become too small, see section [sec:perf]. modeling and analysis of asynchronous algorithms is a difficult theoretical problem. strictly speaking, the following discussion is applicable only to synchronous algorithms. however, one may argue informally that the performance of an asynchronous algorithm is not worse than that of its synchronous counterpart, since expensive synchronizations are eliminated. first, consider the synchronous algorithm in figure [fig:s1c1pe]. at an iteration, the ratio of the useful work performed, i.e., the number of cells which passed the test at step 2 of figure [fig:s1c1pe], to the total work expended, i.e., the size of the array, yields the _ efficiency _ (or _ utilization _) at the given iteration. assuming that in the serial algorithm all the work is useful, and that the algorithm performs the same computation as its parallel counterpart, the speed-up of the parallel computation is the average efficiency times the number of pes involved. here the averaging is done with equal weights over all the iterations.
in the general algorithm, the time increment of a cell is determined using the states of the neighbors of that cell. however, in the important applications, such as an ising model, it is independent of the states. the following assessment is valid only for this special case of independence. here the configuration is irrelevant and whether the test succeeds or not can be determined knowing only the times at each iteration. this leads to a simplified model in which only local times are taken into account: at an iteration, the local time of a cell is incremented if the time does not exceed the minimum of the local times of its neighbors. a simple (serial) algorithm which updates only local times of cells according to the rules formulated above was exercised for different array sizes and three different dimensions: for a circular (one-dimensional) array, a two-dimensional toroidal array, and a three-dimensional array with periodic boundary conditions. two types of asynchronies are tried: the poisson asynchrony, for which the time increment is distributed exponentially, and the asynchrony for which the time increment is uniformly distributed in (0,1). in both cases, random time increments for different cells are independent. the results of these six experiments are given in figure [fig:perf1t1]. each solid line in figure [fig:perf1t1] is enclosed between two dashed lines. the latter represent 99.99% student's confidence intervals constructed using several simulation runs that are parametrically the same but fed with different pseudo-random sequences. in figure [fig:perf1t1], for each array topology there are two solid lines. the poisson asynchrony always corresponds to the lower line. the corresponding limiting values of performance (when the array is large) are also shown near the right end of each curve. for example, the efficiency in the simulation of a large array with the poisson asynchrony is about 0.121; with the other asynchrony, it is about 0.132. no analytical theory is available for predicting these values or even proving their separation from zero when the array size tends to infinity. it follows from figure [fig:perf1t1] that replacing the exponential distribution of the time increment with the uniform distribution results in an efficiency increase from 0.247 to 0.271 for a large circular array. the efficiency can be raised even more: with a suitable choice of the distribution of the time increment, the limiting efficiency, with the student's confidence 99.99%, is higher still. it is not known how high the efficiency can be raised this way (degenerate cases, like a synchronous one, in which the efficiency is 1, are not counted). an efficiency of 0.12 means a speed-up of 0.12 times the number of pes; for the number of pes used in the experiment described next, this comes to more than 1900. this assessment is confirmed in an actual full-scale simulation experiment performed on the pes of a connection machine (r) (a quarter of the full computer). this simd computer appears well-suited for the synchronous execution of the one-cell-per-one-pe algorithm in figure [fig:s1c1pe] on a toroidal array with the poisson asynchrony law. since an individual pe is rather slow, executing several thousand instructions per second, its absolute speed is not very impressive: it took roughly 1 sec. of real time to update all spins when the traffic generated by other tasks running on the computer was small (a more precise measurement was not available). this includes a number of rounds of the algorithm, several hundred instructions of one pe per round.
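the simplified local-times model described above is easy to reproduce. the following sketch is an illustration under assumed parameters (ring topology, unit-rate exponential increments, a fixed number of rounds), not the code used for the reported figures; it measures the fraction of cells whose local time advances per round, which is the efficiency.

```python
# simplified model: each cell keeps only a local time, advanced in a round
# only if it does not exceed the minimum local time of its neighbours.
import random

def sweep(times, neighbours, draw_increment):
    """one round over all cells; returns the fraction of cells updated,
    i.e., the efficiency of the round."""
    old = list(times)  # decisions use the times as they were at the start of the round
    updated = 0
    for cell, t in enumerate(old):
        if all(t <= old[nb] for nb in neighbours[cell]):
            times[cell] += draw_increment()
            updated += 1
    return updated / len(times)

# example: an n-element circular array with poisson (exponential) asynchrony;
# the measured fraction can be compared with the limiting values in figure [fig:perf1t1]
n = 1000
times = [0.0] * n
ring = {i: ((i - 1) % n, (i + 1) % n) for i in range(n)}
efficiency = sum(sweep(times, ring, lambda: random.expovariate(1.0))
                 for _ in range(200)) / 200
```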
the 12% efficiency in the one-cell-per-one-pe experiments could be greatly increased by aggregation. the many-cells-per-one-pe algorithm in figure [fig:amcpoi] is implemented as a parallel program for a balance (tm) computer, which is a shared memory mimd bus machine. the array was split into equal square subarrays, as shown in figure [fig:aggr], the array side being a multiple of the subarray side. because the computer has 30 pes, the experiments could be performed only with at most 25 pes (a perfect square number of them), for different subarray and array sizes. along with these experiments, a simplified model, similar to the one-cell-per-one-pe case, was run on a serial computer. in this model, a local time is maintained for each pe. the update of these local times is arranged in rounds, wherein each local time is updated as follows: + (i) with a certain probability, the pe updates its local time unconditionally, using the same increment as in step 5 in figure [fig:amcpoi]; this is the probability that the pe chooses a cell in the interior of its subarray, so that no neighboring pe is involved; + (ii) with another probability, the pe must check the local time of one of its four neighbors before making the update. the neighbor is chosen uniformly at random among the four possibilities. if the neighbor's local time is not smaller than the pe's own, then the pe's local time gets an increment as in step 5; otherwise, it is not updated. this is the probability that the pe will choose a cell on an edge but not in a corner of its subarray, so that exactly one neighboring pe is involved; + (iii) with the remaining probability, the pe checks the local times of two of its adjacent neighbors (for example, in figure [fig:aggr], neighbors pe2 and pe3 can be involved in the computation for pe1). the two neighbors are chosen uniformly at random from the four possibilities. again, if both neighboring local times are not smaller than the pe's own, then the pe's local time gets an increment; otherwise, it is not updated. this is the probability of choosing a cell in a corner, so that two neighboring pes are involved (a short sketch of these probabilities is given below, after the experimental results). as in the previous case, this simplified model simulates a possible but not obligatory synchronous timing arrangement for executing the real asynchronous algorithm. figure [fig:perfmt1] shows excellent agreement between actual and predicted performances for the aggregated ising model. the efficiency presented in figure [fig:perfmt1] is computed analogously to the one-cell-per-one-pe case; the parallel speed-up can be found as the efficiency times the number of pes. for 25 pes simulating a 120 by 120 ising model, the efficiency is 0.66; hence, the speed-up is greater than 16. for the currently unavailable sizes, when on the order of ten thousand pes simulate a correspondingly larger array, the simplified model predicts an efficiency of about 0.8 and a speed-up of about 8000. in the experiments reported above, the lag between the local times of any two pes was not restricted. as discussed in section [sec:algo], an upper bound on the lag might result from the necessity to produce the output. to see how the bound affects the efficiency, one experiment reported in figure [fig:perfmt1] is repeated with various finite values of the lag bound. in this experiment, a fixed array is simulated and each pe carries a fixed-size subarray. the results are presented in figure [fig:perfbl]. in figure [fig:perfbl], the unit of measure for a lag is the expectation of the time interval between consecutive arrivals for a cell. for lag bounds greater than 16, the degradation of efficiency is almost unnoticeable when compared with the base experiment, where the lag is unbounded. substantial degradation starts at about 8; for the unity lag bound, the efficiency is about half that of the base experiment. however, even for a lag bound of 0.3, the simulation remains practical, with an efficiency of about 0.1; since 1024 pes execute the task, this efficiency means a speed-up of more than 100.
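as a back-of-the-envelope check of the three cases (i)-(iii) above, the probabilities of picking an interior, edge, or corner cell of a square subarray can be computed directly. the side length and the first-degree (nearest-neighbour) neighborhood are assumptions of this sketch, and the numbers are illustrative only.

```python
# probabilities of the three cases for a randomly chosen cell of an
# a-by-a subarray with a first-degree (nearest-neighbour) neighbourhood
def subarray_case_probabilities(a):
    total = a * a
    p_corner = 4.0 / total                # case (iii): two neighbouring pes consulted
    p_edge = (4.0 * a - 8.0) / total      # case (ii): exactly one neighbouring pe consulted
    p_interior = 1.0 - p_edge - p_corner  # case (i): no neighbouring pe involved
    return p_interior, p_edge, p_corner

# example: a 24-by-24 subarray, as in the 120 by 120 experiment above
print(subarray_case_probabilities(24))    # roughly (0.84, 0.15, 0.007)
```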
this paper demonstrates an efficient parallel method for simulating asynchronous cellular arrays. the algorithms are quite simple and easily implementable on appropriate hardware. in particular, each algorithm presented in the paper can be implemented on a general purpose asynchronous parallel computer, such as the currently available bus machines with shared memory. the speed of such an implementation depends on the speed of the pes and the efficiency of the communication system. a crucial condition for success in such an implementation is the availability of a good parallel generator of pseudo-random numbers. to assure reproducibility, each pe should have its own reproducible pseudo-random sequence. the proposed algorithms present a number of challenging mathematical problems, for example, the problem of proving that efficiency tends to a positive limit when the number of pes increases to infinity. + * acknowledgments*. + i acknowledge the personnel of the thinking machines corp. for their kind invitation, and help in debugging and running the parallel *lisp program on one of their computers. particularly, the help of mr. gary rancourt and mr. bernie murray was invaluable. also, i thank andrew t. ogielski and malvin h. kalos for stimulating discussions, debasis mitra for a helpful explanation of a topic in markov chains, and brigid moynahan for carefully reading the text. s. geman and d. geman, stochastic relaxation, gibbs distributions, and the bayesian restoration of images, _ ieee transactions on pattern analysis and machine intelligence _, * pami-6 *, 6, (nov. 1984), 721-741.

#define shared_mem_size (sizeof(double)*10000)
#define end_time 1000.
#define a 24   /* side of small square a pe takes care of */
#define m 5    /* number of pes along a side of the big square */

shared int npes = m*m, spin[m*a][m*a];
shared float time[m][m];      /* local times on subarrays */
shared float prob[10];        /* probabilities of state change */
shared float j = 1., h = 0.;  /* energy = -j sum spin spin' - h sum spin */
shared float t = 1.;          /* temperature */
shared int ato2 = a*a;
shared int am = a*m;

/* compute flip probabilities */
for (i = 0; i < 5; i++)
    for (j = 0; j < 2; j++) {
        index = i + 5*j;      /* index = 0,1,...,9 */
        my_spin = 2*j - 1;
        sum_nei = 2*i - 4;
        d_e = 2.*(j * my_spin * sum_nei + h * my_spin);
        x = exp(-d_e / t);
        prob[index] = x/(1.+x);
        /* printf("prob[%d]=%f\n", index, prob[index]); */
    };

/* initialize spins at random, in seedran(seed, b), b is dummy */
seedran(31234, 1);
for (i = 0; i < m*a; i++)
    for (j = 0; j < m*a; j++) {
        bit = 2*frand(1);        /* bit becomes 0 or 1 */
        spin[i][j] = 2*bit - 1;  /* spin becomes -1 or 1 */
        /* printf("spin[%d][%d]=%d\n", i, j, spin[i][j]); */
    };

for (child_id = 0; child_id < npes; child_id++)
    if (fork() == 0) {
        tmp_affinity(child_id);  /* fixing a pe for process child_id */
        work(child_id);          /* starting a child pe process */
        exit(0);
    }

work(my_id)
int my_id;
{
    int i, j;
    int coord, var;
    int x, y, my_i, my_j, sum_nei, nei_i, nei_j;
    int up_i, down_i, left_j, right_j;
    int i_base, j_base;
    int index;
    double frand();
    double r;
    double end_time;

    while (time[my_i][my_j] < end_time) {
        r = frand(my_id);  /* pe my_id obtains next pseudo-random number from its own sequence */
        x = r*a;
        y = (r*a - x)*a;
        /* pick a random cell with internal address (x, y) within the a*a square */
        /* compute sum of neighboring spins */
        sum_nei = 0;
        for (coord = 0; coord < 2; coord += 1)
            for (var = -1; var < 2; var += 2) {
                nei_i = x; nei_j = y;
                if (coord == 0) nei_i += var;
                if (coord == 1) nei_j += var;
                if (0 <= nei_i && nei_i < a && 0 <= nei_j && nei_j < a) {
                    nei_i += i_base; nei_j += j_base;
                } else {
                    /* 4 possible reasons to wait for a neighboring pe */
                    if (-1 == nei_i) while (time[down_i][my_j] < time[my_i][my_j]);
                    if (-1 == nei_j) while (time[my_i][left_j] < time[my_i][my_j]);
                    if (nei_i == a) while (time[up_i][my_j] < time[my_i][my_j]);
                    if (nei_j == a) while (time[my_i][right_j] < time[my_i][my_j]);
a definition for a class of asynchronous cellular arrays is proposed. an example of such asynchrony would be independent poisson arrivals of cell iterations. the ising model in the continuous time formulation of glauber falls into this class. also proposed are efficient parallel algorithms for simulating these asynchronous cellular arrays. in the algorithms, one or several cells are assigned to a processing element (pe), and local times for different pes can be different. although the standard serial algorithm by metropolis, rosenbluth, rosenbluth, teller, and teller can simulate such arrays, it is usually believed to be without an efficient parallel counterpart. however, the proposed parallel algorithms contradict this belief, proving to be both efficient and able to perform the same task as the standard algorithm. the results of experiments with the new algorithms are encouraging: the speed-up is greater than 16 using 25 pes on a shared memory mimd bus computer, and greater than 1900 using a much larger number of pes on a simd computer. the algorithm by bortz, kalos, and lebowitz can be incorporated in the proposed parallel algorithms, further contributing to the speed-up.
magnetic activity in the sun is known to play a central role in driving both long - term and short - term dynamics ( tobias 2002 ; weiss 2002 ) .the magnetic field is responsible for spectacular events such as sunspots , solar flares , and coronal mass ejections , and for heating the solar corona to high temperatures .large - scale magnetic activity is known to be dominated by the eleven year activity cycle .this cycle has been systematically observed since the early seventeenth century and its properties are well documented ( see e.g. ossendrijver 2003 ) .of particular current interest is the impact of magnetic activity on solar irradiance that might have significant implications for the terrestrial climate ( see solanki _et al _ 2004 ) . given the importance of solar activity , it is not surprising that there has been a continued interest in understanding the mechanisms responsible for generating the solar magnetic field .the sun s magnetic field is believed to be generated by a hydromagnetic dynamo in which motion of the solar plasma ( advection ) is able to sustain a magnetic field against the continued action of ohmic dissipation ( see e.g moffatt 1978 ; charbonneau 2005 ) .progress in understanding this fundamental problem of solar magnetohydrodynamics is slow owing to the difficulties of the dynamo problem .the extreme parameters of the solar interior and the inherent three - dimensionality of the dynamo problem make it impossible to solve the equations accurately on a computer .much effort has therefore focused on _ mean - field _ dynamo models ( steenbeck , krause & rdler 1966 ; krause & rdler 1980 ) , which describe the evolution of the mean magnetic field , parameterising the effects of the small - scale fields and flows in terms of tensor transport coefficients .these transport coefficients include ( which leads to a regenerative term in the mean - field equations the so - called -effect ) and the turbulent diffusivity ( ) .we stress here that there is no mechanism within the theory for determining the form of these coefficients , except for flows at low magnetic reynolds number or with short correlation time , and in solar models these are usually chosen in a plausible but ad - hoc manner ( often , for simplicity , adopting isotropic representations in which and ) .much attention has been focused upon determining these transport coefficients in both the linear and nonlinear dynamo regimes from numerical simulations ( cattaneo & hughes 1996 ; brandenburg & subramanian 2005 ) but there is still no consensus over the nature of these , even to within an order of magnitude ( see courvoisier , hughes & tobias 2006 ) .mean - field models have , however , proved successful in providing illustrations of the type of behaviour that might be expected to occur in the sun ( and other stars ) .it is often argued that , _ although these models have no predictive power _ ,understanding the underlying mathematical form of the equations can lead to the identification of robust patterns of behaviour .many different models have been proposed for the solar dynamo . 
in the distributed dynamo model, the -effect operates throughout the convection zone and interacts with the latitudinal shear ( or the sub - surface shear layer , see brandenburg 2005 ) to generate magnetic field .alternatively , the dynamo could be operating near the tachocline , where an -effect might be driven either by a tachocline - based instability or by turbulent convection .this , in conjunction with the strong shear , could drive an `` interface '' dynamo ( parker 1993 ) .finally , there are flux transport models , in which the ( so - called ) babcock - leighton mechanism produces an -effect ( or source term ) at the surface .this surface -effect is coupled to the radial shear in the tachocline ( where another -effect may be operating ) via a meridional flow ( choudhuri , schssler & dikpati 1995 ; dikpati & charbonneau 1999 ) .the relative merits of these models are discussed elsewhere in the literature ( see , e.g. charbonneau 2005 ) the only comment we make here is that this plethora of models arises because of the lack of available constraints on the form of the transport coefficients in the mean - field formalism .we note further that it is not clear that any of the above scenarios capture the essential dynamo processes correctly or that these processes can ever be captured by a mean - field model .it is also possible to construct predictions of solar activity without using dynamo theory , and there is a long literature describing these predictive methods ( see e.g. zhang 1996 ; hathaway , wilson & reichmann 1999 ; sello 2003 ; zhao _ et al _ 2004 ; saba , strong & slater 2005 ) .one class of prediction techniques uses statistical and timeseries analysis methods ( see e.g. tong 1995 for details ) .these methods , which are also applicable in many other areas of physics , vary in complexity from simple linear methods to methods that use dynamical systems theory to reconstruct nonlinear attractors in phase space . however , these methods have the drawback that they do not utilise any of the `` physics '' of the problem .predictions can also be made by using precursor methods ( see e.g. schatten 2002 ) , which do utilise some of the physical features of the system in addition to the timeseries data . in recent papers ( dikpati , de toma & gilman 2006 ;dikpati & gilman 2006 ) , an attempt has been made to unify these two approaches by utilising a mean - field model in order to make predictions about the future activity of the sun .these papers describe an axisymmetric , mean - field model of a flux transport dynamo . herethe authors make use of the observations of magnetic flux at the solar surface to feed into a model of solar activity .the flux that is observed at the solar surface is advected by a parameterised meridional flow ( which can be observed down to a certain depth ) and interacts with a differential rotation profile that has been inferred from helioseismology .the magnetic flux also interacts with turbulence , the effects of which are parameterised by certain turbulent transport coefficients ( representing the turbulent diffusivity and the -effect ) . as with all current mean field modelsthese turbulent transport effects have been parameterised in a plausible but ad - hoc manner , and are unconstrained by observations and indeed theory . 
the simplest predictive scheme proposed by dikpati _et al _ ( 2006 ) therefore takes the form of a parameterised linear system forced by boundary observations .the implicit underlying philosophy here is that by reducing the correct physics for the generation of the solar activity cycle ( i.e. a nonlinear self - excited dynamo ) to such a scheme , predictions about future solar activity can be made . in this paperwe shall investigate the predictability of various dynamo models .we demonstrate that even when all the nonlinear physics of the solar dynamo is removed , problems remain for prediction owing to the increased importance of stochastic effects even very weak stochastic perturbations can produce significant modulation in these linear - type models .we also discuss the best - case scenario for prediction where stochastic effects can be ignored , and demonstrate that in these cases prediction is still difficult owing to uncertainties in the input parameters of these parameterised mean - field models .the paper is organised as follows . in the next sectionwe describe ( in a general way ) the importance of modulation and the role of stochasticity and nonlinearity in solar dynamo models . in section 3we investigate a flux transport model and demonstrate how the presence of even extremely weak noise can render predictions useless .in section 4 we consider the best - case " scenario for prediction where noise does not play a role in the modulation we demonstrate that more accurate prediction schemes may arise by using basic timeseries analysis techniques rather than from constructing mean - field models of the solar cycle .finally , in section 5 we discuss the implications of our work for predictions of the solar cycle .in this section , we discuss the problems that must be overcome by schemes designed to yield a prediction of future solar magnetic activity . some of these problems arise owing to the nature of solar magnetic activity whilst others arise from the lack of a detailed theory that is capable of describing solar magnetic activity in such extreme conditions as those that exist in the solar interior .it is clear that if the solar cycle were strictly periodic , with a constant amplitude , then it would be straightforward to predict future behaviour .however , all measurements of solar magnetic activity ( both direct observations and evidence from proxy data ) indicate that the variations in the magnetic activity do not follow a periodic pattern. departures from periodicity may be driven either by perturbations or by modulation .for the case of a weakly perturbed periodic system , the dynamics is essentially captured by the periodic signal , with the small perturbations playing a secondary role .we distinguish this behaviour from a modulated signal in which there are significant departures from periodicity ( often occurring on longer timescales ) , with large variations in the observed amplitude of the signal .all the evidence from direct observations indicates that the solar cycle is strongly modulated .the amplitude of the solar cycle varies enormously over long timescales , an extreme example of this modulation was a period of severely reduced activity in the seventeenth century known as the maunder minimum .proxy data from records of terrestrial isotopes , such as and ( see e.g. 
beer 2000, weiss & tobias 2000, wagner _ et al _ 2001), demonstrate that this modulation has been a characteristic feature of the solar magnetic activity over (at least) the last 20,000 years. mathematically there are only two possible sources for this strong modulation of the basic solar cycle (tobias 2002). the modulation may arise either as a result of stochastic effects (see e.g. ossendrijver & hoyng 1996) or by deterministic processes (see e.g. tobias, weiss & kirk 1995). in this context we define deterministic processes to be those that _ are _ captured by the differential equations of dynamo theory, with no random elements. stochastic processes are those that occur on an unresolved length or timescale, and so cannot be described by the differential equations without including a random element into the model. it is well known that stochastic modulation can arise even if the deterministic physics that leads to the production of the basic cycle is essentially linear. this parameter regime is generally considered to be a good one for prediction, since any nonlinear effects are only playing a secondary role. however, in this stochastically-perturbed case, the small random fluctuations that lead to the modulation will have large short-term effects and render prediction extremely difficult, if not impossible. conversely, if the modulation arises purely as a result of deterministic processes, then the underlying physics is nonlinear (or potentially non-autonomous) and this leads to difficulty in prediction owing to the possible presence of deterministic chaos and (more importantly) the difficulty of constructing accurate nonlinear models with large numbers of degrees of freedom. in the next two sections we demonstrate the problems for prediction for dynamo models in both of the classes described above. in the next section we describe a flux transport model of the same type as the one used in the prediction scheme of dikpati _ et al _ (2006) and we demonstrate that even very small random fluctuations can produce significant modulation, leading to extreme difficulties for prediction. we then, in section 4, go on to describe a model where the modulation arises owing to the presence of deterministic chaos and show that in this case, prediction using model fitting is a poor way to proceed, but some prediction is possible if it is possible to reconstruct the attractor for activity. we assume initially that the modulated solar magnetic activity can be described by a stochastically-perturbed mean-field dynamo model. in this model, nonlinear effects are playing a secondary role, and all the modulation is being driven by the stochastic effects. the aim of this section is to assess whether or not models of this type can be used to make meaningful predictions of the solar magnetic activity. in these models, the evolution of the large-scale magnetic field is described by the standard mean-field equation (see, for example, moffatt 1978), $\partial \mathbf{b}/\partial t = \nabla \times \left( \mathbf{u} \times \mathbf{b} + \alpha \mathbf{b} - \eta \nabla \times \mathbf{b} \right)$. here, $\mathbf{b}$ represents the large-scale magnetic field and $\mathbf{u}$ corresponds to the mean velocity field, $\eta$ is the (turbulent) magnetic diffusivity, and the term $\alpha \mathbf{b}$ corresponds to the mean-field $\alpha$-effect. using a well-known approximation, we solve this equation numerically in an axisymmetric spherical shell. in solving equation ([eqn:1]) we need to ensure that $\mathbf{b}$ remains solenoidal (i.e. $\nabla \cdot \mathbf{b} = 0$).
to achieve this , we decompose the magnetic field into its poloidal and toroidal components , where denotes the toroidal ( azimuthal ) field component and the scalar potential relates to the poloidal component of the magnetic field .so , rather than solving equation ( [ eqn:1 ] ) directly , the problem has been reduced to solving two coupled partial differential equations for the scalar quantities and .we adopt idealised boundary conditions , in which at and and and and are smoothly matched to a potential field at .this particular dynamo model is closely related to the flux transport model described by dikpati & charbonneau ( 1999 ) .the large - scale velocity field , , is given by where is a prescribed analytic fit to the helioseismologically - determined solar rotation profile ( see , for example , bushby 2006 ) and and correspond to a prescribed meridional circulation .we assume that the meridional circulation pattern in each hemisphere consists of a single cell , with a polewards flow at the surface and an ( unobservable ) equatorwards flow at the base of the convection zone the flow is confined to the region .the functional form that we adopt for this flow is similar in form to the one described by dikpati & charbonneau ( 1999 ) , \xi \sin \theta \left ( 3\cos^2 \theta - \sin^2 \theta \right),\label{eqn:4}\\ u_{\theta}(r,\theta)&=&u_o \left ( \frac{r_{\odot}}{r } \right)^3 \left [ -1 + c_1\xi^{0.5}-c_2\xi^{0.75}\right ] \cos \theta \sin^2 \theta,\label{eqn:5 } \end{aligned}\ ] ] where ] , ^{-0.75}$ ] , and is some characteristic flow speed .this flow pattern can be stochastically perturbed by setting , where is a time - dependent , randomly fluctuating variable in the range .the aim here is to assess whether or not such weak stochastic variations in the flow pattern could give rise to significant modulation in the activity cycle , and if so what are the consequences for prediction . in order to complete the specification of the model , we need to choose plausible functional forms for the -effect and the turbulent magnetic diffusivity .it should be emphasised again that these mean - field coefficients are poorly constrained by theory and observations , although plausible assumptions can be made . 
defining to be a characteristic value of the turbulent magnetic diffusivity within the solar convection zone, we adopt a similar spherically - symmetric profile to that adopted by dikpati & charbonneau ( 1999 ) , + \beta_c , \label{eqn:6}\ ] ] where erf corresponds to the error function and ( here taken to be of ) represents the magnetic diffusivity below the turbulent convection zone .following dikpati & charbonneau ( 1999 ) , rather than prescribing a simple functional form for we neglect the -effect term in the toroidal ( ) field equation and replace the corresponding term in the poloidal ( ) equation by a non - local , nonlinear source of poloidal flux , \left [ 1 - \mbox{erf}\left(\frac{r - r_{\odot}}{0.01r_{\odot } } \right)\right ] \\\nonumber & & \left [ 1+\left(\frac{b(0.7r_{\odot},\theta , t)}{b_o}\right)^2\right]^{-1}\sin \theta \cos \theta b(0.7r_{\odot},\theta , t).\label{eqn:7}\end{aligned}\ ] ] here , is a characteristic value of this poloidal source and represents the ( somewhat arbitrarily chosen ) field strength at which this non - local source becomes suppressed by the magnetic field .this source term parameterises the contribution to the poloidal magnetic flux due to the decay of active regions the non - locality reflects the fact that active regions are believed to form as the result of buoyant magnetic flux rising from the base of the convection zone to the solar photosphere .see dikpati & charbonneau ( 1999 ) for a more detailed discussion of this source term , though again it must be stressed that the functional form and the nonlinear dependence are chosen in a plausible yet ad - hoc manner . in order to carry out numerical simulations ,we first non - dimensionalise this flux transport model . by using scalings similar to those described by dikpati & charbonneau ( 1999 ), it can be shown that the model solutions are fully determined by two non - dimensional parameters ( once other parameters such as have been selected ) . denoting the equatorial angular velocity at the solar surface by ,these non - dimensional parameters are the dynamo number , , and the magnetic reynolds number corresponding to the meridional flow , . here, we set and . in the absence of stochastic noise , this set of parameters produces a strong circulation - dominated dynamo in which the magnetic energy is a periodic function of time . although the dynamo number is not weakly supercritical , nonlinear effects are not strong enough here to produce a modulated activity cycle the primary role of the nonlinearity is to prevent the unstable dynamo mode from growing exponentially .we term such a model a `` linear - type '' model .when weak stochastic effects are included in the model , the resulting activity cycle is indeed weakly modulated .this is illustrated in figure 1 , which shows the time - dependence of this solution .the time - series clearly illustrates that , although the amplitude of the `` cycle minimum '' only appears to be weakly time - dependent , there are significant variations in the peak amplitude of the magnetic energy time - series .these variations are qualitatively similar to those observed by charbonneau & dikpati ( 2000 ) , who considered large amplitude random fluctuations in the flow pattern within the solar convection zone the peak amplitude of these fluctuations was comparable with the peak amplitude of the flow . 
in this particular model , we have shown that even very weak stochastic variations in the centre of mass of the flow pattern can still produce significantly modulated behaviour .these stochastic effects are expected to become increasingly significant for dynamo numbers approaching critical .so , these models are obviously highly sensitive to the addition of stochastic noise . in the absence of stochastic noise ,the attractor ( in phase space ) for this solution is two - dimensional , and the future behaviour of the solution at any instant in time is entirely determined by the current position of the system on the attractor .the same is not true when this system is perturbed by stochastic effects , and it clearly becomes much more difficult to predict the future behaviour of the system .since the attractor of this stochastically perturbed solution can not be unambiguously defined , another possible way of assessing the `` predictability '' of this solution is to look for a correlation between successive cycle maxima .defining to be the magnitude of the cycle maximum , figure 2 shows as a function of .it is clear from this scatter plot that there is no obvious correlation between the amplitudes of successive cycle maxima in this case .since the modulation is being driven entirely by random stochastic forcing , this result is not surprising .this lack of correlation suggests that the behaviour of previous cycles can not be used to infer the magnitude of the following one .this implies that even weak stochastic effects may seriously reduce the possibilities for solar cycle prediction in this linear - type regime .in the previous section , we demonstrated that even very weak stochastic perturbations to the meridional flow pattern can lead to a loss of predictability in a linear - type flux transport dynamo model . in that model ,the modulation of the activity cycle was driven entirely by stochastic effects . as discussed in section 2 ,the only other possible scenario is that the observed modulation is driven by nonlinear effects .this scenario , where the observed modulation is deterministic in origin , is the `` best - case '' scenario for prediction , as in this case the entirely unpredictable stochastic elements may be ignored .we stress again that , given that solar magnetic activity is significantly modulated , either deterministic or stochastic modulation must be considered in any realistic model ( predictive or otherwise ) of the solar cycle .so , in this section , we completely neglect stochastic effects and assume that the observed ( chaotic ) modulation in the solar magnetic activity can be described by a fully deterministic model in which any activity modulation ( e.g. solar - like `` grand minima '' ) is driven entirely by nonlinear effects . the model that we usewas described in detail in two recent papers ( bushby 2005 , 2006 ) , so we only present a brief description here .the exact details of the model are unimportant for our main conclusions . like the flux transport dynamo model from the previous section, this model describes an axisymmetric , mean - field , -dynamo in a spherical shell . unlike the previous model, this model represents an `` interface - like '' dynamo that is operating primarily in the region around the base of the solar convection zone .it is worth mentioning again that ( as discussed in the introduction ) there is still no general consensus regarding which of these dynamo scenarios is more likely to be an accurate representation of the solar dynamo . 
for this interface - like dynamo model ,we neglect meridional motions , since they are poorly determined near the base of the solar convection zone .like several earlier models ( e.g. tobias 1997 ; moss & brooke 2000 ; covas _ et al _ 2000 ) , this dynamo model includes the feedback ( via the azimuthal component of the lorentz force ) of the mean magnetic field upon the differential rotation ( malkus & proctor 1975 ) .this nonlinear feedback is a crucial element of the model and , in the absence of stochastic effects , is the sole driver of modulation in the magnetic activity cycle . denoting this magnetically - driven velocity perturbation by , the large - scale velocity fieldis given by \mathbf{e_{\phi } } , \label{eqn:8}\ ] ] where ( as in the previous model ) represents an analytic fit to the solar differential rotation . whilst the evolution of the large - scale magnetic field is again governed by equation ( 1 ) , an additional evolution equation is required for the velocity perturbation , .this equation is given by \cdot \mathbf{e_{\phi } } + \frac{1}{r^3}\frac{\partial}{\partial r}\left[\nu r^4 \frac{\partial}{\partial r } \left(\frac{v}{r}\right)\right ] \\\nonumber & & + \frac{1}{r^2 \sin^2 \theta}\frac{\partial}{\partial \theta}\left[\nu \sin^3 \theta \frac{\partial}{\partial \theta } \left(\frac{v}{\sin \theta}\right)\right],\label{eqn:9}\end{aligned}\ ] ] where represents the fluid density ( here taken to be constant ) , is the permeability of free space and represents the ( turbulent ) fluid viscosity . in order to complete the model , the spatial dependence of the transport coefficients ( , and )must also be specified .again , we emphasise that there are no direct observational constraints relating to these coefficients as noted in the introduction , there is no consensus as to their form and there is still a debate as to their order of magnitude ( and even their sign ) .having said that , it is possible to make some plausible assumptions for an `` interface - like '' dynamo model ( see bushby 2006 for more details ) .the precise choices of these parameters are unimportant for our main conclusions . having set up this model , it is possible to choose a set of parameters so that the solutions _ do _ reproduce some salient features of the solar dynamo ( bushby 2005 , 2006 ) .we stress here that , although the parameters have been chosen in a plausible manner , this dynamo model should not be regarded as an accurate representation of the solar interior and is subject to many uncertainties .furthermore we stress again that this is the case with _ all _ mean - field solar dynamo models .however we use this model as a useful tool to analyse the possibility of producing predictive models of the solar cycle .we proceed by choosing fiducial parameters and profiles for the turbulent transport coefficients that lead to solar - type " magnetic activity , with chaotically modulated cycles and recurrent `` grand minima '' .we then integrate this model forward in time to produce a timeseries and designate this timeseries as the target " run , which any subsequent model should be able to predict .this target run is shown in figure 3 , which shows a timeseries of the activity together with a reconstruction of the dynamo attractor in phase space .although this solution is chaotically modulated , it is certainly no more chaotic than the equivalent attractor for the data , which is a well - known proxy for solar magnetic activity ( e.g. 
beer 2000 ) .whilst the nonlinear effects are significant enough to drive the modulation , they are actually very difficult to detect . in this model , the cyclic component of the fluctuations in the differential rotation ( which are driven by the nonlinear lorentz force ) are small compared with the mean differential rotation .this is consistent with observations of the ( so called ) torsional oscillations in the solar convection zone .finally , note once more that , since the modulation is driven entirely by nonlinear effects , this model is specified exactly .the question is then posed as to whether _ any _ mean - field model can be constructed that leads to meaningful predictions of the future behaviour of the target run .clearly the best chance for a mean - field model being capable of predicting the future behaviour of the target run is to use _ the exact model _ that led to the target run data .hence we test this model first , as all subsequent models will be inferior to this .we proceed by setting the model parameters to be those that generated the long test run , and consider the behaviour of solutions that are started from very similar points on the attractor .some of the solutions are shown in figure 4 .this figure shows clearly that although the predictor solutions are able to track the target solution for a couple of activity cycles , the nature of the solutions means that the predictors and target solution diverge quickly after this time .this is not surprising behaviour .it is well - known that chaotic solutions have a sensitive dependence on initial conditions and that long - term prediction of such solutions is fraught with problems ( see e.g. tong 1995 ) .what is clear is that simply using a model that is based upon mean - field theory will not work in the long term _ even if the model is correct in every detail_. one might be able to predict one or two cycles ahead _ if one has solved the problem of constructing an exact representation of the solar dynamo _ but as noted above this is not an easy task .we now turn to the related problem of short - term prediction . as discussed in the introductionthere are large uncertainties in the form and amplitude of the input parameters for all mean - field models .what we investigate here is whether these uncertainties lead to significant difficulties in prediction even in the short term .again we examine the best case scenario and consider a mean - field model for the predictor runs that has correctly parameterised the _ form _ of all the input variables ( differential rotation , -effect , turbulent diffusivity and nonlinear response ) . 
in addition these predictors have been given the correct input _ values _ for all-but-one of the parameters. hence the predictor models are exactly the same as the target model with the exception of one input parameter that has been altered by a small amount. this would be a staggeringly good representation should it be possible to achieve this for solar activity. furthermore we increase the chances of the predictor being able to predict the future behaviour of the target solution by matching the two timeseries over a number of cycles. this is analogous to the procedure employed by dikpati _ et al _ (2006), who cite support for their forecasting model by noting that their model agrees with the solar cycle data for eight solar cycles; in reality this is not difficult to achieve with enough model parameters at one's disposal. figure 5 shows the results of integrating the predictor models for two different choices of incorrect parameter. note that even though the predictor has been designed to reproduce the target over a number of cycles, and that the predictor is very closely related to the target, there is still a good chance that it can get the next cycle incorrect, with significant errors in (particularly) the cycle amplitude. there are also clear variations in the cycle period, which obviously implies that the exact time between successive cycle maxima is also an unpredictable feature of the system. we stress again that any mean-field model of solar activity includes transport coefficients that are still uncertain, possibly to an order of magnitude (and certainly not to high accuracy). although the incredible success of global and local helioseismology is placing restrictions on the form of the differential rotation and the meridional flows, it is unlikely in the foreseeable future that significant constraints will be put on the transport coefficients or their nonlinear response to the mean magnetic field. having established that there are difficulties in obtaining reliable predictions by fitting mean-field models (even if the modulation is deterministic in origin), it is of interest to determine whether or not more reliable predictions could be obtained by utilising more general timeseries analysis techniques. in order to reconstruct an attractor from a given timeseries, it is necessary to define a corresponding phase space. there are various ways of doing this, but given (any) discrete timeseries $x(t)$, in which the data is sampled at intervals of $\tau$, the vector $[x(t), x(t-\tau), \ldots, x(t-(d-1)\tau)]$ defines a point in a $d$-dimensional ``embedded'' phase space (see, e.g., farmer & sidorowich 1987; casdagli 1989). given a prediction interval, the idea of a prediction algorithm is to find a mapping which, applied to the current embedded point, gives a good approximation to the point that lies that interval ahead in time. the predictive mapping technique that is used here uses a local approximation method (see, e.g., casdagli 1989), which considers the behaviour of the nearest neighbours, in the embedded phase space, of the current point; a sketch of this scheme is given below.
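the following is a minimal sketch of such a nearest-neighbour (local linear) predictor. the embedding dimension, the delay, the number of neighbours, the one-step-ahead horizon, and the use of an ordinary least-squares fit are assumptions of the example rather than the exact choices made in the paper.

```python
# local (nearest-neighbour) prediction in a delay-embedded phase space:
# embed the series, find the k nearest historical neighbours of the latest
# point, fit a local affine map by least squares, and apply it to that point.
import numpy as np

def embed(x, d, tau):
    """delay vectors [x(t), x(t-tau), ..., x(t-(d-1)tau)], one per row."""
    n = len(x) - (d - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(d - 1, -1, -1)])

def predict_next(x, d=4, tau=1, k=20):
    x = np.asarray(x, dtype=float)
    v = embed(x, d, tau)
    current, history = v[-1], v[:-1]
    targets = x[(d - 1) * tau + 1 :]          # value one step after each history vector
    # k nearest neighbours of the current point among the historical vectors
    idx = np.argsort(np.linalg.norm(history - current, axis=1))[:k]
    # least-squares fit of an affine map from the neighbours to their next values
    a = np.hstack([history[idx], np.ones((len(idx), 1))])
    coeffs, *_ = np.linalg.lstsq(a, targets[idx], rcond=None)
    return np.append(current, 1.0) @ coeffs

# iterating predict_next on the series extended by its own output gives
# multi-step forecasts, whose errors grow in the manner discussed in the text.
```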
by using a least squares fit , the subsequent evolution of each of these neighbouring points in phase spaceis used to construct a piecewise - linear approximation to the predictive map , .this approximate mapping can then be applied to to obtain an estimate for .this algorithm can then be repeated to find estimates for and subsequent points .the optimal value for can be determined by minimising the error of this predictive algorithm over the known segment of the timeseries .the results of applying this predictor algorithm to the target solution are also shown in figure 5 , where the timeseries predictions are shown as crosses .the prediction is started from the cycle maximum before the mean - field predictor diverges from the target .longer training timeseries lead to a more densely - populated reconstructed attractor , which increases the probability of making more accurate predictions .however , rather than using the entire target run , these predictions are based upon ( approximately ) cycles this will give a fairer comparison between these results and timeseries predictions that are based upon the real sunspot data .the application of the algorithm to earlier segments of the timeseries suggests that a value of is required in order to minimise predictive errors .as can be seen from figure 5 , this algorithm appears to predict the magnitude of the maximum of the following cycle to a reasonable degree of accuracy , although the predictions subsequently diverge from the target .whilst neither of these techniques are capable of producing reliable long - term predictions , these results do suggest that for the short - term prediction of solar magnetic activity , timeseries analysis techniques may provide a viable alternative to predictions based simply upon mean - field dynamo models ( provided stochastic effects can be neglected ) .solar magnetic activity arises as a result of a hydromagnetic dynamo that much we believe to be true .as yet , there is no consensus on the location of the dynamo , the dominant nonlinear or stochastic effects , or even the fundamental processes that are responsible for the operation of such a dynamo .although plausible mechanisms have been proposed , as yet none of these are entirely satisfactory . 
against this background ,there is a drive to be able to predict solar activity with greater accuracy , due to the importance of this activity in driving solar events .what we have demonstrated here is that no meaningful predictions can be made from illustrative mean - field models , no matter how they are constructed .if the mean - field model is constructed to be a driven linear oscillator then the small stochastic effects that lead to the modulation will have an extremely large effect on the basic cycle and make even short - term prediction extremely difficult .the second scenario , where the modulation arises as a result of nonlinear processes rather than stochastic fluctuations , is clearly a better one for prediction though here too , prediction is fraught with difficulties .owing to the inherent nonlinearity of the dynamo system , long - term predictions are impossible ( even if the form of the model is completely correctly determined ) .furthermore , even short - term prediction from mean - field models is meaningless because of fundamental uncertainties in the form and amplitude of the transport coefficients and nonlinear response .any deterministic nonlinear model that produces chaotically modulated activity cycles will be faced with the same difficulties .the equations that describe dynamo action in the solar interior are known to be nonlinear partial differential equations the momentum equation is nonlinear in both the velocity and the magnetic field .one indication of the role played by nonlinear effects in the solar dynamo is the presence of cyclic variations in the solar differential rotation ( the `` torsional oscillations '' ) .furthermore estimates of the field strength at the base of the convection zone consistent with the observed formation of active regions yield fields of sufficient strength ( g ) for the nonlinear lorentz force to be extremely significant , whilst the flows are vigorously nonlinear and turbulent .it therefore seems extremely unlikely that the dynamics of the solar interior can be described by a forced linear system without throwing away much ( if not all ) of the important physics . in this caseit must be argued _ not only _ that this discarded physics is irrelevant to the dynamo process but also that the parameterisation of the unresolved physics should not include a stochastic component , as this would have an extremely large effect on such a relinearised system .it is certainly tempting to try to use the observed magnetic flux at the solar surface as an input to a model for prediction ( whether nonlinear or stochastic , mean - field or full mhd ) .certainly any fully consistent solar activity model constructed in the future should be capable of reproducing the observed pattern of magnetic activity at the solar surface , although this will require a complete understanding not only of the generation process via dynamo action , but also the processes which lead to the formation and subsequent rise of concentrated magnetic structures from the solar interior to the surface .however it is not clear what role the flux at the solar surface plays in the basic dynamo process .is it inherent to the process ( as modelled by flux transport dynamos ) or simply a by - product of the dynamo process that is occurring deep within the sun ?estimates suggest that between and of the solar flux generated in the deep interior makes it to the solar surface ( e.g. galloway & weiss 1981 ) . 
for the flux at the solar surface to be the key for dynamo action, it must be explained why the majority of the magnetic flux that resides in the solar interior plays such a little part in the dynamics ( to such an extent that it does not even appear as a small stochastic perturbation to the large - scale flux transport dynamo ) .finally it is important to stress that _ even if _ a model has been tuned so as to reproduce results over a number of solar activity cycles , then there is a good chance of error in the prediction for the next cycle .any advection - diffusion system in which one is free to specify not only the sources and the sinks but also the transport processes can be tuned to reproduce any required features of activity .moreover , the formulation of a prediction in terms of a parameterised mean - field model does not inherently put the prediction on a sounder scientific basis than a prediction based on methods of timeseries analysis alone ( some of which use very sophisticated mathematical techniques ) .this , of course , is not to say that any given prediction from such a model will be incorrect , just that the basis for making the prediction has no strong scientific support .
we discuss the difficulties of predicting the solar cycle using mean - field models . here we argue that these difficulties arise owing to the significant modulation of the solar activity cycle , and that this modulation arises owing to either stochastic or deterministic processes . we analyse the implications for predictability in both of these situations by considering two separate solar dynamo models . the first model represents a stochastically - perturbed flux transport dynamo . here even very weak stochastic perturbations can give rise to significant modulation in the activity cycle . this modulation leads to a loss of predictability . in the second model , we neglect stochastic effects and assume that generation of magnetic field in the sun can be described by a fully deterministic nonlinear mean - field model this is a best case scenario for prediction . we designate the output from this deterministic model ( with parameters chosen to produce chaotically modulated cycles ) as a target timeseries that subsequent deterministic mean - field models are required to predict . long - term prediction is impossible even if a model that is correct in all details is utilised in the prediction . furthermore , we show that even short - term prediction is impossible if there is a small discrepancy in the input parameters from the fiducial model . this is the case even if the predicting model has been tuned to reproduce the output of previous cycles . given the inherent uncertainties in determining the transport coefficients and nonlinear responses for mean - field models , we argue that this makes predicting the solar cycle using the output from such models impossible .
first introduced by gallager , ldpc codes have been the focus of intense research in the past decade and many of their properties are now well - understood .the iterative decoding algorithms for ldpc codes have been analyzed in detail , and asymptotic performance results have been derived . however , estimation of frame - error - rate ( fer ) for iterative decoding of finite - length ldpc codes is still an unsolved problem .a special case of interest is the performance of iterative decoding at high signal - to - noise ratio ( snr ) . at high snrs ,a sudden degradation in the performance of iterative decoders has been observed , .this abrupt change manifested in the fer curve is termed as an `` error - floor . ''the error - floor problem is well - understood for iterative decoding over the binary erasure channel ( bec ) .combinatorial structures called `` stopping sets '' were used to characterize the fer for iterative decoding of ldpc codes over the bec .it was established that decoding failure occurs whenever all the variables belonging to stopping sets are erased . used this fact to construct irregular ldpc codes which avoid small stopping sets thus improving the guaranteed erasure recovery capability of codes under iterative decoding , and hence improving the error - floors . as in the case of bec , a strong connection has been found between the existence of low - weight uncorrectable error patterns and error - floors for additive white gaussian noise ( awgn ) channels and binary symmetric channels ( bsc ) ( see and ) .hence , studying the guaranteed error correction capability of codes under iterative decoding is important in the context of characterization and improvement of the performance of iterative decoding strategies . in the past, guaranteed error correction has been approached from the perspective of the decoding algorithm as well as from the perspective of code construction .sipser and spielman used expansion arguments to derive sufficient conditions for the parallel bit - flipping algorithm to correct a fraction of errors in codes with column - weight greater than four .burshtein proved that for large enough lengths , almost all codes with column - weights greater than or equal to four can correct a certain fraction of errors under the bit - flipping algorithm .burshtein and miller derived the sufficient conditions for message - passing decoding to correct a fraction of errors for codes of column - weight greater than five .however , these proofs were not constructive , i.e. , no explicit code construction which satisfied the sufficient conditions was provided .moreover , the code - lengths required to guarantee the correction of a small number of errors ( say ) is very high .also , these arguments can not be extended for message - passing decoding of codes with column - weight three or four . in order to construct codes with good error correcting properties under iterative decoding , progressive edge growth ( peg ) and constructions based on finite geometries have been used .however , codes constructed from finite geometries typically have very high column - weight .although , it has been proved that minimum distance grows at least linearly for codes constructed using peg , no results proving guaranteed error correction under iterative decoding exist for these codes . 
in this work ,we derive necessary and sufficient conditions for the correction of three errors in a column - weight - three code under the hard - decision message - passing algorithm .we provide a modified peg construction which yields codes with such an error - correction capability . also , we derive the necessary and sufficient conditions for the correction of three errors in four iterations for the case of codes with column - weight four .again , we provide a modified peg construction which yields codes with such error - correction capability .the remainder of the paper is organized as follows : we establish the preliminaries of the work in section [ section2 ] .the necessary and sufficient conditions for the correction of three errors in column - weight - three codes are derived in section [ section3 ] .the case of column - weight - four codes is dealt with in section [ section4 ] . in section [ section5 ] ,we describe a technique to construct codes satisfying the conditions of the theorems and provide numerical results .we conclude with a few remarks in section [ section6 ] .in this section , we first describe the tanner graph representation of ldpc codes .then , we establish the notation that will be used throughout this paper .finally , we describe the hard - decision message - passing algorithm that will be used for decoding .the tanner graph of an ldpc code , , is a bipartite graph with two sets of nodes : , the variable ( bit ) nodes and , the check ( constraint ) nodes .every edge in the bipartite graph is associated with a variable node and a check node .the check nodes ( variable nodes , respectively ) connected to a variable node ( check node , respectively ) are referred to as its neighbors .the degree of a node is the number of its neighbors . in a -regular ldpc code , each variable node has degree and each check node has degree .the girth is the length of the shortest cycle in .let such that .if for all choices of , there are at least neighbors of in , then we say that the condition is satisfied . 
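A minimal illustrative sketch (not from the paper) of a Tanner-graph container built from a binary parity-check matrix, exposing the neighbor sets and node degrees used throughout the discussion; the class name, interface, and the toy Hamming-code example are assumptions of this sketch.

```python
import numpy as np

class TannerGraph:
    """Bipartite graph of an LDPC code: n variable (bit) nodes, m check nodes."""
    def __init__(self, H):
        self.H = np.asarray(H) % 2
        self.m, self.n = self.H.shape
        # checks adjacent to each variable node, and variables adjacent to each check
        self.var_nbrs = [set(np.nonzero(self.H[:, v])[0]) for v in range(self.n)]
        self.chk_nbrs = [set(np.nonzero(self.H[c, :])[0]) for c in range(self.m)]

    def var_degree(self, v):
        return len(self.var_nbrs[v])   # column weight of column v

    def chk_degree(self, c):
        return len(self.chk_nbrs[c])   # row weight of row c

# toy example: a (7,4) Hamming parity-check matrix (not an LDPC code, just small)
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
g = TannerGraph(H)
print([g.var_degree(v) for v in range(g.n)])   # column weights: [2, 2, 2, 3, 1, 1, 1]
```

A (dv, dc)-regular code corresponds to every column of H having weight dv and every row weight dc.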
in this paper, represents a variable node , represents an even - degree check node and represents an odd - degree check node .let $ ] , a binary -tuple , be the input to the message - passing decoder .let be a variable node with as its corresponding bit and be a check node neighboring .let vcj denote the message that sends to in the first half of the iteration and cvj denote the message that sends to in the second half of the iteration additionally , let v : j be the set of all messages from a variable to all its neighboring checks in the first half of the iteration .let v:\cj be the set of all messages that a variable node sends to all its neighboring checks except in the first half of the iteration .let be the set of all messages received by from all its neighboring in the second half of the iteration .let be the set of all messages received by from all its neighboring check nodes except in the second half of the iteration .c : j , c:\vj , and are defined similarly .the gallager algorithms can be defined as follows : the forward messages , vcj ( from variables to checks ) , are defined as where refers to the total number of messages which are of the value .the backward messages , cvj ( from checks to variables ) , are defined as at the end of each iteration , an estimate of each variable node is made based on the incoming messages and possibly the received value .the decoder is run until a valid codeword is found or until a maximum number of iterations , say , is reached , whichever is earlier . in eqn .( [ equation : forward ] ) , is a threshold which is generally a function of the iteration number , , and the degree of the variable . in this paper , we use for all and for decoding column - weight - three codes . for column - weight - four codes, we use for all when and for all when . _ remark : _ we note that eqns .[ equation : forward ] and [ equation : backward ] then correspond to the gallager - b algorithm . for the gallager - a algorithm , , for all , where is degree of variable node .+ _ a note on the decision rule : _ different rules to estimate a variable node after each iteration are available , and it is likely that changing the rule after certain number of iterations may be beneficial . however , the analysis of such scenarios is beyond the scope of this paper . throughout the paper ,we use the following decision rule : if all incoming messages to a variable node from neighboring checks are equal , set the variable node to that value ; else set it to its received value .we discuss briefly the concept of trapping sets .consider an ldpc code of length .let be the binary vector which is the input to the hard - decision decoder .for output symmetric channels , without loss of generality , we can assume that the all - zero - codeword is transmitted .we make this assumption throughout this paper .the support of a vector denoted by is defined as the set of all positions where .for each , , let be the codeword estimate of the decoder at the end of the iteration .a variable node is said to be _ eventually correct _ if there exists a positive integer such that for all , does not belong to . a decoding failure is said to have occurred if there does not exist such that let denote the set of variable nodes that are not eventually correct . 
if is not empty , let and be the number of odd - degree check nodes in the subgraph induced by .we say that is an trapping set .let be a trapping set and let .the critical number of trapping set is the minimum number of variable nodes that have to be initially in error for the decoder to end up in the trapping set .that is , . let be a trapping set .if , then is a failure set of . for transmission over the bsc , is a fixed point of the decoding algorithm if for all .it follows that for transmission over the bsc , if is a fixed point , then is a trapping set .now , we have the following theorem which provides the sufficient condition for a set of variables to be a trapping set : [ thm1] let be the tanner graph of a column - weight - three code .let , be a set consisting of variable nodes with induced subgraph .let the checks in be partitioned into two disjoint subsets , namely , consisting of checks with odd degree and consisting of checks with even degree .if ( a ) every variable node in is connected to at least two checks in and at most one check in and ( b ) no two checks of are connected to the same variable node outside , then is a trapping set .see .in this section , we establish necessary and sufficient conditions for a column - weight - three code to correct three errors .we first illustrate three trapping sets and show that the critical number of these trapping sets is three thereby providing necessary conditions to correct three errors .we then prove that avoiding structures isomorphic to these trapping sets in the tanner graph is sufficient to guarantee correction of three errors .[ trappingsets ] shows three subgraphs induced by different numbers of variable nodes .let us assume that in all these induced graphs , no two odd degree checks are connected to the same variable node outside the graph . by the conditions of theorem [ thm1 ] , all these induced subgraphs are trapping sets . fig .[ sixcycle ] is a trapping set , fig .[ 53trappingset ] is a trapping set and fig .[ weight8codeword ] is a trapping set .note that a trapping set is isomorphic to a six - cycle and the trapping set is a codeword of weight eight .we now have the following result : [ sixcycle ] [ 53trappingset ] [ weight8codeword ] the critical number for a trapping set which is also a fixed point is at most three . thereexist ( 5,3 ) and ( 8,0 ) trapping sets with critical number three and no ( 5,3 ) or ( 8,0 ) trapping sets with critical number less than three .[ vc1 ] [ cv1 ] [ vc2 ] [ cv2 ] the proof for case is trivial .we prove the lemma for the case of trapping sets and omit the proof for trapping sets .consider the trapping set shown in fig .[ messagepassing ] .let be the set of variables which are initially in error .let and .also , assume that no variable node in , has two or more neighbors in . in the first iteration , we have : consequently , all variable nodes in are decoded incorrectly at the end of the first iteration . in the second iteration : and all variable nodes in are decoded incorrectly . continuing in this fashion , and is , the messages being passed in the tanner graph would repeat after every two iterations .hence , three variable nodes in error initially can lead to a decoder failure and therefore , this trapping set has critical number equal to three . 
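The hard-decision message-passing rules described in the preliminaries above can be sketched as follows. This is an illustrative implementation assuming the TannerGraph container from the earlier sketch, a caller-supplied threshold (the text's b; setting it to dv - 1 recovers Gallager-A), and an iteration cap; it is not the paper's exact decoder code.

```python
def gallager_b(graph, r, threshold, max_iter=50):
    """Hard-decision decoding of the received word r over the BSC."""
    n, m = graph.n, graph.m
    est = list(r)
    # variable-to-check messages, initialised to the channel values
    v2c = {(v, c): r[v] for v in range(n) for c in graph.var_nbrs[v]}
    for _ in range(max_iter):
        # check-to-variable: parity of the other incoming variable messages
        c2v = {}
        for c in range(m):
            for v in graph.chk_nbrs[c]:
                c2v[(v, c)] = sum(v2c[(u, c)] for u in graph.chk_nbrs[c] if u != v) % 2
        # decision rule: all incoming check messages equal -> that value, else r[v]
        est = []
        for v in range(n):
            incoming = [c2v[(v, c)] for c in graph.var_nbrs[v]]
            est.append(incoming[0] if len(set(incoming)) == 1 else r[v])
        # stop as soon as a valid codeword is found
        if all(sum(est[u] for u in graph.chk_nbrs[c]) % 2 == 0 for c in range(m)):
            return est
        # variable-to-check: send the value supported by >= threshold of the other
        # incoming check messages, otherwise the received channel value
        for v in range(n):
            for c in graph.var_nbrs[v]:
                others = [c2v[(v, d)] for d in graph.var_nbrs[v] if d != c]
                flipped = 1 - r[v]
                v2c[(v, c)] = flipped if others.count(flipped) >= threshold else r[v]
    return est
```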
to guarantee that three errors in a column - weight - three ldpc code can be corrected by the gallager - a algorithm , it is necessary to avoid , and trapping sets in its tanner graph .follows from the discussion above .we now state and prove the main theorem .if the tanner graph of a column - weight - three ldpc codes has girth eight and does not contain a subgraph isomorphic to a trapping set or a subgraph isomorphic to an trapping set , then any three errors can be corrected using the gallager - a algorithm .let be the three erroneous variables and be the set of the checks connected to the variables in . in a column - weight - three code ( free of cycles of length four )the variables in can induce only one of the five subgraphs given in fig .[ errorconfigs ] . in each case , if and is otherwise .the proof proceeds by examining these subgraphs one at a time and proving the correction of the three erroneous variables in each case .[ config1 ] [ config2 ] [ config3 ] [ config4 ] [ config5 ] * subgraph 1 : * since the girth of the code is eight , it has no six cycles .hence , the configuration in fig . [ config1 ] is not possible .* subgraph 2 : * the variables in induce the subgraph shown in fig .[ config2 ] . at the end of the first iteration : there can not exist a variable node which is connected to two or more checks in the set without introducing either a six - cycle or a subgraph isomorphic to trapping set .at the end of first iteration , for all .furthermore , there exists no for which .hence , if a decision is made after the first iteration , a valid codeword is found and the decoder is successful . *subgraph 3 : * the variables in induce the subgraph shown in fig .[ config3 ] . at the end of the first iteration : for no , as this would introduce a four - cycle or a six - cycle in the graph . for any , only if .this implies that has two checks in .let be the set of such variables .we have the following lemma : there can be at most one variable in .suppose .specifically , assume .the proof is similar for .first note that for any , can not be connected to c14 as it would create a six - cycle .next , let and . then , can not have both checks in either c11 or c12 as this would cause a four - cycle .hence , has one check in c11 and one check in c12 .assume without loss of generality that v21 is connected to c11 and c16 .then , v22 can not be connected to c11 and c17 as this would form a six - cycle .v22 can not be connected to c12 and c17 as it would create a trapping set .hence , .let be connected to c11 , c16 and an additional check c21 . in the second iteration: we have the following lemma : [ lemma3:twoincorrectmessages ] there can not exist any variable such that it receives two or more incorrect messages at the end of the second iteration .suppose there existed a variable such that it received two incorrect messages in the second iteration .then , it would be connected to two checks in the set .this is not possible as it would introduce a four - cycle , six - cycle or a trapping set ( _ e.g. _ if is connected to c14 and c21 , it would form a trapping set ) .thus , in the third iteration : at the end of the third iteration , for all . 
also , we have the following lemma : there exists no such that .suppose there exists such that .then , is connected to three checks in the set .this implies that .however , from lemma [ lemma3:twoincorrectmessages ] it is evident that no such exists .hence , if a decision is made after the third iteration , a valid codeword is found and the decoder is successful . *subgraph 4 : * the variables in induce the subgraph shown in fig .[ config4 ] . at the end of the first iteration : for no , . for any , only if .let be the set of all such variables .we have the following lemma : \(i ) has at most four variables , and ( ii ) no two variables in can share a check in ._ sketch of the proof _ : there exists no variable which is connected to two checks from the set as it would introduce a four - cycle or a six - cycle .however , a variable node can be connected to one check from and to one check from .there can be at most four such variable nodes .when four such variable nodes exist , none are connected to c13 .also , these four variable nodes can not share checks outside the set .let these four variable nodes be labeled v21 , v22 , v23 and v24 and their third checks c21 , c22 , c23 and c24 , respectively .let .hence , in the second iteration : at the end of the second iteration for all . moreover , for no , .so , if a decision is made after the second iteration , a valid codeword is reached and the decoder is successful .* subgraph 5 : * the variables in induce the subgraph shown in fig .[ config5 ] . at the end of the first iteration : if there exists no variable such that , a valid codeword is reached after the first iteration .suppose this is not the case .let be the set of variables which receive two or more incorrect messages .then , we have the following lemma : \(i ) there exists one variable such that , and ( ii ) has at most three variables which receive two incorrect messages at the end of the first iteration. furthermore , they can not share a check in .we omit the proof of part ( ii ) as it is straightforward . part ( i )is proved as follows : if there existed no variable , v21 , such that , then the decoder would converge in one iteration .next , suppose such that .without loss of generality , let v21 be connected to and .then , v21 would share two checks in the set .it is thennot possible to connect v22 without introducing a six - cycle or a trapping set ( _ e.g. _ , if v21 is connected to c12 , c15 and c18 , then it would introduce a trapping set ) .let the third checks connected to v22 , v23 and v24 be c21 , c22 and c23 , respectively and let in the second iteration : there can not exist a variable node which is connected to one check from and to one check from .also , there can not be a variable node which is connected to all three checks in the set as this would introduce a graph isomorphic to the trapping set .however , there can be at most two variable nodes which receive two incorrect messages from the checks in , say v31 and v32 .let the third checks connected to them be c31 and c32 , respectively .let and . 
at the end of the second iteration, variables v11 , v12 and v13 receive one incorrect message each .variables in the set receive two incorrect messages each .therefore , in the third iteration , we have : at the end of the third iteration , for all .furthermore , for no , .so , if a decision is made after the third iteration , a valid codeword is reached and the decoder is successful .in this section , we derive necessary and sufficient conditions for the correction of three errors in column - weight - four codes in four iterations of iterative decoding .this result is inspired by the analysis of error events in high - rate codes with column - weight four . in simulations , it was found that received vectors which did not converge to a valid codeword in the first 4 to 5 iterations did not converge thenceforth .hence , it is desirable to devise codes and decoding strategies in which vectors having a small number of errors converged rapidly to a codeword . to this end, it was found that a hybrid decoding strategy could correct three errors in four iterations if certain conditions are satisfied by the code .this result is summarized as follows : [ theorem:4:1 ] an ldpc code with column - weight four and girth six can correct three errors in four iterations of message - passing decoding if and only if the conditions , , , , and are satisfied . _remark : _ it is worth noting that if a graph of girth six satisfies the condition , then it satisfies the condition as well .however , the addition of this extra constraint aids in the proof of the theorem .first , we prove the sufficiency of the conditions of theorem [ theorem:4:1 ] .let be the three erroneous variables .let be the set of checks that are connected to the variables in .the variables in can induce only one of the five subgraphs shown in fig .[ figure4 ] .we prove that in each case , the decoding algorithm converges to the correct codeword in four iterations .+ + * subgraph 1 * : the variables in induce the subgraph shown in fig . [ figure4a ] . at the end of the first iteration , for all .moreover , no variable receives four incorrect messages after the first iteration as the existence of such a variable node would create a four - cycle .if a decision is made after the first iteration , the decoder is successful .* subgraph 2 * : the variables in induce the subgraph shown in fig .[ figure4b ] . at the end of the first iteration : for no , as it would introduce a four - cycle . for any , only if .this implies that is connected to three checks in .let denote the set of such variables .we have the following lemma : there can be at most three variables in .furthermore , no two variable nodes in share any check in the set .let .then the set of variable nodes has at most 15 neighboring checks .this violates the condition .hence , can have at most three variables .next , let .suppose they share a fourth check .since v13 can share at most two checks with v21 and v22 , assume that c110 and c111 are not neighbors of v21 , v22 .the neighbors of the variable nodes in the set all belong to the set which has cardinality 10 , thus violating the condition .let the fourth neighboring checks of v21 , v22 and v23 be c21 , c22 and c23 , respectively .let . in the second iteration: for all , . 
for no , .we now have the following lemma : there exists no variable such that .the proof is by contradiction .let such that .then , is connected to four checks in .note that only two neighbors of can belong to without introducing a four - cycle .this combined with the fact that there are at most three variable nodes in implies that there are only two cases : + ( a ) has two neighbors in and two neighbors in , say c21 and c22 . in this case, the set of variable nodes has check nodes , violating the condition .\(b ) has one neighbor in and three neighbors in .in this case , the set of variable nodes has check nodes , violating the condition . hence, if a decision is made after the second iteration , the decoder is successful .* subgraph 3 * : the variables in induce the subgraph shown in fig .[ figure4c ] . at the end of the first iteration , v11 , v12 and v13receive correct messages from all their neighboring check nodes. moreover , there exists no variable which receives four incorrect messages from checks in the set .hence , if a decision is made after the first iteration , the decoder is successful .* subgraph 4 * : the variables in induce the subgraph shown in fig .[ figure4d ] . at the end of the first iteration : for no , as it this would introduce a four - cycle .for any , only if .this implies that has three checks in the set .let be the set of such variables .we now have the following lemma : there can be at most two variables in .moreover , there exists no check which is shared by two variables in the set .let .then , the set has at most checks which violates the condition .hence , has at most two variables . next , let two variables share a check . then , the set has at most 11 checks which violates the condition .let be the set of checks which are connected to variables in .in the second iteration we have : for no , , for such a structure can not exist without creating a four - cycle or violating one of and conditions . for any , only if .this implies that has three neighbors in the set .let be the set of such variables .we have the following lemma : for the sets and , the following are true : + ( i ) if , then is empty .+ ( ii ) if , then . + ( iii ) .we prove the lemma part by part .+ ( i ) suppose and that is not empty .let .then , the set is of size 6 and has at most checks which violates the condition .+ ( ii ) suppose .let .if is empty , then v31 is connected to three checks in .this is not possible as .next , suppose that .let .then has at most checks which violates the condition .+ ( iii ) suppose . by ( ii ) , there exists a variable .then , has at most checks which violates the condition .suppose .denote the fourth check of by .then , we have at the beginning of the third iteration : at the end of the fourth iteration , for . thus ,if a decision is made at the end of this iteration , all are decoded correctly .now we prove the following lemma : there exists no such that .suppose that is empty and that there exists a variable such that .if is empty , then , is connected to four checks in .this is not possible as it would cause a four - cycle .if is not empty , then is connected to four checks in .then , we would have . however , from above , no such variable exists . next , suppose that and that . then, c31 is the only check such that and .it follows then that for any , if , then is connected to c31 . 
also , it is connected to three checks in the set .then the set of variables has at most checks .this violates the condition .hence , if a decision is made after the third iteration , the decoder is successful .* subgraph 5 * : the variables in induce the subgraph shown in fig . [ figure4e ] .for all , .there exist no that receive three incorrect messages , for the existence of such a variable would violate the condition .hence , for all , .let be the set of variables that have two checks in the set .let be the remaining two checks of these variables .at the beginning of the fourth iteration , the decoder switches to the gallager - b mode . then : at the end of the fourth iteration , for .moreover , for no , , as the existence of such a variable would either induce a four - cycle or violate one of the , , or conditions ( the arguments used are similar to the ones used for subgraph 2 and subgraph 4 ) . hence ,if a decision is made at the end of the fourth iteration , the decoder is successful .now we prove the necessity of the conditions of the theorem .we prove this by giving subgraphs which violate _ one _ condition and are not successfully decoded in four iterations .since the validity of these claims can be checked easily , a detailed proof is omitted .* necessity of the condition * consider the subgraph shown in fig .[ figure4_10_subgraph ] . in this case, the condition is not satisfied and the errors are not corrected at the end of the fourth iteration .hence , in order to guarantee the correction of three errors in four iterations , the condition must be satisfied .subgraph[figure4_10_subgraph],scaledwidth=50.0% ] * necessity of the condition * there exists no graph of girth six which satisfies the condition but does not satisfy the condition .* necessity of the condition * consider the graph shown in fig .[ figure6_13_subgraph ] . the graph shown satisfies the and the conditions but not the condition .the errors are not corrected in four iterations .hence , in order to guarantee the correction of three errors in four iterations , the condition must be satisfied .subgraph[figure6_13_subgraph],scaledwidth=50.0% ] * necessity of the condition * consider the graph shown in fig . [ figure7_15_subgraph ] . the graph shown satisfies the probability , and the conditions but not the condition .the errors are not corrected at the end of the fourth iteration .hence , in order to guarantee the correction three errors in four iterations , the condition must be satisfied .subgraph[figure7_15_subgraph],scaledwidth=50.0% ] * necessity of the condition * consider the graph shown in fig .[ figure8_17_subgraph ] . the graph shown satisfies the , , and the condition but not the condition .the errors are not corrected at the end of the fourth iteration . hence , in order to guarantee the correction of three errors in four iterations , the condition must be satisfied .subgraph[figure8_17_subgraph],scaledwidth=50.0% ] in this section , we proved necessary and sufficient conditions to guarantee the correction of three errors in column - weight - four codes using an iterative decoding algorithm . by analyzing the messages being passed in subsequent iterations , it may be possible to get smaller bounds on the number of check nodes required in the `` small '' subgraphshowever , we hypothesize that the size of subgraphs to be avoided would be larger .in this section , we describe a technique to construct codes with column - weight three and four which can correct three errors . 
codes capable of correcting a fixed number of errors show superior performance on the bsc at low values of transition probability .this is because the slope of the fer curve is related to the minimum critical number . a code which can correct errors has minimum critical number at least and the slope of the fer curve is .we restate the arguments from to make this connection clear .let be the transition probability of a bsc and be the number of configurations of received bits for which channel errors lead to codeword ( frame ) error .the frame error rate ( fer ) is given by : where is the minimal number of channel errors that can lead to a decoding error and is length of the code . on a semi - log scalethe fer is given by for small , the expression above is dominated by the first two terms .that is , the vs. graph is close to a straight line with slope equal to the minimal critical number . if two codes and have minimum critical numbers and such that then the code will perform better than for small enough independent of the number of trapping sets . from the discussion in above and in section [ section3 ] , it is clear that for a code to have an fer curve with slope at least , the corresponding tanner graph should not contain the trapping sets shown in fig .[ trappingsets ] as subgraphs .we now describe a method to construct such codes .the method can be seen as a modification of the peg construction technique used by hu _the algorithm is detailed below as algorithm [ algorithm:1 ] .[ algorithm:1 ] note that checking for a graph isomorphic to trapping set is computationally complex .since , the peg construction empirically gives good codes , it is unlikely that it introduces a weight - eight codeword .however , once the graph is grown fully , it can be checked for the presence of weight - eight codewords and these can be removed by swapping few edges . using the above algorithm ,a column - weight - three code with variable nodes and check nodes was constructed .the code has slight irregularity in check degree .there is one check node degree five and one check node with degree seven , but the remaining have degree six .the code has rate 0.5 . in the algorithm, we restrict maximum check degree to seven .the performance of the code on bsc is compared with the peg code of same length .the peg code is empirically the best known code at that length on awgn channel .however , it has fourteen trapping sets .[ pegnewvsold ] shows the performance comparison of the two codes .as can be seen , the new code performs better than the original peg code at small values of ., scaledwidth=60.0% ] unlike column - weight - three codes , the construction of column - weight - four codes involves ensuring certain expansion on subsets of variable nodes .this can be done only in time which grows exponentially with the length of the code .hence , we consider the condition rather than the necessary and sufficient conditions discussed in section [ section4 ] . 
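Before turning to the column-weight-four case, here is a heavily simplified sketch of the edge-growth-with-rejection idea behind the modified PEG construction referred to above (algorithm [algorithm:1]). The candidate ranking (farthest check first, then lowest check degree), the BFS distance routine, and the caller-supplied `forbidden` predicate standing in for the trapping-set and cycle checks are all assumptions of this sketch, not the paper's actual algorithm.

```python
from collections import deque

def check_distances(var_nbrs, chk_nbrs, v):
    """BFS graph distance from variable node v to every reachable check node."""
    dist = {c: 1 for c in var_nbrs[v]}
    queue = deque(var_nbrs[v])
    while queue:
        c = queue.popleft()
        for u in chk_nbrs[c]:           # variables adjacent to check c
            for c2 in var_nbrs[u]:      # checks two more hops away
                if c2 not in dist:
                    dist[c2] = dist[c] + 2
                    queue.append(c2)
    return dist

def grow_graph(n_vars, n_chks, dv, forbidden):
    """PEG-style growth; forbidden(var_nbrs, chk_nbrs, v, c) rejects bad edges."""
    var_nbrs = [set() for _ in range(n_vars)]
    chk_nbrs = [set() for _ in range(n_chks)]
    for v in range(n_vars):
        for _ in range(dv):
            dist = check_distances(var_nbrs, chk_nbrs, v)
            # unreachable checks are treated as infinitely far; prefer far, low-degree
            ranked = sorted((c for c in range(n_chks) if c not in var_nbrs[v]),
                            key=lambda c: (-dist.get(c, 10**9), len(chk_nbrs[c])))
            for c in ranked:
                if not forbidden(var_nbrs, chk_nbrs, v, c):
                    var_nbrs[v].add(c)
                    chk_nbrs[c].add(v)
                    break
            # if every candidate was rejected, v ends up short of degree dv
    return var_nbrs, chk_nbrs
```

A `forbidden` predicate that always returns False reduces this to plain PEG-style growth; the construction described above additionally rejects edges completing the trapping sets of fig. [trappingsets] and, as noted, checks for weight-eight codewords only after the graph is fully grown.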
it can be shown that the condition is sufficient for the , , , and the conditions .there are only two graphs of girth with variable nodes and check nodes[ figure:4_11 ] shows these two graphs .avoiding these two subgraphs will ensure a code which can correct three errors .an algorithm for the construction of such codes is similar to the modified peg algorithm given in algorithm [ algorithm:1 ] .this algorithm was used to generate a code of length , girth and rate .the code constructed has a slight irregularity in that three check nodes have degree nine and three have degree seven . _remark : _ for the code parameters given above , it was possible to generate a code which satisfied the condition . however , it might not be possible to satisfy this condition for codes with higher rate and/or shorter lengths .should such a scenario arise , the set of subgraphs to be avoided should be changed ( _ e.g. _ , to those specified in the necessary and sufficient conditions ) .however , the code construction time will be larger .hence , at the cost of code - construction time and complexity , it is possible to achieve shorter lengths and/or higher rates .[ figure:4_11_subgraph_1 ] variable nodes and check nodes .subgraphs with variable nodes and fewer than check nodes do not exist.[figure:4_11 ] , title="fig:",scaledwidth=40.0% ] [ figure:4_11_subgraph_2 ] variable nodes and check nodes .subgraphs with variable nodes and fewer than check nodes do not exist.[figure:4_11 ] , title="fig:",scaledwidth=40.0% ] fig .[ figure : colwtfourplot ] shows the performance of the code under message - passing decoding .the curve on the left corresponds to four iterations of message - passing .the curve in the right corresponds to iterations of message - passing . after only four iterations , errors of weight four and abovewere encountered which were not corrected by the message - passing decoder . however , after iterations , the smallest weight error pattern still remaining had a weight of .we note that the average slope of the fer curve is which is the weight of the dominant error event at these probabilities of error .this suggests that analysis over a higher number of iterations and on `` larger '' subgraph search will yield a stronger result .however , this is beyond the scope of this paper .also , it is worth noting that the conditions of theorem [ theorem:4:1 ] avoid codewords of length through which improves the minimum distance of the code . ,in this paper , we provided a method to derive conditions that guarantee the correction of a finite number of errors by hard - decision decoding .although more involved than the expander arguments used in previous works , it results in better bounds .moreover , in contrast to previous expansion arguments , our results give rise to code - construction techniques that yield codes with guaranteed error - correction ability under message - massing decoding at _ practically feasible lengths_. this method can be applied to ( a ) provide conditions for guaranteed correction of a larger number of errors , ( b ) yield similar results for higher column - weights and/or higher girths . however, such applications would be more involved than the analysis done in this work .d. j. c. mackay and m. j. postol , `` weaknesses of margulis and ramanujan margulis low - density parity - check codes , '' in _ proc . 
of mfcsit 2002, galway_, ser. electronic notes in theoretical computer science, vol. 74. elsevier, 2003. [online]. available: http://www.inference.phy.cam.ac.uk/mackay/abstracts/margulis.html
s. k. chilappagari, s. sankaranarayanan, and b. vasic, ``error floors of ldpc codes on the binary symmetric channel,'' in _proc. of international conference on communications_, vol. 3, june 11-15, 2006, pp. 1089-1094.
y. kou, s. lin, and m. p. c. fossorier, ``low-density parity-check codes based on finite geometries: a rediscovery and new results,'' _ieee trans. inform. theory_, vol. 47, no. 7, pp. 2711-2736, nov. 2001.
m. ivkovic, s. k. chilappagari, and b. vasic, ``eliminating trapping sets in low-density parity-check codes by using tanner graph covers,'' _ieee trans. inf. theory_, vol. 54, no. 8, pp. 3763-3768, 2008. [online]. available: http://dx.doi.org/10.1109/tit.2008.926319
in this paper , we give necessary and sufficient conditions for low - density parity - check ( ldpc ) codes with column - weight three to correct three errors when decoded using hard - decision message - passing decoding . additionally , we give necessary and sufficient conditions for column - weight - four codes to correct three errors in four iterations of hard - decision message - passing decoding . we then give a construction technique which results in codes satisfying these conditions . we also provide numerical assessment of code performance via simulation results . submitted to ieee transactions on information theory , october 2008
astronomical polarimetry requires frequent calibration operations to remove the instrumental polarization introduced by the various optical elements encountered by the light beam along its path .this contamination is usually determined by placing calibration optics early in the light path , which is used to feed light in a known state of polarization into the instrument . by measuring the polarization of the light that comes out of the system , it is possible to characterize it in terms of its jones matrix ( or , alternatively , the mueller matrix if one is using the stokes formalism ) .obviously , we can only characterize and calibrate the optical elements that the beam encounters _ after _ the polarization calibration optics .therefore , calibrating a telescope requires placing such optics at the telescope entrance , before the first reflection on the primary mirror ( m1 ) occurs .this approach is being successfully employed for the dunn solar telescope ( at the sacramento peak observatory , managed by the national solar observatory ) and the german vtt on the island of tenerife ( at the observatorio del teide of the instituto de astrof ' isica de canarias ) . in both cases ,an array of linear polarizers and retarders are slided in the light path , on top of the entrance window , for calibration operations .separate calibrations are obtained for the telescope and the instrument , so that the former does not need to be done as frequently .the jones matrix of the complete system is then obtained as the product of the telescope and instrument matrices .unfortunately , entrance window polarizers are not practical for apertures larger than m diameter . in the pastthis has not been a major concern because : a)solar telescopes have apertures that do not exceed 1 m ; and b)large astronomical telescopes have not been used for polarization measurements . 
however , this scenario is starting to change .polarimetry is proving to be a very powerful tool to explore a broad range of astrophysical problems , resulting in a rapidly increasing interest to develop polarimeters for existing large telescopes .second , the development of the advanced technology solar telescope ( atst) demands a reliable method to calibrate a large telescope for high accuracy spectro - polarimetry .solar telescopes have been calibrated in the past by observing magnetic structures and making assumptions on the underlying physics .this poses important challenges , however , especially when pushing the envolope towards new observational domains .consider for example the atst , which is intended to do polarimetry at the accuracy level .one of the common assumptions that is usually made in solar polarimetry is to consider that the continuum radiation is unpolarized .however , scattering processes can polarize the continuum and generate signals of the order of % in the blue side of the visible spectrum .it has been stated that `` the direct observation of the polarisation of the continuous radiation still is a major outstanding observational challenge '' .considerations on the symmetry properties of stokes profiles are not appropriate either .gradients in the line - of - sight velocity or magnetic field introduce spurious asymmetries that can invalidate these assumptions .furthermore , some physical processes operating in the atoms are known to induce stokes asymmetries even in the absence of gradients .spectral lines forming in the incomplete paschen - back regime become asymmetric .moreover , the alignment - to - orientation conversion mechanism may also cause profile asymmetries . in summary , it is very important to have an absolute calibration that does not rely on preconceived ideas on the objects under study .this is specially true when the instrumentation is being used to explore new scientific realms .the present work has been motivated by the challenge of calibrating the 4-meter primary mirror of the atst to meet its very stringet polarization requirements .it might also be possible to use the calibration method proposed here in other existing large - aperture telescopes .however , the actual design of a practical implementation is beyond the scope of this paper .the main point of this work is to show that a calibration setup with inclined beam incidence can be used to measure the polarization properties of a large - aperture primary mirror .geometrical effects can be calculated and removed from the measured jones or mueller matrices resulting in a good approximation to such matrices in normal observing conditions .in this paper we shall consider two different configurations : the normal observing setup ( os ) and the calibration setup ( cs ) proposed here .fig [ fig : setups ] shows a schematic representation of the os and cs for an on - axis m1 mirror .our ultimate goal is to determine the jones matrix of m1 in the os ( ) .a direct measurement of would require polarization optics of the same diameter as the telescope aperture , which is not practical due to technical difficulties .the matrix , on the other hand , can be determined by mounting appropriate calibration optics ( and a mechanical control system ) at a height over m1 ( as shown in fig [ fig : setups ] , right ) .a small aperture on the dome is probably an ideal location for it .the cs requires some ( small ) amount of additional optics with respect to the os .at least two lenses are required : one at 
the entrance to open the beam and another at the detector for imaging .these additional elements should be designed with stringent polarization requirements .aberrations , chromatism and other image imperfections can be tolerated in these components , which will only be used for calibration .even if they introduce some residual polarization , this should be easily measurable in the laboratory and removed from the m1 jones matrix .the size of the calibration optics affects the accuracy of the calibration . in the limit where it fills the entire telescope aperture , the cs and the os are identical andno correction is needed . as the calibration aperture becomes smaller , the incidence angles increase resulting in larger corrections ( and errors ) .the simulations presented in this paper consider the pessimistic limit in which the calibration optics has a diameter approaching zero ( a pinhole aperture ) .for the calculations in this paper it is convenient to use cylindrical coordinates with the vertical axis along the propagation direction of the incident light beam .the radial coordinate is the distance from the center of the aperture and is the azimuth angle measured from an arbitrary reference .any given point on the surface of the m1 mirror at coordinates ( , ) is characterized by its complex refraction index .we shall consider here the behavior of a monochromatic plane wave .this will allow us to describe the instrumental polarization of m1 in terms of its jones matrix .appendix [ sec : appendix ] gives the mueller equivalent of the most important jones matrices derived in this work .locally , the behavior of m1 in the vicinity of ( , ) may be approximated by a reflection on a flat mirror of homogeneous refraction index .this process adopts a very simple form in the reference system of the plane of incidence ( the plane formed by the incoming ray and the surface normal , see fig [ fig : angles ] ) .let us denote the components in the plane of incidence with the subindex and those perpendicular to it with . in this frame , the jones matrix of the reflection , , is simply : assuming that the refraction index of air is 1 , and are given by : where is the angle of incidence .this matrix can be transformed to the global reference frame ( , ) : where is the usual rotation matrix . writing down explicitly : an axi - symmetric mirror illuminated by a collimated beam ( os ) .the angle of incidence is constant along concentric rings in the mirror ( ) .for example , a parabolic mirror of focal length is characterized by the condition : suppose that the complex refraction index is constant over the surface of the mirror ( perfect mirror ) . in this case , and are only functions of ( because ) .the dependences of in eq ( [ explicit ] ) are easily separable .the jones matrix of a thin ring of radius is simply : eq ( [ ring ] ) represents the jones matrix of a non - polarizing system ( identical reflectivity and retardance for both components of the electric field ) .this is a well - known property of axi - symmetric systems .notice , however , that the symmetry is broken if one observes away from the center of the field of view .imperfections in the mirror ( irregularities in the refraction index caused by coating degradation , dust , etc ) may also break the symmetry of the system and introduce instrumental polarization .let us now turn to the more general case of an imperfect mirror , defined as one with .if this is the case then and vary across the mirror and eq ( [ ring ] ) is no longer valid . 
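A hedged numerical sketch of the local reflection and the ring average discussed above. fresnel_rp_rs gives the standard Fresnel amplitude reflection coefficients for an air/metal interface of complex index n_t at incidence angle theta; local_jones rotates the plane-of-incidence Jones matrix to the global frame; ring_jones averages it over a thin ring of a perfect, axisymmetric mirror. The function names, the parabola relation tan(theta) = rho/(2 f), the sign conventions, and the silver-like index value are assumptions of this sketch, not taken from the paper's equations.

```python
import numpy as np

def fresnel_rp_rs(n_t, theta):
    """Fresnel reflection coefficients (parallel, perpendicular) at an air/metal surface."""
    c, s = np.cos(theta), np.sin(theta)
    root = np.sqrt(n_t**2 - s**2 + 0j)
    r_p = (n_t**2 * c - root) / (n_t**2 * c + root)
    r_s = (c - root) / (c + root)
    return r_p, r_s

def local_jones(n_t, theta, phi):
    """Jones matrix of one surface element at azimuth phi, in the global frame."""
    r_p, r_s = fresnel_rp_rs(n_t, theta)
    rot = np.array([[np.cos(phi), np.sin(phi)],
                    [-np.sin(phi), np.cos(phi)]])
    return rot.T @ np.diag([r_p, r_s]) @ rot

def ring_jones(n_t, theta, n_phi=360):
    """Azimuthal average over a ring of constant incidence angle."""
    phis = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    return sum(local_jones(n_t, theta, phi) for phi in phis) / n_phi

# parabolic mirror of focal length f, collimated on-axis beam (assumed geometry):
# incidence angle at radius rho taken to satisfy tan(theta) = rho / (2 f)
f, rho, n_ag = 8.0, 2.0, 0.05 + 3.3j          # illustrative numbers, silver-like index
J_ring = ring_jones(n_ag, np.arctan(rho / (2 * f)))
print(np.round(J_ring, 4))                     # close to (r_p + r_s)/2 times the identity
```

Averaging the local matrix over a ring of constant incidence angle cancels the off-diagonal terms, consistent with the statement above that an axi-symmetric perfect mirror is non-polarizing.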
we seek to determine a suitable calibration by means of the ( measurable ) matrix .the differences between and are due to the different incidence angle of the beam ( represented in fig [ fig : angles ] ) .we can expand in a power series of as : and similarly for . inserting this expansion into eq ( [ explicit ] )we obtain : where and have been introduced for notational simplicity : ( and similarly for ) . in the equations above , , and all functions of .the angle can be easily determined from geometrical considerations .however , and are affected by imperfections in the mirror that change over time .let us separate into two components : a nominal derived from eqs ( [ rp ] ) and ( [ dp ] ) with a theoretical refraction index ( e.g. , from manufacturer specifications ) , and an unknown due to coating degradation , dust accumulation , etc : ( and similarly for ) .inserting this into eq ( [ expandm ] ) and integrating over the entire mirror surface , we have : where is the radius of the m1 mirror . can be calculated numerically as the integral of : mirror imperfections are accounted for by the ( measured ) , whereas non - collimated incidence is accounted for by the ( calculated ) . is an unknown second - order term that couples mirror imperfections and non - collimated incidence .this term is small ( as shown below ) and may be neglected for our purposes here .the number of terms to retain in the taylor expansion of eq ( [ deltam ] ) depends on the particular telescope configuration and the accuracy required .typical examples are presented below in which can be neglected entirely ( on - axis mirror ) or needs to be calculated up to second order ( off - axis mirror , see [ sec : offaxis ] ) . in the reminder of this sectioni present the results of numerical simulations that provide some insight into the various terms that are involved in the calibration procedure .the parameters of the simulation are listed in table [ table : onaxis ] .they represent a 4-m on - axis telescope with a silver coating on the m1 mirror .the coating has been degraded so that the complex refraction index fluctuates over the mirror surface as ] .furthermore , we do not know exactly the average refraction index of the mirror , but only an approximation . this approximate value will be used in the calculation of and ( eqs [ [ splitd ] ] and [ [ deltam ] ] ) .i carried out several experiments with different values of and to determine the sensitivity of the calibration to these parameters .some of the calculations include only the first - order dependence of on ( first term in the right - hand side of eq [ [ deltam ] ] ) .these are denoted by ( as opposed to , which considers the second - order term ) .table [ table : offaxis2 ] lists the results of these experiments .the first two rows give the highest polarizing term in for each simulation . the third to fifth rowsshow how the calibration error decreases with successive levels of approximation . 
by comparing the second and fifth rows, we can see that the calibration is able to reduce the instrumental polarization by an amount between one and two orders of magnitude .this paper introduces a new concept to calibrate telescopes for astronomical polarimetry .the proposed method is particularly useful for modern large - aperture telescopes , for which it is probably the only practical procedure ( at least for purely instrumental calibration ) .an accurate absolute calibration will be crucial for the new weak - signal science that the atst will open .existing night - time telescopes may also take advantage of this calibration procedure .the jones ( and mueller ) matrix of an on - axis mirror is almost unaffected by the non - collimated incidence of the beam in the cs . for the particular configuration considered in [ sec : onaxis ] , the jones matrix obtained from the calibration is good to almost with no need for any additional correction ( see eq [ [ onmcs ] ] ) .this is in spite of the relatively large mirror imperfections in the simulation , which generate polarizing terms of the order of 6% .an off - axis mirror suffers more instrumental polarization due to the asymmetric configuration .a mirror that produces instrumental polarization of a few percent can be calibrated to reach the level ( see table [ table : offaxis2 ] ) .in addition to measuring the jones matrix of the calibration setup ( ) , it is also necessary to calculate the correction term .this calculation is straightforward , though , and does not need to be recalculated unless the mirror is recoated or it degrades to a point where its _ average _ refraction index changes significantly .note that the calibration accuracy includes some uncertainty on the average refraction index of the mirror .i have used in this paper the jones matrix formalism , which deals directly with the components of the electric field of the light wave . sometimes , however , the mueller formalism is more adequate , especially when dealing with partially polarized or non - monochromatic light .many researchers are more familiar with the mueller matrices and the stokes parameters . for these reasonsit is probably useful to provide the mueller equivalent of the jones matrices derived in this work .eq ( [ onmos ] ) becomes : eq ( [ onmcs ] ) becomes : eq ( [ offmos ] ) becomes : eq ( [ offmcs ] ) becomes : eq ( [ offmcsdm ] ) becomes : work utilizes data from the advanced technology solar telescope ( atst ) project , managed by the national solar observatory , which is operated by aura , inc . under a cooperative agreement with the national science foundation .a. gandorfer , `` the second solar spectrum in the ultraviolet , '' in _ solar polarization workshop 3 _ , vol 307 of astronomical society of the pacific conference series , j. trujillo bueno and j. snchez almeida , eds . , asp , san francisco , california ) , p. 399( 2003 ) . h. socas - navarro , j. trujillo bueno , and e. landi deglinnocenti , `` `` signatures of incomplete paschen - back splitting in the polarization profiles of the hei 10830 multiplet '' , '' * _ in press _ * ( 2004 ) .e. landi deglinnocenti , in _ astrophysical spectropolarimetry _ , j. trujillo bueno , f. moreno - insertis , and f. s ' anchez , eds . , xii canary islands winter school of astrophysics , p. 
1 (cambridge university press, cambridge, uk, 2002).

[rows recovered from table [table:offaxis2]: maximum calibration residuals for each of the four off-axis simulations]
max(|m^os - m^cs|)          : 1.25 x 10^-2   1.25 x 10^-2   1.26 x 10^-2   1.26 x 10^-2
max(|m^os - (m^cs + m^*)|)  : 4.04 x 10^-3   3.89 x 10^-3   3.71 x 10^-3   2.59 x 10^-3
max(|m^os - (m^cs + m)|)    : 2.84 x 10^-4   4.18 x 10^-4   7.11 x 10^-4   1.80 x 10^-3
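The residuals tabulated above are elements of Mueller matrices. For completeness, a sketch of the standard Jones-to-Mueller conversion that underlies the Mueller equivalents listed in the appendix; the Stokes ordering (I, Q, U, V), the overall sign convention, and the example reflectivities are assumptions of this sketch.

```python
import numpy as np

# standard 4x4 transformation between the Jones and Mueller descriptions
A = np.array([[1, 0, 0, 1],
              [1, 0, 0, -1],
              [0, 1, 1, 0],
              [0, 1j, -1j, 0]], dtype=complex)
A_inv = A.conj().T / 2.0          # A is unitary up to a factor of sqrt(2)

def jones_to_mueller(J):
    """Mueller matrix M = A (J kron J*) A^{-1} of a deterministic element J."""
    M = A @ np.kron(J, np.conj(J)) @ A_inv
    return M.real                 # imaginary part vanishes up to round-off

# example: a reflection with slightly different (illustrative) r_p and r_s
print(np.round(jones_to_mueller(np.diag([-0.95, -0.90])), 4))
```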
this paper describes a concept for the high - accuracy absolute calibration of the instrumental polarization introduced by the primary mirror of a large - aperture telescope . this procedure requires a small aperture with polarization calibration optics ( e.g. , mounted on the dome ) followed by a lens that opens the beam to illuminate the entire surface of the mirror . the jones matrix corresponding to this calibration setup ( with a diverging incident beam ) is related to that of the normal observing setup ( with a collimated incident beam ) by an approximate correction term . numerical models of parabolic on - axis and off - axis mirrors with surface imperfections are used to explore its accuracy .
we report here on an important step towards the ultimate goal of constructing numerical relativity codes that calculate accurately in 3d the gravitational radiation at future null infinity . by `` accurately ''we mean ( at least ) second - order convergent to the true analytic solution of a well posed initial value problem .thus our goal is to provide an accurate and unambiguous computational map from initial data to gravitational waveforms at infinity . of course, uncertainties will always exist in the appropriate initial data for any realistic astrophysical system ( e.g. in a binary neutron star system , the data for the metric components would not be uniquely determined by observations ) .but such a computational map enables focusing on the underlying physics in a rigorous way .most relativity codes are second - order convergent , but because of boundary problems the convergence may not be to the true analytic solution of the intended physical problem . in order to explain this point , and to give the idea behind our method , we first briefly review some aspects of numerical relativity . the predominant work in numerical relativityis for the cauchy `` 3 + 1 '' problem , in which spacetime is foliated into a sequence of spacelike hypersurfaces .these hypersurfaces are necessarily of finite size so , in the usual case where space is infinite , an outer boundary with an artificial boundary condition must be introduced .this is the first source of error because of artificial effects such as the reflection of outgoing waves by the boundary .next , the gravitational radiation is estimated from its form inside the boundary by using perturbative methods , which ignore the nonlinear aspects of general relativity in the region outside the boundary .for these reasons the numerical estimate of gravitational radiation is not , in general , convergent to the true analytic value at future null infinity .the radiation properties of the robinson - trautman metric will be used to illustrate this effect .an alternative approach in numerical relativity uses the characteristic formalism , in which spacetime is foliated into a sequence of null cones emanating from a central geodesic .this approach has the advantage that the einstein equations can be compactified so that future null infinity is rigorously represented on a finite grid , and there is no artificial outer boundary condition .however , it suffers from the disadvantage that the coordinates are based on light rays , which can be focussed by a strong field to form caustics which complicate a numerical computation .also , to date , the characteristic initial value problem has only been implemented numerically for special symmetries .our ultimate goal is a 3d cauchy - characteristic matching ( ccm ) code , which uses the positive features of the two methods while avoiding the problems .more precisely , the interior of a timelike worldtube is evolved by a cauchy method , and the exterior to future null infinity is evolved using a characteristic algorithm ; boundary conditions at are replaced by a two - way flow of information across . 
in relativity , under the assumption of axisymmetry without rotation ,there has been a feasibility study of ccm ; see also .ccm has been successfully implemented for nonlinear wave equations and demonstrated to be second - order convergent to the true analytic solution ( which is not true in a pure cauchy formulation with sommerfeld outer boundary condition ) .while ccm has aesthetic advantages , it is important to ask whether it is an efficient approach .the question can be posed as follows . for a given target error ,what is the amount of computation required for ccm compared to that required for a pure cauchy calculation ? it will be shown that the ratio tends to as , so that in the limit of high accuracy the effort is definitely worthwhile .our first step towards ccm is cauchy - characteristic extraction ( cce ) and we will present a partial implementation of cce in this paper .the idea of cce is to run a pure cauchy evolution with an approximate outer boundary condition .a worldtube is defined in the interior of the cauchy domain , and the appropriate characteristic data is calculated on ; then characteristic algorithms are used to propagate this gravitational field to future null infinity .cce is simpler than ccm to implement numerically , because in cce the data flow is one - way ( cauchy to characteristic ) whereas in ccm the data flows in both directions .note that the advantage of computational efficiency applies only to ccm and not to cce .however , we will show that the advantage of second - order convergence to the true analytic solution does apply , under certain circumstances , to cce .the work in this paper is part of the binary black hole grand challenge , which is concerned with the gravitational radiation resulting from the in - spiral and coalescence of two arbitrary black holes .however , the methods described here are not limited to black hole coalescence and could be applied to gravitational radiation from any isolated system , either with or without matter . in sec .[ sec:2 ] , we present a formalism for 3d characteristic numerical relativity in which the coordinates are based on null cones that emanate from a timelike worldtube ( recall that existing codes are in 2d with null cones emanating from a timelike geodesic ) .the characteristic einstein equations are written as a sum of two parts : quasispherical ( in a sense defined below ) plus nonlinear .the discretization and compactification of the einstein equations , with the nonlinear part ignored , is discussed in sec .[ sec:3 ] . 
a computer code has been written and in sec .[ sec:4 ] this code is tested on linearized solutions of the einstein equations , and extraction is tested on the nonlinear robinson - trautman solutions .the robinson - trautman solutions are also used to investigate the error of perturbative methods in estimating the gravitational radiation at null infinity .[ sec:5 ] uses the formalism developed in sec .[ sec:2 ] to estimate the errors associated with the finite boundary in a pure cauchy computation .this leads to the result concerning computational efficiency of ccm stated above .in the conclusion we discuss the further steps needed for a full implementation of cce , and also of ccm , and investigate under what circumstances cce can provide second - order convergence to the true analytic solution at future null infinity .we finish with appendices on the null cone version of gauge freedom and linear solutions of the einstein equations , and on a stability analysis of our algorithm .this is the first step towards a 3d characteristic evolution algorithm for the fully nonlinear vacuum einstein equations . herewe treat the quasispherical case , where effects which are nonlinear in the asymmetry can be ignored .thus the schwarzschild metric is treated exactly in this formalism .however , rather than developing an algorithm for the linearized equations on a given schwarzschild background , we will approach this problem in a mathematically different way .we adopt a metric based approach in which each component of einstein s equation has ( i ) some quasispherical terms which survive in the case of spherical symmetry and ( ii ) other terms which are quadratic in the asymmetry , i.e terms of where measures deviation from spherical symmetry .we will treat the quasispherical terms to full nonlinear accuracy while discarding the quadratically asymmetric terms .for example , if were a scalar function we would make the approximation although this breakup is not unique , once made it serves two useful purposes .first , the resulting field equations are physically equivalent to the linearized einstein equations in the quasispherical regime .( in the exterior vacuum region , the spherical background must of course be geometrically schwarzschild but the quasispherical formalism maintains arbitrary gauge freedom in matching to an interior solution ) .second , the resulting quasispherical evolution algorithm supplies a building block which can be readily expanded into a fully nonlinear algorithm by simply inserting the quadratically asymmetric terms in the full einstein equations .we use coordinates based upon a family of outgoing null hypersurfaces .we let label these hypersurfaces , ( ) , be labels for the null rays and be a surface area distance . in the resulting coordinates , the metric takes the bondi - sachs form where and , with a unit sphere metric . later , for purposes of including null infinity as a finite grid point , we introduce a compactified radial coordinate .a schwarzschild geometry is given by the choice , , and . to describe a linear perturbation, we would set and would retain only terms in which were of leading order in the linearization parameter . 
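For reference, a hedged reconstruction of the line element referred to above, in the standard Bondi-Sachs conventions of characteristic numerical relativity (sign and factor placements may differ slightly from the paper's):

\[
ds^{2} \;=\; -\Bigl(e^{2\beta}\,\frac{V}{r} \;-\; r^{2} h_{AB}\,U^{A}U^{B}\Bigr)\,du^{2}
\;-\; 2\,e^{2\beta}\,du\,dr
\;-\; 2\,r^{2} h_{AB}\,U^{B}\,du\,dx^{A}
\;+\; r^{2} h_{AB}\,dx^{A}dx^{B},
\]

with \(\det(h_{AB}) = \det(q_{AB})\), where \(q_{AB}\) is the unit-sphere metric. In this notation the Schwarzschild choice quoted above would correspond to \(\beta = 0\), \(U^{A} = 0\), \(V = r - 2m\) and \(h_{AB} = q_{AB}\).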
herewe take a different approach .we express in terms of a complex dyad ( satisfying , , , with ) .then the dyad component is related to the linearized metric by .in linearized theory , would be a first order quantity .the 2-metric is uniquely determined by , since the determinant condition implies that the remaining dyad component satisfies .refer to for further details , especially how to discretize the covariant derivatives and curvature scalar of a topologically spherical manifold using the calculus . because the 2-metric also specifies the null data for the characteristic initial value problem , this role can be transferred to .terms in einstein equations that depend upon to higher than linear order are quadratically asymmetric .we do not explicitly introduce as a linearization parameter but introduce it where convenient to indicate orders of approximation .the einstein equations decompose into hypersurface equations , evolution equations and conservation laws . in writing the field equations, we follow the formalism given in .we find : where is the covariant derivative and the curvature scalar of the 2-metric . the quasispherical version of ( [ eq : beta ] ) follows immediately from rewriting it as , where is quadratically asymmetric .this defines the quasispherical equation thus in this approximation , . for a family of outgoing null cones which emanate from a nonsingular geodesic worldline , we could choose coordinate conditions so that .similarly , in minkowski space , we could set for null hypersurfaces which emanate from a non - accelerating spherical worldtube of constant radius . in a schwarzschild spacetime , due to red shift effects , need not vanish even on a spherically symmetric worldtube .thus represents some physical information as well as asymmetric gauge freedom in the choice of coordinates and choice of world tube .we wish to apply the same procedure to equations ( [ eq : u ] ) and ( [ eq : v ] ) . in doing so , it is useful to introduce the tensor field which represents the difference between the connection and the unit sphere connection , e.g. . in solving for , we use the intermediate variable then ( [ eq : u ] ) reduces to the first order radial equations we deal with these equations in terms of the spin - weighted fields and . to obtain quasispherical versions of these equations ,we rewrite ( [ eq : qa ] ) and ( [ eq : ua ] ) as where , \\ n_u & = & r^{-2}e^{2\beta}q_{a } \left(h^{ab}-q^{ab}\right ) q_b.\end{aligned}\ ] ]the quasispherical versions obtained by setting in ( [ eq : wqa ] ) and in ( [ eq : wua ] ) then take the form in terms of the spin - weighted differential operator .since and are asymmetric of , we use the gauge freedom to ensure that and are . since in minkowski space, we set in terms of a quasispherical variable .then ( [ eq : v ] ) becomes where - \frac{1}{4}r^4 e^{-2\beta } h_{ab } u^a_{,r } u^b_{,r}. \label{eq : nw}\ ] ] we set in ( [ eq : ww ] ) to obtain the quasispherical field equation for . next , by the same procedure , the evolution equations take the form where .\label{eq : nj}\end{aligned}\ ] ] the quasispherical evolution equation follows from ( [ eq : wev ] ) by setting .the remaining independent equations are the conservation conditions . 
for a worldtube given by ,these are given in terms of the einstein tensor by where is any vector field tangent to the worldtube .this expresses conservation of -momentum flowing across the worldtube .these equations simplify when the bondi coordinates are adapted to the worldtube so that the angular coordinates are constant along the streamlines .then on the worldtube and an independent set of conservation equations is given ( in the quasispherical approximation ) in terms of the ricci tensor by in the context of an extraction problem it is assumed that the interior solution satisfies the einstein equations , and therefore that the conservation conditions are automatically satisfied on the extraction worldtube .the above equations define a quasispherical truncation of the vacuum einstein equations .because these quasispherical equations retain some terms which are nonlinear in the asymmetry , their solutions are not necessarily linearized solutions in a schwarzschild background .however , in the perturbative limit off schwarzschild , the linearized solutions to these truncated equations agree with the linearized solutions to the full einstein equations .in this section we describe a numerical implementation , based on second - order accurate finite differences , of the equations presented in sec .[ sec:2 ] .we introduce a compactified radial coordinate , ( with being the extraction radius ) , labeling null rays by the real and imaginary parts of a stereographic coordinate on the sphere , i.e. . the radial coordinate is discretized as for and . here defines a world tube of constant surface area coordinate .the point lies at null infinity .the stereographic grid points are given by and for and .the fields , , and are represented by their values on this rectangular grid , e.g. . however , for stability ( see appendix [ app : stab ] ) , the field is represented by values at the points on a radially staggered grid ( accordingly ) . for the extraction problem , it is assumed that the values of the fields and the radial derivative of are known at the boundary . in the following discussion , it is useful to note that asymptotic flatness implies that the fields , , and are smooth at , future null infinity . in terms of the compactified radial variable ,the quasispherical field equation for reduces to we write all derivatives in centered , second order accurate form and replace the value by its average .the resulting algorithm determines in terms of values of and at the points , and ( here and in what follows , we make explicit only the discretization on the radial direction , and we suppress the angular indices ) . since eq .( [ eq : qdisc ] ) is a 3-point formula , it can not be applied at the second point , however , a suitable formula for is given by where the value of is trivially obtained from the knowledge of at the boundary , and , . after a radial march , the local truncation error compounds to an global error in at . in terms of the compactified radial variable ,the quasispherical field equation for reduces to we again rewrite all derivatives in centered , second order form . because of the staggered placement of , the resulting discretization is the value of at the first point is evaluated from the expansion at the boundary .this leads to an algorithm for determining at the point in terms of values of at the points lying on the same angular ray . 
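to make the radial march concrete, the following python fragment sets up a compactified radial grid and a stereographic patch and carries out a generic three - point, second - order march from the worldtube to null infinity. it is a schematic model only: the grid sizes, the extraction radius and the right - hand side rhs are illustrative assumptions and do not reproduce the actual quasispherical hypersurface equations or their start - up formulas.

import numpy as np

# illustrative grid parameters (assumptions, not the values used in the text)
nx = 101          # radial points on x in [1/2, 1]
nq = 65           # stereographic points per direction
R  = 1.0          # extraction (worldtube) radius

# compactified radial grid: x = 1/2 at the worldtube, x = 1 at null infinity
x  = np.linspace(0.5, 1.0, nx)
dx = x[1] - x[0]

# stereographic grid covering one patch, q, p in [-1, 1] (schematic)
q = np.linspace(-1.0, 1.0, nq)
p = np.linspace(-1.0, 1.0, nq)

def rhs(xk, fk):
    """placeholder source term; in the real scheme this would be built from
    beta, U, Q, W and the eth derivatives of the null data."""
    return -2.0 * fk * (1.0 - xk)

def radial_march(f_boundary):
    """generic second-order march of df/dx = rhs(x, f) from the worldtube
    (x = 1/2) out to null infinity (x = 1), using the centered 3-point step
    f_{k+1} = f_{k-1} + 2*dx*rhs(x_k, f_k) with a first-order start-up step,
    in the spirit of the start-up formulas discussed in the text."""
    f = np.empty(nx)
    f[0] = f_boundary
    f[1] = f[0] + dx * rhs(x[0], f[0])        # start-up at the second point
    for k in range(1, nx - 1):
        f[k + 1] = f[k - 1] + 2.0 * dx * rhs(x[k], f[k])
    return f

f_scri = radial_march(f_boundary=1.0)[-1]     # value carried to null infinity

the same marching skeleton is what the start - up formulas and the staggered placement of some of the fields are grafted onto in the scheme described above.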
after completing a radial march , local truncation error compounds to an global error in at .the quasispherical field equation for ( [ eq : ww ] ) , reexpressed in terms of and , is following the same procedure as in eq .( [ eq : qdisc ] ) we obtain we obtain a startup version of the above with the substitutions , , noting that at the boundary is given .the above algorithm has a local error in each zone . in carrying out the radial march ,this leads to error at any given physical point in the uncompactified manifold .however , numerical analysis indicates an error at . in discretizing the evolution equation ,we follow an approach that has proven successful in the axisymmetric case and recast it in terms of the 2-dimensional wave operator \label{eq:2box}\ ] ] corresponding to the line element , \label{eq:2metric}\ ] ] where is the normal to the outgoing null cones and is a null vector normal inwards to the spheres of constant . because the domain of dependence of contains the domain of dependence induced in the submanifold by the full space - time metric ( [ eq : bmet ] ) , this approach does not lead to convergence problems .the quasispherical evolution equation ( [ eq : wev ] ) then reduces to where because all 2-dimensional wave operators are conformally flat , with conformal weight , we can apply to ( [ eq : wev1 ] ) a flat - space identity relating the values of at the corners , , and of a null parallelogram with sides formed by incoming and outgoing radial characteristics . in terms of , this relation leads to an integral form of the evolution equation , the corners of the null parallelogram can not be chosen to lie exactly on the grid because the velocity of light in terms of the coordinate is not constant . numerical analysis and experimentationhas shown that a stable algorithm results by placing this parallelogram so that the sides formed by incoming rays intersect adjacent -hypersurfaces at equal but opposite -displacement from the neighboring grid points .the elementary computational cell consists of the lattice points and on the `` old '' hypersurface and the points , and .the values of at the vertices of the parallelogram are approximated to second order accuracy by linear interpolations between nearest neighbor grid points on the same outgoing characteristic .then , by approximating the integrand by its value at the center of the parallelogram , we have as a result , the discretized version of ( [ eq : wev1 ] ) is given by where is a linear function of the s and angular indexes have been suppressed .consequently , it is possible to move through the interior of the grid computing by an explicit radial march using the fact that the value of on the world tube is known .the above scheme is sufficient for second order accurate evolution in the interior of the radial domain .however , for startup purposes , special care must be taken to handle the second radial point . in determining the strategy ( [ eq : wev2 ] )is easily modified so that just two radial points are needed on the level ; the parallelogram is placed so that and lie precisely on and respectively .note that the calculation of poses no problems , since the values of , , and are known on the worldtube and the value of on the worldtube can be calculated by ( [ eq : ww ] ) . in order to apply this scheme globallywe must also take into account technical problems concerning the order of accuracy for points near .for this purpose , it is convenient to renormalize ( [ eq : wev3 ] ) by introducing the intermediate variable . 
this new variable has the desired feature of finite behavior at . with this substitutionthe evolution equation becomes where all the terms have finite asymptotic value .some of the fundamental issues underlying stability of the evolution algorithm are discussed in appendix [ app : stab ] . we have carried out numerical experiments which confirm that the code is stable , subject to the cfl condition , in the perturbation regime where caustics and horizons do not form .the first set of tests consist of evolving short wavelength initial null data , with all world tube data set to zero . in this case, the world tube effectively acts as a mirror to ingoing gravitational waves .the tests were run until all waves were reflected and radiated away to . in particular , data with was run from from to , corresponding to approximately timesteps , at which time it was checked that the amplitude was decaying . in the second set of tests , we included short wavelength data with amplitude for the boundary values of , , , and on the world tube ( with compact support in time ) as well for the initial data for ( with compact support on the initial null hypersurface ) .again the code was run for approximately timesteps ( from to ) , at which time all fields were decaying exponentially .this test reveals a favorably robust stability of the worldtube initial value problem , since in this case the world tube conservation conditions which guarantee that the exterior evolution be a vacuum einstein solution were not imposed upon the worldtube data .we now present code tests for the accuracy of numerical solutions and their waveforms at infinity .the tests are based upon linearized solutions on a minkowski background and linearized robinson - trautman solutions .these solutions provide testbeds for code calibration as well as consistent worldtube boundary values for an external vacuum solution .in addition , we use numerical solutions of the nonlinear robinson - trautman equation to study the waveform errors introduced by the quasispherical approximation .appendices [ app : gauge ] and [ app : lin ] describe how to generate 3-dimensional linearized solutions on a minkowski background in null cone coordinates and their gauge freedom . to calibrate the accuracy of the code ,we choose a solution of ( [ eq : wave ] ) and ( [ eq : ecr ] ) which represents an outgoing wave with angular momentum of the form where is the -translation operator .the resulting solution is well behaved above the singular light cone .convergence was checked in the linearized regime by choosing initial data of very small amplitude .we used the linearized solution ( [ eq : exacsol ] ) to give data at , with the inner boundary at , and we compared the numerically evolved solution at .the computation was performed on grids of size equal , , and , while keeping .convergence to second order was verified in the , and norms .the robinson - trautman space - times contain a distorted black hole emitting purely outgoing radiation .the metric can be put in the bondi form where ] ; thus the total number of grid points per time - step is it follows that the total amount of computation required for the two methods is : thus the method which requires the least amount of computation is determined by whether or .( because of the assumptions ( 1 ) to ( 4 ) this criterion is not exact but only approximate . 
)as stated earlier , the value of is determined by the physics , specifically by the condition that the nonlinearities outside must be sufficiently weak so as not to induce caustics .the value of is determined by the accuracy condition ( [ eq : ce2 ] ) , and also by the condition that the nonlinearities outside must be sufficiently weak for the existence of a perturbative expansion .thus we never expect to be significantly smaller than , and therefore the computational efficiency of a ccm algorithm is never expected to be significantly worse than that of a we algorithm .if high accuracy is required , the need for computational efficiency always favors ccm .more precisely , for a given desired error , eq s .( [ eq : ce1 ] ) and ( [ eq : ce2 ] ) and assumption ( 2 ) imply thus that this is the crucial result : the computational intensity of ccm relative to that of we goes to zero as the desired error goes to zero .the computer code described in this paper is a partial implementation of cce .that is , given data on an worldtube , the code calculates the gravitational radiation at future null infinity in the quasispherical approximation . a full implementation of cce is currently being developed which addresses the following issues : * the ignored nonlinear terms in the einstein equations must be calculated , discretized and incorporated into the code .* algorithms need to be developed to translate numerical cauchy data near into characteristic data on . * in general be described in terms of cauchy coordinates , and will not be exactly ; the characteristic algorithm needs amendment to allow for this .once a fully nonlinear cce code has been achieved it will be possible , under certain circumstances , to obtain second - order convergence to the true analytic solution at future null infinity .for example , if has radius and the radius of the cauchy domain is ( ) , then causality implies that the gravitational field at will not be contaminated by boundary errors until time , where at the start of the simulation .there is no analytic error in the characteristic computation , so there will be no analytic error in the gravitational radiation at future null infinity for the initial time period ; under some circumstances this may be the time period that is physically interesting .further , this time period may be extended by using results from the characteristic computation to provide the outer boundary condition in the cauchy calculation .this would amount to a partial implementation of ccm since there would be data flow in both cauchy to characteristic , and characteristic to cauchy , directions ( the implementation is only partial because and are very different ) .since the data flow is two - way , the possibility of a numerical instability arises .however , the timescale of the growth of any instability would be , and therefore such a computation could be safely run for a time of several ; the results obtained would be second - order convergent to the true analytic solution .once the technology for cauchy to characteristic , and characteristic to cauchy , data flow across an arbitrary worldtube has been developed , a full implementation of ccm will amount to taking the limit in which the outer boundary approaches the extraction worldtube .we are encouraged to believe that this is feasible , i.e. 
without numerical instability , because ccm _ has _ been achieved for the model problem of the nonlinear 3d scalar wave equation .this work was supported by the binary black hole grand challenge alliance , nsf phy / asc 9318152 ( arpa supplemented ) , and by nsf phy 9510895 to the university of pittsburgh .thanks the south african frd for financial support , and the university of pittsburgh for hospitality during a sabbatical .computer time has been provided by the pittsburgh supercomputing center under grant phy860023p and by the high performance computing facility of the university of texas at austin .given a metric in a bondi null coordinate system , the gauge freedom is subject to the conditions , and .these latter conditions imply the functional dependencies and for a spherically symmetric background metric we drop quadratically asymmetric terms to obtain and ,\ ] ] where and , in terms of a complex scalar field .this gives rise to the following gauge freedom in the metric quantities : and present a 3d generalization of a scheme for generating linearized solutions off a minkowski background in terms of spin - weight 0 quantities and , related to and by and .we may in this approximation choose a gauge in which or otherwise use the gauge freedom to set . in either case , is given by the radial integration of the linearization of ( [ eq : ww ] ) and the remaining linearized equations reduce to and where is the angular momentum operator .now set and then where is the wave operator it follows that suppose now that is a complex solution of the wave equation . then eq .( [ eq;hz ] ) is satisfied as a result of ( [ eq : alpha ] ) and ( [ eq : z])and ( [ eq : ecr ] ) implies .if is smooth and at the origin , this implies , so that the linearized equations are satisfied globally . the condition that eliminates fields with only monopole and dipole dependence so that it does not restrict the generality of the spin - weight 2 function obtained .any global , asymptotically flat linearized solution may be generated this way .alternatively , given a wave solution with possible singularities inside some worldtube , say , we may generate an exterior solution , corresponding to radiation produced by sources within the worldtube , by requiring or this is a constraint on the integration constants obtained in integrating ( [ eq : alpha ] ) and ( [ eq : z ] ) which may be satisfied by taking and this determines an exterior solution in a gauge such that .in the characteristic formulation , the linearized equations form the principle part of the full system of bondi equations . therefore insight into the stability properties of the full evolution algorithmmay be obtained at the linearized level . herewe sketch the von neumann stability analysis of the algorithm for the linearized bondi equations , generalizing a previous treatment given for the axisymmetric case .the analysis is based up freezing the explicit functions of and stereographic coordinate that appear in the equations , so that it is only valid locally for grid sizes satisfying and . however , as is usually the case , the results are quite indicative of the stability of the actual global behavior of the code . 
setting and and freezing the explicit factors of and at and , the linearization of the bondi equations ( [ eq : wq ] ) , ( [ eq : wu ] ) and ( [ eq : wev ] ) takes the form and writing , introducing the fourier modes ( with real , and ) and setting , these equations imply \ ] ] and ,\ ] ] representing damped quasinormal modes .consider now the fde obtained by putting on the grid points and on the staggered points , while using the same stereographic grid and time grid .let , , and be the corner points of the null parallelogram algorithm , placed so that and are at level , and are at level , and so that the line is centered about and is centered about .then , using linear interpolation and centered derivatives and integrals , the null parallelogram algorithm for the frozen version of the linearized equations leads to the fde s ( all at the same time level ) and where represents a centered first derivative . again setting and introducing the discretized fourier modes , we have and , where ] .it is easy to check that this is automatically satisfied . as a result , local stability analysis places no constraints on the algorithm .it may seem surprising that no analogue of a courant - friedrichs - levy ( cfl ) condition arises in this analysis .this can be understood in the following vein .the local structure of the code is implicit , since it involves 3 points at the upper time level .the stability of an implicit algorithm does not necessarily require a cfl condition . however , the algorithm is globally explicit in the way that evolution proceeds by an outward radial march from the origin .it is this feature that necessitates a cfl condition in order to make the numerical and physical domains of dependence consistent . in practicethe code is unstable unless the domain of dependence determined by the characteristics is contained in the numerical domain of dependence .it is important to note that if ( or ) are not discretized on a staggered grid then the above analysis shows the resulting algorithm to be unconditionally unstable regardless of any cfl condition .
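as an illustration of the mechanics of such a von neumann analysis, the python fragment below computes amplification factors for a deliberately simple model problem, first - order upwind differencing of the 1d advection equation, where the cfl restriction appears as the condition that no fourier mode is amplified. this is a toy example chosen only to show the procedure; it does not reproduce the frozen - coefficient analysis of the bondi equations sketched above.

import numpy as np

def max_amplification(nu, ntheta=512):
    """maximum |g(theta)| for the first-order upwind scheme
    u_j^{n+1} = u_j^n - nu*(u_j^n - u_{j-1}^n) applied to u_t + c u_x = 0,
    with courant number nu = c*dt/dx and fourier mode exp(i*j*theta)."""
    theta = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)
    g = 1.0 - nu * (1.0 - np.exp(-1j * theta))
    return np.max(np.abs(g))

for nu in (0.25, 0.5, 0.9, 1.0, 1.1, 1.5):
    gmax = max_amplification(nu)
    print(f"nu = {nu:4.2f}   max |g| = {gmax:6.3f}   "
          f"{'stable' if gmax <= 1.0 + 1e-12 else 'unstable'}")

applied to the frozen - coefficient fde system above, the same procedure leads to the conclusions quoted in the text, including the unconditional instability of a non - staggered placement of the fields.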
we treat the calculation of gravitational radiation using the mixed timelike - null initial value formulation of general relativity . the determination of an exterior radiative solution is based on boundary values on a timelike worldtube and on characteristic data on an outgoing null cone emanating from an initial cross - section of that worldtube . we present the details of a 3-dimensional computational algorithm which evolves this initial data on a numerical grid , which is compactified to include future null infinity as finite grid points . a code implementing this algorithm is calibrated in the quasispherical regime . we consider the application of this procedure to the extraction of waveforms at infinity from an interior cauchy evolution , which provides the boundary data on the worldtube . this is a first step towards cauchy - characteristic matching in which the data flow at the boundary is two - way , with the cauchy and characteristic computations providing exact boundary values for each other . we describe strategies for implementing matching and show that for small target error it is much more computationally efficient than alternative methods .
electric vehicles ( evs ) are emerging as a sustainable and environmentally friendly alternative to conventional vehicles , provided that the energy used for their charging is obtained from renewable energy sources .the energy generated from renewable sources such as sunlight , wind and waves is , however , dependent on weather conditions . as a consequence ,the electricity production from these sources is inherently uncertain in time and quantity .furthermore , electricity has to be produced and consumed at the same time , as the large - scale storage of the energy generated is , still today , very limited . as a result, the energy obtained from renewables may be wasted in times when the demand for electricity is not high enough to absorb it , with a consequent detrimental effect on the profitability of renewables . since the battery in an ev is basically a storage device for energy , the large - scale integration of evs in the transportation sector may contribute to substantially increasing the socioeconomic value of an energy system with a large renewable component , while reducing the dependence of the transportation sector on liquid fossil fuel .for this reason , evs have received increased interest from the scientific community in recent years ( detailed literature reviews of the state of the art can be found in and ) .special attention has been given to the analysis of the effect of evs integration on the electricity demand profile , emissions and social welfare , and to the design of charging schemes that avoid increasing the peak consumption , help mitigate voltage fluctuations and overload of network components in distribution grids , and/or get the maximum economic benefit from the storage capability of evs within a market environment , either from the perspective of a single vehicle or the viewpoint of an aggregator of evs . in all these publications , though , and more generally in the technical literature on the topic , the charging problem of an ev is addressed either by considering deterministic driving patterns , when the focus is placed on the management of a single vehicle , or by aggregating the driving needs of different ev users , when the emphasis is on modeling a whole fleet of evs .this aggregation , however , obscures the dynamics of each specific vehicle .likewise , the deterministic driving patterns of a single ev are often based on expected values or stylized behaviors , which fail to capture important features of the charging problem such as the daily variation in the use of the vehicle or potential user conflicts in terms of not having the vehicle charged and ready for use .a stochastic model for driving patterns provides more insight into these aspects and becomes fundamental for applying a charging scheme in the real world . 
despite this , the stochastic modeling of driving patterns has received little attention from the scientific community , as pointed out in .we mention here the research work by , in which they aim to capture the uncertainty intrinsic to the vehicle use by means of a monte carlo simulation approach .they assume , however , an uncontrolled charging scheme .the work developed in this paper departs from the following two premises : 1 .the primary purpose of the battery of an ev is to provide power to drive the vehicle and not to store energy from the electricity grid .consequently , it is essential that enough energy is kept in the battery to cover any desired trip .this calls for a decision tool that takes into account the driving needs of the ev user to determine when charging can be postponed and when the battery should be charged right away .2 . the complexity of human behavior points to a stochastic model for describing the use of the vehicle . in turn , this stochastic model should be integrated into the aforementioned decision tool and exploited by it .that being so , this paper introduces an algorithm to optimally decide when to charge an ev that exhibits a stochastic driving pattern .the algorithm builds on the inhomogeneous markov model proposed in for describing the stochastic use of a single vehicle .the model parameters are then estimated on the basis of data from the use of the specific vehicle .the approach captures the diurnal variation of the driving pattern and does not rely on any assumptions on the use of the vehicle , which makes it general and particularly versatile .our algorithm thus embodies a _ markov decision process _ which is solved recursively using a stochastic dynamic programming approach .the resulting decision - support tool allows for addressing issues related to charging , vehicle - to - grid ( v2 g ) schemes , availability and costs of using the vehicle .the algorithm runs swiftly on a personal computer , which makes it feasible to implement on an actual ev .the remainder of this paper is organized as follows : in section 2 the stochastic model for driving patterns developed in is briefly described , tailored to be used in the present work , and extended to address the problem of driving data limitations through hidden markov models .section 3 introduces the algorithm for the optimal charging of an ev as a markov decision process that is solved using stochastic dynamic programming .section 4 provides results from a realistic case study and explores the potential benefit of implementing v2 g schemes .section 5 concludes and provides directions for future research within this topic .in this section we summarize and extend the stochastic model for driving patterns developed in .we refer the interested reader to this work for a detailed description of the modeling approach .a state - space model is considered to describe the use of the ev . in its simplest form ,it contains two states , according to which the vehicle is either _ driving _ or _ not driving_. a more extensive version of the model would include a larger number of states which could capture information about where the vehicle is parked , how fast it is driving or what type of trip it is on .the basics of the general multi - state stochastic model are described in this section , including how to fit a specific model on an observed data set .let , where , be a sequence of random variables that takes on values in the countable set , called the state space . 
denote this sequence as .we assume a finite number , , of states in the state space .a markov chain is a random process where future states , conditioned on the present state , do not depend on the past states . in discrete time a markov chain if for all and all .a markov chain is uniquely characterized by the transition probabilities , , i.e. if the transition probabilities do not depend on , the process is called a homogeneous markov chain .if the transition probabilities depend on , the process is known as an inhomogeneous markov chain .when it comes to the use of a vehicle , it is appropriate to assume that the probability of a transition from state to state is similar on specific days of the week .thus , for instance , thursdays in different weeks will have the same transition probabilities .for convenience we further assume that all weekdays ( monday through friday ) have the same transition probabilities .these assumptions can be easily relaxed or interchanged with other assumptions and as such , are not essential to the model . with a sampling time in minutes , and taking into account that there are 1440 minutes in a day , this leads to the assumption : this assumption implies that the transition probabilities , defined by ( [ transprob ] ) , are constrained to be a function of the time , , in the diurnal cycle .let the matrix containing the transition probabilities be denoted by .for the model containing states the transition probability matrix is given by : where .now let define the number of observed transitions from state to state at time . from the conditional likelihood function , the maximum - likelihood estimate of can then be found as : a discrete time markov model can be formulated based on the estimates of .one apparent disadvantage of such a discrete time model is its huge number of parameters , namely , where parameters have to be estimated for each time step .needless to say , the number of parameters to be estimated increases as the number of states grows .we refer to for further details on techniques to reduce the number of parameters to be estimated for each time step for models with more than two states .another problem is linked to the number of observations available to properly carry out the estimation , i.e. if for some , then is undefined . to deal with the large number of parameters as well as undefined transition probability estimates , b - splines are applied to capture the diurnal variation in the driving pattern through a _generalized linear model_. the procedure of applying a _ generalized linear model _ is implemented in the statistical software package r as the function ` glm(\cdot ) ` . for a thorough introduction to b - splines see and for a general treatment of generalized linear models see . next we elaborate on how the fitting of the markov chain model works in our particular case . each day , at a specific minute , a transition from state to state either occurs or does not occur .thus for every on the diurnal cycle we can consider the number of transitions to be binomially distributed , i.e. 
, where the number of bernoulli trials at , given by , is known and the probability of success , , is unknown .the data can now be analyzed using a _ logistic regression _ , which is a generalized linear model .the explanatory variables in this model are taken to be the basis functions for the b - spline .the logit transformation of the odds of the unknown binomial probabilities are modeled as linear combinations of the basis functions .we model and in particular , we are interested in = p_{jk}(s) ] , with one at each endpoint and equal spacing between them .denote this initial vector of knots by .the model is then fitted using the basis functions as explanatory variables .next , the fit of the model between the knots is evaluated via the likelihood function and an additional knot is placed in the center of the interval with the lowest likelihood value . the new knot vector is then given by .we repeat this procedure until the desired number of knots is reached . to determine the appropriate number of knots and avoid over - parametrization , on the basis of a likelihood ratio principle , we test that adding a new knot does significantly improve the fit .standard markov models are limited in the sense that only those states that are actually observed can be modeled .thus , if the data at our disposal only provide information on when the vehicle is either _ driving _ or _ not driving _ , the standard markov model is restricted to two states . furthermore the time spent in each stateis exponentially distributed , albeit with time - varying intensity , and accordingly , the time until the next transition does not depend on the time spent in the current state .this may be particularly unrealistic for a model with few states for describing driving patterns . to overcome these limitations, we can use a hidden markov model , which allows estimation of additional states that are not directly observed in the data . in fact , we can estimate these states so that the waiting time in each state matches that which is actually observed in the data . adding a hidden state is done by introducing a new state in the underlying markov chain .the new state , however , is indistinguishable from any of the previously observed states .this allows for the waiting time in each observable state to be the sum of exponential variables , which is a more versatile class of distributions .it is worth insisting that the use of hidden markov models is justified here to address insufficient state information in our data , which only include whether the vehicle is _ driving _ or _ not driving_. indeed , the same results could be obtained using the underlying markov chain without hidden states , provided that the hidden states could be observed . in practice , though , more detailed driving data ( e.g. including driving speed and/or location of the vehicle ) could be available once the actual implementation is made on a vehicle , which in turn would avert the need for a hidden markov model . for a detailed introduction to hidden markov models ,see , where techniques and scripts for estimating parameters are also provided .the hidden markov model consists of two parts .firstly , an underlying unobserved markov process , , which describes the actual state of the vehicle . this part corresponds to the markov model with no hidden states as described previously .the second part of the model is a state - dependent process , , such that when is known , the distribution of depends only on the current state . 
a hidden markov model is thus defined by the state - dependent transition probabilities , , as defined for the standard markov chain and the state - dependent distributions given by ( in the discrete case ) : collecting the s in the matrix , the likelihood of the hidden markov model is given by : where is the initial distribution of .we can now maximize the likelihood of observations to find the estimates of the transition probabilities .the data at our disposal is from the utilization of a single vehicle in denmark in the period spanning the six months from 23 - 10 - 2002 to 24 - 04 - 2003 , with a total of 183 days .the data is gps - based and follows specific cars .one car has been chosen and the model is intended to describe the use of this vehicle accordingly .the data set only contains information on whether the vehicle was _ driving _ or _ not driving _ at any given time .no other information was provided in order to protect the privacy of the vehicle owner .the data is divided into two periods , a training period for fitting the model from 23 - 10 - 2002 to 23 - 01 - 2003 , and a test period from 24 - 01 - 2003 to 24 - 04 - 2003 for evaluating the performance of the model .the data set consists of a total of 749 trips .the time resolution is in minutes .we shall consider a model with one _ not driving _ state and several ( hidden ) _ driving _ states .that is , it is observable if the vehicle is driving , but not the specific driving state . to fit the model to the data , we assume that only the transition probability from the _ not driving _ state depends on the time of day .this is done to reduce the complexity of the estimation procedure , as it is cumbersome to estimate the time - varying parameters of a hidden markov model .it is worth noting that a hidden markov model allows for the probability of ending the current trip to depend on the time since departure , as the vehicle may pass through different driving states before ending the trip .we now elaborate on the fitting of the hidden markov model , which is split into estimation of its time - varying and time - invariant parameters .we need to estimate the probability of a transition from the vehicle being parked to a driving state .we denote this transition estimate by .it holds that .since both the parked state and the transitions from it are directly observable in the data , we can use the procedure described in section 2.1 to estimate .+ the data have been divided into two main periods : weekdays and weekends . the observed number of trips starting every minute for the weekdays is displayed in fig .[ startingweekdays ] .a high degree of diurnal variation is found , with a lot of trips starting around 06:00 and again around 16:00 . 
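a minimal sketch of how such a time - of - day start probability can be estimated is given below: the raw maximum - likelihood estimate is the ratio of observed parked - to - driving transitions to observed parked minutes in each minute of the day, and a smooth version is obtained by fitting a logistic model on a b - spline basis through direct maximization of the binomial likelihood. the synthetic counts, the fixed knot vector and the use of scipy's general - purpose optimizer are illustrative assumptions; they stand in for the actual gps - derived counts, the adaptive knot - insertion procedure and the glm fit described above.

import numpy as np
from scipy.optimize import minimize

minutes = np.arange(1440)                       # minute of the day, s = 0..1439

# illustrative counts (assumptions standing in for the gps data):
# n_parked[s] = observed parked minutes at time s, n_start[s] = observed starts at time s
rng = np.random.default_rng(0)
n_parked = np.full(1440, 90)                    # roughly 90 weekdays observed
true_p = 0.002 + 0.02 * (np.exp(-0.5 * ((minutes - 390) / 40.0) ** 2)
                         + np.exp(-0.5 * ((minutes - 975) / 60.0) ** 2))
n_start = rng.binomial(n_parked, true_p)

p_raw = n_start / n_parked                      # raw per-minute mle (the estimate [p_hat])

def bspline_basis(t, knots, degree=3):
    """b-spline basis at points t via the cox - de boor recursion."""
    n_basis = len(knots) - degree - 1
    b = np.zeros((len(t), len(knots) - 1))
    for i in range(len(knots) - 1):
        b[:, i] = (t >= knots[i]) & (t < knots[i + 1])
    for d in range(1, degree + 1):
        for i in range(len(knots) - d - 1):
            left = np.where(knots[i + d] > knots[i],
                            (t - knots[i]) / (knots[i + d] - knots[i] + 1e-12), 0.0)
            right = np.where(knots[i + d + 1] > knots[i + 1],
                             (knots[i + d + 1] - t) /
                             (knots[i + d + 1] - knots[i + 1] + 1e-12), 0.0)
            b[:, i] = left * b[:, i] + right * b[:, i + 1]
    return b[:, :n_basis]

# clamped knot vector with a handful of interior knots (fixed here for simplicity)
interior = np.linspace(0, 1440, 10)[1:-1]
knots = np.concatenate(([0.0] * 4, interior, [1440.0] * 4))
X = bspline_basis(minutes.astype(float), knots)

def negloglik(beta):
    """negative binomial log-likelihood of the spline-logistic model."""
    prob = 1.0 / (1.0 + np.exp(-(X @ beta)))
    prob = np.clip(prob, 1e-10, 1 - 1e-10)
    return -np.sum(n_start * np.log(prob) + (n_parked - n_start) * np.log(1 - prob))

res = minimize(negloglik, x0=np.full(X.shape[1], -4.0), method="L-BFGS-B")
p_smooth = 1.0 / (1.0 + np.exp(-(X @ res.x)))   # smoothed estimate of the start probability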
also , there are no observations of trips starting between 00:00 and 05:00 .other patterns are found for weekends , but as these do not involve any methodological difference , we limit ourselves to trips starting on weekdays .annual variations may also be present , however the limited data sample does not allow for capturing such seasonality .based on the b - splines and the logistic regression , plotted as the black line over the estimates from ( [ p_hat ] ) , in gray .the red bars indicate the knot positioning.__,scaledwidth=80.0% ] the plot in fig .[ transitionprobability ] illustrates the estimate of using b - splines with eight initial knots placed uniformly on the interval and 22 knots in total .the time - invariant parameters are to be estimated so that an appropriate probability distribution is fitted to the duration of the trips .the time - invariant parameters are estimated by maximizing the likelihood given in ( [ eq : hmm likelihood ] ) . for a given number of _ driving _ states, the transition probabilities can be estimated using the approach in .once a model with states is fitted , we can test if adding an additional state significantly improves the fit . as a model with states is a sub - model of one with or more states , we increase the number of states until no significant improvement test is observed according to the likelihood ratio .[ triplength ] represents the histogram of the empirically observed trip lengths along with the theoretical density function of the trip lengths obtained from the fitted model .we use a model with two driving states , as no significant improvement is found beyond this number .notice that the distribution of the empirically observed trip lengths is adequately captured by the hidden markov model , although the number of observed trips in the range from 10 to 20 minutes has a higher prevalence than the fitted distribution . in practical applications ,more information could be available to model the behavior of the vehicle ( e.g. its location and speed ) , which should facilitate the modeling of the driving patterns . in the following section ,the algorithm for optimally charging the ev is presented .the optimization algorithm makes use of the transition probabilities characterizing the stochastic model for the driving patterns .thus , the optimization algorithm is designed to handle the stochastic nature of the driving needs .the problem of charging an ev can be posed as a conflict between two opposing objectives .the end - user desires to have the vehicle charged and ready for use at his / her discretion , while also minimizing the costs of running the vehicle .demand for electricity varies over the day and so does the electricity generated from renewable sources .this introduces a varying energy price which can make it beneficial for the end - user to postpone charging his / her vehicle .this means the user is faced with the problem of postponing charging to minimize costs or to charge right away so as to maximize the availability of the vehicle .the algorithm for optimal charging of the ev is formulated as a stochastic dynamic programming problem .we first define the relevant parameters and variables , and then the state - transition and objective function . [ cols= "< " , ] let us consider first the strategies in tab .[ tab : outofsample ] under which only charging is allowed , we see that there are no observed events of not having enough charge on the battery to complete a trip . 
also , we notice , as expected , that the optimal charging strategies have lower costs than the `` rule of thumb '' policies .the low - price charging strategy is indeed the `` rule of thumb '' policy that approximates closest to the optimal policy in terms of costs and availability .it yields , however , an average daily cost which is around 12 - 24% higher than that obtained from implementing the proposed decision - support tool . as for the charging policies that include v2 g operation mode , it becomes apparent that caution should be exercised to prevent the vehicle from being fully discharged when the end - user desires to drive . in thisline , notice that increasing the penalty reduces the number of observed events of not having enough charge on the battery to cover a desired trip . introducing a v2 g charging scheme allows for substantially reducing the cost associated with driving as opposed to charging - only schemes , and may even result in negative average costs .observe that the optimal charging policy developed in this paper clearly outperforms the `` rule of thumb '' v2 g schemes . in the unbounded case , charging costs are substantially reduced , but multiple out - of - battery events are recorded .imposing a lower bound on the discharging solves this problem , but at the expense of considerably increasing the running cost of the vehicle , to such an extent that it nearly doubles . the difference in performance between the optimal charging strategy and the `` rule of thumb '' policiescan be expected to become larger for electric vehicles covering higher distances or with lower battery capacity .lastly , we would like conclude this section by pointing out that , in general , the spot price is not the price observed by the end - user .indeed , the end - user faces a price that includes taxes and other costs on top of the spot electricity price . as an example , consider a country like denmark , where the average price of electricity paid by the end - user , including taxes and fees , is around 300 /mwh , which is 5 - 10 times the average spot price . in the current danish power system ,fees and taxes are imposed on the amount of electricity consumed by the end - user , not on its total cost .this does not encourage the end - user to switch to a smart consumption of energy based on variable prices .in fact , if the taxes and fees were implemented as a function of the total energy cost , the savings from switching to a smart charging policy in denmark could be multiplied by a factor of between five and ten .this paper proposes an algorithm to optimally charge an electric vehicle based on stochastic dynamic programming .the algorithm is built on an inhomogeneous ( hidden ) markov chain model that characterizes the stochastic use of the vehicle .the algorithm determines the optimal charging policy depending on the use of the vehicle , the risk aversion of the end - user , and the electricity price .the costs associated with running the vehicle are decreased significantly when the charging strategy is determined by the proposed optimization model , with little or no inconvenience to the end - user .these costs can be reduced even further if the vehicle is permitted to supply power into the grid .indeed , findings show the possibility of making a net profit from running the vehicle . 
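to make the backward recursion behind these policies concrete, the following python sketch solves a toy version of the charging problem by stochastic dynamic programming. the state is (minute of day, battery level), at each minute a trip may start with a time - varying probability, a trip is modeled as an instantaneous consumption of a fixed number of battery levels, an unserved trip incurs a penalty, and the only actions are to idle or to charge for one minute at the current price. the price profile, battery discretization, trip consumption, penalty and trip - start probability are all illustrative assumptions, and v2 g discharging is omitted; this is not the model estimated here, only a sketch of the same solution technique.

import numpy as np

T = 1440                       # decision stages (minutes of one day)
B = 20                         # battery levels 0..B (discretized state of charge)
charge_step = 1                # levels gained per minute of charging
trip_cost = 6                  # levels consumed by one trip (illustrative)
penalty = 50.0                 # cost of not being able to serve a requested trip

price = 0.05 + 0.04 * np.sin(2 * np.pi * (np.arange(T) - 300) / T) ** 2   # cost per level
p_trip = 0.001 + 0.01 * np.exp(-0.5 * ((np.arange(T) - 420) / 60.0) ** 2) # start probability

V = np.zeros((T + 1, B + 1))                   # expected cost-to-go, terminal cost zero
policy = np.zeros((T, B + 1), dtype=int)       # 0 = idle, 1 = charge

for t in range(T - 1, -1, -1):
    for b in range(B + 1):
        def event_cost(b_next):
            """expected cost of the 'trip may start' event given the end-of-minute level."""
            if b_next >= trip_cost:
                trip_term = V[t + 1, b_next - trip_cost]
            else:
                trip_term = penalty + V[t + 1, b_next]
            return p_trip[t] * trip_term + (1 - p_trip[t]) * V[t + 1, b_next]

        cost_idle = event_cost(b)
        b_charged = min(B, b + charge_step)
        cost_charge = price[t] * (b_charged - b) + event_cost(b_charged)

        if cost_charge < cost_idle:
            V[t, b], policy[t, b] = cost_charge, 1
        else:
            V[t, b], policy[t, b] = cost_idle, 0

# example query: recommended action at 03:00 with a half-full battery
print("charge at 03:00 with b = B/2:", bool(policy[180, B // 2]))

the optimal action table policy plays the role of the decision - support tool: given the current time and state of charge, it says whether charging can be postponed or should start right away.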
the proposed stochastic dynamic programming model is versatile and can easily be adapted to any specific vehicle , thus providing a customized charging policy . a possible extension would be to apply the proposed model to data with more markov states , which could be used to investigate the benefits of installing more public charging stations as opposed to home charging , or to capture different driving states such as `` urban '' , `` rural '' , or `` highway '' . in addition , the model could be enhanced to consider transition probabilities that are estimated adaptively in time . an adaptive approach would capture structural changes in the driving behavior , such as variations over the year or a change in use that could follow , for example , from the householder buying an additional vehicle . moreover , adaptivity is relevant for applying the model in practice . further research could also be directed at modeling a fleet of vehicles by using a mixed - effects model . the optimization scheme could be applied individually to each vehicle and the total population load could be evaluated . this would highlight if and how evs could be used to mitigate an increase in peak electricity demand when switching from combustion - based vehicles to evs . other investigations could focus on the relationship between evs and renewable energy sources and how evs could be used to move the excess production to time periods of high demand , possibly making renewables more economically competitive . dsf ( det strategiske forskningsråd ) is to be acknowledged for partly funding the work of emil b. iversen , juan m. morales and henrik madsen through the ensymora project ( no . 10 - 093904/dsf ) . furthermore , juan m. morales and henrik madsen are partly funded by the ipower platform project , supported by dsf ( det strategiske forskningsråd ) and rti ( rådet for teknologi og innovation ) , which are hereby acknowledged . finally , we thank dtu transport for providing the data used in this research .
the combination of electric vehicles ( evs ) and renewable energy is taking shape as a potential driver for a future free of fossil fuels . however , the efficient management of the ev fleet is not exempt from challenges . it calls for the involvement of all actors directly or indirectly related to the energy and transportation sectors , ranging from governments , automakers and transmission system operators , to the ultimate beneficiary of the change : the end - user . an ev is primarily to be used to satisfy driving needs , and accordingly charging policies must be designed primarily for this purpose . the charging models presented in the technical literature , however , overlook the stochastic nature of driving patterns . here we introduce an efficient stochastic dynamic programming model to optimally charge an ev while accounting for the uncertainty inherent to its use . with this aim in mind , driving patterns are described by an inhomogeneous markov model that is fitted using data collected from the utilization of an ev . we show that the randomness intrinsic to driving needs has a substantial impact on the charging strategy to be implemented . electric vehicles , driving patterns , optimal charging , markov processes , stochastic dynamic programming ,
systems biology has rapidly advanced to offer a number of powerful methods for discovery of networks of genes / proteins and collective functions of families of genes / proteins in life processes .analysis of quantitative traits loci ( qtl ) refers to a systematic method for discovery of genes and their functions in systems biology .the major bottleneck in qtl progress appears to be in the more general and challenging question of how to quantify phenotypic traits , such as morphology and dynamics of its variation .qtl and other gene function discovery methods rely on quantification of phenotypic traits that could be observed and used in a systematic way to distinguish between organisms with differing dna sequences or epigenetic signatures . to extract phenotypic traits that distinguish genotypic characteristics , biologists make repeated observations of the wild type and the mutants of the same species during growth , behavior or in the course of response to external stimuli . in the following ,we illustrate and outline the steps for identifying and quantifying phenotypic traits of seedling of the model plant _ arabidopsis thaliana _subject to changes in gravitational force relative to its natural root orientation during a normal course of growth .it will be demonstrated that the selected quantitative traits are together sufficiently informative to carry genomic signatures that are at work differently in the wild type and mutant plants .plant roots vary a great deal in morphology , size and complexity of their architecture . the model plant _ arabidopsis thaliana _ has a root system that is exemplary in both prevalence of its morphological pattern and its relative simplicity .there are several visible morphological features of this root system .the primary root is the main branch that starts its life immediately after germination . besides being a significant method for simplifying morphological measurement, midline extraction serves as an example of high - dimensional data transformation that achieves dimensionality reduction with some loss of information beyond noise , namely , ignoring the signal - parameters underlying phenotypic variation such as morphology of the root due to image texture or image structures such as shadows of root hair . in higher resolution images ,the interpretation will change , and root hairs could also be part of the features whose morphological diversity and variation in growth dynamics become essential biological quantities to be studied that are related to specific gene functions , pathways or gene - protein network dynamics . in this articlewe provide details of a set of algorithms for extraction of root and hair growth information .tropism refers to the directed growth responses of plants to external stimuli such as gravity , water , light , and temperature . beginning with the pioneering work of darwin ,the study of tropism has grown to a major area of research in plant biology in the course of its century - long history .darwin observed that grass seedlings grown in dark tend toward a light source when illuminated from one side .plant roots show a similar response to gravity . when a seed germinates , the roots penetrate the soil and grow downward. 
however , if a root is reoriented by 90 with respect to the gravitational field , the root responds by altering its direction of growth , curving until it is again vertical .sachs was the first one to propose a quantitative measure for gravitropism , namely , the gravitropic response was proportional to the component of the gravity vector perpendicular to the root axis .early studies , using maize roots , demonstrated distinct regions along the root axis with different physiological response patterns . despite this long and illustrious research history , the molecular mechanisms involved in sensing the gravitational signal and its transduction still need to be studied .other tropic responses are sources of gaining information about the physiological and molecular processes that influence the tropic response , e.g. the plant hormone auxin has been implicated in tropic responses through extensive research by a broad group of scientists ( ) . auxin is involved in asymmetric tropic growth , vascular development and root formation , as well as in a plethora of other processes .the approaches by which auxin has been implicated in tropisms include isolation of mutants altered in auxin transport or response with altered gravitropic or phototropic response , identification of auxin gradients with radiolabeled auxin and auxin - inducible gene reporter systems , and by use of inhibitors of auxin transport that block gravitropism and phototropism .proteins that transport auxin have been identified and the mechanisms which determine auxin transport polarity have been explored . the mechanisms of auxin action in the gravitropic response and phototropism have recently been revealed by the analysis of mutants that are defective in the response to the gravity signal , or have a characteristic response to blue and other spectral bands of light .gravitational stimulus induces curvature of the primary root of _ arabidopsis thaliana_. it is known that the patterns of root curvature reveal quantitative traits that are associated to a number of genes and proteins that play vital roles in growth and development of the plant .the application described below gives an outline of machine - learning methods that distinguish mutant from wild type seedling plants in gravitropism experiments .we note that these applications could be adapted for other experimental protocols in plant biology that attempt to quantify subtle phenotypic traits in order to decipher functions of genes - proteins .we have designed and engineered a complete system , specifically focusing on high throughput plant imaging .this hardware - software product is a novel _ `` portable modular system for automated image acquisition '' _ in the lab and for certain field experiments .our model system is described in .the algorithms below are part of `` the image analyzer '' component of our system .it is an object - oriented software application that can be modified to accommodate essentially any reasonable analysis scenario .the image analyzer package is developed in linux , and it is ready for experimental protocols by the biologist , see .this platform can provide high throughput results in a range of resolutions , and accommodates highly flexible experimental protocols .this article demonstrates the feasibility of using our system towards fully automated high throughput phenotyping in functional genomics and systems biology ( figure [ fig1 ] ). 
parts of a high resolution image of growth of lateral roots and extraction of morphological through the algorithms developed by the authors .[ d ] the midlines of primary root images frames ( please see below ) could be metaphorically positioned on top of each other to visualize the dynamics of growth in gravitropism as the geometry of the surface .we have developed algorithms for extraction of midlines of root hair and dynamics of growth as the next step in capturing phenotypic variation . ] in earlier contributions by the senior author and liya wang ( , ) image analysis and midline extraction were only partially automated and required _ `` preprocessing tasks '' _ prior to application of the software .preprocessing steps are highly data - dependent image processing steps . in this article, we report successful automation of the preprocessing , and a novel collection of algorithms that are also amenable to generalization beyond the plant roots , as well as parallelization .the preprocessing methods are adapted from earlier seminal research by osher et al ( , called the total variation regularization ( tv ) method .we remark that comparable approaches have been also treated in literature using bounded variation ( bv ) .as expected , direct application of one set of programs to another typically shows varying degrees of success .therefore , it is anticipated that generalization of the automation preprocessing would depend on the setting for image acquisition and would vary in degrees of successful analysis in plant growth dynamics of various kinds . the hardware and automation offered by cyplant solutions inc .have a standardized setting that removes a series of technical issues in order to ensure the quality of data for the same genre of mathematical treatment as in our algorithms and software .our work in progress will provide a critical examination of a number of algorithms and methods in literature , and in appropriate cases , redesigns the needed image preprocessing algorithms from scratch to be optimal for the tasks , such as those in cyplant solutions standardized technologies .we next present the outline of the algorithm .let the image for one gray - scale frame be represented by the basic idea is as follows : the typical image is written as an outcome of an unknown convolution kernel with compact support ( e.g. a discrete gaussian ) , applied to the original image ( or a desirable model that in our case must be suitable for segmentation and midline recovery of the branched structures , cf . ) , and an additive noise , that we propose to model as a gaussian white noise , such that the automation algorithm selects the kernel and a suitable model for noise ( that we must confront in practical applications ) , which will be derived from the high throughput imaging system outputs in lieu of the simplifying assumption above .we will begin with the constrained minimization problem with appropriate norms , without further mention of simplicity . for a class of images obtained by our method , the higher resolution allowed us to select a delta function , and use the simpler form .however , in the case of animal behavior , blurring occurs due to movement of the animal or temporal loss of the automated focus . 
in the case of plants ,blurring occurs due to accumulation of moisture on inside surfaces of the petri dish covers .this means that we must use the general form below : here , is a scaling constant that is the trade - off between noise and the desired image quality , such as having sharp edges in images of root systems , and arg - min is over all images , where is the desired form of the image suitable for algorithms of ( for an iterative regularization to recover finer scales ) .an intermediate problem that we solved numerically was the parallel implementation of methods based on the following idea that is adapted from a simplified mathematical result by chan - wong - kaveh - osher et al .that amounts to a constrained optimization problem with as a function of joint variables : when we view as a one - variable functional depending only on or only , by fixing the other one , then is convex , while fails to be jointly convex . to overcome this difficulty, we followed and instead numerically solved the two euler - lagrange equations , where the -conjugates are denoted by the `` hat '' : fixing first , we solve for from the first equation and then switch the roles of and and exchange the two equations . for a class of images according to our protocol, the higher resolution allows one to select an iterative scheme that mimics convergence to the appropriate delta function , which in turn allows one to use the simpler form mentioned above .however , in the case of movement of the entire root by sliding on the agar surface , blurring sometimes occurs ; this could also be a result of temporal loss of the automated focus . as mentioned before , in the case of plants ,blurring mostly occurs due to accumulation of moisture on inside surfaces of the petri dish covers . in our companion article , and in our 2008 worldcomp article , and subsequent developments , , , we have developed a set of new algorithms and their object - oriented c - code for massively parallel - distributed hardware .this part of research was done in collaboration with nvidia and sun network.com .having quantified morphological features from the data set of movies , we continue to discover candidates to serve as phenotypic traits that carry the plant genotypic signature that account for distinguishing between the wild type and the mutants .a number of these features depend on computation of the midline of root images as argued in .the algorithm for midline tracing in the present work is different from the main algorithm in .in fact , the second generation of midline tracing algorithms used in the present article is more robust , more accurate and much faster .we computed midlines from different regions of a root image as follows .gravitropism results in anisotropic expansion of cell walls and an uneven change in the distribution of cells in the epidermis of the root in such a way that the root growth tends to generate the observed bending with respect to its growth in the original vertical direction .the part of root before the bending region is referred to as the _horizontal region_. the bending region itself is called the _ hook region _ and finally , the part of the root after the hook region is called the _ vertical region_. lengths of these regions are calculated using the midline in these regions , and it serves as a feature . 
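the decomposition of the root into horizontal, hook and vertical regions can be prototyped directly from the extracted midline. the sketch below is a simplified illustration rather than the algorithm of the cited work: it assumes the midline is available as an ordered array of (x, y) pixel coordinates, with x along the original (pre-stimulus) growth direction, which after the reorientation lies horizontally, and y along gravity, and it separates the regions by the local growth angle using a threshold that is a free parameter here.

```python
import numpy as np

def region_lengths(midline, hook_threshold_deg=20.0):
    """Split a root midline into horizontal, hook and vertical regions.

    `midline` is an (N, 2) array of ordered (x, y) points, x pointing along the
    original (pre-stimulus) growth direction and y pointing along gravity.
    The angular threshold separating the regions is a free parameter of this
    sketch, not a value taken from the original work.
    """
    midline = np.asarray(midline, dtype=float)
    seg = np.diff(midline, axis=0)                    # segment vectors along the midline
    seg_len = np.hypot(seg[:, 0], seg[:, 1])          # segment lengths (pixels)
    # angle of each segment with respect to the original growth direction (x-axis)
    angle = np.degrees(np.arctan2(seg[:, 1], seg[:, 0]))

    horizontal = angle < hook_threshold_deg           # still close to the old direction
    vertical = angle > 90.0 - hook_threshold_deg      # already aligned with gravity
    hook = ~horizontal & ~vertical                    # the bending region in between

    return {
        "horizontal_length": seg_len[horizontal].sum(),
        "hook_length": seg_len[hook].sum(),
        "vertical_length": seg_len[vertical].sum(),
    }
```

applying the same function to the midline of every frame yields a time series of region lengths, from which the dynamic features discussed below can be derived.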
besides the length associated to the above - mentioned regions , the curve made by the midline carries the information about the root curvature during its gravitropic growth .further considerations ( omitted here ) indicate that the number of segments account for the changes in the growth direction which could potentially be a significant representative feature .the greatest change in the growth direction in the hook region is called the _hook angle_. in addition to the geometric features of the midline of the primary root growth , we also developed algorithms to extract growth information regarding the root hairs .discussion of the biological significance and the algorithms for root hair growth information are available in .let us only mention that the number of root hairs for each region of the root is also a feature that we added to the list of morphological features for the classification purposes . using the root hair information, we formulate additional morphological features which we call _ dynamic features _ because they correspond to the growth velocity and growth acceleration of the primary root and the root hairs .figure [ fig2 ] shows these extracted features .all of the following features highlighted in the figure are considered as representative features of the growth images : * length of the vertical region , * length of the horizontal region , * length of the hook region , * the number of segments , * the hook angle , * the average root hair length , * the number of root hairs , * the root growth velocity , * root growth acceleration , * hair growth velocity , * hair growth acceleration , and * hair density . from a human observers standpoint , the slow rate of growth of the root morphological features results in `` invisible phenotypic traits '' that belong to the complex dynamical system underlying the development and growth .in we have developed the preliminary steps towards the in - depth study of plant development within the framework of complex dynamical systems . for comprehensive details of these algorithms readersare referred to visit our website for and . 
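before the classification step, each movie has to be condensed into a fixed-length vector over the twelve features listed above. the following sketch shows one way to assemble it; the field names mirror the list, while the finite-difference estimates of the dynamic features and the definition of hair density as hairs per unit root length are assumptions about the bookkeeping, not specifications taken from the original software.

```python
import numpy as np

FEATURE_NAMES = [
    "vertical_length", "horizontal_length", "hook_length", "n_segments",
    "hook_angle", "mean_hair_length", "n_hairs", "root_velocity",
    "root_acceleration", "hair_velocity", "hair_acceleration", "hair_density",
]

def movie_features(root_length, hair_length, static, dt=1.0):
    """Build the 12-dimensional feature vector for one growth movie.

    `root_length` and `hair_length` are per-frame length series (arrays),
    `static` is a dict with the morphological features measured on the final
    frame, and `dt` is the time between frames.  The dynamic features are
    estimated here with simple finite differences.
    """
    root_length = np.asarray(root_length, dtype=float)
    hair_length = np.asarray(hair_length, dtype=float)

    root_v = np.gradient(root_length, dt)      # growth velocity of the primary root
    hair_v = np.gradient(hair_length, dt)      # growth velocity of the root hairs
    root_a = np.gradient(root_v, dt)           # growth acceleration
    hair_a = np.gradient(hair_v, dt)

    values = [
        static["vertical_length"], static["horizontal_length"],
        static["hook_length"], static["n_segments"], static["hook_angle"],
        static["mean_hair_length"], static["n_hairs"],
        root_v.mean(), root_a.mean(), hair_v.mean(), hair_a.mean(),
        static["n_hairs"] / max(root_length[-1], 1e-9),   # hairs per unit root length (assumed definition)
    ]
    assert len(values) == len(FEATURE_NAMES)
    return np.array(values, dtype=float)
```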
to recap , we have used image analysis algorithms and mathematical modeling of dynamics of morphological features in order to isolate a number of quantitative morphological and dynamical features , and using these features , every movie from the growth process is assigned a representation by the appropriate vector , as described earlier .the final step towards discovery of quantitative phenotypic traits requires machine learning for classifying the above - mentioned features , and to train the machine through a set of so - called ` training samples ' or ` examples ' in order to extract the appropriate weighted combination of features that could carry genotypic signatures of the mutation ( here , _ mdr1 _ , which is one of the members of the _ mdr _ gene family ,is knocked to provide the mutant ) .the machine learning method of choice here comes from the branch of statistical learning theory , called support vector machine ( svm ) , see .the flexibility of the svm theory has proved advantageous here , because we tailored to our needs an rbf - based svm , where rbf refers to the `` radial basis functions '' as our choice for the svm kernel function .we applied the rbf - svm method to 281 seedling growth movies ( 161 mutant _mdr1 _ and 120 wild type seedlings ) where 12 features captured them as described before .we used the matlab - svm classifier and calculated the precision of our classification outcome .table [ tab1 ] shows the results for each class of movies ..this table shows the result of applying the svm method on the data set , where the radial basis function was used as the kernel function .precision of the results shows efficiency of using the defined features for capturing different classes of genotypic attitudes .[ cols="<,^,^ " , ] the precision was computed via the standard error formula since twelve features were employed we could not show a full -dimensional figure of the classes . instead, the projection of the movies on the two coordinates corresponding to the features `` number of segments '' and `` average hair growth acceleration '' is shown in figure [ fig3 ] .among higher plants , the model organism _ arabidopsis thaliana _ has been studied in detail . as its genomeis sequenced and better understood than other similar plants , there are many opportunities for investigation of the genotype - phenotype mapping .the development of the arabidopsis root system is a centerpiece in plant biology . 
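as a concrete illustration of the classification step described above (an rbf-kernel svm over the twelve features, evaluated on mutant versus wild-type movies), a minimal scikit-learn sketch is given below. the feature matrix and labels are assumed to come from the movie-analysis pipeline, the hyperparameters are placeholders rather than the values used in the original matlab implementation, and cross-validated accuracy stands in for the precision figures of the table.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def classify_movies(X, y, C=1.0, gamma="scale"):
    """Train and evaluate an RBF-kernel SVM on the 12-feature movie vectors.

    X : (n_movies, 12) feature matrix; y : labels (0 = wild type, 1 = mdr1 mutant).
    Returns the mean and standard deviation of the cross-validated accuracy.
    """
    clf = make_pipeline(
        StandardScaler(),                 # the features have very different scales
        SVC(kernel="rbf", C=C, gamma=gamma),
    )
    scores = cross_val_score(clf, X, y, cv=5)
    return scores.mean(), scores.std()

# hypothetical usage with random placeholder data (281 movies, 12 features)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(281, 12))
    y = rng.integers(0, 2, size=281)
    mean_acc, std_acc = classify_movies(X, y)
    print(f"cross-validated accuracy: {mean_acc:.2f} +/- {std_acc:.2f}")
```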
due to the wide availability of a large amount of literature on root development, the arabidopsis root system continues to serve as an excellent model organ to investigate systems biology of higher plants .the functional landscape of gene - protein network dynamics is believed to be responsible for the regulation of growth and development of roots .multi - scale mathematical modeling of root growth provides a critical element for a systematic study of the mechanisms of regulatory transcription networks that operate on different scales , specific interactions of numerous proteins and tightly intertwined protein interaction networks .gene expression studies need to be performed at temporal and spatial resolutions ( minutes and micrometers ) that are relevant to the dynamics of gene and phenotype crosstalk .historically , tropic growth responses have been at the center of such activities .tropic responses allow plants to redirect their growth in response to their surrounding environment .the temporal dynamics of root growth is relatively accurately measured by following the displacement of features at extremities , such as the tips of roots and root hairs .it is necessary to obtain velocity profiles of neighboring elements as they are moved by expansion .velocity profiles can be obtained by imaging a growing organ over time which provides measures of the position of the externally applied marks , thus computing velocity as a function of position .the mathematical analysis of image sequences goes back to numerous early investigators , e.g. , and algorithmic methods substitute for laborious and often subjective manual measurements ( ) .several improvements were made by developing algorithms that measure the spatial growth profiles using image - processing techniques that could also utilize cell borders , intercellular air spaces or other physiological structures that are followed throughout a sequence of images ( see ) .concepts originally developed by and and improved by establish quantitative relationships for curvature production and curvature angle distribution .nevertheless , there is a continuing need to develop better algorithms that work more accurately in automated high throughput imaging systems such as our system outlined above .high throughput plant imaging systems require appropriate software applications for accurate automated image analysis .we have developed a prototype hardware - software product that operates as a _`` portable modular system for automated image acquisition and analysis '' _ in the lab and some field applications .this article provides evidence for feasibility of using our system towards fully automated high throughput phenotyping in challenging functional genomics and systems biology applications .the methods described above also point to the many opportunities that machine learning could offer plant functional biology , such as distinguishing mutant from wild type seedling plants in gravitropism experiments . as we observed , these applications could be adapted for other experimental protocols in plant biology that attempt to quantify subtle phenotypic traits in order to decipher the functions of genes - proteins .the general problem of quantifying _ all _ phenotypic traits ( for example , in tropism ) for the development and growth of the plant root system remains a formidable challenge that is certain to inspire new insights in machine learning and analysis of massive biological imaging data .s. balasubramanian , c. schwartz , a. singh , n. warthmann , m. 
c. kim , j. n. maloof , o. loudet , g. t. trainer , t. dabi , j. o. borevitz , j. chory , and d. weigel .qtl mapping in new arabidopsis thaliana advanced intercross - recombinant inbred lines ., 4(2):e4318 , 02 2009 .h. dashti , a. ardalan , and a. assadi .a complex dynamical system approach to modeling of the arabidopsis root gravitropism with applications to discovery and quantification of invisible phenotypic traits .preprint .h. t. dashti , m. e. kloc , t. simas , r. a. ribeiro , and a. h. assadi . , volume 314 of _ ifip advances in information and communication technology _ , chapter introduction of empirical topology in construction of relationship networks of informative objects , pages 3543 .springer boston , 2010 .j. n. maloof , j. o. borevitz , t. dabi , j. lutes , r. b. nehring , j. l. redfern , g. t. trainer , j. m. wilson , t. asami , c. c. berry , d. weigel , and j. chory .natural variation in light sensitivity of arabidopsis ., 29:441446 , 2001 .j. mullen , e. turk , k. johnson , c. wolverton , h. ishikawa , c. simmons , d. sll , and m. l. evans .root - growth behavior of the arabidopsis mutant rgr1 : roles of gravitropism and circumnutation in the waving / coiling phenomenon ., 118(4):11391145 , 1998 .s. savaldi - goldstein , t. baiga , f. pojer , t. dabi , c. butterfield , g. parry , a. santner , n. dharmasiri , y. tao , m. estelle , j. noel , and j. chory .new auxin analogs with growth - promoting effects in intact plants reveal a chemical strategy to improve hormone delivery ., 105(39):1519015195 , 2008 .d. schmundt , m. stitt , b. jhne , and u. schurr . quantitative analysis of the local rates of growth of dicot leaves at a high temporal and spatial resolution , using image sequence analysis ., 16(4):505514 , 1998 .j. tonejc , h. torabi , and a. assadi .mathematical modeling of morphological dynamics in development and growth of the arabidopsis and maize seedlings with applications to quantifying phenotypic variation in their root system . , 2010 .a. walter , h. spies , s. terjung , r. kusters , n. kirchgessner , and u. schurr .spatio - temporal dynamics of expansion growth in roots : automatic quantification of diurnal course and temperature response by digital image sequence processing . , 53(369):689698 , 2002 .l. wang , i. v. uilecan , a. h. assadi , c. a. kozmik , and e. p. spalding .: image analysis software for measuring hypocotyl growth and shape demonstrated on arabidopsis seedlings undergoing photomorphogenesis . , 149(4):16321637 , 2009 .
post-genomic research deals with challenging problems in screening the genomes of organisms for particular functions, or for their potential as targets of genetic engineering for desirable biological features. 'phenotyping' of wild-type and mutant plants is a time-consuming and costly effort by many individuals. this article is a preliminary progress report on research into large-scale automation of the phenotyping steps (imaging, informatics, and data analysis) needed to study the plant gene-protein networks that influence growth and development. our results underscore the significance of phenotypic traits that are implicit in the patterns of dynamics of the plant root response to sudden changes in its environmental conditions, such as a sudden re-orientation of the root tip against the gravity vector. including dynamic features besides the common morphological ones has paid off in the design of robust and accurate machine-learning methods that automate a typical phenotyping scenario, i.e., distinguishing the wild type from the mutants. *keywords:* gravitropism, machine learning, phenotypic traits
the problem of convergence of discrete - time financial models to the models with continuous time is well developed ; see , e.g. , .the reason for such an interest can be explained as follows : from the analytical point of view , it is much simpler to deal with continuous - time models although all real - world models operate in the discrete time . inwhat concerns the rate of convergence , there can be different approaches to its estimation .some of this approaches are established in . in this paper, we consider the cox ingersoll ross process and its approximation on a finite time interval .the cir process was originally proposed by cox , ingersoll , and ross as a model for short - term interest rates .nowadays , this model is widely used in financial modeling , for example , as the volatility process in the heston model .the strong global approximation of cir process is studied in several articles .strong convergence ( without a rate or with a logarithmic rate ) of several discretization schemes is shown by . in , a general framework for the analysis of strong approximation of the cir processis presented along with extensive simulation studies .nonlogarithmic convergence rates are obtained in . in ,the author extends the cir model of the short interest rate by assuming a stochastic reversion level , which better reflects the time dependence caused by the cyclical nature of the economy or by expectations concerning the future impact of monetary policies . in this framework , the convergence of the long - term return by using the theory of generalized bessel - square processes is studied . in , the authors propose an empirical method that utilizes the conditional density of the state variables to estimate and test a term structure model with known price formula using data on both discount and coupon bonds .the method is applied to an extension of a two - factor model due to cox , ingersoll , and ross .their results show that estimates based solely on bills imply unreasonably large price errors for longer maturities .the process is also discussed in . in this article , we focus on the regime where the cir process does not hit zero and study weak approximation of this process . in the first case ,the sequence of prelimit markets is modeled as the sequence of the discrete - time additive stochastic processes , whereas in the second case , the sequence of multiplicative stochastic processes is modeled .the additive scheme is widely used , for example , in the papers .the papers are recent examples of modeling a stochastic interest rate by the multiplicative model of cir process . in , the authors say that the model has the `` strong convergence property , '' whereas they refer to models as having the `` weak convergence property '' when the returns converge to a constant , which generally depends upon the current economic environment and that may change in a stochastic fashion over time .we construct a discrete approximation scheme for the price of asset that is modeled by the cox ingersoll ross process . in order to construct these additive and multiplicative processes ,we take the euler approximations of the cir process itself but replace the increments of the wiener process with iid bounded vanishing symmetric random variables .we introduce a `` truncated '' cir process and use it to prove the weak convergence of asset prices .the paper is organized as follows . 
in section [ section2 ], we present a complete and `` truncated '' cir process and establish that the `` truncated '' cir process can be described as the unique strong solution to the corresponding stochastic differential equation .we establish that this `` truncated '' process does not hit zero under the same condition as for the original nontruncated process . in section [ section3 ] ,we present discrete approximation schemes for both these processes and prove the weak convergence of asset prices for the additive model . in the next section ,we prove the weak convergence of asset prices for the multiplicative model .appendix contains additional and technical results.=1let be a complete filtered probability space , and be an adapted wiener process . consider a cox ross process with constant parameters on this space .this process is described as the unique strong solution of the following stochastic differential equation : where , .the integral form of the process has the following form : according to the paper , the condition is necessary and sufficient for the process to get positive values and not to hit zero .further , we will assume that this condition is satisfied .for the proof of functional limit theorems , we will need a modification of the cox ingersoll ross process with bounded coefficients .this process is called a truncated cox ingerssol ross process .let .consider the following stochastic differential equation with the same coefficients and : [ lem1 ] for any has a unique strong solution . since the coefficients and satisfy the conditions of theorem [ watanabe2 ] and also the growth condition , a global strong solution exists uniquely for every given initial value . denote with such that .suppose that .then for any such that for , we would have , with positive probability , on the interval , and hence would increase in this interval .this is obviously impossible .therefore , is nonnegative and can be written as the integral form of the process is as follows : [ lem2 ] let and .then the trajectories of the process are positive with probability 1 . in order to prove that the process is positive, we will use the proof similar to that given in for the complete cox ingersoll ross process with corresponding modifications .note that the coefficients and are continuous and on .fix and such that . due to the nonsingularity of on ] in finite time .it follows from the boundary conditions and equality that , and then from we have let us now define the function which has a continuous strictly positive derivative , and the second derivative exists and satisfies .the it formula shows that , for any , and taking the limit as , we get and hence consider the integral first , consider the case .then and if , then now let increase and tend to infinity . denote .then , for , and thus .define and put . fromwe get and , as , we get that , for any , , whence , finally , . similarly , .assume now that .then so the events and can not both have probability 1 .this contradiction shows that , whence if .now , let be fixed . : x_t\neq x_t^c \bigr\}\rightarrow0\ ] ] as . obviously , it suffices to show that }|x_t|\ge c \bigr\}\rightarrow0 \quad \text{as } \ ; c\rightarrow\infty.\ ] ] it is well known ( see , e.g. , ) that follows a noncentral distribution with ( in general ) noninteger degree of freedom and noncentrality parameter .the first and second moments for any are given by , there exists a constant such that , whence , . 
using the doob inequality, we estimate }|x_t|\ge c \bigr\}\le\frac{1}{c^2}\operatorname{\mathsf e}\sup\limits_{t\in[0,t]}x_t^2\\ & \quad = \frac{1}{c^2}\operatorname{\mathsf e}\sup\limits_{t\in[0,t ] } \biggl\ { \biggl(x_0+\int\limits_0^t ( b - x_s ) ds+{\sigma}\int\limits_0^t\sqrt{x_s}dw_s \biggr)^2 \biggr\}\\ & \quad \le\frac{3}{c^2 } \biggl\{x_0 ^ 2+t\operatorname{\mathsf e}\biggl(\int\limits_0^t{\vert}b - x_s{\vert}ds \biggr)^2+{\sigma}^2\operatorname{\mathsf e}\sup\limits_{t\in[0,t ] } \biggl(\int\limits_0^t\sqrt { x_s}dw_s \biggr)^2 \biggr\}\\ & \quad \le\frac{3}{c^2 } \biggl\{x_0 ^ 2+t\operatorname{\mathsf e}\int\limits_0^t ( b - x_s ) ^2ds+4{\sigma}^2\operatorname{\mathsf e}\int\limits_0^tx_sds \biggr\}\le\frac{b_1}{c^2}\end{aligned}\ ] ] for some constant .the lemma is proved .consider the following discrete approximation scheme for the process .assume that we have a sequence of the probability spaces , . let , be the sequence of symmetric iid random variables defined on the corresponding probability space and taking values , that is , .let further .we construct discrete approximation schemes for the stochastic processes and as follows. consider the following approximation for the complete process : and the corresponding approximations for given by the following lemma confirms the correctness of the construction of these approximations .let , .* if , then all values given and are positive .* we have as . we apply the method of mathematical induction . when , let us show that we denote and reduce to the quadratic inequality which obviously holds because the discriminant when and .so , .assume now that .it can be shown by applying the same transformation that when and , the values .it can be proved similarly that the values given by are positive .\2 ) can be represented as compute \operatorname{\mathsf e}x_{i-1}^{(n ) } \nonumber\\ & \quad + \biggl(1-\frac{t}{n } \biggr)^2\operatorname{\mathsf e}\bigl(x_{i-1}^{(n ) } \bigr)^2.\label{exkn2}\end{aligned}\ ] ] assume that , , for some .then , .we get the quadratic inequality of the form \beta+ \biggl(\frac{bt}{n } \biggr)^2<\beta^2\ ] ] or , equivalently , \beta+ \biggl(\frac { bt}{n } \biggr)^2<0,\ ] ] which obviously holds when .so , for all , .using the burkholder inequality , we estimate therefore , whence the proof follows . 
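the additive scheme analysed above is straightforward to simulate. the sketch below assumes that the symmetric increments take the values ±sqrt(T/n) with probability 1/2 each (their concrete values are not spelled out in the text above, so this is an assumption), and it clips the argument of the square root at zero purely as a numerical safeguard; under the positivity condition of the lemma the clipping is never active.

```python
import numpy as np

def cir_additive_scheme(x0, b, sigma, T, n, rng=None):
    """Simulate the additive discrete approximation of the CIR process.

    Euler step with the Wiener increments replaced by iid symmetric random
    variables xi_k = +/- sqrt(T/n) (assumed values):
        X_k = X_{k-1} + (b - X_{k-1}) * T/n + sigma * sqrt(X_{k-1}) * xi_k.
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n
    xi = rng.choice([-1.0, 1.0], size=n) * np.sqrt(dt)   # bounded symmetric increments
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(1, n + 1):
        drift = (b - x[k - 1]) * dt
        diffusion = sigma * np.sqrt(max(x[k - 1], 0.0)) * xi[k - 1]
        x[k] = x[k - 1] + drift + diffusion
    return x

# example: one path on [0, 1] with 1000 steps (parameter values are illustrative only)
path = cir_additive_scheme(x0=1.0, b=1.0, sigma=0.5, T=1.0, n=1000)
```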
consider the sequences of step processes corresponding to these schemes : and thus, the trajectories of the processes and have jumps at the points , and are constant on the interior intervals .consider the filtrations .the processes are adapted with respect to them .therefore , we can consider the same filtrations for all discrete approximation schemes .so , we can identify with for .[ zau3.1 ] now we can rewrite relation as follows : : x_t^{(n)}\neq x_t^{(n , c ) } \bigr\ } \rightarrow0\ ] ] as .denote by and , the measures corresponding to the processes and , respectively , and by and , the measures corresponding to the processes and , respectively .denote by the weak convergence of measures corresponding to stochastic processes .we apply theorem 3.2 from to prove the weak convergence of measures to the measure .this theorem can be formulated as follows .[ conditions_e ] assume that the following conditions are satisfied : * for any , * for any and ] , } \bigl(\operatorname{\mathsf e}\bigl ( \bigl(q_k^{(n , c)}\bigr)^2{\mathbb{i}}_{|q_k^{(n , c)}|\le a } \big{\vert}{\mathcal{f}}_{k-1}^n\bigr ) \\ & \qquad - \bigl(\operatorname{\mathsf e}\bigl(q_k^{(n , c)}{\mathbb{i}}_{|q_k^{(n , c)}|\le a } \big{\vert}{\mathcal{f}}_{k-1}^n \bigr ) \bigr)^2 \bigr)-{\sigma}^2\int\limits_{0}^{t}\bigl(x_s^{(n , c)}\wedge c\bigr ) ds\biggr{\vert}\ge\epsilon \biggr)=0;\end{aligned}\ ] ] then . using theorem [ conditions_e ] , we prove the following result .[ weak_conv_e ] .according to theorem [ conditions_e ] , we need to check conditions ( i)(iii ) .relation implies that .hence , there exists a constant such that .this means that condition ( i ) is satisfied .furthermore , in order to establish ( ii ) , we consider any fixed and such that , that is , . for such , for any , we have }\operatorname{\mathsf e}\bigl(q_k^{(n , c)}{\mathbb{i}}_{|q_k^{(n , c)}|\le a } \bigr{\vert}{\mathcal{f}}_{k-1}^n \bigr)-\int\limits_{0}^{t}\bigl(b-\bigl(x_s^{(n , c)}\wedge c\bigr)\bigr)ds\biggr{\vert}\ge\epsilon \biggr)\\ & \quad = \lim\limits_{n}\operatorname{\mathsf p}^n \biggl(\sup\limits_{t\in{\mathbb{t}}}\biggl{\vert}\sum\limits_{1\le k\le [ \frac{nt}{t } ] } \frac{(b-(x_{k-1}^{(n , c)}\wedge c))t}{n}-\sum\limits_{0\le k\le [ \frac{nt}{t}]-1}\bigl(b-\bigl(x_{k}^{(n , c)}\wedge c\bigr)\bigr)\frac{t}{n}\\ & \qquad -\bigl(b-\bigl(x _ { [ \frac{nt}{t } ] } ^{(n , c)}\wedge c\bigr)\bigr ) \biggl(t-\frac { [ \frac{nt}{t } ] t}{n } \biggr)\biggr{\vert}\ge\epsilon \biggr)\\ & \quad = \lim\limits_{n}\operatorname{\mathsf p}^n \biggl(\sup\limits_{t\in{\mathbb{t}}}\biggl{\vert}\bigl(b-\bigl(x _ { [ \frac{nt}{t } ] } ^{(n , c)}\wedge c\bigr)\bigr ) \biggl(t-\frac { [ \frac{nt}{t}]t}{n } \biggr)\biggr{\vert}\ge\epsilon \biggr)=0,\end{aligned}\ ] ] and hence condition ( ii ) is satisfied . 
now let us check condition ( iii ) .we have therefore , for any , } \bigl(\operatorname{\mathsf e}\bigl ( \bigl(q_k^{(n , c)}\bigr)^2{\mathbb{i}}_{|q_k^{(n , c)}|\le a } \bigr{\vert}{\mathcal{f}}_{k-1}^n\bigr ) \\ & \qquad - \bigl(\operatorname{\mathsf e}\bigl(q_k^{(n , c)}{\mathbb{i}}_{|q_k^{(n , c)}|\le a } \big{\vert}{\mathcal{f}}_{k-1}^n \bigr ) \bigr)^2 \bigr)-{\sigma}^2\int\limits_{0}^{t}\bigl(x_s^{(n , c)}\wedge c\bigr ) ds\biggr{\vert}\ge\epsilon \biggr)\\ & \quad = \lim\limits_{n}\operatorname{\mathsf p}^n \biggl(\sup\limits_{t\in{\mathbb{t}}}\biggl{\vert}\sum\limits_{1\le k\le [ \frac{nt}{t } ] } \biggl ( \biggl(\frac{(b-(x_{k-1}^{(n , c)}\wedge c ) ) t}{n}\biggr)^2+{\sigma}^2\frac{t}{n } \bigl(x_{k-1}^{(n , c)}\wedge c \bigr)\\ & \qquad - \biggl(\frac { ( b-(x_{k-1}^{(n , c)}\wedge c))t}{n } \biggr)^2 \biggr)-\sum\limits_{0\le k\le [ \frac{nt}{t}]-1 } \biggl({\sigma}^2 \frac{t}{n}\bigl(x_{k}^{(n , c)}\wedge c \bigr ) \biggr)\\ & \qquad -{\sigma}^2 \bigl(x _ { [ \frac{nt}{t } ] } ^{(n , c)}\wedgec \bigr)\biggl(t-\frac { [ \frac{nt}{t } ] t}{n } \biggr)\biggr{\vert}\ge \epsilon \biggr)\\ & \quad = \lim\limits_{n}\operatorname{\mathsf p}^{n } \biggl(\sup\limits_{t\in{\mathbb{t } } } \biggl({\sigma}^2 \bigl(x _ { [ \frac{nt}{t } ] } ^{(n , c)}\wedge c \bigr ) \biggl(t-\frac{[\frac{nt}{t } ] t}{n } \biggr ) \biggr)\ge\epsilon \biggr)=0.\end{aligned}\ ] ] the theorem is proved .[ weak_con ] , .according to theorem [ bill ] and theorem [ weak_conv_e ] , it suffices to prove that however , due to remark [ zau3.1 ] , : x_t^{(n)}\neq x_t^{(n , c ) } \bigr\}=0.\qedhere\end{aligned}\ ] ]in this section , we construct a multiplicative discrete approximation scheme for the process , ] , * for any ] , }\operatorname{\mathsf e}\bigl(q_k^{(n , c)}{\mathbb{i}}_{|q_k^{(n , c)}|\le a } \bigr{\vert}{\mathcal{f}}_{k-1}^n \bigr)\\ & \qquad -\int\limits_{0}^{t}\bigl(b - x_s^{(n , c)}\wedge c\bigr)ds\biggr{\vert}\ge \epsilon \biggr)=0;\end{aligned}\ ] ] * for any and ] , starting from some number , we have whence condition ( ii ) holds .now , implies that , for all ] , we have }\operatorname{\mathsf e}\bigl ( \bigl(q_k^{(n , c)}\bigr)^2{\mathbb{i}}_{|q_k^{(n , c)}|\le a } \bigr{\vert}{\mathcal{f}}_{k-1}^n\bigr)-{\sigma}^2\int\limits_{0}^{t}\bigl(x_s^{(n , c)}\wedge c\bigr ) ds\biggr{\vert}\ge\epsilon \biggr)\\ & \quad = \lim\limits_{n}\operatorname{\mathsf p}^n \biggl(\sup\limits_{t\in{\mathbb{t}}}\biggl{\vert}\sum\limits_{1\le k\le [ \frac{nt}{t } ] } \biggl ( \biggl(\frac{(b-(x_{k-1}^{(n , c)}\wedge c ) ) t}{n}\biggr)^2+{\sigma}^2\frac{t}{n } \bigl(x_{k-1}^{(n , c)}\wedge c \bigr ) \biggr)\\ & \qquad - \sum\limits_{0\le k\le [ \frac{nt}{t } ] -1 } \biggl({\sigma}^2 \frac{t}{n}\bigl(x_{k}^{(n , c)}\wedge c \bigr ) \biggr)-{\sigma}^2\bigl(x _ { [ \frac{nt}{t } ] } ^{(n , c)}\wedge c \bigr ) \biggl(t-\frac{[\frac{nt}{t } ] t}{n}\biggr)\biggr{\vert}\ge\epsilon\biggr)\\ & \quad \le\lim\limits_{n}\operatorname{\mathsf p}^{n } \biggl(\sup\limits_{t\in{\mathbb{t } } } \biggl(\frac{(|b|+c)^2tt}{n}+{\sigma}^2 \bigl(x _ { [ \frac{nt}{t } ] } ^{(n , c)}\wedge c\bigr ) \biggl(t-\frac { [ \frac{nt}{t } ] t}{n } \biggr ) \biggr)\ge \epsilon \biggr)=0.\end{aligned}\ ] ] the theorem is proved . 
.the proof immediately follows from theorem [ bill ] , theorem [ weak_conv ] , and remark [ zau3.1 ] .indeed , : x_t^{(n)}\neq x_t^{(n , c ) } \bigr\}=0.\qedhere\end{aligned}\ ] ] the weak convergence can be proved in a similar way .we state here theorem 4.2 from : [ bill ] suppose that we have sets of processes , , and a stochastic process on the interval $ ] .let , , , and be their corresponding measures .suppose that , for any , , , and that as .suppose further that , for any , then , .[ watanabe1] . if and are continuous functions satisfying the condition for some positive constant , then for any solution of such that , we have for all .* there exists a strictly increasing function on such that for all .* there exists an increasing and concave function on such that for all
in this paper, we consider the cox-ingersoll-ross (cir) process in the regime where the process does not hit zero. we construct additive and multiplicative discrete approximation schemes for asset prices modeled by the cir process and the geometric cir process. in order to construct these schemes, we take the euler approximations of the cir process itself but replace the increments of the wiener process with iid bounded vanishing symmetric random variables. we introduce a ``truncated'' cir process and apply it to prove the weak convergence of asset prices. we establish that this ``truncated'' process does not hit zero under the same condition considered for the original non-truncated process. *keywords:* cox-ingersoll-ross process, discrete approximation scheme, functional limit theorems. *msc2010:* 60f99, 60g07, 91b25
on september 2015 , the gravitational - wave ( gw ) was directly detected by the advanced laser interferometer gravitational - wave observatory ( aligo ) for the first time .the detected gw , labeled by gw150914 , was the signal from the inspiral and merger of two black holes with the masses of about 30 .the signal was the first evidence of the binary black hole merger .as another event of black hole merger , gw151226 , had been detected , further details of binary black hole systems , such as distribution and population of black holes , might be revealed by further observations .while aligo detected the transients with the durations of a few hundreds milliseconds , a longer observations of the binary systems would provide much information of the binary systems , such as spins of black holes .binary systems with the masses of about 30 emit gws with the frequencies of about 0.1 hz at 15 days before the merger .such a long term observation of the gw signals from the binary systems would effectively solve the degeneracy of the parameters , such as their spins , which is expected to be a clue about the evolution of the black holes .in addition to the black holes with the masses of the order of 10 , it is important to search for black holes with various masses .observation of various black hole mergers would elucidate the evolution process of super - massive black holes . in order to observe the gw signals from binary systems with various masses, it is necessary to observe gws at various frequencies .binaries with heavier masses emit gws at lower frequencies than the observation band of the interferometers . a torsion - bar antenna ( toba ) is a gravitaional - wave ( gw ) detector that is sensitive to gws at around 1 hz , while the observation band of interferometric gw detectors , such as ligo , are above about 10 hz .one of the astronomical targets of toba is intermediate - mass black hole binaries .the space - borne interferometric detectors , such as laser interferometer space antenna ( lisa ) and deci - herz gravitational - wave observatory ( decigo ) , would also have the observation frequency band below 10 hz . comparing to such detectors, toba can be realized on the ground with an accessibility for repair and upgrade , and without risk for large cost .though the final target sensitivity of toba is not as good as space detectors , the observation range is expected to be as far as 10 gpc with the luminosity distance for the intermediate - mass black hole mergers .toba has two bar - shaped test masses which are suspended at their centers .gws are detected by monitoring their relative rotation excited by the tidal force from gws . since the resonant frequencies of the test masses in the torsional modes are as low as a few mhz , toba has sensitivity at low frequencies even on the ground . 
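the quoted figure, roughly 0.1 hz about 15 days before the merger of two black holes of about 30 solar masses, can be checked with the leading-order (newtonian) relation between gravitational-wave frequency and time to coalescence; the short sketch below is only this textbook estimate and involves no detector-specific input.

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

def gw_frequency(m1_msun, m2_msun, tau_seconds):
    """Leading-order GW frequency a time `tau_seconds` before coalescence."""
    m1, m2 = m1_msun * M_SUN, m2_msun * M_SUN
    mchirp = (m1 * m2) ** (3.0 / 5.0) / (m1 + m2) ** (1.0 / 5.0)
    tm = G * mchirp / C**3                       # chirp mass expressed in seconds
    return (5.0 / (256.0 * tau_seconds)) ** (3.0 / 8.0) * tm ** (-5.0 / 8.0) / np.pi

# two 30-solar-mass black holes, 15 days before merger -> about 0.1 Hz
print(gw_frequency(30.0, 30.0, 15 * 24 * 3600))
```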
as a proof of concept ,the first prototype of toba had been developed with a single test mass bar and set the first upper limit on a stochastic gw background from 0.03 to 0.8 hz .on the other hand , it is necessary to use more than three detectors in order to determine the parameters of binaries , such as masses of the objects , polarization angle , and the sky position of the source because the conventional detectors have poor directivity .therefore , we proposed _ the multi - output toba _ , which provides three independent signals from a single detector by monitoring multiple rotational degrees of freedom of the test masses .the multi - output system improves the event rate and the angular resolution , which would enhance the low - frequency gw astronomy even with fewer detectors . in this paper, we introduce the first multi - output toba detector .its main feature is the new suspension system for the multi - output configuration , which also performs as a vibration isolation system . in the section [ sec : toba ] , we explain its principle and the target sensitivity .the detector configuration that we developed is described in the section [ sec : detector ] .its characteristics is mentioned in the following section .toba has two bar - shaped test masses that rotate differentially by the tidal force from gws as shown in fig .[ fig : toba ] . schematic view of the toba test masses.,width=264 ] the equation of the motion of the test mass bar in the rotational mode is where and are angular fluctuation , the moment of inertia , the damping constant , and the spring constant around -axis , respectively . is the amplitude of the gw , and is the dynamical quadrupole moment of the test mass for the rotation along the axis .the frequency response of the angular fluctuation of the test mass is derived from eq .( [ eq : eom ] ) as where and are the loss angle and the amplitude of the gw coming along -axis with the polarizations of and , respectively . is the resonant frequency of the torsional mode , above which the angular fluctuation due to the gw is approximated to be independent from its frequency . in the multi - output system, we also consider rotations of the bars on the vertical planes . considering the test mass 1 in fig .[ fig : toba ] , the angular fluctuation along the -axis also obeys the similar equation as eq .( [ eq : eom ] ) : it means that the bar also rotates vertically due to gws coming along -axis .therefore , we can derive two independent signals and from the single test mass bar . since we have two orthogonal test mass bars , it is possible to derive three independent signals from the single detector , i.e. , , , and , where the suffix indicates the two test masses .the sensitivity to gw signal derived from , calculated as , would be better by than the sensitivity derived only from one of the rotational signals , when the noise appeared in , and are un - correlated and the same level . this multi - output system would increase the expected detection rate by about 1.7 times , since the three signals have different sensitive areas in the sky . 
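to make the flat response above the torsional resonance concrete, the following sketch evaluates a response of the assumed form |theta/h| = (q/2I) f^2 / |f0^2 (1 + i*phi) - f^2|, i.e. a structurally damped oscillator driven by the gw tidal torque; the factor of 1/2, the damping model and the normalisation q/I = 1 are assumptions, not expressions quoted from the text.

```python
import numpy as np

def angular_response(freq, f0=2e-3, loss_angle=1e-3, q_over_I=1.0):
    """Magnitude of the bar's angular response per unit GW strain.

    Assumed form (structural damping, arbitrary normalisation q/I = 1):
        |theta / h| = (q / 2I) * f**2 / |f0**2 * (1 + 1j*phi) - f**2|
    Above the mHz torsional resonance f0 the response flattens out, which is
    why a ground-based torsion antenna keeps its sensitivity down to ~0.1 Hz.
    """
    f = np.asarray(freq, dtype=float)
    denom = f0**2 * (1.0 + 1j * loss_angle) - f**2
    return 0.5 * q_over_I * f**2 / np.abs(denom)

freqs = np.logspace(-4, 1, 500)          # 0.1 mHz to 10 Hz
resp = angular_response(freqs)
# resp is roughly constant (about q/2I) for freqs >> f0, as stated above
```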
also , parameter estimation accuracy for short - duration signals would be improved since the three independent information helps to break the degeneracies of the parameters .the final target sensitivity of toba is about in strain at 1 hz as described in , which is limited mainly by the shot noise , the radiation pressure noise , and gravity gradient noise .this sensitivity can be achieved by sensing the rotation of large cryogenic test mass bars with the length of 10 m , using fabry - perot interferometer .the test mass bars and the suspension wires should be cooled down in order to reduce the thermal noise . since several advanced technologies are necessary for the final toba configuration , it is required to develop each component using prototypes .the first prototype toba had been developed previously .the first prototype has a single bar shaped test mass with a magnetic levitation system in order to suspend the test mass softly in the rotational degree of freedom .the test mass has the length of 20 cm , and its horizontal rotation is measured by a michelson interferometer , which means that it has the single - output configuration .it successfully tested the basic principle of toba , and set the first upper limit on a stochastic gravitational - wave background at 0.2 hz .the sensitivity of the first prototype toba is limited by the magnetic noise induced by the magnetic suspension system , and the seismic noise coupling . as a next step ,we have developed a multi - output toba , as described in the following sections .its main target is the development of the suspension system that realizes the multi - output system .also , the passive and active vibration isolation systems attenuate the noise caused by the seismic motion .( color online ) . overview of the multi - output toba.,width=264 ] the schematic view of the multi - output toba is shown in fig . 
[fig : schemview ] .the two orthogonal test masses that sense the gws are suspended from a hexapod - type active vibration isolation table ( avit ) via an intermediate mass , which is magnetically damped by a damping mass .the optical bench where the sensors and actuators are set is also suspended from the intermediate mass .the suspension system except the actuators and sensors of the avit is in a vacuum tank .the picture of the test masses is shown in fig .[ fig : testmass ] .the test masses are designed for the multi - output system and the test of common mode noise reduction .( color online ) .picture of the test masses .each test mass is suspended by two wires so that the center of each mass can be located at the same position.,width=264 ] the two orthogonal test mass bars with the length of 24 cm are suspended by the two parallel wires respectively , so that the centers of the masses can be located at the same position in order to maximize the common mode noise reduction rate in horizontal rotation signal .this design is implemented since it may reduce the noise caused from the common rotational displacements of the bars .the common mode noise reduction rate is expected to be large when the sensitivity is limited by the environmental disturbance that effects the test mass rotations commonly , while the reduction rate is in strain at the minimum when the noise sources of the two signals are independent .the resonant frequency in the horizontal rotational mode is , where , , and are the mass of the test mass , the acceleration of gravity , the distance between the two suspension wires as shown in fig.[fig : testmass ] , moment of inertia of the test mass , and the length of the wire , respectively .the resonant frequency in the vertical rotational mode is written as , where is the height distance between the suspension points and the center of mass as shown in fig.[fig : testmass ] .the suspension points of the test masses are set to be close to their center of mass in order to minimize the resonant frequency of the vertical rotation . 
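the elided expressions can be written in the standard two-wire (bifilar) pendulum form; the sketch below assumes f_yaw = (1/2 pi) sqrt(m g d^2 / (4 I_z l)) for the horizontal rotation and f_roll = (1/2 pi) sqrt(m g dh / I_h) for the vertical rotation, with dh the offset between the suspension points and the centre of mass. both the functional forms and the example numbers are assumptions used only for illustration; the actual wire separation, wire length and offset of the setup are not given here.

```python
import numpy as np

def yaw_resonance(m, d, wire_length, I_z, g=9.81):
    """Horizontal-rotation (yaw) resonance of a bar hung on two parallel wires.

    Assumed bifilar-pendulum form: f = (1/2pi)*sqrt(m*g*d**2 / (4*I_z*wire_length)),
    with d the separation between the two wires.
    """
    return np.sqrt(m * g * d**2 / (4.0 * I_z * wire_length)) / (2.0 * np.pi)

def roll_resonance(m, dh, I_h, g=9.81):
    """Vertical-rotation resonance for a small offset dh between the suspension
    points and the centre of mass: f = (1/2pi)*sqrt(m*g*dh / I_h)."""
    return np.sqrt(m * g * dh / I_h) / (2.0 * np.pi)

# illustrative numbers for a 24 cm bar -- not the actual design values
m, bar_length = 0.6, 0.24                   # kg, m
I_bar = m * bar_length**2 / 12.0            # slender-bar moment of inertia
print(yaw_resonance(m, d=1.5e-2, wire_length=0.3, I_z=I_bar))   # ~0.1 Hz
print(roll_resonance(m, dh=4e-4, I_h=I_bar))                    # ~0.15 Hz
```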
in our set up ,the resonant frequency of the horizontal and vertical rotational modes are 0.10 hz and 0.15 hz , respectively .the resonant frequencies are set at around 100 mhz in our prototype in order to realize the new suspension system with the compact setup .the resonant frequencies would be pushed down by using larger test masses in the future upgrade in order to widen the observation band .the main sensors that we used for the observation in the multi - output toba are the michelson interferometeric sensors .the motion of the bar is monitored by measuring the phase shift of the beam reflected by the mirrors attached at the bar .since it is necessary to have several sensors around the test masses in order to monitor three independent rotational signals , the fiber optics are used for the space saving .the sensor configuration is shown in fig .[ fig : sensor ] .the type-1 interferometers , which measure the displacement of each mirror attached on the test masses , monitor the yaw , longitudinal , and side motion of the bar .the type-2 interferometers that sense the differential displacement of the two end mirrors are set as sensors for the roll motion .the position of the test masses are controlled by the coil - magnet actuators so that the fringe of the interferometer can be kept at their middle .the gw signals are derived from the feedback signal on the actuators .the test masses are controlled in longitudinal , side , yaw , and roll modes .these sensors are set on the suspended optical table explained in the next sub - section .( color online ) .configuration of the fiber laser interferometers.,width=264 ] though the seismic motion in the rotational degree of freedom is small , the seismic vibration isolation system is necessary since the translational vibration couples to the rotational signals .for example , non - parallel mirrors at the both ends of the test mass induce the translational seismic noise coupling .the test masses , and the optical table where the sensors and actuators are set , are suspended in order to attenuate the seismic motion above resonant frequencies of those pendulum modes. 
however , since the resonant frequencies of the pendulum modes are about 1 hz , an additional seismic isolation system for low frequencies are necessary for toba .therefore , we developed the active vibration isolation table ( avit ) for low - frequency vibration isolation and the whole system is suspended from the avit .( color online ) .schematic view of the hexapod - type active vibration isolation table ( avit ) .the whole suspension system is suspended from the top table .it has six piezoelastric actuators as legs so that they can actuate the top table in six degrees of freedom .the six seismometers are set in order to sense the vibration of the top table.,width=264 ] the avit is a table with six legs composed of piezoelastic elements ( pzts ) as shown in fig .[ fig : hexa ] .the pzts ( p844.30 , products of phisik instrumente ) move the position of the top plate .those pzts have tips with slits at the both ends in order to avoid non - linear effect when they push or pull the table .the vibration of the top table is suppressed by the feedback control using six seismometers ( l-4c geophones , products of sercel ) set on the top plate and the pzts .note that reflective type position sensors are used in order to measure the dc position of the top table relative to the ground since the lack of the sensitivity of seismometers at low frequencies causes drift of the top table .the seismic displacement of the top table is shown in fig .[ fig : seis_hexa ] . the vibration isolation ratio from the ground motion at 1 hz is achieved to be almost 10 times .its performance is mainly limited by the range of the pzts and the resonance of the frame where the avit is sitting .the avit can attenuate the vibration at around 1 hz even with its compact body of 45 cm in diameter , while the passive vibration isolation requires large setup for low - frequency vibration isolation , such as inverted pendulums .also the avit is effective to attenuate the vibration of the heat link in the cryogenic system for thermal noise reduction that is planned to be implemented in the future upgrade .it is because the rigid structure enables the avit to suppress the vibration induced directly into the vibration attenuated plate , i.e. , the vibration of the heat link attached at the suspension point .seismic isolation performance of the avit in each degrees of freedom .the dashed lines represent the seismic displacement with the avit off , and the solid lines represent one with the avit on .the coordinate is defined in fig .[ fig : schemview].,width=347 ]the angular fluctuations of the test masses are read by the laser interferometers .the angles are calibrated into the gw signal outputs as follows : where are defined as and are the transfer functions from the gw signal to the angular fluctuations of the test masses and the optical bench derived from eq .( [ eq : h ] ) , respectively . here , are considered to be zeros since when the test masses are suspended along -axis and -axis .note that the optical bench is also sensitive to gws since it is also suspended from the intermediate mass .in our case , the are above their resonant frequencies , where and . for the optical bench , are not zero because the side view of the optical bench does not show four - fold symmetry , while is derived to be zeros from . 
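the roughly tenfold suppression at 1 hz can be pictured with a toy single-degree-of-freedom servo model: the residual table motion is the ground-injected motion divided by |1 + G(f)|, where G is the open-loop gain of the seismometer-to-pzt loop. the controller below, a single integrator with an assumed unity-gain frequency, is only a stand-in; the real avit loop shape, which is limited by the frame resonance and the pzt range, is not specified here.

```python
import numpy as np

def closed_loop_suppression(freq, f_ug=10.0):
    """Toy model of the active isolation loop.

    The open-loop gain G(f) is modelled as a single integrator with unity-gain
    frequency `f_ug`; the table motion is then ground_motion / |1 + G(f)|.
    """
    f = np.asarray(freq, dtype=float)
    G = f_ug / (1j * f)                 # integrator: large gain at low frequency
    return 1.0 / np.abs(1.0 + G)

freqs = np.array([0.1, 1.0, 10.0])
print(closed_loop_suppression(freqs))   # ~0.01, ~0.1, ~0.7 : about 10x suppression at 1 Hz
```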
therefore , the calibration factor for the and are about 10 times smaller than in our setup as shown in figs .[ fig : calib ] and [ fig : calib2 ] above 2 hz .calibration factor from the horizontal rotational angle to , which is the gw amplitude equivalent signal derived from the horizontal rotation of the bars . above the resonant frequency of the horizontal rotation at 0.1 hz, the calibration factor is constant as derived from eq .( [ eq : h]).,width=264 ] calibration factors from the vertical rotational angle to , which is the gw amplitude equivalent signals derived from the vertical rotation of the each bar .the peak at 0.15 hz is the resonant frequency of the vertical rotation of the bar . above the the resonant frequency of the optical bench , which is the 1.1 hz with the quality factor of 10 ,the calibration factor is decreased because the optical bench also rotates due to the gw.,width=264 ] gw equivalent noise spectra obtained from the multi - output toba compared with the spectrum of the previous detector .the black , dark gray , and light gray lines are the spectra of , and , respectively .the dotted gray line represents the sensitivity of the previous detector .,width=264 ] the solid lines in fig .[ fig : strain ] are the gw strain equivalent noise spectra obtained from the multi - output toba .the dotted line is the sensitivity of the first prototype .it shows that the sensitivity of that is represented in the black line in fig .[ fig : strain ] is improved by about 100 times at the maximum compared to the first prototype .figure [ fig : sens_limits ] shows the dominant noise sources in the three signals .the sensitivity of the multi - output toba is limited mainly by the interferometer readout noise and the seismic coupling noise .the readout noise , shown in the solid dark gray lines , is estimated from the readout signal measured with the test masses fixed on the optical bench .it is considered to be induced from the fiber optics since the contribution from the other laser noise source , such as the intensity fluctuation and the frequency fluctuation , are lower than the measured readout noise .the light gray lines are the seismic noise estimated from the motion of the avit table measured by the seismometers on the avit and the transfer functions from the seismometers to the laser sensors directly measured by exciting the avit .while the coupling mechanism to is now under investigation , the seismic noises of and are induced because the translational seismic motions excite the vertical rotations of the test masses .since the heights of the suspension point and of the center of mass are different , the translational force on the suspension point applies the torque in the roll direction .the dotted dark gray lines in the two bottom spectra in fig . 
[ fig : sens_limits ] are the seismic noise calculated from the theoretical transfer functions from the ground to the test masses .the sensitivity curve , the seismic noise estimated from the measured transfer functions , and the theoretical seismic noise fit well in and .in addition to the large seismic coupling , the small calibration factor from the rotation to the gw amplitude , as derived in the previous section , worsen the sensitivity of and by about 1,000 times than .gw equivalent noise spectra and the spectra of the respective noise sources .the left top , left bottom , and right bottom graphs are the sensitivities and the noise sources of , , and , respectively .the black lines are the gw equivalent noise spectra .the dark gray lines show the readout noise measured with the test mass fixed to the optical bench .the light gray blue lines are the seismic noise estimated from the transfer function from the ground to the sensor directly measured using the avit .the dotted dark gray lines are the seismic noise calculated using the theoretical transfer function from the ground to the sensor .the theoretically calculated seismic noise is not plotted in the left top , since the coupling mechanism of seismic noise to is unknown.,width=321 ] effect of the common mode noise rejection .the black and the dark gray lines are the strain sensitivities calculated using single test mass , 1 and 2 , respectively .the dotted light gray line is the sensitivity derived by subtraction of the signals from the two test masses.,width=226 ] the performance of the common mode noise rejection between the two test masses is shown in fig .[ fig : cmr ] . as described in the section [ sec : tm ] ,the centers of masses are designed to be at the same position since the noise may be reduced when the sensitivity is limited by the common rotational displacement . however , subtraction of the two signals is not effective for the noise reduction in our case , since there are almost no coherence in two rotational signals .the readout noise is not correlated between the two independent interferometers .also the seismic coupling noise is not correlated , since the seismic motion in direction couples to the rotation signal of the test mass 2 , and the motion in direction couples to the signal of the test mass 1 in fig .[ fig : toba ] . since there are no correlation between the seismic motions in the orthogonal directions , the two rotational signals are not correlated .sensitivities of signals with the avit on and off .the top , left bottom and right bottom graphs represent the sensitivity of ; , and , respectively .the black lines are the sensitivities with the avit on , while the dashed gray lines are the ones with avit off.,width=347 ] figure [ fig : aviteffect ] shows the efficiency of the avit .the dotted gray lines are the sensitivity measured without vibration attenuation using the avit , while solid black lines are the sensitivity measured with the avit working .the sensitivity at around 1 hz largely improved thanks to the avit . 
at 4 - 10 hz ,the noise levels become worse by the avit .it is supposed to be the control noise induced by the avit since the resonance of the support frame disturbs the avit control loops to have enough phase margin .we performed the observational run from 19:50 jst , december 10 , 2014 to 19:50 jst , december 11 , 2014 .the observation system stably continued for more than 24 hours .the spectrograms of the three signals during the observation are shown in fig .[ fig : spectrogram ] .the noise levels are almost the same for 24 hours except for the earthquake which occurred at around 20:00 , december 10 .( color online ) .spectrograms of the power density of the three signals during the 24-hour observation from 0.1 to 100 hz.,width=347 ] histograms of the normalized power spectral density of , , and at 1 , 2 , and 8 hz .the dotted line at each panel corresponds to a gaussian distribution.,width=347 ] the histograms of the power spectrum density of the obtained data are shown in fig .[ fig : gaussianity ] .the columns represent the signals , , , and , and the lines represent the frequencies of the collected data , 1 hz , 2 hz , and 8 hz , respectively .the sensitivities shown here are normalized by the average sensitivity over the whole observation .the all three signals are distributed according to gaussian distributions . using these data, we performed several gw search analysis : the search for continuous gws , stochastic gw backgrounds , and intermediate mass black hole binaries .continuous gw signals had been searched for at 6 - 7 hz and set an upper limit of on the dimensionless gw strain at 6.84 hz . also , at 2.58 hz , a new upper limit is set on the energy of the stochastic gw background , where is the gw energy density per logarithmic frequency interval in units of the closure density and is the hubble constant in units of 100 km / s / mpc .the gws from the binary systems with the mass of 100 , and 200 - 1000 were searched using the matched filtering method , and no signals had been discovered . the continuous gw signals and gws from the binary systems were searched at around 1 - 10 hz for the first time .the upper limit on the energy of the stochastic gw background has also been updated using the detector with improved sensitivity .the sensitivity of toba was improved mainly due to the passive and active vibration isolation systems .the pendulum suspension system is used as a passive vibration isolation in order to reduce the seismic motion above 1 hz . the vibration isolation system for the sensors , as well as for the test masses , is effective .in addition , the avit are introduced in order to reduce the seismic motion around 1 hz . while its performance was limited by the resonance of the frame where the avit is sitting and the actuation range of the pzts , the seismic motion of the table is reduced by about 10 times from the ground in the translational three degrees of freedom .the combination of those vibration isolation system improves the sensitivity at around 1 - 10 hz by 10 - 100 times compared with the previous prototype toba .also , the three independent signals are successfully obtained simultaneously and stably as the multi - output signals , while the sensitivity of the signals from the vertical rotational motion are worse than that from the horizontal rotational motion .the sensitivity was different among the three signals because of the large coupling from the seismic motion and the cancellation of the gw signals between the test masses and the optical bench . 
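the spectral monitoring used during the run (amplitude spectral densities, the 24-hour spectrogram, and the statistics of the normalised power at fixed frequencies) can be reproduced with standard tools; the sketch below assumes a calibrated strain time series h sampled at fs and is not tied to the actual data files or calibration chain.

```python
import numpy as np
from scipy import signal

def strain_asd(h, fs, nperseg=2**14):
    """Amplitude spectral density (strain / sqrt(Hz)) via Welch's method."""
    f, psd = signal.welch(h, fs=fs, nperseg=nperseg)
    return f, np.sqrt(psd)

def run_spectrogram(h, fs, nperseg=2**12):
    """Power spectrogram of the run, as in the 24-hour monitoring plot."""
    f, t, sxx = signal.spectrogram(h, fs=fs, nperseg=nperseg)
    return f, t, sxx

def normalised_power_histogram(sxx, f, f0, bins=50):
    """Histogram of the power at frequency `f0`, normalised to its mean.

    For stationary noise the normalised values cluster around 1; strong
    outliers flag non-stationary segments such as the earthquake seen
    during the run.
    """
    idx = np.argmin(np.abs(f - f0))
    p = sxx[idx, :]
    return np.histogram(p / p.mean(), bins=bins)

# hypothetical usage on white noise standing in for the calibrated output
fs = 100.0
h = np.random.default_rng(1).normal(size=int(fs * 3600))
f, asd = strain_asd(h, fs)
fg, tg, sxx = run_spectrogram(h, fs)
counts, edges = normalised_power_histogram(sxx, fg, f0=1.0)
```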
in order to improve the detector performance ,careful optimization of the suspension design is necessary .the reinforcement of the frame for the avit is required in order to gain more vibration attenuation ratio .the shape of the optical bench should have four - fold symmetry in order to improve the sensitivities of and so that the optical bench would not react to gws .also , the positions of the centers of masses of the test masses and the optical bench would be adjusted from the outside of the vacuum tank using moving masses and actuators , such as pico - motors , in order to search for the state that minimize the seismic coupling .such modification also helps to investigate the seismic coupling mechanism in the torsion pendulum further .in addition to the suspension design , the readout system is required to be modified in order to improve the sensitivity .having more space on the optical bench would enable us to use the spacial laser beams instead of the fiber optics . for further sensitivity improvement , it is necessary to upgrade several technologies .other than the seismic attenuation system developed in our paper , the low loss suspension system and the cryogenic system are critical path to achieve the final target sensitivity .also , the newly introduced observation method , the multi - output system , is required to be investigated further , though the demonstration was successful .for example , reduction of the suspension thermal noise in the vertical rotational signals would be the one of the important subjects .also , it is necessary for the two vertical rotation signals to improve the detector configuration by the methods as explained in the previous section in order to achieve the same sensitivities as the horizontal rotation signal .however , since the fundamental noise sources for the vertical rotation are the same as the ones for the horizontal rotation , the achievable sensitivities of and are expected to be almost same as . for the midterm upgrade ,gw strain equivalent noise of at 0.1 hz , which is the sensitivity such that the gravity gradient effect , so called newtonian noise , could be observed , would be the target . besides the further optimization for the seismic coupling reduction , the key technology for the mid - term upgrade would be the low - loss suspension for the reduction of the thermal noise in the midterm phase .the gravity gradient effect is the noise caused by the gravity perturbation due to the seismic motion , acoustic sound , motion of the object around the detector , and so on .it is necessary to establish the method of newtonian noise canceling in this phase for the further sensitivity improvement below 1 hz .conversely , precise detection of gravity gradient signals caused by earthquakes would also be applicable for the early alert of large earthquakes .the gravity gradient signal from a large earthquake is considered to be used as the prompt alert of the earthquake since the gravity signal propagates at the speed of light , which is much faster than the seismic waves .the full - tensor configuration realized by the multi - output system would also be effective in terms of the gravity gradient detection and cancellation . 
For the long-term upgrade aimed at the final target sensitivity, a 10-m-scale large test mass is necessary. The rotation of the test masses will need to be read out with a Fabry-Perot interferometer in order to reduce the shot noise, and a cryogenic system to reduce the thermal noise will be critical for improving the sensitivity at low frequencies. With this sensitivity, intermediate-mass black-hole binaries at a luminosity distance of 10 Gpc would be observed with a signal-to-noise ratio of 5. Such observations would be key to revealing the evolution processes of stellar and supermassive black holes, globular clusters, and galaxies. We have developed a new TOBA detector that employs a new suspension system for multi-output operation. It worked successfully as a multi-output detector, i.e., three independent signals were derived simultaneously from a single detector, demonstrating a new technique for improving the event rate and the parameter-estimation accuracy. The sensitivity obtained from the horizontal rotational signal is also improved over the previous prototype thanks to the passive and active vibration isolation systems. In particular, the AVIT, a compact active vibration isolation system realized with PZT actuators and seismometers, reduced the seismic displacement at frequencies below the resonant frequency of the pendulum. The AVIT is also expected to be effective for isolating vibrations of the heat links of the cryogenic system planned for future use. These new technologies are a step towards low-frequency GW astronomy. While other technologies, such as a cryogenic system and a larger-scale system, still need to be developed, observation of binary black-hole systems with a multi-output TOBA would provide a wealth of astronomical information. This work is supported by Grants-in-Aid from the Japan Society for the Promotion of Science (JSPS), JSPS Fellows Grants No. 26.8636 (K.E.), 24.7531, and 15J11064 (A.S.). This work is also supported by JSPS Grants-in-Aid for Scientific Research (KAKENHI) Grants No. 24244031 (M.A.), 24103005, 15H02082, and 15K05070 (Y.I.).
We have developed a new gravitational-wave (GW) detector, a torsion-bar antenna (TOBA), with a multiple-output configuration. TOBA is a detector whose bar-shaped test masses are rotated by the tidal force of GWs. In our detector, three independent pieces of information about the GW signals can be derived by monitoring multiple rotational degrees of freedom, i.e., the horizontal and vertical rotations of the bars. Since the three outputs have different antenna pattern functions, the multi-output system improves the detection rate and the parameter-estimation accuracy, which helps in obtaining further details of the GW sources, such as their population and directions. We successfully operated the multi-output detector continuously for more than 24 hours with stable data quality. The sensitivity of one of the signals is also improved at 3 Hz by the combination of the passive and active vibration isolation systems, while the sensitivities to possible GW signals derived from the vertical rotations are worse than that from the horizontal rotation.
cells display complex nonlinear and time - scale dependent rheological properties . a broad range of relaxation timescales results in power - law spectra for the frequency dependence of the linear viscoelastic response . under nonlinear loading conditions , cells can display apparently contradicting behaviors , ranging from fluidization to reinforcement . understanding these mesoscale behaviors in terms of underlying non - equilibrium processes , such as cytoskeletal remodeling ,motor activity or reversible crosslink binding or folding , remains an important theme in current biomechanical research . over recent years, reconstituted f - actin networks have become a popular model system in which these phenomena can be studied in detail .much of previous research in this field focused on the frequency - dependent rheology of _ permanently crosslinked _filament networks .key questions revolve around the high - frequency modulus and its dependence on frequency , the nature of network deformations ( affine vs. non - affine ) at intermediate freuqencies and the nonlinear response properties of the network .theoretical models and simplified simulation schemes have been proposed that aim at explaining one or the other of these non - trivial features . in these studies ,the filaments and their mechanical and thermal properties are assumed to dominate the effective rheology of the system .this may be different in f - actin networks crosslinked with the rather compliant crosslinking protein filamin .experimental and theoretical work suggest a second , crosslink - dominated regime , where network rheology is set by the crosslink stiffness , while filaments effectively behave as rigid , undeformable rods . in between these two extreme scenariosa proper treatment would have to consider the full interplay between crosslink and filament mechanical properties .this has not been addressed theoretically before .recent experiments have indicated that at low driving frequencies effects due to crosslink binding become important .this is evidenced , for example , as a peak in the loss modulus or a broad distribution of time - scales leading to an anomalous scaling with frequency , .theoretical modelling in this field are only beginning to emerge .some of the pertinent problems are : what is the force on a crosslink and how does it depend on network deformation ?how does this affect binding ?once unbound , how does this affect the network ?a noteworthy recent development is the phenomenological `` glassy wormlike chain '' model . in that approach , filament - filament interactions , as for example mediated by specific crosslinking , are not modeled explicitly , but are assumed to lead to an exponential stretching of the single filament relaxation times .network deformation is accounted for by a pre - stretching tensile force in the filament .the same force is assumed to enter the unbinding rate constant of the crosslinks via a bell - like model .here , we go beyond previous studies by investigating the interplay between filament and crosslink elasticity and its effects on the crosslink binding behavior . we will present a simple model that accounts for the individual crosslinks , their mechanical properties as well as their binding state .the model is a fully thermodynamic treatment , where crosslink binding is equilibrated for a given network deformation .no rate effects will be considered . 
as a result of our analysis, we will be able to selfconsistently calculate the nonlinear elastic modulus of the network , which incorporates as key ingredient the ensuing tendency for crosslinks to unbind under load .the manuscript is structured as follows . in section [ sec : model ]we will define our model and relate it to existing approaches in the literature . in the modelthe filament network is represented as an effective elastic medium with a given , fixed modulus .we will introduce a hamiltonian that describes the properties of a test filament embedded in this medium . in section [ sec : results ] we will present results of metropolis monte - carlo simulations for the hamiltonian introduced . in section [ sec : theor - fram ] a theoretical framework will be developed that allows some analytical results to be obtained . finally , in section [ sec : selfc - determ - medi ] we will discuss the question of how to obtain in a self - consistent way the stiffness of the effective medium in terms of the response properties of the test filament .we will consider the properties of a test filament crosslinked into a network .the filament is described in terms of the worm - like chain model . in `` weakly - bending '' approximationthe bending energy of the filament can be written as where is the filament bending stiffness and is the transverse deflection of the filament from its ( straight ) reference configuration at . in these expressions the arclength , ] , where is the energy difference between initial and final state . during a crosslink move , a bond is selected randomly , and the corresponding occupation variable is flipped ( ) .the new state is accepted with probability $ ] , with the crosslink chemical potential , the change in the number of crosslinks . in the followingwe show data where the filament bending stiffness serves as unit of energy .the chemical potential is .the inverse temperature is such that the persistence length is , measured in units of lattice sites .the filament length is taken to be , i.e. . for simplicity , we assume the filament to have zero deflection at its ends , .the tube potential is taken as with , which corresponds to the longest possible wavelength compatible with the chosen boundary conditions . in fig.[n.energy ]we monitor the average crosslink occupation as well as an average energy , which is obtained by minimizing the total elastic energy for given crosslink occupation .we vary the network strain as well as the crosslink stiffness .it can readily be seen that crosslink stiffness has a dramatic effect on the thermodynamic state of the system .for mechanically weak crosslinks network strain has no strong influence on the binding state .only few crosslinks unbind upon increasing network deformation . in this regimethe crosslinks are not strong enough to enforce filament bending .the elastic energy is small , and primarily stored in the crosslinks . as a consequence ,the energy of the bound state is raised and the statistical weight is shifted towards the unbound state .this is the regime discussed in in the context of filamin - crosslinked f - actin networks .note , that in that model no crosslink unbinding is accounted for .the crosslinks are modeled as nonlinear elastic elements , leading to significant stiffening of the network under strain .we have checked that incorporating nonlinear crosslink compliance in our model reproduces this behavior . 
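For concreteness, the Metropolis scheme described above can be sketched in a few lines of Python. Since the explicit Hamiltonian and parameter values are not reproduced here, the discrete bending energy (squared second differences), the harmonic crosslink springs coupling the filament to the tube centerline, and all numbers below are assumptions, chosen only to illustrate the alternation of deflection moves with grand-canonical crosslink flips accepted with probability min(1, exp[-beta(Delta E - mu Delta n)]).

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed discretization: deflection y_i on N lattice sites, occupation
# n_i in {0, 1}, and a single-mode tube centerline Y_i = gamma * sin(pi i / (N-1)).
N, kappa, k_x, mu, beta = 64, 1.0, 5.0, 1.0, 50.0   # illustrative values only
gamma = 0.2                                          # network strain amplitude

def energy(y, n, Y):
    """Discrete bending energy plus harmonic crosslink energy (assumed form)."""
    bend = 0.5 * kappa * np.sum(np.diff(y, 2) ** 2)
    cross = 0.5 * k_x * np.sum(n * (y - Y) ** 2)
    return bend + cross

def metropolis(n_steps=20000):
    s = np.arange(N)
    Y = gamma * np.sin(np.pi * s / (N - 1))          # tube centerline
    y = np.zeros(N)                                  # straight start, clamped ends
    n = np.ones(N, dtype=int)                        # start fully bound
    E = energy(y, n, Y)
    for _ in range(n_steps):
        # deflection move on an interior site (ends stay clamped at zero)
        i = rng.integers(1, N - 1)
        y_new = y.copy()
        y_new[i] += rng.normal(scale=0.05)
        dE = energy(y_new, n, Y) - E
        if rng.random() < np.exp(-beta * dE):
            y, E = y_new, E + dE
        # crosslink move: flip one occupation variable (grand canonical)
        j = rng.integers(0, N)
        n_new = n.copy()
        n_new[j] = 1 - n_new[j]
        dE = energy(y, n_new, Y) - E
        dN = n_new[j] - n[j]
        if rng.random() < np.exp(-beta * (dE - mu * dN)):
            n, E = n_new, E + dE
    return n.mean(), E

print(metropolis())   # average occupation and elastic energy at this strain
```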
moreover , with the possibility of crosslink unbinding , this stiffening due to crosslink mechanics generally competes with a softening effect due to crosslink unbinding . when the crosslinks are sufficiently strong , their deformation energy starts to compete with the filament bending energy .now crosslinks can force filaments into deformation and the elastic energy is mainly stored in the filaments . at large network strains this energy is too high , however , and unbinding becomes favourable .this is evident as a discontinuous unbinding transition , where nearly all remaining crosslinks unbind simultaneously .associated with such a transition is a free - energy barrier . the escape time over this barrier sets the time - scale of relaxation of the imposed deformation mode .this will be exponential in the number of crosslinking sites , reminiscent of what has been proposed in .theoretical progress can be made in the continuum limit of equations ( [ eq : hb ] ) to ( [ eq : kx ] ) . making the mean - field assumption , the energy of the test filament can be written as going to fourier space , , the can be integrated out .the resulting effective free energy can be written as with a deformation - dependent part a deformation amplitude and the scale for crosslink density .the prefactor is an entropic contribution that specifies the entropic cost of binding to crosslinks .clearly , binding suppresses bending undulations and therefore reduces the entropy stored in these modes .the interplay between entropy reduction and -dependent enthalpy gain may lead to a thermal unbinding transition . for the case considered heresuch an unbinding transition is not relevant .instead , we want to focus on the deformation - dependent part . just as in the simulations, we make the single - mode assumption , and .then it is easy to see that the free energy has two saddle - points at and and a discontinuous transition between them at .this condition gives a critical network strain of , which compares well with the actual transition as seen in figure [ n.energy ] .this analysis predicts a discontinuous unbinding transition for any value of crosslink stiffness , in apparent disagreement with the simulations .it turns out that this is a result of the saddle - point approximation . in a more refined treatment , which also includes an `` entropy of mixing '' term , ,the discontinuous transition is destroyed when the crosslinks are sufficiently soft .finally , let us comment on the use of the single - mode assumption for the tube center .the precise form of and its dependence on network strain is , in principle , unkown and one would like to calculate it selfconsistently . as this is unfeasible, one has to rely on assumptions and physical plausibility .conceptually , the tube deformation in response to network strain represents the missing link between affine deformations on scales larger than the filament length and the actual crosslink motion on the scale of the mesh - size . as such it will be sensitive to the local structure of the network . in has been shown how such a link can be constructed in terms of local binding angles and the mesh - size distribution .for our purposes we note , that using one or few higher modes will not fundamentally alter the proposed picture of continuous vs. discontinuous crosslink unbinding .the necessary condition for such a feature to persist is a free energy contribution that grows slower than linear with crosslink occupation . 
in this casethe linear contribution from the binding enthalpy will eventually take over . with one or a finite number of modes present the free energywill eventually saturate .this happens , when the filament is strongly constrained by the crosslinks to precisely follow the tube centerline .clearly , binding even more crosslinks can not lead to a more efficient confinement .a similar result is obtained , if one assumes infinitely many modes , with an amplitude that depends on mode - number as , i.e. like a thermalized bending mode of a worm - like chain . in this case , the free energy does not saturate but asymptotically grows like , i.e. also slower than linear .in the previous sections we assumed the test filament to be coupled to a medium of infinite stiffness . in reality the medium itself is made of crosslinked filaments .thus the medium properties can not be set externally , but should be determined self - consistently from the properties of the test filament and its crosslinks . in particular , the studied crosslink unbinding processes reduce the connectivity , and therefore stiffness , of the medium .unbinding should , therefore , be reflected as a change in the tube potential . in the following, we will therefore assume the medium to be characterized by an energy function , which quantifies the energy cost of a deformation .the total tube potential then contains contributions from both the crosslinks _ and _ the effective medium . as we expect the medium to be nonlinear , is not necessarily a harmonic function of deformation amplitude . with a finite medium compliance ,any transverse displacements , at a crosslink , has to be shared between the crosslink and the medium , .the relative stiffness of the two elements dictate the magnitude of the deformations via a force balance condition . with thisthe total confinement strength is no longer set by the crosslink stiffness , , but by an effective stiffness determined by the serial connection of the crosslink and the medium .unbinding events are expected to reduce the stiffness of the medium and therefore of . in a softer environment , however , the test filament and its crosslinks will have a _ reduced _tendency for further crosslink unbinding. there is , therefore , a negative feedback loop between medium stiffness and crosslink unbinding .this may smooth out the sudden unbinding transition that was observed in figure [ n.energy ] . to calculate we need a condition of self - consistency .this is based on previous approaches . as discussed above, a macroscopic strain leads to a local medium deformation of .the associated energy cost is given by .at the same time the strain deforms the test filament and its surrounding tube .therefore , the energy that is needed to impose the strain needs to be balanced by the energy that is build up in the test filament . this condition can be written in the simple form where and the average is taken over crosslink occupation and network disorder . despite its simple form ,equation ( [ eq : km.sc ] ) is rather difficult to handle , as the unknown potential is required for the evaluation of the statistical average .a possible solution could proceed iteratively , starting from a suitably chosen initial guess .we have not tried such a scheme . 
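A schematic version of such an iteration, not attempted here, could look like the sketch below. The exact self-consistency relation is not recoverable from the text, so the update rule, and in particular the filament-density prefactor c, is only an assumed stand-in: it merely illustrates a fixed-point iteration on a serial connection of filament, crosslinks and medium, averaged over the crosslink occupation.

```python
import numpy as np

def serial_stiffness(kappa_f, n, k_x, K_m):
    """Filament, n bound crosslinks and medium acting as springs in series."""
    n = np.maximum(n, 1e-9)                  # guard against fully unbound samples
    return 1.0 / (1.0 / kappa_f + 1.0 / (n * k_x) + 1.0 / K_m)

def solve_medium_stiffness(kappa_f, k_x, n_samples, c=4.0, K_init=1.0,
                           tol=1e-10, max_iter=2000):
    """Fixed-point iteration  K_m <- c * < k_eff(n; K_m) >_n .
    The prefactor c is an assumption standing in for the filament density;
    without some prefactor of this kind the map would collapse to K_m = 0."""
    K_m = K_init
    for _ in range(max_iter):
        K_new = c * serial_stiffness(kappa_f, n_samples, k_x, K_m).mean()
        if abs(K_new - K_m) < tol:
            break
        K_m = K_new
    return K_m

# Occupation samples could come from the Monte-Carlo sketch above; here they
# are drawn from a toy binomial distribution (64 sites, 80 % bound).
rng = np.random.default_rng(2)
n_samples = rng.binomial(64, 0.8, size=500)
print(solve_medium_stiffness(kappa_f=1.0, k_x=5.0, n_samples=n_samples))
```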
instead , and to make analytical progress , we propose a simplified ansatz for the confinement potential .as we will see , this ansatz successfully describes the intuitive result of a softening of the medium as a result of crosslink unbinding .let us assume the confinement to be harmonic in the -degrees of freedom but with a confinement strength that depends on the deformation amplitude .the form of equation ( [ eq : serial_springs ] ) mimics the serial connection of crosslink and medium .the energy can then easily be calculated , and equation ( [ eq : km.sc ] ) takes the simple form which has to be solved for . the dependence on strain is implicit in the averaging procedure , as the tube potential ( [ eq : v.nonlinear.harmonic ] ) depends on .thus , the effective medium stiffness is obtained as a serial connection of the three mechanical elements fiber , , crosslinks , , and the medium itself , .figure [ graphical.sol ] presents a graphical solution of equation ( [ eq : km.sc.2 ] ) .the symbols give the effective fiber stiffness ( right - hand side of the equation , from the mc simulations ) plotted as a function of and taken at different deformation amplitudes .if we assume to be given and constant ( as in figure [ n.energy ] ) , a vertical line would give us the fiber stiffness as a function of amplitude .for large enough ( indicated by the vertical dashed line ) , a discontinuous transition is evident in the data . to extract the actual network modulus we have to find the intersection with the curve , equation ( [ eq : serial_springs ] ) , which is drawn as solid black line .the resulting is clearly decreasing with deformation amplitude , however , the discontinuous nature is not obvious anymore .a quantitative analysis is presented in figure [ km.sim.th ] .the resulting network modulus is plotted for different crosslink stiffness . for small strainthe modulus is constant , as for a linear elastic material .this linear stiffness first increases with crosslink stiffness but then saturates , when the crosslinks become stiffer than the fiber .this behavior is in line with the serial connection between fiber , crosslink and medium as embodied in equation ( [ eq : km.sc.2 ] ) . in a serial connection it is always the softer element that governs the mechanical properties . at higher strain and for stiff crosslinks ,the network modulus decreases .this decrease is not discontinuous as expected from figure [ n.energy ] , but rather smooth and gradual .in fact , a zero - temperature analysis along the lines of section [ sec : theor - fram ] shows that the discontinuity turns into a second - order transition , with a cusp as indicated in the right panel of figure [ km.sim.th ] . in the limit , , we find with and the critical amplitude .finally , figure [ n.energy.sc ] shows the resulting crosslink occupation as well as the average energy of the test filament .qualitatively , the conclusion is similar as in figure [ n.energy ] .the crosslink stiffness is identified as key factor in mediating crosslink unbinding processes .quantitatively , we see that the unbinding under load is more gradual in this second scenario , where the medium stiffness is determined self - consistently .this reflects the anticipated negative feedback of medium stiffness on crosslink unbinding . 
looking back at figure [ graphical.sol ]it is clear that the strength of the variation in or depends on the relative location of the two curves , and the effective fiber stiffness as embodied in the right - hand side of equation ( [ eq : km.sc.2 ] ) . for this geometrical factors could be importantthese could arise from network structural randomness , like bond angles or crosslink distances . in our calculationthese geometrical factors have been disregarded , which corresponds to a regular network architecture .in this study , we discussed the interplay between filament and crosslink elasticity in semiflexible polymer networks . in particular , we were interested in the force - induced unbinding of crosslinks in response to external load .importantly , we considered the limiting case of slowly changing load ( `` quasistatic '' ) , where the system is given time enough to reach an equilibrium state at each load level .the model presented is therefore purely thermodynamic in nature , and no rate constants for the crosslink dynamics are needed .possible extensions of the present work may include the effect of a load applied at finite rates .we model the filament network as an elastic medium with modulus .the stiffness is calculated on a self - consistent basis from the response of a test filament that is embedded into the medium . on the microscopic levelthe effect of the network is to confine the filament to a tube - like region in space .we quantify the applied load in terms of a network strain , which is homogeneous on a macroscopic scale . on the local scale of the individual filaments, however , even a homogeneous strain leads to an inhomogeneous ( `` nonaffine '' ) deformation of the effective medium .this is a natural consequence of network heterogeneity and filament mechanical anisotropy .we modeled this inhomogeneous deformation in terms of a distortion of the center - line of the confinement tube of the filaments .the filament ( with its bending stiffness ) resists this tube deformation , which leads to a frustration effect between filament bending , crosslink deformation and medium deformation .this competition is formalized in equation ( [ eq : km.sc.2 ] ) which we rewrite here as ^{-1}\right\rangle_n\,.\end{aligned}\ ] ] with an effective filament bending stiffness , the crosslink stiffness and the network modulus . by solving this equation , the latter is thereby obtained self - consistently from a serial connection of the filament , the crosslinks and the medium itself .the most interesting feature in this equation is the dependence on the number of bound crosslinks . with the possibility of crosslink unbinding ( decreasing ) the competition between the different mechanical elementsis avoided .this leads to a reduction of the medium stiffness with increasing network strain , and eventually to network failure , when all crosslinks are unbound .different scenarios can be distinguished .if the crosslinks are soft ( small ) , then the network modulus is dominated by the crosslinks , , and the filament effectively behaves as a rigid rod .the tendency for crosslink unbinding is weak as it does not lead to a significant stress relaxation . 
on the other hand ,if crosslink and filament stiffness compete ( large ) , then unbinding events do help relax the imposed stress and reduce the amount of stored energy .this unbinding can be sudden and discontinuous or take the form of a second - order transition , where or display a kink at a critical load .experimentally , strain - induced stress relaxation is observed in living cells after transient pulses of stretch or in - vitro when the loading rates are small . for larger loading rates ,a pronounced stiffening is found in the in - vitro system .this is consistent with the first - order crosslink unbinding scenario discussed here . in this picture , a free - energy barrier , and the associated time - scale ,prevents crosslink unbinding when the loading rate is too large .related phenomena are important for the aging behavior of kinetically trapped actin networks , where built in stresses only relax slowly and by the action of crosslink binding events .similarly , red blood cells owe their remarkable ability to undergo reversible shape changes to a rewiring of the spectrin network .this work goes beyond previous models in considering both the filament and the crosslink stiffness as factors for the rheological properties of crosslinked filament networks .moreover , we show how this interplay affects the tendency of crosslinks to bind / unbind from the filaments during a rheological experiment .the strain field imposed by the rheometer leads to nonaffine deformations on the scale of the filaments .as compared to previous models , the unrealistic assumption of affine deformations is abandoned in favour of a model that incorporates the filament length as the fundamental non - affinity scale . in extensions of the present model one should incorporate nonlinear elastic compliances of the filaments and of the crosslinks .this is believed to be important for the nonlinear strain stiffening of f - actin networks .the complex interplay between stiffening and softening described in rheological experiments would then reflect the relative importance of filament / crosslink elasticity , which lead to stiffening , and softening as due to crosslink unbinding .interestingly , unbinding processes may under some conditions also lead to the reverse ( i.e. stiffening ) effect .this further contributes to the rich nonlinear behavior of these systems , which is far from being fully understood .support by the deutsche forschungsgemeinschaft , emmy noether program : he 6322/1 - 1 , and by the collaborative research center sfb 937 is acknowledged .10 j. liu , g. h. koenderink , k. e. kasza , f. c. mackintosh , and d. a. weitz . visualizing the strain field in semiflexible polymer networks : strain fluctuations and nonlinear rheology of -actin gels ., 98:198304 , may 2007 .kasza , c.p .broedersz , g.h .koenderink , y.c .lin , w. messner , e.a .millman , f. nakamura , , t.p .stossel , f.c .mackintosh , and d.a .actin filament length tunes elasticity of flexibly cross - linked actin networks . , 99:1091 , 2010 .
The mechanical properties of cells are dominated by the cytoskeleton, an interconnected network of long elastic filaments. The connections between the filaments are provided by crosslinking proteins, which constitute, next to the filaments, the second important mechanical element of the network. An important aspect of cytoskeletal assemblies is their dynamic nature, which allows remodeling in response to external cues; the reversible nature of crosslink binding is a key mechanism underlying these dynamical processes. Here, we develop a theoretical model that provides insight into how the mechanical properties of cytoskeletal networks may depend on their constituent elements. We incorporate three important ingredients: nonaffine filament deformations in response to network strain; the interplay between filament and crosslink mechanical properties; and reversible crosslink (un)binding in response to imposed stress. With this, we are able to self-consistently calculate the nonlinear modulus of the network as a function of deformation amplitude and of the crosslink and filament stiffnesses. During loading, crosslink unbinding leads to a relaxation of stress, and therefore to a reduction of the network modulus and eventually to network failure when all crosslinks are unbound. This softening due to crosslink unbinding generically competes with an inherent stiffening response, which may be due to either filament or crosslink nonlinear elasticity.
microwave rb vapor - cell atomic clocks , based on optical - microwave double resonance , are today ubiquitous timing devices used in numerous fields of industry including instrumentation , telecommunications or satellite - based navigation systems .their success is explained by their ability to demonstrate excellent short - term fractional frequency stability at the level of 10 , combined with a small size , weight , power consumption and a relatively modest cost . over the last decade ,the demonstration of advanced atom interrogation techniques ( including for instance pulsed - optical - pumping ( pop ) ) using narrow - linewidth semiconductor lasers has conducted to the development in laboratory of new - generation vapor cell clocks .these clocks have succeeded to achieve a 100 times improvement in frequency stability compared to existing commercial vapor cell clocks . in this domain ,clocks based on a different phenomenon , named coherent population trapping ( cpt ) , have proven to be promising alternative candidates . since its discovery in , coherent population trapping physics has motivated stimulating studies in various fields covering fundamental and applied physics such as slow - light experiments , high - resolution laser spectroscopy , magnetometers , laser cooling or atomic frequency standards .basically , cpt occurs by connecting two long - lived ground state hyperfine levels of an atomic specie to a common excited state by simultaneous action of two resonant optical fields . at null raman detuning , i.e. when the frequency difference between both optical fields matches perfectly the atomic ground - state hyperfine frequency , atoms are trapped through a destructive quantum interference process into a noninteracting coherent superposition of both ground states , so - called dark state , resulting in a clear decrease of the light absorption or equivalently in a net increase of the transmitted light .the output resonance signal , whose line - width is ultimately limited by the cpt coherence lifetime , can then be used as a narrow frequency discriminator towards the development of an atomic frequency standard . in a cpt - based clock , unlike the traditional double - resonance rb clock , the microwave signal used to probe the hyperfine frequency is directly optically carried allowing to remove the microwave cavity and potentially to shrink significantly the clock dimensions .the application of cpt to atomic clocks was firstly demonstrated in a sodium atomic beam . in , n. cyret al proposed a simple method to produce a microwave clock transition in a vapor cell with purely optical means by using a modulated diode laser , demonstrating its high - potential for compactness . in , a first remarkably compact atomic clock prototype was demonstrated in nist .further integration was achieved later thanks to the proposal and development of micro - fabricated alkali vapor cells , leading to the demonstration of the first chip - scale atomic clock prototype ( csac ) and later to the first commercially - available csac .nevertheless , this extreme miniaturization effort induces a typical fractional frequency stability limited at the level of , not compliant with dedicated domains requiring better stability performances . in that sense , in the frame of the european collaborative mclocks project , significant efforts have been pursued to demonstrate compact high - performance cpt - based atomic clocks and to help to push this technology to industry . 
in standard cpt clocks ,a major limitation to reach better frequency stability performances is the low contrast ( , the amplitude - to - background ratio ) of the detected cpt resonance .this low contrast is explained by the fact that atoms interact with a circularly polarized bichromatic laser beam , leading most of the atomic population into extreme zeeman sub - levels of the ground state , so called `` end - states '' .several optimized cpt pumping schemes , aiming to maximize the number of atoms participating to the clock transition , have been proposed in the literature to circumvent this issue ( , and references therein ) , but at the expense of increased complexity . in that sense , a novel constructive polarization modulation cpt pumping technique , named double - modulation ( dm ) scheme , was recently proposed .it consists to apply a phase modulation between both optical components of the bichromatic laser synchronously with a polarization modulation .the phase modulation is needed to ensure a common dark state to both polarizations , allowing to pump a maximum number of atoms into the desired magnetic - field insensitive clock state .this elegant solution presents the main advantage compared to the push - pull optical pumping or the lin technique to avoid any optical beam separation or superposition and is consequently well - adapted to provide a compact and robust linear architecture setup . in this article, we demonstrate a high - performance cw - regime cpt clock based on the dm technique .optimization of the short - term frequency stability is performed by careful characterization of the cpt resonance versus relevant experimental parameters .a short - term frequency stability at the level of up to 100 s , comparable to best vapor cell frequency standards , is reported .a detailed noise budget is given , highlighting a dominant contribution of the microwave power fluctuations .section ii describes the experimental setup .section iii reports the detailed cpt resonance spectroscopy versus experimental parameters .section iv reports best short - term frequency stability results .noise sources limiting the stability are carefully analysed . in sectionv , we study the clock frequency shift versus each parameter and estimate the limitation of the clock mid - term frequency stability .our setup is depicted in fig .a dfb laser diode emits a monochromatic laser beam around , the wavelength of the cs line . with the help of a fiber electro - optic phase modulator ( eopm ) , modulated at with about microwave power , about of the carrier poweris transferred into both first - order sidebands used for cpt interaction .the phase between both optical sidebands , so - called raman phase in the following , is further modulated through the driving microwave signal .two acousto - optic modulators ( aoms ) are employed .the first one , aom1 , is used for laser power stabilization .the second one , aom2 , allows to compensate for the buffer - gas induced optical frequency shift ( ) in the cpt clock cell .a double - modulated laser beam is obtained by combining the phase modulation with a synchronized polarization modulation performed thanks to a liquid crystal polarization rotator ( lcpr ) .the laser beam is expanded to before the vapor cell .the cylindrical cs vapor cell , diameter and long , is filled with of mixed buffer gases ( argon and nitrogen ) .unless otherwise specified , the cell temperature is stabilized to about . 
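For an ideal phase modulator, the optical power is distributed over a Bessel-function sideband comb, and the fraction carried by the two first-order sidebands is maximized at a modulation index near 1.8; this is the dependence probed with the Fabry-Perot cavity below. The sketch makes this explicit, with the caveats that the modulation index is only proportional to the square root of the applied microwave power and that residual amplitude modulation and other imperfections of the real EOPM are ignored.

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import minimize_scalar

# Ideal phase modulation: carrier power J_0(beta)^2, each n-th order sideband
# J_n(beta)^2, with modulation index beta ~ sqrt(P_microwave).
def first_order_fraction(beta):
    """Fraction of the optical power carried by the two first-order sidebands."""
    return 2.0 * jv(1, beta) ** 2

for beta in (0.5, 1.0, 1.5, 2.0):
    print(f"beta = {beta:.1f} : carrier {jv(0, beta)**2:.2f}, "
          f"first-order pair {first_order_fraction(beta):.2f}")

# Modulation index that maximizes the first-order sideband power:
res = minimize_scalar(lambda b: -first_order_fraction(b),
                      bounds=(0.5, 3.0), method="bounded")
print(f"optimum beta ~ {res.x:.2f}, fraction ~ {first_order_fraction(res.x):.2f}")
```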
a uniform magnetic field of is applied along the direction of the cell axis by means of a solenoid .the ensemble is surrounded by two magnetic shields in order to remove the zeeman degeneracy .half(quarter)-wave plate , bg buffer gas , pd photodiode.,scaledwidth=45.0% ] we first utilized a fabry - perot cavity to investigate the eopm sidebands power ratio versus the coupling microwave power ( ) , see fig .[ sideband ] .we choose around to maximize the power transfer efficiency into the first - order sidebands .the sidebands spectrum is depicted in the inset of fig .[ sideband ] . obtained by scanning the fp cavity length ,notice the log scale of the y - axis ., scaledwidth=45.0% ] since the laser intensity noise is known as being one of the main noise sources which limit the performances of a cpt clock , laser power needs to be carefully stabilized . for this purpose ,a polarization beam splitter ( pbs ) reflects towards a photo - detector a part of the laser beam , the first - order diffracted by aom1 following the eopm .the output voltage signal is compared to an ultra - stable voltage reference ( lt1021 ) .the correction signal is applied on a voltage variable power attenuator set on the feeding rf power line of the aom1 with a servo bandwidth of about . the out - loop laser intensity noise ( rin )is measured just after the first pbs with a photo - detector ( pd ) , which is not shown in fig .[ fig1 ] . the spectrum of the resulting rin with and w / o locking is shown in fig . [ fig3 ] .a improvement at ( lo modulation frequency for clock operation ) is obtained in the stabilized regime .it is worth to note that the dfb laser diode we used , with a linewidth of about , is sensitive even to the lowest levels of back - reflections , e.g. , the coated collimated lens may introduce some intensity and frequency noise at the regime of .finding the correct lens alignment to minimize the reflection induced noise while keeping a well - collimated laser beam was not an easy task . to reduce light feedback from the eopm fiber face, we use a isolator before the eopm .the fiber - coupled eopm induces additional intensity noise depicted in fig .nevertheless , thanks to the laser power locking , we can reduce most of these noises by at least in the range of to .line in the vacuum reference cell and in the clock cell recorded with the bichromatic laser .the two absorptions from left to right correspond to the excited level and , respectively .for the reference cell signal : laser power , beam diameter , cell temperature , aom frequency .inset , the atomic levels involved in line of cesium . , scaledwidth=45.0% ] our laser frequency stabilization setup , similar to , is depicted in fig .we observe in a vacuum cesium cell the two - color doppler - free spectrum depicted in fig .the bi - chromatic beam , linearly polarized , is retro - reflected after crossing the cell with the orthogonal polarization .only atoms of null axial velocity are resonant with both beams .consequently , cpt states built by a beam are destroyed by the reversed beam , leading to a doppler - free enhancement of the absorption .the laser frequency detuning in fig .[ fig4 ] corresponds to the laser carrier frequency tuned to the center of both transitions and in d line of cesium , where is the hyperfine quantum number . for this record ,the microwave frequency is , half the cs ground state splitting , and the dfb laser frequency is scanned . the frequency noise with and w / o locking are presented in fig . 
[ fig5 ] .the servo bandwidth is about and the noise is found to be reduced by about at = ( local oscillator modulation frequency in clock operation ) .we studied the response time of the lcpr ( fpr-100 - 895 , meadowlark optics ) .as illustrated in fig .[ fig6 ] , the measured rise ( fall ) time is about and the polarization extinction ratio is about . in comparison ,the electro - optic amplitude modulator ( eoam ) used as polarization modulator in our previous investigations showed a response time of ( limited by our high voltage amplifier ) and a polarization extinction ratio of . here , we replace it by a liquid crystal device because its low voltage and small size would be an ideal choice for a compact cpt clock , and we will show in the following that the longer switching time does not limit the contrast of the cpt signal .the electronic system ( local oscillator and digital electronics for clock operation ) used in our experiment is depicted in fig .[ microwave - chain ] .the microwave source is based on the design described in .the local oscillator ( lo ) is a module ( xm16 pascall ) integrating an ultra - low phase noise 100 mhz quartz oscillator frequency - multiplied without excess noise to .the signal is synthesized by a few frequency multiplication , division and mixing stages .the frequency modulation and tuning is yielded by a direct digital synthesizer ( dds ) referenced to the lo .the clock operation is performed by a single field programmable gate array ( fpga ) which coordinates the operation of the dds , analog - to - digital converters ( adc ) and digital - to - analog converter ( dacs ) : \(1 ) the dds generates a signal with phase modulation ( modulation rate , depth ) and frequency modulation ( , depth ) .\(2 ) the dac generates a square - wave signal to drive the lcpr with the same rate , synchronous to the phase modulation .\(3 ) the adc is the front - end of the lock - in amplifier .another dac , used to provide the feedback to the local oscillator frequency , is also implemented in the fpga .the clock frequency is measured by comparing the lo signal with a signal delivered by a h maser of the laboratory in a symmetricom allan deviation test set .the frequency stability of the maser is at integration time .modulation frequency of the signal , polarization and phase modulation frequency , pumping time , detection window.,scaledwidth=45.0% ] as illustrated in fig . [ fig8 ] , the polarization and phase modulation share the same modulation function . after a pumping time prepare the atoms into the cpt state , we detect the cpt signal with a window of length . in order to get an error signal to close the clock frequency loop ,the microwave frequency is square - wave modulated with a frequency , and a depth . in our case , we choose hz , as a trade - off between a low frequency to have time to accumulate the atomic population into the clock states by the dm scheme and a high operating frequency to avoid low frequency noise in the lock - in amplification process and diminish the intermodulation effects . clock transition . 
working parameters : , , , , , ., scaledwidth=45.0% ] a typical experimental cpt signal , recorded with this time sequence , showing all the cpt transitions allowed between zeeman sub - levels of the cs ground state is reported in fig .[ zeeman ] .the raman detuning is the difference between the two first sideband spacing and the cs clock resonance .the spectrum shows that the clock levels ( transition ) are the most populated and that the atomic population is symmetrically distributed around the sub - levels .the distortion of neighbouring lines is explained by magnetic field inhomogeneities .it can be shown that the clock short - term frequency stability limited by an amplitude noise scales as , with the full width at half maximum ( fwhm ) of the clock resonance , and the contrast of the resonance .usually , the ratio of contrast to is adopted as a figure of merit , _i.e. _ .the best stability should be obtained by maximizing .the stability of the clock is measured by the allan standard deviation , with the averaging time .when the signal noise is white , with standard deviation , and for a square - wave frequency modulation , the stability limited by the signal - to - noise ratio is equal to with the clock frequency , and the slope of the frequency discriminator . in cpt clocks ,one of the main sources of noise is the laser intensity noise , which leads to a signal noise proportional to the signal .therefore it is more convenient to characterize the quality of the signal of a cpt atomic clock by a new figure of merit , , where is the slope of the error signal ( in v / hz ) at raman resonance ( ) , and is the detected signal value ( in v ) at the interrogating frequency ( the clock resonance frequency plus the modulation depth ) , see fig .[ cptsignal ] .note that an estimation of the discriminator slope is also included in , since the contrast is the signal amplitude divided by the background . then equals , is a rough approximation of the slope , and an approximation of the working signal . in our experimental conditions . , , , , , .,scaledwidth=45.0% ] we investigated the effect of relevant parameters on both figures of merit to optimize the clock performances .in order to allow a comparison despite different conditions , the error signals are generated with the same unit gain .since the resonance linewidth is also subject to change , it is necessary to optimize the 4.6 ghz modulation depth to maximize . here for simplicity , we first recorded the cpt signal , then we can numerically compute optimized values of and . in the following ,we investigate the dependence of and on several parameters including the cell temperature ( ) , the laser power ( ) , the microwave power ( ) , the detection window duration ( ) , the detection start time ( ) , and the polarization and phase modulation frequency ( ) . 
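Before turning to the parameter scans, the sketch below illustrates how this figure of merit can be read off a recorded line: the CPT resonance is modeled as a toy Lorentzian (the lineshape, contrast, width and modulation depth are placeholder assumptions), the error signal is generated by square-wave frequency modulation, and the ratio of its slope at resonance to the detected signal at the interrogation point gives the figure of merit.

```python
import numpy as np

def cpt_signal(delta, amp=0.02, fwhm=500.0, background=1.0):
    """Toy CPT line: Lorentzian peak of height `amp` (contrast ~ amp/background)
    on a flat transmission background; `delta` is the Raman detuning in Hz."""
    return background + amp / (1.0 + (2.0 * delta / fwhm) ** 2)

def error_signal(delta, depth):
    """Square-wave frequency modulation of depth `depth`: difference of the
    transmissions sampled at detunings +depth and -depth."""
    return cpt_signal(delta + depth) - cpt_signal(delta - depth)

depth = 250.0                               # modulation depth [Hz] (toy value)
eps = 1.0                                   # step for the numerical slope [Hz]
slope = (error_signal(eps, depth) - error_signal(-eps, depth)) / (2 * eps)
signal_at_probe = cpt_signal(depth)         # detected level at nu_0 + depth
print(f"discriminator slope ~ {slope:.2e} (arb. units per Hz), "
      f"figure of merit |slope|/signal ~ {abs(slope) / signal_at_probe:.2e} per Hz")
# In practice `depth` would itself be scanned to maximize this ratio.
```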
, , contrast and width of the clock transition as function of cell temperature .all other working parameters is the same as fig .10.,scaledwidth=45.0% ] from the figures of merit shown in fig .[ tcell ] , the optimized cell temperature is around for .the narrower linewidth observed at higher , already observed by godone _ , can be explained by the propagation effect : the higher the cell temperature , the stronger the light absorption by more atoms , and less light intensity is seen by the atoms at the end side of the vapour cell .this leads to a reduction of the power broadening and a narrower signal , measured by the transmitted light amplitude .the optimum temperature depends on the laser power as depicted in fig .[ tcell-2 ] .nevertheless , the overall maximum of is reached with at . as a function of cell temperature for various laser powers .all other working parameters is the same as fig ., scaledwidth=45.0% ] the figures of merit , , the contrast and the width are plotted as a function of the laser power in fig .[ pl ] for .the laser powers maximizing and are and , respectively .the allan deviation reaches a better value at , which justifies our choice of as figure of merit . and in the following parameters investigation, we only show the as figure of merit for clarity . , , and width of the clock transition as a function of laser power with .all other working parameters is the same as fig .10.,scaledwidth=45.0% ] , and width of the clock transition as function of microwave power with .all other working parameters is the same as fig ., scaledwidth=45.0% ] , and width versus the microwave power are shown in fig .the behaviour of is basically in agreement with the fractional power of first ( ) sidebands of fig .[ sideband ] .the optimized microwave power is around . as we can see on fig .[ tw ] , a short detection window would generate a higher contrast signal and higher figures of merit .however , we found that a longer time , e.g. , , results in a better allan deviation at one - second averaging time .it is due to the conflict between the higher signal slope ( ) and the increased number of detected samples which help to reduce the noise , see eq.([sigma_yp_i-3 ] ) . , and width of the clock transition as a function of . with and all other working parametersis the same as fig .10.,scaledwidth=45.0% ] the same parameters are plotted versus the pumping time in fig .the figures of merit and firstly increase and then decrease at , because when the detection window , the signals of two successive polarizations are included .the dynamic behaviour of the atomic system induces the decrease of the cpt amplitude ( see fig .10 in ) .after , increases again and reach a maximum .thus we can say that , in a certain range , a longer will lead to a greater atomic population pumped into the clock states as depicted in fig .[ td ] , yielding higher figures of merit .the behaviour of the linewidth versus the pumping time is not explained to date .nevertheless , note that here the steady - state is not reached and the width behaviour results certainly from a transient effect . , and width of the clock transition versus .all other working parameters is the same as fig .10.,scaledwidth=45.0% ] figure [ fm ] shows , , and width versus the polarization ( and phase ) modulation frequency the maxima of is reached at low frequency . in one hand, this is an encouraging result to demonstrate the suitability of the lcpr polarization modulator in this experiment . 
in an other hand, the higher rate would be better for a clock operation with lock - in method to modulate and demodulate the error signal , to avoid the low frequency noises such as noise .therefore , we chose and .we have noticed that the behavior of is not exactly the same than the one observed in our previous work with a fast eoam , where the signal amplitude was maximized at higher frequencies .this can be explained by the slower response time of the polarization modulator and the lower laser intensity used .the linewidth reaches a minimum around .this behaviour will be investigated in the future . , and width of the clock transition as function of .all other working parameters is the same as fig .the high contrast and narrow line - width cpt signal obtained with the optimized values of the parameters is presented in fig .[ cptsignal ] , with the related error signal .the allan standard deviation of the free - running lo and of the clock frequency , measured against the h maser , are shown in fig .[ allan ] .the former is in correct agreement with its measured phase noise . in the offset frequency region , the phase noise spectrum of the free - running lo signalis given in by with = , signature of a flicker frequency noise .this phase noise yields an expected allan deviation given by ( ) 1.2 , close to the measured value of at. the measured stability of the cpt clock is up to averaging time for our best record .this value is close to the best cpt clocks , demonstrating that a high - performance cpt clock can be built with the dm - cpt scheme .a typical record for longer averaging times is also shown in fig .[ allan ] . for averaging times longer than , the allan deviation increases like , signature of a random walk frequency noise .( ) , respectively . , scaledwidth=45.0% ] we have investigated the main noise sources that limit the short - term stability . for a first estimation , we consider only white noise sources , and for the sake of simplicity we assume that the different contributions can independently add , so that the total allan variance can be computed as with the contribution due to the phase noise of the local oscillator , and the allan variance of the clock frequency induced by the fluctuations of the parameter .when modifies the clock frequency during the whole interrogation cycle , can be written with the variance of measured in 1 hz bandwidth at the modulation frequency , is the clock frequency sensitivity to a fluctuation of . here , the detection signal is sampled during a time window with a sampling rate , where is a cycle time . in this case eq.([sigma_yp_i ] ) becomes with the variance of sampled during ; with the value of the power spectral density ( psd ) of at the fourier frequency ( assuming a white frequency noise around ) . when induces an amplitude fluctuation with a sensitivity , eq.([sigma_yp_i-2 ] ) becomes with the slope of the frequency discriminator in v / hz . we review below the contributions of the different sources of noise . detector noise : the square root of the power spectral density ( psd ) of the signal fluctuations measured in the dark is shown in fig . [ snr ]it is nv in bandwidth at the fourier frequency . 
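Both the measured stability curves of Fig. [allan] and the per-source contributions reviewed in this section are expressed as Allan deviations. For reference, a minimal overlapping Allan-deviation estimator that can be applied to a fractional-frequency record is sketched below; the synthetic white-noise data here only stand in for a real measurement and should average down as 1/sqrt(tau).

```python
import numpy as np

def overlapping_allan_deviation(y, tau0, m_list):
    """Overlapping Allan deviation of fractional-frequency data y sampled
    every tau0 seconds, for the averaging factors in m_list."""
    y = np.asarray(y, dtype=float)
    out = []
    for m in m_list:
        # averages over windows of m samples, stepping one sample at a time
        ybar = np.convolve(y, np.ones(m) / m, mode="valid")
        d = ybar[m:] - ybar[:-m]             # differences of adjacent averages
        avar = 0.5 * np.mean(d ** 2)
        out.append((m * tau0, np.sqrt(avar)))
    return out

# Example with toy white frequency noise at the 1e-12 level:
rng = np.random.default_rng(3)
y = 1e-12 * rng.normal(size=100_000)
for tau, adev in overlapping_allan_deviation(y, tau0=1.0, m_list=[1, 10, 100, 1000]):
    print(f"tau = {tau:7.0f} s : sigma_y = {adev:.2e}")
```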
according to eq.([sigma_yp_i-3 ] )the contribution of the detector noise to the allan deviation at one second is .shot - noise : with the transimpedance gain and the detector current , eq.([sigma_yp_i-3 ] ) becomes with the electron charge .the contribution to the allan deviation at one second is .laser fm - am noise : it is the amplitude noise induced by the laser carrier frequency noise .the slope of the signal with respect to the laser frequency is at optical resonance . according to eq.([sigma_yp_i-3 ] ) with data of laser - frequency - noise psd of fig .[ fig5 ] at 125 hz , we get a allan deviation of at one second .laser am - am noise : it is the amplitude noise induced by the laser intensity noise .the measured signal sensitivity to the laser power is at , combined with the laser intensity psd of fig .[ fig3 ] it leads to the amplitude noise nv , and an allan deviation of at one second .lo phase noise : the phase noise of the local oscillator degrades the short - term frequency stability via the intermodulation effect . it can be estimated by : our microwave source is based on which shows an ultra - low phase noise at fourier frequency .this yields a contribution to the allan deviation of at one second .microwave power noise : fluctuations of microwave power lead to a laser intensity noise , which is already taken into account in the rin measurement .we show in the next section that they also lead to a frequency shift ( see fig .[ f0-puw ] ) .the allan deviation of the microwave power at is , see inset of fig .[ f0-puw ] . with a measured slope of hz / dbm, we get a fractional - frequency allan deviation of , which is the largest contribution to the stability at .note that in our set - up the microwave power is not stabilized . + the other noise sources considered have much lower contributions , they are the laser frequency - shift effect , _i.e. _ am - fm and fm - fm contributions , the cell temperature and the magnetic field . table [ 1s ] resumes the short - term stability noise budget ..[tab : table1 ] noise contributions to the stability at .[ cols="^,^,^",options="header " , ] [ 1000s ]we have implemented a compact vapor cell atomic clock based on the dm cpt technique . 
a detailed characterization of the cpt resonance versus several experimental parameters was performed .a clock frequency stability of up to averaging time was demonstrated .for longer averaging times , the allan deviation scales as , signature of a random walk frequency noise .it has been highlighted that the main limitation to the clock short and mid - term frequency stability is the fluctuations of the microwave power feeding the eopm .improvements could be achieved by implementing a microwave power stabilization .another or complementary solution could be to choose a finely tuned laser power value minimizing the microwave power sensitivity .this adjustment could be at the expense of the signal reduction and a trade - off has to be found .nevertheless , the recorded short - term stability is already at the level of best cpt clocks and close to state - of - the art rb vapor cell frequency standards .these preliminary results show the possibility to a high - performance and compact cpt clock based on the dm - cpt technique .we thank moustafa abdel hafiz ( femto - st ) , david holleville and luca lorini ( lne - syrte ) for helpful discussions .we are also pleased to acknowledge charles philippe and ouali acef for supplying the thermal insulation material , michel abgrall for instrument symmetricom 5125a lending , david horville for laboratory arrangement , jos pinto fernandes , michel lours for electronic assistance , pierre bonnay and annie grard for manufacturing cs cells .p. y. is supported by the facilities for innovation , research , services , training in time & frequency ( labex first - tf ) .this work is supported in part by anr and dga(isimac project anr-11-astr-0004 ) .this work has been funded by the emrp program ( ind55 mclocks ) .the emrp is jointly funded by the emrp participating countries within euramet and the european union .t. bandi , c. affolderbach , c. stefanucci , f. merli , a. k. skrivervik and g. mileti , coninuous - wave double - resonance rubidium standard with stability , ieee ultrason .. contr . * 61 * , 11 , 17691778 ( 2014 ) .s. kang , m. gharavipour , c. affolderbach , f. gruet , and g. mileti , demonstration of a high - performance pulsed optically pumped rb clock based on a compact magnetron - type microwave cavity , j. appl. phys . * 117 * , 104510 ( 2015 ) .j. e. thomas , s. ezekiel , c. c. leiby , r. h. picard , and c. r. willis , ultrahigh - resolution spectroscopy and frequency standards in the microwave and far - infrared regions using optical lasers , opt . lett . * 6 * , 298 - 300 ( 1981 ) .j. e. thomas , p. r.hemmer , s. ezekiel , c. c. leiby , , r. h. picard , and c. r. willis , observation of ramsey fringes using a stimulated , resonance raman transition in a sodium atomic beam , phys .lett . * 48 * , 867 - 870 ( 1982 ) .j. kitching , n. vukicevic , l. hollberg , s. knappe , r.wynands , and w. weidemann , a microwave frequency reference based on vcsel - driven dark line resonances in cs vapor , ieee trans .. measur . * 49 * , 1313 - 1317 ( 2000 ) .v. shah and j. kitching , advances in coherent popualtion trapping for atomic clocks , in advances in atomic , molecular , and optical physics , edited by e. arimondo , p. r. berman , and c. c. lin , vol. 59 ( elsevier , amsterdam , 2010 ) .peter yun , doctor thesis , exploring new approaches to coherent population trapping atomic frequency standards , wuhan institute of physics and mathematics chinese academy of sciences , wuhan , china , 2012 .x. liu , j .-mrolla , s. gurandel , c. gorecki , e. 
de clercq , and r. boudot , coherent population trapping resonances in buffer - gas - filled cs - vapor cells with push - pull optical pumping , phys .a * 87 * , 013416 ( 2013 ) .t. zanon , s. gurandel , e. de clercq , d. holleville , n. dimarcq and a. clairon , high contrast ramsey fringes with coherent - population - trapping pulses in a double lambda atomic system , phys .* 94 , 193002 ( 2005 ) .* r.schmeissner , n. von bande , a.douahi , o.parillaud , m.garcia , m.krakowski , m.baldy , the optical feedback spatial phase driving perturbations of dfb laser diodes in an optical clock , proceedings of the 2016 european frequency and time forum , york ( 2016 ) , available at http://www.eftf.org/previousmeetings.php .b. franois , c. e. calosso , m. abdel hafiz , s. micalizio , and r. boudot , simple - design ultra - low phase noise microwave frequency synthesizers for high - performing cs and rb vapor - cell atomic clocks , rev .instrum . * 86 * , 094707 ( 2015 ) . c. e. calosso , s. micalizio , a. godone , e. k. bertacco , f. levi .electronics for the pulsed rubidium clock : design and characterization .ieee trans .control * 54 * , 1731 - 1740 ( 2007 ) .p. yun , s. mejri , f. tricot , m. abdel hafiz , r. boudot , e. de clercq , s. gurandel , double - modulation cpt cesium compact clock , 8th symposium on frequency standards and metrology 2015 , j. of physics : conference series * 723 * , 012012 ( 2016 ) .m. zhu and l.s .cutler , theoretical and experimental study of light shift in a cpt - based rb vapor cell frequency standard , in proceedings of the 32nd precise time and time interval systems and applications meeting , p. 311, ed . by l.a .breakiron ( us naval observatory , washington , dc , 2000 ) . c. affolderbach ,c. andreeva , s. cartaleva , t. karaulanov , g. mileti , and d. slavov , light - shift suppression in laser optically pumped vapour - cell atomic frequency standards , appl .b * 80 * , 841 - 8 ( 2005 ) . v. shah , v. gerginov , p. d. d. schwindt , s. knappe , l. hollberg and j. kitching , continuous light- shift correction in modulated coherent population trapping clocks , appl . phys . lett . * 89 , 151124 ( 2006 ) . *
we demonstrate a vapor cell atomic clock prototype based on continuous-wave ( cw ) interrogation and the double-modulation coherent population trapping ( dm-cpt ) technique . the dm-cpt technique uses a synchronous modulation of the polarization and of the relative phase of a bi-chromatic laser beam in order to increase the number of atoms trapped in a dark state , _i.e._ a non-absorbing state . the narrow resonance , observed in transmission through a cs vapor cell , is used as a frequency discriminator in the atomic clock . a detailed characterization of the cpt resonance versus numerous experimental parameters is reported . a short-term frequency stability of up to averaging time is measured . this performance is more than one order of magnitude better than that of industrial rb clocks and comparable to that of the best laboratory-prototype vapor cell clocks . the noise budget analysis shows that the short- and mid-term frequency stability is mainly limited by the power fluctuations of the microwave signal used to generate the bi-chromatic laser field . these preliminary results demonstrate that the dm-cpt technique is well suited for the development of a high-performance atomic clock , with a potentially compact and robust setup owing to its linear architecture . this clock could find future applications in industry , telecommunications , instrumentation or global navigation satellite systems .
energy is fundamental to science in general and the life sciences in particular . for this reason , the energy - based bond graph modelling method , originally developed in the context of engineering ,has been applied to modelling biomolecular systems .bond graphs represent the energy consumption per unit time , or power flow , for each component of a given system .power flows are calculated as the product of appropriately chosen ` signal quantities ' named ` efforts ' and ` flows ' .examples of effort include force , voltage , pressure and chemical potential . corresponding examples of flow are velocity , current , fluid flow rate and molar flow rate . by describing systems under the unifying principle of energy flows and due to the abstract representation of these flows in terms of generalised effort and flow quantities , the bond graph approach is particularly appropriate for modelling systems with multiple energy domains .a comprehensive account of the use of bond graphs to model engineering systems is given in the textbooks of , , and and a tutorial introduction for control engineers is given by .chemical reactions are considered by and . in particular , as an engineering example , has looked at chemoelectrical energy flows in hybrid vehicles .chemoelectrical energy transduction is fundamental to living systems and occurs in a number of contexts including oxidative phosphorylation and chemiosmosis , synaptic transmission and action potentials in excitable membranes .for this reason , this paper extends the energy - based bond graph modelling of biomolecular systems to biological systems which involve chemoelectrical energy transduction .a simple model of the action potential in excitable membranes is used as an illustrative example of the general approach .although such simple models could easily be derived by other means , our general approach could be used to build thermodynamically compliant models of large hierarchical systems such as those describing metabolism , signalling and neural transmission .understanding the biophysical processes which underlie the generation of the action potential in excitable cells is of considerable interest , and has been the subject of intensive mathematical and computational modelling . since the early work of on modelling the ionic mechanisms which give rise to the action potential in neurons , mathematical models of the action potentialhave incorporated ever - increasing biophysical and ionic detail , and have been formulated to describe both normal and pathophysiological mechanisms .generation of the action potential comes at a metabolic cost .energy is required to maintain the imbalance of ionic species across the membrane , such that when ion channels open there is a flux of ions ( current ) across the membrane initially carried by sodium ions generating rapid membrane depolarisation ( the upstroke of the action potential ) .each action potential reduces the ionic imbalance and each ionic species needs to be transported across the membrane against an adverse electrochemical gradient to restore the imbalance this requires energy .the role of energy in neural systems has been widely discussed in the literature and it has been suggested that metabolic cost is a unifying principle underlying the functional organisation and biophysical properties of neurons .furthermore , posed the question `` does impairment of energy metabolism result in excitotoxic neuronal death in neurodegenerative illnesses ? 
'' more recently , it has been suggested that an energy - based approach is required to elucidate neuro - degenerative diseases such as parkinson s disease . in such studies , the flow of across the membraneis taken as a proxy for energy consumption associated with action potential generation , as has to be pumped back across the membrane by an energy - consuming atpase reaction .this energetic cost is often quoted as an equivalent number of atp molecules required to restore the ionic concentration gradient through activity of the sodium - potassium atpase ( the na pump ) , as calculated via stoichiometric arguments . while this provides a useful indication of energetic cost , this is however an imprecise approach , which can not produce reliable estimates of energy flows under all conditions ( physiological and pathophysiological ) .what is required instead is a way of simulating and calculating the actual energy flows associated with these ionic movements through a physically - based modelling approach . herewe develop a general bond graph based modelling framework that enables explicit calculation of the energy flows involved in moving ions across the cell membrane .our model can be applied to the regulation of the membrane potential via ion channels in any cell type obvious examples include , for example , cardiac and skeletal muscle cells but in order to give a specific application we investigate the energy cost for generating a neural action potential . because ions carry electrical charge , differences in the concentration of a particular ionic species on either side of a membrane generate both a chemical potential difference due to the concentration gradient as well as an electrical potential difference due to charge imbalance .we derive a bond graph formulation for the voltage - dependent ionic flow across a membrane that comprises both the classical hodgkin - huxley model and its more detailed variants focussing on the associated flow of energy . one component of this formulation accounts for the behaviour of the ion channel as an electrical resistor that can be investigated experimentally via current - voltage relationships , also known as - curves .popular models for - curves are , for example , the linear - dependency that corresponds to ohm s law and the non - linear goldman - hodgkin - katz ( ghk ) equations .hodgkin - huxley - like models assume a linear - relationship . represented the voltage - dependent opening and closing of ion channels by an empirical model , their gating variables , which led to excellent agreement with experimental data .this paper replaces this empirical model of gating by an physically - based voltage - dependent markov model with one open and one closed state .whereas this model accounts for the energy needed for moving the ion channel between conformations where the channel is in an open or a closed state , respectively , the empirical gating variables of the hodgkin - huxley model do not have a similar interpretation . 
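as an illustration of such a two-state ( closed <-> open ) voltage-dependent markov gating scheme, the sketch below integrates dp/dt = alpha(v)(1 - p) - beta(v) p with forward euler. the exponential rate law and every numerical value are illustrative assumptions for this sketch, not the gating model or parameters fitted in the paper:

```python
import numpy as np

# two-state (closed <-> open) voltage-dependent gating, integrated with forward euler.
# the rate law and all numerical values below are illustrative assumptions.
def rates(V, k0=1.0, q=0.04, V_half=-40.0):
    """opening/closing rates in 1/ms; q ~ zF/RT per mV (illustrative)."""
    alpha = k0 * np.exp( q * (V - V_half) / 2.0)   # closed -> open
    beta  = k0 * np.exp(-q * (V - V_half) / 2.0)   # open -> closed
    return alpha, beta

def open_probability(V, p=0.0, dt=0.01, t_end=50.0):
    """integrate dp/dt = alpha*(1 - p) - beta*p to (near) steady state."""
    alpha, beta = rates(V)
    for _ in range(int(t_end / dt)):
        p += dt * (alpha * (1.0 - p) - beta * p)
    return p

for V in (-80.0, -40.0, 0.0, 40.0):
    print(f"V = {V:6.1f} mV   open probability ~ {open_probability(V):.3f}")
```

at steady state the open probability is alpha/(alpha + beta), a sigmoidal function of voltage; unlike the empirical hodgkin-huxley gating variables, the forward and backward rates of such a scheme can each be given a free-energy interpretation, which is the point made above.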
as our models are formulated using bond graphs , important physical quantities such as mass and energyare conserved ( in the sense that any dissipative processes are directly accounted for , and any inputs to or losses from the system are quantified ) .this theoretical approach is then applied to analyse energy consumption in retinal ganglion cells based on _ in vitro _ experimental data collected and analysed from retinal ganglion cells ( rgcs ) of wild - type ( wt ) and degenerative ( rd1 ) mice .our use of the rd1 degenerate retina mouse model ensures that the outcomes of this project are directly relevant to human patients since rd1 mice have a degenerate retina that has distinct similarities to that observed in human patients with _retinitis pigmentosa _ a set of hereditary retinal diseases that results from the degenerative loss of the photoreceptors in the retina .a virtual reference environment is available for this paper at https://github.com/uomsystemsbiology/energetic_cost_reference_environment .external to the membrane is represented by : , the ion internal to the membrane is represented by : and the internal to external molar flow rate is .: is the electrostoichiometric transformer where is the integer ionic charge .: represents the membrane potential as an electrogenic capacitance with electrogenic potential and electrogenic flow .the ionic flow is determined by the reaction component : and modulated by the gating affinity associated with the : .the two components : and : and associated junctions may be relaced by ports ; this enables the model to be reused within a hierarchical framework ., title="fig : " ] [ subfig : channelreac_abg ] we represent the flow of ions through the open pore of an ion channel in analogy to a chemical reaction : { } } } x_e + g\ ] ] the intracellular and extracellular concentrations of a particular ion are represented as different chemical species , the intracellular species and the extracellular species .conversion from to and vice versa is modelled by an enzymatic reaction with the _ gating species _ that `` catalyses '' the flow of ions across the cell membrane and accounts for the voltage - dependent opening and closing of the channel as well as the behaviour of the channel as an electrical resistor .the influence of the membrane potential is represented in this reaction by the _electrogenic species _ . further below wewill explain how an electrical potential can be converted to an equivalent chemical potential .a bond graph representation of according to the framework developed by is given in figure [ fig : bg_channel ] .using this specific example we briefly review the bond graph approach developed in our previous work .the components of a bond graph are distinguished based on how they transform energy .capacitors or springs store energy ( ) , resistors or dampers dissipate energy ( ) , and transducers ( or transformers ) ( ) which transmit and convert , but do not dissipate , power .the chemical species appearing in are represented in the bond graph by capacitors : , : , : and : . 
because a given type of component usually occurs more than once in a given system , the ` colon ' notation is adopted to distinguish between different instances of each component type : the symbol preceding the colon identifies the type of component , and the label following the colon identifies the particular instance .let us first consider the species and .these are simply the concentrations of a particular ion within the cell ( ) and outside the cell ( ) so that their rates of change must equal the molar flow rate in with opposite sign : it is assumed that in the body biochemical reactions occur under conditions of constant pressure ( isobaric ) and constant temperature ( isothermal ) . under these conditions ,the chemical potential of substance measured in is given in terms of its mole fraction as : where is the value of when is pure ( ) , is the universal gas constant , is the absolute temperature and is the natural ( or napierian ) logarithm . by introducing the _ thermodynamic constant _ where is the total number of moles in the mixture we can express the chemical potential as a function of molar amount of the species : as discussed by the chemical potential and the molar flow are appropriate _ effort _ and _ flow _ variables for modelling chemical reactions .the product of chemical potential and molar flow is the energy flow into the bond graph components and has the unit of _ power_.this is shown in figure [ fig : bg_channel ] by power bonds ( or more simply ` bonds ' ) , drawn as harpoons : .these bonds can optionally be annotated with specific effort and flow variables , for example {e} ] imposes the gating affinity and the corresponding flow . with the gating affinity represent two characteristics of the ionic flow through a channel .the term accounts for the fact that an ion channel has an electrical resistance that opposes the ionic current .the electrical resistance is commonly investigated experimentally by determining current - voltage relationships .it will be demonstrated that the two most commonly used models for current - voltage relationships of ion channels , ohm s law and the goldman - hodgkin - katz ( ghk ) equations , can be obtained by suitable choices of . whereas represents the conductance through an open channel, provides a model for the voltage - dependent opening and closing of the channel . from equations ( [ eq : a^f]-[eq : v_exp ] ) we obtain our model for the ionic flow through an ion channel : \\ & = \kappa k_{ion } \exp{a^g(v ) } \left ( c_i \exp { \bar{v}}- c_e \right ) \label{eq : v_1}\\ \text{where } c_i & = \frac{x_i}{c_i},\ ; c_e = \frac{x_e}{c_e } , \ ; v_n=\frac{r t}{f } \text { and } { \bar{v}}:=\frac{v}{v_n } \end{aligned}\]]define as the voltage for which of equation ( [ eq : v_1 ] ) is zero : using equations ( [ eq : vv_0 ] ) and , equation becomes : where is given by the quantity is known as the _ ussing flux ratio _ .the ionic flow ( ion current ) model of equation will be used in the sequel to construct a bond graph model from the hodgkin huxley model . model an ion channel according to ohm s law i.e. as a linear conductance , modulated by a function in series with the nernst potential represented by a voltage source ( see figure [ fig : hh ] ) .[ subfig : hh ] where is the gating function , which is a dynamic function of membrane potential . 
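a small numerical sketch of the ionic-flow law derived above, with the gating term exp(a^g(v)) set to 1 and kappa absorbing the remaining constants; the potassium-like concentrations and the temperature are illustrative assumptions, not values from the paper:

```python
import numpy as np

R, T, F = 8.314, 310.0, 96485.0      # J/(mol K), K, C/mol
V_N = R * T / F                       # thermal voltage v_n = RT/F (volts)

def nernst(c_i, c_e, z=1):
    """membrane voltage at which the net flow vanishes (nernst potential, volts)."""
    return -(V_N / z) * np.log(c_i / c_e)

def flow(V, c_i, c_e, kappa=1.0, z=1):
    """mass-action ionic flow ~ kappa * (c_i * exp(z V / V_N) - c_e), gating term omitted."""
    return kappa * (c_i * np.exp(z * V / V_N) - c_e)

c_i, c_e = 140.0, 5.0                 # illustrative potassium-like concentrations (mM)
E = nernst(c_i, c_e)
print(f"nernst potential : {E * 1e3:6.1f} mV")
print(f"flow at E        : {flow(E, c_i, c_e):+.2e}   (vanishes at the nernst potential)")
print(f"flow at -60 mV   : {flow(-0.060, c_i, c_e):+.2e}")
print(f"flow at   0 mV   : {flow(0.0, c_i, c_e):+.2e}")
```

the sign convention matches the text: the flow is zero exactly when c_i exp(v/v_n) = c_e, i.e. at the nernst potential, and changes sign on either side of it.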
in terms of ionic flow , equation becomes , using : it is easy to see that our model contains the hodgkin - huxley model by a suitable choice of : , equation ( [ eq : g_hh ] ) only makes sense if is positive for all .if , both and are negative ; if , both and are positive ; and , as , is positive for for all .a number of alternative physically - based models for the ion channel are available .in particular , the goldman - hodgkin - katz ( ghk ) model ( see ( * ? ? ?* ( 2.123 ) ) , ) & can be rewritten in a similar form to ( [ eq : v_mass - action ] ) as : comparing equations ( [ eq : v_mass - action ] ) and ( [ eq : ghk ] ) , it follows that the mass action model and ghk model are the same if the model - dependent function is : note that is of the same form as except that is replaced by . from equations and ,both the hh and ghk ion channel models give zero ionic flow when the membrane voltage equals the nernst voltage : that is the models match at .moreover , the ghk model of equation has a parameter that can be chosen to fit the data . in this case , is chosen so that the ghk and hh models also match at another voltage ; in this case chosen as minus the nernst voltage .figure [ fig : comparison ] shows the ionic currents plotted against membrane voltage for each of the three channels and they match at the two voltages .the ghk model is used in the sequel . .ghkparameter [ cols="<,<,<,<",options="header " , ] we have extended our earlier work on energy - based bond graph modelling of biochemical systems to encompass chemoelectrical systems . in particular , we have introduced the electrogenic capacitor and the electrostoichiometric transformer to bridge the chemical and electrical domains . as a particular example illustrating the general approach , we have constructed a bond graph model of the model of the axon ; and we have used this model to show that calculation of energy consumption during generation of the action potential by counting ions crossing the membrane underestimates true energy consumption by around 20% . in this particular situation , the concentrations are constant and thus equation is appropriate .moreover , the contribution of the leakage and the gating currents are small and can be neglected .the values of table [ tab : conc ] and equation gives the molar free energy values of table [ tab : res ] for , ; that for atp is taken from ( * ? ? ?* ( 1.23 ) ) .because the actual energy consumption depends on both the amounts of , as well as on the internal and external concentrations , there is no way that the atp - proxy formula ( based on only the amount of ) can give the correct value under all circumstances . 
to illustrate this , figure [ fig :vary ] shows how actual and atp - proxy energy varies with internal concentration expressed as a ratio to the nominal concentration of table [ tab : conc ] .the discrepancy between actual and atp - proxy energy varies with .moreover , the method of this paper does _ not _ require the concentrations to be constant during an action potential and is thus applicable to more general situations .the wider significance of our approach is that it provides a framework within which biophysically based models are robustly thermodynamically compliant , as required for example when considering the energetic costs and consequences of cellular biological processes .furthermore , the bond graph approach provides a basis for modular modelling of large , multi - domain electro - chemical biological systems , such as is now commonplace in systems biology models of excitable membranes in the neuronal and cardiac contexts .components and modules which are represented as bond graphs are physically plausible models which obey the basic principles of thermodynamics , and therefore larger models constructed from such modules will also consequentially be physically plausible models .future work will further develop these concepts in order to represent ligand - gated ion channels , ion pumps ( such as the pump serca and the pump , as are required for current generation neuronal and cardiac cell models .this modular approach allows simpler modules to be replaced by more complex modules , or empirical modules to be replaced by physically - based modules , as the underlying science advances .furthermore , the multi - domain nature of bond graphs makes possible extension of the approach to mechano - chemical transduction . in actively contracting cardiac muscle , for example , energetic considerations are dominated by force production , where approximately 7580% of atp consumption in cardiomyocytes over a cardiac cycle is due to formation of contractile cross - bridges , 510% due to the pump , and extrusion and uptake into stores accounting for the remainder . in cardiac muscle, energetics is known to play a critical role in the health of cardiac muscle , with many studies implicating energetic imbalance or inadequacy of energy production in heart disease .models which provide a mechanism with which to assess the energetic aspects of cell function are therefore much needed . 
combining metabolism ,electro - chemical and chemo - mechanical energy transduction to examine energy flows within the heart is therefore a major goal of our work .conservation of mass and energy have been used by and to examine the dynamics of seizure and spreading depression .it would be interesting to reexamine this work in the more general context of bond graph modelling .the atp - proxy approach is based on assuming that the biological entity is operating in a normal state and therefore could lead to misleading conclusions in a pathophysiological state .in contrast , our approach makes no assumption of normality and may be expected to be of use in pathophysiological states in general and , in particular , the retinal example discussed in this paper .our use of the rd1 degenerate retina mouse model ensures that the outcomes of this project are directly relevant to human patients since rd1 mice have a degenerate retina that has distinct similarities to that observed in human patients with _retinitis pigmentosa _ a set of hereditary retinal diseases that results from the degenerative loss of the photoreceptors in the retina .it has been proposed that the death of rod photoreceptors results in decreased oxygen consumption .in addition , it has been shown that potassium channel - opening agents directly affect mitochondria .therefore , it is important to understand how energy consumption in degenerative retina is altered . the proposed methodology allows a comparison between the energy consumption in healthy and degenerate mice , even when the differences in action potentials between the two types are small .the modularity of the bond graph approach allows the action potential models of this paper to be combined with models of the various trans - membrane pumps and transporters to give a thermodynamically correct model of atp consumption .it would therefore be interesting to reexamine the optimality arguments of and using this approach .future work will combine the chemoelectrical bond graph models of this paper with the bond graph models of anaerobic metabolism developed by , and bond graph models of aerobic metabolism currently under development , to give integrated models of neuronal energy transduction suitable for investigating neuronal dysfunctions such as parkinson s disease .peter gawthrop would like to thank the melbourne school of engineering for its support via a professorial fellowship .this research was in part conducted and funded by the australian research council centre of excellence in convergent bio - nano science and technology ( project number ce140100036 ) .the theory was developed by pjg , is and ec .the theoretical part of the paper was written by pjg , is and ec and the experimental part by tk .experiments were conducted by ss and mi .a virtual reference environment is available for this paper at https://github.com/uomsystemsbiology/energetic_cost_reference_environment .45 natexlab#1#1url # 1`#1`urlprefix atkins , p. , de paula , j. , 2011 .physical chemistry for the life sciences , 2nd edition .oxford university press .beal , m. f. , 1992 .does impairment of energy metabolism result in excitotoxic neuronal death in neurodegenerative illnesses ?annals of neurology 31 ( 2 ) , 119130 .borutzky , w. , 2011 .bond graph modelling of engineering systems : theory , applications and software support .springer .carter , b. c. , bean , b. p. 
, 2009 .sodium entry during action potentials of mammalian neurons : incomplete inactivation and reduced metabolic efficiency in fast - spiking neurons .neuron 64 ( 6 ) , 898 909 .cellier , f. e. , 1991 . continuous system modelling .springer - verlag .cloutier , m. , bolger , fiachra , b. , lowry , john , p. , wellstead , p. , 2009 .an integrative dynamic model of brain energy metabolism using in vivo neurochemical measurements .journal of computational neuroscience 27 ( 3 ) , 391414 .cloutier , m. , middleton , r. , wellstead , p. , june 2012 .feedback motif for the pathogenesis of parkinson s disease .systems biology , iet 6 ( 3 ) , 8693 . gawthrop , p. j. , bevan , g. p. , april 2007 .bond - graph modeling : a tutorial introduction for control engineers .ieee control systems magazine 27 ( 2 ) , 2445 .gawthrop , p. j. , crampin , e. j. , 2014 .energy - based analysis of biochemical cycles using bond graphs .proceedings of the royal society a : mathematical , physical and engineering science 470 ( 2171 ) , 125 , available at arxiv:1406.2447 .gawthrop , p. j. , crampin , e. j. , march 2016 .modular bond - graph modelling and analysis of biomolecular systems .iet systems biology 10 , available at arxiv:1511.06482 .gawthrop , p. j. , cursons , j. , crampin , e. j. , 2015 .hierarchical bond graph modelling of biochemical networks .proceedings of the royal society a : mathematical , physical and engineering sciences 471 ( 2184 ) , 123 , available at arxiv:1503.01814 .gawthrop , p. j. , smith , l. p. s. , 1996 .metamodelling : bond graphs and dynamic systems .prentice hall , hemel hempstead , herts , england .greifeneder , j. , cellier , f. , 2012 . modeling chemical reactions using bond graphs . in : proceedingsicbgm12 , 10th scs intl . conf . on bond graph modeling and simulation .genoa , italy , pp .110121 .hasenstaub , a. , otte , s. , callaway , e. , sejnowski , t. j. , 2010 .metabolic cost as a unifying principle governing neuronal biophysics .proceedings of the national academy of sciences 107 ( 27 ) , 1232912334 .hille , b. , 2001 .ion channels of excitable membranes , 3rd edition .sinauer associates , sunderland , ma , usa .hodgkin , a. l. , huxley , a. f. , 1952 . a quantitative description of membrane current and its application to conduction and excitation in nerve . the journal of physiology 117 ( 4 ) , 500544 .hurley , d. g. , budden , d. m. , crampin , e. j. , 2014 .virtual reference environments : a simple way to make research reproducible .briefings in bioinformatics .kameneva , t. , meffin , h. , burkitt , a. , 2011 .modelling intrinsic electrophysiological properties of on and off retinal ganglion cells .journal of computational neuroscience 31 ( 3 ) , 547561 . karnopp , d. , 1990 .bond graph models for electrochemical energy storage : electrical , chemical and thermal effects .journal of the franklin institute 327 ( 6 ) , 983 992 .karnopp , d. c. , margolis , d. l. , rosenberg , r. c. , 2012 .system dynamics : modeling , simulation , and control of mechatronic systems , 5th edition .john wiley & sons .katz , a. m. , 2011 .physiology of the heart , 5th edition .lippincott williams and wilkins , philadelphia .keener , j. p. , sneyd , j. , 2009 . mathematical physiology : i : cellular physiology , 2nd edition .vol . 1 .springer .koch , c. , 2004 .biophysics of computation : information processing in single neurons .oxford university press , oxford .kulawiak , b. , kudin , a. p. , szewczyk , a. , kunz , w. s. 
, 2008 .bk channel openers inhibit ros production of isolated rat brain mitochondria . experimental neurology 212 ( 2 ) , 543 547 .mukherjee , a. , karmaker , r. , samantaray , a. k. , 2006 .bond graph in modeling , simulation and fault indentification . i.k .international , new delhi , .neubauer , s. , 2007 .the failing heart an engine out of fuel .new england journal of medicine 356 ( 11 ) , 11401151 .niven , j. e. , laughlin , s. b. , 2008 .energy limitation as a selective pressure on the evolution of sensory systems .journal of experimental biology 211 ( 11 ) , 17921804 .oster , g. , perelson , a. , katchalsky , a. , december 1971 .network thermodynamics .nature 234 , 393399 .oster , g. f. , perelson , a. s. , katchalsky , a. , 1973 .network thermodynamics : dynamic modelling of biophysical systems .quarterly reviews of biophysics 6 ( 01 ) , 1134 .paynter , h. m. , 1961 .analysis and design of engineering systems .mit press , cambridge , mass .sengupta , b. , stemmler , m. , may 2014 .power consumption during neuronal computation .proceedings of the ieee 102 ( 5 ) , 738750 .sengupta , b. , stemmler , m. , laughlin , s. b. , niven , j. e. , 07 2010 .action potential energy efficiency varies among neuron types in vertebrates and invertebrates .plos comput biol 6 ( 7 ) , e1000840 .sengupta , b. , stemmler , m. b. , friston , k. j. , 07 2013 .information and efficiency in the nervous system a synthesis .plos comput biol 9 ( 7 ) , e1003157 .shen , j. , yang , x. , dong , a. , petters , r. m. , peng , y .- w . , wong , f. , campochiaro , p. a. , 2005 .oxidative damage is a potential cause of cone cell death in retinitis pigmentosa .journal of cellular physiology 203 ( 3 ) , 457464 . smith , n. , crampin , e. , 2004 .development of models of active ion transport for whole - cell modelling : cardiac sodium - potassium pump as a case study .progress in biophysics and molecular biology 85 ( 2 - 3 ) , 387 405 .sterratt , d. , graham , b. , gillies , a. , willshaw , d. , 2011 .principles of computational modelling in neuroscience .cambridge university press .terkildsen , j. r. , niederer , s. , crampin , e. j. , hunter , p. , smith , n. p. , 2008 . using physiome standards to couple cellular functions for rat cardiac excitation - contraction .experimental physiology 93 ( 7 ) , 919929 .tran , k. , loiselle , d. s. , crampin , e. j. , 2015 .regulation of cardiac cellular bioenergetics : mechanisms and consequences .physiological reports 3 ( 7 ) , e12464 .tran , k. , smith , n. p. , loiselle , d. s. , crampin , e. j. , 2009 .a thermodynamic model of the cardiac sarcoplasmic / endoplasmic ca2 + ( serca ) pump .biophysical journal 96 ( 5 ) , 2029 2042 .ullah , g. , wei , y. , dahlem , m. a. , wechselberger , m. , schiff , s. j. , 08 2015 .the role of cell volume in the dynamics of seizure , spreading depression , and anoxic depolarization .plos comput biol 11 ( 8) , e1004414 .van rysselberghe , p. , 1958 .reaction rates and affinities . the journal of chemical physics 29 ( 3 ) , 640642 .wei , y. , ullah , g. , schiff , s. j. , 2014 .unification of neuronal spikes , seizures , and spreading depression .the journal of neuroscience 34 ( 35 ) , 1173311743 .wellstead , p. , 2012 . a new look at disease : parkinson s through the eyes of an engineer .control systems principles , stockport , uk .wellstead , p. , cloutier , m. , 2011 .an energy systems approach to parkinson s disease .wiley interdisciplinary reviews : systems biology and medicine 3 ( 1 ) , 16 .wellstead , p. , cloutier , m. ( eds . 
) , 2012 .systems biology of parkinson s disease .springer new york ._ in vitro _data was collected and analysed from retinal ganglion cells ( rgcs ) of wild - type ( wt ) ( n=8 ) and degenerative rd1 ( n=6 ) mice 4 - 4.5 month old .experimental procedures were approved by the animal welfare committee at the university of melbourne and are in accordance with local and national guidelines for animal care .animals were housed in temperature - regulated facilities on a 12h light / dark cycle in the animal house and had plentiful access to food and water .neither wt nor rd1 mice were dark adapted for these experiments .retinae from wt and rd1 mice were treated identically .mice were anaesthetised with simultaneous ketamine ( 67 mg / kg ) and xylazine ( 13 mg / kg ) injections , the eyes were enucleated and then the mice were killed by cervical dislocation .their eyes were bathed in carbogenated ( 95 o2 and 5 co2 ) ames medium ( sigma - aldrich , st .louis , mo ) , hemisected at the ora serata , and the cornea and lens were removed .the retina was continuously superfused with carbogenated ames medium at a rate of 4 - 8 ml / min .all of the procedures were performed at room temperature and in normal room light .the flat - mount retina was viewed through the microscope with the use of nomarski dic optics and also on a video monitor with additional 4x magnification using a ccd camera ( ikegami , icd-48e ) .for whole cell recording , a small opening was first made with a sharp tip pipette ( resistance above 14 m ) through the inner limiting membrane and optic fibre layer that covered a selected retinal ganglion cell .prior to recording , the pipette voltage in the bath was nullified .the pipette series resistance was measured and compensated for using standard amplifier circuitry ( sec-05x ; npi electronic instruments ) .pipette resistance was in the range of 7 - 14 m for all experiments .membrane potentials were amplified ( as above with sec-05x , npi ) and digitised at 50 khz ( usb-6221 , national instruments ) , acquired and stored in digital form by custom software developed in matlab ( mathworks ) . to calculate the energy consumption in wt and rd1 rgcs, hodgkin - huxley - type model parameters were fitted to the experimental data described above .experimentally recorded maximum amplitude and width of spontaneous action potentials in two groups were averaged and used for model constraints .a variety of voltage - gated ionic currents in rgcs have been identified experimentally : a calcium current ( ) , three types of potassium currents ( a - type ( ) , ca - activated ( ) , and delayed rectifier ( ) ) , t - type ca ( ) , hyperpolarization - activated ( ) and leakage ( ) currents .the equation governing the membrane potential , , was obtained by summing all membrane currents using kirchoff s law , where a specific membrane capacitance .the dynamics of each voltage - gated ionic current are governed by hodgkin - huxley - type gating variables , which are described by first - order kinetic equations as given in , but are omitted here for brevity . in this study , we sought to account for the differences in the energy consumption between wt and rd1 rgcs on the basis of differences in the magnitudes of the maximal conductance of sodium and potassium currents , and respectively . while all other parameters were kept fixed , and were systematically varied in the range [ in variable steps ( higher resolution for smaller values ) . 
for conservative calculation of the difference in the energy consumption ,the smallest difference between and in wt and rd1 types that replicated experimental data is reported here .model parameters used in simulations are given in table 5 ; , are maximum conductance and reversal potential of the current `` i '' . and on the intracellular ca concentration . at the moment ,no experimental data is available on the properties of gating parameters for rgs in rd1 mice . due to this ,the same gating parameters were used to model action potential in healthy and degenerative tissue .a single - compartment hodgkin - huxley type neurons was simulated in neuron .the standard euler integration method was used in simulations .data was analysed in matlab ( mathworks ) .l l + c & f/ + mv & is varied in simulations + is variable , refer to & s/ + mv & is varied in simulations + & s/ + & is variable , refer to + mv & s/ + mv & s/ + mv & s/ + the following simulation parameters reproduced the maximum amplitude and width of the experimentally recorded action potentials in wt and rd1 mice and gave the smallest difference in values between the two retina types .parameters for wt : s/ , s/ .parameters for rd1 : s/ , s/ .
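the rgc-specific gating functions and fitted conductances are only referenced above, so the sketch below uses the classic squid-axon hodgkin-huxley parameters instead; it illustrates the single-compartment, forward-euler scheme described in the methods and also accumulates the inward sodium charge on which the atp-proxy energy argument of the main text is based (all values are illustrative assumptions, not the fitted wt or rd1 parameters):

```python
import numpy as np

# single-compartment hodgkin-huxley-type neuron, forward euler integration.
# classic squid-axon parameters, used purely for illustration.
C_m = 1.0                              # uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4    # mV

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, t_end, I_stim = 0.01, 50.0, 10.0   # ms, ms, uA/cm^2
V, m, h, n = -65.0, 0.05, 0.6, 0.32
na_charge = 0.0                         # integrated inward na+ charge (the atp-proxy quantity)
spikes, above = 0, False
for _ in range(int(t_end / dt)):
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K  * n**4  * (V - E_K)
    I_L  = g_L  * (V - E_L)
    V += dt * (I_stim - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    na_charge += dt * max(-I_Na, 0.0)   # uA/cm^2 * ms = nC/cm^2
    if V > 0 and not above:
        spikes += 1
    above = V > 0
print(f"spikes in {t_end:.0f} ms: {spikes},  inward na+ charge: {na_charge:.0f} nC/cm^2")
```

counting the accumulated sodium charge and converting it to pump cycles is precisely the stoichiometric proxy discussed in the main text; the bond graph model instead tracks the associated energy flows directly.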
energy-based bond graph modelling of biomolecular systems is extended to include chemoelectrical transduction , thus enabling integrated , thermodynamically compliant modelling of chemoelectrical systems in general and excitable membranes in particular . our general approach is illustrated by recreating a well-known model of an excitable membrane . this model is used to investigate the energy consumed during a membrane action potential , thus contributing to the current debate on the trade-off between the speed of an action potential event and energy consumption . the influx of is often taken as a proxy for energy consumption ; in contrast , this paper presents an energy-based model of action potentials . as the energy-based approach avoids the assumptions underlying the proxy approach , it can be used directly to compute energy consumption in both healthy and diseased neurons . these results are illustrated by comparing the energy consumption of healthy and degenerative retinal ganglion cells using both simulated and _in vitro_ data . keywords : biomolecular systems ; energy-based modelling ; excitable membranes ; retinal ganglion cells .
the shape of a set of points , the shape of a signal , the shape of a surface , or the shapes in an image can be defined as follows : the remainder after we have filtered out the position and the orientation of the object . statistics on shapes appear in many fields .paleontologists combine shape analysis of monkey skulls with ecological and biogeographic data to understand how the _ skull shapes _ have changed in space and time during evolution .molecular biologists study how _ shapes of proteins _ are related to their function .statistics on misfolding of proteins is used to understand diseases , like parkinson s disease .orthopaedic surgeons analyze _bones shapes _ for surgical pre - planning . in signal processing ,the _ shape of neural spike trains _ correlates with arm movement . in computer vision , classifying _ shapes of handwritten digits _ enables automatic reading of texts . in medical imaging and more precisely in neuroimaging , studying _ brain shapes _ as they appear in the mris facilitates discoveries on diseases , like alzheimer .what do these applications have in common ?position and orientation of the skulls , proteins , bones , neural spike trains , handwritten digits or brains do not matter for the study s goal : only _ shapes _ matter .mathematically , the study analyses the statistical distributions of _ the equivalence classes of the data _ under translations and rotations .they project the data in a quotient space , called the _ shape space_. the simplest - and most widely used - method for summarizing shapes is the computation of the mean shape .almost all neuroimaging studies start with the computation of the mean brain shape for example .one refers to the mean shape with different terms depending on the field : mean configuration , mean pattern , template , atlas , etc .the mean shape is an average of _ equivalence classes of the data _ : one computes the mean after projection of the data in the shape space .one may wonder if the projection biases the statistical procedure .this is a legitimate question as any bias introduced with this step would make the conclusions of the study less accurate .if the mean brain shape is biased , then neuroimaging s inferences on brain diseases will be too .this paper shows that a bias is indeed introduced for the mean shape estimation under certain conditions .we review works on the shape space s geometry as a quotient space , and existing results on the mean shape s bias .* shapes of landmarks : kendall analyses * the theory for shapes _ of landmarks _ is introduced by kendall in the 1980 s .he considers shapes of labeled landmarks in .the size - and - shape space , written , takes also into account the overall size of the landmarks set . 
the shape space , written , quotients by the size as well .both and have a riemannian geometry , whose metrics are given in .these studies model the probability distribution of the data directly in the shape space .they do not consider that the data are observed in the space of landmarks and projected in the shape space .the question of the bias is not raised .* shapes of landmarks : procrustean analyses * procrustean analysis is related to kendall shape spaces but it also considers shapes of landmarks .kendall analyses project the data in the shape space by explicitly computing their coordinates in .in contrast , procrustean analyses keep the coordinates in : they project the data in the shape space by `` aligning '' or `` registering '' them .orthogonal procrustes analysis `` aligns '' the sets of landmarks by rotating each set to minimize the euclidean distance to the other sets . procrustean analysis considers the fact that the data are observed in the space but does not consider the geometry of the shape space . the bias on the mean shape is shown in with a reducto ad absurdum proof .but there is no geometric intuition given about how to control or correct the phenomenon .* shapes of curves * the curve data are projected in their shape space by an alignment step , in the spirit of a procrustean analysis .the bias of the mean shape is discussed in the literature .unbiasedness was shown for shapes of signals in but under the simplifying assumption of no measurement error on the data .some authors provide examples of bias when there is measurement error .their experiments show that the mean signal shape may converge to pure noise when the measurement error on simulated signals increases .the bias is proven in for curves estimated from a finite number of points in the presence of error .but again , no geometric intuition nor correction strategy is given .we are missing a global geometric understanding of the bias . which variables control its magnitude ?is it restricted to the mean shape or does it appear for other statistical analyses ?how important is it in practice : do we even need to correct it ?if so , how can we correct it ?our paper is addressing these questions .we use a geometric framework that unifies the cases of landmarks , curves , images etc .[ [ contributions ] ] contributions + + + + + + + + + + + + + we make three contributions .first , we show that statistics on shapes are biased when the data are measured with error .we explicitly compute the bias in the case of the mean shape .second , we offer an interpretation of the bias through the geometry of the shape space . in applications ,this aids in deciding when the bias can be neglected in contrast with situations when it must be corrected .third , we leverage our understanding to suggest several correction approaches .[ [ outline ] ] outline + + + + + + + the paper has four sections .section 1 introduces the geometric framework of shape spaces .section 2 presents our first two contributions : the proof and geometric interpretation of the bias .section 3 describes our third contribution : the procedures to correct the bias .section 4 validates and illustrates our results on synthetic and real data .we introduce two simple examples of shape spaces . we will refer to them constantly to provide intuition . 
first , we consider two landmarks in the plane ( figure [ fig : simple ] ( a ) ) .the landmarks are parameterized each with 2 coordinates .for simplicity we consider that one landmark is fixed at the origin on .thus the system is now parameterized by the 2 coordinates of the second landmark only , e.g. in polar coordinates .we are interested in the shape of the 2 landmarks , i.e. in their distance which is simply .second , we consider two landmarks on the sphere ( figure [ fig : simple ] ( b ) ) .one of the landmark is fixed at the origin of .the system is now parameterized by the 2 coordinates of the second landmark only , i.e. . the shape of the two landmarks is the angle between them and is simply .the data are objects that are either sets of landmarks , curves , images , etc .we consider that each object is a point in a riemannian manifold .we restrict in this paper to finite dimensional manifolds in order to avoid complications .we have in the plane example : a flat manifold of dimension 2 .we have in the sphere example : a manifold of constant positive curvature and of dimension 2 . by definition ,the objects shapes are their equivalence classes \}_{i=1}^n ] in the sphere example .this is the space of all possible angles between the two landmarks , see figure [ fig : leadingcases1 ] ( f ) .we consider that the action of on is _ isometric with respect to the riemannian metric of . this implies that the distance between two objects in does not change if we transform both objects in the same manner . in the plane example , rotating the landmark and another landmark with the same angle does not change the distance between them .the distance in induces a quasi - distance in : .the distance between the shapes of and is computed by first registering / aligning onto by the mean of , and then using the distance in the ambient space . in the plane example, the distance between two shapes is the difference in distances between the landmarks .one can compute it by first aligning the landmarks , say on the first axis of .then , one uses the distance in . both object space and shape space are stratified because of the notion of isotropy group .the _ isotropy group of _ is the subgroup of transformations of that leave invariant .for the plane example , every has isotropy group the identity and has isotropy group the whole group of 2d rotations .objects on the same orbit , i.e. objects that have the same shape , have conjugate isotropy groups .the _ orbit type _ of an orbit is the corresponding conjugate class . _principal shapes _ are shapes with smallest isotropy group conjugation class . in the plane example, is the set of objects with principal shapes .it corresponds to in the shape space and is colored in blue on figure [ fig : leadingcases1 ] ( c ) ._ singular shapes _ are shapes with larger isotropy group conjugation class . in the plane example, is the only object with singular shape .it corresponds to in and is colored in red in figure [ fig : leadingcases1 ] ( c ) .the ( connected components of ) the _ orbit types _ form a stratification of , called the _ orbit - type stratification of . 
the principal type is predominant in the following sense : the set of principal strata , which we denote , is open and dense in .this means that there are objects with non - degenerated shapes almost everywhere .the stratification of into orbit types strata gives a stratification of the shape space .also , non - degenerated shapes are dense in the shape space .we have focused on an intuitive introduction of the concepts .we refer to for mathematical details . from now on ,the mathematical setting is the following : we assume a proper , effective and isometric action of a finite dimensional lie group on a finite dimensional riemannian manifold .we recall that the data are the that are sets of landmarks , curves , images , etc .we interpret the data s as random realizations of the generative model : where the observed object is a shape with a given position or parameterization and observed with noise . hereexp(p , u ) denotes the riemannian exponential of at point .the are themselves i.i.d .realizations of random variables . drawing themlead to the following three step interpretation of the generative model [ eq : genmodel ] .[ [ step-1-generate - the - shape - y_i - in - mg ] ] step 1 : generate the shape + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we assume that there is an probability density of shapes in , with respect to the measure on induced by the riemannian measure of .the s are i.i.d .samples drawn from this distribution .for example , it can be a gaussian as illustrated in figure [ fig : genmodel1 ] on the shape spaces for the plane and sphere examples .this is the variability that is meaningful for the statistical study , whether we are analyzing shapes of skulls , proteins , bones , neural spike trains , handwritten digits or brains .we assume in this paper that the distribution is simply a dirac at which we call the _ template shape_. this is the most common assumption in these generative models .[ [ step-2-generate - its - positionparameterization - g_i - in - g ] ] step 2 : generate its position / parameterization + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we can not observe shapes in .we rather observe objects in , that are shapes posed or parameterized in a certain way .we assume that there is a probability distribution on the positions or parameterizations of , or equivalently a probability distribution on principal orbits with respect to their intrinsic measure .we assume that the distribution does not depend on the shape that has been drawn .the s are i.i.d . from this distribution .for example , it can be a gaussian as illustrated in figure [ fig : genmodel2 ] on the shape spaces for the plane and sphere examples .[ [ step-3-generate - the - noise - epsilon_i - in - t_g_i - cdot - y_im ] ] step 3 : generate the noise + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the observed s are results of noisy measurements .we assume that there is a probability distribution function on representing the noise .we further assume that this is a gaussian centered at , the origin of the tangent space , and with standard deviation , see figures [ fig : genmodel3 ]. 
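for the plane example (one free landmark in r^2, with rotations as the group), the three generation steps can be sketched in a few lines of python; the template radius, noise level and sample size are arbitrary illustrative choices, and in this flat case the tangent-space noise of step 3 reduces to ordinary additive gaussian noise:

```python
import numpy as np
rng = np.random.default_rng(0)

def simulate(template=1.0, sigma=0.3, n=5):
    """generative model for the plane example: x_i = g_i . y + eps_i."""
    samples = []
    for _ in range(n):
        theta = rng.uniform(0.0, 2.0 * np.pi)                        # step 2: pose g_i in SO(2)
        posed = template * np.array([np.cos(theta), np.sin(theta)])  # step 1: dirac at the template shape y
        samples.append(posed + sigma * rng.standard_normal(2))       # step 3: gaussian noise of std sigma
    return np.array(samples)

print(simulate())
```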
the parameter will be extremely important in the developments of section , as we will compute taylor expansions around .other generative models may be considered in the literature .we find in the model : and in the model : .our goal is to unveil the variability _ of shapes in _ while we in fact observe the noisy _ objects s in . first , we focus on the case where the variability in the shape space is assumed to be a dirac at ( step 1 of generative model ) .our goal is thus to estimate the template shape .one may consider the maximum likelihood estimate of : we have hidden variables , the s .the expectation - maximization ( em ) algorithm would be the natural implementation for computing the ml estimator . but the em algorithm is computationally expensive , above all for tridimensional images .thus , one usually relies on another procedure that is an approximation of the em . .[ [ estimating - the - template - shape - with - the - frchet - mean - in - the - shape - space ] ] estimating the template shape with the frchet mean in the shape space + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + one initializes the estimate with .then , one iterates the following two steps until convergence : ( 1 ) is an estimation of the hidden observations and an approximation of the e - step of the em algorithm .( 2 ) is the m - step of the em algorithm : the maximization of the surrogate in the m - step amounts to the maximization of the variance of the projected data .this is exactly the minimization of the squared distances to the data of ( 2 ) .the procedure converges because it decreases at each step a cost bounded below by zero .the estimator computed with this procedure is : the term in equation [ eq : frechet ] is the distance in the shape space between the shapes of and .thus , we recognize in equation [ eq : frechet ] the frchet mean on the shape space .the frchet mean is a definition of mean on manifolds : it is the point that minimizes the squared distances to the data in the shape space .all in all , one projects the probability distribution function of the s from to and computes its `` expectation '' , in a sense made precise later .we illustrate the procedure with the examples of the plane and the sphere .we take , , three objects in in figure [ fig : estimator ] ( a ) and on in figure [ fig : estimator ] ( b ) .step ( 1 ) is the registration / alignment step .one filters out the position / parameterization component , i.e. 
the coordinate on the orbit .one projects the objects , , in the shape space using the blue arrows .step ( 2 ) is the computation of the frchet mean of the registered data .we implemented the generative model and the estimation procedure on the plane and the sphere in shiny applications available online : https://nmiolane.shinyapps.io/shinyplane and https://nmiolane.shinyapps.io/shinysphere .we invite the reader to look at the web pages and play with the different parameters of the generative model .figure [ fig : shinyplane1 ] shows screen shots of the applications .[ fig : shinyplane1 ] our main result is to show that this procedure gives an inconsistent estimate of the template shape of the generative model .the estimator converges when the number of data goes to infinity .however it has an asymptotic bias with respect to the parameter it is designed to estimate : ] for linear spaces .we could also consider the variance of the estimator .the variance is defined as )^2] ] for the sphere example .the green vertical bar represents the template shape , which is 1 in both cases .the red vertical bar is the expectation of in each case .it is , the estimate of we see on these plots that is not centered at the template shape : the green and red bars do not coincide . is skewed away from 0 in the plane example and away from and in the sphere example .the skew increases with the noise level .the difference between the green and red bars is precisely the bias of with respect to .figure [ fig : bias ] shows the bias of with respect to , as a function of , for the plane ( left ) and the sphere ( right ) .increasing the noise level takes the estimate away from .the estimate is repulsed from in the plane example : it goes to when .it is repulsed from and in the sphere example : it goes to when .one can show numerically that the bias varies as around in both cases .this is also observed on the shiny applications at https://nmiolane.shinyapps.io/shinyplane and https://nmiolane.shinyapps.io/shinysphere .these examples already show the origin of the asymptotic bias of . _the bias comes from the curvature of the template s orbit ._ figure [ fig : curvature ] shows the template s orbit in blue , in ( a ) for the plane and ( b ) for the sphere . in both casesthe black circle represents the level set of the gaussian noise .the probability of generating an observation outside of the template s shape orbit is bigger than the probability of generating it inside : the grey area in the black circle is bigger than the white area in the white circle .there will be more registered data that are greater than the template .their expected will therefore be greater than the template and thus biased .we prove this in the general case in the next section .we show the asymptotic bias of in the general case and prove that it comes from the external curvature of the template s orbit . we show it for a principal shape and for a gaussian noise of variance , truncated at .our results will need the following definitions of curvature .the _ second fundamental form _ of a submanifold of is defined on by , where denotes the orthogonal projection of covariant derivative onto the normal bundle .mean curvature vector of _ is defined as : . 
intuitively , and are measures of extrinsic curvature of in .for example an hypersphere of radius in has mean curvature vector .[ th : pdf ] the probability distribution function on the shapes induced by the generative model is : here is the riemannian logarithm ] at the template shape , is the second fundamental form of the orbit of , is the ricci curvature , are constants independent of and .the proof is given in appendix [ app : proofpdf ] . the exponential in the expression of belongs to a gaussian distribution centered at .this is a riemannian gaussian centered at the template shape , because are coordinates at the tangent space of at .however the whole distribution differs from the gaussian because of the -dependent term in the right parenthesis .this induces a skew of the distribution away from the singular shapes , as observed for the examples in figure [ fig : pdfs ] .[ th : bias ] the asymptotic bias of the template s shape estimation writes : here is the mean curvature vector of the template shape s orbit and the variance of the noise on the objects . the proof is given in appendix [ app : proofbias ] .this generalizes the quadratic behavior observed in the examples on figure [ fig : bias ] .the asymptotic bias has a geometric origin : it comes from the external curvature of the template s orbits , see figure [ fig : curvature ] .we can vary two parameters in equation [ eq : bias ] : and .the external curvature of orbits generally increases when is closer to a singularity of the shape space ( see section 1 ) .the singular shape of the two landmarks in arises when their distance is 0 . in this case, the mean curvature vector is : it is inversely proportional to , the radius of the orbit . is also the distance of to the singularity .[ [ beyond - y - being - a - principal - shape ] ] beyond being a principal shape + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + our results are valid when the template is a principal shape .this is a reasonable assumption as the set of principal shapes is dense in the shape space .what happens when approaches a singularity , i.e. when changes stratum in the stratified space ?taking the limit in the coefficients of the taylor expansion is not a legal operation .therefore , we can not conclude on the taylor expansion of the bias for .indeed , the taylor expansion may even change order for .we take with the action of and the template : the bias is linear in in this case .[ [ beyond - sigma-1 ] ] beyond ++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the assumption is reasonable as we hope that the noise on the data is not too large. nevertheless it would be very interesting to study the asymptotic bias for any , including large noises ( ) .the distribution over the s in will be spread on the whole manifold .we can not rely on local computations on ( at the scale of ) anymore .we have to make global assumptions on the manifold .the plane example is the canonical example of a flat manifold .the sphere example is the canonical example of manifold with constant ( positive ) curvature .the bias as a function of is plotted in figure [ fig : bias ] .it leads us to the conjecture that the estimate converges towards a barycenter of shape space s singularities when the noise level increases .singularities have a repulsive action on the estimation of each template s shape .such repulsive force acts on each estimators . 
as a result ,the estimators of the mean shape finds an equilibrium position : the barycenter .[ [ beyond - one - dirac - in - q - several - templates ] ] beyond one dirac in : several templates + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we have considered so far that there is a unique template shape : the generative model has a dirac distribution at in the shape space .what happens for other distributions ?we assume that there are template shapes .observations are generated in from each template shape with the generative model of section 2 .our goal is to unveil the structure of the shape distribution , i.e. the template shapes here , given the observations in .the distributions on shapes projected on the shape space is a mixture of probability density functions of the form of equation [ eq : f ] .its modes are related to the template shapes .the k - means algorithm is a very popular method for data clustering .we study what happens if one uses k - means algorithms on shapes generated with the generative model above . the goal is to cluster the shape data in distinct and significant groups .one performs a coordinate descent algorithm on the following function : in other words , one minimizes by successively minimizing on the assignment labels s and the cluster s centers s .given the , minimizing with respect to the s is exactly the simultaneous computation of frchet means in the shape space .one looks for meaningful well separated clusters ( high inter - clusters dissimilarity ) whose members are close to each other ( high intra - cluster similarity ) . in other words ,the quality of the clustering is evaluated by the following criterion : which is the dissimilarity between clusters quotiented by the diameter of the clusters . in the absence of singularity in the shape space, the projected distribution looks like figure [ fig : kmeans ] ( a ) and . the criterion is worse in the presence of singularities .figure [ fig : kmeans ] illustrates this behavior for the plane example .we consider any two clusters and call , the estimated centroids .the criterion writes : even in the best case with correct assignments to the clusters and , the k - means algorithm looses an order of validation when computed on shapes .[ [ beyond - the - finite - dimensional - case ] ] beyond the finite dimensional case + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + our results are valid when is a finite dimensional manifold and a finite dimensional lie group .some interesting examples belong to the framework of infinite dimensional manifold with infinite dimensional lie groups .this is the case for the lddmm framework on images . it would be important to extend these results to the infinite dimensional case .we take with the action of .we have a analytic expression of in this case .figure [ fig : finitedims ] shows the influence of the dimension for the probability distribution functions on the shape space and for the bias .the bias increases with .this leads to think that it appears in infinite dimensions as well .we propose two procedures to correct the asymptotic bias on the template s estimate .they rely on the bootstrap principle , more precisely a parametric bootstrap . as such , they are directly applicable to any type of data .we assume that we know the variance from the experimental setting . 
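Before the two correction procedures are described, the k-means effect discussed above can be illustrated numerically in the best case of correct assignments: each centroid is then the fréchet mean of its own registered cluster, and both centroids are pushed away from the singular shape at the origin, the closer one more strongly, so the estimated centroids end up closer together than the templates. The template values and the noise level below are illustrative.

```python
# Sketch of the best-case k-means step on registered shapes (plane example,
# two templates, correct assignments assumed).  Values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
templates = (1.0, 2.0)        # two template shapes (distances to the origin)
sigma, n = 0.4, 200_000

centroids = []
for y0 in templates:
    data = np.array([y0, 0.0]) + sigma * rng.standard_normal((n, 2))
    centroids.append(np.linalg.norm(data, axis=1).mean())   # frechet mean

gap_true = templates[1] - templates[0]
gap_est = centroids[1] - centroids[0]
print("estimated centroids:", np.round(centroids, 3))       # both biased outward
print(f"template gap = {gap_true:.3f}   estimated gap = {gap_est:.3f}")
# the centroid closer to the singularity is repulsed more strongly, so the
# clusters look closer together than the templates really are.
```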
the first procedure is called iterative bootstrap .algorithm [ alg : iterative ] details it .figure [ fig : iterativebootstrap ] illustrates it on the plane example .algorithm [ alg : iterative ] starts with the usual template s estimate , see figure [ fig : iterativebootstrap ] ( a ) . at each iteration , we correct with a better approximation of the bias .first , we generate bootstrap data by using as the template shape of the generative model .we perform the template s estimation procedure with the frchet mean in the shape space .this gives an estimate of .the bias of with respect to is .it gives an approximation of the bias , see figure [ fig : iterativebootstrap ] ( b ) .we correct by this approximation of the bias .this gives a new estimate , see figure [ fig : iterativebootstrap ] ( c ) .we recall that the bias depends on , see theorem [ th : bias ] . is closer to the template than .thus , the next iteration gives a better approximation of .we correct the initial with this better approximation of the bias , etc .the procedure is written formally for a general manifold in algorithm [ alg : iterative ] .[ alg : iterative ] * input : * objects , noise variance + * initialization : * + \}_{i=1}^n) ] + + + + * until convergence : * + * output : * in algorithm [ alg : iterative ] , denotes the parallel transport from to . for linear spaces , , , .algorithm [ alg : iterative ] is a fixed - point iteration where : in a linear setting we have simply .one can show that is a contraction and that , the template shape , is the unique fixed point of ( using the local bijectivity of the riemannian exponential and the injectivity of the estimation procedure ) .thus the procedure converges to in the case of an infinite number of observations .figure [ fig : iterativebootstrap_fixedpoint ] illustrates the convergence for the plane example , with a gaussian noise of standard deviation .the template shape was initially estimated at .algorithm [ alg : iterative ] corrects the bias .figures [ fig : iterationsplane ] and [ fig : iterationssphere ] show the iterations of iterative bootstrap for the plane and the sphere example . the second procedure is called the nested bootstrap .algorithm [ alg : nested ] details it .figure [ fig : nestedbootstrap ] illustrates it on the plane example .algorithm [ alg : nested ] starts like algorithm [ alg : iterative ] with , see figure [ fig : nestedbootstrap ] ( a ) .it also performs a parametric bootstrap with as the template , computes the bootstrap replication and the approximation of , see figure [ fig : iterativebootstrap ] ( b ) .now algorithm [ alg : nested ] differs from algorithm [ alg : iterative ] .we want to know how biased is as an estimate of ? 
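Before continuing with the nested variant, here is a numerical sketch of algorithm [alg:iterative] on the plane example. The shape space is one-dimensional there, so the exponential, logarithm and parallel-transport steps of the general formulation reduce to ordinary addition and subtraction; the sample sizes, noise level and stopping tolerance are illustrative.

```python
# Sketch of the iterative bootstrap on the plane example (one-dimensional
# shape space, so corrections are plain subtractions).  Values illustrative.
import numpy as np

rng = np.random.default_rng(2)
sigma, n = 0.4, 5_000        # known noise level, number of observed objects
y_true = 1.0                 # true template (unknown to the procedure)

def estimate_template(template, n_obj):
    """Generate n_obj noisy objects around `template`, register them by
    taking norms and return their frechet mean (here a plain average)."""
    data = np.array([template, 0.0]) + sigma * rng.standard_normal((n_obj, 2))
    return np.linalg.norm(data, axis=1).mean()

y_hat0 = estimate_template(y_true, n)        # usual, biased estimate
y_corr = y_hat0
for it in range(10):
    # parametric bootstrap with the current corrected estimate as template
    y_boot = estimate_template(y_corr, 200 * n)
    bias_approx = y_boot - y_corr            # current approximation of the bias
    y_new = y_hat0 - bias_approx             # correct the *initial* estimate
    print(f"iteration {it}: corrected estimate = {y_new:.4f}")
    if abs(y_new - y_corr) < 1e-3:
        y_corr = y_new
        break
    y_corr = y_new

print(f"true template {y_true}, naive estimate {y_hat0:.4f}, corrected {y_corr:.4f}")
```

In this one-dimensional setting the fixed-point iteration typically stabilises in a handful of iterations, in line with the convergence behaviour reported in the text.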
this is a valid question as the bias depends on the template , see theorem [ th : bias ] .we want to estimate this dependence .we perform a bootstrap , nested in the first one , with as the template .we compute the estimate and the approximation of , see figure [ fig : iterativebootstrap ] ( c ) .we observe how far is from .this gives the blue arrow , which is the bias of as an estimate of , see figure [ fig : iterativebootstrap ] ( d ) .the blue arrow is an approximation of how far is from .we correct our estimation of the bias ( in red ) by the blue arrow .we correct by the bias - corrected estimate of its bias , see figure [ fig : iterativebootstrap ] ( e ) .[ alg : nested ] * input : * objects , noise variance + * initialization : * + \}_{i=1}^n) ] + + * nested bootstrap : * + for each : * generate bootstrap sample from * \}_{k=1}^n) ]. then we add bivariate gaussian noise on each landmark .these experiments are illustrated in figure [ fig : iterationstriangles ] .the number of iterations required for the convergence of algorithm 1 with respect to the noise level are shown in figure [ fig : iterationstriangles ] .we observe the convergence in the three experiments for less than 10 iterations .now we go to real triangle data .we have 24 images of rhesus monkeys eyes , acquired with a heidelberg retina tomograph . for each monkey ,an experimental glaucoma was introduced in one eye , while the second eye was kept as control .one seeks a significant difference between the glaucoma and the control eyes . on each image ,three anatomical landmarks were recorded : for the superior aspect of the retina , for the nose side of the retina , and for the side of the retina closest to the temporal bone of the skull .the data are matrices where the landmark coordinates form the rows . for the onh example , is the space of landmarks in 3d , and the rotations act isometrically on each object .* analysis * this simple example illustrates the estimation of the template shape .we use the following procedure to compute the mean shape for each group .we initialize with and repeat the following two steps until convergence : figure [ fig : onh ] shows the mean shapes of the control group ( left ) and of the glaucoma group ( right ) in orange , while the initial data are in grey . the difference between the two groups is quantified by the distance between their means : m .we want to determine if this analysis presents an bias that significantly changes the estimated shape difference between the groups .we use the nested bootstrap to compute an approximation of the asymptotic bias on each mean shape , for a range of noise s standard deviation in .the asymptotic bias on the template shape of the glaucoma group is and of the control group is .the corrected template shape differences are .in particular , for , we observe that the bias in the template shape are respectively for the healthy group and for the glaucoma group .this follows the rule - of - thumb : the bias is more important for the healthy group , for which the overall size is smaller than the glaucoma group , for a same noise level .the bias of the template shape estimate accounts for less than in this case , which is less than % of the shapes sizes .this computation guarantees that this study has not been significantly affected by the bias .we estimate the impact of the bias on statistics on protein shapes .a standard hypothesis in biology is that structure ( i.e. 
shape ) and function of proteins are related .fundamental research questions about protein shapes include structure prediction - given the protein amino - acid sequence , one tries to predict its structure - and design - given the shape , one tries to predict the sequence needed .one relies on experimentally determined 3d structures gathered in the protein data base ( pdb ) .they contain errors on the protein s atoms coordinates .average errors range from 0.01 to 1.76 , which is of the magnitude of the length of some covalent bonds .these values are averaged over the whole protein and in general , the main - chain atoms are better defined than the side - chain atoms or the atoms at the periphery .this is illustrated on figure where we have plot the b - factor ( related to coordinates errors ) as a colored map on the atoms for proteins of pdb - codes 1h7w and 4hbb . [ [ proteins - radius - of - gyration ] ] protein s radius of gyration + + + + + + + + + + + + + + + + + + + + + + + + + + + + a biased estimate of a protein shape has consequences for studies on proteins folding .stability and folding speed of a protein depend on both the estimated shape of the denatured state ( unfolded state ) and of the native state ( folded state ) .one may study if compact initial states yield to faster folding .the protein compactness is represented by the protein s radius of gyration , defined as : , where is the number of non - hydrogen atoms , , are resp .the coordinates of atoms and centers and their mass .note that we assume ( as it is usually the case ) that all masses for non - hydrogen atoms are equal and that hydrogen atoms have mass .error on atoms coordinates give a bias on the estimate of the radius of gyration : the radius of protein hjsj ( 85 residues ) is known to be around 10 .the error on is of % with an average error of positions on the atoms of .it is 8.6% for an error of .the error will be greater if one consider binding sites at the periphery of the proteins rather than the whole protein .indeed sites size is smaller and they have less atoms. one could think about doing clustering on radii of gyration using the k - means algorithm on shapes .the index of section [ sec : correction ] is : clustering on radii of gyration may lead to a misleading indicator . indicates that the clustering performs better that it actually does .[ [ false - positive - probability - in - proteins - motif - detection ] ] false positive probability in protein s motif detection + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the relation between a protein s shape and function is linked to its motifs , which define the supersecondary structure .motifs have biological properties : for example the helix turn helix motif responsible for the binding of dna within several prokaryotic proteins .automatic motif detection is another challenge in the study of protein shapes .we investigate the impact of bias on the false positive probability estimation in motif detection .let us consider a set of proteins each with atoms .one is interested in the motifs of atoms that can be detected in the protein s set , where .we define that represents an allowed error zone . 
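The inflation of the radius of gyration under coordinate errors discussed above can be checked with a short simulation. The coordinates below are synthetic rather than taken from a PDB entry, the error levels are illustrative, and the first-order reference mentioned in the comments (an additive 3·err² term on the squared radius under independent isotropic errors) is an approximation introduced here, not the exact expression used in the text.

```python
# Sketch: effect of coordinate errors on the radius of gyration.
# Synthetic coordinates, equal masses for all atoms, illustrative error levels.
import numpy as np

rng = np.random.default_rng(3)
n_atoms = 600
coords = rng.normal(scale=6.0, size=(n_atoms, 3))     # synthetic structure

def radius_of_gyration(x):
    centred = x - x.mean(axis=0)
    return np.sqrt((centred ** 2).sum(axis=1).mean())

rg_clean = radius_of_gyration(coords)
for err in (0.1, 0.5, 1.0):                 # per-coordinate error, same units
    noisy = coords + rng.normal(scale=err, size=coords.shape)
    rg_noisy = radius_of_gyration(noisy)
    rel = 100 * (rg_noisy - rg_clean) / rg_clean
    print(f"error={err:.1f}  Rg clean={rg_clean:.2f}  "
          f"noisy={rg_noisy:.2f}  overestimation={rel:+.2f}%")
# under independent isotropic errors, the expected squared radius is roughly
# Rg^2 + 3*err^2, so the estimate is systematically inflated.
```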
the number of detected motifs increases if : ( i ) one decreases , or ( ii ) one increases , or ( iii ) increases .thus how many detected motifs actually come from chance , with respect to the parameters , , ?the false positives probability indicates when one detects truth and when one detects noise .the usual estimate of the false positive probability is .here is the volume of the error zone allowed . is the total volume of the protein , thus the a ball of radius the radius of gyration .thus may be biased and overestimated .the probability of false positive is underestimated .we consider the example of .one tries to find motifs between the tryptophan repressor of escherichia coli ( pdb code 2wrp ) and the cro protein of phage 434 ( pdb code 2cro ) .these two proteins are known to share the helix - turn - helix motif .the radius of gyration of 2wrp is , the total volume is : .we assume an error zone that takes the form of a diagonal covariance matrix with standard deviations .we get the error zone volume : and the estimation of the false positive probability : .we find that is underestimated by using the expression of the radius of gyration s bias .we apply the rule of thumb of section [ sec : correction ] to determine when the bias needs a correction in the computation of a brain template from medical images . here and will be infinite dimensional .nevertheless we apply our results to get intuition for this application . in neuroimaging ,a template is an image representing a reference anatomy .computing the template is often the first step in medical image processing .then , the subjects anatomical shapes may be characterized by their spatial deformations _ from the template_. these deformations may serve for ( i ) a statistical analysis of the subject shapes , or ( ii ) for automated segmentation by mapping the template s segmented regions into the subject spaces . in both cases , if the template is not centered among the population , i.e. if it is biased , then the analyzes and conclusions could be biased .we are interested in highlighting the variables that control the template s bias .the framework of large deformation diffeomorphic metric mapping ( lddmm ) embeds the template estimation in our geometric setting .the lie group of diffeomorphisms acts on the space of images as follows : the isotropy group of writes : .its lie algebra consists of the infinitesimal transformations whose vector fields are parallel to the level sets of : .the orbit of is : .the `` shape space '' is by definition the space of orbits .two images that are diffeomorphic deformations of one another are in the same orbit .they correspond to the same point in the shape space .topology of an image is defined as the image s properties that are invariant by diffeomorphisms .consequently , the shape space is the space of the images topology , represented by the topology of their level sets .we get a stratification of the shape space when we gather the orbits by orbit type .a stratum is more singular than another , if it has higher orbit type , i.e. 
larger isotropy group .the manifold has an infinite stratification .one changes stratum every time there is a change in the topology of an image s level sets .singular strata are toward simpler topology .`` principal '' strata are toward a more complicated topology .indeed , the simpler the topology of the level sets is , the higher is the `` symmetry '' of the image .thus the larger is its isotropy group .note that strata with smaller isotropy group ( more detailed topology ) do not represent `` singularities '' from the point of view of a given image and do not influence the bias .in fact , such strata are at distance 0 : an infinitesimal local change in intensity can create a maximum or minimum , thus complexifying the topology . using the rule - of - thumb of section[ sec : correction ] , the template s bias depends on its distance to the next singularity , at the scale of the intersubjects variability .the template is biased in the regions where the difference in intensity between maxima and minima is of the same amplitude as the variability .the template may converge to pure noise in these regions .we introduced tools of statistics on manifolds to study the properties of template s shape estimation in medical imaging and computer vision .we have shown its asymptotic bias by considering the shape space s geometry .the bias comes from the external curvature of the template s orbit at the scale of the noise on the data .this provides a geometric interpretation for the bias observed in .we investigated the case of several templates and the performance k - mean algorithms on shapes : clusters are less well separated because of each centroid s bias .the variables controlling the bias are : ( i ) the distance in shape space from the template to a singular shape and ( ii ) the noise s scale .this gives a rule - of - thumb for determining when the bias is important and needs correction .we proposed two procedures for correcting the bias : an iterative bootstrap and a nested bootstrap .these procedures can be applied to any type of shape data : landmarks , curves , images , etc .they also provide a way to compute the external curvature of an orbit .our results are exemplified on simulated and real data .many studies use the template s shape estimation algorithm in molecular biology , medical imaging or computer vision .their estimations are necessarily biased . but these studies often belong to a regime where the bias is not important ( less than ) .for example , the bias is important in landmark shapes analyses when the landmarks noise is comparable to the template shape s size .studies are rarely in this regime .we have considered shapes belonging to infinite dimensional shape spaces .our results do not apply to the infinite dimensional case .we have used them to gain intuition about it .the bias might be more important in infinite dimensions and need the correction we have suggested .biblabel[1]#1 . 10 , _ the riemannian geometry of orbit spaces . the metric , geodesics , and integrable systems _ , publdebrecen , 62 ( 2003 ) . , _ towards a coherent statistical framework for dense deformable template estimation _ , journal of the royal statistical society . , 69 ( 2007 ) ,329 . , _ estimating the template in the total space with the frchet mean on quotient spaces may have a bias ._ , proceedings of the fifth international workshop on mathematical foundations of computational anatomy ( mfca15 ) , 2015 , pp . 131142 . 
,_ convergent stochastic expectation maximization algorithm with efficient sampling in high dimension .application to deformable template model estimation _ , computational statistics & data analysis , 91 ( 2015 ) , pp . 4 19 . , _ the protein data bank _ , nucleic acids res , 28 ( 2000 ) , pp .235242 ., _ on the consistency of frchet means in deformable models for curve and image analysis _ , electronic journal of statistics , ( 2011 ) , pp .10541089 . , _ a deconvolution approach to estimation of a common shape in a shifted curves model _ , ann, 38 ( 2010 ) , pp .24222464 . ,_ the helix - turn - helix dna binding motif ._ , journal of biological chemistry , 264 ( 1989 ) , pp . 19036 . , _ analyse biomtrique de lanneau pelvien en 3 dimensions propos de 100 scanners _ ,revue de chirurgie orthopdique et traumatologique , 100 ( 2014 ) , pp .s241 . , _ statistical shape analysis _ , john wiley & sons , new york , 1998 . , _bootstrap methods : another look at the jackknife _ , the annals of statistics , 7 ( 1979 ) , pp . 126 . , _ morphometrics for nonmorphometricians _ , springer , 2012 . , _ brain templates and atlases _ , neuroimage , 62(2 ) ( 2012 ) , pp. 911922 ., _ intrinsic shape analysis : geodesic principal component analysis for riemannian manifolds modulo lie group actions . _ , statistica sinica , 20 ( 2010 ) , pp .1100 . , _ riemannian structures on shape spaces : a framework for statistical inferences _ , in statistics and analysis of shapes , 2006 , pp . 313333 . ,_ the diffusion of shape _ , advances in applied probability , 9 ( 1977 ) , pp .428430 ., _ shape manifolds , procrustean metrics , and complex projective spaces _ , bulletin of the london mathematical society , 16 ( 1984 ) , pp. 81121 . , _signal estimation under random time - warpings and nonlinear signal alignment _ , in advances in neural information processing systems 24 , 2011 , pp. 675683 . , _ the riemannian structure of euclidean shape spaces : a novel environment for statistics _ , the annals of statistics , 21 ( 1993 ) , pp .12251271 . ,_ euclidean distance matrix analysis ( edma ) : estimation of mean form and mean form difference _ , mathematical geology , 25 ( 1993 ) , pp .573602 . . ,_ mapping the effects of a levels on the longitudinal changes in healthy aging : hierarchical modeling based on stationary velocity fields _ , in proceedings of medical image computing and computer assisted intervention ( miccai ) , vol .6892 of lncs , springer , 2011 , pp .663670 . , _ curvature explosion in quotients and applications _ , j. differential geom. , 85 ( 2010 ) , pp . 117140 . , _ biased estimators on quotient spaces _ , proceedings of the 2nd international of geometric science of information ( gsi2015 ) , ( 2015 ) . , _ nonparametric statistics on manifolds and their applications to object data analysis _ , taylor & francis group , 2016 . , _ toward a generic framework for recognition based on uncertain geometric features _ ,videre : journal of computer vision research , 1 ( 1998 ) , pp .5887 . , _ intrinsic statistics on riemannian manifolds : basic tools for geometric measurements _ , journal of mathematical imaging and vision , 25 ( 2006 ) , pp . 127154, _ a geometric algorithm to find small but highly similar 3d substructures in proteins ._ , bioinformatics , 14 ( 1998 ) , pp . 516522 . ,_ riemannian geometry_ , encyclopaedia of mathem .sciences , springer , 2001 . , _ easy web applications in r. _ , 2013 .url : http://www.rstudio.com / shiny/. 
, _ error estimates of protein structure coordinates and deviations from standard geometry by full - matrix refinement of * *b- and * *b2-crystallin _ , acta crystallographica section d , 54 ( 1998 ) , pp .243252 . ,_ shapes and diffeomorphisms _ , applied mathematical sciences , springer london , limited , 2012 .here is a point in .we consider that belongs to a principal orbit .this will have no impact on the integration because the set of principal orbits is dense in .we write the projection of in the shape space .we write the template shape .we write its estimate .we take a normal coordinate system centered at the template .we have the decomposition , where is the orbit of .we note that , so that are _ not _ the coordinates of in the normal coordinate system at , see figure [ fig : notations ] .we denote the dimension of , the dimension of the principal orbits and the dimension of the quotient space .we write coordinates in with indices ] .the generative model implies the following riemannian normal distribution on the objects : the distance expressed in the normal coordinate system at is simply . the riemannian measure at in the normal coordinate system at has the taylor expansion : .we recognize the riemannian curvature tensor .we truncate the riemannian gaussian : where is the normalization coefficient coming of the univariate truncated gaussian at . , and refer to geodesic balls of radius in their respective spaces .we denote and remark that where is the dimension of the quotient space and the dimension of the principal orbits .we represent in as the graph of a smooth function from to , around by : the local graph is : .its 0-th and 1-th order derivatives are zero because the graph goes through and is tangent at .its second order derivative is and represents the best quadratic approximation of the graph .we need the coordinate of in the ncs at . we parallel transport from to : .the -th order term of the parallel transport at in a ncs at is 0 : this concludes .we take an ncs at .we denote the coordinate of in the shape space .we compute the induced probability distribution on shapes by integrating the distribution on the orbit of out of : the point has coordinates where is the integration coordinate . the lie group action is isometric so the riemannian metric splits onto at any point .moreover , the taylor expansion of the metric around still respects the splitting at the third order .the measure for close enough to is the restriction of this taylor expansion to the orbit .we have : so that : where is the ricci curvature . for the last term ( 3 ) , we assume again that the graph of the orbit is smooth enough so that it admits a taylor serie at , ie : . by plugging this , we get the moments with the same manipulation as in the proof of theorem 1. as the moments of order ( and of odd order ) are zero : .we want to compute the systematic bias .to this aim , we compute the mean of the distribution of shapes in , in a coordinate system centered at the real .it writes : the coordinates of in the euclidean coordinate system at are simply .thus . from the lemmaabove : we denote ( 1 ) the first term of the sum and ( 2 ) the second term of the sum .we compute them independently . 
where the last line comes from the definitions of the moments of the truncated gaussian .we perform a taylor expansion under the integration , in , using the chain rule : again , the terms in integrate to 0 .we have : the term is precisely the normalization constant of the truncated gaussian , so that : the coefficient comes from the fact that we have truncated the gaussian .its expression is independent of because we have truncated at a multiple of .now we show that the second term ( 2 ) is of order .we assume that the local graph of the orbit is smooth enough so that it has a multivariate taylor serie : .we have : here the depend on .we first perform a majoration on each on them by a ( integration on a compact ball ) . we compute : so that : where we recognize the moment of the truncated gaussian , which are non zero only for even power .this gives : .
we use tools from geometric statistics to analyze the usual estimation procedure of a template shape . this applies to shapes from landmarks , curves , surfaces , images etc . we demonstrate the asymptotic bias of the template shape estimation using the stratified geometry of the shape space . we give a taylor expansion of the bias with respect to a parameter describing the measurement error on the data . we propose two bootstrap procedures that quantify the bias and correct it , if needed . they are applicable for any type of shape data . we give a rule of thumb to provide intuition on whether the bias has to be corrected . this exhibits the parameters that control the bias magnitude . we illustrate our results on simulated and real shape data .
keywords : shape , template , quotient space , manifold
let us consider a familiar case of `` systems '' : the human beings .human beings are generally considered as the highest peak of biological evolution .their behavioral and teleological characteristics set them apart from other system classes and make them appear to be more `` gifted '' than other beings , e.g. , dogs .but how do the superior qualities of mankind translate in terms of _ resilience _ ? under stressful or turbulent conditions we know that often a man will result `` better '' than a dog : superior awareness , consciousness , manual and technical dexterity , and reasoning ; advanced ability to reuse experience , learn , develop science , as well as other factors , they all lead to the apparently `` obvious '' conclusion that mankind has a greater ability to tolerate adverse conditions . and though, it is also quite easy to find counterexamples .if a threat , e.g. , comes with ultrasonic noise , a dog may perceive the threat and react for instance by running away while a man may stay unaware until too late . or consider the case of miners : inability to perceive toxic gases makes them vulnerable to , e.g. , carbon monoxide and dioxide , methane , and other lethal gases . a simpler system able to perceive the threat and flee would have more chances to survive .perception of course is but one of a number of `` systemic features '' that need to be available in order to counterbalance a threat .so how do we tell whether a system is fit to stand the new conditions characterizing a changing environment ? how do we reason about the quality of resilience ? and , even more importantly , how do we make sure that a system `` stays fit '' if the environment changes ?the above questions are discussed and , to some extent , addressed in this paper .our starting point here is the conjecture that _ resilience is no absolute figure _ ; rather , it is _ the result of a match with a deployment environment_. whatever its structure , organization , architecture , capabilities , and resources , a system is only robust as long as its `` provisions '' ( its system characteristics , including the ability to develop knowledge and `` wisdom '' ) match the current environmental conditions .a second cornerstone of the present discussion is given by the assumption that the interactions between systems and environments can be expressed and reasoned upon by considering the behaviors expressed during those interactions .in other words , a system - environment fit is the result of the match between the behaviors exercised by a system and those exercised by its environment ( including other systems , the users , etc . )a third and final assumption is that reasoning about a system s resilience is facilitated by considering the behaviors of those system `` organs '' ( namely , sub - systems ) responsible for the following abilities : 1 .the ability to perceive change ; 2 .the ability to ascertain the consequences of change ; 3 .the ability to plan a line of defense against threats deriving from change ; 4 .the ability to enact the defense plan being conceived in step 3 ; 5 . and , finally , the ability to treasure up past experience and continuously improve , to some extent , abilities 14 .as can be clearly seen , the above abilities correspond to the components of the so - called mape - k loop of autonomic computing .we shall refer to those abilities as well as the organs that embed them as to the `` systemic features . 
'' in what follows we first focus in sect .[ s : sf ] on the concept of behavior and recall the five major classes of behaviors according to rosenblueth , wiener , and bigelow and boulding .we then introduce a system s _ cybernetic class _ by associating each of the systemic features with its own behavior class .after this , in sect .[ s : fit ] , we introduce a behavioral formulation of the concepts of supply and system - environment fit as measures of the optimality of a given design with respect to the current environmental conditions .section [ s : or ] then suggests how proactive and/or social behaviors that would be able to track supply and system - environment fit would pave the way to systems able to self - tune their systemic features in function of the experienced or predicted environmental conditions .an application of the concepts presented in this work is briefly described in sect .[ s : ls ] .our conclusions are finally stated in sect .[ s : end ] .as mentioned above , an important attribute towards achieving robustness is given by what we called in sect .[ s : intro ] as the `` systemic features '' , or the behaviors typical of the system under scrutiny .such behaviors are the subject of the present section . in what follows we first recall in sect .[ s : sf : bc ] what are the main behavioral classes .the main sources here are the classic works by rosenblueth , wiener , and bigelow and boulding . in the first work, classes were identified by the authors by considering the system in isolation . in the second oneboulding introduced an additional class considering the social dimension .after this , in sect .[ s : sf : cc ] , we consider an exemplary system ; we identify in it the main system organs responsible for resilience ; and associate behavioral classes to those organs . by doingso we characterize , namely the `` cybernetic class '' of the system under consideration .already 71 years ago rosenblueth , wiener , and bigelow introduced the concept of the `` behavioristic study of natural events '' , namely `` the examination of the output of the object and of the relations of this output to the input '' .the term `` object '' in the cited paper corresponds to that of `` system '' . in that renowned textthe authors purposely `` omit the specific structure and the intrinsic organization '' of the systems under scrutiny and classify them exclusively on the basis of the quality of the `` change produced in the surroundings by the object '' , namely the system s behavior .the authors identify in particular four major classes of behaviors : : : : random behavior .this is an active form of behavior that does not appear to serve a specific purpose or reach a specific state .a source of electro - magnetic interference exercises random behavior .: : : purposeful behavior .this is behavior that serves a purpose and is directed towards a specific goal . quoting the authors , in purposeful behavior we can observe a `` final condition toward which the movement [ of the object ] strives '' .servo - mechanisms are examples of purposeful behavior . : : : reactive behavior .this is behavior that `` involve[s ] a continuous feed - back from the goal that modifies and guides the behaving object '' .examples of this behavior include phototropism , namely the tendency we observe , e.g. , in certain plants , to grow towards the light , and gravitropism , viz . 
the tendency of plant roots to grow downward .reactive behaviors require the system to be open ( able that is to continuously perceive , communicate , and interact with external systems and the environment ) and to embody some form of feedback loop .: : : proactive behavior .this is behavior directed towards the extrapolated future state of the goal .the authors in classify proactive behavior according to its `` order '' , namely the amount of context variables taken into account in the extrapolation .kenneth boulding in his classic paper introduces an additional class : : : : social behaviors .this class is based on the concept of social organization . quoting the author , in such systems ``the unit is not perhaps the person the individual human as such but the ` role'that part of the person which is concerned with the organization or situation in question , and it is tempting to define social organizations , or almost any social system , as a set of role tied together with channels of communication . ''social behaviors may take different forms and be , e.g. , mutualistic , commensalistic , co - evolutive , or co - opetitive . for more informationwe refer the reader to .we shall define as a projection map returning , for each of the above behavior classes , an integer in ( , , ) . for any behavior and any set of context figures , notation will be used to denote that is exercised by considering the context figures in . thus if , for instance , , then refers to a reactive behavior that responds to changes in speed and light . for any behavior and any integer , notation will be used to denote that is exercised by considering context figures , without specifying which ones . as an example , behaviour , with defined as above , identifies an order2 proactive behavior while says in addition that that behavior considers both speed and luminosity to extrapolate the future position of the goal .we now introduce the concept of partial order among behaviors .[ d : partial.order ] given any two behaviors and we shall say that if and only if either of the following conditions holds : 1 . + . + . whenever two behaviors and are such that , it is possible to define some notion of distance between the two behaviors by considering an arithmetization based on , e.g. , the following factors used as exponents of three different prime numbers : 1 .2 . .3 . . in what followswe shall assume that some metric function , , has been defined .the behavioral classes recalled in [ s : sf : bc ] may be applied to the five `` systemic features '' introduced in sect .[ s : intro ] . for any system we shall refer to the systemic features of through the following 5-tuple : whose components orderly correspond to the abilities introduced in sect .[ s : intro ] as well as to the stages of mape - k loops .system will me omitted when it can be implicitly identified without introducing ambiguity .for any given system we define as cybernetic class the 5-tuple where , for any , represents the behavior class assigned to systemic feature of , or if does not include altogether . as can be clearly understood , a system s cybernetic class is a qualitative metric that does not provide a full coverage of the systemic characteristics of the system .as such it should be complemented with quantitative assessments of the quality of service of its system organs namely the sub - systems responsible for hosting its systemic features . 
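The notation of this section can be made concrete with a small data-structure sketch. The integer numbering of the classes, the precise conditions of definition [d:partial.order] (which are not fully legible in this extraction) and the example systems below are all assumptions made for illustration only.

```python
# Sketch of the behaviour classes, one reading of the partial order of
# definition [d:partial.order], and the cybernetic-class 5-tuple.
from dataclasses import dataclass
from enum import IntEnum
from typing import FrozenSet, NamedTuple, Optional

class BehaviorClass(IntEnum):       # projection map: behaviour class -> integer
    RANDOM = 1        # active but purposeless
    PURPOSEFUL = 2    # goal-directed, no feedback
    REACTIVE = 3      # continuous feedback from the goal
    PROACTIVE = 4     # extrapolates the future state of the goal
    SOCIAL = 5        # organised through roles and communication channels

@dataclass(frozen=True)
class Behavior:
    cls: BehaviorClass
    figures: FrozenSet[str]         # context figures the behaviour considers

def precedes(b1: Behavior, b2: Behavior) -> bool:
    """Assumed reading of the partial order: either b1 belongs to a strictly
    lower class, or both are in the same class and b1 considers a subset of
    b2's context figures."""
    return (b1.cls < b2.cls) or (b1.cls == b2.cls and b1.figures <= b2.figures)

class CyberneticClass(NamedTuple):  # the five systemic features (mape-k organs)
    perceive: Optional[Behavior]
    analyze: Optional[Behavior]
    plan: Optional[Behavior]
    execute: Optional[Behavior]
    knowledge: Optional[Behavior]

# two hypothetical systems, only to exercise the notation
s1 = CyberneticClass(
    perceive=Behavior(BehaviorClass.PURPOSEFUL, frozenset({"memory_faults"})),
    analyze=Behavior(BehaviorClass.PROACTIVE, frozenset({"fault_rate"})),
    plan=Behavior(BehaviorClass.PURPOSEFUL, frozenset()),
    execute=Behavior(BehaviorClass.PURPOSEFUL, frozenset({"redundancy"})),
    knowledge=None)
s2 = s1._replace(knowledge=Behavior(BehaviorClass.REACTIVE, frozenset({"history"})))

print(precedes(s1.plan, s1.analyze))      # True: strictly lower behaviour class
print(precedes(s1.analyze, s2.analyze))   # True: same class, subset of figures
```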
in particular for and , the features corresponding to the abilities of perception and actuation it is useful to complement the notion of behavior with a characterization of the set of context variables that are under the `` sphere of action '' of the corresponding organs . for means specifying the set of context figures that may be timely perceived by .interestingly enough , this concept closely corresponds to that of the powers of representation in leibniz .when considering , the sphere of action could be represented by the set of the context figures that may be controlled to a certain extent through system behaviors .we observe that features and are intrinsically purposeful .we believe that notation provides a convenient and homogeneous way to express the behavior class and the spheres of action of both and organs .it is now possible to characterize a system s cybernetic class through notation . as an example , by following the assessments proposed in , the adaptively redundant data structures described in have the following cybernetic class while the adaptive -version programming system introduced in is we believe the notion and notation of cybernetic class provide a convenient way to compare qualitatively the systemic features of any two systems with reference to their robustness . as an example , by comparing the above 5-tuples and one may easily realize how the major strength of those two systems lies in their analytic organs , both of which are capable of proactive behaviors ( ) though in a simpler fashion in .another noteworthy difference is the presence of a knowledge organ in , which indicates that the second system is able to accrue and make use of the past experience in order to improve its action to some extent and exclusively through behaviors .we conjecture that the action of the knowledge organ in this case corresponds to so - called _ antifragility _ , namely the ability to `` treasure up '' the past experience so as to improve one s system - environment fit .what presented in sect . [ s : sf ] allows for a system to be characterized to some extent in terms of its `` systemic features''the provisions that is that play a role when responding to change . as a way to identify the `` quality '' of those provisions in that section we made use of the different behavioral classes as defined in , and introduced as well as its components .here we move our attention to a second aspect that , we conjecture , needs to be considered when assessing a system s resilience .this second aspect tells us how the cybernetic class matches the requirements of dynamically changing environmental conditions . as already anticipated in sect .[ s : intro ] , in what follows we assume that the evolution of an environment may also be expressed as a behavior . said behavior may be of any of the types listed in sect .[ s : sf : bc ] and as such it may result in the dynamic variation of a number of `` firing context figures '' .in fact those figures characterize and , in a sense , set the boundaries of an _ ecoregion _ , namely `` an area defined by its environmental conditions '' . an environment may be the result of the action of , e.g. , a human being ( a `` user '' ) , or a software managing an ambient , or for instance it may be the result of purposeless ( random ) behavior such as a source of electro - magnetic interference . 
as a consequence ,an environment may behave randomly or exhibit a recognizable trend ; in the latter case the variation of its context figures may be such that it allows for tracking or speculation ( extrapolation of future states ) .moreover , an environment may exhibit the same behavior for a relatively long period of time or it may vary dynamically its character. we shall refer in what follows to the dynamic evolution of environmental behavior as to an environment s * turbulence*. diagrams such as the one in fig .[ f : env ] may be used to represent the dynamic evolution of environments .it is now possible to propose a definition of two indicators for the quality of resilience : the system supply relative to an environment and the system - environment fit .[ d : supply ] given a system deployed in an environment , characterized respectively by behaviors and ; and given a metric function ; we define as supply at time with respect to the following value : supply can be positive ( oversupply ) , negative ( undersupply ) , or zero ( perfect supply ) .[ d : fit ] given the same conditions as in definition [ d : supply ] , we define as the system - environment fit at time the function the above definition expresses system - environment fit as a function returning 1 in the case of best fit ; slowly scaling down with oversupply ; and returning in case of undersupply .it is not the only possible such definition of course : an alternative one is given , for instance , by having instead of .figure [ f : fitset ] exemplifies a system - environment fit in the case of two behaviors and with . consists of five context figures identified by integers while consists of context figures .the system behavior is assumed to be constant ; if this means that the system s perception organ constantly monitors the four figures . on the contrary with time .five time segments are exemplified ( ) during which the following context figures are affected : : : : figures . : : : figure and figure . : : : figure . : : : figures . : : : figures . figures are represented as boxed integers , with an empty box meaning that the figure is not affected by the environment and a filled box meaning the figure is affected .the behaviour of the environment is constant within a time segment and changes at the next one .this is shown through the sets at the bottom of fig .[ f : fitset ] : for each segment the superset is while the subset is , namely .the relative supply and the system - environment fit also change with the time segments . during and is perfect supply and best fit : the behavior exercised by the environment is evenly matched by the features of the system . during and the systemic features are more than enough to match the current environmental conditions a case of what we referred to as `` oversupply '' .correspondingly , fit is rather low .in we have the opposite situation : the systemic features for instance , pertaining to a perception organ are insufficient to become aware of all the changes produced by the environment .in particular here changes connected with figure 5 go undetected .this is a case of `` undersupply '' , corresponding to the `` worst possible '' system - environment fit .the two functions introduced in sect . 
[ s : fit ] , and , may be interpreted as measures of the optimality of a given design with respect to the current environmental conditions .whenever those conditions allow it and a partial order `` '' exists for the behaviors at play , then it is possible to consider system behaviors of the following forms : 1 . , with including figures and .[e : opt.pro ] + such behavior , when exercised by system organs for analysis , planning , and knowledge management , translates in the possibility to become aware and speculate on the possible future robustness requirements . if this is coupled with the possibility to revise one s system organs by enabling or disabling , e.g. , the ability to perceive certain context figures depending on the extrapolated future environmental conditions , then a system could proactively improve its own system - environment fit .2 . , with including figures and .[e : opt.soc ] + analysis , planning , and knowledge management behaviors of this type aim at artificially augmenting or reducing the system features by establishing / disestablishing collaborative relationships as exemplified in the `` canary - in - the - coal - mine '' scenario of . as we did in the paperjust cited we propose to call behaviors such as [ e : opt.pro ] ) and [ e : opt.soc ] ) as * auto - resilient*. finally , we remark how the formulation of system - environment fit presented in this work may also be tailored so as to include overheads and costs .littlesister is an icon project financed by the iminds research institute and the flemish government agency for innovation by science and technology ( iwt ) .the project aims to deliver a low - cost telemonitoring solution for home care and is to run until the end of year 2014 .littlesister adopts a connectionist approach in which the collective action of an interconnected network of simple units ( battery - powered mouse sensors ) replaces the adoption of more powerful and expensive complex devices ( smart cameras ) . in order for this approach to be effectivethe mentioned collective action is to guarantee that an optimal trade - off between energy efficiency , performance , and safety is dynamically sustained .we plan to express this optimal trade - off in terms of a system - environment fit. obviously the formulation of the littlesister system - environment fit will be considerably more complex than the one introduced in the present work .a key role will be played in particular by the littlesister awareness organ , which will be used to determine the level of criticality of the current situation and set an operative mode ranging from `` energy - saving - first '' to `` safety - first '' .this operative mode will be included in the set of context figures of the social behavior of littlesister s sensors .depending on the requirements expressed by the current operative mode and other context figures , the system - environment fit will vary , which will translate in a variable selection and number of sensors to be activated .the goal we aim to reach is being able to sustain at the same time both maximum safety and minimum energy expenditure .the questions we have posed in sect . 
[ s : intro ] have been answered , to some extent , by defining a conceptual framework for their discussion .the nature of our framework is behavioral and `` sits on the shoulders '' of the work carried out in the first half of last century by `` giants '' such as bogdanov , wiener , von bertalanffy , boulding , and several others in turn based on the intriguingly modern ideas of `` elder giants '' such as leibniz and aristotle . within our frameworkwe have introduced a behavioral formulation of the concepts of supply and system - environment fit as measures of the optimality of a system with respect to the current conditions of the environment in which the system is deployed .moreover , we have suggested how complex abilities such as auto - resilience and antifragility may be expressed in terms of behaviors able to track supply and fit measures and evolve the systemic features in function of the hypothesized future environmental conditions .practical application of the concepts in this article has been briefly discussed by considering a strategy for optimizing the collective behavior of the mouse sensors used in project littlesister . as can be clearly understood , our work is far from being exhaustive or complete .in particular discussing context figures without referring to a `` range '' , or sphere of action , makes it difficult to compare behaviors such as auditory perception in animals .our future work will include extending our conceptual framework accordingly .another direction we intend to take is the application of our concepts towards the design of antifragile computing systems ; the reader may refer to for a few preliminary ideas about this .i would like to express my gratitude to alan carter for helping me with the pictures in this paper .this work was partially supported by iminds interdisciplinary institute for technology , a research institute funded by the flemish government as well as by the flemish government agency for innovation by science and technology ( iwt ) .the iminds littlesister project is a project co - funded by iminds with project support of iwt .partners involved in the project are universiteit antwerpen , universiteit gent , vrije universiteit brussel , xetal , christelijke mutualiteit vzw , niko projects , jf oceans bvba , and sbd nv .a. rosenblueth , n. wiener , and j. bigelow , `` behavior , purpose and teleology , '' _ philosophy of science _ ,vol . 10 , no . 1 , pp . 1824 , 1943 .[ online ] .available : http://www.journals.uchicago.edu/doi/abs/10.1086/286788 v. de florio , `` preliminary contributions towards auto - resilience , '' in _ proceedings of the 5th international workshop on software engineering for resilient systems ( serene 2013 ) , lecture notes in computer science 8166_.1em plus 0.5em minus 0.4emkiev , ukraine : springer , october 2013 , pp .141155 .f. heylighen , `` basic concepts of the systems approach , '' in _ principia cybernetica web _ , f. heylighen , c. joslyn , and v. turchin , eds.1em plus 0.5em minus 0.4emprincipia cybernetica , brussels , 1998 . [ online ] .available : http://pespmc1.vub.ac.be/sysappr.html r. adner and r. kapoor , `` value creation in innovation ecosystems : how the structure of technological interdependence affects firm performance in new technology generations , '' _ strategic management journal _31 , pp . 306333 , 2010 .k. 
gdel , `` on formally undecidable propositions of principia mathematica and related systems , ''november 2000 , translation and adaptation by martin hirzel of gdel s `` ber formal unentscheidbare stze der principia mathematica und verwandter systeme '' .v. de florio , `` on the role of perception and apperception in ubiquitous and pervasive environments , '' in _ proceedings of the 3rd workshop on service discovery and composition in ubiquitous and pervasive environments ( supe12 ) _ , august 2012 .[ online ] .available : http://www.sciencedirect.com/science/article/pii/s1877050912005297 g. leibniz and l. strickland , _ the shorter leibniz texts : a collection of new translations _ , ser .continuum impacts.1em plus 0.5em minus 0.4emcontinuum , 2006 .[ online ] .available : http://books.google.be/books?id=ofocy3xj8nkc v. de florio , `` on the constituent attributes of software and organisational resilience , '' _ interdisciplinary science reviews _ , vol . 38 , no . 2 , june 2013 .[ online ] .available : http://www.ingentaconnect.com / content / maney / isr/2013/00000038/00000002/% art00005[http://www.ingentaconnect.com / content / maney / isr/2013/00000038/00000002/% art00005 ] v. de florio and c. blondia , `` reflective and refractive variables : a model for effective and maintainable adaptive - and - dependable software , '' in _ proc . of the 33rd euromicro conference on software engineering and advanced applications ( seaa 2007 ) _ , lbeck , germany , august 2007 .j. buys , v. de florio , and c. blondia , `` towards context - aware adaptive fault tolerance in soa applications , '' in _ proceedings of the 5th acm international conference on distributed event - based systems ( debs)_. 1em plus 0.5em minus 0.4emassociation for computing machinery , inc .( acm ) , 2011 , pp . 6374 .j. buys , v. de florio , and c. blondia , `` towards parsimonious resource allocation in context - aware n - version programming , '' in _ proceedings of the 7th iet system safety conference_.1em plus 0.5em minus 0.4em the institute of engineering and technology , 2012 . v. de florio , `` antifragility = elasticity + resilience + machine learning : models and algorithms for open system fidelity , '' _ arxiv e - prints _2014 , submitted for publication in the proceedings of the 1st international workshop `` from dependable to resilient , from resilient to antifragile ambients and systems '' ( antifragile 2014 ) .s. meystre , `` the current state of telemonitoring : a comment on the literature , '' _ telemed j e health _ , vol .11 , no . 1 ,pp . 6369 , 2005 .[ online ] .available : http://www.ncbi.nlm.nih.gov/pubmed/15785222
already 71 years ago rosenblueth , wiener , and bigelow introduced the concept of the `` behavioristic study of natural events '' and proposed a classification of systems according to the quality of the behaviors they are able to exercise . in this paper we consider the problem of the resilience of a system when deployed in a changing environment , which we tackle by considering the behaviors that both the system organs and the environment mutually exercise . we then introduce a partial order and a metric space for those behaviors , and we use them to define a behavioral interpretation of the concept of system - environment fit . moreover we suggest that behaviors based on the extrapolation of future environmental requirements would allow systems to proactively improve their own system - environment fit and optimally evolve their resilience . finally we describe how we plan to express a complex optimization strategy in terms of the concepts introduced in this paper .
in bayesian optimization ( bo ) , we wish to optimize a derivative - free expensive - to - evaluate function with feasible domain , with as few function evaluations as possible . in this paper , we assume that membership in the domain is easy to evaluate and we can evaluate only at points in .we assume that evaluations of are either noise - free , or have additive independent normally distributed noise .we consider the parallel setting , in which we perform more than one simultaneous evaluation of .bo typically puts a gaussian process prior distribution on the function , updating this prior distribution with each new observation of , and choosing the next point or points to evaluate by maximizing an acquisition function that quantifies the benefit of evaluating the objective as a function of where it is evaluated . in comparison with other global optimization algorithms ,bo often finds `` near optimal '' function values with fewer evaluations . as a consequence ,bo is useful when function evaluation is time - consuming , such as when training and testing complex machine learning algorithms ( e.g. deep neural networks ) or tuning algorithms on large - scale dataset ( e.g. imagenet ) .recently , bo has become popular in machine learning as it is highly effective in tuning hyperparameters of machine learning algorithms .most previous work in bo assumes that we evaluate the objective function sequentially , though a few recent papers have considered parallel evaluations .while in practice , we can often evaluate several different choices in parallel , such as multiple machines can simultaneously train the machine learning algorithm with different sets of hyperparameters . in this paper , we assume that we can access evaluations simultaneously at each iteration .then we develop a new parallel acquisition function to guide where to evaluate next based on the decision - theoretical analysis . * our contributions . * we propose a novel batch bo method which measures the information gain of evaluating points via a new acquisition function , the parallel knowledge gradient ( ) .this method is derived using a decision - theoretic analysis that chooses the set of points to evaluate next that is optimal in the average - case with respect to the posterior when there is only one batch of points remaining . naively maximizing be extremely computationally intensive , especially when is large , and so , in this paper , we develop a method based on infinitesimal perturbation analysis ( ipa ) to evaluate s gradient efficiently , allowing its efficient optimization . in our experiments on both synthetic functions andtuning practical machine learning algorithms , consistently finds better function values than other parallel bo algorithms , such as parallel ei , batch ucb and parallel ucb with exploration . 
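To make the batch acquisition concrete, the sketch below estimates the value of a candidate batch by Monte Carlo, assuming the parallel knowledge gradient takes the usual knowledge-gradient form: the expected gain, over fantasy outcomes at the batch, in the maximum of the posterior mean over a discretized domain. This is a reader's sketch with a toy objective, kernel and sizes, not the released implementation, which relies on the IPA-based gradient estimator mentioned above; scikit-learn is used only for convenience.

```python
# Monte Carlo sketch of a batch knowledge-gradient value.
# Toy objective, kernel, grid and batch are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
noise = 0.1

def f(x):
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

# initial observations
X = rng.uniform(-1.0, 2.0, size=(6, 1))
y = f(X).ravel() + noise * rng.standard_normal(6)

kernel = RBF(length_scale=0.5)
gp = GaussianProcessRegressor(kernel=kernel, alpha=noise**2, optimizer=None)
gp.fit(X, y)

X_grid = np.linspace(-1.0, 2.0, 200).reshape(-1, 1)   # discretized domain
best_now = gp.predict(X_grid).max()

def batch_kg(X_batch, n_fantasies=64):
    """Average gain in the maximum posterior mean after fantasy evaluations."""
    mean_b, cov_b = gp.predict(X_batch, return_cov=True)
    cov_b = cov_b + noise**2 * np.eye(len(X_batch))     # add observation noise
    fantasies = rng.multivariate_normal(mean_b, cov_b, size=n_fantasies)
    gains = []
    for yf in fantasies:
        gp_f = GaussianProcessRegressor(kernel=kernel, alpha=noise**2,
                                        optimizer=None)
        gp_f.fit(np.vstack([X, X_batch]), np.concatenate([y, yf]))
        gains.append(gp_f.predict(X_grid).max() - best_now)
    return float(np.mean(gains))

candidate = np.array([[0.2], [1.5]])                    # a batch of q = 2 points
print("estimated batch KG value:", batch_kg(candidate))
```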
provides especially large value when function evaluations are noisy .the code in this paper is available at https://github.com/wujian16/qkg .the rest of the paper is organized as follows .section [ sect : related ] reviews related work .section [ sect : gaussian ] gives background on gaussian processes and defines notation used later .section [ sect : qkg ] proposes our new acquisition function for batch bo .section [ sect : practice ] provides our computationally efficient approach to maximizing .section [ sect : numerical ] presents the empirical performance of and several benchmarks on synthetic functions and real problems .finally , section [ sect : conclusion ] concludes the paper .[ sect : related ] within the past several years , the machine learning community has revisited bo due to its huge success in tuning hyperparameters of complex machine learning algorithms .bo algorithms consist of two components : a statistical model describing the function and an acquisition function guiding evaluations . in practice ,gaussian process ( gp ) is the mostly widely used statistical model due to its flexibility and tractability .much of the literature in bo focuses on designing good acquisition functions that reach optima with as few evaluations as possible . maximizing this acquisition functionusually provides a single point to evaluate next , with common acquisition functions for sequential bayesian optimization including probability of improvement ( pi) , expected improvement ( ei ) , upper confidence bound ( ucb ) , entropy search ( es ) , and knowledge gradient ( kg ) .recently , a few papers have extended bo to the parallel setting , aiming to choose a batch of points to evaluate next in each iteration , rather than just a single point . suggests parallelizing ei by iteratively constructing a batch , in each iteration adding the point with maximal single - evaluation ei averaged over the posterior distribution of previously selected points . also proposes an algorithm called constant liar " , which iteratively constructs a batch of points to sample by maximizing single - evaluation while pretending that points previously added to the batch have already returned values .there are also work extending ucb to the parallel setting . proposes the gp - bucb policy , which selects points sequentially by a ucb criterion until filling the batch .each time one point is selected , the algorithm updates the kernel function while keeping the mean function fixed . proposes an algorithm combining ucb with pure exploration , called gp - ucb - pe . in this algorithm , the first point is selected according to a ucb criterion; then the remaining points are selected to encourage the diversity of the batch .these two algorithms extending ucb do not require monte carlo sampling , making them fast and scalable .however , ucb criteria are usually designed to minimize cumulative regret rather than immediate regret , causing these methods to underperform in bo , where we wish to minimize simple regret .the parallel methods above construct the batch of points in an iterative greedy fashion , optimizing some single - evaluation acquisition function while holding the other points in the batch fixed .the acquisition function we propose considers the batch of points collectively , and we choose the batch to jointly optimize this acquisition function . 
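the single-point acquisition functions listed above have simple closed forms under a gaussian posterior. the sketch below evaluates probability of improvement, expected improvement and a confidence-bound criterion (written as a lower bound, since we minimize) on an assumed posterior mean and standard deviation; the grid, the posterior and the trade-off parameter are illustrative and not taken from any of the cited papers.

    import numpy as np
    from scipy.stats import norm

    def probability_of_improvement(mu, sigma, best):
        # P(f(x) < best) under f(x) ~ N(mu, sigma^2)
        return norm.cdf((best - mu) / sigma)

    def expected_improvement(mu, sigma, best):
        # E[max(0, best - f(x))] in closed form
        z = (best - mu) / sigma
        return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    def confidence_bound(mu, sigma, beta=2.0):
        # "ucb"-style criterion for minimization: favour low mean and high uncertainty
        return mu - beta * sigma

    # assumed posterior over a 1-d grid (these values would normally come from the fitted gp)
    xs = np.linspace(0.0, 1.0, 201)
    mu = np.sin(6 * xs)
    sigma = 0.2 + 0.3 * np.abs(xs - 0.5)
    best = -0.8
    print("pi  maximizer:", xs[np.argmax(probability_of_improvement(mu, sigma, best))])
    print("ei  maximizer:", xs[np.argmax(expected_improvement(mu, sigma, best))])
    print("lcb minimizer:", xs[np.argmin(confidence_bound(mu, sigma))])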
other recent papers that value points collectively include which optimizes the parallel ei by a closed - form formula , , in which gradient - based methods are proposed to jointly optimize a parallel ei criterion , and , which proposes a parallel version of the es algorithm and uses monte carlo sampling to optimize the parallel es acquisition function .we compare against methods from a number of these previous papers in our numerical experiments , and demonstrate that we provide an improvement , especially in problems with noisy evaluations .our method is also closely related to the knowledge gradient ( kg ) method for the non - batch ( sequential ) setting , which chooses the bayes - optimal point to evaluate if only one iteration is left , and the final solution that we choose is not restricted to be one of the points we evaluate .( expected improvement is bayes - optimal if the solution is restricted to be one of the points we evaluate . )we go beyond this previous work in two aspects .first , we generalize to the parallel setting .second , while the sequential setting allows evaluating the kg acquisition function exactly , evaluation requires monte carlo in the parallel setting , and so we develop more sophisticated computational techniques to optimize our acquisition function .recently , studies a nested batch knowledge gradient policy . however , they optimize over a finite discrete feasible set , where the gradient of kg does not exist . as a result ,their computation of kg is much less efficient than ours .moreover , they focus on a nesting structure from materials science not present in our setting .in this section , we state our prior on , briefly discuss well known results about gaussian processes ( gp ) , and introduce notation used later .we put a gaussian process prior over the function , which is specified by its mean function and kernel function .we assume either exact or independent normally distributed measurement errors , i.e. the evaluation at point satisfies where is a known function describing the variance of the measurement errors .if is not known , we can also estimate it as we do in section [ sect : numerical ] . 
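as noted above, the knowledge-gradient criterion can be computed exactly in the sequential setting but requires monte carlo in the parallel setting. the following sketch estimates a parallel knowledge-gradient value for a candidate batch by sampling "fantasy" outcomes at the batch, refitting the posterior mean, and measuring how much the minimum of the posterior mean over a finite discretization improves on average. it is a brute-force illustration of the idea only: the gp, the kernel, the discretization and all settings are assumptions, and the paper instead optimizes this quantity efficiently with gradient estimates obtained by infinitesimal perturbation analysis.

    import numpy as np

    rng = np.random.default_rng(1)

    def rbf(A, B, ls=0.25):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls ** 2)

    def posterior(X, y, Xs, noise=1e-3):
        # posterior mean and covariance of a zero-mean gp at the points Xs
        K = rbf(X, X) + noise * np.eye(len(X))
        L = np.linalg.cholesky(K)
        a = np.linalg.solve(L.T, np.linalg.solve(L, y))
        Ks = rbf(X, Xs)
        v = np.linalg.solve(L, Ks)
        return Ks.T @ a, rbf(Xs, Xs) - v.T @ v

    def parallel_kg(X, y, batch, A, n_mc=500, noise=1e-3):
        # expected decrease of min_x mu(x) over the discretization A after observing the batch
        mu_A, _ = posterior(X, y, A, noise)
        best_now = mu_A.min()
        mu_b, cov_b = posterior(X, y, batch, noise)
        fantasies = rng.multivariate_normal(mu_b, cov_b + noise * np.eye(len(batch)), size=n_mc)
        gains = np.empty(n_mc)
        for s in range(n_mc):
            mu_new, _ = posterior(np.vstack([X, batch]),
                                  np.concatenate([y, fantasies[s]]), A, noise)
            gains[s] = best_now - mu_new.min()
        return gains.mean()

    # toy usage on [0, 1]^2
    X = rng.random((6, 2)); y = np.sin(6 * X[:, 0]) + X[:, 1] ** 2
    A = rng.random((100, 2))          # finite discretization of the domain
    batch = rng.random((4, 2))
    print("estimated parallel kg of the candidate batch:", parallel_kg(X, y, batch, A))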
supposing we have measured at points and obtained corresponding measurements , we can then combine these observed function values with our prior to obtain a posterior distribution on .this posterior distribution is still a gaussian process with the mean function and the kernel function as follows this section , we propose a novel parallel bayesian optimization algorithm by generalizing the concept of the knowledge gradient from to the parallel setting .the knowledge gradient policy in for discrete chooses the next sampling decision by maximizing the expected incremental value of a measurement , without assuming ( as expected improvement does ) that the point returned as the optimum must be a previously sampled point .we now show how to compute this expected incremental value of an additional iteration in the parallel setting .suppose that we have observed function values .if we were to stop measuring now , would be the minimum of the predictor of the gp .if instead we took one more batch of samples , would be the minimum of the predictor of the gp .the difference between these quantities , , is the increment in expected solution quality ( given the posterior after samples ) that results from the additional batch of samples .this increment in solution quality is random given the posterior after samples , because is itself a random vector due to its dependence on the outcome of the samples .we can compute the probability distribution of this difference ( with more details given below ) , and the algorithm values the sampling decision according to its expected value , which we call the parallel knowledge gradient factor , and indicate it using the notation .formally , we define the factor for a set of candidate points to sample as , \label{eq : multikg}\ ] ] where : = \mathbb{e } \left[\cdot |{\bm{x}^{(1:n ) } } , { y^{(1:n ) } } \right] ] , rosenbrock3 on the domain ^ 3 ] , and hartmann6 on the domain ^ 6 ] , we would like to prove that ( [ eqn : qkg_grad_interchange ] ) is correct . before proceeding ,we define one more notation where equals to component - wise except for . to prove it, we cite theorem 1 in , which requires three conditions to make ( [ eqn : qkg_grad_interchange ] ) valid : there exists an open neighborhood of where is the dimension of point in such that ( i ) is continuous in for any fixed and , ( ii ) is differentiable except on a denumerable set in for any given and , ( iii ) the derivative of ( when it exists ) is uniformly bounded by for all , and the expectation of is finite . under the condition that the mean function and the kernel function are continuous differentiable, we see that for any given , is continuous differentiable in by the result that the multiplication , the inverse ( when the inverse exists ) and the cholesky operators preserve continuous differentiability .when is finite , we see that is continuous in .then is also continuous in by the definition of the function . by the expression that , if both and are unique , then is differentiable at .we define to be the set that is not differentiable , then we see that where . depend on if ( ) where is the point of . as is finite ,we only need to show that and is denumerable . defining on , one can see that is continuous differentiable on .we would like to show that is denumerable . to prove it, we will show that contains only isolated points .then one can use a theorem in real analysis : any set of isolated points in is denumerable ( see the proof of statement 4.2.25 on page 165 in ) . 
to prove that only contains isolated points, we use the definition of an isolated point : is an isolated point of if and only if is not a limit point of .we will prove by contradiction , suppose that is a limit point of , then it means that there exists a sequence of points all belong to such that .however , by the definition of derivative and , , a contradiction .so we conclude that only contains isolated points , so is denumerable .defining on , is also continuous differentiable on , then one can similarly prove that is denumerable . recall that from section 5 of the main document , where , , and we can calculate the as follows using the fact that is continuously differentiable and is compact , then is bounded by some . by the result that is continous ,it is bounded by a vector as is compact .then where . and .in this section , we will prove that sga converges to a stationary point .we follow the same idea of proving the theorem 2 in .first , it requires the step size satisfying as , and .second , it requires the second moment of the gradient estimator is finite . in the above section 1.3 ,we have show that , then .
in many applications of black - box optimization , one can evaluate multiple points simultaneously , e.g. when evaluating the performances of several different neural networks in a parallel computing environment . in this paper , we develop a novel batch bayesian optimization algorithm the parallel knowledge gradient method . by construction , this method provides the one - step bayes optimal batch of points to sample . we provide an efficient strategy for computing this bayes - optimal batch of points , and we demonstrate that the parallel knowledge gradient method finds global optima significantly faster than previous batch bayesian optimization algorithms on both synthetic test functions and when tuning hyperparameters of practical machine learning algorithms , especially when function evaluations are noisy .
there are many examples , both man made and naturally occurring , of phenomena which begin with a small disturbance and end in a catastrophe . such events include electrical network failures , forest fires , avalanches , nuclear chain reactions , snapping ropes and landslides .a common feature of the distribution of the sizes of such events , and the mathematical property which links the very small to the very big , is power - law scaling .such scaling often appears over a number of orders of magnitude with the end of the power law region marked by an exponentially decaying probability density , referred to as an `` exponential cut off '' .the values of the power law exponent , and the cut off point will depend on the nature of the system , but often not on its fine details a phenomenon known as universality .the physical origin of the cut off may be the physical size of the system or its inherent ability propagate the cascade . for infinite systems, there exists a critical level of instability at which the distribution is a pure power - law . as the system approaches this critical point , the cut off moves increasingly rapidly toward infinity , and the expected cascade size diverges . in this work it is our aim to investigate the phenomenon of power law crossover in cascade size distributions .crossover occurs when the distribution of large cascades follows a different power law to that of small cascades .this type of behaviour was discovered in the distribution of bursts of snapping events in bundles of fibres under tension and close to complete breakdown .it is our aim to show that similar behaviour may be exhibited by a system in which cascades occur repeatedly over an extended period .each cascade has the effect of increasing the stability of the system so that subsequent cascades propagate less freely .such a stabilizing effect is seen , for example , in regions prone to forest fires where the extent and frequency of large fires has been reduced by planned burning of limited areas .another example is where the spread of disease through a population is reduced by the presence of recovered ( or vaccinated ) individuals , who act as firewalls preventing transmission .we might also expect to see a similar effect in terrain susceptible to landslides .in fact , there is some evidence of crossover behaviour in landslide size distributions .however , our model is not tied to one particular physical system , but rather it puts forward a generic explanation of how crossover behaviour might arise .our approach is to produce the simplest possible model ( and explanation ) of driven cascades , that is mathematically tractable , and that preserves some key features of real cascading phenomena. 
the fundamental quantity of interest to us will be the ability of the system to propagate a cascade , which will be described by a single number , , the propagation power .in physical terms , this quantity might be determined by the connectivity of a network of unstable nodes , the density of unstable material in a system , or the average proximity of the components of a system to failure .because we view the propagation power as a measure of how easily cascades propagate between parts of system it will not depend on the absolute volume of the system it is an intensive property .the dynamical properties of will be determined by two processes .first , will tend to increase over time , but with an element of unpredictability , described by brownian noise so that in the absence of cascades : where is the mean rate of increase of , is a standard wiener process , and controls the magnitude of the noise .the physical origin of such a process in , for example , a forest fire model might be the drying out of vegetation due to unpredictable weather , or in a landslide model , the natural variability of pore water pressure .we assume that the magnitude of the noise is independent of system size , and therefore it represents an external driving process .the second influence on arises from the cascades themselves , which act to stabilize the system . to capture this stabilizing effect, we suppose that if the size of the cascade since an arbitrary reference time is given by the continuous random variable , then in response changes by .the parameter measures the sensitivity of to cascades , and will depend on the size of the system , and the stabilizing effect on those parts of it that are involved in the cascade . for convenience , we will refer to as the `` inverse system size '' .note that the distribution of will itself depend on .we define to be the sum of all cascade sizes that have occurred since and let count the number that have occurred .we then have that : together with the driving dynamics ( [ driving ] ) we have a complete stochastic differential equation for : we will assume that cascades begin when a small part of the system spontaneously fails. since a larger system will have more potentially unstable material then the rate at which cascades begin should scale in proportion to the system size .assuming that the cascade rate is constant and independent of the cascade history , then the times between cascades will be exponentially distributed and will be a poisson process with intensity , where is a constant of proportionality , independent of system size . without loss of generality in the model we can set by re scaling time and adjusting and .as we mentioned above , the distribution from which individual cascades sizes , , are drawn will depend on the value of when they occur .we write the probability density function of this distribution . in section [ casc ]we introduce a continuous state branching process to describe cascades , which we cap at size ( this cap is proportional to system size , and ensures that the propagation power can not become negative ) .we find that : where as approaches a `` critical value '' . at this critical point , is a pure power law and the mean cascade size diverges . for , the power law region is `` cut off '' at approximately . 
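the driving dynamics just described are easy to simulate directly. the python sketch below uses an euler-maruyama step for the brownian drift of the propagation power, poisson arrivals of cascades at a rate proportional to system size, and, purely as a stand-in for the continuous state branching process introduced below, a capped galton-watson cascade with poisson offspring whose mean equals the current propagation power. all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)

    def cascade_size(p, cap):
        # toy cascade: galton-watson process with poisson(p) offspring, capped at `cap`
        total, generation = 1, 1
        while generation > 0 and total < cap:
            generation = rng.poisson(p * generation)
            total += generation
        return min(total, cap)

    nu, sigma, alpha = 10.0, 0.5, 0.01       # drift, noise strength, inverse system size
    dt, t_end = 0.01, 300.0
    cascade_rate = 1.0 / alpha               # cascades start at a rate proportional to system size
    p, t, sizes, trace = 0.5, 0.0, [], []
    while t < t_end:
        # upward brownian drift of the propagation power between cascades
        p += nu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        # poisson number of cascades starting in this time step; each one stabilizes the system
        for _ in range(rng.poisson(cascade_rate * dt)):
            z = cascade_size(p, cap=int(1.0 / alpha))
            sizes.append(z)
            p -= alpha * z
        p = max(p, 0.0)
        trace.append(p)
        t += dt

    sizes = np.array(sizes)
    print("time-averaged propagation power (stays below criticality):", np.mean(trace))
    print("number of cascades:", len(sizes), " largest cascade:", sizes.max())

with these settings the propagation power fluctuates just below the critical value (here, offspring mean one), which is the regime in which averaging over the distribution of the propagation power, described in what follows, produces the crossover.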
when , the mean cascade size increases further still and is infinite in the limit .the divergence of the mean cascade size as the critical point is approached from below will create a self stabilizing effect which pushes the system away from .the combination of this automatic stabilization and the upward drift of the driving process means that the propagation power will fluctuate about a mean value lying in the subcritical region .we will find that the magnitude of fluctuations is controlled largely by for large systems . of central interest to usis the long term cumulative record of cascade sizes in the system , which we describe with a probability density function .this will be an average of over the values of at which cascades take place .we define to be the probability density function for the value of the propagation power at time .the `` steady state '' density function for is then .we then have that this expression may also be thought of as the probability density function for the size of the next cascade observed after some arbitrary ( but large ) time .one of our main results is to show analytically that fluctuations in about its typical subcritical value , causing it to approach temporarily closer to criticality , are what generates crossover behaviour in the `` averaged '' cascade distribution . from the above arguments it is clear that typical value of , corresponding to the peak of , will be increased by increasing the driving rate . in the limit of infinite system sizewe will find that the stabilizing effect of diverging mean cascade size near means that it is always the case that .therefore , in the limit the system will sit at the critical point .similar behaviour is exhibited by the forest fire model , where the equivalent of is the rate of tree growth . because there are a wide range of parameter values for which both our model and the forest fire model are near critical ,both may be seen as examples of `` self organised criticality '' ( s.o.c . )the requirement that be sufficiently large in order for the system to be near criticality means that our model does not exhibit s.o.c . in its purest form , as seen in the sandpile . however , we can still draw analogy between the steady increase in and the addition of energy or particles in truly self organising models .suppose that each cascade begins when a small volume of the system experiences a `` failure '' event ( for example it may catch fire , explode , get infected or begin motion ) .this may induce further parts to fail and so on , forming a sequence of failures whose volumes are referred to as _generations_. we assume that the generations follow a _ continuous state branching process _ .the relationship between the sizes of the successive generations is encoded by an _ offspring _ distribution , .this is the probability distribution for the amount of failed material that each unit of the current generation triggers in the next .we assume that remains the same throughout the cascade .suppose that the zeroth generation of the process , , is an integer , then the size , , of the first generation is the sum of independent copies of a -distributed random variable . in this casethe distribution of is simply the convolution of with itself times , written . if is not an integer , we extend the idea of -fold convolution , following seneta and vere jones , as follows .suppose that is a random variable drawn from the distribution .the function : is the laplace transform of . 
given this definition , is the laplace transform of where . by relaxing the constraint that be an integer, we may define the laplace transform of , conditional on , to be , and then extend this rule to later generations : this recursive equation defines the relationships between the distributions of successive generations , giving a complete characterisation of the process once is defined .we will take to be the gamma distribution which has density function defined for , where the variables and are referred to as the shape and scale parameters .our choice of is motivated by the requirement that it have a physically plausible shape ( not bimodal ) , have defined moments , lead to mathematical tractability , and that ] , yielding the following expression for : } ( 1-k p)^{\frac{2}{k \sigma^2 } } \exp\left[-\frac{2 \mu}{k \sigma^2}(1-kp)\right]\ ] ] for all parameter values of interest to us , the probability weight in the invalid region ] .if is a gamma process with parameters and then : we now define a new stochastic process , where is the size of the first generation of our branching process . by defining to be the cumulative cascade size up to the step, we may show that the processes and have the following relationship : provided that the cascade has not ended for some . here denotes equality in distribution .we may deduce that this relationship holds inductively .we note first that from the defining properties of the gamma process : assuming that relationship ( [ map ] ) holds for all then since then ( [ map ] ) holds when and therefore for all by induction .the cascade ends at the first generation for which , at which point , so the total size of the cascade is equal to the first time that the process meets the origin . in order to find the cascade size distribution we need to solve this first passage time problem .our first passage time problem is most easily solved by viewing the stochastic process as the limit of a discrete state random walk . herewe show how the appropriate random walk is constructed .we begin by noting that the negative binomial distribution , which has probability mass function : provides an arbitrarily close discrete approximation to the gamma distribution for appropriate choice of the parameters and .the approximation is set up in the following way .we divide ] stands for the coefficient of in the taylor series of .we now have the probability mass function for the cascade size in the discrete branching process which approximates the continuum process that we are interested in .now that we have the solution to the discrete problem we solve the continuous problem by taking the continuum limit .we find expressions for the moments of the cascade size using a similar method .using a martingale method we determine the probability that the cascade is of finite size in the supercritical case .we obtain the continuum cascade density function , which we will call , by setting and and then taking the limit of : the asymptotic properties of may be determined by making use of stirling s approximation : .the result is : \frac { e^{- \kappa z } } { z^{3/2 } } \text { as } z \rightarrow \infty\ ] ] where provided , the distribution is normalised and its moments are defined .it is useful to have explicit expressions for the first two moments of in this case .we may compute the moments of the ( discrete ) distribution of by differentiating the generating function relationship : , and then solving for and . 
using the expression for , together with the fact that when , , we find that : the total cascade size , , has the same distribution as the sum of copies of , so and , yielding the first two moments of the cascade distribution in exact form : numerical integration of the exact distribution ( [ exact ] ) reveals that it is not normalized when .this is the `` supercritical '' regime . in general the total probability weight is equal to , which is less than one in the supercritical case because there is a non zero probability of seeing an infinite cascade .we may deduce this probability by considering the stochastic process : taking the expectation value of this process , conditional on its value at we find that : if we let be the solution to the equation then this expectation will be independent of time .the value of which solves this equation is : letting be the first time at which the process meets the origin , then we have that this gives the result presented in equation ( [ mom ] ) .the author would like to acknowledge murad banaji , samia burridge , alexey kuznetsov , andreas kyprianou and malcolm whitworth for useful discussions , as well as the anonymous referees for careful reading and constructive criticism .
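two short numerical checks of the results derived above, both with assumed illustrative parameters. the first simulates the gamma-offspring branching process directly (approximating extinction by a small threshold on the generation size) and compares the empirical mean cascade size and tail with the standard subcritical branching-process expectations; the second illustrates the crossover mechanism by averaging a power law with an exponential cut off over a distribution of propagation powers, using the generic near-critical scaling of the cut off with distance from criticality rather than the exact expression derived in the text.

    import numpy as np

    rng = np.random.default_rng(3)

    def cascade(g0, shape_a, scale_th, cap, eps=1e-6):
        # one cascade of the continuous-state branching process with gamma offspring:
        # generation n+1 given generation size g is gamma(shape_a * g, scale_th),
        # the extended g-fold convolution of the offspring law.  extinction is
        # approximated by the generation size falling below `eps`.
        g, total = g0, g0
        while g > eps and total < cap:
            g = rng.gamma(shape_a * g, scale_th)
            total += g
        return min(total, cap)

    shape_a, scale_th, g0, cap = 2.0, 0.475, 0.5, 1e6      # offspring mean m = 0.95 (subcritical)
    m = shape_a * scale_th
    sizes = np.array([cascade(g0, shape_a, scale_th, cap) for _ in range(20000)])
    print("empirical mean cascade size:", sizes.mean())
    print("standard subcritical value g0/(1-m):", g0 / (1.0 - m))

    # crude tail-exponent estimate from the survival function inside the power-law window
    z = np.logspace(0.3, 1.7, 15)
    surv = np.array([(sizes > zi).mean() for zi in z])
    slope = np.polyfit(np.log(z), np.log(surv + 1e-12), 1)[0]
    print("survival-function slope (about -1/2 for a z**-3/2 density):", slope)

the crossover itself can be seen by averaging the conditional cascade density over a distribution of propagation powers:

    # assumed near-critical form: q(z|p) ~ z**-3/2 * exp(-kappa(p) z) with kappa ~ (1 - p)**2
    def q(z, p):
        return z ** -1.5 * np.exp(-((1.0 - p) ** 2) * z)

    p_grid = np.linspace(0.0, 0.999, 2000)
    dp = p_grid[1] - p_grid[0]
    f_star = np.exp(-0.5 * ((p_grid - 0.9) / 0.03) ** 2)    # assumed stationary density of p
    f_star /= f_star.sum() * dp

    z = np.logspace(0, 6, 200)
    P = np.array([(q(zi, p_grid) * f_star).sum() * dp for zi in z])
    slope = np.gradient(np.log(P), np.log(z))
    print("effective exponent at small z:", -slope[20])     # close to 3/2
    print("effective exponent at large z:", -slope[150])    # steeper: the crossover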
we propose a model which explains how power law crossover behaviour can arise in a system which is capable of experiencing cascading failure . in our model the susceptibility of the system to cascades is described by a single number , the _ propagation power _ , which measures the ease with which cascades propagate . physically , such a number could represent the density of unstable material in a system , its internal connectivity , or the mean susceptibility of its component parts to failure . we assume that the propagation power follows an upward drifting brownian motion between cascades , and drops discontinuously each time a cascade occurs . cascades are described by a continuous state branching process with distributional properties determined by the value of the propagation power when they occur . in common with many cascading models , pure power law behaviour is exhibited at a critical level of propagation power , and the mean cascade size diverges . this divergence constrains large systems to the subcritical region . we show that as a result , crossover behaviour appears in the cascade distribution when an average is performed over the distribution of propagation power . we are able to analytically determine the exponents before and after the crossover .
one of the most common tasks in microarray analysis is to identify a list of genes that are differentially expressed under two conditions , such as being affected by a disease vs. normal , before vs. after a medical treatment , and one vs. another disease subtype .the number of genes on the top - ranking list is usually much smaller than the total number of genes on the chip , .if the same type of microarray chip is used for two different studies ( e.g. disease - a vs. control , and disease - b vs. control ) , two differentially expressed gene lists can be obtained , with and genes .researchers often find the same genes appear in both lists and hypothesize that these common genes are involved the etiology of both diseases .however , for such a hypothesis to be convincing , one has to first estimate the probability for overlapping genes by chance alone . in other words ,if two lists of genes are selected out of genes randomly , we would like to calculate the probability for genes in common in the two lists , with the lengths of the two lists being and .this overlapping probability is known to follow the hypergeometric distribution .the name hypergeometric distribution was first used in , and was popularized by its role in fisher s exact test . in microarray analysis ,overlapping probability and hypergeometric distribution mainly appear in testing the enrichment of genes in certain functional category . in this application ,the first list is the top - ranking differentially expressed genes , and a gene selection process is involved .the second list is nevertheless given : genes are known to be in a pathway , a member of a protein family , described by a gene ontology term , etc .one asks the question on chance probability for out of selected genes to be in a given pathway , a protein family , and describable by a gene ontology term .fixing or not is the main difference between their application and ours . when a different gene selection criterion is used , the number of genes in the two top - ranking lists of two studies ( and ) will also change . because the stringency of a gene selection criterion is always adjustable and to some extent arbitrary, we would like to examine whether these changes will affect the overlapping probability . at two extreme situations , very small and very large ,it is clear that the number of overlapping genes is and .these values appear 100% of the times , so the corresponding -value is equal to 1 , i.e. , not significant . for intermediate values , it is not clear what the overlapping probability and significance will be , and it is the topic of this abstract .given integers , , , ( and ) ) , the hypergeometric distribution is defined as where is the number of possibilities of choosing objects out of objects : $ ] . when genes are randomly chosen from the total of genes , and another random sampling leads to genes , the probability that the two lists of genes have in common is exactly the hypergeometric probability .this can be proven by the following steps : 1 ) the total number of possible choices for the two lists of genes is .2 ) there are possibilities for choosing the first list .3 ) among the genes in the first list , there are possibilities for choosing genes to be in common with the second list .4 ) in the second list , besides the genes that are in common with the first list , the remaining genes are chosen among the leftover " genes not in the first list , thus possibilities . the is simply ( # 2 # 3 # 4 ) / # 1 . 
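the counting argument above translates directly into code. the snippet below evaluates the overlap probability both from the binomial-coefficient formula and with scipy's hypergeometric distribution; the numbers (total genes, list lengths, overlap) are made up for illustration.

    from math import comb
    from scipy.stats import hypergeom

    def overlap_pmf(m, N, n1, n2):
        # probability that two random lists of lengths n1 and n2 drawn from N genes
        # share exactly m genes: C(n1, m) * C(N - n1, n2 - m) / C(N, n2)
        return comb(n1, m) * comb(N - n1, n2 - m) / comb(N, n2)

    N, n1, n2, m = 12000, 300, 250, 10
    print("direct formula :", overlap_pmf(m, N, n1, n2))
    print("scipy hypergeom:", hypergeom.pmf(m, N, n1, n2))
    print("swapped lists  :", overlap_pmf(m, N, n2, n1))   # symmetric in the two lists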
notethat and can be switched without changing the value .it is usually more interesting to calculate the sum of for s equal or larger than the observed value ( i.e. , the -value ) : in statistical package ( _ http://www.r-project.org/_ ) , there are at least two ways to calculate the overlapping -value .the first is to use the accumulative distribution of hypergeometric distribution , _phyper(m , , , ) _ : -value if , and -value=1 if .the second method is to use the -value from the fisher s exact test on the following 2-by-2 table : the two approaches lead to the identical result . -90in hypergeometric distribution , the number of overlapping elements is an independent variable from the the list lengths . in order to get a rough idea on how changes with the list lengths , we use three real microarray datasets .theese studies concern three autoimmune diseases : rheumatoid arthritis ( ra ) , systemic lupus erythematosus ( sle ) , and psoriatic arthritis ( psa ) , described in details in .the number of controls ( c ) and patients ( p ) in these three datasets are ( c=39 , p=46 ) , ( c=41 , p=81 ) , and ( c=19 , p=19 ) , respectively .the total number of genes / probe - sets is , and the expression levels are log transformed .genes are ranked for their degree of differential expression which can be measured by various tests or models , such as -test and logistic regression . for any pair of studies , with a fixed number of top - ranking gene lists , one can count the number of overlapping genes and the proportion .fig.[fig1 ] ( left column ) shows this proportion as a function of for three study - pairs ( ra - sle , sle - psa , ra - psa ) as well as for two ranking methods ( -test and logistic regression ) .similar overlapping proportion of two random shuffled lists is also indicated in fig.[fig1 ] as crosses . when is small , is more likely to be zero , so the proportion is also zero .when approaches the total number of genes , , all genes are overlapping genes , and the proportion is 1 . fig .[ fig1 ] indeed shows these trends at the two extreme points . in order to check behavior in - between ,we draw a reference line in fig.[fig1 ] ( left column ) that assume a linear relationship between and .most of the points on fig.[fig1 ] are above this line , and the overlapping proportion of two random lists is exactly on this line .to have an idea of the absolute number of common genes more than expected by random chance , fig.[fig1 ] ( right column ) plots the observed subtract the expected as a function of . the maximum difference between the observed and expectedis reached between and .the difference of observed and expected s can be as much as 600800 . 
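the two equivalent calculations described above (the hypergeometric tail and fisher's exact test on the 2-by-2 table) can be reproduced with scipy, and the same few lines make the length dependence discussed here explicit by scanning the list length for a pair of synthetic studies that share a signal and for a pair that do not. the gene scores below are simulated and purely illustrative.

    import numpy as np
    from scipy.stats import hypergeom, fisher_exact

    def overlap_pvalue(m, N, n1, n2):
        # P(overlap >= m) for two random lists of lengths n1 and n2 out of N genes
        return hypergeom.sf(m - 1, N, n1, n2)

    N, n1, n2, m = 12000, 300, 250, 20
    table = [[m, n1 - m], [n2 - m, N - n1 - n2 + m]]
    _, p_fisher = fisher_exact(table, alternative="greater")
    print("hypergeometric tail:", overlap_pvalue(m, N, n1, n2), " fisher exact:", p_fisher)

    # length dependence: overlap p-value of the top-n lists as n grows
    rng = np.random.default_rng(4)
    signal = rng.standard_normal(N)
    score_a = signal + rng.standard_normal(N)     # two studies sharing a true signal
    score_b = signal + rng.standard_normal(N)
    score_r = rng.standard_normal(N)              # an unrelated study
    rank_a, rank_b, rank_r = (np.argsort(-s) for s in (score_a, score_b, score_r))
    for n in (50, 200, 1000, 5000):
        m_ab = len(set(rank_a[:n]) & set(rank_b[:n]))
        m_ar = len(set(rank_a[:n]) & set(rank_r[:n]))
        print(f"n={n:5d}  shared signal: m={m_ab:5d}, p={overlap_pvalue(m_ab, N, n, n):.2e}"
              f"   random: m={m_ar:4d}, p={overlap_pvalue(m_ar, N, n, n):.2f}")

as in the figures discussed above, the p-value for the correlated pair keeps falling as the lists get longer, while for the unrelated pair it stays of order one at every n.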
-90the overlapping -value corresponding to the counts plotted in fig.[fig1 ] was calculated by the hypergeometric distribution , and is shown in fig.[fig2 ] : -axis is -value ) , and -axis is .six lines are shown for three comparisons ( ra - sle , sle - psa , ra - psa ) and two measurements of the differential expression ( -test and logistic regression ) .zero -values are converted to 2.2 which is the minimum value reported by program .fig.[fig2 ] shows that besides the two ends ( and ) where the -value is 1 , the overlapping significance quickly increases with the length of top - ranking gene list ) , and can be extremely significant when a large number of genes are kept in the two lists for comparison .this result confirm our previous suspicion that overlapping significance is a function of the gene list lengths .if the selection of is arbitrary , the overlapping significance thus calculated is also arbitrary .it is not surprising that overlapping significance may keep increasing ( or , -value decreasing ) with the increase of , because -value in general depends on the sample size .when a signal is real ( true positive ) , -value will monotonically decrease with the sample size . on the contrast , if a true signal is absent , the sample size does not affect the conclusion . as can be seen in fig.[fig2 ] , the overlapping significance for two random lists does not really change with . one may argue that it is unlikely to consider top 5000 genes as being differentially expressed , because by a typical selection criterion ( e.g. -value of -test smaller than 0.01 , with or without multiple testing correction ) , the number of genes selected is less than a few hundreds . however , as can be seen in fig.[fig2 ] , even in the range of 10500 , the overlapping -value changes dramatically .this pitfall of gene - list - length dependence of overlapping -values has not been noticed before perhaps because in other application of hypergeometric distribution for calculating overlapping probability , the length of the second list is fixed , for example , in the study of overrepresentation of genes in certain pathway .the number of overlapping genes is then constrained from above by even though the length of the first list , , might increase by relaxing the gene selection criterion . -90there are many genes / probe - sets on the microarray chip that do not register much signal .since these low - expressed genes are lowly expressed in both control and patient samples , they usually do not appear in the top - ranking differentially expressed gene list .fig.[fig3 ] shows -value ) of each gene of 3 -tests sorted by average expression ( log - transformed ) across all 245 samples in 3 datasets ( for both cases and controls ) .although we can not use the average expression level to predict the degree of differential expression , there is a general trend for low - expressed genes to rank lower in the differentially expressed list as seen from fig.[fig3 ] .we removed 7000 genes with lower overall expression across all samples , leaving genes .figs.[fig1 ] and [ fig2 ] are reproduced in fig.[fig4 ] for the dataset with a reduced gene pool . 
as in figs.[fig1 ] and[ fig2 ] , the observed number of overlapping genes is much larger than the expected , though the difference peaks at 400600 , as versus 600 - 800 in fig.[fig1 ] .the overlapping significance as measured by -value ) again quickly moves up with as shown in the last column of fig.[fig4 ] .the qualitative similarity between figs.[fig1 ] , [ fig2 ] and fig.[fig4 ] indicates that the presence of low - expressed genes does not affect our conclusion .using the hypergeometric distribution to calculate the overlapping probability between two top - ranking differentially expressed genes in two studies , we have shown that the overlapping significance depends on the stringency of gene selection criterion , or equivalently , the length of the gene lists .this observation presents a problem when an overlapping -value is reported but the gene selection criterion is not specified . on the other hand , the increase of the overlapping significance with the gene list length can be an indication that the significant overlapping of genes is a true signal .the overlapping probability calculated here assumes the two top - ranking gene lists are selected from the same pool of genes .if the two studies are based on different chip platforms , the two initial gene pools are not identical , though there are perhaps certain common genes .we plan to derive the overlapping distribution for this situation .we also plan to study the probability for genes appearing in three top - ranking gene lists .although a permutation based approach comparing multiple studies was proposed in , there is no analytic formula available .we would like to thank prof .richard friedberg for suggestions .g. finocchiaro , f. mancuso , h. muller , mining published lists of cancer related microarray experiments : identification of a gene expression signature having a critical role in cell - cycle control " , _ bmc bioinf ._ , vol 6(suppl 4 ) , 2003 , s14 . l. tian , s.a .greenberg , s.w .kong , j. altschuler , i.s .kohane , p.j .park , discovering statistically significant pathways in expression profiling studies " , _ proc ._ , vol 102 , 2005 , pp 13544 - 13549batliwalla , e.c .baechler , x. xiao , w. li , s. balasubramaniuan , h. khalili , a. damle , w.a .ortmann , a. perrone , a.b .kantor , m. kern , p.s .gulko , m. kern , r. furie , t.w .behrens , p.k .gregersen , peripheral blood gene expression profiling in rheumatoid arthritis " , _ gene and immunity _ , vol 6 , 2005 , pp 388 - 397 .baechler , f.m .batliwalla , g. karypis , p.m. gaffney , w.a .ortmann , k.j .espe , k.b .shark , w.j .grande , k.m .hughes , v. kapur , p.k .gregersen , t.w .behrens , interferon - inducible gene expression signature in peripheral blood cells of patients with severe lupus " , _ proc ._ , vol 100 , 2003 , pp 2610 - 2615 .batliwalla , w. li , c.t .ritchlin , x. xiao , m. brenner , t. laragione , t. shao , r. durham , s. kemshetti , e. schwarz , r. coe , m. kern , e.c .baechler , t.w .behrens , p.k .gregersen , p.k .gulko , microarray analyses of peripheral blood cells identifies unique expression signature in psoriatic arthritis " , _ mol_ , 2006 , to appear .rhodes , j. yu , k. shanker , n. deshpande , r. varambally , d. ghosh , t. barrette , a. pandey , a.m. chinnaiyan , large - scale meta - analysis of cancer microarray data identifies common transcriptional profiles of neoplastic transformation and progression " , _ proc ._ , vol 101 , 2004 , pp 9309 - 9314 .
when the same set of genes appear in two top ranking gene lists in two different studies , it is often of interest to estimate the probability for this being a chance event . this overlapping probability is well known to follow the hypergeometric distribution . usually , the lengths of top - ranking gene lists are assumed to be fixed , by using a pre - set criterion on , e.g. , -value for the -test . we investigate how overlapping probability changes with the gene selection criterion , or simply , with the length of the top - ranking gene lists . it is concluded that overlapping probability is indeed a function of the gene list length , and its statistical significance should be quoted in the context of gene selection criterion .
recent observations indicate that the universe is accelerating , and it is spatially flat ( ) .approximately of the total energy density of the universe g/ consists of ordinary matter ( ) , and of the energy density corresponds to dark energy ( ) .there are three basic scenarios describing the evolution of the universe filled by dark energy .\i ) from a purely phenomenological point of view , the simplest possibility is that the dark energy is represented by a positive vacuum energy ( cosmological constant ) .if this is the case , the universe will reach de sitter ( ds ) regime and expand exponentially for an indefinitely long time , .this possibility seems much more natural than the other two possibility to be discussed below .\ii ) it may also happen that the dark energy is the energy of a slowly changing scalar field with equation of state , .in most of the models of dark energy it is assumed that the cosmological constant is equal to zero , and the potential energy of the scalar field driving the present stage of acceleration , slowly decreases and eventually vanishes as the field rolls to , see e.g. . in this case , after a transient ds - like stage , the speed of expansion of the universe decreases , and the universe reaches minkowski regime .\iii ) it is also possible that has a minimum at , or that it does not have any minimum at all and the field is free to fall to . in this case _ the universe eventually collapses , even if it is flat _the simplest way to understand this unusual effect is to analyse the friedmann equation ( in units ) .the positive energy density of a normal matter , as well as the positive kinetic energy density of the scalar field , tend to decrease in an expanding universe . at some moment, the total energy density , including the negative contribution , vanishes .once it happens , the universe , in accordance with the equation , stops expanding and enters the stage of irreversible collapse . the last possibility fora while did not attract much attention .there was no specific reason to expect that the present regime of acceleration is going to end , and there was even less reason to believe that the universe is going to collapse any time soon .unfortunately , despite many attempts , we were unable to obtain a good theoretical description of the models i ) and ii ) in the context of m - theory .we will discuss this issue in sect .2 . meanwhile , in it was found that one can describe the present state of acceleration of the universe in a broad class of models based on n=8 extended supergravity ( the theory closely related to m - theory ) .however , the universe described by these models typically collapses within the time comparable to its present age billion years . in the beginning , this seemed to be a model - specific result that should be taken seriously only if one can construct fully realistic models of elementary particles based on extended supergravity .however , this result is valid not only in for the models based on extended supergravity but for many phenomenological models of dark energy based on the minimal n=1 supergravity . 
in this paperwe will argue that this result is even more general .it can be obtained in almost every model of dark energy , either based on supergravity or not , if one takes into account the possibility that the effective potential of the field may have a minimum at or may be unbounded from below .we will first review the possibilities to describe the accelerating universe starting with m / string theory and supergravity .then we will describe our general argument .the standard approach to the description of our world in m / string - theory is based on the assumption that our space - time is 11 or 10 dimensional , but 7 or 6 of these dimensions are compactified .the finitness of the volume of the compactified space is required so that the original d=11 or d=10 theory can be related to the resulting d=4 theory .one can describe the scale of compactification for example , in string theory by introducing scalar moduli field , which appears as a coefficient in front of the potential energy in 4d .( we use the units , where . ) here is the value of the potential at which other scalar fields of the higher - dimensional theory are stabilized .however , the mechanism of stabilization of the compactified space is still unknown . in the absence of such mechanism, the term leads to the runaway behaviour , which implies _decompactification_. during this process , the energy density falls down very quickly .a similar result is valid if we consider d=10 string theory as a result of compactification of d=11 supergravity .the bottom line is : _ during the cosmological evolution in d=4 the runaway moduli represent decompactification of all internal dimensions of m - theory_. the scale factor of the universe in the theories with exponential potentials of the type of grows as . inorder to describe acceleration of the universe in such theories one would need to have .in the theory with the potential the universe can only decelerate . in application to string cosmology these observationsimply that until we learn how to stabilize the compactified space , we can not describe the accelerating universe approaching ds regime , as well as the accelerating universe with the energy density slowly approaching zero .one way to avoid the problems discussed above is to consider the models with a non - compact internal space .one may start with d=11 or d=10 supergravity with internal space with an infinite volume and relate it to n=8 supergravity in d=4 .this is called ` non - compactification . ' in this approach the connection between the original d=10 or d=11 theory and our d=4 world is more complicated than in the usual case of dimensional reduction . 
however , one can study these theories directly in d=4 .these theories are interesting because they have maximal amount of supersymmetry , related to d=11 and/or d=10 supergravities with non - compact internal spaces .the number of such models successfully describing an accelerated universe is very limited , due to the maximal amount of supersymmetries .some of these theories have ds solutions and can describe dark energy .these ds solutions correspond to the extrema of the effective potentials for some scalar fields .an interesting and very unusual feature of these scalars in all known theories with is that their mass squared is quantized in units of the hubble constant corresponding to ds solutions : , where are some integers of the order 1 .this property was first observed in for a large class of extended supergravities with unstable ds vacua , and confirmed and discussed in detail more recently in with respect to a new class of gauged supergravities with stable ds vacua .the meaning of this result can be explained in the following way .the simplest potential for a scalar field in this theory has the form [ simplepot0 ] v ( ) = ( 2 - 2 ) . usually the potential near its extremum can be represented as , where and are two free independent parameters . however , in extended supergravities with one always has , where are integers ( we are using units ) . taking into account that in ds space the hubble constant is given by ,one has , for , .in particular , in all known versions of supergravity ds vacuum corresponds to an unstable maximum , , _ i.e. _ at one has [ simplepot ] v ( ) = ( 1 -^2 ) = 3h^2(1 -^2 ) .one can easily verify that the simplest potential ( [ simplepot0 ] ) satisfies this rule .the main property of this potential is that .one can show that a homogeneous field with in the universe with the hubble constant grows as follows : , where . consequently , in the universe with the energy density dominated by it takes time until the scalar field rolls down from to the region , where becomes negative .once it happens , the universe rapidly collapses .= 5.3 cm note , that the present age of the universe is approximately equal to , and the total time of the development of the instability leading to the global collapse of the universe is given by , which is also of the same order as , unless is exponentially small .this explains the main result of ref . : the universe described by this class of theories is going to collapse within the time comparable to its present age billion years , see fig .[ scalefactorcoll ] .there are also 3 different cosmological models of dark energy based on n=8 supergravity , with the following potentials of two fields , and .we will list them here , leaving a detailed description of their cosmological implications for the future publication [ 332 ] v ( , ) = e^- ( 3- ( 2 ) ) .this is an improved version of townsend s model of m - theory quintessence .townsend used this potential at and found the exponential potential which may describe the current acceleration with .the complete form of the potential shows that the full potential of this theory is unbounded from below and unstable with respect to the generation of the field .the speed of the development of the instability is determined by the curvature of the potential in the -direction , , i.e. 
.this allows for the existence of a stage of accelerated expansion , but eventually the universe collapses , just like in the theory ( [ simplepot0 ] ) discussed above .the second model is [ 332a ] v ( ) = e^- ( 24 e^- -8 e^2- 3 e^-8 ) .the part of the potential has a ds maximum at .it may describe the current acceleration during the slow growth of the field : . the instability with respect to the field eventually develops and the universe collapses . the last model of this type has a saddle point ds solution at [ 332b ] v ( ) = ( 2- 2 ) . at the point this model reduces to the model ( [ simplepot0 ] ) which describes dark energy and eventual collapse of the universe .a lot of work should be done to incorporate usual matter fields and construct realistic cosmology in n=8 supergravity .however , it is quite encouraging that there is a class of models based on n=8 supergravity which can describe the present stage of acceleration of the universe .all of these models share the same property : at some moment expansion of the universe stops and the universe collapses within time comparable to its present age billion years .the main reason for the coincidence of the two different time scales , the present age of the universe and the time until the big crunch , is the relation . but this relation appears not only in the extended supergravity .it is often valid for the moduli fields in supergravity . in particular , it is valid in the simplest polnyi - type toy model for dark energy in supergravity .if one does not fine - tune the value of the cosmological constant in this model to be equal to zero , one has two equally compelling options : and in the minimum of the effective potential . in the first case, the universe enters ds regime of eternal expansion . in the second case, the universe collapses .the time until the global collapse depends on the parameters of the model and the initial conditions , but typically it is of the same order as billion years , just as in the extended supergravity .another interesting model is the axion quintessence . in the m - theory motivatedversion of this model proposed in one has , where the value of constant depends on the details of the model . for , one finds . according to ,this version of the axion quintessence model can successfully describe the present stage of acceleration of the universe , but , just like the models , it leads to a global collapse of the universe in the future within the typical time billion years .in fact , the crucial relation is valid in most of the dark energy models .it is the standard inflationary slow - roll condition , which should be valid at the present stage of the late - time inflation / acceleration of the universe .sometimes this slow - roll condition can be violated , but typically one can not have a prolonged stage of acceleration of the universe for . 
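the time scale for the development of the tachyonic instability follows from the linearized equation of motion near the ds maximum. taking the curvature of the potential from the expression given above, so that the mass squared equals minus six h squared, the short python sketch below computes the growing-mode rate and the number of hubble times needed for an assumed small initial displacement of the field to reach the region where the potential turns negative; the initial displacements are illustrative assumptions.

    import numpy as np

    # linear growth of the field about the ds maximum of v = 3 h^2 (1 - phi^2), i.e. m^2 = -6 h^2:
    #   phi'' + 3 h phi' - 6 h^2 phi = 0  ->  growing mode  phi ~ exp(lambda t),
    #   with  lambda / h = (-3 + sqrt(9 + 24)) / 2
    lam_over_H = 0.5 * (-3.0 + np.sqrt(33.0))
    print("growing-mode rate: lambda =", round(lam_over_H, 3), "H")

    # hubble times needed for an assumed initial displacement phi0 to reach phi ~ 1,
    # where the potential becomes negative and the collapse sets in
    for phi0 in (1e-1, 1e-2, 1e-3):
        print(f"phi0 = {phi0:.0e}:  t ~ {np.log(1.0 / phi0) / lam_over_H:.1f} / H")

unless the initial displacement is exponentially small, this is a few hubble times, i.e. of the order of the present age of the universe, which is the coincidence of time scales discussed next.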
as a result , one can give a simple argument suggesting that the coincidence of the two different time scales , the present age of the universe and the time remaining until the big crunch , is a generic property of many models of dark energy .let us make the simplest assumption that the expansion of the universe at can be approximately described by the simple power - law equation where is some constant .the hubble constant in the universe with is given by , which means that the total energy density is equal to .the parameter is determined by the continuity condition for : the last part of this equation is the observed relation between the present value of and the age of the universe in the simplest model .this gives .the energy density will become times smaller than now at the time .in particular , the energy density will become 2 times smaller at the time from now , and it will become 9 times smaller at .how large is this time interval ?acceleration of the universe implies that .taking for definiteness , one finds that the density of the universe will decrease 2 times at the moment billion years from now , and it will drop 9 times billion years from now .now let us take any model of dark energy with and add to the scalar potential a tiny negative cosmological constant with .the evolution of the universe up to the present moment will not change significantly .one can make this model even better by multiplying by some factor to ensure that the present value of remains equal even after we add a negative constant to .this model will remain a viable model of dark energy .however , after some time the value of will drop down more than n times , the total energy density of the universe will vanish , the universe will stop expanding , and soon after that it will collapse . 
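the time intervals quoted in this argument follow from the scaling of the energy density with time: since the hubble constant falls like one over t, the density falls like one over t squared, so an n-fold drop happens at t = t0 sqrt(n). taking the present age to be roughly 13.7 billion years (an assumed round value) gives:

    t0 = 13.7                     # assumed present age of the universe in gyr
    for n in (2, 9):
        t_n = t0 * n ** 0.5       # rho ~ t**-2, so an n-fold drop occurs at t = t0 * sqrt(n)
        print(f"density {n}x smaller at t = {t_n:.1f} gyr, i.e. {t_n - t0:.1f} gyr from now")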
if is not too large , the universe collapses at the time . one could think that the argument given above works only for the marginal situation , when with . however , the final result is very general . our procedure of modification of the potential ( subtraction of a constant and a subsequent compensation of the decrease of via the multiplication of by a constant ) works until the moment when the theory no longer represents dark energy because the potential becomes too steep . this happens for any theory of dark energy , even if the original potential was extremely flat ( which corresponds to ) . once it happens , the field rolls down within the time . the rolling field rapidly approaches the region of negative , which typically leads to the collapse of the universe within the time . this suggests that many models of dark energy considered in the literature have viable counterparts that can be obtained from the original models by adding a negative cosmological constant . unless the absolute value of this extra term is many orders of magnitude smaller than , these models will describe the universe collapsing within a time comparable to the present age of the universe . as an example illustrating the general argument given in the previous section , let us consider dark energy described by the scalar field with an exponential potential we already mentioned that this theory describes an accelerating universe for . however , in the realistic situation this condition can be slightly relaxed . one should take into account that the post - inflationary universe rapidly expanded , being dominated by hot matter and then by cold dark matter . this expansion does not allow the field to move until the energy density of matter becomes sufficiently small . therefore in the beginning the kinetic energy of the scalar field is very small , so it has the same equation of state as the cosmological constant . then the field starts moving slowly , losing its energy at a much slower pace than cdm . eventually , the universe enters the stage with ( the present time ) , and then continues to grow . as a result , one can have and at present for . in our investigation we will assume , without losing generality , that the initial value of the field was ( one can always rescale the field and the potential ) . then one should find such a value of the parameter that the universe enters the stage with at the same time when its hubble constant acquires its present value . this requires fine - tuning , but this is the same fine - tuning that hampers all models of dark energy . now we will consider a class of potentials of a more general type , containing a constant negative contribution , as suggested in the previous section : here is some positive constant . the constant should be found anew for each new value of , just as we did for the case described above . this can be done , and the cosmological evolution of this model can be easily studied using the methods described in . here we will only present the results of our investigation of a model with . figure [ scalefactorcoll2 ] shows the evolution of the scale factor of the universe for 4 different values of parameter : and .
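a minimal numerical sketch of the kind of evolution shown in that figure is given below: a single scalar field in a flat universe, with an exponential potential shifted down by a constant. the potential normalization, the slope, the size of the negative shift and the neglect of matter and radiation are all simplifying assumptions made only for illustration (the evolution discussed in the text includes the matter-dominated epoch), in units where 8 pi g = 1.

    import numpy as np

    # flat frw with a single scalar field, in units 8*pi*g = 1:
    #   h^2 = (phidot^2 / 2 + v) / 3,   phi'' = -3 h phi' - v'(phi),   h' = -phidot^2 / 2
    # illustrative potential with a constant negative part: v = v0 * (exp(-lam * phi) - delta)
    V0, lam, delta = 1.0, 1.0, 0.3
    V  = lambda phi: V0 * (np.exp(-lam * phi) - delta)
    dV = lambda phi: -lam * V0 * np.exp(-lam * phi)

    phi, phid = 0.0, 0.0
    H = np.sqrt(V(phi) / 3.0)          # friedmann constraint at t = 0 (field initially at rest)
    a, t, dt = 1.0, 0.0, 1e-3
    t_turn, a_max = None, 1.0

    while H > -1.0 and t < 400.0:
        phidd = -3.0 * H * phid - dV(phi)
        H   += -0.5 * phid ** 2 * dt   # keeps h consistent with the friedmann constraint
        phi += phid * dt
        phid += phidd * dt
        a   *= np.exp(H * dt)
        t   += dt
        if H <= 0.0 and t_turn is None:
            t_turn, a_max = t, a

    if t_turn is None:
        print("no turnaround reached by t =", round(t, 1))
    else:
        print(f"expansion stops at t = {t_turn:.1f} (a_max = {a_max:.2f}); "
              f"by t = {t:.1f} the universe is contracting (h = {H:.2f})")

with a positive shift removed (delta = 0) the same code gives an eternally expanding universe, which mirrors the contrast between the curves described next.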
for all of these cases one can find such parameters that the value of the hubble constant at coincides with its present value ( the curves have the same derivative at in figure [ scalefactorcoll2 ] ) .= 5.3 cm all of the models with with the potentials shown in figure [ pot ] can represent dark energy with at the present moment ( the point in figure [ omegacoll ] ) .however , the largest value of in the case is , and the equation of state in this case blows up near , see figure [ wcoll ] .= 5.7 cm as we see , the universe with ( thick red line in figures [ pot]-[acc ] ) always continues its accelerated expansion .however , in all other cases the universe collapses within the time ranging from billion years ( for ) to billion years ( for ) , in agreement with the argument given in the previous section .the universe with never accelerates , so this model is ruled out by the existing observations .thus , for every model successfully describing dark energy there exist many other models with which provide an equally good description of the present stage of acceleration , but lead to the global collapse of the universe within the next years .similar results can be obtained for many other models of dark energy , e.g. for the theories with all possible values of , or for the models with the inverse power law potentials .recent discovery of acceleration of the universe is one of the major challenges for the modern theory of fundamental interactions .after many attempts to explain why the cosmological constant must be zero , the theorists switched to the new paradigm and started trying to explain why it should be positive and why , consequently , the universe should expand forever .the first attempts to do so in the context of m / string theory and extended supergravity revealed many problems described , e.g. , in . then we learned that one can describe acceleration of the universe in extended supergravity , but in all models based on extended supergravity , with the exception of the n=2 model of , the regime of acceleration is unstable .typically it ends by a global collapse of the universe within the time comparable with the present age of the universe , billion years . in this paperwe have shown that the possibility of a global collapse is not specific to supergravity but is , in fact , quite generic .for every model of dark energy describing eternally expanding universe one can construct many closely related models which describe the present stage of acceleration of the universe followed by its global collapse .this does not mean that we are making a doomsday prediction .none of the existing theoretical models of dark energy look particularly natural and attractive .it may happen that eventually we will find good theoretical models describing an eternally accelerating universe , or conclude , on the basis of anthropic considerations , that dark energy should change in time extremely slowly , so that the collapse will occur exponentially far away in the future .we hope to return to this question in the future publications .however , in the absence of a compelling theory of dark energy one may also consider a more humble approach and try to compare predictions of various models of dark energy with observations , see e.g. 
.if the universe is going to collapse , then in the beginning of this process the speed of expansion of the universe should gradually slow down , see fig .this is accompanied by the rapid growth of the parameter at small , as shown in fig .[ wcoll ] .we find it quite significant that some of the models predicting global collapse can be already ruled out by the existing observational data .for example , all models predicting the global collapse within the next 18 billion years , do not describe the present stage of acceleration , and therefore contradict the recent cosmological observations .thus , even though the observational data can not rule out the general possibility of the global collapse in the distant future , they can help us to put strong constraints on the time of the possible big crunch .it is a pleasure to thank r. bond , l. kofman , j. kratochvil , e. linder , s. prokushkin and m. shmakova , for useful discussions .this work was supported by nsf grant phy-9870115 .the work by a.l . was also supported by the templeton foundation grant no . 938-cos273 .s. perlmutter _ et al ._ , `` measurements of omega and lambda from 42 high - redshift supernovae , '' astrophys.j . * 517 * , 565 ( 1999 ) [ astro - ph/9812133 ] , see also http://snap.lbl.gov ; a. g. riess _ et al ._ , `` observational evidence from supernovae for an accelerating universe and a cosmological constant , '' astron .j. * 116 * , 1009 ( 1998 ) [ astro - ph/9805201 ] .j. l. sievers _ et al . _ , `` cosmological parameters from cosmic background imager observations and comparisons with boomerang , dasi , and maxima , '' astro - ph/0205387 ; j. r. bond _ et al . _ , `` the cosmic microwave background and inflation , then and now , '' arxiv : astro - ph/0210007 . a. d. dolgov , `` an attempt to get rid of the cosmological constant , '' in : _ the very early universe _ ,gibbons , s.w . hawking and s.siklos ( cambridge university press 1983 ) , pp . 449 - 458 ; c. wetterich , `` cosmology and the fate of dilatation symmetry , '' nucl . phys .b * 302 * , 668 ( 1988 ) ; p. g. ferreira and m. joyce , `` cosmology with a primordial scaling field , '' phys .d * 58 * , 023503 ( 1998 ) [ arxiv : astro - ph/9711102 ] ; b. ratra and p. j. peebles , `` cosmological consequences of a rolling homogeneous scalar field , '' phys .d * 37 * , 3406 ( 1988 ) ; i. zlatev , l. m. wang and p. j. steinhardt , `` quintessence , cosmic coincidence , and the cosmological constant , '' phys .lett . * 82 * , 896 ( 1999 ) [ arxiv : astro - ph/9807002 ] .linde , `` inflation and quantum cosmology , '' print-86 - 0888 ( june 1986 ) , in : _ three hundred years of gravitation _ , ( eds . : hawking , s.w . andisrael , w. , cambridge univ . press , 1987 ) , 604 - 630 ; l. m. krauss and m. s. turner , `` geometry and destiny , '' gen.rel . grav .* 31 * , 1453 ( 1999 ) [ arxiv : astro - ph/9904020 ] ; a. a. starobinsky , `` future and origin of our universe : modern view , '' grav .cosmol .* 6 * , 157 ( 2000 ) [ arxiv : astro - ph/9912054 ] ; g. n. felder , a. frolov , l. kofman and a. linde , `` cosmology with negative potentials , '' phys .d * 66 * , 023507 ( 2002 ) [ arxiv : hep - th/0202017 ] .r. kallosh , a. linde , s. prokushkin and m. shmakova , `` supergravity , dark energy and the fate of the universe , '' phys .d * 66 * , 123503 ( 2002 ) [ arxiv : hep - th/0208156 ] .r. kallosh and a. linde , `` m - theory , cosmological constant and anthropic principle , '' arxiv : hep - th/0208157 . c. m. hull and n. p. 
warner ,`` noncompact gaugings from higher dimensions , '' class . quant .grav . * 5 * , 1517 ( 1988 ) .p. fre , m. trigiante and a. van proeyen , `` stable de sitter vacua from n = 2 supergravity , '' class .* 19 * , 4167 ( 2002 ) [ arxiv : hep - th/0205119 ] .m. dine , w. fischler and d. nemeschansky , `` solution of the entropy crisis of supersymmetric theories , '' phys .b * 136 * , 169 ( 1984 ) ; g. d. coughlan , r. holman , p. ramond and g. g. ross , `` supersymmetry and the entropy crisis , '' phys.lett .b * 140 * , 44 ( 1984 ) ; m. dine , l. randall and s. thomas , `` baryogenesis from flat directions of the supersymmetric standard model , '' nucl .b * 458 * , 291 ( 1996 ) [ arxiv : hep - ph/9507453 ] ; a. d. linde , `` relaxing the cosmological moduli problem , '' phys .d * 53 * , 4129 ( 1996 ) [ arxiv : hep - th/9601083 ] .j. a. frieman , c. t. hill , a. stebbins and i. waga , `` cosmology with ultralight pseudo nambu - goldstone bosons , '' phys .lett . * 75 * , 2077 ( 1995 ) [ arxiv : astro - ph/9505060 ] ; i. waga and j. a. frieman , `` new constraints from high redshift supernovae and lensing statistics upon scalar field cosmologies , '' phys .d * 62 * , 043521 ( 2000 ) [ arxiv : astro - ph/0001354 ]. k. choi , `` string or m theory axion as a quintessence , '' phys.rev .d * 62 * , 043509 ( 2000 ) [ arxiv : hep - ph/9902292 ] . g. w. gibbons , `` apects of supergravity theories , '' in _ supersymmetry , supergravity and related topics _ , eds. f. del aguila , j.a .de azcrraga and l.e .ibaez ( world scientific 1985 ) pp . 346 - 351 ; j. maldacena and c. nuez , `` supergravity description of field theories on curved manifolds and a no - go theorem , '' int .a16 * ( 2001 ) 822 .s. hellerman , n. kaloper and l. susskind , `` string theory and quintessence , '' jhep * 0106 * , 003 ( 2001 ) [ arxiv : hep - th/0104180 ] ; w. fischler , a. kashani - poor , r. mcnees and s. paban , `` the acceleration of the universe , a challenge for string theory , '' jhep * 0107 * , 003 ( 2001 ) [ arxiv : hep - th/0104181 ] .j. garriga and a. vilenkin , `` testable anthropic predictions for dark energy , '' arxiv : astro - ph/0210358 .p. s. corasaniti and e. j. copeland , `` constraining the quintessence equation of state with snia data and cmb peaks , '' phys .d * 65 * , 043004 ( 2002 ) [ arxiv : astro - ph/0107378 ] ; j. weller and a. albrecht , `` future supernovae observations as a probe of dark energy , '' phys .d * 65 * , 103512 ( 2002 ) [ arxiv : astro - ph/0106079 ] ; i. maor , r. brustein , j. mcmahon and p. j. steinhardt , `` measuring the equation - of - state of the universe : pitfalls and prospects , '' phys .d * 65 * , 123003 ( 2002 ) [ arxiv : astro - ph/0112526 ] ; w. hu , `` dark energy and matter evolution from lensing tomography , '' arxiv : astro - ph/0208093 ; j. a. frieman , d. huterer , e. v. linder and m. s. turner , `` probing dark energy with supernovae : exploiting complementarity with the cosmic microwave background , '' arxiv : astro - ph/0208100 ; j. j. mohr , b. oshea , a. e. evrard , j. bialek and z. haiman , `` studying dark energy with galaxy cluster surveys , '' arxiv : astro - ph/0208102 .
it is often assumed that in the course of the evolution of the universe , the dark energy either vanishes or becomes a positive constant . however , recently it was shown that in many models based on supergravity , the dark energy eventually becomes negative and the universe collapses within a time comparable to the present age of the universe . we will show that this conclusion is not limited to models based on supergravity : in many models describing the present stage of acceleration of the universe , the dark energy eventually becomes negative , which triggers the collapse of the universe within the time years . the theories of this type have certain distinguishing features that can be tested by cosmological observations .
in a recent article ` coherent lagrangian vortices : the black holes of turbulence ' the authors draw parallels between certain fluid dynamical configurations and certain properties of black hole geometry .the goal of the article is to characterize vortex structures which remain coherent over long times .some such vortices ( known as the agulhas rings ) are observed in the south atlantic and are believed to be relevant for the long range transport of water with relatively high salinity and temperature , and possibly also as moving oases for the food chain ( and references therein ) .specifically , the authors of describe how to associate a 1 + 1 dimensional _ lorentzian _ effective metric tensor to spatial ( constant time ) snapshots of a fluid flow .one claim made in is that certain closed spatial curves ( at fixed time ) in the fluid flow are analogous to the ` photon spheres ' that exist around black holes .a photon sphere necessarily occurs in a black hole space - time and here we draw intuition ( for a subsequent two dimensional treatment ) from the simplest black hole , which is described ( in four dimensions ) by the schwarzschild metric ( see below ) . while we do not take issue with the interesting fluid - dynamical subject matter of , we wish to point out a number of conceptual difficulties associated with the geometric interpretation of the results and hopefully elucidate some of the ( perhaps counter - intuitive ) features of lorentzian geometry and in particular of black hole geometries .specifically , we make the following clarifications : * the circular photon orbit ( associated with the photon sphere ) around a schwarzschild black hole is _ not _ a closed null geodesic . * closed null curves in general relativity are extremely pathological and probably forbidden by reasonable physics arguments ( for example , globally hyperbolic spacetimes do not admit closed null curves ) . * the existence of a photon sphere is neither a necessary nor a sufficient condition for the existence of a black hole . * the circular photon orbit is _ not _ circular - its 3-dimensional _ projection onto the fixed time slices of the schwarzschild geometry in spherical polar coordinates _ is . * a singularity in the metric does not imply a singularity of the geometry .a singularity of a metric coefficient in one coordinate system is a weak condition and does not indicate the presence of a real physical singularity . below we will expand on these points . in sec .[ s:2 ] we explore the construction of the ` fluid - metric ' as defined in and its geometry . in sec .[ s:3 ] we review some relevant aspects of the schwarzschild geometry , in particular the circular photon orbit and the photon sphere , while in sec . [ s:4 ] we present some examples of fluid flows which give rise to interesting lorentzian geometries and which serve to illustrate our observations .in this section we briefly review the main theoretical result of ref .
by deriving the -dimensional lorentzian metric associated with certain fluid configurations .the primary mathematical result of the article is the identification and characterisation of certain closed curves along which the fluid flow is ` coherent ' : closed curves of material flow which remain closely associated .such curves are shown in to satisfy a differential equation which is interpreted as an integrability condition of a vector field which is null with respect to an auxiliary metric tensor .that is , one discovers a null vector field and subsequently integrates it to yield the integral curves which are null curves .we shall introduce some concepts from continuum mechanics to this end .consider a sufficiently smooth fluid flow velocity ( for example , could be a solution of the navier - stokes equations , but this is not required ) .then a fluid element propagates in time along the flow lines with a world line which satisfies = ( t , ) .[ e : ode ] the flow given by represents a map from to ( we restrict attention to a 2-dimensional fluid flow ) where the reference configuration given by the initial condition gets mapped to the solution to at time _ t:_0 ( t ) .here we have denoted the coordinates in the plane at time with a capital letter for clarity while reserving the lower case variable names for the initial un - deformed coordinates . in general , the flow will deform the lines of constant initial coordinate label , say . or . , where are cartesian coordinates , so that lines of constant initial condition or will , at some fixed time , be deformed to some curved lines in the plane .we may characterise such a deformation with the so - called ` deformation gradient ' mixed two - point tensor whose components with respect to an initial and final coordinate basis are given by f^i _ j:= .note the mixed - coordinate definition ( two - point tensor structure ) by observing that these components are given with respect to the two coordinate bases , initial and final in a specific way .we have , in coordinate - independent notation , = f^i_j ( dx^j ) . for example , and for later reference , in a polar coordinate basis we have f^i_j= & + + & , which is written in the more convenient orthonormal polar basis and ( and ) as f^a_b= & + + r(t ) & .the orthonormal basis is preferable to the polar coordinate basis due to the possibility to raise and lower indices with the identity matrix , which coincides with the matrix of metric coefficients in such a basis . the `right cauchy - green ' deformation tensor carries the same deformation information modulo transformations which merely shift or rotate an initial configuration without genuine deformation and is defined as c^i_j:=f_k^i f^k_j = g_km g^in , where and are the metric coefficients in the two coordinate bases , respectively . in orthonormal baseswe have c_ab=_c .we will be concerned with the eigenvectors and the eigenvalues of .note that as well as the eigenvectors and eigenvalues depend on the initial point as well as on the time at which we calculate the strains .being the ` square ' of another tensor it is simple to show that the eigenvalues of satisfy 0_1(_0,t)_2(_0,t ) _ 0 , t and that form an orthonormal basis for .let and define the so - called ` generalised green - lagrange tensor ' in an orthonormal frame by e_ab:=. [ e : e ] this object can be thought of as acting as a time - dependent bilinear form on the initial conditions space and depends on the constant . 
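as a concrete illustration of the objects just defined , the following sketch computes the deformation gradient and the right cauchy - green tensor numerically for a sample two - dimensional flow map ( the point - vortex map used later in the text ) ; the finite - difference construction , the chosen evaluation point , the strength and the evaluation time are illustrative assumptions .

```python
import numpy as np

def flow_map(x, y, t, k=1.0):
    """Illustrative area-preserving flow: a point-vortex ('vortex line') map."""
    r, th = np.hypot(x, y), np.arctan2(y, x)
    th = th + k * t / r**2
    return np.array([r * np.cos(th), r * np.sin(th)])

def deformation_gradient(x, y, t, h=1e-6):
    """F_ij = dX_i/dx_j by central differences (Cartesian components)."""
    F = np.zeros((2, 2))
    for j, dx in enumerate(np.eye(2) * h):
        F[:, j] = (flow_map(x + dx[0], y + dx[1], t)
                   - flow_map(x - dx[0], y - dx[1], t)) / (2 * h)
    return F

x0, y0, t = 1.3, 0.4, 2.0
F = deformation_gradient(x0, y0, t)
C = F.T @ F                          # right Cauchy-Green tensor
lam1, lam2 = np.linalg.eigvalsh(C)   # 0 < lam1 <= lam2
print("eigenvalues:", lam1, lam2)
print("det C (should be ~1 for an area-preserving flow):", np.linalg.det(C))
```

for an area - preserving flow the product of the two eigenvalues is unity , which the sketch confirms and which is consistent with the ordering of the eigenvalues quoted above .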
in the ` moving orthonormal frame ' defined by the eigenvectors of we can write e_ab= _ 1- & 0 + 0 & _ 2- .[ e : eeigen ] the authors of define an effective metric tensor induced by the action of in on the space of initial conditions which , remarkably , can have lorentzian signature ( although the tensor is of the wrong type to be a metric , we gloss over this and talk about as ` being ' the metric tensor ) . in principle, this fact allows one to draw parallels between this structure and a black hole metric with an associated photon sphere . the first thing to notice about the tensor is that its signature can change from point to point and over time depending on the relative magnitude of the eigenvalues and is only lorentzian at time if for all points . assuming this ` lorentzian condition ' on the eigenvalues at some time , we can construct the two independent ` e - null ' vector fields in the orthonormal eigen - basis of as where we have ( arbitrarily but without loss of generality ) normalised the vector fields to have unit norm in the background euclidean metric .the integral curves of these vector fields are null curves with respect to the metric .it is shown in that these null curves are curves for which the flow up to time _ uniformly _ scales the tangent vectors to the curve .that is , given an -null curve which gets mapped under the flow to the curve , we have ||_t()(s)||^2=||(s)||^2 uniformly for all parametrizing the curve and where the prime denotes taking the tangent vector at the given point on the curve . such closed curves which possess the uniform stretching property are intuitively ` invariant curves ' of the flow ; their fluid dynamical relevance is discussed in where the outermost curve in a family of such curves is said to define the boundary of a ` coherent material vorticex ' in 2-dimensional flow .the authors of refer to a photon sphere in an analogy with vortices through the lorentzian metric .a photon sphere is associated with a black hole spacetime metric . in general relativitythere exists a vast array of black hole and black hole - like solutions with varying degrees of physical sensibility . in the vacuum ,spherically symmetric case in -dimensions , there is a uniqueness theorem ( birkoff s theorem ) which points us to the static schwarzschild black hole metric [ schwarzschild ] ds^2=-(1-)dt^2 + ( 1-)^-1dr^2+r^2 d_(2)^2 here written in standard spherical polar coordinates , where is the metric on the unit 2-sphere , is newton s constant , is the black hole mass , and units in which the speed of light is unity are used , as is common in relativity .the metric ( [ schwarzschild ] ) describes the simplest black hole spacetime in dimensions .a crucial feature of this metric is the existence of an horizon a lower - dimensional region on which the metric is degenerate and at which time and space ` swap their roles ' . without getting into technical details ,a precise statement is that the norm of the timelike killing vector field associated with the time symmetry changes sign , becoming a space - like vector field at a co - dimension 2 hyper - surface generated by null geodesics of the metric ( [ schwarzschild ] ) and known as the _ event horizon_. the event horizon is the key structure which , if present in a gravitational configuration , indicates the presence of a black hole . 
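returning briefly to the null directions introduced above : assuming the diagonal eigenbasis form of the tensor written there , with an overall factor of 1/2 ( an assumption here , since the normalisation is not reproduced in the extracted text ) , the two unit - norm null directions can be written down explicitly and checked numerically , as in the short sketch below with illustrative eigenvalues .

```python
import numpy as np

lam1, lam2 = 0.4, 2.5   # strain eigenvalues (illustrative), with lam1 < lam0**2 < lam2
lam0 = 1.0              # stretching parameter; this inequality gives Lorentzian signature

E = 0.5 * np.diag([lam1 - lam0**2, lam2 - lam0**2])   # assumed eigenbasis form of E

a = np.sqrt((lam2 - lam0**2) / (lam2 - lam1))
b = np.sqrt((lam0**2 - lam1) / (lam2 - lam1))
for sign in (+1, -1):
    eta = np.array([a, sign * b])                     # unit Euclidean norm by construction
    print("E(eta, eta) =", eta @ E @ eta, " |eta| =", np.linalg.norm(eta))
```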
in the above schwarzschild case ,the time - like killing vector is simply the time direction , with norm ||k_t||^2=-(1- ) which changes sign at , a value of the radial coordinate known as the _ schwarzschild radius _ or black hole radius .this is the radius of the event horizon . without an event horizon, a spacetime can not be sensibly said to contain a black hole . in a careful relativistic treatment , one can show that the effective potential for ` free ' ( geodesic ) radial motion in the schwarzschild geometry is given by the effective 1-dimensional potential [ effectivepotential ] v(r)=- + - , where for massive particles and for massless particles , and is the angular momentum represents the angular momentum per unit mass .] of the particle .the first two terms on the right hand side of eq .( [ effectivepotential ] ) are identical to the two terms which appear in the effective potential of the newtonian treatment , while the third one is a correction due to general relativity .circular orbits exist at radii for which and we have r_=(l ) . in the massless limit , there exists a single orbit at , which is unstable , while in the massive case there exist two orbits , one stable and the other ( at smaller radius ) unstable .in contrast , in newtonian gravity ( in which the term proportional to is absent in the effective potential ) , there exists a single stable circular orbit for massive particles and no circular orbit for massless particles .it is important to note that this closed circular orbit is _ not _ a closed null geodesic , such curves being highly pathological and most likely unphysical in general relativity .indeed , it is not even a geodesic nor is it a null curve .instead it is the spatial projection of an open and infinitely extended null geodesic curve in space and time as shown in fig .[ f : helix ] . ] the tangent vector to the null geodesic is given by = ( 3e,0,0 , ) in the spherical coordinate basis whose integral curve is the helical null geodesic in spacetime x^()=(3e , 0,0 ) .a photon sphere is a sphere of radius equal to that of the closed circular photon orbit and coincides with a 2-sphere of symmetry of the spacetime .the photon sphere ( a spacelike hypersurface ) of radius has nothing to do with the black hole horizon ( a null hypersurface ) of radius , which traps all particles .in this section we consider various background fluid flows and compute the associated null curves of the auxiliary metric as defined in .the purpose of this section is to use examples to highlight the distinction between the closed null curves and any kind of photon sphere . in an attempt to construct simple examples possessing closed curves uniformly stretched by a flow, we consider rotational symmetry .it is clear that circles concentric about the origin are uniformly stretched in circularly symmetric flows and hence constitute examples of the curves sought in ref . as solutions to the optimization problem and shown to define ` coherent material vortices ' .we consider sequentially more complex flows : rigid body , irrotational , and some rotating and draining vortex flow .consider a rigid body flow profile _ t:(r_0,_0)(r_0 , _0+t ) where is a fixed angular frequency .then coincides with the identity , , and the effective metric is not of lorentzian signature for any .hence we must move to a more complicated flow in order to explore the construction . 
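as an aside , a quick numerical check of the circular photon orbit quoted earlier in this section : using the massless case of the effective potential ( geometric units g = c = 1 , with an arbitrary angular momentum , both illustrative choices ) , the sketch below locates the extremum and confirms that it sits at r = 3gm and is a maximum , i.e. the orbit is unstable .

```python
import numpy as np

G, M, L = 1.0, 1.0, 1.0          # geometric units; the values only set the scale
r = np.linspace(2.2, 20.0, 200000)
V = L**2 / (2 * r**2) - G * M * L**2 / r**3   # massless (photon) effective potential

i = np.argmax(V)
print("photon orbit radius ~", r[i], " (expected 3GM =", 3 * G * M, ")")
print("V'' < 0 there -> unstable:", np.gradient(np.gradient(V, r), r)[i] < 0)
```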
in a certain sense , is null in this geometry , highlighting the total uniform and coherent ( non - deforming ) nature of the flow .such a flow is highly unphysical and does not in any way invalidate the results of : it serves only as a first simple example which shows that the metric construction can be non - trivial .consider instead the differential rotating flow _ t:(r_0,_0)(r_0 , _ 0 + ) .[ e : irrot ] such a flow is irrotational and is commonly referred to as a ` vortex line ' flow , in this case of strength .then the eigenvalues of are easily calculable as _ = and we see that they are independent of the angular variable .they satisfy , as functions of and , the bounds as well as the bounds at fixed ( see fig .[ f : values ] ) .hence the only value of for which is of lorentzian signature in the entire plane is . ( figure [ f : values ] : eigenvalues of the matrix as a function of radius at finite time for the irrotational vortex flow ; as time proceeds one eigenvalue diverges and the other converges , and neither crosses the reference line at any time . ) ( figure [ f : nulls1 ] : the first null vector field associated with the irrotational vortex flow ; this vector field is independent of the time up to which the flow is computed . ) ( figure [ f : nulls2 ] : the second null vector field [ e : eta- ] associated with the irrotational flow ; it depends on the time up to which we compute the flow , converging at late times or small radius to the direction tangent to concentric circles about the origin , while at early times it is purely radially pointing . ) in the orthonormal polar basis for this choice of we have e_ab= ( ) ^2 & - + - & 0 in this case , one of the two null vector fields of is time - independent and tangent to circles concentric on the origin ( see fig . [ f : nulls1 ] ) , in the orthonormal polar basis , _ + = ( 0,1 ) while the other is time - dependent , _-= ( 1 , ) [ e : eta- ] tending to a purely angular pointing ( in the direction ) vector field as ( see fig .[ f : nulls2 ] ) .it is straightforward to show that the geodesic equation ( one might find it easier to work in the non - orthonormal polar coordinate basis at this stage ) becomes the pair from which we see that our null curves of constant are indeed null geodesics .furthermore one can simply show that also the second set of null curves are geodesics . using standard techniques one can show that there are three killing vector fields , two of which are ` spacelike ' for all and and one of which is null for all and . here ` spacelike ' is an arbitrary definition which we take to mean positive norm ( with respect to the metric ) .the null killing vector field is purely rotational being tangent to circles about the origin .
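as a numerical aside , the ` uniform stretching ' property of circles centred on the origin can be checked directly . the sketch below advects a centred and an off - centre circle with the vortex - line map above ( the strength , time , radii and centres are illustrative assumptions ) and compares the stretching of their tangent vectors ; the centred circle is stretched uniformly ( in fact not at all ) while the off - centre one is not .

```python
import numpy as np

def flow_map(p, t, k=1.0):
    x, y = p
    r, th = np.hypot(x, y), np.arctan2(y, x)
    th = th + k * t / r**2
    return np.array([r * np.cos(th), r * np.sin(th)])

def stretch_ratios(center, radius, t, n=2000):
    s = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    curve = np.stack([center[0] + radius * np.cos(s),
                      center[1] + radius * np.sin(s)], axis=1)
    advected = np.array([flow_map(p, t) for p in curve])
    # tangent vectors by periodic finite differences
    T0 = np.roll(curve, -1, axis=0) - np.roll(curve, 1, axis=0)
    T1 = np.roll(advected, -1, axis=0) - np.roll(advected, 1, axis=0)
    return np.linalg.norm(T1, axis=1) / np.linalg.norm(T0, axis=1)

t = 3.0
for c in ([0.0, 0.0], [0.5, 0.0]):
    rat = stretch_ratios(c, 1.5, t)
    print(f"center {c}: stretch ratio spread = {rat.max() - rat.min():.2e}")
```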
since there is no 1-dimensional line on which any of the killing vectors change norm we can conclude that there does not exist a horizon in this geometry .again , this flow is highly idealised being unbounded at the origin whereas in practice when working with real turbulent flows one would have a region which might even be time dependent .one might consider also a _ draining _ vortex type fluid flow _ t:(r_0,_0 ) ( ( t)r_0 , _ 0 + ) .where is a function which parametrises the radial flow , in the hopes of unearthing some lorentzian geometry which more closely resembles that of a black hole .indeed , with the addition of radial flow one might hope to have some kind of `` trapped region '' inside of which all time - like vectors point towards the origin , reminiscent of a similar construction in the singularity theorems of hawking and penrose .for example one might choose such that the flow describes a draining ` bathtub vortex ' which to a first order approximation can be described by so that and the radial velocity is inversely proportional to the radial position as a function of time .the flow and geometry given here are more complex than in the previous example and , for brevity , we shall not present them in any depth here .it can be shown , however , that at fixed time the matrix in this case has two eigenvalues which behave analogously to the irrotational vortex case but separated now by the time - dependent ` constant ' . in this case , and in line with our intuition , circles concentric about the origin are invariant curves and are null geodesics of the metric : a circle of radius gets mapped under the flow to circles of radius so that the tangent vectors squared are uniformly scaled by .this is intuitive since an experimentalist who drops ink droplets in a perfect ring into such a flow will observe a shrinking of the ring ( or an expansion , depending on the character of ) but it will be _ coherent _ ( not deforming ) as time progresses .while the characterisation of fluid vortices in terms of an auxiliary lorentzian metric on fixed time slices of a fluid flow should be of practical utility , as pointed out in , the interpretation of the metric and its lorentzian geometry as being ` close to ' or analogous to that of a black hole is lacking .we have shown that no event horizons exist for simple flows which possess the characteristic features discussed in as being ` photon sphere ' or ` black - hole - like ' .further , we have clarified the nature of the circular photon orbit in the schwarzschild geometry ( the prototypical black hole geometry containing an horizon surface which traps massive and massless particles ) ) and shown it to be very different from the closed null curves which hearald the ` coherent material vortices ' discussed in .specifically the -lines discussed in as being analogous to the photon sphere are closed null curves while the photon sphere contains closed space - like curves .it is interesting to point out that the use of lorentzian geometry and auxiliary metrics in fluid dynamics in fact has a healthy and vibrant research community and literature associated with it known as the ` analogue gravity ' program where the effective metric discussed there has a sound physical basis , being the metric which describes the real causal structure of a physical system for various kinds of physical propagating signals such as sound waves or small perturbations . 
in this manner , the metrics in analogue gravity are expected to be sufficiently ` physical ' , for example they can not contain closed null or closed time - like curves .the cutting edge of that discipline is the construction and observation of analogue event horizons or analogue photon spheres in analogue systems ( see for example ) . in still other corners of the literaturethe introduction of an effective metric in a fictitious space is the key feature of the jacobi form of the maupertuis variational principle in point particle mechanics . while in principle intriguing , the introduction of such effective metrics in that context ( see also for effective metrics in different contexts ) has , thus far , been of little utility in developing theories and solving practical problems .it is hoped that the characterization of eddies by means of an effective lorentzian geometry will break this spell .we thank g. haller and f.j .beron - vera for comments on a previous version of the manuscript .this work is supported by the natural sciences and engineering research council of canada and by bishop s university .g. haller and f. j. beron - vera , `` coherent lagrangian vortices : the black holes of turbulence , '' _ j. fluid mech ._ * 731 * ( 2013 ) r4 , http://arxiv.org/abs/1308.2352[arxiv:1308.2352 [ physics.ao-ph ] ] .s. m. carroll , _ spacetime and geometry_. addison - wesley , san francisco , 2005 .s. hawking and r. penrose , `` the singularities of gravitational collapse and cosmology , '' _ proc.roy.soc.lond . _ * a314 * ( 1970 ) 529548 . c. barcelo , s. liberati , and m. visser , `` analogue gravity , '' _ living reviews in relativity _* 14 * ( 2011 ) , no . 3 , .g. rousseaux , p. maissa , c. mathis , p. coullet , t. g. philbin , _ et al ._ , `` horizon effects with surface waves on moving water , '' _ new j.phys._ * 12 * ( 2010 ) 095018 , http://arxiv.org/abs/1004.5546[arxiv:1004.5546 [ gr - qc ] ] .s. weinfurtner , e. w. tedford , m. c. penrice , w. g. unruh , and g. a. lawrence , `` classical aspects of hawking radiation verified in analogue gravity experiment , '' _ lect.notes phys . _* 870 * ( 2013 ) 167180 .t. g. philbin , c. kuklewicz , s. robertson , s. hill , f. konig , _ et al ._ , `` fiber - optical analogue of the event horizon , '' _ science _ * 319 * ( 2008 ) 13671370 , http://arxiv.org/abs/0711.4796[arxiv:0711.4796 [ gr - qc ] ] .a. n. kolmogorov in _ proceedings of the international congress on mathematics _ ,north holland , amsterdam , 1954 .n. abe , `` notes on the kolmogorov s remark concerning classical dynamical systems on closed surfaces , '' in _ geometry of geodesics and related topics , advanced studies in pure mathematics 3 _ , k. shiohama , ed . north holland , amsterdam , 1984 .
in this letter we point out some interpretational difficulties associated with concepts from general relativity in a recent article which appeared in _ j. fluid mech . _ ( 2013 ) r4 where a lorentzian metric was defined for turbulent fluid flow and interpreted as being analogous to a black hole metric . we show that the similarity with black hole geometry is superficial at best while clarifying the nature of the black hole geometry and the work above with some examples .
the watt balance , an experiment proposed by dr b. p. kibble in 1976 , is widely employed at national metrology institutes ( nmis ) for precisely measuring the planck constant towards the redefinition of one of the si base units , the kilogram .the new definition of the kilogram will be realized by fixing the numerical value of the planck constant , which is expected to be determined with a relative uncertainty of two parts in .the role of a watt balance experiment is to transfer the mass standard from the only mass in bureau international des poids et mesures ( bipm ) , i.e. , the international prototype of kilogram ( ipk ) , to a value that makes the planck constant exactly equal to j .the detailed origin , principle and recent progress of the watt balance is presented in several review papers , e.g. , . here a brief summary of the measurement is given .the watt balance is operated in two separated measurement modes , conventionally named as the weighing mode and the velocity mode . in the weighing mode ,the magnetic force produced by a coil with dc current in the magnetic field is balanced by the gravity of a test mass , and a force balance equation can be written as where denotes the magnetic flux density , the current in the coil , the wire length of the coil , the test mass and the local gravitational acceleration . in the velocity mode ,the coil moves along the vertical direction in the same magnetic field , and generates an induced voltage , i.e. where is the coil velocity and the induced voltage . by combining equations ( [ 1 ] ) and ( [ 2 ] ), the geometrical factor can be eliminated and a virtual watt balance equation is obtained as in equation ( [ 3 ] ) , the induced voltage is measured by a josephson voltage standard ( jvs ) linked to the josephson effect , i.e. where denotes a known frequency , the electron charge .the current is measured by the josephson effect in conjunction with the quantum hall effect as where is the voltage drop on a resistor in series with the coil , the known frequency , and an integer number .a combination of equations ( [ 3])-([5 ] ) yields the expression of the planck constant in the si unit as it has been known that all the quantities on the right side of ( [ 6 ] ) can be measured with a relative uncertainty lower than one part in , and therefore , on the current stage , the planck constant is expected to be determined by watt balances with a relative uncertainty less than for achieving the purpose of redefining the kilogram . the watt balance is considered as one of the most successful experiments to precisely determine the planck constant and is employed in many nmis , e.g. . in order to generate a kilogram level magnetic force to balance the gravity of a mass while keep the power assumption of the coil as low as possible in the weighing mode , a strong magnetic field , e.g. 0.5 t , is required at the coil position for the watt balance .accordingly , permanent magnets with high permeability yokes are introduced to achieve the strong magnetic field .one of such magnetic systems ( shown in figure [ fig2 ] ) , developed by the bipm watt balance group , is the most preferred . 
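as a rough order - of - magnitude illustration of equations ( [ 1 ] ) and ( [ 2 ] ) , the sketch below gives the weighing - mode current and velocity - mode voltage implied by a 0.5 t radial field . the wire length , test mass , coil velocity and local g used here are assumptions for illustration only ; only the ~0.5 t field strength is quoted in the text above .

```python
# Order-of-magnitude check of the two measurement modes (illustrative values).
B = 0.5          # radial flux density in the gap, T (quoted scale)
L = 400.0        # total wire length of the coil, m (assumed)
m = 1.0          # test mass, kg (assumed)
g = 9.81         # local gravitational acceleration, m/s^2 (assumed)
v = 2e-3         # coil velocity in the velocity mode, m/s (assumed)

I = m * g / (B * L)          # weighing mode: B*I*L = m*g
U = B * L * v                # velocity mode: U = B*L*v
print(f"weighing-mode current I ~ {I*1e3:.1f} mA")
print(f"velocity-mode voltage U ~ {U*1e3:.1f} mV")
print(f"virtual power U*I = m*g*v = {U*I*1e3:.3f} mW")
```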
in the shown magnet construction , two permanent magnet rings with opposite magnetization poles are installed inside the inner yoke .the magnetic flux of the permanent magnet rings is guided by high permeability yokes through the air gap .as the work area of the coil , the air gap , is designed to be long and narrow , a strong magnetic field with good uniformity can be generated along the radial direction . as the total flux through the air gap is roughly a constant , the magnetic flux density along the radial direction , , decays along direction following an approximate relation . in a magnetic field , it can be proved that the of the coil is a constant , independent of any coil deformation or horizontal displacements .as the magnetic flux is closed with yokes , the shown watt balance magnet has a good feature of self shielding , reducing additional flux exchange between the inside and outside of the magnet .based on these advantages , the shown magnet has been widely adopted by other nmis , such as the federal institute of metrology ( metas ) , switzerland , the national institute of standards and technology ( nist ) , usa , the measurement standards laboratory ( msl ) , new zealand and the korea research institute of standards and science ( kriss ) , south korea . in reality , the air gap length is finite and hence the decay of the magnetic field does not hold over the whole vertical range , i.e. , the fringe effect will introduce a vertical magnetic component at any coil positions where .this fact indicates that the unknown magnetic field component can cause alignment problems , e.g. , the vertical magnetic field component can exert undesired radial force or torque in the weighing mode and could produce additional voltage in the velocity mode . considering the strict requirement of measurement accuracy in watt balances , any aspect that may bring a systematic error should be carefully analyzed .an accurate 3d mapping of the magnetic field in the air gap is a useful tool to provide important information on systematic error elimination .at least three benefits will be created by a full air gap 3d magnetic field mapping .the first is the misalignment error relaxation : the value of the geometrical factor in equations ( [ 1 ] ) and ( [ 2 ] ) can actually be affected by parasitic coil motions when the fringe field is considered .this misalignment error can be corrected if the 3d magnetic field profile is obtained , which in theory can relax the alignment requirement of a watt balance .the second benefit is that , with a full field profile known , a best coil diameter can be chosen in order to employ the maximum flat profile in the air gap , and a flat profile can reduce uncertainties in both velocity and voltage measurements .thirdly , the damping of the coil is usually applied in the break between weighing and velocity sweeps . during the break , the damping device is either at the top or the bottom of the air gap , where a coupling between the damping current and both magnetic components ( the radial magnetic field and vertical magnetic field ) should be considered .
in this case, the 3d magnetic field can supply better feedback to the current parameters for simultaneously damping all dimensional motions of the coil .as is mentioned in , the narrow air gap makes it difficult to directly measure the global magnetic field profiles .only profiles of the radial magnetic flux density along the vertical direction , , can be precisely measured by either a high resolution magnetic probe or the gradient coil ( gc ) method .a polynomial estimation algorithm based on at least two measurements of profiles has been developed in to calculate the global magnetic field in the air gap .the algorithm essentially approximates the magnetic field lines by a polynomial fitting and the approximation precision depends on the fitting order , i.e. the quantities of the measured .a higher accuracy requires more profile measurements , which in reality is difficult to be done at different radii of the narrow air gap . besides, the algorithm presented in would be much more complicated when the estimator order increases , e.g. , .we later noticed the information in measured is not fully utilized in the polynomial estimation algorithm , and hence in theory it is possible to develop an analytical algorithm based on only one measurement . this motivation leads to an improved analytical algorithm to map the 3d magnetic field in the air gap region , which has been presented in this article .the new algorithm is based on fundamental electromagnetic natures of the magnet and has advantages in both convenience and accuracy .the rest of this paper is organized as following . in section [ sec2 ] ,the new analytical algorithm is presented . in section [ sec3 ] ,numerical simulations are employed to verify the accuracy of the analytical algorithm .some discussions on the watt balance magnet design are shown in section [ sec4 ] and a conclusion is drawn in section [ sec5 ] .a detailed dimension of the air gap in the watt balance magnet is shown in figure [ fig3 ] .the area , is our model region where and is the inner and outer radii of the air gap and is a half of the air gap height .the radial magnetic flux density along the vertical direction , or the radial magnetic field along the vertical direction , can be measured by a gc or a magnetic probe at the radial coordinate .is measured at the horizontal coordinate ( the green line).,scaledwidth=40.0% ] in the analysis , the magnetic scalar potential is chosen as a quantity to be solved .the magnetic field can be expressed as the negative gradient of the magnetic scalar potential , i.e. , . is the sum of two components in an axisymmetrical coordinate system as where and denote the unit vectors in the and directions ; and are two components of the magnetic field in and directions . as , equation ( [ eq1 ] ) can also be written as the magnetic scalar potential in the air gap area , i.e. ( , ) , ( , ) , can be described by the laplace s equation as the separation of variables is applied to solve equation ( [ eq2 ] ) . in this case , the magnetic scalar potential is supposed to be expressed as the product of two independent functions and , where and are functions of a single variable , i.e. and . 
substituting into equation ( [ eq2 ] ) yields sincethe two terms on the left side of equation ( [ sep_vars ] ) are independent in dimension , equation ( [ sep_vars ] ) can be written as where is a constant .note that the solution form of equation ( [ sep_vars ] ) will be determined by the sign of .when , the solution can be written as where , , and are constants .when ( e.g. ) , the solution can be obtained as \left[{{c_1}{j_0}(\lambda r ) + { d_1}{y_0}(\lambda r ) } \right ] , \end{array}\ ] ] where and denote hyperbolic cosine function and hyperbolic sine function ; and are the zeroth order bessel functions of the first and second kinds , , , , and are all constants . when ( e.g. ) , the solution can be written as \left [ { { c_2}{i_0}(\rho r ) + { d_2}{k_0}(\rho r ) } \right ] , \end{array}\ ] ] where and denote the zeroth order modified bessel functions of the first and second kinds , , , , and are all constants . the general solution of equation ( [ sep_vars ] ) is the linear combination of , and , i.e. \left[c_{1n}j_0(\lambda_n r)+d_{1n}y_0(\lambda_n r)\right]\\ + \sum\limits_{n=1}^{\infty}\left[a_{2n}\cos ( \rho_n z)+b_{2n}\sin ( \rho_n z)\right ] \left[c_{2n}i_0(\rho_n r)+d_{2n}k_0(\rho_n r)\right ] .\label{eq.general } \end{array}\end{aligned}\ ] ] based on equations ( [ eq1.1 ] ) and ( [ eq.general ] ) , the vertical magnetic field component can be expressed as \left[c_{1n}j_0(\lambda_n r)+d_{1n}y_0(\lambda_n r)\right]\\ + \sum\limits_{n=1}^{\infty}\rho_n\left[a_{2n}\sin ( \rho_n z)-b_{2n}\cos ( \rho_n z)\right ] \left[c_{2n}i_0(\rho_n r)+d_{2n}k_0(\rho_n r)\right ] .\label{eq4 } \end{array}\end{aligned}\ ] ] since the yoke permeability in the presented watt balance magnet is very high , two yoke - air boundaries , i.e. and , can be considered as equipotential surfaces . as a result, the vertical magnetic field along the vertical direction should be zero at both and , i.e. , . using this boundary condition ,the following equations are established , i.e. and in equation ( [ eq.condition11 ] ) , since is a function of monotonicity where , we have when , and hence . in equation ( [ eq.condition12 ] ), we can set and to ensure the condition , i.e. , , to be always satisfied . in this case , should be set to values to establish another condition , i.e. . in equation ( [ eq.condition13 ] ) , because is a monotone increasing function while is a monotone decreasing function , we have and .therefore , and is obtained .the distribution of is symmetrical about the line and thus the odd symmetrical function should be removed from the expression of . then equation ( [ eq.after_condition3 ] ) is reduced to .\label{eq.after_condition2 } \end{array}\ ] ] using the measured profile , the remaining unknown constants in equation ( [ eq.after_condition2 ] ) can be solved .based on equations ( [ eq1 ] ) and ( [ eq.after_condition2 ] ) , can be written as \\ \displaystyle = c'+\sum\limits_{n=1}^{\infty}a'_n \cosh(\lambda_n z ) , \label{eq.compare1 } \end{array}\end{aligned}\ ] ] where and denote the first order bessel functions of the first and second kinds ; ; $ ] ( ) .for convenience of expression , we can set and , and then . as a result , equation ( [ eq.compare1 ] ) can be expressed as the values of can be obtained by expanding in forms of .however , the implementation process is complicated because of the non - orthogonality of the base functions , i.e. where is defined as the inner product of two functions and . 
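the non - orthogonality mentioned here is easy to see numerically . the sketch below builds a few functions cosh(lam_n z ) on the gap height and evaluates their gram matrix under a plain l2 inner product ; the eigenvalues and the gap half - height are placeholders , not the roots determined by the boundary conditions above , and the off - diagonal entries of the gram matrix come out clearly non - zero .

```python
import numpy as np

zh = 0.05                            # half-height of the air gap, m (illustrative)
lam = np.array([20.0, 55.0, 90.0])   # placeholder eigenvalues, 1/m (not the paper's roots)
z = np.linspace(-zh, zh, 20001)
dz = z[1] - z[0]

# Gram matrix of the basis {cosh(lam_n z)} under <f,g> = int_{-zh}^{zh} f g dz,
# approximated by a Riemann sum on the grid.
basis = np.cosh(np.outer(lam, z))
gram = (basis @ basis.T) * dz
print(np.round(gram, 4))             # off-diagonal entries are clearly non-zero
```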
to reduce the complexity due to the non - orthogonality , in this paper, the process of obtaining is as follows : first , the base functions are transformed into orthogonal normalized base functions , then is expanded in forms of in a convenient way , and finally base functions are replaced with . in this way , can be easily expanded in forms of and hence is obtained . to practise the above idea , herethe gram - schmidt orthogonalization procedure is employed to transform into orthogonal normalized base functions .the definition of the norm of function is expressed as then base functions are orthogonalized and normalized through a recursion formula , expressed as in recursion formula ( [ eq.orthogonal_2 ] ) the inner product when and when .after the orthogonalization and normalization procedure , can be expanded in forms of , i.e. where the coefficient can be calculated by solving the inner product of and , because the last step of obtaining is replacing in equation ( [ eq.orthogonal_3 ] ) with , and thus should be solved by expressing it with combinations of .the relation between and has been shown in recursion formula ( [ eq.orthogonal_2 ] ) .it is found difficult to directly express as a linear combination of , but instead it is easy to write as a linear combination of , i.e. where , and are constants that have been solved in equation ( [ eq.orthogonal_2 ] ) . setting and as column vectors as equation ( [ eq.orthogonal_6 ] ) can be written as the product of a matrix and the column vector , i.e. where denotes a lower triangular matrix , i.e. based on equation ( [ matrix1 ] ) , can be solved as the linear combination of , i.e. where denotes the element on line numbered column numbered of , the inverse of the matrix . knowing the expression with a linear combination of , the unknown constants in equation ( [ eq.compare11 ] ) can be solved .replacing in equation ( [ eq.orthogonal_3 ] ) with equation ( [ ex ] ) , is written as the combination of , i.e. \\ = \sum\limits_{i=0}^{\infty}\big[\sum\limits_{n=0}^{\infty}f_n\mathbf{m}^{-1}(n , i)\big]\cosh(\lambda_i z)\\ = \sum\limits_{n=0}^{\infty}\big[\sum\limits_{i=0}^{\infty}f_i\mathbf{m}^{-1}(i , n)\big]\cosh(\lambda_n z ) , \label{eq.compare2 } \end{array}\end{aligned}\ ] ] comparing equations ( [ eq.compare11 ] ) and ( [ eq.compare2 ] ) , is solved as accordingly , the constant in equation ( [ eq.compare1 ] ) is calculated as , \label{eq.c } \end{array}\end{aligned}\ ] ] and in equation ( [ eq.compare1 ] ) is solved as }\\ \displaystyle=\frac{\sum\limits_{i=0}^{\infty}f_i\mathbf{m}^{-1}(i , n)}{\lambda_n\left[j_1(\lambda_n r_0)y_0(\lambda_n b)-j_0(\lambda_n b)y_1(\lambda_n r_0)\right]}. \label{eq.a1n } \end{array}\end{aligned}\ ] ] by a combination of equations ( [ eq.after_condition2 ] ) , ( [ eq.c ] ) and ( [ eq.a1n ] ) , the magnetic scalar potential is obtained as (\ln r-\ln b)\\ + \sum\limits_{n=1}^{\infty}\left[\sum\limits_{i=0}^{\infty}f_i\mathbf{m}^{-1}(i , n)\right ] \displaystyle\frac{\cosh ( \lambda_n z)[j_0(\lambda_n r)y_0(\lambda_n b)-j_0(\lambda_n b)y_0(\lambda_n r)]}{\lambda_n\left[j_1(\lambda_n r_0)y_0(\lambda_n b)-j_0(\lambda_n b)y_1(\lambda_n r_0)\right]}. 
\label{eq.solution } \end{array}\end{aligned}\ ] ] based on equation ( [ eq1 ] ) , the two magnetic field components and can be calculated respectively as \displaystyle\frac{\sinh ( \lambda_n z)\left[j_0(\lambda_n r)y_0(\lambda_n b)-j_0(\lambda_n b)y_0(\lambda_n r)\right]}{j_1(\lambda_n r_0)y_0(\lambda_n b)-j_0(\lambda_n b)y_1(\lambda_n r_0 ) } , \label{eq.hz1 } \end{array}\ ] ] and \displaystyle\frac{\cosh ( \lambda_n z)\left[j_1(\lambda_n r)y_0(\lambda_n b)-j_0(\lambda_n b)y_1(\lambda_n r)\right]}{j_1(\lambda_n r_0)y_0(\lambda_n b)-j_0(\lambda_n b)y_1(\lambda_n r_0)}. \label{eq.hr1 } \end{array}\end{aligned}\ ] ] it can be proved that the magnetic field solutions obtained in equations ( [ eq.hz1 ] ) and ( [ eq.hr1 ] ) are unique . the proof of the uniqueness theorem is attached in the appendix .over ( the blue curve ) and the horizontal axis ( the red curve).,scaledwidth=55.0% ] in order to evaluate the analytical algorithm accuracy , numerical simulations based on the finite element method ( fem ) are performed . in these fem simulations ,the parameters are set close to a 1:1 real watt balance magnet : , , and are set as mm , mm , mm and mm respectively ; the relative permeability of the yoke is set as and the magnetic strength of the permanent magnet is set as 800kam in the vertical direction . in the simulation , the magnetic flux density profile (210 mm , ) is calculated by fem simulation as a known condition of the presented analytical algorithm to simulate the actual measurements of either a gc coil or a magnetic probe .note that in reality it is impossible to calculate infinite number of terms in equation ( [ eq.solution ] ) , and hence the first nine are adopted .the solution of the magnetic scalar potential , accordingly , can be written as (\ln r-\ln b)\\ + \sum\limits_{n=1}^{9}\left[\sum\limits_{i=0}^{9}f_i\mathbf{m}^{-1}(i , n)\right ] \displaystyle\frac{\cosh ( \lambda_n z)[j_0(\lambda_n r)y_0(\lambda_n b)-j_0(\lambda_n b)y_0(\lambda_n r)]}{\lambda_n\left[j_1(\lambda_n r_0)y_0(\lambda_n b)-j_0(\lambda_n b)y_1(\lambda_n r_0)\right]}. \label{solution_4 } \end{array}\end{aligned}\ ] ] in equation ( [ solution_4 ] ) , , as demonstrated in solving equation ( [ eq.condition12 ] ) , should satisfy , and their values are calculated by a numerical method . as shown in figure [ figure2 ] , the solutions are the intersections of function and the horizontal axis .the first nine intersections are solved as ( ) .as presented in equation ( [ eq.orthogonal_2 ] ) , the base functions are transformed into orthogonal normalized functions , where is set to , as it is defined in section [ sec2 ] .using equation ( [ eq.orthogonal_31 ] ) , constants are solved as the in this example is a matrix , whose elements have been calculated based on equation ( [ eq.matrix ] ) , i.e. the matrix is obtained by inversing , and two magnetic field components , i.e. , and , then can be solved based on equations ( [ eq.hz1 ] ) and ( [ eq.hr1 ] ) . figure [ figure25 ] shows the comparison of results calculated by the fem simulation and the analytical equation in ( [ eq.hr1 ] ) , where and denotes the vacuum ( air ) permeability .it can be seen that the two curves agree well with each other , which indicates that the order of is high enough to estimate . in order to further check the agreement with different numbers of , a relative fit error is defined , i.e. 
where is the measurement magnetic flux density , fit value of the magnetic flux density , and the average of , is analyzed on its decay rate over .the calculation result is shown in figure [ figure26 ] .it can be seen that the fit accuracy is mainly limited by the magnetic field profile measurement . when , the fit residual is already comparable to the sensitivity of the magnetic field measurement instruments , i.e. the accuracy of fem simulation in this case .obtained by fem simulation ( the blue curve ) and the analytical method ( the red curve).,scaledwidth=55.0% ] over different orders .,scaledwidth=55.0% ] figure [ figure4 ] shows the calculation result of the 3d magnetic flux density and based on the analytical equations ( [ eq.hz1 ] ) and ( [ eq.hr1 ] ) . the air gap region (200 mm , 230 mm ) , (-50 mm , 50 mm ) , where the watt balance is conventionally operated , has been focused .the calculation result clearly shows the fringe effect : the absolute value of the vertical magnetic flux density increases at both vertical ends of the air gap ; the horizontal magnetic flux density component is also bent from the decay surface . to evaluate the accuracy of the presented analytical algorithm ,some typical and curves given by fem simulation and the analytical equations ( [ eq.hz1 ] ) and ( [ eq.hr1 ] ) are compared in figure [ figure3 ] .it can be seen that the calculation obtained by the analytical algorithm agrees very well with that simulated by fem . for a full view of the field difference of fem and the analytical algorithm ,the differential maps of and are calculated in the air gap region (200 mm , 230 mm ) , (-50 mm , 50 mm ) and the calculation results are shown in figure [ figure5 ] . .( b ) magnetic flux density distribution of ., scaledwidth=90.0% ] and curves obtained by the analytical method and fem .the blue curves are obtained by fem simulation and the red curves are calculated by the analytical method .( a ) the curves , which from top to bottom are presented with different values of , i.e. 50 mm , 45 mm , 40 mm , 30 mm , 0 mm , -30 mm , -40 mm , -45 mm and -50 mm .( b ) the curves , which from top to bottom have different values of , i.e. 201 mm , 205 mm , 210 mm , 215 mm , 220 mm , 225 mm and 229mm.,scaledwidth=90.0% ] for comparison of the two analytical algorithms , the field difference between the fem simulation and the polynomial estimation algorithm in has been also calculated with the same permeability . to quantize the comparison ,the average difference between the analytical method and the fem is defined as where is the radical index number and is the vertical index number ; is the magnetic flux density difference between the proposed analytical method and fem simulation .the polynomial estimation algorithm is adopted in its highest precision case , i.e. three different profiles are measured in radii mm , mm and mm .the average differences of the polynomial estimation algorithm are 0.0595mt for and 0.0894mt for while the average difference of the new analytical algorithm is 0.0060mt for and 0.0048mt for .the accuracy of the new analytical algorithm in this case is respectively 10 and 20 times better for and than that of the polynomial estimation algorithm . 
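a compact numerical version of the orthonormalization - and - expansion step described above can be written with a qr factorization , which is the matrix form of the gram - schmidt recursion . the eigenvalues , the gap half - height and the ` measured ' profile below are synthetic placeholders ; the sketch simply verifies that coefficients of a profile expressed in the non - orthogonal cosh basis are recovered by projecting onto the orthonormal basis and mapping back through the triangular change - of - basis matrix , in the spirit of the matrix m and its inverse used above .

```python
import numpy as np

zh, N = 0.05, 4001
z = np.linspace(-zh, zh, N)
dz = z[1] - z[0]
lam = np.array([20.0, 55.0, 90.0, 130.0])     # placeholder eigenvalues, 1/m

# Columns = cosh basis functions on the grid, scaled so that the Euclidean
# inner product approximates  <f,g> = int f g dz.
A = np.cosh(np.outer(z, lam)) * np.sqrt(dz)

c_true = np.array([1.0, -0.3, 0.08, -0.01])   # synthetic profile coefficients
f = A @ c_true                                # synthetic 'measured' profile

Q, R = np.linalg.qr(A)         # Gram-Schmidt in matrix form: A = Q R, Q orthonormal
d = Q.T @ f                    # expansion of the profile in the orthonormal basis
c_rec = np.linalg.solve(R, d)  # map back through the triangular change of basis
print("recovered coefficients:", np.round(c_rec, 6))
```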
( figure captions : calculation error maps between the proposed analytical algorithm and fem simulation for the two field components , and , for comparison , between the polynomial estimation algorithm and fem simulation . ) the numerical simulation in section [ sec3 ] exhibits advantages in field representation accuracy and convenience in measurement for the new analytical algorithm .it should be emphasized that the shown analytical algorithm , as well as the polynomial estimation algorithm proposed in , is based on the assumption that the normal component of the magnetic field on the air - yoke boundary strictly equals zero .however , in reality a weak normal component exists on the air - yoke boundaries due to the finite permeability of the yoke , which creates the main part of the calculation error . for a further check , we calculated the magnetic field difference with different values of the yoke permeability .figure [ fig9 ] shows the calculation errors between the proposed analytical algorithm and fem simulation when and .it can be seen that the difference becomes smaller when the permeability is higher .this indicates that the analytical algorithm will have better performance when applied in a watt balance magnet with high permeability , e.g. the bipm watt balance magnet . ( figure captions : calculation error maps of the two field components for two values of the relative permeability , and the simulation results used for the best coil radius comparison . ) as mentioned in section [ sec1 ] , one of the main purposes for presenting the 3d magnetic field in the air gap is to find a best coil diameter for obtaining a widest profile . here we give a suggestion by taking the calculation in section [ sec3 ] as an example .knowing the 3d magnetic field , the coordinate where has a peak value , i.e. \\ \displaystyle\times\frac{\lambda_n\sinh ( \lambda_n z_0)\left[j_1(\lambda_n r_{\rm p})y_0(\lambda_n b)-j_0(\lambda_n b)y_1(\lambda_n r_{\rm p})\right]}{j_1(\lambda_n r_0)y_0(\lambda_n b)-j_0(\lambda_n b)y_1(\lambda_n r_0)}=0 , \label{new } \end{array}\ ] ] can be solved .the calculation result of has been shown in figure [ figure10](a ) .it shows that for the typical watt balance magnet , a best coil radius should be smaller than the air gap center radius .if the velocity sweep range is set in the velocity mode , the best coil radius is calculated by means of the as for the example shown in figure [ figure10](a ) , mm , mm , and is calculated as 214.54 mm . by the calculation , the should have a wider flat profile than . to check it , we have plotted these two profiles in figure [ figure10](b ) by fem and the result verifies the conclusion .an analytical algorithm , employing only one measured , is presented for calculating the 3d magnetic field profile in watt balance magnets . compared to the polynomial estimation algorithm , the new algorithm is based on the fundamental electromagnetic nature of the magnet and has significant advantages in both convenience and accuracy .it is shown that the new analytical algorithm can improve the field mapping accuracy by more than one order of magnitude compared with a simple polynomial estimation , which has good potential for application in high permeability cases , e.g.
, the bipm magnet .the presented work can supply necessary information for misalignment analysis and parameter determinations in watt balances .base on the study , a best coil radius , which should be designed slightly smaller than the air gap center , is suggested .the discussion shows that the accuracy of the proposed analytical algorithm is mainly limited by a finite permeability of the yoke material .therefore , a correction model of yoke permeability should be focused in a following investigation .also , in this paper the symmetry of the watt balance magnet is assumed , which in reality may be not true .further studies with considerations of the asymmetry of the magnet system may be addressed in the future .the authors would like to thank mr .xuanyu dong for advice about using the gram - schmidt orthogonalization procedure and dr qing wang at durham university , uk for language proofing .this work is supported by the national natural science foundation of china ( grant no .51507088 ) .in this appendix , the uniqueness of magnetic field solutions , i.e. , and in equations ( [ eq.hz1 ] ) and ( [ eq.hr1 ] ) , is proved by a reductio ad absurdum .we assume that the solution of equation ( [ eq.solution ] ) is not unique in the magnet air gap region , . without losing generality , two different solutions and supposed to satisfy the condition , and .based on the symmetry of the air gap , the magnetic scalar potential at for and , i.e. , , and , should meet according to the uniqueness theorem for static magnetic fields , , , and form the boundary conditions of region , and determines a unique solution of , which is expressed similarly as equation ( [ eq.after_condition2 ] ) , i.e. .\end{array } \label{eq.app1}\end{aligned}\ ] ] the unknown constants in equation ( [ eq.app1 ] ) , i.e. and , can be solved by expanding in forms of fundamental functions and .the solved constants are symbolized as and , and then the solution can be written as .\label{eq.phim1 } \end{array}\end{aligned}\ ] ] similarly , is the unique solution with boundary conditions , , and , which can be expressed as .\label{eq.phim2 } \end{array}\end{aligned}\ ] ] based on equations ( [ eq1 ] ) , ( [ eq1.1 ] ) and ( [ eq.phim1 ] ) , the radial component of the magnetic field at , i.e. , can be expressed as \cosh ( \lambda_n z),~~~~ \end{array}\end{aligned}\ ] ] and based on equations ( [ eq1 ] ) , ( [ eq1.1 ] ) and ( [ eq.phim2 ] ) , the expression of is obtained as \cosh ( \lambda_n z).~~~~ \end{array}\end{aligned}\ ] ] it has been supposed that both and can both satisfy the condition , therefore . as is known , a function of can not be expressed by linear combinations of where , thus and are established .further , based on equations ( [ eq.phim1 ] ) and ( [ eq.phim2 ] ) , we have obviously , equation ( [ eq.appxz ] ) and the stated assumption are contradictory , and hence the supposition is false and the theorem , i.e. the solution of equation ( [ eq.solution ] ) is unique , is valid .therefore and in equations ( [ eq.hz1 ] ) and ( [ eq.hr1 ] ) are the unique solutions .et al _ 2015 field representation of a watt balance magnet by partial profile measurements _ metrologia _ 445 - 453 kibble b p 1976 a measurement of the gyromagnetic ratio of the proton by the strong field method _ atomic masses and fundamental constants 5 _ pp 545 - 551
a yoke-based permanent magnet, which has been employed in many watt balances at national metrology institutes, is designed to generate a strong and uniform radial magnetic field in an air gap. in reality, however, the fringe effect due to the finite height of the air gap introduces an undesired vertical magnetic component in the gap, which should either be measured or modelled for the optimization of the watt balance. a recent publication, _metrologia_ 52(4) 445, presented a full field mapping method which in theory supplies useful information for profile characterization and misalignment analysis. this article is supplementary material to that work: it develops a different analytical algorithm that represents the 3d magnetic field of a watt balance magnet based on only one measurement of the radial magnetic flux density along the vertical direction. the new algorithm is based on the electromagnetic nature of the magnet and achieves much better accuracy.
the development of new algorithms and of powerful computers currently makes it possible to study in explicit solvent large conformational changes of small proteins and peptides, and smaller conformational changes in large proteins, although still with considerable computational effort. more complex simulations, like those of large changes in large systems, of the aggregation of many protein chains, or of systematic mutation scans, still require models with simplified degrees of freedom. particularly useful in this respect are implicit-solvent models controlled by simple potentials, such as those involving only contact functions (and thus not requiring the lengthy calculation of accessible surface areas). while a thorough sampling of the conformational space of a protein system described by such simplified models is now rather affordable even for large proteins, and even when all the heavy atoms of the system are described explicitly, the determination of a simple potential capable of recapitulating the properties of a protein is still a challenging problem. the basic requirement for such a potential is to make the native conformation of proteins stable, as entailed by the thermodynamic hypothesis. several different approaches were used to implement this requirement. using associative-memory potentials, which encode correlations between protein sequence and native structure and are motivated by the theory of neural networks, it was possible to predict the native conformation of a number of proteins from the knowledge of their sequence, albeit in a simplified geometry. minimizing the potential simultaneously in the native conformation of several proteins with respect to their competing conformations made it possible to design a potential capable of identifying the native conformation within the framework of a minimal model of protein-like polymers. however, this potential failed to distinguish the native from alternative conformations in the case of real proteins, mainly because the exploration of competing conformations was computationally too demanding. similar approaches were carried out by sampling competing conformations with a monte carlo algorithm within a bead model, or through a variational approach. however, they have not been completely successful for real proteins, always yielding poor results for full-sized protein molecules. simple potentials originally designed for structure prediction have also been successfully used to sample non-native states of small proteins. a simpler approach is to use structure-based models, specific for each protein. in this case one desists from building a universal potential, capable of predicting the native conformation from its sequence, and focuses on the investigation of the properties of a protein of known structure. this is the case, for example, of the popular go model, which is a direct implementation of the principle of minimal frustration. for example, with a coarse-grained go model it was possible to simulate the cotranslational folding of 100-residue proteins within the whole, explicitly represented ribosome. structure-based models have been successful in reproducing a number of features of proteins, especially those related to the native state and to the transition between the native and the denatured state.
however, they are not able to describe non-native interactions properly, and consequently cannot account for the properties of the denatured state, for intermediate states stabilized by non-native interactions, for protein aggregation, and for all those properties that emerge from the competition between native and non-native interactions. in the present work we build an implicit-solvent model which describes all heavy atoms and which retains the computational handiness of minimally-frustrated models, but does not suffer from their limitations. a key feature of this model is that it must fold to the native conformation of the protein. for this purpose, we develop a strategy to design a potential between the different atom types that makes the equilibrium state of the model at low temperature unique and equal to the native conformation. once the potential has been designed, one can sample the conformational space of the system, and thus study thermodynamic quantities other than those used as input in the design algorithm. in this way it is possible to understand which properties of the protein are a necessary consequence of the stability of its native state. we show that the low-temperature equilibrium state is unique and is identical to the experimental native conformation, and that the two-state character of the transition between native and denatured state, the energetic effect of experimentally-characterized mutations and some features of the denatured state can be reproduced without any further input to the system. the potential is chosen as the sum of two-body terms, accounting for the interactions between pairs of atoms, each shaped as a double spherical well. this choice allows a remarkably fast sampling of the conformational space of the protein system by means of monte carlo (mc) algorithms. each two-body term is determined by a single parameter which sets the depth of the energy wells and which depends on the chemical species involved. operatively, the set of energy parameters associated with all pairs of chemical species is optimized with an iterative monte carlo algorithm, employing the reweighting scheme developed by norgaard and coworkers, to make the thermal averages of a set of inter-atomic distances match the values they display in the experimental native conformation of the protein. a sequence-dependent potential on the backbone dihedrals is also introduced to favour the formation of secondary structures. the resulting potential will be minimally frustrated if this is required for the system to display a stable native state, but this ingredient is not imposed by hand. in fact, atoms of the same type but located at different positions along the chain interact in the same way, and consequently can stabilize, even strongly, non-native interactions. in the model we developed, proteins are described through all their heavy atoms. all bond distances, backbone angles and dihedrals of the peptide bond are maintained rigidly fixed at their experimental values. the ramachandran dihedrals can move freely, while the side chains of the residues can move among the rotamers defined in ref .
.starting from the knowledge of the native conformation of the protein , the potential which controls it has the form the former , two body term is a two - well spherical potential which depends on the positions and on the kind of the atoms involved , in the form atoms that are separated by less than 9 other atoms along the backbone do not display attractive two body interactions .the minimum of the well has energy that depends on the types , of the atoms involved .different atoms in different amino acids are regarded as different atom types , giving a total of 163 atom types . defining a native contact between two atoms if the two atoms are closer than , we label the maximum distance between atoms of kind and in all native contacts of the protein . the hard - core radius for that pair of atom types is then defined as , at the radius the energy depth of the well is decreased by a factor 2 , and the overall interaction range is .the potential on the ranachandran dihedrals and is meant to account for the interactions between atoms close along the chain , and thus to induce the formation of local secondary structure .it has the form ,\end{aligned}\ ] ] where are the energy constants that set the weight of the dihedral potential with respect to the two body potential and to each other and are chosen as and in order to allow the formation of secondary structure at but , at the same time , not to make the two body potential irrelevant .the quantities , , and are the averages of ramachandran dihedrals in typical and conformations , respectively , while the quantities , , and are the associated standard deviations ( see ) .the quantities and are the sequence dependent propensities for the amino acid of and structure , respectively , calculated with psipred .we choose not to make it dependent on the specific native conformation not to bias the formation of secondary structures which could be stabilized by tertiary contacts .the dihedral potential is not affected by the optimization procedure . before starting the simulation , for each protein a set of pairs of atoms are selected in such a way that they do not belong to amino acids closer than 4 along the sequence , and the distances between each pair in the native conformation recorded .this choice guarantees that the implementation of all the in a conformation of the protein makes it identical to the native conformation , with an rmsd smaller than 1 .the whole idea is to optimize the interaction matrix so that the thermal average of the distance between each pair of atoms and is equal to the distance they have in the experimental native conformation , that is and consequently that the equilibrium conformation of the protein is the native one . to implement this idea , we start from an interaction matrix in which if there is al least one pair of atoms and such that and 0 otherwise .the choice of the initial matrix is not really critical .making use of the potential ( [ eq : potential ] ) , a mc sampling is carried out and a set of conformations at temperature , which is regarded as reference temperature and sets the energy units ( boltzmann s constant is also set to 1 ) , is recorded . at the end of the mc sampling , the average distances are calculated from the recorded conformations and the between them and the native distances is evaluated , using error allowed for all contacts in the definition of .the are optimized to minimize the making use of a zero - temperature random minimization . 
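before describing the distance-matching step in detail, it is worth picturing the two-body term defined above: a hard core followed by two concentric spherical wells. the sketch below is illustrative only (the precise radii and their relation to the native-contact statistics follow the prescriptions above, and the variable names are hypothetical; b_ab is negative for an attractive pair):

```python
import numpy as np

def pair_energy(dist, b_ab, r_hc, r_inner, r_outer, backbone_separation):
    """Double spherical-well pair energy between two atoms of types a and b.

    dist                : inter-atomic distance
    b_ab                : well-depth parameter for this pair of atom types
    r_hc                : hard-core radius for the pair
    r_inner             : radius beyond which the depth is halved
    r_outer             : overall interaction range
    backbone_separation : number of atoms separating the pair along the backbone
    """
    if dist < r_hc:
        return np.inf                 # hard-core overlap is forbidden
    if backbone_separation < 9:
        return 0.0                    # pairs too close along the chain do not attract
    if dist < r_inner:
        return b_ab                   # inner, full-depth well
    if dist < r_outer:
        return 0.5 * b_ab             # outer well, depth reduced by a factor 2
    return 0.0                        # outside the interaction range
```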
at each step of the minimization ,the average distances according to the modified potential are calculated following the reweigting scheme described in ref . , that is ,\ ] ] where \ ] ] and the index runs over 5000 conformations recorded during the mc sampling carried out with the potential .then , a new mc simulation is carried out with the new potentials and the procedure is repeated iteratively 100 times .the mc sampling is carried out with a parallel - tempering scheme .the mc moves are pivots on the backbone dihedrals , combinations of pivots on adjacent backbone dihedrals to produce local moves , and discrete moves of the side chains among all possible rotamers . in each simulation 8replicas of the system are used , at temperatures ranging from 1 to 1.75 .each mc iteration is carried out for steps for each replica .every steps after the half of the simulation the conformation belonging to the replica at is recorded .more details about the model and the optimization scheme are given in .important questions concerning the optimization procedure are whether the optimal potential is unique and to which extent it is portable among different proteins .a comparison of two interaction matrices for protein g , optimized independently on each other , give a correlation coefficient of 0.74 , with matrix elements more similar towards the ends of the distribution and more dissimilar towards zero .this suggests that the most stabilizing matrix elements are rather independent on the realization of the optimization procedure , but depends only on the protein . on the other hand , the correlation between the matrix elements associated with the same atom types in two proteins , specifically protein g and villin , is 0.08 , indicating that the optimized potential is not portable among proteins .a necessary condition that the optimized models have to satisfy is to display the experimental native conformation as low temperature equilibrium state .although the optimization was carried out towards the native distances , it is not straightforward that this is enough to let the model satisfy such a necessary condition . in the present model the interaction between two atoms depend on their kind , not on their position in the proteinthis introduces frustration in the system as , differently from the go models , the optimal interaction matrix is not simply that in which two atoms strongly attract each other if they are in contact in the experimentally - determined native conformation .the model satisfies the above necessary conditions if the optimization procedure is able to lower the energy of the native conformation below that of the competing conformations or , in other words , if it can minimize its degree of frustration .we have tested the model on three widely studied proteins .these are the villin headpiece ( pdb code 1vii ) , the b1 domain of protein g ( pdb code 1pgb ) and the sh3 domain of src ( pdb code 1fmk ) .the optimization procedure is illustrated in fig .[ fig : chi2 ] , where the to the set of native distances and the average rmsd to the native conformation is displayed as a function of the number of iterations .each iteration consists of a mc sampling and an optimization of the interaction matrix . 
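a compressed sketch of one such iteration is given below; it is not the original implementation, and the energy function, error tolerance and step size are placeholders. it only illustrates the reweighting estimate of the average distances and the zero-temperature acceptance rule on the chi-square:

```python
import numpy as np

def reweighted_mean_distances(d_samples, e_sampling, e_trial, temperature=1.0):
    """Estimate <d_ij> under a trial potential from conformations recorded
    with the sampling potential (reweighting scheme of Norgaard et al.).

    d_samples  : (n_conf, n_pairs) distances of the selected atom pairs
    e_sampling : (n_conf,) energies of each conformation with the sampling parameters
    e_trial    : (n_conf,) energies of the same conformations with the trial parameters
    """
    log_w = -(e_trial - e_sampling) / temperature
    w = np.exp(log_w - log_w.max())          # stabilise the exponentials
    w /= w.sum()
    return w @ d_samples                      # reweighted thermal averages

def chi2(d_mean, d_native, sigma=0.1):
    """Mismatch between the reweighted averages and the native distances."""
    return np.sum(((d_mean - d_native) / sigma) ** 2)

def optimisation_step(b_matrix, d_samples, d_native, e_sampling, energy_fn, rng, step=0.05):
    """One zero-temperature random step: perturb the interaction matrix and
    keep the trial only if the reweighted chi-square decreases.
    energy_fn(b) must return the energies of the recorded conformations
    evaluated with the interaction matrix b."""
    trial = b_matrix + step * rng.standard_normal(b_matrix.shape)
    d_cur = reweighted_mean_distances(d_samples, e_sampling, energy_fn(b_matrix))
    d_try = reweighted_mean_distances(d_samples, e_sampling, energy_fn(trial))
    return trial if chi2(d_try, d_native) < chi2(d_cur, d_native) else b_matrix
```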
in the case of protein g and sh3 there is a sharp drop of both and of the average rmsd in the first 20 iterations. protein g reaches a stationary and an average rmsd of nm, while sh3 reaches and an average rmsd of nm. interestingly, while the average rmsd reaches its stationary value around the 20th iteration and remains stationary from then on, the takes a longer time to find its minimum, indicating that the rmsd does not capture completely all structural features of the native state. the behaviour of villin is noisier, most likely because its size is smaller than that of the other two proteins. nonetheless, it converges to and nm after 100 iterations. the minimum-energy conformations found with the interaction matrix obtained in the last iteration of the optimization process are displayed in figure [fig:native] for each of the three proteins. the rmsd to the experimental native conformations is nm for villin, nm for protein g and nm for sh3. no low-energy conformations with rmsd markedly larger than these are observed. the thermodynamic properties of the three proteins as a function of temperature, calculated with a weighted histogram algorithm, are summarized in figs. [fig:cv_villin], [fig:cv_protg] and [fig:cv_sh3], respectively. all of them display two peaks in the specific heat. the lower-temperature one (at for villin, for protein g and for sh3) marks the folding transition, as testified by the change in average rmsd (calculated on all heavy atoms) and in the fraction of native contacts that takes place at those temperatures. the higher-temperature peak ( for villin, for protein g and for sh3) corresponds to the coil-globule transition (cf. the change in average gyration radius at those temperatures). in agreement with the experimental findings, and not unexpectedly because of their difference in size, villin turns out to be less stable than protein g (folding temperatures at neutral ph are for villin and for protein g), and its folding transition less cooperative. in fact, the ratio between the calorimetric and the van't hoff enthalpy, which takes its minimum value of 1 for a pure two-state transition, results from model calculations to be for villin and for protein g, to be compared with the experimental values for villin and for protein g, while it is for our model of sh3. however, in all cases the model underestimates the two-state character of the folding transition, as already observed for other models which only include two-body interactions. it should be noted that the model displays a folding transition for all three proteins at temperatures lower than 1, that is, the temperature at which the interaction potential has been optimized to reproduce the native distances. this suggests that the computational limitations in the optimization of the interaction matrix result not so much in errors in the conformational properties of low-temperature states, but in a decreased thermodynamic stability. the free energies of the three proteins as a function of the rmsd and of the gyration radius, calculated with a weighted histogram algorithm, are displayed in fig. [fig:free] for a temperature below the folding transition, a temperature between the folding and the coil-globule transition, and a temperature above the coil-globule transition. for none of the proteins does the free energy profile highlight detectable intermediates. the globular denatured state (at ) is in all cases rather native-like, displaying rmsd of the order of 0.5-0.6 nm.
the reason for such a low rmsd is the formation of residual, largely native-like, structure in the denatured state, as shown in fig. [fig:stride]. in the case of villin, residual alpha-helical structure is larger in the n-terminal segment, slightly smaller in the c-terminal segment, and marginal in the central segment. these ratios are in agreement with circular dichroism spectra of isolated fragments of villin and with explicit-solvent molecular dynamics simulations. the denatured state of protein g displays native-like residual structure in the two hairpins and in the helix, but not the non-native turns observed in the acid-denatured state by nmr. the denatured state of sh3 is enriched in beta-strand structure, a feature that is not observed in nmr experiments with urea, which instead indicate an abundance of non-native helices. there can be two straightforward reasons for this discrepancy. first, our model simulates a thermally denatured state, while in the nmr experiments the protein is destabilized by urea. moreover, while the agreement with experiments for the other two proteins concerns native-like structure, in the case of sh3 the model is not able to predict non-native residual structure. this could be due to the fact that the optimization of the potential to stabilize the native conformation over-minimizes the frustration of the system. however, a nice feature of the present approach is that, in principle, one can optimize the interaction matrix to reproduce the native distances at low temperature and, simultaneously, the data observed in the denatured state at higher temperature. in the case of protein g and sh3, the free energy changes of the native state upon mutation were measured for a large number of mutations. within the present model, the relatively small computational cost of sampling the conformational space makes it possible to simulate the effect of each mutation and to compare the result with the experimental data. these simulations have two goals. first, the comparison between experimental and calculated can contribute to validating the model. moreover, the simulation has access to conformational properties of the mutated system that cannot be studied experimentally in a direct way. operatively, a mutation means changing the atom types of the mutated residue, which then interact through the same matrix elements as in the wild-type protein (no further optimization is carried out), and updating the secondary-structure propensities in the dihedral potential. for each of the mutations reported for protein g and sh3 we have carried out an equilibrium simulation, obtaining the free energy profile of the wild-type ( ) and of the mutated ( ) protein, as a function of rmsd and of the exposed area of the tryptophans. the reason for the choice of is that the experimental were obtained from kinetic experiments in which the measured quantity is the fluorescence of the tryptophans, which depends on their molecular environment.
from these free energies, we have calculated the free energy differences in a two state approximation , that is where \nonumber\\ p_n^{mut}&\equiv\int_{\cal n}d\,\text{rmsd}\,da_w\;\exp[-f^{mut}(\text{rmsd},a_w)/t ] \end{aligned}\ ] ] and the native region in the free energy profilesis that defined in fig .[ fig : freemut ] .the comparison between experimental and computed is displayed in figs .[ fig : mut_g ] and [ fig : mut_sh3 ] for protein g and sh3 , respectively .the correlation coefficients are , respectively , 0.57 and 0.50 , which increase , respectively , to 0.79 and 0.73 if we exclude four outliers .these values correspond to the optimal choice of the native region .interestingly , such outliers correspond to sites which display in the calculations large native like structure or does not display the non native secondary structures measured by nmr .consequently , one could make the hypothesis that the poor agreement between theoretical and experimental is associated with the overstimation of native structure in the denatured state discussed in the previous section .moreover , it is interesting to note that the good overall correlation with the experimental data can be obtained only defining the native state using rmsd and . calculating the values of and as integral over rmsd and gyration radius ( cf .[ fig : free ] ) , on rmsd only , or on only give correlations in the range 0.20.4 .the reason for this difference in the results seems to be that rmsd and are less correlated than rmsd and ( cf . figs .[ fig : free ] and [ fig : freemut ] ) , and consequently are better in defining the native region . specifically , the effect of mutations increase the probability of conformations with values of larger than that of the wild type protein , mantaining a rather small rmsd , as shown in fig . [fig : freemut ] .a nice feature of the potential developed above is that , being defined with respect to atom types ( and not on atom identifiers , like in go models ) , it include some degree of frustration , which is known to be present in proteins .one can thus inspect the energy map of the native conformations of the three proteins already discussed , to identify repulsive contacts , defined as those displaying .such contacts are highlighted in red in fig .[ fig : frust ] .there are 11.9% frustrated contacts in villin , 8.6% in protein g and 20.3% in sh3 , numbers that are comparable to those found in similar calculations carried out with other potentials . in the case of villin, they are localized mainly in the third helix and in the tertiary contacts between the first helix and the other two .this agrees qualitatively with the result of a similar investigations carried out with the help of an evolutionary derived potential and of an associative memory potential , which emphasise the frustration of contacts within the third helix and between the first and the second helix . in the case of protein g ,the present model identifies frustrated contacts in the helix and in the terminal part of the first hairpin , while the associative memory potential in the helix and in the secon hairpin . 
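before turning to sh3, note that this classification amounts to a sign check on the optimised pair energies of the native contacts; the sketch below takes a positive pair energy as the criterion (the precise threshold used in the text is not reproduced here, and the array names are hypothetical):

```python
import numpy as np

def frustrated_fraction(native_contacts, b_matrix, atom_types):
    """Fraction of native contacts whose optimised pair energy is repulsive.

    native_contacts : list of (i, j) atom-index pairs in contact in the native state
    b_matrix        : optimised interaction matrix indexed by atom type
    atom_types      : atom_types[i] is the type index of atom i
    """
    energies = np.array([b_matrix[atom_types[i], atom_types[j]]
                         for i, j in native_contacts])
    frustrated = energies > 0.0      # repulsive native contacts (sign criterion assumed)
    return frustrated.mean(), frustrated
```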
in the case of sh3, the optimized potential reveals frustrated contacts in the terminal beta - sheet , between the rt loop and the distal hairpin and in the stem of the distal hairpin , while the associative memory potential in the stem of distal hairpin and in the stem of of the rt loop and the evolutionary derived potential in the stem of the distal hairpin , in the rt loop , between these two and in the n src loop .the small differences observed in the frustration maps generated in present and in other works are most probably due to the fact that our potential is atom - based , while the others are amino acid - based .this means that repulsive and attractive interaction between pairs of atoms between two given amino acids , as predicted by the present model , can sum together to give a total interaction which can be either repulsive or attractive .consequently , the present model provide an information which is complementary to that of the other two .in spite of the continuously growing capability of algorithms and computers to perform longer simulations of larger systems with portable , explicit solvent potentials , implicit solvent models of biomolecules interacting with simplified potentials can still be useful for many applications , like very - large systems , mutation scans and aggregation studies .so far , this kind of problems were tackled making use of go models , which neglects the residual frustration present in all proteins . the model discussed in the present workis based on an optimization of the matrix which controls the interaction between atom types to make the experimental native conformation as the low temperature equilibrium state of the system .this model can reproduce a number of known data about proteins , like the stability of their native state , the two state transition , the energetic effect of mutations on their stability , while still displaying a realistic degree of frustration .we think that the strength of this approach is its versatility .one can use as input for the optimization of the potential any set of experimental data , even an heterogeneous one , provided that they can be expressed as thermal averages of some conformational property .for example , we showed that the structure of the denatured state of the proteins used in the present work is not in complete agreement with the nmr data in denaturing conditions .this is not really unexpected , since the input data we used describe the native conformation , and consequently the predictions of the model can not but worsen as they involve states which are distant from the native state . to improve the model, one can thus introduce in the optimization data concerning the denatured state .moreover , this approach can be used to correct existing potentials , even in explicit solvent , for specific goals .it is enough to use the potential to be corrected as initial potential of the optimization procedure .99 s. piana , k. lindorff - larsen and d. e. shaw , j. phys .chem b * 117 * , 12935 ( 2013 ) .g. r. bowman , v. a. voelz and v. s. pande , j. am .* 133 * , 664 ( 2011 ) .l. sutto and f. l. gervasio , proc .usa * 110 * , 10616 ( 2013 ) . c. b. anfinsen , science , * 181 * , 223 ( 1973 ) .r. a. goldstein , z. a. luthey schulten and p. g. wolynes , proc .usa * 89 * , 4918 ( 1992 ) m. c. prentiss , c hardin , m. p. eastwood , c. zong and p. g. wolynes , j. chem .theo . comp . * 2 * , 705 ( 2006 ) .l. a. mirny and e. i. shakhnovich , j. mol. biol . * 264 * , 1164 ( 1996 ) .m. h. hao and h. a. 
scheraga , proc .usa , * 93 * , 4984 ( 1996 ) .f. seno , and a. maritan , proteins struct .. gen . * 30 * , 244 ( 1998 ) .f. seno , c. micheletti and a. maritan , a. phys . rev81 * , 2172 ( 1998 ) .d. shirvanyants , f. ding , d. tsao , s. ramachandran and n. v. dokholyan , j. phys .b * 116 * 8372 ( 2012 ) .s. kimura , m. caldarini , r. a. broglia , n. v. dokholyan and g. tiana , proteins struct .( in press ) n. go , annu .. bioengin .* 12 * , 183 ( 1983 ) .j. d. bryngelson and p. g. wolynes , proc .usa * 84 * , 7524 ( 1987 ) .a. h. elcock , plos comp . biol . * 2 * , e98 ( 2006 ) . c. clementi , curr .* 18 * , 10 ( 2008 ) .r. d. hills and c. l. brooks , int .sci . * 10 * , 889 ( 2009 ) .k. wolff , m. vendruscolo and m. porto , phys . rev .e * 8 * , 041934 ( 2011 ) .a. b. norgaard , j. ferkinghoff - borg and k. lindorff - larsen , biophys .j. , * 94 * , 182 ( 2008 ) .s. c. lovell , j. m. word , j. m. , richardson and d. c. richardson , proteins struct .. gen . * 40 * , 389 ( 2000 ) .d. t. jones , j. mol .biol . * 292 * , 195 ( 1999 ) r. swendsen and j. wang , phys .rev . lett . * 57 * , 2607 ( 1986 ) .j. shimada , e. l. kussell and e. i. shakhnovich , j. mol ., * 308 * , 79 ( 2001 ) . see supplemental material at [ url will be inserted by aip ] for more details about the model and the omptimization scheme .g. toulouse , commun .* 2 * , 115 ( 1977 ) .e. i. shakhnovich and a. m. gutin , proc .natl . acad .usa * 90 * , 7195 ( 1993 ) .a. ferrenberg and r. swendsen , phys .lett . , * 63 * , 1195 ( 1989 ) .r. godoy - ruiz , e. r. henry , j. kubelka , j. , hofrichter , v. muoz , j. m. sanchez - ruiz and w. a. eaton , j. phys .b , * 112 * ) , 5938 ( 2008 ) .p. alexander , s. fahnestock , t. lee , j. orban and p. bryan , biochemistry * 31 * , 3597 ( 1992 ) p. l. privalov and n. n. khechinashvili , j , mol .biol . * 86 * , 665 ( 1974 ) .h. s. chan , proteins struct .* 40 * , 543 ( 2000 ) .y. tang , d. j. rigotti , r. fairman and d. p. raleigh , biochemistry * 43 * , 3264 ( 2004 ) . l. wickstrom , a. okur , k. song , v. hornak , d. p. raleigh and c. l. simmerling , j. mol . biol . * 360 * , 1094 ( 2006 ) .n. sari , p. alexander , p. n. bryan and j. orban , biochemistry * 39 * , 965 ( 2000 ) .i. rsner and f. m. poulsen , biochemistry , * 49 * , 3246 ( 2010 ) .e. l. mccallister , e. alm and d. baker , nature struct . biol .* 7 * , 669 ( 2000 ) .v. grantcharova , d.riddle , j. santiago and d. baker , d.nature struct . biol . * 5 * , 714 ( 1998 ) .f. eisenhaber , p. lijnzaad , p. argos , c. sander and m. scharf , j. comput . chem . * 16 * , 273 ( 1995 ) .m. jenik , r. g. parra , l. g. radusky , a. turjanski , p. g. wolynes and d. u. ferreiro , nucl .acid res . *40 * , w348 ( 2012 ) s. lui and g. tiana , j. chem .phys , * 139 * , 15103 ( 2013 ) between average and native distances ( red curve , in semi log scale ) and the average rmsd to the experimental native conformation ( blue curve ) as a function of the number of iterations of the mc sampling for villin ( upper panel ) , protein g ( middle panel ) and sh3 ( lower panel ) . ]
the current capacity of computers makes it possible to perform simulations of small systems with portable, explicit-solvent potentials, achieving a high degree of accuracy. however, simplified models must be employed to explore the behaviour of large systems or to perform systematic scans of smaller systems. while powerful algorithms are available to facilitate the sampling of the conformational space, successful applications of such models are hindered by the scarcity of potentials simple enough to reproduce the known properties of the system satisfactorily. we develop an interatomic potential that accounts for a number of properties of proteins in a computationally economical way. the potential is defined within an all-atom, implicit-solvent model by contact functions between the different atom types. the associated numerical values can be optimised by an iterative monte carlo scheme on any available experimental data, provided that they are expressible as thermal averages of some conformational properties. we test this model on three different proteins, for which we also perform a scan of all possible point mutations with explicit conformational sampling. the resulting models, optimised solely on a subset of native distances, not only reproduce the native conformations to within a few angstroms of the experimental ones, but also show the cooperative transition between native and denatured state and correctly predict the measured free energy changes associated with point mutations. moreover, unlike other structure-based models, our method leaves a residual degree of frustration, which is known to be present in protein molecules.
paradoxes hit us with some mystery .the simpler and the shorter their formulation , the more appealing and the more frequently they are discussed on scientific fora and at meetings .paradoxes challenge our understanding and often insist on conceptual clarity .they have appeared about everywhere in scientific discourse , some having a long history .an ancient example is democritos paradox , as described for example in _ nature and the greeks _ by erwin schrdinger : take a cone , or make it something more tasty like a pear , and slice it anywhere parallel to its base .the two circular faces thus produced must have the same area ; they just perfectly fit together .but , if they are equal in size , how could the pear ever get its cone - shape ?that paradox touches at the foundations of statistical mechanics because it deals with the relationship between atomism as a physical theory and the continuum nature of macroscopic objects .it witnesses to the long struggle , as apparent also in zeno s paradox , of reconciling the continuum with discreteness , of describing physical limits in a correct mathematical language when starting from corpuscular concepts of nature .another famous now 20th century example is the twin paradox originating in special relativity and first described by paul langevin .the problem with the twin has little or nothing to do with statistical mechanics ( except when one would discuss the influence of traveling on metabolic processes ) and no confusion on atomism or on thermal aspects can arise .it just teaches us about the essential structure of minkowski space - time and the invariant meaning of proper time .however , when combined with quantum field theory , the problem touches on the one of accelerating travelers observing black - body radiation at the unruh temperature , having the same form as the hawking temperature of a black hole .time after time then paradoxes induce discussions on the physical interpretation of the formalism of our best theories or on possible extensions or unifications .+ sometimes however history repeats itself , and paradoxes are brought forward that have been answered long ago .it probably means that the paradox is still much alive and deserves further elaboration when formulated in a new context .it could also suggest that simple things have been forgotten or that sloppy thinking has gone unnoticed . whatever is the case ,the problems or paradoxes that are discussed below are very present in today s scientific communication .they appear in introductions of talks ; there are books written about them , and the web has very many entries repeating naive mistakes against some of the foundations of statistical mechanics .it is likely that there are different and much more interesting and deeper issues associated to the horizon problem and to the information paradox , but the present contribution discusses these paradoxes as encountered most often in the streets of physics . 
and the suspicion of th .smiths is then unavoidable : can there not just be some misunderstanding here of basic statistical mechanics ?+ as the title suggests , the present paper is not truly a specialized scientific one in the more traditional sense .it is more popularizing and perhaps provocative at times , hoping it can stimulate discussions on fundamental questions in high energy physics from the point of view of statistical mechanics .a more technical paper concentrating on sharper discussions of the information paradox is under construction ( joint work with wojciech de roeck ) .the usual statement of the so called _ horizon problem _ is at best naive and at worst fundamentally misconceived .here is the wikipedia version ( 20 february 2015 ) : _ the horizon problem is a problem with the standard cosmological model of the big bang which was identified in the late 1960s , primarily by charles misner .it points out that different regions of the universe have not `` contacted '' each other because of the great distances between them , but nevertheless they have the same temperature and other physical properties .this should not be possible , given that the transfer of information ( or energy , heat , etc . ) can occur , at most , at the speed of light ._ the notion of thermodynamic equilibrium has various aspects , and it obviously depends on considered spatio - temporal scales and types of observations . forgetting much of these last subtleties and speaking operationally , equilibrium is the thermodynamic condition of macroscopic systems where there is a homogeneous temperature , chemical potential(s ) and pressure . in that way , an equilibrium system has no systematic currents or collective motions of energy or particles , subsystems are again in equilibrium and their dynamical condition is that of detailed balance , reversibility and not showing any difference between evolution forwards or backwards in time .many more features can be added , and depending on the physical situation ( e.g. on how the system is open to the environment ) various ensembles can be used to describe equilibria mathematically , each governed by their own thermodynamic potential ( and corresponding variational principle ) and associated gibbs machinery with its thermal probabilities as introduced also by maxwell and boltzmann . the theory and its mathematicsare very well developed , a highlight of 20th century physics , including the fundamental understanding of phase transitions , critical phenomena and possible instabilities related to long range interactions ( such as the jeans instability for gravity ) , .+ there is however a deeper statistical understanding of the equilibrium condition , which starts from basic observations on the particularities of systems composed of a very large number of constituents .to unveil already the key - ingredient : the law of large numbers is to be expected to play a fundamental role in any description in terms of additive quantities involving a massive number of terms .+ i start from the simplest set - up for the phase space of classical mechanical systems at fixed energy , volume and particle number .it represents all the allowed microscopic states ( positions and momenta of all the particles in the system ) and indeed we take as working hypothesis that all states on that energy surface are equally probable ( microcanonical distribution ) . 
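in formulas (a standard textbook writing, added here only for definiteness), the microcanonical measure on the energy shell reads
\[
\mu_E(dx) \;=\; \frac{\delta\bigl(H(x)-E\bigr)\,dx}{\int \delta\bigl(H(x)-E\bigr)\,dx},
\qquad x=(q_1,\dots,q_N;p_1,\dots,p_N),
\]
which is just the liouville measure restricted to the shell of energy and normalized.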
that liouville measure is rather natural , unbiased as it is and invariant for the mechanical evolution .indeed the hamiltonian evolution defines a flow in that space , which is reversible and incompressible .+ we can try to classify the macroscopic conditions of the system by first defining a number of macroscopic quantities and to see what are the possible values .the transition from microscopic states to macroscopic condition ( values of macroscopic variables ) is formalized by a many - to - one map . generally is achieved through spatial averaging ( as when computing a density ) or by counting averages ( like the fraction of particles with a given property ) .that map roughly induces a partition on the constant energy surface , dividing it into patches of all states that have the same macroscopic value .the largest patch is the condition referred to as equilibrium .that induces a partition of the phase space in `` rooms '' or phase space regions that distinguish between macroscopic conditions .+ it goes without saying that the coarse graining is physically inspired and the choice of macroscopic variables is not completely arbitrary : we prefer macroscopic descriptions that are sufficiently simple and yet in a way , are dynamically closed ( e.g. giving rise to first order dynamical evolutions on hydrodynamic scales ) .here there is a role for the specific dynamics but in fact already earlier the hamiltonian has entered as we are fixing the energy and we only consider microscopic states with that energy .+ the rest of the story is statistical and is based on another property of the relevant macroscopic quantities : they are arithmetic averages over space or over the various particles of local quantities . for example, the density in mass or energy can show a macroscopic profile as made from the various local concentrations of masses or energy .it is here that the law of large numbers starts to play : typically , when randomly selecting a phase space point , it belongs to the phase space region of ` thermal equilibrium , ' where macroscopic quantities take their equilibrium values .it is by very far the largest room in the phase space , as visualized in the figure above . for many - particle systemsthe equilibrium region will be overwhelmingly huge compared to the other ( nonequilibrium ) regions and in that way the equilibrium values are _ typical _ values just as in the law of large numbers .no details about the system or its dynamics have been specified yet except that we fixed the energy and that we want our macroscopic description to be relevant and ( in a sense ) complete ( for macroscopic autonomy ) .irrespective of that , it tells us that equilibrium is the most probable condition from the macroscopic point of view .+ perhaps we can as well remember the * boltzmann entropy * , given by where of a phase region denotes its liouville volume , always given the constraints on , and .it quantifies the plausibility of a macroscopic condition , in the sense that }{\text{prob}[m ' ] } = e^{[s(m ) - s(m')]/k_b}\ ] ] where stand for macroscopic conditions , outlook , values , ... take here into account that the entropy scales with the number of particles , so that an entropy change of order 1 joule / kelvin is easily reached in the kitchen .such an increase of entropy easily gives rise to a factor for the ratio of probabilities .+ i repeat that equilibrium ( as sensed for example by homogeneous temperatures ) is in that sense quite the opposite of `` special . 
'' hereis how the master himself was saying that : + _ one should not forget that the maxwell distribution is .... in no way a special singular distribution which is to be contrasted to infinitely many more non - maxwellian distributions ; rather it is characterized by the fact that by far the largest number of possible velocity distributions have the characteristic properties of the maxwell distribution , and compared to these there are only a relatively small number of possible distributions that deviate significantly from maxwell s . _( ludwig boltzmann , 1896 ) + wait , do not read on without having understood that citation .the horizon problem belongs to the original theoretical motivations for inflationary cosmology .the general feeling there is that , as in the standard cosmological model no causal process can establish thermal relaxation within the presently observable universe , we need inflation ( a period of accelerated expansion ) to account for the _ observed _ thermal homogeneity. there may well be other reasons to believe in inflation or , what would be even better , solid observational evidence for the process of inflation , but that is not discussed here .+ here is the usual presentation of the horizon argument , : there is something special and even very implausible about having equal temperatures in distant not causally connected regions of our universe , as was measured with relative fluctuations of on the wmap for cosmic microwave ( black body ) radiation and confirmed by the planck esa mission . that specialness of equal temperatures can be removed and hence understood by an inflation scenario ( somehow pushing back in time the big bang ) as that would allow thermalization via causal contact to have taken place after all .we refer to the nasa - page for a summary of that standard formulation . in the following sectioni address the above two claims .i start with a summary of the answer .equal temperatures are the rule in equilibrium and equilibrium is typical .if indeed temperature is found to be almost homogeneous , then there is no need and indeed no use in explaining that via thermal relaxation unless you know for sure that the temperatures were not equal at a previous time , if defined at all .in fact , the universe never was and still is not in equilibrium concerning the gravitational degrees of freedom ( including products as ourselves ) , but there is no reason to doubt its large scale homogeneity in temperature at any moment in its evolution . + here is my evaluation of the reasoning based on the above boltzmann picture : no , on the contrary , there is nothing special about equal temperatures .in fact , equal temperatures are typical for all regions which are solely constrained to conservation of energy .if we imagine the universe with the standard cosmology according to a friedmann - lematre - robertson - walker geometry with at an initial time shortly after the big bang an arbitrary matter distribution with a given total energy , then we can and should expect uniform temperature all over .that is just the statement that equilibrium is typical . as a matter of logic, thermalization makes the universe less special so that thermalization can not explain specialness ; the universe would have needed to be more special before . 
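to make the counting behind "equilibrium is typical" concrete, here is a toy illustration (a sketch added for this discussion, not part of the original argument): distribute n non-interacting particles over two equal halves of a box and count the fraction of the 2^n microstates in which each half holds its share of particles to within one percent.

```python
from math import lgamma, exp, log

def log_binom(n, k):
    """Logarithm of the binomial coefficient C(n, k)."""
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def equilibrium_fraction(n, tolerance=0.01):
    """Fraction of the 2**n microstates whose left-half occupation lies
    within a relative `tolerance` of the balanced value n/2."""
    lo, hi = int(n / 2 * (1 - tolerance)), int(n / 2 * (1 + tolerance))
    return sum(exp(log_binom(n, k) - n * log(2)) for k in range(lo, hi + 1))

for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9}: fraction of uniform-looking microstates = {equilibrium_fraction(n):.6f}")
```

already for a million particles essentially every microstate compatible with the constraint looks uniform at the one-percent level; for macroscopic particle numbers the dominance is overwhelming, which is exactly the content of boltzmann's words quoted earlier.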
in other words requiring thermalization is not only not needed ; it is worse than useless .+ the above comments concerning the horizon problem are not original ; they have been around for some time , and have been written down more or less in the same way by a number of people ; see in particular the analysis by roger penrose on the horizon problem and personally i have greatly benefitted from discussions with shelly goldstein ; see also .+ we can also add that a rough calculation ( in communication with frederik denef ) based entirely on equilibrium fluctuation theory shows that the fluctuations as measured by the wmap are bigger ( not _ smaller _ ) than expected .the calculation uses that for black body radiation the specific heat scales like . for volume we take one pixel of the wmap cmb image , which is a volume much bigger than a cube light year .for we can take a temperature of about 1000 kelvin at the time of cmb emission . the relative standard deviation in energy scales like .hence we get . +it should not surprise us however that there is such a deviation as clearly gravitational degrees of freedom are extremely important here .the universe was indeed very special at the big bang .as far as we know and can reasonably assume ( also based on the fact that all verified calculations on nuclear synthesis and chemical reactions in the early universe are based on standard equilibrium thermodynamics ) , there appears to have been a thermal equilibrium but not for the gravitational degrees of freedom which prefer a clustered ( macroscopic ) matter distribution .ever since , the universe is relaxing its gravitational degrees of freedom along the einstein equation of general relativity . at any rate ,if we want to know why the universe was initially so very special for its macroscopic gravitational / geometric condition , we need completely different arguments from those alluded at in the horizon problem . if we enquire why it was thermally in equilibrium , there is and remains boltzmann s answer : that is only normal ; it is a matter of counting .+ note that the above is not saying that contact or dynamics would not play a role in the _ approach _ to equilibrium .derivations of e.g. diffusive behavior are not at all simple . but even there , it is not _just _ the dynamics that matters and statistical reasoning on typicality of initial conditions will play a crucial role .maxwell characterized a proposed reduction of the second law of thermodynamics to a theorem in dynamics , with + _ as if any pure dynamical statement would submit to such an indignity _( letter to tait , 1876 ) .the truth of the second law is as in a statistical theorem , _ of the nature of a strong probability ... not an absolute certainty _ like dynamical laws .( foreword in a book by tait , 1878 ) . in the abovei have in boltzmann s picture not addressed specifically the case of long range interactions ( such as in newtonian gravity ) .the ideas are indeed not different , but the consequences look different than for a dilute gas . 
as i said , gravity typically leads to clustering and that is not at all in contradiction with boltzmann s ideas , on the contrary .note here that some problems can be created ( not solved ) by taking the so called canonical ensemble for treating a system of particles with ( very ) long range interactions ; as is well - known the canonical and microcanonical ensemble need not always be equivalent and there are special problems with the physical meaning of the canonical ensemble when the difference between bulk and boundary fades .the microcanonical treatment for systems with gravity does not present any special conceptual differences with that of dilute gases , but of course equilibrium looks totally different .we have thus also emphasized that the hot big bang exactly by its supposed matter homogeneity is a very nonequilibrium state of affairs .yet , an even better understanding of gravity from a statistical mechanical point would certainly be welcome .more specifically i have in mind that a good unison between general relativity and ( nonequilibrium ) statistical mechanics has not at all been found .the book by richard tolman , _ relativity , thermodynamics and cosmology _ , written in 1934 must be revisited , at last .in fact only in the last decade or so a renewed interest has been observed in kinetic relativistic gas theory and that is still restricted to special relativity .so the horizon problem should get us moving , and remains valuable , if not as motivation for an inflation scenario , then to get us to start thinking about questions as : + - what are the kinetic constraints in the geometric relaxation of the universe to equilibrium ?+ - what are the roles of expansion and gravitational instability in a statistical mechanical description of the universe ?+ - what are the relevant and correct macroscopic variables in a geometric theory of gravity ? how to quantify here the distance to equilibrium ?is there an associated boltzmann entropy which satisfies an h - theorem ? + what is the quantum statistical mechanical equilibrium of a gravitational system hawking radiation ?+ - how can one formulate the balance equations of irreversible thermodynamics in general relativity ?the so called information paradox consists of multiple questions and problems related to the construction of a quantum theory of gravity .it turns out that our understanding today is not optimal , and in particular that shows up in various specific attempts that run into inconsistencies .that is very normal for a scientific domain in full development , but it is not very specific .in fact it is not easy to get a sharp and precise version of the paradox .that black hole information paradox appears in many different versions , changing in time and depending on the source .what it is generally accepted to imply is that our usual effective descriptions do not seem to work .it is heard that our effective quantum field theory in which we have good reasons to trust , produces conflicting or inconsistent results .surely it could very well be that the quantum field theory that is usually applied there is inappropriate , or must be extended ; the claim that follows below is much more conservative : even _ within _ the usual scheme of quantum field theoretical understanding of hawking radiation , the problem is false ; there is no inconsistency for th. smiths . 
or , the usual arguments leading to the paradox are not well founded statistical mechanically .does it mean that we do understand a quantum theory of black holes ?no , but let us concentrate on the real problems , and the information paradox if formulated in any sharper way does not appear to be one of them . hereis again a wikipedia version ( 20 february 2015 ) : _ physical information seems to ` disappear ' in a black hole , allowing many physical states to devolve into the same state , breaking unitarity of quantum evolution . from the no - hair theorem, one would expect the hawking radiation to be completely independent of the material entering the black hole .nevertheless , if the material entering the black hole were a pure quantum state , the transformation of that state into the mixed state of hawking radiation would destroy information about the original quantum state .this violates liouville s theorem and presents a physical paradox ._ these previous lines do return in many of the ( more ) relevant references , but i do not include here a list of the most popular ones .there is for example the book _ an introduction to black holes , information and the string theory revolution .the holographic universe _ by leonard susskind and jameson lindesay , or the collection of papers in _ quantum aspects of black holes _ , ed .xavier calmet , springer fundamental theories of physics , .we do not comment on the way entropy or the second law are presented there , and move directly to the core of the argument ( basically chapters 8 - 9 in and the paper on the firewall phenomenon by r.b .mann in ) .we also do not discuss the precise meaning of the firewall proposal and how its introduction was thought to avoid some `` entanglement paradoxes , '' .+ to find a somewhat precise formulation of the information paradox is not so easy for th .there are many web - entries , some very instructive as prepared e.g. by samir mathur .a very clear version is in , also having the advantage of being fairly recent and containing discussions of previous remarks and proposed changes .my own version of it follows now and i try to summarize the useful discussions i had with wojciech de roeck ; let me cut the story in pieces : 1 . first there is the condition of unitarity which is emphasized .that is not much more than to say we want a description starting from microscopic mechanics .whatever happens in the formation process or evaporation process of a black hole , the evolution is unitary , mapping the initial wave function to intermediate and final wave functions with a so called unitary -matrix .the unitarity is especially important here because it will lead to the mathematical identity essential to the paradox .we make the usual splitting of hilbert spaces between the singularity and the outer black hole ; we are just considering now the process of evaporation and radiation being emitted by the black hole .so we will consider ( for short ) the inner ( singular ) and the outer part of the black hole .there is the interior of the black hole ( behind the horizon , region b ) and there is the outside or the exterior ( region a ) . the total system ( a and b together ) are quantum mechanically described by a pure state , a wave function .it is unitarily evolved from another pure state where we put time zero at the beginning of the evaporation process as described by hawking radiation .if we want to consider the outside region a , we can integrate out the degrees of freedom in b. 
in quantum mechanics , that means we trace out , and , in contrast to classical states , we do not obtain in general another pure state describing the situation in b , but we obtain a density matrix similarly , there is a density matrix for the statistical distribution of the interior of the black hole .the fact that these are density matrices ( and not wave functions ) arises because the pairs of photons that are created at the horizon are entangled .they create in other words an entanglement between regions a and b. since the state was pure , it is a theorem that the entanglement entropy of region a equals that of region b. mathematically , that is an equality between von neumann entropies at all times .the von neumann entropy of the density matrix in the outer region must be equal to the von neumann entropy of the interior black hole .3 . consider the entanglement entropy of the outer region a ( left - hand side in ) .can we estimate that ? here is an important statement let us call it the statement of _ increased entanglement _: entanglement just increases with further pair creation .the reason and calculation which is given of that _ increased entanglement _ is that ( a ) the hawking radiation is thermal , and ( b ) the radiation is additive in pair creation .an observer external to the black hole will see a thermal state characterized by a density matrix and moreover purity of that state is never to be restored even when all the black hole is evaporated .the computation of increased entanglement is based on that thermal equilibrium distribution for black body radiation ( at the hawking temperature ) .we are now at page 17 in . in the words of the `` fuzzball person '' : _thus the entanglement can not go down ever , and thus the information can not emerge in the hawking radiation . _ + the increase of the left - hand side of is taken from the thermal entropy , which is then increasing alright with every creation .a detailed calculation can be found on pages 9093 in the book in the paper _ the firewall phenomenon _ of r.b .mann . to be sure, there is an upper bound on that entanglement entropy , at first growing linear in the steps of pair creation , and then saturating .( in bekenstein showed that there is an upper limit on the amount of entropy ( and thus information ) one can store in a chunk of space - time " of a certain radius ; a time - dependent version is found in . )also , there can easily be imagined corrections to thermality , but one shows that the increase of entanglement is stable ; see again p93 in , or p6 - 8 in based on .+ secondly and moreover , the calculation for increased entanglement uses that the cumulative pair creation works additively in the entanglement entropy .every new pair creation adds an elementary unit to the entanglement entropy .4 . meanwhile , what happens to the region b ?well , by the hawking radiation the black hole evaporates and its entropy starts to become smaller and smaller .the radiation in a grows and region b gets smaller . at a certain moment ( beyond the so called page time ) , the region b is getting smaller and smaller and therefore its entanglement entropy which is always smaller than its real ( thermodynamic ) entropy must also decrease to zero .if there is almost nothing left of the black hole , then its entropy gets very small and hence the right - hand side of must decrease to zero .5 . now comes the paradox . 
in the equality , the previous lines just showed that the right-hand side goes to zero . but then the entanglement entropy in region a also decreases in time , which contradicts the _ increased entanglement _ , that the radiation remains entangled , that the density matrix never restores purity , as was inferred before from taking it thermal . a large number of `` solutions '' and `` answers '' have been proposed ; too many to include all of them . i start by mentioning some of them very briefly ; some are rather involved and sometimes get very technical . compared with the simple line of arguing above , in general they appear somewhat off target . + the whole description is of course based on calculations that are approximate . hawking 's calculation showing that black holes emit thermal radiation uses a semi-classical calculation , say quantum field theory on a curved background ( in fact , quantum theory of a scalar field in the background of a large classical black hole ) . moreover the calculation uses locality , for example in assuming the decomposition of the hilbert space structure corresponding to regions a and b. nevertheless i would be very surprised if these assumptions and approximations led to such a disaster . after all , the horizon can be arbitrarily weakly curved when the mass of the black hole is large enough . i do not believe that quantum gravity effects can be relevant at such large length scales . + what is probably more valid , if one wants to criticize these effective theories , is that we had better use _ open effective field theory _ , i.e. , effective field theory of systems that are not closed . of course , then one must explain well the ultimate and cogent reasons for using these open system theories , but that is basically a problem of statistical mechanics we are used to , and some elements of it will be discussed in section [ thm ] . + the suggestion or solution of `` forgetting degrees of freedom '' has been entertained in e.g. ; see also , in which samir mathur explains why some common beliefs do not resolve the apparent puzzle , which sharpens significantly the nature of the information paradox as originally stated by hawking . it appears that the solution of the quantum information paradox is not to be found in these ideas of `` approximate '' thermalization , as i have already mentioned in point 3 above for _ increased entanglement _ . + a further objection to the paradox ( again to come back to in section [ thm ] ) states that the description of the black hole space-time by the schwarzschild metric is truly based on an exchange of limits . a collapsing shell of matter only becomes schwarzschild in an infinite time limit ; at all finite times it slightly differs . still i do not believe that this is the right answer to the firewall phenomenon described above . the solution is much simpler to state . + let me no longer postpone giving the answer of th . smiths : the thermality ( approximate or not ) of hawking radiation does not imply the _ increased entanglement _ , nor does the cumulative nature of pair creation . just from rereading the scenario in 5 acts of section [ form ] it is quite clear that the statement of _ increased entanglement _ must be false ; that is a matter of logic if one accepts all the rest . the entanglement entropy just goes to zero , indeed both the right-hand side and the left-hand side of . so the arguments that lead to that presumed ( entanglement ) entropy increase are wrong . why , how so ?
+ indeed, even a pure state can very well look like a thermal state for all practical purposes and for local observations ; the von neumann entropy or the shannon entropy are not continuous ; see e.g. .it is not because two density matrices very much resemble each other locally , that their von neumann entropies can not be drastically different , and the usual argument for _ increased entanglement _ is just and only based on that wrong premise .+ the real issue thus concerns the nature of the thermal description .what is really the meaning and the status of these density matrices and probabilities that seem to enter our description of being thermal ? how seriously should we take them ?the answer is , not too much . to give a trivial example : the liouville equation for a mechanical system implies that the shannon von neumann entropy of a density matrix ( or phase space distribution ) does not change in time , even though it can on the appropriate space time scale or for some class of observables be considered thermal .so it then describes equilibrium for many practical purposes and yet its shannon - von neumann entropy ( like in ) does not at all equal the thermodynamic entropy ( which itself however can be written as the shannon - von neumann entropy of a thermal density matrix ) ; see appendix [ sha ] for a toy - example .that density matrix ( solution of the liouville - von neumann equation ) is for example also quite irrelevant for a more microscopic understanding of the second law .similarly , a wavefunction for a many - body system can statistically reproduce a thermal state when evaluated for local observables , but that does not imply that the two are equal in all possible senses . certainly , their ( global ) von neumann entropy need not be equal .one should realize here that a wavefunction or a density matrix for a quantum many - body system contains much more information than for example the one - particle statistics .so it is not because you can reproduce locally the black body radiation spectrum for a big density matrix that it would entail that its von neumann entropy is even approximately equal to that of the corresponding thermal state .mathematically , entropy is not a continuous functional ( with respect to such weak metric ) .+ moreover , even when not inserting ( wrongly ) the `` mathematically thermal '' condition ( i.e. , even when not literally using thermal density matrices ) , one must still avoid a second mistake : that the pair creation as such which is additive , need not lead to increased entanglement .the reason is simply that the strict correlations with the inner degrees of freedom in the black hole easily get lost ; see appendix [ bag ] for a simple toy - example .+ the previous answer is just a detection of where was the mistake in the reasoning of section [ form ] . 
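to illustrate the point about local thermality versus global purity made above , here is a minimal numerical sketch ( not a calculation from the text ; the subsystem dimensions and the use of numpy are my own assumptions ) : a random pure state of a small region a coupled to a much larger region b has a reduced density matrix on a that is nearly maximally mixed , i.e. it looks thermal for any observation confined to a , while the global von neumann entropy is exactly zero and thus drastically different from that of the corresponding thermal ( maximally mixed ) state .

import numpy as np

# minimal illustration: a random bipartite pure state looks "thermal" locally,
# yet its global von Neumann entropy is exactly zero.
rng = np.random.default_rng(0)
dA, dB = 4, 256                       # assumed (hypothetical) subsystem dimensions

# random pure state |psi> on H_A (x) H_B, written as a dA x dB matrix
psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
psi /= np.linalg.norm(psi)

# Schmidt coefficients give the (common) spectrum of rho_A and rho_B
schmidt = np.linalg.svd(psi, compute_uv=False)
p = schmidt**2

def entropy(probs):
    probs = probs[probs > 1e-15]
    return float(-np.sum(probs * np.log(probs)))

S_A = entropy(p)                                  # entanglement entropy of region A
print("S(rho_A)             =", S_A)              # close to log(dA): locally 'thermal'
print("log(dA)  (max mixed) =", np.log(dA))
print("S(global pure state) =", 0.0)              # exactly zero by purity
print("S(global max mixed)  =", np.log(dA * dB))  # drastically different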
to th .smiths it would be like saying that the shannon entropy of the liouville evolved probability distribution equals the ( real ) thermodynamic entropy .that is certainly false , even though there may appear good reasons to say that the distribution is thermal indeed .hawking radiation is essentially pure but that does not require a breakdown of effective field theory .nevertheless that answer does not describe the mechanism of purification or how to calculate the true and correct entanglement entropy .that is a much more detailed and complicated question , which is typically not even tackled for much simpler systems .but it also seems to take us in the wrong direction , the quantum - mechanical description of black holes and of singularities probably requires much more interesting and important challenges , and it may well be that the answer depends strongly on the version of quantum mechanics that one considers , .not so very long ago , speaking of entanglement was considered ( bad ) philosophy . in those times , the information paradox was formulated quite differently from what is written above , but still with a similar flair as people have in general had less reservations to associate the word information with entropies .+ hawking s theorem in showing that black holes emit thermal radiation is then combined with another important fact about classical black holes , namely the no - hair " theorem : _ a stationary four - dimensional solution of the einstein - maxwell equations ( in lorentzian signature ) is uniquely characterized by its mass ( ) , angular momentum ( ) , electric charge ( ) , and magnetic charge ( ) . _see the reviews . + the thermal radiation by black holes together with the no - hair " theorem seem to suggest that one can take a pure state with charges , evolve it in time to form a black hole and then observe a thermal radiation coming out of the black hole .that is in a nutshell what was the original formulation of the `` information paradox '' since it seems to suggest that one can evolve a pure state into a thermal one thus breaking unitarity .( the version of the paradox presented above in section [ form ] is more specific and more recent . ) + also here ( and in relation with the firewall phenomenon ) various `` answers '' have been formulated .there is for example the claim ( contained e.g. in ) that using ads / cft ( a topic that got recently much attention ) one can understand exactly how information leaks out of an asymptotically ads black hole . those answers and in particular those based on ads / cft dualities may very well be correct , but is there not a much simpler issue here , which is felt intuitively by th .smiths ? + the reply here must be that the no - hair theorem already supposes a stationary limit - situation .but in such limiting regimes dissipative effects easily arise and are compatible with pre - limit unitary evolution .very often , to make things very sharp , we take some thermodynamic limit after which the ( reduced ) description is particularly simple .it can for example suffice to give energy , density and volume to completely describe a gas , or to give temperature and magnetization to describe a magnet etc .that is similar to the status of these no - hair theorems in black hole physics : they involve limiting procedures and considerations of asymptotic stationary behavior both in time and in degrees of freedom . 
in these same limits and for the appropriate variables ,autonomous dissipative evolutions appear rigorously as mesoscopic and macroscopic behavior with no other input than hamiltonian or unitary microscopic laws .the first example of that was probably the proof of the boltzmann equation for a dilute gas .perhaps that is related to the fuzzball proposal for black holes .the basic idea is that the black hole has microstates to account for its thermodynamic entropy and the typical microstate is some fuzzy " space time which has features on the scale of the horizon and at asymptotic infinity looks like minkowski space .one can then somehow average out over these fuzzy " geometries and obtain the black hole as a coarse - grained effective description of the physics .+ perhaps we are witnessing something similar to what happened with classical mechanics when confronted with hydrodynamics and thermodynamics . here are two historical examples both concerned with theorems , mathematical rigorous work that has lead to paradoxes that are however solved by understanding their complete irrelevance for the true situation ( which mathematically amounts to exchanges of limits ) . + the first one dates from 1752 and contains what became known as the dalembert paradox , meaning the rigorous conclusion from classical mechanics that birds can not fly .( more precisely , dalembert saw that both drag and lift are zero in potential flow which is incompressible , inviscid , irrotational and stationary . )the resolution is of course found in the emergence of viscosity either from hamiltonian mechanics or from eulerian hydrodynamics , as explained through the validity of the navier stokes equation ( 18221845 ) , or more generally from microscopic ( statistical ) derivations of the second law ( as i had a chance to hint at in section [ 2nd ] ) . + the second one is a paper in 1889 by henri poincar where he shows that no monotonically increasing ( entropy ) function in time could be defined in terms of the canonical variables in a theory of -body hamiltonian dynamics . in that way he added to the so called irreversibility paradox , andindeed its solution shows that that poincar theorem is quite irrelevant ; see for example .to what comes of the horizon problem and the information paradox to the streets of statistical mechanics , there appears the following reply : + for the horizon problem : seeing equilibrium aspects such as homogeneous temperature of the background radiation can not be a problem if there is no reason to suppose it was different before .the gravitational degrees of freedom appear very far from equilibrium , yet not contradicting the large scale almost equal temperatures without previous causal contact .+ for the information paradox : unitary evolutions become effectively dissipative in a reduced description , but formal paradoxes can easily arise when interchanging limits of time and thermodynamic limit .macroscopic steady state descriptions are by their nature approximations , valid in some limiting regimes of spatio - temporal scales . 
that a statistical distribution is thermal for local observations or forall practical purposes does not need to imply that its shannon von neumann entropy coincides with that of the thermal distribution .in particular , the statement that the hawking radiation `` is '' thermal , does not include that the entanglement entropy as seen by the outside observer is the bekenstein hawking entropy or is not ultimately decreasing to zero .moreover , the cumulative or additive pair creation does also not imply a never - decreasing entanglement entropy of the radiation as the inner degrees of freedom of the black hole , while shrinking , can become much more internally correlated .( see some toy - examples in the appendices . )+ there have been times of great mutual interactions between researchers in statistical mechanics and in field theory .several ideas on dynamically broken symmetries and on collective phenomena have been shared and they shaped the landscapes of condensed matter and elementary particle physics alike .the renormalization group , universality , effective coupling , etc .remain key - concepts in all of theoretical physics .today there appear new opportunities , also for th .smiths , for joined initiatives and efforts at institutes for theoretical and mathematical physics , including work and discussions on a fluctuation theory for gravitation with special challenges regarding the statistical mechanics of the big bang and black holes .+ * acknowledgment * + 99 e. schrdinger , _ nature and the greeks_. cambridge university press , 1996 .f. bouchet and j. barr , statistical mechanics of systems with long range interactions .journal of physics : conference series * 31 * , 18-26 ( 2006 . ) m. k .- h .kiessling , the `` jeans swindle '' : a true story mathematically speaking .advances in applied mathematics * 31 * , 132-149 ( 2003 ) .the wikipedia initiative is often a source of good information and is very much appreciated .i quote it here simply because it thus represents the majority of writings on the subject .it also avoids embarrassing specific authors .what is `` usual '' is possibly subject of discussion , as it is somewhat subjective .smiths probably heard the formulation at talks and in popular accounts motivating inflation .there are of course also a wealth of professional texts saying the same thing . as a recent reference ,see for example + d. baumann and l. mcallister , _ inflation and string theory_. cambridge university press , 2015 .see the highlights of the wilkinson microwave anisotropy probe webpage http://wmap.gsfc.nasa.gov/universe/bb_cosmo_infl.html last visited 1 april 2015 , with the text : + _ the horizon problem : distant regions of space in opposite directions of the sky are so far apart that , assuming standard big bang expansion , they could never have been in causal contact with each other .this is because the light travel time between them exceeds the age of the universe . yetthe uniformity of the cosmic microwave background temperature tells us that these regions must have been in contact with each other in the past .+ since inflation supposes a burst of exponential expansion in the early universe , it follows that distant regions were actually much closer together prior to inflation than they would have been with only standard big bang expansion .thus , such regions could have been in causal contact prior to inflation and could have attained a uniform temperature . 
_there are other claims and formulations under the horizon problem , see again .a popular emphasis is on the high correlations between causally not connected regions or between temperature and other physical fields .i have not been able to find a precise formulation of that `` correlation - paradox '' which is not either the same as the `` more or less '' equal temperature problem , or which is a problem at all ( given what one means by correlations which of course need not be causal in general ; apple trees and pear trees do not cause their blossoming but the time of their flowering is about equal . )r. penrose , _ the road to reality : a complete guide to the laws of the universe_. vintage ; reprint edition ( january 9 , 2007 ) .carroll , in what sense is the early universe fine - tuned ?arxiv:1406.3057v1 [ astro-ph.co ] . + that paper also contains the complete solution of the so called flatness problem , another historical motivation for inflation . an earlier paper exactly hitting that target is + g. evrard and p. coles , getting the measure of the flatness problem . class . quantum grav . *12 * , l93-l97 ( 1995 ) .s. weinberg , gravitation and cosmology .new york , wiley , 1972 .+ `` unfortunately , we still do not have even a tentative quantitative theory of the formation of galaxies ... '' r. tolman , _ relativity , thermodynamics and cosmology_. ( dover books on physics ) january 20 , 2011 ; clarendon press oxford , 1934 . for example , it is not unusual to hear or read that the paradox is about information , with the key - question whether hawking radiation carries information or not .another emphasis regards the paradox as a contradiction between predictions of semi - classical gravity and expectations for a ( good ) quantum theory of gravity . or still others would speak about the unwanted pure to mixed transition ; the hawking radiation would become mixed / thermal because of paired particles disappearing in the singularity , thus erasing information , etc .all of these concerns are recognized and more sharply stated in the recent debates on fire walls .we come to a more precise formulation in section [ form ] .l. susskind and j. lindesay , _ an introduction to black holes , information and the string theory revolution .the holographic universe_. world scientific 2005 . , ed .xavier calmet , springer fundamental theories of physics * 178 * , 2015 . in particular , see the paper on the firewall phenomenon therein , by r.b .a. almheiri , d. marolf , j. polchinski and j. sully , black holes : complementarity or firewalls ?journal of high energy physics * 2 * , 62 ( 2013 ) . the webpage of samir d. mathur has many entries on the black hole information paradox including a pedagogical blog .( last visited 9 april 2015 ) .on page 5 of `` confusions and questions about the information paradox '' by samir d. mathur ; see .a. almheiri , d. marolf , j. polchinski , d. stanford and j.sully , an apologia for firewalls . arxiv:1304.6483v2 [ hep - th ] .mathur , what the information paradox is _not_. arxiv:1108.0302 . s.d .mathur , the information paradox : a pedagogical introduction .* 26 * , 224001 ( 2009 ) .s. w. hawking , particle creation by black holes .phys . * 43 * , 199 ( 1975 ) .[ erratum - ibid . *46 * , 206 ( 1976 ) ] .j. d. bekenstein , a universal upper bound on the entropy to energy ratio for bounded systems .d * 23 * , 287 ( 1981 ) .r. 
bousso , a covariant entropy conjecture .jhep * 9907 * , 004 ( 1999 ) .dobrushin , a mathematical approach to foundations of statistical mechanics .boltzmann s legacy 150 years after his birth ( rome , 1994 ) , atti covegni lincei , 131 , accad .lincei , rome , 1997 , 227243 .f. t. falciano , n. pinto - neto and w. struyve , wheeler - dewitt quantization and singularities .d * 91 * , 043524 ( 2015 ) .i remember that was still often the case in the 1990s ; entanglement was at the very best considered to be some philosophical term for what was obvious and not so interesting after all .it remains mind - boggling that entanglement was introduced already by schrdinger in the 1930 s , that john bell used it essentially in his analysis of the bohm - epr experiment , and that still today simple understanding of the content of e.g. the bell inequality ( 1964 ) has remained largely lacking or gets confused with other statements as the kochen - specker theorem . for a good review , see + t. maudlin , what bell did .j. phys . a : math. theor . * 47 * , 424010 ( 2014 ) , part of the special issue of journal of physics a : mathematical and theoretical devoted to `` 50 years of bell s theorem . ''the rigorous proof of the boltzmann equation in the so called grad - limit , at least for short times but without explicit assumptions of molecular chaos ( the sto ) was given in + o.e .lanford iii , the evolution of large classical system . in : _ dynamical systems ,theory and applications _, j. moser , ed . ,lecture notes in physics 38:1 111 , springer - verlag , heidelberg , 1975 .+ see also www.scholarpedia.org/article/boltzmann-grad_limit ( last visited 9 april 2015 ) .s. d. mathur , the fuzzball proposal for black holes : an elementary review .fortsch .phys . * 53 * , 793 ( 2005 ) .b. d. chowdhury and s. d. mathur , radiation from the non - extremal fuzzball .grav . * 25 * , 135005 ( 2008 ) .t. chrusciel , no hair theorems : folklore , conjectures , results .contemp .math . * 170 * , 23 ( 1994 ) .robinson , four decades of black hole uniqueness theorems . in _ the kerr spacetime : rotating black holes in general relativity _ , edited by d. wiltshire et al . , cam- bridge university press , 2009 .jean le rond dalembert , essai dune nouvelle thorie de la rsistance des fluides .( 1752 ) .h. poincar , sur les tentatives dexplication mcanique des principes de la thermodynamique .comptes rendus hebdomadaires de lacadmie des sciences de paris * 108 * , 550553 ( 1889 ) .s. goldstein , boltzmann s approach to statistical mechanics . in _ chance in physics : foundations and perspectives _ , edited by jean bricmont , detlef durr , maria c. galavotti , giancarlo ghirardi , francesco petruccione , and nino zanghi , lecture notes in physics 574 , ( springer - verlag ) 2002 .w. de roeck , t. jacobs , c. maes and k. neton , an extension of the kac ring model .gen . * 36 * , 113 ( 2003 ) . c. maes , k. neton and b. shergelashvili , a selection of nonequilibrium issues . 
in : _methods of contemporary mathematical statistical physics _roman koteck , lecture notes in mathematics 1970 , pp .247 - 306 , springer , 2009 .i repeat here some ingredients of the quantum kac ring model to show how a quantum unitary evolution gives rise to `` effective '' dissipative behavior for some thermodynamic observables .in fact the model has similar semi - classical aspects as found for quantum field discussions in a curved back ground .yet , the kac model is so transparent that all calculations become very simple .+ consider a ring of a large number of sites each carrying a quantum spin and a classical background variable .we characterize that background by the average . for the quantum dynamicswe choose a unitary matrix on and define which is extended by linearity to be unitary on the hilbert space ( given the background which is unchanged in the dynamics ) .+ for observables we take one - site matrices , the pauli matrices in the three directions and magnetization operator where the are copies of at site . as initial condition we can essentially take any situation which is concentrating on a particular magnetization vector , by which we mean an initial state for which the expectations \rightarrow m_\alpha ] .concerning magnetization observations the theorem thus states that for all times , disordered with a magnetization evolving dissipatively .+ obviously , the von neumann entropy of is constant in time , but that of is not .understandably , the tr ] where is the color of the ball that was returned inside the container , with with equal probability . at that moment the balls in the containerare reshuffled and again two of them are picked out , colored randomly but differently after which one returns to the container and the other is added to the bag , _ etcetera_. the correlation at time when averaged over many runs equals .there is an initial period ( till about time ) of increased ( anti-)correlation after which , time - symmetrically , ( anti-)correlations decrease .+ i repeat that the present example is not taking care of unitarity ; it just makes the point that ( 1 ) one can easily produce maximal disorder ( in the bag ) , and additively in ( 2 ) each time , for each pair , impose strict anti - correlation , while still ( 3 ) the correlation is non - monotone in time with the source .i thank urna basu for discussions on that toy - model .
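as a purely illustrative aside ( this is not the quantum model of the appendix but its simpler classical cousin , and all parameter values are assumptions chosen for the demonstration ) , the classical kac ring already shows how a deterministic , reversible microdynamics produces effectively dissipative behaviour for a macroscopic observable while the fine-grained microstate loses nothing :

import numpy as np

# sketch of the CLASSICAL Kac ring: N sites on a ring carry spins +-1; a fixed
# random subset of edges carries a "marker".  Each step, every spin hops one site
# and flips its sign when crossing a marked edge.  The map is deterministic and
# reversible, yet the magnetization relaxes roughly as (1 - 2p)^t towards zero.
rng = np.random.default_rng(1)
N, p, T = 100_000, 0.1, 20            # ring size, marker density, number of steps

markers = rng.random(N) < p           # marker on the edge between site i and i+1
spins = np.ones(N, dtype=int)         # start fully magnetized: m(0) = 1

trajectory = [spins.mean()]
for _ in range(T):
    spins = np.roll(np.where(markers, -spins, spins), 1)  # flip at markers, then hop
    trajectory.append(spins.mean())

for t, m in enumerate(trajectory):
    if t % 4 == 0:
        print(f"t = {t:2d}   m(t) = {m:+.4f}   (1-2p)^t = {(1 - 2*p)**t:+.4f}")

# running the inverse map (hop back, then un-flip) for T steps recovers the
# initial microstate exactly, so nothing is lost at the microscopic level.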
the statistical mechanician in the street ( our th . smiths ) must be surprised upon hearing popular versions of some of today 's most discussed paradoxes in astronomy and cosmology . in fact , rather standard reminders of the meaning of thermal probabilities in statistical mechanics appear to answer the horizon problem ( one of the major motivations for inflation theory ) and the information paradox ( related to black hole physics ) , at least as they are usually presented . still the paradoxes point to interesting gaps in our statistical understanding of ( quantum ) gravitational effects .
the diffusion equation is widely used to describe transport phenomena in which heat , mass , and other physical quantities are transferred in space and time due to underlying molecular collision processes . at steady - state and when the source term does not depend explicitly on the concentration of the associated field , such diffusion processes settle down in the form of a non - local relation between the spatial distribution of the source and the resulting concentration of the associated field , a relation which is governed by the poisson equation .the notion of the poisson equation as the steady - state solution of the diffusion equation extends to many other phenomena , e.g. electrostatics systems , plasma physics , chemistry and biology , molecular biophysics , density functional theory and solid - state physics , to name but a few .in fact , solving the poisson equation accurately and efficiently continues to be a subject of intense research to this day . from a microscopic point of view, the diffusion equation emerges from the underlying boltzmann kinetic equation in the limit of very small knudsen numbers , i.e. molecular mean free path much shorter than the typical macroscopic scale .solving the boltzmann kinetic equation to study diffusion processes makes apparently little sense , since the boltzmann distribution function lives in a double - dimensional phase - space , i.e. position and velocity .however , in the last two decades , minimal ( lattice ) versions of the boltzmann equation have been developed , in which velocity space is reduced to a small set of discrete velocities , so that the computational cost is cut down dramatically , making the solution of the lattice kinetic equation often competitive towards standard grid - discretisation of the corresponding partial differential equation .the lattice boltzmann method ( lbm ) has attracted much interest for solving the navier - stokes equations , and has been extended to solve other type of phenomena , e.g. diffusion equation , maxwell equations , quantum systems , relativistic fluid dynamics , and the poisson equation , to name but a few . to the best of our knowledge , to date , the solution of the poisson equation based on kinetic theory has been confined to smooth ( non - stiff ) source terms and small knudsen numbers .if the target is the steady - state solution of the diffusion equation , as it is the case for the poisson equation , the kinetic approach looses much of its appeal , since its inherently real - time dynamics must proceed through smaller time - steps than those affordable by fictitious - time iteration techniques . in this paperwe shall show that such gap can be closed by resorting to higher - order kinetic formulations , at least for the cases where the source term does not depend on the concentration field avoiding artificial solutions as pointed out in ref . 
.higher - order formulations of the lb equation have been developed before , mostly in the context of thermal and relativistic fluid dynamics , and recently , density functional theory .there , the main idea is to design the equilibrium distribution in such a way to include not only the conserved moments , such as density and current , but also the non - conserved ones , namely the momentum flux tensor and the heat flux , and eventually higher order kinetic moments with no direct hydrodynamic significance ( sometimes called `` ghosts '' ) .clearly , in order to match this longer ladder of kinetic moments , a correspondingly larger set of discrete velocities is required .it should be emphasized that the discrete kinetic equation is a superset of the diffusion equation , and consequently , to the purpose of solving the diffusion equation alone , higher - order contributions must be regarded as discretisation artefacts which need to be minimised , and possibly canceled altogether .this is precisely the target of this paper . here, we will use higher - order lattices and associated equilibria , to remove higher - order contributions to the macroscopic equations reproduced by the boltzmann equation . for this purpose, we develop the respective lbm and show that , by expanding the equilibrium distribution up to seventh order in hermite polynomials , one can solve the poisson equation _ six orders _ of magnitude faster than with standard lb models , at a given level accuracy .the paper is divided as follows : in sec .[ sec1 ] , we introduce the diffusion equation with stiff source term and the associated boltzmann kinetic equation . in sec. [ sec2 ] , we describe the influence of higher order moments on the steady - state solution of the diffusion equation . in sec .[ sec3 ] , we develop a lbm for the poisson equation , validate and compare the model with standard cases . finally , we discuss the results and outlook of our work .let us begin by writing the diffusion equation for a scalar in the standard form : where is the concentration of particles ( temperature in the case of the heat transfer equation ) , is the diffusivity , and is the source term , which , in our case , depending neither on nor on time .the steady - state version of eq. delivers the poisson equation with source .it is well - known that this equation can be obtained from an underlying boltzmann kinetic equation , via a chapman - enskog expansion , in the limit of small knudsen numbers , , being the mean - free path and the characteristic size of the system . the boltzmann equation in the single relaxation time approximation ( bgk ) reads as follows : where is the probability distribution function , are the microscopic velocity vectors , is the relaxation time , and is the equilibrium distribution . in the non - relativistic context, the local equilibrium is given by a maxwell - boltzmann distribution : where is the local flow speed .such local equilibrium encodes the basic symmetries of newtonian mechanics , particularly galilean invariance , which is built - in via the dependence on the molecular speed relative to the fluid , rather than the absolute one , . in order to secure galilean invariance for _ any _ fluid speed , kinetic moments at _ all orders_ should be matched on the lattice .this would require an infinite connectivity , which is clearly unviable on any realistic lattice . 
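for readability , here is a hedged reconstruction of the standard forms being referred to above ( the displayed equations did not survive extraction ; the notation below is a plausible guess and may differ in detail from the original ) :

\begin{align}
  \partial_t C &= D\,\nabla^2 C + S , \\
  D\,\nabla^2 C &= -S \quad\text{(steady state: the poisson equation)} , \\
  \partial_t f + \vec{v}\cdot\nabla f &= -\tfrac{1}{\tau}\,\bigl(f - f^{\mathrm{eq}}\bigr) , \\
  f^{\mathrm{eq}} &= \frac{C}{(2\pi c_s^2)^{d/2}}\,
      \exp\!\left(-\frac{(\vec{v}-\vec{u})^2}{2 c_s^2}\right) .
\end{align}

in this sketch , C is the concentration , D the diffusivity , S the source , f the distribution function , tau the relaxation time , u the local flow speed , c_s the thermal ( sound ) speed and d the spatial dimension .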
as a result , lb local equilibriaare typically based on finite - order hermite expansions , of the form : where are the tensorial hermite polynomials of order , and is the weight function defined as , being the sound speed .note that is the uniform _equilibrium , corresponding the no - flow limit of the local equilibrium .the coefficients ( kinetic moments ) are calculated by projecting the equilibrium distribution onto the respective hermite polynomial , the lowest - order kinetic moments have a direct macroscopic interpretation .for instance , by integrating on the velocity space ( ) , where the last equality comes from the requirement of mass conservation on the collsion operator , namely : the diffusivity of the system is dictated by the relaxation time as it can be shown that the boltzmann equation recovers the diffusion equation in the limit of small knudsen numbers , which is related with the mean - free path and therefore with the relaxation time .this is the strongly - interacting fluid regime , whereby the particle dynamics is collision - dominated , so that collective behaviour sets in on a short timescale .the opposite limit of large knudsen numbers , hence large values of , corresponds to the free - particle motion ( ballistic regime ) , whereby memory of the initial conditions is kept for a very long time .it is therefore clear that in the ballistic regime higher order moments of the equilibrium distribution do not relax and consequently the boltzmann equation does not converge to any standard diffusion equation . on the other hand , by increasing the relaxation time , hence the effective diffusivity , one would intuitively expect a quickest path to steady - state . in this respect, it would be highly desirable to derive a diffusion equation in kinetic form , which can attain steady - state solution at relatively large knudsen numbers by exactly cancelling as many high - order terms as possible .as discussed above , this can be achieved by zeroing as many higher order contributions as possible , while retaining the largest possible diffusion coefficient .let us consider the case when all time derivatives are zero , . the boltzmann equation , eq ., can be written as , by integrating this equation in velocity space , we obtain a steady - state continuity equation for the current density : where and .multiplying eq . by and integrating again , we obtain with being the momentum - flux tensor and the source current .the first term at the right - hand - side takes the form , where is the flow speed in the local equilibrium . 
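the truncated hermite expansion and the projection referred to here are usually written as follows ( again a guessed reconstruction in a common notation , not the original displayed formula ) :

\begin{equation}
  f^{\mathrm{eq}}(\vec{x},\vec{v},t) = \omega(\vec{v})\sum_{n=0}^{N}\frac{1}{n!}\,
     a^{(n)}(\vec{x},t)\,\mathcal{H}^{(n)}(\vec{v}),
  \qquad
  \omega(\vec{v}) = \frac{1}{(2\pi c_s^2)^{d/2}}\, e^{-\vec{v}^{\,2}/2c_s^2},
  \qquad
  a^{(n)} = \int f^{\mathrm{eq}}\,\mathcal{H}^{(n)}(\vec{v})\,\mathrm{d}\vec{v}.
\end{equation}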
for the case of a purely diffusive dynamicsthis term is zero .the second term contributes a shift to the flow speed , and it must also be set to zero for the case of pure diffusion .finally , the pressure tensor shoud reduce to , so that inserting ( [ second ] ) into ( [ first ] ) , the diffusion equation is obtained .the first two conditions are automatically ensured by setting in the local equilibrium .the third one , however , can not be enforced exactly because of higher order contributions to the momemtum flux tensors .indeed , by iterating the procedure , one obtains the general expression : = s \quad , \ ] ] where and , are both tensors of order .note that for small knudsen numbers ( ) higher - order terms become negligible and the series is convergent .we also observe that the solution is dictated by the equilibrium distribution and the source term , so that , by properly choosing both expressions , higher order terms can be canceled thus opening the way to high - accuracy solutions of the poisson equation . in numerical practice , accuracy is typically improved by increasing the resolution of the grid .this imposes correspondingly smaller time steps , which is clearly expensive , especially in three dimensions .high - order methods are meant to mitigate the problem by achieving the same level of accuracy with many less grid points .they are typically based on higher - order stencils for the discretisation of the corresponding differential operators . in this work, we present a procedure whereby the same goal is achieved by drawing directly from an underlying lattice kinetic theory , i.e. by a suitable ( non - gaussian ) extension of the local kinetic equilibria as combined with the use of larger sets of discrete speeds than those typically employed in standard lattice boltzmann theory .in order for the boltzmann kinetics to converge to the poisson equation , the following conditions need to be met : the equilibrium distribution can be expressed as a separable product of a function of the microscopic velocity ( global uniform equilibrium ) and a function of the time - spatial coordinates , namely : and similarly for the source term : in the above , and are velocity dependent functions that need to be determined based on the conditions above . to this purpose ,we write : and , where and are constant coefficients and is a cut - off truncating the hermite expansion . with reference to the one - dimensional case and an expansion up to seventh order ( ), we obtain : and + \frac{3c_s^2 - 6 c_s^2 v^2+ v^4}{8 c_s^4 } + \frac{15 c_s^6 - 45 c_s^4 v^2 + 15 c_s^2 v^4 - v^6}{48 c_s^6}\right ) , \ ] ] where the weight for the one - dimensional case is defined as with eqs . and, one can prove that the the following expressions for the moments are fulfilled : , , , , , .these expressions hold only up to .to develop a lattice boltzmann model and solve the poisson equation numerically , we need to impose a quadrature and find a corresponding set of finite velocity vectors , , such that the orthogonality conditions are fulfilled to the desired order , namely : here are discrete weights and are the one - dimensional hermite polynomials .the number of velocity vectors depends on the order of the expansion that we intend to reproduce with the quadrature . 
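the required quadratures can be generated with standard gauss - hermite machinery ; a minimal sketch ( using numpy 's probabilists' hermite routines ; the paper derives its lattices from the algebraic conditions above , so the numbers here are only indicative ) :

import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# sketch: Gauss-Hermite ("probabilists'") quadrature yields candidate discrete
# velocities v_i and weights w_i such that lattice sums reproduce Gaussian
# velocity moments exactly up to a given order.
Q = 8                                   # number of abscissas (assumed; Q points
                                        # integrate polynomials up to degree 2Q-1)
nodes, weights = hermegauss(Q)          # quadrature for the weight exp(-x^2/2)
w = weights / np.sqrt(2 * np.pi)        # normalise so that sum(w) = 1
v = nodes                               # discrete velocities in units of c_s = 1

# check the first few moments of the unit Gaussian: 1, 0, 1, 0, 3, 0, 15, 0
for n in range(8):
    print(f"moment {n}: {np.sum(w * v**n): .6f}")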
for expansions up to seventh order , we need at least velocity vectors and weights . each set of numbers also provides its own lattice sound speed , which goes from about with to nearly with . the details of these values are given in [ app ] . higher-order schemes are usually exposed to numerical instabilities , due to the appearance of additional modes in the dispersion relation . however , since our procedure is designed to annihilate precisely these modes , we expect our approach to be stable . furthermore , the method proposed here can also be used as a systematic technique for calculating weights and velocity vectors , such that one can achieve the desired level of accuracy for the computation of laplacian operators , similar to the work proposed in refs . . we have performed the theoretical analysis in one dimension ; however , extensions to two and three dimensions are straightforward , by using the tensorial form of the hermite polynomials and by finding the discrete velocity vectors with the respective orthogonality conditions . for instance , in order to fulfill the conditions , up to fifth order , the following expansions apply : and + \frac{15 c_s^4 - 10 c_s^2 \vec{v}^2 + \vec{v}^4}{8 c_s^4 } \right ) , \ ] ] where the weight is defined as for the three-dimensional case , we should calculate the corresponding set of discrete velocity vectors , , such that the orthogonality condition is fulfilled up to fifth order , namely : which is the three-dimensional version of eq . . by solving these algebraic equations , we find that we need at least velocity vectors and weights . details are given in [ app ] . as an application , we solve the poisson equation , where is the charge density and is the electric permittivity . replacing the equilibrium distribution from eq . into eq . , we obtain : implying that . in the lattice boltzmann model , one must take into account second order corrections to the relaxation time , , where is the lattice boltzmann relaxation time . from eq . , it is appreciated that the spatial derivatives of the source also introduce errors in the solution . thus , by annihilating higher order contributions , we are also eliminating spurious effects due to the derivatives of the source term . if the source term is smooth , these derivatives are negligible ; however , for stiffer source terms , one can use the present approach to add accuracy to the solution . in order to investigate this issue , we solve the potential for the following charge density : where the integer controls the smoothness of the derivatives of the charge density , and is the simulated length , with . we use periodic boundary conditions for simplicity , and ( numerical units ) . ( figure caption , repeated for the error plots : the absolute error between the potential calculated with the lattice boltzmann model and the analytical solution of the poisson equation , for the stated system size in lattice cells . )
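to make the overall procedure concrete , here is a minimal sketch of a source-driven lattice-bgk relaxation solver for the one-dimensional poisson equation on the standard three-velocity lattice ( i.e. the low-order baseline the paper improves upon , not the high-order scheme itself ; all parameter values are illustrative assumptions ) :

import numpy as np

# sketch: solve phi'' = -rho/eps by running a source-driven lattice-BGK diffusion
# to steady state on the standard D1Q3 lattice (w = 2/3, 1/6, 1/6, c_s^2 = 1/3).
L, eps, tau, n_osc = 128, 1.0, 1.0, 1           # illustrative parameters
x = np.arange(L)
rho = np.sin(2 * np.pi * n_osc * x / L)         # smooth test charge density

w = np.array([2/3, 1/6, 1/6])                   # weights for velocities 0, +1, -1
cs2 = 1/3
D = cs2 * (tau - 0.5)                           # diffusivity of the LB scheme
S = rho / eps                                   # source term of the diffusion eq.

f = np.zeros((3, L))                            # populations f_i(x), start from 0
for step in range(200_000):
    phi_old = f.sum(axis=0)
    feq = np.outer(w, phi_old)
    f += -(f - feq) / tau + np.outer(w, S)      # BGK collision + forcing
    f[1] = np.roll(f[1], +1)                    # stream velocity +1
    f[2] = np.roll(f[2], -1)                    # stream velocity -1
    if step % 1000 == 999 and np.max(np.abs(f.sum(axis=0) - phi_old)) < 1e-9:
        break                                   # steady state reached

phi = f.sum(axis=0)
phi -= phi.mean()                               # fix the free constant (periodic BC)
k = 2 * np.pi * n_osc / L
exact = np.sin(k * x) / (eps * k * k)           # analytic solution of the Poisson eq.
exact -= exact.mean()
# the steady state obeys D*phi'' + S = 0, so D*phi solves phi'' = -rho/eps
print("max relative error:", np.max(np.abs(D * phi - exact)) / np.max(np.abs(exact)))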
we express the relative error as where the amplitude and scaling exponent ( order of accuracy ) both depend on . from fig . [ fig1 ] , we see that for a relatively large system size , , the first order expansion is sufficient to produce satisfactory results , with errors around for and , and below for and . in fig . [ fig2 ] , we see that by increasing the derivatives of the charge density ( increasing ) , the error increases for all values of the cut-off . however , as we have mentioned before , the effect of removing higher-order errors becomes crucial for the case of small system sizes . by decreasing the number of lattice cells ( see fig . [ fig3 ] ) , the discrepancies become visible and the errors are around for , for , and less than for . ( figure caption : the absolute error between the lattice boltzmann potential and the analytical solution , for the stated system size in lattice cells . ) finally , by further reducing the lattice resolution , cells , and considering only one oscillation , , we see that the errors for the first order expansion , , are around , while for the highest values of they remain below . the fact that one can obtain very small errors by increasing the order of the expansion in the velocity space of the equilibrium distribution and source term opens up the possibility of highly accurate solutions on very small grids . increasing the order of the expansion inevitably implies an overhead of numerical operations per time step , thereby slowing down the simulations . on the other hand , higher order lattices also bring additional side-benefits such as a higher sound speed , leading to a larger diffusion coefficient and , consequently , to a faster path to the steady-state solution .
in order to highlight the concrete advantages of our approach , we next implement the same simulations for different orders and inspect the cpu time , number of iterations , and system size required to achieve a specific level of accuracy . let us next change the order of the expansion and analyse how the relative error decreases with the system size . ( figure 5 caption : ( left ) relative error for each cut-off value of the hermite polynomials expansion , with fitting curves of the general expression ; ( right ) number of iterations as a function of the system size for each cut-off value , with fitting curves of the general expression , being the number of time-steps . ) from fig . [ fig5 ] ( left ) , we see that , according to our expectations , by increasing the order of the expansion , the order of convergence , measured by the exponent , also increases and in a very substantial way . from the same figure ( right ) , we can also observe that for all values of , the number of time-steps scales quadratically with the lattice size , and decreases with the order of the expansion . the former feature is the typical signature of a diffusion process , while the second reflects the fact that higher order lattices deliver a larger sound speed , hence they take fewer time-steps to achieve a given level of accuracy . the computational time required to achieve a given accuracy clearly depends on the computational speed of the model on each given machine . this is conveniently expressed in terms of the number of lattice site updates per cpu second , . this quantity is expected to be a decreasing function of , since higher-order quadratures require more operations per lattice site . actual measurements give the values reported in table [ table1 ] . although no clear trend can be extracted from these measurements , it is observed that the cases and are about five times more expensive than the cases and . also to be noted that , which is counter-intuitive . however , upon inspecting the velocity vectors ( see appendix [ app ] ) , one can appreciate that for the velocities are consecutive ( no velocity gap ) , while for they are not . since memory access is faster for consecutive elements , we expect the case to perform better than . more generally , the dependence on appears to be highly dependent on memory access patterns , which are not easily quantified through simple scaling relations . the total cpu time , , it takes to attain steady state for a given system size can be written as follows : in the above , the cubic dependence derives from the diffusive scaling , combined with the number of sites ( the space-time computational volume of a diffusive process scales like in spatial dimensions ) . finally , is a relative scale for the number of time-steps , which is expected to decrease at increasing because of the increasing sound speed . the actual values of obtained in the simulations are reported in table [ table1 ] .
as an example , for the case , we find that the simulations take ( 65629 time steps ) , ( 65629 time steps ) , ( 33769 time steps ) , and ( 20079 time steps ) seconds , for respectively . this shows that the total computational time remains within a factor two across all values of , while the error with respect to the analytical solution decreases dramatically at increasing ( see fig . [ fig5 ] ) . hence , we can conclude that increasing leads to a dramatic boost of accuracy at a very moderate extra computational cost . finally , we have inspected the computational time to steady state for a given accuracy , while changing the size of the problem and the order of the quadrature . by combining the previous relations , ( [ accur ] ) and ( [ tl3 ] ) , we obtain : the plot reports the computational time and system size needed to achieve a prescribed relative error , for the case in point . ( figure 6 caption : computational time and system size needed to recover the solution of the poisson equation with a given relative error , for different . ) from fig . [ fig6 ] , we see that by increasing the order of the expansion in hermite polynomials , it is possible to reduce the computational time by several orders of magnitude ( up to six ) . these data refer specifically to the solution of the poisson equation , but we have reasons to believe that similar conclusions would hold for other partial differential equations with a kinetic-theory background , including many linear and non-linear partial differential equations of great relevance in many branches of modern science . in order to gain information about the performance of this approach , as compared with existing tools to solve the poisson equation , we have implemented simulations using the multi-grid ( mg ) method . the number of nodes in the mg simulations is denoted by , as for the lattice boltzmann method . fixing ( from eq . ) , in table [ table2 ] we show the relative errors per node , , together with the corresponding computational time . note that for , the mg method is faster than our approach , with the same second order accuracy . by increasing , however , the mg becomes increasingly faster , but also increasingly less accurate at a given grid size . for instance , in order to attain the same accuracy as the case on , the mg solver requires grid points , being still faster by about a factor . given that requires arrays , the lb memory saving is about a factor . these figures indicate that the present high-order lb poisson solver is consistently slower than mg ; however , it also takes correspondingly less memory to achieve high levels of accuracy . given that mg represents the state of the art for fast poisson solvers , these can be regarded as satisfactory results . in summary , we have shown that the present lb approach provides a viable option for the solution of the poisson equation , in the sense of simplicity and possibly also in terms of memory demand at high accuracy . summarizing , we have studied the effect of higher order terms on the steady-state solution of the diffusion equation , using a lattice version of the boltzmann equation with high-order ( non-gaussian ) equilibria . for the numerical solution , we have developed a higher order lattice boltzmann model capable of reproducing up to the seventh order moments of the equilibrium distribution and source charge . this permits the exact cancellation of error terms up to seventh order .
for the validation and study of our approach , we have solved the one-dimensional poisson equation , and found that the computational time to steady state , at a prescribed level of accuracy , can be reduced by up to six orders of magnitude as compared to standard lb formulations . the six-order gain in computational time is due to the fact that higher order lattices permit the use of much smaller grid sizes . moreover , since they contain a large number of discrete velocities , they allow faster propagation within the lattice , hence a correspondingly faster attainment of the steady state . we have also compared with state-of-the-art multigrid solvers , and found that high-order lb is nearly competitive in efficiency at a lower memory demand . extensions to two and three dimensions , as well as to time-dependent diffusion and advection-diffusion equations , will be the object of future research . we acknowledge financial support from the european research council ( erc ) advanced grant 319968-flowccs . using eq . for each order of expansion , we can calculate the velocity vectors with the respective weights . in addition , each set of and provides a characteristic normalised speed , . here we only show the cases for , and , since the simulations for were performed with the same lattice as for . for , we have for , and finally , for ,
we present a new approach to find accurate solutions to the poisson equation , as obtained from the steady-state limit of a diffusion equation with strong source terms . for this purpose , we start from boltzmann 's kinetic theory and investigate the influence of higher order terms on the resulting macroscopic equations . by performing an appropriate expansion of the equilibrium distribution , we provide a method to remove the unnecessary terms up to a desired order and show that it is possible to find , with a high level of accuracy , the steady-state solution of the diffusion equation for sizeable knudsen numbers . in order to test our kinetic approach , we discretise the boltzmann equation and solve the poisson equation , spending up to six orders of magnitude less computational time for a given precision than standard lattice boltzmann methods .
coronal mass ejections ( cmes ) are the most energetic eruptions in the solar system and can affect critical technological systems in space and on the ground when they interact with the geo - magnetosphere in high speeds ranging from several hundred to even more than one thousand kilometers per second .therefore , it is an important issue to investigate the acceleration mechanisms of cmes in solar / space physics .it is generally accepted that cmes are driven by magnetic flux rope ( mfr ) eruptions ( chen 2011 ) .however , we can not observe mfr structures directly in the corona because no instrument can provide the high quality measurement of the coronal magnetic filed at present .several lines of observations in the lower corona have been proposed as the proxies of mfrs , e.g. , filaments / prominences ( rust & kumar 1994 ) , coronal cavities ( wang & stenborg 2010 ) , sigmoids ( titov & dmoulin 1999 ; mckenzie & canfield 2008 ) , and hot channels ( zhang et al . 2012 ; song et al .2014a , 2014b , 2015 ) . resolving the dynamics of these structures are critical to our understanding of the cme acceleration process .* there exist two important magnetic energy release mechanisms : one is the resistive magnetic reconnection process ( carmichael 1964 ; sturrock 1966 ; hirayama 1974 ; kopp & pneuman 1976 ) and the other is the ideal global magnetohydrodynamic ( mhd ) mfr instability ( van tend & kuperus 1978 ; priest & forbes 1990 ; forbes & isenberg 1991 ; isenberg et al . 1993 ; forbes & priest 1995 ; hu et al . 2003 ; kliem & t " or " ok 2006 ; fan & gibson 2007 ; chen et al .2007a , 2007b ; olmedo & zhang 2010 ) .both mechanisms are supported by observations .for example , good correlations exist between cme speed ( acceleration ) and the soft x - ray ( hard x - ray and microwave ) profiles of associated flares ( zhang et al . 2001 ; qiu et al .2004 ; marii et al .in addition , studies showed that the extrapolated magnetic flux in the flaring region was comparable with the magnetic flux of the mfr reconstructed from in situ data ( qiu et al .all these cme - flare association studies support that reconnections play an important role in accelerating cmes . on the other hand, statistical studies showed that the projected speed in the sky plane and kinetic energy of cmes only had weak correlations with the peak values of their associated x - ray flares ( yashiro et al .2002 ; vrnak et al .these observations support that the ideal mhd instability also makes significant contributions to the cme acceleration besides the magnetic reconnection . * * the theoretical and simulation studies also support that the instability and reconnection can accelerate the cmes ( dmoulin & aulanier 2010 ; roussev et al .2012 ; amari et al .generally , the instability plays an important role in triggering and accelerating the mfr , and then the magnetic reconnection accelerates the mfr further and allows the process to develop continuously ( priest & forbes 2002 ; lin et al .* * however , it remains elusive whether both mechanisms have a comparable contribution to the acceleration in a particular event as they usually accelerate cmes in a close time sequence . in this letter, we address this issue through analyzing a filament eruption with two apparently separated fast acceleration phases , instead of one as usual . 
*the relevant observations and results are described in section 2 .section 3 presents the related discussion , which is followed by a summary in section 4 .the eruption process was recorded by the atmospheric imaging assembly ( aia ) telescope ( lemen et al .2012 ) on board the _ solar dynamic observatory ( sdo)_. aia has 10 narrow uv and euv passbands with a cadence of 12 s , a spatial resolution of 1.2 , and an fov of 1.3 .the aia images shown in this letter ( figure 1 and supplementary movie ) are a small portion of the original full size images . the soft and hard x - ray ( sxr and hxr ) data shown in figure 3 are from _ geostationary operational environment satellite ( goes ) _ and the _ reuven ramaty high energy solar spectroscopic imager ( rhessi : _ lin et al .2002 ) , respectively ._ goes _ provides the integrated full - disk sxr emission from the sun , which are used to characterize the magnitude , onset time , and peak time of solar flares ._ rhessi _ provides the hxr spectrum and imaging of solar flares .the filament eruption originated from noaa active region 12151 located at the heliographic coordinates on 2014 august 24 .this eruption produced an m5.9 class sxr flare on _ goes _ scale , which started at 12:00 ut and peaked at 12:17 ut .a cme ( linear velocity 417 km s ) associated with it was recorded by the large angle spectroscopic coronagraph ( lasco , brueckner et al .1995 ) on board the _ solar heliospheric observatory_. we inspect the aia images and find that the filament appeared in all bandpasses corresponding to the coronal and chromospheric temperatures , which indicate that it has a multi - thermal nature .the eruption process snapshots observed with 304 ( .05 mk ) , 171 ( .6 mk ) and 335 ( .5 mk ) are presented at the top , middle and bottom panels of figure 1 , respectively .the time when the image was taken is shown at the top of each panel .the arrows depict the euv brightening positions , where the plasma was heated by magnetic reconnection .the full aia image sequences of the eruption in six passbands are provided in the supplementary movie . in order to clearly display the rising motion of the filament ,a slice - time plot was constructed with base - difference images of 304 along the dotted line in panel b of figure 1 , as presented in figure 2 .around 12:16:00 ut , the filament apex moved out of the field of view ( fov ) of aia , so no further kinematic evolution analysis was possible after that time .through the time - stacking plot , we measure the filament height with time for a careful kinematic analysis as described in next subsection .the kinematic information of the filament is obtained by analyzing aia 304 base - difference images .we carefully inspect the images and identify the filament leading edges along the slice in figure 1(b ) .the heights are measured from the projected distance of the leading edges from the flare location .the uncertainty of the height measurement is about 4 pixels ( 2 mm ) , which are propagated to estimate the velocity errors in the standard way .based on the height - time measurements , the velocities are derived from a numerical derivative method that employs lagrangian interpolation of three neighboring points , a piecewise approach to calculate the velocity ( zhang et al .2001 ; cheng et al . 2013; song et al. 
2014a , 2014b ) .then the filament acceleration is calculated with the same method based on the calculated velocity profile .figure 3 presents the kinematic analysis results .the entire eruption process as seen by aia can be divided into three distinct phases as shown in panel a : an initial slow - rise phase , and then two fast acceleration phases .the vertical green solid lines on the left and right demarcate the start time of the first and second fast acceleration phases , respectively .in addition , the sxr flux with time can also be divided into a slow - rise phase and two impulsive phases .the vertical black dotted line denotes the onset of the first impulsive phase of x - ray flare .apparently , the right green solid line also marks the onset of the second impulsive phase of the flare .the slow - rise phase of the filament lasted for about 7 min from :00 to :07 ut ; at the end of this phase , the velocity increased to km s with an averaged acceleration of 95 m s .the first fast acceleration phase started around 12:07:19 ut and ended around 12:11:31 ut when the velocity reached to km s with an averaged acceleration of m s .then the velocity decreased slightly during the following .5 minutes .the second fast acceleration phase started around 12:13:00 ut .as the filament moved out of the aia fov , we can trace a part of the second acceleration phase .the final velocity we deduce is km s around 12:15:40 ut .therefore , the averaged acceleration for this phase is m s .the horizontal red solid lines in figure 3 depict these two fast acceleration periods .as mentioned , the sxr flux with time has two impulsive phases , which indicate there exist two obvious reconnection processes .this point is further confirmed by the derivation of the sxr flux with two peaks as shown in panel b. the two red dots in panels b and c depict the positions of two peaks , which show that the first obvious reconnection is weak compared with the second one .the onset of cme acceleration phase often coincides well with the onset of accompanying flares ( zhang et al .2001 ) . for our event, it is obvious that the velocity and sxr flux profiles are tightly consistent during their slow - rise phase and second fast acceleration phase .however , the onset time of the first fast acceleration phase of filament ( 12:07:19 ut ) is obviously ahead of the first impulsive phase of flare ( 12:09:19 ut ) for two minutes , which means the filament was accelerated before the first obvious reconnection onset .thanks to aia s high cadence , such short difference is otherwise difficult to observe and measure .further , we plot the hxr flux versus time in panel c , along with the filament acceleration profile . the error bars of acceleration are large and not shown here .note the hxr rates fell after 12:05 ut when the thin attenuators moved into the detector fov , and it fell suddenly again before 12:18 ut when _ rhessi _ went into night .we made two gaussian fitting for the hxr flux as presented with two thin blue solid curves in the panel .we believe the two fitted gaussian shapes correspond to the two obvious reconnection processes according to the derivation of sxr flux in panel b. 
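a minimal sketch of the three - point lagrangian - interpolation derivative used above to obtain the velocity profile from the height - time points , and , applied a second time , the acceleration profile . the height - time data below are synthetic placeholders , not the measured aia values .

```python
import numpy as np

def three_point_derivative(t, y):
    """derivative of y(t) from lagrange interpolation of three neighbouring
    points (a piecewise central scheme valid on a non-uniform grid), the
    approach referred to in the text."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    d = np.empty_like(y)
    for i in range(len(t)):
        j = min(max(i, 1), len(t) - 2)           # centre of the three-point stencil
        t0, t1, t2 = t[j - 1], t[j], t[j + 1]
        y0, y1, y2 = y[j - 1], y[j], y[j + 1]
        x = t[i]
        d[i] = (y0 * (2 * x - t1 - t2) / ((t0 - t1) * (t0 - t2)) +
                y1 * (2 * x - t0 - t2) / ((t1 - t0) * (t1 - t2)) +
                y2 * (2 * x - t0 - t1) / ((t2 - t0) * (t2 - t1)))
    return d

# hypothetical height-time points (seconds, km); not the measured aia values
t = np.arange(0, 600, 12.0)                      # 12 s cadence
h = 1.0e4 + 20.0 * t + 0.2 * t**2
v = three_point_derivative(t, h)                 # velocity profile, ~ 20 + 0.4*t km/s
a = three_point_derivative(t, v)                 # acceleration profile, ~ 0.4 km/s^2
print(v[:3], a[:3])
```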
the observed hxr peaks in the middle of the two fitted peaks should be due to the overlapping effect of hxr emission from these two reconnections , which is supported by the accumulation of the two fitted curves as shown with the thick blue solid curve .panel c shows that the first fast acceleration profile did not have any correlations with the hxr flux , even the acceleration continued to decrease after the reconnection onset .they tightly correlated with each other only during the second fast acceleration process .* therefore , we suggest that the first fast acceleration process was induced by the instability ( cheng et al . 2013 ) , while the second one was accelerated mainly through the subsequent magnetic reconnection ( priest & forbes 2002 ; lin et al .* the aia 335 observations present a kink morphology during the eruption ( see figure 1(h ) ) , indicating that the filament might be accelerated through the mfr kink instability ( cheng et al .our study suggests that both mechanisms have a comparable contribution to the cme acceleration , which is consistent with the mhd simulation results ( chen et al .our observational results have significant theoretical implications . *the event provides us an excellent opportunity to compare the contributions of different acceleration mechanisms .* during the eruption , the filament first exhibits a slow - rise phase , followed by two fast acceleration phases . for the slow - rise phase, the acceleration might be induced by the initial weak quasi - separatrix - layer ( qsl ) reconnection ( cheng et al . 2013 ) , i.e. , reconnections at the interface between filament and its surrounding corona .the brightening as shown with arrows in figures 1(a ) and ( d ) indicated that there existed reconnection during that period ( cheng et al .another possibility is that the slow - rise phase was due to the initial acceleration stage caused by the mfr kink instability .it is not possible to distinguish their contributions at this phase .however , for the following two fast acceleration phases , we suggest that they are attributed to the instability and the reconnection , respectively .it can be seen that the first obvious reconnection did nt result in any considerable filament acceleration .this might be explained by two possible factors : first , the reconnection is weaker compared to the second one according to their hxr peak values as mentioned above .* this weak reconnection might be insufficient to accelerate the mfr as it can not weaken the tension force of the overlying magnetic loops fast enough , and may even lead to the mfr deceleration as calculated by lin & forbes ( 2000 ) and lin ( 2002 ) * ; second , it is likely that this reconnection is still the qsl reconnection , as told from the hxr data of _ rhessi_. the locations of the hxr sources are shown as black contours at 70% , 80% , and 90% of the maximum in the 25 - 50 kev band ( figures 1(b),(c),(e),(f ) ) .the sources are mainly at the two footpoints of the filament during the first obvious reconnection ( figures 1(b),(e ) ) , which is consistent with the qsl reconnection .the qsl reconnection can heat the filament ( as depicted with arrows in figures 1(b ) and ( e ) ) ( cheng et al .2013 ) and produce high energy electrons that escape along two legs of the filament and produce hxr emission at the footpoints , but it does not contribute to accelerate filament . 
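a minimal sketch of the two - gaussian decomposition applied above to the hxr light curve , using a standard least - squares fit . the synthetic time axis , amplitudes and peak times are illustrative assumptions , not the rhessi measurements discussed in the text .

```python
import numpy as np
from scipy.optimize import curve_fit

# minimal sketch of a two-gaussian decomposition of an hxr light curve.
# the synthetic time axis, amplitudes and peak times are illustrative
# assumptions, not the rhessi data analysed above.
def two_gaussians(t, a1, t1, w1, a2, t2, w2):
    return (a1 * np.exp(-0.5 * ((t - t1) / w1) ** 2) +
            a2 * np.exp(-0.5 * ((t - t2) / w2) ** 2))

t = np.linspace(0, 20, 400)                       # minutes after some reference time (hypothetical)
rng = np.random.default_rng(1)
flux = two_gaussians(t, 1.0, 10.0, 1.5, 3.0, 15.0, 1.2) + 0.05 * rng.standard_normal(t.size)

p0 = [1.0, 9.0, 1.0, 2.0, 16.0, 1.0]              # rough initial guesses for the fit
popt, _ = curve_fit(two_gaussians, t, flux, p0=p0)
a1, t1, w1, a2, t2, w2 = popt
print(f"first peak:  amplitude {a1:.2f} at t = {t1:.1f} min")
print(f"second peak: amplitude {a2:.2f} at t = {t2:.1f} min")
```

the sum of the two fitted components can then be overplotted on the data , which is how the overlapping contribution of the two reconnection episodes is separated in the discussion above .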
on the other hand ,the hxr sources locate mainly between the two footpoints during the second obvious reconnection ( figures 1(c),(f ) ) , which indicates the reconnection mainly took place in the current sheet connecting the filament to the flare loop .this reconnection can accelerate filament / mfr ( carmichael 1964 ; sturrock 1966 ; hirayama 1974 ; kopp & pneuman 1976 ; lin & forbes 2000 ; lin 2002 ; chen 2011 ) and produce energetic electrons that hit the flare loop to produce the _ rhessi _ hxr source .if there was not the first obvious magnetic reconnection , and the second obvious reconnection took place quickly after the kink instability onset , we will not be able to distinguish the contributions from the instability and reconnection , and we should observe only one fast acceleration phase like most events .therefore , we point out that the time difference between the instability onset and the subsequent reconnection in the current sheet should be long enough to distinguish their contributions .we conjecture that this time difference is very short in most events , which might be one important reason that similar events with two fast acceleration phases are rare .* a filament eruption associated with an m5.9 class flare was well observed by the aia at the southeast limb of the sun on 2014 august 24 , which presented two fast acceleration phases during the eruption and provided us a perfect opportunity to compare the contributions of different acceleration mechanisms of cmes / mfrs during a particular event . *based on the detailed analysis of the relations between velocity ( acceleration ) and sxr ( hxr ) profiles , we suggest that the instability and magnetic reconnection make a major contribution during the first and second fast acceleration phases , respectively .the averaged acceleration for the first phase is m s , similar to that of the second phase ( m s ) .therefore , both instability and reconnection play a comparable role to accelerate the filament in this event , which is consistent with the mhd simulation results ( chen et al. 2007b ) .we summarize the two fast acceleration phases as follow : the instability takes place first in a catastrophic manner , then the mfr is accelerated by the lorentz force ( chen et al .2007b ; amari et al .the magnetic energy is mainly transformed into the kinetic energy of mfr . in the meanwhile ,current sheet is bound to form as the eruptive mfr drags magnetic field lines outwards , which provides proper site for fast magnetic reconnection .the subsequent reconnection produces a further acceleration of the mfr , i.e. , the second fast acceleration phase .we thank the referee for his / her valuable comments that improved the manuscript .sdo is a mission of nasa s living with a star program .this work is supported by the 973 program 2012cb825601 , nnsfc grants 41274177 , 41274175 , and 41331068 .j.z . is supported by us nsf ags-1249270 and nsf ags-1156120 .
filament eruptions often lead to coronal mass ejections ( cmes ) , which can affect critical technological systems in space and on the ground when they interact with the geo - magnetosphere at high speeds . therefore , it is an important issue to investigate the acceleration mechanisms of cmes in solar / space physics . based on observations and simulations , the resistive magnetic reconnection and the ideal instability of the magnetic flux rope have been proposed to accelerate cmes . however , it remains elusive whether both of them play a comparable role during a particular eruption . it has been extremely difficult to separate their contributions , as they often work in a close time sequence during one fast acceleration phase . here we report an intriguing filament eruption event , which shows two apparently separated fast acceleration phases and thus provides an excellent opportunity to address the issue . through analyzing the correlations between the velocity ( acceleration ) and soft ( hard ) x - ray profiles , we suggest that the instability and magnetic reconnection make a major contribution during the first and second fast acceleration phases , respectively . further , we find that both processes make comparable contributions to accelerating the filament in this event .
twenty five years ago aharonov , albert and vaidman published a paper entitled `` how the result of a measurement of a component of the spin of a spin-1/ 2 particle can turn out to be 100 ? '' .the authors idea was further developed a large volume of work on the so - called weak measurements ( see , for example , - ) , culminating in a somewhat bizarre the bbc report suggesting that `` pioneering experiments have cast doubt on a founding idea of the branch of physics called quantum mechanics '' .there seems to be room for discussion about what actually happens in a weak measurement , and this is the subject of this paper .some of the early and more recent criticisms of the original approach used in ref . can be found in refs .there appear to be only two possible answers to the original question posed by the authors of ref. : ( i ) there is a new counter - intuitive aspect to quantum measurement theory , or ( ii ) the proposed measurement is flawed . in this paperwe will follow ref . in advocating the second point of view .the argument is subtle .there is no error in the simple mathematics of the ref .it is the interpretation of the result which is at stake .below we will argue that a weak measurement attempts to answer the which way? question without destroying interference between the pathways of interest .such an attempt must be defeated by the uncertainty principle , and the unusual weak values are just the evidence of the defeat .a random variable is fully described by its probability distribution .often it is sufficient to know only the typical value of , and the range over which the values are likely to be spread . to get an estimate for the centre and the width of the range ,one usually evaluates the mean value of , and the standard mean deviation .suppose can only take the values and , and its unnormalised probability distribution is and .we , therefore , have /[\rho(1)+\rho(2)]\approx 1.4761,\\ \nonumber \sigma % = \{[<f>^2\rho(0)+(1-<f>)^2\rho(1)]/(\rho(0)+\rho(1))\}^{1/2 } \approx 0.4994,\end{aligned}\ ] ] which reasonably well represent the centre and width of the interval ] , - is too large , and is purely imaginary .the reason for obtaining such an anomalous mean value is that the denominator in eq.([n1 ] ) is small , while the numerator is not - hence the large negative expectation value in eq.([n3 ] ) . in general , the mean and the standard mean deviation of an alternating distribution do not have to represent the region of its support .these useful properties of and are lost , once a distribution is allowed to change its sign .to make things worse , let us assume that the unnormalised probabilities are also allowed to take complex values , while may take any value inside an interval ] . from eq.([c2 ] ) we note that if both and do not change sign , is a proper probability distribution , and its mean certainly lies within the region of its support .if , on the other hand , both and alternate , the mean is allowed to lie anywhere , and is not obliged to tell us anything about the actual range of values of .so here is how a confusion might arise : suppose one needs to evaluate the average of a variable known to take values between and indirectly , i.e. , without checking whether the distribution alternates , or is a proper probabilistic one . 
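the following minimal numerical sketch makes this point concrete : once the ( unnormalised ) weights over the two values are allowed to change sign , the resulting mean and variance no longer describe the support of the variable . the particular weights used are illustrative choices , not the values of the example above .

```python
import numpy as np

# minimal sketch: a sign-alternating (unnormalised) distribution over the two
# values f = 0 and f = 1 gives a "mean" far outside [0, 1] and a negative
# "variance".  the weights below are illustrative choices.
f = np.array([0.0, 1.0])

rho_proper = np.array([0.3, 0.7])                 # ordinary probabilities
rho_alternating = np.array([-0.45, 0.5])          # sign-changing weights

for rho in (rho_proper, rho_alternating):
    mean = np.sum(f * rho) / np.sum(rho)
    var = np.sum((f - mean) ** 2 * rho) / np.sum(rho)
    print("weights", rho, "-> mean", mean, ", 'variance'", var)

# the alternating case gives mean = 0.5 / 0.05 = 10 and a negative 'variance':
# neither quantity says anything about the interval [0, 1] any more.
```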
obtaining a result of may seem unusual , until it is realised that the employed distribution changes sign , and scrambles the information about the actual range values involved .one remaining question is why was it necessary to employ such a tricky distribution in the first place ?a chance to employ oscillatory complex valued distribution is offered by quantum mechanics , and for a good reason . consider a kind of double - slit experiment in which a quantum system , initially in a state , may reach a given final state via two pathways , the corresponding probability amplitudes being and .there are two possibilities .( i ) the pathways interfere , and the probability to reach is given by ( ii ) interference between the pathways has been completely destroyed by bringing the system in contact with another system , or an environment . now the probability to reach is the two cases are physically different , as are the two probabilities . in the second casethe two pathways are .one can make an experiment which would confirm by multiple trials that the system travels either the first or the second route with frequencies proportional to and , respectively . in the first case the pathways remain .together they form a single real pathway travelled with probability , and there is no way of saying , even statistically , which of the two virtual paths the system has actually travelled .the above leads to a loose formulation of the uncertainty principle : several interfering pathways or states should be considered as a single unit .quantum interference erases detailed information about a system .this information can only be obtained if interference is destroyed , usually at the cost of perturbing the system s evolution , thus destroying also the very studied phenomenon , e.g. , an interference pattern in young s double - slit experiment .let us go about the pathways in a slightly more formal way . by slicing the time interval into subintervals , and sending to infinity ,we can write the transition amplitude for a system with a hamiltonian as a sum over paths traced by a variable , \end{aligned}\ ] ] where and are the eigenvalues and eigenvectors of the variable of interest , . we also introduced feynman paths - functions which take the values from the spectrum of at each discrete time . in the limit will denote such a path by .the paths are virtual pathways , each contributing a probability amplitude ] equals some .each pathway now contributes the amplitude ) a^{f\leftarrow i}[path],\ ] ] where is the dirac delta .the new pathways contain the most detailed information about the variable , while information about other variables has been lost to interference in the sum ( [ f2 ] ) .next we can define a coarse grained amplitude distribution for by smearing with a window function : with chosen , for example , to be a gaussian of a width we are unable to distinguish the values and less than apart , , since the corresponding pathways may now interfere .the coarse graining does , however , have a physical meaning .consider a basis containing our final state , and construct a state so that .it is easy to check that satisfies a differential equation , with the initial condition latexmath:[\[\label{f5 } schroedinger equation describing a system interacting with a von neumann pointer whose position is . 
with itwe have the recipe for measuring the the quantity ] will be destroyed , since they lead do different outcomes for the pointer .our measurement scheme has an important parameter , the width of the window , , which determines the extent to which we can ascertain the value of ] , outside of which vansihes .a very broad can , therefore be replaced by , making the l.h.s . of eq.([w2 ] ) proportional to .thus , in order to study the system with the interference between the pathways intact , we must make a highly inaccurate weak measurement .this can be achieved by introducing a high degree of uncertainty in the pointer s initial position .the following classical example may give us some encouragement .consider a classical system which can reach a final state by several different routes .let us say , a ball can roll from a hole to a hole down the first groove with the probability , or down the second groove , with the probability , and so on .it is easy to imagine a ( purely classical ) pointer which moves one unit to the right if the ball travels the first route , or two units to the right , if the second route is travelled , and so on .the meter is imperfect : we can accurately determine its final position , while we can not be sure that it has been set exactly at zero . rather , its initial position is distributed around with a probability density of a zero mean and a known variance .let there be just two routes .now the final meter readings are also uncertain , with the probability to find it in given by if the meter is accurate , i.e. , if is very narrowly peaked around , we will have just two possible readings , , in approximately out of trials , or , in approximately out of all cases .suppose next that the meter is highly inaccurate , and the width of , is much larger than .a simple calculation shows that the first two moments of the final distribution are given by we have , therefore , a very broad distribution , whose mean coincides with the mean of the . since the second moment of is known , by performing a large number of trials we can extract from the data also the variance of . for instance , if the two routes are travelled with equal probabilities , , we have from this we can correctly deduce that there are just two , and not three or four , routes available to the system , and that they are travelled with roughly equal probabilities .this simple example shows that , classically , even a highly inaccurate meter can yield limited information about the alternatives available to a stochastic system .it is just a matter of performing a large number of trials required to gather the necessary statistics .next we will see whether this remains true in the quantum case .in the quantum case , employing an inaccurate meter has a practical advantage - we minimise the back action of the meter on the measured system , and may hope to learn something without destroying the interference . as discussed in sect .vi , we can make a measurement non - invasive by giving the initial meter s position a large quantum uncertainty ( that is to say , we choose a pure meter state broad in the coordinate space ) .we prepare the system and the pointer in a product state ( [ f5 ] ) , turn on the interaction , check the system s final state , and sample the meter s reading provided this final state is . 
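for comparison with the quantum statistics evaluated next , here is a minimal monte carlo version of the classical two - route example above : a broad but known spread of the pointer s initial position still lets the route statistics be recovered from the mean and variance of many readings . the route probabilities , pointer spread and number of trials are illustrative choices .

```python
import numpy as np

# minimal monte carlo version of the classical inaccurate-meter example:
# route probabilities, pointer spread and trial number are illustrative.
rng = np.random.default_rng(2)
p1, p2 = 0.5, 0.5                  # the two routes, travelled with equal probability
sigma = 5.0                        # known spread of the initial pointer position (>> 1 unit)
n_trials = 2_000_000

shift = rng.choice([1.0, 2.0], size=n_trials, p=[p1, p2])     # pointer moves 1 or 2 units
readings = shift + sigma * rng.standard_normal(n_trials)      # broad initial position added

mean_route = readings.mean()                  # approaches <n> = 1.5
var_route = readings.var() - sigma**2         # approaches var(n) = 0.25
print(f"inferred mean route number    : {mean_route:.3f}")
print(f"inferred route-number variance: {var_route:.3f}")
```

with enough trials both numbers settle close to 1.5 and 0.25 , which is the classical statement that even a very inaccurate meter recovers the route statistics .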
from ( [ f4 ] )the moments of the distribution of the meter s readings are given by as the width of the initial meter s state tends to infinity , assuming we have and where is a factor of order of unity , which depends only on the shape of .and we have introduced the notation for the -th moment of the complex valued amplitude distribution defined in eq.([f2 ] ) , it is at this point that improper averages ( [ ww5 ] ) evaluated with oscillatory distributions enter our calculation , originally set to evaluate proper probabilistic averages ( [ ww2 ] ) .expressions similar to eq.([ww2 ] ) have been obtained earlier in for a weak von neumann measurement and in for the quantum traversal time .they are the quantum analogues of the classical eqs.([i2 ] ) .we see that the quantum case turned out to be different in one important aspect . where the inaccurate classical calculation of the previous section yielded the mean of the _ probability distribution _, its quantum counterpart gives us the mean evaluated with the _ probability amplitude _ .there is no _ apriori _ reason to expect that either its real or imaginary part does not change sign . as discussed in sects .ii and iii , such averages are not obliged to tell us anything about the actual range of a random variable . thus , our attempt to answer the which way? ( which ? ) question is likely to fail , as we are not able to extract the information about the alternatives available to a quantum system .but we have been warned : the uncertainty principle suggests that , for as long as the pathways remain interfering alternatives , the question we ask has no meaning .to give our approach a concrete example , we return to the double slit experiment .consider a two - level system , e.g. , a spin-1/2 precessing in a magnetic field .the hamiltonian is given by where is the larmor frequency , and is the pauli matrix .we assume that the spin is pre - selected in a state polarised along the -axis at , and then post - selected in the same state at .we also wish to know the state of the spin half - way through the transition , at .we follow the steps outlines in sect .v. at any given time , and in the given representation , the spin can point up or down the -axis .we label these two sates and , respectively .feynman paths are , therefore , irregular curves shown in fig . 1 . the functional is given by eq.([f1a ] ) with , thus , we combined the feynman paths ending in the state at into two virtual pathways , one containing the paths passing at through the state , and the other - the paths passing through the state .the corresponding probability amplitudes are those for evolving the spin freely from its initial state to or at , and then to the final state at , we will need a meter .the interaction corresponds to a von neumann measurement of the operator performed at .the accuracy of the measurement depends on the width of the initial meter s state , which we will choose to be a gaussian , it is easy to check that the average meter reading in eqs.([i2 ] ) is given by its dependence on shown in fig.2 .this is , of course , an oversimplified version of the young s double slit experiment : the states at play the role of the two slits , and the states at - the role of the positions on the screen where an interference pattern is observed .consider first a strong measurement of the slit number .choose the final time such that finding the freely precessing spin in the state is unlikely ( our interference pattern has there a minimum , or a dark fringe ) , say . 
sending , for the probability distribution of the meter s readings we have [ cf .eq.([w1 ] ) ] we observe that the two pathways are travelled with almost equal probability , and eq.([d3a ] ) gives us the mean slit number however , this is not the original spin precession we set out to study .the interference pattern has been destroyed and the probability to arrive at the final position , which without a meter was latexmath:[\[\label{d5 } this is a textbook example which illustrates the uncertainty principle : converting virtual paths into real ones comes at the cost of loosing the interference pattern .not satisfied , we try to minimise the perturbation in the hope to learn something about the route chosen by the system with the interference intact .we send to infinity , and after many trials obtain the answer : the mean number of the slit used is which brings us back to our original question , to rephrase the title of ref. , `` how the result of measuring the number of the slit in a double slit experiment can turn out to be ? ''we have tried to evaluate the mean number of the slit a particle goes through in a double slit experiment , and came up with the number .the mathematics is straightforward , and we need to understand the meaning of this result before employing the weak measurements elsewhere .there are just two slits , numbered and , so the result looks a bit strange .has our measurement gone wrong , or is the quantum world so strange that there are slits we are not aware of ?we opt for the first choice .wrong measurements are common in classical physics. they can be made and repeated , but only have meaning within the narrow context of the wrong experiment itself .a broken speedometer will read m.p.h each time the car goes at m.p.h , and might convince the driver , but not the traffic policemen who stops him for speeding .the slit number may come up in a weak measurement , but can not be used for any other purpose , such as convincing a potential user that the screen he is about to buy has more than two holes drilled in it .there is , however , one important distinction .classically , one can always find the right answer and correct , or re - calibrate the errant speedometer .quantally , it is not so .according to the uncertainty principle , there is no correct answer to the question asked .the nearest classical analogy might be this .suppose a ( purely classical ) charge can be transferred across one of the two lead wires , and an observer can measure , which one has been chosen .then the wires are heated up and melted into one .which of the two wires has the charge gone through now ?this is what interference does , it melts the pathways through the two slits into a single one , thus depriving the which way? question of its meaning .having started to use analogies it is difficult to stop . here is the last one : one asks a manager a question the said manager is unable or unwilling to answer properly . yetan answer he / she must give .the answer ( or no - answer ) given will have little to do with what one wants to know .it will be repeated should the question be asked again .it should nt , however , be used to draw further conclusions about the matter of interest .the weak measurements rely on an interesting interference effect which has applications beyond measurement theory , . 
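a minimal numerical version of this spin double - slit example is sketched below : the amplitudes through the two virtual pathways ( spin up or spin down at the intermediate time ) are evaluated for pre - and post - selection in the same eigenstate , with the post - selection time chosen close to a dark fringe . the larmor frequency and times are illustrative choices consistent with the set - up described above , not the exact values behind the figures .

```python
import numpy as np
from scipy.linalg import expm

# minimal numerical version of the pre/post-selected precessing spin:
# "slit 1" = spin up, "slit 2" = spin down at the intermediate time.
# omega and the total time T are illustrative, chosen so that the
# post-selected state lies near a dark fringe.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)

omega = 1.0
T = 0.9 * np.pi / omega                         # post-selection close to a dark fringe
U_half = expm(-1j * omega * sx / 2 * (T / 2))   # free evolution over half the interval

psi_i, psi_f = up, up                           # pre- and post-selected states

# virtual-path amplitudes: through |up> (slit 1) or |down> (slit 2) at t = T/2
A1 = (psi_f.conj() @ U_half @ up)   * (up.conj()   @ U_half @ psi_i)
A2 = (psi_f.conj() @ U_half @ down) * (down.conj() @ U_half @ psi_i)

strong_mean = (1 * abs(A1)**2 + 2 * abs(A2)**2) / (abs(A1)**2 + abs(A2)**2)
weak_value  = (1 * A1 + 2 * A2) / (A1 + A2)     # lies far outside the interval [1, 2]

print("mean slit number, interference destroyed:", strong_mean)
print("mean slit number, weak (interference kept):", weak_value)
```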
they can be made , and have been made in practice .they have useful applications in interferometry .however , their results should not be over - interpreted .bizarre weak values indicate the failure of a measurement procedure under the conditions where , according to the uncertainty principle , it must fail .seen like this , the weak measurements loose much of their original appeal , and the calculation of weak values reduces to a simple exercise in first order perturbation theory . finally , throughout the paper we appealed to the uncertainty principle , seen as one of the basic axioms of quantum theory .it is possible that the principle itself will be explained in simpler terms within a yet unknown general theory .however , we argue , that the weak measurements have not yet given such an explanation , nor provided any deeper insight into physical reality .i acknowledge support of the basque government ( grant no. it-472 - 10 ) , and the ministry of science and innovation of spain ( grant no . fis2009 - 12773-c02 - 01 ) .i am also grateful to dr .g. gribakin for bringing the lines used in the epigraph to my attention .references aharonov y , albert dz , vaidman l. how the result of a measurement of a component of the spin of a spin- particle can turn out to be 100 .physical review letters 1988 ; 60 ( 14 ) : 13511354 .duck i m , stevenson pm , sudarshan ecg .the sense in which a `` weak measurement '' of a spin- particle s spin component yields a value 100 .physical review d 1989 ; 40 ( 6 ) : 21122117 .http://dx.doi.org/10.1103/physrevd.40.2112 aharonov y , vaidman l. the two - state vector formalism of quantum mechanics . in : time in quantum mechanics , muga g , mayato rs , egusquiza i ( editors ) , springer , 2002 , pp.369412 .aharonov y , botero a , popescu s , reznik b , tollaksen j. revisiting hardys paradox : counterfactual statements , real measurements , entanglement and weak values .physics letters a 2002 ; 301 : 130138 .dixon pb , starling dj , jordan an , howell jc .ultrasensitive beam deflection measurement via interferometric weak value amplification .physical review letters 2009 ; 102 ( 17 ) : 173601 .rozema la , darabi a , mahler dh , hayat a , soudagar y , steinberg am .violation of heisenbergs measurement - disturbance relationship by weak measurements .physical review letters 2012 ; 109 ( 10 ) : 100404 .leggett aj .comment on `` how the result of a measurement of a component of the spin of a spin- particle can turn out to be 100 ''. physical review letters 1989 ; 62 ( 19 ) : 23252325 .sokolovski d , puerto gimnez i , sala mayato r. feynman - path analysis of hardy s paradox : measurements and the uncertainty principle .physics letters a 2008 ; 372 ( 21 ) : 37843791 .monks pdd , xiahou c , connor jnl .local angular momentum - local impact parameter analysis : derivation and properties of the fundamental identity , with applications to the f + h2 , h + d2 , and cl + hcl chemical reactions .journal of chemical physics 2006 ; 125 ( 13 ) : 133504 - 133513 .
weak measurements can be seen as an attempt at answering the `` which way ? '' question without destroying interference between the pathways involved . the unusual mean values obtained in such measurements represent the response of a quantum system to this forbidden question , one for which the true composition of the virtual pathways is hidden from the observer . such values indicate the failure of a measurement under conditions where the uncertainty principle says it must fail , rather than providing additional insight into physical reality . _ q. what time is it when the clock strikes 13 ? _ _ a. time to buy a new clock . _ ( a joke )
many potential quantum information processing systems are controlled by means of a set of time - dependent parameters , such as quasi - static electromagnetic fields , radio - frequency , , or optical fields . for most such systems , it is relatively simple to device a set of controls that implement a given evolution in an ideal situation . often , however , a more careful choice of controls can lead to an implementation that is less sensitive to variations in unknown or uncontrollable system parameters .examples of such robust implementations include system specific solutions such as the hot gate for ion trap quantum computing which is insensitive to vibrational excitations , as well as more general techniques such as composite pulses , a technique originating in nmr spectroscopy . in this articlewe describe a numerical method for designing robust controls for systems where the evolution is adequately described by a possibly non - unitary evolution operator .this form does not allow a general master equation formulation , but it is sufficient to establish worst case behavior in many quantum computing settings where the worst case effects of decoherence and loss can be adequately modeled by a schrdinger equation with a non - hermitian hamiltonian .as an application of the method , we will consider the construction of a robust phase shift operation for the rare earth quantum computing ( reqc ) system , which is based on rare - earth ions embedded in a cryogenic crystal . in each ion ,two metastable ground - state hyperfine levels , labeled and , serve as a qubit register which is manipulated via optical transitions from both states to an inhomogeneously shifted excited state .the reqc system is an ensemble quantum computing system and macroscopic numbers of ions are manipulated in parallel , addressed by the value of the inhomogeneous shift of their -state . to obtain a sufficient number of ions within each frequency channel ,it is necessary to operate on all ions within a finite range of inhomogeneous shifts , and the main difficulty in operating the reqc system is to achieve the same evolution for each of these ions independent of their particular inhomogeneous shift . this article is divided into two sections : in section [ sec : design - robust - puls ] we describe the method we have used to design robust gate implementations ; these results should be applicable to a variety of quantum information processing systems . in section [ sec : optimal - control - reqc ] we present the results of applying the method to a specific problem relating to the reqc system , and show that it is indeed possible to obtain very high degrees of robustness .we consider a collection of quantum system which evolve according to a set of time - dependent controls .in addition to , the single system hamiltonian depends on a system specific set of uncontrollable or unknown parameters , such as field strength or quantum numbers corresponding to unused degrees of freedom .the evolution of each system is governed by the schrdinger equation , where we will allow the hamiltonian to include non - hermitian terms describing loss and decoherence .our goal is to choose a set of controls that lead to an evolution which is as close as possible to a given desired evolution over a range of -values . to quantify this, we introduce an objective functional which describes the performance of a set of controls for a given value of . 
by conventionwe take a low value of to indicate a good performance , and the problem of finding a robust set of controls thus corresponds to minimizing where is the set of -values for which we want the implementation to perform well . the conceptually simple approach we have taken to this problemis to replace with a discrete subset , so that the minimization of has the form of a standard minimax problem , which can be solved efficiently provided that we are able to calculate . belowwe show how to achieve this by methods from optimal control theory . in this sectionwe show how may be calculated for a quite general class of objective functionals . to keep the notation simple and avoid unnecessary restrictions, we will consider the following generalization of the schrdinger equation , determining the evolution of a complex - valued , time - dependent matrix due to a set of real - valued , time - dependent controls , .we will consider objective functionals of the form where is a real - valued function quantifying how close the final state is to our goal , and the real valued function , referred to as a penalty function , can be chosen to discourage the use of certain control values .our goal is to calculate subject to the constraint that and obey eq . .to achieve this we introduce the modified objective functional : + \text{h.c . }\right ) dt,\ ] ] where the complex time - dependent adjoint state matrix is in effect a continuous set of lagrange multipliers leaving identical to , provided that and obey eq . .if we require to obey the adjoint equations where and should be considered as independent with respect to the partial derivative and is defined as we find by integration by parts that the differential of is given by from which the derivatives of with respect to the parameters used to parametrize can be calculated by the chain rule .we now return to the case of a quantum system governed by the schrdinger equation . if we assume the penalty function to be independent of , the adjoint equations in this case are and given by with the role of the adjoint state and the adjoint equations is often described as back - propagating the errors in achieving the desired final state .if is hermitian , the boundary value for can be optimized for numerical computation as shown in appendix [ sec : optimized - co - state ] . we will now discuss the choice of the function , quantifying how well the obtained evolution approximates .as we are concerned with quantum information processing we will assume that all operations start out with an unknown state in the qubit subspace of the full system hilbert space , and that this subspace is invariant under the ideal evolution .the function should not depend on the evolution of states outside , nor on collective phases on the states originating in .a cautious choice of fulfilling these conditions could be based on the worst case overlap fidelity : which measures the least possible overlap between the obtained output state and the ideal output for initial states in .this fidelity measure has the desirable quality that both population transfer from to and population transfer completely out of , as described by a non - unitary evolution , is counted as loss of fidelity . 
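a minimal sketch of how this worst - case overlap fidelity can be evaluated by brute force : the obtained propagator is compared with the ideal one over a grid of input states in the qubit subspace . the two propagators below are illustrative stand - ins ( the full space is taken to be the qubit subspace for simplicity ) , not gates from this work .

```python
import numpy as np
from scipy.linalg import expm
from itertools import product

# minimal sketch: worst-case overlap fidelity over the qubit subspace by
# scanning input states |psi> = cos(t/2)|0> + e^{ip} sin(t/2)|1>.
# the "ideal" and "obtained" propagators are illustrative stand-ins.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

U_ideal = expm(-1j * np.pi / 4 * sz)                         # assumed target gate
U_obtained = expm(-1j * (np.pi / 4 * sz + 0.05 * sx))        # slightly imperfect implementation

A = U_ideal.conj().T @ U_obtained                            # overlap operator on the qubit space
worst = 1.0
for theta, phi in product(np.linspace(0, np.pi, 80), np.linspace(0, 2 * np.pi, 160)):
    psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    worst = min(worst, abs(psi.conj() @ A @ psi) ** 2)

print("worst-case overlap fidelity over the grid:", worst)
```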
from the point of view of optimal control , a significant drawback of the worst case overlap fidelity , ,is that it is computationally complicated .a computationally accessible fidelity measure which share many appealing features with is the trace fidelity , where .as shown in appendix [ sec : trace - fidelity ] , is related to by the strict bound indicating that we can safely replace by for numerical computations on a few qubits at high fidelity . for numerical calculationsit is beneficial to use rather than ; in the calculations presented in the next section we have used , leading to an adjoint state boundary condition of which can be directly computed .one approach to minimizing is to directly solve the extremum condition for .this task is significantly simplified if a penalty function proportional to is introduced , , so that the extremum condition according to reads , which may be used as an iterative formula for calculating . variations over this iterative approach give rise to the krotov and zhu - rabitz algorithms which have been shown to have excellent convergence properties , and have been successfully applied to optimal control of unitary transformations for one set of parameters by the group of r. kosloff . a unifying view of these direct methodscan be found in ref . . in the work presented herewe have chosen to use an indirect minimization algorithm : rather than trying to solve the extremum condition directly , we have used the gradient information obtained through eq . as input for a general sequential quadratic programming procedure based on a constrained quasi - newton method .the primary advantage of this approach is that we have total freedom to choose the parametrization of the controls , and can place arbitrary bounds on these .this allows us to more accurately model the fact that the experimental limitations most often only distinguish between possible and impossible controls : no possible controls are significantly harder than others .an explicit field strength limit also serves to introduce an absolute scale on which to introduce decay strengths etc .-pulse sequence ( dashed line ) and the optimized ( solid line ) implementation of on far - detuned ions , as described by , where is calculated with respect to the identity , as these ions should ideally not be disturbed .both implementations achieve worst case fidelities very close to unity , with the -pulse sequence achieving the best results .the curve for the optimized pulse is actually a running maximum , as the actual value of oscillates with a period of ., width=317 ] the motivation for the work presented in this article has been the design of robust gate implementations for the reqc system mentioned in the introduction . 
as an example, we will consider the construction of a robust implementation of the single qubit operation which could simply be implemented by a single pulse on the - transition if we were not concerned with robustness .our primary concern will be to make the implementation robust with respect to variations in the inhomogeneous shift of the -state in order to allow the use of finite with channels .since it is experimentally difficult to obtain a homogeneous field strength over the crystal , we would also prefer the implementation to be insensitive to variations in the relative field strength , which we will denote .in addition to requiring the implementation of to be robust with respect to variations in and , we will add the requirement that ions outside the channels should not be affected , as this allows us to use the obtained implementation of as a part of a controlled phase shift operation : if the -states of the qubit ion and a controlling ion are coupled sufficiently strongly by static dipole interaction , an excitation of the controlling ion will effectively shift the qubit ion out of the channel thus conditioning the evolution of the qubit on the state of the controlling ion . even though the simplest implementation of for an ideal ion with and would involve only the and levels ,a robust implementation must also involve the -state since the coupling of to will result in a -dependent phase on which can only be compensated by introducing the same phase on the -state , e.g. through phase compensating rotations .a highly successful example of this approach is the -pulse sequence suggested by roos and mlmer , which as illustrated by fig .[ fig : fidelities](a ) yields a very robust implementation , achieving high fidelities over a wide range of parameter values .we model the reqc system by the single ion hamiltonian which does not include any effects of decay or decoherence . in the notation introduced in section[ sec : design - robust - puls ] , the system parameters are , the controls are , and the qubit subspace is . no penalty function is used : we use , and limit the field by strict bounds on , as this is the relevant limiting parameter in the reqc system . inspired by the success of the -based solution and the hat - like fourier spectrum of the -pulse , is parametrized in terms of a truncated fourier basis . based on trial and errorwe have arrived at -values to constitute , some within the neighborhood of , , and some at large detunings where the ions should not be disturbed .the result of the optimization with this choice of is shown in fig .[ fig : fidelities](b ) , where the circles indicate the members of .it is evident from the plot that the optimization has achieved a high fidelity over an even larger range of parameters than the -pulse sequence illustrated in fig .[ fig : fidelities](a ) . 
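as a minimal illustration of why a robust pulse is needed at all , the sketch below propagates the three - level model for the naive implementation , read here as a single resonant 2 pulse on the transition between the second qubit state and the excited state , producing a phase shift of on that qubit state , and evaluates a trace fidelity on the qubit subspace for a few values of the inhomogeneous shift and the relative field strength . the target gate , units and parameter grid are assumptions made for illustration ; decay is ignored , as in the model above , and the optimized pulses of this work are not reproduced .

```python
import numpy as np
from scipy.linalg import expm

# minimal sketch: the naive single 2*pi pulse on the |1>-|e> transition realises
# the phase shift |1> -> -|1> only at zero inhomogeneous shift and nominal field
# strength.  the target gate, pulse and parameter grid are assumptions for
# illustration; decay is ignored.
ket0, ket1, kete = np.eye(3, dtype=complex)        # basis |0>, |1>, |e>
P = np.diag([1.0, 1.0, 0.0])                       # projector on the qubit subspace
U_ideal = np.diag([1.0, -1.0, 1.0]).astype(complex)

Omega0 = 1.0                                       # nominal rabi frequency (arbitrary units)
T = 2 * np.pi / Omega0                             # duration of a resonant 2*pi pulse

def fidelity(delta, scale):
    Omega = scale * Omega0
    H = (delta * np.outer(kete, kete)
         + 0.5 * Omega * (np.outer(kete, ket1) + np.outer(ket1, kete)))
    U = expm(-1j * H * T)
    return abs(np.trace(P @ U_ideal.conj().T @ U @ P)) / 2.0

for delta in (0.0, 0.1 * Omega0, 0.3 * Omega0):
    for scale in (1.0, 0.9):
        print(f"delta/Omega0 = {delta/Omega0:.1f}, field scale = {scale:.1f} "
              f"-> trace fidelity = {fidelity(delta, scale):.4f}")
```

the fidelity drops quickly away from the nominal parameters , which is precisely the sensitivity that the optimized and the - pulse implementations are designed to remove .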
with respect to not disturbing the detuned ions , both the optimized pulse and the -pulses obtain fidelities within of unity for , where is the maximal resonant rabi frequency at .as illustrated by fig .[ fig:3levavoid_edge ] , which only shows fidelities at as is nearly independent of at , the -pulse sequence performs better than the optimized pulse in this regime .we have shown that it is possible to construct highly robust gate implementations for quantum information processing by a quite general method .in particular , the method has been used to greatly enhance the performance of a gate implementation for a model reqc system by extending the range of inhomogeneous shifts and relative field strengths over which an acceptable performance is achieved .the model reqc system used in this article ignores many performance degrading factors , the two most important being decoherence and implementation noise .decoherence could in the present case be adequately modeled by a non - hermitian hamiltonian , for which we expect the method described in this paper to be able to find a robust implementation as in the decoherence - free case .it is not clear , however , how the method could be extended to address the problem of robustness with respect to implementation imperfections .the author would like to thank the people at the centre for quantum computer technology at the university of queensland for their hospitality , and klaus mlmer for valuable comments on the manuscript .this research was funded by project esquire of the ist - fet programme of the ec .in this section we prove the relation between the worst case overlap fidelity , , and the trace fidelity . referring to the definition ,we note that is completely determined by the restriction of the operator to . since describes the evolution of a quantum system , it is possible to extend it to a unitary operation on a hilbert space containing , and is consequently the restriction of a unitary operator to . in the ideal case will be equal to the identity on , perhaps with the exception of a complex phase . is defined as the minimum of the overlap .since the unit sphere of is compact , this minimum will be attained for some : .we now extend to an orthonormal basis by the gram - schmidt process . evaluating the trace fidelity in this basis we find by the triangle inequality : where we have used that for all since is the restriction of a unitary operator . by rewriting we obtain the desired relation , .we note that the established bound is strict in the sense that for any , the operator will fulfill eq . with equality for any .in the case of a hermitian hamiltonian and a penalty function , that does not depend on the state , it is possible to modify the adjoint state boundary condition to reduce the required accuracy of the adjoint state propagation . in this case, we find according to eqs . 
and that has the form \delta { { \ensuremath{\boldsymbol{\varepsilon}}}}(t ) \ , dt,\] ] where and denotes the -th columns of and respectively .since and thus are assumed to be hermitian , as given by is not affected by adding to a component of for any real .furthermore , since and evolve according to the same schrdinger equation , this corresponds to modifying the boundary condition for the adjoint state to read : for any real .the obvious use of the freedom in the choice of boundary value is to minimize the norm of the adjoint state , in order to relax the requirements of the relative precision of the adjoint state propagation .this minimum is easily calculated from , but we prefer to illustrate the physical background of the result by calculating it in a different way : the freedom in the choice of boundary value is allowed by the hermiticity of .the same hermiticity ensures that is normalized , so that , where is the normalization function , is equal to . the gradient of , however , is different from that of .in fact we find \ ; { { \ensuremath{\boldsymbol{x}}}}_k,\ ] ] where and should be considered independent with respect to the derivative . comparing this expression to eq ., it is tempting to let , which is indeed the answer found by minimizing subject to eq . . the modified boundary condition of eqcarries the same error information as that of eq . , but the required relative numerical precision when propagating the adjoint state will be significantly reduced .
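as a concrete check of the adjoint - state gradient machinery used in this article , the following minimal sketch computes the gradient of a simple trace - overlap infidelity for a single two - level system with one piecewise - constant control , by propagating the state forward and the adjoint state backward . the hamiltonian , target and step sizes are illustrative choices ; the derivative of each step propagator is taken to first order in the time step , so the comparison with a finite difference agrees only up to terms of that order .

```python
import numpy as np
from scipy.linalg import expm

# minimal sketch of an adjoint-state (back-propagation) gradient for a
# piecewise-constant control; hamiltonian, target and objective are illustrative.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0, Hc = 0.3 * sz, sx
U_target = expm(-1j * np.pi / 4 * sx)

n_steps, dt = 50, 0.02
rng = np.random.default_rng(3)
eps = 0.5 * rng.standard_normal(n_steps)            # control values epsilon_k

def step_propagators(eps):
    return [expm(-1j * dt * (H0 + e * Hc)) for e in eps]

def objective(eps):
    U = np.eye(2, dtype=complex)
    for Uk in step_propagators(eps):
        U = Uk @ U
    phi = np.trace(U_target.conj().T @ U) / 2.0
    return 1.0 - abs(phi) ** 2

def adjoint_gradient(eps):
    Us = step_propagators(eps)
    fwd = [np.eye(2, dtype=complex)]                # forward products U_k ... U_1
    for Uk in Us:
        fwd.append(Uk @ fwd[-1])
    phi = np.trace(U_target.conj().T @ fwd[-1]) / 2.0
    back = U_target.conj().T                        # adjoint, propagated backwards from t = T
    grad = np.zeros(n_steps)
    for k in range(n_steps - 1, -1, -1):
        dphi = np.trace(back @ (-1j * dt * Hc) @ fwd[k + 1]) / 2.0
        grad[k] = -2.0 * np.real(np.conj(phi) * dphi)
        back = back @ Us[k]                         # move the adjoint one step earlier
    return grad

k, h = 7, 1e-6
eps_shift = eps.copy(); eps_shift[k] += h
print("adjoint gradient :", adjoint_gradient(eps)[k])
print("finite difference:", (objective(eps_shift) - objective(eps)) / h)
```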
quantum information processing systems are often operated through time - dependent controls ; choosing these controls in a way that makes the resulting operation insensitive to variations in unknown or uncontrollable system parameters is an important prerequisite for obtaining high - fidelity gate operations . in this article we present a numerical method for constructing such robust control sequences for a quite general class of quantum information processing systems . as an application of the method we have designed a robust implementation of a phase - shift operation central to rare earth quantum computing , an ensemble quantum computing system proposed by ohlsson et al . in this case the method has been used to obtain a high degree of insensitivity with respect to differences between ensemble members , but it is equally well suited for quantum computing with a single physical system .
biological and ecological research has investigated cell migration . to model cell migration , studies have been composed to include migration , diffusion , haptotaxis , and chemotaxis , . in this paper , the focus is chemotaxis .chemotaxis describes the movement of an organism and/or groups of cells that either move toward or away from a chemical or sensory stimulus . in the early work by keller and segel , chemotactic responses of amoebae to bacteriais studied in a cellular slime mold .bacterial chemotaxis , which describes the ability of bacteria to move toward increased or decreased concentrations of attractants is analyzed at the macroscopic level through a microscopic model of individual cells , .it was first observed by engelmann in 1881 .for example , if _ salmonella typhimurium _ , a strain of salmonella associated with meat and poultry products , is introduced to a petri dish filled with a nutrient , the bacteria will migrate outward , consuming the nutrient .as they consume the nutrient , they secrete a chemoattractant .after several days , the bacteria will have clustered in the areas of high chemical concentration .a structure of concentric rings is usually observed experimentally .chet and mitchell s work describes patterns formed from _e. coli _ movement toward amino acids .allweis et .al investigate _ vibrio cholerae _ which are inhibited by a pepsin digest that reduces the possibility of the vibrios attaching to the intestinal wall .other authors have addressed chemotaxis in immune cell motility which when combined with tumor morphology is hoped to provide new avenues of treatment strategies .in addition , authors have analyzed chemotactic responses in ecology and have investigated mathematical issues for the existence of global solutions in multiple dimensions , .chemotaxis also arises in a variety of medical applications .in particular , it has been studied in connection with myxobacteria , leukocyte mobility in tissue inflammation , the migration of tumor cells towards bone , and other issues in morphogenesis .another interesting problem involves the study of vascular tumors through angiogenesis .angiogenesis involves the formation of capillary networks of blood vessels that are vital for the growth of tumors .mathematical modeling of angiogenesis has given new insight into tumor structure .normal tissue , lymphocytes , and other types of cells grow at the tumor site or are recruited through chemotaxis .the need to identify the nature of this recruitment is at the heart of this paper .the identification of a chemotactic term falls under the umbrella of an inverse problem . in principle, we can measure certain characteristics of the tumor concentration and use mathematical techniques to recover the chemotactic term , in particular the chemotactic sensitivity , that is driving the tumor growth . to our knowledge , this _ inverse problem _ approach has only been used in the analysis of chemotaxis models by dolak - stru and kgler under the assumption that the chemical concentration is explicitly known .since there are many applications in which chemotaxis arises , there are also different models of the chemotactic effect .there have been many different expressions proposed that model chemotactic velocity , see keller and segel , lapidus and schiller , ford and lauffenburger , and tyson et al .. 
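as a rough illustration of how a chosen chemotactic sensitivity enters the cell conservation equation , the sketch below advances a one - dimensional keller - segel - type system with no - flux boundaries . the equations , coefficients , initial data and the linear sensitivity used here are stated assumptions made for illustration ; they anticipate , but do not reproduce , the dimensionless system introduced below .

```python
import numpy as np

# rough one-dimensional sketch of a keller-segel-type forward model:
#   u_t = ( d_u u_x - u * alpha(c) * c_x )_x          (cells)
#   c_t = d_c c_xx + u / (1 + u) - rho * c            (chemoattractant)
# with no-flux boundaries.  coefficients, initial data and alpha(c) = a*c are
# assumptions for illustration, not the paper's exact dimensionless system.
nx, L = 200, 1.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
d_u, d_c, rho, a = 0.01, 1.0, 1.0, 5.0
alpha = lambda c: a * c                               # the sensitivity to be identified later

u = 1.0 + 0.1 * np.exp(-((x - 0.5) / 0.05) ** 2)      # small bump of cells
c = np.zeros(nx)
mass0 = u.sum() * dx

dt = 0.2 * dx**2 / max(d_u, d_c)                      # conservative explicit time step
for _ in range(20000):
    ux = np.diff(u) / dx                              # gradients on interior cell faces
    cx = np.diff(c) / dx
    u_face = 0.5 * (u[1:] + u[:-1])
    c_face = 0.5 * (c[1:] + c[:-1])
    flux = d_u * ux - u_face * alpha(c_face) * cx     # cell flux on interior faces
    flux = np.concatenate(([0.0], flux, [0.0]))       # zero flux through the boundaries
    u = u + dt * np.diff(flux) / dx
    grad_c = np.concatenate(([0.0], cx, [0.0]))       # zero-gradient (no-flux) boundaries
    c = c + dt * (d_c * np.diff(grad_c) / dx + u / (1.0 + u) - rho * c)

print("cell mass conserved :", np.isclose(u.sum() * dx, mass0))
print("final peak cell density:", round(float(u.max()), 4))
```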
this velocity is used in a bacterial conservation equation in the formulation of a system of partial differential equations that governs the particular application .the chemotactic sensitivity determines the velocity .our goal is to develop a technique whereby the appropriate chemotactic sensitivity model , and hence chemotactic velocity , can be determined from available data .in particular , we consider a system of partial differential equations that was developed by oster and murray to model the pattern formation of cartilage condensation in a vertebrate limb bud .a similar system was studied by myerscough et al . , . the numerical solution of similar systems were recently studied by tyson et al . and by nakaguchi and yagi .work by fister and mccarthy has shown that the system of partial differential equations can in fact be controlled theoretically through the introduction of a mechanism controlling the number of cells being generated .simulations provide optimal drug treatment programs for patients to facilitate the rebuilding of cartilage or the reduction of cancerous tumors .the chemotactic sensitivity in was known and the control parameter was a harvesting term .our goal in this work is to _ identify _ the chemotactic sensitivity .the paper is organized into six sections . in section two ,the existence of the forward problem is proven . in section three , identifiability of the chemotactic sensitivityis established using the weak formulation of the state problem . in section four , tikhonov regularizationis used to approximate the solution through the use of minimization arguments .the rate of convergence of the approximate minimizer of the chemotactic sensitivity to the true parameter follows next . in section five ,numerical experiments provide graphical depictions of the accuracy of the recovery of the parameter . in section six, conclusion remarks are made .in this model , and represent the concentration of the cells and the chemoattractant , respectively .the cells and the chemoattractant are governed by a convection - diffusion equation and a reaction - diffusion equation as where is the outward unit normal . and represent the diffusion coefficients of the cells and the chemoattractant .the michaelis - menten term , , represents a response of the chemoattractant to a maximum carrying capacity or saturation rate , assuming .we incorporate a decay term where denotes the degradation rate .we assume that there is no flux of the concentrations across the boundary , and that the initial concentrations for the cells and chemoattractant are and respectively . here, is the chemotactic sensitivity which monitors the chemical gradient attraction of the cells .it is this term that we seek to identify . in term is simply a constant .more generally , is a linear function of in , while in it is a nonlinear function of we assume henceforth that the chemotactic sensitivity has the form and is a bounded function .we restrict our analysis to the dimensionless system we will establish a technique for the identification of where observe that , with the available data , we can only expect to recover on the interval . ] such that with and .the solution satisfies the lower bounds .}\ ] ] * proof : * let and . the system ( [ forward ] )can be formulated as an abstract quasilinear equation on the banach space let clearly .we define to be the linear operator in that with domain let the vector be the function since is lipschitz , application of yagi s work ( * ? ? 
?* thm 2.1 and 3.4 ) yields our result .( see appendix a for statements of yagi s results . )in this section , we begin by establishing the identifiability of the parameter from the available data and almost everywhere in . note that , in order for chemotaxis to be observed biologically , cells must be present anda chemical gradient must exist .this means that and must be nonzero for a measurable subset of we denote by and the solution pairs of ( [ forward ] ) with chemotactic sensitivities and respectively .[ identification ] let and both be solutions in of the direct problem ( [ forward ] ) corresponding to and .if and almost everywhere in ] we define with in the presence of perfect data , we would solve the non - linear ill - posed problem where is the solution of the direct problem with to do this using tikhonov regularization would involve approximating the solution by minimizing where is a small parameter and is an _ a priori _ guess of the true solution in real applications , measurement errors mean that exact data is not available .noisy data is assumed to have an error level which means that we assume attainability of a true solution , i.e. if there exists such that in the presence of noisy data the minimizer of ( [ minimization_problem ] ) minimizes for appropriate choices of and we begin by establishing the weak - closedness of the map if then and in * proof : * here , we give the outline of the proof and refer the reader to for details . using that the solution to the state system ( [ forward ] ) is unique , one can define and .a transformation involving times each component of the solution pair is made with to be chosen in order to obtain the boundedness of the solution in .the weak definition of the solution associated with the transformed and in equation ( [ weakdiff ] ) is analyzed via cauchy s inequality and the boundedness of the coefficients . using the boundedness ( independent of ) of the solution pairs, subsequences are extracted that converge weakly to and .lastly , comparison results are used so that one can pass to the limit in the weak formulation of the solution to show that and . existence of a minimizer now follows from the lower semi - continuity of for any data a minimizer of ( [ tikobj ] ) exists .continuous dependence on the data for fixed and the convergence of toward the true parameter as the noise level and the regularization parameter go to zero also follow from standard results . for fixed minimizers depend continuously on the data if satisfies then although we have noted ( without proof ) the convergence of the minimizer to the true parameter the rate of convergence may be arbitrarily slow . we wish to determine a source condition that will guarantee a certain rate of convergence .even when our regularization parameter is comparable to our noise level such a source condition will require assumptions involving and recall that we seek to solve the nonlinear problem ( [ nonlinear - eqn ] ) , where the true solution is and is an _ a priori _ guess . 
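As a concrete, heavily simplified illustration of the regularized output-least-squares functional described above, the sketch below assembles a discrete analogue in Python. Everything named here is an assumption of the illustration rather than the paper's actual setup: `forward_solve` is a crude explicit finite-difference stand-in for the state system (a one-dimensional Keller–Segel-type system), the sensitivity a(c) is parametrized by its values `theta` on a node grid (piecewise-linear hat functions), and the meshes, parameters and data are purely illustrative.

```python
import numpy as np

def forward_solve(theta, nodes, nx=40, nt=2000, T=0.2, Du=0.25, Dc=1.0, mu=1.0):
    """Crude explicit scheme for a 1-D Keller-Segel-type stand-in system
        u_t = (Du*u_x - u*a(c)*c_x)_x,   c_t = Dc*c_xx + u/(1+u) - mu*c,
    with no-flux boundaries.  a(c) is piecewise linear with values theta at nodes."""
    x = np.linspace(0.0, 1.0, nx)
    dx, dt = x[1] - x[0], T / nt
    u = 1.0 + 0.1 * np.cos(np.pi * x)           # illustrative initial data
    c = 1.0 + 0.1 * np.cos(np.pi * x)
    U, C = [u.copy()], [c.copy()]
    for _ in range(nt):
        a = np.interp(c, nodes, theta)          # chemotactic sensitivity a(c)
        flux = -Du * np.gradient(u, dx) + u * a * np.gradient(c, dx)
        flux[0] = flux[-1] = 0.0                # no-flux boundaries
        u = u - dt * np.gradient(flux, dx)
        lap = np.zeros_like(c)
        lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]       # crude reflecting ends
        c = c + dt * (Dc * lap + u / (1.0 + u) - mu * c)
        U.append(u.copy())
        C.append(c.copy())
    return np.array(U), np.array(C), dx, dt

def tikhonov_objective(theta, nodes, z_u, z_c, alpha, theta_prior):
    """Discrete analogue of  ||u(a)-z_u||^2 + ||c(a)-z_c||^2 + alpha*||a - a*||^2."""
    U, C, dx, dt = forward_solve(theta, nodes)
    misfit = (np.sum((U - z_u) ** 2) + np.sum((C - z_c) ** 2)) * dx * dt
    return misfit + alpha * np.sum((theta - theta_prior) ** 2)

# Illustrative use: simulate "data" with a reference sensitivity, then evaluate J.
nodes = np.linspace(0.5, 1.5, 6)
theta_true = 0.2 / (1.0 + nodes)                # e.g. a saturating sensitivity
z_u, z_c, _, _ = forward_solve(theta_true, nodes)
print(tikhonov_objective(np.full(6, 0.1), nodes, z_u, z_c,
                         alpha=1e-3, theta_prior=np.zeros(6)))
```

In a recovery experiment such an objective can be handed to a generic minimizer (for instance `scipy.optimize.minimize` over the hat-function coefficients), loosely mimicking the lsqnonlin-based procedure reported in the numerical section below; again, none of the discretizations or data above are those of the paper.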
in order to apply the theory of engl , hanke , kunisch and neubauer , we must establish the following : * is frechet differentiable , * is lipschitz with * there exists satisfying the source condition * in practice , although computing and is not difficult , it can be quite tricky to establish the lipschitz condition on with our system of coupled nonlinear partial differential equations .instead , our approach involves developing a _ source condition _ without imposing differentiability constraints on thus we establish convergence .this technique is also found in the work of engl and kgler , .suppose that there exists a function satisfying such that for any if then and [ convergence - theorem ] * proof : * for clarity , we briefly describe the techniques used in this proof .using that a minimizer to exists , we obtain an upper bound in terms of the error level and the norm of the difference in the minimizer and optimal values .we then use our source condition with the weak formulation of the cell and chemical differential equations to obtain a representation of the inner product of the appropriate differences of the approximating minimizers .this allows us to bound .specifically , we use triangle and young s inequalities to bound the time and spatial derivatives of the differences in the state variables . integration by parts and hlder s inequality enable us to successfully bound the spatial derivatives of the states in terms of the states themselves . using the assumptions from and choosing sufficiently small , we can obtain the error of order with . since is a minimizer of we have using our definition of noise level ( [ noise - defn ] ), we find that adding to both sides of the inequality and using inner product properties yields observe that our source condition with together with the weak forms of the cell equation in the forward problem ( [ forward ] ) for and is \cdot \nabla w\ dx \ dt\end{aligned}\ ] ] and ( [ source - ineq2 ] ) becomes \cdot \nabla w\ dx \ dt .\label{source - ineq3}\end{aligned}\ ] ] we bound each integral in ( [ source - ineq3 ] ) separately using triangle and young s inequalities . 
for the estimates of and we refer the reader to the appendix for some of the details of the bounds used .we find that and where is an arbitrary parameter resulting from the use of young s inequality .we utilize the assumptions green s theorem , the boundary conditions , and hlder s inequality to obtain the estimate, \cdot \nabla w\ dx \ dt } \\ & \leq & \varepsilon \left [ \hat{\eta}_1 ^ 2 \int_{0}^t \norm{u_{a_\alpha^\delta}-z_u^\delta}^2_{l^2(\omega ) } \dt + \norm{u}_{l^\infty(\omega)}^2 k^2 \int_{0}^t \norm{c_{a_\alpha^\delta}-z_c^\delta}^2_{l^2(\omega ) } \ dt \right ] \\ & & + \frac{\alpha^2}{\varepsilon } \int_{0}^t \norm{\delta w}^2_{l^2(\omega ) } \ dt + \varepsilon \hat{\eta}_1 ^ 2\delta^2 + \varepsilon \norm{u}_{l^\infty(\omega)}^2k^2\delta^2 \\ & & + \varepsilon \norm{\nabla u}_{l^\infty(\omega)}^2 k^2 \int_{0}^t \norm{c_{a_\alpha^\delta}-z_c^\delta}^2_{l^2(\omega ) } \ dt + \varepsilon \hat{\eta}_1 ^ 2 \mu^2 \int_{0}^t \norm{u_{a_\alpha^\delta}-z_u^\delta}^2_{l^2(\omega ) } \ dt \\ & & + \frac{\alpha^2}{\varepsilon } \int_{0}^t \norm{\nabla w}^2_{l^2 } \ dt + \varepsilon \norm{\nabla u}_{l^\infty(\omega)}^2 k^2 \delta^2 + \varepsilon \hat{\eta}_1 ^ 2\mu^2\delta^2.\end{aligned}\ ] ] grouping terms and relabeling constants , we have if is chosen to be sufficiently small , then for the choices we obtain and order to demonstrate the effectiveness of tikhonov regularization for this application , we consider several examples .all computations were carried out in matlab .the tikhonov functional was minimized using lsqnonlin , a matlab implementation of the levenberg - marquardt method with line search .although it was not tractable to do so in the convergence analysis , a gradient based algorithm is appropriate here because computing the gradient and its adjoint is straightforward .we restrict our discussion to . ] was generated using pdepe with high accuracy . during the computation of cell and chemoattractant concentrations and associated with a particular were computed using pdepe with moderate accuracy over coarser space and time meshes than those used to simulate data . since lsqnonlin requires objective functions of the form we approximated the first two terms of by ^ 2 + \sum_{i=1}^n \left[c(x_i , t_j)-z_c(x_i , t_j)\right]^2 \right\ } \left ( \delta x\right ) \left(\delta t\right ) \\\end{aligned}\ ] ] where for with and for with we approximate by where the usual piecewise linear hat functions defined over a partition of . ] and was a bounded perturbation function . in examples 1 - 3 , we used the myerscough parameters with [ [ example-1 ] ] example 1 + + + + + + + + + consider the chemotactic coefficient used by myerscough et al . the cell and chemoattractant concentrations associated with this are shown over the time interval ] when since the optimization algorithm will use approximations of other chemotactic functions , we attempt to reconstruct over a larger interval .we found the interval ] and that its quality degrades , as expected , outside this interval .[ [ comments ] ] comments + + + + + + + + the number of iterations used by the algorithm is quite sensitive to the choice of initial function and the number of piecewise linear basis functions used in equation ( [ a - approximation ] ) . for experimental data , we must acknowledge that the quality of the recovery degrades with increased noise in the data . in certain applications such as pattern formation in _ escherichia coli _ or _ salmonella typhimurium _ , see tyson et al . 
, the size of the interval ] in numerical simulations , this requires a careful choice of numerical method for the solution of the chemotaxis system , see .an alternative approach is to restrict our measurements to a particular time , rather than an interval of time .the efficacy of this approach will be discussed in a future paper .in this work , we have explored a particular mathematical aspect of the chemotactic sensitivity within the gradient .the identification of a chemotactic sensitivity with functional dependence has been determined .the interesting aspect of this work is that , to our understanding , no one has been able to capture the chemotactic sensitivity information from limited data with dependence on the chemical in a system .we have proven the existence of the state solutions in specific sobolev spaces and formulated an inverse problem .we have employed tikhonov regularization to recover the chemotactic sensitivity from noisy measurements . in doing so ,a minimization problem is formed and the necessary convergence results for an approximating minimizer to the true parameter are discussed .another significant result is that we have established a source condition that guarantees a particular rate of convergence by imposing a lipschitz condition on the derivative of the chemotactic sensitivity . in practice , this is biologically reasonable , since the chemotactic sensitivity has a rate of change that is bounded for bacterial growth , .numerically , we have utilized models from myerscough et .al , and keller and segel , for the studies of the comparison of our proposed work to the actual scenarios . with the use of tikhonov regularization ,we have been able to recover the chemotactic sensitivity with reasonable accuracy .a biological benefit of this knowledge is the ability for one to understand the growth associated with chemotaxis within tumor studies , leukocyte dynamics , and bacterial patterns based on the specific gradient information that can be recovered from imperfect data .this work was partially supported by the howard hughes medical insitute as part of its undergraduate biological sciences education program award to murray state university , and by the national science foundation awards dms-0209562 , dms-0414011 and due-0531865 .any opinions , findings , and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the howard hughes medical insitute or the national science foundation .[ existencelocalyagi ] let for , and , on . assume a real local solution to ( [ yagiprob ] ) exists on the interval ] and an estimate for some and constant . then },\ ] ] where is a positive function defined by equations in ( [ peqn ] ) .the interval $ ] on which the solution exists at least is determined by the norms , and by the initial lower bound .it is to be noted that by this theorem from yagi s work that a maximal solution to ( [ yagiprob ] ) can be uniquely defined in the space for for each such that for , and , on .ford , r. , lauffenburger , d. , 1991 .analysis of chemotactic bacterial distributions in population migration assays using a mathematical model applicable to steep or shallow attractant gradients ., 53(5):721729 .hillen , t. , levine , h. , 2003 .blow - up and pattern formation in hyperbolic models for chemotaxis in 1-d ., 54:839868 .hillen , t. , painter , k. , 2001.global existence for a parabolic chemotaxis model with prevention of overcrowding . , 26:4 , 280 - 301 .nakaguchi , n. 
, Yagi, A., 2001. Full discrete approximation by Galerkin method for chemotaxis-growth model. 47:6097-6107.
Painter, K., Hillen, T., 2003. Volume-filling and quorum-sensing in models for chemosensitive movement. 10:4, 501-543.
Tyson, R., Stern, L.G., LeVeque, R.J. Fractional step methods applied to a chemotaxis model. 41:455-475.
Velazquez, J.J.L. Point dynamics for a singular limit of the Keller-Segel model, I. Motion of the concentration regions. 64:4, 1198-1223.
Velazquez, J.J.L. Point dynamics for a singular limit of the Keller-Segel model, II. Formation of the concentration regions. 64:4, 1224-1248.
Chemotaxis is the process by which cells move in response to a chemical gradient. Applications to bacterial growth, tissue inflammation, and vascular tumors motivate a focus on optimization strategies. Experiments can characterize the form of possible chemotactic sensitivities. This paper addresses the recovery of the chemotactic sensitivity from such experiments while allowing for nonlinear dependence of the parameter on the state variables. The existence of solutions to the forward problem is analyzed. The chemotactic parameter is identified using inverse problem techniques. Tikhonov regularization is investigated and appropriate convergence results are obtained. Numerical results for concentration-dependent chemotactic terms are explored. *Keywords:* inverse problem, chemotaxis, Tikhonov regularization
turing proved in 1936 that undecidability exists by showing that the halting problem is undecidable .rice extended the set of known undecidable problems to cover all questions of the form `` does the partial function computed by the given program have property '' , where is any property that at least one computable partial function has and at least one does not have .for instance , could be `` returns for all syntactically correct c++ programs and for all remaining inputs . '' in other words , it may be impossible to find out whether a given weird - looking program is a correct c++ syntax checker .these results are basic material in such textbooks as .on the other hand , imperfect halting testers are possible . for any instance of the halting problem , a _ three - way tester _ eventually answers `` yes '' , `` no '' , or `` i do nt know '' .if it answers `` yes '' or `` no '' , then it must be correct .we say that the `` i do nt know '' instances are _ hard instances _ for the tester .also other kinds of imperfect testers have been introduced , as will be discussed in section [ s : variants ] .assume that is a tester . by turing s proof, it has a hard instance .if is a halting instance , then let be `` if the input is , then reply ` yes ' , otherwise run and return its reply '' .if is non - halting , then let be `` if the input is , then reply ` no ' , otherwise run and return its reply '' . by construction, is a tester with one fewer hard instances than has . by turing s proof, also has a hard instance .let us call it .it is hard also for .this reasoning can be repeated without limit , yielding an infinite sequence , , of testers and , , of instances such that is hard for , , but not for , .therefore , every tester has an infinite number of hard instances , but no instance is hard for all testers . a program that answers `` i do nt know '' for every program and input is a three - way tester , although it is useless .a much more careful tester simulates the given program on the given input at most steps , where is the joint size of the program and its input .if the program stops by then , then the tester answers `` yes '' . if the program repeats a configuration ( that is , a complete description of the values of variables , the program counter , etc . ) by then , then the tester answers `` no '' .otherwise it answers `` i do nt know '' . with this theoretically possible but in practice unrealistic tester ,any hard halting instance has a finite but very long running time . the proofs by turing and rice may leave the hope that only rare artificial contrived programs yield hard instances .one could dream of a three - way tester that answers very seldom `` i do nt know '' .this publication analyses this issue , by surveying and proving results that tell how the proportion of hard instances behaves when the size of the instances grows without limit .section [ s : def ] presents the variants of the halting problem and imperfect testers surveyed , together with some basic results and notation .earlier research is discussed in section [ s : related ] .the section contains some proofs to bring results into the framework of this publication .section [ s : proglang ] presents some new results in the case that a program has many copies of all big sizes , or information can be packed densely inside the program .it is not always assumed that the program has access to the information .a natural example of such information is dead code , such as ` if(1==0)then { } ` . 
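The step-bounded tester sketched above can be made concrete for a toy machine model. The Python sketch below assumes a deliberately minimal single-tape Turing-machine encoding (a transition dictionary; this encoding is an assumption of the illustration, not a construction from the surveyed literature), and the step budget, which in the text is a function of the instance size, is here an explicit parameter. It answers "yes" if the machine halts within the budget, "no" if a complete configuration repeats, and "i don't know" otherwise.

```python
def three_way_tester(delta, input_tape, budget):
    """Toy three-way halting tester.

    delta maps (state, symbol) -> (new_state, new_symbol, move) with move in {-1, +1};
    a missing key means the machine halts.  Returns "yes", "no", or "i don't know".
    """
    tape = dict(enumerate(input_tape))      # sparse tape, blank symbol "_"
    state, head = "q0", 0
    seen = set()
    for _ in range(budget):
        # A configuration is (state, head position, tape contents); repetition => divergence.
        config = (state, head, frozenset(tape.items()))
        if config in seen:
            return "no"
        seen.add(config)
        symbol = tape.get(head, "_")
        if (state, symbol) not in delta:
            return "yes"                    # no applicable transition: the machine halts
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += move
    return "i don't know"                   # budget exhausted: a hard instance for this tester

# A machine that flips its input bits and halts at the first blank:
flip = {("q0", "0"): ("q0", "1", +1), ("q0", "1"): ("q0", "0", +1)}
print(three_way_tester(flip, "0110", budget=100))      # "yes"
# A machine that bounces back and forth forever (caught via configuration repetition):
pingpong = {("q0", "0"): ("q0", "0", +1), ("q0", "_"): ("q0", "_", -1)}
print(three_way_tester(pingpong, "0", budget=100))     # "no"
```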
in section [ s : c++ ] , results are derived for c++ programs with inputs from files .section [ s : conclusions ] briefly concludes this publication .this publication is a significantly extended version of .the papers are otherwise essentially the same , but three proofs were left out from because of lack of space . in the present publication , theorems [ t : ill - eof ] and [ t : hard - sd ] and corollaries [ c : nolimit1 ] and [ c : nolimit2 ] are new results lacking from . furthermore , incorrectly claimed the opposite of theorem [ t : hard - sd ] .the present publication fixes this error and also a small error in proposition [ p : syntax ] .the literature on hard instances of the halting problem considers at least three variants of the halting problem : e : : does the given program halt on the _ empty _ input , s : : does the given program halt when given _ itself _ as its input ( * ? ? ?* ; * ? ? ?* ) , and g : : does the given program halt on the _ given _ input ( * ? ? ?* ; * ? ? ?* ; * ? ? ?each variant is undecidable .variant g has a different notion of instances from others : program input pairs instead of just programs .a tester for g can be trivially converted to a tester for e or s , but the proportion of hard program input pairs among all program input pairs of some size is not necessarily the same as the similar proportion with the input fixed to the empty one or to the program itself .the literature also varies on what the tester does when it fails .three - way testers , that is , the `` i do nt know '' answer is used implicitly by , as it discusses the union of two decidable sets , one being a subset of the halting and the other of the non - halting instances . in _generic - case decidability _ , instead of the `` i do nt know '' answer , the tester itself fails to halt .yet another idea is to always give a `` yes '' or `` no '' answer , but let the answer be incorrect for some instances .such a tester is called _approximating_. one - sided results , where the answer is either `` yes '' or `` i do nt know '' , were presented in . for a tester of any of the three variants , we say that an instance is _ easy _ if the tester correctly answers `` yes '' or `` no '' on it , otherwise the instance is _these yield altogether nine different sets of testers , which we will denote with three - way(x),generic(x ) , and approx(x ) , where x is e , s , or g. some simple facts facilitate carrying some results from one variant of testers to another .[ p:3way- > ] for any three - way tester there is a generic - case tester that has precisely the same easy `` yes''-instances , easy `` no''-instances , hard halting instances , and hard non - halting instances .there also is an approximating tester that has precisely the same easy `` yes''-instances , at least the same easy `` no''-instances , precisely the same hard halting instances , and no hard non - halting instances ; and an approximating tester that has at least the same easy `` yes''-instances , precisely the same easy `` no''-instances , no hard halting instances , and precisely the same hard non - halting instances .a three - way tester can be trivially converted to the promised tester by replacing the `` i do nt know '' answer with an eternal loop , the reply `` no '' , or the reply `` yes '' . 
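Viewed operationally, the conversions in the proposition above are thin wrappers around a given three-way tester. A minimal sketch in Python, assuming the three-way tester is available as a function `three_way` that returns "yes", "no" or "i don't know" for an encoded instance (for example the toy tester sketched earlier):

```python
def approx_optimistic(three_way):
    """Approximating tester: every "i don't know" is replaced by "yes", so hard
    non-halting instances disappear while hard halting instances stay hard."""
    return lambda instance: "yes" if three_way(instance) != "no" else "no"

def approx_pessimistic(three_way):
    """Approximating tester: every "i don't know" is replaced by "no"."""
    return lambda instance: "no" if three_way(instance) != "yes" else "yes"

def generic_case(three_way):
    """Generic-case tester: a correct "yes"/"no" is passed through, and on
    "i don't know" the tester itself fails to halt."""
    def tester(instance):
        answer = three_way(instance)
        if answer in ("yes", "no"):
            return answer
        while True:                        # deliberate non-termination on hard instances
            pass
    return tester
```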
[ p : gen->gen ] for any generic - case tester there is a generic - case tester that has at least the same `` yes''-instances , precisely the same `` no''-instances , no hard halting instances , and precisely the same hard non - halting instances . in parallel with the original tester , the instance is simulated .( in turing machine terminology , parallel simulation is called `` dovetailing '' . )if the original tester replies something , the simulation is aborted .if the simulation halts , the original tester is aborted and the reply `` yes '' is returned .[ p : finite ] for any and tester , there is a tester that answers correctly `` yes '' or `` no '' for all instances of size at most , and similarly to for bigger instances .because there are only finitely many instances of size at most , there is a finite bit string that lists the correct answers for them . if , picks the answer from it and otherwise calls .( we do not necessarily know what bit string is the right one , but that does not rule out its existence . )we use to denote the set of characters that are used for writing programs and their inputs .it is finite and has at least two elements .there are character strings of size .if and are in , then denotes that is a prefix of , and denotes proper prefix .the size of is denoted with .a set of finite character strings is _ self - delimiting _ if and only if membership in is decidable and no member of is a proper prefix of a member of .the _ shortlex ordering _ of any set of finite character strings is obtained by sorting the strings in the set primarily according to their sizes and strings of the same size in the lexicographic order . not necessarily all elements of are programs .the set of programs is denoted with , and the set of all ( not necessarily proper ) prefixes of programs with .so .for tester variants e and s , we use to denote the number of programs of size .then . for tester variant g, denotes the number of program input pairs of joint size .we will later discuss how the program and its input are paired into a single string .the numbers of halting and non - halting ( a.k.a .diverging ) instances of size are denoted with and , respectively .we have .if is a tester , then , , , and denote the number of its easy halting , hard halting , easy non - halting , and hard non - halting instances of size , respectively . obviously and .the smaller and are , the better the tester is .the _ failure rate _ of is .when referring to all instances of size at most , we use capital letters .so , for example , and .nancy lynch used _ gdel numberings _ for discussing programs .in essence , it means that each program has at least one index number ( which is a natural number ) from which the program can be constructed , and each natural number is the index of some program .although the index of an individual program may be smaller than the index of some shorter program , the overall trend is that indices grow as the size of the programs grows , because otherwise we would run out of small numbers . on the other hand ,if the mapping between the programs and indices is 11 , then the growth can not be faster than exponential .this is because . with real - life programming languages ,the growth is exponential , but ( as we will see in section [ s : c++model ] ) the base of the exponent may be smaller than . to avoid confusion ,we refrain from using the notation , etc . 
, when discussing results in , because the results use indices instead of sizes of programs , and their relationship is not entirely straightforward .fortunately , some results of can be immediately applied to programming languages by using the _ shortlex gdel numbering_. the shortlex gdel number of a program is its index in the shortlex ordering of all programs .the first group of results of reveals that a wide variety of situations may be obtained by spreading the indices of all programs sparsely enough and then filling the gaps in a suitable way .for instance , with one gdel numbering , for each three - way tester , the proportion of hard instances among the first indices approaches as grows . with another gdel numbering, there is a three - way tester such that the proportion approaches as grows .there even is a gdel numbering such that as grows , the proportion oscillates in the following sense : for some three - way tester , it comes arbitrarily close to infinitely often and for each three - way tester , it comes arbitrarily close to infinitely often . in its simplest form ,spreading the indices is analogous to defining a new language spaciousc++ whose syntax is identical to that of c++ but the semantics is different .if the first characters of a spaciousc++ program of size are space characters , then the program is executed like a c++ program , otherwise it halts immediately .this does not restrict the expressiveness of the language , because any c++ program can be converted to a similarly behaving spaciousc++ program by adding sufficiently many space characters to its front .however , it makes the proportion of easily recognizable trivially halting instances overwhelm . a program that replies `` yes '' if there are fewer than space characters at the front and `` i do nt know '' otherwise , is a three - way tester .its proportion of hard instances vanishes as the size of the program grows . as a consequence of this and proposition [ p : finite ], one may choose any failure rate above zero and there is a three - way tester for spaciousc++ programs with at most that failure rate .of course , this result does not tell anything about how hard it is to test the halting of interesting programs .this is the first example in this publication of what we call _ an anomaly stealing the result_. that is , a proof of a theorem goes through for a reason that has little to do with the phenomenon we are interested in . indeed ,the first results of depend on using unnatural gdel numberings .they do not tell what happens with untampered programming languages .even so , they rule out the possibility of a simple and powerful general theorem that applies to all models of computation. they also make it necessary to be careful with the assumptions that are made about the programming language . to get sharper results , _ optimal gdel numberings _were discussed in .they do not allow distributing programs arbitrarily .a gdel numbering is optimal if and only if for any gdel numbering , there is a computable function that maps it to the former such that the index never grows more than by a constant factor .the most interesting sharper results are opposite to what was obtained without the optimality assumption . to apply them to programming languages , we first define a programming language version of optimal gdel numberings .[ d : eof - data ] a programming language is _ end - of - file data segment _ , if and only if each program consists of two parts in the following way . 
the first part , called the _ actual program _ ,is written in a self - delimiting language ( so its end can be detected ) .the second part , called the _ data segment _ , is an arbitrary character string that extends to the end of the file .the language has a construct via which the actual program can read the contents of the data segment .the data segment is thus a data literal in the program , packed with maximum density .it is not the same thing as the input to the program .[ c : lyn6 ] for each end - of - file data segment language , let be the end - of - file data segment language , and let be any gdel numbering .consider the following program in .let and be the sizes of its actual program and data segment .the actual program reads the data segment , interpreting its content as a number in the range from to .then it simulates the program in .the shortlex index of is at most , yielding , so , thus .the shortlex numbering of is thus an optimal gdel numbering . from this, proposition 6 in gives the claims .a remarkable feature of the latter result compared to many others in this publication is that is chosen before .that is , there is a positive constant that only depends on the programming language ( and not on the choice of the tester ) such that all testers have at least that proportion of hard instances , for any big enough . on the other hand ,the proof depends on the programming language allowing to pack raw data very densely .real - life programming languages do not satisfy this assumption .for instance , c++ string literals ` ` can not pack data densely enough , because the representation of ` " ` inside the literal ( e.g. , ` " ` or ` 042 ` ) requires more than one character .because of proposition [ p : finite ] , `` '' can not be moved to the front of `` '' .the result can not be generalized to , , and , because the following anomaly steals it .we can change the language by first adding ` 1 ` or ` 01 ` to the beginning of each program and then declaring that if the size of ` 1` or ` 01` is odd , then it halts immediately , otherwise it behaves like .this trick does not invalidate optimality but introduces infinitely many sizes for which the proportion of hard instances is . in ,the halting problem was analyzed in the context of programming languages that are _ frequent _ in the following sense : [ d : frequent ] a programming language is ( a ) _ frequent _ ( b ) _ domain - frequent _ , if and only if for every program , there are and such that for every , at least programs of size ( a ) compute the same partial function as ( b ) halt on precisely the same inputs as . instead of `` frequent '', the word `` dense '' was used in , but we renamed the concept because we felt `` dense '' a bit misleading .the definition says that programs that compute the same partial function are common .however , the more common they are , the less room there is for programs that compute other partial functions , implying that the smallest programs for each distinct partial function must be distributed more sparsely .`` dense '' was used for domain - frequent in .any frequent programming language is obviously domain - frequent but not necessarily vice versa . on the other hand ,even if a theorem in this field mentions frequency as an assumption , the odds are that its proof goes through with domain - frequency . whether a real - life programming language such as c++ is domain - frequent , is surprisingly difficult to find out .we will discuss this question briefly in section [ s : frequent ] . 
as an example of a frequent programming language ,was mentioned in .its full name starts with `` brain '' and then contains a word that is widely considered inappropriate language , so we follow the convention of and call it .information on it can be found on wikipedia under its real name .it is an exceptionally simple programming language suitable for recreational and illustrational but not for real - life programming purposes .in essence , programs describe turing machines with a read - only input tape , write - only output tape , and one work tape .the alphabet of each tape is the set of 8-bit bytes .however , programs only use eight characters . as a side issue , a non - trivial proof was given in that only a vanishing proportion of character strings over the eight characters are programs .that is , exists and is .it trivially follows that if all character strings over the 8 characters are considered as instances and failure to compile is considered as non - halting , then the proportion of hard instances vanishes as grows .the only possible compile - time error in is that the square brackets ` [ ` and ` ] ` do not match .most , if not all , real - life programming languages have parentheses or brackets that must match .so it seems likely that compile - time errors dominate also in the case of most , if not all , real - life programming languages .unfortunately , this is difficult to check rigorously , because the syntax and other compile - time rules of real - life programming languages are complicated . using another , simpler line of argument, we will prove the result for both c++ and in section [ s : syntax ] . in any event, if the proportion of hard instances among all character strings vanishes because the proportion of programs vanishes , that is yet another example of an anomaly stealing the result .it is uninteresting in itself , but it rules out the possibility of interesting results about the proportion of hard instances of size among all character strings of size . therefore , from now on , excluding section [ s : syntax ] , we focus on the proportion of hard instances among all programs or program input pairs . in the case of program input pairs , the results may be sensitive to how the program and its input are combined into a single string that is used as the input of the tester . to avoid anomalous results ,it was assumed in that this `` pairing function '' has a certain property called `` pair - fair '' .the commonly used function is pair - fair . to use this pairing function ,strings are mapped to numbers and back via their indices in the shortlex ordering of all finite character strings .a proof was sketched in that , assuming domain - frequency and pair - fairness , that is , the proportion of wrong answers does not vanish .however , this leaves open the possibility that for any failure rate , there is a tester that fares better than that for all big enough .this possibility was ruled out in , assuming frequency and pair - fairness .( it is probably not important that frequency instead of domain - frequency was assumed . 
)that is , there is a positive constant such that for any tester , the proportion of wrong answers exceeds the constant for infinitely many sizes of instances : the third main result in , adapted and generalized to the present setting , is the following .we present its proof to obtain the generalization and to add a detail that the proof in lacks , that is , how is made to halt for `` wrong sizes '' .generic - case testers are not mentioned , because proposition [ p : gen->gen ] gave a related result for them .[ t : ksz05 - 3 ] for each programming model and variant e , s , g of the halting problem , let .consider the family of the programs of the following kind , where , , and . if , answers `` no '' in the case of approximating and `` i do nt know '' in the case of three - way testers .if , simulates all instances of size until of them have halted .if the simulation stage terminates , then if the given instance is among those that halted , answers `` yes '' , otherwise answers `` no '' or `` i do nt know '' .thus an approximating has .we prove next that some is the required tester .let . then .when , the simulation stage of terminates and the proportion of hard halting instances of is less than .some is the for infinitely many values of .furthermore , there is a smallest such .we denote it with .there also is a such that when , then . with these choices, always halts . for a small enough and the approximating tester in theorem [ t : ksz05 - 3 ] , ( [ e : ksz05 ] ) implies that the failure rate of oscillates , that is , does not approach any limit as .this observation is directly obtainable from lemma 23 in . for turing machines with one - way infinite tape and randomly chosen transition function , the probability of falling off the left end of the tape before halting or repeating a state approaches as the number of states grows .the tester simulates the machine until it falls off the left end , halts , or repeats a state .if falling off the left end is considered as halting , then the proportion of hard instances vanishes as the size of the machine grows .this can be thought of as yet another example of an anomaly stealing the result .formally , , that is , here x may be e , s , or g. although e was considered in , the proof also applies to s and g. comparing the result to theorem [ t : itself ] in section [ s : frequent ] reveals that the representation of programs as transition functions of turing machines is not domain - frequent . on the other hand , independently of the tape model , the proportion does not vanish exponentially fast .like in , the proportion is computed on the transition functions , and not on some textual representations of the programs .the proof relies on the fact that any turing machine has many obviously similarly behaving copies of bigger and bigger sizes .they are obtained by adding new states and transitions while keeping the original states and transitions intact .so the new states and transitions are unreachable .they are analogous to dead code .these copies are not common enough to satisfy definition [ d : frequent ] , but they are common enough to rule out exponentially fast vanishing .generic - case decidability was used in , but the result applies also to three - way testers by proposition [ p:3way- > ] .the results in are based on using weighted running times . 
for every positive integer , the proportion of halting programs that do not halt within time is less than , simply because the proportion of times greater than is less than .the publication presents such a weighting that is a computable constant .assume that programs are represented as self - delimiting bit strings on the input tape of a universal turing machine .the smallest three - way tester of variant e that answers `` yes '' or `` no '' up to size and `` i do nt know '' for bigger programs , is of size .the assumption that the programming language is domain - frequent ( definition [ d : frequent ] ) makes it possible to use a small variation of the standard proof of the non - existence of halting testers , to prove that each halting tester of variant s has a non - vanishing set of hard instances . for three - way and generic - case testers, one can also say something about whether the hard instances are halting or not . despite its simplicity ,as far as we know , the following result has not been presented in the literature .however , see the comment on in section [ s : ksz05 ] .[ t : itself ] if the programming language is domain - frequent , then let the execution of with an input be denoted with . for any , consider the program that first tries its input with .if replies `` yes '' , then enters an eternal loop .if replies `` no '' , then halts immediately .the case that replies `` i do nt know '' is discussed below .if fails to halt , then can not continue and thus also fails to halt . by the definition of domain - frequent ,there are and such that when , at least programs halt on precisely the same inputs as .let be any such program .consider .if answers `` yes '' , then fails to halt. then also fails to halt .thus `` yes '' can not be the correct answer for .a similar reasoning reveals that also `` no '' can not be the correct answer for .so is a hard instance for .nothing more is needed to prove the claim for approximating testers . in the case of generic - case testers ,the hard instances make and thus fail to halt , so they are non - halting instances . in the case of three - way testers , all hard instances can be made halting instances by making halt when replies `` i do nt know '' .this proves the claim .the claim is proven by making enter an eternal loop when replies `` i do nt know '' .these two proofs may yield different values , but the smaller one of them is suitable for both .similarly , the bigger of their values is suitable for both .the second claim of theorem [ t : itself ] lacks a part .indeed , proposition [ p : gen->gen ] says that with generic - case testers , can be made . with approximating testers , can be made at the cost of becoming , by always replying `` yes '' .similarly , can be made by always replying `` no '' .the next theorem applies to testers of variant e and presents some results similar to theorem [ t : itself ] . to our knowledge , it is the first theorem of its kind that applies to the halting problem on the empty input .it assumes not only that many enough equivalent copies exist but also that they can be constructed . 
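The self-referential program used in the proof of theorem [t:itself] can be imitated for testers that operate directly on Python callables rather than on program texts; the sketch below only illustrates the diagonal step (the equally behaving copies supplied by domain-frequency, which carry the actual counting argument, are not modelled), and `tester` stands for any candidate variant-S tester returning "yes", "no" or "i don't know".

```python
def make_diagonal(tester):
    """Build a program d that defeats `tester` on the instance d (variant S:
    tester(p) is supposed to answer whether program p halts on input p)."""
    def d(program):
        answer = tester(program)           # ask: does `program` halt on itself?
        if answer == "yes":
            while True:                    # then do the opposite: never halt
                pass
        return                             # "no" or "i don't know": halt at once
    return d

# Whatever tester is plugged in, it cannot answer correctly on the instance (d, d):
always_yes = lambda p: "yes"               # an (incorrect) approximating tester
d = make_diagonal(always_yes)
# Calling d(d) would loop forever precisely because always_yes says "yes" on d,
# so "yes" is the wrong answer for d; with an always-"no" tester, d(d) halts,
# so "no" would be wrong.  If a three-way tester says "i don't know", d halts,
# which makes the hard instance a halting one, as in the proof above.
```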
on the other hand, its equivalence only pays attention to the empty input .[ d : cdf ] a programming language is _ computably empty - frequent _ if and only if there is a decidable equivalence relation `` '' between programs such that * for each program , there are and such that for every , at least programs of size are equivalent to , and * for each programs and , if , then either both or none of and halt on the empty input .if , we say that is a _ cousin _ of .it can be easily seen from that is computably empty - frequent .[ t : ill - d - r1 ] if the programming language is computably empty - frequent , then the result also holds for generic - case testers but not for approximating testers . given any three - way tester ,consider a program that behaves as follows .first it constructs its own code and stores it in a string variable .hard - wiring the code of a program inside the program is somewhat tricky , but it is well known that it can be done . with gdel numberings ,the same can be obtained with kleene s second recursion theorem. then starts constructing its cousins of all sizes and tests each of them with . by the assumption ,there are and such that for every , has at least cousins of size .if ever replies `` yes '' , then enters an eternal loop and thus does not continue testing its cousins .if ever replies `` no '' , then halts immediately .if replies `` i do nt know '' , then tries the next cousin .if ever replies `` yes '' , then fails to halt on the empty input . by definition ,also the tested cousin fails to halt on the empty input .so the answer `` yes '' would be incorrect .similarly , if ever replies `` no '' , that would be incorrect .so must reply `` i do nt know '' for all cousins of .they are thus hard instances for . because there are infinitely many of them , does not halt , so they are non - halting .to prove the result for generic - case testers , it suffices to run the tests of the cousins in parallel , that is , go around a loop where each test that has been started is executed one step and the next test is started .if any test ever replies `` yes '' or `` no '' , aborts all tests that it has started and then does the opposite of the reply . a program that always replies `` no '' is an approximating tester with for every .the results in this section and section [ s : ksz05 ] motivate the question : are real - life programming languages domain - frequent ?for instance , is c++ domain - frequent ?unfortunately , we have not been able to answer it .we try now to illustrate why it is difficult . given any c++ program , it is easy to construct many longer programs that behave in precisely the same way , by adding space characters , line feeds ( denoted with {->}(2,1.5)(2,.5)(0,.5 ) \end{pspicture}} ] .( the purpose of \ { and } is to hide the variable ` s ` , so that it does not collide with any other variable with the same name . )more programs are obtained by including escape codes such as ` " ` to .however , it seems that this is a vanishing instead of at least a positive constant proportion when . in the absence of escape codes, it certainly is a vanishing proportion .this is because one can add \{`char*s=\sigma,*t=\rho ; ` } instead , where .without escape codes , this yields programs . when , . 
that is , although string literals can represent information rather densely, they do not constitute the densest possible way of packing information into a c++ program ( assuming the absence of escape codes ) .a pair of string literals yields an asymptotically strictly denser packing .similarly , a triple of string literals is denser still , and so on . counting the programs in the presence of escape codesis too difficult , but it seems likely that the phenomenon remains the same .so string literals do not yield many enough programs .it seems difficult to first find a construct that does yield many enough programs , and then prove that it works . in this sectionwe prove a theorem that resembles theorem [ t : ill - d - r1 ] , but relies on different assumptions and has a different proof .we say that a three - way tester is _-perfect _ if and only if it does not answer `` i do nt know '' when the size of the instance is at most .the following lemma is adapted from .[ l : n - o(1 ) ] each programming language has a constant such that the size of each -perfect three - way tester of variant e or s is at least .let be any -perfect three - way tester of variant e or s. consider a program that constructs character strings in shortlex order and tests them with until replies `` i do nt know '' .if replies `` yes '' , simulates before trying the next character string .when simulating , gives it the empty input in the case of variant e and as the input in the case of s. the reply `` i do nt know '' eventually comes , because otherwise would be a true halting tester .as a consequence , eventually halts . before halting , simulates at least all halting programs of size at most .the time consumption of any simulated execution is at least the same as the time consumption of the corresponding genuine execution .so the execution of can not contain properly a simulated execution of . does not read any input , so it does not matter whether it is given itself or the empty string as its input .therefore , the size of is bigger than . because the only part of that depends on is ,there is a constant such that the size of is at least . in any everyday programming language, space characters can be added freely between tokens .motivated by this , we define that a _ blank character _ is a character that , for any program , can be added to at least one place in the program without affecting the meaning of the program .[ t : ill - eof ] let x be e or s. if the programming language is end - of - file data segment and has a blank character , then assume first that tester is a counter - example to the -claim .that is , for every , has infinitely many values of such that .if uses its data segment , let the use be replaced by the use of ordinary constants , liberating the data segment for the use described in the sequel .let be the following program . here is a constant inside represented by characters , and is the content of the data segment of interpreted as a natural number in base .let and be the sizes of the actual program and data segment of .we have .let be the input of .the program first computes : = . if , then adds blank characters to , to make its size .next , if , then replies `` i do nt know '' and halts. otherwise gives ( which is now of size precisely ) to .if replies `` yes ''or `` no '' , then gives the reply as its own reply and halts .otherwise constructs each character string of size and tests it with . 
simulates in parallel those for which returns `` i do nt know '' until of them have halted ( with or the empty string as the input , as appropriate ) .then it aborts those that have not halted .if is among those that halted , then replies `` yes '' , otherwise replies `` no '' . for each , there are infinitely many values of such that . for any such we have latexmath:[ ] is not a prefix of a program .[ p : syntax ] if for every there is such that , then let .obviously .assume first that for every , there is such that for every . because , we get as .in the opposite case there is such that for infinitely many values of .let they be . because is not a prefix of any program , .for the remaining values of , obviously .these imply that when , we have latexmath:[ ] can not occur next .that is , for every character string , either or {->}(2,1.5)(2,.5)(0,.5 ) \end{pspicture}} ] can occur . )[ l : allc++ ] if , then there are at least c++ programs of size .let , and let .consider the character strings of the form ` int main()`\{`/*\alpha\beta*/ ` } where consists of space characters and is any string of the form , where for .each such string is a syntactically correct c++ program of size .their number is .the proportion of comment - less c++ programs among all c++ programs of size approaches , when .let . by lemmas [ l : clc++ ] and [ l : allc++ ] ,the proportion is at most + , when . as a consequence , although comments are irrelevant for the behaviour of programs , they have a significant effect on the distribution of long c++ programs .to avoid the risk that they cause yet another anomaly stealing the result , we restrict ourselves to c++ programs without comments . this assumption does not restrict the expressive power of the programming language , but reduces the number of superficially different instances of the same program .the input may be any finite string of bytes .this is how it is in linux .although not all such inputs can be given directly via the keyboard , they can be given by directing the so - called standard input to come from a file .there is a separate test construct in c++ for detecting the end of the input , so the end of the input need not be distinguished by the contents of the input .there are different inputs of size .the sizes of a program and input are the number of bytes in the program and the number of bytes in the input file .this is what linux reports .the size of an instance is their sum .analogously to section [ s : frequent ] , the size of a program is additional information to the concatenation of the program and the input .this is ignored by our notion of size .however , the notion is precisely what programmers mean with the word .furthermore , the convention is similar to the convention in ordinary ( as opposed to self - delimiting ) kolmogorov complexity theory .[ l : pc++ ] with the c++ programming model in section [ s : c++model ] , , the number of different program input pairs of size is at most + the next theorem says that with halting testers of variant g and comment - less c++ , the proportions of hard halting and hard non - halting instances do not vanish .[ t : ill - h - c ] with the c++ programming model in section [ s : c++model ] , we prove first the part and then the part .the results are combined by picking the bigger and the smaller .there is a program that behaves as follows .first , it gets its own size from a constant in its program code . 
the constant uses some characters and thus affects the size of .however , the size of a natural number constant is and grows in steps of zero or one as grows .therefore , by starting with and incrementing it by steps of one , it eventually catches the size of the program , although also the latter may grow .then reads the input , counting the number of the characters that it gets with and interpreting the string of characters as a natural number in base .we have , and any natural number in this range is possible .let .next constructs every program input pair of size and tests it with . in this way gets the number of easy halting pairs of size .then constructs again every pair of size .this time it simulates each of them in parallel until of them have halted .then it aborts the rest and halts .it halts if and only if .( it may be helpful to think of as a guess of the number of hard halting pairs . ) among the pairs of size is itself with the string that represents as the input .we denote it with .the time consumption of any simulated execution is at least the same as the time consumption of the corresponding genuine execution .so the execution of can not contain properly a simulated execution of .therefore , either does not halt , or the simulated execution of is still continuing when halts . in the former case , . in the latter case is a halting pair but not counted in , so . in both cases , .as a consequence , no natural number less than is .so . by lemma[ l : pc++ ] , .so for any , we have .the proof of the part is otherwise similar , except that continues simulation until pairs have halted .( now is a guess of , yielding a guess of by subtraction . )the program gets by counting the pairs of size whose program part is compilable .it turns out that , so can not be , yielding .next we adapt the second main result in to our present setting , with a somewhat simplified proof and obtaining the result also for three - way and generic - case testers .[ t : indep ] with the c++ programming model in section [ s : c++model ] , [l]{{\textnormal{generic(g ) } } } : \forall n_0 \in \mathbb{n } : \exists: \frac{{\overline d_t}(n)}{p(n ) } \geq c\textrm { , and}\hspace{13mm}\ ] ] [l]{{\textnormal{approx(g ) } } } : \forall n_0 \in \mathbb{n } : \exists n \geq\frac{{\overline h_t}(n)+{\overline d_t}(n)}{p(n ) } \geq c\textrm { .}\hspace{7mm}\ ] ] the proof follows the same strategy as the proof of theorem [ t : hard - sd ] , but differs in some technical details . to prove the claim for three - way testers , for any character string , let if is the empty string , and otherwise is the value of the least significant bit of the last character of . for any character strings and , let if and only if and .for any size greater than , `` '' has two equivalence classes , each containing character strings . for any ,let be the program whose shortlex index is .there is a program that behaves as follows .we denote its execution on input with .please observe that if , then behaves in the same way as .first finds the program , where , where is the biggest square number that is at most .then goes through , in the shortlex order , all , until any of the termination conditions mentioned below occurs or has gone through all of them . 
for each , it runs on .we denote this with .if fails to halt , then never returns from it and thus fails to halt .if halts replying `` yes '' , then enters an eternal loop , thus failing to halt .if halts replying `` no '' , then halts immediately .if halts replying `` i do nt know '' , then tries the next .it is not important what does if halts replying something else . if halted replying `` i do nt know '' for every such that , then checks whether .if yes , then enters an eternal loop , otherwise halts .now let be any three - way tester that tests whether program halts on the input .how the two components and of the input of are encoded into one input string is not important .there is a program that has hard - coded into a string constant , inputs , calls , and gives its reply as its own reply . let be the shortlex index of this program , so the program is .there are infinitely many positive integers such that .let be such , and let be any character string of size .so is .if , during the execution of , ever replies `` yes '' or `` no '' , then the same happens during the execution of , because behaves in the same way as ( the fact that was called implies ) .but that would be incorrect by the construction of . therefore , replies `` i do nt know '' for every of size . as a consequence, has at least hard instances of size . if , then half of them are halting and the other half non - halting , thanks to the test near the end of . by lemma [ l : pc++ ] ,so if , then the program does not depend on , so letting latexmath:[$c = 1/(2 the proof for generic - case testers is otherwise similar , but the are tried in parallel and fails to halt for every of size .all hard instances are non - halting .the for approximating testers lets each continue until completion , counts the numbers of the `` yes''- and `` no''-replies they yield , and then does the opposite of the majority of the replies .application of lemma [ l : limit ] to this result yields the following .[ c : nolimit2 ] with the c++ programming model in section [ s : c++model ] , does not exist .this study did not cover all combinations of a programming model , variant of the halting problem , and variant of the tester .so there is a lot of room for future work .the results highlight what was already known since : the programming model has a significant role . 
with some programming models , a phenomenon of secondary interest dominates the distribution of programs , making hard instances rare .such phenomena include compile - time errors and falling off the left end of the tape of a turing machine .many results were derived using the assumption that information can be packed very densely in the program or the input file .sometimes it was not even necessary to assume that the program could use the information .it sufficed that the assumption allowed to make many enough similarly behaving longer copies of an original program .intuition suggests that if the program can access the information , testing halting is harder than in the opposite case .a comparison of theorem [ t : easy - sd ] to theorem [ t : hard - sd ] supports this intuition .corollaries [ c : nolimit1 ] and [ c : nolimit2 ] and the comment after corollary [ c : nolimit1 ] tell that the proportion of _ all _ ( not just hard ) halting instances has no limit with end - of - file dead segment languages and variant s of the halting problem , with the c++ model and variant g , and in the framework of .it must thus oscillate irregularly as the size of the program grows irregularly because of lemma [l : limit ] .this is not a property of various notions of imperfect halting testers , but a property of the halting problem itself .i thank professor keijo ruohonen for helpful discussions , and the anonymous reviewers of splst 13 and acta cybernetica for their helpful comments .the latter pointed out that proposition [ p : syntax ] had been formulated incorrectly .khler , s. , schindelhauer , c. , and ziegler , m. on approximating real - world halting problems . in likiewicz ,m. and reischuk , r. , editor , _ proc .15th fundamentals of computation theory _ , lecture notes in computer science 3623 , pages 454466 , 2005 .springer .schindelhauer , c. and jakoby , a. the non - recursive power of erroneous computation . in pandurangan , c. , raman , v. , and ramanujam , r. , editors , _ proc .19th foundations of software technology and theoretical computer science _ , lecture notes in computer science 1738 , pages 394406 , 1999 .springer .valmari , a. sizes of up - to- halting testers . in halava , v. , karhumki ,j. , and matiyasevich , y. , editors , _ proceedings of the second russian finnish symposium on discrete mathematics _ , tucs lecture notes 17 , pages 176183 , turku , finland , 2012 . valmari , a. the asymptotic behaviour of the proportion of hard instances of the halting problem . in kiss , . ,editor , _ proceedings of splst 13 , 13th symposium on programming languages and software tools _ , pages 170184 , szeged , hungary , 2013 .
although the halting problem is undecidable , imperfect testers that fail on some instances are possible . such instances are called _ hard _ for the tester . one variant of imperfect testers replies `` i do not know '' on hard instances , another variant fails to halt , and yet another replies incorrectly `` yes '' or `` no '' . the halting problem itself also comes in three variants : does a given program halt on the empty input , does a given program halt when given itself as its input , or does a given program halt on a given input . the failure rate of a tester for some size is the proportion of hard instances among all instances of that size . this publication investigates the behaviour of the failure rate as the size grows without limit . earlier results are surveyed and new results are proven . some of them use c++ on linux as the computational model . it turns out that the behaviour is sensitive to the details of the programming language or computational model , but in many cases it is possible to prove that the proportion of hard instances does not vanish . * keywords : * halting problem , three - way tester , generic - case tester , approximating tester * acm computing classification system 1998 : * f.1.1 models of computation computability theory * mathematics subject classification 2010 : * 68q17 computational difficulty of problems
compressed sensing is a recently emerged technique for signal sampling and data acquisition which enables to recover sparse signals from undersampled linear measurements where is a sampling matrix with , denotes an -dimensional sparse signal , and denotes the additive noise .the problem has been extensively studied and a variety of algorithms , e.g. the orthogonal matching pursuit ( omp ) algorithm , the basis pursuit ( bp ) method , the iterative reweighted and algorithms , and the sparse bayesian learning method were proposed . in many practical applications , in addition to the sparse structure , sparse signals may exhibit two - dimensional cluster patterns that can be utilized to enhance the recovery performance .for example , the target of interest in the synthetic aperture radar / inverse synthetic aperture radar ( sar / isar ) images often demonstrates continuity in both the range and cross - range domains . in video surveillance , the foreground image exhibits a cluster pattern since the foreground objects ( humans , cars , text etc . )generally occupy a small continuous region of the scene . besides these , block - sparsity is also present in temporal observations of a time - varying block - sparse signal whose support varies slowly over time .analyses show that exploiting the inherent block - sparse structure not only leads to relaxed conditions for exact reconstruction , but also helps improve the recovery performance considerably .a number of algorithms have been proposed for recovering block - sparse signals over the past few years , e.g. , block - omp , mixed norm - minimization , group lasso , model - based cosamp , and block - sparse bayesian learning .these algorithms , however , require _ a priori _ knowledge of the block partition ( e.g. the number of blocks and location of each block ) such that the coefficients in each block are grouped together and enforced to share a common sparsity pattern . in practice ,the prior information about the block partition of sparse signals is often unavailable , especially for two - dimensional signals since the block partition of a two - dimensional signal involves not only the location but also the shape of each block .for example , foreground images have irregular and unpredictable cluster patterns which are very difficult to be estimated _ a priori_. to address this difficulty , a few sophisticated bayesian methods which do not need the knowledge of the block partition were developed . in ,a `` spike - and - slab '' prior model was proposed , where by introducing dependencies among mixing weights , the prior model has the potential to encourage sparsity and promote a tree structure simultaneously .this `` spike - and - slab '' prior model was later extended to accommodate block - sparse signals .nevertheless , for the `` spike - and - slab '' prior introduced in , the posterior distribution can not be derived analytically , and a markov chain monte carlo ( mcmc ) sampling method has to be employed for bayesian inference . in ,a graphical prior , also referred to as the `` boltzmann machine '' , is employed as a prior on the sparsity support in order to induce statistical dependencies between atoms .with such a prior , the maximum a posterior ( map ) estimator requires an exhaustive search over all possible sparsity patterns . to overcome the intractability of the combinatorial search , a greedy method and a variational mean - field approximation method were developed to approximate the map . 
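as a concrete illustration of the undersampled measurement model and of one of the greedy recovery schemes mentioned above, the following sketch (noiseless case, gaussian sampling matrix, illustrative dimensions and sparsity level) generates a sparse signal and recovers it with a minimal orthogonal matching pursuit:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                       # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x                                  # undersampled linear measurements

# minimal orthogonal matching pursuit (OMP)
support, r = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ r)))    # atom most correlated with the residual
    support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef           # update the residual
x_hat = np.zeros(n)
x_hat[support] = coef
print("recovery error:", np.linalg.norm(x - x_hat))
```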
in , to cope with the unknown cluster pattern , an expanded model is employed by assuming that the original sparse signal is a superposition of a number of overlapping blocks , and the coefficients in each block share the same sparsity pattern .conventional block sparse bayesian learning algorithms such as those in can then be applied to the expanded model . recently in , we proposed a pattern - coupled hierarchical gaussian prior model to exploit the unknown block - sparse structure . unlike the conventional hierarchical gaussian prior model where each coefficient is associated independently with a unique hyperparameter , the pattern - coupled prior for each coefficient not onlyinvolves its own hyperparameter , but also its immediate neighboring hyperparameters .this pattern - coupled hierarchical model is effective and flexible to capture any underlying block - sparse structures , without requiring the prior knowledge of the block partition .numerical results show that the pattern - coupled sparse bayesian learning ( pc - sbl ) method renders competitive performance for block - sparse signal recovery . nevertheless , a major drawback of the method is that it requires computing an matrix inverse at each iteration , and thus has a cubic complexity in terms of the signal dimension .this high computational cost prohibits its application to problems with even moderate dimensions .also , only considers recovery of one - dimensional block - sparse signals . in this paper , we generalize the pattern - coupled hierarchical model to the two - dimensional ( 2-d ) scenario in order to leverage block - sparse patterns arising from 2-d sparse signals . to address the computational issue , we resort to the generalized approximate message passing ( gamp ) technique and develop a computationally efficient method .specifically , the algorithm is developed within an expectation - maximization ( em ) framework , using the gamp to efficiently compute an approximation of the posterior distribution of hidden variables .the hyperparameters associated with the hierarchical gaussian prior are learned by iteratively maximizing the q - function which is calculated based on the posterior approximation obtained from the gamp .simulation results show that the proposed method presents superior recovery performance for block - sparse signals , meanwhile achieving a significant reduction in computational complexity .the rest of the paper is organized as follows . in section [ sec :model ] , we introduce a 2-d pattern coupled hierarchical gaussian framework to model the sparse prior and the pattern dependencies among the neighboring coefficients . in section [ sec : algorithm ] , a gamp - based em algorithm is developed to obtain the maximum a posterior ( map ) estimate of the hyperparameters , along with the posterior distribution of the sparse signal .simulation results are provided in section [ sec : simulation ] , followed by concluding remarks in section [ sec : conclusions ] .we consider the problem of recovering a two - dimensional block - sparse signal from compressed noisy measurements where denotes the compressed measurement vector , is a linear map : , with , and is an additive multivariate gaussian noise with zero mean and covariance matrix .let , the linear map can generally be expressed as where denotes the measurement matrix . 
in the special casewhere , then we have , in which stands for the kronecker product .the above model ( [ data - model ] ) arises in image applications where signals are multi - dimensional in nature , or in the scenario where multiple snapshots of a time - varying sparse signal are available . in these applications ,signals usually exhibit two - dimensional cluster patterns that can be utilized to improve the recovery accuracy .to leverage the underlying block - sparse structures , we introduce a 2-d pattern - coupled gaussian prior model which is a generalization of our previous work . before proceeding , we provide a brief review of the conventional hierarchical gaussian prior model , and some of its extensions . for ease of exposition , we consider the prior model for the two - dimensional signal instead of its one - dimensional form .let denote the entry of . in the conventional sparse bayesian learning framework ,a two - layer hierarchical gaussian prior was employed to promote the sparsity of the solution . in the first layer ,coefficients of are assigned a gaussian prior distribution where is a non - negative hyperparameter controlling the sparsity of the coefficient .the second layer specifies gamma distributions as hyperpriors over the hyperparameters , i.e. as discussed in , for properly chosen and , this hyperprior allows the posterior mean of to become arbitrarily large . as a consequence, the associated coefficient will be driven to zero , thus yielding a sparse solution .this conventional hierarchical model , however , does not encourage structured - sparse solutions since the sparsity of each coefficient is determined by its own hyperparameter and the hyperparameters are independent of each other .in , the above hierarchical model was generalized to deal with block - sparse signals , in which a group of coefficients sharing the same sparsity pattern are assigned a multivariate gaussian prior parameterized by a common hyperparameter .nevertheless , this model requires the knowledge of the block partition to determine which coefficients should be grouped and assigned a common hyperparameter . to exploit the 2-d block - sparse structure , we utilize the fact that the sparsity patterns of neighboring coefficients are statistically dependent . to capture the pattern dependencies among neighboring coefficients , the gaussian prior for each coefficient not only involves its own hyperparameter , but also its immediate neighbor hyperparameters .specifically , a prior over is given by where in which denotes the neighborhood of the grid point , i.e. changes accordingly .] , and ] , can then be computed , where the operator $ ] denotes the expectation with respect to the posterior distribution . in the m - step , we maximize the q - function with respect to the hyperparameters . it can be seen that the em algorithm , at each iteration , requires to update the posterior distribution , which involves computing an matrix inverse .thus the em - based algorithm has a computational complexity of flops , and therefore is not suitable for many real - world applications involving large dimensions . 
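the coupling just described can be pictured with a short sketch. it assumes, as an illustrative reading of the pattern-coupled model (the exact expression is partly lost in the extraction), that the effective prior precision of a coefficient is its own hyperparameter plus a coupling weight beta times the hyperparameters of its four immediate neighbours; the grid size, beta and the hyperparameter values are arbitrary:

```python
import numpy as np

def coupled_precision(alpha, beta=1.0):
    """Effective prior precision of each coefficient: its own hyperparameter
    plus beta times the hyperparameters of its 4-neighbourhood (boundary
    coefficients simply have fewer neighbours).  Prior variance is 1/precision."""
    p = np.pad(alpha, 1)                                   # zero-pad the boundary
    neigh = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    return alpha + beta * neigh

alpha = np.ones((8, 8))            # hyperparameters on an 8x8 grid (illustrative)
alpha[2:5, 3:6] = 1e-3             # a low-precision (active) block
prior_var = 1.0 / coupled_precision(alpha, beta=1.0)
# coefficients bordering the block receive a slightly larger prior variance
# than those far away, which is how the coupling encourages block patterns
```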
in the following, we will develop a computationally efficient algorithm by resorting to the generalized approximate message passing ( gamp ) technique .gamp is a very - low - complexity bayesian iterative technique recently developed for obtaining an approximation of the posterior distribution .it therefore can naturally be embedded within the em framework to replace the computation of the true posterior distribution . from gamps point of view , the hyperparameters are considered as known . the hyperparameters can be updated in the m - step based on the approximate posterior distribution of .we now proceed to derive the gamp algorithm for the pattern - coupled gaussian hierarchical prior model .gamp was developed in a message passing - based framework . by using central - limit - theorem approximations , message passing between variable nodes and factor nodescan be greatly simplified , and the loopy belief propagation on the underlying factor graph can be efficiently performed . as noted in ,the central - limit - theorem approximations become exact in the large - system limit under i.i.d .zero - mean sub - gaussian . for notational convenience ,let denote the hyperparameters .firstly , gamp assumes posterior independence among hidden variables and approximates the true posterior distribution by where and are quantities iteratively updated during the iterative process of the gamp algorithm . here, we have dropped their explicit dependence on the iteration number for simplicity . substituting ( [ x - prior ] ) into ( [ eqn-1 ] ), it can be easily verified that the approximate posterior follows a gaussian distribution with its mean and variance given respectively as another approximation is made to the noiseless output , where denotes the row of .gamp approximates the true marginal posterior by where and are quantities iteratively updated during the iterative process of the gamp algorithm .again , here we dropped their explicit dependence on the iteration number . under the additive white gaussian noise assumption, we have .thus also follows a gaussian distribution with its mean and variance given by with the above approximations , we can now define the following two scalar functions : and that are used in the gamp algorithm .the input scalar function is simply defined as the posterior mean , i.e. the scaled partial derivative of with respect to is the posterior variance , i.e. the output scalar function is related to the posterior mean as follows the partial derivative of is related to the posterior variance in the following way given the above definitions of and , the gamp algorithm tailored to the considered sparse signal estimation problem can now be summarized as follows ( details of the derivation of the gamp algorithm can be found in ) , in which denotes the entry of , and denote the posterior mean and variance of at iteration , respectively . [ cols="<",options="header " , ]we developed a pattern - coupled sparse bayesian learning method for recovery of two - dimensional block - sparse signals whose cluster patterns are unknown _ a priori_. 
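for reference, the two scalar functions described above can be written down in a few lines. the sketch below assumes a zero-mean gaussian prior with (pattern-coupled) precision d on each coefficient and an additive white gaussian noise output channel; the full gamp loop and the em update of the hyperparameters are omitted, and the variable names are illustrative:

```python
import numpy as np

def g_in(r, tau, d):
    """Input function: posterior mean and variance of x under the prior
    N(0, 1/d) and the pseudo-measurement r ~ N(x, tau)."""
    var = 1.0 / (d + 1.0 / tau)
    mean = var * (r / tau)
    return mean, var

def g_out(p, tau_p, y, sigma2):
    """Output function for an AWGN channel with noise variance sigma2 and
    pseudo-prior z ~ N(p, tau_p) (sum-product form)."""
    g = (y - p) / (tau_p + sigma2)          # scaled residual
    neg_dg = 1.0 / (tau_p + sigma2)         # minus its derivative with respect to p
    return g, neg_dg
```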
a two - dimensional pattern - coupled hierarchical gaussian prior model is introduced to characterize and exploit the pattern dependencies among neighboring coefficients .the proposed pattern - coupled hierarchical model is effective and flexible to capture any underlying block - sparse structures , without requiring the prior knowledge of the block partition .an expectation - maximization ( em ) strategy is employed to infer the maximum a posterior ( map ) estimate of the hyperparameters , along with the posterior distribution of the sparse signal .additionally , the generalized approximate message passing ( gamp ) algorithm is embedded in the em framework to efficiently compute an approximation of the posterior distribution of hidden variables , which results in a significant reduction in computational complexity .numerical results show that our proposed algorithm presents a substantial performance advantage over other existing state - of - the - art methods in image recovery .d. wipf and s. nagarajan , `` iterative reweighted and methods for finding sparse solutions , '' _ ieee journals of selected topics in signal processing _ ,vol . 4 , no . 2 , pp . 317329 , apr .2010 .v. cevher , a. sankaranarayanan , m. f. duarte , d. reddy , r. g. baraniuk , and r. chellappa , `` compressive sensing for background subtraction , '' in _european conf .comp . vision ( eccv ) _ , marseille , france , october 12 - 18 2008 .z. zhang and b. d. rao , `` sparse signal recovery with temporally correlated source vectors using sparse bayesian learning , '' _ ieee journal of selected topics in signal processing _, vol . 5 , no . 5 , pp . 912926 , sept .2011 .a. drmeau , c. herzet , and l. daudet , `` boltzmann machine and mean - field approximation for structured sparse decompositions , '' _ ieee trans .signal processing _ , vol .60 , no . 7 ,34253438 , july 2012 .z. zhang and b. d. rao , `` extension of sbl algorithms for the recovery of block sparse signals with intra - block correlation , '' _ ieee trans . signal processing _ , vol .61 , no . 8 , pp . 20092015 , apr . 2013 .g. warnell , d. reddy , and r. chellappa , `` adaptive rate compressive sensing for background subtraction , '' in _ ieee international conference on acoustics , speech and signal processing ( icassp ) _ , kyoto , japan , march 25 - 30 2012 .j. p. vila and p. schniter , `` expectation - maximization bernoulli - gaussian approximate message passing , '' in _45th asilomar conference on signals , symtems and computers _ , pacific grove , ca , usa , november 6 - 9 2011 .
we consider the problem of recovering two - dimensional ( 2-d ) block - sparse signals with _ unknown _ cluster patterns . two - dimensional block - sparse patterns arise naturally in many practical applications such as foreground detection and inverse synthetic aperture radar imaging . to exploit the block - sparse structure , we introduce a 2-d pattern - coupled hierarchical gaussian prior model to characterize the statistical pattern dependencies among neighboring coefficients . unlike the conventional hierarchical gaussian prior model where each coefficient is associated independently with a unique hyperparameter , the pattern - coupled prior for each coefficient not only involves its own hyperparameter , but also its immediate neighboring hyperparameters . thus the sparsity patterns of neighboring coefficients are related to each other and the hierarchical model has the potential to encourage 2-d structured - sparse solutions . an expectation - maximization ( em ) strategy is employed to obtain the maximum a posteriori ( map ) estimate of the hyperparameters , along with the posterior distribution of the sparse signal . in addition , the generalized approximate message passing ( gamp ) algorithm is embedded into the em framework to efficiently compute an approximation of the posterior distribution of hidden variables , which results in a significant reduction in computational complexity . numerical results are provided to illustrate the effectiveness of the proposed algorithm . pattern - coupled sparse bayesian learning , block - sparse structure , expectation - maximization ( em ) , generalized approximate message passing ( gamp ) .
as an approximation to control problems for critically - loaded stochastic networks , harrison ( in , see also ) has formulated a stochastic control problem in which the state process is driven by a multidimensional brownian motion along with an additive control that satisfies certain feasibility and nonnegativity constraints .this control problem , that is , usually referred to as the _ brownian control problem _ ( bcp ) has been one of the key developments in the heavy traffic theory of controlled stochastic processing networks ( spn ) .bcps can be regarded as formal scaling limits for a broad range of scheduling and sequencing control problems for multiclass queuing networks .finding optimal ( or even near - optimal ) control policies for such networks which may have quite general non - markovian primitives , multiple server capabilities and rather complex routing geometry is in general prohibitive . in that regard ,bcps that provide significantly more tractable approximate models are very useful . in this diffusion approximation approach to policy synthesis , one first finds an optimal ( or near - optimal ) control for the bcp which is then suitably interpreted to construct a scheduling policy for the underlying physical network . in recent yearsthere have been many works that consider specific network models for which the associated bcp is explicitly solvable ( i.e. , an optimal control process can be written as a known function of the driving brownian motions ) and , by suitably adapting the solution to the underlying network , construct control policies that are asymptotically ( in the heavy traffic limit ) optimal .the paper also carries out a similar program for the crisscross network where the state space is three dimensional , although an explicit solution for the bcp here is not available .although now there are several papers which establish a rigorous connection between a network control problem and its associated bcp by exploiting the explicit form of the solution of the latter , a systematic theory which justifies the use of bcps as approximate models has been missing . in a recent work it was shown that for a large family of _ unitary networks _( following terminology of , these are networks with a structure as described in section [ secsetup ] ) , with general interarrival and service times , probabilistic routing and an infinite horizon discounted linear holding cost , the cost associated with any admissible control policy for the network is asymptotically , in the heavy traffic limit , bounded below by the value function of the bcp .this inequality , which provides a useful bound on the best achievable asymptotic performance for an admissible control policy , was a key step in developing a rigorous general theory relating bcps with spn in heavy traffic .the current paper is devoted to the proof of the reverse inequality .the network model is required to satisfy assumptions made in ( these are summarized above theorem [ ab937 ] ) .in addition , we impose a nondegeneracy condition ( assumption [ non - deg ] ) , a condition on the underlying renewal processes regarding probabilities of deviations from the mean ( assumption [ ldp ] ) and regularity of a certain skorohod map ( assumption [ ab1012 ] ) ( see next paragraph for a discussion of these conditions ) . under these assumptionswe prove that the value function of the bcp is bounded below by the heavy traffic limit ( limsup ) of the value functions of the network control problem ( theorem [ main1016 ] ) . 
combining this with the result obtained in ( see theorem [ ab937 ] ), we obtain the main result of the paper ( theorem [ maincorr ] ) .this theorem says that , under broad conditions , the value function of the network control problem converges to that of the bcp .this result provides , under general conditions , a rigorous basis for regarding bcps as approximate models for critically loaded stochastic networks .conditions imposed in this paper allow for a wide range of spn models .some such models , whose description is taken from , are discussed in detail in examples [ exammodel](a)(c ) .we note that our approach does not require the bcp to be explicitly solvable and the result covers many settings where explicit solutions are unavailable . most of the conditions that we impose are quite standard and we only comment here on three of them : assumptions [ ab148 ] , [ g - column ] and [ ab1012 ] .assumption [ ab148 ] says that each buffer is processed by at least one basic activity ( see remark [ rem - ht ] ) .this condition , which was introduced in , is fundamental for our analysis .in fact , has shown that without this assumption even the existence of a nonnegative workload matrix may fail .assumption [ g - column ] is a natural condition on the geometry of the underlying network . roughly speaking, it says that a nonzero control action leads to a nonzero state displacement .assumption [ ab1012 ] is the third key requirement in this work .it says that the skorohod problem associated with a certain reflection matrix [ see equation ( [ refmat ] ) for the definition of ] is well posed and the associated skorohod map is lipschitz continuous . as example[ exammodel ] discusses , this condition holds for a broad family of networks ( including _ all _ multiclass open queuing networks , as well as a large family of parallel server networks and job - shop networks ) .the papers noted earlier , that treat the setting of explicitly solvable bcp , do much more than establish convergence of value functions .in particular , these works give an explicit implementable control policy for the underlying network that is asymptotically optimal in the heavy traffic limit . in the generality treated in the current work , giving explicit recipes ( e.g. , threshold type policies ) is unfeasible , however , the policy sequence constructed in section [ subconstruct ] suggests a general approach for building near asymptotically optimal policies for the network _ given _ a near optimal control for the bcp .obtaining near optimal controls for the bcp in general requires numerical approaches ( see , e.g. , ) , discussion of which is beyond the scope of the current work .we now briefly describe some of the ideas in the proof of the main result theorem [ main1016 ] .we begin by choosing , for an arbitrary , a suitable -optimal control for the bcp and then , using , construct a sequence of control policies for the network model such that the ( suitably scaled ) cost associated with converges to that associated with , as .this yields the desired reverse inequality .one of the key difficulties is in the translation of a given control for the bcp to that for the physical network .indeed , a ( near ) optimal control for the bcp can be a very general adapted process with rcll paths . 
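assumption [ ab1012 ] requires the skorohod map associated with the reflection matrix to be well posed and lipschitz . as a point of reference only , the classical one - dimensional instance of this map ( reflection of a path at zero , not the multidimensional map actually used in the paper ) can be written down and checked directly ; the sketch below applies it to a discretized brownian - like path :

```python
import numpy as np

def skorokhod_1d(psi):
    """One-dimensional Skorohod map: keep a path nonnegative by adding the
    minimal nondecreasing pushing term l(t) = sup_{s<=t} max(-psi(s), 0)."""
    pushing = np.maximum.accumulate(np.maximum(-psi, 0.0))
    return psi + pushing, pushing

rng = np.random.default_rng(1)
dt = 1e-3
psi = np.cumsum(rng.standard_normal(5000)) * np.sqrt(dt)   # Brownian-like path
x, l = skorokhod_1d(psi)
assert (x >= -1e-12).all() and (np.diff(l) >= 0).all()     # constrained path, nondecreasing push
```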
without additional information on such a stochastic process, it is not at all clear how one adapts and applies it to a given network model .a control policy for the network needs to specify how each server distributes its effort among various job classes at any given time instant . by a series of approximationswe show that one can find a rather simple -optimal control for the bcp , that is , easy to interpret and implement on a network control problem . as a first step , using pde characterization results for general singular control problems with state constraints from ( these , in particular , make use of the nondegeneracy assumption assumption [ non - deg ] ) , one can argue that a near - optimal control can be taken to be adapted to the driving brownian motion and be further assumed to have moments that are subexponential in the time variable ( see lemma [ cont - bcp ] ) . using results from , one can perturb this control so that it has continuous sample paths without significantly affecting the cost .next , using ideas developed by kushner and martins in the context of a two - dimensional bcp , one can further approximate such a control by a process with a fixed ( nonrandom ) finite number of jumps that take values in a finite set . two main requirements ( in addition to the usual adaptedness condition ) for such a process to be an admissible control of a bcp ( see definition [ bcp ] ) are the nonnegativity constraints ( [ ab934 ] ) and state constraints ( [ ab440 ] ) .it is relatively easy to construct a pure jump process that satisfies the first requirement of admissibility , namely , the nonnegativity constraints , however , the nondegenerate brownian motion in the dynamics rules out the satisfaction of the second requirement , that is , state constraints , without additional modifications .this is where the regularity assumption on a certain skorohod map ( assumption [ ab1012 ] ) is used .the pure jump control is modified in a manner such that in between successive jumps one uses the skorohod map to employ minimal control needed in order to respect state constraints .regularity of the skorohod problem ensures that this modification does not change the associated cost much .the skorohod map also plays a key role in the weak convergence arguments used to prove convergence of costs .the above construction is the essential content of theorem [ main - bcp ] .the -optimal control that we use for the construction of the policy sequence requires two additional modifications [ see part ( iii ) of theorem [ main - bcp ] and below ( [ 111 ] ) ] which facilitate adapting such a control for the underlying physical network and in some weak convergence proofs , but we leave that discussion for later in section [ nearopt ] ( see remark [ on3 ] and above theorem [ newmain518 ] ) . using a near - optimal control of the form given in section [ nearopt ] ( cf .theorem [ newmain518 ] ) , we then proceed to construct a sequence of policies for the underlying network .the key relation that enables translation of into is ( [ y - defn ] ) using which one can loosely interpret as the asymptotic deviation , with suitable scaling , of from the nominal allocation ( see definition [ defn - ht ] for the definition of nominal allocation vector ) .recall that is constructed by modifying , through a skorohod constraining mechanism , a pure jump process ( say , ) .in particular , has sample paths that are , in general , discontinuous . 
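the step - process approximation referred to above ( jumps on a time grid of mesh theta , values on a lattice of mesh eta ) can be illustrated by the following sketch ; the one - dimensional path , the grid and the lattice spacing are illustrative choices , and the adaptedness and admissibility issues handled in the actual proof are ignored here :

```python
import numpy as np

def lattice_step_approx(y, t, theta, eta):
    """Piecewise-constant approximation of a path: sample y on a grid of mesh
    theta and round the sampled values down to the lattice eta*Z (a sketch of
    the step-process approximation described in the text, in one dimension)."""
    grid = np.arange(t[0], t[-1] + theta, theta)
    sampled = np.interp(grid, t, y)            # value of the path at grid points
    lattice_vals = eta * np.floor(sampled / eta)
    # the approximating process holds lattice_vals[n] on [grid[n], grid[n+1])
    return grid, lattice_vals
```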
on the other hand , note that an admissible policy is required to be a lipschitz function ( see remark [ rem921 ] ) .this suggests the following construction for . over time periods ( say , ) of constancy of should use the nominal allocation ( i.e. , ) , while jump - instants should be stretched into periods of length of order ( note that in the scaled network , time is accelerated by a factor of and so such periods translate to intervals of length in the scaled evolution and thus are negligible ) over which a nontrivial control action is employed as dictated by the jump vector ( see figure [ policyfig ] for a more complete description ) .this is analogous to the idea of a discrete review policy proposed by harrison ( see also and references therein ) .there are some obvious difficulties with the above prescription , for example , a nominal allocation corresponds to the average behavior of the system and for a given realization is feasible only when the buffers are nonempty .thus , one needs to modify the above construction to incorporate idleness , that is , caused due to empty buffers .the effect of such a modification is , of course , very similar to that of a skorohod constraining mechanism and it is tempting to hope that the deviation process corresponding to this modified policy converges to ( in an appropriate sense ) , as .however , without further modifications , it is not obvious that the reflection terms that are produced from the idling periods under this policy are asymptotically consistent with those obtained from the skorohod constraining mechanism applied to ( the state process corresponding to ) .the additional modification [ see ( [ ab543 ] ) ] that we make roughly says that jobs are processed from a given buffer over a small interval , only if at the beginning of this interval there are a `` sufficient '' number of jobs in the buffer .this idea of safety stocks is not new and has been used in previous works ( see , e.g. , ) .the modification , of course , introduces a somewhat nonintuitive idleness even when there are jobs that require processing .however , the analysis of section [ secproof ] shows that this idleness does not significantly affect the asymptotic cost .the above very rough sketch of construction of is made precise in section [ subconstruct ] .the rest of the paper is devoted to showing that the cost associated with converges to that associated with .it is unreasonable to expect convergence of controls ( e.g. , with the usual skorohod topology)in particular , note that has lipschitz paths for every while is a ( modification of ) a pure jump process however , one finds that the convergence of costs holds .this convergence proof , and the related weak convergence analysis , is carried out in sections [ secconvprf ] and [ weakcgce ] .the paper is organized as follows .section [ secsetup ] describes the network structure , all the associated stochastic processes and the heavy - traffic assumptions as well as the other assumptions of the paper .the section also presents the spn control problem , that is , considered here , along with the main result of the paper ( theorem [ maincorr ] ) .section [ nearopt ] constructs ( see theorem [ newmain518 ] ) a near - optimal control policy for the bcp which can be suitably adapted to the network control problem . 
in section [ secproof ] the near - optimal control policy from section [ nearopt ]is used to obtain a sequence of admissible control policies for the scaled spn .the main result of the section is theorem [ mainweak ] , which establishes weak convergence of various scaled processes . convergence of costs ( i.e. , theorem [ jrtoj ] ) is an immediate consequence of this weak convergence result .theorem [ maincorr ] then follows on combining theorem [ jrtoj ] with results of ( stated as theorem [ ab937 ] in the current work ) .finally , the collects proofs of some auxiliary results . the following notation will be used .the space of reals ( nonnegative reals ) , positive ( nonnegative ) integers will be denoted by ( ) , ( ) , respectively . for and ] ) to with the topology of uniform convergence on compacts ( resp .uniform convergence ) . also , ] ) to with the usual skorohod topology . for and , we write and , where for , all vector inequalities are to be interpreted component - wise .we will call a function nonnegative if for all .a function is called nondecreasing if it is nondecreasing in each component . all ( stochastic ) processes in this work will have sample paths that are right continuous and have left limits , and thus can be regarded as -valued random variables with a suitable . for a polish space , will denote the corresponding borel sigma - field .weak convergence of valued random variables to will be denoted as .sequence of processes is tight if and only if the measures induced by s on form a tight sequence . a sequence of processes with paths in ( ) is called -tight if it is tight in and any weak limit point of the sequence has paths in almost surely ( a.s . ) . for processes , defined on a common probability space , we say that converge to , uniformly on compact time intervals ( u.o.c . ) , in probability ( a.s . ) if for all , converges to zero in probability ( resp .. to ease the notational burden , standard notation ( that follow ) for different processes are used ( e.g. , for queue - length , for idle time , for workload process etc . ) .we also use standard notation , for example , , to denote fluid scaled , respectively , diffusion scaled , versions of various processes of interest [ see ( [ fl - scaled ] ) and ( [ diff - scaled ] ) ]. all vectors will be column vectors .an -dimensional vector with all entries will be denoted by . for a vector , will denote the diagonal matrix such that the vector of its diagonal entries is . will denote the transpose of a matrix .also , will denote generic constants whose values may change from one proof to the next .let be a probability space .all the random variables associated with the network model described below are assumed to be defined on this probability space .the expectation operation under will be denoted by .we begin by introducing the family of stochastic processing network models that will be considered in this work .we closely follow the terminology and notation used in .the network has infinite capacity buffers ( to store many different classes of jobs ) and nonidentical servers for processing jobs .arrivals of jobs , given in terms of suitable renewal processes , can be from outside the system and/or from the internal rerouting of jobs that have already been processed by some server . several different servers may process jobs from a particular buffer .service from a given buffer by a given server is called an _ activity_. 
once a job starts being processed by an activity , it must complete its service with that activity , even if its service is interrupted for some time ( e.g. , for preemption by a job from another buffer ) .when service of a partially completed job is resumed , it is resumed from the point of preemption that is , the job needs only the remaining service time from the server to get completed ( preemptive - resume policy ) .also , an activity must complete service of any job that it started before starting another job from the same buffer .an activity always selects the oldest job in the buffer that has not yet been served , when starting a new service [ i.e. , first in first out ( fifo ) within class ] .there are activities [ at most one activity for a server - buffer pair , so that . herethe integers are strictly positive .figure [ fig1-general ] gives a schematic for such a model .buffers , activities , servers and probabilistic routing ( given by the matrix ) . ]let , and .the correspondence between the activities and buffers , and activities and servers are described by two matrices and respectively . is an matrix with if the activity processes jobs from buffer , and otherwise .the matrix is with if the server is associated with the activity , and otherwise .each activity associates one buffer and one server , and so each column of has exactly one 1 ( and similarly , every column of has exactly one 1 ) .we will further assume that each row of ( and ) has at least one 1 , that is , each buffer is processed by ( server is processing , resp . )at least one activity .for , let , if activity corresponds to the server processing class jobs .let , for , and .thus , for the server , denotes the set of activities that the server can perform , and represents the corresponding buffers from which the jobs can be processed .we are interested in the study of networks that are nearly critically loaded .mathematically , this is modeled by considering a sequence of networks that `` approach heavy traffic , '' as , in the sense of definition [ defn - ht ] below .each network in the sequence has identical structure , except for the rate parameters that may depend on .here , where is a countable set : with and , as .one thinks of the physical network of interest as the network embedded in this sequence , for a fixed large value of . for notational simplicity , throughout the paper , we will write the limit along the sequence as simply as `` . 
''also , will always be taken to be an element of and , thus , hereafter the qualifier will not be stated explicitly .the network is described as follows .if the class ( ) has exogenous job arrivals , the interarrival times of such jobs are given by a sequence of nonnegative random variables that are i.i.d with mean and standard deviation respectively .let , by relabeling if needed , the buffers with exogenous arrivals correspond to , where .we set and , for .service times for the type of activity ( for ) are given by a sequence of nonnegative random variables that are i.i.d .with mean and standard deviation respectively .we will assume that the above random variables are in fact strictly positive , that is , we will further impose the following uniform integrability condition : rerouting of jobs completed by the activity is specified by a sequence of -dimensional vector , where .for each and , if the completed job by activity gets rerouted to buffer , and takes the value zero otherwise , where represents jobs leaving the system .it is assumed that for each fixed , , , are ( mutually ) independent sequences of i.i.d , where .that , in particular , means , for , .furthermore , for fixed , where is if and otherwise .we also assume that , for each , the random variables next we introduce the primitive renewal processes , , that describe the state dynamics .the process is the -dimensional exogenous arrival process , that is , for each , is a renewal process which denotes the number of jobs that have arrived to buffer from outside the system over the interval ] _ if the associated server worked continuously and exclusively on jobs from the associated buffer in ] , .let and ' \in{\mathbb{r}}^{{\mathbf{j}}} ] .we now argue that .note that by construction .thus , we can find such that whenever satisfies .also , since has full row rank ( see corollary 6.2 of ) , we can find a matrix such that .thus , and , consequently , . the result follows .note that the vector constructed in the proof of the lemma above has the property that and .thus , we have shown the following : [ tset ] the set is nonempty .the above result will be used in the construction of a suitable near optimal control policy for the bcp [ see below ( [ 111 ] ) ] .[ rem32 ] we will make use of some results from and that concern a general family of singular control problems with state constraints . we note below some properties of the model studied in the current paper that ensure that the assumptions of and are satisfied : has full row rank .this follows from the observation that and have full row ranks and , therefore , and has a nonempty interior ( see lemma [ intgood ] ) . , for all and for all .this is an immediate consequence of the fact that the entries of and are nonnegative [ see above ( [ mrgk ] ) ] .since has full row rank and , by assumption [ non - deg ] , is positive definite , we have that is positive definite .the above properties along with assumption [ g - column ] , ( [ cond511 ] ) and ( [ cond512 ] ) ensure that assumptions of and are satisfied in our setting .in particular , assumption ( 2.1)(2.2 ) and ( 2.8)(2.10 ) of hold in view of properties ( b ) , ( c ) and ( d ) and equations ( [ cond511 ] ) and ( [ cond512 ] ) .similarly , assumptions ( 1 ) , ( 5 ) and 2.2 of hold in our setting [ from property ( c ) , ( [ cond511 ] ) and assumption [ g - column ] , resp . ] .henceforth , when appealing to results from and , we will not make an explicit reference to these conditions . 
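the primitives above are renewal processes that will later be observed under diffusion scaling . as an illustration ( with an exponential interarrival law and arbitrary rates , which is only one admissible choice ) , the following sketch simulates one arrival process and forms its centred , diffusion - scaled version , which by the functional central limit theorem for renewal processes fluctuates like a brownian motion :

```python
import numpy as np

rng = np.random.default_rng(2)

def renewal_counts(rate, horizon):
    """Counting process of a renewal process on a time grid, with i.i.d.
    exponential interarrival times of the given rate (illustrative law)."""
    arrivals = np.cumsum(rng.exponential(1.0 / rate, size=int(3 * rate * horizon)))
    grid = np.linspace(0.0, horizon, 500)
    return grid, np.searchsorted(arrivals, grid)

lam, r = 1.0, 40.0
grid, counts = renewal_counts(lam, horizon=r**2 * 1.0)   # run the clock up to r^2 * T
t = grid / r**2                                          # scaled time
hat_A = (counts - lam * grid) / r                        # diffusion-scaled, centred process
# plotted against t, hat_A behaves like a Brownian motion in the limit r -> infinity
```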
recall and the map introduced above ( [ 234b ] ) and ( [ abn756 ] ) , respectively .the following is a key step in the construction of a near - optimal control with desirable properties .[ main - bcp ] fix . for each , there exists , given on some system , that is -optimal and has the following properties : where is an adapted process with sample paths in satisfying the following : for some , , with and , a. for and for .b. letting for and .c. there is an i.i.d sequence of uniform ( over ] , , such that the map is continuous , for a.e . in ] , hence , by ( [ stepwise - bcp-1 ] ) , we have , for each , } \biggl|\frac{y_n(t)-y_n(t')}{t - t ' } \biggr|^m\biggr ] < \infty.\ ] ] note that for ] , . since is continuous and , a.s . combining this with ( [ stepwise - bcp-1 ] ) and the estimate , we now have that , for some , satisfies with . also , }\biggl|\frac{y^1(t)-y^1(t')}{t - t ' } \bigg|^m\biggr ] { \doteq}c_1 < \infty.\ ] ] note that given and , define fix large enough so that and set .then , from ( [ step - pf-3 ] ) we have \nonumber\\ & & \qquad= { \mathbb{e}}\bigl[\max_{n=0,\ldots , p_0 - 1}\bigl\{\sup_{t\in[n\theta , ( n+1)\theta ) } \bigl|y^{(1)}(n\theta)-y^{(1)}(t)\bigr|^m\bigr\}\bigr ] \nonumber \\[-8pt ] \\[-8pt ] \nonumber & & \qquad\leq \theta^m { \mathbb{e}}\biggl[\sup_{n=0,1,\ldots , p_0 - 1}\biggl\{\sup_{t , t'\in[n\theta , ( n+1)\theta ) } \frac{|y^{(1)}(t)-y^{(1)}(t')|}{|t - t'|}\biggr\}^m\biggr]\\ & & \qquad\leq \theta^m c_1 < \tilde\varepsilon_0.\nonumber\end{aligned}\ ] ] from ( [ step - pf-4 ] ) we have for , let denote the smallest integer upper bound for . for , let .fix and , with convention , define for where for , denotes .note that for , observing that for , , we have that \leq \bigl(p_0\eta\sqrt{{\mathbf{j}}}\bigr)^m \le\tilde\varepsilon_0 .\label{ab1208}\ ] ] note that if satisfies , then for all and , consequently , for such , . combining this with the fact that has nonnegative entries, we see that . from this observation , along with ( [ step - pf-6 ] ) , we have the process constructed above is constant on and the jumps take value in the lattice , for . also , for . for fixed ,define then there exists such that , for all , \leq c_2\sum_{n=0}^{p_0}{\mathbb{e}}\bigl [ \bigl|\partial y^{(3)}(n)\bigr|^m i_{(|\partial y^{(3)}(n)|>m)}\bigr].\hspace*{-30pt}\ ] ] also , for some , we have from ( [ stepwise - bcp-1 ] ) , ( [ step - pf-2 ] ) , ( [ ab1206 ] ) and ( [ ab1208 ] ) that , for , \le c_3({\mathbb{e}}[|y|^m_{\infty , t } ] + 1 ) < \infty.\ ] ] fix such that the right - hand side of ( [ step - pf-75 ] ) is bounded by . setting , we now have that \leq \tilde\varepsilon_0 . \label{ab1221}\ ] ] also , combining ( [ stepwise - bcp-1 ] ) , ( [ step - pf-2 ] ) , ( [ ab1206 ] ) , ( [ ab1208 ] ) and ( [ ab1221 ] ) , we now have that satisfies ( [ ab102 ] ) as well as ( i ) and ( ii ) of theorem [ main - bcp ] .this completes the proof .[ cont - bcp ] for each there exists an -optimal , which is -adapted , continuous a.s . 
, and satisfies fix and let .applying theorem 2.1(iv ) of , we have that , where the infimum is taken over all -adapted controls .hence , using ( [ mincost - equal ] ) , we conclude that there is an -dimensional -adapted process for which ( [ ewf-1 ] ) holds and from lemma 4.7 of and following the construction of proposition 3.3 of [ cf .( 12 ) and ( 14 ) of that paper ] , we can assume without loss of generality that has continuous sample paths and for all , hence , using properties of the matrix ( see assumption [ g - column ] ) , we have that \\[-8pt ] \eqntext{\mbox{for all } m > 0.}\end{aligned}\ ] ] we will now use a construction given in the proof of theorem 1 of .this construction shows that there is a matrix and a matrix such that letting we have that and we refer the reader to equations ( 35 ) and ( 36 ) of for definitions and constructions of these matrices . from ( [ bd622 ] ) we now have that is an -optimal control , has continuous sample paths a.s . andis -adapted .finally from ( [ cond615 ] ) , we have that for some , by combining ( [ 32half ] ) and ( [ ewf-1 ] ) , the fourth term on the right - hand side can be bounded above by for some .the result then follows on using ( [ cond612 ] ) .the following construction will be used in the proof of theorem [ main - bcp ] .[ inter159 ] fix such that and ( [ ab127 ] ) holds .for , let , , and .then given , there exists such that and ( [ ab127 ] ) holds with replaced by . since , we have that where is the state process corresponding to and .choose large enough so that using the lipschitz property of and ( see assumption [ ab1012 ] ) , we can find such that , for all , \\[-8pt ] \nonumber & & \qquad\le c_1 \sup_{t \le s \le t } t \ge t,\end{aligned}\ ] ] where is the state process corresponding to .thus , for some , where . using ( [ ab127 ] ), we can now choose large enough so that next , letting , we have from ( [ ab302 ] ) that for some , integration by parts now yields that for some , combining the estimates in ( [ lab16 ] ) , ( [ lab17 ] ) and ( [ lab18 ] ) , we now have that for all , finally , the fact that ( [ ab127 ] ) holds with replaced by is an immediate consequence of ( [ ab302 ] ) .we can now complete the proof of theorem [ main - bcp ] .proof of theorem [ main - bcp ] using lemma [ cont - bcp ] , one can find which is -optimal , has continuous paths a.s ., is -adapted and satisfies ( [ ab127 ] ) . using lemma [ inter159 ] ,we can find such that is -optimal and ( [ ab127 ] ) holds with replaced by . 
we will apply theorem [ stepwise - bcp ] , with , replaced with , replaced by and denote the corresponding processes obtained from theorem [ stepwise - bcp ] , once again by and .in particular , is such that , where is -adapted , satisfies ( i ) and ( ii ) of theorem [ main - bcp ] , for some and , and ( [ ab102 ] ) holds with , replaced by and where is as in ( [ ab161 ] ) .then \le\bar l { \mathbb{e}}\bigl[\bigl|y^t-\tilde y_0^{(1)}\bigr|_{\infty , t}\bigr ] \le\frac{\varepsilon}{5c_1}.\ ] ] thus , from lemma [ ab210 ] , is -optimal .processes as in the statement of theorem [ main - bcp ] will be constructed by modifying the processes above ( but denoted once more by the same symbols ) , by constructing successive approximations and .these approximations are only used in the current proof and do not appear elsewhere in the paper .consider such that .let .since as and is -adapted , we have for each fixed , & \rightarrow & { \mathbb{p}}\bigl[\partial \tilde y^{(1)}_0(n ) = \varsigma | \mathcal{g}^{(n)}\bigr ] \nonumber \\[-8pt ] \\[-8pt ] \nonumber & = & 1_{\ { \partial \tilde y^{(1)}_0(n ) = \varsigma\}}\qquad \mbox{a.e . , as } \kappa\downarrow0.\end{aligned}\ ] ] note that for fixed and , = p^{\kappa } _ { n,\varsigma}(\mathcal{x}^{\kappa}(n)),\ ] ] for some measurable map ] , , such that is continuous , at every , for a.e . in ] , then let be an i.i.d sequence of uniform random variables , that is , independent of .now construct such that , and .note that = \hat p^{\gamma}_{n,\varsigma}(\mathcal{x}^{\kappa}(n ) ) , \qquad n \ge1 , \varsigma\in\mathcal{s}_m^{\eta}.\vadjust{\goodbreak}\ ] ] since pointwise , as , we have for every real bounded map on , for every , as .thus , , as .let .then and the above weak convergence and , once more , the lipschitz property of the skorohod map yields that as .recall that is -optimal .we now choose sufficiently small so that is -optimal . by construction , and satisfy all the properties stated in the theorem .the goal of this section is to prove theorem [ jrtoj ] . fix and .let be the -optimal control introduced above theorem [ newmain518 ] .fix , such that , as .section [ subconstruct ] below gives the construction of the sequence of policies , , such that , yielding the proof of theorem [ jrtoj ] .the latter convergence of costs is proved in section [ secconvprf ] . the main ingredient in this proof is theorem [ mainweak ] whose proof is given in section [ weakcgce ] . for the rest of this section as in theorem[ newmain518 ] and parameters that specify shall be fixed .in addition , let and be such that > m(\vartheta_{\mathrm{lip}}+1 ) \quad \mbox{and } \quad r_0 \theta > \rho,\ ] ] where is as in ( [ 110 ] ) . will only specify a for and so henceforth , without loss of generality , we assume . in this section , since will be fixed , the superscript will frequently be suppressed from the notation .the following additional notation will be used . for , define , , , and , where we set .note , .recall introduced in assumption [ ldp ] .fix such that fix such that .define also define as thus , if for some , and equals the left end point of the -subinterval in which falls . 
otherwise , if for some , and .recall the probability space introduced in section [ secsetup ] which supports all the random variables and stochastic processes introduced therein .let be a sequence of uniform random variables on ] and .then is precompact in and any limit point satisfies for all , ; ; }\tilde h(s ) \,ds ] is a measurable map such that } 1_{\{f_i(s ) > 0\ } } \tilde h_i(s ) \,ds = 0 \qquad\mbox{for all } t \ge0.\ ] ] the proof of the above lemma is given in the . recall the definitions of various scaled processes given in ( [ fl - scaled])([w - relation ] ) , and that for .in addition , we define , , , .[ tisadmis ] as , , u.o.c . in probability .from the definition of [ see ( [ abnov8 ] ) ] , and the observation that the interval has length [ see ( [ ab1743 ] ) and ( [ ab8b ] ) ] , we have that over each interval , that is , contained in for some , the lebesgue measure of the time instants such that equals , .this is equivalent to the statement that \\[-8pt ] \eqntext { \mbox{whenever } \bigl[m\delta , ( m+1)\delta\bigr ) \subset \mathcal{i}_2(n ) \mbox { for some } n \le p_0 .}\end{aligned}\ ] ] also , noting that for , and , we have that for some , \\[-8pt ] \eqntext{\mbox { whenever } \bigl[m\delta , ( m+1)\delta\bigr ) \subset \mathcal{i}(n ) \mbox { for some } n \le p_0 .}\end{aligned}\ ] ] fix and such that for some . then using ( [ ab248 ] ) and ( [ ab248a ] ) and the fact that the number of intervals in is bounded by , we see that thus , for some , also , since for all , , we get from ( [ xr - defn ] ) and standard estimates for renewal processes ( see , e.g. , lemma 3.5 of ) that converges to u.o.c . in probability , as .combining this with assumption [ assum - ht ] and ( [ ab1254 ] ) , we have next , define , and \dvtx \bar q^r_{\sigma _ 1(j)}(s)=0 , \mbox{or } \bar q^r_{\sigma_1(j)}(\bar{\mathsf{p}}^r(s))\le \bar\theta^r(s ) \bigr\}\nonumber\\ & = & \bigl\{s \in[0 , t]\dvtx\bar q^r_{\sigma_1(j)}(\bar{\mathsf{p}}^r(s))\le \bar\theta^r(s ) \bigr\ } \nonumber \\[-8pt ] \\[-8pt ] \nonumber & & { } \cup\bigl\{s \in[0 , t]\dvtx \bar q^r_{\sigma_1(j)}(s)=0 , \bar q^r_{\sigma_1(j)}(\bar{\mathsf{p}}^r(s ) ) >\bar\theta^r(s ) \bigr\ } \\ & { \doteq } & \mathcal{s}_j^{r , 1}(t ) \cup\mathcal{s}_j^{r , 2}(t).\nonumber\end{aligned}\ ] ] we will fix for the rest of the proof and suppress from the notation when writing , unless there is scope for confusion . using the above display and ( [ ab543 ] ) , we have that } 1_{\mathcal{s}_j^r}(s ) \,d\bar t^{r,1}_j(s),\qquad j \in\mathbb{j } , t \ge0.\ ] ] using the fact that along with ( [ q - relation ] ) and ( [ ab104 ] ) , we can write where for , } 1_{\mathcal{s}_j^{r,1}}(s)\ , d\bigl(\bar t^{r,1}_j(s ) - x^*_js\bigr ) \nonumber \\[-8pt ] \\[-8pt ] \nonumber & & { } + \int_{[0 , t ] } 1_{\mathcal{s}_j^{r,2}}(s ) \,d\bar t^{r,1}_j(s)\end{aligned}\ ] ] and } 1_{\mathcal{s}_j^{r,1}}(s ) \,ds = ( \operatorname{diag}(x^ * ) c'\bar\ell^r)_j(t),\ ] ] where } 1_{\{\bar q^r_{i}(\bar{\mathsf{p}}^r(s))\le \bar\theta^r(s)\ } } \,ds,\qquad t \ge0 , i \in\mathbb{i}.\ ] ] since [ see ( [ r - defn ] ) and ( [ refmat ] ) ] , we have next , letting , we have from the choice of [ see above ( [ ab411 ] ) ] that for some , where the next to last inequality makes use of assumption [ ldp ] .recalling [ from ( [ lab23 ] ) ] that , we get that we note that the above convergence only requires that . 
the property will , however , be needed in the proof of proposition [ tight424 ] [ see ( [ ab632 ] ) ] .next , using ( [ ab248 ] ) and ( [ ab248a ] ) , we have for some , } 1_{\mathcal{s}_j^{r,1 } } \,d\bigl(\bar t^{r,1}_j(s ) - x^*_js\bigr)\biggr| \le\frac{\varrho_1}{r } \to0 \qquad\mbox{as } r \to\infty,\ ] ] for all .the above inequality follows from the fact that the integral can be written as the sum of integrals over -subintervals : when the subinterval is within some , the integral is zero [ using the definition of in such intervals and ( [ ab248 ] ) ] and when the subinterval is within some [ the number of such intervals is which can be bounded by for some , the integral is bounded by from ( [ ab248a ] ) . now , combining ( [ ab1254 ] ) , ( [ ab1230 ] ) and ( [ ab1240 ] ), we have that , for each , u.o.c . in probability , as . from ( [ ab1249 ] ) , ( [ ab218n ] ) , lemma [ invpr ] and unique solvability of the skorohod problem for , we now have that , u.o.c . in probability , as .thus , converges to as well .the result now follows on noting that .the following proposition gives a key estimate in the proof of theorem [ jrtoj ] .[ tight424 ] for some and , using standard moment estimates for renewal processes ( cf .lemma 3.5 of ) , one can find such that from ( [ 413half ] ) and ( [ ab1249 ] ) , we have where , and .we rewrite the above display as + \hat\zeta^r(t ) + r\hat h^r(t ) + d\hat\ell^r(t ) , \qquad t \ge0.\ ] ] from theorem 5.1 of , for some , \\[-8pt ] \nonumber & & \qquad\le c_2 \biggl(\hat q^r + |\hat\zeta^r|_{\infty , t } + |\hat h^r|_{\infty , t } + { d_1r^{{\bolds{\mathsf{k}}}}}{r}\biggr).\end{aligned}\ ] ] also , from ( [ ab210n ] ) , ( [ ab1254 ] ) and ( [ ab1240 ] ) , for all and , } 1_{\mathcal{s}_j^{r,2}}(s ) \,d\bar t^{r,1}_j(s ) \biggr| \le\varrho+ \varrho_1 . \label { ab612}\ ] ] next , using ( [ ab1230 ] ) , we get , for some , } 1_{\mathcal{s } _j^{r,2}}(s ) \,d\bar t^{r,1}_j(s ) \biggr)^{\upsilon\wedge2 } & \le & r^{\upsilon}c_3(t^3 + 1 ) r^{-\upsilon } \nonumber \\[-8pt ] \\[-8pt ] \nonumber & \le & c_3(t^3 + 1).\end{aligned}\ ] ] finally , for some , for all , \\[-8pt ] \nonumber & & \qquad \le \frac{r^2t}{\delta^r}\biggl(\sum_{i \in\mathbb{i}}{\mathbb{p}}\biggl(a_i^r(\delta ^r ) \ge\frac{ar}{c_4}\biggr ) + \sum_{j \in\mathbb{j}}{\mathbb{p}}\biggl(s_j^r(\delta^r ) \ge\frac{ar}{c_4}\biggr)\biggr).\hspace*{-30pt}\end{aligned}\ ] ] using moment estimates for renewal process once more ( lemma 3.5 of ) , we can find such that thus , there is an and such that , for all , this shows that for some and , the result now follows on using ( [ ab425 ] ) , ( [ ab612 ] ) , ( [ ab632 ] ) and ( [ ab629 ] ) in ( [ ab610n ] ) and observing that . in preparation for the proof of theorem [ jrtoj ] , we introduce the following notation . for , we define processes with paths in and , respectively , as . ] with .then where .similarly , define the process with paths in .recall introduced in ( [ ab201 ] ) .denote and , where we set and .next , for , define processes with paths in and , respectively , as also , define by the first line of the above display by replacing by .then , a.s ., where .similarly , define the process with paths in .also , let for , , where is as above ( [ ab610 ] ) , and .then define and , where and . 
then and .next , let then let note that , and are -valued random variables .the following is the main step in the proof of theorem [ jrtoj ] .[ mainweak ] as , proof of the above theorem is given in the next subsection .using theorem [ mainweak ] , the proof of theorem [ jrtoj ] is now completed as follows .proof of theorem [ jrtoj ] from proposition and integration by parts , & & \hspace*{18pt}{}+ { \mathbb{e}}\int_{[a^r(n)/r^2 , b^r(n)/r^2 ) } e^{-\gamma t } \bigl(h\cdot\hat q^r(t ) + \gamma p\cdot\hatu^r(t)\bigr ) \,dt \biggr ] \\[-2pt ] & = & \sum_{n=0}^{p_0 } { \mathbb{e}}\int_{[b^r(n)/r^2 , a^r(n+1)/r^2 ) } e^{-\gamma t } \bigl(h\cdot\hat q^r(t ) + \gamma p\cdot\hatu^r(t)\bigr ) \,dt + \varepsilon_r,\nonumber\vadjust{\goodbreak}\end{aligned}\ ] ] where , using proposition [ tight424 ] and the observation that , we have that as . from theorem [ mainweak ] , as , .combining this with proposition [ tight424 ] , we get for every , where , by convention , when .next , for , , \\[-8pt ] \nonumber & = & \gamma p \cdot k { \mathbf{\mathpzc{z}}}^{r , n}(t - n\theta ) + \gamma p \cdot k\nu _ 0^{r , n}.\end{aligned}\ ] ] from theorem [ mainweak ] , as , in . combining this with ( [ ab1945 ] ) and proposition [ tight424 ] we now get similarly to ( [ ab753 ] ) , for , note that for , thus , the expression on the right - hand side of the above display equals the result now follows on using this observation along with ( [ ab753 ] ) in ( [ ab804 ] ) . for , , and ,define \dvtx q^r_{\sigma_1(j)}(nr^2\theta+ rs , \omega ) = 0\}.\ ] ] from the definition of , it follows that for some , as a consequence of this observation , we have the following result . the proof is given in section [ prop46 ] .[ lemma45a ] for some ] . for ,let =(\bar\nu ^{r,0},\ldots,\bar\nu^{r , n}) ] and their limiting analogues ,\nu[n ] , \nu_0[n ] , { \mathbf{\mathpzc{q}}}[n ] , { \mathbf{\mathpzc{z}}}[n] ] are ] . in the lemma belowwe will in fact show , recursively in , that \rightarrow\mathcal{j}[n] ] , as .the proof will follow the following two steps : as , \rightarrow\mathcal{j}[0] ] as for , for some .then , as , \rightarrow\mathcal{j}[n+1] ] follows trivially since = \bar \nu[0]=0 ] .next , consider = \hat y^r(\rho / r) ] , in probability , to .in particular , this shows that \\[-8pt ] \nonumber & & \quad \rightarrow\quad(q+\varepsilon_0ry^ * , \varepsilon_0y^ * ) = \bigl({\mathbf{\mathpzc{q}}}^{(0)}(0 ) , \nu_0^{(0)}\bigr).\end{aligned}\ ] ] finally , we prove the convergence of to . we will apply theorem 4.1 of . note that ,\end{aligned}\ ] ] where is a -valued random variable defined as . from ( [ ab1042 ] ) and ( [ ab1122 ] ) next , for , write where for , }\bigl(1 - 1_{\mathcal{s } _d\bigl(x^*_js -\bar t^{r,1}_j(s ) \bigr ) + r \int_{[\rho / r , t]}1_{\mathcal{s}_j^{r,2}}(s ) \,d\bart^{r,1}_j(s),\\ \hat l^{r,0}_j(t ) & = & rx^*_j\int_{[\rho / r , t]}1_{\mathcal{s}_j^{r,1}}(s)\,ds,\end{aligned}\ ] ] with defined in ( [ lab33 ] ) .using calculations similar to those in the proof of proposition [ tisadmis ] [ see ( [ lab71 ] ) ] , we get } 1_{\mathcal{s } _ j^{r,2 } } ( s ) \bart^{r,1}_j(s)\biggr| \to0\qquad \mbox{in probability , as } r \to\infty,\hspace*{-35pt}\ ] ] for all .also , from ( [ ab248 ] ) it follows that } \bigl(1 - 1_{\mathcal{s}_j^{r,1 } } ( s)\bigr)\ , d\bigl(x^*_js- \bar t^{r,1}_j(s)\bigr)\biggr| \le \frac{\delta^r}{r}.\ ] ] combining the above estimates , also , where for and ] .hence , setting for ] , which completes the proof of ( i ) . 
we now prove ( ii ) .we can write & = & ( \mathcal { j}^r[n ] , ( \bar\nu ^{r , n+1 } , \nu^{r , n+1 } , \nu_0^{r , n+1 } , { \mathbf{\mathpzc{q}}}^{r , n+1 } , { \mathbf{\mathpzc{z}}}^{r , n+1}))\\ \mathcal{j}[n+1 ] & = & \bigl(\mathcal{j}^r[n],\bigl(\bar\nu^{(n+1 ) } , \nu^{(n+1 ) } , \nu_0^{(n+1 ) } , { \mathbf{\mathpzc{q}}}^{(n+1 ) } , { \mathbf{\mathpzc{z}}}^{(n+1)}\bigr)\bigr).\end{aligned}\ ] ] by assumption , \rightarrow\mathcal{j}[n]\ ] ] and , thus , in particular , ( [ ab1122 ] ) holds .this shows that and as a consequence , using continuity properties of , \\[-8pt ] \nonumber & = & \bar\nu ^{(n+1)}.\end{aligned}\ ] ] in fact , this shows the joint convergence : , \bar \nu ^{r , n+1 } ) \rightarrow(\mathcal{j}[n ] , \bar\nu^{n+1}) ] , processes are defined similarly to . then , equations ( [ abnew558 ] ) and ( [ abnew558b ] ) are satisfied with these new definitions . hence ,using arguments similar to the ones used in the proof of ( [ ab1042 ] ) ( in particular , making use of proposition [ lemma45a ] ) , we have that converges in distribution to as . combining the above observations, we have , as , \\[-8pt ] \nonumber & & \quad\rightarrow\quad\bigl(\tilde q\bigl((n+1)\theta\bigr ) , \nu_0^{(n)}+ { \mathbf{\mathpzc{z}}}^{(n)}(\theta)+\nu^{(n+1)}\bigr)=\bigl({\mathbf{\mathpzc{q}}}^{(n+1)}(0 ) , \nu_0^{(n+1)}\bigr).\nonumber\hspace*{-30pt}\end{aligned}\ ] ] finally , we consider weak convergence of to .similar to ( [ lab99 ] ) , we have ,\ ] ] where is a -valued random variable defined as using ( [ ab604 ] ) and ( [ ab1122 ] ) , as , weak convergence of to now follows exactly as below ( [ ab430 ] ) . combining the above weak convergence properties , we now have \rightarrow\mathcal{j}[n+1] ] are measurable maps such that for all , .then there is a sequence of -valued random variables defined on an augmentation of such that where . by suitably augmenting the space, we can assume that the probability space supports an i.i.d .sequence of uniform ] be measurable maps , such that for all : , , . , , , .the result follows on defining from ( [ ab440 ] ) we have that for some , thus , for some , next , for and , using assumption [ ab1012 ] and ( [ novab1 ] ) , we now have that for all , this shows that , for some , next , for some , } e^{-\gamma t } p \cdot d \tilde u^1(t ) - \int _ { [ 0,t ] } e^{-\gamma t } p \cdot d \tilde u^2(t ) \biggr|\nonumber\\[-2pt ] & & \qquad\le |p|\biggl [ |\tilde u^1(0 ) - \tilde u^2(0)| + e^{-\gamma t } |\tilde u^1(t ) -\tilde u^2(t)| \nonumber \\[-10pt ] \\[-10pt ] \nonumber & & \hspace*{78pt}\quad\qquad{}+ \gamma\int_{[0,t ] } |\tilde u^1(t ) - \tilde u^2(t)| \,dt \biggr ] \nonumber \\[-2pt ] & & \qquad\le c_4 |\tilde y^1 - \tilde y^2|_{\infty , t}.\nonumber\end{aligned}\ ] ] next , note that for , , } e^{-\gamma t } p \cdot d \tilde u^i(t ) \nonumber \\[-8pt ] \\[-8pt ] \nonumber & & \qquad = \gamma\int_t^s e^{-\gamma t } p \cdot [ \tilde u^i(t ) - \tilde u^i(t ) ] \,dt + e^{-\gamma s } p \cdot [ \tilde u^i(s ) - \tilde u^i(t)].\end{aligned}\ ] ] also , using assumption [ ab1012 ] and ( [ ab934 ] ) , for some , which shows that , for , | \to0\qquad \mbox{as } s \to\infty.\ ] ] combining this observation with ( [ novab5 ] ) , we now have , on sending , that for , \,dt.\ ] ] thus , for some , - [ \tilde u^2(t ) - \tilde u^2(t)]| \,dt \\ & & \qquad \le c_6 \tilde{\mathbb{e}}\int_{(t,\infty ) } e^{-\gamma t } equality follows on using assumption [ ab1012 ] and ( [ novab25 ] ) . 
combining this with ( [ novab1 ] ), we now have that the result now follows on combining the above estimate with ( [ novab2 ] ) , ( [ novab4 ] ) and ( [ novab45 ] ) .it is immediate from the construction that satisfies ( i)(iii ) of definition [ t - adm - defn ] .we now verify that , with , the proof of ( [ admis1232 ] ) is similar to that of theorem 5.4 in , which shows that if a policy satisfies certain natural conditions ( see assumptions 5.1 , 5.2 , 5.3 therein ) , then it is admissible ( in the sense of definition [ t - adm - defn ] of the current paper ) .the policy constructed in section [ subconstruct ] does not exactly satisfy conditions in section 5 of , but it has similar properties .since most of the arguments are similar to , we only provide a sketch , emphasizing only the changes that are needed . for the convenience of the reader , we use similar notation as in .also , we suppress the superscript from the notation .recall from ( [ ab543 ] ) that } 1_{\ { q_{\sigma_1(j)}(u ) > 0\ } } 1_{\{q_{\sigma_1(j)}({\mathsf{p}}(u ) ) > \theta(u)\ } } { \dot { t}^{(1)}_j}(u ) \,du,\\ \eqntext { j \in\mathbb{j } , t \ge0.}\end{aligned}\ ] ] in particular , assumption 5.1 of is satisfied . in view of ( [ stpos1029 ] ) , the integrand above has countably many points ( a.s . ) where the value of changes from 0 to 1 ( or vice versa ) .denote these points by .set .we refer to these points as the `` break - points '' of .break - points are boundaries of the intervals of the form or those of subintervals of length or for some [ see ( [ abn532b ] ) , see also ( [ abn532a ] ) for or that are used to define the policy .next , define as the countable set of ( random ) `` event - points '' as defined in ( denoted there as ) .these are the points where either an arrival of a job or service completion of a job takes place anywhere in the network . combining the event - points and the break - points , we get the set of `` change - points '' of the policy denoted by : we will assume that the sequence [ resp . , is indexed such that [ resp . , is a strictly increasing sequence in . as noted earlier , uses the notation , instead of , for event points .we have made this change of notation since here plays an identical role as that of event - points in the proof of .in particular , it is easily seen that assumption 5.2 of holds with this new definition of . we will next verify assumption 5.3 ( a nonanticipativity condition ) of in lemma [ appen - lemma ] below . for and ,let .thus , is the residual ( exogenous ) arrival time at the buffer at time , unless an arrival of the class occurred at time , in which case it equals .similarly , for , , define .next , write , where is right continuous . for ,set , and for , .also , for , and , let .let .finally , define for , the definition of above is similar to that in , with the exception of the sequence .this enlargement of the collection is needed due to the randomization step , involving the sequence , in the construction of the policy [ see ( [ ab201 ] ) ]. in , part ( iv ) of the admissibility requirement ( for the smaller class of policies considered there ) was in fact shown with respect to a smaller filtration , namely , . 
here , using the above enlargement, we will show that part ( iv ) holds ( for the policy in section [ subconstruct ] ) with .in lemma [ appen - lemma ] below , we prove that is a measurable function of , for all .this shows that satisfies assumptions 5.15.3 of with the modified definition of and .now part ( iv ) of the admissibility requirement [ i.e. , ( [ admis1232 ] ) ] follows exactly as the proof of theorem 5.4 of .this completes the proof of the proposition .[ appen - lemma] is a measurable function of , for all .let for , denote the length of the break - point interval .define and for .hence , denotes the number of break - points that preceded the change - point , and is the `` last '' break - point before the change - point ( note that ) for all .also , define for , as the `` residual '' time for the next break - point after .in particular , implies that itself is a break - point . by definition of [ see ( [ ab201 ] ) ] , it follows that , , , and , hence , are all measurable functions of for . summarizing this , we get using notation from , let be the set of all activities that are associated with the buffer and , for , be as defined by equation ( 5.2 ) of . then denotes all activities in that are active at time , under . clearly , for , for , let be the indicator function of the event that at the change - point an arrival or service completion occurs at buffer .more precisely , for and , from ( [ ch - pt-1 ] ) and ( [ chdef538 ] ) , it follows that using ( [ ab201 ] ) and the construction below it , along with ( [ new545 ] ) , it is easily checked that is a measurable function of .next , for , from ( [ new545 ] ) and ( [ chdef538 ] ) , and are measurable , thus , so is the first indicator in the above display . also ,since is either or on whether is in or not and both and are measurable , we see that the second indicator in ( [ abn617 ] ) is measurable as well .the lemma follows on combining the above observations .since is equicontinuous , pre - compactness of is immediate .suppose now that converges ( in ) , along some subsequence , to .then for and .also , for suitable measurable maps ] for such .since is arbitrary , we get } 1_{\{f_i(s)=0\ } } \tilde h_i(s ) \,ds = h_i(t).\ ] ] the result follows .we thank an anonymous referee for pointing us to the paper .
scheduling control problems for a family of unitary networks under heavy traffic with general interarrival and service times , probabilistic routing and an infinite horizon discounted linear holding cost are studied . diffusion control problems , which have been proposed as approximate models for the study of these critically loaded controlled stochastic networks , can be regarded as formal scaling limits of such stochastic systems . however , to date , a rigorous limit theory that justifies the use of such approximations for a general family of controlled networks has been lacking . it is shown that , under broad conditions , the value function of the suitably scaled network control problem converges to that of the associated diffusion control problem . this scaling limit result , in addition to giving a precise mathematical basis for the above approximation approach , suggests a general strategy for constructing near - optimal controls for the physical stochastic networks by solving the associated diffusion control problem .
distributed control of multi - agent systems has been receiving a great deal of attention in the recent literature due to its broad applications in a number of areas ; e.g. , see - .the leader - follower tracking problem represents a particular class of distributed control problems which is concerned with the design of control protocols for each agent based only on the information from the nearest neighbours , with the aim to guarantee that states of all followers converge to that of a dynamic leader ; e.g , see .this paper considers the leader - following tracking problem for a class of discrete - time linear multi - agent systems with a high - dimensional leader and undirected communications between followers .closely related work includes .these references exemplify a common trend in the existing literature , which has a significant focus on consensus of discrete - time multi - agent systems with first or second - order integrator dynamics .in contrast , consensus tracking for discrete - time multi - agent systems with identical linear higher - order node dynamics was studied in .the work in covers the results on consensus tracking for multi - agent systems consisting of first- and second - order integrator dynamics as special cases , respectively .also , in the majority of the existing papers on this topic , including some of the previously mentioned references , the leader and all the followers are assumed to have identical dynamics models .this assumption allows one to directly analyze dynamics of the tracking error arising in the corresponding multi - agent networks consisting of closed - loop agent systems . on the contrary , the case where dynamics of the leader and those of the followers have different models ( e.g. ,are described by state - space equations of different order ) has not received as much attention . in this paper, we focus on the case where dynamics of the leader are more complex than those of the followers .therefore , the existing theoretical approaches for analyzing leader - following tracking problems which have been developed for networks of identical agents can not be directly applied in this case .furthermore , the state of the leader which evolves independently of the followers is not directly measurable by all of the followers .it is assumed that only a partial information about the state of the leader can be sensed by a small group of followers which are subject to uncertainty .thus , the multi - agent system under consideration is more general and contains some other commonly studied classes of leader - following multi - agent systems such as , e.g. , the multi - agent systems in , as special cases .the control goal here is to design a tracking protocol for each agent such that the closed - loop system of agents achieves a desired level of leader - following tracking performance .note that the proposed problem of distributed leader - following tracking for multi - agent systems with a high - dimensional leader is meaningful in a number of practical applications such as the design of distributed sensor networks where dynamics of the leader are more complex than those of the followers .the fact that the followers can only sense partial information about the state of the leader prompts us to introduce a dynamic protocol where a local controller together with a neighbor - based state observer is designed for each follower . 
in the present framework ,the state observer embedded in the followers performs the task of estimating the unmeasurable states of the leader in a distributed way . under the assumptions that the leader is detectable and the communication topology containsa directed spanning tree , we propose a procedure for the design of a tracking protocol which involves a solution to a modified algebraic riccati equation .the analysis of distributed leader - following tracking performance of this protocol when applied to a multi - agent system with a high - dimensional leader is then presented . the protocol design to achieve a pre - specified level of leader - following tracking performance for a system of agents whose dimension are different form that of the leader is the main contribution of this paper .we note that dynamic protocols have been considered in a number of recent papers .for example , a dynamic protocol for synchronization of multiagent systems has been proposed recently in .similar to this paper , the analysis in is based on the reduction of the problem to the analysis of decoupled systems .however , in contrast to our paper , robust performance issues are not considered in .it is also worth noting that there is another possible way to solve tacking problems under partial information about the leader , by using the distributed output regulation theory and internal model principle .the remainder of this paper is organized as follows . in sectionii , some preliminaries from the graph theory and the problem formulation are given .in section iii , the main results are presented . a numerical example and simulations to illustrate our theoretical analysis are provided in section iv .section v concludes the paper . [ [ notation ] ] notation + + + + + + + + let and be the sets of real matrices and complex matrices , respectively .let and be , respectively , the sets of natural numbers and the -dimensional real square summable functions . if not explicitly stated , all matrices are assumed to have compatible dimensions .the superscripts and denote the transpose and the hermitian adjoint of a matrix , respectively .a matrix is a unitary matrix if . represents a diagonal matrix with , on its diagonal .the notation denotes the vector whose elements are equal to .let and be the zero and identity matrices , respectively .a square matrix is said to be schur stable if the magnitude of all of its eigenvalues is less than .the symbols and denote , respectively , the kronecker product and the euclidian norm .let be a directed graph with a set of nodes , a set of directed edges , and a weighted adjacency matrix {n\times n} ] is called a row - stochastic matrix associated with the graph , if the following properties hold : ; if , and otherwise ; and , for all .[ lemmarowsto ] for any row - stochastic matrix associated with the graph , 1 is an eigenvalue of , and all other eigenvalues of lie in the open unit disk .furthermore , 1 is a simple eigenvalue of if and only if the graph contains a directed spanning tree .consider a group of agents indexed by . 
without loss of generality, it is assumed that the agent labeled is the leader , whose dynamics are governed by the following equations the vector represents the state of the leader at time instant , and are two positive integers .this vector is assumed to be partitioned as , where , .accordingly , the state matrix ] has the property that for all , .note that assumption [ assumptionundirected ] indicates that the subgraph describing the communication topology between the followers is undirected .however , this subgraph is not required to be connected in the present framework .[ assumptionspanningtree ] the communication topology graph contains a directed spanning tree with the leader node being its root .note that assumption [ assumptionspanningtree ] is not restrictive . for example , it holds when the subgraph describing the communication topology between the followers is connected , and also at least one follower senses the output of the leader .more generally , when the communication topology between the followers consists of separate connected components , assumption [ assumptionspanningtree ] will be satisfied if each component of the graph includes a node which directly senses the output of the leader .the control problem in this paper is to design a distributed protocol , , to enable the closed - loop multi - agent system ( [ followerdynamics ] ) , equipped with this protocol , to achieve a prescribed level of leader - following tracking performance .the mathematical definition of the leader - following tracking performance index will be given later .to guarantee the leader - following consensus tracking performance , the following observer - based dynamic distributed tracking protocol is proposed for each follower , .the protocol consists of two parts : * the neighbor - based local controller : where , , , are defined in ( [ partition ] ) and is given in ( [ relativeinformation ] ) . *distributed state estimator : the gain matrix is the design parameter of the protocol which will be defined later . combining equations ( [ followerdynamics ] ) , ( [ localcontroller ] ) and ( [ distributedestimator ] ) yields the closed - loop system describing dynamics of each follower governed by the proposed protocol : where is the state of the closed - loop system , , and .then , it is easy to see that the difference between the state of the leader and the state of the -th closed - loop system , , satisfies the following equation . to characterize performance of the proposed tracking protocol ,define the performance variable where where is a given performance output matrix , . 
using the expression for given in ( [ relativeinformation ] ), we have \!\right\}\rho(k ) \\ & \quad \quad \quad \quad \;\;+ ( i_{n-1}\otimes \hat{b}_{\omega})\omega(k ) , \\ & e(k+1)=\left(i_{n-1}\otimes c\right)\rho(k+1 ) , \end{aligned } \right.\ ] ] where denotes the matrix defined as {(n-1)\times ( n-1)}$ ] with , , and , and are the constants defined in ( [ relativeinformation ] ) , where is the matrix from equation ( [ outputinformation ] ) .denote by the transfer function matrix of the system ( [ errordynamics-2 ] ) from disturbance input to the performance output .we are now in a position to formulate the leader - following tracking problem under consideration in this paper .[ definition1 ] the multi - agent system consisting of the leader ( [ leaderdynamics ] ) and the followers ( [ followerdynamics ] ) and equipped with the protocol ( [ localcontroller ] ) is said to solve the distributed leader - following tracking problem with performance index , if the following two conditions hold : 1 .the multi - agent system described by ( [ leaderdynamics ] ) and ( [ followerdynamics ] ) with , , achieves consensus in the sense of , where is the state of the closed loop system defined in ( [ closed - loop - state ] ) , .2 . the norm of satisfies the following condition : .in this section , the main theoretical results are presented .since the leader has no neighbors , the matrix associated with communication topology , has the following structure and is a row - stochastic matrix ; is the constant defined in ( [ relativeinformation ] ) . by assumption[ assumptionundirected ] , the block in ( [ row - stochasticd ] ) is symmetric .let , , be the eigenvalues of .it then follows from lemma [ lemmarowsto ] and assumption [ assumptionspanningtree ] that , for all .[ maintheorem1 ] suppose the communication graph satisfies assumptions [ assumptionundirected ] and [ assumptionspanningtree ] .then , for a given , the distributed leader - following tracking problem stated in definition [ definition1 ] admits a solution if and only if the following systems are simultaneously internally stable and have the norm less than : where .[ remarkbelowtheorem1 ] theorem [ maintheorem1 ] shows that the distributed tracking problem for the networked agent system ( [ errordynamics-2 ] ) can been converted into a collection of control problems for a group of uncoupled systems ( [ maintheorem - eq1 ] ) , each having the same dimension .thus , the complexity of the design reduces significantly . 
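the decoupling in theorem [ maintheorem1 ] is driven entirely by the eigenvalues of the follower block of the row - stochastic matrix associated with the communication graph. the short python sketch below checks lemma [ lemmarowsto ] numerically for a hypothetical five - agent topology; both the adjacency matrix and the simple degree - based normalisation used to build the row - stochastic matrix are illustrative assumptions, not the construction defined above.

```python
import numpy as np

# hypothetical topology: node 0 is the leader, nodes 1-4 are followers;
# a_ij = 1 if agent i receives information from agent j.  follower 1 senses
# the leader and the follower subgraph is an undirected chain, so the graph
# contains a directed spanning tree rooted at the leader.
A = np.array([[0, 0, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# illustrative row-stochastic matrix associated with the graph
# (each row normalised by 1 + in-degree; the construction defined in the
# text may weight the entries differently).
n = A.shape[0]
deg = A.sum(axis=1)
D = A / (1.0 + deg)[:, None]
D[np.arange(n), np.arange(n)] = 1.0 - D.sum(axis=1)   # rows sum to one

# lemma: 1 is a simple eigenvalue and all other eigenvalues lie in the
# open unit disk whenever the graph contains a directed spanning tree.
print("eigenvalues of D :", np.round(np.linalg.eigvals(D), 4))

# the follower block supplies the eigenvalues entering the decoupled
# subsystems of theorem 1; they all have modulus strictly less than one.
lam = np.linalg.eigvals(D[1:, 1:])
print("follower-block eigenvalues :", np.round(lam, 4))
print("all |lambda_i| < 1 :", bool(np.all(np.abs(lam) < 1.0)))
```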
note also that the effect of topology on the distributed leader - following tracking performance is characterized by the eigenvalues of the .although theorem [ maintheorem1 ] gives necessary and sufficient conditions for the distributed leader - following tracking problem to admit a solution , it does not explain how the feedback gain matrix should be selected in order to obtain such a solution .the following theorem shows that this issue can be addressed using tools from the control theory , based on the result of theorem [ maintheorem1 ] .[ maintheorem2 ] suppose that assumptions [ assumptionundirected ] and [ assumptionspanningtree ] hold , and let .given a constant , suppose there exist real matrices , and a positive scalar such that and then the protocol ( [ localcontroller ] ) augmented with the distributed state estimator ( [ distributedestimator ] ) , with the feedback gain matrix , defined as , solves the leader - following tracking problem for the multi - agent system described by ( [ leaderdynamics ] ) and ( [ followerdynamics ] ) , with a disturbance attenuation level .theorem [ maintheorem2 ] provides sufficient conditions for solvability of the distributed leader - following tracking problem for the multi - agent system described by ( [ leaderdynamics ] ) and ( [ followerdynamics ] ) .it is not hard to see that a necessary condition for this tracking problem to have a solution is that the matrix pair must be detectable .note that performance of the proposed tracking protocol is determined by the matrix in ( [ errordynamics-2 ] ) which defines the performance variable . in the special case where the performance variable of interest is the tracking error , the conditions of theorem [ maintheorem2 ] are simplified by letting .consider a leader whose dynamics are described by equation ( [ leaderdynamics ] ) , with furthermore , let , , and .then the corresponding matrices and are , .clearly , the pair is detectable . to illustrate theorem [ maintheorem2 ] ,let us consider a network of agents of the form ( [ followerdynamics ] ) connected over the communication graph shown in fig .the adjacency matrix of this graph is . ]it is easy to check that assumption [ assumptionspanningtree ] holds .thus , the neighbor - based protocol consisting of the local controller of the form ( [ localcontroller ] ) and the distributed state estimator of the form ( [ distributedestimator ] ) can be designed by solving the conditions in theorem [ maintheorem2 ] . to design the protocol in this example ,the parameter in ( [ relativeinformation ] ) is set to be equal .calculations show that the eigenvalues of defined in ( [ row - stochasticd ] ) are , , , . then , .also , the output matrix and performance level were chosen in this example . solving the linear matrix inequality ( [ maintheorem2firstlmi ] ) with that and . 
to illustrate properties of this protocol , we simulated the closed loop system without disturbances and also with disturbance inputs of the form , where , , and to illustrate asymptotic convergence of the tracking agents in the absence of disturbances , the corresponding state trajectories of the closed - loop multi - agent system ( [ followerdynamics ] ) with a high - dimensional leader ( [ leaderdynamics ] ) , are shown in figs .[ figure2][figure4 ] .let be the square of the norm of the consensus tracking error vector for the multi - agent system , where , .[ figure5 ] indicates that the proposed distributed dynamic tracking protocol indeed ensures consensus tracking in the absence of disturbances , i.e. , when , .next , the consensus tracking under disturbances is considered .under zero initial conditions , the ` energy trajectories ' and were computed as functions of the evolution time and were plotted in fig .[ figure9 ] .it can be seen from fig .[ figure9 ] that the proposed distributed dynamic tracking protocol indeed ensures the set level of disturbance attenuation .the distributed leader - following tracking problem for a class of discrete - time multi - agent systems with a high - dimensional active leader has been investigated in this paper . in the presented framework ,the outputs of the leader are only sensed by some informed followers .a new kind of dynamic tracking protocol consisting of a local controller and a distributed state estimator has been constructed and employed to solve such a coordination problem . using tools from the control theory , it has been proved that distributed leader - following tracking can be ensured if the underlying topology graph contains a directed spanning tree with the leader being its root while the communication topology among the followers is undirected .future work will focus on solving the distributed tracking problem for multi - agent systems with a high - dimensional leader and time - varying topologies as well as leader - following tracking for multi - agent systems with nonlinear dynamics .g. wen would like to thank prof .yiguang hong for the inspiring discussions and helpful suggestions .l. ballard , y. cao , and w. ren , `` distributed discrete - time coupled harmonic oscillators with application to synchronized motion coordination , '' _ iet control theory & applications _ , vol .5 , pp . 806816 , 2010 .z. li , z. duan , and g. chen , `` consensus of discrete - time linear multi - agent systems with observer - type protocols , '' _ discrete and continuous dynamical systems - series b _ , vol .2 , pp . 489505 , 2011 .z , zhou , h. fang , and y. hong , `` distributed estimation for time - varying target in noisy environment , '' _ proceedings of the 10th world congress on intelligent control and automation _ , beijing , china , pp . 43414346 , 2012 .c. de souza and l. xie , `` on the discrete - time bounded real lemma with application in the characterization of static state feedback controllers , '' _ systems & control letters _ , vol . 18 , no .1 , pp . 61 - 71 , 1992 .
this paper considers the distributed leader - following tracking problem for a class of discrete - time multi - agent systems with a high - dimensional dynamic leader . it is assumed that output information about the leader is only available to designated followers , and the dynamics of the followers are subject to perturbations . to achieve distributed leader - following tracking , a new class of control protocols is proposed which is based on the feedback from the nearest neighbors as well as a distributed state estimator . under the assumptions that dynamics of the leader are detectable and the communication topology contains a directed spanning tree , sufficient conditions are obtained that enable all followers to track the leader while achieving a desired leader - following tracking performance . numerical simulations illustrate the effectiveness of the theoretical analysis .
public opinion expressed in results of elections and opinion polls have been studied widely using traditional statistics .an alternative approach is information theory , which can be applied to probabilistic data .electoral data can be easily transformed from percentages to probabilities .thus the use of information theory to investigate public opinion about political parties and the conduct of governments and its opposition is obvious , but has not been carried out so far .such an application , the only one to our knowledge , is an analytical approach to interpret the public s high job approval rating for president clinton .this rating has been high and nearly constant between january 1998 and february 1999 , despite the well known unfavorable conditions for the us president in that period .such a high rating could be explained partially , but is still considered unusual for several reasons . the political situation in greece in the recent three years is completely different than the united states of the years 1998 - 1999 .however , an interesting ( parallel ) question arises : greek prime minister karamanlis enjoyed a high job approval ( 2004 - 2007 ) , although his party new democracy approval by the public was just higher than the opposition party pasok , headed by george papandreou .we note that we used statistical data from a specific greek opinion polls company ( metron analysis ) in the period 2004 - 2007 , stopping just three months before the latest parliament elections in greece ( september 2007 ) .it is of interest to try to clarify the above striking fact by extending the usual statistical treatment to shannon s information theory .information theory was used for the first time in telecommunications in the late 40 s .our aim is to investigate the possibility to extract some general , qualitative conclusions from typical opinion polls in greece , employing the tools of information and complexity theories . as we mentioned above, our inspiration comes from a similar study in the united states .although the political systems and the conditions in the usa and greece are very different , our work leads to the same mathematical model .it is seen that information - theoretic methods can be used to extend the results of usual statistics , which illuminate certain statistical data of public opinion .information theory can proceed further towards an interpretation , in some sense , of statistical processes .the use of the logarithm in the definition of information entropy smooths small differences in statistical data from various companies and yields the same qualitative conclusions .this illustrates the strength of information theory to give quantitative ( numerical ) answers to qualitative questions .specifically information entropy , corresponding to a probability distribution of events occuring with probabilities , respectivelly , can be defined as is an information theoretic quantity which takes into account all the moments of a probability distribution and can be considered , in a sense , superior to traditional statistics employing the well - known quantities of average value and variance . in relation ( [ eq : eq1 ] ) is measured in bits ( if the base of logarithm is 2 ) , nats natural units of information ( if the base is e ) and hartleys ( if the base is 10 ) . in the present paperthe base is 10 , for the sake of comparison with .however , one case can be transformed to the other one , by multiplying with just a constant . 
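as a concrete illustration of the shannon form in eq. ( [ eq : eq1 ] ), namely the sum of terms -p_i log p_i, the sketch below evaluates the entropy of a small distribution in hartleys, bits and nats; the three - outcome probabilities are hypothetical and are only meant to mimic a poll - type distribution.

```python
import math

def shannon_entropy(probs, base=10.0):
    """information entropy -sum_i p_i log(p_i); hartleys for base 10,
    bits for base 2, nats for base e."""
    total = sum(probs)
    probs = [p / total for p in probs]            # normalise first
    return -sum(p * math.log(p, base) for p in probs if p > 0.0)

# a hypothetical three-answer poll distribution (yes / no / something else)
p = [0.44, 0.41, 0.15]
print("hartleys :", round(shannon_entropy(p, 10.0), 3))
print("bits     :", round(shannon_entropy(p, 2.0), 3))
print("nats     :", round(shannon_entropy(p, math.e), 3))

# the uniform (equiprobable) distribution maximises the entropy:
# for n = 3 outcomes, s_max = log10(3), about 0.477 hartleys
print("uniform  :", round(shannon_entropy([1, 1, 1], 10.0), 3))
```

changing the base of the logarithm only rescales the value by a constant factor, in line with the remark above.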
definition ( [ eq : eq1 ] ) represents the average information content of an event , which occurs with a specific probability distribution .the use of the logarithm is justified because in such a way obeys certain mathematical and intuitive properties expected from a quantity related to information content of a probability function .specifically , is positive and the joint information content of two simultaneous independent events translate to the addition of the corresponding information measures of each event e.t.c . for more properties and a pedagogical description see . is maximum for an equiprobable or uniform probability distribution , i.e. . is minimum when one of the s is 1 ( ) and all the other s are 0 , i.e. , under the convention that . in this case ,one of the outcomes is certain , while all the other ones are impossible to occur . represents a measure of information content of a probabilistic event , i.e. the average number of `` yes '' or `` no '' questions needed to specify the event ( in the case of bits ) . is reciprocal to the degree of surprise of an event , i.e. the least probable event has the most information and vice versa .we give a simple example in order to understand the meaning of relation ( [ eq : eq1 ] ) .let us ask to a certain number of people the following question : _ is c. karamanlis suitable for the position of prime minister of greece ?_ we receive answers with percentages and corresponding probabilities ( yes ) , ( no ) , ( something else ) . a direct application of ( [ eq : eq1 ] ) for the normalized probability distribution , where ( ) gives the information content in hartleys of that set of probabilities . in the case of a uniform ( equiprobable )distribution i.e. , relation ( [ eq : eq1 ] ) gives .this is the maximum information entropy with uniform probability distribution ( ) .this can be interpreted as a distribution of complete ignorance ( unbiased ) in the sense that a specific answer does not contain more information than any other one .a case of maximum entropy corresponds to a minimum amount of information about our question .thus information , is reciprocal with the above convention agrees with our intuition , i.e. the information content of an event corresponding to a probability distribution can be quantified by the magnitude of our surprise after the event has occurred or how unpredictable is the outcome . the case of equiprobable distribution for , i.e. occurred in the recent general parliament elections in italy ( april 2006 ) .there were two large coalition of parties and one of the coalitions won with a slight difference in votes , about 40,000 -while the number of votes was about 40,000,000 .thus , with real results versus , we can consider with a very satisfactory approximation that .the application of information theory in this case gives bit ( base of the logarithm equals 2 ) .this fact is completely equivalent with throwing a fair coin ( equal probability for the two results heads - tails ) or with the question yes - no ( equiprobable ) which coalition will win .that means that gives and the minimum information can be interpreted as a complete homogenization of the public opinion about the two coalitions . in other words ,the results of elections in italy correspond to the random throw of a fair coin i.e. 
a complete lack of knowledge of the voters .our observation does not intend to depreciate the process of elections , the culmination of democracy , but it is an extreme case with maximum possible information entropy .there are other measures of information such as onicescu s information energy and fisher s information .shannon s information is a global measure , while fisher s is a local one i.e. does not depend on the ordering of the probabilities , while does depend , due to the existence of the derivative of the distribution in its definition .their definitions are given below together with appropriate comments .it is stressed that all are based on the same probability distributions as .landsberg s definition of disorder is and order disorder is a normalized disorder ( ) . ( zero disorder , ) corresponds to complete order and ( complete disorder , ) corresponds to zero order . enable us to study the organization of data , described probabilistically .the next important step is the _ statistical complexity _ defined by shiner - davison - landsberg ( sdl ) , where is the strength of disorder and is the strength of order . in the present work we consider the simple case and .another measure of _ complexity _ is according to lopez ruiz - mancini - calbet ( lmc ) . here is the so - called _ disequilibrium _ ( or distance from equilibrium ) defined as sdl complexity describes correctly the two extreme cases of complete order and complete disorder , where we expect intuitively zero complexity or organization of the data .an example taken from the physical world is illuminating .a perfect crystal ( complete order ) has and the same holds for a gas ( complete disorder ) where as well .thus ( perfect ) crystals and gases are not interesting , lacking complexity or organization .this is given by and agrees with intuition .instead , for the information entropy we have for crystals and for gases , which is not satisfactory . thus extending from physics , , , and us to study quantitatively the ( organized ) complexity of probabilistic data of opinion polls and elections .an other very important information measure is fisher information .recently , there is a revival of interest for fisher information , culminating in two books and , defined as for a continuous probability distribution , which is modified accordingly in the present work for discrete probability distributions .specifically , for a discrete probability distribution employed in the present work , relation ( [ eq : eq8 ] ) becomes thus the treatment of high job approval of clinton in , will be repeated for the case of the greek prime minister constantinos karamanlis and the greek political scene in the recent three years ( 2004 - 2007 ) and extended in the present paper using new quantities e.g. , , , and as functions of time .we used statistical data for the public opinion coming from the greek opinion polls company _ _ metron analysis__. specifically , we focused our interest on the following three questions , presented in table [ tab : tab1 ] .
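the order and disorder measures, the sdl and lmc complexities and the discrete fisher information introduced above can all be evaluated directly from the same probabilities. in the sketch below the lmc complexity is taken as the product of the entropy and the disequilibrium, with the disequilibrium computed as the sum of ( p_i - 1/n )^2, and the fisher information is discretised as the sum of ( p_{i+1} - p_i )^2 / p_i; both forms are common in the literature and are used here as illustrative stand - ins for the definitions referenced above.

```python
import math

def entropy(p, base=10.0):
    return -sum(q * math.log(q, base) for q in p if q > 0.0)

def opinion_measures(p, base=10.0):
    """disorder, order, sdl and lmc complexities and a discretised fisher
    information for a normalised discrete distribution p (illustrative forms)."""
    n = len(p)
    s = entropy(p, base)
    s_max = math.log(n, base)
    disorder = s / s_max                     # landsberg disorder, between 0 and 1
    order = 1.0 - disorder
    c_sdl = disorder * order                 # sdl complexity with both exponents 1
    diseq = sum((q - 1.0 / n) ** 2 for q in p)
    c_lmc = s * diseq                        # assumed lmc-type product form
    fisher = sum((p[i + 1] - p[i]) ** 2 / p[i]
                 for i in range(n - 1) if p[i] > 0.0)
    return dict(S=s, disorder=disorder, order=order,
                C_sdl=c_sdl, C_lmc=c_lmc, I_F=fisher)

# both extreme cases carry zero sdl and lmc complexity, as argued above
for label, p in [("hypothetical poll", [0.44, 0.41, 0.15]),
                 ("complete disorder", [1 / 3, 1 / 3, 1 / 3]),
                 ("complete order", [1.0, 0.0, 0.0])]:
    print(label, {k: round(v, 4) for k, v in opinion_measures(p).items()})
```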
a general methodology to study public opinion , inspired by information and complexity theories , is outlined . it is based on probabilistic data extracted from opinion polls . it gives a quantitative information - theoretic explanation of the high job approval of greek prime minister mr . constantinos karamanlis ( 2004 - 2007 ) , while the same time series of polls conducted by the company metron analysis showed that his party , new democracy ( abbr . nd ) , was rated only slightly higher than the opposition party pasok , led by mr . george papandreou . it is seen that the same mathematical model applies to the case of the popularity of president clinton between january 1998 and february 1999 , according to a previous study , although the present work extends the investigation to concepts such as complexity and fisher information , quantifying the organization of public opinion data .
the genetic architecture of biological organisms shows remarkable robustness against both structural and environmental perturbations .for example , quantitative models indicate that the functionality of the _ drosophila _ segment polarity gene network is extremely insensitive to variations in initial conditions and robust against architectural modifications .( also see . ) gene knock - out studies on yeast have shown that almost 40% of the genes on chromosome v have either negligible or no effects on the growth rate .certain cellular networks , such as the _ e. coli _ chemotaxis network , are also known to be very robust to variations in biochemical parameters .the genetic regulatory networks that control the developmental dynamics buffer perturbations and maintain a stable phenotype .that is why phenotypic variation within most species is quite small , despite the organisms being exposed to a wide range of environmental and genetic perturbations .it has been proposed that genetic robustness evolved through stabilizing selection for a phenotypic optimum . showed that this in fact can be true by modeling a developmental process within an evolutionary scenario , in which the genetic interaction sequence represents the development , and the stationary configuration of the gene network represents the phenotype .his results indicate that the genetic robustness of a population of model genetic regulatory networks can gradually increase through stabilizing selection , in which deviations from the `` optimal '' stationary state ( phenotype ) are considered deleterious . in this paper , we focus on the effects of evolution of genetic robustness on the dynamics of gene regulatory networks in general .first , we examine the relationship between genetic robustness and the dynamical character of the networks . by dynamical character ,we mean the stability of the expression states of the network against small perturbations , or noise .models that are employed to study dynamics of gene networks , such as random boolean networks ( rbn ) or some variants of random threshold networks ( rtn ) have been known to undergo a phase transition at low connectivities , giving rise to change in dynamical behavior : on average , small perturbations will percolate through the network above the threshold connectivity ( chaotic phase ) , whereas they stay confined to a part of the network below the threshold ( ordered phase ) . intuitively , one might expect to find that robustness to mutations ( which are permanent structural changes , not dynamic perturbations ) is related to the dynamical behavior of the system , therefore , ordered gene regulatory networks should be genetically more robust than the chaotic ones . however , this is not necessarily true . here , we show that the relation between the dynamical character of a genetic regulatory network and its mutational robustness can be quite the opposite .in fact , even earlier studies provide a clue on this issue : for gene networks that have undergone selection ( i.e. , evolved ) , mutational robustness is known to increase with increasing connectivity . on the other hand, chaoticity has been shown to increase with increasing connectivity in random gene regulatory networks .these facts seem to contradict the intuitive interpretation of robustness since they suggest that chaotic networks can be mutationally robust . 
although this inference is proven true in this paper , such an assessment can not be based on the previous studies of the dynamics of random networks , as the evolved networks ( with high mutational robustness and connectivity ) mentioned above have undergone selection .the selection process could potentially tune a network to exhibit a different dynamical character than its random ancestors .therefore , the dynamical character of the evolved networks should be studied independently and then compared with their random counterparts . here , we study the dynamics of the evolved networks numerically and show that selection for an optimal phenotype indeed has only a minor effect on their global dynamical behavior .this result indicates that the evolution of mutational robustness can not be understood in terms of simple dynamical measures .we also provide statistics on robustness to _ noise _ , which is the ability of a network to reach its `` optimal '' steady state after a perturbation to the gene - expression _trajectory_. computer simulations indicate that mutational robustness is correlated to robustness of the gene - expression trajectory to small perturbations ( noise ) , at least for short trajectories .this result is supported by recent studies . for perturbations of arbitrary magnitude, the basin size of the steady - state attractor provides a better measure .our analysis shows that basin sizes of densely connected networks ( which are highly chaotic ) have a broad distribution , and therefore such networks can have very large attractor basins .the yeast cell - cycle network has been reported to have similar properties .although chaoticity is just a side - effect of high connectivity , and not directly selected for during evolution , it appears that this intrinsic property of chaotic networks can be useful in terms of robustness to noise .when all of these measures are taken into account , chaotic dynamics does not seem be an obstacle for the networks with high connectivity since they are more robust to both mutations and noise than the ones that are sparsely connected .the organization of the rest of this paper is as follows .we describe the model in sec .[ sec : model ] and explain its implementation in simulations in sec .[ sec : methods ] .we give the results in sec .[ sec : results ] and discuss their implications in sec [ sec : discussion ] .we use the model introduced by , which has also been used with some modifications by other researchers . each individualis represented by a regulatory gene network consisting of genes .the expression level of each gene , can be either or , meaning that the gene is expressed or not , respectively .the expression states change in time according to regulatory interactions between the genes .the time development of the expression states ( i.e. , the dynamical trajectory taken by the network ) represents a developmental pathway .this deterministic , discrete - time dynamics of the development is given by a set of nonlinear difference equations , where sgn is the sign function and is the strength of the influence of gene on gene .nonzero elements of the matrix are independent random numbers drawn from a gaussian distribution with zero mean and unit variance. 
the diagonal elements of * * are allowed to be nonzero , corresponding to self - regulation .the mean number of nonzero elements in is controlled by the connectivity density , , which is the probability that any given is nonzero .thus , the mean degree of the network is to clarify , represents the _ developmental _ time , through which genetic interactions occur .it is different from the _ evolutionary _ time , , which will be explained in the next section .the dynamics given by eq . can display a wide variety of features . for a specified initial state ,the network reaches an attractor ( either a fixed point or a limit cycle ) after a transient period .transient time , number of attractors , attractor periods , etc . can differ depending on the connectivity of the network , and from one realization of to another .the fitness of an individual is defined by whether it can reach a developmental equilibrium , i.e. , a fixed point , which is a fixed gene - expression pattern , , in a `` reasonable '' transient time .( it has been shown that selection for developmental stability is sufficient for evolution of mutational robustness , i.e. , deviations from do not have to be deleterious , as long as the gene - expression configuration reaches a fixed point .however , we shall adopt the criterion used by for compatibility . )further details of the model are explained in the next section .we studied populations of random networks with .we use these relatively small networks to be able to enumarate all network states exhaustively for a large set of realizations . in the simulations ,each network was first assigned a random interaction matrix and an initial state . was generated as follows . for each ,a random number uniformly distributed on was generated , and was set to zero if the random number was greater than the connectivity density , .otherwise , was assigned a random number drawn from a gaussian distribution with zero mean and unit variance .then , each `` gene '' of the initial configuration , , was assigned either or at random , each with probability 1/2 .after and were created , the developmental dynamics were started , and the network s stability was evaluated . if the system reached a fixed point , in time steps , then it was considered _ viable _ and kept .otherwise it was considered unstable , both and were discarded , and the process was repeated until a viable network was generated .( a viable network can have several fixed points and/or limit cycles in addition to . ) for each viable network , its fixed point , , was regarded as the `` optimal '' gene - expression state of the system .this is the only modification we made to the model used by : we accept any as long as it can be reached within time steps from , whereas generated networks with preassigned random and ., between and to assess mutational robustness of these networks .we ignore this fact since producing networks with a predetermined is computationally not very feasible .the probability distribution of for the networks we generate follows approximately a gaussian with a mean between 2.8 and 4.5 , increasing with .see supplementary fig . s2 . 
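the generation of viable networks described above is straightforward to reproduce numerically. the sketch below follows the recipe just given: gaussian weights present with probability c, a random initial expression state, and iteration until a fixed point or a cut - off is reached. the treatment of a zero input sum (mapped to +1 here) and the cut - off of 100 developmental steps are assumptions, since only the general procedure is fixed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_network(n=7, c=0.5):
    """random interaction matrix: each entry is nonzero with probability c
    (the connectivity density) and then drawn from a standard gaussian."""
    w = rng.standard_normal((n, n))
    w[rng.random((n, n)) > c] = 0.0
    return w

def step(w, s):
    """one developmental step; a zero input sum is mapped to +1 here, which
    is one of several possible conventions."""
    h = w @ s
    return np.where(h >= 0.0, 1, -1)

def develop(w, s0, t_max=100):
    """iterate the dynamics; return the fixed point if one is reached within
    t_max steps (an assumed cut-off), otherwise None."""
    s = s0.copy()
    for _ in range(t_max):
        s_next = step(w, s)
        if np.array_equal(s_next, s):
            return s            # fixed point, the "optimal" expression state
        s = s_next
    return None

def viable_network(n=7, c=0.5):
    """rejection-sample (W, xi0) pairs until the dynamics reach a fixed point."""
    while True:
        w = random_network(n, c)
        s0 = rng.choice([-1, 1], size=n)
        s_opt = develop(w, s0)
        if s_opt is not None:
            return w, s0, s_opt

w, s0, s_opt = viable_network()
print("initial state :", s0)
print("fixed point   :", s_opt)
```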
] in order to generate a collection of more robust networks , a mutation - selection process was simulated for each viable network as follows .first , a clan of identical copies of each network was generated .for each member of the clan , a four - step process was performed for generations : 1 .recombination : each pair of the rows of consecutive matrices in the clan were swapped with probability 1/2 . sincethe networks were already shuffled in step 4 ( see below ) , there was no need to pick random pairs .2 . mutation : each nonzero was replaced with probability by a new random number drawn from the same standard gaussian distribution .thus , on average , one matrix element was changed per matrix per generation .fitness evaluation : each network was run starting from the original initial condition . if the network reached a fixed point , within developmental time steps , then its fitness was calculated using where denotes the normalized hamming distance between and , denotes the inverse of the strength of selection , is the optimal gene - expression state , which is the final gene - expression state of the original network that `` founded '' the clan .we used .if the network could not reach a fixed point , it was assigned the minimum nonzero fitness value , 4 .selection / asexual reproduction : the fitness of each network was normalized to the fitness value of the most fit network in the clan .then a network was chosen at random and duplicated into the descendant clan with probability equal to its normalized fitness .this process was repeated until the size of the descendant clan reached .then the old clan was discarded , and the descendant clan was kept as the next generation .this process allows multiple copies ( offspring ) of the same network to appear in the descendant clan , while some networks may not be propagated to the next generation due to genetic drift . at the end of the generation ( evolutionary time ) selection ,any unstable networks were removed from the evolved clan .some steps of the process above may be unnecessary , and in fact , the results do not depend strongly on model details .nevertheless , the entire procedure of was retained for compatibility . the mutational robustness , of a network was assessed as follows .first , a nonzero was picked at random and replaced by a new random number with the same standard gaussian distribution .then , the developmental dynamics were started , and it was checked whether the system reached the same stationary state , , within time steps .this process was repeated times , starting from the original matrix .the robustness of the original network before evolution was defined as the fraction of singly - mutated networks that reached . for the evolved networks, we picked one sample network at random from the clan and used the same procedure to assess its mutational robustness . 
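the single - mutation robustness measure described above can be estimated with a few lines built on the helpers from the previous sketch; the number of mutation trials is an assumed parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutational_robustness(w, s0, s_opt, develop, n_trials=200):
    """fraction of single-entry mutations after which the dynamics, started
    from the same initial state, still reach the original fixed point s_opt.
    n_trials is an assumed sample size; `develop` is the iteration helper
    from the previous sketch."""
    rows, cols = np.nonzero(w)
    hits = 0
    for _ in range(n_trials):
        k = rng.integers(len(rows))
        w_mut = w.copy()
        w_mut[rows[k], cols[k]] = rng.standard_normal()   # redraw one weight
        s_fix = develop(w_mut, s0)
        if s_fix is not None and np.array_equal(s_fix, s_opt):
            hits += 1
    return hits / n_trials

# usage with the viable network generated above:
# w, s0, s_opt = viable_network()
# print("robustness R =", mutational_robustness(w, s0, s_opt, develop))
```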
) of 10000 sample networks before ( filled bars ) and after ( empty bars ) evolution with and .before : .after : .the evolved distribution was calculated by sampling one network from each of 10000 evolved clans .the mean indegree , , rather than the connectivity density , , is the parameter that controls the behavior of the system .[ sec : methods ] for other model parameters.,scaledwidth=50.0% ]the stabilizing selection described above increases the robustness of a model population of gene networks against mutations .figure [ fig : symmetry - and - robustness ] shows that a population with a very large initial variation in robustness evolved an increased ability to absorb mutations after generations of stabilizing selection . however , it is not very clear what kind of a reorganization in the state spaces of these networks occur during the evolution .we measured the changes in several system parameters to answer this question .it has been analytically shown that the rtns undergo a phase transition from order to chaos with increasing .obviously , the state space of an rtn is finite .therefore , the expression states have to display periodicity in development after at most steps .thus , chaos here is not a long - term aperiodic behavior ; rather it corresponds to a dynamical regime in which small perturbations percolate through the gene network .we quantify the dynamical character of a network by comparing the time development of two configurations , and , that differ by one gene : we measure the mean number of different genes in time step , , averaged over all possible and pairs with .this is known as damage spreading or damage propagation .this incompatibility vanishes for larger as this detail in the update rule does not have any significance when almost all nodes have inputs .nevertheless , we use the term `` damage spreading '' instead of `` damage - spreading rate '' when referring to to avoid a confusion . ]. the dynamical behavior of random networks can be quantified analytically using damage - spreading analysis . however, this analytical approach may not be applied to the _ evolved _ networks as the selection process can tune a network to behave dynamically very differently than its ancestor .therefore , we calculated numerically for both viable and evolved networks , averaging ensembles of 10000 networks exhaustively over all possible configuration pairs , and , for each .as seen in fig .[ fig : matrixdistr - and - damage ] , the increases monotonically with increasing connectivity , indicating that highly connected networks are more chaotic on average .the evolved networks are slightly more ordered ( on average ) than their viable ancestors .these results indicate that the dynamical behavior of the evolved networks is not much different than that of the viable networks from which they are descended . , after a one - bit perturbation before ( viable ) and after evolution ( evolved ) for networks with , measured exhaustively for all possible state pairs , averaged over 10000 samples each .the error bars represent one standard deviation ( not standard error ) .evolved networks ( triangles ) are slightly more ordered compared to their viable ancestors .( the differences are statistically significant . ) is always larger than unity due to our specific update rule .see the discussion in the text for details .the lines connecting the symbols are guides to the eye .( probability distributions for for certain values of are given in supplementary figs .s3 and s4 . 
to . arrows indicate the direction of flow. each state drains into an attractor (pentagons) through transient states. (a) state space of a sample network with and . there are eight basins, each having a fixed-point attractor. the principal basin (at the top) contains the initial (diamond-shaped node) and final states. note that the basins are quite symmetric, and transients are very short, as the system behaves less chaotically when the connectivity of the network is low. (b) basin of a sample network with and . the principal basin occupies a large portion of the state space. the basin on the left crosses the symmetry plane with a 2-cycle (note the symmetry of the branches). the other basins have mirror images on the other side of the state space (not shown). the principal basin is significantly larger than the others. the high connectivity makes the network behave more chaotically, creating basins with broadly distributed sizes and branch lengths, and significantly longer transients. see text for details. generated using graphviz. [fig:state-space-layout]
we also analyzed the state spaces of random, viable, and evolved networks. figures [fig:stats1] and [fig:stats2] show these statistics for networks with and connectivities , 5, and 10. due to the up-down symmetry of the system, the state space is divided into two parts, where the dynamics on one side is the mirror image of the other. therefore, the basin size of a fixed-point attractor can not exceed half the size of the state space, . limit cycles, however, can cross the symmetry plane that divides the state space (fig. [fig:state-space-layout](b)); therefore, their basins can contain up to states. typically, sparsely connected gene networks have many attractors (fixed points and limit cycles) and, consequently, smaller basin sizes on average, as depicted in fig. [fig:state-space-layout](a). increasing connectivity brings state spaces with fewer attractors (figs. [fig:stats1](a), (b), and (c)), longer attractor (limit-cycle) periods (figs. [fig:stats1](d), (e), and (f)), and broadly distributed basin sizes (figs. [fig:stats1](g), (h), and (i)). densely connected networks can contain basins that occupy a large portion, occasionally even all, of the state space.
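the state-space statistics described in this section (number of attractors, attractor periods, basin sizes, transient lengths, and precursor counts) can be obtained for small networks by exhaustive enumeration, as in the following sketch; the update convention and parameter values are again illustrative assumptions.

```python
import numpy as np

def step_index(W, idx, N):
    """Map a state encoded as an integer 0..2^N-1 to its successor index."""
    s = np.array([1 if (idx >> k) & 1 else -1 for k in range(N)])
    s_new = np.where(W @ s >= 0, 1, -1)       # assumed update convention
    return sum(1 << k for k in range(N) if s_new[k] == 1)

def state_space_stats(W):
    N = W.shape[0]
    M = 1 << N
    succ = np.array([step_index(W, i, N) for i in range(M)])
    label = -np.ones(M, dtype=int)            # attractor index of each state
    transient = np.zeros(M, dtype=int)        # steps needed to reach it
    attractors = []
    for i in range(M):
        path, j = [], i
        while label[j] == -1 and j not in path:
            path.append(j)
            j = succ[j]
        if label[j] == -1:                    # closed a new cycle
            start = path.index(j)
            cycle = path[start:]
            attractors.append(cycle)
            for c in cycle:
                label[c], transient[c] = len(attractors) - 1, 0
            path = path[:start]
        for k, p in enumerate(reversed(path)):   # label the transient states
            label[p] = label[j]
            transient[p] = transient[j] + k + 1
    basin_sizes = np.bincount(label)
    periods = [len(c) for c in attractors]
    precursors = np.bincount(succ, minlength=M)  # in-degree of each state
    return basin_sizes, periods, transient, precursors

rng = np.random.default_rng(2)
N, K = 8, 5
W = rng.standard_normal((N, N)) * (rng.random((N, N)) < K / N)
sizes, periods, transients, prec = state_space_stats(W)
print("attractors:", len(periods), "periods:", periods)
print("largest basin fraction:", sizes.max() / 2**N)
print("mean transient:", transients.mean())
```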
the basins of such networks tend to have broadly distributed branch lengths, as seen in fig. [fig:state-space-layout](b), and their states also tend to have fewer precursors (figs. [fig:stats2](a), (b), and (c)). (all states , , that go to state at time step +1 are precursors of .) therefore, the mean transient time (total number of steps from a state to the attractor) on such networks is typically larger than on networks with lower connectivity (figs. [fig:stats2](d), (e), and (f)). these changes in the state-space characteristics are consistent with the damage-spreading measurements. the distributions for the evolved networks are shifted toward the distributions of more ordered networks of lower connectivity. this means evolved networks are slightly more ordered, as the damage-spreading measurements shown in fig. [fig:matrixdistr-and-damage](b) suggest. for example, the distribution of the number of attractors (figs. [fig:stats1](a), (b), and (c)) for the networks with low connectivity ( ) has a longer tail compared to the networks with and 10. similarly, the distributions for the evolved networks have slightly longer tails compared to those of viables of the same connectivity. the same effect can be seen in the attractor-period (figs. [fig:stats1](d), (e), and (f)), precursor (figs. [fig:stats2](a), (b), and (c)), and transient-time distributions (figs. [fig:stats2](d), (e), and (f)) as well. the most significant changes are seen in the transient-time distributions, indicating that basins with relatively shorter branches are preferred by selection. the basin-size distributions of viable and evolved networks, however, do not display much difference, as they virtually overlap.
and 5, and 10 (columns 1, 2, and 3, respectively). each curve represents an average over 20,000 realizations for the ``random'' networks, and 10,000 realizations for the ``viable'' and ``evolved'' ones. for the evolved networks, we did not use clan averages to avoid a bias; instead, we picked one sample network from each evolved clan. error bars were calculated by grouping the data: each data set was divided into groups of 1000 samples and the average distribution for each group was calculated; the error calculations were then performed on the new set of averaged distributions. the data plotted on log-log scale were histogrammed using exponential bins (0, 1, 2-3, 4-7, ...) to reduce the noise. (see supplementary fig. s1 for histograms with linear bins.) the basin size does not include the size (period) of the attractor. the evolved attractor-count and attractor-period distributions are shifted toward those of viable networks with lower , indicating that evolved networks display a slightly more ordered character. the basin-size distributions for the evolved networks seem to overlap with the ones for the viable networks except for the largest basins, which become less probable after selection. the lines connecting the symbols are guides to the eye. [fig:stats1]
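the exponential binning (0, 1, 2-3, 4-7, ...) mentioned in the figure captions can be sketched as follows; the per-bin normalization is an assumption about how the plotted densities were produced, and the zipf-distributed toy sample merely stands in for a broad basin-size distribution.

```python
import numpy as np

def exponential_bins(max_value):
    """Bin edges 0, 1, 2, 4, 8, ... covering integer data up to max_value."""
    edges = [0, 1, 2]
    while edges[-1] <= max_value:
        edges.append(edges[-1] * 2)
    return np.array(edges)

def binned_density(samples):
    samples = np.asarray(samples)
    edges = exponential_bins(samples.max())
    counts, _ = np.histogram(samples, bins=edges)
    widths = np.diff(edges)
    centers = edges[:-1] + widths / 2
    density = counts / (widths * len(samples))   # normalized per unit x
    return centers, density

# heavy-tailed toy sample standing in for, e.g., basin sizes
rng = np.random.default_rng(3)
sizes = rng.zipf(1.8, size=10_000)
for x, p in zip(*binned_density(sizes)):
    print(f"{x:10.1f}  {p:.3e}")
```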
5, and 10 (columns 1, 2, and 3, respectively). the second row shows the distributions for the lengths of the transients starting from all possible states. the third row, however, shows the distribution for , the length of the transient starting from , the original initial state. again, changes in the distributions indicate that evolved networks display a slightly more ordered character. the data plotted on log-log scale were histogrammed using exponential bins (0, 1, 2-3, 4-7, ...) and the other plots were histogrammed using linear (0, 1, 2, ...) bins. see the caption of fig. [fig:stats1] for the details of the data analysis. the lines connecting the symbols are guides to the eye. [fig:stats2]
as seen in fig. [fig:before-and-after-plots](a), networks of all connectivities have similar values of mutational robustness, , before selection. after selection, however, the networks with higher connectivity reach a much greater mean mutational robustness. sparsely connected networks do not show much improvement because the low connectivity leaves little room for optimization. another significant change is seen in the transient time, (figs. [fig:stats2](g), (h), and (i) and [fig:before-and-after-plots](b)). for viable networks, the transient time increases with the connectivity. this is not surprising, since state spaces of densely connected networks typically have basins with longer branches (fig. [fig:state-space-layout](b)). however, selection brings the mean transient time down to around 2, independent of the connectivity of the network. a low transient time is one of the properties of highly robust networks.
mutations are not the only kind of perturbations that genetic regulatory networks experience. all genetic systems are also exposed to noise created by both internal and external sources. we measure robustness to noise (dynamical robustness) using two parameters. the first one is the dynamical robustness, , of a gene-expression state, , to small perturbations (random one-bit flips), which is simply the fraction of nearest neighbors in hamming distance of that lie in the same basin as . in other words, is the probability that a random flip of a gene on yields a state that drains into the same attractor as . we also employ the mean robustness of a set of states, for instance, the robustness of the gene-expression trajectory, , which is the average of over all states in the trajectory (including and ).
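a sketch of the dynamical (noise) robustness measures just defined is given below: the robustness of a state is the fraction of its one-bit neighbours that flow to the same attractor, and the trajectory robustness averages this quantity over the states visited on the way from the initial state to the fixed point. for simplicity the attractor is identified by iterating to a fixed point; networks that settle into limit cycles would require the full cycle detection sketched earlier, and the update convention remains an assumption.

```python
import numpy as np

def iterate_to_attractor(W, s, t_max=200):
    """Return the fixed point reached from s, or None (cycle / too slow)."""
    for _ in range(t_max):
        s_new = np.where(W @ s >= 0, 1, -1)   # assumed update convention
        if np.array_equal(s_new, s):
            return s
        s = s_new
    return None

def state_robustness(W, s, target):
    """Fraction of one-gene flips of s that still reach `target`."""
    hits = 0
    for i in range(len(s)):
        s2 = s.copy()
        s2[i] *= -1
        a = iterate_to_attractor(W, s2)
        if a is not None and np.array_equal(a, target):
            hits += 1
    return hits / len(s)

def trajectory_robustness(W, s0, t_max=200):
    """Average state robustness along the path from s0 to its fixed point."""
    traj, s = [np.array(s0)], np.array(s0)
    for _ in range(t_max):
        s_new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        traj.append(s_new)
        s = s_new
    else:
        return np.nan                         # no fixed point reached
    target = traj[-1]
    return np.mean([state_robustness(W, st, target) for st in traj])

rng = np.random.default_rng(4)
N, K = 10, 5
W = rng.standard_normal((N, N)) * (rng.random((N, N)) < K / N)
s0 = rng.choice([-1, 1], size=N)
print("trajectory robustness:", trajectory_robustness(W, s0))
```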
similarly, the robustness of the principal basin, , is the average of over all states in the basin, and the robustness of the entire state space, , is the average of over all possible states of the network. (we usually omit the term ``dynamical'' when we talk about the robustness of a state or a set of states, since the mutational robustness of a _state_ is not defined.) the second parameter we use to estimate dynamical robustness is the normalized size of the principal basin, , which is the basin containing both and . the size of a basin is the total number of states it contains. therefore, is the fraction of all possible perturbations to the trajectory that leave unchanged, i.e., the probability that flipping an arbitrary number of genes at random in a state on the trajectory does not change the attractor that the network settles into. clearly, the parameters we employ to measure robustness to noise are quite different from conventional measures of stability, such as the damage-spreading rate. this is because we are interested in the endpoints of the gene-expression trajectory, but not in the exact path taken. therefore, the relationship between damage spreading and robustness to noise is not trivial; in fact, it is quite counterintuitive, as explained below.
as shown in figs. [fig:stats1](g), (h), and (i), densely connected networks have a greater number of larger basins, as their basin-size distributions essentially follow a power law for about two decades. the effect of the connectivity on is similar, as shown in fig. [fig:before-and-after-plots](c). networks with low connectivity have small principal basins both before and after selection. more densely connected networks have larger before selection. we see a small increase in networks with , while fully connected networks ( ) increase their significantly after evolution. however, compared to the change in mutational robustness, the effect of selection on the size of the principal basin is modest for highly connected networks. although greater connectivity increases the damage spreading (sensitivity to small perturbations), the dynamical robustness of the trajectory, , increases with the connectivity as well (fig. [fig:before-and-after-plots](d)). networks with have a small both before and after selection. the curve for the viables has a maximum around , which also shows no improvement after selection. this indicates that the networks with are intrinsically robust to noise, but not evolvable. viable networks with higher connectivity have monotonically decreasing values of , but selection improves their average robustness significantly, above or up to the same level as the networks with . the robustnesses of the initial state, , and the final state, (figs. [fig:before-and-after-plots](e) and (f)), follow the same trend. the robustnesses of the principal basin, , and of the state space, , (figs. [fig:before-and-after-plots](g) and (h)) do not show as much improvement with selection. this implies that the effect of selection is more local (on the specific gene-expression trajectory) than global in terms of the change in dynamics. our results above agree with the findings of , who used very similar parameters to measure robustness to noise. we note that most of the robustness measures discussed above do not increase monotonically with for the viable networks (i.e., before selection).
for example, for viable networks has a maximum around (fig. [fig:before-and-after-plots](a)). this probably indicates that these quantities depend on more than one parameter (like the principal basin size and the magnitude of the damage spreading), some increasing and some decreasing with increasing . for the evolved networks, however, , , and increase monotonically with , at least for the range tested in this paper. this indicates that networks of higher connectivity are more evolvable when mutational robustness is considered. however, these networks are more chaotic on average compared to the ones of low connectivity, regardless of selection.
mean mutational robustness, transient time, normalized size of the principal basin, and robustnesses of the trajectory, initial state, final state, principal basin, and state space before (viable) and after evolution (evolved) for and , 2.5, 5, 7.5, and 10, averaged over 10,000 networks. the error bars are equal to one standard deviation (not standard error). is normalized by . the lines connecting the symbols are guides to the eye. [fig:before-and-after-plots]
and , 2.5, 5, 7.5, and 10. the -value is much smaller than 0.01 for all correlation coefficients with an absolute value above 0.03. , , , , , , and denote changes in the transient time, normalized size of the principal basin, robustnesses of the trajectory, initial state, final state, principal basin, and state space, and damage spreading, respectively. the dynamical robustness of the trajectory includes the contribution from the final state. the lines connecting the symbols are guides to the eye. see text for details. [fig:correlations]
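the correlation analysis summarized in the figure above, and discussed in the following paragraph, amounts to computing pearson correlation coefficients between the change in mutational robustness and the changes in the other recorded measures across the ensemble; a schematic version is given below, where the measure names and the toy data are purely illustrative placeholders.

```python
import numpy as np

def change_correlations(before, after):
    """Pearson correlations between the change in mutational robustness and
    the changes in every other recorded measure.

    `before` and `after` map a measure name to an array of per-network
    values, with the same network ordering in both dictionaries."""
    d_rmut = after["R_mut"] - before["R_mut"]
    corrs = {}
    for key in before:
        if key == "R_mut":
            continue
        d = after[key] - before[key]
        corrs[key] = np.corrcoef(d_rmut, d)[0, 1]
    return corrs

# toy data standing in for 10,000 evolved clans (hypothetical measure names)
rng = np.random.default_rng(5)
n = 10_000
before = {k: rng.random(n) for k in ["R_mut", "tau", "B1", "R_traj", "d1"]}
after = {k: v + 0.1 * rng.standard_normal(n) for k, v in before.items()}
print(change_correlations(before, after))
```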
the recent theoretical and computational work on gene regulatory networks indicates that there is a strong link between mutational robustness and robustness to noise. in order to see the relation between these two quantities, we calculated the correlation coefficients between the change in mutational robustness ( ) and the changes in different measures of robustness to noise, as well as in the transient time and the damage spreading, as functions of the linkage density . changes in the transient time and mutational robustness are highly correlated, in agreement with the earlier studies. the principal basin size does not seem to have much effect on , as the correlation between their changes is quite weak (fig. [fig:correlations](b)). although a larger principal basin size means greater robustness to noise, this result suggests that it is not strongly selected for. changes in the robustnesses of the initial state, final state, principal basin, and state space are also correlated with (figs. [fig:correlations](c), (d), (e), and (f)). the weak correlation between the changes in the damage spreading and mutational robustness (fig. [fig:correlations](h)) implies that it is not the change in dynamical behavior that brings mutational robustness.
in this paper, we have analyzed changes in state-space properties of model genetic regulatory networks under selection for an optimal phenotype. both numerical stability analysis and the state-space statistics indicate that the difference between the global dynamical properties of mutationally robust networks that have undergone selection and those of their random ancestors is quite small. furthermore, the correlation between the changes in the damage spreading and the mutational robustness is weak. therefore, changes in the global dynamical properties do not seem to be responsible for the increase in mutational robustness after selection.
the dynamics of many random threshold networks, as well as random boolean networks, depend largely on their connectivity distributions. variants of the random threshold networks used in this paper have been shown to have a chaotic phase above , depending on the model details. essentially, rtns have a chaotic phase for sufficiently large , just as rbns do. the connectivity of a network affects the evolvability of its mutational robustness, as well as its dynamical character. for viable (essentially random) networks, the mutational robustness is very similar for all connectivities. for the evolved networks, however, it increases monotonically with increasing connectivity, creating drastic differences for large . the dynamical robustness of the gene-expression trajectory, the initial state, and the final state follow similar trends. these results clearly indicate that the stability of the system as measured by the damage spreading does not capture its dynamical characteristics in this context.
as pointed out in a recent paper by , selection decreases the transient time by picking the ``proper'' interaction constants to construct a shorter (or direct) path from to . both and stated that mutationally robust networks are the ones that have found a path for the gene-expression trajectory at a safe distance from the basin boundary, so that small perturbations can not kick them into a different basin. thus, even if the selection operates only on the stability of the stationary gene-expression pattern, robustness to mutations intrinsically requires stability of the gene-expression trajectory against small perturbations. there is also some experimental evidence supporting the association between genetic and non-genetic change. although it might seem that robustness to noise evolves as a by-product of robustness to mutations, the converse case is also true. thus, robustness to noise and robustness to mutations seem to evolve mutually when certain conditions are met.
the analysis of the cell-cycle regulatory network of budding yeast provides further evidence for the chaotic behavior of gene networks. the simplified form of this network has 11 nodes (genes or proteins), one checkpoint (an external input, in this case, the cell size), and 34 links including self-degrading interactions. using a dynamical model similar to the one used in this paper, it was shown that the stationary state of the network has a basin occupying 86% of the state space. a related study examined an ensemble of networks that can perform the same function, i.e.
, the 12-step sequence of transitions in the expression trajectory, and found that these functional networks have larger basins for the stationary state (consequently, broadly distributed basin sizes), fewer attractors, longer transient times, and a larger damage-spreading rate compared to their randomized counterparts. they concluded that those dynamical features emerge due to the functional constraints on the network. here, we showed that those features, which are signs of chaotic dynamics, can arise in the presence of structural perturbations (mutations) if the connectivity of the network is large enough, even when the constraints on the function are minimal, i.e., when the only selection is on the phenotype. although the length of the gene-expression trajectory of the yeast cell-cycle network needs further explanation in terms of mutational robustness, it appears that chaotic dynamics may be a design principle underlying the seemingly ``boring'' and ordered behavior generally seen in models of gene regulatory networks, where a simple cascade of expression terminates at a stationary state.
the effect of network topology on the evolvability of robustness is another aspect of the problem, which we do not discuss in this paper. however, we would like to point out that recent studies indicate that certain topological features, such as connectivity, are not very crucial in determining the response of cellular networks to genetic or non-genetic change, and there may be other factors shaping their topological structure. our results also imply that the ``life at the edge of chaos'' hypothesis, which suggests that adaptability (evolvability) is maximized in the critical regime, does not seem to be necessary, at least not to explain the evolvability of robustness to mutations and noise. indeed, recent studies concerning the dynamics of genetic regulatory networks do not indicate any special feature brought by criticality. the ``edge of chaos'' concept was primarily developed to describe the phase in which cellular automata can perform universal computation, and it may not be related to the dynamics and evolution of biochemical pathways as was once thought.
to summarize, our study indicates that conventional measures of stability may not be very informative about robustness to mutations or noise in gene regulatory networks when one considers the steady gene-expression pattern as the robust feature of the network. also, the dynamics underlying the simple gene-expression trajectories can be very rich, reflecting a complex state-space structure.
the online version of this article contains supplementary material: linearly binned histograms of the data represented in figs. [fig:stats1] and [fig:stats2], probability densities for the hamming distance between the initial and final states, the magnitude of the damage spreading before and after evolution, and probability densities for the latter.
we thank t. f. hansen, s. bornholdt, g. brown, d. balcan, b. uzunoğlu, p. oikonomou, and a. pagnani for helpful discussions. this research was supported by u.s. national science foundation grant nos. dmr-0240078 and dmr-0444051, and by florida state university through the school of computational science, the center for materials research and technology, and the national high magnetic field laboratory.
albert, r., othmer, h. g.
, 2003. the topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in _drosophila melanogaster_. j. theor. biol. 223 (1), 1-18.
aldana, m., coppersmith, s., kadanoff, l. p., 2003. boolean dynamics with random couplings. in: kaplan, e., marsden, j. e., sreenivasan, k. r. (eds.), perspectives and problems in nonlinear science. a celebratory volume in honor of lawrence sirovich. springer, berlin heidelberg new york, pp. 23-89.
azevedo, r. b. r., lohaus, r., srinivasan, s., dang, k. k., burch, c. l., 2006. sexual reproduction selects for robustness and negative epistasis in artificial gene networks. nature (london) 440 (7080), 87-90.
balcan, d., kabakçıoğlu, a., mungan, m., erzan, a., 2007. the information coded in the yeast response elements accounts for most of the topological properties of its transcriptional regulation network. plos one 2 (6), e501.
de visser, j. a. g. m., hermisson, j., wagner, g. p., meyers, l. a., bagheri-chaichian, h., blanchard, j. l., chao, l., cheverud, j. m., elena, s. f., fontana, w., gibson, g., hansen, t. f., krakauer, d., lewontin, r. c., ofria, c., rice, s. h., von dassow, g., wagner, a., whitlock, m. c., 2003. perspective: evolution and detection of genetic robustness. evolution 57 (9), 1959-1972.
ellson, j., gansner, e. r., koutsofios, e., north, s. c., woodhull, g., 2003. graphviz and dynagraph - static and dynamic graph drawing tools. in: jünger, m., mutzel, p. (eds.), graph drawing software. springer-verlag, berlin heidelberg new york, pp. 127-148.
von dassow, g., odell, g. m., 2002. design and constraints of the _drosophila_ segment polarity module: robust spatial patterning emerges from intertwined cell state switches. j. exp. zool. 294 (3), 179-215.
robustness to mutations and noise has been shown to evolve through stabilizing selection for optimal phenotypes in model gene regulatory networks . the ability to evolve robust mutants is known to depend on the network architecture . how do the dynamical properties and state - space structures of networks with high and low robustness differ ? does selection operate on the global dynamical behavior of the networks ? what kind of state - space structures are favored by selection ? we provide damage propagation analysis and an extensive statistical analysis of state spaces of these model networks to show that the change in their dynamical properties due to stabilizing selection for optimal phenotypes is minor . most notably , the networks that are most robust to both mutations and noise are highly chaotic . certain properties of chaotic networks , such as being able to produce large attractor basins , can be useful for maintaining a stable gene - expression pattern . our findings indicate that conventional measures of stability , such as the damage - propagation rate , do not provide much information about robustness to mutations or noise in model gene regulatory networks .
uncertainty in quantum theory can be attributed to two different issues: the irreducible indeterminacy of individual quantum processes, postulated by born, and the complementarity, introduced by bohr, which implies that we cannot simultaneously perform precise measurements of noncommuting observables. for that reason, estimation of the quantum state from the measurement outcomes, which is sometimes called quantum tomography, is of paramount importance. moreover, in practice, unavoidable imperfections and finite resources come into play, and the performance of tomographic schemes should be assessed and compared. at a fundamental level, mutually unbiased bases (mubs) provide perhaps the most accurate statement of complementarity. this idea emerged in the seminal work of schwinger and it has gradually turned into a keystone of quantum information. apart from being instrumental in many hard problems, mubs have long been known to provide an optimal basis for quantum tomography. when the hilbert space dimension is a prime power, , it is known that there exist sets of mubs. these hilbert spaces are realized by systems of particles, which, in turn, allow for special entanglement properties. more specifically, we consider here qubits, as they are the building blocks of quantum information processing. it was known that for three qubits, four different sets of mubs exist, with very different entanglement properties. the analysis was further extended to higher numbers of qubits and confirmed by different approaches. for the experimentalist, this information is very important, because the complexity of an implementation of a given set of mubs will, of course, greatly depend on how many of the qubits need to be entangled. in this work, we use fisher information to set forth efficient tools for assessing the quality of a wide class of tomographic schemes, paying special attention to -qubit mubs. despite the widespread belief that mubs are equally sensitive to all the state features, we show that mubs with different entanglement properties differ in their potential to characterize particular aspects of the reconstructed state. we illustrate this with the fidelity estimation of three-qubit states, making evident the relevance of mub-entanglement classification and illustrating the possibility to optimize mub tomography with respect to the observables of interest.
let us start with a brief overview of the basic methods we need for the rest of the discussion. we deal with a -dimensional quantum system, represented by a positive semidefinite density matrix . a very convenient parametrization of can be achieved in terms of a traceless hermitian operator basis , satisfying and : , where the -dimensional generalized bloch vector ( ) is uniquely determined by . the set coincides with the orthogonal generators of su( ), which is the associated symmetry algebra. the measurements performed on the system are described, in general, by positive operator-valued measures (povms), which are collections of positive operators resolving the identity. each povm element represents a single output channel of the measuring apparatus; the probability of detecting the output is given by the born rule. in a sensible estimation procedure, we have identical copies of the system and repeat the measurement on each of them. the statistics of the outcomes is then multinomial, i.e., , where is the probability of detection at the channel and the actual number of detections.
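as a concrete illustration of the measurement statistics just described, the sketch below evaluates the born-rule probabilities for a given povm and true state and draws multinomial counts for a finite number of copies; the random two-qubit state and the projective povm used here are illustrative choices and not part of the original analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(d):
    """Random full-rank state (a simple Hilbert-Schmidt-like construction)."""
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

def born_probabilities(rho, povm):
    """p_j = Tr(Pi_j rho) for each POVM element Pi_j."""
    return np.real([np.trace(E @ rho) for E in povm])

def simulate_counts(rho, povm, n_copies):
    p = born_probabilities(rho, povm)
    p = np.clip(p, 0, None)
    p /= p.sum()                           # guard against rounding errors
    return rng.multinomial(n_copies, p)

# example: d = 4 (two qubits), POVM = projectors onto a random orthonormal basis
d = 4
rho = random_density_matrix(d)
U = np.linalg.qr(rng.standard_normal((d, d))
                 + 1j * rng.standard_normal((d, d)))[0]
povm = [np.outer(U[:, k], U[:, k].conj()) for k in range(d)]
counts = simulate_counts(rho, povm, n_copies=1000)
print("probabilities:", np.round(born_probabilities(rho, povm), 3))
print("counts       :", counts)
```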
here, is the probability of registering the data provided the true state is . the estimation requires the introduction of an estimator; i.e., a rule of inference that allows one to extract a value for from the outcomes. the random variable is an unbiased estimator if . the ultimate bound on the precision with which one can estimate is given by the cramer-rao bound, which, in terms of the covariance matrix , can be stated as , where f stands for the fisher matrix . since , it might superficially appear that computing the fisher matrix (and hence the covariances of the estimated parameters) is straightforward. however, in practice, this can become quite a difficult task: the cost of computing matrix multiplications and inversions, even with the best-known exact algorithm, scales as . for a system of qubits, this computational cost goes as , which sets an upper limit on the dimension for which the evaluation of the reconstruction errors is feasible. for instance, analyzing a system of just five qubits requires about a billion arithmetic operations with individual elements, which makes the problem intractable along these lines. this especially applies when a large number of repeated evaluations is required, as in monte carlo simulations. this numerical cost can be considerably reduced by employing a special parametrization for . to this end, we restrict ourselves to informationally complete (ic) measurements, which are those for which the outcome probabilities are sufficient to determine an arbitrary quantum state. given a system of dimension , any ic measurement must have at least output channels; when it has just of them, it will be called a minimal ic reconstruction scheme. it turns out that the error analysis in this case is particularly simple and can be done analytically, avoiding time-expensive computations. we remark that we are not addressing here the resources needed for a complete tomography (which scale exponentially); rather, our aim is to ascertain what can be better estimated from ic measurements. indeed, for an ic minimal scheme ( ), there exists a unique representation of any quantum state in terms of the basic probabilities . normalization reduces by one the number of independent parameters describing the state: ( and ). in this way, we get ( ) and , leading us to . we thus conclude that the errors of any ic minimal scheme are given by the covariance matrix of the underlying true multinomial distribution governing the measurement outcomes. notice that this might not apply to other overdetermined setups, as, for instance, optical homodyne tomography. once the fisher matrix is known, the errors in any observation can be estimated. let us consider the measurement of the average value of some generic observable . if the reconstructed state is , the predicted outcomes are , and the expected errors are , where the averaging here is over many repetitions of the reconstruction. by expanding the observable in the povm elements, , the true and predicted outcomes can be given in terms of true and predicted measurement probabilities as and , where and , respectively. denoting the differences between the true and inferred probabilities, the expected error of the observable defined by eq. becomes . next, we rearrange eq. so that only linearly independent probabilities are involved. notice that implies , and hence .
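the statement that, for a minimal ic scheme, the reconstruction errors are governed by the multinomial covariance of the outcome probabilities can be illustrated numerically. the sketch below treats the simplified case of an observable that is diagonal in the measured basis, so that its estimate is a linear combination of observed frequencies; the predicted variance, obtained from the multinomial covariance cov(f_i, f_j) = (p_i δ_ij - p_i p_j)/n, is compared with a direct monte carlo estimate. the probabilities and coefficients are arbitrary illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

def predicted_variance(p, a, n_copies):
    """Var(sum_j a_j f_j) from the multinomial covariance of frequencies."""
    cov = (np.diag(p) - np.outer(p, p)) / n_copies
    return a @ cov @ a

def monte_carlo_variance(p, a, n_copies, n_runs=20_000):
    counts = rng.multinomial(n_copies, p, size=n_runs)
    estimates = (counts / n_copies) @ a
    return estimates.var()

p = np.array([0.4, 0.3, 0.2, 0.1])      # true outcome probabilities
a = np.array([1.0, -1.0, 0.5, 2.0])     # expansion coefficients of the observable
N = 500                                 # number of copies
print("predicted :", predicted_variance(p, a, N))
print("simulated :", monte_carlo_variance(p, a, N))
```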
in this way, we get . employing the cramer-rao lower bound, we finally obtain , where is the inverse fisher matrix in the probability representation given by eq. . occasionally, working in the measurement representation might be preferable. expanding the true and estimated states in the measured povm elements, and , we seek to express in terms of the reconstruction errors . if is the matrix connecting the measurement and probability representations, then , and we have , which is computed effectively provided is sparse. by inserting this into eq. , we get the desired result.
one pertinent example for which this probability representation turns out to be very efficient is that of mubs. as heralded before, we consider a system of qubits; since the dimension is a power of a prime, sets of mubs exist and explicit construction procedures are at hand. we denote the corresponding projectors by , where the greek index labels one of the families of mubs and denotes one of the orthogonal states in this family. unbiasedness translates into . notice that, in agreement with the usual mub terminology, each set of eigenstates is normalized to unity, so that the povm becomes normalized to , rather than to unity. our previous convention is readily recovered by using as the total number of copies. as the total number of projections minus the number of constraints matches the minimal number of independent measurements, this mub tomography is indeed an ic minimal scheme. in addition, there are sets of vectors, each resolving the unity, so that holds for each observable. consequently, the fisher information matrix in the probability representation takes a block diagonal form, and in each block, and are given by eq. . measurement errors can now be easily estimated. indeed, expanding a generic state in the mub basis as , with , we find that , and the total error appears as a sum of independent contributions of individual mub eigensets, with . we can reinterpret these results in an alternative way. without lack of generality, we consider one diagonal block and drop the index . we introduce the operator by , with , , and . this can be represented by the rectangular -dimensional matrix . if is the standard scalar product in this -dimensional vector space, we may simply write , where . notice that the inferred variance now takes the form of born's rule and the effective inverse fisher matrix becomes the relevant object governing the errors. for example, the mean hilbert-schmidt distance from the estimate to the true state becomes ${\mathop{\mathrm{tr}}\nolimits}(\tilde{\mathrm{f}}^{-1})$, which gives a unitarily invariant error, as might be anticipated. another instance of interest is the error of the fidelity measurement . here the constant term can be dropped, because enters eq. ([eq:uf]) only through differences. these formulas, together with eq. ,
provide timely tools for analyzing the performance and optimality of different mub reconstruction schemes . for example , one could be interested in which observable can be most ( least ) accurately inferred from a mub tomography . this is tantamount to minimizing ( maximizing ) the error subject to a fixed norm of the observable . the principal axes of the error ellipsoid are defined by the eigenvectors and eigenvalues of the effective inverse fisher matrix . notice that there is always one zero eigenvalue per diagonal block , corresponding to a constant vector , which , in turn , corresponds to measuring the trace of the signal density matrix . this is consistent with the fact that the trace is constrained to unity and hence error free . thus , the least ( most ) favorable measurements from the point of view of a particular detection scheme are those given by the largest ( second smallest ) eigenvectors of the effective inverse fisher matrix . similarly , one may be interested in the distribution of the expected variances . this distribution , being the real shadow of the effective inverse fisher matrix , tells us how the errors are distributed in the set of all possible measurements and hence describes in detail the performance of a given reconstruction scheme . in fig . [ fig3 ] , the difference between the distributions of $( \delta z )^2$ for two sets of mubs , one of them the ( 3,0,6 ) set , is shown . for the sake of completeness , we briefly review the structure of the sets of mubs for three qubits , following essentially the approach in ref . . because states belonging to the same basis are usually taken to be orthonormal , to study the property of mutual unbiasedness it is possible to use either mutually unbiased bases or the operators which have the basis states as eigenvectors . we thus need operators to obtain the whole set of states . in the case of power - of - prime dimension , this set can be constructed as classes of commuting operators . in this way we get tables with nine rows of three mutually commuting ( tensor products of ) operators . we have suppressed the tensor multiplication sign in all the tables . by construction , the simultaneous eigenstates of the operators in each row give a complete basis , and the bases are mutually unbiased with respect to each other . the number on the left enumerates the bases , while the number on the right denotes how many subsystems the bases can be factorized into . as stated before , we label the different sets of mubs by , where denotes the number of separable bases ( every eigenvector of these bases is a tensor product of single - qubit states ) , the number of biseparable bases ( one qubit is factorized and the other two are in a maximally entangled state ) , and the number of nonseparable bases . in this way , the allowed structures are , and the corresponding tables are given in the following .
an efficient method for assessing the quality of quantum state tomography is developed . special attention is paid to the tomography of multipartite systems in terms of unbiased measurements . although the overall reconstruction errors of different sets of mutually unbiased bases are the same , differences appear when particular aspects of the measured system are contemplated . this point is illustrated by estimating the fidelities of genuinely tripartite entangled states .
in the history of using computer simulation as a research tool to study the physics of turbulence , the dominant approach has been to use spectral methods . direct numerical simulation ( dns )was introduced as a means to check the validity of turbulence theories _ directly _ from the equations of fluid dynamics .the idea that important features of turbulence are _ universal _ encouraged researchers to study the simplest of geometries , a periodic cubic volume of homogeneous , isotropic turbulent fluid . in this case , the simplicity and efficiency of a fourier - spectral method can not be matched .the largest direct numerical simulation of isotropic turbulence was conducted by ishihara et al . using grid points at a maximum taylor - microscale reynolds number .this record - breaking computation was done on _ earth simulator_a large vector machine with crossbar switch interconnect that can efficiently perform large - scale fft .the successive generations of supercomputers have not been so fft - friendly , and this record has not been surpassed even though the peak performance of supercomputers has increased nearly 50-fold since then .the record was matched for the first time in the u.s . by donzis et al . , running on 16 thousand cpu cores of the ranger supercomputer in texas , a linux - cluster supercomputer with 16-core nodes .future high - performance computing systems will have ever more nodes , and ever more cores per node , but will probably not be equipped with the bandwidth required by many popular algorithms to transfer the necessary data at the optimal rate .this situation is detrimental to parallel scalability .therefore , it is becoming increasingly important to consider alternative algorithms that may achieve better sustained performance on these extremely parallel machines of the future . in most standard methods of incompressible cfd, the greatest fraction of the calculation runtime is spent solving a poisson equation .equations of this type can be efficiently solved by means of an fft - based algorithm , a sparse linear solver , or a fast multipole method ( fmm ) . 
for the sake of our argument , we will not differentiate between fft - based poisson solvers and pseudo - spectral methods because they both rely on fft . the fast multipole method has not gained popularity because , depending on the implementation , it is substantially slower ( at least an order of magnitude slower than fft and multigrid solvers when compared using a small cpu cluster ) . we aim to show with our ongoing research that the relative performance of fmm improves as one scales to large numbers of processes using gpu acceleration . the highly scalable nature of the fmm algorithm , among other features , makes it a top contender in the algorithmic toolbox for exascale systems . one point of evidence for this argument is given by the gordon bell prize winner of 2010 , which achieved 0.7 petaflop / s with an fmm algorithm on 200k cores of the jaguar supercomputer at oak ridge national laboratory . in the previous year , fmm also figured prominently at the supercomputing conference , with a paper among the finalists for the best paper award and the gordon bell prize in the price / performance category going to work with hierarchical -body methods on gpu architecture . the fmm algorithm is well adapted to the architectural features of gpus , which is an important consideration given that gpus are likely to be a dominant player as we move towards exascale . the work presented in 2009 , winner in the price / performance category in great measure thanks to an ingenious and resourceful system design using gaming hardware , reported ( at the time of the conference ) 80 teraflop / s . that work progressed to an honorable mention in the 2010 list of awardees , with a performance of 104 teraflop / s . at the level of the present work , where we present a 1.08 petaflop / s ( single precision ) calculation of homogeneous isotropic turbulence , the fmm moves firmly into the arena of _ petascale _ gpu computing . the significance of this advance is that we are now in a range where the fmm algorithm shows its true capability . the excellent scalability of fmm using over 4000 gpus is an advantage over the dominant fft - based algorithms . showcasing the fmm in the simulation of homogeneous isotropic turbulence is especially fitting , given that a years - old record there remains unchallenged . we not only match the grid size of the world record , but also demonstrate that using the fmm as the underlying algorithm enables isotropic turbulence simulations that scale to thousands of gpus .
given the severe bottleneck imposed by the all - to - all communication pattern of the fft algorithm , this is not possible with pseudo - spectral methods on current hardware . the fmm is used here as the numerical engine in a vortex particle method , which is an approach to solve the navier - stokes equations using the vorticity formulation of momentum conservation and a particle discretization of vorticity . this is not a standard approach for the simulation of turbulence and the vortex method is not yet trusted for this application . for this reason , we have made efforts to document the validation of our vortex - method code and , in a separate publication , compared it with a trusted spectral - method code . we looked at various turbulence statistics , including higher - order velocity derivatives , and performed a parameter study for the relevant numerical parameters of the vortex method . that work provides evidence that the vortex method is an adequate tool for direct numerical simulation of fluid turbulence , while in the present work we focus on the performance aspects . for completeness , this section gives a brief overview of the numerical methods . the vortex method is a particle - based approach for fluid dynamics simulations . the particle discretization results in the continuum physics being solved as an -body problem . therefore , the hierarchical -body methods that extract the full potential of gpus can be used for the simulation of turbulence . unlike other particle - based solvers for fluid dynamics , e.g. , smoothed particle hydrodynamics , the vortex method is especially well suited for computing turbulent flows , because the vortex interactions seen in turbulent flows are precisely what it calculates with the vortex particles . in the vortex method , the navier - stokes equation is solved in the velocity - vorticity formulation , using a particle discretization of the vorticity field . the velocity is calculated using the following equation , representing the biot - savart law of fluid dynamics : here , is the strength of vortex particles , is the green's function for the laplace equation and is the cutoff function , with the distance between the interacting particles , and the standard deviation of the gaussian function . the navier - stokes system is solved by a simultaneous update of the particle positions to account for convection , of the particle strengths to account for vortex stretching , and of the particle width to account for diffusion . the equation used to calculate the stretching term is obtained by substituting the biot - savart equation ( [ eq : biotsavart ] ) for the velocity , and using the discrete form of the vorticity . finally , the diffusion update is calculated according to the corresponding update of the particle width ; we perform a radial basis function interpolation for reinitialized gaussian distributions to ensure the convergence of the diffusion calculation . equations ( [ eq : biotsavart ] ) and ( [ eq : stretching ] ) are -body interactions , and are evaluated using the fmm . we use a highly parallel fmm library for gpus developed in our group , called ` exafmm ` , which is available under an open - source license and is described further below .
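for reference , the -body character of the biot - savart sum can be made explicit with a direct o(n^2) evaluation . the sketch below is only an illustration , not the fmm - accelerated ` exafmm ` kernel : it uses free - space summation without periodic images , and the gaussian cutoff function is one common choice assumed here for definiteness rather than the exact expression of eq . ( [ eq : biotsavart ] ) .

```python
import numpy as np
from scipy.special import erf

def cutoff(rho):
    # assumed gaussian cutoff: fraction of a gaussian blob's circulation within radius rho * sigma
    return erf(rho / np.sqrt(2.0)) - np.sqrt(2.0 / np.pi) * rho * np.exp(-rho ** 2 / 2.0)

def biot_savart_direct(x, alpha, sigma):
    """direct o(n^2) evaluation: u_i = sum_j alpha_j cross (x_i - x_j) * g(r / sigma) / (4 pi r^3)."""
    u = np.zeros_like(x)
    for i in range(x.shape[0]):
        r = x[i] - x                                   # separations to all sources
        d = np.linalg.norm(r, axis=1)
        d[i] = 1.0                                     # placeholder, weight removed below
        w = cutoff(d / sigma) / (4.0 * np.pi * d ** 3)
        w[i] = 0.0                                     # exclude self-interaction
        u[i] = np.sum(np.cross(alpha, r) * w[:, None], axis=0)
    return u

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 2.0 * np.pi, size=(500, 3))       # particle positions
alpha = rng.normal(scale=1e-3, size=(500, 3))          # vortex strengths (illustrative values)
print(biot_savart_direct(x, alpha, sigma=0.2)[:3])
```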
for the purpose of comparing with a spectral method, we used a code for homogeneous isotropic turbulence developed and used at the center for turbulence research of stanford university .the code is called ` hit3d ` and is available freely for download .it uses a spectral galerkin method in primitive - variable formulation with pseudo - spectral methods to compute the convolution sums . for the fft , it relies on the ` fftw ` library and it provides parallel capability for cpu clusters using mpi .parallelization of the fftis accomplished by a domain - decomposition approach , illustrated by the schematic of figure [ fig : spectral_fft ] .domain decomposition is applied in two spatial directions , resulting in first `` slabs '' then `` pencil '' sub - domains that are assigned to each mpiprocess .the initial conditions for our runs were generated by ` hit3d ` .the vortex method used the same initial condition by first calculating the vorticity field in physical space , and then using radial basis function interpolation to obtain the vortex strengths .this is different from our other publication , where we studied the accuracy of turbulence simulations using the fmm - based vortex method on gpus , looking at high - order turbulence statistics . for that case ,the initial condition provided by ` hit3d ` , which has a fully developed energy spectrum , was not suitable for our validation exercise looking at the time evolution of the velocity derivative skewness and flatness .for this reason , we constructed initial conditions in fourier space as a solenoidal isotropic velocity field with random phases and a prescribed energy spectrum .this initial velocity field had a gaussian distribution and satisfied the incompressibility condition .it is common that algorithms with low complexity ( sparse linear algebra , fft ) have low arithmetic intensity , while algorithms with high arithmetic intensity ( dense linear algebra ) tend to have high complexity .the fmmpossesses a rare combination of complexity and an arithmetic intensity that is even higher than dgemm .although this may seem like a great combination , it also implies that there is a large constant in front of the scaling , which results in a larger time - to - solution compared to other or methods like multigrid methods and fft .however , as arithmetic operations become cheaper compared to data movement in terms of both cost and energy , the large asymptotic constant of the _ arithmetic _ complexity becomes less of a concern .the calculation of velocity by means of the biot - savart equation ( [ eq : biotsavart ] ) and the computation of the stretching term ( [ eq : stretching ] ) both result in an -body problem , which has a complexity of for particles , if solved directly .the fast multipole method reduces the complexity to by clustering source and target particles , and using series expansions that are valid far ( multipole expansion ) or near ( local expansion ) a point .the far / near relationships between points in the domain and the interactions between clusters are determined by means of a tree structure .the domain is hierarchically divided , and different sub - domains are associated with branches in the tree , a graphical representation of which is shown in figure [ fig : kernels ] .the algorithm proceeds as follows .first , the strengths of the particles ( e.g. 
, charges or masses ) are transformed to multipole expansions at the leaf cells of the tree ( known as the particle - to - multipole , or pmkernel ) .then , the multipole expansions of the smaller cells are translated to the center of larger cells in the tree hierarchy recursively and added ( multipole - to - multipole , or mmkernel ) .subsequently , the multipole expansions are transformed into local expansions for all pairs of well - separated cells ( multipole - to - local , or mlkernel ) , and then to local expansions at the center of smaller cells recursively ( local - to - local , or llkernel ) .finally , the local expansions at leaf cells are used to compute the effect of the far field on each target particle .since mloperations can only be performed for well - separated cells , the direct neighbors at the finest level of the tree interact directly via the original equation ( particle - to - particle , or ppkernel ) .the current implementation of the ` exafmm ` code uses expansions in spherical harmonics of the laplace green s function .details of the extension of laplace kernels to biot - savart and stretching kernels and of the implementation on gpus can be found in previous publications . when parallelizing hierarchical -body algorithms , the fact that the particle distribution is dynamic makes it impossible to precompute the decomposition and to balance the work - load or communication _ a priori_. warren and salmon developed a parallel algorithm for decomposing the domain into recursive subdomains using the method of orthogonal recursive bisection ( orb ) .the resulting process can be thought of as a binary tree , which splits the domain into subdomains with equal number of particles at every bisection .another popular technique for partitioning tree structures is to use morton ordering , where bits representing the particle coordinates are interleaved to form a unique key that maps to each cell in the tree . following the morton index monotonicallywill take the form of a space - filling curve in the shape of a z " . partitioning the list of morton indices equally assures that each partition will contain an equal number of cells , regardless of the particle distribution .for an adaptive tree , a nave implementation of the morton ordering could result in large communication costs if the z " is split in the wrong place , as illustrated in figure [ fig : partitioning ] .sundar et al . proposed a bottom - up coarsening strategy that ensures a clean partition at the coarse level while using morton ordering . on the other hand , an orbalways partitions the domain into a well balanced binary tree , since the number of particles is always equal on each side of the bisectiontherefore , orbis advantageous from the point of view of load - balancing .the main difference between using morton ordering and using an orbis the shape of the resulting subdomains and the corresponding tree structure that connects them , as shown in figure [ fig : partitioning ] .morton ordering is done on cubic octrees and the corresponding tree structure can become highly non - uniform depending on the particle distribution .conversely , an orbalways creates a balanced tree structure but the sub - domain shapes are rectangular and non - uniform . 
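the bit interleaving behind the morton key can be sketched in a few lines ; the snippet below is only an illustration ( not the ` exafmm ` implementation ) and assumes the particle coordinates have already been quantized to integer grid indices at the finest level .

```python
def morton_key(ix, iy, iz, bits=21):
    """interleave the bits of three integer coordinates into a single morton key;
    21 bits per axis is what fills a 64-bit integer."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

# cells sorted by this key trace the z-shaped space-filling curve
cells = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1)]
print(sorted(cells, key=lambda c: morton_key(*c)))
```

sorting cells by this key follows the z - shaped space - filling curve , and 21 bits per axis is exactly the 64-bit depth limit discussed below .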
a difficulty when applying orb within the framework of conventional fmm is that the construction of cell - cell interaction lists depends on the cells being cubic ( and not rectangular ) . the dual tree traversal described in the following subsection allows the use of rectangular cells , while enabling cell - cell interactions in the fmm with minimum complications . by using orb along with the dual tree traversal , we obtain a partitioning + traversal mechanism that allows perfect load balancing for highly non - uniform particle distributions , while retaining the complexity of the fmm . another advantage of this technique is that global indexing of cells is no longer necessary . having a global index becomes an issue when solving problems of large size . the maximum depth of octrees that a 64-bit integer can handle is 21 levels . for highly adaptive trees with a small number of particles per cell , this limit will be reached quite easily , resulting in integer overflow with a global index . circumventing this problem by using multiple integers for index storage will require much more work when sorting , so this is not a desirable solution . the dual tree traversal allows the entire fmm to be performed without the use of global indices , and is an effective solution to this problem . our current partitioning scheme is an extension of the orb , which allows multi - sections instead of bisections . bisecting the domain involves the calculation of the median of the particle distribution for a given direction , and doing this recursively in orthogonal directions is what constitutes the " orthogonal recursive bisection " . therefore , the extension from bisection to multi - section can be achieved by simply providing a mechanism to search for something other than the median . we developed a parallel version of the " -th element " selection algorithm . finding the " -th element " is much faster than any sorting algorithm , so our technique is much faster than any method that requires sorting .
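as the next paragraph explains , selecting an element other than the median is what allows a split between , say , 3 and 4 processes ; a minimal sketch of such a recursive multisection ( using numpy's partial selection in place of the parallel selection algorithm of the paper ) is :

```python
import numpy as np

def multisection(points, n_procs, axis=0):
    """recursively split points among n_procs partitions of nearly equal size,
    cycling through the splitting axis; returns a list of arrays."""
    if n_procs == 1:
        return [points]
    left_procs = n_procs // 2                       # e.g. 7 processes -> 3 and 4
    k = points.shape[0] * left_procs // n_procs     # index of the splitting element
    order = np.argpartition(points[:, axis], k)     # k-th element selection, no full sort
    left, right = points[order[:k]], points[order[k:]]
    next_axis = (axis + 1) % points.shape[1]
    return (multisection(left, left_procs, next_axis) +
            multisection(right, n_procs - left_procs, next_axis))

pts = np.random.default_rng(2).random((10000, 3))
parts = multisection(pts, 7)
print([len(p) for p in parts])   # seven partitions of nearly equal size
```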
always searching for the -th element will reduce to the original algorithm based on bisections , but searching forthe -th element will enable the domain to be split between 3 and 4 processes , for example .therefore , our recursive multisection allows efficient partitioning when the number of processes is not a power of two .the dual tree traversal enables cell - cell interactions in the barnes - hut treecode framework , thus turning it into a algorithm .we give a detailed explanation of the dual tree traversal in a previous publication that focused on hybrid treecode & fmmtechniques .it has several advantages compared to explicit construction of cell - cell interaction lists , the more common approach in the fmmframework .firstly , it is the simplest and most general way to perform the task of finding all combinations of cell - cell interactions .it is simple in the sense that it does not contain multiple loops for finding the parent s neighbor s children that are non - neighbors of the current cell , " as is the case with the explicit construction .it is general in the sense that the property `` well - separated '' can be defined flexibly , instead of the rigid definition of non - neighboring cells " used traditionally in the fmm .it is common in treecodes to define the well - separated property more precisely and adaptively by introducing the concept of multipole acceptance criterion ( mac ) .we give a graphical description of the different types of macused in our method in figure [ fig : mac ] .the simplest macis the one defined for barnes - hut treecodes , where is the size of the source cell and is the distance from the target particle to the center of mass of the source cell ; , called the opening angle , is the parameter that is used to flexibly control the definition of the well - separated property . in the fmmframework ,a macis defined between two cells ( rather than a cell and a particle ) , where is the size of the target cell and is the distance between the center of mass of the target cell and the center of mass of the source cell .a flexible definition of the well - separated property can not be implemented easily in the traditional fmmframework , where the cell - cell interaction lists are constructed explicitly .this is because one must first provide a list of candidates to apply the macto , and increasing the neighbor search domain to neighbors instead of neighbors and applying the macon them is not an efficient solution .the dual tree traversal is a natural solution to this problem , since the list of candidates for the cell - cell interaction is inherited directly from the parent .this inheritance of cell - cell interaction candidates " is lacking from conventional fmm , and can be provided by the dual tree traversal while simplifying the implementation at the same time .furthermore , since the dual tree traversal naturally allows interaction between cells at different levels of the tree , it automatically finds the pair of cells that appear in -lists for adaptive trees , but with much greater flexibility ( coming from the mac ) and no overhead of generating the lists ( since it does nt generate any ) .another advantage of the dual tree traversal is that it enables the use of non - cubic cells . as we have explained in section [ sse : orb ] , this allows using orbpartitioning in the fmmframework , which has superior load balancing properties compared to morton ordering and removes the dependence on global indexing . 
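to make the push / pop structure of the traversal concrete , the following toy sketch evaluates a plain 1 / r potential with a dual tree traversal over a binary tree built by recursive bisection , so the cells are rectangular rather than cubic . it is only an illustration , not the ` exafmm ` implementation : the far field is approximated by a bare monopole term standing in for the multipole - to - local translation , and the acceptance criterion ( r_a + r_b ) < theta * d is one possible choice of mac .

```python
import numpy as np

class Cell:
    """node of a binary tree built by recursive bisection (cells need not be cubic)."""
    def __init__(self, x, q, idx, leaf_size=16, axis=0):
        self.x, self.q, self.idx = x, q, idx
        self.com = np.average(x, axis=0, weights=q)              # centre of mass
        self.mono = q.sum()                                      # monopole "expansion"
        self.radius = np.linalg.norm(x - self.com, axis=1).max()
        self.children = []
        if len(q) > leaf_size:
            order = np.argsort(x[:, axis])
            h, nxt = len(q) // 2, (axis + 1) % x.shape[1]
            self.children = [Cell(x[order[:h]], q[order[:h]], idx[order[:h]], leaf_size, nxt),
                             Cell(x[order[h:]], q[order[h:]], idx[order[h:]], leaf_size, nxt)]

def p2p(a, b, phi):
    """particle-particle: direct 1/r summation from source cell b onto target cell a."""
    for k in range(len(a.idx)):
        r = np.linalg.norm(b.x - a.x[k], axis=1)
        near = r > 0.0                                           # skip the self-interaction
        phi[a.idx[k]] += np.sum(b.q[near] / r[near])

def m2p(a, b, phi):
    """monopole far-field approximation, standing in for the m2l + l2p path of the fmm."""
    phi[a.idx] += b.mono / np.linalg.norm(a.x - b.com, axis=1)

def dual_tree_traversal(a0, b0, phi, theta=0.5):
    stack = [(a0, b0)]
    while stack:
        a, b = stack.pop()
        d = np.linalg.norm(a.com - b.com)
        if a.radius + b.radius < theta * d:                      # multipole acceptance criterion
            m2p(a, b, phi)
        elif not a.children and not b.children:                  # two leaf cells: go direct
            p2p(a, b, phi)
        else:                                                    # split the larger cell
            split_a = bool(a.children) and (a.radius >= b.radius or not b.children)
            for c in (a.children if split_a else b.children):
                stack.append((c, b) if split_a else (a, c))

rng = np.random.default_rng(3)
x, q = rng.random((2000, 3)), rng.random(2000)
phi = np.zeros(2000)
root = Cell(x, q, np.arange(2000))
dual_tree_traversal(root, root, phi)

r0 = np.linalg.norm(x - x[0], axis=1); r0[0] = np.inf            # direct reference for particle 0
print(phi[0], np.sum(q / r0))                                    # close, up to the monopole error
```

with rectangular cells produced by recursive bisection , exactly this kind of traversal is what lets orb partitioning coexist with cell - cell interactions .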
as mentioned above ,global morton indices are a problem when using millions of cores and the tree depth exceeds 21 levels causing the morton index to overflow from the 64-bit integer . the combination of the dual tree traversal and orbis an elegant solution to this problem as well .we describe the dual tree traversal in algorithm[ al : evaluate ] , which calls an internal routine for the interaction of a pair of cells , given in algorithm [ al : interact ] .first , a pair of cells is pushed to a stack .it is most convenient to start from the root cell although there is no possibility that two root cells will interact . for every step in the while - loop, a pair of cells is popped from the stack and the larger cell is subdivided . then, algorithm [ al : interact ] is called to perform either particle - particle ( pp ) or multipole - local ( ml ) interactions .if the cells in the pair are too close and either of the cells has children , the pair is pushed to the stack and will be handled later .a more detailed explanation is given in a previous publication focusing on hybrid treecode & fmmtechniques .a = b = rootcell push pair(a , b ) into a stack pop stack to get a pair(a , b ) interact(a , b ) interact(a , b ) evaluate multipole - local ( m2l ) evaluate particle - particle ( p2p ) evaluate multipole - local ( m2l ) push pair(a , b ) into a stack the partitioning techniques discussed in section [ sse : orb ] will assign a local portion of the global tree to each process .however , the fmmevaluation requires information from all parts of the global tree , and the multipole expansions on remote processes must be communicated . unlike standard domain - decomposition methods , where only a thin halo needs to be communicated , the fmmrequires a global halo .fortunately , this global halo becomes exponentially coarser as the distance from the target cell increases , as shown in figure [ fig : periodic ] .therefore , the data required to be communicated from far partitions is very small ( but never zero ) .this results in a non - homogeneous ` alltoallv`-type communication of subsets of the global tree .once all the subsets of the global tree are communicated between the processes , one can locally reconstruct a tree structure that contains all the information required for the evaluation in the local process .this reconstructed tree is called the _ local essential tree _ ( let ) , and it is a ( significantly smaller ) subset of the global tree that contains all the information that is necessary to perform the evaluation .salmon and warren introduced two key techniques in this area : one for finding which cell data to send , and the other for communicating the data efficiently .the determination of which data to send is tricky , since each process can not see what the adaptive tree structure looks like on remote processes .the solution proposed by salmon and warren is to use a conservative estimate and communicate a larger portion of the tree than is exactly required .this conservative estimate is obtained by means of a special macdescribed in figure [ fig : mac ] as the letmac .there , the distance is defined as the distance between the center of mass of the source cell and the edge of the partition on the remote process .a formula to calculate can be given as the following expression , where we assume element - wise boolean operations over array elements ( giving zero for false , and 1 for true ) and element - wise multiplication and summation : here , and are the minimum and maximum coordinate 
values for all particles in the target partition , while is the center of mass of the source cell .this definition of the letmaccorresponds to assuming the case where the target cell is located at the edge of the remote partition .we also assume that the target cell is of the same size as the source cell ( ) , so becomes .these assumptions generally hold quite well and the required part of the tree is sent to the remote process most of the time . however , we must ensure that the fmmcode still works for extreme cases where necessary information fails to be sent . with this end in view, we have added a conditional statement in the interaction calculation ( algorithm[ al : interact ] ) as a further precaution for anomalous cases . this way , the traversal will perform the mltranslation with the smallest cell that is available. this cell may be too large to satisfy the fmmmacso there will be a small penalty on the accuracy , but since the occurrence of such a case is so rare , it does not affect the overall result .a schematic showing how the partitioning and communication techniques are used in the vortex method is given in figure [ fig : flow_chart ] .first , the domain is partitioned using the recursive multisection described in section [ sse : orb ] .then the fmmkernels for the local tree are evaluated while the letdata is being communicated in a separate openmp section .after the letdata is communicated , the fmmkernels are evaluated again for the remaining parts of the let .subsequently , the position , vortex strength , and core radius of the vortex particles are updated locally .this information is communicated in the next stage when the letis exchanged .in addition , the lagrangian vortex method needs to reinitialize the particle positions to maintain sufficient overlap of the smooth particles . for this procedure, we can reuse the same tree structure since the particles are reinitialized to the same position every time .therefore , the partitioning is performed only once at the beginning of this simulation .the fmmwas originally devised to solve potential problems with free - field boundary conditions .the method can be extended to handle periodic boundary conditions by placing periodic images around the original domain and using multipole expansions to approximate their influence , as illustrated in figure [ fig : periodic ] .when a sufficient number of periodic images are placed , the error caused by using a finite number of periodic images becomes smaller than the approximation error of the fmmitself .this approach to extend the fmmto periodic domains adds negligible computational overhead to the original fmm , for two reasons .first , distant domains are clustered into larger and larger cells , so the extra cost of considering a layer of periodic images is constant , while the number of images accounted for grows exponentially .the second reason is that only the sources need to be duplicated and the target points exist only in the original domain .since the work load for considering the periodicity is independent of the number of particles , it becomes negligible as the problem size increases .an earlier study showed that periodic boundary conditions add approximately 3% to the calculation time for a million particles .the fmmconsists of six different computational kernels , as illustrated on figure [ fig : kernels ] . in the ` exafmm`code , all of these kernels are evaluated on gpu devices using cuda . 
out of the six kernels , a great majority of the runtime is spent executing ppand ml .we use a batch evaluation for these two kernels , since there is no data dependency between them : they are evaluated in one batch after the tree traversal is completed .this batch evaluation can be broken into multiple calls to the gpu device , depending on its storage capacity and the data size . with this approach , we are able to handle problem sizes of up to 100 million particles on a single gpu , if the memory on the host machine is large enough . as an example of the usage of thread blocks in the gpu execution model , we show in figure [ fig : m2l_gpu ] an illustration of the mlkernel on gpus .each coefficient of the multipole / local expansion is mapped to a thread on the gpu and each target cell is mapped to a thread block , while each source cell is loaded to shared memory and evaluated sequentially .all other kernels are mapped to the threads and thread blocks in a similar manner .more details regarding the gpu implementation of fmmkernels can be found in chapter 9 of the _ gpu gems emerald edition _book , with accompanying open - source codes .in this section , we present results from large - scale simulations of homogeneous isotropic turbulence using the fmm - based vortex particle method , up to a problem size of computational points .this is still the largest mesh - size for which dnsof turbulence has been published , even though this scale of simulation was achieved 10 years ago . as we discussed in the introduction ,one of the reasons for this is the difficulty in scaling the fftalgorithm beyond a few thousand processes .the current simulations with a vortex method were checked for correctness by comparing the kinetic energy spectrum with that obtained using a trusted spectral - method code ( described in [ s : spectral ] ) .the focus here is not on the physics , however , but on demonstrating large - scale simulation of turbulence using the fmmas a numerical engine and reporting on performance aspects using gpu hardware .the performance is described via the results of weak scaling tests using between 1 and 4096 processes with ( 16.8 million ) particles per process .each process offloads to one gpu to speed - up the fmm , making it even more challenging to obtain good parallel efficiency ( it is obviously harder to scale faster code ) . 
despite this, a parallel efficiency of 74% was obtained for the fmm - based vortex method on weak scaling between 1 and 4096 processes , with the full application code .the largest calculation used billion particles , which as far as we know is the largest vortex - method calculation to date .previous noteworthy results by other authors reported calculations with up to 6 billion particles , which we surpass here by an order of magnitude .the calculations reported here were run on the tsubame-2.0system during spring and fall of 2011 , thanks to guest access provided by the grand challenge program of tsubame-2.0 .this system has 1408 nodes , each equipped with two six - core intel xeon x5670 ( formerly westmere - ep ) 2.93ghz processors , three nvidia m2050 gpus , 54 gb of ram ( 96 gb on 41 nodes ) , and 120 gb of local ssd storage ( 240 gb on 41 nodes ) .computing nodes are interconnected with the infiniband device grid director 4700 developed by voltaire inc ., with non - blocking and full bisectional bandwidth .each node has gbps bandwidth , and the bisection bandwidth of the system is over 200 tbps .the total number of m2050 gpus in the system is 4224 , and the peak performance of the entire system is 2.4 petaflop / s .we set up simulations of decaying , homogeneous isotropic turbulence using an initial taylor - scale reynolds number of .given that we had guest access to the full tsubame-2.0system for a very brief period of time ( only a few hours ) to produce the scalability results , we opted for a lower reynolds number than previous turbulence simulations of this size using spectral methods .there is limited experience using vortex methods for dnsof turbulence , but previous work has suggested that higher resolutions are needed than when using the spectral method at the same reynolds number .we thus decided to be conservative and ensure that we obtained usable results from these one - off runs .the calculation domain is a box of size ^ 3 $ ] with periodic boundary conditions in all directions , using periodic images in each dimension in the periodic fmm(see figure [ fig : periodic ] ) . to achieve the best accuracy from the fmmfor the present application , the order of multipole expansionswas set to ; this may be a conservative value , but given our limited allocation in the tsubame-2.0system , we did not have the luxury of tuning the simulations for this parameter .the fmmkernels are run in single precision on the gpu which may raise some concerns .we are able to achieve double - precision accuracy using single - precision computations in the fmmkernels by means of two techniques .the multipole - expansion coefficients have exponentially smaller magnitude with increasing order of expansion ; therefore , by adding them from higher- to lower - order terms , we can prevent small numbers being added to the large numbers .this preserves the significant digits in the final sum .the second technique consists of normalizing the expansion coefficients to reduce the dynamic range of the variables , allowing the use of single - precision variables to get a double - precision result .thus , we are able to achieve sufficient accuracy to reproduce the turbulence statistics ( as shown below ) and obtain the same results that would be obtained using double precision ( given that the fmmerror is larger than 6 significant digits anyway ) . 
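the first of the two techniques , adding the expansion coefficients from the smallest ( highest order ) to the largest ( lowest order ) terms , is the classic summation - order effect . the toy example below uses an arbitrary slowly decaying series rather than actual multipole coefficients ; it shows how a strictly sequential float32 accumulation loses the collective contribution of the small terms when the large ones are added first .

```python
import numpy as np

# slowly decaying terms, largest first (stand-in for coefficients that shrink with order)
terms = (1.0 / np.arange(1, 100_001, dtype=np.float64) ** 2).astype(np.float32)

def accumulate(seq):
    acc = np.float32(0.0)
    for t in seq:                       # strictly sequential float32 accumulation
        acc = acc + t
    return acc

large_first = accumulate(terms)         # largest terms added first: small terms get absorbed
small_first = accumulate(terms[::-1])   # smallest terms added first: their sum survives
reference = terms.astype(np.float64).sum()
print(float(large_first), float(small_first), reference)
```

summing from the smallest terms typically lands much closer to the float64 reference , which is the effect exploited in the single - precision kernels .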
the isosurface of the second invariant of the velocity gradient tensor is shown in figure [ fig : isosurface ] . this is a snapshot at the early stages of the simulation and we do not observe any large coherent structures . in order to take a closer look at the quantitative aspects of the vortex simulation , in figure [ fig : spectrum ] we compare the kinetic energy spectrum with that of the spectral method , where is the eddy turnover time . the energy spectrum of the vortex method is obtained by calculating the velocity field on a uniform lattice , which is induced by the lagrangian vortex particles . the capability of ` exafmm ` to calculate for different sources and targets enabled such operations . we have excellent quantitative agreement between the vortex method and spectral method and we conclude that our fmm - based particle method is capable of simulating turbulence of this scale correctly . we turn our attention to the performance of these calculations in the next section . ( figure caption : kinetic energy spectra obtained with the vortex method and spectral method ; notice that the vertical axis goes down to , which is many times smaller than in similar plots presented by other authors . ) we had two very short windows of time in which we were able to run weak scaling tests , one with half the system and the other with almost the full tsubame-2.0 system . the larger scaling tests used particles per process , on 1 to 4096 processes and executing three mpi processes per node , each process assigned to a gpu card within the node . the results of the weak scaling test are shown in figure [ fig : weak_scaling ] in the form of total runtime of the fmm , and a timing breakdown for different phases in the computation . the label ` near - field evaluation ' corresponds to the pp kernel evaluation illustrated in figure [ fig : kernels ] , and the label ` far - field evaluation ' corresponds to the sum of all the other kernel evaluations , i.e. , the far field . the ` mpi communication ' is overlapped with the fmm evaluation ( see figure [ fig : flow_chart ] ) , so the plot shows only the amount of time by which communication exceeds the local portion of the ` near - field ' and ` far - field ' evaluation . in this way , the total height of each bar correctly represents the total wall - clock time of each calculation . note that particle updates in the vortex - method calculation take less than 0.01% in all runs and thus were invisible in the bar plots , so we have left this computing stage out of the labels . the ` gpu buffering ' label corresponds to the time it takes to form a queue of tasks and the corresponding data buffers to be transferred to the gpu , which is a significant amount of time . we have found this buffering to be necessary in order to achieve high efficiency in the pp and ml evaluation of the fmm on gpus . moreover , this part of the computation scales perfectly , and does not affect the scalability of the fmm . the parts that do affect the scalability are the tree construction and mpi communication .
actually , the tree construction also involves mpicommunications for the partitioning , so the parallel efficiency in weak scaling is fully determined by mpicommunications .figure [ fig : weak_scaling ] shows that the current fmmis able to almost completely hide the communication time up to 512 gpus .it may be worth noting that the -d - hypercube - type communication of the let turned out to be slower on tsubame-2.0than a simple call to ` mpi_alltoallv ` for sending the entire letat once .this is a consequence of the network topology of tsubame-2.0(with a dual - qdr infiniband link to each node and non - blocking full - bisection fat - tree interconnect ) and also of the relatively small number of mpiprocesses .the results shown in figure [ fig : weak_scaling ] are those obtained with ` mpi_alltoallv ` and not the hypercube - type communication .we performed a corresponding weak scalability test for the spectral method , increasing the problem size from on one process to on 4096 processes .we used three mpiprocesses per node to match the condition of the vortex method runs , but there is no gpu acceleration in this case . matchingthe number of mpiprocesses per node should give both methods an equal advantage / handicap for the bandwidth per process .note that using gpus for the fftwithin the spectral - method code is unlikely to provide any benefits , because performance improvements of ` cufft ` over ` fftw ` would be canceled out by data transfer between host and device ] and inter - node communications in parallel runs .figure [ fig : compare_scaling ] shows the parallel efficiency obtained with the two methods , under these conditions .the parallel efficiency of the fmm - based vortex method is 74% when going from one to 4096 gpus , while the parallel efficiency of the spectral method is 14% when going from one to 4096 cpus .the bottleneck of the spectral method is the all - to - all communication needed for transposing the slabs into pencils as shown in figure [ fig : spectral_fft ] .even though this may not be the best implementation of a parallel fft , the difference in the scalability between the spectral method and fmm - based vortex method is considerable .the actual calculation time is in the same order of magnitude for both methods at 4096 processes : it was 108 seconds per time step for the vortex method and 154 seconds per time step for the spectral method .therefore , the superior scalability of the fmmhas merely closed the gap with fft , being barely faster at this scale .however , we anticipate that this trend will affect the choice of algorithm in the future , as we move to architectures with higher degree of parallelism .the scaling test with half the tsubame-2.0system was done several months before and with a different revision of the code , with many changes having been incorporated since then .we include the results here for completeness ; see figure [ fig : compare_scaling_old ] . 
in this case, the number of particles per process is much smaller , at 4 million ( compared to 16.8 million particles per process in the larger test ) and we scale from 4 to 2048 processes .the parallel efficiency of the fmm - based vortex method is 72% when going from 4 to 2048 gpus , while the parallel efficiency of the spectral method was 14% .when we compare the parallel efficiency with the 1 to 4096 gpucase , we see that the 4 to 2048 case is scaling relatively poorly .this is due to the number of particles per process being roughly 1/4 in the 4 to 2048 case , and also the improvement in the ` exafmm ` code during the half year gap between the two runs .the run time per time step in this case was 27 seconds for the vortex method and 20 seconds for the spectral method to compute a domain with points .note that if we read from the plot in figure 2 of donzis et al . , their spectral - method calculation on a grid using 2048 cores of the ranger supercomputer takes them about 20 seconds per time step .since this is the same timing we obtained with the ` hit3d ` code , we are satisfied that this code provides a credible reference point for the scalability of spectral dns codes .the calculation in the fmmis mostly dominated by the floating point operations in the particle - particle interactions , while all other parts are a minor contribution in terms of flop / s ( although not negligible in terms of runtime ) .we will thus consider in the estimate of sustained performance only the operations executed by the ppkernels .two separate equations are being calculated for the particle - particle interactions : the biot - savart equation ( [ eq : biotsavart ] ) and the stretching equation ( [ eq : stretching ] ) . the number of floating point operations required by these two kernels is summarized in table [ tab : flops ] ..floating point operations per ppinteraction . [ cols="^,^,^",options="header " , ] the approximate number of flop / s for one step of the vortex method calculation of isotropic turbulence is obtained by the following equation . thus , the current fmm - based vortex method achieved a sustained performance of 1.08 petaflop / s ( single precision ) on 4096 gpus of tsubame-2.0 .the authors of the ` exafmm`code have a consistent policy of making science codes available openly , in the interest of reproducibility .the entire code that was used to obtain the present results is available from https://bitbucket.org/exafmm/exafmm .the revision number used for the results presented in this paper is 191 for the large - scale tests up to 4096 gpus .documentation and links to other publications are found in the project homepage at http://exafmm.org/. 
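the sustained - performance figure quoted above follows from an operation count of the dominant pp interactions . since the per - interaction flop counts of table [ tab : flops ] are specific to the implementation , the sketch below uses placeholder values for those quantities ; only the particle count ( 69 billion ) and the time per step ( 108 seconds ) come from the text , and the printed estimate is therefore illustrative rather than a reproduction of the reported number .

```python
def sustained_flops(n_particles, interactions_per_particle, flops_per_interaction, seconds_per_step):
    """approximate sustained flop/s from the direct (pp) interactions of one time step:
    total pair interactions times flops per pair, divided by wall-clock time."""
    pairs = n_particles * interactions_per_particle
    return pairs * flops_per_interaction / seconds_per_step

# interactions_per_particle and flops_per_interaction are hypothetical placeholders
estimate = sustained_flops(n_particles=69e9,
                           interactions_per_particle=20_000,
                           flops_per_interaction=75,     # biot-savart + stretching per pair (assumed)
                           seconds_per_step=108.0)
print(f"{estimate:.2e} flop/s")
```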
figure [ fig : compare_scaling ] , its plotting script and datasets are available online and usage is licensed under cc - by-3.0 .we acknowledge the use of the ` hit3d ` pseudo - spectral dns code for isotropic turbulence , and appreciate greatly their authors for their open - source policy ; the code is available via google code at http://code.google.com / p / hit3d/.this work represents several milestones .although the fmmalgorithm has been taken to petascale before ( notably , with the 2010 gordon bell prize winner ) , the present work represents the first time that this is done on gpu architecture .also , to our knowledge , the present work is the largest direct numerical simulation with vortex methods to date , with 69 billion particles used in the cubic volume ; this is an order of magnitude larger than the previously reported record .yet another significant event is reaching a range where the highly scalable fmmstarts showing advantage over fft - based algorithms . with a 1.08 petaflop / s ( single precision )calculation of isotropic turbulence in a box , using 4096 gpus , we are within reach of a turning point .the combination of application , algorithm , and hardware used are also notable. the real challenge in exascale computing will be the optimization of data movement .when we compare the data movement of fmmagainst other fast algorithms like multigrid and fft , we see that the fmmhas a potential advantage .compared to fft , both multigrid and fmmhave an advantage in the asymptotic complexity of the global communication .the hierarchical nature of multigrid and fmmresults in communication complexity where is the number of processes . on the other hand , fftrequires two global - transpose communications between processes , and has communication complexity of .when is in the order of millions , it seems unrealistic to expect that an affordable network can compensate for this large gap in the communication complexity . although it is not the focus of the present article , we would like to briefly note that an advantage of fmmover multigrid is obtained from differences in the synchronization patterns .for example , increasing the desired accuracy in iterative solvers using multigrid will result in more iterations , hence more global synchronizations .conversely , increasing the accuracy in fmminvolves increasing the number of multipole expansions , which results in even higher arithmetic intensity in the inner kernels while the number of global synchronizations remains the same . as the amount of concurrency increases , bulk - synchronous execution/ communication models are reaching their limit .thus , fmmprovides a new possibility to reduce the amount of communication and synchronization in these inherently global " problems .finally , we would like to point out that the fmmcan be used to solve the poisson equation directly , or as a preconditioner for an iterative solver .therefore , we are not concerned at this point about the fact that vortex methods may still be comparatively inefficient for the simulation of fluid turbulence , where spectral methods will continue to dominate in the foreseeable future .this does not detract from the conclusions about the efficiency of the fmmitself , which is the object of our study . 
in future work, we would like to demonstrate the efficiency of fmmby using it as a poisson solver or preconditioner in the framework of more standard finite difference / volume / element methods .there , the comparison against fftand multigrid methods should be of interest to a broader spectrum of the cfdcommunity .computing time in the tsubame-2.0system was made possible by the grand challenge program of tsubame-2.0 .the current work was partially supported by the core research for the evolution science and technology ( crest ) of the japan science and technology corporation ( jst ) .lab acknowledges funding from nsf grant oci-0946441 , onr grant # n00014 - 11 - 1 - 0356 and nsf career award oci-1149784 .lab is also grateful for the support from nvidiacorp . via an academic partnership award ( aug .2011 ) .s. g. chumakov , j. larsson , c. schmitt , and h. pitsch . lag - modeling approach for dissipation terms in large eddy simulation .annual research briefs , center for turbulence research , 2009 .http://www.stanford.edu/group/ctr/resbriefs/arb09.html .t. hamada , t. narumi , r. yokota , k. yasuoka , k. nitadori , and m. taiji .42 tflops hierarchical n - body simulations on gpus with applications in both astrophysics and turbulence . in _sc 09 : proceedings of the conference on high performance computing networking , storage and analysis _ , pages 112 .acm , 2009 .t. hamada and k. nitadori .190 tflops astrophysical -body simulation on a cluster of gpus . in_ high performance computing , networking , storage and analysis ( sc ) , 2010 international conference for _ , pages 19 , nov .t. ishihara , y. kaneda , m. yokokawa , k. itakura , and a. uno .small - scale statistics in high - resolution direct numerical simulation of turbulence : reynolds number dependence of one - point velocity gradient statistics . , 592:335366 , 2007 .christophe g. lambert , thomas a. darden , and john a. board jr . a multipole - based algorithm for efficient calculation of forces and potentials in macroscopic periodic assemblies of particles ., 126(2):274285 , 1996 .i. lashuk , a. chandramowlishwaran , h. langston , t. nguyen , r. sampath , a. shringarpure , r. vuduc , l. ying , d. zorin , and g. biros . a massively parallel adaptive fast - multipole method on heterogeneous architectures . in _ proceedings of the conference on high performance computing networking , storage and analysis , sc 09 _ ,pages 112 , portland , oregon , november 2009 .a. rahimian , i. lashuk , s. veerapaneni , a. chandramowlishwaran , d. malhotra , l. moon , r. sampath , a. shringarpure , j. vetter , r. vuduc , d. zorin , and g. biros .petascale direct numerical simulation of blood flow on 200k cores and heterogeneous architectures . in _ proceedings of the 2010 acm / ieee international conference for high performance computing , networking , storage and analysis _ , sc 10 , pages 111 .ieee computer society , 2010 .m. s. warren and j. k. salmon .astrophysical -body simulations using hierarchical tree data structures . in _ proceedings of the 1992 acm / ieee conference on supercomputing _ , pages 570576 , los alamitos , ca , usa , 1992 .ieee computer society press .m. yokokawa , k. itakura , a. uno , t. ishihara , and y. kaneda .16.4-tflops direct numerical simulation of turbulence by a fourier spectral method on the earth simulator . in _ supercomputing ,acm / ieee 2002 conference _ , pages 5050 .ieee , 2002 .rio yokota and l. a. barba .treecode and fast multipole method for -body simulation with cuda . 
in wen - mei hwu ,editor , _ gpu computing gems emerald edition _ , chapter 9 , pages 113132 .elsevier/ morgan kaufman , 2011 .preprint on http://arxiv.org/abs/1010.1482[arxiv:1010.1482 ] .rio yokota and l. a. barba .-based vortex method for simulation of isotropic turbulence on gpus , compared with a spectral method . to appear in _ comput . & fluids _ , preprint on http://arxiv.org/abs/1110.2921[arxiv:1110.2921 ] , 2012 .rio yokota , jaydeep p. bardhan , matthew g. knepley , l. a. barba , and tsuyoshi hamada .biomolecular electrostatics using a fast multipole bem on up to 512 gpus and a billion unknowns . , 182(6):12711283 , 2011 .
this paper reports large - scale direct numerical simulations of homogeneous - isotropic fluid turbulence , achieving sustained performance of 1.08 petaflop / s on gpu hardware using single precision . the simulations use a vortex particle method to solve the navier - stokes equations , with a highly parallel fast multipole method ( fmm ) as numerical engine , and match the current record in mesh size for this application , a cube of computational points solved with a spectral method . the standard numerical approach used in this field is the pseudo - spectral method , relying on the fftalgorithm as numerical engine . the particle - based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo - spectral method , using a trusted code . in terms of parallel performance , weak scaling results show the fmm - based vortex method achieving 74% parallel efficiency on 4096 processes ( one gpu per mpiprocess , 3 gpus per node of the tsubame-2.0system ) . the fft - based spectral method is able to achieve just 14% parallel efficiency on the same number of mpiprocesses ( using only cpu cores ) , due to the all - to - all communication pattern of the fftalgorithm . the calculation time for one time step was 108 seconds for the vortex method and 154 seconds for the spectral method , under these conditions . computing with 69 billion particles , this work exceeds by an order of magnitude the largest vortex - method calculations to date . isotropic turbulence , fast multipole method , integral equations , gpu
in the statistical analysis of spatial processes , modelling and estimating the spatial dependence structure is fundamental . it is used by prediction techniques like kriging or conditional simulations .its description is commonly carried out using statistical tools such as the variogram or covariogram calculated on the entire domain of interest and considered under the stationarity assumption , for reasons of parsimony or mathematical convenience .the complexity of the spatial component of the analyzed process is therefore limited .the assumption that the spatial dependence structure is translation invariant over the whole domain of interest may be appropriate , when the latter is small in size , when there is not enough data to justify the use of a complex model , or simply because there is no other reasonable alternative . although justified and leading to a reasonable analysis , this assumption is often inappropriate and unrealistic given certain spatial data collected in practice .non - stationarity can occur due to many factors , including specific landscape and topographic features of the region of interest or other localized effects .these local influences can be observed computing local variograms , whose characteristics may vary across the domain of observations . for this type of non - stationarity structures ,making spatial predictions using conventional stationary methods is not appropriate . indeed, applying stationary approaches in such cases would be liable to produce less accurate predictions , including an incorrect assessment of the estimation error .several approaches have been proposed to deal with non - stationarity through second order moments ( see , for a review ) .one of the most popular methods of introducing non - stationarity is the space deformation of and other .it consists in starting with a stationary random function , and then transforming the distance in some smooth way to construct a non - stationary random function .maximum likelihood and bayesian variants of this approach have been developed by , , , . , , established some theoretical properties about uniqueness and richness of this class of non - stationary models .some adaptations have been proposed recently by , , , .a fundamental limitation of all estimation methodologies presented so far is the fact that implementation requires multiple independent realizations of the random function in order to obtain an estimated sample covariance or variogram matrix .the idea of having several independent realizations of the natural field is unrealistic because there are not multiple parallel physical worlds . in practice ,the approach is feasible when a time series is collected at each location as this gives the necessary , albeit dependent , replications . in general , we would prefer to incorporate a temporal aspect in the modelling rather than attempting repairs ( e.g. , differencing and detrending ) to achieve approximatively independent realizations .however , many geostatistical applications involved only one measurement at each site or equivalently , only one realization of a random function . 
, are the first authors to address the estimation of space deformation model in the case of a single realization of a random function , obtained as the transformation of a gaussian and isotropic random function .they exhibit a methodology based on quasi - conformal mappings and approximate likelihood estimation of the local parameters that characterize the deformation derived from partitioning densely observed data into subregions and assuming independence of the random function across partitions .however , this approach has not been applied to real datasets and requires very dense data . in this work ,we follow the pioneering work of , while freeing the strong assumption of replication and do not make any distributional assumptions . in addition , we take into account other shortcomings associated with this approach that are : the required property of the deformation to be bijective and the computational challenge to fit the model for moderate and large datasets .to do so , we propose an estimation procedure based on the inclusion of spatial constraints and the use of a set of representative points referred to as anchor points rather than all data points to find the deformation .the proposed method provides a non - parametric estimation of the deformation through a step by step approach : first a dissimilarity matrix is built by combining a non - parametric kernel variogram estimator and euclidean distance between two points in the geographical space , second the estimation of the deformation at anchor points is considered by weighted non - metric multi - dimensional scaling and finally the deformation is interpolated over the whole domain by thin - plate spline radial basis functions .the proposed method also provides a rational and automatic estimation of the spatial dependence structure in the deformed space .we illustrate our estimation procedure on two simulated examples and apply it to soil dataset .this paper is organized as follows : section [ sec2 ] describes the space deformation model through its basic ingredients and main properties . in sections [ sec3 ] and [ sec4 ], we address the problem of estimating the non - stationary spatial dependence structure and how spatial predictions and conditional simulations should proceed .two simulated data in section [ sec5 ] and a real dataset in section [ sec6 ] are used to illustrate the performance of the new approach and its potential . finally , section [ sec7 ] outlines concluding remarks and further work .the main idea behind the space deformation approach is the euclidean embedding of a non - stationary random function into a new space of equal or greater dimension , where it can be easily described and modeled , that is to say where both stationarity and isotropy hold .let be a constant mean random function defined on a fixed continuous domain of interest of the euclidean space and reflecting the underlying studied phenomenon .we consider that is governed by the following model : which can be written equivalently : where represents a stationary and isotropic random function and represents a deterministic non - linear smooth bijective function of the -space onto the -space . in principle , we can allow , although most frequently . from now on , without loss of generality , we assume . the model specification in leads to model the variogram of in the form : where represents the euclidean norm in and the stationary and isotropic variogram of which depends only on the euclidean distance between locations in -space . 
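the displayed equations of the deformation model appear to have been lost in extraction ; the following is a hedged reconstruction of their standard form , with the symbols chosen here for illustration rather than taken from the paper :

```latex
% hedged reconstruction of the space deformation model described in the text;
% Z is the observed field on the geographical domain G, Y a stationary isotropic
% field on the deformed domain D, f the smooth bijective deformation and
% gamma_0 the stationary isotropic variogram of Y. the notation is assumed.
\begin{align}
  Z(\mathbf{x}) &= Y\big(f(\mathbf{x})\big), \qquad \mathbf{x} \in G \subseteq \mathbb{R}^p,\\
  Y(\mathbf{u}) &= Z\big(f^{-1}(\mathbf{u})\big), \qquad \mathbf{u} \in D = f(G),\\
  \gamma_Z(\mathbf{x},\mathbf{y}) &= \gamma_0\big(\lVert f(\mathbf{x}) - f(\mathbf{y}) \rVert\big).
\end{align}
```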
the second order structure model obtained in leads to a valid variogram , i.e. conditionally non - positive definite .its validity is assessed by the following proposition inspired by and straightforward to establish ( see [ appendix1 ] ) .[ prop1 ] if is a valid variogram , then is a valid variogram on , for any function .it is then even possible to rely on a valid variogram in a different space , through a function linking the two spaces .consequently , instead of working on the support of the non - stationary random function , the variogram of is defined with respect to the latent space where stationarity and isotropy are assumed .any problem involving the observed random function is transposed by the deformation to the stationary and isotropic random function .standard geostatistical techniques , such as kriging and conditional simulations can be apply directly to the latter . the results obtained on will then transpose to by the inverse deformation . as described by ,when is non - decreasing , the spatial deformation operates as follows : the deformation effectively stretches the -space in regions of relatively lower spatial correlation , while contracting it in regions of relatively higher spatial correlation , so that a stationary and isotropic variogram can model the spatial dependence structure as a function of the distance in the -space representation .it is also important to note that the spatial deformation model defined in is identifiable up to a scaling for and up to a homothetic euclidean motion for .all isometries and homotheties are observationally equivalent .this result is based on the following proposition equivalent to that established by for deformation based non - stationary spatial correlation model . the proof is given in [ appendix1 ] .[ prop2 ] if is a solution to ( [ eq3 ] ) , then for any regular square matrix and any vector , with and is a solution as well .let be a vector of observations from a unique realization of the random function , associated to known locations . 
in the second order structure model defined in ( [ eq3 ] ) , the functions and are unknown and need therefore to be estimated .the estimation workflow involves four main steps .first , we define a non - stationary variogram non - parametric kernel estimator which serves as a dissimilarity measure between two points in the geographical space .second , we construct the deformed space using the procedure of weighted non - metric multi - dimensional scaling applied to a dissimilarity matrix built from the non - stationary variogram estimator .third , we estimate by interpolating between a configuration of points in the -space and the estimations of their deformations in the -space using the class of thin - plate spline radial basis functions .fourth , the estimation of is carried out by calculating the experimental variogram on transformed data in the deformed space and using a mixture of basic variogram models , providing wide flexibility to adapt to the observed structure .we begin with an estimate of the non - stationary variogram at arbitrary locations using a kernel weighted local average of squared increments .our proposal is to use the variogram cloud which is an unbiased estimator of as the input data for the non - parametric kernel estimator of the non - stationary variogram .an intuitive empirical estimator of the non - stationary variogram at any two locations is given by the non - parametric kernel estimator defined as follows : where is a kernel defined as : , with a non - negative , symmetric kernel on with bandwidth .the expression defines the spatial dissimilarity between two arbitrary points in the geographical space .the purpose of the kernel function is to weight observations with respect to a reference point so that nearby observations get more weight while remote sites receive less .the denominator of is a standardization factor that ensures is unbiased when the expectation of the squared difference between the observations is spatially constant .if several realizations are available , we can simply take the average of over the different realizations .regarding the kernel defined in , his choice is less important than the choice of its bandwidth parameter , a reasonable choice of will generally lead to reasonable results .we use the quadratic kernel ( epanechnikov kernel ) which is an isotropic kernel with compact support , showing optimality properties in density estimation .indeed , the computational cost of is greatly reduced by using a compactly supported kernel , as it reduces the number of terms to compute .the selection of the bandwidth parameter is adresssed in section [ ssec4 ] .the transformation of the geographical space into the deformed space can be seen as the reallocation of a set of points in a euclidean space ( in a given dimensionality ) .the approach of non - metric multi - dimensional scaling ( nmds ) provides a solution to this problem .the aim is to find a representation of points in a given dimension space , such that the ordering of the euclidean distances between points matches exactly the ordering of the dissimilarities .till now , the fitting of a space deformation model based on nmds procedure is a challenging numerical problem where the dimensionality is roughly proportional to the number of observations . a sample of size requires a dissimilarity matrix to be stored and a matrix of coordinates to be inferred .the search of the deformed space based on this dissimilarity matrix requires considerable computing time even when is moderately large . 
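since the displayed estimator is missing from the extracted text , the following is a minimal numpy sketch of a kernel - weighted non - stationary variogram estimator of the kind described above ; the exact normalisation and all names are assumptions , not the paper 's own code :

```python
import numpy as np

def epanechnikov(u):
    """quadratic ( epanechnikov ) kernel with compact support on [0, 1]."""
    u = np.asarray(u, dtype=float)
    return np.where(u <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def ns_variogram_hat(x, y, sites, z, eps):
    """kernel - weighted estimate of the non - stationary variogram gamma(x, y).

    sites : (n, d) array of data locations, z : (n,) observations,
    eps   : kernel bandwidth. the estimator is a normalised kernel - weighted
    average of squared increments, an assumed form consistent with the text."""
    kx = epanechnikov(np.linalg.norm(sites - x, axis=1) / eps)
    ky = epanechnikov(np.linalg.norm(sites - y, axis=1) / eps)
    w = np.outer(kx, ky)                      # pair weights k(x, s_i) * k(y, s_j)
    sq = (z[:, None] - z[None, :]) ** 2       # squared increments (z_i - z_j)^2
    denom = 2.0 * w.sum()
    return np.nan if denom == 0.0 else float((w * sq).sum() / denom)
```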
to reduce the computational burden ,we can avoid mapping directly all data locations .indeed , spatial dissimilarities calculated between pairs of close points can be unnecessary and redundant because of their high correlation .the idea consists in obtaining the deformed space using only a representative set of points referred to as anchor points over the geographical space .these will be rather mapped with nmds .interpolating the anchor points in the -space and the estimations of their deformations in the -space , produces an estimate of the deformation function .then , the location of all sampling data points in the deformed space is obtained through .representative points may be chosen as a sparse grid over the domain or a subset of data points .they allow to reduce the computation time and to give reliable results for the nmds task .the sampling density which may be vary over the domain , can be accounted by non - uniform distribution of the anchor points .we consider a set of -dimensional anchor points }^t ] , with .the specification of the dissimilarity matrix is required for the nmds procedure .the latter is usually applied in the absence of the euclidean coordinates of the points which produce the dissimilarities . however , in our context , the points of the original space are already identifiable by their euclidean coordinates , hence the necessity to ensure the spatial consistency of the transformation .moreover , reflects the local spatial dissimilarity , that is to say , for close pairs .indeed , this estimator can be quite accurate for short and moderate distances , as in the stationary framework , but very imprecise , although well defined , for large distances .it is therefore necessary to penalize the importance given to large distances compared to short distances in the search of the deformation .therefore , given the distance matrix associated with support points , we build a composite dissimilarity matrix ] is a mixing parameter .the composite dissimilarity matrix is a linear combination of a scaled dissimilarity matrix and a distance matrix .the idea is to build a hybrid spatial dissimilarity measure which takes into account both the dissimilarities observed in the regionalized variable and the spatial proximity to ensure that the deformation function does not fold , i.e. is bijective .thus , the parameter controls the non - folding .the setting of is adressed in section [ ssec4 ] . given the symmetric matrix ] ,the objective is to represent as a configuration of -dimensional points }^t ] for each ; 2 . 
simulate a gaussian random function with constant mean and isotropic stationary variogram in the domain and at the transformed conditioning points , and keep the generated values ; 3 . calculate the corresponding kriged estimates . the first simulated example considers a random function defined on a segment , based on a second order structure model in which the variogram of the observed field is an isotropic stationary exponential variogram model , with a given range parameter , evaluated at the deformed distance ; our goal is to estimate the deformation . the deformation leaves the domain globally unchanged but shrinks the left part of the segment whereas the right side is stretched . thus , points of the segment which are located near the origin are highly correlated with their neighbors , while points of the segment which are located near the extremity are only slightly correlated with their neighbors . a set of regularly spaced anchor points is used to build the dissimilarity matrix . figures [ fig1a ] and [ fig1b ] show a realization of the non - stationary random function in the geographical space and in the estimated deformed space . the non - stationary process is transformed into a stationary one simply by stretching and compressing the coordinate axis , as shown in figure [ fig1b ] . the optimal values of the hyper - parameters were selected as described in section [ ssec4 ] . from figures [ fig1c ] and [ fig1d ] we see that the estimated deformation is very close to the true deformation . this simple example illustrates the ability of the proposed approach to find the right deformation . the second example considers a standardized gaussian random function over a square domain , with a variogram built from a radial basis function with a given center point and an isotropic stationary cubic variogram model with a given range . we simulate the random function at the nodes of a regular grid . points of the domain which are located near the center are highly correlated with their neighbors , while points which are far from the center are only slightly correlated with their neighbors ( figure [ fig2a ] ) . points are sampled and split into a training sample and a validation sample . we use a regular grid as the configuration of anchor points ( red crosses in figure [ fig2b ] ) . figure [ fig2d ] shows the estimated deformed space , which looks similar to the true deformed space ( figure [ fig2c ] ) . our estimation method effectively stretches the domain in regions of relatively lower spatial correlation ( at the extremities ) while contracting it in regions of relatively higher spatial correlation ( at the center ) , so that an isotropic stationary variogram can model the spatial dependence structure in the deformed space . figure [ fig3a ] gives the cross - validation variogram error function ; we see that moderate and large values of the corresponding hyper - parameter tend to give low scores . figure [ fig3b ] shows the second form of the cross - validation function ; its minimization yields the retained values of the bandwidth and mixing parameters . a visualization of the variogram at a few sites for the estimated stationary model , the estimated non - stationary model and the non - stationary reference model is shown in figure [ fig4 ] ; the change in the non - stationary spatial dependence structure from one site to another is clearly apparent . ( in figure [ fig4 ] the three models are drawn in black , red and green . )
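the two numerical steps behind such estimated deformations can be sketched as follows , assuming numpy and scipy : a simple stress - minimising embedding of the anchor points ( a stand - in for the weighted non - metric mds described above , which additionally applies a monotone regression to the dissimilarities ) , followed by thin - plate spline interpolation of the deformation ; all function and variable names are illustrative .

```python
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import RBFInterpolator

def stress(coords_flat, delta, m, p):
    """raw stress between embedded euclidean distances and target dissimilarities.
    a full weighted non - metric mds would also apply isotonic regression to delta."""
    u = coords_flat.reshape(m, p)
    d = np.linalg.norm(u[:, None, :] - u[None, :, :], axis=-1)
    iu = np.triu_indices(m, k=1)
    return float(np.sum((d[iu] - delta[iu]) ** 2))

def embed_anchors(anchors, delta, p=2):
    """deformed - space coordinates of the (m, p_geo) anchor points, started from
    the geographical coordinates to discourage folding of the deformation."""
    m = anchors.shape[0]
    x0 = np.asarray(anchors, dtype=float)[:, :p].ravel()
    res = minimize(stress, x0, args=(delta, m, p), method="L-BFGS-B")
    return res.x.reshape(m, p)

def fit_deformation(anchors, embedded):
    """thin - plate spline interpolation of the deformation from the anchor
    points to their embeddings; smoothing=0 interpolates the anchors exactly."""
    return RBFInterpolator(anchors, embedded, kernel="thin_plate_spline", smoothing=0.0)

# usage sketch: embed the anchors, fit f_hat, then map all data locations into
# the deformed space, where an isotropic stationary variogram is fitted.
# embedded = embed_anchors(anchor_points, composite_dissimilarity)
# f_hat = fit_deformation(anchor_points, embedded)
# deformed_sites = f_hat(data_sites)
```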
to assess the predictive performance of our approach , the regionalized variable is predicted at the 1024 validation data points . table [ tab1 ] provides a comparative performance of the estimated stationary model , the estimated non - stationary model and the reference non - stationary model . some well - known discrepancy measures are used , namely the mean absolute error ( mae ) , the root mean square error ( rmse ) , the normalized mean square error ( nmse ) , the logarithmic score ( logs ) and the continuous ranked probability score ( crps ) . for rmse , logs and crps , the smaller the better ; for mae , the nearer to zero the better ; for nmse , the nearer to one the better . table [ tab1 ] summarizes the predictive performance statistics computed on the validation data set . the stationary model is worse than the other two models , for example in terms of rmse , logs and crps . the estimated non - stationary model performs comparably well to the reference model . the cost of not using a non - stationary model can be substantial ; for example , the estimated stationary model is clearly worse than the estimated non - stationary model in terms of rmse . ( table [ tab1 ] : predictive performance statistics on a test set of 1024 locations . )
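for reference , the validation scores reported in table [ tab1 ] can be computed from the kriging means and standard deviations along the following lines ; this is a hedged sketch , since the nmse normalisation and the gaussian forms assumed for logs and crps are not spelled out in the text :

```python
import numpy as np
from scipy.stats import norm

def scores(y, mu, sigma):
    """prediction scores on a validation set; mu and sigma are the kriging
    mean and standard deviation. the nmse normalisation by the kriging
    variance (so a well - calibrated model gives values near one) and the
    gaussian forms of logs / crps are assumptions consistent with the text."""
    y, mu, sigma = (np.asarray(a, dtype=float) for a in (y, mu, sigma))
    err = y - mu
    z = err / sigma
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err**2))
    nmse = np.mean(err**2 / sigma**2)
    logs = np.mean(0.5 * np.log(2 * np.pi * sigma**2) + 0.5 * z**2)
    crps = np.mean(sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi)))
    return dict(mae=mae, rmse=rmse, nmse=nmse, logs=logs, crps=crps)
```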
the proposed approach appears to be an efficient tool to estimate a non - stationary spatial dependence structure from a single realization . it improves the estimation of a random function with non - stationary spatial dependence structure , as illustrated by the simulated examples and the real application . additionally , the non - stationary approach leads to better prediction than the stationary model , as measured by the predictive scores . one advantage of the proposed approach is that instead of finding the deformation using all data points , we can do it with a reduced number of points , compared to existing methods . this is a major improvement for large datasets ; indeed , the use of anchor points makes it possible to run the nmds step even for very large datasets . moreover , the approach incorporates spatial constraints which guarantee the bijection property of the deformation ; this point was actually a major drawback when trying to fit a deformation function . the proposed approach is easy to implement and allows all developments already made in the stationary framework to be used through the deformation ; indeed , prediction and simulation under stationarity is a well understood subject , for which fast and robust techniques are available . the approach also provides an exploratory analysis tool for the non - stationarity : the spatial deformation encodes the non - stationarity , and the representation of the deformed space allows the regions of strong and weak continuity to be identified . one future direction of research would be to investigate the statistical properties ( such as consistency ) of the non - parametric kernel estimator of the non - stationary variogram ; this would have to be done in an asymptotic context , following existing work in the stationary framework . the deformation mapping need not be restricted to the class of functions that is currently used ; from our experience , thin - plate spline radial basis functions work well , but a different interpolation method may prove useful . it would also be interesting to account for covariate information in the space deformation model , in order to improve the estimation of the deformation , by building on existing work in that direction . one remaining challenge is the selection of hyper - parameters ; indeed , the cross - validation procedures presented here remain computationally demanding . the authors would like to thank dr budiman minasny at the faculty of agriculture & environment at the university of sydney in australia , for providing the data used in this paper . [ [ appendix1 ] ] _ proof of proposition [ prop1 ] _ consider a set of points $x_1 , \ldots , x_k$ in the geographical space and real numbers $\lambda_1 , \ldots , \lambda_k$ such that $\sum_i \lambda_i = 0$ . let $u_i = f(x_i)$ . since $\gamma_0$ is a valid variogram on the deformed space , $\sum_{i,j} \lambda_i \lambda_j \, \gamma_0(\| u_i - u_j \|) \leq 0$ ; hence $\gamma_0(\| f(x) - f(y) \|)$ is a valid variogram on the geographical space . _ proof of proposition [ prop2 ] _ consider a solution $(f , \gamma_0)$ to the model . let $a$ be a square matrix and $b$ a vector such that $\| a u - a v \| = s \, \| u - v \|$ for some $s > 0$ , and set $\tilde f = a f + b$ and $\tilde\gamma_0(h) = \gamma_0(h / s)$ . then $\tilde\gamma_0(\| \tilde f(x) - \tilde f(y) \|) = \gamma_0(\| f(x) - f(y) \|)$ , hence $(\tilde f , \tilde\gamma_0)$ is a solution as well .
stationary random functions have been successfully applied in geostatistical applications for decades . in some instances , the assumption of a homogeneous spatial dependence structure across the entire domain of interest is unrealistic . a practical approach for modelling and estimating a non - stationary spatial dependence structure is considered here . it consists in transforming a non - stationary random function into a stationary and isotropic one via a bijective continuous deformation of the index space . so far , this approach has been successfully applied in the context of data from several independent realizations of a random function . in this work , we propose an approach for non - stationary geostatistical modelling using space deformation in the context of a single realization with possibly irregularly spaced data . the estimation method is based on a non - stationary variogram kernel estimator which serves as a dissimilarity measure between two locations in the geographical space . the proposed procedure combines aspects of kernel smoothing , weighted non - metric multi - dimensional scaling and thin - plate spline radial basis functions . on simulated data , the method is able to retrieve the true deformation . performance is assessed on both synthetic and real datasets ; it is shown in particular that our approach outperforms the stationary approach . beyond prediction , the proposed method can also serve as a tool for exploratory analysis of the non - stationarity . non - stationarity , variogram , deformation , kernel smoothing , kriging , simulation .
precision measurements in neutron and nuclear decay offer a sensitive window to search for new physics beyond the standard electroweak model and allow also the determination of the fundamental weak vector coupling .recent analyses based on the effective field theory performed in e.g. show that in processes involving the lightest quarks the neutron and nuclear decay will compete with experiments at highest energy accelerators .for instance , data taken at the lhc is currently probing these interactions at the level ( relative to the standard weak interactions ) , with the potential to reach the level . in some of the decay correlation measurements there are prospects to reach experimental sensitivities between and making these observables interesting probes for searches of new physics originating at tev scale .the most direct access to the exotic tensor interaction in decay is to measure the fierz term ( coefficient ) or the beta - neutrino correlation coefficient in a pure gamow - teller transition .the coefficient shows up as a tiny energy dependent departure of the spectrum from its v - a ( standard model ) shape .the smallness of the potential contribution requires that other corrections to the spectrum shape of the same order are included in the analysis . indeed , according to recoil terms also affect the spectrum shape with their main contribution being proportional to . in order to disentangle these effects the detector efficiency for particle as a function of energymust be known with the precision better than .the dominating contribution in the systematic uncertainty comes from back - scattering and out - scattering of electrons from the detector .monte carlo simulation of this effect is helpful , however , it introduces its own uncertainty as the input parameters are known with limited accuracy .monte carlo simulation would reflect the real situation better after it is adjusted to real experimental data of a particular measurement setup .the described in this article electronics was designed for a spectrometer capable of direct registration of the back - scattering events , thus providing reference data for the monte carlo calculation of the detector efficiency .the spectrometer itself is still in an r&d phase undergoing detailed tests and tuning .it will be a subject of a separate paper together with the performance benchmark . in this paper, its concept will be described only in a minimum extent at the beginning of section 2 to explain the requirements imposed onto the front - end electronics and daq .the rest of section 2 is devoted to the electronic system architecture .the test results were obtained with a help of a signal generator and are presented in section 3 . therein the resulting time spectrum and the charge asymmetry distribution representing the typical performanceare shown .the asymmetry spectra were obtained using a dedicated tester to simulate the hit position on the wire with adjustable resistance division ( potentiometer ) .conducting the electronic benchmark tests without a detector was chosen by purpose .the detector itself is still not fully understood .therefore it was important to assess the purely electronic contribution to the performance parameters of the spectrometer .the paper ends with a short summary and outlook for the future experiment .in order to facilitate the identification of the electrons impinging onto and scattered from the energy detector ( e.g. 
si detector , scintillator ) a low - z and low - mass tracker must be applied .one of the attractive options is a low pressure multi - wire drift chamber with minimum number of necessary wires in order to maximize the detector transparency .this condition can be fulfilled by a hexagonal wire geometry and the charge division technique allowing for a 3d track reconstruction without major distortion of the electron energy measurement .the hexagonal wire geometry is not the only one considered in the project .the rectangular ( planar ) wire configuration is the next suitable alternative .the multi - wire drift chamber is based on the small prototype described in ref . and will be operated with he / isobutan gas mixture ( ranging from 70%/30% to 90%/10% ) at lowered pressure ( down to 300 mbar ) .it consists of 10 sense wire planes ( 8 wires for each plane ) separated with 24 field wire planes forming the double f - f - s - f - f - s - f - f - s - f - f - s - f - f - s - f - f structure ( f- denotes a field wire plane , s- denotes a signal wire plane ) .the distance between neighboring signal planes is 15 mm , with wires within a plane being separated by 17.32 mm .this wire plane structure leads to the hexagonal cell geometry as shown in fig .[ cells ] .each cell consists of a very thin anode wire ( nicr alloy , 25 m diameter ) with resistance of about 20 ohms / cm surrounded by 6 cathode field wires forming a hexagonal cuboid .all wires are soldered to pads of the printed circuit board ( pcb ) frames .the chamber is equipped with a two - dimensional positioning system for a beta source installed in the central region of the detector , between both parts of the mwdc structure . in the initial configuration , the electrons detected in two plastic scintillators installed at both sides of the mwdcprovide the time reference signal for the drift time measurement .the pmt signals are also used as a trigger for the mwdc and electron energy detector readout .acquiring the drift time and the pulse height asymmetry at both ends of the responding signal wire one can establish the electron path across the chamber cell .the system provides charge - division position sensing in the direction parallel to the wires as well as precise drift time measurement .the expected position resolution across wires is limited by angular straggling of primary electrons travelling in the gas and accounts to about 450 m as shown in ref. .in this situation , the precision of the drift time measurement of about 200 ps is more than needed as it corresponds to about 50 m for the operating conditions in view .the mwdc will be operated in homogenous magnetic field oriented parallel to wires providing a rough electron transverse momentum filter .the position information along the wires will be used for the identification of the sequence of the cells passed by the electrons and for distinguishing between the electrons impinging onto and scattered from the energy detector .this is why the modest position resolution of a few mm is sufficient for that direction .crucial in the design is the analog part of the system . 
in principle ,commercial digitizers ( tdc , adc ) could be applied at the end of the acquisition chain .however , it has been decided to equip the data channels with custom digital boards with adjusted specifications such that the whole system is handy , cost effective and scalable .the applied on board processing ( fpga ) allows for acquiring up to 15 000 triggered events per second which gives a comfortable factor of 10 reserve as compared to the application in view .the described modular electronic system consists of three main parts : ( i ) preamplifiers , ( ii ) analog cards containing the peak detector and constant fraction discriminator ( cfd ) , and ( iii ) the digital boards containing analog to digital converters , adc and tdc . the data logging software on the pc constitutes the data receiver .the signal from each signal plane is processed by means of one electronics module , which consists of two analog cards and one digital board providing 16 adc and 8 tdc channels . a corresponding block diagram is shown in fig . [ daq ] .the signals from both ends of a signal wire are fed to inputs of preamplifiers located directly on the detector frames ( see fig . [ pcbpre ] ) in order to minimize the input noise and protect the signal from em interferences .the signals from the preamplifiers are received by the analog cards which drive the adc inputs and produce the tdc stop signals .the digital data is transmitted via a lan port to the back - end computer where the process of receiving , sorting and formatting to a complete physical event is accomplished by the data logging software .the card configuration settings and control is done via a rs485 port .detailed description of each part of the system is presented in the following sections .signal readout from both ends of the relatively low resistance wire requires application of dedicated preamplifiers .low resistance wires force the use of fast preamplifiers with low input impedance since the change of signal amplitudes and thus position resolution depends on the input impedance .the lower the input impedance , the higher the voltage difference that is registered at the wire ends .the preamplifiers work in current - mode with the input impedance below 5 ohms and the input stage bandwidth exceeding 300 mhz .the preamplifier circuit is shown in fig .[ preamp ] .the first stage of the preamplifier is a current - voltage converter with two fast bipolar transistors .the voltage pulse is fed to the high - pass filter which cuts off the dc bias and the slow varying components of the signal .the first filter is followed by a low - pass filter integrating the pulse with 100 ns time constant .the signal is then amplified and transmitted to the differential amplifier used to drive the transmission line .differential signal transmission increases the external noise immunity when unshielded twisted pair ribbon cables are used to connect the preamplifier outputs to the inputs of the analog module .an example of input and output signals from a preamplifier is shown in fig .[ inppreamp ] .the input current signal was reproduced with the help of qucs package .using bipolar transistors increases the preamplifier immunity to possible discharges in the gas chamber . adding more robust input protection ( e.g. 
a series resistor and protection diodes ) was abandoned as it would increase the input noise and impedance thus worsening the charge division resolutionhowever , accidental discharges can not be avoided completely and the need for replacement of damaged preamplifiers was taken into account in the design .the individual preamplifiers are arranged as small piggyback pcb cards mounted directly on the wire frames ( fig .[ pcbpre ] ) .they can be replaced easily without unplugging cables or any other major intervention in the setup .additionally , such a solution minimizes the length of the unshielded connection between the wire end and the preamplifier input .the block diagram of the analog circuit is presented in fig .[ analog ] .differential signals from the preamplifiers are processed by the analog circuit .the signals from both wire ends are split into two branches . in the first branch ,signals from both ends are added and fed to the cfd which produces a ttl pulse for the time - to - digital converter ( tdc ) . a delay correctionprior to the summing is not needed unless it is of the order of 1 ns ( corresponding to about 20 cm cable length ) significantly less than the rise time of real signals from the mwdc .care is taken to assure the equal length of the connecting cables .if the sum of the analog signals is higher than the set value of the cfd threshold , the stop signal for the tdc is generated .the schematic of the cfd is presented in fig .[ cfd ] . in the second branch ,both signals are transmitted to the fast peak - hold detectors , which are responsible for detection of the pulse amplitudes in a given gate time .peak - hold detectors are used to stretch the pulse to the length acceptable for the adc converters .the schematic of the peak - hold detector is shown in fig .[ phd ] .the gate and hold signals are produced by a programmable timing circuit located on the analog board which uses the tdc start signal ( external trigger ) as a time reference .the analog boards hosting the analog signal processing circuits are equipped with built - in controllers for setting the thresholds of the cfd as well as the gate and the hold timing .the set values are fed to the controller over a slow - control bus ( rs485 ) .the block diagram of the digital circuit is presented in fig .[ digitalb ] .the digital board consists of 16 channels of 14-bit adcs ( ad8099 chips made by analog devices ) and a 8 channel tdc ( tdc - gpx chip made by acam messelectronic gmbh ) . 
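as an illustration of the constant - fraction principle used by the analog cards , the following is a minimal software analogue ( the real cfd here is an analog circuit ; the delay , fraction and threshold values are illustrative assumptions ) :

```python
import numpy as np

def cfd_crossing(waveform, dt, delay, fraction=0.3, threshold=0.0):
    """software analogue of a constant - fraction discriminator.

    waveform : sampled pulse amplitudes (positive polarity assumed),
    dt : sampling step, delay : cfd delay in the same time units,
    fraction : attenuation of the prompt, inverted copy of the pulse.
    returns the zero - crossing time of the bipolar cfd signal, or None
    when the pulse never exceeds the threshold."""
    w = np.asarray(waveform, dtype=float)
    if w.size < 2 or w.max() <= threshold:
        return None
    k = min(max(1, int(round(delay / dt))), w.size - 1)
    delayed = np.concatenate([np.zeros(k), w[:-k]])   # delayed copy of the pulse
    bipolar = delayed - fraction * w                  # minus the attenuated prompt copy
    start = int(np.argmin(bipolar))                   # search after the negative lobe
    for i in range(start, bipolar.size - 1):
        if bipolar[i] < 0.0 <= bipolar[i + 1]:
            # linear interpolation between samples gives sub - sample timing
            frac = -bipolar[i] / (bipolar[i + 1] - bipolar[i])
            return (i + frac) * dt
    return None
```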
using 14-bit adcs was dictated by the large dynamic range of the amplitudes of the signals from a gas detector operated in the proportional mode .the high bit adc resolution increases the overall performance of the charge division method .the time resolution of the used tdc chip is significantly better than actually needed ( 83 ps binning resolution with the typical standard deviation of 77 ps , from the device datasheet ) .the board measures the delay between the common start ( trigger ) pulse and the individual stop signals received from the constant fraction discriminators , as well as the amplitudes of the pulses delivered by the peak - hold detectors .the stop signals are delayed by a 50 ns delay line in order to compensate for the delays introduced in the analog circuit generating the start signal .the adcs sample the signals with 20 msps sampling rate .the fpga chip ( xilinx spartan xc3s400 ) receives the data from the adcs and the tdc , buffers them , provides zero - suppression and time - stamping , and formats the data frame as described in the next paragraph .the flow diagram of the acquisition algorithm and the data transmission algorithm are shown in fig . [ daq_alg ] .the data frame is transmitted to the integrated ethernet / udp stack ( wiznet w5100 ) which is responsible for communication with the back - end pc hosting the acquisition and control software .the digital board is also equipped with a built - in controller providing initialization and configuration of adc , tdc and ethernet chips .this controller is accessible via the slow - control bus .the data are transferred to the computer in a form of udp frames .the data formatter creates a frame for each recorded event .the frame length is a multiple of 96 bytes .the first part of the frame is a fixed header used for synchronization .next follows the time tag ( 32 bit unsigned integer value ) which allows the time synchronization between multiple modules .the third part of the frame consists of 16 adc conversion values ( 16 bit unsigned integer values ) .the fourth part contains the tdc values - up to eight 24-bit unsigned integer values .only the non - zero values of the tdc data are transmitted .the time tagging is used because the ethernet protocol does not ensure the sequence of the data packet delivery .the time - tags are shared between modules and used by the data - logging software to restore the correct data order .[ formatter ] shows the structure of the data frame .the slow - control interface allows settings the cfd thresholds and timing configuration parameters of the module .each board in the module is equipped with its own controller connected to the slow - control bus .individual addresses of the controllers are set with dip - switches installed on the boards .the controllers use a simple ascii protocol via rs485 interface . in order to connect the bus to the acquisition computera commercial rs485-usb converter is used .an example of the initialization data record necessary for establishing the communication over the slow - control bus is presented on fig .the slow - control parameter set is read back every second and refreshed on a computer display ( fig .[ scview ] ) ., ) and digital board timing ( , ) , where is the gate width and is the pulse hold time . 
for the digital board , the corresponding settings are the maximum waiting time for the incoming signal and the adc delay ; the second column of the display presents the commands used for saving values to the controllers , which use the same format but uppercase option letters . the data logging software provides the parallel reception of data frames sent by the modules and the event building , i.e. the collection of data generated by the system upon a single hardware trigger signal and uniquely identified by a time tag . each module is identified by its ip address and uses a specific udp port corresponding to this address . the architecture of the logging software is shown in fig . [ dls ] . the software is multi - threaded and consists of the main thread ( the parent ) , which is responsible for sorting and recording events , and the child processes , whose task is to receive data from the acquisition modules and to send them via data streams to the main thread . upon initialization the program reads the configuration file with the addresses of the modules and opens sockets for all modules to communicate with them . the sockets listen on the different udp ports correlated with the module addresses . the main loop creates a set of pipes ( data streams ) and receives the data structures from the child processes . the size of the frame is checked , followed by a pre - selection of cases . if the tdc value is greater than zero , the event is pushed to the fifo queue of the child process . the event is then read from the fifo by the parent process , in which the data structures from all child processes are sorted ( using the quick - sort algorithm ) by their time tags and formatted as one complete event . this event will be identified with a particle crossing the chamber planes and directed to physics analysis . the received events are accessible for on - line histogramming analysis by means of a shared memory mechanism and are simultaneously saved on a hard disk for later off - line analysis . the cases with an empty tdc conversion value ( stop signal below the cfd threshold ) are used to calculate the baseline offset for the adc . in this method it is assumed that the lack of the tdc signal in a pair of channels ( serving a particular sense wire ) means that the cell did not fire , so the corresponding adcs deliver the offset values . this in - flight offset calculation method can be disabled and fixed offsets can be introduced with the help of the slow - control interface . the baseline offsets subtracted ( in flight ) from the conversion values are appended to the data structures for each event , so that the subtraction is reversible . finally , the data structure consists of a time tag , module number , channel number , adc1 value , adc2 value , tdc value , adc1 offset and adc2 offset , respectively . it is worth mentioning that the amplitude thresholds which are needed for the histogram building were offset from the adc baseline values such that only the electronic noise was eliminated . the ultimate adjustment of the thresholds was postponed to the detector efficiency tuning phase ; it will depend on the gas mixture composition and pressure as well as on the operation voltage .
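a minimal sketch of how the frames described in the previous section could be unpacked and assembled into events , mirroring the role of the data logging software ; the header content , byte order and packing of the 24 - bit tdc words are assumptions , not the real format :

```python
import struct

HEADER = b"\xaa\x55\xaa\x55"   # fixed synchronization pattern -- an assumption;
                               # the actual header content is not given in the text

def parse_frame(frame):
    """unpack one event frame into (time_tag, adc_values, tdc_values).

    layout follows the description in the text: fixed header, 32 - bit time tag,
    sixteen 16 - bit adc words, then up to eight 24 - bit tdc words (only the
    non - zero ones transmitted). header length, byte order and the packing of
    the 24 - bit words are assumptions."""
    if not frame.startswith(HEADER):
        raise ValueError("bad synchronization header")
    off = len(HEADER)
    (time_tag,) = struct.unpack_from(">I", frame, off)
    off += 4
    adc = list(struct.unpack_from(">16H", frame, off))
    off += 32
    tdc = []
    while off + 3 <= len(frame) and len(tdc) < 8:
        b0, b1, b2 = frame[off:off + 3]
        value = (b0 << 16) | (b1 << 8) | b2
        if value:                      # zero words are padding / suppressed channels
            tdc.append(value)
        off += 3
    return time_tag, adc, tdc

def build_events(frames_per_module):
    """merge frames from several modules into complete physical events, keyed by
    the shared time tag, as done by the data logging software described above."""
    events = {}
    for module, frames in frames_per_module.items():
        for frame in frames:
            t, adc, tdc = parse_frame(frame)
            events.setdefault(t, {})[module] = (adc, tdc)
    return [events[t] for t in sorted(events)]
```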
the described daq system was tested using a special tester simulating the wire chamber signals . the wire itself is replaced by a potentiometer with a range reflecting the real wire resistance ( see fig . [ tester ] ) . the tester utilizes two preamplifiers and signal cables connected to the selected module . a 400 mv high and 200 ns long simulator input is synchronized with a ttl signal triggering the system . the simulator input was fed by a triangle signal with 10 ns rise time and 200 ns fall time , adjusted to the expected detector pulse shape obtained from the garfield simulation . each hit position corresponds to a unique potentiometer setting . the corresponding adc asymmetry distributions are shown in fig . [ adcall ] ; the zero point defines the middle of the wire . the difference between the individual asymmetry spectra reflects the varying relation between the signal and noise amplitudes . in fig . [ wzorzec ] sample results from one channel are presented . fig . [ funadca ] shows the centroids of the adc pulse height asymmetry distributions , $( a_1 - a_2 ) / ( a_1 + a_2 )$ , acquired at different potentiometer asymmetry settings , $( r_1 - r_2 ) / ( r_1 + r_2 )$ , where $a_1$ and $a_2$ are the adc1 and adc2 amplitudes and $r_1$ , $r_2$ are the resistances for the selected potentiometer settings . the results were obtained for a varying resistance division corresponding to the charge collected at both wire ends . the obtained adc asymmetry is a monotonic function of the resistance asymmetry . the centroids of these distributions are drawn with one - sigma error bars in fig . [ funadca ] . the polynomial function fitted to the centroids represents the position calibration . drawing the error band allows the extraction of the position resolution , as shown in fig . [ funadcb ] . the uncertainty of the relative position resolution is connected with the interpolation procedure . ( figure [ funadca ] caption : adc asymmetry centroids as a function of the resistance asymmetry ( eqn . 2 ) ; the insert is a zoomed part of the graph showing error bars equal to one sigma of the peak distributions plotted in fig . [ adcall ] , with dotted lines interpolating the error bar ends . figure [ funadcb ] caption : position resolution expressed in units of the wire length , deduced from fig . [ funadca ] ; a 500 ohm wire resistance was assumed , corresponding to a 25 µm nicr wire of 240 mm length ; the uncertainty is connected with the interpolation procedure , see text . ) fig . [ funadcb ] shows that the charge division method determines the position with a resolution between 1.2 and 3.6 mm for a 240 mm long wire . when feeding a given preamplifier pair with the simulator signals , the noise distributions on the neighboring channels were acquired as well . neither a baseline change nor an increase of noise was observed , meaning that the electronic crosstalk is negligible . this does not assure that there is no crosstalk when the electronic system is attached to the gas detector ; inductive crosstalk between the wires depends on the configuration and on the operating conditions and is beyond the scope of this paper .
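a minimal sketch of the charge - division position reconstruction implied by these tests , assuming the asymmetry definition above and a polynomial calibration fitted to the measured centroids ; names and the polynomial degree are illustrative :

```python
import numpy as np

def charge_asymmetry(a1, a2):
    """adc pulse height asymmetry (a1 - a2) / (a1 + a2) for the two wire ends."""
    a1, a2 = np.asarray(a1, dtype=float), np.asarray(a2, dtype=float)
    return (a1 - a2) / (a1 + a2)

def fit_position_calibration(asym_centroids, known_positions, degree=3):
    """polynomial position calibration fitted to the centroids measured with the
    resistive - divider tester; returns a callable mapping asymmetry to position
    along the wire (in the units of known_positions)."""
    coeffs = np.polyfit(asym_centroids, known_positions, degree)
    return np.poly1d(coeffs)

# usage sketch with illustrative names (not measured values):
# calib = fit_position_calibration(measured_centroids, tester_positions_mm)
# position_mm = calib(charge_asymmetry(adc1, adc2))
```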
in the second test , the response of the tdc measurement was investigated as a function of the input signal asymmetry . fig . [ tdcconst ] shows a typical example . the impact of the charge division on the tdc measurement is found to be less than 1 ns in the entire charge asymmetry range . for the application in view , a 1 ns drift time corresponds to about 250 µm distance ( from garfield simulations ) . the maximum throughput of the system reaches 15 khz per channel ( 120 khz per module ) and is limited by the transmission time of about 60 µs needed for a single event . the front - end electronics and daq system described here was designed to be used with a special multi - wire drift chamber for tracking low energy electrons from nuclear decays . it incorporates both the drift time and charge division measurements , allowing for efficient 3d determination of the electron tracks with very few , and only parallel , sense wires . the performed tests show that the electronic contribution to the drift time measurement uncertainty is less than 1 ns , corresponding to about 250 µm position uncertainty at the expected electron drift velocities . this result is satisfactory since in the planned experiment the track position resolution will be dominated by the electron angular straggling effects in the gas , as shown in monte carlo simulations and confirmed in the small prototype test described in ref . . the necessary spatial resolution of the electron track position determined from the drift time need not be better than 500 µm . the uncertainty of the track position obtained from the charge division measurement varies from 0.5% at the wire ends to 1.5% in the center , corresponding to 1.2 and 3.6 mm , respectively , for 24 cm long wires ( 25 µm nicr ) . this result is sufficient for the application in view : identification of the cell sequence passed by an electron spiraling in an axial magnetic field and resolving possible double track ambiguities appearing in the 2d projection .
t. bhattacharya , v. cirigliano , s. d. cohen , a. filipuzzi , m. gonzalez - alonso et al . , phys . rev . d 85 ( 2012 ) 054512 .
o. naviliat - cuncic , m. gonzalez - alonso , ann . phys . ( berlin ) 525 ( 2013 ) 600 - 619 .
j. d. jackson , s. b. treiman , h. w. wyld , phys . rev . 106 ( 1957 ) 517 .
b. r. holstein , rev . mod . phys . 46 ( 1974 ) 789 .
n. severijns , j. phys . g : nucl . part . phys . 41 ( 2014 ) 114006 .
f. wauters et al . , phys . rev . c 82 ( 2010 ) 055502 .
m. perkowski et al . , in preparation ; g. soti , _ the minibeta spectrometer for the determination of weak magnetism and the fierz interference term _ , psi2013 , switzerland ; p. finlay , _ minibeta : a multiwire drift chamber for beta - spectrum shape measurements _ , aris2014 , japan .
k. lojek , k. bodek , m. kuzniak , nucl . instr . and meth . a 611 ( 2009 ) 284 .
qucs circuit simulator ( http://qucs.sourceforge.net/ ) .
r. veenhof , garfield , cern program library ( http://garfield.web.cern.ch/garfield ) .
this paper presents the design and implementation of the front - end electronics and the data acquisition ( daq ) system for the readout of multi - wire drift chambers ( mwdc ) . apart from the conventional drift time measurement , the system delivers the hit position along the wire utilizing the charge division technique . the system consists of preamplifiers and of analog and digital boards sending data to a back - end computer via an ethernet interface . the data logging software formats the received data and enables easy access for the data analysis software . the use of specially designed preamplifiers and peak detectors allows the charge - division readout of the low resistance signal wire . the impact of the charge - division circuitry on the drift time measurement was studied , and the overall performance of the electronic system was evaluated in dedicated off - line tests . data acquisition system , hardware , data logging software 29.85.ca ; 29.40.gx ; 23.40.bw
one of the consequences of the rapid development of the internet and the growing presence of information communication technologies is that a large part of individuals ' daily activities , both offline and online , is regularly recorded and stored . this newly available data has granted us substantial insight into the activities of a large number of individuals over long periods of time and has led to the development of new methods and tools that enable a better understanding of the dynamics of social groups . the structure and features of social connections both strongly influence and depend on social processes such as cooperation , diffusion of innovations and collective knowledge building . therefore , it is not surprising that complex network theory has proven to be very successful in uncovering mechanisms governing the behaviour of individuals and social groups . human activity patterns , as well as the structure of social networks and the emergence of collective behaviour in different online communities , have been extensively studied in the last decade . on the other hand , the dynamics of offline social groups , where the activities take place through offline meetings ( events ) , have drawn relatively little attention given their importance . these groups , both professional and leisure ones , have large benefits for and influence on the everyday lives of individuals , their broader communities and society in general : they provide social support for vulnerable individuals , can be used for political campaigns and movements , or can have an important role in career development . as they differ in the purpose of their existence , they also vary in the structure of participants , the dynamics of meetings and their organisation . some groups , such as cancer support groups or scientific conference communities , are intended for a narrow circle of people , while others , leisure groups for instance , bring together people of all professions and ages . in the pre - internet era these groups were , by their organisation and means of communication between their members , strictly offline , while today we are witnessing the appearance of a growing number of hybrid groups which combine both online and offline communication . although inherently different , all these social groups have two main characteristics in common : they do not have a formal organisation , although their members follow certain written and unwritten rules , and membership in them is on a voluntary basis . bearing this in mind , it is clear that the function , dynamics and longevity of these self - organised communities depend primarily on their ability to attract new members and to keep old members active in the group activities . understanding the reasons and detecting the key factors which influence members to remain active in social group dynamics is thus important , especially having in mind their relevance for the broader social communities and society . in previous work we have shown that scientists ' participation patterns in conference series are not random and that they exhibit a universal behaviour independent of conference subject , size or location . using empirical analysis and theoretical modelling we have shown that a scientist 's conference attendance depends on the balance between the number of previous attendances and non - attendances , and argued that this repetitiveness is driven by her association with the conference community , i.e.
with the number and strength of social ties with other conference community members .we also argued that similar behaviour when it comes to member s participation patterns in organised group events can be expected in other social communities .here we provide the empirical evidence supporting these claims and further investigate the relationship between dynamics of individuals participation in social group activities and structure of its social network .meetup portal , whose group dynamics we are studying , is an event - based social network .meetup members use the online communication for the organisation of offline gatherings .the online availability of the event attendance lists and group membership enables us to examine the event participation dynamics of the meetup groups and its influence on the structure of social networks between group members .the diversity of meetup groups in terms of the type of activity and size allows us to further examine and confirm the universality of member s participation patterns .the previous works using meetup source of data have been mostly focused on event recommendation problem and the structural properties of social networks and relationships between event participants by disregarding evolutionary behaviour of meetup groups . in this work, we examine the event induced evolution of social networks for four large meetup groups from different categories . like in the case of conference participation , we study the probability distribution of total number of meetup attendance and show that it also exhibits a truncated power law for all four groups , like in the case of conference participation dynamics .this finding suggests that event participation dynamics of meetup groups is characterised by positive feedback mechanism , which is of social origins and is a directly related to member s association with the social community of the specific meetup group . 
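a minimal sketch of how the truncated power - law hypothesis for the attendance counts could be checked with the powerlaw python package ; data loading and variable names are illustrative :

```python
import powerlaw  # pip install powerlaw

def check_truncated_power_law(attendance_counts):
    """fit heavy - tailed candidate distributions to the total number of events
    attended per member and compare a pure power law against a power law with
    exponential cutoff (truncated power law).

    attendance_counts : list or array of positive integers, one per member."""
    fit = powerlaw.Fit(attendance_counts, discrete=True)
    # negative r favours the second (truncated) form; p is the significance
    r, p = fit.distribution_compare("power_law", "truncated_power_law")
    print("truncated power - law exponent alpha =", fit.truncated_power_law.alpha)
    print("loglikelihood ratio r =", r, ", p =", p)
    return fit
```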
using complex network theory we examine in more details the correlation between member s decision to participate at an event and her association with other members of that meetup group .specifically , we track how member s connectedness with community changes with the number of attendance by measuring the change in clustering coefficient and relation between degree and strength in evolving weighted social network , where only statistically significant connections are considered .our results indicate that greater involvement in group activities is more associated with the strengthening of the existing than to creation of new ties .this is consistent with previous research on meetup which has shown that repeated event attendance leads to increase of bonding and decrease of bridging social capital .furthermore , in view of the fact that people interact and network evolves through events , we examine how particular event affects the network size and structure .we investigate effect of event size and time ordering on social network organization by studying the change in network topology , number of distinctive links and clustering , caused by the removal of specific event and find that the purpose of large events is to facilitate new connections , while during the small events already acquainted members strengthen their interpersonal ties .similar behaviour was observed at the level of communities , where small communities are typically closed for new members , while contrary to this , the change of membership in large communities is favourable .this paper is organized as follows : we first study the distribution of the total number of participations for four meetup groups from different categories .next we introduce filtered weighted social network to characterize significant social connections between members and discuss its structural properties .specifically , we study how the local topological properties evolve with the growth of the number of participations in order to derive relationships between the member association with the group and activity patterns . in order to analyse impact of the particular event on the network organization ,we remove events using different strategies and show how it influences social structure .meetup is the online social networking platform which enables people with common interest to start a group with purpose of arranging the offline meetings ( events , meetups ) all over the world .the groups have various topics and are sorted into different categories , such as careers , hobbies , socializing , health , etc .these groups are of various sizes , have different event dynamics and hierarchical organisation .they also differ in type of activity members engage , ranging from socialising events , like parties and clubbing , to professional trainings , such as seminars and lectures .common to all groups is the way they organise offline events : each member of the group gets an invitation to event to which she replays with yes / no , creating in that way a record of attendance for each event .we use this information to analyse event participation patterns and to study the evolution of social network . 
here, we analyse four large groups , each having more than three thousand organized events , ( see methods and table [ dataset ] ) , from four different categories .we chose these four groups because of their convenience for statistical analysis , large number of members and organised events , and also for the fact that they are different when it comes to the type of activity and interest their members share .the _ geamclt _( geam ) group is made of _ foodie thrill - seekers _ who mostly meet in the restaurants and bars in order to try out new exciting foods and drinks , while people in _vegashiker _ ( lvhk ) are hikers who seek excitement trough physical activity .the _ pittsburgh - free_ ( pghf ) is a group which invites its members to free , or almost free , social events , and the _ techlife columbus _ ( tech ) is about social events that focus on tech community networking , entrepreneurship , environmental sustainability , and professional development .figure [ total ] shows that the probability distributions of the total number of members attendance of group events for all four groups exhibits truncated power law behaviour , with power law exponent larger than one .power law and truncated power - law behaviour of the probability distributions can be observed for the number of and the time lag between two successive participations in group organised events , fig [ fig_a ] and fig [ fig_b ] in si .in fact , we find that the similar participation patterns ( they differ in the value of exponents ) can be observed for all meetup groups , regardless of their size , number of events or category .as in the case of conference participation dynamics , this indicates that the probability to participate in the next event depends exclusively on the balance of the number of previous participations and non - participations .we argued in that forces behind conference participation dynamics are of social origin , and it follows from the fig [ total ] that the same can be argued in the case of meetup group participation dynamics .the more participations in group activities member has , the stronger and more numerous are her connections to the other group members , and thus her association with the community .we further explore this assumptions by investigating the event driven evolution of the social networks of the four different meetup groups . .] we construct the social network between group members , for each separate groups , as a network of co - occurrence on the same event ( see methods for more details ) . by definitionthese networks are weighted networks where the link weight between two members is equal to the number of events they participated together .these networks are very dense , which is a direct result of construction method , with broad distribution of link weights ( see fig [ fig_c ] in si ) .co - occurrence at the same event does nt necessarily imply a relationship between two members .for instance , a member of the group that attends many events , or big events , has large number of acquaintances , and thus large number of social connections , which are not of equal importance when it comes to her association with the community .similarly , two members that attend large number of events can have relatively large number of co - occurrences which can be the result of coincidence and not an indicator of their strong relationship . in order to filter out these less important connections we use filtering technique based on configuration model for bipartite networks ( see methods ) . 
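a minimal sketch of the co - occurrence construction described here , using networkx ; the statistical filtering against a bipartite configuration model mentioned in the text is not reproduced and would be applied on top of this graph :

```python
import itertools
import networkx as nx

def cooccurrence_network(events):
    """weighted social network of a meetup group: nodes are members and the
    weight of a link equals the number of events both members attended together.

    events : iterable of attendance lists, one list of member ids per event."""
    g = nx.Graph()
    for attendees in events:
        for u, v in itertools.combinations(sorted(set(attendees)), 2):
            if g.has_edge(u, v):
                g[u][v]["weight"] += 1
            else:
                g.add_edge(u, v, weight=1)
    return g

# usage sketch: g = cooccurrence_network(list_of_event_attendance_lists)
# the resulting dense weighted graph is then pruned by the significance filter
# (bipartite configuration model) before the analyses described below.
```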
by applying this technique to the weighted networks we reduce their density and put more emphasis on the links which are less likely to be the result of coincidence. in this way we emphasise the links of higher weight without removing all links with weight smaller than a certain threshold (see fig [ fig_c ] in si), which is the standard procedure for network pruning. we explore the evolution of these social networks of significant relationships between meetup group members by studying how the local characteristics of the nodes (members) change with their growing number of participations in group activities. association with the community of a specific meetup group can be quantitatively expressed through several local and global topological measures of weighted networks. specifically, we explore how the number of significant connections (the member's degree) and their strength (the member's strength), as well as the member's embeddedness in the group (unweighted and weighted clustering coefficients), change with the number of attended group events. figure [ deg_str ] shows how the average strength of a node depends on its degree in the filtered networks of the four selected meetup groups. while the degree equals the number of a member's significant social relationships, the strength measures how strongly she is connected to the rest of the group. in all considered meetup groups, members with a small or medium number of acquaintances ( ) have similar values of strength and degree, i.e. their association with the community is described by the number of people they know, not by the strength of their connections (see fig [ deg_str ]). having in mind that the average event size in these four groups is less than , we can conclude that the majority of members with degree less than are those that attended only a few group meetups. a previous study has found that the probability for a member to attend group events strongly depends on whether her friends will also attend. the non-linear relationship between degree and average strength for shows that event participation of already engaged members (those who have already attended a few meetings) is linked more to the strength of social relations than to their number. this means that at the beginning of their engagement in group activities, when the association is relatively weak, participation is conditioned by the number of members a person knows, while later, when the association becomes stronger, the intensity of relations with already known members becomes more important. this finding is further supported by the change of average degree and strength with the number of participations. figure [ deg_str_part ] shows how the degree and strength, averaged over all members, evolve with the number of participations in group events. at the beginning, the degree and strength have the same value and grow at the same rate, but after only a few participations the strength becomes larger than the degree and starts to grow much faster, for the members of all four meetup communities. after attended events the average strength of a member is up to ten times larger than her degree (see fig [ fig_d ] in si). this indicates that the event participation dynamics is mostly governed by the member's need to maintain and strengthen her relationships with the already known members of the community.
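the degree-versus-strength analysis above can be reproduced, for any weight matrix of the (filtered) social network, with the short numpy sketch below; it is illustrative only and assumes an undirected network stored as a symmetric matrix `W`.

```python
import numpy as np

def degree_and_strength(W):
    """Node degree (number of non-zero links) and node strength (sum of link weights)
    for an undirected weighted network given by its symmetric weight matrix W."""
    degree = (W > 0).sum(axis=1)
    strength = W.sum(axis=1)
    return degree, strength

def average_strength_by_degree(W):
    """Average strength of nodes grouped by degree, i.e. the s(k) curves of fig [ deg_str ]."""
    k, s = degree_and_strength(W)
    return {int(kk): float(s[k == kk].mean()) for kk in np.unique(k[k > 0])}
```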
as a matter of fact, our analysis of members' embeddedness in the social network shows that it is not about maintaining strong relations with single members of the community, but rather with small subgroups of members. the relatively high value of the average clustering coefficient, , shown in fig [ c_cw_part ] indicates that there is a high probability (more than on average) that friends of a member also form significant relationships with each other. the slow decay of with the number of participations, and the fact that it remains relatively large (above ) even for participants with a thousand attended meetups, fig [ c_cw_part ], show that the personal networks of members tend to remain clustered, i.e. to have a relatively high number of closed triplets compared to random networks. we further examine the structure of these triplets and its change with the number of participations by calculating the averaged weighted clustering coefficient. the weighted clustering coefficient measures the local cohesiveness of the personal networks by taking into account the intensity of interactions within local triplets. this measure takes into account not just the number of closed triplets of a node but also their total relative weight with respect to the total strength of the node (see methods). we examine how its value, averaged over all participants that have attended events, changes with the number of attended events. as shown in fig [ c_cw_part ], the member's network of personal contacts shows, on average, a high level of cohesiveness. like its unweighted counterpart, the value of declines only slightly during the member's early involvement in group activities, while later it remains constant and independent of the number of participations. the comparison of the weighted and unweighted clustering coefficients reveals information about the role of strong relationships in the local network, i.e. whether they form triplets or bridges between different cohesive groups. at the beginning of a member's involvement in the group, these two coefficients have similar values, fig [ c_cw_part ], which indicates that the cohesiveness of the subgroup of personal contacts is not that important for early participation dynamics. as the number of attended events grows, as well as the number and strength of personal contacts, the weighted clustering coefficient becomes larger than its unweighted counterpart, indicating that a member's strongest ties are with other members who are also friends with each other. the fact that at later stages of engagement the weighted clustering coefficient is larger than its unweighted counterpart indicates that clustering plays an important role in the network organisation of meetup groups and thus in the group participation dynamics. [ fig c_cw_part : clustering coefficient and weighted clustering coefficient as a function of the number of attended events . ] in previous work we have shown that conference participation dynamics is independent of conference topic, type and size. the same holds true for meetup participation dynamics, i.e.
the members' participation patterns in meetup group activities do not depend on the group size, category, location or type of activity. however, the size of group events and their time order may influence the structure of the network and thus the group dynamics. we explore how topological properties of the networks, specifically the number of acquaintances and the network cohesion, change after the removal of events according to a certain order (see methods for details). firstly, we study how the removal of events according to a certain order influences the number of overall acquaintances in the network. for this purpose we define the measure (see methods), which quantifies the percentage of significant acquaintances remaining after the removal of events. figure [ eta_event ] shows the change of this measure after the removal of a fraction of events according to the chosen strategy. we see that most of the new significant connections are usually made at the largest events. the importance of large events for the creation of new acquaintances is especially striking for three groups, geam, pghf, and tech, where about of acquaintances only met at the of the largest events. for lvhk the decrease is slower, probably due to a difference in event size fluctuations (see fig [ fig_e ] in si), but still more than of acquaintances disappear if we remove of the largest events, which is a much higher percentage of contacts compared to random removal of events (see fig [ eta_event ] (right)). similar results are observed when we remove events from the smallest to the largest, fig [ eta_event ] (left). only of acquaintances are destroyed by removing of the smallest events, for all four groups. this indicates that new, weak ties are usually formed during large events, while these weak, already existing acquaintances are further strengthened during smaller meetups. on the other hand, the removal of events according to their temporal order, fig [ eta_event ], has a very similar effect to random removal, i.e. the value of the parameter decreases gradually as we remove events. [ fig eta_event : change of with the removal of events according to their size (left) and according to time order and at random (right). abbreviations indicate the order in which we remove events: * b * from the largest to the smallest, * s * from the smallest to the largest, * f * from the first to the last and * r * random. ] similar conclusions can be drawn from the change of the average weighted clustering coefficient with the removal of events, fig [ cw_event ]. removal of events in order from the smallest to the largest does not result in a significant change of (now averaged over all nodes in the network). the unchanged value of the weighted clustering coefficient, even after removal of of events, shows that small events are not attended by _ a pair _ but rather _ by a group _ of old friends.
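the event-removal experiments summarised above can be sketched as follows; this is a simplified surrogate in which the remaining-acquaintance measure is taken as the fraction of statistically significant links that still have non-zero weight after the removal, and the boolean mask of significant links is assumed to come from the filtering procedure described in methods.

```python
import numpy as np

def cooccurrence(A, keep_events):
    """Member co-occurrence weights computed from the retained events only."""
    B = A[:, keep_events]
    W = B @ B.T
    np.fill_diagonal(W, 0)
    return W

def eta_curve(A, order, fractions, significant):
    """Fraction of significant links that survive after removing a growing share of events
    in the given order (largest-first, smallest-first, chronological, or random)."""
    n_events = A.shape[1]
    total = significant.sum() / 2.0               # undirected links counted once
    etas = []
    for f in fractions:
        removed = set(order[: int(f * n_events)])
        keep = np.array([e for e in range(n_events) if e not in removed], dtype=int)
        W = cooccurrence(A, keep)
        surviving = ((W > 0) & significant).sum() / 2.0
        etas.append(surviving / total if total else np.nan)
    return etas

# Removal orders built from the event sizes (column sums of A):
# sizes = A.sum(axis=0)
# largest_first  = np.argsort(-sizes)
# smallest_first = np.argsort(sizes)
# chronological  = np.arange(A.shape[1])   # assuming columns are time-ordered
```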
on the other hand, the removal of events from the largest to the smallest results in a gradual decrease of . a certain fraction of triads in the network contain at least one low-weight link, and these are the first to be destroyed by the removal of the largest events, leading to the gradual decrease of . removal of events according to their temporal order results in a change of similar to the one obtained with random removal of events, further confirming that the time ordering of events does not influence the network structure. in this article we explore the event participation dynamics and underlying social mechanisms of meetup groups. the motivation behind this was to further explore event-driven dynamics, work we started by exploring participation patterns of scientists at scientific conferences, and to better examine the social origins behind repeated attendance of group events, which was not feasible with the conference data. the results in this manuscript are based on an empirical analysis of participation patterns and topological characteristics of networks for four different meetup groups made up of people with different motives and readiness to participate in group activities: geam, pghf, tech, lvhk. although these four groups differ in category and type of activity, we have shown that all four of them are characterised by similar participation patterns: the probability distributions of the total number of participations, the number of successive participations and the time lag between two successive participations follow power-law and truncated power-law behaviour with power-law exponents between and . the resemblance of these patterns to the ones observed for conference participation indicates that these two, seemingly different, social systems are governed by similar mechanisms. this means that the probability for a member to participate in future events depends non-linearly on the balance between the number of previous participations and non-participations. as in the case of conferences, this behaviour is independent of group category, size, or location, meaning that members' association with the community of a meetup group strongly influences their event participation patterns and thus the frequency and longevity of their engagement in group activities. a member's association with the community is primarily manifested through her interconnectedness with other members of the specific meetup group community, i.e. in the structure of her personal social network. we have examined the topological properties of the filtered weighted social network constructed from the members' event co-occurrence.
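the weighted clustering coefficient used throughout this discussion follows the definition of barrat et al.; a direct, unoptimized implementation is sketched below for reference (it reduces to the ordinary clustering coefficient when all link weights are equal).

```python
import numpy as np

def barrat_weighted_clustering(W):
    """Weighted clustering coefficient of each node, following Barrat et al. (2004):
    c_i = 1 / (s_i (k_i - 1)) * sum over ordered pairs (j, h) of neighbours of i
          of (w_ij + w_ih) / 2, restricted to pairs that are themselves connected."""
    a = (W > 0).astype(float)
    k = a.sum(axis=1)          # degrees
    s = W.sum(axis=1)          # strengths
    c = np.zeros(W.shape[0])
    for i in range(W.shape[0]):
        if k[i] < 2:
            continue
        nbrs = np.where(a[i] > 0)[0]
        acc = 0.0
        for j in nbrs:
            for h in nbrs:
                if j != h and a[j, h] > 0:
                    acc += (W[i, j] + W[i, h]) / 2.0
        c[i] = acc / (s[i] * (k[i] - 1))
    return c

# Network average reported in the text:
# c_w_avg = barrat_weighted_clustering(W).mean()
```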
by filtering the network we emphasized the importance of significant links, those which are not the result of coincidence but rather an indicator of existing social relations. the analysis of the local topological properties of these networks has revealed that, for members with a small number of participations, the strength of connectedness with the community is predominantly a consequence of the width of their social circles. the average strength and degree of members with , which on average corresponds to only a few participations, are equal, while the strength of members who know more than people and have participated in more than a few events is several times higher than their degree. this means that strengthening existing ties becomes more important than meeting new people after a few participations. these arguments are further extended by our observation of the evolution of average strength and degree with the growing number of participations. both the average degree and the strength grow, but the growth rate of the strength is higher than that of the degree for all four meetup groups. all four groups are characterised by a very high cohesiveness of their social communities. the evolution of the clustering coefficients, unweighted and weighted, and of their ratio shows that bonding with the community becomes more important as the member's engagement in group activities progresses. as in the case of conference participation, frequent attendees of group activities tend to form a core whose stability grows with the number of participations. the need of frequent attendees to maintain and increase their bonding with the rest of the community influences their probability to attend future meetings and thus governs the event participation dynamics of the meetup groups. while group category, type of activity and size do not significantly affect the participation dynamics and network structure, the size of the separate events organised in groups does influence the evolution of the social networks. large events represent an opportunity for members to make new acquaintances, i.e.
to establish new connections, while small meetings are typically gatherings of members with preexisting connections whose main purpose is to facilitate stronger bonding among group members. while the size of an event influences the structure of the social networks, it turns out that their time order is irrelevant for the group dynamics. the universality of members' event participation patterns, shown in this and previous work, and their socially driven nature give us better insight not only into the dynamics of the studied social communities but also into others which are organised on very similar principles: communities that bring together people with similar interests and where participation is voluntary. having in mind that these types of groups constitute a large part of human life, across all its aspects, understanding their functioning and dynamics is of great importance. our results not only contribute to the growing corpus of knowledge but also indicate the key factor which influences group longevity and successful functioning: the association of group members with the community. this, together with recent success stories, suggests that complex network theory can be an extremely useful tool in creating successful communities. future studies will be directed towards further confirmation of the universality of event participation patterns and a better understanding of how social association and contacts can be used to create conditions for the successful functioning of learning and health support groups. there are more than groups in countries classified into categories. for each of the four selected groups, we have collected the list of events organized by the group and information on the members who confirmed their participation in each event since the group's beginnings. each member has a unique id which enables us to follow her activity in group events over time. more details about the group sizes and the number of events are given in table [ dataset ]. * network construction * we start with a bipartite member-event network represented by the participation matrix . let denote the total number of members in the group and the total number of events organized by the group. if member participated in event , the corresponding element of the matrix takes the value , and otherwise. in the bipartite network created in this way, the member's degree is equal to the total number of events the member participated in, while the event's degree is defined as the total number of members that attended that event. the social network, which is the result of members' interactions during the meetup events and is represented by the weighted matrix , is created as the weighted projection of the bipartite network onto the member partition. in the obtained weighted network, nodes correspond to individual members, while the value of the element of the weighted matrix corresponds to the number of events two members attended together. * network filtering * the observed weighted network is a dense network in which some of the non-zero edges can be the result of coincidence. for instance, such edges can be found between members who attended a large number of events or events with many participants, and therefore they do not necessarily indicate social connections between members. the pruning of this type of network and the separation of significant edges from non-significant ones is not a trivial task.
for this reason we start from the bipartite network and use a method that determines the significance of a link based on the configuration model of a random bipartite network. in this model of random networks, the event sizes and the number of events each member attended are fixed, while all other correlations are destroyed (see si for further explanation). based on this model, for each link in the bipartite network, , we determine the probability that user attended event . the assumption of an uncorrelated network also enables us to estimate the probability that two members, and , attended the same event , which is equal to . the probability that two members attended the same events is then given by the poisson binomial distribution where is the subset of events that can be chosen from the given events. we define the -value as the probability that two members and co-occurred at at least events, i.e. that the link weight between these two members is or higher. the relationship between users and is considered statistically significant if . in our case, the threshold is . all links with are considered a consequence of chance, i.e. non-significant, and are thus removed from the network. in this way we obtain the weighted social network of significant relations between members of the meetup group. the details of how we estimate and for each link are given in si. * topological measures * all topological measures considered in this work are calculated for the weighted social network of significant relations. we consider the following topological measures of the nodes: * the node degree , where is the heaviside function ( if , and otherwise); * the node strength ; * the unweighted clustering coefficient of node ; * the weighted clustering coefficient of node . the weighted clustering coefficient of the network, , and its unweighted counterpart, , are the values averaged over all nodes in the network. in order to explore the relevance of event size and time ordering for the evolution of the social network topology, we analyse how the removal of events, according to a specific ordering, influences the number of acquaintances and the network cohesion. specifically, we observe the change of the measure , which represents the fraction of remaining acquaintances, and of the weighted clustering coefficient, , after the removal of a certain fraction of events. the removal of an event results in a change of the link weights between group members. for instance, if two members, and , participated in event , the removal of this event will decrease their link weight by one. the further removal of events in which these two members co-occurred will eventually lead to the termination of their social connection, i.e. .
if is the matrix of link weights after the removal of fraction of events and is the original matrix of significant relations , then the value of parameter after the removal of events is calculated as the value of weighted clustering coefficient , , after the removal of fraction of events is calculated using the same formula as for the just using the value of instead of .we remove events according to several different strategies : * we sort events by the number of participants .then , we remove sorted events in both , descending and ascending order .* we sort events by arrangement in time .we remove sorted events in direct order .* we remove events in random order .we perform this procedure for each list of events times .numerical simulations were run on the paradox supercomputing facility at the scientific computing laboratory of the institute of physics belgrade .10 castellano c , fortunato s , loreto v. statistical physics of social dynamics .rev mod phys .2009;81:591646 .five rules for the evolution of cooperation . science .2006;314(5805):15601563 .fowler jh , christakis na .cooperative behavior cascades in human social networks .proc natl acad sci usa .2010;107(12):53345338 .granovetter ms .the strength of weak ties .am j sociol. 1973;78(6):13601380 .pastor - satorras r , castellano c , van mieghem p , vespignani a. epidemic processes in complex networks .rev mod phys .2015;87:925979 .mitrovi dankulov m , melnik r , tadi b. the dynamics of meaningful social interactions and the emergence of collective knowledge .2015;5:12197 .boccaletti s , latora v , moreno y , chavez m , hwang du . .2006;424(45):175 308 .holme p , saramki j. .2012;519(3):97 125 .aral s , walker d. identifying influential and susceptible members of social networks . science .2012;337(6092):337341 .gonzlez - bailn s , borge - holthoefer j , moreno y. broadcasters and hidden influentials in online protest diffusion .am behav sci. 2013;57(7):943965 .lin yr , chi y , zhu s , sundaram h , tseng bl .facetnet : a framework for analyzing communities and their evolutions in dynamic networks . in : proceedings of the 17th international conference on world wide web .www 08 ; 2008 .p. 685694 .mitrovi m , paltoglou g , tadi b. quantitative analysis of bloggers collective behavior powered by emotions .j stat mech .2011;2011(02):p02005 . garas a , garcia d , skowron m , schweitzer f. .2012;2:402 .trk j , iiguez g , yasseri t , san miguel m , kaski k , kertsz j. .phys rev lett. 2013;110:088701 .yasseri t , sumi r , rung a , kornai a , kertsz j. .plos one . 2012;7(6):112 .montazeri a , jarvandi s soghraand haghighat , vahdani a mariamand sajadian , ebrahimi m , haji - mahmoodi m. .patient educ couns .2001;45:195198 .davison kp , pennebaker jw , dickerson ss . .. 2000;55:205217 .tam cho wk , gimpel jg , shaw dr . .q j polit sci . 2012;7:105133 .weinberg bd , , williams cb . .j direct data digit mark pract .2006;8(1):4657 .smiljani j , chatterjee a , kauppinen t , mitrovi dankulov m. a theoretical model for the associative nature of conference participation .plos one . 2016;11(2):112 .qiao z , zhang p , zhou c , cao y , guo l , zhang y. event recommendation in event - based social networks . in : proceedings of the twenty - eighth aaai conference on artificial intelligence .aaai14 ; 2014 .p. 31303131 .zhang w , wang j , feng w. combining latent factor model with location features for event - based group recommendation . 
in : proceedings of the 19th acm sigkdd international conference on knowledge discovery and data mining .kdd 13 ; 2013 .p. 910918 .pham tan , li x , cong g , zhang z. a general graph - based model for recommendation in event - based social networks . in : 2015 ieee 31st international conference on data engineeringp. 567578 .macedo aq , marinho lb , santos rlt . .in : proceedings of the 9th acm conference on recommender systems .recsys 15 ; 2015 .p. 123130 .liu x , he q , tian y , lee wc , mcpherson j , han j. . in : proceedings of the 18th acm sigkdd international conference on knowledge discovery and data mining .kdd 12 ; 2012 .p. 10321040 .jiang jy , li ct . .in : international aaai conference on web and social media ; 2016 .p. 599602 .sessions lf . .information , communication & society . 2010;13(3):375395 .mccully w , lampe c , sarkar c , velasquez a , sreevinasan a. . in : proceedings of the 7th international symposium on wikis and open collaboration .wikisym 11 ; 2011 .p. 3948 .palla g , barabsi al , vicsek t. quantifying social group evolution .2007;446(7136):664667 .backstrom l , huttenlocher d , kleinberg j , lan x. group formation in large social networks : membership , growth , and evolution . in : proceedings of the 12th acm sigkdd international conference on knowledge discovery and data mining .kdd 06 ; 2006 .p. 4454 .arxiv e - prints .2016;. f , di clemente r , gabrielli a , squartini t. .arxiv e - prints .2016;. barrat a , barthlemy m , pastor - satorras r , vespignani a. the architecture of complex weighted networks .proc natl acad sci usa. van dijk j , maier g. pap reg sci .2006;85(4):483504 .cosgrave p. engineering serendipity : the story of web summit s growth ; 2014 .available from : https://goo.gl/h3awmi .mitrovi m , tadi b. .eur phys j b. 2010;73(2):293301 .mitrovi m , paltoglou g , tadi b. .eur phys j b. 2010;77(4):597609 .dianati n. unwinding the hairball graph : pruning algorithms for weighted complex networks .phys rev e. 2016;93:012304 .saracco f , di clemente r , gabrielli a , squartini t. randomizing bipartite networks : the case of the world trade web .2015;5:10595 .cellai d , bianconi g. multiplex networks with heterogeneous activities of the nodes .phys rev e. 2016;93:032302 .liebig j , rao a. fast extraction of the backbone of projected bipartite networks to aid community detection .europhys lett .2016;113(2):28003 .hong y. on computing the distribution function for the poisson binomial distribution .computational statistics and data analysis , 2013;59:4151 .the meetup dataset , containing information on organised events by certain meetup group and members of that group that confirmed attendance at an event , allows us to construct member - event bipartite network with adjacency matrix . 
for each member and event , matrix element if member participated event , or , otherwise .the degree of member is defined as the total number of events member participated in , , and similarly , the degree of event is defined as the total number of members attended the event , .given the matrix , social relations between meetup members can be analysed using the projected unipartite member - member weighted network , where the weight of the link between two members is equal to the number of events they both attended .the observed weighted network is the dense network where some of the non - zero edges can be a matter of coincidence .for instance , two frequent attendees can meet several times due to a chance not due to the fact that there is some relation between them , which means that the connection between them is not significant for our analysis .also , the connections between members that meet at big events and never again can not be regarded as social relations and thus they need to be excluded from our analysis . to make the distinction between significant and non - significant edges is nontrivial task .here we use the method which enables us to calculate the significance of the link between two members based on the probability for that link to occur in random network . as a null model we use configuration model of bipartite network .first we describe general framework for constructing randomized network ensemble with given structural constraints .the maximum - entropy probability of the graph in the ensemble , , is given by where the are lagrangian multipliers and the partition function of these network ensembles is defined as the ensemble average of a graph property can be expressed as then the constants could be determined from ( [ constraints ] ) .let us now consider configuration model of the member - event bipartite network with given degree sequence and . in this casethe partition function can be written as the lagrangian multipliers and are determined from finally , we can calculate the probability that a member attended event .if we define coupling parameter and write partition function in the form then , it holds now , when the probability is given , the probability that members and both participated in event is .the probability of having an edge of the weight between the nodes and is given by poisson binomial distribution where is the subset of events that can be chosen from given events .we use dft - cf method ( discrete fourier transform of characteristic function ) , proposed in , to compute poisson binomial distribution . on the basis of ,we define -value as the probability that edge has weight higher or equal than the edge will be considered statistically significant if . in our case ,threshold .if , the edge should be removed as spurious statistical connection between members ( set ) .
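the filtering pipeline described in this section can be sketched end to end as follows. the fixed-point solver for the bipartite configuration model and the direct convolution of the poisson binomial distribution are illustrative stand-ins (the study relies on a dft-based evaluation of the distribution), and the significance threshold `alpha` is a placeholder rather than the value used in the paper.

```python
import numpy as np

def bicm_probabilities(A, n_iter=5000, tol=1e-10):
    """Fixed-point solution of the bipartite configuration model:
    p[i, a] = x_i * y_a / (1 + x_i * y_a), with expected member and event degrees
    matching the observed ones. Illustrative solver only."""
    k = A.sum(axis=1).astype(float)            # member degrees
    q = A.sum(axis=0).astype(float)            # event degrees
    x = k / (k.max() + 1.0)
    y = q / (q.max() + 1.0)
    for _ in range(n_iter):
        xy = x[:, None] * y[None, :]
        x_new = k / (y[None, :] / (1.0 + xy)).sum(axis=1)
        y_new = q / (x[:, None] / (1.0 + xy)).sum(axis=0)
        if max(np.abs(x_new - x).max(), np.abs(y_new - y).max()) < tol:
            x, y = x_new, y_new
            break
        x, y = x_new, y_new
    xy = x[:, None] * y[None, :]
    return xy / (1.0 + xy)

def poisson_binomial_tail(ps, w):
    """P(X >= w) for X a sum of independent Bernoulli variables with success
    probabilities ps, via direct convolution of the probability mass function."""
    pmf = np.array([1.0])
    for p in ps:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf[int(w):].sum()

def significant_links(A, W, alpha=0.01):
    """Boolean mask of member-member links whose observed co-occurrence W[i, j]
    has a p-value below alpha under the bipartite configuration model null."""
    P = bicm_probabilities(A)
    n = W.shape[0]
    keep = np.zeros_like(W, dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if W[i, j] == 0:
                continue
            pvals = P[i] * P[j]                # per-event co-attendance probabilities
            if poisson_binomial_tail(pvals, W[i, j]) < alpha:
                keep[i, j] = keep[j, i] = True
    return keep
```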
affiliation with various social groups can be a critical factor for the quality of life of every individual, making these groups an essential element of every society. group dynamics, longevity and effectiveness strongly depend on a group's ability to attract new members and keep them engaged in group activities. it was shown that the high heterogeneity of scientists' engagement in the conference activities of a specific scientific community depends on the balance between the number of previous attendances and non-attendances and is directly related to the scientist's association with that community. here we show that the same holds for leisure groups on the meetup website and further quantify a member's association with the group. we examine how the structure of personal social networks evolves with event attendance. our results show that a member's increasing engagement in group activities is primarily associated with the strengthening of already existing ties and an increase of bonding social capital. we also show that meetup social networks grow through big events, while small events contribute to the group's cohesiveness.
signal propagation in the form of waves is a ubiquitous feature of the functioning of neural networks .waves transmitting electrical activity across neural structures have been observed in a large variety of situations , both in artificially grown cultures and in living brain tissues , see _e.g _ and for an instance of each case ; many other examples can be found in the literature .this experimental phenomenology has fostered numerous computational and analytical studies on theoretical models for wave propagation . to mention a single category of exact results, one can cite proofs of existence of waves with context dependent shape : fronts , pulses , periodic wave trains , etc , both in full voltage / conductance models and in firing rate models , see _ e.g. _ .in parallel , numerical studies have investigated propagation features such as firing synchrony within cortical layers , and their dependence on dynamical ingredients : feedback , surrounding noise , or external stimulus , see for instance .our paper aims to develop a rigorous mathematical investigation of how the global dynamics of a ( simple model of a ) neural network may cause it to organize to a wave behavior , in spite of being forced by a rather unrelated signal .given that the natural setting of neural ensembles typically features an external environment that is prone to providing an array of irregular stimuli , it seems that such forcing is in no way _ a priori _ tailored toward generating periodic patterns in layered ensembles .nevertheless , recordings from tissues suggest that self - organization will often ensue despite this obvious and inherent mismatch . in order to get insight into the generation of periodic traveling waves through _ ad hoc _ stimulus , we consider unidirectional chains of coupled oscillators .inspired by the propagation of synfire waves through cortical layers , such systems can be regarded as basic phase variable models of feed - forward networks featuring synchronized groups , in which each pool can be treated as a phase oscillator that repetitively alternates a refractory period with a firing burst .in addition , chains with unidirectional coupling as in equation below are representative of some physiological systems , such as central pattern generators .also , acyclic chains of type - i oscillators have been used as simple examples for the analysis of network reliability .more specifically , the model under consideration deals with chains of coupled phase oscillators whose dynamics is given by the following coupled odes where and * .up to a rescaling of time , we can always assume that .* is the so - called phase response curve ( prc ) . recall that a type - i oscillator is onefor which the prc is a non - negative one - humped function .* mimics incoming stimuli from the preceding node and also takes the form of a unimodal function . *the first oscillator at site evolves according to some forcing signal ( external stimulus ) , _ i.e. _ we have for all .the forcing is assumed to be continuous , increasing and periodic ( _ viz ._ there exists such that for all ) .numerical simulations of the system for some smooth functions and and forcing signals such as the uniform function , have revealed that the asymptotic dynamics settles to a highly - organized regime , independently of initial conditions . 
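as a point of reference for the simulations mentioned here, a minimal euler integration of the chain can be written as below; the smooth one-humped prc and stimulus profile are illustrative choices (not the exact functions used for the figures), the root is driven by a uniform slope-one signal, and all parameter values are placeholders.

```python
import numpy as np

def simulate_chain(n_sites=20, eps=0.3, a=0.25, dt=1e-3, t_max=200.0, seed=0):
    """Euler integration of  dtheta_s/dt = 1 + eps * Delta(theta_s) * sigma(theta_{s-1}),
    with the first site driven by the uniform forcing phi(t) = t. Here Delta and sigma
    are both taken as the same smooth bump supported on [0, a] of the unit circle
    (an illustrative assumption, not the paper's exact choice)."""
    rng = np.random.default_rng(seed)

    def bump(theta):
        x = np.mod(theta, 1.0)
        out = np.zeros_like(x)
        inside = x < a
        out[inside] = np.sin(np.pi * x[inside] / a) ** 2   # non-negative, one-humped
        return out

    n_steps = int(t_max / dt)
    theta = rng.random(n_sites)                  # random initial phases
    history = np.empty((n_steps, n_sites))
    for n in range(n_steps):
        t = n * dt
        drive = np.empty(n_sites)
        drive[0] = bump(np.array([t]))[0]        # stimulus from the forcing signal phi(t) = t
        drive[1:] = bump(theta[:-1])             # stimulus from the preceding oscillator
        theta = theta + dt * (1.0 + eps * bump(theta) * drive)
        history[n] = theta
    return history
```

plotting `history` modulo 1 against time should reproduce the qualitative picture reported in fig . [ snapshots ] : after a transient, downstream sites settle onto a periodic, time-shifted copy of a common profile.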
as , the phase approaches the perturbation of a periodic function - the same function ( ) at each site , up to an appropriate time shift - and this perturbationis attenuated by the chain in going further and further down . in brief terms , traveling waves are typically observed in the far - downstream , large time limit ; see fig .[ snapshots ] .mathematically speaking , this means that , letting denote the solution to with forcing ( and say , typical initial condition ) , there exists a periodic function and a time shift such that we have moreover , this phenomenon occurs for arbitrary forcing period , nor to ; see for an illustration . ] in some range and also appears to be robust to changes in the coupling intensity . , , , , ) of oscillator phases with periodic boundary condition ( ) for a typical trajectory issued from a random initial condition , when forcing with a uniform signal .clearly , a traveling wave with periodic shape emerges in the far - downstream large time limit ( i.e. , as and ) . ]this behavior was somehow unanticipated because does not reveal any crucial properties usually required in the proofs of wave existence , such as the monotonicity of the profile dynamics ( analogous property to the maximum principle in parabolic pdes ) , see _ e.g. _ for lattice differential equations and for discrete time recursions ., we show that monotonicity with respect to pointwise ordering on sequences in fails in this system . ] in this context , proving the existence of waves remains unsolved and so is the stability problem , not to mention any justification of the generation phenomenon when forcing with _ ad hoc _ signal .notice however that , by assuming the existence of waves and their local stability for the single - site dynamics , a proof of stability for the whole chain has been obtained and applied to the design of numerical algorithms for the double - precision construction of wave shapes .( our stability proof here is inspired by this one . ) in order to get mathematical insights into wave generation under _ ad hoc _ forcing , here , we analyze simple piecewise affine systems for which the dynamics can be solved explicitly .this analysis can be viewed as an exploratory step in the endeavor of searching for full proofs in ( more general ) nonlinear systems .hence , the functions and are both assumed to be piecewise constant on the circle , taking on only the two distinct values of 1 ( on ) or 0 ( off ) . in this setting , our analysis shows that the numerical phenomenology can be mathematically confirmed . for all parameter values, we prove the existence of tw with arbitrary period in some interval , and their global stability with respect to initial perturbations in the phase space , not only when the forcing at is chosen to be a tw shape but also for an open set of periodic signals with identical period .in addition , this open set is shown to contain uniform forcing provided that the coupling intensity is sufficiently small .the paper is organized as follows .the next section contains the accurate definition of the initial value problem , the basic properties of the associated flow and the statements of the main results .the rest of the paper is devoted to proofs . 
in section [ s - exist ] ,we prove the existence of tw by establishing an explicit expression of their shape .we study the tw stability with respect to initial conditions in section [ s - stab ] by considering the associated stroboscopic dynamics , firstly for the first site , and then for the second site , from where the stability of the full chains is deduced .finally , stability with respect to changes in forcing is shown in section [ s - gener ] , as a by - product of the arguments developed in the previous sections .section [ s - concl ] offers some concluding remarks .as mentioned before , the dynamical systems under consideration in this paper are special cases of equation in which the stimulus and the prc are ( non - negative and normalized ) square functions , namely , and denotes the floor function ( recall that for all ) . and denote elements in whereas and represent functions of the real positive variable with values in . ]\\ 0\ \text{otherwise } \end{array}\right .\quad\text{and}\quad \delta(\vartheta)=\left\{\begin{array}{l } 1\ \text{if}\ \vartheta\ \text{mod}\ 1\in [ 0,a_1]\\ 0\ \text{otherwise } \end{array}\right.\quad\forall \vartheta\in\t.\ ] ] ( of note , the stimulus can be made arbitrarily brief by choosing arbitrarily close to 0 . moreover , that the two intervals ] have the same left boundary is a simplifying assumption that reduces the number of parameters .other cases are of interest , for instance , oscillators hearing poorly when transmitting , which corresponds to non - overlapping intervals . )more formally , we shall examine the following system of coupled differential equations for semi - infinite configurations where * the forcing signal is assumed to be a lipschitz - continuous , -periodic - valued as -periodic provided that for all . ] ( ) , and increasing function with slope ( wherever defined ) at least 1 , is thought of being piecewise affine or even simply affine . ] and satisfying , * is an arbitrary initial configuration , * and are arbitrary parameters .solutions of equation are in general denoted by but the notation is also employed when the dependence on initial condition needs to be explicitly mentioned .the solutions of equation have a series of basic properties which we present and succinctly argue in a rather informal way .these facts can be formally established by explicitly solving the dynamics .the details are left to the reader . *existence of the flow .* given any forcing signal and any initial condition , for every , there exists a unique function which satisfies equation for all .this function is continuous , increasing , and piecewise affine with alternating slope in .moreover each piece of slope 1 must have length .these facts readily follow from solving the dynamics inductively down the chain . assuming that is given for some ( or considering the forcing term if ) , the slope of the first piece of only depends on the relative position of with respect to and of with respect to .the length of this piece depends on its slope , on and on the smallest such that ; this infimum time has to be positive . by induction, this process generates the whole function by using the location of and at the end of each piece , and the next time when . * continuous dependence on inputs . *endow with pointwise topology and , given , endow continuous and monotonic functions of ] continuously depends both on the forcing signal } ] at which they reach must be close . 
if , in addition , the initial conditions and are close , then the trajectories and alternate their slopes at close times ; hence }-\theta_1^{(g)}(\xi_1,\cdot)|_{[0,t]}\| ] and } ] must be small when and are sufficiently close. then , the result for an arbitrary follows by induction .* semi - group property . * as suggested above , a large part of the analysis consists in focusing on the one - dimensional dynamics of the first oscillator forced by the stimulus , prior to extending the results to subsequent sites .indeed , the dynamics of the oscillator can be regarded as a forced system with input signal .for the one - dimensional forced system , letting for all denotes the time translations , we shall especially rely on the semi - group property of the flow , _ viz ._ if is a solution with initial condition and forcing , then , for every , is a solution with initial condition and forcing . * monotonicity failure . * consider the following partial order on sequences in ( employed in typical proofs of existence of tw in lattice dynamical systems ) .we say that iff for all .clearly , we may have for some and yet for another .for instance , it suffices to choose and with and sufficiently close .see figure [ monotonicity_fail ] below . and at site , and and at site . ]a solution of the system is called a * traveling wave * ( tw ) if there exists such that in other words , a tw is a solution for which the forcing signal exactly repeats at every site , modulo an appropriate time shift : the phase at any given site and time mimics the phase at the previous site and time ( see fig . [ tw_rep - and - converge ] , left panel ) . for such solutions, the forcing signal also plays the role of the wave shape and the quantity represents the wave number . in our setting , any tw shape / forcing signal obviously has to be a piecewise affine function with slopes only taking the values and . for and vs. time , together with the forcing signal ( which satisfies condition ( c ) in thm [ mainres1 ] , so as to ensure that the wave is stable see text ) .* subsequent panels : * example of a trajectory for and disjoint consecutive time windows .the initial phases have been chosen randomly and the forcing signal is the same as in the left panel .clearly , the convergence occurs at the first site s=1 and then propagates down the chain , as described in the text ; the higher the value of is , the longer the transient time needs in order to get close to . ] as we shall see below , tw exist that are asymptotically stable , not only with respect to perturbations of the initial phases , but more importantly , also with respect to changes in the forcing signal . for convenience ,we first separately state existence and uniqueness .( existence . ) for every and , there exists a ( non - empty ) interval and for every , there exist a -periodic forcing signal and such that is a tw .( uniqueness . ) for every , the forcing signal and the shift as above are unique , provided that the following constraint is required \(c ) is a piecewise affine forcing signal whose restriction has slope only on a sub - interval whose left boundary is .[ mainres1 ] for the proof , see section [ s - exist ] , in particular corollary [ existw ] . notice that constraint ( c ) has no intrinsic interest other than unambiguously identifying appropriate tw shapes for the stability statement . to identify stable waves matters because the system also possesses neutral and unstable traveling waves . 
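the traveling-wave property and the convergence statement behind the stability theorem can be probed numerically with the small helper below; it takes a simulated trajectory (for instance from a discretisation of the piecewise-constant model of this section, or from the smooth sketch given earlier), treats the phases modulo 1, and measures how far site s+1 is from a time-shifted copy of site s. the grid of candidate shifts and the burn-in time are user-supplied assumptions.

```python
import numpy as np

def wave_mismatch(history, dt, tau, s, t_min):
    """Sup-norm circle distance between theta_{s+1}(t) and theta_s(t - tau) for t >= t_min,
    a numerical surrogate for the traveling-wave property."""
    shift = int(round(tau / dt))
    start = int(t_min / dt)                       # assumes t_min > tau
    a = history[start:, s + 1]
    b = history[start - shift: history.shape[0] - shift, s]
    d = np.mod(a - b, 1.0)
    return float(np.minimum(d, 1.0 - d).max())

def best_shift(history, dt, s, t_min, tau_grid):
    """Scan candidate time shifts and return the one minimising the mismatch; for a
    stable wave the minimum should shrink as s grows and time increases."""
    errs = [wave_mismatch(history, dt, tau, s, t_min) for tau in tau_grid]
    i = int(np.argmin(errs))
    return tau_grid[i], errs[i]
```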
for stability , we shall use notions that are appropriate to forced systems , and adapted to our setting .in particular , since the information flow is unidirectional here , it is natural to only require that perturbations relax in pointwise topology , rather than in uniform topology .therefore , we shall consider the dynamics on arbitrary finite collections of sites which , without loss of generality , can be chosen to be the first sites , for an arbitrary .moreover , there is no need for local stability considerations here because we shall be concerned with tw for which the basin of attraction is as large as it can get from a topological viewpoint .accordingly , we shall say that a tw is * globally asymptotically stable * if there exists a sequence such that , for every and every initial condition for which for all , we have in other words , a solution is globally asymptotically stable if its basin of attraction is as large as it can get from a topological viewpoint .again , the exceptional initial conditions must exist for fixed - point index reasons . as can be expected , our next statement claims global asymptotic stability of waves , provided they are suitably chosen according to the above criterion. there exists a ( non - empty ) sub - interval such that for every , the tw determined by constraint ( c ) is globally asymptotically stable .[ mainres15 ] see fig .[ tw_rep - and - converge ] for an illustration of this result .theorem [ mainres15 ] is proved in section [ s - stab ] , see especially the concluding statement corollary [ stabtw ] .in addition , for initial conditions not satisfying the stability condition , our proof shows that the first coordinate for which this condition fails asymptotically approaches an unstable periodic solution . in theorem [ mainres15 ] , we claim stability of the tw with respect to perturbations of initial conditions when forcing with .we now consider the analogous property when the forcing signal is also perturbed . as described in the introduction , in general , for any given site , one may not expect relaxation to exactly but rather to a perturbation of this signal , which is itself attenuated as .accordingly , we shall say that a tw is * robust with respect to perturbations of the forcing * if , for every forcing signal in a -neighborhood of , there exists a neighborhood ( product topology ) of such that , for every , we have as indicated in the introduction .robustness with respect to forcing perturbations is expected to hold in general smooth systems of the form ( as is the global stability of tw ) . in the current piecewise affinesetting , the phase actually relaxes to at every site ( a stronger result ) , as described in the following statement . for every , there exists a -neighborhood of such that , for every , the tw determined by constraint ( c ) globally attracts solutions of the system with forcing .that is to say , there exists a sequence such that for every and every initial condition for which for all , we have in addition , for small enough ( depending on ) , there exists a ( non - empty ) sub - interval such that , for every , the neighborhood contains the uniform forcing , for all .[ mainres2 ] theorem [ mainres2 ] is established in section [ s - gener ] and implies in particular robustness with respect to forcing perturbations . 
for every ,the tw determined by the constraint ( c ) is robust with respect to perturbations of the forcing .an alternative characterization of tw solutions can be given as follows together with the semi - group property of the first oscillator dynamics , in order to prove the existence of tw , it suffices to find a forcing signal and a phase shift such that for all . using the translation operator , this is equivalent to solving the following delay - differential equation for the pair .the purpose of the section is precisely to solve this equation . to that goal, it is useful to begin by identifying all possible cases .since we assume , the forcing signal / tw shape can be entirely characterized by the partition of ] .* and .then there exist and such that we have [ specify ] for ] , is given as follows is a consequence of the fact that - see figures in cases ( a ) and ( c ) . ] by translation , this determines the shape over the interval ] , the tw shape writes where the 2-parameter family is defined by notice that for every .moreover , we must have which yields . therefore , in order to prove the lemma , it suffices to prove uniqueness of given a phase shift .this is the purpose of the next statement . for every choice of parameters and every phase shift , the equation ( resp . ) has a unique positive solution denoted ( resp . ) .moreover , we have .notice that the quantity in this statement satisfies the inequalities ._ proof of the claim ._ assuming , the equation has unique solution assuming , the equation gives .now , is the first time in ] can be summarized as follows : * both and have speed for . * has speed 1 and has speed for . *both and have speed for .it results that has speed on a longer time interval and thus we also have by strict monotonicity ; hence and the argument can be repeated with to obtain by induction , it follows that the sequence is decreasing and non - negative ; hence it converges .a standard contradiction argument based on the contraction and on the continuity of concludes that the limit must be 0 .this proves local asymptotic stability with respect to negative initial perturbations .a similar argument applies to positive perturbations. _ unstable case ._ similarly to as before , let be sufficiently small so that for every , we have . comparing again the two trajectories , we have : * both and have speed for .( nb : ) .* has speed and has speed for .* both and have speed for . *both and have speed for . in this case , has speed on a shorter interval and thus for every , _ viz . _the tw is unstable with respect to negative perturbations .a similar argument applies to positive perturbations ._ neutral case . _the analysis is similar .one shows that the total duration when the perturbed trajectory has speed is identical to that of the tw ; hence . in the proof of lemma [ exist ] above , we have identified the condition ( see equation ) as sufficient to ensure being in case ( a ) with when . by cross - checking this constraint with the one in the statement of that lemma ( and using also lemma [ prouniqcas ] , _i.e. _ that the tw is entirely determined by the parameters and its phase shift ) , we get the following conclusion . 
for every choice of parameters , there exists an interval such that for every , there exists a -periodic tw shape and a phase shift such that is a locally asymptotically stable fixed point of .[ globexists ] _ proof ._ in order to be able to choose that simultaneously satisfies and the conditions of lemma [ exist ] , it suffices to prove that the inequality holds for all parameter values . to that goal, we consider separately different parameter regimes . if , then the left hand side of the inequality is equal 0 , while the right hand side always remains non - negative ; hence the inequality holds . in order to investigate the case , we notice that direct calculations imply the following conclusions * the inequality is equivalent to , * the condition simplifies to and we obviously have when .furthermore , we clearly have ; hence the statement also holds in the case where . from now on, we can assume .we have accordingly , one has to consider three cases * if then and the statement holds provided that , which is true . * if then and the inequality to check is which holds true .* finally , if and , then the inequality to verify is which is equivalent to . however , the inequality implies and we have because of .the proof of the corollary is complete . once local stability has been established , a careful computation of allows one to show that the fixed points are actually globally stable .when , the fixed point is globally stable .that is to say , there exists a unique unstable fixed point such that we have [ globstab ] to be more accurate , the proof below actually shows that for every , there exists such that _ proof ._ we are going to prove that , for the tw in case ( a ) with simultaneously satisfying and the conditions of lemma [ exist ] , the restriction of to the interval consists of four affine pieces ( each piece being defined on an interval ) : two pieces are rigid rotations and they are interspersed by one contracting and one expanding piece .since is a lift of an endomorphism of the circle that preserves the orientation , and since it has a locally stable fixed point ( corollary [ globexists ] ) , the graph of the contracting piece must intersect the main diagonal of . as a consequence, the graphs of the following and preceding neutral pieces can not intersect this line . by continuity and periodicity , the graph of the remaining expanding piece must intersect this line as well .let be this unique unstable fixed point .the proposition immediately follows . in order to prove the decomposition into the four desired pieces, we are going to consider various cases . to that goal , recall first that and , _ i.e. _ can only have speed for ( within the interval ) . consider the trajectory of the coordinate with initial value .since we have , the largest integer reaches before time is at most 1 , _viz._ we consider separately two cases : * either the last rapid phase with speed ( when ) stops when .this occurs iff * or its stops when reaches . , _i.e. _ we may have . ]this occurs when see figure [ f - four_intervals ] for an illustration of the action of according to these two cases , when . throughout the proof, we shall frequently make use of the time when the coordinate reaches the value , _i.e. _ . 
here are arbitrary ._ case ( a ) ._ by continuous and monotonic dependence on initial conditions , every trajectory starting initially with sufficiently close to 0 ( and ) , will not only experience the same number of rapid phases with speed , but the last rapid phase ( the unique rapid phase if ) will also stop at .assume .as time evolves between 0 and , speed changes for such trajectories occur when the level lines and are reached ( _ i.e. _ when and ) . consequently , the delays between the corresponding instants for two distinct trajectories remain constants _ i.e. _ we have this equality implies that the cumulated lengths of the rapid phases satisfy _i.e. _ they are independent of .we conclude that ; _ viz ._ the map is a rigid rotation in some right neighborhood of 0 .if , the same conclusion immediately follows from the fact that lengths of the rapid phases are simply given by .moreover , the largest initial condition for which this property holds is defined by and we have .therefore , is a rigid rotation on . to continue , we separate case ( a ) into 2 subcases : * either . * or assume first that case ( a1 ) holds . in the case , using similar considerations as above , for trajectories now starting in ]. however , by continuity of and the property ( p ) above , the set of these accumulation points must be invariant under , _viz._ since the only invariant set of that intersects $ ] is the fixed point , we conclude as desired .to conclude the proof , it remains to prove the existence of a unique such that according to the first part of the proof , it suffices to prove the existence of a unique such that the sequence eventually enters and remains inside the interval where is expanding ( because , henceforth , the sequence must approach ) .let be the derivative of on .we claim ( and prove below ) the existence of a closed subinterval , of a number and of a sufficiently large such that more precisely , is not unique and its boundaries can be chosen arbitrarily close to those of ( and similarly can be chosen arbitrarily close to ) .accordingly , the threshold depends on ( and ) and diverges as approaches .now , given , consider the intersection set thanks to the property , these sets form a family of nested closed non - empty intervals whose diameter tend to 0 as . by the nested ball theorem, we conclude the existence of a unique point which depends on the whole solution at site 1 , such that furthermore , as a composition of homeomorphisms , the map is itself a homeomorphism of .hence there exists a unique such that .clearly , we have as desired . to complete the proof , it remains to establish the property .notice first that , as gets large , the times at which the forcing signals and respectively cross the levels 0 , and 1 come close together . as a consequence , the restriction of to the interval , since close to by the property ( p ) , must consist of an expanding piece , possibly preceded and/or followed by a rigid rotation . moreover , the length of these ( putative ) rigid rotation pieces must be bounded above by a number that depends only on and on , and which vanishes as .( indeed , if otherwise , the length(s ) of the rigid rotation piece(s ) remained bounded below by a positive number , we would have a contradiction with the property ( p ) above , because the distance between and would remain bounded below by a positive number for close to the boundaries of . 
)this proves the existence of on which all for sufficiently large , must be expanding .in addition , by taking even larger if necessary ( so that is even smaller ) , we can make sure that the expanding piece of over intersects the diagonal , since the limit map does so .the first claim of property then easily follows .the second claim that the expanding slope must be bounded below by can proved using a similar contradiction argument as above .the proof of the proposition is complete . in order to extend the stability results down the chain , proceeding similarly as when introducing before proposition [ stab2site ] , we consider the maps ( ) that specify the coordinates . in particular , for , a similar reasoning as the one in the proof of the property ( p ) above shows that the conclusion of proposition [ stab2site ] implies as soon as and .moreover , by repeating _mutatis mutandis _ the proof of proposition [ stab2site ] , we conclude that , under the same conditions , there exists a unique such that by induction , one obtains the following statement , from which theorem [ mainres15 ] easily follows .given an arbitrary , assume that the initial phases have been chosen so that the solution behaves as follows then there exists a unique so that we have [ stabtw ]recall that for we have for the associated tw where is such that .hence , for every continuous increasing and -periodic forcing such that and , the stroboscopic map in the neighborhood of remains unchanged , _ viz ._ by repeating _mutatis mutandis _ the reasoning in the proof of proposition [ globstab ] , one shows that the restriction of to also consists of four pieces ; two rigid rotations interspersed by a contracting and an expanding piece .therefore , there must exist such that in other words , the asymptotic dynamics of is the same as when forcing with .the conclusion for the rest of the chain immediately follows .finally , in order to make sure that the conclusion applies to the uniform forcing , it suffices to show that . using the expression and , we obtain after simple algebra it is easy to see that this condition defines an interval of for all parameter values . moreover , and depending whether is smaller than or not , for , this interval either contains or coincides with the interval defined in the proof of corollary [ globexists ] . by continuity , for small , there exists an interval of - that corresponds to periods in a subinterval - in which both the condition and the ones in the proof of corollary [ globexists ] hold .the statement easily follows .for simple feed forward chains of type - i oscillators , our analysis proved that periodic wave trains can be generated from arbitrary initial condition , even when the root node is forced using an unrelated signal .moreover , these stable waves exist for an open ( parameter - dependent ) interval of wave number and period .the existence of globally attracting waves for arbitrary wave number in some range is reminiscent of the inertia - free dynamics of tilted frenkel - kontorova chains , which constitute coupled oscillator models for spatially modulated structures in solid - state physics .there are however essential differences between the two situations . 
instead of a uni - directional interaction ,the coupling is of bi - directional type in frenkel - kontorova chains and involves left and right neighbors .more importantly , the overall dynamics there is monotonic and , as mentioned in the introduction , this property is critical for the proof of existence and stability of waves .finally , we notice that the results on asymptotic stability and on stability with respect to changes in forcing are based on hyperbolic properties of the stroboscopic dynamics .accordingly , we believe that , using continuation methods , these results , and more generally , results on generation of traveling waves in unidirectional chains of type - i oscillators can be established in a rigorous mathematical way , in more general models with smooth prc and stimulus nonlinearities . this will be the subject of future studies .the authors declare that they have no competing interests .bf and sm both designed the research , proceeded to the analysis , and wrote the paper .the work of bf was supported by eu marie curie fellowship piof - ga-2009 - 235741 and by cnrs peps _ physique thorique et ses interfaces_. r.o .dror , c.c .canavier , r.j .butera , j.w .clark and j.h .byrne , _ a mathematical criterion based on phase response curves for stability in a ring of coupled oscillators _cybernetics * 80 * ( 1999 ) 11 - 23 .d. kleinfeld , k.r .delaney , m.s .fee , j.a .flores , d.w .tank and a. gelperin , _ dynamics of propagating waves in the olfactory network of a terrestrial mollusk : an electrical and optical study _ , j. neurophysiol . * 72 * ( 1994 ) 1402 - 1419 .
we investigate the dynamics of unidirectional semi - infinite chains of type - i oscillators that are periodically forced at their root node , as an archetype of wave generation in neural networks . in previous studies , numerical simulations based on uniform forcing have revealed that trajectories approach a traveling wave in the far - downstream , large time limit . while this phenomenon seems typical , it is hardly anticipated because the system does not exhibit any of the crucial properties employed in available proofs of existence of traveling waves in lattice dynamical systems . here , we give a full mathematical proof of generation under uniform forcing in a simple piecewise affine setting for which the dynamics can be solved explicitly . in particular , our analysis proves existence , global stability , and robustness with respect to perturbations of the forcing , of families of waves with arbitrary period / wave number in some range , for every value of the parameters in the system . _ keywords _ : nonlinear waves , forced feedforward chains , coupled oscillators , type i neural oscillator laboratoire de probabilités et modèles aléatoires + cnrs - université paris 7 denis diderot + 75205 paris cedex 13 france + fernandez.univ-paris-diderot.fr + department of mathematics + the cooper union + new york ny 10003 usa + mintchev.edu +
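As a numerical complement to the analysis above, the following sketch iterates a circle-map lift built from four affine pieces (two rigid rotations interspersed by one contracting and one expanding piece, the structure established for the stroboscopic maps in the proofs). All breakpoints, slopes and the offset below are illustrative choices rather than values from the paper; the point is only to exhibit the globally attracting fixed point on the contracting piece.

```python
# Illustrative sketch (not the paper's parameters): a lift of a circle map made of
# four affine pieces, two rigid rotations (slope 1) interspersed by one contracting
# (slope 1/2) and one expanding (slope 3/2) piece.
import numpy as np

BREAKS = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # piece boundaries on [0, 1)
SLOPES = np.array([1.0, 0.5, 1.0, 1.5])          # rotation, contraction, rotation, expansion
OMEGA  = 0.05                                    # assumed offset of the lift at x = 0

def lift(x):
    """Continuous, non-decreasing lift F with F(x + 1) = F(x) + 1
    (the slopes above integrate to exactly 1 over one period)."""
    n, frac = np.divmod(x, 1.0)
    y = OMEGA + np.zeros_like(frac)
    for lo, hi, s in zip(BREAKS[:-1], BREAKS[1:], SLOPES):
        y = y + s * np.clip(frac - lo, 0.0, hi - lo)
    return y + n

def circle_map(x):
    return lift(x) % 1.0

x = np.linspace(0.0, 1.0, 200, endpoint=False)   # many initial conditions
for _ in range(300):
    x = circle_map(x)
print("orbit values after 300 iterates:", np.unique(np.round(x, 8)))
# With these assumed numbers the contracting piece crosses the diagonal at x* = 0.35
# (slope 1/2, stable) and the expanding piece at x = 0.9 (slope 3/2, unstable), so
# essentially every sampled orbit ends up at 0.35.
```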
erasure channels , the first example of which was introduced in , have attracted an increasing attention in the last decades . originally regarded as purely theoretical channels , they turned out to be a very good abstraction model for the transmission of data over the internet , where packets get lost randomly due to, for example , buffer overflows at intermediate routers .erasure channels also find applications in wireless and satellite channels where deep fading events can cause the loss of one or multiple packets .traditionally , mechanisms have been used in order to achieve reliable communication .a good example is the that is used for data transmission over the internet .relies on feedback from the receiver and retransmissions and it is known to perform poorly when the delay between transmitter and receiver is high or when multiple receivers are present ( reliable multicasting ) .an early work on erasure coding is , where reed - solomon codes and ( dense ) linear random codes are proposed .however , those techniques become impractical due to their complexity already for small block lengths .more recently tornado codes were proposed for transmission over erasure channels .tornado codes have linear encoding and decoding complexities ( under decoding ) . however , the encoding and decoding complexities are proportional to their block lengths and not their dimension . hence , they are not suitable for low rate applications such as reliable multicasting in which the transmitter needs to adapt its code rate to the user with the worst channel ( highest erasure probability ) .codes have also been proposed for use over erasure channels and they have been proved to be practical in several scenarios even under decoding .fountain codes are erasure codes potentially able to generate an endless amount of encoded symbols .they find application in contexts where the channel erasure rate is not known a priori . the first class of practical fountain codes , codes ,was introduced in together with an iterative decoding algorithm that is efficient when the number of input symbols is large .one of the shortcomings of codes is that in order to have a low probability of unsuccessful decoding , the encoding cost per output symbol has to be .raptor codes were introduced in as an evolution of codes .they were also independently proposed in , where they are referred to as online codes .raptor codes consist of a serial concatenation of an outer code ( usually called precode ) with an inner code .the code design can thus be relaxed requiring only the recovery of a fraction of the input symbols with small .this can be achieved with linear encoding complexity .the outer code is responsible for recovering the remaining fraction of input symbols , . if the outer code is linear - time encodable , then the raptor code has a linear encoding complexity , , and therefore the overall encoding cost per output symbol is constant with respect to .if decoding is used , the decoding complexity is also linear in the dimension and not in the blocklegth , as it is the case for ldpc and tornado codes .this leads to a constant decoding cost per symbol , regardless of the blocklength ( i.e. , of the rate ) .furthermore , in it was shown that raptor codes under decoding are universally capacity - achieving on the binary erasure channel .most of the works on and raptor codes consider decoding which has a good performance for very large input blocks ( at least in the order of a few tens of thousands symbols ) . 
often in practice ,smaller blocks are used .for example , for the raptor codes standardized in and the recommended values of range from to . for these input block lengths ,the performance under decoding degrades considerably . in this context, an efficient decoding algorithm in the form of inactivation decoding may be used in place of .some recent works have studied the decoding complexity of raptor and codes under inactivation decoding . in bounds on the distance and error exponent are derived for a concatenated scheme with random outer code and a fixed inner code . in is shown how the rank profile of the constraint matrix of a raptor code depends on the rank profile of the outer code parity check matrix and the generator matrix of the lt code . in and lower bounds on the bit error probability of and raptor codes under decoding are derived .the outer codes there considered in this work are picked from a linear ensemble in which the elements of the parity check matrix are independently set to one with a given probability .this work is extended in , where upper and lower bounds on the codeword error probability of codes under decoding are developed .another extension of this work is where a pseudo upper bound on the performance of raptor codes under decoding is derived under the assumption that the number of erasures correctable by the outer code is small .hence , this approximation holds only if the rate of the outer code is sufficiently high .in lower and upper bounds on the probability of successful decoding of codes under decoding as a function of the receiver overhead are derived , while corresponding bounds are developed in for raptor codes . in finite length protograph - based raptor - like ldpc codesare proposed for the awgn channel . despite their rateless capability ,raptor codes represent an excellent solution also for fixed - rate communication schemes requiring powerful erasure correction capabilities with low decoding complexity .it is not surprising that raptor codes are used in a fixed - rate setting by some existing communication systems ( see , e.g. , ) . in this context, the performance under erasure decoding is determined by the distance properties of the fixed - rate raptor code ensemble .in contrast to , in this work we consider raptor codes in a fixed - rate setting analyzing their distance properties .in particular , we focus on the case where the outer code is picked from the linear random code ensemble .the choice of this ensemble is not arbitrary .the outer code used by the r10 raptor code , the most widespread version of binary raptor codes ( see ) , is a concatenation of two systematic codes , the first being a high - rate regular code and the second a pseudo - random code characterized by a dense parity check matrix. 
the outer codes of r10 raptor codes were designed to behave as codes drawn from the linear random ensemble in terms of rank properties , but allowing a fast algorithm for matrix - vector multiplication .thus , the ensemble we analyze may be seen as a simple model for practical raptor codes with outer codes specifically designed to mimic the behavior of linear random codes .this model has the advantage to make the analytical investigation tractable .moreover , although it is simple , the results obtained using this model allow predicting the behavior of binary raptor codes in the standards rather accurately , as illustrated by simulation results in this paper .for the considered raptor ensemble we develop a necessary and sufficient condition to guarantee a strictly positive normalized typical minimum distance , that involves the degree distribution of the inner fixed - rate code , its rate , and the rate of the outer code .it identifies a positive normalized typical minimum distance region on the plane , where and are the inner and outer code rates .this can be used as an instrument for fixed - rate raptor code desing . in particular , for a given overall rate of the fixed - rate raptor ensemble , it allows to identify the smallest fraction of that has to be assigned to the outer code to obtain good distance properties .a necessary condition is also derived which , beyond the inner / outer code rates , depends on the average output degree only .finally we show how the analytical results presented in this paper may be used to predict the performance of finite length fixed - rate raptor codes .this work extends the earlier conference paper .the rest of the paper is organized as follows . in section [ sec : ensemble ]we introduce the main definitions .section [ sec : dist ] provides the derivation of the average weight distribution of the raptor code ensemble considered and the associated growth rate .section [ sec : rate_reg ] provides necessary and sufficient conditions for a linear growth of the minimum distance with the block length ( positive normalized typical minimum distance ) .numerical results are presented in section [ sec : results ] .the conclusions follow in section [ sec : conclusions ] .we consider fixed - rate raptor code ensembles based on the encoder structure depicted in figure [ fig : raptor ] .the encoder is given by a serial concatenation of an outer code with an inner fixed - rate code .we denote by the outer encoder input , and by the corresponding random vector . similarly , and denote the input and the output of the fixed - rate encoder , with and being the corresponding random vectors .the vectors , , and are composed by , , and symbols respectively .the symbols of are referred to as _ source _ symbols , whereas the symbols of and are referred to as _ intermediate _ and _ output _ symbols , respectively .we restrict ourselves to symbols belonging to .we denote by the hamming weight of a binary vector . for a generic output symbol , denotes the output symbol degree , i.e. , the number of intermediate symbols that are added ( in ) to produce .we will denote by , , and the rates of the outer , inner codes .we consider the ensemble of raptor codes obtained by a serial concatenation of an outer code in the binary linear random block code ensemble , with all possible realizations of an fixed - rate code with output degree distribution , where is the probability of having an output symbols of degree .we also denote by the average output degree , . 
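To make the ensemble concrete, here is a minimal sketch of the fixed-rate LT encoding step in Python; the degree distribution, the sizes h and n and the random seed are illustrative placeholders (they are not the values used in the paper or in the standards), and the formal sampling of a code from the ensemble is described next.

```python
# Minimal sketch of fixed-rate LT encoding over GF(2).  The degree distribution and
# the sizes h (intermediate symbols) and n (output symbols) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

degrees = np.array([1, 2, 3, 4])             # assumed support of the distribution Omega
omega   = np.array([0.05, 0.5, 0.25, 0.2])   # assumed probabilities (sum to 1)

def lt_generator_matrix(h, n):
    """h x n low-density generator matrix: column j gets a degree d drawn from Omega
    and d distinct intermediate positions set to 1."""
    G = np.zeros((h, n), dtype=int)
    for j in range(n):
        d = rng.choice(degrees, p=omega)
        G[rng.choice(h, size=d, replace=False), j] = 1
    return G

def lt_encode(v, G):
    """Fixed-rate LT encoding over GF(2): output word c = v G (mod 2)."""
    return (v @ G) % 2

h, n = 20, 30                                 # toy sizes, inner code rate h/n = 2/3
G = lt_generator_matrix(h, n)
v = rng.integers(0, 2, size=h)                # an intermediate word
print("output word :", lt_encode(v, G))
print("mean column degree :", G.sum() / n, " (Omega-bar =", float(degrees @ omega), ")")
```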
picking randomly one code in the ensemble performed by randomly drawing the parity - check matrix of the linear random outer code and the low density generator matrix of the fixed - rate encoder .the parity - check matrix of the outer code is obtained by drawing bernoulli uniform random variables .the generator matrix of the fixed - rate encoder is generated by independently drawing degrees according to the probability mass function ( p.m.f . ) and , for each such degree , by choosing uniformly at random distinct symbols out of the intermediate ones .we make use of the notion of exponential equivalence .two real - valued positive sequences and are said to be exponentially equivalent , writing , when if and are exponentially equivalent , then given two pairs of reals and , we write if and .in this section we characterize the expected of a fixed - rate raptor code picked randomly in the ensemble .an expression for the expected is first obtained .then , the asymptotic exponent of the is analyzed .[ theorem : we ] let be the expected multiplicity of codewords of weight for a code picked randomly in the ensemble . for we have where for a serially concatenated code we have where is the average of the outer code , and is the average of the inner fixed - rate code .the average of an linear random code is known to be we now focus on the average of the fixed - rate code .let us denote by the hamming weight of the input word to the encoder and let us denote by the probability that any of the output bits of the encoder takes the value given that the hamming weight of the intermediate word is and the degree of the code output symbol is , i.e. , for any .this probability may be expressed as removing the conditioning on we obtain , the probability of any of the output bits of the fixed - rate encoder taking value given a hamming weight for the intermediate word , i.e. , for any .we have since the output bits are generated by the encoder independently of each other , the hamming weight of the codeword conditioned to an intermediate word of weight is a binomially distributed random variable with parameters and .hence , we may write the average of a code may now be easily calculated multiplying by the number of weight- intermediate words , yielding making use of , and , we obtain .as opposed to with , whose expression is given by ( 1 ) , the expected number of codewords of weight , , is given by an expected number of weight- codewords larger than one is related to the fact that we have a nonzero probability that the generator matrix of the fixed - rate lt code is not full - rank .this matrix , in fact , is generated `` online '' in the standard way for lt encoding , i.e. , by drawing i.i.d . discrete random variables with p.m.f . , representing the weights of the columns . 
for each such column , the corresponding ` ' entries are placed in random positions .it will be shown in section [ sec : rate_reg ] , theorem [ theorem : zero_codeword ] , that if the pair belongs to the region there called `` positive normalized typical minimum distance region '' , the expected number of zero weight codewords approaches ( exponentially ) as increases .next we compute the asymptotic exponent ( growth rate ) of the weight distribution for the ensemble , that is the ensemble in the limit where tends to infinity for constant and .hereafter , we denote the normalized output weight of the raptor encoder by and the normalized output weight of the outer code ( input weight to the encoder ) by .the growth rate is defined as [ theorem : growth_rate ] the asymptotic exponent of the weight distribution of the fixed - rate raptor code ensemble is given by where being and defined as follows , & \textrm{otherwise } \, , \end{array } \right.\end{aligned}\ ] ] with defined as .\label{eq_npnl}\end{aligned}\ ] ] let us define .from we have inequality follows from the well - known tight bound while from and from the fact that the maximum can not be taken for for large enough , hence for large enough ( as shown next ) .inequality is due again to , to being a monotonically increasing function , and to being a scaling factor not altering the result of the maximization with respect to . that the maximum is not taken for , for large enough ,may be proved as follows . by direct calculation of for and is easy to show that we have since for increasing , there exists such that for all .hence , for all such values of the maximum can not be taken at .next , by defining the right - hand side of may be recast as the two terms and in the last expression converge to zero as . moreover , also the term converges to zero regardless of the behavior of the sequence .in fact , it is easy to check that the term converges to zero in the limiting cases and , so it does in all other cases . developing the right hand side of further , for large enough , we have where is the set of rational numbers .inequality follows from the fact that , as it can be shown , ( uniformly in ) for large enough and from the fact that the supremum over upper bounds the maximum over the finite set .equality is due to the density of . in equality ,the function of being maximized is regarded as a function over the real interval ( i.e. , is regarded as a real parameter ) .the upper bound on is valid for any finite but large enough .if we now let tend to infinity , all inequalities are satisfied with equality . in particular : for this follows from the well - known exponential equivalence ; for from the exponential equivalence ; for from ( due to vanishing for large ) ; for from the fact that , asymptotically in , applying the definition of limit we can show that the maximum over the set upper bounds the supremum over ( while at the same time being upper bounded by it for any ) .the expression of is obtained by assuming tending to using the expression of .alternatively , the same expression is obtained by assuming tending to and letting an output symbol of degree choose its neighbors _ with _ replacement . by letting tend to infinity and by cancelling all vanishing terms , we finally obtain the statement . note that we can replacethe supremum by a maximum over as this maximum is always well - defined. ] it is continuous for all . 
] the next two lemmas , which will be useful in the sequel , characterize the derivative of the growth rate function . for the sake of clarity, we use the notation instead of .[ lemma : growth_rate_derivative ] the derivative of the growth rate of the weight distribution of a fixed - rate raptor code ensemble is given by where let us rewrite the expression of in as .we must have taking the derivative with respect to , after elementary algebraic manipulation we obtain which , applying , yields the statement . [corollary : der ] for all , the derivative of the growth rate of the weight distribution of a fixed - rate raptor code ensemble fulfills by imposing , from lemma [ lemma : growth_rate_derivative ] we obtain which implies since the function is monotonically decreasing for .next , due to the definition of in we know that the partial derivative must be zero when calculated for .the expression of this partial derivative is so we obtain as shown above , for any such that we have . substituting in the latter equationwe obtain which implies .therefore , the only value of such that is . due to continuity of and to the fact that as ( as shown in subsection b of appendix [ sec : proof_inversion ] ) we conclude that for all . the normalized typical minimum distance of an ensemble is the real number fig .[ fig : growth ] shows for the ensemble , where is the output degree distribution used in the standards , ( see details in table [ table : dist ] ) and for three different values . the growth rate , , of a linear random code ensemble with rate is also shown .it can be observed how the curve for does not cross the -axis , the curve for has and the curve for has . .the continuous line shows the growth rate of a linear random code with rate .the dot - dashed , dashed , and dotted lines show the growth rates of the ensemble for , and , respectively . ] fig .[ fig : gilbert ] shows the overall rate of the raptor code ensemble versus the normalized typical minimum distance .it can be observed how , for constant overall rate , increases as the outer code rate decreases .it also can be observed how decreasing allows to get closer to the asymptotic gilbert - varshamov bound . vs. the normalized typical minimum distance .the continuous line represents the asymptotic gilbert - varshamov bound .the markers represent raptor codes ensembles with different outer code rates , . 
]in this section we aim at determining under which conditions the ensemble exhibits good normalized typical distance properties .more specifically , given a distribution and an overall rate , we are interested in the allocation of the rate between the outer code and the fixed - rate lt code to achieve a strictly positive normalized typical minimum distance .we define the _ positive _ normalized typical minimum distance region of an ensemble as the set of code rate pairs for which the ensemble possesses a positive normalized typical minimum distance .formally : where we have used the notation to emphasize the dependence on , and .the positive normalized typical distance region for an lt output degree distribution is developed in the following theorem .the region is given by [ theorem_inner ] see appendix [ sec : proof_inversion ] .the next two theorems characterize the distance properties of a fixed - rate raptor code with linear random outer code picked randomly in the ensemble with belonging to .[ theorem : min_dist ] let the random variable be the minimum nonzero hamming weight in the code book of a fixed - rate raptor code picked randomly in an ensemble . if then exponentially in , for all .it is well known that this probability can be upper bounded via union bound as we will start by proving that the sequence is non - decreasing for and sufficiently large .as , the expression converges to , being given in . from lemma [ corollary : der ] we know that for . as , from theorem [theorem : growth_rate ] we have .hence , for sufficiently large , , and is non decreasing .we can now write where we have used , being given in . as we have .moreover , for all , provided .hence , tends to exponentially on . as from theorem [ theorem :min_dist ] , we have an exponential decay of the probability to find codewords with weight less than when the pair belongs to the region . such an exponential decay shall be attributed to the presence of the linear random outer code characterized by a dense parity - check matrix , which makes the growth rate function monotonically increasing for the values of for which it is negative . as a comparison , for code ensembles characterized by a positive normalized typical minimum distance ,the growth rate function starts from with negative derivative , reaches a minimum , and then increases to cross the -axis . in this case , for the sum in the upper bound is dominated by those terms corresponding to small values of , yielding either a polynomial decay ( as for gallager s codes ) or even tending to a constant ( as it is for irregular unstructured ensembles ) . [theorem : zero_codeword ] let the random variable be the multiplicity of codewords of weight zero in the code book of a fixed - rate raptor code picked randomly in the ensemble . if then in order to prove the statement we have to show that the probability measure of any event with vanishes as .we start by analyzing the behavior of =a_0 ] . using an argument analogous to the one adopted in the proof of theorem [ theorem : growth_rate ] , for large enough we have where therefore we can upper bound ] which , if , implies \rightarrow 1 ] and , via linear programming , that the minimum is attained if and only if and for all .since in the limit as of \rightarrow 1 $ ] , we necessarily have a vanishing probability measure for any event with . 
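Since the closed-form expressions of theorem [theorem:we] are not reproduced above, the following sketch reconstructs the average weight enumerator numerically from the ingredients already described: the expected weight enumerator of the linear random outer code and the binomially distributed LT output weight conditioned on the intermediate weight. The sign of (1/n) ln A_w at small normalized weights is then used as a crude finite-length surrogate for the positive-distance condition when the overall rate is fixed and split differently between the two codes. The degree distribution and all sizes are illustrative assumptions, so this is a sanity-check tool rather than the analytical region itself.

```python
# Numerical sketch of the average weight enumerator A_w of the fixed-rate Raptor
# ensemble, reconstructed from the description above: an (h, k) outer code drawn from
# the binary linear random ensemble followed by an (n, h) fixed-rate LT code with
# output degree distribution Omega.  All parameter values are illustrative.
from math import comb
import numpy as np

def p1(l, d, h):
    """P(an output symbol of degree d covers an odd number of the l nonzero
    intermediate bits), neighbours chosen without replacement."""
    if l == 0:
        return 0.0
    return sum(comb(l, j) * comb(h - l, d - j)
               for j in range(1, min(l, d) + 1, 2)) / comb(h, d)

def comb_vec(n, w):
    return np.array([comb(n, int(x)) for x in w], dtype=float)

def average_we(k, h, n, omega):
    """E[A_w] for w = 0..n.  Outer ensemble: E[A_l] = C(h,l) 2^(k-h) for l >= 1;
    LT output weight given intermediate weight l is Binomial(n, p_l)."""
    w = np.arange(n + 1)
    cn = comb_vec(n, w)
    A = np.zeros(n + 1)
    A[0] = 1.0                                   # the all-zero codeword
    for l in range(1, h + 1):
        pl = sum(prob * p1(l, d, h) for d, prob in omega.items())
        outer = comb(h, l) * 2.0 ** (k - h)
        A += outer * cn * pl ** w * (1.0 - pl) ** (n - w)
    return A

omega = {1: 0.05, 2: 0.5, 3: 0.25, 4: 0.2}       # assumed degree distribution
k, n = 64, 128                                   # overall rate r = 1/2
for h in (68, 80):                               # two splits of r between outer and inner code
    A = average_we(k, h, n, omega)
    delta = np.arange(1, n + 1) / n
    exponent = np.log(np.maximum(A[1:], 1e-300)) / n
    small = exponent[delta <= 0.05]
    print(f"h={h} (r_o={k/h:.2f}, r_i={h/n:.2f}): "
          f"max (1/n) ln A_w over 0 < w/n <= 0.05 is {small.max():+.3f}")
```

A negative maximum at small normalized weights indicates that low-weight codewords are exponentially rare on average for that rate split, while a positive value suggests an expected error floor; the exact boundary of the region requires the analytical expressions of this section.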
from theorem [ theorem : min_dist ] and theorem [ theorem : zero_codeword ] , a fixed - rate raptor code picked randomly in the ensemble is characterized with probability approaching as by a minimum distance at least equal to and by an encoding function whose kernel only includes the all - zero length message ( hence bijective ) . in the following we introduce an outer region to that only depends on the average output degree . [ pro : outer ] the positive normalized typical minimum distance region of a fixed - rate raptor code ensemble fulfills , where with being the only root of in , numerically .see appendix [ sec : proof_outer ] . ' '' '' ' '' '' + & 0.0098 & 0.0048 + & 0.4590 & 0.4965 + & 0.2110 & 0.1669 + & 0.1134 & 0.0734 + & & 0.0822 + & & 0.0575 + & & 0.0360 + & 0.1113 & + & 0.0799 & + & & 0.0012 + & & 0.0543 + & 0.0156 & + & & 0.0182 + & & 0.0091 + & 4.6314 & 5.825 ' '' '' ' '' '' + [ table : dist ] [ example_region ] in fig .[ fig : region ] we show the positive normalized typical minimum distance region , for and ( see table [ table : dist ] ) together with their outer bound .it can be observed how the outer bound is tight in both cases except for inner codes rates close to .the figure also shows several isorate curves , along which the rate of the raptor code is constant .for example , in order to have a positive normalized typical minimum distance and an overall rate , the figure shows that the rate of the ouer code must lay below for both distributions .let us assume we want to design a fixed - rate raptor code , with degree distribution or , overall rate and for a given length , which we assume to be large .different choices for and are possible . if is not chosen as the average minimum distance of the ensemble will not grow linearly on . hence , many codes in the ensemble will exhibit high error floors even under erasure decoding .0.48 and the dashed lines with white markers represents its outer bound .the gray dashed lines represent isorate curves for different rates .,title="fig:",scaledwidth=105.0% ] [ fig : region_a ] 0.48 and the dashed lines with white markers represents its outer bound .the gray dashed lines represent isorate curves for different rates .,title="fig:",scaledwidth=105.0% ] [ fig : region_b ]in this section experimental results are presented to validate the analytical results obtained in the previous sections . by means of exampleswe illustrate how the developed results can be used to make accurate statements about the performance of fixed - rate raptor code ensembles in the finite length regime .furthermore , we provide some results that show a tradeoff between performance and decoding complexity .finally we present some simulation results that show that the results obtained for linear random outer codes are a fair approximation for the results obtained with the standard r10 raptor outer code ( see ) . in this section we will consider raptor code ensembles for different values of , , and but keeping the overall rate of the raptor code constant to . fig . 
[fig : region_results ] shows the boundary of and for distribution together with an isorate curve for .the markers along the isorate curve in the figure represent the two different combinations of and that will be considered in this section .the first point ( , ) , marked with an asterisk , is inside but very close to the boundary of for .we will refer to ensembles corresponding to this point as _ bad _ ensembles .the second point , ( , ) marked with a triangle , is inside and quite far from the boundary of for .we will refer to ensembles corresponding to this point as _ good _ ensembles .0.49 its outer bound .the dashed - dotted line represents the isorate curve for and the markers represent two different points along the isorate curve with the same rate but different values of and .the lower figure shows the typical minimum distance as a function of the blocklength for ensembles with and and .the markers represent whereas the lines represent .,title="fig : " ] [ fig : region_results ] 0.49 [ fig : d_min_delta_star ] in order to analyze ensembles of finite length raptor codes it is useful to introduce a notion of minimum distance for finite length .the typical minimum distance , of an ensemble is defined as the integer number this definition will come in handy when we expurgate raptor code ensembles .in fact , at least half of the codes in the ensemble will have a minimum distance of or larger .the equivalent of in the asymptotic regime is , the ( _ asymptotic _ ) normalized minimum distance of the ensemble . for sufficiently large expects that converges to .[ fig : d_min_delta_star ] shows and as a function of the blocklength .it can be observed how the good ensemble has a larger typical minimum distance than the bad ensemble .in fact for all values of shown in fig . 
[fig : d_min_delta_star ] for the bad ensemble .we can also see how already for small values of the and are very similar .hence , the result of our asymptotic analysis of the minimum distance holds already for small values of .the expression of the average weight enumerator in theorem [ theorem : we ] can be used in order to upper bound the average over a with erasure probability , .however , the upper bound proposed in needs to be slightly modified to take into account codewords of weight .we have \leq p^{(\mathsf s)}_{b}(n , k,\epsilon ) \nonumber \\ & + \sum_{e=1}^{n - k } { n \choose e } \epsilon^e ( 1-\epsilon)^{n - e } \min \left\{1 , \sum_{w=1}^e { e \choose w } \frac{{a}_w}{{n \choose w}}\right\ } + { a}_0 - 1 \ ] ] where is the singleton bound considering raptor codes in a fixed - rate setting also allows us to expurgate raptor code ensembles as it was done in for ldpc code ensembles .let us consider an integer so that we can define the expurgated ensemble as the ensemble of codes in the ensemble whose minimum distance is .the expurgated ensemble will contain a fraction at least of the codes in the original ensemble .from it is known that the average of the expurgated ensemble can be upper bounded by : for each ensemble considered in this section codes were selected randomly from the ensemble .for each code monte carlo simulations over a were performed until errors were collected or a maximum of codewords were simulated .we remark that the objective here was not so much characterizing the performance of every single code but rather to characterize the average performance of the ensemble .[ fig : ub_128 ] shows the vs the erasure probability for two ensembles with and that have different outer code rates , ( good ensemble ) and ( bad ensemble ) .the good ensemble is characterized by a typical minimum distance whereas the bad ensemble is characterized by ( cf .[ fig : d_min_delta_star ] ) .for the two ensembles the upper bound in holds for average cer . however , the performance of the codes in the ensemble shows a high dispersion due to the short blocklength ( ) .in fact in both ensembles there are codes with minimum distance equal to zero which have ( around for the good ensemble and for the bad ensemble ) . comparing fig .[ fig : ub_good_128 ] and fig .[ fig : ub_bad_128 ] one can easily see how the fraction of codes performing close to the random coding bound is larger in the good ensemble than in the bad ensemble . for the good ensemble fig .[ fig : ub_good_128 ] shows also an upper bound on the average for the expurgated ensemble with , that has a lower error floor . 
for the bad ensembleno expurgated ensemble can be defined ( no exists that leads to in ) .0.49 for two ensembles with and but different values of .the solid , dashed and dot - dashed lines represent respectively the singleton bound , the berlekamp random coding bound and the upper bound in .the dotted line represents the upper bound for the expurgated ensemble for .the markers represent the average of the ensemble and the thin gray curves represent the performance of the different codes in the ensemble , both obtained through monte carlo simulations.,title="fig : " ] [ fig : ub_good_128 ] 0.49 for two ensembles with and but different values of .the solid , dashed and dot - dashed lines represent respectively the singleton bound , the berlekamp random coding bound and the upper bound in .the dotted line represents the upper bound for the expurgated ensemble for .the markers represent the average of the ensemble and the thin gray curves represent the performance of the different codes in the ensemble , both obtained through monte carlo simulations.,title="fig : " ] [ fig : ub_bad_128 ] fig .[ fig : ub_256 ] shows the vs for two ensembles using the same outer code rates as in fig .[ fig : ub_128 ] but this time for .it can be observed how the shows somewhat less dispersion than for . if we compare fig .[ fig : ub_good_256 ] and fig .[ fig : ub_good_128 ] we can see how for the good ensemble ( ) the error floor is much lower for than for , due to an increase in the typical minimum distance .in fact , whereas for there were some codes with minimum distance zero for we did not find any code with minimum distance zero out of the codes which were simulated . for the good ensemble it is possible again to considerably lower the error floor by expurgationhowever , comparing fig .[ fig : ub_bad_256 ] and fig .[ fig : ub_bad_128 ] we can see how the error floor is approximately the same for and , because in both cases the typical minimum distance is zero .0.49 for two ensembles with and but different values of .the solid , dashed and dot - dashed lines represent respectively the singleton bound , the berlekamp random coding bound and the upper bound .the dotted line represents the upper bound for the expurgated ensemble for .the markers represent the average of the ensemble and the thin gray curves represent the performance of the different codes in the ensemble , both obtained through monte carlo simulations.,title="fig : " ] [ fig : ub_good_256 ] 0.49 for two ensembles with and but different values of .the solid , dashed and dot - dashed lines represent respectively the singleton bound , the berlekamp random coding bound and the upper bound .the dotted line represents the upper bound for the expurgated ensemble for .the markers represent the average of the ensemble and the thin gray curves represent the performance of the different codes in the ensemble , both obtained through monte carlo simulations.,title="fig : " ] [ fig : ub_bad_256 ] so far we have only considered the performance under decoding . 
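Decoding cost is taken up next. As a rough illustration of how the number of inactivations can be estimated by simulation, the sketch below builds the constraint matrix of one code (outer parity checks plus the LT equations of the unerased output symbols, reusing lt_generator_matrix from the encoding sketch), peels degree-1 equations and inactivates an unresolved intermediate symbol whenever peeling stalls. Only the structure matters for the count, so symbol values are not tracked; the inactivation choice is a simple heuristic, not the scheduling used in deployed decoders.

```python
# Structural sketch of inactivation counting for one fixed-rate Raptor code over a BEC.
# Sizes, erasure rate and the inactivation heuristic are illustrative assumptions.
import numpy as np

def count_inactivations(M):
    """M: binary constraint matrix, rows = equations, columns = intermediate symbols.
    A row touching exactly one unresolved column resolves it (peeling); when peeling
    stalls, the unresolved column of largest weight is inactivated.  Columns touched
    by no equation are also counted as inactivated, which is a simplification."""
    unresolved = set(range(M.shape[1]))
    inactivations = 0
    while unresolved:
        cols = sorted(unresolved)
        residual_deg = M[:, cols].sum(axis=1)
        rows_deg1 = np.where(residual_deg == 1)[0]
        if len(rows_deg1) > 0:
            r = rows_deg1[0]
            c = next(c for c in cols if M[r, c])
            unresolved.discard(c)                       # resolved by peeling
        else:
            c = max(cols, key=lambda c: M[:, c].sum())  # inactivation heuristic
            unresolved.discard(c)
            inactivations += 1
    return inactivations

rng = np.random.default_rng(1)
k, h, n, eps = 64, 80, 128, 0.15                 # illustrative sizes and erasure rate
H_outer = rng.integers(0, 2, size=(h - k, h))    # dense random outer parity checks
G_lt = lt_generator_matrix(h, n)                 # helper from the encoding sketch above
received = rng.random(n) > eps                   # output symbols surviving the BEC
M = np.vstack([H_outer, G_lt[:, received].T])
print("received output symbols:", int(received.sum()),
      " inactivations:", count_inactivations(M))
```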
in practical systemsone needs to consider decoding complexity as well .when inactivation decoding is used the decoding throughput is largely determined by the number of inactivations needed for decoding , since the decoding complexity is cubic in the number of inactivations .[ fig : inact ] shows the averaged number of inactivations needed for ensembles of raptor codes with output degree distribution section .it can be observed how the good ensembles ( ) need more inactivations than bad ensembles ( ) .hence , the better performance obtained by using an outer code with lower rate comes at the cost of a higher decoding complexity . and the square markers for .the solid line stands for and the dashed line for . ] in this section we illustrate by means of a numerical example how the results obtained for linear random outer code closely approximate the results with the standard r10 raptor outer code .we consider raptor codes with an degree distribution .[ fig : region_compare ] shows the positive growth rate region for such a degree distribution ( assuming a linear random outer code ) and the three different rate points , two of which are inside the region while the third one lays outside .the rate pairs for the three points are specified in the figure caption .[ fig : cer_compare ] shows the average obtained through monte carlo simulations for the ensembles of raptor codes with , output degree distribution and two different outer codes , the standard r10 outer code and a linear random outer code .for the three rate points considered the average cer using the standard outer code and a linear random outer code are very close .as it can be observed , the error floor behavior of the raptor code ensemble with r10 outer code is in agreement with the position of the corresponding point on the plane with respect to the region , although this region is obtained using the simple linear random outer code model . for rate points inside error floor is much lower , and it tends to become lower the further the point is from the boundary of .the markers represent three different rate points all of them with but with different inner code rates , , and .the lower figure shows the average for raptor code ensembles using as output degree distribution and two different outer codes , the standard outer code of r10 raptor codes and a linear random outer code , ( l.r . ) in the legend.,title="fig : " ] [ fig : region_compare ] 0.49 [ fig : cer_compare ]in this work we have considered ensembles of binary fixed - rate raptor codes which use linear random codes as outer codes .we have derived the expression of the average of an ensemble and the expression of the growth rate of the as functions of the rate of the outer code and the rate and degree distribution of the inner code . 
based on these expressionswe are able to determine necessary and sufficient conditions to have raptor code ensembles with a positive typical minimum distance .a simple necessary condition has been developed too , which only requires ( besides the inner and outer code rates ) the knowledge of the average output degree .simulation results have been presented that demonstrate the applicability of the theoretical results obtained for finite length raptor codes .moreover , simulation results have been presented that show that the performance of raptor codes with linear random outer codes is close to that of raptor codes with the standard outer code of r10 raptor codes .thus , we speculate that the results obtained for raptor codes with linear random outer codes hold as first approximation for standard r10 raptor codes . the work presented in this paper helps to understand the behavior of fixed - rate raptor codes under decoding and can be used to design raptor codes for decoding . despite the fact that only the fixed - rate setting has been considered , we speculate that raptor code ensembles with a good fixed - rate performance will have also a good performance in a rateless setting .although only binary raptor codes have been considered , the authors believe that the work can be extended to non - binary raptor codes with a limited effort .we will first prove that for all pairs in we have a positive normalized typical minimum distance .then we will prove that this is not possible for any other pair .a sufficient condition for a positive normalized typical minimum distance is which , from theorem [ theorem : growth_rate ] , is equivalent to as done in lemma [ lemma : growth_rate_derivative ] and lemma [ corollary : der ] , let us use the notation to emphasize the dependence on .we now show that \end{aligned}\ ] ] that is we can invert maximization with respect to and limit as , so that the region in is obtained .this fact is proved by simply showing that that is the function is right - continuous at .it suffices to show where is an interval independent of and such that the function is bounded over it , i.e. , in fact , under these conditions we have uniform convergence of to in the interval as , namely , the second inequality in implies .moreover , denoting by the maximizing , we have which implies .so we have which yields as desired .next , we prove .we first observe that in the case for all even ( in which case is strictly increasing ) by direct computation we have for all and for all .hence in this case we can take . in all of the other cases thereexists such that for all and we can take .the existence of ( independent of ) such that the maximum is not taken for all is proved as follows . denoting and , we have since for all and since there exists such that this latter inequality implies uniformly with respect to , which is equivalent to for all , independently of .therefore the maximum can not be taken between and , with independent of .so far we have proved that the condition on expressed by theorem [ theorem_inner ] is sufficient to have a positive normalized typical minimum distance .now we need to show that this condition is also necessary .we need to prove that for the ensemble all rate pairs such that , the derivative of the growth rate at is positive , . 
according to lemma [ lemma : growth_rate_derivative ]the expression of corresponds to hence , since is the sum of two terms the first of which diverges to as , a necessary condition for the derivative to be negative is that the second term diverges to , i.e. , .this case is analyzed in the following lemma .[ lemma : limit_p ] if then in case the lt distribution is such that for all odd , and for any other lt distribution .let us recall that is the probability that the lt enconder picks an odd number of nonzero intermediate bits ( with replacement ) given that the intermediate codeword has hamming weight .if for at least one odd , then the only case in which a zero lt encoded bit is generated with probability is the one in which the intermediate word is the all - zero sequence .if for all odd , there is also another case in which a nonzero bit is output by the lt encoder with probability , i.e. , the case in which the intermediate word is the all - one word .consider now a pair such that .for a fixed - rate raptor code ensemble corresponding to this pair , we have a positive typical minimum distance if and only if . by lemma[ lemma : limit_p ] this implies when for at least one odd .it implies either or otherwise . that can not converge to follows from the proof of sufficiency ( as shown, the maximum for is taken for ) . to complete the proofwe now show that , in the case where for all odd , assuming leads to a contradiction . in case for all odd ,a taylor series for around is .assuming , we consider the left - hand side of and calculate its limit as .we obtain where the last equality follows from the above - stated taylor series . according to ,the last expression must be equal to zero , a constraint which requires the second limit to diverge to ( as the first limit diverges to ) .this , however , can not be fulfilled in any case when converge to zero and to one .in fact , using standard landau notation , when or the second limit converges , while when it diverges to . the proof consists of deriving a lower bound for and evaluating it for . to derive a lower bound for first derive a lower bound for .observing we see how is obtained as a summation over all possible intermediate hamming weights .a lower bound to can be obtained by limiting the summation to the term yielding to where we have introduced representing the probability that the inner encoder outputs a codeword with hamming weight given that the encoder input has weight .hence , we can write we shall now lower bound .we denote by note that .we have that with being the probability that the intermediate symbols selected to encoder are all zero . for large , we have denoting by , we have by jensen s inequality we have thus that replacing in and recalling that we get if we now impose the we obtain : this expression is only valid when the denominator is negative , that is , for , being the only root of the denominator in , whose approximate numerical value is .the authors would like to to thank prof .massimo cicognani for the useful discussions about the proof of theorem 3 .e. paolini , g. liva , b. matuz , and m. chiani , `` maximum likelihood erasure decoding of ldpc codes : pivoting algorithms and code design , '' _ ieee trans ._ , vol . 60 , no . 11 , pp . 32093220 , nov . 2012 .b. schotsch , h. schepker , and p. vary , `` the performance of short random linear fountain codes under maximum likelihood decoding , '' in _ proc . of the 2011 ieee int .conf . on commun ._ , kyoto , japan , jun .2011 , pp . 15 .f. lzaro blasco , e. 
paolini , g. liva , and g. bauch , `` on the weight distribution of fixed - rate raptor codes , '' in _ proc . of the 2015 ieee int .symp . on inf .theory _ , hong kong , china , jun .2015 , pp . 28802884 . c. di , d. proietti , i. telatar , t. richardson , and r. urbanke , `` finite - length analysis of low - density parity - check codes on the binary erasure channel , '' _ ieee trans .inf . theory _ ,48 , no . 6 , pp . 1570 1579 , jun . 2002
raptor code ensembles with linear random outer codes in a fixed - rate setting are considered . an expression for the average distance spectrum is derived and this expression is used to obtain the asymptotic exponent of the weight distribution . the asymptotic growth rate analysis is then exploited to develop a necessary and sufficient condition under which the fixed - rate raptor code ensemble exhibits a strictly positive typical minimum distance . the condition involves the rate of the outer code , the rate of the inner fixed - rate code and the lt code degree distribution . additionally , it is shown that for ensembles fulfilling this condition , the minimum distance of a code randomly drawn from the ensemble has a linear growth with the block length . the analytical results can be used to make accurate predictions of the performance of finite length raptor codes . these results are particularly useful for fixed - rate raptor codes under erasure decoding , whose performance is driven by their weight distribution . fountain codes , raptor codes , erasure correction , maximum likelihood decoding .
in the last half century , considerable studies have been made on the boltzman - gibbs - shannon entropy and the fisher information entropy ( matrix ) , both of which play important roles in thermodynamics and statistical mechanics of classical and quantum systems - .the entropy flux and entropy production have been investigated in connection with the space volume contraction . in the information geometry , the fisher information matrix provides us with the distance between the neighboring points in the rieman space spanned by probability distributions .the fisher information matrix gives the lower bound of estimation errors in the cramr - rao theorem . in a usual system consisting of particles ,the entropy and energy are proportional to ( _ extensive _ ) , and the probability distribution is given by the gaussian distribution belonging to the exponential family . in recent year , however , many efforts have been made for a study on _ nonextensive _ systems in which the physical quantity of particles is not proportional to . the nonextensivity has been realized in various systems such as a system with long - range interactions , a small - scale system with large fluctuations in temperature and a multi - fractal system .tsallis has proposed the generalized entropy ( called the tsallis entropy hereafter ) defined by where is the entropic index , denotes the probability distribution of a state at time , the boltzman constant is hereafter unity and expresses the -logarithmic function defined by .the tsallis entropy accounts for the nonextensivity of the entropy in nonextensive systems . in the limit of , reduces to the normal and then agrees with the boltzman - gibbs - shannon entropy expressed by the probability distribution derived by the maximum - entropy method ( mem ) with the use of the tsallis entropy is given by non - gaussian distribution , which reduces to the gaussian and cauchy distributions for and , respectively .many authors have discussed the fisher information matrix in nonextensive systems - . in order to derive the _generalized _ fisher information matrix g whose components are given by - the generalized kullback - leibler distance of between the two distributions and has been introduced : with \ : dx , \nonumber \\ & = & - \:\frac{1}{(q-1 ) } \left [ 1-\int p(x)^q\:p'(x)^{1-q } \:dx \right ] , \label{eq : a6}\end{aligned}\ ] ] where and denotes a set of parameters specifying the distribution . in the limit of , given by eq .( [ eq : a4 ] ) reduces to the conventional fisher information matrix .it should be remarked that csiszr had proposed the generalized divergence measure given by dx , \label{eq : a7}\end{aligned}\ ] ] where is assumed to be a convex function with the condition . for , eq .( [ eq : a7 ] ) yields the conventional kullback - leibler divergence given by dx .\label{eq : a8}\end{aligned}\ ] ]equation ( [ eq : a7 ] ) for leads to the generalized kullback - leibler distance given by eqs .( [ eq : a5 ] ) and ( [ eq : a6 ] ) . 
the generalized divergence given by eq .( [ eq : a6 ] ) , which is in conformity with the tsallis entropy , is equivalent to the -divergence of amari with .the escort probability and the generalized fisher information matrix are discussed in refs .the fisher information entropy in the cramr - rao inequality has been studied for nonextensive systems .extensive studies on the tsallis and fisher entropies have been made for reaction - diffusion systems , by using the mem with exact stationary and dynamical solutions for nonlinear fpe .these studies nicely unify the concept of normal , super- and sub - diffusions by a single picture .the purpose of the present paper is to investigate the stationary and dynamical properties of the information entropies in the coupled langevin model which has been widely adopted for a study of various stochastic systems ( for a recent review , see ) . the langevin model subjected to multiplicative noiseis known to be one of typical nonextensive systems .recently the coupled langevin model subjected to additive and multiplicative noise has been discussed with the use of the augmented moment method which is the second - moment method for local and global variables .we will obtain the probability distribution of the nonextensive , coupled langevin model by using the fokker - planck equation ( fpe ) method with the mean - field approximation .we have made a detailed study on effects on the stationary information entropies of additive and multiplicative white noise , external force , input signal , couplings and the number of constituent elements in the adopted model . by solving the fpe both by the proposed analytical scheme and by the partial difference equation ( pde ) method , we have investigated the transient responses to an input signal and an external force which are applied to the stationary state .the outline of the paper is as follows . in sec .2 , we describe the adopted , -unit coupled langevin model .analytical expressions for the tsallis entropy and generalized fisher information entropy in some limiting cases are presented .numerical model calculations of stationary and dynamical entropies are reported . in sec . 3 ,discussions are presented on the entropy flux and entropy production and on a comparison between -moment and normal - moment methods in which averages are taken over the escort and normal distributions , respectively .section 4 is devoted to our conclusion . 
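Before turning to the model, a small numerical illustration of the definitions in eqs. (a1) and (a6) may be useful. The two test distributions below are arbitrary Gaussians and the grid is an assumption, but the formulas follow the expressions quoted above (with k_B = 1), and the q -> 1 limits recover the Boltzmann-Gibbs-Shannon entropy and the ordinary Kullback-Leibler divergence.

```python
# Numerical illustration of the Tsallis entropy of eq. (a1) and of the generalized
# Kullback-Leibler distance of eq. (a6) on a grid; the two Gaussian test
# distributions and the grid are arbitrary choices.
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def gaussian(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

def tsallis_entropy(p, q, dx):
    if abs(q - 1.0) < 1e-12:                      # Boltzmann-Gibbs-Shannon limit
        return -np.sum(p * np.log(np.maximum(p, 1e-300))) * dx
    return (1.0 - np.sum(p ** q) * dx) / (q - 1.0)

def generalized_kl(p, p2, q, dx):
    if abs(q - 1.0) < 1e-12:                      # ordinary Kullback-Leibler limit
        return np.sum(p * np.log(np.maximum(p, 1e-300) / np.maximum(p2, 1e-300))) * dx
    return -(1.0 - np.sum(p ** q * p2 ** (1.0 - q)) * dx) / (q - 1.0)

p  = gaussian(x, 0.0, 1.0)
p2 = gaussian(x, 0.5, 1.5)
for q in (0.8, 1.0, 1.2):
    print(f"q={q}: S_q = {tsallis_entropy(p, q, dx):.4f}, "
          f"K_q(p||p') = {generalized_kl(p, p2, q, dx):.4f}")
```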
in the appendix, we summarize the information entropies calculated with the use of the probability distribution derived by the mem .the cramr - rao inequality in nonextensive systems is shown to be expressed by the _ extended _ fisher information entropy which is different from the _ generalized _ fisher entropy .we will discuss effects of additive and multiplicative _ colored _ noise on information entropies , by using the result recently obtained by the functional - integral method .we have adopted the -unit coupled langevin model subjected to additive and multiplicative white noise given by with +i(t ) .\hspace{1cm}\mbox{( to ) } \label{eq : f2}\ ] ] here and denote arbitrary functions of , the coupling strength , an external input , and are the strengths of multiplicative and additive noise , respectively , and and express zero - mean gaussian white noises with correlations given by we have adopted the mean - field approximation for given by +i(t ) , \label{eq : f3}\end{aligned}\ ] ] with , \label{eq : f4}\end{aligned}\ ] ] where the m=1,2 ] where +=y ] which diverges at .with the use of the total distribution of given by eq .( [ eq : g1 ] ) , the tsallis entropies of single - unit and -unit ensembles are given by with eliminating from eqs .( [ eq : x2 ] ) and ( [ eq : x3 ] ) , we get where . equation ( [ eq : x5 ] ) shows that the tsallis entropy is non - extensive except for , for which reduces to the extensive boltzmann - gibbs - shannon entropy : . substituting the stationary distributions given by eqs .( [ eq : g16 ] ) , ( [ eq : g19 ] ) and ( [ eq : g22 ] ) to eq .( [ eq : a1 ] ) , we get the analytic expression for the tsallis entropy of a single unit given by , \hspace{1cm}\mbox{for , } \label{eq : h1}\\ & = & \left ( \frac{1-c_q}{q-1 } \right ) , \hspace{0.5cm}\mbox{for } \label{eq : h2}\end{aligned}\ ] ] with where stands for the beta function , and in eqs .( [ eq : h4 ] ) and ( [ eq : h3 ] ) are given by eqs .( [ eq : g24 ] ) and ( [ eq : g23 ] ) , respectively .we consider the generalized fisher information entropy given by from eqs .( [ eq : g1 ] ) and ( [ eq : h7 ] ) , the generalized fisher entropy for the -unit system is given by , \label{eq : x7 } \\ & = & q \int \cdot\cdot \int \left(\sum_i \frac{\partial \ln p_i(x_i)}{\partial \theta } \right)^2 \:\pi_i\ : p_i(x_i)dx_i , \label{eq : x8 } \\ & = & q \sum_i \int \left ( \frac{\partial \ln p_i(x_i)}{\partial \theta}\right)^2 \:p_i(x_i)\:dx_i + \delta g_q , \label{eq : x9 } \\ & = & n g_q^{(1 ) } , \label{eq : x10}\end{aligned}\ ] ] because the cross term of eq .( [ eq : x9 ] ) vanishes : where stands for the generalized fisher entropy in a single subsystem .the generalized fisher information entropy is extensive in the nonextensive system as shown in : the probability distribution obtained by the fpe for our langevin model is determined by the six parameters of , , , , and .when adopt in eq .( [ eq : h7 ] ) , for example , we get the generalized fisher entropy given by -e\left [ \left ( \frac{\partial y(x)}{\partial i } \right ) \right]^2 \right),\end{aligned}\ ] ] where ] given by }{d t } & = & \frac{d}{d t } \int p_q(x , t)\:x^n \:dx,\\ & = & \frac{q}{c_q}\int \left ( \frac{\partial p(x , t)}{\partial t}\right)\:p(x , t)^{q-1}\:x^n \:dx -\frac{1}{c_q}\left ( \frac{d c_q}{dt } \right ) e_q[x^n ] , \label{eq : l2 } \\ \frac{d c_q}{dt } & = & q \int \left ( \frac{\partial p(x , t)}{\partial t}\right)\:p(x , t)^{q-1}\:dx , \label{eq : l0}\end{aligned}\ ] ] we have obtained equations of motion for ( ] ) , valid for and , 
as given by equations ( [ eq : l3 ] ) and ( [ eq : l4 ] ) lead to the stationary solution given by \(2 ) we rewrite the distribution of given by eqs .( [ eq : g11])-([eq : g15 ] ) in terms of , and , as ^{\frac{1}{1-q } } e^{y(x ) } , \label{eq : l7 } \end{aligned}\ ] ] with where expresses the normalization factor [ eq .( [ eq : g15 ] ) ] . in deriving eqs .( [ eq : l7])-([eq : l9 ] ) , we have employed relations given by ,\end{aligned}\ ] ] which are obtained from eqs .( [ eq : g11 ] ) , ( [ eq : l5 ] ) and ( [ eq : l6 ] ) .\(3 ) then we have assumed that a solution of of the fpe given by eq .( [ eq : g8 ] ) is expressed by eqs .( [ eq : l7])-([eq : l9 ] ) in which stationary and are replaced by time - dependent and with equations of motion given by eqs .( [ eq : l3 ] ) and ( [ eq : l4 ] ) . dotted curves in the frames ( a ) and ( b ) of figs . 3 - 6express the results of stationary and calculated by eqs .( [ eq : l5 ] ) and ( [ eq : l6 ] ) for some typical sets of parameters .they are in good agreement with those shown by solid curves obtained with the use of the stationary distribution of given by eq .( [ eq : g14 ] ) . as will be shown shortly , the approximate , analytical method given by eqs .( [ eq : l3 ] ) , ( [ eq : l4 ] ) , ( [ eq : l7])-([eq : l9 ] ) provides fairly good results for dynamics of , and , and also for that of except for the transient period . in order to examine the validity of the analytical method discussed above ,we have adopted also the numerical method , using the partial difference equation ( pde ) derived from eq .( [ eq : g8 ] ) , as given by \left(\frac{b}{2 a}\right)[p(x+a)-p(x - a ) ] \nonumber \\ & + & \left(\frac{\alpha^2}{2 } x^2 + \frac{\beta^2}{2 } \right ) \left(\frac{b}{a^2}\right)[p(x+a , t)+p(x - a , t)-2p(x , t ) ] , \label{eq : k4}\end{aligned}\ ] ] with where and denote incremental steps of and , respectively .we impose the boundary condition : with , and the initial condition of where is the stationary distribution given by eqs .( [ eq : g14 ] ) and ( [ eq : g15 ] ) .we have chosen parameters of and such as to satisfy the condition : , which is required for stable , convergent solutions of the pde .* response to * we apply the pulse input signal given by where and denotes the heaviside function : for and zero otherwise .figure [ figg ] shows the time - dependent distribution at various for , , and .solid and dashed curves express the results of the pde method and the analytical method ( sec .3.6.1 ) , respectively .when input of is applied at , the distribution is gradually changed , moving rightward .the results of the analytical method are in good agreement with those obtained by the pde method , except for and .this change in induces changes in , , and , whose time dependences are shown in figs . [figh](a ) and [ figh](b ) , solid and dashed curves expressing the results of the pde method and the analytical method , respectively . by an applied pulse input , , and are increased while is decreased .the result for of the analytical method is in fairly good agreement with that obtained by the pde method .the calculated of the analytical method is also in good agreement with that of the pde method besides near the transient periods at and just after the input signal is on and off .this is expected due to the fact that given by eq .( [ eq : h11 ] ) is sensitive to a detailed form of because it is expressed by an integration of over , while is obtained by a simple integration of . 
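As an illustration of the partial-difference-equation approach described above, the sketch below advances a one-variable drift–diffusion FPE on a grid and evaluates the Tsallis entropy from the evolving distribution. It is a minimal sketch only: the drift A(x) = -λx + (α²/2)x + I and the diffusion D(x) = (α²/2)x² + β²/2 are assumptions consistent with the Langevin model quoted earlier (Stratonovich multiplicative noise), the discretization is a standard conservative central-difference scheme rather than the exact update used in the paper, and the time step b is chosen small enough to respect the stability condition mentioned in the text.

```python
import numpy as np

def fpe_step(p, x, a, b, lam=1.0, alpha=0.5, beta=0.5, I=0.0):
    """One explicit step of dp/dt = -d/dx[A(x) p] + d^2/dx^2[D(x) p],
    with assumed drift A(x) = -lam*x + (alpha**2/2)*x + I and diffusion
    D(x) = (alpha**2/2)*x**2 + beta**2/2.  Conservative central scheme;
    the paper's partial difference equation may differ in detail."""
    Ap = (-lam * x + 0.5 * alpha**2 * x + I) * p
    Dp = (0.5 * alpha**2 * x**2 + 0.5 * beta**2) * p
    Ap_r, Ap_l = np.r_[Ap[1:], 0.0], np.r_[0.0, Ap[:-1]]   # p = 0 outside the grid
    Dp_r, Dp_l = np.r_[Dp[1:], 0.0], np.r_[0.0, Dp[:-1]]
    p_new = p - (b / (2 * a)) * (Ap_r - Ap_l) + (b / a**2) * (Dp_r + Dp_l - 2 * Dp)
    return np.clip(p_new, 0.0, None)

def tsallis_entropy(p, x, q):
    """S_q = (1 - c_q)/(q - 1) with c_q = int p^q dx; -int p ln p as q -> 1."""
    if np.isclose(q, 1.0):
        return np.trapz(np.where(p > 0, -p * np.log(p), 0.0), x)
    return (1.0 - np.trapz(p**q, x)) / (q - 1.0)

# illustrative run with an input signal switched on mid-way
x = np.linspace(-10.0, 10.0, 401)
a = x[1] - x[0]
alpha, beta, lam = 0.5, 0.5, 1.0
D_max = 0.5 * alpha**2 * x[-1]**2 + 0.5 * beta**2
b = 0.2 * a**2 / D_max                      # keep b*D/a^2 well below 1/2 (stability)
p = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
for step in range(5000):
    I = 0.5 if step >= 2500 else 0.0        # crude stand-in for a pulse input
    p = fpe_step(p, x, a, b, lam=lam, alpha=alpha, beta=beta, I=I)
    p /= np.trapz(p, x)                     # renormalize against discretization drift
print(tsallis_entropy(p, x, q=1.5))
```

The run is deliberately short and the pulse timing is illustrative; the point is only that the entropy can be tracked directly from the numerically propagated p(x, t), as is done for the figures discussed above.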
for a comparison ,we show by chain curves , the results of the pde method when the step input given by is applied .the relaxation time of and is about 2.0 .it is noted that input signal for induces no changes in and , which has been already realized in the stationary state as shown by chain curves in figs .[ figd](c ) and [ figd](d ) .* response to * we modify the relaxation rate as given by which expresses an application of an external force of ( ) at with .figure [ figi](a ) and [ figi](b ) show the time dependence of , and with , , and for which .solid and dashed curves express the results of the pde method and the analytical method , respectively . when an external force is applied , and decreased whereas is increased .the results of the analytical method are in good agreement with those of the pde method .the relaxation times of and are 0.47 and 0.53 , respectively .in the preceding sec .2 , we have discussed the information entropies by using the probability distribution obtained by the fpe for the langevin model .it is worthwhile to compare it with the probability distribution derived by the mem .the variational condition for the tsallis entropy given by eq .( [ eq : a1 ] ) is taken into account with the three constraints : a normalization condition and -moments of and , as given by =\int p_q(x ) \:x \ : dx , \label{eq : b2 } \\\sigma_q^2 & = & e_q[(x-\mu_q)^2 ] = \int p_q(x ) \:(x-\mu_q)^2\:dx , \label{eq : b3}\end{aligned}\ ] ] where ] , we may see that eqs .( [ eq : p11 ] ) and ( [ eq : p12 ] ) lead to and in the limit of . in the opposite limit of , eqs .( [ eq : p6])-([eq : p8 ] ) yields that each of . and is proportional to and then divergent in this limit , though .it is noted that for and .we present some model calculations of , and in the stationary state , which are shown in fig .[ figl ] as a function of for ( dashed curves ) , ( chain curves ) and ( solid curves ) .we note that and . with increasing , decreased in the case of , while it is increased in the cases of and 1.0 .bag showed that is always decreased with increasing which disagrees with our result mentioned above : eqs .( [ eq : p6])-([eq : p8 ] ) are rather different from eqs .( 36 ) and ( 37 ) in ref . where non - gaussian properties of the distribution is not properly taken into account . in refs . , we have discussed equations of motion for normal moments of ( ] ) [ eq .( [ eq : g25 ] ) ] in the langevin model with , as given by these equations of motion are rather different from those for the -moments of and given by eqs .( [ eq : l3 ] ) and ( [ eq : l4 ] ) . indeed , eqs .( [ eq : l11 ] ) and ( [ eq : l12 ] ) yield stationary normal moments given by which are different from the stationary -moments of and given by eqs .( [ eq : l5 ] ) and ( [ eq : l6 ] ) , and which diverge at and , respectively .the time dependence of and becomes considerably different from those of and for an appreciable value of .figure [ figp](a ) , ( b ) , ( c ) and ( d ) show some examples of , , and , respectively , when a pulse input given by eq .( [ eq : k7 ] ) is applied with ( chain curves ) , ( dashed curves ) and ( solid curves ) . although is independent of , , and much increased at for larger . 
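Before continuing with the comparison in the figures, note that the distinction drawn here between q-moments, taken over the escort distribution, and normal moments, taken over p(x) itself, is easy to check numerically. The short sketch below computes both sets of moments for a distribution given on a grid; it assumes only the standard definitions with the escort distribution P_q(x) = p(x)^q / c_q, and the heavy-tailed example distribution and its q value are illustrative choices, not parameters from the paper.

```python
import numpy as np

def normal_moments(p, x):
    """Mean and variance over the ordinary distribution p(x)."""
    mu = np.trapz(p * x, x)
    return mu, np.trapz(p * (x - mu)**2, x)

def q_moments(p, x, q):
    """Mean and variance over the escort distribution P_q = p^q / c_q."""
    c_q = np.trapz(p**q, x)
    escort = p**q / c_q
    mu_q = np.trapz(escort * x, x)
    return mu_q, np.trapz(escort * (x - mu_q)**2, x)

# illustrative heavy-tailed (q-Gaussian-like) distribution on a grid
x = np.linspace(-20.0, 20.0, 4001)
nu = 5.0
p = (1.0 + x**2 / nu) ** (-(nu + 1) / 2)
p /= np.trapz(p, x)
q = 1.0 + 2.0 / (nu + 1.0)                  # illustrative q for this tail index
print(normal_moments(p, x), q_moments(p, x, q))
```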
in particular , is significantly increased because of the term in eq .( [ eq : l12 ] ) .we have so far considered additive and multiplicative _ white _ noise .in our recent paper , we have taken into account the effect of _ colored _ noise by employing the functional - integral method .we have assumed the langevin model subjected to additive ( ) and multiplicative ( ) colored noise given by with , \label{eq : r2 } \\\frac{d \phi(t)}{d t } & = & -\frac{1}{\tau_m } [ \phi(t)- \alpha \;\eta(t ) ] , \label{eq : r3 } \end{aligned}\ ] ] where and ( and ) express the relaxation time and strength of additive ( multiplicative ) noise , respectively , and and stand for independent zero - mean gaussian white noise . by applying the functional - integral method to the langevin model given by eqs .( [ eq : r1])-([eq : r3 ] ) , we have obtained the effective one - variable fpe , from which the effective langevin model is derived as with }. \label{eq : r6 } \ ] ] here is given by eq .( [ eq : l11 ] ) with , from which is determined in a self - consistent way . in the stationary state where given by eq .( [ eq : l13 ] ) with , we get from eq .( [ eq : r6 ] ) : } , \label{eq : r7 } \\ & = & \frac{1}{\tau_m } \left[(1+\lambda \tau_m)-\sqrt{(1+\lambda \tau_m)^2 - 2 \tau_m \alpha^2}\right ] .\label{eq : r8 } \end{aligned}\ ] ] we get an approximate expression given by which is shown to be a good approximation both for and . equations ( [ eq : r5 ] ) and ( [ eq : r9 ] ) show that effects of additive and multiplicative colored noise are described by and which are reduced by factors of and ) , respectively , from original values of and . the dependence of and is plotted in fig .[ figq](a ) and [ figq](b ) with for ( chain curves ) , ( dashed curves ) and ( solid curves ) with , and .we note that with increasing , is much increased for smaller whereas is much decreased for smaller .the dependence of and on may be understood from their dependence shown in figs . [ figc](c ) and [ figc](d ) .the dependence of and is plotted in fig . [ figq](a ) and [ figq](b ) with with , and . with increasing , ( )is decreased ( increased ) for and while no changes for .these behavior may be explained from figs . [figb](c ) and [ figb](d ) showing the dependence of and .we have discussed stationary and dynamical properties of the tsallis and fisher entropies in nonextensive systems .our calculation for the -unit coupled langevin model subjected to additive and multiplicative noise has shown the followings : \(i ) the dependence of and on the parameters of , , , , and in the coupled langevin model are clarified ( figs .[ figb]-[figf ] ) , and ( ii ) dynamical properties are well described by the analytical method for the fpe proposed in sec .2.6.1 , which shows that the relaxation times in transient responses of and to a change in are short ( ) while those in are fairly long ( ) .the difference between the parameter dependence of and in the item ( i ) arises from the fact that provides us with a global measure of ignorance while a local measure of positive amount of information .we have calculated the information entropies also by using the probability distribution derived by the mem , from which we get the followings : \(iii ) derived by the mem is rather different from that of the fpe for ( figs .[ figr ] and [ figs ] ) , for which the information entropies of the mem are independent of while those of the fpe depend on ( _ i.e. 
_ ) , and \(iv ) the cramr - rao inequality is expressed by the extended fisher entropy [ eq .( [ eq : e0 ] ) ] which is different from the generalized fisher entropy [ eq .( [ eq : d0 ] ) ] derived from the generalized kullback - leibler divergence [ eq .( [ eq : a6 ] ) ] .the item ( iv ) has not been clarified in previous studies on the fisher entropies in nonextensive systems - .the langevin model has been employed for a study of a wide range of stochastic systems .quite recently , the present author has proposed the generalized rate - code model for neuronal ensembles which is described by the coupled langevin - type equation . it would be interesting to discuss the dynamics of information entropies in such neural networks , which is left for our future study .this work is partly supported by a grant - in - aid for scientific research from the japanese ministry of education , culture , sports , science and technology . with the use of eqs .( [ eq : a1 ] ) and ( [ eq : b6 ] ) , the tsallis entropy is given by , \hspace{1cm}\mbox{for } \label{eq : c1 } \\ & = & \left(\frac{1-c_q}{q-1 } \right ) , \hspace{2cm}\mbox{for } \label{eq : c2}\end{aligned}\ ] ] with which yield here for and are given by eqs .( [ eq : b8 ] ) and ( [ eq : b10 ] ) , respectively . the distribution given by eq .( [ eq : b6 ] ) is characterized by two parameters of . by using eqs .( [ eq : a4 ] ) and ( [ eq : b6 ] ) , we obtain the component of the generalized fisher information matrix g given by - , \label{eq : d0 } \\ & = & q\:e [ ( x_i - e[x_i ] ) ( x_j - e[x_j ] ) ] , \label{eq : d1 } \hspace{1cm}\mbox{for }\end{aligned}\ ] ] with , \label{eq : d2}\end{aligned}\ ] ] where ] stands for the average over the escort distribution of . substituting the probability given by eq .( [ eq : b6 ] ) to eq .( [ eq : d0 ] ) , we get which yield a similar calculation leads to the ( 2,2)-component given by next we discuss the cramr - rao inequality in nonextensive systems . for the escort distribution given by eq .( [ eq : b4 ] ) which satisfies eqs .( [ eq : b2 ] ) and ( [ eq : b3 ] ) with = \int p_q(x)\ : dx,\ ] ] we get the cramr - rao inequality here denotes the covariance error matrix whose explicit expression will be given shortly , and is referred to as the _ extended _ fisher information matrix whose components are expressed by , \hspace{1cm}\mbox{for } \label{eq : e0}\\ & = & e_q \left [ ( \tilde{x}_i - e_q[\tilde{x}_i ] ) ( \tilde{x}_j - e_q[\tilde{x}_j ] ) \right],\end{aligned}\ ] ] with , \\ & = & q ( x_i - e[x_i ] ) , \label{eq;e2}\end{aligned}\ ] ] being given by eq .( [ eq : d2 ] ) . note that is different from given by eq .( [ eq : d0 ] ) except for . the ( 1,1 ) component of is given by , \label{eq : e3 } \\ & = & \left ( \frac{q^2}{c_q } \right ) \int p(x)^q \left ( \frac{\partial \ln p(x ) } { \partial x}\right)^2 \:dx , \label{eq : e4 } \\ & = & \left(\frac{2q^2}{\nu \sigma_q^2 ( q-1)}\right ) \frac{b(\frac{3}{2},\frac{q}{(q-1)}+\frac{1}{2 } ) } { b(\frac{1}{2},\frac{q}{(q-1)}-\frac{1}{2 } ) } , \hspace{1cm}\mbox{for } \label{eq : e5 } \\ & = & \frac{1}{\sigma_q^2 } , \hspace{7cm}\mbox{for } \label{eq : e6 } \\ & = & \left ( \frac{2q^2}{\nu \sigma_q^2 ( 1-q ) } \right ) \frac{b(\frac{3}{2},\frac{q}{(1-q)}-1 ) } { b(\frac{1}{2},\frac{q}{(1-q)}+1 ) } , \hspace{1cm}\mbox{for } \label{eq : e7}\end{aligned}\ ] ] which lead to similarly , the ( 2,2 ) component of is given by , \label{eq : e10 } \\ & = & \frac{(q+1)}{4 ( 2 q-1)\sigma_q^4}. 
\hspace{1cm}\mbox{for } \label{eq : e9}\end{aligned}\ ] ] the extended fisher information matrix is expressed by whose inverse is given by a calculation of the component ( ) of the covariance error matrix leads to in the limit of , the matrices reduce to chain and solid curves in fig .[ fign](a ) express the dependence of and , respectively .when is further from unity , is much decreased and it vanishes at and 3 .the lower bond of is expressed by the cramr - rao relation because it is satisfied by : chain , dashed and solid curves in fig . [fign](b ) show , and , respectively .it is noted that diverges at .the following relations hold : equation ( [ eq : e12 ] ) means that can not provide the lower bound of . equations ( [ eq : e11])-([eq : e13 ] ) clearly show that the lower bound of is expressed by the extended fisher information matrix , but not by the generalized fisher information matrix .a. palstino and a. r. plastino , physica a * 222 * , 347 ( 1995 ) . c. tsallis and d. j. bukman , phys .e * 54 * , r2197 ( 1996 ) .a. palstino , a. r. plastino , and h. g. miller , physica a * 235 * , 577 ( 1997 ) .f. pennini , a. r. plastino , and a. plastino , physica a * 258 * , 446 ( 1998 ) .l. borland , f. pennini , a. r. plastino , and a. plastino , eur .j. b. * 12 * , 285 ( 1999 ) .a. r. plastino , m. casas , and a. plastino , physica a * 280 * , 289 ( 2000 ). s. abe , phys .e * 68 * , 031101 ( 2003 ) .
the tsallis entropy and fisher information entropy (matrix) are very important quantities expressing information measures in nonextensive systems. stationary and dynamical properties of the information entropies have been investigated in the -unit coupled langevin model subjected to additive and multiplicative white noise, which is one of the typical nonextensive systems. we have made a detailed analytical and numerical study of the dependence of the stationary-state entropies on additive and multiplicative noise, external inputs, couplings and the number of constituent elements. by solving the fokker-planck equation (fpe) both by the proposed analytical scheme and by the partial difference-equation method, transient responses of the information entropies to an input signal and an external force have been investigated. we have calculated the information entropies also with the use of the probability distribution derived by the maximum-entropy method (mem), whose result is compared to that obtained by the fpe. the cramér-rao inequality is shown to be expressed by the _ extended _ fisher entropy, which is different from the _ generalized _ fisher entropy obtained from the generalized kullback-leibler divergence in conformity with the tsallis entropy. the effect of additive and multiplicative _ colored _ noise on information entropies is also discussed.

* stationary and dynamical properties of information entropies in nonextensive systems *

hideo hasegawa

_ department of physics, tokyo gakugei university, koganei, tokyo 184-8501, japan _

_ pacs no. _ 05.10.gg, 05.45.-a
in this paper we aim to develop a general , model free , method for analyzing current status data using machine learning techniques .in particular , we propose a support vector machine ( svm ) learning method for estimation of the failure time expectation for current status data .svm was originally introduced by vapnik in the 1990 s and is firmly related to statistical learning theory .the choice of svms for current status data is motivated by the fact that svms can be implemented easily , have fast training speed , produce decision functions that have a strong generalization ability and can guarantee convergence to the optimal solution , under some weak assumptions .current status data is a data format where the failure time is restricted to knowledge of whether or not exceeds a random monitoring time .this data format is quite common and includes examples from various fields . mention a few examples including : studying the distribution of the age of a child at weaning given observation points ; when conducting a partner study of hiv infection over a number of clinic visits ; and when a tumor under investigation is occult and an animal is sacrificed at a certain time point in order to determine presence or absence of the tumor . for instance , in the last example of carcinogenicity testing , is the time from exposure to a carcinogen and until the presence of a tumor , and is the time point at which the animal is sacrificed in order to determine presence or absence of the tumor .clearly , it is difficult to estimate the failure time distribution since we can not observe the failure time .these examples illustrate the importance of this topic and the need to find advanced tools for analyzing such data .we present a support vector machine framework for current status data .we propose a learning method , denoted by csd - svm , for estimation of the failure time expectation .we investigate the theoretical properties of the csd - svm , and in particular , prove consistency for a large family of probability measures . in order to estimate the conditional expectation we use a modified version of the quadratic loss . using the methodology of , we construct a data dependent version of the quadratic loss .since the failure time is not observed , our data dependent loss function is based on the censoring time and on the current status indicator .finally , in order to obtain a csd - svm decision function for current status data , we minimize a regularized version of the empirical risk with respect to this data - dependent loss .there are several approaches for analyzing current status data .traditional methods include parametric models where the underlying distribution of the survival time is assumed to be known ( such as weibull , gamma , and other distributions with non - negative support ) .other approaches include semiparametric models , such as the cox proportional hazard model , and the accelerated failure time ( aft ) model ( see , for example , * ? ? ?in the cox model , the hazard function is assumed to be proportional to the exponent of a linear combination of the covariates . 
in the aft model, the log of the failure time is assumed to be a linear function of the covariates .several works including , and others have suggested the cox proportinal hazard model for current status data , where the cox model can be represented as a generalized linear model with a log - log link function .other works including discussed the use of the aft model for current status data and suggested different algorithms for estimating the model parameters . needless to say that both parametric and semiparametric models demand stringent assumptions on the distribution of interest which can be restrictive .for this reason , additional estimation methods are needed . over the past two decades ,some learning algorithms for censored data have been proposed ( such as neural networks and splitting trees ) , but mostly with no theoretical justification .additionally , most of these algorithms can not be applied to current status data but only to other , more common , censored data formats . recently , several works suggested the use of svms for survival data . suggested the use of svms for survival analysis , and formulated the task as a ranking problem .shortly after , suggested the use of svms for regression problems with censored data ; this was done by asymmetrically modifying the -insensitive loss function .both examples were empirically tested but lacked theoretical justification . proposed an empirical quantile risk estimator , which can also be applied to right censoring , and studied the estimator s performance . studied an svm framework for right censored data and proved that the algorithm converges to the optimal solution . suggested two svm - based formulations for classification problems with survival data .these examples illustrate that initial steps in this direction have already been taken . however , as far as we know , the only svm - based work that can also be applied to current status data is by which has a more computational and less theoretic nature .the authors studied the use of svm for regression problems with interval censoring and , using simulations , showed that the method is comparable to other missing data tools and performs significantly well when the majority of the training data is censored .the contribution of this work includes the development of an svm framework for current status data , the study of the theoretical properties of the csd - svm , and the development of new oracle inequalities for censored data .these inequalities , together with finite sample bounds , allow us to prove consistency and to compute learning rates .the paper is organized as follows . in section [ sec : preliminaries ] we describe the formal setting of current status data and discuss the choice of the quadratic loss for estimating the conditional expectation . in section [ sec : support - vector - machines ] we present the proposed csd - svm and its corresponding data - dependent loss function . section [ sec : theoretical - results ] contains the main theoretical results , including finite sample bounds , consistency proofs and learning rates . 
in section [ sec :estimation - of - the ] we illustrate the estimation procedure and show that the solution has a closed form .section [ sec : simulation - study ] contains the simulations .concluding remarks are presented in section [ sec : concluding - remarks ] .the lengthier proofs appear in appendix [ sec : appendix ] .the matlab code for both the algorithm and for the simulations can be found in the [ sub : supplementary - material ] .in this section we present the notations used throughout the paper .first we describe the data setting and then we discuss briefly loss functions and risks .assume that the data consists of i.i.d.random triplets .the random vector is a vector of covariates that takes its values in a compact set .the failure - time is non - negative , the random variable is the censoring time , the indicator is the current status indicator at time , and is contained in the interval \equiv\mathcal{y} ] .a function that achieves the minimum -risk is called a _bayes decision function _ and is denoted by , and the minimal -risk is called the _ bayes risk _ and is denoted by .finally , the empirical -risk is defined by .for example , it is well known ( see , for example , * ? ? ? * ) that the conditional expectation is the bayes decision function with respect to the quadratic loss .let be a reproducing kernel hilbert space ( rkhs ) of functions from to , where an rkhs is a function space that can be characterized by some kernel function . by definition ,if is a universal kernel , then is dense in the space of continuous functions on , ( see , for example , ( * ? ? ?* definition 4.52 ) ) .let us fix such an rkhs and denote its norm by and let be some sequence of regularization constants .an svm decision function for uncensored data is defined by : we recall that current status data consists of independent and identically - distributed random triplets .let and be the cumulative distribution functions of the failure time and censoring , respectively , given the covariates .let be the density of . for current status data ,we introduce the following identity between risks , following .we extend this notion and incorporate loss functions and covariates in the following identity .let be a loss function differentiable in the first variable .let be the derivative of with respect to the first variable .we would like to find the minimizer of over a set of functions .note that \\ = & e_{z}\left[\int_{0}^{\tau}\ell(t , f(z))(1-f(t|z))dt-\left.l(t , f(z))(1-f(t|z))\right|_{0}^{\tau}\right]\\ = & e_{z}\left[\int_{0}^{\tau}\ell(t , f(z))(1-f(t|z))dt\right]+e[l(0,f(z))]\,,\end{aligned}\ ] ] where the equality before last follows from integration by parts .note also that and thus = & e_{z , t}\left[e_{c}\left[\left.\frac{\mathbf{{1}}\{t > c\}\ell(c , f(z))}{g(c|z)}\right|z = z , t = t\right]\right]\\ = & e_{z , t}\left[\int_{0}^{\tau}\frac{\mathbf{{1}}\{t > c\}\ell(c , f(z))g(c|z)}{g(c|z)}dc\right]\\ = & e_{z , t}\left[\int_{0}^{\tau}\mathbf{{1}}\{t > c\}\ell(c, f(z))dc\right]\\ = & e_{z}\left[\int_{0}^{\tau}\ell(c , f(z))\int_{0}^{\tau}\mathbf{{1}}\{t > c\}df(t|z)dc\right]\\ = & e_{z}\left[\int_{0}^{\tau}\ell(c , f(z))(1-f(c|z))dc\right]\,.\end{aligned}\ ] ] this shows that the risk can be represented as the sum of two terms +e[l(0,f(z))] ] ( note the use of instead of in the denominator ) .we note that for current status data , the assumption of some knowledge of the censoring distribution is reasonable , for example , when it is chosen by the researcher . 
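The identity just derived suggests a plug-in empirical risk in which the unobserved failure time is replaced by the monitoring time, the current status indicator and the censoring density. The sketch below evaluates this inverse-weighted risk for the (unnormalized) quadratic loss, for which l(0, f(z)) = f(z)² and the derivative in the first argument is l'(c, f(z)) = 2(c − f(z)); the normalization constant used in the paper's loss may differ, and the censoring density is assumed available here.

```python
import numpy as np

def csd_empirical_risk(f_vals, C, Delta, g_vals):
    """Data-dependent empirical risk for current status data with the
    quadratic loss l(t, y) = (t - y)**2, following the identity above:
    the risk is the sum of the l(0, f(z_i)) term and the
    inverse-censoring-weighted derivative term
    (1 - Delta_i) * l'(C_i, f(z_i)) / g(C_i | z_i), which replaces the
    unobservable contribution of the failure time T_i.
    f_vals: f(z_i);  C: monitoring times;  Delta: 1{T_i <= C_i};
    g_vals: censoring density g(C_i | z_i) (assumed known here)."""
    l0 = f_vals**2                               # l(0, f(z))
    lprime = 2.0 * (C - f_vals)                  # derivative of l in its first argument at t = C
    return np.mean(l0 + (1.0 - Delta) * lprime / g_vals)
```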
in other cases ,the density can be estimated using either parametric or nonparametric density estimation techniques such as kernel estimates .it should be noted that the censoring variable itself is not censored and thus simple density estimation techniques can be used in order to estimate the density .in this section we prove consistency of the csd - svm learning method for a large family of probability measures and construct learning rates .we first assume that the censoring mechanism is known , and thus the true density of the censoring variable is known . using this assumption , and some additional conditions , we bound the difference between the risk of the csd - svm decision function and the bayes risk in order to form finite sample bounds .we use this result , together with oracle inequalities , to show that the csd - svm converges in probability to the bayes risk .that is , we demonstrate that for a very large family of probability measures , the csd - svm learning method is consistent .we then consider the case in which the censoring mechanism is not known and thus the density needs to be estimated .we estimate the density using nonparametric kernel density estimation and develop a novel finite sample bound .we use this bound to prove that the csd - svm is consistent even when the censoring distribution is not known .finally we construct learning rates for the csd - svm learning method for both known and unknown .[ def : normalized loss]let be the normalized quadratic loss , let be its derivative with respect to the first variable , and let be the data - dependent version of this loss . for simplicity, we use the normalized version of the quadratic loss .since both and are convex functions with respect to , then for any compact set \subset\mathbb{r} ] and , for some [ as : a2 ] . is compact [ as : a3 ] . is an rkhs of a continuous kernel with [ as : a4 ] .define the approximation error by define and , where is defined in remark [ rem : loss bounds ] .in this section we develop finite sample bounds assuming that the censoring density is known .[ theorem 1 - g known]assume that ( a[as : a1])-(a[as : a4 ] ) hold . then for fixed , and , with probability not less than where is the covering number of the of with respect to supremum norm and where is the unit ball of ( for further details see ) .the proof of this theorem appears in appendix [ sub : proof - of - theorem 1 ] .we now move to discuss consistency of the csd - svm learning method . by definition, -universal consistency means that for any , where is the bayes risk .universal consistency means that ( [ eq:3 ] ) holds for all probability measures on .however , in survival analysis we have the problem of identifiability and thus we will limit our discussion to probability measures that satisfy some identification conditions . let be the set of all probablity measures that satisfy assumptions ( a[as : a1])-(a[as : a2 ] ) .we say that a learning method is -universal consistent when ( [ eq:3 ] ) holds for all probability measures . in order to show -universal consistency, we utilize the finite sample bounds of theorem [ theorem 1 - g known ] .the following assumption is also needed for proving -universal consistency : 1 . 
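Because the monitoring times themselves are fully observed, their density can be estimated directly. A minimal sketch with a Gaussian kernel is given below (the simulations later also use a normal kernel); the Silverman-type bandwidth is only an illustrative default, not the bandwidth analysed in the paper, and for covariate-dependent censoring the same idea would be applied conditionally rather than marginally as here.

```python
import numpy as np

def kde_gaussian(c_obs, c_eval, h=None):
    """Kernel density estimate of the censoring density g(c) from the
    observed monitoring times c_obs, evaluated at the points c_eval.
    The default bandwidth is a Silverman-type rule (illustrative choice)."""
    c_obs = np.asarray(c_obs, dtype=float)
    if h is None:
        h = 1.06 * c_obs.std() * len(c_obs) ** (-1.0 / 5.0)
    u = (np.asarray(c_eval, dtype=float)[:, None] - c_obs[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(c_obs) * h * np.sqrt(2.0 * np.pi))
```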
[ as : a5 ] , for all probability measures on assumptio ( a[as : a5 ] ) means that our rkhs is rich enough to include the bayes decision function .[ cor - consistency ] assume the setting of theorem [ theorem 1 - g known ] and that assumptio ( a[as : a5 ] ) holds .let be a sequence such that and choose for some then the csd - svm learning method is -universal consistent . in theorem [ theorem 1 - g known ] we showed that with probability not greater than .choose from assumption ( a[as : a5 ] ) together with lemma 5.15 of , converges to zero as converges to infinity .clearly finally , from the choice of , it follows that both and converge to 0 as .hence for every fixed with probability not less than 1- .the right hand side of this converges to 0 as , which implies consistency ( * ? ? ?* lemma 6.5 ) .since this holds for all probability measures , we obtain -universal consistency .in this section we form finite sample bounds for the case in which the censoring density is not known and needs to be estimated .we assume that the density of the censoring variable is estimated using nonparametric kernel density estimation . in lemma [ lemma on density ]we construct finite sample bounds on the differnce between the estimated density and the true density . in theorem[ theorem g unknown ] we utilize this bound to form finite sample bounds for the csd - svm learning method .[ kernel of order m]we say that ( not to be confused with the kernel function of the rkhs is a kernel of order , if the functions are integrable and satisfy and .[ holder class ] the hlder class of functions is the set of times differentiable functions whose derivative satisfies for some constant .[ lemma on density]let be a kernel function of order m satisfying and define is the bandwidth .suppose that the true density satisfies .let us also assume that belongs to the class .finally , assume that .then for any , where and are constants , and for some ] that converges to zero if for all and all ] such that .choose = .then \(i ) if is known , the learning rate is given by .\(ii ) if is not known and the setup of theorem [ theorem g unknown ] holds , then the leraning rate is given by .the proof of the theorem appears in appendix [ sub : proof - of - theorem 3 ] .in this section we demonstrate how to compute the csd - svm decision function with respect to the quadratic loss .in addition we show that the solution has a closed form .since is convex , then for any rkhs over and for all , it follows that there exists a unique svm solution .in addition , by the representer theorem , there exists constants such that .hence the optimization problem reduces to estimation of the vector .a more general approach will also include an intercept term such that .let be the feature map that maps the input data into an rkhs such that .our goal is to find a function that is the solution of ( [ eq : csd - svm ] ) . from the representer theorem, there exists a unique svm decision function of the form .define for each the function by .then for , the optimization problem reduces to : +\frac{1}{2}\|w\|^{2}\ ] ] this is an optimization problem under equality constraints and hence we will use the method of lagrange multipliers .the lagrangian is given by +\frac{1}{2}\|w\|^{2}+\sum_{i=1}^n \alpha_{i}\left(c_{i}-<w,\phi(z_{i})>-b - r_{i}\right)\ ] ] minimizing the original problem yields the following conditions for optimality : since these are equality constraints in the dual formulation , we can substitute them into to obtain the dual problem . 
by the strong duality theorem ( * ? ? ? * theorem 6.2.4 ) , the solution of the dual problem is equivalent to the solution of the primal problem . +\frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n \alpha_{i}\alpha_{j}k(z_{i},z_{j})\\ + & \sum_{i=1}^n \alpha_{i}\left(c_{i}-\sum_{j=1}^n \alpha_{j}k(z_{i},z_{j})-b-\left(\frac{\alpha_{i}}{c_{\lambda}}+c_{i}-\frac{(1-\vardelta_{i})}{2\hat{g}(c_{i}|z_{i})}\right)\right).\end{aligned}\ ] ] some calculations yield : subject to the constraint and where .this is a quadratic programming problem subject to equality constraints .its solution satisfies : note that if we do not require an intercept term , the solution is .it is interesting to note that this solution is equivalent to the solution attained by the representer theorem for differentiable loss functions : ( * ? ? ?* section 5.2 ) . in our case , ; hence and since we see that i.e. , .in this section we test the csd - svm learning method on simulated data and compare its performance to current state of the art . we construct four different data - generating mechanisms , including one - dimensional and multi - dimentional settings . for each data type ,we compute the difference between the csd - svm decision function and the true expectation .we compare this result to results obtained by the cox model and by the aft model . as a reference , we compare all these methods to the bayes risk . for each data setting , we considered two cases ; : ( i ) the censoring density is known ; and ( ii ) the censoring density is not known . for the secondsetting , the distribution of the censoring variable was estimated using nonparametric kernel density estimation with a normal kernel .the code was written in matlab , using the spider library . in order to fit the cox model to current status data, we downloaded the ` icsurv ' r package . in this package ,monotone splines are used to estimate the cumulative baseline hazard function , and the model parameters are then chosen via the em algorithm .we chose the most commonly used cubic splines . to choose the number and locations of the knots, we followed and who both suggested using a fixed small number of knots and thus we placed the knots evenly at the quantiles .for the aft model , we used the ` surv reg ' function in the ` survival ' r package . in order to call r through matlab, we installed the r package rscproxy , installed the statconndcom server , and download the matlab r - link toolbox . for the kernel of the rkhs , we used both a linear kernel and a gaussian rbf kernel , where and were chosen using 5-fold cross - validation .the code for the algorithm and for the simulations is available for download ; a link to the code can be found in the [ sub : supplementary - material ] .we consider the following four failure time distributions , corresponding to the four different data - generating mechanisms : ( 1 ) weibull , ( 2 ) multi - weibull , ( 3 ) multi - log - normal , and ( 4 ) an additional example where the failure time expectation is triangle shaped .we present below the csd - svm risks for each case and compare them to risks obtained by other methods .the risks are based on 100 iterations per sample size .the bayes risk is also plotted as a reference . in setting 1 ( weibull failure - time ) , the covariates are generated uniformly on , ] and the failure time is generated from a weibull distribution with parameters .the failure time was then truncated at . 
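Putting the closed-form solution together with setting-1-style data gives a compact end-to-end sketch. It assumes the plain quadratic loss with no intercept term, a Gaussian RBF kernel, and a known uniform censoring density on [0, τ]; the exact censoring law, loss normalization and regularization constants used in the paper may differ, and all numerical parameters below are placeholders. Under these assumptions the coefficient vector solves a single regularized linear system in the inverse-censoring-weighted pseudo-responses v_i = (1 − Δ_i)/ĝ(C_i|Z_i).

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def csd_svm_fit(Z, C, Delta, g_hat, lam, gamma):
    """Closed-form csd-svm fit under this sketch's assumptions (quadratic
    loss, no intercept).  Minimizing
        (1/n) sum_i [ f(z_i)^2 + 2*(1-Delta_i)*(C_i - f(z_i))/g_hat_i ]
        + lam * ||f||_H^2
    over f = sum_j alpha_j k(z_j, .) gives (K + n*lam*I) alpha = v,
    with pseudo-responses v_i = (1 - Delta_i)/g_hat_i."""
    n = len(C)
    K = rbf_kernel(Z, Z, gamma)
    v = (1.0 - Delta) / g_hat
    alpha = np.linalg.solve(K + n * lam * np.eye(n), v)
    return lambda Znew: rbf_kernel(Znew, Z, gamma) @ alpha

# illustrative data in the spirit of setting 1 (parameters and the uniform
# censoring law are assumptions of this sketch, not the paper's values)
rng = np.random.default_rng(0)
n, tau = 400, 3.0
Z = rng.uniform(0.0, 1.0, size=(n, 1))
T = np.minimum(rng.weibull(2.0, size=n) * (1.0 + Z[:, 0]), tau)  # truncated Weibull failure times
C = rng.uniform(0.0, tau, size=n)                                # monitoring times
Delta = (T <= C).astype(float)                                   # current status indicator
g_hat = np.full(n, 1.0 / tau)                                    # known uniform censoring density
predict = csd_svm_fit(Z, C, Delta, g_hat, lam=1e-2, gamma=5.0)
print(predict(np.array([[0.25], [0.75]])))                       # estimated E[T | Z]
```

The pseudo-response (1 − Δ)/g(C|Z) has conditional expectation equal to the (truncated) failure-time expectation, which is why a plain regularized least-squares fit to v recovers the regression function under these assumptions.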
figure [ fig : weibull - failure - time ] compares the results obtained by the csd - svm to results achieved by the cox model and by the aft model , for different sample sizes .it should be noted that both the ph and the aft assumption hold for the weibull failure time distribution . in particular , when the ph assumption holds , estimation based on the cox regression is consistent and efficient ; hence , when the ph assumption holds , we will use the cox regression as a benchmark .figure [ fig : weibull - failure - time ] shows that when is known , even though the csd - svm does not use the ph assumption or the aft assumption , the results are comparable to those of the cox regression , and are better than the aft estimates , especially for larger sample sizes . however , when is not known , the cox model produces the smallest risks , but its superiority reduces as the sample size grows .this coincides with the fact that when is not known , the learning rate of the csd - svm is slower ..[fig : weibull - failure - time ] ] in setting 2 ( multi - weibull failure - time ) , the covariates are generated uniformly on ^{10}, ] , as in setting 1 .the failure time is generated from a weibull distribution with parameters .the failure time was then truncated at . note that this model depends only on the first three variables . in figure[ fig : multi - weibull - failure - time ] , boxplots of risks are presented .figure [ fig : multi - weibull - failure - time ] illustrates that the csd - svm with a linear kernel is superior to the other methods , for all sample sizes and for both the cases known and uknown .however , since the data may be sparse in the feature space , the csd - svm with an rbf kernel might require a larger sample size to converge . .[ fig : multi - weibull - failure - time ] ] in setting 3 ( multi - log - normal ) , the covariates are generated uniformly on ^{10}, ] is generated uniformly on ]. then , for are i.i.d .random variables with zero mean and with variance : &=e_{g}\left[\left(\eta_{i}(c)\right)^{2}\right]=e_{g}\left[\left(k\left(\frac{c_{i}-c}{h}\right)-e_{g}\left[k\left(\frac{c_{i}-c}{h}\right)\right]\right)^{2}\right]\leq e_{g}\left[k^{2}\left(\frac{c_{i}-c}{h}\right)\right ] \\ & = \int_{u}{k^{2}\left(\frac{u - c}{h}\right)g(u)du}\leq g_{max}\int_{u}k^{2}\left(\frac{u - c}{h}\right)du\stackrel{}{=g_{max}\int_{v}k^{2}\left(v\right)dv = c_{1}h}\end{aligned}\ ] ] where the equality before last follows from change of variables and where . thus =\frac{1}{nh^{2}}e_{g}\left[\eta_{1}^{2}(c)\right]\leq\frac{c_{1}h}{nh^{2}}=\frac{c_{1}}{nh}. ] hence \right|\right]\leq\sqrt{\frac{c_{1}}{nh}}. ] , both and are bounded and lipschitz continuous with lipschitz constants and .hence , recall that and , where is some bound on the derivative of the loss . 
since , then , and therefor .earlier we defined such that .thus , where we define .+\frac{n}{\sqrt{\lambda}}\sqrt{\frac{2\theta}{n}}\\ \leq & \left(\frac{p}{2}\right)^{\frac{-p}{1+p}}\frac{n}{\sqrt{\lambda}}\left[\sqrt{2}\left(\frac{2a}{n}\right)^{\frac{1}{2 + 2p}}+\frac{mp}{2n}\left(\frac{2a}{n}\right)^{\frac{1}{2 + 2p}}\right]+\frac{n}{\sqrt{\lambda}}\sqrt{\frac{2\theta}{n}}\\ \leq & \left(\frac{p}{2}\right)^{\frac{-p}{1+p}}\frac{n}{\sqrt{\lambda}}\left[2\left(\frac{2a}{n}\right)^{\frac{1}{2 + 2p}}+\frac{mp}{n}\left(\frac{2a}{n}\right)^{\frac{1}{2 + 2p}}\right]+\frac{n}{\sqrt{\lambda}}\sqrt{\frac{2\theta}{n } } \end{alignedat}\ ] ] we would like to choose a sequence that will minimize the bound in ( [ eq : learning rate bound ] ) .define .$ ] differentiating with respect to and setting to zero yields : =0\\ \leftrightarrow\\ c\gamma\lambda^{\gamma-1}= & \frac{1}{2}n\lambda^{-\frac{3}{2}}\left[6\left(\frac{2a}{n}\right)^{\frac{1}{2 + 2p}}+\sqrt{\frac{2\theta}{n}}\right]\\ \leftrightarrow\lambda= & \left(\frac{1}{2c\gamma}n\left[6\left(\frac{2a}{n}\right)^{\frac{1}{2 + 2p}}+\sqrt{\frac{2\theta}{n}}\right]\right)^{\frac{1}{\gamma+\frac{1}{2}}}\propto\left(\frac{1}{n}^{\frac{1}{2 + 2p}}+\left(\frac{1}{n}\right)^{\frac{1}{2}}\right)^{\frac{2}{2\gamma+1}}\\ \rightarrow\lambda\propto & n^{-\frac{1}{(1+p)(2\gamma+1 ) } } \end{alignedat}\ ] ] \\ = & cn^{-\frac{\gamma}{(1+p)(2\gamma+1)}}+n\cdot6\left(2a\right)^{\frac{1}{2 + 2p}}n^{-\frac{\gamma}{(1+p)(2\gamma+1)}}+n\left(2\theta\right)^{\frac{1}{2}}n^{-\frac{2\gamma(1+p)+p}{2(1+p)(2\gamma+1)}}\\ \leq & cn^{-\frac{\gamma}{(1+p)(2\gamma+1)}}+n\cdot6\left(2a\right)^{\frac{1}{2 + 2p}}n^{-\frac{\gamma}{(1+p)(2\gamma+1)}}+n\left(2\theta\right)^{\frac{1}{2}}n^{-\frac{\gamma}{(1+p)(2\gamma+1)}}\\ = & n^{-\frac{\gamma}{(1+p)(2\gamma+1)}}\left(c+n\cdot6\left(2a\right)^{\frac{1}{2 + 2p}}+n\left(2\theta\right)^{\frac{1}{2}}\right)\\ \leq & q(1+\sqrt{\theta})n^{-\frac{\gamma}{(1+p)(2\gamma+1 ) } } \end{alignedat}\ ] ] where and with probability not greater than .choose , , , and define , then as in ( [ eq : learning rate bound ] ) , a very similar calculation shows that + 2\eta.\end{alignedat}\ ] ] + 2\eta\\ \leq & c\lambda^{\gamma}+\frac{n}{\sqrt{\lambda}}\left[6\left(\frac{2a}{n}\right)^{\frac{1}{2 + 2p}}+\sqrt{\frac{2\theta}{n}}\right]+\frac{4k^{2}\left(e^{\frac{2\beta+1}{2\beta}}\left(c_{1}\right)^{\frac{1}{2}}\left(2\beta c_{2}\right)^{\frac{1}{2\beta}}+c_{2}\kappa^{\beta}\right)}{b_{1}n^{\frac{\beta}{2\beta+1 } } } \end{alignedat}\ ] ] similarly to case i , choosing minimizes the last bound ( note that the choice of does not depend on ) .hence that the resulting learning rate is given by i. d. diamond , j. w. mcdonald , and i. h. shah .proportional hazards models for current status data : application to the study of differentials in age at weaning in pakistan ._ demography _ , 230 ( 4):0 607620 , 1986 .antonio eleuteri and azzam f. g. taktak .support vector machines for survival regression . in elia biganzoli , alfredo vellido , federico ambrogi , and roberto tagliaferri , editors , _ computational intelligence methods for bioinformatics and biostatistics _, number 7548 in lecture notes in computer science , pages 176189 .springer berlin heidelberg , 2012 .n. jewell and m. van der laan .current status data : review , recent developments and open problems . in n.balakrishnan and c.r .rao , editors , _ handbook of statistics , advances in survival analysis _ , number 23 , pages 625642 .elsevier , 2004 .f.m . 
khan and v.b .support vector regression for censored data ( svrc ) : a novel tool for survival analysis . in _eighth ieee international conference on data mining , 2008 .icdm 08 _ , pages 863868 , december 2008 .v. van belle , k. pelckmans , j.a.k .suykens , and s van huffel . support vector machines for survival analysi . in _ proceedings of the third international conference on computational intelligence in medicine and healthcare ( cimed2007 )_ , plymouth ( uk ) , 2007 .
current status data is a data format in which knowledge of the failure time is restricted to whether or not it exceeds a random monitoring time. we develop a support vector machine learning method for current status data that estimates the failure time expectation as a function of the covariates. in order to obtain the support vector machine decision function, we minimize a regularized version of the empirical risk with respect to a data-dependent loss. we show that the decision function has a closed form. using finite sample bounds and novel oracle inequalities, we prove that the obtained decision function converges to the true conditional expectation for a large family of probability measures, and we study the associated learning rates. finally, we present a simulation study that compares the performance of the proposed approach to the current state of the art.
the barely covered story of rising foreclosures among the condominiums of florida or california in early 2007 was a harbinger of a much larger collapse in the worldwide financial system .the increase of foreclosures over the priced in foreclosure risk in mortgage - backed securities , otherwise deemed high - grade assets , began the confusion of the value of collateral assets and subsequent seizing up of credit markets around the globe .the collapse of several institutions , such as bear stearns , lehman , and fortis , has accentuated the level of crisis now facing the world markets .previously , loosely regulated titans of finance , such as hedge funds and private equity groups , have been hit by waves of unprecedented losses and demands by investors for redemptions , causing them to sell even more assets or close positions and creating a positive feedback death spiral .though the hardest hit markets are lesser - known markets , such as commercial paper , the equity markets have become the most widely known indicators of the ongoing meltdown .in fact , most non - experts likely use the movements of the equity markets , fallaciously , as a key gauge of the severity or progress of the crisis .the equity markets , however , did not originate the crisis nor are they the key force perpetuating it . in this short paper ,the spread of the credit crisis will be discussed by referring to a correlation network of stocks in the s&p 500 and the nasdaq-100 indices .the fact that the spread resembles a contagion or cascade , however , may be mainly superficial given the underlying dynamics are completely different .in this paper , a stock correlation network , similar to the one in refs . , is created .we start by defining a correlation matrix of returns between two stocks , where the correlation between stocks and , is defined as with and being the log - returns of stocks and at a given time , and being the mean value of the stock log - returns over the measured time period , and and being the standard deviations of and over the measured time period . the correlation is taken over the time period august 1 , 2007 , to october 10 , 2008 , where each daily value of is the log - return of the closing price from the previous day . as refs . demonstrate , however , correlation is not a distance metric ; therefore , we create an adjacency matrix with weights on the edges matching the distance metric between stocks , and . that matrix is defined as using these distanceswe finally create a graphical minimal spanning tree by using the python - graph module , pydot , and graphviz .because over 500 stocks are included , the ticker labels are relatively small but the central part of the component is dominated ( though not exclusively ) by certain finance and service sector stocks , which are heavily cross - correlated and thus tightly linked with each other , while the outer branches are more industry specific , including utilities , basic materials , technology , and some less - central financial stocks .these are the stocks later impacted by the credit crisis ( see fig . 
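The construction just described — Pearson correlations of daily log-returns, the distance metric d_ij = (2(1 − c_ij))^(1/2), and a minimal spanning tree over the resulting weighted graph — can be sketched compactly. The original study used the python-graph module, pydot and graphviz for drawing; the sketch below substitutes networkx purely for the tree computation, and the price array and ticker list are placeholders to be supplied by the user.

```python
import numpy as np
import networkx as nx

def correlation_mst(prices, tickers):
    """prices: (days x stocks) array of daily closing prices;
    tickers: list of stock symbols in the same column order.
    Returns the minimal spanning tree of the network with edge
    weights d_ij = sqrt(2 * (1 - c_ij)) built on log-return correlations."""
    logret = np.diff(np.log(prices), axis=0)       # daily log-returns
    c = np.corrcoef(logret, rowvar=False)          # correlation matrix c_ij
    d = np.sqrt(2.0 * (1.0 - c))                   # distance metric
    g = nx.Graph()
    n = len(tickers)
    for i in range(n):
        for j in range(i + 1, n):
            g.add_edge(tickers[i], tickers[j], weight=float(d[i, j]))
    return nx.minimum_spanning_tree(g)
```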
[ sector ] ) .the average correlations among stocks both within each category and between stocks of each category are given in table [ corrtable ] ., green at distance , blue at distance , purple at distance , and black at distance ( the maximum allowed by the metric in eq .[ distanceeq]).,width=491 ] the stocks in fig .[ spread ] , represented as nodes , are colored according to the following methodology based on the stock return since august 1 , 2007 .events in the figures are taken from the timeline at ref .the fall in stock valuations flows outward in the correlation network from stocks with relatively high centrality in the center to those on the periphery , which are more industry specific or otherwise uncorrelated to the core sectors of the stock market . in fig .[ distanceplot ] , this spread is emphasized by showing the average return among stocks at a distance from the stock with the highest betweeenness centrality ( here cbs , a major s&p 500 stock , and here classified under the services industry ) , where is defined by eq .[ distanceeq ] . here , we see that the greater the distance from the central part of the network , the more delayed the decline in valuation. therefore , the credit crisis spreads among affected stocks from more centralized nodes to more outer ones as the news of the extent of the damage to the global economy spreads .using methods of statistical physics and complex networks to investigate phenomena in stock markets is increasingly common . the increasing complexity and globalization of financial markets has led to many large and sometimes unpredictable effects . in ref . , the effects of globalization upon the korea stock exchange were demonstrated by showing the increasing grouping coefficient of stocks from 1980 - 2003 .the credit crisis , however , presents a challenge of a whole new magnitude .as viewed by the wider market , the collapse in stock price returns began in the financial and services sector of the economy .soon it moved across more mainline banks and firms , and more recently has affected stocks across the board .though the spread of the collapse in stocks down the tree resembles an infection or cascade on a network , such ideas are more appropriately viewed as analogies or metaphors than explanations . unlike a disease or cascading collapse , the stock crash is not being transmitted from one stock to another .what the collapse reveals is a complex and collective systemic collapse of the financial system , which spreads as its extent becomes more recognized and affects the credit or demand for sectors across the economy .the spread is carried both by the news of the extent of the crisis and the fact that similar risky asset bases make the co - movement of certain stocks more likely and thus more highly correlated .in addition , as credit becomes restricted , capital flows formerly relied on as a given begin to disappear , causing financial difficulties in companies and selling of equities ( among other assets ) to raise capital . as panic and the extent of the devastation spread, stocks are punished accordingly . in normal times , the failure of a company and its stock is not a cause for a systemic crisis . also , since the correlation was calculated over an entire year s activity , the stock prices are correlated because they tend to fall similarly over time .the correlation shown in this network does not cause the transmission chain of collapse , but is inextricably tied to it . 
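The distance-shell analysis described above — the average return among stocks at a given tree distance from the highest-betweenness node — follows directly from the spanning tree. The sketch below assumes the MST built earlier and a user-supplied mapping from ticker to its return over the study window; the choice of betweenness centrality and the shell grouping mirror the text, while everything else is a placeholder.

```python
import numpy as np
import networkx as nx

def mean_return_by_distance(mst, total_returns):
    """Average stock return grouped by tree distance (number of edges)
    from the highest-betweenness node of the MST (e.g. CBS in the paper);
    total_returns maps ticker -> return over the study window."""
    centrality = nx.betweenness_centrality(mst)
    root = max(centrality, key=centrality.get)
    dist = nx.single_source_shortest_path_length(mst, root)
    shells = {}
    for node, d in dist.items():
        shells.setdefault(d, []).append(total_returns[node])
    return {d: float(np.mean(r)) for d, r in sorted(shells.items())}
```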
in addition , the correlation generally increases with volatility ( for example , see ref . ) and negative returns affect volatility more than positive returns of the same magnitude , so over time , the correlation has been increasing among stocks , and the network will likely be more dense and structured differently due to the steadily increasing market volatility . finally , one should note that this is not an example of the widely cited ` financial contagion ' in the press .financial contagion refers to the coupling of financial panic across national borders and not among stocks in an exchange .however , these do illustrate the spread of the credit crisis and how what was once a problem among home builders and mortgage finance companies has engulfed the entire economy .gower , biometrika , * 53 * , 325 ( 1966 ) .mantegna , eur.phys .j. b * 11 * , 193 ( 1999 ) .r.n . mantegna and h.e .stanley , an introduction to econophysics : correlations and complexity in finance , ( cambridge university press , cambridge , 1999 ) .kim , i.m .kim , y. lee , and b. kahng , j. korean phys ., * 40 * , 1105 ( 2002 )kim , y.k .lee , b. kahng , and i.m .kim , j. phys .japan , * 71 * , 2133 , 2002 .j. cox,``credit crisis timeline '' , the university of iowa center international finance and development , retrieved october 17 , 2008 ( 2008 ) .zhuang , z.f .min , and s.y .chen , journal of northeastern university(natural science ) , * 28 * , 1053 ( 2007 ) ( in chinese ) .l. ping and b.h .wang , systems engineering , * 24 * , 73 ( 2006 ) ( in chinese ) .oh and s.h .kim , j. korean phys .* 48 * , 197 ( 2006 ) .jo and s.y .kim , j. korean phys .* 49 * , 1691 ( 2006 ) .jung , o. kwon , j.s .yang , and h.t .moon , j. korean phys .* 48 * , 135 ( 2006 ) .andersen , t. bollerslev , f.x .diebold , and h. ebens , j. of financial economics , * 61 * , 43 ( 2001 ) . f. black , proceedings of the american statistical association , business and economic statistics section , 177 ( 1976 ) .j.y . campbell and l. hentschel , j. of financial economics , * 31 * , 281 ( 1992 ) .hoovers , http://www.hoovers.com
the credit crisis roiling the world's financial markets will likely take years and entire careers to fully understand and analyze. a short empirical investigation of the current trends, however, demonstrates that the losses in certain markets, in this case the us equity markets, follow a cascade- or epidemic-like flow along the correlations of various stocks. this phenomenon is shown by the graphical display of stock returns across the correlation network and by the dependence of the stock return on topological measures.
recently, the emergence and evolution of language has attracted growing interest. in this interdisciplinary field, problems like the differentiation of languages, the development of the speech apparatus or the formation of linguistically connected social groups require joint efforts of many specialists such as linguists, neuroscientists, or anthropologists. however, there are also more general questions concerning this almost exclusively human trait. why do we use words and then combine them into sentences? why do all languages have grammar? to what extent is our brain adapted for the acquisition of language? can learning direct the evolution? these sample questions justify an increasing involvement of researchers also from other disciplines such as artificial intelligence, computer science, evolutionary biology or physics. computer modeling is a frequently used tool in the studies of language evolution. in this technique two main approaches can be distinguished. in the first one, known as the iterated learning model, one is mainly concerned with the transmission of language between successive generations of agents. the important issue that the iterated learning model has successfully addressed is the transition from holistic to compositional language. however, since the number of communicating agents is typically very small, the problem of the emergence of linguistic coherence must be neglected in this approach. to tackle this problem steels introduced the naming game model. in this approach one examines a population of agents trying to establish a common vocabulary for a certain number of objects present in their environment. the change of generations is not required in the naming-game model since the emergence of a common vocabulary is a consequence of the communication processes between agents. it seems that the iterated learning model and the naming-game model are at two extremes: the first one emphasizes the generational turnover while the latter concentrates on the single-generation (cultural) interactions. since both aspects are present in language evolution, it is desirable to examine models that combine evolutionary and cultural processes. in the present paper we introduce such a model. agents in our model try to establish a common vocabulary as in the naming-game model, but in addition they can breed, mutate, and die. moreover, they are equipped with an evolutionary trait: learning ability. as a result, evolutionary and cultural (learning from peers) processes mutually influence each other. when communication between agents is sufficiently frequent, cultural processes create a favourable niche in which a larger learning ability becomes advantageous. but gradually increasing learning abilities in turn speed up the cultural processes. as a result, the model undergoes an abrupt bio-linguistic transition. one can speculate that the proposed model suggests that linguistic and biological processes, after crossing a certain threshold at some point of human history, started to have a strong influence on each other, and that this resulted in an explosive development of our species.
that learning in our model modifies the fitness landscape of a given agent and facilitates the genetic accommodation of learning ability is actually a manifestation of the much debated baldwin effect .in our model we consider a set of agents located at sites of the square lattice of the linear size .agents are trying to establish a common vocabulary on a single object present in their environment .an assumption that agents communicate only on a single object does not seem to restrict the generality of our considerations and was already used in some other studies of naming - game or language - change models .a randomly selected agent takes the role of a speaker that communicates a word chosen from its inventory to a hearer that is randomly selected among nearest neighbours of the speaker .the hearer tries to recognize the communicated word , namely it checks whether it has it in its inventory .a positive or negative result translates into communicative success or failure , respectively . in some versions of the naming - game model success means that both agents retain in their inventories only the chosen word while in the case of failure the hearer adds the communicated word to its inventory . to implement the learning ability we modified this rule and assigned weights ( ) to each -th word in the inventory .the speaker selects then the -th word with the probability where summation is over all words in its inventory ( if its inventory is empty , it creates a word randomly ) . if the hearer has the word in its inventory , it is recognized .in addition , each agent is characterized by its learning ability ( ) that is used to modify weights .namely , in the case of success both speaker and hearer increase the weights of the communicated word by learning abilities of the speaker and hearer , respectively . in the case of failurethe speaker subtracts its learning ability from the weight of the communicated word .if after such a subtraction a weight becomes negative the corresponding word is removed from the repository .the hearer in the case of failure , i.e. , when it does not have the word in its inventory , adds the communicated word to its inventory with a unit weight .in addition to communication , agents in our model evolve according to the population dynamics : they can breed , mutate , and eventually die . to specify intensity of these processeswe introduce the communication probability . with probability the chosen agent becomes a speaker and with probability we will attempt a population update .during such a move the agent dies with the probability , where $ ] and and are certain parameters whose role is to ensure a certain speed of population turnover .moreover , is the age of an agent and is the average ( over agents ) sum of weights .such a formula takes into account both its linguistic performance ( the bigger the larger ) and its age .if the agent survives ( it happens with the probability ) it breeds , provided that there is an empty site on one of the neighbouring sites .the offspring typically inherits parent s learning ability and the word from its inventory that has the highest weight . in the offspring inventorythe weight assigned initially to this word equals one .with the small probability a mutation takes place and the learning ability of an offspring is selected randomly anew . with the same probability an independent check is made whether to mutate the inherited word . 
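A minimal sketch of the elementary update follows. The communication step implements the rules above literally: weight-proportional word choice by the speaker, reinforcement of the communicated word by each agent's own learning ability on success, decrement and possible removal of the word on failure, and adoption of the word with unit weight by the hearer. The population step is only schematic, since the survival-probability formula is not reproduced here: `survival_prob` is a hypothetical stand-in that merely respects the stated qualitative dependence (decreasing in age, increasing in the agent's summed weights relative to the population average), and its parameters a and b are not from the paper.

```python
import math
import random

def make_agent(word, ability):
    """Agent = word inventory (word -> weight), learning ability, age."""
    return {'words': {word: 1.0}, 'l': ability, 'age': 0}

def communicate(speaker, hearer):
    """One speaker-hearer interaction; returns True on success."""
    if not speaker['words']:
        speaker['words'][('w', random.random())] = 1.0          # invent a random word
    words = list(speaker['words'])
    weights = [speaker['words'][w] for w in words]
    word = random.choices(words, weights=weights)[0]            # probability ~ weight
    if word in hearer['words']:                                 # success
        speaker['words'][word] += speaker['l']
        hearer['words'][word] += hearer['l']
        return True
    speaker['words'][word] -= speaker['l']                      # failure
    if speaker['words'][word] <= 0:                             # drop an exhausted word
        del speaker['words'][word]
    hearer['words'][word] = 1.0                                 # hearer adopts with unit weight
    return False

def survival_prob(agent, avg_total_weight, a=0.05, b=1.0):
    """Hypothetical stand-in for the survival probability: decreasing in
    age, increasing in the agent's summed weights relative to the
    population average (a, b are illustrative, not the paper's values)."""
    performance = sum(agent['words'].values()) / max(avg_total_weight, 1e-9)
    return math.exp(-a * agent['age']) * performance / (b + performance)
```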
a diagram illustrating the dynamics of our model is given in the appendix . let us also notice that the behaviour of our model , which is described below , is to some extent robust with respect to some modifications of its rules . for example , qualitatively the same behaviour is observed for modified parameters and , a different form of the survival probability ( provided it is a decreasing function of and an increasing function of ) , or different breeding and/or mutation rules . to examine the properties of the model we used numerical simulations . most of the results are obtained for and , but simulations for and lead to similar behaviour . simulations start from all sites occupied by agents , each having a single , randomly chosen word in its inventory with a unit weight . the learning ability of each agent is also chosen randomly . an important parameter of the model is the communication probability that specifies the intensity of communication attempts in comparison with population changes . in general , for small the model remains in the phase of linguistic disorder with only small clusters of agents using the same language . we define the language of an agent as the largest - weight word in its inventory . such a definition means that agents using the same language usually ( but not always ) use a recognizable word , and it ensures a relatively large rate of communication successes for such agents . a typical distribution of languages in this disordered small- phase is shown in the left panel of fig . [ conf ] , where agents using the same language are drawn with the same shade of grey . upon increasing the communication probability the clusters of agents only slightly increase , but after reaching a certain threshold an abrupt transition takes place and the model enters the phase of linguistic coherence with almost all agents belonging to the same cluster ( fig . [ conf ] , right panel ) . to examine the nature of this transition we measured the communication success rate , defined as an average over agents and simulation time of the fraction of successes with respect to all communication attempts . moreover , we measured the average learning ability . the measured values of and as a function of are shown in fig . [ steady ] . one can notice that the abrupt transition around manifests itself not only through the jump of the communicative properties ( ) but also through the jump of the biological endowment ( jump of ) . such a coincidence is by no means obvious , and in principle one can imagine these two transitions being separated . before examining the mechanism responsible for such an agreement , let us mention the behaviour of the model with the learning ability kept fixed during the entire simulation . in this case there is also a phase transition between the disordered and linguistically coherent phases , but this time the transition is much smoother ( fig . [ steady ] ) . large fluctuations of the success rate in the vicinity of the transition and the absence of the jump suggest that this might be a continuous transition . in the last section we will return to this point . let us also notice that sudden transitions were also reported in some other linguistic models . the agreement of the transitions as seen in fig . [ steady ] shows that the communicative and biological ingredients in our model strongly influence each other , and this leads to a single , abrupt transition .
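the observables used above to locate the transition can be coded directly from their definitions . the sketch below ( which assumes the agent objects from the previous snippet ) follows the text for the success rate and the average learning ability ; the fraction of agents sharing the most common language is our own convenience diagnostic and ignores lattice adjacency :

```python
from collections import Counter

def agent_language(agent):
    """The 'language' of an agent: the largest-weight word in its inventory."""
    return max(agent.inventory, key=agent.inventory.get) if agent.inventory else None

def order_parameters(agents, successes, attempts):
    """Success rate, mean learning ability and share of the dominant language."""
    success_rate = successes / attempts if attempts else 0.0
    mean_ability = sum(a.ability for a in agents) / len(agents)
    counts = Counter(agent_language(a) for a in agents)
    dominant_share = max(counts.values()) / len(agents)
    return success_rate, mean_ability, dominant_share
```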
in our model successful communication requires learning . a new - born agent communicating with some mature agents who have already worked out a certain ( common in this group ) language will increase the weight of the corresponding word . as a result , in its future communications the agent will use mainly this word . in what way might such learning get coupled with evolutionary traits ? the explanation of this phenomenon is known as the baldwin effect . although at first sight it looks like a discredited lamarckian phenomenon , the baldwin effect is actually purely darwinian . there are usually some benefits related to the task a given species has to learn , and there is a cost of learning this task . one can argue that in such a case there is some kind of evolutionary pressure that favours individuals for which the benefit is larger or the cost is smaller . then , evolution will lead to the formation of species where the learned behaviour becomes an innate ability . it should be emphasized that the acquired characteristics are not inherited . what is inherited is the ability to acquire the characteristics ( the ability to learn ) . in the context of language evolution the importance of the baldwin effect was suggested by pinker and bloom . perhaps this effect is also at least partially responsible for the formation of the language acquisition device - the hypothetical structure in our brain whose existence was suggested by chomsky . however , many details concerning the role of the baldwin effect in the evolution of language remain unclear . in our model the baldwin effect is also at work . let us consider a population of agents with the communication probability below the threshold value ( ) . in such a case the learning ability remains at a rather low level ( since clusters of agents using the same language are small , it does not pay off to be good at learning the language of your neighbours ) . now , let us increase the value of above the threshold value ( fig . [ time ] ) . more frequent communication changes the behaviour dramatically . apparently , clusters of agents using the same language are now sufficiently large and it pays off to have a large learning ability , because that increases the success rate and thus the survival probability . let us notice that of an agent depends on its linguistic performance ( ) rather than its learning ability . thus clusters of agents of good linguistic performance ( learned behaviour ) can be considered as niches that direct the evolution by favouring agents with large learning abilities , which is precisely the baldwin effect . it should be noticed that linguistic interactions between agents ( whose rate is set by the probability ) are typically much faster than evolutionary changes ( set by ) . to observe such a difference in our simulations shown in fig . [ time ] we increased up to the value 0.98 . and indeed , the linguistic changes ( success rate ) , which might be considered as niche - forming processes , are ahead of the evolutionary adaptations ( learning ability ) .
as a result of a positive feedback ( a large learning ability enhances communication , which enlarges clusters , which favours even more the increased learning ability ) a discontinuous transition takes place both with respect to the success rate and the learning ability ( fig . [ steady ] ) . an interesting question is whether such behaviour is of any relevance in the context of human evolution . it is obvious that the development of language , which probably took place somewhere around years ago , was accompanied by important anatomical changes such as the fixation of the so - called speech gene ( foxp2 ) , the descended larynx or the enlargement of the brain . linguistic and other cultural interactions that were already emerging in early hominid populations were certainly shaping the fitness landscape , and that could direct the evolution of our ancestors via the baldwin effect . our model predicts that when the intensity of linguistic ( or cultural ) processes was large enough , a transition took place where both the linguistic performance and the biological endowment of our species experienced an abrupt change that perhaps led to the rapid expansion of human civilization . but further research would be needed to claim that such a transition did take place and to explain it within the framework suggested in our paper . in the present paper we examined an evolutionary naming - game model . simulations show that the coupling of linguistic and evolutionary ingredients produces a discontinuous transition , and learning can direct the evolution towards better linguistic abilities ( baldwin effect ) . the present model is not very demanding computationally . it seems to be possible to consider agents talking about more than one object , or to examine statistical properties of simulated languages such as , for example , the distributions of their lifetimes or of the number of users . one can also study effects like the diffusion of languages , the role of geographical barriers , or the formation of language families . there is already an extensive literature documenting linguistic data as well as various computational approaches modeling , for example , competition between already existing natural languages . the dynamics of the present model , which is based on an act of elementary communication , perhaps offers a more natural description of the dynamics of languages than some other approaches that often use some kind of coarse - grained dynamics . there are also more physical aspects of the proposed model that might be worth further study . as we have already mentioned , when the learning ability is kept fixed , the transition between the disordered and linguistically coherent phases seems to be continuous . on the other hand , such a transition resembles the symmetry - breaking transition in the -state potts model , where at sufficiently low temperature the model collapses onto one of the ground states . however , in the two - dimensional case and for large ( in our case corresponds to the number of all languages used by agents ) such a transition is known to be discontinuous . of course the dynamics of our model is much different from the glauber or metropolis dynamics that reproduce the equilibrium potts model , but very often such differences are irrelevant as long as , for example , the symmetry of the model is preserved ( which is the case for our model ) . another possibility that would explain the continuous nature of the transition in our case might be a different nature of the ( effective ) domain walls between clusters .
in our model these domain walls in some cases might be much softer , and that would shift the behaviour of our model toward models with a continuous - like symmetry ( as , e.g. , the xy model ) . to clarify this issue further work is , however , needed . + * acknowledgments : * we gratefully acknowledge access to the computing facilities at poznań supercomputing and networking center . m. a. nowak , n. l. komarova , and p. niyogi , nature * 417 * , 417 ( 2002 ) . s. kirby and j. hurford , _ the emergence of linguistic structure : an overview of the iterated learning model _ , in _ simulating the evolution of language _ , a. cangelosi and d. parisi ( eds . ) ( springer - verlag , berlin , 2001 ) . h. brighton , artif . life * 8 * , 25 ( 2002 ) . l. steels , artif . life * 2 * , 319 ( 1995 ) . h. yamauchi , _ baldwinian accounts of language evolution _ , phd thesis , the university of edinburgh , edinburgh , scotland ( 2004 ) . p. turney , _ myths and legends of the baldwin effect _ , in t. fogarty and g. venturini ( eds . ) , proceedings of icml-96 ( 13th international conference on machine learning , bari , italy ) . a. baronchelli , m. felici , v. loreto , e. caglioti , and l. steels , j. stat . mech . * p06014 * ( 2006 ) . l. dall'asta , a. baronchelli , a. barrat , and v. loreto , phys . rev . e * 74 * , 036105 ( 2006 ) . d. nettle , lingua * 108 * , 95 ( 1999 ) ; * ibid . * * 108 * , 119 ( 1999 ) . a java applet that illustrates the dynamics of our model is available at : http://spin.amu.edu.pl/~lipowski/biolin.html or http://www.amu.edu.pl/~lipowski/biolin.html . g. hinton and s. nowlan , complex systems * 1 * , 495 ( 1987 ) . s. pinker and p. bloom , behav . brain sci . * 13 * , 707 ( 1990 ) . s. munroe and a. cangelosi , artif . life * 8 * , 311 ( 2002 ) . c. holden , science * 303 * , 1316 ( 2004 ) . d. abrams and s. h. strogatz , nature * 424 * , 900 ( 2003 ) . c. schulze , d. stauffer , and s. wichmann , commun . comput . phys . * 3 * , 271 ( 2008 ) . p. m. c. de oliveira , d. stauffer , s. wichmann , and s. m. de oliveira , _ a computer simulation of language families _ , e - print : arxiv:0709.0868 .
we examine an evolutionary naming - game model in which communicating agents are equipped with an evolutionarily selected learning ability . such a coupling of biological and linguistic ingredients results in an abrupt transition : upon a small change of a model control parameter , a poorly communicating group of linguistically unskilled agents transforms into an almost perfectly communicating group with large learning abilities . when the learning ability is kept fixed , the transition appears to be continuous . genetic imprinting of the learning abilities proceeds via the baldwin effect : initially unskilled communicating agents learn a language , and that creates a niche in which there is an evolutionary pressure for the increase of learning ability . our model suggests that when linguistic ( or cultural ) processes became intensive enough , a transition took place where both the linguistic performance and the biological endowment of our species experienced an abrupt change that perhaps triggered the rapid expansion of human civilization .
[ sintro ] genome - wide association studies ( gwas ) constitute a popular approach for investigating the association of common single nucleotide polymorphisms ( snps ) with complex diseases .usually , a large number of snp are tested across the genome . great interest lies in improving power of testing snp effects by borrowing additional biological information .indeed , a major criticism of genetic association studies lies in its agnostic style [ ] : none of the biological knowledge was encoded in the standard genetic association analyses . to overcome such limitations ,multi - marker analysis has been advocated to integrate biological information into statistical analyses and to decrease the number of tests [ ; ] .analysis using snp - sets grouped by physical locations has better performance than the standard single snp analysis in reanalyzing the breast cancer gwas data [ ] .snps can also be grouped into a snp - set according to biological pathways , in which a gene harmonizes with other genes to exert biological functions .the two factors we try to bridge in genetic association studies are snps and disease risk . despite the success of snp - set analyses in assembling multiple snps based on biological information ,mechanistic pathways between snps ( snp - sets ) and disease are still neglected .given the availability of multiple sources in genomic data ( e.g. , gene expression and snps ) [ ; ] , it is desirable to perform joint analysis by integrating multiple sources of genomic data .here we combine the information of snps and gene expression by introducing gene expression as a mediator in the causal pathway from snps to disease ._ biologically _ , gene expression can be determined by the dna genotype [ ; ; ] and that gene expression can also affect disease risk [ ] .moreover , results from the snp - set analysis augmented by a biological model can be more scientifically meaningful ._ statistically _ , gene expression can help explain variability of the effect of snps on disease when there exists an effect of snps on disease via gene expression and thus increases the power of detecting the overall effect of snps on disease risk .snps that regulate mrna expression of a gene are so - called expression quantitative trait loci ( eqtl ) [ ] . statistically , eqtl snps can be viewed as the snps that are correlated with mrna expression of a gene ._ cis_-eqtl snps are the snps that are within or around the corresponding gene , and _trans_-eqtl snps are those that are far away or even on different chromosomes .numerous genome - wide eqtl analyses have been reported to comprehensively capture such a dna rna ( i.e. 
, snps - gene expression ) association in the genome in different tissues and organisms [ ; ; ] .eqtl results can be external information to prioritize the discovery of susceptibility loci in genome - wide association studies [ ; ; ] .methods are available to integrate multiple genomic data to draw causal inference on a biological network [ ; ; ; ] .we focus in this paper on _ joint analysis _ of multiple eqtl snps of a gene and their corresponding mrna expression for their effects on disease phenotypes .compared with multi - snp analyses , this approach further incorporates eqtls into genetic association studies and accounts for a biological process ( from dna to rna ) within a gene to improve power .this paper is motivated by an asthma genome - wide association study of subjects of british descent ( mrc - a ) , in which the association between snps at the gene and the risk of childhood asthma was investigated [ ; ] .the mrc - a data set consists of 108 cases and 50 controls with both snp genotype ( illumina 300k ) and gene expression ( affymetrix hu133a 2.0 ) data available .the original genome - wide study reported that the 10 typed snps on chromosome 17q21 where _ormdl3 _ is located were strongly associated with childhood asthma in mrc - a data , and the results were validated in several other independent studies .the authors also found that each of these 10 snps was highly correlated with gene expression of _ormdl3 _ , which is also associated with asthma .the 10 snps , _ormdl3 _ expression and asthma status can be illustrated as the , and , respectively , in figure [ fig1 ] .instead of analyzing snp - expression , expression - asthma and snp - asthma associations separately and univariately , here we are interested in assessing the overall genetic effect of _ ormdl3 _ on the occurrence of childhood asthma , by jointly analyzing snp and gene expression data and accounting for the possibility that the _ ormdl3 _ gene expression might be a causal mediator for the association of the snps in the _ ormdl3 _ gene and asthma risk .our ultimate goal is to integrate multiple sources of genomic data for genetic association analyses .is a set of correlated exposures , for example , snp set ; is a mediator , for example , gene expression ; is an outcome , for example , disease ( yes / no ) ; and are covariates , including the true and potential confounders . ] in this paper we propose to jointly model a set of snps within a gene , a gene expression and disease status , where a logistic model is used to model the dependence of disease status on the snp - set and the gene expression , and a linear model is used for the dependence of the gene expression on the snp - set , both adjusting for covariates .we are primarily interested in testing whether a gene , whose effects are captured by snps and/or gene expression , is associated with a disease phenotype .we formulate this hypothesis in the causal mediation analysis framework [ ; ( ) ; ] .note that the previous work on causal mediation analysis is mostly focused on estimation .we use the joint model to derive the direct and indirect effects of a snp - set mediated through gene expression on disease risk . 
for eqtl snps ,we show that the total effect of a gene on a disease captured by a set of snps and a gene expression corresponds to the total effects of the snp - set , which are the combined direct effects and indirect effects of the snps mediated through the gene expression , on a disease .this framework allows us to study how the use of gene expression data can enhance power to test for the total effect of a snp - set on disease risk .we study the impact of model misspecification using the conventional snp - only models when the true model is that both the snps and the gene expression affect the disease outcome . for non - eqtl snps, the null hypothesis simply corresponds to the joint effects of the snps and the gene expression .due to a potentially large number of snps within a gene and some of them possibly being highly correlated , that is , in high linkage disequilibrium ( ld ) , conventional tests , such as the likelihood ratio test , for the total effects of multiple snps and a gene expression , do not perform well .we propose in this paper a variance component test to assess the overall effects of a snp set and a gene expression on disease risk , under the null that the test statistic follows a mixture of distributions , which can be approximated analytically or empirically using a resampling - based perturbation procedure [ ; ] .as the true disease model is often unknown , we construct an omnibus test to improve the power by accommodating different underlying disease models .the rest of the paper is organized as follows . in section [ sec2 ]we introduce the joint model for snps , a gene expression and disease as well as the null hypothesis of no joint effect of the snps and the gene expression on a disease phenotype . in section [ sec3 ]we propose a variance component score test for the total effect of snps and gene expression , and construct an omnibus test to maximize the test power across different underlying disease models . in section [ sec4 ]we interpret the null hypothesis and study the assumptions within the framework of causal mediation modeling for eqtl snps and non - eqtl snps . in section [ sec5 ]we evaluate the finite sample performance of the proposed test using simulation studies and show that the omnibus test is robust and performs well in different situations .in section [ sec6 ] we apply the proposed method to study the overall effect of the _ ormdl3 _ gene contributed by both the snps and the gene expression on the risk of childhood asthma , followed by discussions in section [ sec7 ] .[ smodel ] the statistical problem is to jointly model the effect of a set of snps and a gene expression on the occurrence of a disease .assume for subject ( ) an outcome of interest is dichotomous ( e.g. , case / control ) , whose mean is associated with covariates ( , with the first covariate to be 1 , i.e. , the intercept ) , snps in a snp - set ( ) , mrna expression of a gene ( ) and possibly the interactions between the snps and the gene expression as where , and are the regression coefficients for the covariates , the snps , the gene expression , and the interactions of the snps and the gene expression , respectively .a snp - set and gene expression pair can be defined in multiple ways .for example , can be the snps in a gene and is the mrna expression of the gene . in the asthma data example , are the 10 typed snps around _ ormdl3 _ and is the expression of _ormdl3_. 
alternatively , one can choose the snp - set / expression pair based on the eqtl study : eqtl snp - set / corresponding gene expression . as snps can affect gene expression[ ; ; ] , for each subject , we next consider a linear model for the continuous gene expression ( i.e. , the mediator ) , which depends on the covariates ( ) and the snps ( ) : where and are the regression coefficients for the covariates and the snps , respectively , and follows a normal distribution with mean and variance . again, the snps can be the snps within a gene or the eqtl snps corresponding to a gene for which the expression level is measured .our goal is to test for the total effect of a gene captured by the snps in a set and a gene expression on , which can be written using the regression coefficients in model ( [ ymodel ] ) as note that the null hypothesis ( [ null ] ) only involves the parameters in the ] model in section [ sinterpret ] to facilitate interpretation of the null hypothesis ( [ null ] ) and to study the assumptions within the causal mediation analysis framework . throughout the paper, we term this null hypothesis as a test for the _ total effect of a gene_. later , in section [ sinterpret ] we will show that it corresponds to the _ total effect of snps _ for eqtl snps and simply to the joint effect of snps and expression for non - eqtl snps .[ stest ] [ ssvct ] we propose in this section to test for the null hypothesis of no total effect of a gene ( [ null ] ) under model ( [ ymodel ] ) . asthe number of snps ( ) in a gene might be large and some might be highly correlated ( due to linkage disequilibrium ) , the likelihood ratio test ( lrt ) or multivariate wald test for the null hypothesis ( [ null ] ) uses a large degree of freedom ( df ) and has limited power . 
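for concreteness , the two working models and the null hypothesis can be summarised in one display . the rendering below is a reconstruction : the coefficient names follow the symbols that remain legible in the later displays ( \alpha , \beta_s , \beta_g , \gamma for the outcome model and \phi , \delta , \sigma_g^2 for the expression model ) , and the exact placement of the terms is assumed :

```latex
% reconstruction of models (ymodel) and (gmodel) and of the null hypothesis;
% the coefficient symbols follow those visible in the later displays.
\begin{aligned}
\operatorname{logit}\,P(y_i=1\mid \mathbf{x}_i,\mathbf{s}_i,g_i)
   &= \mathbf{x}_i^{t}\boldsymbol\alpha
    + \mathbf{s}_i^{t}\boldsymbol\beta_s
    + g_i\,\beta_g
    + g_i\,\mathbf{s}_i^{t}\boldsymbol\gamma
    && \text{(outcome model)}\\
g_i &= \mathbf{x}_i^{t}\boldsymbol\phi
    + \mathbf{s}_i^{t}\boldsymbol\delta + \varepsilon_i ,
    \qquad \varepsilon_i \sim N(0,\sigma_g^{2})
    && \text{(expression model)}\\
h_0 &:\ \boldsymbol\beta_s=\mathbf{0},\ \beta_g=0,\ \boldsymbol\gamma=\mathbf{0}
    && \text{(no total effect of the gene)}
\end{aligned}
```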
to overcome this problem , we assume under model ( [ ymodel ] ) that the regression coefficients of the individual main snp effects are independent and follow an arbitrary distribution with mean 0 and variance , and that the snp - by - expression interaction coefficients ( ) are independent and follow another arbitrary distribution with mean 0 and variance . the resulting outcome model ( [ ymodel ] ) hence becomes a logistic mixed model . the problem of testing the null hypothesis ( [ null ] ) becomes a joint test of the variance components ( ) and the scalar regression coefficient for the fixed gene expression effect ( ) in the induced logistic mixed models as and . one can easily show that the scores for , and under the induced logistic mixed models are , where , , , and ; is the mean of under , and is the maximum likelihood estimator of under the null model . to combine the three scores to test for the null hypothesis and , one may consider the conventional score statistic , where and is the efficient information matrix of . however , this approach has several major limitations . first , notice that the score of the regression coefficient of gene expression is a linear function of , while the scores of the variance components of the main effects of snps and of the snp - by - expression interactions and are quadratic functions of . hence , has a different scale from . it follows that are likely to dominate . a combination of the three scores using is hence not desirable . second , involves quartic functions of , and the information matrix involves the 8th moment of . hence , calculations of are not stable , and it is difficult to analytically approximate the null distribution of . we hence propose the following weighted sum of the three scores as the test statistic for the null hypothesis ( [ null ] ) : $n^{-1}(\mathbf{y}-\hat{\boldsymbol\mu}_0)^t\bigl(a_1\mathbb{s}\mathbb{s}^t+a_2\mathbf{g}\mathbf{g}^t+a_3\mathbb{c}\mathbb{c}^t\bigr)(\mathbf{y}-\hat{\boldsymbol\mu}_0)$ , where are some weights . this statistic is a simple quadratic function of . hence , its null distribution can be easily approximated by a mixture of distributions . different weights can be chosen .
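the quadratic form in the display above is straightforward to compute once the null model has been fitted . the sketch below is one possible implementation ( the use of statsmodels for the null logistic fit and the generic default weights are our choices ; the specific variance - based weights described next can be passed in through the argument a ) :

```python
import numpy as np
import statsmodels.api as sm

def score_statistic(y, X, S, G, a=(1.0, 1.0, 1.0)):
    """Weighted quadratic-form score statistic for H0: no SNP, expression
    or SNP-by-expression interaction effect.

    y : (n,) binary outcome            X : (n, q) covariates incl. intercept
    S : (n, p) SNP genotypes           G : (n,)  gene expression
    a : weights (a1, a2, a3) for the SNP, expression and interaction parts
    """
    n = len(y)
    # null model: logistic regression of y on covariates only
    mu0 = sm.GLM(y, X, family=sm.families.Binomial()).fit().fittedvalues
    r = y - mu0                                   # residuals under the null
    C = S * G[:, None]                            # SNP-by-expression interaction columns
    a1, a2, a3 = a
    K = a1 * (S @ S.T) + a2 * np.outer(G, G) + a3 * (C @ C.T)
    return float(r @ K @ r) / n
```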
with an equal weight , is equivalent to the variance component test for by assuming , and follow a common distribution with mean zero and variance .however , this common distribution assumption is strong , as , and generally have different scales and so do their effects , and .notice that are all quadratic functions of in similar forms ; we propose to weight each term using the inverse of the square root of their corresponding variances .this allows each weighted term to have variance 1 and be comparable .specifically , the variances for are , , , respectively , where denotes the component - wise multiplication of conformable matrices and , denotes a vector of ones , the diagonal and off - diagonal elements of are and [\hat\mu_{0i'}(1-\hat\mu _ { 0i'})] ] .this means under the null hypothesis follows a mixture of distributions , which can be approximated using a scaled distribution by matching the first two moments [ ] as , where ] , and the expressions of and are given in section 3 of the supplementary material [ ] .alternatively , one can approximate the mixture of distribution using the characteristic function inversion method [ ] .[ somnibus ] so far we derive the test statistic under the outcome model specified in ( [ ymodel ] ) , which assumes the disease risk of depends on snps , gene expression and their interactions .denote this by .suppose that the disease risk of depends on snps and gene expression but not their interactions , or even depends only on snps , then it is more powerful to test for the total snp effect using the test statistics and , respectively . under these simpler disease models ,the test statistic loses power as it tests for extra unnecessary parameters . on the other hand ,if the disease risk indeed depends on snps , expression and their interactions , performing tests only using snps or main effects will lose power , compared to .since in reality we do not know the underlying true disease model , it is difficult to choose a correct model .it is hence desirable to develop a test that can accommodate different disease models to maximize the power .moreover , in a genome - wide association study , it is almost impossible that one disease model is true for tens of thousands of genes .thus , we further propose an omnibus test where we identify the strongest evidence among the three different models with : ( 1 ) only snps , ( 2 ) snps and gene expression , and ( 3 ) snps , gene expression and their interactions .specifically , we calculate the -value under each of the three models , then compute the minimum of the three -values and compare the observed minimum -value to its null distribution . because of the complicated correlation among , and , it is difficult to analytically derive the null distribution of the minimum -value . to this end , we resort to a resampling perturbation procedure . as shown in section [ ssvct ] , converges in distribution to .the empirical distribution of can be estimated using the resampling method via perturbation [ ; ] .the perturbation method approximates the target distribution of by generating random variables from the estimated asymptotic distribution of .this perturbation procedure is also called the score - based wild bootstrap [ ] .specifically , set , where s are independent . 
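the resampling logic just outlined ( draw perturbed realizations of the statistics , convert them to p - values , and compare the observed minimum p - value with its resampled null ) can be sketched as follows . the exact perturbation formula is not reproduced above , so the code uses one standard multiplier form in which the null - model residuals are rescaled by independent standard normal draws , with the same draws reused for all three statistics so that their correlation is preserved ; it is an illustration only :

```python
import numpy as np

def perturbation_pvalues(r, kernels, n_perturb=1000, seed=0):
    """Multiplier-resampling p-values for quadratic-form statistics.

    r       : (n,) residuals y - mu0 under the null model
    kernels : list of (n, n) matrices, one per candidate disease model
    Returns the per-model p-values and the omnibus (minimum p-value) p-value.
    """
    rng = np.random.default_rng(seed)
    n = len(r)
    q_obs = np.array([r @ K @ r / n for K in kernels])

    q_null = np.empty((n_perturb, len(kernels)))
    for b in range(n_perturb):
        rb = rng.standard_normal(n) * r        # same multipliers for every kernel
        q_null[b] = [rb @ K @ rb / n for K in kernels]

    p_each = (q_null >= q_obs).mean(axis=0)
    # omnibus: compare the observed minimum p-value with its resampled null
    p_resampled = np.array([(q_null >= q_null[b]).mean(axis=0)
                            for b in range(n_perturb)])
    p_omnibus = (p_resampled.min(axis=1) <= p_each.min()).mean()
    return p_each, p_omnibus
```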
by generating independent repeatedly, the distribution of conditional on the observed data is asymptotically the same as that of .denoted by , where is the number of perturbations , it follows that the empirical distribution of the is the same as that of asymptotically .the -value can be approximated using the tail probability by comparing with the observed .hence , one can calculate the -values of , and by setting , and and generate their perturbed realizations of the null counterpart as , and , and compare them with corresponding observed values , respectively .note that for each perturbation , the random normal perturbation variable is the same across the three tests such that the correlation among , and can be preserved .the -value of the omnibus test can be easily calculated using the perturbation method .let , and be the three -values using the three statistics , where , and .the null distribution of the minimum -value , , can be approximated by , ( . the -value of the omnibus test hence can be calculated by comparing the observed minimum -value with its empirical null distribution .note that different from permutation where the observed data are shuffled and resampled to calculate the test statistic , the perturbation procedure resamples from the asymptotic null distribution of without recalculating using the shuffled data .thus , it is much more efficient than the permutation method .using a cpu ( 2.53 ghz ) to run 100 genes ( each with 10 snps ) and 100 cases/100 controls , the computation time is 134.10 and 809.76 seconds for the perturbation and permutation methods ( both with 200 resampling ) , respectively .furthermore , covariates can be easily adjusted using the perturbation method , but adjustment is more difficult using permutation .specifically , the permutation - based -values calculated by simply permuting snps and gene expression fail if snps / gene expression are correlated with covariates .[ sinterpret ] [ scmm ] to understand the null hypothesis of no total effect of a gene captured by snps in a gene and a gene expression and the underlying assumptions , we discuss in this section how to interpret the null in ( [ null ] ) within the causal mediation analysis framework .causal interpretation can be helpful for understanding genetic etiology of diseases as well as for applications in pharmaceutical research [ ] .although genotype is essentially fixed at , it is at that time effectively randomized , conditional on parental genotypes , and could be viewed as subject to have hypothetical intervention .the statistical problem of jointly modeling a set of snps , a gene expression and a disease can be presented as a causal diagram [ ; ] in [ fig1 ] and be framed using a causal mediation model [ ( ) ; ] based on counterfactuals [ ( ) ] . and have used the causal mediation analysis for epidemiological and social science studies , respectively , where the exposure of interest is univariate .one can decompose the _ total effect _ ( te ) of snps into the _ direct effect _ ( de ) and the _ indirect effect _ ( ie ) .the _ direct effect _ of snps is the effect of the snps on the disease outcome that is not through gene expression , whereas the _ indirect effect _ of the snps is the effect of the snps on the disease outcome that is through the gene expression . 
within the causal mediation analysis framework, we derive in the supplementary material the te , de and ie of the snps on the disease outcome [ ] .specifically , we define the total effect ( te ) of snps as that is , the equation ( [ ymodel ] ) marginalizes over gene expression . in section 1 of the supplementary material [ ] , we show that for rare diseases , the total effect of the snps on the log odds ratio ( or ) of disease risk can be expressed in terms of the regression coefficients in models ( [ ymodel ] ) and ( [ gmodel ] ) and is approximately equal to \\[-8pt ] & & { } + \tfrac{1}{2}\sigma_g^2({\mathbf{s}}_1 + { \mathbf{s}}_0)^t{\bolds{\gamma}}({\mathbf{s}}_1-{\mathbf{s}}_0)^t{\bolds{\gamma}}.\nonumber\end{aligned}\ ] ] we can express the de and ie of the snps on the log odds ratio of disease risk in terms of the regression coefficients in models ( [ ymodel ] ) and ( [ gmodel ] ) . for rare diseases , they are , respectively , approximately equal to \nonumber\\[-8pt]\label{de}\\[-8pt ] & & { } + \tfrac{1}{2}\sigma_g^2({\mathbf{s}}_1 + { \mathbf{s}}_0)^t{\bolds{\gamma}}({\mathbf{s}}_1- { \mathbf{s}}_0)^t{\bolds{\gamma}},\nonumber \\ \mathrm{ie}&=&({\mathbf{s}}_1-{\mathbf{s}}_0)^t{\bolds { \delta}}\bigl(\beta_g+{\mathbf{s}}_1^t{\bolds{\gamma } } \bigr ) .\label{ie}\end{aligned}\ ] ] these are derived in section 1 of the supplementary materials using counterfactuals under the assumptions of no unmeasured confounding [ ] .the sum of the direct and indirect effects is equal to the total effect of the snps , that is , . as shown in the supplementary material [ ] and discussed in section 3.2 , identification of the total effectrequires a much weaker assumption than those required for the direct and indirect effects .[ ssnull ] under the assumption that the gene expression is associated with the snps ( i.e. , eqtl snps ) , that is , , using equations ( [ de ] ) and ( [ ie ] ) , the test for the joint effects of snps in a snp set and a gene expression on , that is , the total effect of a gene , is equivalent to a test for the total snp effect on the outcome ( ) .in fact , for eqtl snps , which have nonzero effects on expression ( i.e. , ) , using the expressions of de and ie in ( [ de ] ) and ( [ ie ] ) , one can show that the null hypothesis of no direct and indirect genetic ( snp ) effects is equivalent to the null hypothesis ( [ null ] ) that all the regression coefficients ( , and ) equal zero : the null hypothesis ( [ null ] ) that all the regression coefficients ( , and ) are equal to zero is also equivalent to the null hypothesis of no total effect of the snps provided if or is not 0 for eqtl snps ( , that is , we show in section 1.4 of the supplementary material that the null hypothesis ( [ null1 ] ) requires only the assumption of no unmeasured confounding for the effect of eqtl snps ( ) on the outcome ( ) after adjusting for the covariates ( ) [ ] .most genetic association studies make this assumption . in other words, we make no stronger assumption than standard snp only analyses for testing the null hypothesis of no total effect of the snp set in a gene. note that in models ( [ ymodel ] ) and ( [ gmodel ] ) we allow other covariates ( ) to affect both the gene expression and the disease .if the covariates affect both expression and disease , ignoring may cause confounding in estimating de and ie . 
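for reference , the pieces of the rare - disease effect decomposition that remain legible in the displays above can be restated compactly ; here \mathbf{s}_1 and \mathbf{s}_0 denote the two snp vectors being contrasted , and the coefficient symbols are those of models ( [ ymodel ] ) and ( [ gmodel ] ) ( the leading terms of te and de are not reproduced in the text and are therefore omitted ) :

```latex
% pieces of the rare-disease approximation legible in the displays above
\begin{aligned}
\mathrm{ie} &= (\mathbf{s}_1-\mathbf{s}_0)^{t}\,\boldsymbol\delta\,
               \bigl(\beta_g + \mathbf{s}_1^{t}\boldsymbol\gamma\bigr),\\
\mathrm{te} &= \mathrm{de} + \mathrm{ie},
\end{aligned}
\qquad\text{with both te and de containing the interaction-induced term }
\tfrac{1}{2}\sigma_g^{2}\,(\mathbf{s}_1+\mathbf{s}_0)^{t}\boldsymbol\gamma\,
(\mathbf{s}_1-\mathbf{s}_0)^{t}\boldsymbol\gamma .
```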
as shown in figure [ fig1] , if arrows from to and exist and is not controlled for , assumption ( 2 ) in section 1.2 of the supplementary material is violated [ ] .but if the covariates , the common causes of expression and disease , do not affect the snps ( no arrow from to ) , the estimation and hypothesis testing for te is still valid . however , if there does exist an effect of on , then it violates the above assumption of no unmeasured confounding for the association and , thus , the test or estimation for te will be biased .if snps have no effect on gene expression ( ) , that is , they are not eqtl snps , then there is no indirect effect of the snps on , so that the null hypothesis of no total effect of a gene ( ) is not equivalent to testing for no total snp effect on ( ) . in this case, what the null hypothesis , , tries to evaluate is simply whether there exists a joint effect of the given set of snps and the given gene expression , and possibly their interactive effect , on disease risk .to test for such a joint effect , we need the first two assumptions regarding no unmeasured confounding in section 1.2 of the supplementary material : no unmeasured confounding of the snps on the outcome and no unmeasured confounding of the gene expression on the outcome [ ] .[ ssgwas ] in standard genetic association analysis , we usually fit the following snp only model : which does not take gene expression into account , but simply considers the association between the outcome and snps adjusting for covariates .note for the special case where snp , , is univariate , the model ( [ gwas ] ) corresponds to single snp analysis , the most common approach in gwas . and have developed tests for a snp - set for under ( [ gwas ] ) , which can be more powerful than individual snp tests for the association between the joint effects of the snps in a gene and the outcome by borrowing information across snps within a gene , especially when the snps are in good linkage disequilibrium ( ld ) . assuming the true models that depend on both snps and a gene expression are specified in ( [ ymodel ] ) and ( [ gmodel ] ), we study in this section how in the misspecified standard snp only model ( [ gwas ] ) is related to the regression parameters and in the true model ( [ ymodel ] ) and what the null hypothesis under ( [ gwas ] ) tests for . to focus on the fundamental issues and for simplicity, we first discuss the case of no interaction effect between snps and gene expression on disease risk , that is , in model ( [ ymodel ] ) . underthe true ] models in ( [ ymodel ] ) and ( [ gmodel ] ) assuming no interaction ( ) , by plugging ( [ gmodel ] ) into ( [ ymodel ] ) , the true ] model as \approx c \bigl\{{\mathbf{x}}_i^t ( { \bolds{\alpha}}+ \beta_g{\bolds{\phi}})+{\mathbf{s}}_i^t ( { \bolds{\beta}}_s+\beta_g{\bolds{\delta } } ) \bigr\ } , \label{trueysmodel}\ ] ] where [ ] .a comparison of ( [ gwas ] ) with ( [ trueysmodel ] ) shows that and that the effect of versus on the outcome under the snp only model ( [ gwas ] ) corresponds to , which is proportional to the total effect of snps in ( [ te ] ) when .it follows that testing for in the snp only model ( [ gwas ] ) is approximately equivalent to testing for no total effect of the snps .however , if there exists a snp - by - expression interaction on and the snps are eqtl snps , the naive snp only analysis using ( [ gwas ] ) does not provide obvious correspondence to the total snp effect . 
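for the no - interaction case , the induced marginal model sketched in the display above can be written out cleanly ; the left - hand side and the attenuation constant c are assumed from context , their exact forms not being reproduced in the text :

```latex
% induced marginal (snp-only) model when gamma = 0, obtained by plugging
% (gmodel) into (ymodel); c is an attenuation constant whose exact form
% is not reproduced here.
\operatorname{logit}\,P(y_i=1\mid\mathbf{x}_i,\mathbf{s}_i)\;\approx\;
c\,\bigl\{\mathbf{x}_i^{t}(\boldsymbol\alpha+\beta_g\boldsymbol\phi)
        + \mathbf{s}_i^{t}(\boldsymbol\beta_s+\beta_g\boldsymbol\delta)\bigr\}
```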
as shown in section 2 of the supplementary material [ ] , the induced true ] follows the interaction model ( [ ymodel ] ) , the induced true ] . the test for under the misspecified snp only model ( [ gwas ] )will still be valid for testing the total effects of snps , because under the null the two models are the same .however , the misspecified model is subject to power loss , compared to the test based on the correctly specified model . with only an interaction effect ( , , ) , ( [ trueysint ] ) can be written as \approx c_{i}^ * \{{\mathbf{x}}_i^t{\bolds{\alpha } } + { \mathbf{x}}_i^t{\bolds{\phi}}{\mathbf{s}}_i^t{\bolds{\gamma}}+{\mathbf{s}}_i^t{\bolds{\delta}}{\mathbf{s}}_i^t{\bolds{\gamma } } \ } , $ ] where . if we assume this is the true model and fit the conventional gwas model ( [ gwas ] ) to test for the snp effect , the test is again still valid under the null , but loses power under the alternative .the approach here differs in several ways from that based on mendelian randomization [ smith and ebrahim ( and ) ] in which genetic markers ( snps ) are instrumental variables to assess the effect of an exposure ( in our case , a gene expression value ) on an outcome .here we are interested in using a gene expression to increase power for testing for the total effect of snps on a disease outcome .furthermore , mendelian randomization makes the assumption that snps do not have an effect on an outcome except through an exposure ( e.g. , gene expression in our case ) , in other words , no direct effect .no such assumption is being made here .this is because we are interested in testing for a different effect , that is , the effect of snps , rather than the effect of an exposure ( gene expression ) on disease risk .[ ssimulation ] to make the simulation mimic the motivating asthma data [ ] , we simulated data using the _ ormdl3 _ gene on chromosome 17q21 .we generated the snp data in the _ ormdl3 _ gene by accounting for its linkage disequilibrium structure using hapgen based on the ceu sample [ ] .the genomic location used to generate the snp data is between 35.22 and 35.39 mb on chromosome 17 , which contains 99 hapmap snps .ten of the 99 hapmap snps are genotyped on the illumina humanhap300 array , that is , 10 typed snps .to generate gene expression and the disease outcome , we assumed there is one causal snp and varied the causal snp among the 99 hapmap snps in each simulation . in section [ sec5.4 ]we further perform a simulation study assuming three causal snps . for subject , gene expression generated by the linear regression model , .the outcome was generated by the logistic model the parameters and the range of , and were based on the empirical estimates from analysis of the asthma data . for each simulation, we first generated a cohort with 1000 subjects , and 100 cases and 100 controls were randomly selected from the 1000 subjects to form a case control sample .two sets of simulations were performed . in the first set , we selected the snp rs8067378 as the causal snp , as this snpis highly associated with asthma in the original gwas [ ] . for each configuration of , , and , we generated 2000 data sets to calculate the empirical size and power .in the second set of simulation , the causal snp was chosen one at a time out of the 99 hapmap snps .for each selected causal snp , we generated 1000 data sets and evaluated statistical power for two different disease models : or . 
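the data - generating scheme just described is easy to emulate . the sketch below produces a single case - control data set under the no - interaction disease model ; the numerical values of the intercept alpha0 , the effect sizes delta , beta_s , beta_g and the noise level sigma_g are placeholders ( the paper took them from the asthma data ) , and the cohort is assumed to contain enough cases and controls for the sampling step :

```python
import numpy as np

def simulate_dataset(S_cohort, causal_idx, delta, beta_s, beta_g,
                     sigma_g=1.0, alpha0=-2.0,
                     n_cases=100, n_controls=100, seed=None):
    """One case-control data set under the no-interaction disease model.

    S_cohort   : (N, p) cohort genotypes (e.g. N = 1000, HAPGEN output)
    causal_idx : column index of the causal SNP
    delta      : effect of the causal SNP on expression
    beta_s, beta_g : effects of the causal SNP and of expression on disease
    alpha0 and sigma_g are illustrative placeholders.
    """
    rng = np.random.default_rng(seed)
    s = S_cohort[:, causal_idx]
    G = delta * s + rng.normal(0.0, sigma_g, len(s))          # expression model
    p = 1.0 / (1.0 + np.exp(-(alpha0 + beta_s * s + beta_g * G)))
    Y = rng.binomial(1, p)                                     # disease outcome
    # sample cases and controls (assumes the cohort contains enough of each)
    cases = rng.choice(np.flatnonzero(Y == 1), n_cases, replace=False)
    controls = rng.choice(np.flatnonzero(Y == 0), n_controls, replace=False)
    idx = np.concatenate([cases, controls])
    return S_cohort[idx], G[idx], Y[idx]
```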
in both simulation settings, we used the 10 typed snps of the gene on the illumina chip to form the snp - set for the model ( [ ymodel ] ) , that is , , in calculating the test statistics , , and the omnibus test . for and , both weighted and unweighted methods were investigated , where , and for the weighted statistic , and for the unweighted statistic .the -values were calculated using the scaled approximation , the davies method by inverting the characteristic function [ ] and the perturbation procedure with 500 perturbations .the results of these approximations were very similar at the significance level of 0.05 .we performed the omnibus test by combining the evidence from , weighted and weighted .[ ssinglesnp ] we first evaluated the sizes of the proposed score tests , where the null distribution was approximated by either the scaled approximation or the perturbation procedure ( table [ tab1 ] ) .type i errors are well protected using both approximation methods under the three models with statistics . the empirical size is close to 0.05 for the omnibus test and the three models . as the results using different approximation methods are similar at the level of 0.05 , we only present in the following the empirical power using the perturbation method .we also evaluate the performance of the proposed tests using the characteristic function inversion method [ ] and the perturbation method at smaller sizes ( and ) ( table [ tab1 ] of the supplementary material [ ] ) , and find the methods perform well .@@ & & + & & + & * unweighted * & * weighted * & * unweighted * & * weighted * + + snps & & + snps and expression & 4.80 & 4.95 & 4.80 & 4.80 + snps , expression and interaction & 4.75 & 4.60 & 4.60 & 4.35 + omnibus test & & 5.15 + + snps & & + snps and expression & 4.87 & 4.90 & 4.83 & 4.87 + snps , expression and interaction & 4.60 & 4.80 & 4.77 & 5.07 + omnibus test & & 5.13 + [ cols="^,^ " , ] we performed additional simulations to assess how model misspecification influences our proposed test .gene expression was generated without the normality assumption , .two outcome models are explored .the first model generates the outcome by the logistic model assuming nonlinear effects of snps and as , and the second model generates by a probit model .although the model is not correctly specified in our proposed test under these settings , the joint analyses of snps and expression still outperform the snp - only analyses when the gene expression contributes to the risk of developing disease .similarly , the performance of the omnibus test is very close to the optimal test obtained under the true model for different scenarios ( figure [ fig4 ] ) .we conducted two additional simulation studies by varying the number of causal variants and ld structures .the pattern of the results from these additional studies is very similar to what is presented above ( figures [ fig1 ] and [ fig2 ] in the supplementary material [ ] ) .the first additional study is similar to the study in section [ ssinglesnp ] , except that there are three causal snps in the _ ormdl3 _ gene instead of a single causal snp . 
using the same ten typed snps for the analyses, we again found that the test performs the best when the model is correctly specified and the omnibus test approaches the optimal test obtained under the true model with limited power loss ( figure [ fig1 ] of the supplementary material [ ] ) .similar to analyses in section [ smultiplesnp ] , the second additional study investigates the performance of the proposed test at 15q2415q25.1 , where snps have a different ld pattern from the _ormdl3 _ gene .assuming one causal snp at a time , we used the same ten typed snps to perform our proposed test .again , the test performs the best when the model is correctly specified , and the omnibus test is robust and approaches the optimal test obtained by assuming the true model , and the power depends on the correlation of the causal snp and the typed snps ( figure [ fig2 ] of the supplementary material [ ] ) .[ sdataanalysis ] we applied the proposed testing procedures to reinvestigate the genetic effects of the _ ormdl3 _ gene on the risk of childhood asthma in the mrc - a data [ ; ] .this subset of the data contained 108 asthma cases and 50 controls where we have complete data of the 10 typed snps and gene expression of _ ormdl3_. the snp data were genotyped using the illumina 300k chip and the gene expression was collected using the affymetrix hu133a 2.0 .we analyzed the data using both additive and dominant modes : in the additive mode , the genotype was coded as the number of the minor allele ( i.e. , 0 , 1 , 2 ) , whereas in the dominant mode , the genotype was coded as whether or not the minor allele was present ( i.e. , 0 , 1 ) .@@ & * multivariate * & * bonferroni- * & * permutation- * & * vct * & * vct * + & * wald * & * adjusted * & * adjusted * & * unweighted * & * weighted * + + snps & 0.122 & 0.102 & 0.039 & + snps , gene & 0.342 & 0.194 & 0.057 & 0.039 & 0.033 + snps , gene and interaction & 0.013 & 0.303 & 0.093 & 0.0025 & 0.0028 + omnibus test & & + + snps & 0.018 & 0.015 & 0.018 & + snps , gene & 0.094 & 0.031 & 0.018 & 0.0040 & 0.0040 + snps , gene and interaction & 0.131 & 0.098 & 0.048 & 0.0035 & 0.0023 + omnibus test & & + we applied the proposed tests for the total snp effect of _ormdl3 _ using the snp and gene expression data .there are strong associations between the snps and the gene expression ( 8 out of 10 with -value.05 and the other two with -values of 0.076 and 0.21 ) , that is , the snps are eqtl snps .we considered six test statistics : , unweighted , weighted , unweighted , weighted , and omnibus test ( table [ tab2 ] ) .we compared our methods with the standard multivariate or univariate methods : the multivariate wald test , which has 10 , 11 and 21 degrees of freedom under the three models ( snps only , snps and gene expression , and snps , gene expression and interactions ) .we also included in the comparison the test using the smallest -value from the 10 single snp analyses with the bonferroni adjustment or the adjustment using the permutation procedure to account for the correlation among the snps .the results in table [ tab2 ] show that our proposed methods give smaller -values compared to the standard testing procedures .the test , which accounts for the effects of snps , gene expression and their interactions , gives the smallest value compared to the tests only using snps , in both additive and dominant modes .for example , the -values using weighted and are 0.0028 and 0.044 , respectively , using the additive snp model .the omnibus test calculated using the 
perturbation procedure by computing the minimum -value from , weighted and weighted also provides a more significant signal than those only considering snps , with the -value being 0.0055 .these results are consistent with the findings in simulation studies .we also performed genome - wide analyses for both snp - sets and single snp .we first paired eqtl snps with their corresponding gene expression [ ] and performed snp - only analyses and our proposed method .for single snp analyses , after adjustment of multiple comparison using false discovery rate [ fdr ; ] , 56 snps with fdr.1 were identified in snp - only analyses and 97 snps were identified from the proposed omnibus test . for snp - set analyses, we grouped eqtl snps that correspond to the same gene as a snp - set , and we identified 5 and 15 snp - sets ( fdr.1 ) from snp - only analyses and omnibus tests , respectively .[ sdiscussion ] we proposed in this paper to integrate snp and gene expression data to improve power for genetic association studies .the major contributions of this paper are as follows : ( 1 ) to formulate the data integration problem of different types of genomic data as a mediation problem ; ( 2 ) to propose a powerful and robust testing procedure for the total effect of a gene contributed by snps and a gene expression ; and ( 3 ) to relax the assumptions required for mediation analyses in the test for the total effect .specifically , as shown in figure [ fig1 ] , we are able to integrate the information of snps and gene expression as a biological process through the mediation model .our proposed variance component score test for the total effect of snps and a gene expression circumvents the instability of estimation of the joint effects of multiple snps and gene expression , because only the null model needs to be fit .mediation analysis to estimate direct and indirect effects generally requires additional unmeasured confounding assumptions , and previous work mainly focused on estimation .here we focus on testing for the total effect of a gene using snps and a gene expression . for eqtl snps , we show that the total effect of a gene contributed by snps and a gene expression is equivalent to the total effect of snps , which is the sum of direct and indirect effects of snps mediated through gene expression . 
testing for the total snp effect only requires one assumption : no unmeasured confounding for the effect of snps on the outcome , which is the same assumption as the standard gwas and , thus , no stronger assumption is required .we characterize the relation among snps , gene expression and disease risk in the framework of causal mediation modeling .this framework allows us to understand the null hypothesis of no total effect of a gene contributed by snps and gene expression , and the underlying assumptions of the test for both eqtl snps and non - eqtl snps .we propose a variance component score test for the total effects of a gene on disease .this test allows to jointly test for the effects of snps , gene expression and their interactions .we showed that the proposed test statistic follows a mixture of distributions asymptotically , and proposed to approximate its finite sample distribution using a scaled distribution , a characteristic function inversion method or a perturbation method .we considered three tests : using only snps ( ) , snps and gene expression main effects ( ) , and snps , gene expression and their interactions ( ) .our simulation study shows that all three tests have the correct type i error for testing for the overall snp effect .the relative power of these tests depends on the underlying true relation between the predictors ( snps , gene expression and their interactions ) and disease .as the underlying biology is often unknown , we further constructed the omnibus test that identifies the most powerful test among the three disease models , and proposed to use the perturbation method to calculate the value for the omnibus test .further , the test using only the main effects of snps and gene expression loses limited power compared to the omnibus test and can be used as a simple alternative .our results also show that to test for the total effects of a gene , the tests that incorporate both snp and gene expression information , such as and , are more powerful if snps are associated with gene expression than if they are not .in other words , it is even more beneficial to incorporate gene expression data with snp data to detect genetic effects on disease if gene expression is a good causal mediator for the snps . to achieve this ,a natural way is to select snps located within or at the neighborhood of a gene , since it has been well destablished that the snp within a gene can alter its expression value via transcription regulation [ ] .alternatively , one can restrict the joint snp - expression analysis to known eqtl snps .if selection of eqtl snps is based on statistical significance , one also needs to be aware of the possibility that cis - action may be a confounding effect of snps on array hybridization [ ] .we mainly focus on testing for the total effect of a gene in this paper .the proposed method can be easily extended to test for direct and indirect effects separately for eqtl snps .using equation ( [ de ] ) , to test for the direct effect of the snps , one can test .using the notation in equation ( [ q - stat ] ) , one can test this null hypothesis using the statistic , where the null model is a logistic model with and . to test for the indirect effects of the snps , using equation ( [ ie ] ), one can test . 
using the notation in equation ( [ q - stat ] ), one can test this null hypothesis using the statistic , where the null model is a logistic model with and .as the number of snps might be large and some snps might be highly correlated ( with high ld ) , standard regression to fit the null model might not work well .one can fit the null model using ridge regression . to perform these tests, one will need to make the four unmeasured confounding assumptions required for estimating direct and indirect effects of snps stated in section 1.2 of the supplementary material [ ] .gene expression may not be the only mediator for the relation between snps and disease .other biomarkers , such as dna methylation , proteins , metabolites of the gene product in the blood , immunological or biochemical markers in the serum , and environmental factors can also serve as potential mediators , depending on the context or the disease to be studied .for instance , epigenetic variations have been reported to exert heritable phenotypic effects [ ] .furthermore , our proposed test can be applied to address many other scientific questions as long as there exists a causal relationship as illustrated in figure [ fig1 ] .for example , the snp - gene - disease relations can be replaced by the dna copy number - protein - cancer stage ( early vs. late ) in tumor genomics studies to assess if copy number can have any effect on the clinical stage of cancer .it is advantageous to set up a biologically meaningful model before applying our proposed test procedure , which makes the best use of the prior knowledge .the authors would like to thank the editor , the associate editor and the referees for their helpful comments that have improved the paper .
genetic association studies have been a popular approach for assessing the association between common single nucleotide polymorphisms ( snps ) and complex diseases . however , other genomic data involved in the mechanism from snps to disease , for example , gene expressions , are usually neglected in these association studies . in this paper , we propose to exploit gene expression information to more powerfully test the association between snps and diseases by jointly modeling the relations among snps , gene expressions and diseases . we propose a variance component test for the total effect of snps and a gene expression on disease risk . we cast the test within the causal mediation analysis framework with the gene expression as a potential mediator . for eqtl snps , the use of gene expression information can enhance power to test for the total effect of a snp - set , which is the combined direct and indirect effects of the snps mediated through the gene expression , on disease risk . we show that the test statistic under the null hypothesis follows a mixture of distributions , which can be evaluated analytically or empirically using the resampling - based perturbation method . we construct tests for each of three disease models that are determined by snps only , snps and gene expression , or include also their interactions . as the true disease model is unknown in practice , we further propose an omnibus test to accommodate different underlying disease models . we evaluate the finite sample performance of the proposed methods using simulation studies , and show that our proposed test performs well and the omnibus test can almost reach the optimal power where the disease model is known and correctly specified . we apply our method to reanalyze the overall effect of the snp - set and expression of the _ ormdl3 _ gene on the risk of asthma . ,
forecasting of various time series has been widely used in many fields of science and practice , and many methods have been advanced for the prediction of time series . but a common problem of each method is the a priori estimate of its accuracy . common practice is to take a truncated series ( reference series ) ending in the past , investigate its statistical parameters , build a prediction and compare it with the existing continuation of the series under investigation . using a moving shift of the reference series one can collect the needed statistics and obtain an estimate of the accuracy of the method used , depending on the length of the prediction . the accuracy obtained in this way is then assigned to real predictions . in most predicted time series , the last observed point ( epoch ) preceding the first predicted one already has its final value and is not subject to refinement in the future ( e.g. , the number of sunspots for some epoch ) . this is not the case for eop . all real eop predictions are based on an operational solution that may differ substantially from the final eop values , which usually become available in one to two months . as an illustration of the foregoing , let us consider fig . 1 . this figure shows the epoch of the beginning of the prediction , the operational eop series used for computation of the prediction , the prediction computed in real time ( at that epoch ) , the final eop series , and the prediction that would have been computed from the final series had it been available at that epoch . owing to this circumstance , the accuracy and other statistical parameters of the operational eop series may differ substantially from those of the final series , which are commonly used for the a priori estimation of prediction accuracy . it means that estimates of prediction accuracy obtained by the `` standard '' method may be far from reality , and some modification of it ( or at least an investigation ) is desirable . evidently , the simplest way to make an a priori estimate of prediction accuracy is to apply fictive disturbances to one or more of the last points of the reference interval and to investigate the reaction of a given prediction method to errors at the last observed epoch(s ) . this test was realized in ( malkin & skurikhina 1996 ) . two kinds of fictive errors were applied to the real observed points : _ test 1 _ : the value of 1 mas was added to ( or subtracted from ) the c04 value corresponding to the last observed epoch . _ test 2 _ : the values of 0.5 , 1.0 , 1.5 mas were added to ( or subtracted from ) the c04 values corresponding to the three last observed epochs . this test was used only for the arima method because its influence on the extrapolation of the trend - harmonics model ( e.g. ,
mccarthy & luzum 1991 ) can be easily foreseen without special calculations . typical differences between predictions of the real and of the distorted c04 series are presented in table [ tab : pred_xy_dist ] . one can see that a serious degradation of accuracy may occur when the arima method is applied to erroneous observed eop values . it should be mentioned that this effect depends practically linearly on the value of the error . analogous results were obtained for the prediction of ut ( malkin & skurikhina 1996 ) . table [ tab : pred2 m ] : influence of errors in the last values on prediction results . in this paper we have attempted to estimate the real accuracy of eop predictions using real - time predictions made at the iaa and at the usno . although the collected statistics are too sparse to allow more or less final conclusions , we can state that : * an estimate of prediction accuracy based on the use of old data is not adequate for the accuracy of real - time prediction , especially for short - term prediction . a modification of the commonly used method of a priori accuracy estimation , e.g. , the one proposed in ( malkin & skurikhina 1996 ) , can give more realistic estimates . * the accuracy of the eop prediction methods used at the iaa and at the usno is approximately the same . a more detailed conclusion can be made only after collecting supplementary statistics . * estimates of both the rms and the maximal prediction errors are very useful for potential users . it seems reasonable to provide such estimates for the iers and other prediction series .
to estimate the real accuracy of eop prediction , real - time predictions made by the iers subbureau for rapid service and prediction ( usno ) and by the institute of applied astronomy ( iaa ) eop service are analyzed . methods for the a priori estimation of prediction accuracy are discussed .
in the mathematical literature , there are important examples of models designed to describe the dynamical interaction between a set of players in a game - like context . players are indistinguishable members of a large population , characterized by a phenotype which determines the fixed strategy they choose among those available when playing with any other individual randomly selected from the population . + the payoff earned in the game by each player depends on the elements of a game - specific payoff matrix , while the system dynamics is described by an ordinary differential equation defined on the n - simplex , namely the replicator equation . + the solutions of the replicator equation may evolve towards evolutionarily stable strategies , which are nash equilibria of the game and asymptotically stable stationary states . evolutionarily stable strategies are robust against invasion by competing strategies . other kinds of nash equilibria can also exist , such as equilibria corresponding to lyapunov stable stationary states of the replicator equation . + the replicator equation has been used to describe several phenomena , such as biological evolution driven by replication and selection , and reaction - diffusion dynamics . replicator dynamics including mutation have also been studied and may lead to more complex behaviors , characterized by hopf bifurcations and limit cycles . moreover , the replicator equation has been used for solving decision and consensus control problems and in machine learning for optimization . significant applications have also been developed in the field of social science . although widespread , the replicator equation rests on several strong assumptions about the system under investigation . 1 . the population is very large . indeed , dominant strategies emerge because they replicate at higher rates than the others , causing the frequency of less efficient strategies to become irrelevant in the total population . 2 . any member of the population can play with any other member with the same probability . participants of each game are chosen randomly and no social structures are present in the population . 3 . the payoff earned by each player is defined by the payoff matrix , which is the same for the whole population . in some cases , two or more subpopulations with different payoff matrices are considered . 4 . players are constrained to behave according to a single fixed strategy at each round of the game they are playing . for instance , if the phenotype of an individual is to be aggressive or generous , he will show the same level of aggressiveness or generosity in any situation at any time , without the capacity to regulate the level of his natural impulses . the above assumptions are the basis of the well - known equivalence between the replicator equation and some ecological models , such as the predator - prey model introduced by lotka and volterra . + several modifications have been introduced into the replicator equation to overcome the limitations caused by the above assumptions . for example , a generalization of the replicator equation to games with an arbitrary number of players has been presented .
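as a concrete point of reference for what follows , here is a minimal numerical sketch of the classical replicator dynamics recalled above ; the payoff matrix , the initial frequencies and the explicit euler integration step are illustrative assumptions , not values taken from the paper .

import numpy as np

def replicator_step(x, A, dt=1e-3):
    """One explicit Euler step of dx_i/dt = x_i * [(A x)_i - x.(A x)]."""
    fitness = A @ x                      # payoff of each pure strategy against the current mix
    return x + dt * x * (fitness - x @ fitness)

A = np.array([[3.0, 0.0],                # hypothetical 2x2 payoff matrix in which strategy 2 dominates
              [5.0, 1.0]])
x = np.array([0.9, 0.1])                 # initial frequencies of the two phenotypes

for _ in range(20000):
    x = replicator_step(x, A)
print(x)                                 # the dominant strategy takes over the whole population

the same scheme , with the average payoff computed against each player 's own neighbourhood instead of the whole population , is what the graph version introduced below amounts to .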
concerning networked populations , much effort has been made to include the topology of connections between players , in order to deal with scenarios in which the connection between agents plays a fundamental role and may be crucial for the interpretation of the observed phenomena . for example , an algorithm has been proposed that uses the connections among players and specific updating rules to induce cooperative behavior in an evolutionary prisoner's dilemma game . on the other hand , a seminal paper by ohtsuki et al . presented the replicator equation on infinite graphs , under the assumption that every vertex is connected to the same number of neighbours . in particular , they showed that the replicator equation on a graph with fixed degree is equivalent to a classical replicator equation when the degree goes to infinity . + in this paper , we derive a suitable mathematical model to describe the static and dynamical behaviour of a multi - player game interaction , where an agent engages in games only with the restricted set of players connected to him . to this aim , we make use of standard results from non - cooperative game theory and graph theory . the results presented in the paper relax many of the assumptions underlying the classical replicator equation and extend recent results on the replicator equation on graphs . specifically , the equation derived in this paper meets the following : 1 . a finite ( even small ) or infinite population is considered . the elements of the population are the vertices of a graph . each element can be engaged in a game with another element only if they are connected in the graph . 2 . no constraints on the topology of the graph are assumed . moreover , the connections can be weighted to reflect the different perceived importance of each interaction . 3 . the payoff matrix can be player - specific , including situations where the perception of the game is different for each individual . 4 .
each player can behave according to a combination of strategies .he is a sort of `` mixed player '' , thus incorporating composite and multiple personality traits .his behavior can be driven contemporarily by heterogeneous impulses with different strengths , such as , for example , being cooperative and non cooperative , generous and selfish , at the same time .the proposed approach makes players more realistic than in the classical framework and naturally extends the evolutionary game theory to a social context with human players .the new framework presented in the paper generalizes the classical replicator equation , that can be obtained as a special case by assuming that any individual of the population possesses the same payoff matrix and starts playing from an identical initial condition .+ the paper is structured as follows .section [ sec : ncgames ] presents some preliminaries on noncooperative games on graphs .then , the extended version of the replicator equation on graphs with generic topology is introduced in section [ sec : reg ] .some properties of the new replicator equation are presented in section [ sec : mathreg ] , including the equivalence to the classical replicator equation when homogeneous initial conditions are used .extended simulations are reported in section [ sec : sim ] , while some conclusions and future work are discussed in section [ sec : conclusions ] .in real world situations , interactions between a finite number of rational players can be influenced by topological constraints ; in most cases , each player is only able to meet with a reduced number of opponents which are close with respect to a suitable topology , such as the distance between them . in this sense, we talk about networks of players , where interconnections depend on the context .an interesting case of interaction is represented by non - cooperative games , extensively studied in .+ in this section , we extend the classical game theory by introducing a network , represented by a graph , which describes the connection among the involved players . typically , networks are described by means of graphs , and in a game context , each player is represented by a vertex .an edge between two players indicates that they interact .however , a player can consider that some interactions are more important than others .moreover , two connected players can have different perception of the importance of their interaction .these aspects can be accounted by assuming that the graph is weighted and directed ; an edge starting from a player and ending to another , is labeled with a positive weight to indicate the importance that the first player attributes to the game .+ formally , let be a directed weighted graph of order , and let be the set of vertices ( players ) ( ) .the graph is fully described by its adjacency matrix ; in particular , when player is meant to play with player , then there is an edge which starts from and ends to . in this case ,-entry of , , is the positive weight attributed by to the game against .when and , there is an interaction between and , but only will get a payoff after the challenge . finally , if both and are equal to , then there is no interaction between these players . in general ,we assume that has no self - edge , which means that no player has interaction with himself ( i.e. ) .we indicate with the neighborhood of ( i.e. the set of vertices that interact with ) , and with the out - neighborhood of ( i.e. 
the set of vertices is connected to with and exiting edge ) .the cardinalities of these set are indicated with and , and they represent the degree and out - degree of respectively .note that , in general , , and also .+ a player will play exactly two - players games with all its neighbours . in each one - to - one competition ,the set of available strategies for both players is , while the outcome that player can obtain is defined by a payoff matrix ; when player uses strategy in a two - players game against a player which uses strategy , then he earns a payoff equal to the -entry of , , where and are the -th and -th versors of , respectively .+ each player decides to use the same strategy in all the games he is involved in .he will play against all vertices in , but he will earn a payoff only when he plays with a player , since when and , there is an interaction which is meaningful only for player .+ in an interconnected context , the effective payoff earned ( or the _ fitness _ of a strategy ) must be defined as an _ environmental measure _depending on all the interactions between near players .this measure must quantify how well a strategy behaves .since each connection between two players has a positive weight , we pose that the effective payoff for a generic player is the weighted average of all obtained payoffs .let s denote with the strategy of the generic player .then , the effective payoff of player , is the following : where is the normalization factor .this model of payoff based on weighted average will be denoted with wa . however , there are situations in which payoffs are cumulative and the weighted sum is used without the normalization factor .in this case we have that : the payoff model based on weighted sum ( ws ) can be considered as wa , where each payoff matrix is substituted by .for this reason , we will mainly work on wa model , unless differently specified .+ the term that appears in both wa and ws models , is a vector where all components are non - negative numbers which sum up to . in a certain way ,player fights against one _ virtual player _ which summarize all the strategies used by its opponents in the set ; in general , the strategy used by the virtual player is a mixed strategy which represents what player effectively _ sees _ around him .this aspect will be deeply investigated later in this paper , because it plays a fundamental role to reach our aim .+ notice that , for each , can be interpreted as a -dimensional tensor , where the -entry is . in this way , the game interaction between interconnected players on a finite graph is equivalent to a -players game , where the set of pure strategies is , and the payoff of player is represented by the tensor . the structure of the graph is embedded in this definition , since the payoff tensor depends on the adjacency matrix . moreover , there are no assumptions made on the structure of the graph itself .+ for example , consider the following matrices : , ~~\boldsymbol{b } = \left [ { } \begin{array}{cc } a & b \\ c & d \end{array } \right],\ ] ] where , and assume that for all . in this case , and .table [ table : payoffs ] shows the payoff tensors of each player , which depend on the model parameters and .both models wa and ws are considered .+ .payoff tensor of the game on graphs defined by matrices in equation for wa and ws payoff models .for each combination of strategies ( ) the payoffs and of players , , are reported . 
[ cols="^,^,^,^,^,^,^,^,^ " , ] it is evident that the presence of weights , the asymmetry of the matrix , and the use of a particular payoff model may lead to very different calculation of the payoff tensor , and hence , the structure of the game itself changes .indeed , the effective payoff obtained by a player when he is engaged in a game is essentially evaluated by means of tensors , depending on the adjacency matrix of the graph .these payoffs define the _ virtual player _ mentioned at the end of section [ sec : payoff_graphs ] , which embodies all the strategies used by the player s opponents . as a consequence ,each player in the game is a sort of `` mixed player '' , thus incorporating composite and multiple personality traits , and behaving according to heterogeneous impulses with different strengths .+ as a natural consequence , a -players -strategies game ( from now on , -game ) can be extended over the set of mixed strategies : ^{t } \in \mathbb{r}^{m } : \sum_{i=1}^{m } z_{i } = 1 \wedge z_{i } \geq 0 ~~ \forall i \in \mathcal{s } \}.\ ] ] we indicate with ^{t } \in \delta_{m} ] ) .+ systems represents the replicator equation on a graph .note that no assumptions on the structure of the graph is needed to derive the equation .indeed , the adjacency matrix of the network is fully embedded in the payoff tensors .+ it is straightforward to note that the equation has a structure similar to the classical replicator equation ; for example , dominant strategies are the fittest , and hence when the relative fitness is better than the average , the corresponding frequencies will grow over time . in the next section , the very strong correlation between the two equations will be rigorously shown .furthermore , the relationship between nash equilibria of the underlying -game and the rest points of the dynamical equation will also be discussed in section [ sec : reproperties ] .let be the unique solution of problem , obtained by posing .in addition , suppose that there exists a time instant where .since all the components of the solution are continuous and non - negative at , then there must be a time such that .following equation , we can state that , and hence , this component will be for all times after . for the unicity of the solution, this implies that no time for which exists .thus , for each we have that : for all strategies and for all times .notice that the total variation of the strategies distribution in a vertex is null at time when .in fact : this means that : imposing that , the last equation asserts that for all time .joining the results provided by and , we conclude the following : in other words , all trajectories that start inside remain inside itself for all time . at any time, can be always interpreted as a distribution of strategies .recall that the best response function for the static -game is : suppose that is a nash equilibrium .then : for each vertex .this means that : moreover , from we know that : and then : we can conclude that every nash equilibrium is also a rest point of the replicator equation on graph .suppose that .then and in addition , if , and again : for this reason : this implies that if each represents a pure strategy ( i.e. 
it is equal to a versor of ) , then we have a rest point of the replicator equation on graph .suppose to fix a time lag , and assume that the mixed strategies are all the same for each vertex and for any time .that is : ^{t } \in \delta_{m } ~~ \forall v \in \mathcal{v } , ~~\forall t_{0 } \in [ 0 , \tau).\ ] ] consider the payoff model wa and suppose that for all vertices .following equations and , we obtain that and . in this case , we can rewrite the difference equation as follows : since previous equations do not depend on , we are able to impose that and , , and hence : it s straightforward to note that any other iteration of the previous map leads to quantities that are independent from .for example , applying a second iteration we get that : and hence we can pose that . generalizing to any time lag , for any non negative integer .similarly , and are also independent from . for these reasons , we pose that and , the discrete map becomes the following : note that , for any there exist a non - negative integer and a real number , with fixed , such that .then , equation becomes : considering the difference ratio , and letting , we obtain the following differential equation : which is the classical replicator equation . +this result is quite straightforward if we imagine to divide a wide population of replicators into subpopulations , assuming that all of them are described by the same mixed strategy of the total one at initial time .then , each subpopulation will behave exactly as the total one .hence , the dynamics of a single subpopulation in a vertex can be described by the classical replicator equation applied to the single population , whatever is the graph used .in this chapter , we present some simulations produced by equation . the payoff model is used . in particular , we set up experimental sessions by considering different -strategies payoff matrices ( ) ; it is assumed that every vertex has the same payoff matrix .each session has been developed over different graphs with vertices as reported in figure [ fig : graphs ] .all edges represented in figure [ fig : graphs ] have the same weight , except for thicker ones in the asymmetric weighted graph .note that we are using only undirected graphs ( i.e. ) .our aim is to show the behavior of the replicator equation on graphs when initial players strategies are almost pure .in fact , a vertex player with a pure strategy is in steady state ; for this reason , initial conditions used for vertex players are equals to slightly perturbed pure strategies ( i.e. 
$[1-\varepsilon , \varepsilon]^{t}$ and $[\varepsilon , 1-\varepsilon]^{t}$ , with $\varepsilon$ small , are used in place of the two pure strategies , respectively ) . the replicator equation on graphs has been simulated until a steady - state behavior is reached , starting from different initial distributions of strategies on the graph . + the steady - state situations are shown in figures [ fig : sym_bistable ] , [ fig : asym_bistable ] , [ fig : prisoner ] and [ fig : coexistence ] . the first column of each figure gives a picture of the initial conditions used , while the others report the solution of the simulations when steady state is reached for each of the considered graphs . the color of each vertex indicates its strategy mix , and hence visually quantifies the inclination of the player toward one of the feasible pure strategies ; yellow is used for a player adopting the first pure strategy , red for the second , and mixed strategies are indicated by shaded colors , according to the color bar at the bottom of the figures . moreover , figures [ fig : timecourse_pd ] and [ fig : timecourse_coex ] report the dynamical evolution obtained on the asymmetric weighted graph ; the same initial condition is used in both figures , while the payoff matrices are different . the following sections discuss in detail the results of each simulation . in this first experimental session we used a two - strategy coordination ( bistable ) payoff matrix , depending on a parameter that controls the relative strength of the first strategy . the two - player game described by this matrix has two strict pure nash equilibria ( both players use the first strategy , or both use the second ) and one mixed nash equilibrium . the classical replicator equation based on this matrix has exactly three rest points , which coincide with the nash equilibria reported above . moreover , the mixed equilibrium is repulsive , while the pure equilibria are attractive ; for this reason we say that the matrix is bistable . + figure [ fig : sym_bistable ] reports some results obtained in the symmetric case . row ( a ) of the figure shows what happens when a homogeneous initial condition is used ; as said in section [ sec : classical ] , the dynamics is the same for each vertex player , and it is equivalent to the solution given by the classical replicator equation , whatever the underlying graph structure . after a certain time all vertex players adopt the same pure strategy , since it represents an attractive rest point and the initial condition lies inside the corresponding basin of attraction . + row ( b ) of figure [ fig : sym_bistable ] reports the steady - state situations obtained from an initial condition that is homogeneous except for one peripheral player , who uses the opposite quasi - pure strategy . at the end of the simulation the majority pure strategy has spread over all the considered graphs . let us consider the open star : the hub , which is the unique neighbor of the rebel , has no reason to change his own strategy , since he is surrounded by yellow players . similarly , on the closed and on the asymmetric weighted star , the neighbors of the rebel see a virtual player which is almost yellow . thus none of them wants to change , and the rebel player must modify his strategy in order to obtain a good payoff . in a certain way , the rebel peripheral player decides to adapt himself to the majority . + the dynamical behavior is slightly different when the central hub is the rebel ( a minimal numerical sketch of the graph dynamics used in these experiments is given below ) .
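a minimal numerical sketch of the graph replicator dynamics used in these experiments , under the wa payoff model , is the following ; the open - star graph , the unit weights , the particular bistable payoff entries , the perturbation size and the integration scheme are illustrative assumptions and not the exact settings behind the figures .

import numpy as np

n = 5                                        # one hub (index 0) and four peripheral players
W = np.zeros((n, n))
W[0, 1:] = W[1:, 0] = 1.0                    # open star with unit edge weights
B = np.array([[2.0, 0.0],                    # illustrative bistable payoff matrix: both pure
              [0.0, 2.0]])                   # strategies are strict equilibria

eps = 0.05
X = np.tile([1.0 - eps, eps], (n, 1))        # quasi-pure first strategy everywhere ...
X[1] = [eps, 1.0 - eps]                      # ... except one peripheral "rebel" player

dt = 1e-2
for _ in range(60000):
    Z = (W @ X) / W.sum(axis=1, keepdims=True)   # WA model: weighted-average "virtual opponent"
    F = Z @ B.T                                  # F[v, i] = payoff of pure strategy i against Z[v]
    avg = (X * F).sum(axis=1, keepdims=True)     # average payoff of each vertex player
    X += dt * X * (F - avg)
print(X.round(3))                                # rows: steady-state strategy mix of each vertex

with this initial condition the rebel reverts to the majority strategy , as reported for row ( b ) ; moving the perturbed row to the hub ( index 0 ) lets one explore the qualitatively different outcome discussed next .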
row ( c ) of figure [ fig : sym_bistable ] shows the solutions of the replicator equation on graphs for this initial condition . when the open star is used , the hub sees a yellow virtual player , while all peripheral players have only him as their neighbor . the hub decides to change his own strategy towards yellow , while all the others do the exact opposite . after a certain time they meet halfway , at the mixed equilibrium . the position of the rebel player in the graph strongly influences the dynamics of the whole system ; the leader ( the hub ) understands that he must modify his own strategy according to his neighborhood , while all the other players do the same , since their only opponent is the hub himself . however , the closed and the asymmetric weighted graphs are more resistant to the influence of the hub , because the peripheral players have more than one neighbor ; in these situations the hub no longer acts as a leader able to change the whole dynamics . + the last row ( d ) of figure [ fig : sym_bistable ] reports the final solutions when both the hub and one peripheral player use the quasi - pure red strategy . while the closed star structure remains resistant to the influence of the rebel players , the other graphs do not . the open star becomes all red at the final time . this is because the rebel peripheral player sees only the hub : they are both red , so neither wants to change strategy . simultaneously , the yellow neighbors of the hub change their strategy to red , since they see only a red player . + changing the value of the payoff parameter leads to different behaviors : as the parameter grows , the first strategy becomes stronger and tends to spread over the whole of each considered graph . figure [ fig : asym_bistable ] reports some results obtained with an asymmetric choice of the parameter . in particular , when the hub initially uses the weaker strategy , the mixed equilibrium is no longer reached on the open star graph ; all vertices eventually adopt the slightly stronger strategy . the strength of this strategy is also visible on the asymmetric weighted star when , at the beginning , two players adopt it : in the symmetric case of figure [ fig : sym_bistable ] only two players are red at steady state , while in the asymmetric case additional players become red as well . in general , the stronger strategy spreads further over the considered graphs as the asymmetry of the payoff matrix grows . in this section we show the results obtained with the replicator equation on graphs for a modified version of the prisoner's dilemma game . the payoff matrix is parameterized by a single temptation parameter . + _ cooperate _ and _ defect _ are the names typically used to indicate , respectively , the first and the second strategy of this classic game . the dilemma is that mutual cooperation produces a better outcome than mutual defection ; however , at the individual level , the choice to cooperate is not rational from a selfish point of view . in other words , the two - player game has only one nash equilibrium , reached when both players defect . note that in this version of the prisoner's dilemma the nash equilibrium is non - strict . moreover , the classical replicator equation based upon this payoff matrix has two rest points , which correspond to the two pure strategies .
in particular , the first one is repulsive , while the latter is attractive . + although mutual defection represents both a nash and a dynamical equilibrium , many works have shown that cooperation does not vanish when games are played over graphs and the temptation parameter takes suitable values ( see ) . the resilience of cooperation is shown in [ fig : prisoner ] , where the temptation parameter is set to an intermediate value . steady states depend on the initial conditions and on the type of graph used , and the behaviors can be very heterogeneous . when a homogeneous initial condition is considered ( row ( a ) ) , all players on all graphs become defectors ( again , this is the case in which the classical and the proposed replicator equations coincide ) . on the other hand , when the initial conditions are not homogeneous ( rows ( b ) , ( c ) and ( d ) ) , cooperation does not always completely vanish . figure [ fig : timecourse_pd ] shows the time course of the cooperation variable for each vertex of the graph ; in particular , the initial condition with an external outlier and the asymmetric weighted graph have been used . in some two - player games there are no pure nash equilibria . nevertheless , nash's theorem guarantees that at least one mixed equilibrium exists . for example , this happens for an anti - coordination payoff matrix , in which the best reply to each pure strategy is the other one ; the unique nash equilibrium is then mixed . the classical replicator equation has rest points at the two pure strategies , which are repulsive , and at the mixed equilibrium , which is attractive . in this case we speak about coexistence of both feasible strategies . + figure [ fig : coexistence ] reports the steady - state solutions obtained when this payoff matrix is used . again , when we have a homogeneous initial condition , everything works like the classical replicator equation , and hence all players go to the mixed nash equilibrium . when the initial condition is not homogeneous , the behaviors obtained through the replicator equation on graphs strongly depend on the topological structure of the underlying graph and on the initial conditions . figure [ fig : timecourse_coex ] shows in detail the behavior of the population when the asymmetric weighted graph and the initial condition with an external outlier are used . in this work a new mathematical model for evolutionary games on graphs with generic topology has been developed . we proposed a replicator equation on graphs , dealing with a finite population of players connected through an arbitrary topology . a link between two players can be weighted by a positive real number to indicate the strength of the connection . furthermore , the different perception that each player has about the game is modeled by allowing the presence of directed links and different payoff matrices for each member of the population . a player obtains his outcome after two - player games are played with all his neighbors ; the payoffs of these games are averaged ( wa model ) or simply summed up ( ws model ) . moreover , it has been shown that the proposed replicator equation on graphs extends the classical one , under the hypotheses that the wa payoff model is used , homogeneous initial conditions over the vertices are considered , and all vertex players have the same payoff matrix . in any case , no limitations are imposed on the underlying graph . + experimental results showed that the dynamics of evolutionary games are strongly influenced by the network topology .
as expected , more complex behavior emerges with respect to the classical replicator equation .for example , in the prisoner s dilemma game , cooperative and non - cooperative behaviors can coexist over the graph .moreover , when a -player game with strictly dominant strategies is considered , heterogeneous behavior is obtained , i.e. a part of the population chooses to play a dominant strategy , while others use different strategies .then , players become mixed ( coexistence of strategies ) .+ the very first step for extending this work is the study of dynamical and evolutionary stability of the rest points . by the way , we imagine that the concept of evolutionary stability must be revisited to deal with the proposed evolutionary multi - players game model based on graph , for which a theoretical effort is needed . indeed , in our opinion , the basic question `` is strategy resistant to invasion ? ''must be reformulated to fit with the new model , where the population of players is finite and is organized according to a social structure .+ the theory developed in this paper can also be extended to 3 or more strategies and can consider more complex topologies of the graph , such as small world , scale free , and random complex networks .+ from an applicative point of view , the authors intend to use the replicator equation on graphs to deal with biological and physical processes , such as bacterial growth , model of brain dynamics and reaction - diffusion phenomena .the developed model can be also profitably applied to solve networked socio - economics problems , such as decision making for the development of marketing strategies .
a new mathematical model for evolutionary games on graphs is proposed to extend the classical replicator equation to finite populations of players organized on a network with generic topology . classical results from game theory , evolutionary game theory and graph theory are used . more specifically , each player is placed in a vertex of the graph and he is seen as an infinite population of replicators which replicate within the vertex . at each time instant , a game is played by two replicators belonging to different connected vertices , and the outcome of the game influences their ability of producing offspring . then , the behavior of a vertex player is determined by the distribution of strategies used by the internal replicators . under suitable hypotheses , the proposed model is equivalent to the classical replicator equation . extended simulations are performed to show the dynamical behavior of the solutions and the potentialities of the developed model . * keywords : * replicator equation on graph , evolutionary game theory , finite populations , complex networks .
any cellular automaton ( ca ) with two states and local transition rule $f$ can be used to define a reversible second - order ca with a new rule acting on a pair of states \[ ( c , c' ) \mapsto ( f[c] + c' \mod 2\, ,\ c ) . \label{2nd} \] an inverse rule is \[ ( c , c' ) \mapsto ( c'\, ,\ f[c'] + c \mod 2 ) , \label{inv2nd} \] and it may also be rewritten as the conjugation of the direct rule by the operation $\mathsf{x}$ that exchanges the two states of the pair , $\mathsf{x}( c , c' ) = ( c' , c )$ . the ca acting on pairs of binary states can be considered as a four - state ca due to the simple correspondence between a pair of binary values and one of four cell states . let us consider a few different two - dimensional ca , referred to below as the first , second , third and fourth ca , and the second - order reversible ca derived from them using eq . ( [ 2nd ] ) . if we start with a single cell with value one and all others zero , then the total number of cells with nonzero values at each stage defines a sequence ; it is also possible to consider the sequences counting the number of cells with each particular nonzero value . the sequence was initially introduced in connection with `` noise '' in a computationally universal ca , but it is shown below that for the other three ca the sequences are the same . due to the definition of the second - order ca , eq . ( [ 2nd ] ) , a simple symmetry property holds , and the initial terms of the sequences are represented in the table below . recursive equations for these sequences are proved in this paper ( eqs . ( [ recr ] ) , ( [ recr1 ] ) and ( [ recr2 ] ) ) ; a negative argument can be used in them because the ca are reversible . due to eq . ( [ r1r2 ] ) the last formula is equivalent to the others , and both eq . ( [ recr1 ] ) and eq . ( [ recr ] ) are simply derived from eq . ( [ recr2 ] ) . an _ alternative form of the recursive equations _ , eqs . ( [ rec2r1 ] ) and ( [ rec2r2 ] ) , is also valid ; these equations are equivalent due to eq . ( [ r1r2 ] ) and , together with eq . ( [ rsum ] ) , imply a simple relation between the sequences . equations ( [ rec2r1 ] ) and ( [ rec2r2 ] ) can be proved _ by induction _ using eq . ( [ recr1 ] ) and eq . ( [ recr2 ] ) , respectively ; due to ( [ r1r2 ] ) it is enough to consider only one of them . eq . ( [ recr2 ] ) holds for the initial terms ; assuming that it holds for all smaller arguments , it allows the next term to be expressed as a linear combination of terms with smaller arguments , showing that the equation holds for this term as well . it remains to prove eq . ( [ recr2 ] ) . the recursion is proved below for the simpler case of the linear ca , together with a straightforward demonstration of equivalence for the remaining ca . let us start with the linear ( additive ) ca : for any two configurations the local rule defines a global map $f$ with the property $f( a \vartriangle b ) = f( a ) \vartriangle f( b )$ , where $\vartriangle$ is the symmetric difference and the configurations are considered as sets ( regions ) of cells with unit values .
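as an illustration of the second - order construction of eq . ( [ 2nd ] ) and of the counting of nonzero cells , here is a minimal sketch ; the local rule f used below is the linear parity rule on the von neumann neighbourhood ( one of the linear rules considered in the text ) , while the lattice size , the number of steps and the convention of counting cells whose four - state value is nonzero are assumptions made only for this example .

import numpy as np

def F(c):
    """Parity (mod 2) of the four side neighbours of each cell, with empty boundary."""
    s = np.zeros_like(c)
    s[1:, :] += c[:-1, :]; s[:-1, :] += c[1:, :]
    s[:, 1:] += c[:, :-1]; s[:, :-1] += c[:, 1:]
    return s % 2

L, T = 201, 64                      # lattice large enough that the pattern never reaches the edge
curr = np.zeros((L, L), dtype=int)  # "present" binary layer
prev = np.zeros((L, L), dtype=int)  # "past" binary layer
curr[L // 2, L // 2] = 1            # single cell with value one, empty past

counts = []
for t in range(1, T + 1):
    prev, curr = curr, (F(curr) + prev) % 2            # eq. (2nd): new layer = F(present) XOR past
    counts.append(int(np.count_nonzero(curr | prev)))  # cells of the four-state CA that are nonzero
print(counts[:16])                                     # initial terms of the sequence

reversibility can be checked directly : applying the inverse rule of eq . ( [ inv2nd ] ) to the final pair of layers step by step recovers the initial single - cell configuration .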
a configuration of a 2d ca can be described by the ( characteristic ) polynomial \[ p[c] \equiv p_{x , y}[c] = \sum_{i , j=-\infty}^\infty c_{i , j } x^i y^j , \label{poly} \] and eq . ( [ lin2 ] ) corresponds to \[ p[f( a \vartriangle b )] = p[f( a )] \oplus p[f( b )] \equiv p[f( a )] + p[f( b )] \mod 2 . \label{polin2} \] it is convenient further , for ca with two states , to treat eq . ( [ poly ] ) as a polynomial over gf(2 ) . let us consider the evolution of a pattern with a single nonzero cell for the first linear ca . it can be described using the equation for the global transition rule \[ p_{x , y}[c] \mapsto ( x^{-1}y^{-1} + x y^{-1} + x^{-1}y + xy )\, p_{x , y}[c] = ( x^{-1}+x ) ( y^{-1} + y )\, p_{x , y}[c] . \label{c1poly} \] this factorization ( see eq . ( [ c1poln ] ) ) corresponds to a decomposition into two characteristic polynomials of the 1d cellular automaton with local rule $c_i \mapsto c_{i-1} + c_{i+1} \mod 2$ , also known as `` rule 90 '' , with an initial pattern consisting of a single nonzero cell . the number of nonzero cells at the $t$-th step may be described by the equation \[ n(t) = 2^{s(t)} , \label{n90} \] where $s(t)$ is the number of units in the binary decomposition of $t$ . the polynomial is over gf(2 ) , and a property used further , \[ ( a + b )^{2^k} = a^{2^k} + b^{2^k} \mod 2 , \label{binom2k} \] is simply derived using recursion on $k$ . eq . ( [ binom2k ] ) can be used for an inductive proof of eq . ( [ n90 ] ) : for $t = 1$ eq . ( [ n90 ] ) holds ; assume it holds for all steps up to $2^k$ and write $t = 2^k + j$ with $0 \le j < 2^k$ . due to eq . ( [ binom2k ] ) the characteristic polynomial for step $t$ factors into $( x^{-2^k} + x^{2^k} )$ times the polynomial for step $j$ , i.e. into two shifted copies of the step-$j$ pattern , and because these copies are not `` overlapped '' the number of nonzero cells doubles ; since $s( 2^k + j ) = s( j ) + 1$ , eq . ( [ n90 ] ) holds for $t$ . the decomposition in eq . ( [ c1poly ] ) produces some simplification in comparison with the second linear ca , \[ p_{x , y}[c] \mapsto ( x^{-1} + x + y^{-1} + y )\, p_{x , y}[c] . \label{c2poly} \] on the other hand , the second ca may be considered as two independent copies of the first one on two _ `` diagonal '' sublattices _ , corresponding to even and odd values of $i + j$ respectively : visually , they correspond to cells with black and white colors on a checkerboard pattern after a rotation of the board . because the initial cell belongs to one of the sublattices , the configuration after any number of steps always belongs to the same sublattice , and the evolution is equivalent to the first ca acting on that diagonal sublattice . due to eq . ( [ binom2k ] ) and eq . ( [ c1poln ] ) , the application of $2^k$ steps of the first ca to an arbitrary configuration may be expressed as \[ p_{x , y}[c] \mapsto ( x^{-2^k}y^{-2^k} + x^{2^k}y^{-2^k} + x^{-2^k}y^{2^k} + x^{2^k}y^{2^k} )\, p_{x , y}[c] , \label{c1expmov} \] and the analogous property can be proved for the second ca , \[ p_{x , y}[c] \mapsto ( x^{-2^k} + x^{2^k} + y^{-2^k} + y^{2^k} )\, p_{x , y}[c] . \label{c2expmov} \] so patterns bounded by a region of size $2^k$ are replicated into four copies after $2^k$ steps , both for the first and for the second ca . for the first ca the coordinates of the four copies are shifted , due to eq . ( [ c1expmov ] ) , by $( \pm 2^k , \pm 2^k )$ , and for the second ca , due to eq . ( [ c2expmov ] ) , the shifts are $( \pm 2^k , 0 )$ and $( 0 , \pm 2^k )$ . such ca with the replicating property were initially considered by e. fredkin in the 1970s . for the two linear 2d ca an analogue of eq . ( [ n90 ] ) is true : the configuration of the first ca is represented as a product ( eq . ( [ c1poln ] ) ) of two `` rule 90 '' configurations , and the corresponding cell count ( eq . ( [ n90sqr ] ) ) can be derived directly from eq . ( [ n90 ] ) . in the more general case of such products of two 1d configurations , the number of nonzero cells of the 2d configuration is the product of the numbers of nonzero cells of the two 1d factors . a direct proof by induction for either of the two linear ca is also useful due to its similarity with the approach used further for the second - order ca : for $t = 1$ eq . ( [ n90sqr ] ) holds ; assume it holds up to $2^k$ and write $t = 2^k + j$ , $0 \le j < 2^k$ . the characteristic polynomial for step $t$ of the second ca satisfies eq . ( [ c2expmov ] ) and describes four shifted nonoverlapping copies of the step-$j$ region , so the cell count is multiplied by four and eq . ( [ n90sqr ] ) holds for $t$ . a similar proof by induction for the first ca uses eq . ( [ c1expmov ] ) . a second - order ca corresponds to a pair of polynomials .
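before developing the second - order case , the two facts just used for `` rule 90 '' , namely the cell - count formula of eq . ( [ n90 ] ) and the replication after $2^k$ steps , can be checked numerically with a short sketch ; the lattice size and the number of steps are arbitrary choices made only for this check .

import numpy as np

L, T = 1025, 256
c = np.zeros(L, dtype=int)
c[L // 2] = 1                                             # single nonzero cell

for t in range(1, T + 1):
    c = (np.roll(c, 1) + np.roll(c, -1)) % 2              # rule 90: XOR of the two side neighbours
    assert c.sum() == 2 ** bin(t).count("1")              # N(t) = 2**(number of units in binary t)
    if t & (t - 1) == 0:                                  # t = 2**k: only the cells at distance t survive,
        assert c.sum() == 2 and c[L // 2 - t] == c[L // 2 + t] == 1   # the 1-d replication property
print("cell-count and replication checks passed up to t =", T)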
for second - order ca derived from ca with two states described by polynomials over gf(2 ) local rule eq .( [ 2nd ] ) can be simply rewritten as a global one + p_2(x , y)\ , , p_2(x , y)\bigr ) .\label{plin2nd}\ ] ] for , due to eq .( [ c1poly ] ) and eq .( [ c2poly ] ) = t(x , y ) p(x , y ) \mod 2 \label{tp}\ ] ] with let us prove that for , with initial configuration with single nonempty cell after steps the configuration is described by polynomial = \bigl(f_{k+1}(t),f_k(t)\bigr ) , \label{pft}\ ] ] where are polynomials over gf(2 ) defined using recursive equation and is application of the polynomial to eq .( [ tp ] ) also considered over gf(2 ) . for ( [ pft ] ) holds = ( 1,0) ] and recursive equation eq .( [ recr ] ) corresponds to union of five disjoint regions without gaps , fig .[ fig : compos ] .let us check recursive equation for pair of polynomials eq .( [ pft ] ) representing all states of second - order ca and used for calculation of & = & \bigl(f_{2^k+j+1}(t),f_{2^k+j}(t)\bigr ) \notag \\ & = & \bigl(t^{2^k}f_{j+1}(t)+f_{2^k - j-1}(t),t^{2^k}f_j(t)+f_{2^k - j}(t)\bigr ) \notag \\ & = & t^{2^k}\bigl(f_{j+1}(t),f_j(t)\bigr)+ \bigl(f_{2^k - j-1}(t),f_{2^k - j}(t)\bigr ) \notag \\ & = & t^{2^k } p[c_j ] + p[{{\mathsf x}}c_{2^k - j-1 } ] , \label{c2kj } \end{aligned}\ ] ] where operation eq .( [ swap ] ) swaps values .( [ c2kj ] ) illustrates dynamics of pattern growth , fig .[ fig : compos ] . due to( [ invswap ] ) application of transition rule to pattern for any index satisfies property so , application of to eq .( [ c2kj ] ) corresponds to increase of four patterns and decrease of central region until . for four outer configurationsreach maximal size and may not grow more , so on next step they are joined into single central configurations and four cells appear near corners as centers for future growth .proofs of eqs .( [ recr][rec2r2 ] ) for directly follow from consideration of , because ( similarly with relation between and discussed earlier ) is equivalent with acting on _ a diagonal sublattice_. in such representation patterns for may look more closely packed fig .[ fig : composr2 ] , but it does not change recursive equations due to above mentioned equivalence . let us now consider and .local rule for both and uses only four closest cells with common sides in so - called _ von neumann neighborhood_. due to eq .( [ 2nd ] ) it is enough to consider actions of local rules for and on the first element of pair to describe differences between rules .if the rules act in the same way for any configuration under consideration , then actions of and for patterns derived from are also the same .comparison of definition and shows that local rules differ only for _ three nonempty cells _ in von neumann neighborhood . on fig .[ fig : compos1r2 ] for simplicity are shown only cells with nonzero first components in the pair for configurations used earlier , fig .[ fig : composr2 ] .all such pattern have 0,1,2,4 nonempty cells in von neumann neighborhood and so and act in the same way for such pattern .let us proof the property by induction .any new configuration is composition of five previous patterns and it is enough to consider new configurations near contiguities of they boundaries . 
due to consideration below for are four contacts of central pattern with outer configurations .four cells with two neighbors corresponds them .the cases correspond to contacts of four outer patterns and due to symmetry number of neighbors there are always even .in fact , it may be simply shown that all such configuration ( of cells with state 1 ) are simple diamond - like checkerboard patterns with cells , fig .[ fig : compos1r2 ] .let us now consider .the only difference between and is additional requirement about cells with common corners .the limitation always holds due to `` coloring '' properties already discussed earlier on page .indeed , each new generation of cells with state 1 for may appear only on checkerboard sublattice with opposite colors , _i.e. _ all cells with common corner for an empty cell going to be switched into the state 1 are empty .so , evolution of starting with configuration is also the same as for and . 9 t. toffoli and n. margolus , `` invertible cellular automata : a review , '' _ physica d * 45 * _ , 229253 ( 1990 ) . s. wolfram , _ cellular automata andcomplexity : collected papers _ , ( addison - wesley , reading ma 1994 ) . l. le bruyn and m. van den bergh , `` algebraic properties of linear cellular automata , '' _ lin .157 * _ , 217234 ( 1991 ) .s. wolfram , `` statistical mechanics of cellular automata , '' _ rev .phys . * 55 * _ , 601644 ( 1983 ) ; _ reprinted _ in , 370 .o. martin , a. m. odlyzko , and s. wolfram , `` algebraic properties of cellular automata , '' _ comm .phys . * 93 * _ , 219258 ( 1984 ) ; _ reprinted _ in , 71113 .b. chopard and m. droz , _ cellular automata modeling of physical systems _ , ( cambridge university press , cambridge 1998 ) .t. koshy , _ fibonacci and lucas numbers with applications _ , ( john wiley & sons , new york , 2001 ) .
recursive equations for the number of cells with nonzero values at -th step for some two - dimensional reversible second - order cellular automata are proved in this work . initial configuration is a single cell with the value one and all others zero .
coherent actions of apparently distinct physical systems often provoke questions of their possible interactions .such coherence in interacting systems is often a result of their synchronization .it became a popular topic with the discovery of synchronization of non - identical chaotic oscillators .over the years different types of synchrony were studied , notably phase synchronization .there were also numerous attempts to study more complicated interactions under the names of generalized synchronization or interdependence .in biological context synchronization is expected to play a major role in cognitive processes in the brain such as visual binding and large - scale integration .various synchronization measures were successfully applied to electrophysiological signals . in this workwe concentrate on nonlinear interdependence . for an experimentalist it is often interesting to know how two systems synchronize during short periods of evoked activity .such questions arise naturally in analysing data from animal experiments .one measures there electrical activity on different levels of sensory information processing and aims at relating changes in synchrony to the behavioral contex , such as attention or arousal. it may be the case that the stationary dynamics ( with no sensory stimulation ) corresponds to a fixed point .for instance , when one measures the activity in the barrel cortex of a restrained and habituated rat , the recorded signals seem to be noise . onthe other hand transient activity evoked by specific stimuli seems to provide useful information .for example , bending a bunch of whiskers triggers non - trivial patterns of activity ( evoked potentials , eps ) in both the somatosensory thalamic nuclei and the barrel cortex .explorations described in this paper aim at solving the following problem .suppose we have two pairs of transient signals , for example recordings of evoked potentials from thalamus and cerebral cortex in two behavioral situations .can we tell in which of the two situations the strength of coupling between the structures is higher ?thus we investigate if one can measure differences in the strength of coupling between two structures using nonlinear interdependence measures on an ensemble of eps .since eps are short , transient signals , straightforward application of the measures motivated by studies of systems moving on the attractors ( stationary dynamics ) is rather doubtful and a more sophisticated treatment is needed .our approach is similar in spirit to that advocated by janosi and tel for the reconstruction of chaotic saddles from transient time series .( note that the transients we study should not be confused with the transient chaos studied by janosi and tel . 
) thus we cut pieces of the recordings corresponding to well - localized eps and paste them together one after another . since we are interested in the coupled systems , unlike janosi and tel we obtain two artificial time series , to which we then apply nonlinear interdependence measures and linear correlations . it turns out that this approach allows us to extract information about the strength of the coupling between the two systems . we test our method on a population model of information processing in the thalamocortical loop ( figure [ schfig ] : structure of the model of the thalamocortical loop used in the simulations ) , consisting of two coupled wilson - cowan structures . sensory information is relayed through thalamic nuclei to cortical fields , which in turn send feedback connections to the thalamus . this basic framework of the early stages of sensory systems is to a large extent universal across different species and modalities . to check that the results are not specific to this particular system we also study the evoked dynamics of two coupled rössler - type oscillators in a non - chaotic regime . the paper is organized as follows . in sec . [ sec : measures ] we define the measures to be used . in sec . [ sec : models ] we describe the models used to test our method : our model of the thalamocortical loop is discussed in sec . [ sec : models1 ] and a system of two coupled rössler - type oscillators is described in sec . [ sec : models2 ] . in sec . [ sec : results ] we present the results . in sec . [ sec : results1 ] we show how various interdependence measures calculated on the transients are related to the coupling between the systems , while in sec . [ sec : results2 ] we study how the resolution of our methods degrades with noise . finally , in sec . [ sec : results3 ] , we apply a time - resolved interdependence measure and compare its utility with our approach .
we summarize our observations in sec .[ sec : concl ] .in the present paper we mainly study the applicability of nonlinear interdependence measures on the transients .these measures , proposed in , are non - symmetric and therefore can provide information about the direction of driving , even if the interpretation in terms of causal relations is not straightforward .these measures are constructed as follows .we start with two time series and , , measured in systems and .we then construct -dimensional delay - vector embeddings , similarly for , where is the time lag .the information about the synchrony is inferred from comparing the size of a neighborhood of a point in -dimensional space in one subsystem to the spread of its equal - time counterpart in the other subsystem .the idea behind it is that if the systems are highly interdependent then the partners of close neighbors in one system should be close in the other system .several different measures exploring this idea can be considered depending on how one measures the size of the neighborhood .these variants include measures denoted by , , , .we have studied the properties of most of these measures but for the sake of clarity here we report only the results for the `` robust '' variant and a normalized measure , as they proved most useful for our purposes .let us , following , for each define a measure of the spread of its neighborhood equal to the mean squared euclidean distance : where are the time indices of the nearest neighbors of , analogously , denotes the time indices of the nearest neighbors of .to avoid problems related to temporal correlations , points closer in time to the current point than a certain threshold are typically excluded from the nearest - neighbor search ( theiler correction ) .then we define the -conditioned mean where the indices of the nearest neighbors of are replaced with the indices of the nearest neighbors of .the definitions of and are analogous .the measures and use the mean squared distance to random points : and are defined as the interdependencies in the other direction , are defined analogously and need not be equal , .such measures base on repetitiveness of the dynamics : one expects that if the system moves on the attractor the observed trajectory visits neigborhoods of every point many times given sufficiently long recording .the same holds for the reconstructed dynamics .however , if the stationary part of the signal is short or missing , especially if we observe a transient such as evoked potential , this is not the case .still , if we have noisy dynamics , every repetition of the experiment leads to a slightly different probing of the neighborhood of the noise - free trajectory .this observation led us to an idea of gluing a number of repetitions of the same evoked activity ( with different noise realizations ) together and using such pseudo - periodic signals as we would use trajectories on a chaotic attractor .a similar idea was used by janosi and tel in a different context for a different purpose .an example of a delay embedding of a signal obtained this way is presented in fig .[ nbsfig ] .note that artifacts may emerge at the gluing points .this is discussed in , and some countermeasures are proposed . 
for simplicity we proceed with just gluing , as we expect that the artifacts only increase the effective noise level . the influence of noise is studied in sec . [ sec : results2 ] . figure [ nbsfig ] : delay - vector embeddings ( shown in planes defined by the first two principal components ) of pseudo - periodic signals obtained by gluing 50 evoked potentials generated in a model of the thalamocortical loop ; on the left ( signal from `` thalamus '' ) a point is chosen ( black square ) and its 15 nearest neighbors are marked with red ( gray ) diamonds , and on the right ( `` cortex '' ) the equal - time partners of the marked points from the left picture are shown . recently , time - resolved variants of the methods described above were studied . they are applied to ensembles of simultaneous recordings , each consisting of many different realizations of the same ( presumably short ) process . consider the state vectors at a given latency in each realization of the two time series . one idea is , for a given point , to find one neighbor in each of the ensembles and to construct a measure based on the distances to these neighbors . an alternative proposition is to look not at the nearest neighbors of a given point no matter what time they occur at , but rather at the spread of state vectors at the same latency across the ensemble . in sec . [ sec : results3 ] we study the measure defined in this way . recording the ensemble index of the -th nearest neighbor of a given state vector among the whole ensemble , one defines the corresponding conditioned spreads , and the time - resolved interdependence measure is then defined in analogy with the measures above ; analogously one can define the measure in the opposite direction and also time - resolved variants of the other interdependence measures . in the numerical experiments described in this paper we use fixed values of the time lag for the construction of delay vectors , of the embedding dimension , of the number of nearest neighbors and of the theiler correction . to calculate the interdependencies we used the code by rodrigo quian quiroga and chee seng koh available at http://www.vis.caltech.edu/~rodri/synchro/synchro_home.htm . in the case of the time - resolved measure we use the same embedding dimension and time lag ; to calculate this measure we used the code provided in the supplementary material of the corresponding paper . to compare the linear and nonlinear analysis methods we calculated the cross - correlation coefficients using matlab . while in numerical studies the correctness of the reconstruction can often be easily checked by comparison with the original dynamics , in the analysis of experimental data it can be a complex issue . correct reconstruction is a prerequisite for the application of our technique ; for technical details on best practices of delay - embedding reconstruction , pitfalls and caveats , see the cited literature .
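a minimal sketch of the neighbourhood - based construction described in this section is given below ; it implements one variant ( the ratio between the size of a point 's own neighbourhood and its size conditioned on the other signal ) , and the toy signals , the embedding parameters , the theiler window and all function names are illustrative assumptions rather than the exact choices of the study .

import numpy as np

def delay_embed(x, dim, lag):
    """Delay-vector embedding: row t is (x[t], x[t+lag], ..., x[t+(dim-1)*lag])."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

def pairwise_sq_dists(a):
    g = a @ a.T
    sq = np.diag(g)
    return np.maximum(sq[:, None] + sq[None, :] - 2.0 * g, 0.0)

def interdependence_s(x, y, dim=5, lag=2, k=10, theiler=10):
    """S(X|Y)-like ratio: own neighbourhood spread of X over its Y-conditioned spread."""
    X, Y = delay_embed(x, dim, lag), delay_embed(y, dim, lag)
    dX, dY = pairwise_sq_dists(X), pairwise_sq_dists(Y)
    n, idx = len(X), np.arange(len(X))
    ratios = np.empty(n)
    for i in range(n):
        mask = np.abs(idx - i) <= theiler        # Theiler correction: drop temporally close points
        dxi = np.where(mask, np.inf, dX[i])
        dyi = np.where(mask, np.inf, dY[i])
        nn_x = np.argsort(dxi)[:k]               # time indices of the k nearest neighbours of X_i
        nn_y = np.argsort(dyi)[:k]               # time indices of the k nearest neighbours of Y_i
        r_own = dX[i, nn_x].mean()               # mean squared size of the own neighbourhood
        r_cond = dX[i, nn_y].mean()              # Y-conditioned mean squared size
        ratios[i] = r_own / r_cond
    return ratios.mean()

rng = np.random.default_rng(0)
t = np.arange(2000)
drive = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)
resp = np.roll(drive, 3) + 0.05 * rng.standard_normal(t.size)   # toy pair of related signals
indep = rng.standard_normal(t.size)                             # unrelated control signal
print(interdependence_s(drive, resp), interdependence_s(drive, indep))

applied to a pair of glued pseudo - periodic signals , a markedly larger value for the coupled pair than for the control indicates that the equal - time neighbourhoods carry information about the coupling .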
in the simplest version , which we used ,each population is described by a single variable standing for its mean level of activity the variables and are the mean activities of excitatory and inhibitory populations , respectively , and form the phase space of a localized neuronal aggregate .the symbols , , , denote parameters of the model , are sigmoidal functions , and are input signals to excitatory and inhibitory populations , respectively .these equations take into account the absolute refractory period of neurons which is a short period after activation in which a cell can not be activated again .such models exhibit a number of different behaviors ( stable points , hysteresis , limit cycles ) depending on the exact choice of parameters . to relate the simulation results to the experiment we considered the observable , since the electric potential measured in experiments is related to the difference between excitatory and inhibitory postsynaptic potentials( see the discussion in ) .we studied a model composed of two such mutually connected aggregates , which we call `` thalamus '' and `` cortex '' ( figure [ schfig ] ) .note that the parameters characterizing the two parts are different ( see the appendix [ appa ] for a complete specification of the model ) . specifically , there are no excitatory - excitatory nor inhibitory - inhibitory connections in the thalamus .only the thalamus receives sensory input , and we assume that is always a constant fraction of .the connections between two subsystems are excitatory only . to model the stimulus we assumed that the input ( ) switches at some point from 0 to a constant value ( ) , and after a short time ( on the time - scale of relaxation to the fixed point ) switches back to zero .this is clearly another simplification , as the real input , which could be induced by bending a bunch of whiskers , would be a more complex function of time .however , the transient nature of the stimulus is preserved . in this simplesetting we can understand that the `` evoked potential '' corresponds to a trajectory approaching the asymptotic solution of the `` excited '' system ( with the non - zero input ) , followed by a relaxation to the `` spontaneous activity '' in the system with null input .the model parameters were chosen so that its response to brief stimulation were damped oscillations of both in the thalamus and the cortex similar to those observed in the experiments , both in terms of shape and time duration ( figure [ epfig ] ) .however , apart from that , we exercised little effort to match the response of the model to the actual activity of somatosensory tract in the rat brain .our main goal in the present work was establishing a method of inferring coupling strength from transients and not a study of the rat somatosensory system .for this reason it was convenient to use a very simplified , qualitative model .interestingly , the response of the model , measured for example as the activity of excitatory cells in the thalamus , extends in time well beyond the end of the stimulation ( figure [ epfig ] ) .such behavior is not observed in a single aggregate and requires at least two interconnected structures . `` evoked potentials '' ( ) , ( a ) , ( b ) and their delay - vector embeddings shown in a plane defined by the first two principal components ( c ) , ( d ) .plots ( a ) and ( c ) : thalamus , ( b ) and ( d ) : cortex . 
the intervals above the ep indicate the duration of the non - zero stimulus .black ( thick ) lines are solutions for the system without noise , blue ( thin ) curves are five different realisations of noisy dynamics.,scaledwidth=47.0% ] we performed numerical simulations in three modes : either stationary ( null or constant input ) , or not ( transient input ) .the dynamics of the model is presented in figure [ dynfig ] . in case of transient inputthe simulation was done for .we used the stimulus and which was 0 except for the time when it was and .the system settled in the stationary state during the initial segment ( ) which was discarded from the analysis .the noise was simulated as additional input to each of the four populations , see the appendix [ appa ] for the equations . for each population we used different gaussian ( mean , standard deviation )white noise , sampled at 1khz and interpolated linearly to obtain values for intermediate time points . in case of stationary dynamics we simulated longer periods , .the signals were sampled at 100hz before the synchronization measures were applied . in case of constant or null stimulationthe system approaches one of the two fixed - point solutions which are marked by large dots in figure [ dynfig ] . for the amount of noise used here the dynamics of the system changes as expected : the fixed points become diffused clouds ( figure [ dynfig ] ) . during the transient `` evoked potential '' the switching input forces the system to leave the null - input fixed point , approach the constant - input attractor , and then relax back to its original state ( figure [ dynfig ] ) .of course , in the presence of noise the shape of the transient is affected ( figure [ dynfig ] ) . observe the similarity between the embedding reconstructions of the evoked potentials ( figure [ epfig ] , bottom row ) and the actual behavior in - coordinates ( figure [ dynfig ] , bottom row ) .while we are specifically interested in the dynamics of thalamocortical loop which dictated our choice of the studied system , we checked if our approach is not specific to this model . our second model of choice consisted of two coupled rssler - type oscillators we used the frequency detuning parameter and the maximum coupling constant .the scaling parameter took values from to .the stimulation parameter was except for where it was set to ; the noise inputs , were gaussian white noise with parameters as for the wilson - cowan model .the simulation was done for ] . everywhere except in section [ sec : results3 ] we used . in section[ sec : results3 ] we used either ] and .
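The second test system can be sketched in the same spirit. The fragment below integrates two coupled Rössler-type oscillators with frequency detuning, a brief stimulus pulse added to the first oscillator, and additive Gaussian noise; the coupling scheme (difference coupling through the x variables), the drive direction and all numerical values are illustrative stand-ins rather than the exact configuration used in the simulations reported above.

```python
import numpy as np

def rossler_pair_deriv(state, eps=0.1, dw=0.02, a=0.15, b=0.2, c=10.0, stim=0.0):
    """Two Rössler-type oscillators; oscillator 2 drives oscillator 1 through
    the x variables with strength eps, and stim is a transient external input."""
    x1, y1, z1, x2, y2, z2 = state
    w1, w2 = 1.0 - dw, 1.0 + dw                  # frequency detuning
    return np.array([
        -w1 * y1 - z1 + eps * (x2 - x1) + stim,
        w1 * x1 + a * y1,
        b + z1 * (x1 - c),
        -w2 * y2 - z2,
        w2 * x2 + a * y2,
        b + z2 * (x2 - c),
    ])

def simulate(T=200.0, dt=0.01, t_on=50.0, t_off=52.0, amp=2.0,
             eps=0.1, noise=0.05, seed=0):
    """Euler-Maruyama integration with a rectangular stimulus pulse."""
    rng = np.random.default_rng(seed)
    state = np.array([1.0, 0.0, 0.0, 0.9, 0.1, 0.0])
    out = np.empty((int(T / dt), 6))
    for i in range(out.shape[0]):
        t = i * dt
        stim = amp if t_on <= t < t_off else 0.0
        state = (state + dt * rossler_pair_deriv(state, eps=eps, stim=stim)
                 + noise * np.sqrt(dt) * rng.standard_normal(6))
        out[i] = state
    return out
```

Repeating the simulation with different noise seeds yields the ensemble of noisy transients to which the gluing and the time-resolved measures are applied.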
we propose an approach for inferring the strength of coupling between two systems from their transient dynamics . this is of vital importance in cases where most of the information is carried by the transients , for instance in evoked potentials commonly measured in electrophysiology . we demonstrate the viability of our approach using nonlinear and linear measures of synchronization on a population model of the thalamocortical loop and on a system of two coupled rössler - type oscillators in a non - chaotic regime .
wave scattering from irregular surfaces continues to present formidable theoretical and computational challenges , especially with regard to analytical treatment of statistics , and numerical solution for wave incidence at low grazing angles , where the insonified / illuminated region may become very large .computationally , the cost of the necessary matrix inversion scales badly with wavelength and domain size and can rapidly become prohibitive ; this is compounded by the large number of green s function evaluations , whose overall cost is therefore sensitive to the form which this function takes . under the assumption of purely forward - scattering ,a successful approach has been the parabolic integral equation method ( pie) .this makes use of a ` one - way ' parabolic equation ( pe ) green s function , leading to the replacement of the helmholtz integral equations by their small - angle analogue . for 2d problems this green s function takes a particularly tractable form ;this , together with the volterra ( one - sided ) form of the governing integral operator , affords the key advantage of high numerical efficiency , and in the perturbation regime allows derivation of analytical results .nevertheless , the method yields no information about the field scattered back towards the source . on the other hand , where backscatter is required ,operator series solution methods such as left - right splitting and method of ordered multiple interactions have proved highly versatile , in both 2 and 3 dimensions .these use the full free - space green s function and proceed by expanding the surface fields about the dominant ` forward - going ' component , and thereby circumvent the difficulties of tackling the full helmholtz equations . in this paperwe combine these approaches , extending the standard pie description to a ` two - way ' method , thus allowing for both left- and right - travelling waves .this is obtained in the obvious way by replacing the parabolic equation green s function by a form symmetrical in range. the integral operator can be split into left- and right - going parts ; under the assumption that forward scattering dominates , the solution can then be written as a series and truncated .every term of this series is a product of volterra operators and is therefore treated as efficiently as the standard pie method , which corresponds approximately to truncation at the first term . in the second part of the paperwe impose the additional restriction to the perturbation regime of small surface height , within which analytical expressions for the mean field and autocorrelation function are obtained .this extends the corresponding results derived under the pie method .the approach there was first to obtain the scattered field to second order in at the mean surface plane , and find the far - field under the assumption that propagation outwards from the surface is governed by the full helmholtz equation . 
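The operator series mentioned above has a simple matrix analogue which may help fix ideas. If a discretized integral operator is split as A = L + R, with L the "forward" (lower-triangular, Volterra) part and R the "backward" (strictly upper-triangular) part, the solution of A u = b can be expanded as u = u0 + u1 + ..., where L u0 = b and L u_{n+1} = -R u_n, so that every term costs only a triangular solve. The sketch below is a generic illustration of this left-right splitting on an arbitrary matrix, not an implementation of the surface integral equations themselves; the test matrix is constructed so that the forward part dominates.

```python
import numpy as np
from scipy.linalg import solve_triangular

def left_right_series(A, b, n_terms=4):
    """Approximate the solution of A u = b via the splitting A = L + R
    (L = lower-triangular 'forward' part including the diagonal,
     R = strictly upper-triangular 'backward' part):
        L u0 = b,  L u_{k+1} = -R u_k,  u ~= u0 + u1 + ... .
    Truncating after the first term corresponds to a purely forward solve."""
    L = np.tril(A)
    R = np.triu(A, k=1)
    u = np.zeros_like(b, dtype=complex)
    term = solve_triangular(L, b, lower=True)
    for _ in range(n_terms):
        u = u + term
        term = solve_triangular(L, -R @ term, lower=True)
    return u

# quick check on a forward-dominated random system (weak upper-triangular part)
rng = np.random.default_rng(1)
n = 60
A = (np.eye(n) + 0.2 * np.tril(rng.normal(size=(n, n)), -1)
     + 0.02 * np.triu(rng.normal(size=(n, n)), 1))
b = rng.normal(size=n)
u4 = left_right_series(A, b, n_terms=4)
residual = np.linalg.norm(A @ u4 - b) / np.linalg.norm(b)  # small when backscatter is weak
```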
in the standard pe case, this modification allows a small amount of backscatter , but precludes any backscatter enhancement which can be thought of as due to coherent addition of reversible paths , because interactions at the surface are assumed to be take place in the forward direction only .the formulation presented here allows one to remove this restriction , and separate the forward and backward going interactions to various orders , although this aspect is not explored in detail here .in particular this method produces a correction term , whose statistics can be obtained in the perturbation regime .the paper is organised as follows : the standard parabolic integral equation method and preliminary results are given in section [ i ] .in section [ ii ] the full two - way parabolic integral equation method is set out , and the iterative solution explained .analytical results for the statistics under the extended method are derived in section [ iii ] .we consider the problem of a scalar time - harmonic wave field scattered from a one - dimensional rough surface with a pressure release boundary condition .( equivalently , is an electromagnetic or polarised wave and is a perfectly conducting corrugated surface whose generator is in the plane of incidence . )the wavefield has wavenumber and is governed by the wave equation .the coordinate axes are and where is the horizontal and is the vertical , directed out of the medium ( see fig .[ fig.1 ] ) .angles of incidence and scatter are assumed to be small with respect to the positive -direction .it will be assumed that the surface is statistically stationary to second order , i.e. its mean and autocorrelation function are translationally invariant .we may choose coordinates so that has mean zero .the autocorrelation function is denoted by , and we assume that at large separations .( the angled brackets here denote the ensemble average . )then is the variance of surface height , so that the surface roughness is of order .1 true cm schematic view of scattering geometry , title="fig:",height=226 ] since the field components propagate predominantly around the -direction , we can define a slowly - varying part by ( x , z ) = p(x , z ) ( -ikx ) . slowly varying incident and scattered components and defined similarly , so that .it may be assumed that for , so that the area of surface insonification is restricted , as it would be for example in the case of a directed gaussian beam .the governing equations for the standard parabolic equation method are then _ i ( * r*_s ) = - _ 0^x g_p ( * r*_s ; * r * ) dx [ 1 ] where both , lie on the surface ; and now , equations ( [ 1 ] ) and ( [ 2 ] ) do not apply to plane wave scattering at small or negative because of the truncated lower limit of integration , equivalent to the restricted surface insonification .nevertheless , we can formally apply the integral equation to a plane wave , to obtain a solution which will be physically meaningful and asymptotically accurate at large values of .this procedure has been used to derive the field statistics ; where necessary we will assume that is sufficiently large for this to hold .consider an incident plane wave ) ] , where s = - 1 , [ 5 ] which we refer to as the reduced plane wave .in this section the two - way version of the pie method will be described , and the iterative solution will be given .this provides an efficient means of calculating the back - scattered component at small angles of scatter . 
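Before the two-way modification it is worth making explicit why the one-sided form of ( [ 1 ] ) is numerically cheap: after discretizing the range axis, the unknown surface quantity at each range step depends only on values at earlier steps, so the linear system is lower-triangular and can be solved by marching rather than by inverting a full matrix. The fragment below illustrates this for a generic first-kind Volterra equation with an integrable diagonal singularity; the kernel, the local self-cell weight and the grid are placeholders, not the actual parabolic-equation Green's function.

```python
import numpy as np

def march_volterra_first_kind(kernel, f, x, diag_weight):
    """Solve  int_0^{x_i} K(x_i, x') u(x') dx' = f(x_i)  by forward marching.
    `kernel(xi, xj)` evaluates K away from the diagonal, and `diag_weight(xi, dx)`
    is the integrated weight of the singular self-cell; for a 1/sqrt(x - x')
    singularity this is 2*sqrt(dx) up to the kernel's smooth prefactor."""
    n = len(x)
    dx = x[1] - x[0]
    u = np.zeros(n, dtype=complex)
    for i in range(1, n):
        known = sum(kernel(x[i], x[j]) * u[j] for j in range(1, i)) * dx
        u[i] = (f[i] - known) / diag_weight(x[i], dx)   # only past values are needed
    return u

# illustrative use with K(x, x') = exp(ik(x - x')) / sqrt(x - x') and an arbitrary right-hand side
k = 2.0 * np.pi
x = np.linspace(0.0, 5.0, 801)
f = np.exp(1j * 0.2 * k * x) * x
u = march_volterra_first_kind(
    lambda xi, xj: np.exp(1j * k * (xi - xj)) / np.sqrt(xi - xj),
    f, x, lambda xi, dx: 2.0 * np.sqrt(dx))
```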
the governing equations ( [ 1 ] ) , ( [ 2 ] )must first be modified to take into account scattering from the right . to do this , we simply replace by its symmetrical analogue .this form arises if we apply the small angle approximation described in section [ i ] to the full free space green s function without requiring to vanish when .we thus obtain g(x , z ; x , z ) \ { lr & = , x < x & = x x [ 6 ] .the factor ] is incident on the rough surface at an angle measured from the normal .we first summarize the perturbational calculation used to obtain the scattered field statistics previously .suppose that a plane , say , can be chosen ` close ' to every point on the surface .the scattered field is obtained to second order in surface height along this plane , for a given incident plane wave , and the statistics are found from this .statistical results obtained in this way do not depend on the choice of so for convenience we may set .an expression is thus found for the scattered field _s(x,0 ) = -_i^(x , h ) - h - 12 h^2 ^2 _ i(x,0 ) z^2 + o(^3 ) .[ 16 ] the only term here which is not known _ a priori _ is .the standard pie solution for is given to second order in by : = - 1 d dx_0^x _ i^(x,h(x ) ) dx .[ 17 ] this arises from ( [ 13 ] ) by substitution of the flat surface form of ( see ( [ 20 ] ) below ) .denote by the approximation to obtained by substituting ( [ 17 ] ) in ( [ 16 ] ) , so that_ s(x,0 ) - _ i^(x , h ) & + h + & - h^22 ^ 2_i^(x , 0)z^2 . [ 18 ] we wish to calculate the backscatter correction to this expression due to the replacement of in ( [ 16 ] ) by the corrected two - way pe solution ( equations ( [ 14 ] ) , ( [ 15 ] ) ) .we therefore repeat the above derivation replacing ( [ 13 ] ) by ( [ 14 ] ) , to obtain _ s ( x,0 ) = _ s(x,0 ) + h(x ) c(x ) .[ 19 ] since the correction term appears here with a factor , it is necessary to evaluate it only to order . expanding and ( eqs .( [ 9])-([10 ] ) ) in surface height , it is seen that , , where , denote the deterministic ( i.e. flat surface ) forms of the operators and respectively : l_0 = _0^x 1 dx , r_0 = _x^ dx [ 20 ] in evaluating ( eq . ( [ 15 ] ) ) to order we may thus ignore fluctuating parts of the operators , and replace , by , respectively .we can therefore write c = l_0 ^ -1 r_0 + o(^2 ) .[ 21 ] an expression of the form is abel s integral equation , which has the well - known solution g(x ) = 1ddx _ 0^x 1 f(y ) dy . now to first order in , in ( 21 ) is given by ( r ) - . [ 22 ] where , for large , takes the form ( see eq .( [ 15 ] ) of ) d_(r ) ~-2ik e^iksr [ 23 ] and is an integral i(r ) = _ 0^r ikh(r ) dr .[ 24 ] therefore and are and respectively , so that in eq .( [ 21 ] ) becomes c(x ) = 1 ^ 2 ddx .[ 25 ] to second order in surface height the scattered field at the mean surface is therefore described by eq .( [ 19 ] ) , with given by ( [ 25 ] ) .the effect of the correction term on the scattered field statistics can now be examined .we first find the mean field .it is sufficient to obtain this quantity on the mean surface plane , using equation ( [ 19 ] ) , i.e. = < _ s(x ) > + < h(x)c(x ) > .the solution for has been obtained previously , and we can restrict attention to finding the correction to this . denote the correlation by for any , , i.e. ( x , x)=<h(x)c(x ) > .consider first the function . since vanishes , eq .( [ 22 ] ) gives = - .[ 26 ] now from eq .( [ 25 ] ) ( x , x ) = . 
the term can be taken under the integral signs as part of the operand of .the order of integration and averaging can then be reversed so that , by ( 26 ) , ( x , x ) = -1 ^ 2 .[ 27 ] consider the term in the inner integrand . by ([ 24 ] ) , = & + = & ik [ 28 ] this may be substituted into ( [ 27 ] ) to give an analytical expression for the correlation .we can simplify this expression by evaluating the derivatives explicitly .the term is independent of , so writing = f(r , r ) [ 29 ] the expression ( [ 28 ] ) becomes = ik \label{30 } \\ = & ik\cos\theta ~\lim_{\epsilon\rightarrow 0 } { 1\over\epsilon } \left[k_1 + k_2 - k_3 \right ] \nonumber\end{aligned}\ ] ] where consider these three integrals in detail .the first gives k_1= 1 _0^ f(r+,r)(x - r)dr ( x)_0^ 1 dr -0.8 cm \label{32 } \\\cong & { \rho(x ) \over \alpha\sqrt{r } } \nonumber\end{aligned}\ ] ] using a taylor expansion in . changing variables , in can be written _ ^r+ f(r+,r)(x - r)dr = _ 0^rf(r+,r+ ) ( x - r- ) dr . [ 33 ] now f(r+,r+)=e^iks(r+ ) = e^iksf(r , r so from ( [ 33 ] ) _ ^r+ f(r+,r)(x - r)dr = _0^r e^iks f(r , r)(x - r- ) dr .[ 34 ] thus the difference in ( [ 30 ] ) becomes & _0^r f(r , r ) dr + & _0^r f(r , r ) dr [ 35 ] where , which may be assumed to be differentiable , has been expanded to leading order in . substituting ( [ 32 ] ) and( [ 35 ] ) in ( [ 28 ] ) , we obtain = ik\ { ( x ) +_ 0^r e^iksr dr } .[ 36 ] this removes the derivative with respect to in ( [ 27 ] ) , and indeed for several important autocorrelation functions( [ 36 ] ) can be written in closed form .the term is an artifact of the finite lower bound of integration and can be dropped , as we can assume the range variable to be large .equation ( [ 27 ] ) therefore becomes ( x , x ) = -ik^3 [ 37 ] where r(x , r)=iks(x - r)-d(x - r)dx .[ 38 ] the derivative with respect to in ( [ 37 ] ) can be evaluated similarly , and after further manipulation ( see appendix ) the required expression can be written , setting , = -ik^3 & _ x = x [ 39 ] where = \{(1+ik)r(x , r ) + drdr } . [ 40 ] the main quantity of interest is the angular spectrum of intensity , which may be defined as the fourier transform of the autocorrelation function ( i.e. the second moment ) of the scattered field .this remains essentially unchanged with distance from the surface , so that we may again concentrate on obtaining the form on the mean surface plane , .denote the second moment m_2(x , y)= where indicates the complex conjugate , and denote its approximation using the standard parabolic equation method by ( x , y ) .the perturbational solution of was obtained in .it is relatively straightforward to express , to second order in surface height under the present two - way pie method , as the sum of and correction terms .these additional terms , which are expected to be small , represent the ` indirect ' contribution to the backscatter . from ( [ 19 ] )we have _ s ( x ) _s^*(y ) = ( x ) ^*(y ) + ( x ) h(y ) c^*(y ) + ^*(y ) h(x ) c(x ) + h(x)h(y)c(x)c^*(y ) .[ 41 ] we can write and to zero and first order in surface height , = _ 0 + _ 1 + o(^2 ) where _ 0 ( x ) & = - e^iksx + _ 1 ( x ) & = - 2ik h(x ) e^iksx h(x ) d_(x ) , [ 42 ] and c = c_0+c_1 [ 43 ] where therefore to the second moment can be written & m_2(x , y ) = ( x , y ) + _ 0(x ) + c_0^*(y ) + & + _ 0^*(y ) + c_0(x ) + ( x - y ) c_0(x)c_0^*(y ) . 
[ 44 ] since , equation ( [ 44 ] ) can be expressed as & m_2(x , y ) = ( x , y ) + _0(x ) ^*(y ) + _ 0^*(y ) ( x ) + & + ( x - y ) c_0(x)c_0^*(y ) + c_0^*(y ) + c_0(x ) .[ 45 ] in this equation , only the last two terms remain to be determined . from ( [ 42 ] ) , is just = ( x - y ) d_[46 ] and similarly for so that ( [ 45 ] ) becomes & m_2(x , y ) = ( x , y ) + _ 0(x ) ^*(y ) + _ 0^*(y ) ( x ) + & + ( ) [ 47 ] where .the parabolic integral equation method has been extended here to allow the calculation of backscatter of due to a scalar wave impinging on a rough surface at low grazing angles .the solution is written in terms of a series of volterra operators , each of which is easily evaluated , and which allows examination of multiple scattering resulting from increasing orders of surface interaction .truncation at the first term the leading forward- and back - scattered components ; higher - order multiple scattering are available from subsequent terms .the parabolic green s function is applicable for wave components at low angles of incidence and scatter , which imply small surface slopes , but without restriction on surface heights . with the additional assumption of small surface heights , analytical solutionshave then been obtained , to second order in height , for the mean field and its autocorrelation .these provide backscatter corrections to the solutions given in the purely forward - scattered case with the potential for further insight into the role of different orders of multiple scattering .( small height perturbation theory derived directly from helmholtz equation has of course been well established for many years and yields particularly simple single scattering results .the results here are from a different perspective ; the first term already includes multiple - forward - scattering , and subsequent terms incorporate back- and forward - scatter contributions systematically at higher orders . ) in the context of long - range propagation at low grazing angles , parabolic equation methods remain very widely used . in this regimethe form of the green s function together with the series decomposition provide computational efficiency and the means to extend existing pe methods to include backscatter , in addition to yielding tractable analytical results for statistical moments .these benefits should , nevertheless , be put in context .the computational advantages of the pe green s function over the full free space green s function are lost in fully 3-dimensional problems ( since evaluation of the 3d pe green s function is computational expensive ) , or those for which wide - angle scatter needs to be taken into account . 
on the other hand thereremains a need for further theoretical understanding of the mechanisms of enhanced and multiple backscatter , and the approach here may be applied in a more general setting .computational and theoretical results in application to long - range propagation over rough sea surfaces will appear in a separate paper .we can write the expression ( [ 34 ] ) as ( x , x)= -ik^3 ddx_0^x g(x , y ) h(x , y ) dy [ a.1 ] where g(x , y ) = 1 , [ a.2 ] h(x , y)= __ 0^r e^iksr r(x , r ) dr dr , [ a.3 ] and is given by ( [ 35 ] ) .differentiation with respect to is carried out as for the -derivative ( equations ( [ 27])-([33 ] ) ) : the -derivative is thus expressed as a limit of a finite difference , and the integral split into three parts , ( x , x ) = -ik^3 _ 0 1 where we thereby obtain _ 0^x g(x ,y ) h(x , y ) dy = 2 h(x , y ) + _0^x g(x , y ) dy .[a.4 ] the term is then = ddy_y^a(y , r ) j(x , r ) dr [ a.5 ] where a(y , r ) = e^ikr , [ a.6 ] j(x , r)= _ 0^r e^iksr r(x , r ) dr [ a.7 ] treating the derivative as before gives = - _ y^a(y , r ) ( dj(x , r)dr + ikj(x , r ) ) dr .[ a.8 ] finally , = ddr _ 0^r e^iksr r(x , r ) dr [ a.9 ] from which we similarly get = r(x,0 ) + _0^r e^iksr \ { iks r(x , r ) + drdr } dr .[ a.10 ] as before ( see ( [ 34 ] ) ) the expression vanishes for large and can be dropped . successively substituting ( [ a.8 ] ) , ( [ a.10 ] ) , ( [ a.3 ] ) and ( [ a.4 ] ) into ( [ a.1 ] ) , we eventually obtain = -ik^3 \label{a.11 } \end{aligned}\ ] ] where = \{(1+ik[1+s])r(x , r ) + drdr } .[ a.12 ] in this expression , is given by ( [ 35 ] ) , so that = iksd(x - r)dr - d^2(x - r)dx^2 .[ a.13 ] it is clear then that the correction term introduces a higher - order dependence on the correlation function .i. simonsen , a.a .maradudin , & t.a .leskova , scattering of electromagnetic waves from two - dimensional randomly rough perfectly conducting surfaces : the full angular intensity distribution , _ physical review a _ , * 81 * 013806 ( 2010 ) 1 - 13 m .- j .kim , h.m .berenyi & r.e .burge , scattering of scalar waves by two - dimensional gratings of arbitrary shape ; application to rough surfaces near grazing incidence , _ proc .a _ * 446 * ( 1994 ) 1 - 20 . c. qi , z. zhao , w. yang , zp nie , & g. chen , electromagnetic scattering and doppler analysis of three - dimensional breaking wave crests at low - grazing angles , _ progress in electromagnetics _ , * 119 * ( 2011 ) 239 - 252 .p. tran , calculation of the scattering of electromagnetic waves from a two - dimensional perfectly conducting surface using the method of ordered multiple interaction , _ waves in random media _ , * 7 * , 295 - 302 ( 1997 ) .pino , l. landesa , j.l .rodriguez , f. obelleiro , & r.j .burkholder , the generalized forward - backward method for analyzing the scattering from targets on ocean - like rough surfaces _ ieee trans ._ , * 47 * , 961 - 969 ( 1999 ). a. ishimaru , j.s .chen , p. phu , & k. yoshitomi , numerical , analytical , and experimental studies of scattering from very rough surfaces and backscattering enhancement , _ waves in random media _ * 1 * ( 1991 ) s91-s107 .
this paper extends the parabolic integral equation method , which is very effective for forward scattering from rough surfaces , to include backscatter . this is done by applying left - right splitting to a modified two - way governing integral operator , to express the solution as a series of volterra operators ; this series describes successively higher - order surface interactions between forward- and backward-going components , and allows highly efficient numerical evaluation . this and equivalent methods such as ordered multiple interactions have been developed for the full helmholtz integral equations , but not previously applied to the parabolic green's function . in addition , the form of this green's function allows the mean field and autocorrelation to be found analytically to second order in surface height . these may be regarded as backscatter corrections to the standard parabolic integral equation method .
department of applied mathematics and theoretical physics , the university of cambridge , cb3 0wa , uk
in this paper we investigate the problem of optimal attitude control of a 3-dimensional body which can be rotated around two fixed axes .the problem goes back to euler who proved in 1776 that an arbitrary rotation of a 3-dimensional body may be factored as where ( resp . ) is a rotation in angle around -axis ( resp .-axis ) .the parameters are called the euler s angles .we could allow decompositions for with more factors : clearly we will get infinitely many such decompositions for a fixed element in the group of rotations . thus it is natural to pose the question of finding a decomposition ( [ eutwo ] ) that minimizes the total angle of rotation .it can happen that decompositions with more factors have a smaller total angle of rotation than the euler s decomposition ( [ eu ] ) .it turns out that this problem is not well - posed : for some an optimal decomposition ( [ eutwo ] ) does not exist . instead, the infimum of the total angle of rotation is attained as a limit on a sequence of decompositions ( [ eutwo ] ) with .we can overcome this difficulty by noting that hence it is natural to extend the set of controls from to . implementing a rotation corresponds to carrying out rotations around axes and simultaneously with the ratio of angular velocities .once we extend the control set , our optimization problem becomes well - posed and for every there is an optimal decomposition with a finite number of factors .we study this problem in a more general setting , where we allow an arbitrary angle between the axes and .we also introduce a more general cost function to be minimized , where a rotation in angle around -axis has the same cost as a rotation around -axis in angle , .we solve the optimization problem in this greater generality and determine possible patterns for the optimal decompositions .each of these patterns has ( at most ) 3 independent time parameters , and it is fairly easy to find numerically the decompositions of a given element according to each pattern .this produces a finite number of decompositions and we can immediately see which one of them is optimal .it happens that our optimization problem has a bifurcation at .for the cases and we get different lists of optimal patterns .there are also special cases when or .let us present the list of optimal patterns in case when the axes and are perpendicular to each other and .since in this case the problem is symmetric with respect to the dihedral group of order , generated by transformations , , , the list of patterns will also be symmetric with respect to this group .we denote this group of symmetries by and use it to present the list of patterns in a more compact form .let the angle between the axes and be and let . for an element is an optimal decomposition with of one of the following types : and symmetric to these under the group of transformations .suppose we would like to decompose a rotation as a product of rotations around - and -axes , where is the standard orthogonal basis of .the pattern for the optimal decompositions will depend on the value of . if then the following decomposition realizes the minimum of the total rotation angle : for the euler s decomposition ( [ eu ] ) becomes optimal : when both patterns are optimal .for the optimal decompositions may be obtained by switching with in the above expressions . in case when the axes and are perpendicular to each other and ( meaning that rotations around -axis have zero cost ) ,the optimal decompositions are precisely those described by euler ( [ eu ] ) . 
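The numerical search mentioned above is straightforward to set up: for a fixed pattern, decomposing a given rotation reduces to a three-parameter matching problem, after which the candidate costs can be compared and the cheapest pattern kept. The sketch below fits the generic pattern R_a(t1) R_b(t2) R_a(t3) to a target rotation with a derivative-free optimizer; the axes, cost weights and starting guess are illustrative, and in practice each pattern listed in the theorems below would be fitted in the same way.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def axis_rot(axis, angle):
    """Rotation matrix for rotation by `angle` about the unit vector `axis`."""
    return Rotation.from_rotvec(angle * np.asarray(axis, float)).as_matrix()

def fit_pattern(R_target, a, b, cost_a=1.0, cost_b=1.0, t0=(0.5, 0.5, 0.5)):
    """Fit R_a(t1) R_b(t2) R_a(t3) ~= R_target; return the angles, the cost
    cost_a*|t1| + cost_b*|t2| + cost_a*|t3| and the residual mismatch."""
    def mismatch(t):
        R = axis_rot(a, t[0]) @ axis_rot(b, t[1]) @ axis_rot(a, t[2])
        return np.linalg.norm(R - R_target)
    res = minimize(mismatch, np.asarray(t0), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 20000})
    t1, t2, t3 = res.x
    return res.x, cost_a * abs(t1) + cost_b * abs(t2) + cost_a * abs(t3), res.fun

# example: a target rotation decomposed over the x and z axes
a, b = [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]
R_target = Rotation.from_rotvec(1.2 * np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)).as_matrix()
angles, cost, residual = fit_pattern(R_target, a, b)
```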
in 2009nasa launched a space telescope kepler with a mission of finding planets outside the solar system .this spacecraft was placed in an orbit around the sun . to take images of stars ,the telescope needs to be pointed in the target direction , with its solar panels facing the sun .the attitude control of kepler is done with reaction wheels , which are heavy disks mounted on electric motors .once the reaction wheel is turned , the spacecraft will turn around the same axis in the opposite direction due to the angular momentum conservation law .if we have three reaction wheels with linearly independent axes , by rotating them simultaneously with appropriate relative angular velocities , we can implement a continuous rotation of the spacecraft around an arbitrary axis . for redundancy ,kepler was equipped with four reaction wheels with their axes in a tetrahedral configuration , so that any three of them could provide an efficient attitude control .however by may 2013 , two of the four reaction wheels failed , leaving kepler with just two available axes of rotation .the results of our paper provide optimal methods for attitude control with two rotation axes , like in situation with the kepler space telescope .this paper builds on our previous work , where we studied a similar problem for , also with two available controls , but with a restriction that only a positive time evolution is allowed . that paper was motivated by the applications to quantum control in a 1-qubit system .in the present paper we use the geometric control theory , which is an adaptation of the pontryagin s maximum principle to the setting of lie groups .the maximum principle provides only necessary conditions for optimality , which need not be sufficient . in section 3we identify decompositions that satisfy the necessary conditions of the pontryagin s maximum principle .then we go into a more detailed analysis in section 4 by showing that decompositions with a large number of factors are not optimal , even when they satisfy the conditions of the maximum principle .our main results are stated in theorems [ czero ] [ meq ] at the end of the next section .i thank cornelius dennehy , ken lebsock , eric stoneking and alex teutsch for the stimulating discussions .support from the natural sciences and engineering research council of canada is gratefully acknowledged .for a unit vector denote by an operator of rotation of in angle around , with the plane perpendicular to turning counterclockwise when viewed from the endpoint of . as a matrix , given by the formula : where for with , is the adjoint matrix of with respect to the cross product , so that : for with we set , where and .the set of all rotations of forms the group . for a fixed set is a 1-parametric subgroup in .it is well - known that for any two non - proportional unit vectors , the corresponding -parametric subgroups together generate the whole group .this means that every element may be decomposed into a product with .decomposition ( [ dec ] ) is of course not unique .it is then natural to consider the optimization problem of finding the infimum of over all decompositions ( [ dec ] ) with fixed .more generally , we may assign cost to each generator and minimize the total cost in ( [ dec ] ) .introduction of the cost parameters may be warranted in case when the body that we control has unequal momenta of inertia with respect to the axes and , thus making it easier to rotate it around one of the axes . 
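The closed form quoted above is the Rodrigues formula: writing W for the cross-product ("adjoint") matrix of a unit vector w, so that W v = w x v, the rotation by angle t about w is exp(tW) = I + sin(t) W + (1 - cos(t)) W^2. The following self-contained check compares this expression with the matrix exponential; the axis and angle are arbitrary test values.

```python
import numpy as np
from scipy.linalg import expm

def cross_matrix(w):
    """Adjoint matrix W of w with respect to the cross product: W v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def R(w, t):
    """Rotation by angle t about the unit vector w (Rodrigues formula)."""
    W = cross_matrix(w)
    return np.eye(3) + np.sin(t) * W + (1.0 - np.cos(t)) * (W @ W)

w = np.array([2.0, -1.0, 0.5])
w /= np.linalg.norm(w)
t = 0.8
assert np.allclose(R(w, t), expm(t * cross_matrix(w)))     # R_w(t) = exp(t W)
assert np.allclose(R(w, t) @ w, w)                         # the rotation axis is fixed
assert np.isclose(np.trace(R(w, t)), 1.0 + 2.0 * np.cos(t))  # trace determines the angle
```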
without loss of generality ,we assume that and renormalize the cost function by fixing , with . for the rest of the paper we fix two non - proportional vectors with .an important parameter is the angle between these vectors . without loss of generalitywe assume , otherwise we can replace with . throughout the paper we will the use parameter , .let be a vector perpendicular to and , , .it could happen that the infimum of cost is not attained on any particular decomposition ( [ dec ] ) , but rather as a limit on a sequence of such decompositions with .it turns out that we can overcome this difficulty by enlarging the set of generators to be note that rotations corresponding to elements of can be realized as limits of products of rotations with axes : from the point of view of the attitude control , this corresponds to turning on controls and simultaneously with intensities and respectively .we extend the definition of the cost function in such a way that the cost of both sides of ( [ explim ] ) is the same : our goal is to solve the following * problem 1 .* for a given find a decomposition with , , realizing the the infimum of it was shown in , theorem 1.4 , that the infimum cost in this problem problem is the same as for its more restricted version where the set of controls is taken to be instead of .in fact , we shall see that we would not need the whole set , but require in addition to controls only the elements , where ( resp . ) is a linear combination of and , which is orthogonal to ( resp . ) .since the cost of and is the same , we can rescale the generators without changing the cost of decompositions .we can thus drop the requirement for the generators .we fix and .taking into account that and , it is easy to check that and . now we can state the main results of the paper .it turns out that the problem we consider has a bifurcation at , and we need to consider the cases and separately .there will be also a special case when . we will give the solution of the above optimal control problem by specifying the patterns of optimal decomposition ( [ dec ] ) .we begin with some elementary observations .obviously we may restrict all angles of rotation to be less or equal to .if is an optimal decomposition then a decomposition with , , , is also optimal .we call ( [ sub ] ) a _ subword _ of . we shall present the optimal decompositions as subwords of certain patterns .since the number of patterns can be fairly large , we shall use various symmetries in order to group several patterns together .for example , if we have an optimal decomposition with , then is also an optimal decomposition ( for a different element of ) .this follows from the fact that multiplication of controls by is an automorphism of our problem .we denote this symmetry transformation on the set of patterns by . 
whereas the set of optimal patterns is always invariant with respect to the symmetry , other types of symmetries that we shall consider are not universal and are present only for some patterns .if we make the following schematic representation of the controls , all symmetries that we consider will be elements of the dihedral group of symmetries of a square : consider a transformation , , , .together with the symmetry this generates a set of transformations .we denote this set of symmetries by .we assume that all symmetries we consider are compatible with multiplication by , even though they are not linear in general .we also consider a transformation , , , .together with , this generates a set of transformations , which we denote by . finally ,if we consider all of the above transformations together , we generate a full set of symmetries of the square in fig.1 , which we denote by .[ czero ] let , .for an element the infimum of the optimization problem 1 is attained on a subword of one of the following patterns : \(i ) where , , and symmetric to it under .\(ii ) , with , and symmetric to it under .\(iii ) , with , and symmetric to it under .when we apply symmetry transformations , e.g. , , we change the parameters , accordingly , but the relation in ( i ) is preserved . under this symmetry transformation , the pattern ( i ) takes form with . set [ mgt ] let .for an element the infimum of the optimization problem 1 is attained on a subword of either pattern ( i ) or one of the following : \(iv ) , with , and symmetric to it under .\(v ) , with , and symmetric to it under .\(vi ) , with , and symmetric to it under .\(vii ) , with , and symmetric to it under .[ mlt ] let . for an element the infimum of the optimization problem 1 is attained on a subword of either patterns ( i ) , ( iv ) , ( v ) given above , or the following pattern \(viii ) , with , and symmetric to it under .[ meq ] let , .for an element the infimum of the optimization problem 1 is attained on a subword of one of the following two patterns \(ix ) , with , and symmetric to it under .\(x ) , with , and symmetric to it under . when we have proportional to , and the lists of patterns in theorems [ mgt ] and [ mlt ] become equivalent .in this section we will review the geometric optimization theory following , and apply it to our optimization problem .the lie algebra of the lie group is the tangent space to at identity and consists of skew - symmetric matrices .the lie bracket of two matrices in is = ab - ba ] such that and .an absolutely continuous function has a measurable derivative \rightarrow so(3) ] leading to , satisfying for almost all .the parameter in problem 2 is not fixed and when taking the infimum we consider the curves with all .it is clear that the restriction to the case of piecewise constant controls gives precisely problem 1 .on the other hand we shall see that the solutions of problem 2 indeed have piecewise constant controls , which implies equivalence of problems 1 and 2 .[ existence ] for any there exists an absolutely continuous optimal solution for problem 2 .the proof of this proposition is based on the observation that the cost assigned to a curve \rightarrow so(3) ] and the corresponding reparametrized curve . 
then and have the same cost : this computation shows that rescaling of the set of controls does not change the cost of a curve leading to with .let us modify problem 2 by replacing the set with its convex hull once the control set is convex , we can apply theorem 4.10 from to obtain the existence of an absolutely continuous optimal solution \rightarrow so(3) ] and a constant such that for almost all ] .[ cons ] the quantity is conserved . note that for our problem the parameter can not be zero , otherwise condition ( i ) implies that , hence is proportional to for almost all , and so is .however ( ii ) implies that and thus and is a constant multiple of . inspecting ( ii ) again , we conclude that must be zero for almost all , which contradicts the last claim of the theorem . in casewhen the parameter is non - zero , it can be rescaled to any negative value .a convenient choice for us is .consider a second basis of the -plane , where then , .in this basis and . according to theorem [ pmp ] , the value of determines the value of via ( i ) , while by ( ii ) the value of determines the evolution of .let us analyze ( i ) to see which values of are admissible , and what are the corresponding controls .let us write and .then since the set is closed under symmetry , , we see that the maximum in of is attained when has the same sign as and has the same sign as .hence by property ( i ) of the theorem , , thus the admissible values of satisfy either , or , .we summarize this in the following lemma , which describes controls in the resulting regions : [ reg ] ( a ) let .\(i ) if , then , , the control is ; \(ii ) if , then , , the control is ; \(iii ) if , then , , the control is ; \(iv ) if , then , , the control is .\(b ) if then , . when and we could have either control or . at the points where two regions meet ,the whole segment joining the corresponding two controls is allowed .for example , when and we could have any control with , .we will call such values of _critical_. if the curve reaches a critical point , one of three things could happen : the curve could cross the boundary of a region , in which case the control will switch ; the curve could return to the same region where it came from without a switch of control ; or the curve may stay inside the critical boundary for some positive time .let us describe evolution of inside the critical boundary .[ critreg ] ( a ) suppose for ] .\(b ) suppose for ] .cases , and , are analogous , the controls are and respectively and . to prove ( a ) consider equation ( ii ) in theorem [ pmp ] .we get taking into account that we get that since and are constant , this implies for ] , = jk - kj = 2i ], we see that two lie algebra structures on coming from and differ by a factor of . for this reason there is a factor of in the formula for the homomorphism : here for a vector the exponential is computed in the algebra of quaternions .note that the rotation operator is also an exponential : .pontryagin s maximum principle that we use above is essentially a local first derivative test . in order to obtain stronger results ,we need to either apply non - local transformations ( those that do not come from a small variation of parameters ) or use higher derivatives . 
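The 2-to-1 correspondence used here can be made concrete with unit quaternions: the rotation by angle t about a unit axis u is the image of q = cos(t/2) + sin(t/2)(u1 i + u2 j + u3 k), the factor 1/2 being the one discussed above, and q and -q map to the same rotation, with rotations by pi corresponding to purely imaginary unit quaternions. The sketch below builds the rotation matrix of a quaternion and checks these facts numerically; it illustrates the standard correspondence rather than the particular basis used in the proofs.

```python
import numpy as np

def quat_from_axis_angle(u, t):
    """Unit quaternion (w, x, y, z) for the rotation by angle t about unit u."""
    u = np.asarray(u, float) / np.linalg.norm(u)
    return np.concatenate(([np.cos(t / 2.0)], np.sin(t / 2.0) * u))

def quat_to_matrix(q):
    """Image of the unit quaternion q = (w, x, y, z) under the 2:1 homomorphism to SO(3)."""
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

u, t = np.array([1.0, 2.0, 2.0]) / 3.0, 1.1
q = quat_from_axis_angle(u, t)
Rm = quat_to_matrix(q)
assert np.allclose(Rm, quat_to_matrix(-q))                # q and -q give the same rotation
assert np.isclose(np.trace(Rm), 1.0 + 2.0 * np.cos(t))    # rotation angle recovered from the trace
assert np.allclose(Rm @ u, u)                             # the axis is fixed
assert np.isclose(quat_from_axis_angle(u, np.pi)[0], 0.0) # angle-pi rotations have purely imaginary preimages
```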
in proposition [ epsfive ] we will be using the second derivative in order to show that certain decompositions are not optimal .an example of a non - local transformation is the identity where .this trivial observation may be generalized in the following way .suppose is a rotation in angle .then we get a relation .note that both sides of this equality have the same cost .this non - local relation and its consequences will be quite useful for our analysis .clearly , is a rotation in angle if and only if is the identity matrix , but is not identity .this is equivalent to , in .it is easy to see that the only solutions to are .thus the preimages of rotations in angle are precisely with , or equivalently , .since , this becomes . for is equivalent to .the group acts on its lie algebra by conjugation , and its center acts trivially .this gives the action of on , which is the natural action of on .since this action is transitive on pairs of unit vectors with a given angle between them , we may set without loss of generality , .we complete this to a basis of by setting = k \sin \alpha$ ] .we can easily verify that = 2 y - 2 \cos \alpha x , \quad & \quad [ z , y ] = - 2 x + 2 \cos \alpha y , \\ xzx = z , \quad & \quad yzy = z . \end{split}\ ] ] we also note that and and likewise for .we may assume without loss of generality that .let us begin with the case of .we take its preimage under : , where , , , .we claim that the decomposition given by the previous proposition will have a lower cost .since we need to show that , where we have and . if then and we get that the new cost is lower . if then since and , we get that , so the new cost is again lower .we now apply the same approach to .we again take its preimage in and transform it into using proposition [ flip ] . herethe values of , are still given by ( [ sprime ] ) with .we have and consider the sign of .the case is treated in the same way as before . when we consider two subcases : and .if , the claim of the proposition follows from the observation that on the diagrams ( a ) , ( c ) in fig .[ kltc ] the arcs , , and correspond to an angle not exceeding .let us assume . to show that the transformed expression has a lower cost , we need to prove that .however . since , we get , which completes the proof of the proposition .let us assume by contradiction that the given decomposition is optimal .as before , we take a preimage , where , , .we shall express the given decomposition in the following way : we are going to solve for , , and in terms of , and show that the new decomposition has a lower cost .note that the parameters and are bound by the relation .we will use the campbell - hausdorff formula ( [ ch ] ) to rewrite both sides of ( [ twoexp ] ) in the form we shall calculate up to the second order in . applying ( [ ch ] ) to the left hand side of ( [ twoexp ] ) , we get that + { { o}}(\ep^2),\ ] ] where and let us carry out the detailed calculations. we shall use the basis and relations ( [ rel ] ) as in the proof of the proposition [ flip ] . next , doing the same calculations for the right hand side of ( [ twoexp ] ) , we get + [ l_3 , l_5 ] + [ l_3 , l_6 ] + [ l_4 , l_5 ] + [ l_4 , l_6 ] + [ l_5 , l_6 ] \right ) + { { o}}(\ep^2),\ ] ] where we begin by solving ( [ twoexp ] ) to the first order in . equating ( [ firstl ] ) with ( [ secondl ] )we get : we divide both sides of this equation by , which allows us to express everything in terms of . using the relation ,we further eliminate . to make the equations more compact we denote by . 
by proposition [ epsthree ]we have .since we also have the relation , we use the taylor expansion to find the relation between and to the first order : expressing this in terms of , we get equating the coefficients at , we get a system of equations the determinant of this system is the jacobian of ( [ twoexp ] ) and equals since , , , we see that the only case when the jacobian vanishes is , , .we will consider this case separately below . in all other casesthe jacobian is non - zero , hence by the implicit function theorem , equation ( [ twoexp ] ) has a unique solution for small .this completes the proof of the proposition for the decomposition .the cases of the decompositions obtained from this one by applying symmetries are analogous .for example , in the case of , we use the transformation the cost of the right hand side is which is lower than the cost of the left hand side .it follows from proposition [ epsfive ] that for the optimal decompositions corresponding to the trajectory , and symmetric to it , the number of factors is at most .this corresponds to pattern ( i ) in theorems [ czero ] [ mlt ] .suppose .for the trajectory , and symmetric to it , the number of factors is bounded by , since the evolution times and exceed , and optimal decompositions can not have such time parameters .the case of two factors is incorporated in patterns ( iii ) and ( vii ) with in theorems [ czero ] and [ mgt ] . in the case , the decompositions corresponding to the trajectory , could have up to factors , since -evolution time exceeds , but -evolution time does not exceed , which is less than .this corresponds to pattern ( viii ) in theorem [ mlt ] . to complete the proof of theorems [ czero ] [ meq ] , we need to establish a bound on the number of factors for the trajectories that pass through the critical points . for the critical points with the existence of such a bound immediately follows from the diagrams in fig .[ kgtc ] , since the trajectories connected to these points are full circles and require time evolution of to complete the circle , while any evolution with time exceeding is not optimal .let us consider a preimage in for . here , .it follows from ( [ tim ] ) that then by lemma [ dz ] , the image of in is a rotation in angle and the first equality in part ( b ) holds .it is easy to see that is orthogonal to : which implies that the axis of rotation corresponding to is orthogonal to and also that . taking the exponential of both sides, we get that , from which the third claim of part ( b ) follows .first we consider the case and .we have pointed out above that the factor may only appear when and there will be only one such a factor in that case . in case when , we have that is proportional to , and we do not need to consider the factors of the form at all .consider an optimal decomposition with factors . without loss of generalityassume that the time parameter in the first such factor is positive. 
then it will necessarily have the form where and for by proposition [ wconj ] we can combine all factors of type into one without changing the cost : next suppose .using proposition [ wconj](c ) and applying the above argument we see that there is an optimal decomposition with at most one factor of type .this completes the proof of theorem [ meq ] .now let us consider the case .here we could have a decomposition that contains factors of both types , and .note that .suppose that an optimal decomposition of contains a factor with .this factor will be followed by either an -evolution or -evolution .let us assume it is -evolution that follows .if the time parameter for -evolution is less than , that will be the last factor in the decomposition , as the control switch can not occur .otherwise , we get followed by a factor . but this will imply optimality of the expression , which gives a contradiction since can not be optimal since it does not satisfy the necessary conditions for optimality of theorem [ pmp ] .all other cases are analogous and we conclude that in case factors in optimal decompositions may be preceded or followed by just a single factor or with , thus completing the proof of theorem [ czero ] .[ fivew ] let , .suppose an optimal decomposition for contains a factor with .then there exists an optimal decomposition of , which is a subword in one of the following : let us show that in an optimal decomposition of the number of factors following is at most two . indeed ,if it is followed by three or more factors , such evolution must begin with either or . by proposition [ wconj ] , .this will be followed by evolution with control or .this would imply optimality of either or for small .let us show that these decompositions are not optimal .consider a preimage in for . here , .since we get that and we can assume that .choose such that .since , we conclude that .then by proposition [ flip](a ) we get however the latter decomposition is not optimal since it does not correspond to a trajectory in fig .[ kgtc ] , [ kltc ] , yet both sides in the above equality have the same cost .this implies that is not optimal .the argument for is analogous .this completes the proof of proposition [ fivew ] and theorems [ mgt ] and [ mlt ] .nasa , ( 2013 ) .nasa ends attempts to fully recover kepler spacecraft , potential new missions considered .[ online ] available at : http://www.nasa.gov/content/nasa-ends-attempts-to-fully-recover-kepler-spacecraft-potential-new-missions-considered [ accessed 14 aug .2014 ] .p. petersen , riemannian geometry , graduate texts in mathematics * 171 * , springer , 2006 .g. v. smirnov , introduction to the theory of differential inclusions , graduate studies in mathematics * 41 * , amer .soc . , 2002 .
euler proved that every rotation of a 3-dimensional body can be realized as a sequence of three rotations around two given axes . if we allow sequences of an arbitrary length , such a decomposition will not be unique . in this paper we solve an optimal control problem minimizing the total angle of rotation for such sequences . we determine the list of possible optimal patterns that give a decomposition of an arbitrary rotation . our results may be applied to the attitude control of a spacecraft with two available axes of rotation .
fractal transformations are mappings between pairs of attractors of iterated function systems .they are defined with the aid of code space structures , and can be quite simple to handle and compute .they can be applied to digital images when the attractors are rectangular subsets of .they are termed `` fractal '' because they can change the box - counting , hausdorff , and other dimensions of sets and measures upon which they act . in this paperwe substantially generalize and develop the theory and we illustrate how it may be applied to digital imaging . previous work was restricted to fractal transformations defined using fractal tops .fractal tops were introduced in and further developed in .the main idea is this : given an iterated function system with a coding map and an attractor , a _ section _ of the coding map , called a tops function , can be defined using the `` top '' addresses of points on the attractor . given two iterated function systems each with an attractor , a coding map , and a common code space , a mapping from one attractor to the other can be constructed by composing the tops function , for the first iterated function system , with the coding map for the second system .under various conditions the composed map , from one attractor to the other , is continuous or a homeomorphism . in the cases of affine and projective iterated function systems , practical methods based on the chaos game algorithm are feasible for the approximate digital computation of such transformations .fractal tops have applications to information theory and to computer graphics .they have been applied to the production of artwork , as discussed for example in , and to real - time image synthesis . in the present paperwe extend the theory and applications .much of the material in this paper is new .the underlying new idea is that diverse sections of a coding map may be defined quite generally , but specifically enough to be useful , by associating certain dynamical systems with the iterated function system .these sections provide novel collections of fractal transformations ; by their means we generalize the theory and applications of fractal tops .we establish properties of fractal transformations , including conditions under which they are continuous .the properties are illustrated by examples related to digital imaging . a notable result , theorem [ goldthm ] , states the existence of nontrivial fractal homeomorphisms between attractors of some affine overlapping iterated function systems .the proof explains how to construct them .an example of one of these new homeomorphisms , applied to a picture of lena , is illustrated in figure [ goldenlenna ] .goldenlenna.png in section [ ifssec ] we review briefly the key definitions and results concerning point - fibred iterated function systems on compact hausdorff spaces .since this material is not well - known , it is of independent interest .the main result is theorem [ mainifsthm ]. this can be viewed as a restatement of some ideas in ; it describes the relationship between the coding map and the attractor of a point - fibred iterated function system. in section [ topsec ] we define , and establish some general properties of , fractal transformations constructed using sections of coding maps . in theorem [ sectionthm ] we present some general properties of coding maps. 
then we use coding maps to define fractal transformations and , in theorem [ ctyfractaltransthm ] , we provide sufficient conditions for a fractal transformation to be continuous or homeomorphic . in section [ masksec ]we define two different types of section of a coding map : ( i ) with the aid of a _ masked _ dynamical system ; and ( ii ) with the aid of fractal tops .theorem [ maskthm ] establishes the connection between the masked dynamical system and a _ masked section _ of the coding map .theorem [ maskbranchthm ] includes a statement concerning the relationship between the masked section to the coding map and the shift map .here we also establish the relationship between fractal tops and masked systems .a key result , theorem [ l : disjoint ] , gives a condition under which the ranges of different masked sections intersect in a set of measure zero .this enables the approximate storage of multiple images in a single image , as illustrated in figure [ allmasked ] .in section [ appsec ] we apply and illustrate the theoretical structures of sections 2,3 , and 4 , in the context of digital imaging .our goal is to illustate the diversity of imaging techniques that are made feasible by our techniques , and to suggest that fractal transformations have a potentially valuable role to play in digital imaging . in section [ appsec1 ]we illustrate how fractal transformations may be applied to image synthesis , that is to making artificial interesting and even beautiful pictures .specifically we explain how the technique of color - stealing extends to masked systems . in section [ fhomsec ]we apply fractal homeomorphisms , using theorem [ ctyfractaltransthm ] , to transform digital images , for image beautification , roughening , and special effects ; in particular , we present and illustrate theorem [ goldthm ] which extends the set of known affine fractal homeomophisms .in section [ filtsec ] we consider the idea of composing a fractal transformation , discretization , and the inverse of the transformation to make idempotent image filters . in section [ packsec ]we apply theorem [ l : disjoint ] to the approximate storage or encryption of multiple images in a single image . in section [ meassec ]we provide a second technique for combining several images in one : it combines invariant measures of a single iterated function system with several sets of probabilities , to make a single `` encoded '' ] [ ifssec]point - fibred iterated function systems ---------------------------------------------- let be a nonempty compact hausdorff space , and let be the set of nonempty compact subsets of .it is known that endowed with the vietoris topology is a compact hausdorff space , see for example ( * ? ? ?* theorem 2.3.5 , p.17 ) .this encompasses the well - known fact that if is a compact metric space then endowed with the hausdorff metric is a compact metric space .let be a finite index set with the discrete topology .let be a sequence of continuous functions .following is called an _ iterated function system _ over . following ( * ? ? ?* definition 4.1.4 , p.84 ) we define a map for all sequences belonging to .the map is well - defined because is the intersection of a nested sequence of nonempty compact sets .the following definition is based on ( * ? ? 
?* definition 4.3.6 , p.97 ) .let be an iterated function system over a compact hausdorff space .if is a singleton for all then is said to be * * point - fibred** and the * coding map * of is defined by where denotes the range of .theorem [ kieningerthm ] , due to kieninger , plays a central role in this paper .it generalizes a classical result of hutchinson that applies when is a compact metric space and each is a contraction .[ kieningerthm]let have the product topology .if is a point - fibred iterated function system on a compact hausdorff space then the coding map is continuous .this follows from ( * ? ? ?* proposition 4.3.22 , p.105 ) .we define by slight abuse of notation we use the same symbol for the iterated function system , the maps that it comprises , and the latter function .we define , the identity map on , and for .the following definition is a natural generalization of the notion of an attractor of a contractive iterated function system , see for example ( * ? ? ?* definition on p.1193 and theorem 11.1 , p.1206 ) , , and also .[ attractordef]let be an iterated function system on a compact hausdorff space .an * attractor * of is with these properties : ( i ) ; ( ii ) there exists an open set such that and for all with .( the limit is with respect to the vietoris topology on . )the largest open set such that equation [ attractoreq ] holds for all with is called the * basin * of . the relationship between coding maps and attractors is provided by noting that .this leads to the following theorem .[ mainifsthm]if is a point - fibred iterated function system on a compact hausdorff space then \(i ) has a unique fixed - point i.e. ; \(ii ) is the unique attractor of ; \(iii ) is equal to the range of the coding map , namely \(iv ) the basin of is \(v ) if then for all this follows from ( * ? ? ?* proposition 4.4.2 , p.107 , see also proposition 3.4.4 , p.77 ) .the following remark tells us that if an iterated function system possesses an attractor then it is point - fibred when restricted to a certain neighborhood of the attractor .necessary and sufficient conditions for an iterated function system of projective transformations to possess an attractor are given in .let be an attractor of an iterated function system on a compact hausdorff space , and let be the basin of then , following , defines an iterated closed relation , and is an attractor of . by (* theorem 7.2 , p.1193 ) there exists a compact neighborhood of such that and ( denotes the interior of the set in the subspace topology induced on by . ) it follows that is a point - fibred iterated function system on a compact hausdorff space .the attractor of is let be a point - fibred iterated function system on a compact hausdorff space .the set is called the * code space * of .a point is called an * address * of . 
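As a concrete illustration of the coding map (this numerical sketch is ours, using a toy IFS rather than one of the paper's examples), pi can be approximated by truncating the nested composition f_{sigma_1} o ... o f_{sigma_k}. For the two affine contractions below, pi(sigma) is just the binary expansion read off the address; the checks at the end illustrate the relation pi(j sigma) = f_j(pi(sigma)) from the theorem and the fact that pi need not be injective.

```python
# Minimal numerical sketch (ours): approximating the coding map pi of a
# point-fibred IFS by truncating the nested composition
#   pi(sigma) ~ f_{sigma_1} o f_{sigma_2} o ... o f_{sigma_k}(x0).

def coding_map(ifs, sigma, x0=0.0, depth=40):
    """Approximate pi(sigma); the innermost map f_{sigma_k} is applied first."""
    x = x0
    for s in reversed(sigma[:depth]):
        x = ifs[s](x)
    return x

# Two affine contractions on [0, 1]; the attractor is the whole interval and
# pi(sigma) is the binary expansion sum_k sigma_k 2^(-k) read off the address.
F = [lambda x: 0.5 * x,           # f_0
     lambda x: 0.5 * x + 0.5]     # f_1

sigma = [1, 0, 1, 1] + [0] * 60
print(coding_map(F, sigma))                                   # 0.6875 = (0.1011)_2

# Consistency with the theorem: pi(j sigma) = f_j(pi(sigma)).
print(abs(coding_map(F, [0] + sigma) - F[0](coding_map(F, sigma))) < 1e-9)

# pi is generally not injective: two distinct addresses coding (up to truncation)
# the same point 1/2.
print(coding_map(F, [0] + [1] * 60), coding_map(F, [1] + [0] * 60))
```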
in the rest of this paperthe underlying space is a compact hausdorff space .also in the rest of this paper the symbols denote point - fibred iterated function systems on compact hausdorff spaces .we will say that an iterated function system is _ injective _ when all of the maps that it comprises are injective .we will say that an iterated function system is _ open _ when all of the maps that it comprises are open .here we present a generalized theory of fractal transformations and establish some continuity properties .fractal transformations are defined using sections of coding maps .we are concerned with continuity properties ; for example , theorem [ ctyfractaltransthm ] ( i ) provides a sufficient condition for a fractal transformation to be continuous .[ branchdef]let be the coding map of .a subset is called an * address space * for if and is one - to - one .the corresponding map is called a * section * of * * * * .theorem [ sectionthm ] summarises the properties of sections of [ sectionthm]let be a point - fibred iterated function system on a compact hausdorff space , with attractor , code space , and coding map if is a section of then \(i ) is bijective ; \(ii ) is continuous ; \(iii ) , the identity map on and , the identity map on ; \(iv ) if is injective and for all with , then ; \(v ) if is closed then is a homeomorphism ; \(vi ) if is connected and is not a singleton , then is not continuous .\(i ) by definition [ branchdef ] is bijective , so is bijective .\(ii ) by theorem [ kieningerthm ] is continuous .it follows that is continuous .\(iii ) if then also .\(iv ) suppose .then there are , , such that .we show that this is impossible . if then theorem [ mainifsthm ] ( v ) implies .then and .since is injective , each is injective , which implies that is injective .since , it now follows that . since and it now follows that .\(v ) if is closed then it is compact , because is compact .it follows that is a continuous bijective mapping from a compact space onto a hausdorff space . by (* theorem 5.6 , p.167 ) it follows that is a homeomorphism .it follows that is a homeomorphism .\(vi ) suppose that is connected and is not a singleton .it follows that is not a singleton .it follows that is not connected .( since contains more than one point and is a subset of which is totally disconnected when it contains more than one point , it follows that is not connected . )now suppose is continuous .then is a homeomorphism .it follows that is not connected . but is connected .so is not continuous . 
in definition [ ftransdef ]we define a type of transformation between attractors of iterated function systems by composing sections of coding maps with coding maps .we call these transformations `` fractal '' because they can be very rough ; specific examples demonstrate that the graphs of these transformations , between compact manifolds , can possess a non - integer hausdorff - besicovitch dimension .[ ftransdef]let be a point - fibred iterated function system over a compact hausdorff space .let be the attractor of .let be the coding map of .let be a branch of .let be a point - fibred iterated function system over a compact hausdorff space let be the attractor of .let be the coding map of .the corresponding * fractal transformation * is defined to be in theorem [ ctyfractaltransthm ] we describe some continuity properties of fractal transformations .these properties make fractal transformations interesting for applications to digital imaging .[ ctyfractaltransthm]let and be point - fibred iterated function systems as in definition [ ftransdef ] .let be the corresponding fractal transformation .\(i ) if , whenever , then is continuous .\(ii ) if is an address space for , and if , whenever , then is a homeomorphism and .\(i ) assume that , whenever , .we begin by showing that is the same as let .then there is such that because is a code space for hence ( using theorem [ sectionthm ] ( iii ) ) .hence , where we have used our initial assumption .it follows that .it follows that is a continuous map from a compact space to a compact hausdorff space but is the composition of a continuous mapping from a compact hausdorff space onto a hausdorff space , with a mapping from into a hausdorff space .it follows by a well - known theorem in topology , see for example ( * ? ? ?* proposition 7.4 , p. 195) that is continuous .\(ii ) assume that is an address space for , and that , whenever , .then by ( i ) both of the mappings and are continuous . using the fact that the range of is it is readily checked that and hence is a homeomorphism and .suppose that the equivalence relations and , induced by and respectively , are the same .then it is well known that , using compactness , the quotient topological space is homeomorphic to both and .see for discussion of relationships between the topology of an attractor and the equivalence class structure induced by a coding map .in order to construct a fractal transformation we need to specify a section of . in order to construct a section of have to construct an address space ; that is , we need to specify one element from each of the sets in the collection . to do this in a general wayseems to be difficult ; for example , if contains a nonempty open set , then is non - denumerable for all .however there are two particular related methods .these methods yield interesting structure ; for example , in both cases the resulting address space is mapped into itself by the shift operator , see theorem [ maskbranchthm ] ( ii ) .they are as follows .\(a ) ( masked iterated function system method . )this method requires that is injective .define a dynamical system on with the aid of inverses of the functions in .follow orbits of to define ; in effect one uses a markov partition associated with to define .\(b ) ( fractal tops method . )use the dictionary order relation on to select a unique element of for each .this method applies when is not required to be injective .when is injective , it is a special case of ( a ) . 
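To make the composition pi_G o tau_F concrete, here is a small numerical sketch (our own toy example, restricted to the non-overlapping case, where the resulting transformation is a homeomorphism in the sense of theorem [ctyfractaltransthm](ii)). The section of pi_F is read off by iterating the obvious expanding dynamical system, and the recorded address is then replayed through pi_G.

```python
# Sketch (ours): the fractal transformation h = pi_G o tau_F between the two
# non-overlapping affine IFSs on [0, 1]
#   F = { p*x , p + (1-p)*x }   and   G = { q*x , q + (1-q)*x }.
# tau_F follows the expanding dynamical system x -> x/p or x -> (x-p)/(1-p),
# recording which branch was used; pi_G replays the address with G's maps.

def fractal_transformation(x, p=0.3, q=0.5, depth=48):
    address = []
    for _ in range(depth):            # tau_F: read off an address of x
        if x < p:
            address.append(0)
            x = x / p
        else:
            address.append(1)
            x = (x - p) / (1 - p)
    y = 0.0                            # pi_G: decode the address with G
    for s in reversed(address):
        y = q * y if s == 0 else q + (1 - q) * y
    return y

print(fractal_transformation(0.3))     # the point p is sent to q = 0.5
print(fractal_transformation(0.09))    # p^2 is sent to q^2 = 0.25
# In this non-overlapping case the map is a homeomorphism of [0, 1]; its inverse
# is obtained by exchanging the roles of p and q.
```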
to date, the fractal tops method seems to be the easiest to convert to computational algorithms and applications .definition [ maskdef ] introduces a special partition of an attractor .[ maskdef]let be a point - fibred iterated function system on a compact hausdorff space .let be the attractor of . a finite sequence of sets called a * mask * for if 1 . , ; 2 . , , ; 3 . . note that for any there exists a unique such that .this enable us , in definition [ dynsysdef ] , to define an associated dynamical system on the attractor . [ dynsysdef ]let be a mask for an injective point - fibred iterated function system with attractor the associated * masked dynamical system * for is theorem [ maskthm ] associates a unique section of with a masked dynamical system .[ maskthm]let be an injective point - fibred iterated function system with attractor let be a masked dynamical system for , associated with mask .let and let be the orbit of under ; that is , and for .let be the unique symbol such that , for .then is an address space for .let .we begin by proving that , for all and we have fix we use induction on since it follows that so .it follows that equation [ inductioneq ] is true for .suppose that equation [ inductioneq ] is true for .it follows that .we also have so which implies hence equation [ inductioneq ] is true for .this completes the induction on .it follows that for all it follows that .it follows that for all it follows that ; that is , is injective , suppose for some then for some we have which implies . hence it follows that .let be an injective point - fibred iterated function system .the address space provided by theorem [ maskthm ] is called a * masked address space * for .the corresponding section of , say , is called a * masked section * of masked address spaces and masked sections have all of the properies of address spaces and sections , such as those in theorem [ sectionthm ] , and associated fractal transformations have the properties in theorem [ ctyfractaltransthm ] .but these objects have additional properties that derive from the existence and structure of the masked dynamical system .some of these additional properties are described in theorem [ maskbranchthm ] .[ maskbranchthm]if is an injective point - fibred iterated function system , with attractor , code space , coding map , mask , masked address space and masked section , then \(i ) if is open then is continuous at if and only if for all ; \(ii ) the shift map is well - defined , with ; \(iii ) the following diagram commutes {ccc}a & \overset{t}{\rightarrow } & a\\ \tau\downarrow\text{\ \ \ \ } & & \text { \ \ \ } \downarrow\tau\\ \omega_{\mathcal{m } } & \overset{s}{\rightarrow } & \omega_{\mathcal{m}}\end{array } \label{commutediagram}\ ] ] \(iv ) if there is such that then \(i ) suppose that for all .let converge to let and let . since there is an integer such that for all .since , for all , is continuous ( because is invertible and open ) we have to as .similarly , for any given , there is such that for all and all .it follows that , for any given , for all and all . it follows that converges to , i.e. converges to it follows that is continuous at . to prove the converse we assume that it is not true that for all .it follows that there is some such that here as in the first part of the proof , we write .it follows that there is a sequence that converges to with for all .( any neighborhood of must contain a point that is in . )it follows that converges to since is continuous .( we define . 
) but while because .it follows that is not continuous .the desired conclusion follows at once .( ii)&(iii ) let let .then , whence .it follows that the shift map is well - defined , with .\(iv ) let and let be such that .then whence . by ( iii ) we have so .we remark that masked dynamical systems are related to markov partitions in the theory of dynamical systems .see for example ( * ? ? ?* proposition 18.7.8 , p.595 ) .in general a masked dynamical system depends on the mask . by suitable choice of maskwe can sometimes obtain a dynamical system with a desired feature such as continuity , or which relates the iterated function system to a known dynamical system , as illustrated in the following example .[ e : example1 ] consider the ifs where is a parameter .the attractor of . ] .then the masked transformation is continuous .this is the well - known one - parameter tent map dynamical system , see for example ( * ? ? ?* exercise 2.4.1 , p.78 ) .we note that , for any there exist a positive integer such that ] then the masked address space for is example1r.png theorem [ l : disjoint ] concerns the relationship between masked address spaces corresponding to distinct masks .it has an application to packing multiple images into a single image , as illustrated in example [ expack3 ] .[ l : disjoint ] let be a point - fibred iterated function system with attractor . let be a masked dynamical system for corresponding to mask .let be a measure on .let be a mask for such that let and be the masked sections of corresponding to and .then for all .if it follows that for all , where is the masked dynamical system corresponding to .then for all . butthis is impossible for all .`` fractal tops '' is the name we use to refer to the mathematics of tops functions , tops code spaces , tops dynamical systems , and associated fractal transformations ; see for example . herewe show that , in the case where is injective , fractal tops arise as a special case of masked iterated function systems .specifically , a _ tops code space _ is a special case of _ masked address space , a tops function _ is a special case of a _ section of _ and a _ tops dynamical system _ is a special case of a _ _ masked dynamical system__ the computations associated with fractal tops tend to be less complicated than those for masked systems .define a dictionary ordering ( see for example ) on as follows : if , then where is the least index for which , for all .with this ordering , every subset of possesses a greatest lower bound and a least upper bound . 
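Before completing the tops construction in the next paragraph, a small numerical sketch (ours, with illustrative parameter values) may help fix the masked machinery: the masked dynamical system is followed for a fixed number of steps, the visited mask indices form the masked address, and replaying that address through the IFS recovers the starting point, as theorem [maskthm] requires. With the mask point placed at the lower edge of the overlap, and with the convention that the symbol 1 ranks above 0 in the dictionary order, this masked section coincides with the tops section discussed next.

```python
# Sketch (ours): a masked section for the overlapping affine IFS on [0, 1]
#   f_0(x) = a*x ,  f_1(x) = a*x + (1 - a) ,  a = 0.6  (images overlap on [0.4, 0.6]).
# The mask M_0 = [0, c), M_1 = [c, 1] needs c in [0.4, 0.6] so that M_i lies in f_i([0,1]).

def masked_address(x, a=0.6, c=0.5, depth=32):
    """Follow the masked dynamical system and record which mask set the orbit visits."""
    address = []
    for _ in range(depth):
        if x < c:                     # x in M_0: apply f_0^{-1}
            address.append(0)
            x = x / a
        else:                         # x in M_1: apply f_1^{-1}
            address.append(1)
            x = (x - (1 - a)) / a
    return address

def decode(address, a=0.6):
    """pi(address): replay the address with the IFS maps, innermost first."""
    y = 0.0
    for s in reversed(address):
        y = a * y if s == 0 else a * y + (1 - a)
    return y

x = 0.53                                           # a point inside the overlap region
sigma = masked_address(x)                          # which address it gets depends on the mask
print(sigma[:8], abs(decode(sigma) - x) < 1e-6)    # pi(tau_M(x)) = x, as in the theorem
# Choosing c = 1 - a = 0.4 makes the rule "apply f_1^{-1} whenever possible", so this
# masked section then agrees with the tops section (largest address, with 1 ranked above 0).
```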
since is continuous and is compact , possesses a unique largest element , , for each .this allows us to define an address space for by the corresponding section of is if is injective then the tops dynamical system is a masked dynamical system , corresponding to the mask defined by the fact that can be computed from the set without reference to other points on the orbit of simplifies the computation of in applications , see for example .note that we can use the orbits of a tops dynamical system to calculate the top address of any point according to where we generalize the technique of color - stealing , introduced in and implemented for example in .define a _ picture _ to be a function of the form where is a color space .the set is the _ domain _ of the picture .we are concerned with situations where is a subset of an attractor of an iterated function system .let be an injective iterated function system with attractor .let be an iterated function system with attractor and the same code space as for .let be a mask for and let be the corresponding section of let be the coding map for .then we can define a mapping from the space of pictures on into the space of pictures on according to we refer to this procedure as color - stealing because colors from the picture are mapped onto the attractor to define the new picture .it follows from theorem [ maskbranchthm ] ( i ) that the transformation is continuous at all points whose orbits lie in where denotes the boundary of . in some cases , such as those in example [ stealingex ] , is a set of lebesgue measure zero , so the transformation is continuous almost everywhere .this explains patches of similar colors tends to exist in pictures that are obtained by color - stealing from real world photos , where patches of similar colors occur for physical reasons .[ stealingex ] figure [ combined ] illustrates color - stealing using ( i ) fractal tops ( left ) , and ( ii ) a masked iterated function system ( right ) that is not a tops system .the picture from which colors are stolen is lena embedded in a black surround , shown in the middle panel . in ( i ) is an affine iterated function system whose attractor is a filled square , the domain of , such that is a set of tiles that tile by rectangles . in ( i ) is the projective iterated function system , where{|c|c|c|c|c|c|c|c|c|c|}\hline & & & & & & & & & \\\hline & & & & & & & & & \\ & & & & & & & & & \\ & & & & & & & & & \\ & & & & & & & & & \\\hline \end{tabular } .\ ] ] and is an affine ifs whose attractor is a filled square ; see also . in ( ii ) the iterated function system and mask are the same as in example [ goldenlennaex ] , while is a perturbed version of in example [ goldenlennaex ] .combined.png under the conditions of theorem [ ctyfractaltransthm ] ( ii ) the fractal transformation is a homeomorphism such homeomorphisms can be applied to pictures to yield new pictures that have the same topological properties as the original .for example the connectivity properties of the set defined by a particular color will be preserved , as will be the property that certain colors lie in an arbitrary neighborhood of a point . butgeometrical properties , such as hausdorff dimension and collinearity , may not be preserved and , indeed , may be significantly changed . 
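As a small runnable illustration of how such a homeomorphism acts on a digital picture (this sketch is ours; the paper's examples use genuinely two-dimensional affine, projective and bilinear IFSs), one can warp an image by pulling it back through the inverse homeomorphism applied to each coordinate. In line with the remarks above, the connectivity of the coloured regions survives while their geometry is distorted.

```python
# Sketch (ours): applying a fractal homeomorphism to a digital image. For brevity a
# 1-D homeomorphism h (two-map affine IFSs split at p and q, as in the earlier sketch)
# is applied to each image coordinate separately; the resampling step is the same as in
# the 2-D case: the output picture is the input picture pulled back by the inverse map.
import numpy as np

def h(x, p, q, depth=40):
    """pi_G o tau_F for the non-overlapping two-map affine IFSs split at p and q."""
    addr = []
    for _ in range(depth):
        if x < p:
            addr.append(0); x = x / p
        else:
            addr.append(1); x = (x - p) / (1 - p)
    y = 0.0
    for s in reversed(addr):
        y = q * y if s == 0 else q + (1 - q) * y
    return y

def warp(img, p=0.5, q=0.35):
    """out(u, v) = img(h_qp(u), h_qp(v)): pull back by the inverse homeomorphism h_qp."""
    n = img.shape[0]
    src = np.array([min(int(h((k + 0.5) / n, q, p) * n), n - 1) for k in range(n)])
    return img[np.ix_(src, src)]

test = (np.add.outer(np.arange(128), np.arange(128)) % 32 < 16).astype(float)
warped = warp(test)        # the stripes keep their connectivity, their geometry is distorted
print(test.shape, warped.shape)
```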
techniques for constructing and computing fractal homeomorphisms using fractal tops , with projective , affine , and bilinear ifss , have been discussed in .families of fractal homeomorphisms , built from such transformations in , may be established by using code space arguments .typically , the attractors of the iterated function systems in question are non - overlapping , which simplifies the proofs : in some situations one only needs to show that the equivalence classes of addresses agree on certain straight line segments .the resulting families of transformations are described by a finite sets of real parameters .these parameters may be adjusted to achieve desired effects such as increased roughness , or continuous ( but non - differentiable ) transformation from a meaningless picture into a meaningful one .both of these effects are illustrated in example [ roughex ] .[ roughex ] figure [ transforms ] illustrates three fractal homeomorphisms of the unit square applied to lena .all the iterated function systems involved are constructed using bilinear functions defined as follows .let ^{2}\subset\mathbb{r}^{2} ] where and have mask where ] and ] have mask .then there exists ] is a homeomorphism when the inverse of this homeomorphism is \rightarrow\lbrack0,1] ] .define affine transformations by let and .we consider the dynamical system{c}w_{-}^{-1}(x , y)\text { if } ( x , y)\in s_{-},\\ w_{+}^{-1}(x , y)\text { if } ( x , y)\in s_{+}. \end{array } \right.\ ] ] this possesses a `` repeller '' , a compact set such that in order to define , we define and is well - defined because it is the intersection of a decreasing sequence of nonempty compact sets .( it is quite easy to see that is the graph of a monotone function from ] ) using symmetry about the line and the contractivity of and in both the and directions it can be proved that has the following properties .\(i ) \(ii ) is symmetrical about the line \(iii ) , ] such that , , for all , ] ; \(v ) there is a unique ] such that , , for all ] ; \(vii ) \(viii ) for all ] obeys;\ ] ] \(x ) if , then the masked dynamical system \rightarrow\lbrack0,1] ] .figure [ goldenimage ] illustrates the `` repeller '' .it is a subset of the attractor of the iterated function system and may be used to compute as illustrated in figure [ figure ] .goldenimage.png an example of a fractal transformation , arising from a masked pair of overlapping affine iterated function systems , is given in example [ goldenlennaex ] .the resulting homeomorphism with and yields a picture of lena with extra large eyes , figure [ goldenlenna ] .[ goldenlennaex ] let be the family of affine iterated function systems defined by let and , for , ] .we are concerned with fractal transformations from ^{2} ] . in applications to digital imaging , ^{2} ]. then is injective and invertible on its range , .moreover , from theorem [ l : disjoint ] it follows that for all , where is lebesgue measure .let and be two pictures , each supported on . then is a picture supported on and is a picture supported on .we choose and - lena , where both pictures are . 
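Before the combined picture is discussed below, a one-dimensional sketch (ours) may help fix the storage idea; the paper itself stores 512 by 512 pictures with two-dimensional IFSs, and every parameter below is an illustrative stand-in. Two masked sections of one overlapping IFS, taken with different mask points, are pushed through the coding map of a non-overlapping binary IFS; in the spirit of theorem [l:disjoint] their ranges overlap very little, so two signals written into a single array through the two transformations can each be read back by the transformation that stored it.

```python
# 1-D sketch (ours) of storing two signals in one array through two masked fractal
# transformations whose ranges barely overlap. All parameter values are illustrative.
import numpy as np

def h(x, c, a=0.6, depth=30):
    """pi_binary o tau_c: masked section of the overlapping IFS {a*x, a*x + 1 - a}
    with mask split at c, decoded through the non-overlapping binary IFS."""
    addr = []
    for _ in range(depth):
        if x < c:
            addr.append(0); x = x / a
        else:
            addr.append(1); x = (x - (1 - a)) / a
    y = 0.0
    for s in reversed(addr):
        y = 0.5 * (y + s)
    return y

n = 2048
cells = (np.arange(n) + 0.5) / n
sig_a = np.sin(6 * np.pi * cells)                  # stands in for the first picture
sig_b = np.sign(np.sin(2 * np.pi * cells))         # stands in for the second picture
carrier = np.zeros(n)

idx_a = np.minimum((np.array([h(x, 0.42) for x in cells]) * n).astype(int), n - 1)
idx_b = np.minimum((np.array([h(x, 0.58) for x in cells]) * n).astype(int), n - 1)
carrier[idx_a] = sig_a                             # write signal A through h with mask 0.42
carrier[idx_b] = sig_b                             # write signal B through h with mask 0.58

rec_a = carrier[idx_a]                             # read back through the same transformation
print("shared carrier cells:", len(np.intersect1d(idx_a, idx_b)) / n)
print("error vs A:", float(np.mean(np.abs(rec_a - sig_a))),
      "error vs B:", float(np.mean(np.abs(rec_a - sig_b))))
# The error against A should come out markedly smaller than against B: the single
# carrier array approximately contains both signals.
```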
the digitized combined picture , also of resolution ,is shown in the middle panel of figure [ allmasked ] .pixels which correspond to points in both are colored white .the left - hand panel in figure [ allmasked ] illustrates the picture and the right - hand panel shows .hence we can `` store '' the two pictureswe have that for each distinct choice of ] .if we associate probabilities and with and respectively , then the associated markov operator has invariant probability measure equal to uniform lebesgue measure supported on ] , but singular continuous and concentrated on the set of points whose binary expansions contain nine times as many ones as zeros . hence ,if the chaos game is applied to using the first set of probabilities , the points of the resulting random orbit will tend to be disjoint from those obtained by of applying the chaos game using the second set of probabilities .the precise manner in which the orbits of the two systems concentrate is governed by convergence rates associated with the central limit theorem .the same idea can be applied to `` store '' several images in a single image e : a stored image is retrieved by applying the appropriate fractal transformation to e. all.png we describe the method by means of an example .[ expack4 ] we use three iterated function systems: then is associated with probabilies similarly the iterated function system is associated with probabilities the goal is to `` store '' two standard digital color images , pepper and lena , each in a single color image also . the image is supported on and associated with two probability measures , and invariant under with probabilities and respectively .\(1 ) in order to `` encode '' pepper , we run a coupled chaos game algorithm with associated with pepper , supported on a copy of and associated with with ( supposedly ) i.i.d .probabilities ; that is , we compute a random sequence of points where where ( associated with pepper ) and ( associated with ) and where with probability .at each step the pixel containing in the ( initially blank ) image is plotted in the color of pepper at the pixel containing . at the end of this process consists of an `` encoded '' version of pepper .\(2 ) in order to `` encode '' lena , we again run a coupled chaos game algorithm with associated with lena , supported on a copy of and associated with ( already `` painted '' with an encoding of pepper ) with probabilities ; that is , we compute a sequence of points where where ( associated with lena ) and ( associated with ) and where with probability . at each stepthe pixel containing in the image is plotted in the color of lena at the pixel containing .( the pixel is overwritten by the latest colour value . 
)we have used half as many iterations in the encoding of lena as we did for pepper , because a proportion of the points that correspond to pepper are overwritten by points corresponding to lena .an image that is a realization of steps ( 1 ) and ( 2 ) is shown in the central panel of figure [ all ] .the image is approximately homeomorphic to both of the images pepper and lena , under the fractal transformations and respectively , that is in practice , to obtain the decoded images , shown on the left and right hand sides of figure [ all ] , we use the chaos game algorithm again , as follows .\(3 ) in order to `` decode '' pepper , we run a coupled chaos game algorithm , with probabilities with associated with an image ( initially blank ) , supported on a copy of and associated with ( now encoding both pepper and lena ) .that is , we compute a random sequence of points where where ( associated with pepper ) and ( associated with ) and where with probability . at each stepthe pixel containing in the ( initially blank ) copy of is plotted in the color of at the pixel containing .the result of such a decoding , starting from the encoded illustrated in the middle panel , is shown in the left panel in figure [ all ] .\(4 ) in order to decode lena , we run a coupled chaos game algorithm , with probabilities with associated with an image ( initially blank but to become the decoded image ) , supported on a copy of and associated with .that is , we compute a random orbit where where ( associated with pepper ) and ( associated with ) and where with probability . at each stepthe pixel containing in the ( initially blank ) copy of is plotted in the color of at the pixel containing .the result of following this decoding algorithm , starting from the encoded in the middle panel , is shown in the left panel in figure [ all ] .we thank louisa barnsley for help with the illustrations .
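For completeness, here is a one-dimensional sketch (ours) of the coupled chaos-game encode/decode of steps (1) to (4) above. The paper's pictures, iterated function systems and probability vectors are replaced by illustrative stand-ins, so this is only a schematic of the measure-based storage idea, not a reproduction of example [expack4].

```python
# 1-D sketch (ours) of coupled chaos-game storage. The same random symbol drives a
# "source" IFS and the carrier IFS, so each visited carrier cell receives the signal
# value at the corresponding source point; the two strongly biased probability vectors
# make the two stored signals occupy largely different sets of carrier cells.
import numpy as np
rng = np.random.default_rng(0)

def step(x, s, split):                      # two-map affine IFS on [0, 1] split at `split`
    return split * x if s == 0 else split + (1 - split) * x

def run(probs, split_src, iters=200_000, split_car=0.5):
    """Yield coupled points (x on the source attractor, y on the carrier attractor)."""
    x = y = 0.0
    for s in rng.choice(2, size=iters, p=probs):
        x, y = step(x, s, split_src), step(y, s, split_car)
        yield x, y

def encode(signal, carrier, probs, split_src):
    n = len(carrier)
    for x, y in run(probs, split_src):
        carrier[min(int(y * n), n - 1)] = signal[min(int(x * n), n - 1)]

def decode(carrier, probs, split_src):
    n = len(carrier)
    out = np.zeros(n)
    for x, y in run(probs, split_src):
        out[min(int(x * n), n - 1)] = carrier[min(int(y * n), n - 1)]
    return out

n = 512
sig_a = np.sin(np.linspace(0, 8 * np.pi, n))          # stands in for "pepper"
sig_b = np.cos(np.linspace(0, 3 * np.pi, n))          # stands in for "lena"
carrier = np.zeros(n)
encode(sig_a, carrier, probs=[0.9, 0.1], split_src=0.4)
encode(sig_b, carrier, probs=[0.1, 0.9], split_src=0.7)
rec_a = decode(carrier, probs=[0.9, 0.1], split_src=0.4)
print(float(np.mean(np.abs(rec_a - sig_a))), float(np.mean(np.abs(rec_a - sig_b))))
# The first error should be much smaller than the second: decoding with the first
# probability vector approximately recovers sig_a from the combined carrier.
```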
We present a general theory of fractal transformations and show how it leads to a new type of method for filtering and transforming digital images. This work substantially generalizes earlier work on fractal tops. The approach involves fractal geometry, chaotic dynamics, and an interplay between discrete and continuous representations. The underlying mathematics is established and applications to digital imaging are described and exemplified.
Keywords: iterated function systems, dynamical systems, fractal transformations. MSC: 37B10, 54H20, 68U10.
immunotherapy seeks to arm the patient with an increased number of effective immune cells that are specifically trained to seek out and destroy harmful cells .one target for immunotherapy that has earned a place in the clinical spotlight recently is melanoma , since specific peptides have been isolated from melanoma cells that can be used in cancer vaccines .a subset of immune cells , called dendritic cells , are able to stimulate the production of immune effector cells that are capable of recognizing and killing specific tumor cells .researchers have been able to extract these cells from patients and culture them _ ex vivo _ to create a vaccine that can boost a patient s response against their own cancerous cells , .the success of clinical trials of dendritic cell ( dc ) vaccines has resulted in the recent fda approval of the first cancer vaccine for prostate cancer , despite promising clinical responses in vaccine trials , it remains difficult to predict which patients will actually respond to these vaccines and why , .this may be due to the complicated kinetics of the immune response to the presentation of antigen by the dendritic cells . in this paper, we analyze the effect of the delay between the time the dcs come into contact with the t - cell population and the initiation of the expansion , or proliferating phase .we focus on the dynamics describing the active effector t - cell population in the spleen , using a phenomenological description that captures the effect of the delay .our goal here is to isolate the role of this delay term in order to analyze its effect on the sustainability of t - cell production .the model we present is very simple , and is not intended to realistically capture the entire cascade of immune events in the response to antigen presentation by dendritic cells . nor does this model describe the trafficking of immune cells between compartments of the body , in particular between lymphoid organs , such as the spleen , and the tumor site .these effects are crucial to the understanding of the immune response and must be considered in the design of vaccine therapies .one group of experiments that looks at the trafficking of dcs between compartments is described in , and a mathematical model of dc trafficking and t - cell activation is presented in .elsewhere we combine these ideas , along with previous work on cell - lysis rates , , in a more realistic model that includes several types of immune cells , and trafficking between compartments . in section [ modeldescription ]we describe the model itself , and then we show stability switching as a function of the delay in section [ stability ] .we conclude with some discussion of the model simulations and results in section [ conclusion ] .after encountering antigen , such as tumor cells or tumor peptides , dendritic cells migrate to lymph organs such as the spleen . as they migrate , they mature and , upon arrival in the spleen , they are able to activate the proliferation of t - cells that have the ability to seek out and destroy ( _ lyse _ ) the specific target cells that provided the antigen .this activation requires some contact time between the dendritic cells and the naive t - cells ; this connection time is called the synaptic connection time . 
" in our model , we represent this time as a delay , .once proliferation is initiated , new antigen - specific t - cells are created .a somewhat simplified description of the fate of these new cells is as follows : the cells must go through a check - point during which the new cells are tested for efficacy against the antigen .if they pass this test , they either move into the blood stream to migrate to the target cells , they become memory cells in order to protect against future challenges by the same antigen , or they return to the proliferating compartment to produce more specific t - cells .proliferation continues only while certain cytokines are present .these cytokines are produced by previously activated t - cells that have been in contact with dendritic cells for the synaptic connection time , .when the dendritic cells have done their job educating " the t - cells , it is believed that they die , , and are then cleared " from the spleen compartment .when the dcs are no longer present , cell proliferation ceases , and the activated t - cells either move into the blood stream or become memory cells . this process is represented graphically in figure [ fig : modelprocess ] . in this model ,the concentration of the dendritic cells at a given time , , is given by , while the number of activated t - cells is given by .the activation / proliferation rate is then given by where is represents the feedback " function .feedback " function depends on the number of t - cells present in the proliferating compartment time earlier in the presence of the dcs ; therefore they are producing the necessary cytokines .the current number of proliferating t - cells is represented by .we assume that the feedback " function , , is a positive function that increases to some maximum level , , and then decreases .the justification for this functional form is that activation by dcs is maximized at a certain concentration of dcs in the spleen .the addition of more dcs has the effect of keeping the activated cells in the area .if they are present for too long , the effectiveness of the t - cells begins to diminish .this is sometimes called the decreasing potential hypothesis " in the literature , . 
one simple function that captures these dynamicsis given by : .3 in -.5 in .2 in where represents the population of active t - cells from time ago .we assume that t - cells leave the proliferating compartment through death , moving into the blood stream or the memory compartment at an average per - cell rate , .putting these pieces together results in the following single delay equation : where for simplicity : by rescaling the time variable , we can assume that , ( the decay parameter , , must also be appropriately scaled ) .simulated solutions are shown in figures [ fig : sampletrajectories_tauzero ] and [ fig : switchingtimes ] .these solutions show that changing the value of , as well as the initial history ( the values of for ) , can have a strong effect on the long - term behavior of the system .we verify this in the next section by performing a stability analysis .when dendritic cells are present in the spleen , , so that the differential equation becomes : equation [ eqn : scaleddde ] potentially has three equilibria .one is the zero fixed point : , and the other two are solutions to these two solutions are depicted graphically in figure [ fig : twoequil ] .note that the function has a maximum which occurs at .when the maximum of this function is greater than the two additional equilibria will exist .this occurs for .we denote the greater of these two equilibria by , and the smaller by , if they exist . if is stable , then proliferation is sustained , and active effector cells continue to be produced and sent into the blood stream and into the memory compartment .however , if is unstable , then production may die off and no new effector cells will be produced , even when dcs are still present in the spleen .our goal in this section is to show that changing the value of the delay , , can change the stability of .linearization around an equilibrium , gives : where , , evaluated at the equilibrium , and .the characteristic equation is : evaluating and at gives : when we consider the characteristic equation in the case of the equilibrium , we always obtain , _ independent of . _thus we see that the equilibrium at is * always stable*. on the other hand , linearizing around the upper equilibrium at in the case when gives : using the fact that , we see that stability of the upper equilibrium is ensured if we saw earlier that the maximum of occurs at , and we know that . because we assume , we see that equation [ eqn : xstarcondition ] is satisfied .therefore , when , is * stable*. since corresponds to the ode case , and lies between the two stable equilibria and , we know that is * unstable*. sample trajectories are shown in figure [ fig : sampletrajectories_tauzero ] .changes in stability can occur when the real part of switches sign , i.e. when is purely imaginary . 
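The stability switch computed analytically in the next paragraph can also be seen numerically. The sketch below is ours: the paper's exact feedback function and rescaled parameters are not reproduced here, and we simply assume a representative rise-then-fall feedback f(x) = x e^{-x} together with illustrative values a = 4 and delta = 1, so this shows only the qualitative behaviour.

```python
# Numerical sketch (ours) of the stability switch of a delayed proliferation equation,
# under the assumed form  dE/dt = a*f(E(t - tau))*E(t) - delta*E(t),  f(x) = x*exp(-x).
import numpy as np

def simulate(tau, a=4.0, delta=1.0, e0=1.5, t_end=200.0, dt=0.001):
    """Forward-Euler integration with the constant history E(t) = e0 for t <= 0."""
    lag = max(int(round(tau / dt)), 1)
    n = int(t_end / dt)
    E = np.empty(n + lag)
    E[:lag] = e0
    for i in range(lag, n + lag - 1):
        delayed = E[i - lag]
        E[i + 1] = E[i] + dt * (a * delayed * np.exp(-delayed) * E[i] - delta * E[i])
    return E[lag:]

# Under this assumed f, a*x*exp(-x) = delta has two positive roots when a/e > delta;
# locate the larger one by bisection on [1, 10].
lo, hi = 1.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if 4.0 * mid * np.exp(-mid) > 1.0 else (lo, mid)
print("x_plus ~", round(lo, 3))

for tau in (0.5, 2.0, 6.0):
    tail = simulate(tau)[-20000:]          # the last 20 time units
    print(tau, round(float(tail.mean()), 3), round(float(np.ptp(tail)), 3))
# For small tau the tail settles close to x_plus (stable upper equilibrium); for larger
# tau the equilibrium loses stability and the solution either oscillates or collapses
# toward the always-stable zero state, reproducing the switching discussed in the text.
```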
in this case, the characteristic equation becomes at the upper equilibrium , , .equating the real and imaginary parts in equation [ eqn : om ] in this case gives : the first equation implies that .substituting this into the second equation gives : for a fixed value of , this gives a sequence of possible switching times , at which the equilibrium at can change stability .for example , taking , we see that , and possible switching times occur when figure [ fig : switchingtimes ] shows numerical solutions for increasing values of .we see that the equilibrium point at loses stability when , when a limit cycle appears .increasing destroys this limit cycle and , for larger values of , solutions that once tended to converge to the zero equilibrium .we have presented a simplified model of the interaction between dendritic cells and t - cells at the initiation of a specific immune response .the model contains one delay that represents the synaptic connection time between dendritic cells and t - cells which is necessary for the development of cytokines that stimulate proliferation .there is experimental evidence that prolonged contact between dendritic cells and t - cells results in a decrease in viable t - cell production .the model presented here shows a change in stability of the high t - cell equilibrium as the delay is increased .this equilibrium goes from being stable when the delay , , is small to being unstable for larger values of .these results support the experimental evidence for the decreasing potential hypothesis " .the model also suggest a role for dendritic cell therapies in boosting the immune defense against certain diseases such as cancer .however , the model indicates that levels of dendritic cells in the spleen should not be maintained at high levels for too long in order to optimize the expansion of active effector t - cells .we would like to thank the organizers of the 8th aims conference on dynamical systems , differential equations and applications that took place in dresden , germany , may 25 - 28 , 2010 , at which this work was presented .n. koike , s. pilon - thomas , and j.j .mule , _ nonmyeloblative chemotherapy followed by t - cell adoptive transfer and dendritic cell - based vaccination results in rejection of established melanoma_ j. immunotherapy * 31 * 4 , may ( 2008 ) , 402412 .
The activation of a specific immune response takes place in lymphoid organs such as the spleen. We present here a simplified model of the proliferation of specific immune cells in the form of a single delay equation. We show that the system can undergo switches in stability as the delay is increased, and we interpret these results in the context of sustaining an effective immune response to a dendritic cell vaccine.
Angela Gallegos, Ami Radunskaya
fractoluminescence is the emission of light during fracture of solids .this phenomenon has been reported in brittle materials such as sucrose , ice , mgo , mica , and in soft solids such as pressure - sensitive tape .it fits into the more general framework of fracto - emission : the emission of particles , including electrons , positive ions or x - rays during fracture .even though fracto - emission has been known by the scientific community since francis bacon s experiments in 1605 , it has been very little used as a tool to study the failure of materials .the common experimental techniques to study material failure are based on acoustic emissions or on direct observations .acoustic emissions allow to measure the elastic waves produced by events such as microcracking or dislocation avalanches . in a previous article , we demonstrated that in certain scintillating materials used for particle detection , fracture was accompanied by emission of light with the same spectrum as the visible scintillating light , and that fractures could therefore contribute to backgrounds in rare - event searches .we also argued that light emission could give new insight on the rupture dynamics and provide quantitative information about the energy fracture budget . to fully benefit from the excellent resolution of the photomultiplier ( single photons with nanosecond timing ) , to be able to follow the actual crack propagation , and to control the atmosphere around the sample, we have developed an enhanced multi - channel setup .it allows us to measure accurately light emissions , acoustic emissions , crack velocity , compression and loading force during stable and fast crack propagation in controlled atmosphere .our multi - channel setup has been specially designed to study inorganic crystal scintillators such as bgo ( ) , and , which are all very brittle materials . in order to break the specimens in a controlled and reproducible manner , the double cleavage drilled compression ( dcdc ) geometry defined by janssen in 1974 has been adopted .each sample is cut into a rectangular parallelepiped .all faces are polished to optical quality and a 2 mm diameter circular hole is made through the center of the faces .when such a sample is compressed along its long axis ( see figure [ figdcdc ] ) , a crack can be initiated on each side of the hole in the mirror plane parallel to the faces .two cracks can then propagate in opposite directions in this plane .the applied compression stress allows to control the velocity of both cracks up to a critical length , at which point the specimen abruptly breaks into two parts ( fast fracture ) .the slow crack propagation below the critical length is not only determined by the applied stress but also by subcritical rupture processes i.e. a thermally activated energy path at the crack tip which ease bond breaking .the surrounding gas is a well - known agent that may reduce the critical energy barrier required for thermally activated fracture . in order to take advantage of the sample geometry and cover the different regimes of fracture, we developed a setup which precisely controls the applied stress on the specimen and the atmosphere surrounding it .the setup is represented in figure [ figsetupoverview ] .the sample to be studied is squeezed between a backstop and a pushrod . 
the backstop is fixed and the pushrod is actuated by a linear stepper motor .a load cell monitors the force applied on the sample by the motor .the deformation of the sample is measured by a custom - made displacement sensor .two acoustic sensors record the elastic vibrations produced by the crack propagation , and a photomultiplier above the sample measures the visible light emitted . simultaneously , the propagation of cracks in the sample is observed using a near - infrared camera and a near - infrared light source ( filtered and not visible by the photomultiplier ) .signals produced by the sensors are digitized and recorded by a computer ; the video is separately saved on another computer . in order to allow accurate light measurement and to control the atmosphere , the sample , and most of the instruments ,are located in a sealed chamber .this enclosure is light - tight , with the exception of a quartz viewport for the photomultiplier tube , and has been tested at vacuum pressures down to .the vacuum is achieved using a turbo vacuum pump : hicube 80 eco manufactured by pfeiffer vacuum .the pressure is measured by a pkr 251 compact fullrange gauge .all sensors but the photomultiplier are inside the enclosure , and were tested for vacuum operations .the photomultiplier is outside mostly for size and cabling reasons .the linear actuator has also been left outside because its heat could not be properly dissipated in vacuum and the grease trapped inside may not have been suitable for vacuum operations . during fracture tests, the sample is squeezed between the pushrod and the backstop .the pushrod and the backstop are actually the parts which transfer the force to the specimen .they are designed to compensate for parallel defects of crystal faces with a ball joint adapter at the tip of the pushrod .otherwise , the load will not properly spread on the whole faces and the edges of the specimen will potentially break .since the crystals we use have a relatively high young s modulus , the pushrod and the backstop are made out of stainless steel , and the faces in contact with the specimen are mirror - polished for a good mechanical contact . the linear stepper motor which drives the applied stress on the sample is the na23c60 model manufactured by zaber .its best step resolution ( its smallest displacement ) is below and it allows a maximum continuous thrust of 950 n. the motor is position - controlled but its position is not relevant to determine the sample deformation .indeed , it would be biased by the deformation of the materials surrounding the sample . moreover, tests have shown some skipping of the motor gears and they do not seem to be taken into account by the motor controller .the applied force on the sample is regulated in real - time via an action on the stepper motor position .the load cell is the key component which gives the feedback on the applied stress .it converts the applied force ( correlated to the stress in the double - cleavage drilled compression geometry ) into an electrical signal .the load cell is the sml-300 model with a sga signal conditioner , both produced by interface .its gain is set to 7.7 mv / n for our application .according to the manufacturer , the sml-300 load cell had never been tested at low pressure .however , its design does not allow any air to be trapped inside the gauge , so it is safe for tests in vacuum . 
in vacuum , the strain gauges glued on the load cell produce of the order of 1 w of heat , which can only be dissipated by conduction through the metal parts holding the gauge , and not by conduction and convection in the air .the temperature of the cell is then expected to be slightly higher than at room pressure , but it should nt introduce a large error as the temperature is expected to remain within the compensated range ( the thermal error is for temperatures from to ) . a displacement sensor allows to measure the deformation of the sample , and then to get an evaluation of the work supplied by the motor .we chose a capacitive sensor for its good resolution , and for its ability to work in vacuum and at low temperatures .laser - based systems were disregarded because of potential interference with the optical readout .a capacitive sensor is made of two electrodes separated by a variable gap .the target electrode is fixed , while the probe electrode is attached to the point whose position is to be measured . neglecting fringe effects , two parallel platesform a capacitance inversely proportional to their separation , the gap : where is the dielectric constant of the medium in the gap , the vacuum permittivity , and the surface area of each electrode .the dielectric constant of air is equal to at normal pressure and temperature , meaning that measurements in air will be consistent with those in vacuum to .our implementation is based on the toth - meijer design .as shown in figure [ figcapasensor_electrodeprincipe ] , the use of a large , concentric , guard electrode around the small , circular , probe electrode allows to mitigate fringe effects with the larger target electrode .the guard and probe electrode are isolated electrically but at the same virtual voltage .the electric field lines are then linear between the probe and the target electrode making the capacitance relatively close to the parallel plate model of equation [ eq : capacitance_parallel_plate ] .the electrodes ( shown in figure [ figcapasensor_electrodes ] ) have been made out of brass and are rotationally symmetrical .the outer diameter of the target and the guard ring is 10 mm and the diameter of the probe is 6 mm .the space between the probe and the guard ring is about and is filled with stycast 2850 ft epoxy ( catalyst 9 ) .coaxial cables connect the electrodes to the capacitance - to - period converter .readout is accomplished by a martin voltage - controlled oscillator that converts the capacitance into a period .the components used for the capacitance - to - period converter are listed in table [ dispsensor_companents ] .[ 1]>b#1 + & oa1 : ltc6244 + & oa2 : opa132u + this board runs a xilinx spartan-3e fpga which provides a basic package for fast prototyping .a oscillator generates the main clock for the fpga .a daughter board has also been designed to connect the capacitance - to - period converter inputs and outputs to the fpga and to carry the digital - to - analog converter ( dac ) .the system output is thus a voltage proportional to the gap between the electrodes .our capacitive sensor was calibrated using the linear stepper ( sec .[ sec : compressionsystem ] ) to move the probe electrode .results are shown in figure [ figcapasensor_calibration_maintext ] .the sensor does not provide the absolute distance between the electrodes , but rather the change in distance between them as the stepper moves , which is the relevant quantity as the sample is compressed . 
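A worked order-of-magnitude example (ours) of the parallel-plate relation quoted above, using the probe dimensions given in the text, indicates the sub-picofarad scale that the capacitance-to-period converter has to resolve.

```python
# Worked check (ours) of C = eps_r * eps0 * A / g for the quoted probe dimensions
# (6 mm probe diameter, 0.2-0.8 mm operating gap).
import math

EPS0 = 8.854e-12                 # vacuum permittivity, F/m
AREA = math.pi * (3.0e-3) ** 2   # probe electrode area for a 6 mm diameter

def capacitance(gap, eps_r=1.0):
    return eps_r * EPS0 * AREA / gap

def gap_from_capacitance(c, eps_r=1.0):
    return eps_r * EPS0 * AREA / c

for g in (0.2e-3, 0.5e-3, 0.8e-3):
    print(f"gap {g * 1e3:.1f} mm  ->  C ~ {capacitance(g) * 1e12:.2f} pF")
# Roughly 1.25 pF at 0.2 mm down to 0.31 pF at 0.8 mm: sub-picofarad values, consistent
# with reading the gap out through a capacitance-to-period converter. Switching from
# air (eps_r ~ 1.0006) to vacuum (eps_r = 1) shifts C by well under 0.1 %.
```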
for the calibrations ,the linear stepper is directly attached to the probe electrode ( in standard operations , deformation of various intermediate objects like the force gauge and the pushrod mean that the displacement seen by the probe is different from that generated by the motor ) .the linear stepper therefore provides a calibrated displacement .comsol simulations have been used to determine the actual capacitance , taking into account the full geometry of the electrodes .matlab was used to simulate the capacitance - to - period electronic conversion .together , these simulations yield the offset to convert the motor steps into the absolute distance depicted in fig .[ figcapasensor_calibration_maintext ] .the sensor can measure displacements , rather than the absolute gap , with a relative accuracy of 1% , as determined by comsol and matlab simulations accounting for factors including nonlinearities and misalignment of the electrodes .the sensor functions over gaps ranging from 0.2 mm to 0.8 mm .the lower end of the range corresponds to the electrodes nearly touching ; the upper end corresponds to the dac saturating in amplitude .use of updated components ( tab .[ dispsensor_companents ] ) allows to refresh the measurement at a rate of 126 hz , compared to the 10 hz of the original design .operation has been demonstrated from ambient pressure down to . during material failure , crack growth and avalanches of dislocations release the stored strain energy .a part of this energy is converted into elastic waves , which propagates through the material .these elastic waves are commonly called acoustic emission and are generally in the ultrasonic range . in order to monitor the acoustic activity , two piezoelectric transducers , converting the transient waves into an electrical signal ,are placed on the surface of the specimen .the pico model , manufactured by mistras group and shown in figure [ figae_pico ] , is employed .it is a miniature and lightweight sensor ( outside diameter 5 mm , height 4 mm , weight ) .it is well - suited for small specimens and it allows to observe a large range of frequencies , with a 20 db bandwidth between 200 khz and 1 mhz .the current generated by the acoustic sensors is converted into a voltage and amplified using a custom circuit .an external board cuts off all the frequencies outside the range 10 khz1 mhz before recording the signals . in order to be sensitive to ultrasonic waves ,the sensors require a coupling to the specimen which minimizes the density of air bubbles trapped between the specimen and the sensors .we therefore glue the sensors to the sample using dow corning 732 silicone adhesive , which offers good adhesion on multiple substrates .a small force is applied to keep the sensors in place for at least 12 hours , allowing the glue to dry . during fracture ,no added mechanical device is needed to couple the sensors to the specimen , leaving a clear field of vision for the light sensor and for the camera .once a fracture test is done , the acoustic sensors can be simply removed by twisting .the pico sensors , adhered with the dow corning 732 adhesive , have been operated successfully at pressures down to .the light emissions during our fracture experiments are measured by a photomultiplier viewing the sample inside the experimental chamber through a quartz window .photomultipliers use the photoelectric effect to convert an incident photon into an electron that is then multiplied into a measurable signal . 
during scintillation and fracture of the bgo crystals we often use ,the light emissions are within the range 350650 nm with a peak at 480 nm ( see also sec . [sec : spectro ] ) . the r7056 model from hamamatsu is employed because it is sensitive over the range 185650 nm and is well - suited for photon counting .its maximum quantum efficiency is at 420 nm .one contribution to background comes from dark counts caused by the emission of electrons from the photocathode by thermal fluctuations ( called thermionic emission ) .since our setup does not allow to control the temperature of the photomultiplier , fracture tests are usually performed in less than two hours to avoid large fluctuations of temperature and ensuing changes in backround . the amount of light emitted during fracture tests turned out to be quite large . in order to deal with large amounts of light ,two spacers are placed in front of the photomultiplier .filters such as neutral density filters or hot filters ( see section [ sec : camera_tracking ] ) can then reduce the light reaching the light sensor . between calibrations and fracture measurements, filters can be switched without changing the distance between the sensors and the specimen to be studied , in other words , without changing the solid angle of the specimen for the light sensor .one of the novelties of our setup is the ability to observe the propagation of cracks within the sample , while measuring the light emission .based on the fact that the photomultiplier is relatively insensitive to near - infrared light , the specimen is illuminated using a near - infrared light source to allow a modified webcam to record the material failure . a 980 nm laser - diode ( vcsel-980 from thorlabs )is used as near - infrared light source because it can provide enough power ( 2 mw ) to illuminate the fracture scene and its narrow spectral emission range ( full width at half maximum 0.75 nm ) is outside the photomultiplier spectral range .the narrow beam of the laser diode is spread thanks to some light - diffusing plastic sheets scavenged from a computer screen . a modified c920 webcam from logitech is employed to record high - definition videos 1080 pixels ) with h.264 hardware encryption . ] of samples during fracture tests .its hot filter has been removed to broaden the camera range up to the actual limit of the cmos sensor ( above 1000 nm ) and to be then sensitive to the light at 980 nm .the bulky microphones on the camera board and the blue leds have been unsoldered for practical reasons . the webcam is placed inside the vacuum chamber , as close as possible to the sample .the cmos sensor quickly gets warm at ambient pressure , so a brass arm is used to make a thermal junction between the sensor and the metal baseplate of the setup . in vacuum , it helps heat to dissipate by conduction .operation confirmed that the stock webcam lens was not damaged when the air was pumped out of the setup , allaying fears the lens would fail because of air trapped inside .first tests showed that the photomultiplier was definitely sensitive to the light emitted by the laser - diode . an hypothesis to explainthis unexpected result is that multiple photon interactions with an electron might give to the electron enough energy to escape from the photocathode .the relative high power of the laser - diode make this hypothesis plausible .indeed , a power of 2 mw means about is . for a 980 nm photon , .hence , a light source generates . 
] which is far from being negligible .short - pass filters have thus been added in front of the photomultiplier to attenuate the unwanted signal caused by the laser - diode . during fracture tests ,the sample is illuminated in such a way that the cracks are dark on a bright background ( see figure [ figcamera_detection ] ) .the good contrast allows to simply convert the images to grayscale and then to black - and - white using a well - chosen threshold which highlights the crack surfaces .the crack tips are found as the horizontal extrema of the crack surfaces .the absolute crack lengths are then computed using the pinhole camera model which describes the relationship between the 3d scene and the 2d projection on the cmos sensor .this tracking algorithm has been fully implemented in matlab .figure [ figcamera_detection ] shows the successive steps to get a measure of the crack length .though pixel size corresponds to on the sample , the position of the crack front in the long dimension of the sample is known to about , and is dominated by the nonlinear shape of the crack front .in addition , our tracking algorithm assumes that the fracture strictly propagates in the longitudinal plane of the sample .however , certain cracks tend to slightly twist during their propagation , meaning the true crack length may be underestimated by up to 10% . during fracture tests ,the frames generated by the camera have to be synchronized with the other signals ( acoustic , light , force and displacement signals ) .flashes of light are generated using the 980 nm laser - diode to mark the time on the camera .this is done manually by closing the circuit feeding the diode by tapping it against the vacuum chamber which is the ground of the circuit .these taps also propagate acoustic waves through the setup to the acoustic sensors .the propagation delay is quite negligible compared to the exposure time of the camera ( 0.2 s ) because the acoustic waves go through approximately of aluminum at a speed of which gives a delay of only .we have verified that any drift between the camera and the data acquisition system remains shorter than the duration of a frame on tests lasting up to 2 hours . in order to obtain spectroscopic information on the emitted light , at the expense of any time resolution ,the setup can be optically coupled to a spectrometer .this is done by replacing the photomultiplier by an optical fiber leading to a horiba micro hr monochromator coupled to a liquid - nitrogen cooled horiba symphony ii ccd .the small solid angle under which the scintillator sees the optical fiber means that light collection efficiency is much lower than with the photomultiplier .the experimental protocol is to use an exposure of a few seconds , during which a sufficiently large force is applied to break the sample completely .an example of a spectrum is shown in fig .[ figspectro ] .the spectrometer was previously calibrated in terms of wavelengths using a mercury lamp to within 2 nm , but the spectrum amplitudes are not corrected for the various efficiencies .as described before , our study of fracture involves the use of multiple sensors : a photomultiplier , two acoustic sensors , a force gauge and a displacement sensor .a data acquisition system has been developed to simultaneously digitize the signals generated by these sensors at drastically different sampling rates , and to control the force applied on the sample in real - time . 
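Two remarks in passing. First, the photon-flux estimate garbled earlier in this section works out, under our reading, to roughly 10^16 photons per second, since a 980 nm photon carries hc/lambda of about 2.0e-19 J and the diode emits 2 mW. Second, the crack-tip tracking step lends itself to a compact sketch (ours): the threshold, the millimetres-per-pixel scale and the synthetic frame below are illustrative, and the full pinhole-camera conversion is reduced here to a single scale factor.

```python
# Minimal sketch (ours) of the crack-tip tracking step: threshold a grayscale frame so
# the dark crack stands out, take the horizontal extremes of the thresholded region as
# the two crack tips, and convert pixels to millimetres with a single scale factor.
import numpy as np

def crack_tips(gray_frame, threshold=60):
    """Return leftmost/rightmost column indices of pixels darker than `threshold`."""
    dark = gray_frame < threshold               # cracks appear dark on a bright background
    cols = np.where(dark.any(axis=0))[0]
    if cols.size == 0:
        return None                             # no crack visible in this frame
    return int(cols.min()), int(cols.max())

def crack_lengths(tips, hole_edges_px, mm_per_px=0.02):
    """Crack lengths measured outward from the two edges of the central hole."""
    left_tip, right_tip = tips
    hole_left, hole_right = hole_edges_px
    return (hole_left - left_tip) * mm_per_px, (right_tip - hole_right) * mm_per_px

# Synthetic 100x300 frame: bright sample, dark central hole, two dark horizontal cracks.
frame = np.full((100, 300), 200, dtype=np.uint8)
frame[40:60, 130:170] = 10                      # the drilled hole
frame[49:51, 90:130] = 10                       # left crack
frame[49:51, 170:230] = 10                      # right crack
print(crack_lengths(crack_tips(frame), hole_edges_px=(130, 170)))   # ~ (0.8 mm, 1.2 mm)
```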
since fracture is a one - shot process , fracture tests imply to continuously digitize all the channels , without any dead time , for hours .two high - speed digitizers manufactured by cronologic are used .the ndigo5 g is a 5 giga - sample - per - second ( gsps ) analog - to - digital converter ( adc ) with a 10 bit resolution and the ndigo250 m is a 250 mega - sample - per - second ( msps ) adc with a 14 bit resolution .these digitizers are designed to be plugged on pcie slots for a high throughput .the ndigo5 g is used to record the light signal at full speed since single photoelectron pulses are a few ns long .the ndigo250 m is used to continuously record the slower acoustic , force and displacement signals at a reduced speed of 2 msps .two important features of these digitizers are the synchronization , done internally , and the onboard zero suppression . on the ndigo5 g, the onboard field programmable gate array suppresses noise between events ( see sec . [sec : lightchannel ] ) , which significantly reduces the load on the downstream software and hardware . in a typical fracture experiment ,less than of the light data are actually relevant , and then , automatically recorded by the data acquisition system . without this zero suppression , continuously sampling the light channel at 5 samples per nanosecond over an hour , would by itself lead to over 20 tbytes of data , straining data storage and slowing offline data reduction .the architecture of the data acquisition system is shown in figure [ figdigitizers ] .different software tools have been developed in c++ using boost and qt .they allow to record the data generated by the digitizers , display some signals in real - time , and regulate the force applied on the specimen .two computers are needed to perform a fracture test .the ndigo5 g and the ndigo250 m are plugged into computer 1 .the digitizing server on computer 1 collects all the data generated by both digitizers and saves them in binary files on two solid - state drives ( samsung ssd pro series ) .the ndigo250 m is configured to create only 4 kib packets of data .this size of packets allows a fast treatment by the drives .the size of data packets for the light channel strongly depends on the light signal dynamic , but light data are saved only when a sufficient amount of data is collected to minimize the number of calls to the drives .large buffers in the digitizer server are set in order to absorb punctual data floods caused by strong light emissions , and then spread over time the load on the solid - state drives . 
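as a quick sanity check of the figure quoted above, the raw data rate of the light channel without zero suppression can be estimated as follows; this back-of-the-envelope sketch assumes the 10-bit samples are packed with no overhead, which is an assumption rather than a detail taken from the acquisition firmware.

```python
# raw data volume of the light channel if it were recorded without zero suppression
sample_rate = 5e9          # samples per second (ndigo5g, 5 GS/s)
bits_per_sample = 10       # ADC resolution
duration_s = 3600          # one hour of continuous acquisition

raw_bytes = sample_rate * bits_per_sample / 8 * duration_s
print(f"{raw_bytes / 1e12:.1f} TB per hour")   # ~22.5 TB, i.e. "over 20 TB"
```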
the data from the ndigo250 m are also sent to computer 2 over ethernet using the user datagram protocol ( udp ) .it is a communication protocol designed to be minimalist , it does not guarantee that data will be delivered because packets are sent without prior communications , but it reduces the latency compared to connection - oriented communication protocols .another advantage is that it makes computer 2 fully independent of computer 1 .computer 2 collects the data from ndigo250 m received over ethernet .the force signal is extracted to regulate the force applied on the specimen using a proportional - integral - derivative controller which commands the motor setpoint .acoustic , force and displacement signals are sent to matlab where they are displayed in order for the user to monitor fracture tests in real - time .the principle of the light acquisition is shown in figure [ figtriggering ] .each photon detected by the photomultiplier produces a negative pulse on its output .an offset is added to the signal to use the full range of the digitizer .onboard zero suppression allows to only record the photons .when the signal goes below a threshold , a trigger occurs on the digitizer . for each trigger, a predefined number of signal samples before and after the trigger ( pre - trigger and post - trigger , respectively ) are grouped in a data packet and saved on the drives . if the signal is still below the threshold at the end of the post - trigger , the post - trigger is lengthened and the extra data samples are added to the same packet . packets also contains information such as timestamp , number of data samples and flags ( errors or warnings during the acquisition ) .on the drives , only the signal samples containing useful information are then saved thanks to this onboard zero suppression .when ionizing particles interact in a scintillator such as bgo , they deposit a certain amount of energy .some of this energy is reemitted as visible photons .the light yield of scintillators tells how many photons are actually emitted per unit of deposited energy . for -ray sources ,the light yield of bgo is 8 photons / kev . for a single interaction ,the time distribution of emitted photons generally follows one or more exponential decays . in the case of bgo ,the main time constant is ns . in , of the photons are emitted . in other scintillators , or at other temperatures ,the time distribution of photons can be more complex .the group of photons emitted by an interaction is called a scintillation event . to identify the scintillation events in the data , a sliding integralis computed . for each saved signal sample ,all the preceding signal samples within are integrated .thus , an integral is calculated for each saved signal sample , called signal integral .the analysis is described in figure [ figintegrationlightevent ] . to obtain energy spectrum . ] to identify scintillation events on the signal integral , a condition on the distance between two successive photons is used .if two successive photons are separated by more than ( called waiting time in figure [ figintegrationlightevent ] ) , two different scintillation events are considered . 
in the case where two scintillation events are very close to each other ( successive photons separated by less than ) , a single scintillation event is considered .if this method is compared to the detection of local maxima on the integral signal ( using a schmitt trigger for example ) , it is not required here to set threshold(s ) on the integral signal , which might depend on the photomultiplier and its voltage . for each scintillation event , the maximum integral is measured .it represents the maximum integral on a sliding window of .one of the first steps that needs to be carried out is determining the response of the system to single photons .to do so , a data acquisition as described previously can be performed with the photomultiplier in a dark chamber .it is then not excited by any light source but room - temperature thermal fluctuations eject some electrons from the photocathode .these electrons generate pulses similar to pulses generated by single incident photons .these pulses can be easily detected as separated events by the previous analysis and their integral can be calculated .figure [ spe ] shows the distribution of the integrals of the events from which the typical value is taken as the central value ( here , 22,480 a.u . ) of a gaussian fit over part of the spectrum .the typical rate of dark counts is a few hundred hertz .next , once the scintillator is placed in the setup , and before it is fractured , the light collection efficiency of the configuration can be determined , and the light channel calibrated .a calibration using two -ray sources , ( mainly 122 kev photons ) and ( mainly 662 kev photons ) , is performed to determine a relation between the number of photons measured by the photomultiplier and the number of photons actually emitted by the sample .the radioactive sources deposit a known energy in the sample ; therefore using the light yield , one can determine the number of photons emitted .the light collection efficiency is determined by comparing this number to the one obtained by building the spectrum of the detected integrals , and scaling it to the response to single photelectrons .figure [ figcs ] shows typical calibration spectra that we get during an experiment .thanks to the average integral of the single photoelectron obtained with figure [ spe ] , we can deduce the number of detected photoelectrons for a given event integral .thus , for the full collection peak of 662 kev , a gaussian fit over part of the peak gives an average event integral of photoelectrons that are detected .based on the light yield of 8 photons / kev for bgo , the light collection efficiency can be determined as .since the light emission spectra during scintillation and fracture are identical , the relation can be extended to the energy , so the calibration gives the ratio between the measured energy and the actual energy radiated by the specimen . during the fracture measurement itself, the photon rate is computed .this is achieved by integrating the light signal over a sliding window .the calculated integrals are converted into a number of emitted photons , using the light calibrations performed before the fracture test . by dividing the number of photons by the width of the sliding window ,the photon rate is thus determined .the typical width for the sliding window is about one second .this duration is a compromise , as shorter windows generally suffer from statistical fluctuations , whereas longer ones do not allow to precisely observe sharp changes in the photon rate . 
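a compact way to reproduce the event building and rate estimation described above is sketched below: photon pulse times are grouped into scintillation events whenever successive photons are closer than a waiting time, and the photon rate is obtained by counting photons in a sliding window. this simplified sketch works directly on photon timestamps rather than on the digitized signal integral, and the waiting time and window width used here are placeholders.

```python
import numpy as np

def group_into_events(photon_times, waiting_time):
    """Split a sorted array of photon arrival times into scintillation events.

    A new event starts whenever two successive photons are separated by more
    than `waiting_time`; closer photons are merged into the same event.
    """
    if len(photon_times) == 0:
        return []
    breaks = np.where(np.diff(photon_times) > waiting_time)[0] + 1
    return np.split(photon_times, breaks)

def photon_rate(photon_times, t_grid, window=1.0):
    """Number of photons per second in a sliding window centred on each t in t_grid."""
    counts = [np.sum(np.abs(photon_times - t) <= window / 2) for t in t_grid]
    return np.asarray(counts) / window

# usage with fake timestamps (seconds): sparse dark counts plus one dense burst
times = np.sort(np.concatenate([np.random.uniform(0, 10, 50),
                                10.0 + np.random.exponential(1e-6, 200),
                                np.random.uniform(10.5, 20, 30)]))
events = group_into_events(times, waiting_time=1e-5)
print(len(events), "scintillation candidates")
print(photon_rate(times, t_grid=np.arange(0, 20, 1.0)))
```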
the two acoustic sensors ( sec .[ sec : acousttrans ] ) produce analog signals which are run through a 10 khz1 mhz bandpass filter before being continuously recorded at 2 msps , as shown in figure [ aes]a .since the acoustic sensors are not very efficient below 100 khz , a 6th - order butterworth high - pass filter with 100 khz cut - off frequency is used offline .it suppresses environmental noises ( figure [ aes]b ) . after this filtering, well - separated events are determined when the power of the signal overcomes a threshold above the baseline ( figure [ aes]c ) .the power of the data at each instant is computed as the average of the squares of the signal samples in the previous 0.5 ms .next , at each point in time , the average ( ) of the power , and its standard deviation ( ) are both computed over the previous second .at each point in time , an adaptive threshold is then defined as .this adaptive threshold is designed to rise when the data become noisier , avoiding spurious events .when the data surpass the threshold , the value of the threshold is frozen until the data drop again beneath it , and computation of the threshold resumes .individual events are defined over the interval during which the power is above the threshold . at the end of this process, all the individual events are saved to another file .operation of the setup described above requires several steps to achieve controlled propagation of a fracture in a sample . the sample , cut into the geometry described in sec .[ dcdcgeo ] , requires extra preparation .first , the edges in contact with the backstop and the ball joint are sanded down , which avoids unwanted microcracks at the sharp edges during compression . then , using a surgical blade , small notches are made on either side of the hole and on each face of the sample , in the direction in which the fracture will propagate .these artificial flaws will help to initiate the cracks .next , the sample is cleaned , and the acoustics sensors are glued as described in sec .[ sec : acousttrans ] .once the crystal is prepared , it is carefully positioned in the setup between the ball joint and the backstop .it must be precisely centered on the loading axis in order to avoid deviations of the crack from the nominal propagation plane .the specimen is also tilted so that the infrared camera can monitor the whole crack propagation .next , the enclosure is closed which allows to control the gases present during the fracture test and their pressures .the sample is illuminated with infrared light and observed by the camera .the data acquisition system is then started , which records all the sensors : load cell , displacement sensor , acoustic sensors , photomultiplier and camera .the sample is then ready for a controlled crack growth experiment .a typical fracture test at ambient pressure with a bgo sample is presented in figure [ bullegaryplot ] .an initial force of n is applied .the force is slowly increased at n / s up to n ( around s ) , and is then kept constant by the regulation system ( fig .[ bullegaryplot]a , pink curve ) .the opposite of the displacement , measured by the capacitive sensor ( fig .[ bullegaryplot]a , purple curve ) , gives the variation of sample compression .the compression increases by about during the force ramp .afterwards , the displacement decreases slowly except for a jump that occurs at s and that is associated to fast and full rupture of the sample into two halves .the propagation of the two opposite cracks observed by the infrared camera is 
shown in figure [ bullegaryplot]b .the center of the sample is chosen as origin , hence the crack length only starts at a distance of mm which is the edge of the central hole . at , a jump in crack growth is observed , corresponding to the reopening of the crack which had been initiated previously and allowed to close .then , while the force is kept constant , a slow growth of the crack is observed up to a critical length ( near s ) when it accelerates and cleaves the material in two .light and acoustic emissions are shown in figure [ bullegaryplot]c .acoustic emissions are correlated with the whole crack propagation . in this setup ,light emissions are prominently observed at the moment of fast fracture .these correlations confirm our previous observations .full analysis of these results , and a host of experiments , are underway .in order to study fractures in scintillating materials , we have developed a novel multi - channel setup under controlled atmosphere . using the dcdc geometry for the sample ,our setup allows the simultaneous measurement of the force applied to the sample , the compression , the acoustic and visible light emissions , all the while filming the propagation of the fracture under infrared lighting .the data acquisition system enables continuous recording of all channels , allowing analysis of rupture dynamics over extremely large time and energy ranges .for instance , nanosecond timing is achieved for the light channel , with onboard zero supression to limit the amount of data stored .various loading profiles can be applied , allowing the study of different fracture regimes .thanks to this setup , we will be able to precisely explore light emision during the subcritical regime in particular .we also plan to study effects such as that of the atmosphere on the fracture and light emission .lastly , our setup is not restricted to scintillators , but could be used for other materials such as pmma and glass .we thank gary contant and chuck hearns ( queen s university ) for contributing to the mechanical development , and jean - michel combes and franois gay ( universit de lyon ) who developed the acoustic amplifiers and participated to the electronic design .this work has been funded in canada by nserc ( grant no .sapin 386432 ) , cfi - lof and orf - sif ( project no .24536 ) , and by the france - canada research fund ( project `` listening to scintillating fractures '' ) .alexis tantot has been supported by an exploradoc grant from rhnes - alpes ( france ) .10 m. v sivers , m. clark , p. c. f. di stefano , a. erb , a. gtlein , j .- c .lanfranchi , a. mnster , p. nadeau , m. piquemal , w. potzel , s. roth , k. schreiner , r. strauss , s. wawoczny , m. willers , and a. zller .low - temperature scintillation properties of cawo4 crystals for rare - event searches . , 118(16):164505 , october 2015 .
to investigate fractoluminescence in scintillating crystals used for particle detection, we have developed a multi-channel setup built around samples of double-cleavage drilled compression (dcdc) geometry in a controllable atmosphere. the setup allows the continuous digitization over hours of various parameters, including the applied load and the compressive strain of the sample, as well as the acoustic emission. emitted visible light is recorded with nanosecond resolution, and crack propagation is monitored using infrared lighting and an infrared camera. an example of application to bismuth germanate (bgo) is provided. _keywords_ : brittle fracture, fractoluminescence, scintillators
the functional behavior of neurons in primary visual cortex can be partially modeled as a linear response to visual stimuli .this linear response can be formally described as a generalized wavelet analysis of images , intended as functions from to ( we consider greyscale images ) , with respect to a family of functions .such a family can be indexed by a parameter set that encodes some relevant geometric properties , which correspond to local features of the input image , and that is mapped on the cortex .each neuron can be identified with a given collection of such properties , i.e. by an element of , and it is said to be _ selective _ with respect to the associated features . a neuron receives a visual input only from a bounded region in the visual plane . inside this region , which is called the _ receptive field _ , a neuron can be selective with respect to several _ local features _ , such as the local orientation , the local spatial frequency ( the rate of oscillations ) , the local apparent velocity ( the component of the speed of motion that is orthogonal to the locally detected orientation ) .the neurophysiological and computational meaning of this selectivity will be discussed in sections [ sec : neuro ] and [ sec : mathmod ] .the primary visual cortex ( v1 ) of many mammals , notably including humans , shows a remarkably regular spatial organization of the features its neurons are selective to .v1 is arranged in such a way that each feature varies smoothly on the cortical layer : neurons that are close to one another are selective to similar features .receptive fields are arranged in a topographic , or _ retinotopic _ way ,i.e. neurons that are close to one another in the cortex are associated to similar regions in the visual plane .they are also highly overlapping , in the sense that two adjacent neurons respond to two almost entirely overlapping regions .orientation varies smoothly , in the sense that two adjacent neurons are selective for similar local orientations , and the same is true for the other listed features .moreover , the variability of some of the features is mainly associated to the horizontal position of the neurons , according to a so - called hypercolumnar structure whose discovery dates back to the nobel prizes hubel and wiesel .the cerebral cortex can be thought as a thick layered stack of ( folded ) planes and , roughly speaking , the hypercolumnar paradigm postulates that the function ( feature selectivity ) of neurons does not vary along the transversal direction but depends only on the position on the cortical plane ( see also ) . this structure has led to the representation of the _ functional architecture _ of primary visual cortex in terms of maps from the cortical plane to the space of a given feature , called _ cortical maps _ , which describe the displacement of the neurons in terms of their selectivity .probably the best known cortical map is that of orientation preferences , depicted in figure [ fig : ohki ] .it is a map that indicates , for each point on the cortex , what is the orientation the underlying neuron ( population ) is selective to .this means that if a light stimulus is present in the receptive field of a neuron , and that stimulus is mainly oriented along the depicted direction ( e.g. 
it contains an edge with that orientation ) , then the neuron will be maximally stimulated .conversely , if the stimulus contains an edge in the orthogonal direction , the neuron is minimally stimulated .a satisfactory understanding of the properties of the family or of the cortical representation of the set from the point of view of image processing is still an open problem , even if many partial results are available .this issue is relevant for artificial vision , because the capabilities of analyzing , compressing , and processing images possessed by the visual cortex still outperform any artificial system .it is also relevant for neuropsychology and cognitive studies , because it is still not clear what is the role of the different constituents of the brain in order to build the structured and organized visual representation generated by the visual cortex . on the other hand, this approach suffers from several limitations : just to mention a few , it does not include the various nonlinear mechanisms that contribute to the neural responses , and it does not take into account the dynamics of the network of connected neurons as well as the dynamic of activation of each single neuron .the purpose of this paper is to review some key facts about mathematical models of neurophysiological behaviors , to show some new properties of these models , to present some open problems that can be of some interest to the community working on harmonic analysis and approximation theory , and to provide some relevant terminology and references commonly used in the neuroscience literature .in this section we will introduce the main features of v1 that we will consider .we will focus on linear behaviors and on cortical maps , disregarding the several issues on nonlinear behaviors , such as normalizations or surround effects , on dynamical effects , such as spatio - temporal behaviors of receptive profiles , neural oscillations , neurodynamics , or on properties of connectivities and their geometric models . _primary visual cortex _v1 is the first and largest cortical area dedicated to the processing of visual stimuli .it is located in brodmann area 17 , involving both hemispheres of the occipital lobe of the brain , and it is also called striate cortex . it is followed by higher visual areas , from v2 to the so - called v5/mt , also called _ extrastriate cortices _ , that are located in brodmann areas 18 and 19 .v1 receives direct sensory input from the _ lateral geniculate nucleus _lgn , which is located in the thalamus and acts as a preprocessing unit of the visual stimuli collected by the retina .the retinal receptors are connected by optic nerve fibers to the lgn and finally to the v1 cells . on the one hand, this gives a strong indication that alredy in the retina there must occur a cospicuous processing of information . on the other hand, the units of infomations reaching v1 are evidently highly reprocessed .v1 transmits its information to extrastriate cortical areas by connections to v2 , which is then connected to higher visual areas . the electric inputs that v1 receives are due to three main classes of _ connectivities _ : _ feedforward _ inputs coming from the eye , through the lateral geniculate nucleus ; _ lateral _ inputs coming from other v1 neurons ; and _ feedback _ inputs coming from higher cortical areas .v1 neurons are characterized by their feature selectivity , by their physical displacement , and by their connectivities . 
just like all neurons ,the neurons that populate v1 accumulate an electric potential on their boundary membrane by means of chemical mechanisms involving ion concentrations . when this electric potential exceeds a given threshold , the neuron releases an impulsive discharge called _ action potential _ , that is transmitted by synapses as an electric current .such current flows through connections until it reaches other synapses , where it is again converted into an electric potential that accumulates in the neuron that receives the signal . a neuron that emits an action potentialis said to be _ firing _ , the action potential itself being called a spike .the activity of a neuron is its temporal sequence of spikes , and it is generally quantified by a nonnegative scalar quantity that measures its average _ firing rate _ , that is the frequency of spikes per second that it emits .a neuron has always a positive firing rate : even when there is no external stimulus acting on the network , each neuron fires from time to time ( rest state ) .the _ activity _ of a neuron is then measured as the difference between its firing rate and the firing rate at rest .the firing rate depends in a nonlinear way on the membrane potential , which is generally modeled as a sigmoid function ; in particular , each neuron has a maximum firing rate , a linear range of membrane potentials where the firing rate is linear , and a minimum firing rate equal to zero .a v1 neuron s visual receptive field is defined as the region of the visual field where the presence of light ( i.e. a visual stimulus ) causes a neural response .the direct activity response of a neuron to a stimulus does not depend only on the stimulus location on the visual field , but also on the spatio - temporal distribution of the light in the receptive field region .one of the basic needs of a neurophysiologically based formal theory of visual perception is to have a quantitative description of such dependence , that is sufficiently accurate to allow to predict the response of a neuron to any chosen visual stimulus .the simplest ( though very effective ) model is the _ linear - nonlinear - poisson _ model .even if it does not account for many nonlinear and/or collective phenomena , it is able to reproduce much of the single - neuron behavior .it consists of three stages : 1 . the visual stimulus produces a feedforward input that feeds a neuron with a voltage : this voltage is considered as depending linearly on the visual stimulus . if is a visual stimulus , then the resulting membrane potential is a linear map generally realized as an linear functional = \langle f , \psi_{\xi}\rangle_{l^2({\mathbb{r}}^2)}\ ] ] where the vector depends on the specific neuron .activity is computed by applying to ] .this is equivalent to assume that no information is encoded in the time process of spikes .the fundamental objects of the lnp model are then the filters , where represents the family of visual neurons .they are generally called _ receptive profiles _ , or also _ classical _ receptive fields or its support . ] .other effects , which take place in outer regions and/or that contribute nonlinearly to the receptive profile behavior , are referred to as _ nonclassical _ or _ _ extra - classical__ . 
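a minimal numerical sketch of the three lnp stages described above is given below: a linear projection of the stimulus onto a receptive profile, a sigmoid-like static nonlinearity turning the membrane potential into a firing rate, and poisson spike counts drawn at that rate. the gabor-shaped profile and the particular sigmoid used here are illustrative assumptions, not the specific filters discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. linear stage: membrane potential as an L2 inner product with a receptive profile
x = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(x, x)
psi = np.exp(-(X**2 + Y**2) / 0.1) * np.cos(2 * np.pi * 4 * X)   # Gabor-like profile

def membrane_potential(stimulus):
    return np.sum(stimulus * psi)            # discretised inner product <f, psi>

# 2. static nonlinearity: sigmoid with a maximum rate and a quasi-linear range
def firing_rate(v, r_max=100.0, gain=2.0):
    return r_max / (1.0 + np.exp(-gain * v))

# 3. Poisson spike generation over a counting window
def spike_count(rate_hz, window_s=0.5):
    return rng.poisson(rate_hz * window_s)

# usage: respond to a grating-like stimulus matched to the preferred orientation
stimulus = np.cos(2 * np.pi * 4 * X)
v = membrane_potential(stimulus)
print("rate:", firing_rate(v), "Hz, spikes:", spike_count(firing_rate(v)))
```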
in order to describe quantitatively the receptive profiles, one needs to set up an experimental framework that allows to measure them .the technique used is borrowed from a technique introduced by wiener , that is called _ white noise analysis _ , or _spike - triggered average _ or , more commonly , _reverse correlation_. it basically consists of showing random sequences of stimuli while performin electrophysiological recordings of the spike trains produced by a neuron , and performing averages over the stimuli that preceded a spike . in this wayone identifies the type of stimuli that generate the highest linear response of the neuron .this in general is not sufficient to describe the complete behavior , which is more properly described in terms of so - called _ wiener series _ including nonlinearities of different types .the procedure is however made consistent by imposing the lnp model to this stochastic analysis .effective experimental techniques have been developed to visualize large portions of cortical tissue activity in vivo .they are called _ optical imaging _ , and rely either on _ voltage - sensitive dyes _ or on _ _ intrinsic signals__ . the first one , developed in the 70 s and 80 s makes use of dye molecules that are capable to transform changes in the membrane potentials into optical signals .this technique offers a temporal resolution of about and a spatial resolution of about , while the produced signals do not correspond to single cell activity but are due to the averaged electric activity of a local population of neurons ( about 250 - 500 neurons ) .this technique is considered most often for real - time measurements .the second one , developed in the 90 s , is based on the intrinsic changes in the amount of light reflected by brain tissue , which is correlated with the presence of neuronal activity .this mechanism is thought to arise at least in part from local changes in the blood composition that accompany such activity , which can be detected when the cortical surface is illuminated with red light .active cortical regions absorb more light than less active ones .this technique provides a more accurate spatial resolution of or better , but it provides a very low temporal resolution .however , since the experimental set up is easier , it represents the most frequently used technique to obtain cortical maps .more recently , noninvasive techniques with fmri have been used to provide evidence of orientation maps in humans . in figure[ fig : bosking ] one can see the experimental data that produce the map of figure [ fig : ohki ] .the anesthetized animal is presented with a plane wave ( called _ grating _ ) drifting back and forth along a given direction , and the cortical activity is measured in terms of intrinsic signals .one obtains then a collection of maps indexed by a parameter , that we can denote with , where represent the local activity around the point on the cortex with respect to the orientation .observe also that provides the _ orientation tuning _ of the neurons located at , i.e. their selectivity curve to different orientations .the information about the local orientation preference given by the map depicted in figure [ fig : ohki ] is obtained by the vector sum since at each this depends only on one fourier coefficient of , of course it can not contain all the information given by the function . 
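a numerical version of this vector-sum readout, which uses only the first circular fourier coefficient of the measured activity, can be sketched as follows; the synthetic activity maps are placeholders standing in for the optical imaging data.

```python
import numpy as np

def preferred_orientation(activity, thetas):
    """activity: array of shape (n_theta, H, W); thetas: stimulus orientations in radians.

    Returns the preferred orientation in [0, pi) at each pixel, obtained as half
    the argument of the orientation-weighted vector sum of the activity maps.
    """
    phasors = np.exp(2j * thetas)[:, None, None]
    z = np.sum(activity * phasors, axis=0)
    return np.mod(0.5 * np.angle(z), np.pi)

# usage with synthetic tuning: each pixel responds most to its own hidden orientation
thetas = np.linspace(0, np.pi, 8, endpoint=False)
hidden = np.random.uniform(0, np.pi, size=(32, 32))
activity = np.cos(2 * (thetas[:, None, None] - hidden[None, :, :])) + 1.0
estimate = preferred_orientation(activity, thetas)
print(np.allclose(estimate, hidden, atol=1e-6))   # True: the readout recovers the hidden map
```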
as a consequence, different activity maps can produce the same orientation preference map .more precisely , since then for all such that the integral at the right hand side is real one obtains the same map .several experiments have thus focused on the joint displacement of orientation preferences and orientation tuning .another well studied cortical map concerns the columnar organizations associated to __ ocular dominance__ , that consists of columns where all cells respond preferentially to the same eye .they show a regularly organized structure , which is strictly related to that of orientation preference. however , for the present purposes of the analysis of purely two dimensional visual stimuli , binocularity does not seem to play a central role , so we will disregard completely issues regarding ocular dominance .this section is devoted to the introduction of some formal properties of classical receptive fields , regarding locations , sizes and shapes , that can be deduced from experimental data .v1 possesses a three dimensional layered structure .we will consider each layer as flat and continuous , and distinguish coordinates on the two dimensional leaves from the transversal coordinate. some relevant features that will be addressed are indeed approximately constant when measured transversally on v1 , while others are commonly considered in average over the layers : for this reason the two dimensional space corresponding to layers is sometimes called the _cortical plane_. each v1 cell will then be identified by a point .\ ] ] we will keep the subscript in order to distinguish each such plane from the retinal plane , which we will identify with the visual field ( this is reasonable only locally : the optical properties of the eye and the retina require spherical coordinates , and for this reason the distances on the visual field are generally measured in angular degrees ) .v1 cells can be classified in terms of several properties which concern their functional behavior , typically parameters of their receptive profiles .let a given property be described by the elements of a set .cortical map _ is then defined as a function that associates to a cell at a property .the first properties one is interested to are related to the location and size of receptive fields : if is the receptive field of the cell at , denote its center with and its size with each v1 layer is organized in a_ retinotopic _way , i.e. for any fixed ] represents the variability of scales with the transversal cortical direction . with this approximation ,the set of cortically implemented receptive profiles can then be written , for an appropriate parameter , as . 
while there are apparently no models concerning the function, which can probably be considered simply as a smooth bijection, several models have been proposed for the map. a common one is based on the observation that the power spectrum of orientation preference maps is approximately concentrated on a thin annulus, where it shows an apparently random behavior. thus, a simple way to obtain quasi-periodic orientation preference-like structures is to take a map \to \mathbb{c} whose power spectrum is concentrated on such an annulus. selectivity of v1 neurons to local frequencies is often measured in terms of _orientation tuning_ and of _spatial frequency tuning_ curves, which quantify the decay of neural response for stimulus parameters that differ from the preferred one. while this phenomenon can be partially explained in terms of classical receptive fields, it also offers a favorable experimental setting for checking for nonclassical effects, typically dynamical and nonlinear/nonlocal deformations. here we simply observe that the qualitative behavior of these tuning curves is captured by the elements of the _gram matrix_ of the system. since this is meant to be only a rough approximation of the experimental results, for simplicity we will consider ( i.e. ), and denote with , , the receptive profiles. the square modulus of the gram matrix for two elements at the same scale reads as in ( [ eq : gram ] ), and the elements ( [ eq : gram ] ) can be considered as the energy of the response of a neuron to a stimulus shaped as a receptive profile. since such a stimulus can provide the maximal linear response (diagonal elements) and is also parametrized by the features themselves, it offers a simple way to obtain tuning curves. the typical experimental setup for obtaining tuning curves consists of showing stimuli that produce an electrophysiologically measured response that does not depend on the absolute position of the neuron and is -periodic on the angular variable. a simple way to obtain such a curve for a cell with preferred spatial frequency and preferred orientation is then via the following function. it qualitatively resembles well known models and shows the role of spatial frequency in the orientation tuning width. it is compared with actual measurements in cats in figure [ fig : webster ]. the observed higher sharpness of the real tuning curves is thought to be related to nonlinear/nonlocal additional cortical mechanisms. ( figure : tuning curves for a typical cell, plotted with respect to each parameter for different values of the other, normalized to have a maximum value of 1; note the logarithmic scale on one axis. ) the elements ( [ eq : gram ] ) can also be used to obtain a _relationship between the shape index cutoff and the characteristic length_ ( [ eq : constraint ] ), by an argument introduced in concerning correlations. the function can indeed be considered as a measurement of _energy autocorrelation_ in space and orientation for a receptive profile having spatial frequency and scale , denoting with its ( isotropic ) shape index.
sincean orientation difference of provides the minimum correlation that can be attained due to the orientation selectivity mechanism only , it may be relevant to observe what is the spatial distance at which a receptive profile has the same correlation due to spatial decay only . by ( [ eq : correlations ] )one obtains the dependency of the ratio on the shape index is shown in figure [ fig : shape ] .it is possible to observe that , for a value of around , which coincides approximately with the cutoff bound ( [ eq : upperbound ] ) one obtains a distance that is approximately 4 times the scale , providing an effective size of a receptive profile corresponding to what is inside 2 standard deviations . by ( [ eq : pi ] ) , this was considered to correspond approximately to the point image , and by ( [ eq : constraint ] ) this length is apparently mapped on v1 as the characteristic length of the quasi - periodic map .this can be interpreted as a principle of _ dimensionality reduction constrained to optimally independent representation of orientations _ , in the following sense .as previously observed , v1 does not have a sufficiently high topological dimension to implement all scales / frequencies and orientations over each point , so it has adopted compromises such as the one described by the coverage strategy . on the other hand ,if we measure the independence of receptive profiles in terms of correlations , then the maximal independence with respect to orientations can be obtained equivalently by translations at a given distance .this distance can be estimated to grow with the shape index , as described by figure [ fig : shape ] , and reaches the characteristic length of the orientation preference maps at about the cutoff value for the shape index . 
at that distance , two receptive profilescan then be considered as collecting a sufficiently independent information that justifies a repetition of a new full set of orientations .say it from another point of view , orientation preference maps may be a way to map a compact variable on the highly redundant sampling space of positions .however , spatial frequency variable is not compact , so in order to be projected down one needs a cutoff .the one provided by ( [ eq : upperbound ] ) is consistent with the purpose of having maximal decorrelation at the distance corresponding to the quasi - periodicity of the map .finally , we observe that frequency cutoff could be related to the finite resolution at the retinal level , but due to the intermediate optic nerve information compression and to the lgn preprocessing this dependence may be highly nontrivial .even if much is known about the computations performed by primary visual cortex v1 , still many problems concerning the fine structure of these computations as well as their purposes in the processing of visual information are open .this paper has focused on a restricted class of behaviors concerning classical receptive fields , which provide a primal linear response of v1 neurons to visual stimuli that is later processed by nonlinear mechanisms and by dynamical lateral and feedback connectivities .many studies approach the computation implemented by v1 as being optimized for best information detection ( even if other purposes have been proposed , such as that of maximizing inference via bayesian schemes ) , and we have provided evidence that some of the design principles encountered in v1 are actually compatible with this view .the presented approach focuses on the geometry of classical receptive fields , by considering their contribution to neural code as a linear wavelet analysis of visual stimuli . due to the prominent role of selectivity for local orientation ,the substructure given by the group of translations and rotations has been often considered in relation with with the cortical two dimensional space - frequency analysis , as well as with the modeling of contour perception and of lateral connectivities , and it has provided a fruitful environment for cortical - inspired image processing on groups .more in general , the geometry associated to v1 receptive profiles has provided a well - established set of models of the cortical architecture , and is thought to play a key role in the constitution of perceptual units . most of this geometry can be obtained by experimental setups that make use of parametric sets of simple stimuli , but the more recent approach to the study of receptive fields is actually based on the exploitation of nonlinearities and on the use of natural stimuli .this provides indeed the framework for obtaining families of receptive field - like linear analyzers in terms of optimality criteria extracted from the statistics of natural images , and to reconcile some behaviors observed in v1 with recent approaches to high dimensional statistics , such as compressed sensing and sparsity . 
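as a numerical illustration of the quasi-periodic orientation preference structure discussed in this section, one can generate a map whose power spectrum is concentrated on a thin annulus by superposing plane waves with a fixed wavenumber and random directions and phases, and taking half the argument of the resulting complex field. the grid size, wavenumber and number of waves below are arbitrary choices made for the sketch.

```python
import numpy as np

def quasi_periodic_op_map(size=128, wavenumber=8.0, n_waves=30, seed=1):
    """Orientation preference-like map from random plane waves on an annulus.

    The complex field z(x) = sum_j exp(i k_j . x + i phi_j), with |k_j| fixed,
    has a power spectrum concentrated on a thin ring; half its argument gives
    a pinwheel-rich map with a characteristic quasi-period 2*pi/|k|.
    """
    rng = np.random.default_rng(seed)
    x = np.linspace(0, 2 * np.pi, size, endpoint=False)
    X, Y = np.meshgrid(x, x)
    z = np.zeros((size, size), dtype=complex)
    for _ in range(n_waves):
        alpha = rng.uniform(0, 2 * np.pi)               # direction of the wavevector
        phi = rng.uniform(0, 2 * np.pi)                 # random phase
        kx, ky = wavenumber * np.cos(alpha), wavenumber * np.sin(alpha)
        z += np.exp(1j * (kx * X + ky * Y + phi))
    return np.mod(0.5 * np.angle(z), np.pi)             # orientation preference in [0, pi)

op_map = quasi_periodic_op_map()
print(op_map.shape, op_map.min(), op_map.max())
```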
on the other hand ,still much of the linear behavior continues to provide unsolved problems form the point of view of the strategies used by v1 for collecting informations .we have mentioned that of characterizing the space of signals that can be represented by v1 classical receptive fields , but also the appearance of _ quasi - periodic structures _ in the space of coefficients of a gabor - like system seems to be suggestive in view of recent advances on similar approaches to sampling problems . moreover , the energy model for complex cells activity should be actually related to issues of _ _ phase retrieval__ .finally , the way nonlinearities intervene into the neural coding seems to be quite unusual with respect to the mainstream approaches to nonlinear approximation , as some of them seem to act mainly as _ deformations of the linear behavior _ , with frequent evidence of adaptivity mechanisms .few theoretical studies in approximation theory deal with similar kinds of nonlinear behaviors , which however could provide effective alternative strategies from the ones presently considered in signal processing .99 s. t. ali , j .-antoine , j .-gazeau , _ coherent states , wavelets and their generalizations_. springer , ed . 2014 .a. afgoustidis , _ orientation maps in v1 and non - euclidean geometry_. j. math .5:12 ( 2015 ) .a. angelucci , j. b. levitt , e. j. s. watson , j .-hup , j. bullier , j. s. lund , _circuits for local and global signal integration in primary visual cortex_. j. neurosci .22:8633 - 8646 ( 2002 ) .a. angelucci , p. c. bressloff , _ contribution of feedforward , lateral and feedback connections to the classical receptive field center and extra - classical receptive field surround of primate v1 neurons_. prog .brain res .154:93 - 120 ( 2006 ) .antoine , r. murenzi , p. vandergheynst , s. t. ali , _ two - dimensional wavelets and their relatives_. cambridge university press 2004 .d. barbieri , g. citti , g. sanguinetti , a. sarti , _ an uncertainty principle underlying the functional architecture of v1_. j. physiol .-paris 106:183 - 193 ( 2012 ) .d. barbieri , g. citti , _ reproducing kernel hilbert spaces of cr functions for the euclidean motion group_. anal .13:331 - 346 ( 2015 ) .d. barbieri , g. citti , a. sarti , _ how uncertainty bounds the shape index of simple cells_. j. math .4:5 ( 2014 ) .d. barbieri , g. citti , g. cocci , a. sarti , _ a cortical - inspired geometry for contour perception and motion integration _ j. math .imaging vision 49:511 - 529 ( 2014 ) .m. v. berry , m. r. dennis , _ phase singularities in isotropic random waves_. proc .lond . a 456:2059 - 2079 ( 2001 ) .i. bojak , d. t. j. liley , _ axonal velocity distributions in neural field equations_. plos comput .biol . 6:e1000653 ( 2010 ) .w. h. bosking , y. zhang , b. schofield , d. fitzpatrick , _ orientation selectivity and the arrangement of horizontal connection in tree shrew striate cortex_. j. neurosci .17:2112 - 2127 ( 1997 ). w. h. bosking , j. c. crowley , d. fitzpatrick , _ spatial coding of position and orientation in primary visual cortex_. nature neurosci .5:874 - 882 ( 2002 ) .t. bonhoeffer , a. grinvald , _ optical imaging based on intrinsic signals : the methodology_. in `` brain mapping ; the methods '' .a. w. toga and j. c. mazziotta ( eds . ) . academic press 1996 .p. c. bressloff , _ waves in neural media_. springer 2014. m. carandini et al . ,_ do we know what the early visual system does ?_ j. neurosci .25:10577 - 10597 ( 2005 ) .m. carandini , d. j. 
heeger , _ normalization as a canonical neural computation_. nature rev .13:51 - 62 ( 2012 ) .g. citti , a. sarti , _ a cortical based model of perceptual completion in the roto - translation space_. j. math .imaging vision 24:307 - 326 ( 2006 ) .g. citti , a. sarti , _ neuromathematics of vision_. springer 2014 .g. citti , a. sarti , _ a gauge field model of modal completion_. j. math .imaging vision 52:267 - 284 ( 2015 ) .g. cocci , d. barbieri , a. sarti , _ spatiotemporal receptive fields of cells in v1 are optimally shaped for stimulus velocity estimation_. j. opt .am . a 29:130 - 138 ( 2012 ) .g. cocci , d. barbieri , g. citti , a. sarti , _ cortical spatiotemporal dimensionality reduction for visual grouping_. neural comput .27:1252 - 1293 ( 2015 ) .s. coombes , r. thul , k. c. a. wedgwood , _ nonsmooth dynamics in spiking neuron models_. physica d 241:2042 - 2057 ( 2012 ) .m. c. crair , e. s. ruthazer , d. c. gillespie , m. p. stryker , _ ocular dominance peaks at pinwheel center singularities of the orientation map in cat visual cortex_. j. neurophysiol .77:3381 - 3385 ( 1997 ) . p.m. daniel , d. whitteridge , _ the representation of the visual field on the cerebral cortex in monkeys_. j. physiol .159:203 - 221 ( 1961 ) .p. dayan , l. f. abbott , _theoretical neuroscience_. mit press 2001 .g. c. deangelis , i. ohzawa , r. d. freeman , _ receptive - field dynamics in the central visual pathways_. trends neurosci .18:451 - 458 ( 1995 ) .r. l. de valois , k. k. de valois , _ spatial vision_. oxford university press 1990 .r. duits , h. fhr , b. janssen , m. bruurmijn , l. florack , h. van assen , _ evolution equations on gabor transforms and their applications_. appl . comput . harmon .35:483 - 526 ( 2013 ) .r. durbin , g. mitchinson , _ a dimension reduction framework for understanding cortical maps_. nature 343:644 - 647 ( 1990 ) .m. ehler , m. fornasier , j. sigl , _ quasi - linear compressed sensing_. multiscale model .12:725 - 754 ( 2014 ) .g. felsen , y. dan , _ a natural approach to studying vision_. nature neurosci .8:1643 - 1646 ( 2005 ) .d. fitzpatrick , _ seeing beyond the receptive field in primary visual cortex_. curr . opin .10:438 - 443 ( 2000 ) .a. k. fletcher , s. rangan , l. r. varshney , a. bhargava , _ neural reconstruction with approximate message passing_. adv .neur . in .24:2555 - 2563 ( 2011 ) .p. fries , j .- h . schrder , p. r. roelfsema , w. singer , a. k. engel , _ oscillatory neuronal synchronization in primary visual cortex as a correlate of stimulus selection_. j. neurosci .22:3739 - 3754 ( 2002 ) .k. friston , _ a theory of cortical responses_. phil .b 360:815 - 836 ( 2005 ) .s. ganguli , h. sompolinsky , _ compressed sensing , sparsity , and dimensionality in neuronal information processing and data analysis_. annu .35:485 - 508 ( 2012 ) .j. l. gardner , a. a. anzai , i. ohzawa , r. d. freeman , _ linear and nonlinear contributions to orientation tuning of simple cells in the cat s striate cortex_. visual neurosci .16:1115 - 1121 ( 1999 ) r. gattass , c. g. gross , j. h. sandell , _ visual topography of v2 in the macaque_. j. comp .201:519 - 539 ( 1981 ) .w. gerstner , w. m. kistler , _ spiking neuron models_. cambridge university press 2002 .k. grill - spector , r. malach , _ the human visual cortex_. annu .27:649 - 677 ( 2004 ) .b. m. harvey , s. o. dumoulin , _ the relationship between cortical magnification factor and population receptive field size in human visual cortex : constancies in cortical architecture_. j. 
neurosci .31:13604 - 13612 ( 2011 ) .j. c. horton , d. l. adams , _ the cortical column : a structure without a function_. philos .t. r. soc .b 360:837 - 862 ( 2005 ) .d. h. hubel , t. n. wiesel , _ uniformity of monkey striate cortex . a parallel relationship between field size , scatter and magnification factor_. j. comp . neurol .158:295 - 306 ( 1974 ) .d. hubel , eye brain and vision .freely available at http://hubel.med.harvard.edu/ ( 1988 ) .m. hbener , d. shoham , a. grinvald , t. bonheffer , _ spatial relationships among three columnar systems in cat area 17_. j. neurosci .17:9270 - 9284 ( 1997 ) .g. huguet , j. rinzel , j .-hup , _ noise and adaptation in multistable perception : noise drives when to switch , adaptation determines percept choice_. j. vision 14:19 ( 2014 ) .n. p. issa , c. trepel , m. p. stryker , _ spatial frequency maps in cat visual cortex_. j. neurosci .20:8504 - 8514 ( 2000 ) .s. jayasuriya , z. p. kilpatrick , _ effects of time - dependent stimuli in a competitive neural network model of perceptual rivalry_. b. math .74:1396 - 1426 ( 2012 ) . c. kalisa , b. torresani , _n - dimensional affine weyl - heisenberg wavelets_. ann .i. h. poincar a 59:201 - 236 ( 1993 ) .m. kaschube , m. schnabel , s. lwel , d. m. coppola , l. e. white , f. wolf , _ universality in the evolution of orientation columns in the visual cortex_. science 330:1113 - 1116 ( 2010 ) .[ includes online material ] w. keil , f. wolf , _coverage , continuity , and visual cortical architecture_. neural syst .circuits 1:17 ( 2011 ) .t. s. lee , _ image representation using 2-d gabor wavelets_. ieee t. pattern anal ., 18:959 - 71 ( 1996 ) .d. marr , t. poggio , e. hildreth , _ smallest channel in early human vision_. j. opt .70:868 - 870 ( 1980 ) .b. matei , y. meyer , _ quasicrystals are sets of stable sampling_. c. r. acad .paris sr .346:1235 - 1238 ( 2008 ) .r. miikkulainen , j. a. bednar , y. choe , j. sirosh , _ computational maps in the visual cortex_. springer 2005 .i. nauhaus , a. benucci , m. carandini , d. l. ringach , _ neuronal selectivity and local map structure in visual cortex_. neuron 57:673 - 679 ( 2008 ) .e. niebur , f. wrgtter , _ design principles of columnar organization in visual cortex_. neural comput .6:602 - 614 ( 1994 ) .k. ohki , s. chung , p. kara , m. hbener , t. bonhoeffer , r. clay reid , _ highly ordered arrangement of single neurons in orientation pinwheels_. nature 442:925 - 928 ( 2005 ) .b. a. olshausen , d. j. field , _ sparse coding with an overcomplete basis set : a strategy employed by v1 ? _ vision res .37:3311 - 3325 ( 1997 ) .j. ribot , y. aushana , e. bui - quoc , c. milleret , _ organization and origin of spatial frequency maps in cat visual cortex_. j. neurosci .33:13326 - 13343 ( 2013 ) .d. ringach , _ spatial structure and symmetry of simple cells receptive fields in macaque primary visual cortex_. j. neurophysiol .88:455 - 463 ( 2002 ) .d. l. ringach , m. j. hawken , r. shapley , _ dynamics of orientation tuning in macaque v1 : the role of global and tuned suppression_. j. neurophysiol .90:342 - 352 ( 2002 ) .a. romagnoni , j. ribot , d. bennequin , j. touboul , _ parsimony , exhaustivity and balanced detection in neocortex_. preprint , http://arxiv.org/abs/1409.4927 m. g. p. rosa , r. tweedale , _ maps of the visual field in the cerebral cortex of primates : functional organization and significance_. in `` the primate visual system '' , j. h. kaas , c. e. collins ( eds . ) .crc press 2005 .g. sanguinetti , g. citti , a. 
sarti , _ a model of natural image edge co - occurrence in the rototranslation group_. j. vision 10:37 ( 2010 ) .a. sarti , g. citti , j. petitot , _ the symplectic structure of the primary visual cortex_. biol .98:33 - 48 ( 2008 ) .a. sarti , g. citti , _ the constitution of visual perceptual units in the functional architecture of v1_. j. comput .38:285 - 300 ( 2015 ) .m. p. sceniak , m. j. hawken , r. shapley , _ contrast - dependent changes in spatial frequency tuning of macaque v1 neurons : effects of a changing receptive field size_. j. neurophysiol .88:1363 - 1373 ( 2002 ) .u. sharma , r. duits , _ left - invariant evolutions of wavelet transforms on the similitude group_. appl .39:110 - 137 ( 2015 ) .y. shechtman , y. c. eldar , o. cohen , h. n. chapman , j. miao , m. segev , _ phase retrieval with application to optical imaging_. preprint , http://arxiv.org/abs/1402.7350 e. p. simoncelli , l. paninski , j. pillow , o. schwartz , _ characterization of neural responses with stochastic stimuli_. in `` the new cognitive neurosciences '' , ed ., m. gazzaniga ( ed . ) .mit press 2004 .l. sirovich , r. uglesich , _ the organization of orientation and spatial frequency in primary visual cortex_. proc .usa 101:16941 - 16946 ( 2004 ) .n. v. swindale , _ orientation tuning curves : empirical description and estimation of parameters_. biol .78:45 - 56 ( 1998 ) .n. v. swindale , d. shoham , a. grinvald , t. bonhoeffer , m. hbener , _ visual cortex maps are optimized for uniform coverage_. nature neurosci .3:822 - 826 ( 2000 ) .d. teller , _ vision and the visual system_. notes , http://faculty.washington.edu/jpalmer/files/teller/ a. tonda , e. lutton , g. squillero , p .- h .wuillemin , _ a memetic approach to bayesian network structure learning_. lect .notes comput . sc .7835:102 - 111 ( 2013 ) .s. d. van hooser , j. a. f. heimel , s. chung , s. b. nelson , l. j. toth , _ orientation selectivity without orientation maps in visual cortex of a highly visual mammal_. j. neurosci .25:19 - 28 ( 2005 ) .w. e. vinje , j. l. gallant , _ sparse coding and decorrelation in primary visual cortex during natural vision_. science 287:1273 - 1276 ( 2000 ). m. a. webster , r. l. de valois , _ relationship between spatial - frequency and orientation tuning of striate - cortex cells_. j. opt .am . a 2:1124 - 1132 ( 1985 ) .e. yacoub , n. harel , k. ugurbil , _ high - field fmri unveils orientation columns in humans_. proc .usa 105:10607 - 10612 ( 2008 ) .s. w. zucker , _ which computation runs in visual cortical columns ? _ in `` 23 problems in systems neuroscience '' , j. l. van hemmen , t. j. sejnowski ( eds . ) .oxford university press 2006 .
some geometric properties of the wavelet analysis performed by visual neurons are discussed and compared with experimental data . in particular , several relationships between the cortical morphologies and the parametric dependencies of extracted features are formalized and considered from a harmonic analysis point of view .
* 11 . * ... in return for the foolish thoughts of their unrighteousness, by which they served irrational reptiles and contemptible monsters, you sent upon them in punishment a multitude of irrational animals, so that they might learn that one is punished by the very things through which one sins ... even without these they could have perished by a single breath, pursued by justice and scattered by the spirit of your power; but you arranged all things by measure, number and weight. once upon a time the great russian theoretical physicist and mathematician bogoljubov said that the last line of the fragment written above from the book of wisdom, which is a part of the bible (non-canonical though!), "... but you arranged all things by measure, number and weight", represents the definition of physics. probably this "... but you arranged all things by measure, number and weight" was one of the earliest expressions of the principle that everything in the world has to be in harmony with everything else. in fact, the word harmony (syn., in music: accord, concord, consonance) is of greek origin and means orderliness (symmetry) of the whole and commensurability (proportionality) of its parts. the idea of harmony has been intensively elaborated by pythagoras, the greek philosopher and mathematician and founder of the pythagorean school. originally from samos, pythagoras founded a society which was at once a religious community and a scientific school, and which flourished at kroton in southern italy about the year 530 b.c. pythagoras was the first genius of western culture. he had a multifaceted, magnetic personality: an intelligent mathematician and a religious thinker co-existed in him. his main contributions are in geometry, numbers, music, cosmology, astronomy, philosophy and religion. pythagoras must have been one of the world's greatest men, but he wrote nothing, though numerous works are attributed to him, and it is hard to say how much of the doctrine we know as pythagorean is due to the founder of the society and how much is later development. it is also hard to say how much of what we are told about the life of pythagoras is trustworthy, for a mass of legend has gathered around his name: sometimes he is represented as a man of science, and sometimes as a preacher of mystic doctrines, and we might be tempted to regard one or other of those characters as alone historical. certainly, it is true that there is no need to reject either of the traditional views. even though many wonderful things related to pythagoras belong to legend and seem to have no historical foundation, and similarly the description of the learned works he is said to have written is not attested by reliable historians and also belongs to the region of fable, there is nevertheless no doubt that he founded a school, or rather a religious-philosophical society, which exerted great influence on the intellectual development of human civilization and has had a fundamental importance ever since. of great influence were the pythagorean doctrines that numbers were the basis of all things and possessed a mystic significance, in particular the idea that the cosmos is a mathematically ordered whole. aristotle wrote: ``the pythagoreans, having been brought up in the study of mathematics, thought that things could be represented by numbers ... and that the whole cosmos consists of a scale and a number''.
briefly stated, the doctrine of pythagoras was that all things are numbers. pythagoras was led to this conception by his discovery that the notes sounded by a stringed instrument are related to the length of the strings. he conducted remarkable investigations in music, as he was a musician. harmonies correspond to the most beautiful mathematical ratios, he stated. melodious musical tunes could be produced on a stringed instrument by plucking the string at particular points, which correspond to mathematical ratios. such beautiful mathematical ratios are $2/1$ (an octave), $3/2$ (a fifth), and $4/3$ (a fourth). pythagoras recognized that the first four numbers, known as the "tetractys", whose sum equals ten ($1+2+3+4=10$), contained all basic musical intervals: the octave, the fifth and the fourth. in fact, all the major consonances, that is, the octave, the fifth and the fourth, are produced by vibrating strings whose lengths stand to one another in the ratios of $2:1$, $3:2$ and $4:3$ respectively. the modern major scale according to the pythagorean tune looks like $1,\ 9/8,\ 81/64,\ 4/3,\ 3/2,\ 27/16,\ 243/128,\ 2$, where $9/8$ is the major second (fifth of fifth with octave lowering); $27/16$ is the major sixth (fifth of the major second); $81/64$ is the major third (fifth of the sixth with octave lowering); $243/128$ is the major seventh (fifth of the third). the resemblance which pythagoras perceived between the orderliness of music, as expressed in the ratios which he had discovered, and the idea that the cosmos is an orderly whole, made up of parts harmoniously related to one another, led him to conceive of the cosmos too as mathematically ordered. pythagoras compared the eight planets (there were seven planets known to the babylonians: moon, mercury, venus, sun, mars, jupiter and saturn), including the earth, with the musical octave, and the seven planets, excluding the earth, with the seven strings of the musical instrument, the lyre. the planets situated at different distances and moving at different speeds correspond to different notes of the musical octave. the planets moving with higher speed produce higher notes and those with lower speed produce lower notes. the celestial harmony of moving planets produces heavenly music ("the music of the spheres") analogous to the different notes of the musical octave. according to pythagoras, the sphere was the most beautiful solid and the circle the most beautiful shape. thus, a spherical planet moving in a circular orbit would form a harmonious constellation. pythagoras worked out the distances of the planets from the earth. he arranged the planets in order of increasing distance from the earth. the order given by him was the moon, mercury, venus, the sun, mars, jupiter and saturn. some pythagoreans believed that the earth moved round a central fire. the earth did not always face the central fire. this accounted for day and night on the earth. they also believed that the moon as well as the sun shone because they reflected light from their surfaces received from the central fire. perhaps the idea of the central fire later on led to the heliocentric (sun at the centre of the solar system) configuration of the solar system. pythagoras' observation of the heavens suggested to him that the motion of the heavenly bodies was cyclic and that the heavenly bodies returned to the place from which they had started.
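since the scale above is generated entirely by stacking perfect fifths and folding the result back into one octave, the ratios are easy to verify; the short python sketch below (the helper names are ours) follows exactly the rules quoted in the text.

```python
from fractions import Fraction

FIFTH = Fraction(3, 2)

def fold(r):
    """fold a ratio back into the octave [1, 2)."""
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

second  = fold(FIFTH * FIFTH)        # fifth of fifth, octave lowered -> 9/8
sixth   = fold(second * FIFTH)       # fifth of the major second      -> 27/16
third   = fold(sixth * FIFTH)        # fifth of the sixth, lowered    -> 81/64
seventh = fold(third * FIFTH)        # fifth of the third             -> 243/128
fourth  = fold(Fraction(2) / FIFTH)  # octave minus a fifth           -> 4/3

scale = sorted([Fraction(1), second, third, fourth, FIFTH, sixth, seventh, Fraction(2)])
print([str(r) for r in scale])
# ['1', '9/8', '81/64', '4/3', '3/2', '27/16', '243/128', '2']
```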
from this, pythagoras concluded that there must be a cycle of cycles, a greater year, and on its completion the heavenly bodies returned to the original position and the same heavenly constellation would be observed again and again. he called this the eternal recurrence. pythagoras held the doctrine that mathematics contains the key to all philosophical knowledge, an idea which was afterwards developed by his followers into an elegant number theory. the pythagorean philosophy in its later elaboration is dominated by the number theory. being the first, apparently, to observe that natural phenomena, especially the phenomena of the astronomical world, may be expressed in mathematical formulas, the pythagoreans held that numbers are not only the symbols of reality, but the very substance of real things. pythagoras associated numbers with geometrical notions and numerical ratios with shapes. he associated the number one with a point, two with a line, three with a triangle (the surface) and four with a tetrahedron (the solid). thus, one point generates no dimension, two points generate a line of one dimension, three points generate a surface of two dimensions, and four points generate three-dimensional solid figures. in geometry, numbers represent lengths, their squares represent areas, their cubes represent volumes. starting from numbers, numerical ratios and their powers, one can construct geometrical figures of different shapes and geometrical solids of different sizes. using distances, the arrangement of the planets, their motion, their orbital paths, their distances from the center and their interrelations with each other can be worked out. thus, according to pythagoras, all relations could be reduced to number relations and hence the whole cosmos is a scale and a number based phenomenon. according to pythagoras, ten is the perfect number, because it is the sum of one, two, three, and four: the point, the line, the surface, and the solid. there is also a second type of perfect numbers: according to pythagoras, these are the numbers equal to the sum of their factors. for instance 28 has factors 1, 2, 4, 7, 14 and $1+2+4+7+14=28$. from perfect numbers, pythagoras was led to amicable numbers like 220 and 284. amicable numbers form a pair of numbers where each number is equal to the sum of the factors of the other number. for instance 220 has factors 1, 2, 4, 5, 10, 11, 20, 22, 44, 55, 110. the sum of these factors is 284. moreover, 284 has factors 1, 2, 4, 71, 142. the sum of these factors is 220. triangular numbers were introduced by pythagoras: he called the numbers 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 66 triangular numbers because these numbers can be arranged so as to form triangles. if $a$ and $b$ are the sides of a right-angled triangle and $c$ is the hypotenuse then, according to pythagoras' theorem, $a^2 + b^2 = c^2$. a triad of positive integers $(a, b, c)$ satisfying the relation $a^2 + b^2 = c^2$ is called a pythagorean triad of numbers. about fifteen such triads were previously known, like (3,4,5), (5,12,13), (7,24,25), (9,12,15), (15,36,39). the pythagorean triads in which the numbers do not have a common factor are called primitive pythagorean triads. for example (3,4,5), (5,12,13), (7,24,25) etc. are primitive pythagorean triads.
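the sums quoted above for perfect and amicable numbers can be checked directly; the helpers below are our own illustration.

```python
def proper_divisors(n):
    """divisors of n smaller than n itself."""
    return [d for d in range(1, n) if n % d == 0]

def is_perfect(n):
    return sum(proper_divisors(n)) == n

def is_amicable(a, b):
    return sum(proper_divisors(a)) == b and sum(proper_divisors(b)) == a

print(proper_divisors(28), is_perfect(28))                    # [1, 2, 4, 7, 14] True
print(sum(proper_divisors(220)), sum(proper_divisors(284)))   # 284 220
print(is_amicable(220, 284))                                  # True
```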
but (9,12,15), (15,36,39) are not primitive triads. it is believed that pythagoras himself discovered the formula for determining triads of numbers satisfying the relation $a^2 + b^2 = c^2$. in fact, all pythagorean triads can be expressed via the formulae $a = m^2 - n^2$, $b = 2mn$, $c = m^2 + n^2$ [triad], where $m$ and $n$ ($m > n$) are any positive integers. from his observations in music, mathematics and astronomy, pythagoras generalized that everything could be expressed in terms of numbers and numerical ratios. numbers are not only symbols of reality, but also substances of real things. hence, he claimed: all is number. the importance of this conception is very great; for example, it is the ultimate source of galileo's belief "il libro della natura è scritto in lingua matematica", that the book of nature is written in mathematical symbols, and hence the ultimate source of modern physics in the form in which it came to us from galileo. it may be taken as certain that the union of mathematical genius and mysticism is common enough. pythagoras himself discovered the numerical ratios which determine the concordant intervals of the musical scale. similar to musical intervals, in medicine there are opposites, such as the hot and the cold, the wet and the dry, and it is the business of the physician to produce a proper "blend" of these in the human body. the pythagoreans contended that the opposites are found everywhere in nature, and the union of them constitutes the harmony of the real world. they also argued for the notion that virtue is a harmony, and may be cultivated not only by contemplation and meditation but also by the practice of gymnastics and music. pythagoras held the theory that what gives form to the unlimited is the limit. that is the great contribution of pythagoras to philosophy, and we must try to understand it. it was natural for pythagoras to look for something of the same kind in the world at large. musical tuning and health are alike in arising from the application of limit to the unlimited. in their psychology and their ethics the pythagoreans used the idea of harmony and the notion of number as the explanation of the mind and its states, and also of virtue and its various kinds. pythagoras argued that there are three kinds of men, just as there are three classes of strangers who come to the olympic games. the lowest consists of those who come to buy and sell, and next above them are those who come to compete. best of all are those who simply come to look on. men may be classified accordingly as lovers of wisdom, lovers of honour, and lovers of gain. that seems to imply the doctrine of the tripartite soul, which is also attributed to the early pythagoreans on good authority. the pythagoreans were religiously and ethically inclined, and strove to bring philosophy into relation with life as well as with knowledge. the pythagoreans believed also in reincarnation or transmigration (doctrine of rebirth), that is, the soul, after death, passes into another living thing, which presupposes the ability of the soul to survive the death of the body, and hence some sort of belief in its immortality. the above detailed introduction is given so as to show in the next sections that the great ancient pythagorean ideas have re-emerged in the latest research in high energy elementary particle and nuclear physics.
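the formula [triad] lends itself to direct enumeration; the sketch below (our own illustration) generates triads for small $m > n$ and flags the primitive ones, i.e. those whose entries share no common factor.

```python
from math import gcd

def pythagorean_triads(max_m):
    """triads (a, b, c) with a = m^2 - n^2, b = 2 m n, c = m^2 + n^2, m > n >= 1."""
    for m in range(2, max_m + 1):
        for n in range(1, m):
            a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
            primitive = gcd(gcd(a, b), c) == 1
            yield (a, b, c), primitive

for (a, b, c), primitive in pythagorean_triads(4):
    assert a * a + b * b == c * c
    print((a, b, c), "primitive" if primitive else "not primitive")
# (3, 4, 5) primitive, (8, 6, 10) not primitive, (5, 12, 13) primitive,
# (15, 8, 17) primitive, (12, 16, 20) not primitive, (7, 24, 25) primitive
```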
in this respectwe will concern and discuss the mathematical , physical and geometrical aspects of the famous froissart theorem and in this way we will easily establish a link of this theorem to the mathematics and ideas elaborated in the pythagorean school . in other words , we would like to show a harmony of the froissart theorem just in the pythagoreans sense .in the year 1961 french physicist marcel froissart discovered and proved a remarkable theorem , which stated that two - body reaction amplitude , satisfying mandelstam representation , is bounded by expressions of the form at the forward and backward angles , and at any fixed angle in the physical region , being a constant , being the total squared c.m .energy ( one of the mandelstam invariant variables ) .this corresponds to the total cross sections increasing at most like .a little bit later it was shown that the analytical properties of two - particle scattering amplitude , which may be established strictly in the framework of axiomatic quantum field theory , bring us to the froissart statements as well . up - to - date derivation of the froissart theorem can be realized in a few steps , and we briefly sketch out it here . for simplicity we consider a reaction of elastic scattering for two scalar particles .the scattering amplitude of the two - body reaction may be considered as a function of the invariant variable and two unit vectors and on two - dimensional sphere , which characterise the initial and final states of two - particle system : , is c.m .momentum of particles in an initial state is lorentz boost , and the same with the primes in a final state . in the first step we wright the partial wave expansion =_ lmy _ lm ( * n*)f_l(s)_lm(*n * ) = _ l(2l+1)f_l(s)p_l(*n* ) , [ 1 ] where , is two - particle phase space volume , is a surface of two - dimensional unit sphere , , and an addition theorem for the spherical harmonics in second line of eq .( [ 1 ] ) has been used .the second invariant mandelstam variable ( momentum transfer ) is related to by the following equation = 1 + .[2 ] a remarkable analytic properties of scattering amplitudes as functions of momentum transfer have been discovered in the year 1958 by harry lehmann using jost - lehman - dyson representation especially dyson s theorem for a representation of causal commutators in local quantum field theory .lehmann proved that imaginary part of two - body interaction amplitude is analytic function of , regular inside an ellipse in -plane with center at the origin and with semi - major axis z_0(s ) = 1 + _l(s ) , _ l(s ) = , [ 3 ] where and define the support of spectral function in the jld representation by the requirements of spectral condition or spectrality .actually , and are the lowest mass values of the physical states for which the following matrix elements are not equal to zero where and are local heisenberg s currents of particles and .he also shown that two - body interaction amplitude , as itself , is analytic function of , regular inside an ellipse in -plane with center at the origin and with semi - major axis which is related to by the equation x_0(s ) = .[4 ] afterwards the fundamental results of harry lehmann were improved by martin and sommer : it was shown that imaginary part of two - body interaction amplitude is analytic function of , regular inside an ellipse in -plane with semi - major axis z_0(s ) = 1 + _m(s ) , _ m(s ) = , t_0 = 4m_^2,[5 ] ^2 = = , [ lambda ] where is pion mass .correspondingly two - body interaction amplitude , as itself , 
appears as analytic function of , regular inside an ellipse in -plane with semi - major axis which is related to by eq .( [ 4 ] ) .the fundamental results derived by lehmann and improved by his followers are of great importance because it has been shown that the partial wave expansions ( [ 1 ] ) which define physical scattering amplitudes continue to converge for complex values of the scattering angle , and define uniquely the amplitudes appearing in the unphysical region of non - forward dispersion relations .in fact , expansions converge for all values of momentum transfer for which dispersion relations have been proved .the proved analyticity of two - body interaction amplitudes as functions of two complex mandelstam variables and in a topological product of cut -plane with the cuts ( ) except for possible fixed poles and circle in -plane allowed in a more general case to save the fundamental froissart results previously obtained at a more restricted mandelstam analyticity . really , let us wright cauchy representation for imaginary part of two - body interaction amplitude where contour is a boundary of an ellipse in -plane with semi - major axis given by eq .( [ 5 ] ) . using heine formula we obtain i m f_l(s ) = _ c dz i m f_2(s;z)q_l(z).[6 ] from eq .( [ 6 ] ) it follows i m f_l(s)_2(s)_zc|im f_2(s;z)|_zc|q_l(z)|(c),[7 ] where is a length of contour .representation ( [ 6 ] ) where estimate ( [ 7 ] ) followed from is a good tool to study an asymptotic behaviour of partial waves at large orbital momentum . using asymptotic properties of the legendre functions is some polynomial in , we find i m f_l(s)_2(s)p_2(s)()^[z_0(s)+]^-l , |l| .[ 8 ] if we put , then estimate ( [ 8 ] ) at large values of may be rewritten in the form i m f_l(s)(-l ) , s,[9 ] where _2(s ) = ( ) ^_2(s ) p_2(s).[polinom ] thus we have obtained a very important result : analyticity of two - body interaction amplitudes as functions of , regular inside an ellipse in -plane , results in exponential decrease of partial waves as functions of orbital momentum at large values of .this means that the significant contribution to the partial wave expansion ( [ 1 ] ) is determined by partial waves for which the orbital momentum does not exceed the quantity l = .[10 ] the contribution of partial waves with to the partial wave expansion will be exponentially small .let us decompose the partial wave expansion in two terms i m f_2(s;=1 ) = _ l=0^l-1(2l+1)imf_l(s ) + i m f_2^l(s),[11 ] where the second term in eq .( [ 11 ] ) contains the contribution of partial waves with .now we would like to take advantage of unitarity condition which can be written for the partial waves as the following sequence of inequalities 0|f_l(s)|^2im f_l(s)|f_l(s)|1.[12 ] taking into account the unitarity condition we get for the first term in eq .( [ 11 ] ) an estimate in the form _ l=0^l-1(2l+1)im f_l(s)_l=0^l-1(2l+1 ) = = , [ 13 ] where expression ( [ 10 ] ) for the quantity has been used .froissart has shown that the second term in eq . (11 ) is asymptotically small compared to the first one at large values of , so that we finally get i m f_2(s;=1 ) < .[14 ] the optical theorem relates a total cross section of two - body interaction with imaginary part of two - body forward elastic scattering amplitude -function is defined by eq .( [ lambda ] ) . 
hence from estimate ([ 14 ] ) it follows an upper bound for the total cross section of two - body interaction _ab^tot(s ) < .[15 ] where , as it was mentioned above , here is just the place to introduce the physical notion of the effective radius of two - body forces .let us define the effective radius of two - body forces by the following equation r_2(s ) = , [ 16 ] where the definition ( [ 10 ] ) of the quantity and expression ( [ lambda ] ) for have been used . now upper bound ( [ 15 ] ) in terms of such defined quantity takes the form _^ 2(s).[17 ] this form of the upper bound for experimentally measured quantity has a quite transparent physical and clear geometrical meanings : it means that the total cross section of two - body interaction is bounded by the area of a surface of two - dimensional sphere whose radius is defined by the effective radius of two - body forces .a remarkable property of upper bound ( [ 17 ] ) consist in the fact that here all information about analytic properties of two - body interaction amplitudes is hidden in the physically tangible quantity ( [ 16 ] ) which is the effective radius of two - body forces . if we put equal to given by eq .( [ 5 ] ) then from eqs .( [ 13 ] ) and ( [ polinom ] ) it follows that in that case for the the effective radius of two - body forces we find from eq .( [ 16 ] ) r_2(s ) = ~ ( s / s_0 ) = ( s / s_0),s.[18 ] in the article froissart gave an excellent semiclassical explanation corroborating his theorem .we would like to present here a remarkable fragment from section ii of the froissart paper .he wrote : _ to get intuitive idea why the amplitude is bounded in the physical region , let us consider a classical problem : two particles interact by means of absorptive yukawa potential .if is the impact parameter , the total interaction seen by a particle for large is likely to be approximately .if this is small compared to one , there will be practically no scattering .if is large compared to one , there will be practically complete scattering , so that the cross section will be essentially determined by the value where .it is .if we now assume that is a function of the energy , and increases like a power of the energy , then will vary at most like the squared logarithm of the energy_. " in fact , froissart anticipated here a running coupling and quasi - potential character of strong forces . 
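before turning to the dispersion-theoretic justification of this picture, it may help to restore the explicit factors that were lost from eqs. ([16])-([18]); this is our reading of the elided expressions, with the scale $s_0$ left unspecified as in the original. with $r_2(s)\simeq(1/\sqrt{t_0})\ln(s/s_0)$ and $t_0=4m_\pi^2$, the bound ([17]) takes the familiar froissart-martin form

$$\sigma^{\rm tot}_{ab}(s)\;\le\;4\pi\,r_2^2(s)\;\simeq\;4\pi\left[\frac{1}{2m_\pi}\ln\frac{s}{s_0}\right]^{2}\;=\;\frac{\pi}{m_\pi^{2}}\,\ln^{2}\frac{s}{s_0},\qquad s\to\infty .$$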
later on itwas shown that the hypothesis about validity of the dispersion relations in the momentum transfer leads , for any value of the energy , to a potential which is a superposition of yukawa potentials with energy dependent intensities .this fact together with a theorem on single - time reduction in quantum field theory provides a strong basis for semiclassical consideration given by froissart .however , it should be stressed that upper bound ( [ 17 ] ) has a quite different geometrical sense compared to semiclassical consideration given by froissart : eq .( [ 17 ] ) shows that the total cross section of two - body interaction is bounded by the area of a surface of the sphere with the radius equal to the effective radius of two - body forces but not by the area of a disk with the same radius .unitarity bound ( [ 17 ] ) states that the total probability ( per unit volume per unit time in fraction of particles density flux ) of all possible ( elastic and inelastic ) two - particle interactions , which take place in a limited volume during a limited interval of time , is limited by the area of a surface of the sphere which is , actually , a boundary of the volume .this means that widely discussed in the recent literature concerning some physical problems at planck scale _ the holographic principle _ has been incorporated in the general scheme of axiomatic quantum field theory and resulted from the general principles of local quantum field theory .in our works it was shown that there is a quite natural geometrical generalization of the froissart theorem to the case of multiparticle interaction . in this respectit should be noted that the problem of finding such generalization is non - trivial because at least the known singularities of multiparticle scattering amplitudes related to disconnected parts by cluster structure of the amplitudes point to the fact that for the total amplitude of -particle scattering ( ) there is no such generalization .connected part of -particle ( ) scattering amplitudes contains singular rescattering terms as well .therefore , the first problem which arises in this case is to define a suitable object connected with the reaction amplitude which would permit a correct formulation of the problem .it turns out there is a wide class of many - particle reaction amplitudes for which such a problem would be quite meaningful .we have shown that these amplitudes should be understood as amplitudes of true -particle interaction or -body forces amplitudes ; see details in . herewe reproduce our results taking a line stated in previous section .the scattering amplitude of the -body reaction may be considered as a function of the invariant variable and two unit vectors and on -dimensional sphere , which characterise the initial and final states of -particle system : dimensionality of multidimensional space is related to the number of particles by the equation .there are many ways to introduce the spherical coordinates in multidimensional space. moreover , there are some peculiarities related to a parametrization of relativistic -particle system .however , we will not concern this subject here because it does not play any role for our main goal .for the details we refer to and references therein . 
as above we may wright the partial wave expansion = _ lmy _ lm ( * e*)f_l(s)_lm(*e * ) = _ l(+1)f_l(s)c_l^ ( ) , [ 19 ] where , is -particle phase space volume , is a surface of -dimensional unit sphere , , and we have used in second line of eq .( [ 19 ] ) an addition theorem for the ( hyper)spherical harmonics in multidimensional space where is gegenbauer polynomial .here we contented ourself with a special class of -body forces scattering amplitudes which are invariant under rotation in multidimensional space ( so called -invariant amplitudes ) .we will assume that for physical values of the variable imaginary part of -body forces scattering amplitude is analytic function of , regular inside an ellipse in -plane with center at the origin and with semi - major axis z_n(s ) = 1 + _n(s ) , _ n(s ) = , [ 20 ] and for any is polynomially bounded in the variable , is some constant of mass dimensionality independent of , * q * is global momentum ( dependent of ) of -particle system which will be defined later on .such analyticity of -body forces scattering amplitudes was called _ global _if it is so , one can wright cauchy representation for imaginary part of -body interaction amplitude where contour is a boundary of an ellipse in -plane with semi - major axis given by eq .( [ 20 ] ) .there is a standard generalization of heine s expansion of the cauchy denominator = ( -i)2 ^ 2[()]^2(z^2 - 1)^-1/2 _ l=0^(l+)d_l^ ( z)c_l^(t),[21 ] which converges absolutely for is a second solution to gegenbauer s equation .the restriction ( [ converge ] ) requires that the point lie within that ellipse in the complex -plane with foci at which passes through the point .in particular from eq .( [ 21 ] ) it follows as a result we obtain i m f_l(s ) = _ c_n dz ( z^2 - 1)^-1/2d_l^(z)im f_n(s;z).[22 ] representation ( [ 22 ] ) is very useful to study an asymptotic behaviour of partial waves at large global orbital momentum .taking into account asymptotic properties of the gegenbauer functions is some polynomial in , we find ( ) ^1- ( z_n(s)+)^-l , |l| . [ 23 ] finally if we put , then we get at large values of i m f_l(s)(-l ) , s,[24 ] where p_n(s , ) = 2^+1()[2_n(s)]^(-1)/2_n(s ) p_n(s).[25 ] estimate ( [ 25 ] ) shows that partial waves as functions of global orbital momentum exponentially decrease at large values of , i.e. the significant contribution to the partial wave expansion ( [ 19 ] ) is resulted from partial waves for which the global orbital momentum does not exceed the quantity = .[26 ] the contribution of partial waves with to the partial wave expansion will be exponentially small .so , we decompose the partial wave expansion in two terms i m f_n(s;=1 ) = _ l=0^(+1)imf_l(s)c_l^(1 ) + i m f_n^(s),[27 ] where the second term in eq .( [ 27 ] ) contains the contribution of partial waves with . taking into account the unitarity condition ( [ 12 ] ) for the partial waves we get for the first term in eq .( [ 27 ] ) an estimate = ( 1+o()),[28 ] where we inserted $ ] .it can easily be seen that the second term in eq .( 27 ) is asymptotically small compared to the first one at large values of , so that we finally get i m f_n(s;=1 ) < .[29 ] where we have used expression ( [ 26 ] ) for and relation .by analogy with eq .( [ 16 ] ) let us introduce the effective radius of -body forces r_n(s)=p_n(s,),[30 ] where the definition ( [ 26 ] ) of the quantity and expression ( [ 20 ] ) for have been used . 
now upper bound ( [ 29 ] ) in terms of such defined quantity takes the form i m f_n(s;=1 )< = j_n(s)s_d-1[r_n(s)]^d-1,[31 ] where j_n(s ) = = .[32 ] with account of the generalized optical theorem relating a total cross section of -body interaction with imaginary part of -body forces forward scattering amplitude from estimate ( [ 31 ] ) we obtain an upper bound for the total cross section of -body interaction _n^tot(s ) < s_d-1[r_n(s)]^d-1.[33 ] here again , as it should be , upper bound ( [ 33 ] ) has a quite clear geometrical meaning : the total cross section of -body interaction is bounded by the area of a surface of -dimensional sphere whose radius is defined by the effective radius of -body forces .again all information about global analyticity of -body interaction amplitudes is hidden in the physical quantity ( [ 30 ] ) which is the effective radius of -body forces . from eqs .( [ 25 ] ) and ( [ 29 ] ) it follows that for the the effective radius of -body forces we find from eq .( [ 30 ] ) in that case r_n(s ) ~ ( s / s_0),r_n=,s.[34 ] upper bounds ( [ 31],[33 ] ) are a direct consequence of global analyticity of -body forces scattering amplitudes which , in one s turn , is a direct geometrical generalization of analytic properties of two - body scattering amplitude strictly proved in axiomatic quantum field theory . at presentwe do not know to what extend global analyticity of -particle scattering amplitudes is a consequence of general principles of local quantum field theory .the validity of such an assumption is obvious to us if we rely on the physical nature of -body forces : our intuition tells us that true -body interactions should manifest themselves only in the case when all the particles are in a sufficiently limited volume . on the other hand , from the beginning onemay , by definition , consider the -body forces scattering amplitude to be a globally analytic part of the total -matrix which may always be singled out from it . at last, we have to give the definition of global momentum for the relativistic -particle system . in this respect , first of all , note that momentum for two - particle system has been defined in a relativistic covariant way . under any lorentz transformation from the restricted lorentz group momentum is transforming by wigner rotation : is lorentz boost .this means that defined by eq .( [ lambda ] ) is a lorentz invariant quantity .moreover , we would like to emphasize the following asymptotic properties ^2s , s;^2 2_2(-m_2 ) , m_2 , m_2=m_a+m_b , _ 2=.[35 ] the expression of given by eq .[ lambda ] can be rewritten in the form ^2 = 16s(_2(s)/s_2)^2 = 16sa_2 ^ 2(s).[36 ] the definition of global momentum for the relativistic -particle system should be given such as to save the asymptotic properties shown by eqs .( [ 35 ] ) .such generalization for any number of particles looks like ^2=_n s^(n-1)/(3n-5)a_n^2/(3n-5 ) , [ 37 ] where is dimensionless constant _n=2 ^ 2n/(3n-5)()^(2n-4)/(3n-5 ) , _ n=()^1/(n-1 ) , m_n=_i=1^n m_i .[ 38 ] from the definition ( [ 37 ] ) we have the following asymptotic properties : ^2a_n^2 s , s,[39 ] where is dimensionless constant a_n^2 = ( ) ^2/(3 n-5)()^(2n-4)/(3n-5),[40 ] for example and ^22_n(-m_n),m_n.[41 ]let us come back to eq .( [ 13 ] ) and remind an ancient pythagoras theorem stated that the sum of first odd numbers beginning from unity is equal exactly to the square of i.e. 
_ n = n^2.[pyth ] this pythagoras theorem can easily be proved with the help of the formula for an arithmetical progression .however , pythagoras theorem can be proved without a knowledge of the formula for an arithmetical progression but using only some remarkable observations in a game with the numbers .we will not touch here the simplest proof , we would only like to stress a deep link between the froissart bound and this pythagoras theorem . of course , to take advantage of this link we have to learn apart from differential calculus and integral calculus that : * symmetry properties of the space - time continuum are described by inhomogeneous lorentz group or poincar group .we had also to know how to construct the unitary representations of this group as well , as it was made in the fundamental paper of wigner .* there is a very deep connexion between general physical principles such as causality , spectrality , unitarity and analytic properties of physical scattering amplitudes .the very essence of this connexion is expressed by brilliant jost - lehmann - dyson representation which provided the fundamental results of lehmann . *it takes many other attainments and the knowledge acquisitions as well .there is a generalization of pythagoras theorem ( [ pyth ] ) .really , let us consider any polynomial degree of let be a sum of the polynomial values when the argument takes an integer value then it can be proved that is also a polynomial in degree of s(n)=q_n+1(n),q_n+1(x)=c_0(q)+c_1(q)x+c_2(q)x^2++c_n+1(q)x^n+1,[pythag ] and there is correspondence between and : for example , if is polynomial of fourth degree then we have c_5(q)=c_4(p)/5 , + c_4(q)=c_3(p)/4+c_4(p)/2 , + c_3(q)=c_2(p)/3+c_3(p)/2+c_4(p)/3 , + c_2(q)=c_1(p)/2+c_2(p)/2+c_3(p)/4 , + c_1(q)=c_0(p)+c_1(p)/2+c_2(p)/6-c_4(p)/30. + c_0(q)=c_0(p).[p4 ] we will call that statement as a generalized pythagoras theorem . it can easily be seen that usual pythagoras theorem ( [ pyth ] ) corresponds to . from eq .( [ 28 ] ) it s clear that the generalized froissart theorem is related to the generalized pythagoras theorem where is being used . in according with the theory held by pythagoras the unitarity bounds ( [ 17 ] ) and ( [ 33 ] )give form to the unlimited and therefore they are limit ; see introduction .recently a simple theoretical formula describing the global structure of and total cross - sections in the whole range of energies available up today has been derived by an application of single - time formalism in qft and general theorems a l froissart .the fit to the experimental data with the formula was made , and it was shown that there is a very good correspondence of the theoretical formula to the existing experimental data obtained at the accelerators .moreover , it turned out there is a very good correspondence of the theory to all existing cosmic ray experimental data as well .the predicted values for obtained from theoretical description of all existing accelerators data are completely compatible with the values obtained from cosmic ray experiments . the global structure of ( anti)proton - proton total cross section is shown in figs . 1 - 2 extracted from papers . 
( 188,200 ) ( -35,0 ) ( 92,-15) ( -55,65 ) ( 288,200 ) ( 15,10 ) ( 144,0) ( -5,95 ) the theoretical formula describing the global structure of ( anti)proton - proton total cross section has the following structure _ ( |p)pp^tot(s ) = ^tot_asmpt(s ) , [ 42 ] where ^tot_asmpt(s ) = 2= ( mb),[43 ] (gev^{-2}),\ ] ] (gev^{-2 } ) = \ ] ] = ( gev^-2),[44 ] is the slope of nucleon - nucleon differential elastic scattering cross section , is the effective radius of two - nucleon forces , is the effective radius of three - nucleon forces , characterizes the internucleon distance in a deuteron , the functions describe low - energy parts of ( anti)proton - proton total cross sections and asymptotically tend to zero at ( see details in the original paper ) .the mathematical structure of the formula ( [ 42 ] ) is very simple and physically transparent : the total cross section is represented in a factorized form .one factor describes high energy asymptotics of total cross section and it has the universal energy dependence predicted by the general froissart theorem in local quantum field theory .the other factor is responsible for the behaviour of total cross section at low energies and it has a complicated resonance structure .however this factor has also the universal asymptotics at elastic threshold .it is a remarkable fact that the low energy part of total cross section has been derived by application of the generalized froissart theorem for a three - body forces scattering amplitude .( [ 43 ] ) shows that geometrical scaling in a naive form is not valid .however , from eq .( [ 43 ] ) it follows the generalized geometrical scaling which looks like ^tot_asmpt(s ) = 2b_el(s)[1 + 2(1-)],[geom - scale ] where is defined above and here , we would like to point out some remarkable features of the global structure in the ( anti)proton - proton total cross sections .first of all , the ( anti)proton - proton total cross sections have a minimum at , and the question is what this minimum corresponds to .it turns out that the effective radius of three - nucleon forces at the point satisfys the following harmonic ratio , [ 45 ] where is the proton charge radius . in other words , at the minimum it takes place the _ octave consonance " _ of the three - nucleon forces with the proton charge distribution .going further on , we have applied our approach to study a shadow dynamics in scattering from deuteron in some details . in this way a new simple formula for the shadow corrections to the total cross - section in scattering from deuteron has been derived and new scaling characteristics with a clear physical interpretation have been established .we shall briefly sketch the basic results of our analysis of high - energy particle scattering from deuteron .as has been shown in , the total cross - section in the scattering from deuteron can be expressed by the formula where are the total cross - sections in scattering from deuteron , proton and neutron , ( s ) = ^el(s ) + ^inel(s)=2^el(s)a^el(x_el ) + 2_sd^ex(s)a^inel(x_inel ) , [ 46 ] the total single diffractive dissociation cross - section is defined by the following equation _ sd^(s ) = _ m_min^2^s_t_-(m_x^2)^t_+ ( m_x^2 ) dt , [ 47 ] where = ^ex = /2m_n r_d,[ex ] and we supposed that and at high energies . the first term in the r.h.s .( [ 46 ] ) generalizes the known glauber correction but the second term in the r.h.s . 
of eq .( [ 46 ] ) is totally new and comes from the contribution of the three - body forces to the hadron - deuteron total cross section .the expressions for the shadow corrections have quite a transparent physical meaning , both the elastic and inelastic scaling functions have a clear physical interpretation .the function measures out a portion of elastic rescattering events among of all the events during the interaction of an incident particle with a deuteron as a whole , and this function attached to the total probability of elastic interaction of an incident particle with a separate nucleon in a deuteron .correspondingly , the function measures out a portion of inelastic events of inclusive type among of all the events during the interaction of an incident particle with a deuteron as a whole , and this function attached to the total probability of single diffraction dissociation of an incident particle on a separate nucleon in a deuteron . the scaling variables and have quite a clear physical meaning too .the dimensionless quantity characterizes the effective distances measured in the units of fundamental length " , which the deuteron size is , in elastic interactions , but the similar quantity characterizes the effective distances measured in the units of the same fundamental length " during inelastic interactions . the functions and have a different behaviour : is a monotonic function while has the maximum at the point where .the existence of the maximum in the function results an interesting physical effect of weakening the inelastic eclipsing ( screening ) at superhigh energies .the energy at the maximum of can easily be calculated from the equation and here we faced with the harmonic ratio ( in square ) .[48 ] using the above mentioned global structure for the ( anti)proton - proton total cross sections , we have made a preliminary comparison of the new structure for the shadow corrections in elastic scattering from deuteron with the existing experimental data on proton - deuteron and antiproton - deuteron total cross sections .the results of this comparison are shown in figs . 3 - 4 ( 298,184 ) ( 20,10 ) ( 155,0) ( 0,90 ) ( 288,184 ) ( 15,10 ) ( 155,0) ( 0,88 ) we would like to emphasize that in the fit to the data on antiproton - deuteron total cross sections was considered as a single free fit parameter .after that a comparison with the data on proton - deuteron total cross sections has been made without any free parameters : was fixed by the previous fit to the data on antiproton - deuteron total cross sections , and our fit yielded . if we take into account the latest experimental value for the deuteron matter radius then we can find that the fitted value for the satisfies with a good accuracy the equality r_d^2 = r^2_d , m,(r^2_d , m = 3.853fm^2 = 98.96gev^-2).[rd ] so , we have established a harmonic _ consonance " _ between the internucleon distance in a deuteron and the deuteron matter distribution .[50 ] now , let us come back to eq .( [ 43 ] ) . 
taking into account that , from the froissart bound ( [ 17 ] ) and eq .( [ 43 ] ) we have the following bound .[51 ] on the other hand for the effective radii of -body forces we have obtained an asymptotic behaviour given by eq .( [ 34 ] ) where it follows from = , s.[52 ] bound ( [ 51 ] ) with account of eq .( [ 52 ] ) gives m_3>m_2=,(m_2=2m_).[54 ] however , if we conjecture that which is fulfilled for then = , s.[55 ] the ratio given by eq .( [ 55 ] ) corresponds to the harmonic ratio for the major second in the major scale in according to pythagoras tune ; see introduction .we would like especially to emphasize that the ratio ( [ 55 ] ) is compatible with the global structure of ( anti)proton - proton(deuteron ) total cross sections described above .in this minireview we have tried in the spirit of the pythagorean school to show the mathematical , physical and geometrical beauty of the froissart theorem .no doubt , we were enchanted with the aesthetic aspects of the froissart theorem : there were heard _ the new notes of the music of the spheres _ produced by the froissart theorem in the fundamental dynamics of particles and nuclei . starting from abstract mathematical structures of axiomatic quantum field theory by applying the general theorems , a physically transparent intuitively clear and visual picture of particles and nuclei interactions was arisen before our eyes .we found a very simple relations between physically tangible quantities which looked like pythagoras harmonic ratios mentioned above and hence might be considered as a hadronic symphony " in the fundamental dynamics .in fact , we came back to the great pythagorean ideas reformulated in terms of the objects living in the microcosmos .it appears that the study of fundamental processes in high energy elementary particle physics makes it possible to establish a missing link between cosmos and microcosmos , between the great ancient ideas and recent investigations in particle and nuclear physics and to confirm the unity of physical picture of the world .anyway , we believe in it . at last , in our previous papers we repeatedly criticized the so called supercritical pomeron phenomenology in hadronic physics . in our opinion this phenomenology might be compared with a cacophony " in particle physics .certainly , someone likes cacophony in the music .however , we prefer a symphony in the music and a harmony in the fundamental dynamics as well .it is a great pleasure to thank professor o.a .khrustalev who initiated , encouraged and supported my scientific flame and research in the youth and professor v.i .savrin for friendly and successfully collaboration in the middle of seventieth . * * v.v .belokurov , o.a .khrustalev , o.d .timofeevskaya , quantum teleportation usual wonder , in russian , izhevsk , 2000 .lebedev , fragments of ancient greek philosophers , part i , in russian , moscow , nauka " , 1989 .m. froissart , phys . rev . * 123 * , 1053 ( 1961 ) . h. lehmann , nuovo cim .* 10 * , 579 ( 1958 ) .r. jost , h. lehmann , nuovo cim . * 5 * , 1598 ( 1957 ) .dyson , phys .rev . * 110 * , 1460 ( 1958 ) .h. lehmann , nuovo cimento suppl .* 14 * , 153 ( 1959 ) .a. martin , nuovo cim . * 42a * , 930 ( 1966 ) ; ibid * 44a * , 1219 ( 1966 ) . g. sommer , nuovo cim . *48a * , 92 ( 1967 ) ; * 52a * , 373 ( 1967 ) ; * 52a * , 850 ( 1967 ) ; * 52a * , 866 ( 1967 ) .l. durand , p.m. fishbane , l.m .simmons , jr . , journ . of math* 17 * , 1933 ( 1976 ) .wigner , _ on unitary representations of the inhomogeneous lorentz group _ , ann . 
of math . *40 * , 149 ( 1939 ) .khrustalev , v.i .savrin , preprint ihep 68 - 19k , serpukhov , 1968 .khrustalev , a.a .logunov , nguen van hieu , in : _ problems of theoretical physics _ , essays dedicated to n.n .bogolyubov on the occasion of his sixtieth birthday , publishing house nauka " , moscow , 1969 , p. 90 .khrustalev , a.a .logunov , a.n .tavkhelidze , i.t .todorov , nuovo cim . * 30 * , 134 ( 1963 ) .arkhipov , sov .* 74 * , 69 ( 1988 ) ; ibid * 83 * , 247 ( 1990 ) .g.t hooft , _ the holographic principle _ , e - print hep - th/0003004 .arkhipov , v.i .savrin , sov .phys . * 49 * , 3 ( 1981 ) .arkhipov , rep . on math. phys . * 20 * , 303 ( 1984 ) .arkhipov , _ what can we learn from the study of single diffractive dissociation at high energies ? _ in proceedings of viiith blois workshop on elastic and diffractive scattering , protvino , russia , june 28july 2 , 1999 , world scientific , singapore , 2000 , pp. 109 - 118 ; report ihep 99 - 43 , protvino , 1999 ; e - print hep - ph/9909531 .a.a . arkhipov , _ on global structure of hadronic total cross sections _ , preprint ihep 99 - 45 , protvino , 1999 ; e - print hep - ph/9911533 .arkhipov , _ proton - proton total cross sections from the window of cosmic ray experiments _ , preprint ihep 2001 - 23 , protvino , 2001 ; e - print hep - ph/0108118 ; in proceedings of ixth blois workshop on elastic and diffractive scattering , pruhonice near prague , czech republic , june 9 - 15 , 2001 , eds . v. kundrat , p. zavada , institute of physics , prague , czech republic , 2002 , pp . 293 - 304 . a.a .arkhipov , _ three - body forces , single diffraction dissociation and shadow corrections in hadron - deuteron total cross sections _ , preprint ihep 2000 - 59 , protvino , 2000 ; e - print hep - ph/0012349 ; in proceedings of xvth workshop on high energy physics and quantum field theory , tver , russia ,september 7 - 13 , 2000 , eds .m. dubinin , v. savrin , institute of nuclear physics , moscow state university , russia , 2001 , pp .241 - 257 .arkhipov,_diffraction 2000 : new scaling laws in shadow dynamics _ , nucl .b ( proc . suppl . )* 99a * , 72 ( 2001 ) .f. schmidt - kaler et al . , phys .lett . * 70 * , 2261 ( 1993 ) .
it has been shown that the great ancient pythagorean ideas have re-emerged in the latest research in high energy elementary particle and nuclear physics. in this respect we consider and discuss the mathematical, physical and geometrical aspects of the famous froissart theorem, and in this way we establish a link between this theorem and the mathematics and ideas elaborated in the pythagorean school. a harmony of the froissart theorem in the fundamental dynamics of particles and nuclei has been displayed. we argue that this harmony of the froissart theorem allows us to hear the new notes of _ "the music of the spheres" _ just in the pythagorean sense. * harmony of the froissart theorem in fundamental dynamics of particles and nuclei * + a.a. arkhipov + _ state research center "institute for high energy physics", 142280 protvino, moscow region, russia _ + pythagorean watchword
in the past few years there has been increasing interest in the investigation of financial markets as complex systems in statistical mechanics. the empirical analysis of high frequency financial data reveals nonstationary statistics of market fluctuations, and several mathematical models of markets based on the concept of nonequilibrium phenomena have been proposed. recently mizuno _ et al. _ investigated high frequency data of the usd/jpy exchange market and concluded that dealers' perception and decision are mainly based on the latest 2 minutes of data. this result means that there are feedback loops of information in the foreign currency market. as microscopic models of financial markets, several agent models have been proposed. specifically, ising-like models are familiar to statistical physicists and have been examined in the context of econophysics. the analogy to the paramagnetic-ferromagnetic transition is used to explain crashes and bubbles. krawiecki and hoyst consider the effect of a weak external force acting on the agents in an ising model of the financial market and conclude that apparently weak stimuli from outside can potentially affect the financial market due to stochastic resonance. this conclusion indicates that it is possible to observe the effect of external stimuli in the market fluctuations. motivated by their conclusion we investigate high-frequency financial data and find potential evidence that stochastic resonance occurs in financial markets. in this article the results of the data analysis are reported and an agent-based model is proposed in order to explain this phenomenon. we analyze tick quotes on three foreign currency markets (usd/jpy, eur/usd, and jpy/eur) for periods from january 1999 to december 2000. this database contains time stamps, prices, and identifiers of either ask or bid. since market participants (dealers) generally must indicate both ask and bid prices in foreign currency markets, nearly the same number of ask and bid offerings are recorded in the database. here we focus on the ask offerings and regard the number of ask quotes per unit time (one minute) as the market activity. the reason why we define the number of ask quotes as the market activity is that this quantity represents the amount of dealers' responses to the market. in order to elucidate the temporal structure of the market activity, power spectrum densities of the activity are calculated, estimated from the squared modulus of its discrete fourier transform, where $f$ represents the frequency and $T$ the maximum period of the power spectrum density. figs. [fig:power-spectrum-usdjpy], [fig:power-spectrum-eurusd], and [fig:power-spectrum-eurjpy] show the power spectrum densities for the three foreign currency markets (usd/jpy, eur/usd, and eur/jpy) from january 1999 to december 2000. it is found that they have some peaks in the high frequency region. there is a peak at 2.5 minutes on the usd/jpy market, at 3 minutes on the eur/usd market, and there are some peaks on the jpy/eur market. we confirm that these peaks appear and disappear depending on the observation period.
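the analysis described above is easy to reproduce on any minute-binned quote series. the sketch below is a generic illustration rather than the authors' pipeline: the binning helper, the periodogram normalization and the synthetic poisson data with a weak 2.5-minute modulation are all our own assumptions, used only to show how a peak at a few minutes would surface in the power spectrum density.

```python
import numpy as np

def market_activity(quote_times_sec, bin_seconds=60):
    """number of ask quotes per one-minute bin from a list of quote time stamps (seconds)."""
    t = np.asarray(quote_times_sec)
    n_bins = int(np.ceil((t.max() - t.min()) / bin_seconds))
    counts, _ = np.histogram(t, bins=n_bins, range=(t.min(), t.min() + n_bins * bin_seconds))
    return counts

def power_spectrum(activity):
    """periodogram of the activity; frequencies are in cycles per minute."""
    a = np.asarray(activity, dtype=float)
    a = a - a.mean()                        # drop the zero-frequency component
    spectrum = np.abs(np.fft.rfft(a)) ** 2 / len(a)
    freqs = np.fft.rfftfreq(len(a), d=1.0)  # d = 1 minute
    return freqs, spectrum

# synthetic example: poisson quote counts with a weak 2.5-minute periodic modulation
rng = np.random.default_rng(1)
minutes = np.arange(2 ** 14)
rate = 5.0 + 1.0 * np.sin(2 * np.pi * minutes / 2.5)
activity = rng.poisson(rate)
freqs, spec = power_spectrum(activity)
peak_period = 1.0 / freqs[1:][np.argmax(spec[1:])]
print(f"dominant period: {peak_period:.2f} minutes")   # close to 2.5
```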
on the usd / jpy marketthere is the peak for periods of january 1999july 1999 , march 2000april 2000 , and august 2000november 2000 ; on the eur / usd market july 1999september 1999 ; and on the eur / jpy market january 1999march 1999 , april1999june 1999 , november 1999 , and july 2000december 2000 .these peaks mean that market participants offer quotes periodically and in synchronization .the possible reasons for these peaks to appear in the power spectrum densities of the market activity are follows : 1 .the market participants are affected by common periodic information .2 . the market participants are spontaneously synchronized . in the next section the double - threshold agent modelis introduced and explain this phenomenon on the basis of the reason ( 1 ) .here we consider a microscopic model for financial markets in order to explain the dependency of the peak height on observation periods . we develop the double - threshold agent model based on the threshold dynamics . in foreign exchange markets the market participants attend the markets with utilizing electrical communication devices , for example , telephones , telegrams , and computer networks .they are driven by both exogenous and endogenous information and determine their investment attitudes .since the information to motivate buying and one of selling are opposite to each other we assume that the information is a scaler variable .moreover the market participants perceive the information and determine their investment attitude based on the information .the simplest model of the market participant is an element with threshold dynamics .we consider a financial market consisting of market participants having three kinds of investment attitudes : buying , selling , and doing nothing .recently we developed an array of double - threshold noisy devices with a global feedback . applying this model to the financial marketwe construct three decision model with double thresholds and investigate the dependency of market behavior on an exogenous stimuli .the investment attitude of the dealer at time is determined by his / her recognition for the circumstances , where represents the investment environment , and the dealer s prediction from historical market data . is given by where and represent threshold values to determine buying attitude and selling attitude at time , respectively . is the uncertainty of the dealer s decision - making . for simplicityit is assumed to be sampled from an identical and independent gaussian distribution , where is a standard deviation of . of course this assumption can be weakened .namely we can extend the uncertainty in the case of non - gaussian noises and even correlated noises .the excess demand is given by the sum of investment attitudes over the market participants , which can be an order parameter .furthermore the market price moves to the direction to the excess demand where represents a liquidity constant and is a sampling period . may be regarded as an order parameter .the dealers determine their investment attitude based on exogenous factors ( fundamentals ) and endogenous factors ( market price changes ) . generally speaking ,the prediction of the dealer is determined by a complicated strategy described as a function with respect to historical market prices , . following the takayasus first order approximation we assume that is given by where is the dealer s response to the market price changes . 
it is assumed that the dealers response can be separated by common and individual factors , where denotes the common factor , and the individual factor . generally these factors are time - dependent and seem to be complicated functions of both exogenous and endogenous variables . for simplicity it is assumed that these factors vary rapidly in the limit manner . then this model becomes well - defined in the stochastic manner .we assume that and are sampled from the following identical and independent gaussian distributions , respectively : where represents a mean of , a standard deviation of , and a standard deviation of .since we regard the market activity as the number of tick quotes per unit time it should be defined as the sum of dealers actions : the market activity may be regarded as an order parameter .this agent model has nine model parameters .we fix , , , , , and throughout all numerical simulations . it is assumed that an exogenous periodic information to the market is subject to at , and .we calculate the signal - to - noise ratio ( snr ) of the market activity as a function of .the snr is defined as where represents a peak height of the power spectrum density , and noise level . from the numerical simulation we find non - monotonic dependency of the snr of on .[ fig : snr ] shows a relation between the snr and the noise strength .it has an extremal value around .namely the uncertainty of decision - making plays a constructive role to enhance information transmission . if there are exogenous periodic information and the uncertainty of decision - making we can find the peak on power spectrum densities at appropriate uncertainty of decision - making due to stochastic resonance . for the double - threshold agent modelis plotted against the uncertainty of decision - making of agents at , , , , , , , , , and . ]we analyzed time series of the number of tick quotes ( market activity ) and found there are short - time periodicities in the time series .the existence and positions of these peaks of the power spectrum densities depend on foreign currency markets and observation periods .the power spectrum densities have a peak at 2.5 minutes on the usd / jpy market , 3 minutes on the eur / usd .there are some peaks at a few minutes on the jpy / eur .we developed the double - threshold agent model for financial markets where the agents choose three kinds of states and have feedback strategies to determine their decision affected by last price changes .from the numerical simulation we confirmed that the information transmission is enhanced due to stochastic resonance related to the uncertainty of decision - making of the market participants .we propose a hypothesis that the periodicities of the market activity can be observed due to stochastic resonance .appearance and disappearance of these peaks may be related to the efficiency of the markets .the efficiency market hypothesis says that prices reflect information .because quotes make prices tick frequency can reflect information .if the peaks of the power spectrum densities come from exogenous information then snr is related to the efficiency of the market .namely the market may be efficient when the peaks appear .the author thanks prof .t. munakata for stimulative discussions and useful comments .this work is partially supported by the japan society for the promotion of science , grant - in - aid for scientific research # 17760067 .99 r.n . 
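since the equations of the model were partly lost in extraction, the following python sketch should be read as our own minimal reconstruction of the double-threshold dynamics described above, not as the authors' code: each dealer buys (+1), sells (-1) or does nothing (0) according to two thresholds applied to the sum of an exogenous signal, a feedback term proportional to the last price change, and gaussian decision noise; the excess demand moves the price, and the number of dealers acting in a step plays the role of the market activity. every functional form and parameter value below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_dealers=100, n_steps=4096, theta_buy=1.0, theta_sell=-1.0,
             noise_std=0.6, feedback_mean=0.2, feedback_std=0.1,
             liquidity=0.01, signal_amp=0.3, signal_period=64):
    """minimal double-threshold agent model; returns the market activity per step."""
    price_change = 0.0
    activity = np.zeros(n_steps)
    for t in range(n_steps):
        signal = signal_amp * np.sin(2 * np.pi * t / signal_period)   # exogenous periodic information
        feedback = rng.normal(feedback_mean, feedback_std, n_dealers) * price_change
        noise = rng.normal(0.0, noise_std, n_dealers)                  # uncertainty of decision-making
        perception = signal + feedback + noise
        attitude = np.where(perception >= theta_buy, 1,
                            np.where(perception <= theta_sell, -1, 0))
        excess_demand = attitude.sum()
        price_change = liquidity * excess_demand        # price moves with the excess demand
        activity[t] = np.count_nonzero(attitude)        # quotes offered in this step
    return activity

def snr(activity, signal_period):
    """peak height of the power spectrum at the driving frequency over the local noise floor."""
    spectrum = np.abs(np.fft.rfft(activity - activity.mean())) ** 2
    freqs = np.fft.rfftfreq(len(activity), d=1.0)
    k = np.argmin(np.abs(freqs - 1.0 / signal_period))
    background = np.median(np.concatenate([spectrum[max(1, k - 20):k], spectrum[k + 1:k + 21]]))
    return spectrum[k] / background

# scan the decision noise: the snr is expected to peak at an intermediate value (stochastic resonance)
for s in (0.2, 0.6, 1.5):
    print(s, snr(simulate(noise_std=s), signal_period=64))
```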
mantegna and h.e .stanley , an introduction to econophysics : correlations and complexity in finance , cambridge university press , cambridge ( 2000 ) . m.m .dacorogna , r. genay , u. mller , r.b . olsen and o.v .pictet , an introduction to high - frequency finance , academic press , san diego ( 2000 ) .t. mizuno , t. nakano , m. takayasu , and h. takayasu , physica a , * 344 * ( 2004 ) 330 . a .- h .sato and h. takayasu , physica a * 250 * ( 1998 ) 231 . t. lux and m. marchesi , nature , * 397 * ( 1999 ) 498500 .d. challet , m. marsili and yi - cheng zhang , physica a , * 294 * ( 2001 ) 514524 .p. jefferies , m.l .hart , p.m. hui and n.f .johnson , the european physical journal b , * 20 * ( 2001 ) 493501 .a. krawiecki and j.a .hoyst , physica a , * 317 * ( 2003 ) 597 .the data is provided by cqg international ltd .sato , k. takeuchi , and t. hada , physics letters a , * 346 * ( 2005 ) 27 .h. takayasu , and m. takayasu , physica a , * 269 * ( 1999 ) 24 .l. gammaitoni , p. hnggi , p. jung , and f. marchesoni , review of modern physics , * 70 * ( 1998 ) 223 .e. fama , journal of finance , * 46 * ( 1991 ) 1575 .
power spectrum densities for the number of tick quotes per minute ( market activity ) on three currency markets ( usd / jpy , eur / usd , and jpy / eur ) for periods from january 1999 to december 2000 are analyzed . we find some peaks on the power spectrum densities at a few minutes . we develop the double - threshold agent model and confirm that stochastic resonance occurs for the market activity of this model . we propose a hypothesis that the periodicities found on the power spectrum densities can be observed due to stochastic resonance . tick quotes , foreign currency market , power spectrum density , double - threshold agent model , stochastic resonance + * pacs * 89.65.gh , 87.15.ya , 02.50.-r
important in both fundamental science and numerous applications , optimization problems of various degrees of complexity are challenging ( see for an excellent introduction ) .optimization conditioned by constraints that may vary from event to event is of especial theoretical and practical importance . as a first example , when dealing with a system under a random potential , each realization of the random potential demands a separate optimization resulting in a different ground state .the thermodynamic behavior of such a system in a quenched random potential crucially depends on the random potential realized .a similar but practical problem may arise in routing passengers at various cities to reach their destinations . in the latter case ,the optimal routing depends on the number of passengers at various locations , the costs from one location to the others , which likely to vary from time to time .this type of conditional optimization also occurs in modern proteomics problem , that is , in the mass spectrometry ( ms ) based peptide sequencing . in this case , each tandem ms ( ms ) spectrum constitute a different condition for optimization which aims to find a database peptide or a _ de novo _ peptide to best explain the given ms spectrum . when the cost function of an optimization problem can be expressed as a sum of independent local contributions , the problem usually can be solved using the transfer matrix method that is commonly employed in statistical physics .a well - studied example of this sort in statistical physics is the directed polymer / path in a random medium ( dprm ) .even when a small non - local energetics is involved , the transfer matrix approach still proves useful . as an example , the close relationship between the dprm problem and ms - based peptide sequencing , where a small nonlocal energetics is necessary to enhance the peptide identifications , was sketched in an earlier publication and the cost value distribution from many possible solutions other than the optimal one is explored .indeed , obtaining the cost value distribution from _ all _ possible solutions in many cases is harder than finding the optimal solution alone . in this paper, we will provide the solution to a generic problem that enables a full characterization of the peptide sequencing score statistics , instead of just the optimal peptide .the 1d problem considered is essentially a hopping model in the presence of a random potential .the solution to this problem may also be useful in other applications such as in routing of passengers and even internet traffic . in what follows, we will first introduce the generic 1d hopping model in a random potential , followed by its transfer matrix ( or dynamic programming ) solution .we then discuss the utility of this solution in the context of ms - based peptide sequencing , and demonstrate with real example from mass spectrum in real ms - based proteomics experiments . in the discussion section , we will sketch the utility of the transfer matrix solution in other context and then conclude with a few relevant remarks .along the -axis , let us consider a particle that can hop with a set of prescribed distances towards the positive direction .that is , if the particle is currently at location , it can move to location , , in the next time step . 
at each hopping step, the particle will accumulate an energy from location that it just visited .the score ( negative of the on - site potential energy ) is assumed positive and may only exist at a limited number of locations . forlocations that do not exist , we simply set there .the energy of a path starting from the origin specified by the sequential hopping events would have visited locations with and has energy in general , there can be more than one path terminated at the same point . treating each path as a state with energy given by ,one ends up having the following recursion relation for the partition function [ pf_global.0 ] z(x ) = _i=1^k e^s(x - m_i ) z(x - m_i ) , where plays the role of inverse temperature ( with chosen ) .if one were only interested in the best score terminated at point , it will be given by the zero temperature limit and the recursion relation may be obtained by taking the logarithm on both sides of ( [ pf_global.0 ] ) and divided by then taking limit to reach [ pf_zt_best ] s_best(x ) = _ 1i k \ { s(x - m_i)+s_best(x - m_i ) } , where records the best path score among all paths reaching position .this update method , also termed dynamic programming , records the lowest energy and lowest energy path reaching a given point .the lowest energy among all possible at position is simply and the associated path can be obtained by tracing backwards the incoming steps .it is interesting to observe that one can also obtain the worst score at each position via dynamical programming [ pf_zt_worst ] s_worst(x ) = _ 1i k \ { s(x - m_i)+s_worst(x - m_i ) } .the full thermodynamic characterization demands more information than the ground state energy . in principle , one may obtain the full partition function using eq .( [ pf_global.0 ] ) evaluated at various temperatures .this procedure , however , hinders analytical property such as determination of the average energy a better starting point may be achieved if one can obtain the density of states . in this case , we have z & & de e^-e d(e ) + e & = & .note that if the ground energy of the system is bounded from below , the partition function is simply a laplace transform of a modified density of states given by where and this implies that the density of states together with the ground state energy determine all the thermodynamic behavior of the system . in the next section, we will explain how to obtain the density of states using the dynamical programming technique as well as how to extend this approach to more complicated situations that will be useful in characterizing the score statistics in ms - based peptide sequencing .the density of states is related to the energy histogram in a simple way .the number of states between energies and ( with ) is given by . if we happen to use as the energy bin size for energy histogram , the count in the bin with energy simply and the density of states . for simplicity, we will assume that the all the on - site energies are integral multiple of .this implies that each path energy / score is also an integral multiple of . in the following subsections, we will use score density of states instead of energy density of states .we denote by the number of paths reaching position with score . 
with this notation ,we can easily write down the recursion relation for as follows [ s.dos ] c(x , n ) = _ i=1^k c(x - m_i , n- ) .this recursion relation allows us to compute the density of states in the same manner as computing the partition function ( [ pf_global.0 ] ) except that we need to have an additional dimension for score at each position . as an even simpler application of this recursion relation ,suppose that one is only interested in the number of paths reaching position , one may sum over the energy part on both side of ( [ s.dos ] ) and arrives at [ p_count ] c(x ) = _ i c(x - m_i ) , which enables a very speedy way to compute the total number of paths reaching position .in the context of _ de novo _ peptide sequencing , this number corresponds to the total number of all possible _ de novo _ peptides within a given small mass range .although simply obtained , this number may be useful for providing rough statistical assessment in _ de novo _ peptide sequencing . in general , one may wish to associate with each hop an energy or one may wish to introduce some kind of score normalization based on the number of hopping steps .this is indeed the case when applying this framework to ms - based peptide sequencing where a peptide length factor adding or multiplying to the overall raw score is a common practice . in this case, it becomes important to keep track the number of hops made in each path .we may further categorize the counter into .that is , we may separate the paths with different number of steps from one another and arrive at a finer counter which records the number of paths reaching position with score and with hopping steps .it is rather easy to write down the recursion relation obeyed by this fine counter [ sl.dos ] c(x , n , l ) = _i=1^k c(x - m_i , n-,l-1 ) .this recursion relation allows us to renormalize the raw score based on the number of steps taken .for example , for raid_dbs , a database search method we developed , we divide the raw score obtained by for any peptide ( path ) of amino acids ( hopping steps ) to get better sensitivity in peptide identification . in principle , the recursion relations given by ( [ s.dos]-[sl.dos ] ) are all one - dimensional updates .the only difference is the internal structure of counters at each position . for ( [ p_count ] ) ,the counter is just an integer and has no further structure . for ( [ s.dos ] ) ,the counter at each position has a 1d structure indexed by the score . for ( [ sl.dos ] ) ,the counter at each position has a 2d structure indexed by both the score and the number of hopping steps .this means that in terms of solving the problem using dynamical programming , it is always a 1d dynamical programming with different degrees of internal structure that may lengthen the execution time when shifting from the simplest case ( [ p_count ] ) to the more complicated case ( [ sl.dos ] ) . 
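The recursions above translate directly into a short dynamic-programming routine. The sketch below uses toy step sizes and toy on-site scores (not real residue masses or spectral scores); the fine counter c[x][(n, l)] implements the update (sl.dos), its sum over (n, l) gives the simple path count (p_count), and its extreme populated scores at the end point reproduce (pf_zt_best) and (pf_zt_worst).

```python
from collections import defaultdict

# Hedged toy illustration: paths start at the origin, hop forward by one of the allowed
# step sizes m_i, and collect the integer-binned score of the site they have just left.
def score_counters(steps, site_score, x_max):
    c = defaultdict(lambda: defaultdict(int))      # c[x][(score, length)] = number of paths
    c[0][(0, 0)] = 1                               # one empty path sits at the origin
    for x in range(1, x_max + 1):
        for m in steps:
            prev = x - m
            if prev < 0 or prev not in c:
                continue
            gained = site_score.get(prev, 0)       # score collected at the site just visited
            for (n, l), cnt in c[prev].items():
                c[x][(n + gained, l + 1)] += cnt   # recursion (sl.dos)
    return c

steps = (2, 3)                                     # allowed hop sizes m_1, m_2 (toy values)
site_score = {0: 1, 3: 2, 5: 1, 8: 3}              # non-zero on-site scores only (toy values)
c = score_counters(steps, site_score, x_max=12)

print("paths reaching x = 12 :", sum(c[12].values()))          # (p_count)
print("best score at x = 12  :", max(n for (n, l) in c[12]))   # (pf_zt_best)
print("worst score at x = 12 :", min(n for (n, l) in c[12]))   # (pf_zt_worst)
print("score/length counters :", dict(c[12]))
```

Keeping the counters as dictionaries keyed by (score, length) is just a convenience; the 2D array layout described in the text is equivalent and faster when the score and length bounds are known in advance.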
obviously at each position , there is an upper bound and a lower bound for score and also for the number of hopping steps accumulated .we shall call them , , and respectively .the first two quantities may be obtained by eqs .( [ pf_zt_best ] ) and ( [ pf_zt_worst ] ) respectively .we provide the recursions for the two latter quantities below l_max(x ) & = & _ 1i k \ { l_max(x - m_i ) } + 1 [ l_max ] , + l_min(x ) & = & _ 1i k \ { l_min(x - m_i ) } + 1 [ l_min ] .( [ pf_zt_best]-[pf_zt_worst ] ) and ( [ l_max]-[l_min ] ) provides the ranges for both the scores and the number of cumulative hopping steps at each position via simple dynamic programming . as we will discuss later, this information enables a memory - efficient computations of score histograms .in this section , we focus on an important subject in modern biology using ms data to identify the numerous peptides / proteins involved in any given biological process . because of the peptide mass degeneracies and the limited measurement accuracy for the peptide mass - to - charge ratio , using ms spectra is more effective in peptide identifications . in a ms setup ,a selected peptide with its mass identified by the first spectrometer is fragmented by noble gas , and the resulting fragments are analyzed by a second mass spectrometer .although such ms-based proteomics approaches promise high throughput analysis , the confidence level assignment for any peptide / protein identified is challenging .the majority of peptide identification methods are so - called database search approaches .the main idea is to _ theoretically _ fragment each peptide in a database to obtain the corresponding _ theoretical _ spectra .one then decides the degree of similarity between each theoretical spectrum and the input query spectrum using a scoring function .the candidate peptides from the database are ranked / chosen according to their similarity scores to the query spectrum .although one may assign relative confidence levels among the candidate peptides via various ( empirical ) means , an objective , standardized calibration exists only recently . in our earlier publications , we proposed to tackle this difficulty by using a _ de novo _ sequencing method to provide an objective confidence measure that is both database - independent and takes into account spectrum - specific noise . in this paper , we will provide concrete algorithms for such purpose . to begin , consider a spectrum with parent ion mass range ] , it will have a hopping trajectory in the molecular array given by ] . we only need to allocate computer memory associated with those sites .for each of these relevant sites , we also know the values of , , , , and the total number of peptides reaching that site through the algorithm above . one may therefore allocate a 2d array of size for each relevant mass_index for later use . once memory allocation for relevant mass_indicesis done , we can efficiently go through those relevant sites to obtain the 2d score histogram that we mentioned . 
in the pseudocodebelow , update is performed using eq .( [ sl.dos ] ) .we now demonstrate the very simple main algorithm initialize all the fine counters + except =1 ; + for ( aa_index = 0 ; aa_index max_aa ; aa_index + + ) \ { + update at (aa_index ) ; + } + for ( mass_index in ascendingly ordered _relevant mass_indices_)\ { + for ( aa_index = 0 ; aa_index max_aa ; aa_index + + ) \ { + update at (mass_index + n(aa_index ) ) ; + } + } + we now define the final 2d counter [ 2dhist_final ] y(n , l ) _ i=1^k c(f_i , n , l ) . apparently , in the 1d hopping model when allowing consecutive terminating points , the resulting density of states can now be expressed as . if one were interested in normalizing the final score in a path - length dependent manner , one will has the following generic transformation h(e ) = _ lde ( e - f(e,l ) ) where is a generic length - normalized energetic function that takes the raw energy with hopping steps and turn them into a new energy , and is understood . using a real experimental ms spectrum of parent ion mass da and a raw scoring function ( raid_dbs raw score without divided by with being the peptide length ) , we obtained a 2d score histogram . from this 2d score histogram, we can compute the average peptide length as well .we then transform the 2d score histogram using two different functions . in the first case , , meaning that one just divides the score by a constant given by . in the second case, we use the raid_dbs scoring function where . in figure[ fig1 ] , we show the two resulting score histograms along with the fits to theoretical distribution function . as one may see from the figure , both histograms are well fitted by the theoretical distribution function over at least 15 order of magnitudes .there is difference , however , in the histograms obtained .in the first case , where the score is merely divided by the average length , we have a wider score distribution than that of the second case .this implies that a high scoring hit out of the first type of scoring function will have a larger -value than that of the second type .this is perfectly reasonable because when using the first type of raw scoring , very long peptides which by random chances are more likely to hit on fragment peaks in the mass spectrum are less penalized than the shorter peptides . as a consequence ,one anticipates more false long peptides out of the first type of scoring method than that of the second scoring method .therefore , one should assign a larger -value to the former case and a smaller -value to the latter case .it is apparently important to be able to obtain score histograms of the second scoring method .however , this can only be achieved if one keeps the length information in the dynamical programming update , see eq .( [ sl.dos ] ) .our method may also be extended to other applications . in the case of passenger routings ,the -axis actually represents time .the local score may be viewed as the additional cost that may vary for different stops . once the problem is laid out, the 2d histogram obtained from our solution indicates the number of equivalent routes in terms of additional costs and the total number of stops .this problem should be interesting in its own right . 
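As a concrete, hedged rendering of the main algorithm in the pseudocode above: counters are pushed forward over ascending relevant mass indices, the final 2D counter y(n, l) is accumulated over the allowed terminal masses, and a length-normalised histogram is derived from it. The three-residue alphabet with nominal integer masses, the toy site scores, the parent-mass window and the normalisation f(E, l) = E / l are illustrative choices, not the raid_dbs settings.

```python
from collections import defaultdict

AA_MASS = {"G": 57, "A": 71, "S": 87}         # nominal residue masses in Da (reduced alphabet)
WINDOW = (340, 346)                            # accepted terminal mass indices f_1..f_k (toy window)
SITE_SCORE = {57: 2, 128: 3, 199: 1, 286: 2}   # toy per-site scores from a toy spectrum

def final_2d_counter():
    c = defaultdict(lambda: defaultdict(int))          # c[mass][(score, length)]
    c[0][(0, 0)] = 1
    for x in range(0, WINDOW[1] + 1):                  # ascending relevant mass indices
        if x not in c:
            continue                                   # site never reached: nothing to push
        gained = SITE_SCORE.get(x, 0)
        for m in AA_MASS.values():
            nxt = x + m
            if nxt > WINDOW[1]:
                continue
            for (n, l), cnt in c[x].items():
                c[nxt][(n + gained, l + 1)] += cnt     # forward version of (sl.dos)
    y = defaultdict(int)                               # y(n, l) = sum over terminal masses f_i
    for f in range(WINDOW[0], WINDOW[1] + 1):
        for key, cnt in c.get(f, {}).items():
            y[key] += cnt
    return y

y = final_2d_counter()
print("total number of de novo 'peptides' in the window:", sum(y.values()))

h = defaultdict(int)                                   # length-normalised score histogram
for (n, l), cnt in y.items():
    h[round(n / l, 2)] += cnt
print("normalised-score histogram:", dict(sorted(h.items())))
```

With real residue masses, a realistic parent mass and spectrum-derived site scores, the same loop yields the full score histogram used for the fits discussed above; only the alphabet, the window and the scoring dictionary change.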
in this paper , we developed a new approach to obtain the density of states of a 1d hopping problem in random potential .we have extended the simplest case scenario and have shown that we can apply this method to provide a _complete _ score histogram for ms - based peptide sequencing problem .this important information may be used for a more objective statistical significance assignment in peptide identification .our algorithm may also serve as a speedy _ de novo _ algorithm .if one is only interested in getting the best scoring peptide with length normalized score , one only needs to keep track of .furthermore , it is straightforward to include in our _ de novo _ algorithm post - translationally modified amino acids .the effect is simply an enlargement of the alphabet .that is , instead of having 20 amino acids , we will simply have more allowed masses but without needing to change any part of the algorithm .in the near future , we would like to build a web application that allows the users to obtain information of interest .for example , a user might be interested in knowing : given a parent ion molecular mass and a mass error tolerance , how many _ de novo _ peptides can there be ?furthermore , we plan to provide users with the full score histogram when a query spectrum is provided and a scoring method is chosen .our approach , founded on statistical physics , can easily address this type of questions to provide useful information for biological researches .this work was supported by the intramural research program of the national library of medicine at the national institutes of health .
we provide a complete thermodynamic solution of a 1d hopping model in the presence of a random potential by obtaining the density of states . since the partition function is related to the density of states by a laplace transform , the density of states determines completely the thermodynamic behavior of the system . we have also shown that the transfer matrix technique , or so - called dynamic programming , used to obtain the density of states in the 1d hopping model may be generalized to tackle a long - standing problem in statistical significance assessment for one of the most important _ proteomic _ tasks : peptide sequencing using tandem mass spectrometry data . statistical significance , dynamic programming , mass spectrometry , directed paths in random media , peptide identification
chronometry is the science of the measurement of time .as the time flow of clocks depends on the surrounding gravity field through the relativistic gravitational redshift predicted by einstein , chronometric geodesy considers the use of clocks to directly determine earth s gravitational potential differences .instead of using state - of - the - art earth s gravitational field models to predict frequency shifts between distant clocks ( , itoc project ) , the principle is to reverse the problem and ask ourselves whether the of frequency shifts between distant clocks can improve our knowledge of earth s gravity and geoid .for example , two clocks with an accuracy of in terms of relative frequency shift would detect a 1-cm geoid height variation between them , corresponding to a geopotential variation of about ( for more details , see e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?until recently , the performances of optical clocks had not been sufficient for the determination of earth s gravity potential . in 2010, demonstrated the ability of the new generation of atomic clocks , based on optical transitions , to sense geoid height differences with a 30-cm level of accuracy . to date , the best of these instruments reach a stability of ( nist , riken + univ .tokyo , ) after 7 hours of integration time .more recently , an accuracy of ( jila , ) has been obtained , equivalent to geopotential differences of 0.2 , or 2 cm on the geoid . .such results the question of what can we learn about earth s gravity and mass sources using clocks , that we can not easily derive from existing gravimetric data .recent studies address this question ; for example , discuss the value and future applicability of chronometric geodesy for direct geoid mapping on continents and joint gravity potential surveying to determine subsurface density anomalies .they find that a geoid perturbation caused by a 1.5 km radius sphere with 20 per cent density anomaly buried at 2 km depth in the earth s crust is already detectable by atomic clocks with present - day accuracy .they also investigate other applications , for earthquake prediction and volcanic eruptions , or to monitor vertical surface motion changes due to magmatic , post - seismic , or tidal deformations . herewe will consider the `` static '' or `` long - term '' component of earth s gravity .our knowledge of earth s gravitational field is usually expressed through geopotential grids and models that integrate all available observations , globally or over an area of interest .these models are , however , not based on direct observations with the potential itself , which has to be reconstructed or extrapolated by integrating measurements of its derivatives .the potential is reconstructed with a centimetric accuracy at resolutions of the order of 100 km from grace and goce satellite data , and integrated from near - surface gravimetry for the shorter spatial scales . 
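The order of magnitude behind these statements follows from the first-order redshift relation: the relative frequency shift between two clocks equals the gravity potential difference divided by the square of the speed of light, and a potential difference maps to a geoid-height difference through the mean surface gravity. The short sketch below uses round illustrative numbers only; it reproduces the roughly 1 cm (about 0.1 m^2/s^2) sensitivity usually quoted for a clock comparison at the 1e-18 level.

```python
# Hedged numerical illustration of the chronometric relation; constants are round values.
C = 299_792_458.0      # speed of light, m/s
G0 = 9.81              # mean surface gravity, m/s^2

def potential_from_fractional_shift(dff):
    """geopotential difference (m^2/s^2) sensed as a relative frequency shift dff = df/f."""
    return dff * C**2

def geoid_height_from_potential(dw):
    """equivalent height difference of the level surface, in metres."""
    return dw / G0

for dff in (1e-16, 1e-17, 1e-18):
    dw = potential_from_fractional_shift(dff)
    dh = geoid_height_from_potential(dw)
    print(f"df/f = {dff:.0e}  ->  dW = {dw:8.4f} m^2/s^2  ->  dh = {dh * 100:6.2f} cm")
```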
as a result ,the standard deviation ( rms ) of differences between geoid heights obtained from a global high - resolution model as egm2008 , and from a combination of gps / leveling data , reaches up to 10 cm in areas well covered in surface data .the uneven distribution of surface gravity data , especially in transitional zones ( coasts , borders between different countries ) and with important gaps in areas difficult to access , indeed limits the accuracy of the reconstruction when aiming at a centimetric level of precision .this is an important issue , as large gravity and geoid variations over a range of spatial scales are found in mountainous regions , and because a high accuracy on altitudes determination is crucial in coastal zones .airborne gravity surveys are thus realized in such regions ; local clock - based geopotential determination could be another way to overcome these limitations . in this context , here, we investigate to what extent clocks could contribute to fill the gap between the satellite and near - surface gravity spectral and spatial coverages in order to improve our knowledge of the geopotential and gravity field at all wavelengths . by nature , potential data are smoother and more sensitive to mass sources at than gravity data , which are strongly influenced by local effects .thus , they could naturally complement existing networks in sparsely covered places and even also contribute to point out possible systematic patterns of errors in the less recent gravity datasets .we address the question through test case examples of high - resolution geopotential reconstructions in areas with different characteristics , leading to different variabilities of the gravity field .we consider the massif central in france , marked by smooth , moderate altitude mountains and volcanic plateaus , and an alps - mediterranean zone , comprising high reliefs and a land / sea transition .the paper is organized as follows . in section [ sec:2 ] , we briefly summarize the method schematically . in section [ sec:3 ] , we describe the regions of interest and the construction of the high - resolution synthetic datasets used in our tests . in section [ sec:4 ] , we present the methodology to assess the contribution of new clock data in the potential recovery , in addition to ground gravity measurements .numerical results are shown in section [ sec:5 ] .we finally discuss in section [ sec:6 ] the influence of different parameters like the data noise level and coverage .the rapid progress of optical clocks performances opens new perspectives for their use in geodesy and geophysics . while they wereuntil recently built only as stationary laboratory devices , several transportable optical clocks are currently under construction or test ( see e.g. . the technological step toward state - of - the art transportable optical clocks is likely to take place within the next decade . in parallel , in order to assess the capabilities of this upcoming technology , we chose an approach based on numerical simulation in order to investigate whether atomic clocks can improve the determination of the geopotential ., our method is adapted to the determination of the geopotential at regional scales . in figure[ fig : methodo ] a scheme of the method used in this paper is shown : 1 . 
in the first step ,we generate a high spatial resolution grid of the gravity disturbance and the disturbing potential , considered as our reference solutions .this is done using a state of the art geopotential model ( eigen-6c4 ) , and by removing low and high frequencies .it is described in details in section [ sec:3 ] ; 2 . in the second step ,we generate synthetic measurements and from a realistic spatial distribution , then we add generated random noise representative of the measurement noise .this is described in details in section [ sec:4 ] ; 3 . in a third step, we estimate the disturbing potential from the synthetic measurements and/or on a regular grid thanks to least - square collocation ( lsc ) method .is realized by making an assumption on the a priori gravity field regularity on the target area , as described in section [ sec:5 ] .finally , we evaluate the potential recovery quality for different data distribution sets , noise levels , and types of data , by comparing the statistics of the residuals between the estimated values and the reference model .residuals ( higher than the machine precision ) can have several origins , depending on the parameters of the simulation that can be : 1 .the modeled instrumental noise added to the reference model at step 2 .this noise can be changed in order to determine , for instance , whether it is better to reduce gravimetry noise by one order of magnitude , rather than using clock measurements ; 2 .the data distribution chosen in step 2 .this is useful to check for instance the effect of the number of clock measurements on the residuals or to find an optimal coverage for the clock measurements ; 3 . the potential estimation error , due to the intrinsic imperfection of the chosen for the geopotential . in our case, this is due to the low - frequency content of the covariance function chosen for the least - square collocation method ( see section [ sec:5 ] ) .all these sources of errors are somewhat entangled with one another , such that a careful analysis must be done when varying the parameters of the simulation .this is discussed in details in section [ sec:6 ] .our study focuses on two different areas in france . the first region is the massif central located between n and e , and consists of plateaus and low mountain range , see figure [ fig : auv ] .the second target area , much more hilly and mountainous , is the french alps with a portion of the mediterranean sea located at the limit of different countries and bounded by n and e , see figure [ fig : alpes ] .topography is obtained from the 30 m digital elevation model over france by ign , completed with bathymetry and srtm data .available surface gravity data in these areas , from the bgi ( international gravimetric bureau ) , are shown in figures [ fig : auv_bgi_dg_val][fig : alpes_bgi_dg_val ] .note that the bgi gravity data values are not used in this study , but only their spatial distribution in order to generate realistic distribution in the synthetic tests . in these figures, it is shown that the gravity data are sparsely distributed : the plain is densely surveyed while the mountainous regions are poorly covered because they are mostly inaccessible by the conventional gravity survey .the range of free - air gravity anomalies ( see e.g * ? ? ?* ; * ? ? 
?* ) which are quite large reflects the complex structure of the gravity field in these regions , which means that the gravitational field strength varies greatly from place to place at high - resolution .the scarcity of gravity data in the hilly regions is thus a major limitation in deriving accurate high - resolution geopotential model . here, we present the way to simulate our synthetic gravity disturbances and disturbing potentials by subtracting the gravity field long and short wavelengths influence of a high - resolution global geopotential model . the generation of the synthetic data and at the earth s topographic surface was carried out , in ellipsoidal approximation , with the fortran program geopot of the national geodetic survey ( ngs ) .the ellipsoidal normal field is defined by the parameters of the geodetic reference system grs80 . as input, we used the static global gravity field model eigen-6c4 .it is a combined model up to degree and order ( d / o ) containing satellite , altimetry , terrestrial gravity and elevation data . by using the spherical harmonics ( sh )coefficients up to d / o , it allows us to map gravity variations down to 10 km resolution . .our objective is to study how clocks can advance knowledge of the geoid beyond the resolution of the satellites . in a first step ,as illustrated in figure [ fig : filtren100 ] , the long wavelengths of the gravity field covered by the satellites are completely removed up to the degree ( 200 km resolution ) . between degree 101 and 583 ,the gravity field is progressively filtered using 3 poisson wavelets spectra , while its full content is preserved above degree 583 . in this way we realize a smooth transition between the wavelengths covered by the satellites and those constrained from the surface data . to subtract the terrain effects included in eigen-6c4 , we used the topographic potential model dv_ell_ret2012 truncated at d / o . complete up to d / o, this model provides in ellipsoidal approximation the gravitational attraction due to the topographic masses anywhere on the earth s surface .the results of this data reduction yields to the reference fields and for both regions , shown in figures [ fig : auv_tdg_geopot][fig : alpes_tdg_geopot ] .[ [ gravimetric - location - points - selection . ] ] * gravimetric location points selection*. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + our goal is to reproduce a realistic spatial distribution of the gravity points .the bgi gravity data sets contain hundreds of thousands points for the target regions ( see figure [ fig : auv_bgi_dg_val][fig : alpes_bgi_dg_val ] ) . in order to reduce the size of the problem and make it numerically more tractable, we build a distribution with no more than several thousand points from the original one .starting from the spatial distribution of the bgi gravity data sets , a grid of cells is built with a regular step of about 6.5 km .each cell contains points with .these points are replaced by one point which location is given by the geometric barycenter of the points , in the case that . if then there is no point in the cell .figures [ fig : coverage_tdg ] show the new distributions of gravimetric data for the massif central and the alps regions ; they have , respectively , and location points .these new spatial distributions reflect the initial bgi gravity data distribution but are be more homogeneous .they will be used in what follows .[ [ chronometric - location - points - selection . 
] ] * chronometric location points selection*. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we choose to put clock measurements only where existing land gravity data are located .indeed , these data mainly follow the roads and valleys which could be accessible for a clock comparison .then , we use a simple geometric approach in order to put clock measurements in regions where the gravity data coverage is poor .since the potential varies smoothly compared to the gravity field , a clock measurement is affected by masses at a larger distance than in the case of a gravimetric measurement .for that reason , a clock point will be able to constrain longer wavelengths of the geopotential than a gravimetric point . finally , in order to avoid having clocks too close to each other , we define a minimal distance between them .we chose greater than the correlation length of the gravity covariance function ( in this work km , see table [ tab : cov_stats ] ) .here we give more details about our algorithm to select the clock locations : 1 .first , we initialize the clock locations on the nodes of a regular grid with a fixed interval .this grid is included in the target region at a setback distance of about 30 km from each edge ( outside possible boundary effects ) ; 2 .secondly , we change the positions of each clock point to the position of the nearest gravity point from the grid , located in cell ( see the previous paragraph ) ; in cell are located points of the initial bgi gravity data distribution ; 3 . finally , we remove all the clock points located in cells where .this is a simple way to keep only the clock points located in areas with few gravimetric measurements .this method allows to simulate different realistic clock measurement coverages by changing the values of and .the number of clock measurements increases when the distance decreases or when the threshold increases , and vice versa .it is also possible to obtain different spatial distributions but the same number of clock measurements for different sets of and . in figure[ fig : coverage_tdg ] , we propose an example of clock coverage used hereafter for both target regions with 32 and 33 clock locations , respectively , in the massif central and the alps , corresponding to percent of the gravity data coverage . for the chosen distributions ,the value of is about 60 km and .[ [ synthetic - measurements - simulation . ] ] * synthetic measurements simulation*. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + for each data point , the synthetic values of and are computed by applying the data reduction presented in section [ ssec : data_synt ] .it is important to note that the location points of the simulated data are not necessarily at the same place than the estimated data .a gaussian white noise model is used to simulate the instrumental noise of the measurements .we chose , for the main tests in the next section , a standard deviation mgal for the gravity data and /s for the potential data . 
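A hedged sketch of this selection-and-simulation chain is given below: a regular seed grid with spacing d (which also enforces the minimum clock separation), snapping of each seed to the nearest existing gravity point, rejection of sites whose 6.5 km cell is densely surveyed, and finally noisy synthetic potential values at the retained sites. The region, the stand-in gravity-point distribution, the stand-in disturbing potential and the rejection threshold are illustrative, not the study's data; only the 0.1 m^2/s^2 clock noise matches the level just quoted.

```python
import numpy as np

rng = np.random.default_rng(1)
region = (0.0, 300.0)                                               # planar study area, km (toy)
gravity_pts = np.vstack([rng.uniform(*region, size=(2000, 2)),      # sparse background coverage
                         rng.uniform(0.0, 100.0, size=(3000, 2))])  # one densely surveyed corner

def select_clock_sites(pts, d=60.0, setback=30.0, cell=6.5, threshold=5):
    # step 1: regular seed grid, kept 'setback' km away from the region edges
    axis = np.arange(region[0] + setback, region[1] - setback + 1e-9, d)
    seeds = np.array([(x, y) for x in axis for y in axis])
    # per-cell counts of gravity points (proxy for local survey density)
    counts = {}
    for i, j in np.floor(pts / cell).astype(int):
        counts[(int(i), int(j))] = counts.get((int(i), int(j)), 0) + 1
    sites = []
    for s in seeds:
        k = int(np.argmin(np.sum((pts - s) ** 2, axis=1)))   # step 2: snap to nearest gravity point
        ci, cj = np.floor(pts[k] / cell).astype(int)
        if counts.get((int(ci), int(cj)), 0) <= threshold:   # step 3: keep sparsely covered cells only
            sites.append(pts[k])
    return np.array(sites)

clock_sites = select_clock_sites(gravity_pts)

def true_T(p):                                                # smooth stand-in disturbing potential, m^2/s^2
    return 5.0 * np.sin(p[:, 0] / 40.0) * np.cos(p[:, 1] / 55.0)

obs_T = true_T(clock_sites) + rng.normal(0.0, 0.1, len(clock_sites))   # clock data, 0.1 m^2/s^2 white noise
print(f"{len(clock_sites)} clock sites retained; first synthetic potential values: {obs_T[:3]}")
```

In this sketch the grid spacing d plays the part of the minimum clock separation discussed above, and the 0.1 m^2/s^2 white noise added at the end is the potential-data noise level referred to next.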
in terms of geoid height , the latter noise level is equivalent to 1 cm .other tests with different noise levels are discussed in section [ sec:6 ] .in this section , we present our numerical results showing the contribution of clock data in regional recovery of the geopotential from realistic data points distribution in the massif central and the alps .the reconstruction of the disturbing potential is realized from the synthetic measurements and , and by applying the least - squares collocation ( lsc ) method . [ [ ssec : lsc ] ] * planar least - squares collocation*.+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the lsc method , described in , is a suitable tool in geodesy to combine heterogeneous data sets in gravity field modelling . assuming that the measured values are linear functionals of the disturbing potential , this approach allows us to estimate any gravity field parameter based on from many types of observables .consider = l_k$ ] a data vector composed by data and data , affected by measurement errors , with .the estimation of the disturbing potential at point from the data can be performed with the relation with the covariance matrix of the measurement vector , the covariance matrix of the noise , the cross covariance matrix between the estimated signal and the data , and the tikhonov regularization factor , also called weight factor . in practice ,the data are synthesized as described in sections [ sec:3 ] and [ sec:4 ] .therefore , the measurement noise is known to be a gaussian white noise .noise and signal ( errorless part of ) are assumed to be uncorrelated , and the covariance matrix of the noise can be written as \ ] ] with the identity matrix of size . because can be very ill - conditioned, the matrix plays an important role in its regularization before inversion , since positive constant values are added to the elements of its main diagonal . to avoid any iterative process to find an optimum value of in case where this matrix is not definite positive , we chose to fix the weight factor and to apply a singular value decomposition ( svd ) to pseudo - inverse the matrix .as shown in , these two approaches are similar .[ [ ssec : efc ] ] * estimation of the covariance function*. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + implementation of the collocation method requires to compute the covariance matrices and .this step has been carried out using a logarithmic spatial covariance function from , see appendix [ anx : deffcf ] .this stationary and isotropic model is well - adapted to our analysis .indeed , it provides the auto - covariances ( acf ) and cross - covariances ( ccf ) of the disturbing potential and its derivatives in 3 dimensions with simple closed - form expressions .the spatial correlations of the gravity field are analyzed with the program gpfit .the variance is directly computed from the gravity data on the target area , and the parameters and ( see appendix [ anx : deffcf ] ) are estimated by fitting the a priori covariance function to the empirical acf of the gravity disturbances .results of the optimal regression analysis for both regions are given in figure [ fig : covariogram ] and table [ tab : cov_stats ] .knowing the parameter values of the covariance model , we can now estimate the potential anywhere on the earth s surface .[ [ contribution - of - clocks . ] ] * contribution of clocks . 
* + + + + + + + + + + + + + + + + + + + + + + + + + the contribution of clock data in the potential recovery is evaluated by comparing the residuals of two solutions to the reference potential on a regular grid interval of 10 km .the first solution corresponds to the errors between the estimated potential model computed solely from gravity data and the potential reference model , while the second solution uses combined gravimetric and clock data . to avoid boundary effects in the estimated potential recovery , a grid edge cutoff of 30km has been removed in the solutions . for the massif central region ,the disturbing potential is estimated with a bias ( 4.1 mm ) and a rms ( 2.5 cm ) using only the gravimetric data , see figure [ fig : auv_emc_tdg_from_t0dg_4art ] .when we now reconstruct by adding the 33 potential measurements to the gravimetric measurements , the bias is improved by one order of magnitude ( or mm ) and the standard deviation by a factor 3 ( or 7 mm ) , see figure [ fig : auv_emc_tdg_from_t33dg_56_15_4art ] . for the alps ,figure [ fig : alpes_t_recovery ] , the potential is estimated with a bias ( 2.3 cm ) and a standard deviation ( 3.9 cm ) using only the gravimetric data .when adding the 32 potential measurements , we note that the bias is improved by a factor 4 ( or mm ) and the standard deviation by a factor 2 ( or 1.8 cm ) .it should also be stressed that a trend appears in the reconstructed potential with respect to the original one when no clock data are added in both regions .this effect is discussed in section [ sec:6 ] .[ [ ssec : nbclockeffect ] ] * effect of the number of clock measurements . * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + figure [ fig : effect_nb_clock ] shows the influence of the number of clock data in the potential recovery , and thereby of their spatial distribution density .we vary the number and distribution of clock data by changing the mesh grid size , which represents the minimum distance between clock data points ( see section [ sec:4 ] ) .the particular cases shown in detail in section [ sec:5 ] are included .we characterize the performance of the potential reconstruction by the standard deviation and mean of the differences between the original potential on the regular grid and the reconstructed one .when increasing the density of the clock network , the standard deviation of the differences tends toward the centimeter level , for the massif central case , and the bias can be reduced by up to 2 orders of magnitude .note that we have not optimized the clock locations such as to maximize the improvement in potential recovery .the chosen locations are simply based on a minimum distance and a maximum coverage of gravity data ( c.f .section [ sec:4 ] ) .an optimization of clock locations would likely lead to further improvement , but is beyond the scope of this work and will be the subject of future studies .moreover , the results indicate that it is not necessary to have a large number of clock data to improve the reconstruction of the potential .we can see that only a few tens of clock data , i.e. less than 1 percent of the gravity data coverage , are sufficient to obtain centimeter level standard deviations and large improvements in the bias . when continuing to increase the number of clock data the standard deviation curve seems to flatten at the cm level .[ [ ssec : nbgraveffect ] ] * effect of the number of gravity measurements . 
* + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + [ [ ssec : syntdataeffect ] ] * covariance function consistency .* + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in figures [ fig : auv_emc_tdg_from_t0dg_4art ] and [ fig : alpes_emc_tdg_from_t0dg_4art ] , a trend appears in the residuals , but disappears when gravimetric and clock data are combined .this is due to the fact that the covariance function does not have the same spectral coverage as the data generated from the gravity field model eigen-6c4 .indeed , the covariance function contains low frequencies while we have removed them for the synthetic data .therefore , some low frequency content is present in the recovered potential .whilst the issue could be avoided by using a covariance parametric model from which we can remove the low frequency content in a perfectly consistent way with the data generation ( e.g. a closed - form tscherning - rapp model ) , it is not obvious that the corresponding results would allows realistic conclusions .indeed , the spectral content of real surface observations , after removal of lower frequencies from a global spherical harmonics model , may still retain some unknown low frequencies .as consequence , it is not obvious to match to that of a single covariance function , while perfect consistency can only be achieved from synthetic data .we chose to keep this mismatch , thereby investigating the interest of clocks for high - resolution geopotential determination when our prior knowledge on the surface data signal and noise components is not perfect .more detailed studies on this issue are considered beyond the scope of our paper , which presents a first step to quantify the possible use of clock measurements in potential recovery .[ [ ssec : noiseeffect ] ] * influence of the measurement noise . * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we have also investigated the effect of the noise levels applied to the synthetic data , see tables [ tab : effect_noise_auv][tab : effect_noise_alps ] , by using various standard deviations to simulate white noise of the measurements : for the clock measurements and mgal for the gravimetric measurements . these results were obtained for the same conditions as in section [ sec:5 ] , i.e. 33 ( resp .32 ) clock data points and 4374 ( resp .4959 ) gravity data points for the massif central ( resp .alps ) .we can see that adding clocks improves the potential recovery ( smaller standard deviation and bias of the residuals ) for both regions and whatever the noise of the gravimetric or clock measurements . decreasing the noise of the gravity data by up to two orders of magnitude only improves the standard deviation of the residuals of the recovered potential by comparatively small amounts ( less than a factor 2 ) .this is probably due to the fact that the covariance function does not reflect the gravity field correctly in these regions , combined with a limited data coverage .note that the low frequency content in the covariance function ( see above ) is unlikely to be the main cause here , as the comparatively small reduction of is also observed when clocks are present in spite of the fact that they remove the low frequency trend ( c.f . 
figures [ fig : auv_emc_tdg_from_t33dg_56_15_4art ] and [ fig : alpes_emc_tdg_from_t32dg_50_3_4art ] ) .* 1c * 8s[table - format = -1.1e2 ] & & & & + ( r)1 - 1 ( r)2 - 3 ( r)4 - 5 ( r)6 - 7 ( ) 8 - 9 & & & & & & & & + ( r)1 - 1 ( lr)2 - 2(lr)3 - 3 ( lr)4 - 4(lr)5 - 5 ( lr)6 - 6(lr)7 - 7 ( lr)8 - 8(lr)9 - 9 & 2.2e-1 & 3.7e-1 & 4.1e-2 & 2.5e-1 & 1.5e-1 & 1.7e-1 & 2.6e-1 & 1.8e-1 + ( lr)1 - 1 ( lr)2 - 2(lr)3 - 3 ( lr)4 - 4(lr)5 - 5 ( lr)6 - 6(lr)7 - 7 ( lr)8 - 8(lr)9 - 9 1 & -4.4e-3 & 2.8e-1 & -1.8e-4 & 1.7e-1 & -1.1e-2 & 1.6e-1 & -2.0e-2 & 1.7e-1 + ( lr)1 - 1 ( lr)2 - 2(lr)3 - 3 ( lr)4 - 4(lr)5 - 5 ( lr)6 - 6(lr)7 - 7 ( lr)8 - 8(lr)9 - 9 0.1 & -1.4e-2 & 2.0e-1 & -2.4e-3 & 7.3e-2 & -6.7e-3 & 5.2e-2 & -1.1e-3 & 4.8e-2 + * 1c * 8s[table - format = -1.1e2 ] & & & & + ( r)1 - 1 ( r)2 - 3 ( r)4 - 5 ( r)6 - 7 ( ) 8 - 9 & & & & & & & & + ( r)1 - 1 ( lr)2 - 2(lr)3 - 3 ( lr)4 - 4(lr)5 - 5 ( lr)6 - 6(lr)7 - 7 ( lr)8 - 8(lr)9 - 9 & 5.8e-1 & 6.6e-1 & 2.2e-1 & 3.9e-1 & 2.1e-1 & 4.2e-1 & 2.1e-1 & 4.2e-1 + ( lr)1 - 1 ( lr)2 - 2(lr)3 - 3 ( lr)4 - 4(lr)5 - 5 ( lr)6 - 6(lr)7 - 7 ( lr)8 - 8(lr)9 - 9 1 & 1.8e-1 & 6.2e-1 & 1.4e-1 & 3.4e-1 & 1.2e-1 & 3.3e-1 & 1.2e-1 & 3.3e-1 + ( lr)1 - 1 ( lr)2 - 2(lr)3 - 3 ( lr)4 - 4(lr)5 - 5 ( lr)6 - 6(lr)7 - 7 ( lr)8 - 8(lr)9 - 9 0.1 & 2.0e-1 & 5.6e-1 & 6.8e-2 & 1.7e-1 & 4.7e-2 & 1.5e-1 & 1.7e-2 & 1.6e-1 + when adding clocks the standard deviations are decreased by up to a factor 3.7 with low clock noise ( 0.1 or 1 cm ) and a factor 1.5 with higher clock noise ( 1 or 10 cm ) . the effect is stronger in the massif central region than in the alps . we attribute this again to the mismatch between the covariance function and the complex structure of the gravity field , which is larger in the alps . thus , optical clocks with just an accuracy of 1 ( or 10 cm ) are interesting no matter what the gravity data quality . with an accuracy of 0.1 ( or 1 cm ) , we can expect a gain of up to a factor 4 in the estimated potential with respect to simulations using no clock data .[ [ ssec : aliasing ] ] * aliasing of the very high - resolution components . *+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +optical clocks provide a tool to measure directly the potential and determine the geopotential at high spatial resolution .we have shown that the recovery of the potential from gravity and clock data with the lsc method can improve the determination of geopotential at high spatial resolution , beyond what is available from satellites .compared to a solution that does not use the clock data , the standard deviation can be improved by a factor 3 , and the bias can be reduced by up to 2 orders of magnitude with only a few tens of clock data . this demonstrates the benefit of this new potential geodetic observable , which could be put in practice in the medium term when the first transportable optical clocks and appropriate time transfer methods will be developed ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?since clocks are sensitive to low frequencies of the gravity field , this method is particularly well - adapted in hilly and mountainous regions for which the gravity coverage is more sparsely distributed , allowing to fill areas not covered by the classical geodetic observables ( gravimetric measurements ) . additionally , adding new observables helps to reduce the modelling errors , e.g. coming from a mismatch between the covariance function used and the real gravity field . 
to our knowledge, this is the first detailed quantitative study of the improvement in field determination that can be expected from chronometric geodesy observables .it provides first estimates and paves the way for future more detailed and in depth works in this promising new field . to overcome some limitations in the a priori model ,as discussed in the previous section , we intend in a forthcoming work to investigate in more details the imperfections of the covariance function model .moreover , as the gravity field is in reality non - stationary in mountainous areas or near the coast , some numerical tests with non - stationary covariance functions will be conducted .another promising source of improvement could be the optimization of the positioning of the clock data .for example , the correlation lengths and the variations of the gravity field could be used as constraints .a genetic algorithm could also be considered to solve this location problem .finally , it will be interesting to focus on the improvement of the potential recovery quality by combining other types of observables such as leveling data and gradiometric measurements .as knowledge of the geopotential provides access to height differences , this could be a way to estimate errors of the gnss technique for the vertical positioning , or contribute to regional height systems unification .we thank ren forsberg for providing us the fortran code of the logarithmic covariance function .we gratefully acknowledge financial support from labex first - tf , erc adoc ( grant n 617553 and emrp itoc ( emrp is jointly funded by the emrp participating countries within euramet and the european union ) .we thank olivier jamet and matthias holschneider for discussions about the collocation method .let us consider two points and with the cartesian coordinates and , respectively . to compute the acf and ccf of the disturbing potential and its derivatives , proposed a planar attenuated logarithm covariance model with upward continuation that can be expressed in the generic form with this modelis characterized by three parameters : the variance of the gravity disturbance and two scale factors acting as high and low frequency attenuators : the shallow depth parameter and the compensating depth , respectively .the function is logarithmic function modelling the covariances between the gravity field quantities .for example , by putting and , the acf of and can be evaluated respectively with and the ccf between and with
recent technological advances in optical atomic clocks are opening new perspectives for the direct determination of geopotential differences between any two points at a centimeter - level accuracy in geoid height . however , so far detailed quantitative estimates of the possible improvement in geoid determination when adding such clock measurements to existing data are lacking . we present a first step in that direction with the aim and hope of triggering further work and efforts in this emerging field of chronometric geodesy and geophysics . we specifically focus on evaluating the contribution of this new kind of direct measurements in determining the geopotential at high spatial resolution ( ) . we studied two test areas , both located in france and corresponding to a middle ( massif central ) and high ( alps ) mountainous terrain . these regions are interesting because the gravitational field strength varies greatly from place to place at high spatial resolution due to the complex topography . our method consists in first generating a synthetic high - resolution geopotential map , then drawing synthetic measurement data ( gravimetry and clock data ) from it , and finally reconstructing the geopotential map from that data using least squares collocation . the quality of the reconstructed map is then assessed by comparing it to the original one used to generate the data . we show that adding only a few clock data points ( less than 1 % of the gravimetry data ) reduces the bias significantly and improves the standard deviation by a factor 3 . the effect of the data coverage and data quality on the results is investigated , and the trade - off between the measurement noise level and the number of data points is discussed . * keywords . * chronometric geodesy ; high spatial resolution ; geopotential ; gravity field ; atomic clock ; least - squares collocation ( lsc ) ; stationary covariance function
the notion of _ critical mass _ in research has been around for a long time without proper definition .as governments , funding councils and universities seek indicators to measure research quality and to pursue greater efficiencies in the research sector , critical mass is becoming an increasingly important concept at managerial and policy - making level .however , until very recently there have been no successful attempts to quantify this notion ( harrison , 2009 ) .it has been described by evidence ( 2010 ) as `` some minimum size threshold for effective performance '' and , as such , has been linked to the idea that benefit accrues through increase of scale of research groups . however , although evidence ( 2010 ) demonstrated `` a relationship of some kind between larger units and relatively high citation impact '' , indications of such a threshold have been lacking .we recently presented a model for the relationship between quality of research groups and their quantity ( kenna and berche , 2010a ) .this model was inspired by mean - field theories of statistical physics and allowed for a quantitative definition of critical mass .in fact there are two critical masses in research and their values are discipline dependent . instead of a threshold group sizeabove which research quality improves , we have shown that there is a breakpoint or _ upper critical mass _ beyond which the linear dependency of research quality on group quantity reduces . denoting this value by , we showed that the strength of the overall research sector in a given discipline is improved by supporting groups whose size are less than , provided they are bigger than a second critical mass , which we denote by .groups whose size are smaller than are vunerable and should seek to achieve the lower critical mass for long - term viability .the two critical masses are related by a scaling relation , we classify research groups of size within a given discipline as small , medium and large according to whether , or , respectively .we recently determined the critical masses of a multitude of academic disciplines by applying statistical analyses to the results of the uk s most recent _ research assessment exercise _( rae ) in which the quality of research groups were measured ( kenna and berche , 2010b ) . notably absent from our analaysis , however , were the statistics and operational research groups , as these were less straightforward to analyse than other subject areas .here we rectify this omission by a careful analysis of these disciplines .our main result is that the lower critical mass , which statistics and operational research groups should attain to be viable in the long term , is in section 2 we summarize our model and how we derive critical masses from it .we also discuss the research assessment exercise . 
in section3 we apply the model and statistical analysis to the results of the rae for statistics and operational research groups .we conclude in section 4 , where implications for policy and management are briefly discussed .our model is based on the idea that research groups are _ complex systems _ , for which the properties of the whole are not simple sums of the corresponding properties of the individual parts .instead , interactions between individuals within research groups have to also be taken into account .the strength of an individual within a research group is a function of many factors : their intrinsic calibre and training , their teaching and administrative loads , library facilities , journal access , extramural collaboration , the quality of management , and even confidence gained by previous successes as well as the prestige of the institution and other factors .we denote the average individual research strength within the research group in a given academic discipline , resulting from all of these ( and any other ) factors by .the overall calibre of a research group comprising individuals is also dependent on the extent of , and strength of , the communication links between them .we denote the average strength of the interactions between the individuals in the group by .the overall strength of the group is therefore given by however , once the size of a research group becomes too large ( say above a cutoff value ) , meaningful communication between _ all _ pairs of individuals becomes impossible . in this case, the group may fragment into subgroups , of average size , say .if the average strength of interaction between the subgroups is , the overall strength of the group becomes we denote by the _ expected _ strength of a group of size and we define the quality of such a research group to be the average strength per head : gathering terms of the same order in , we arrive at a form for the _ expected _ dependency of research - group quality on research - group quantity , we considered the effect on the overall strength of a discipline by adding new researchers ( kenna and berche , 2010a ) . asking the question whether it is better , on average , to allocate new researchers to a group with or members , we found that the latter is preferable provided , where is given by eq.([scaling ] ) .this is equivalent to maximising the gradient of the strength function .we also considered the consequences of transferring researchers from large to small / medium groups and found that such a movement is expected to be beneficial to society as a whole , provided the recipient group is not too small ( i.e. , provided , again , that it has over members ) .thus there are two critical masses in research , which we name _ lower _( ) and _ upper _ ( ) . of these ,the former corresponds more closely to the traditional , intuitive notion of critical mass , although there is no threshold value beyond which research quality suddenly improves ( evidence , 2010 ) . 
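A minimal numerical sketch of this piecewise-linear picture: synthetic (N, s) pairs are generated from two linear branches joined at a true breakpoint, and the upper critical mass is recovered by scanning candidate breakpoints with independent least-squares lines on either side; the lower critical mass is then read off from the scaling relation, taken here as half the upper value (the relation quoted, but stripped, in the text above). The data, parameters and fitting details are illustrative, not the RAE analysis itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def synthetic_groups(n_groups=30, nc_true=25, a1=10.0, b1=1.5, b2=0.3, noise=4.0):
    # quality grows linearly up to the breakpoint nc_true, then with a smaller slope
    n = rng.integers(4, 60, n_groups)
    s = np.where(n <= nc_true,
                 a1 + b1 * n,
                 a1 + b1 * nc_true + b2 * (n - nc_true))
    return n, s + rng.normal(0.0, noise, n_groups)

def fit_breakpoint(n, s, candidates):
    best = (np.inf, None)
    for nc in candidates:
        lo, hi = n <= nc, n > nc
        if lo.sum() < 3 or hi.sum() < 3:
            continue
        rss = 0.0
        for mask in (lo, hi):                       # independent straight-line fits per branch
            coef = np.polyfit(n[mask], s[mask], 1)
            rss += np.sum((np.polyval(coef, n[mask]) - s[mask]) ** 2)
        if rss < best[0]:
            best = (rss, nc)
    return best[1]

n, s = synthetic_groups()
nc_hat = fit_breakpoint(n, s, candidates=range(8, 50))
print(f"estimated upper critical mass N_c ~ {nc_hat}, lower critical mass N_k ~ {nc_hat / 2:.1f}")
```

A continuity constraint at the breakpoint, or a weighting of groups by size, could be added to the branch fits; the scan over candidate breakpoints is the only part that matters for locating the upper critical mass.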
to implement the model ( [ nc ] ), we require a set of empirical data on the quality and quantity of research groups .the rae is an evaluation process undertaken approximately every 5 years on behalf of the funding bodies for universities in the uk .the results of the rae are used to allocate funding to such higher education institutes for the subsequent years .the last rae was carried out in 2008 .research groups were examined to determine the proportion of research submitted categorized as follows : * 4 * : quality that is world - leading in terms of originality , significance and rigour * 3 * : quality that is internationally excellent in terms of originality , significance and rigour but which nonetheless falls short of the highest standards of excellence * 2 * : quality that is recognised internationally in terms of originality , significance and rigour * 1 * : quality that is recognised nationally in terms of originality , significance and rigour * unclassified : quality that falls below the standard of nationally recognised work .a formula is then used to determine how funding is distributed to research groups .the 2009 formula used by the higher education funding council for england weighs each rank in such a way that 4 * and 3 * research respectively receive seven and three times the amount of funding allocated to 2 * research , and 1 * and unclassified research attract no funding .this funding formula may therefore be considered to represent a measurement of quality of each research group .( in 2010 , after lobbying by the larger , research intensive universities the english funding formula was changed so that 4 * research receives nine times the funding allocated to 2 * research .we have checked that the 2010 formula produces no significant change to the results presented here . 
) from the outset , we acknowledge that there are obvious assumptions underlying our analysis and limits to what can be achieved .firstly , we use the term `` group '' in the sense of rae .this means the collection of staff included in a submission to one of the 67 _ units of assessment _ ( uoa s ) .rae groups are not always identical to administrative departments within universities , but we assume that they represent a coherent group for research purposes .individuals submitted to rae are drawn from academic staff who were in post and on the payroll of the submitting higher education institution on the census date ( 31 october 2007 ) .we assume that the rae process is fair and unbiased and that the scores are reasonably reliable and robust .deviations from these assumptions contribute to noise in the system .statistical analyses and a list of the critical masses for a variety of academic disciplines ( not including statistics and operational research ) are given in ( kenna and berche , 2010b ) .in the next section , we perform a similar analysis for the statistics and operational research groups submitted to rae 2008 .the statistics and operational research uoa at rae 2008 included theoretical , applied and methodological approaches to statistics , probability and operational research .there were 30 submissions comprising 388.8 individuals ( with fractions corresponding to part - time staff ) and group sizes ranged from to , with mean group size 13 .we find it useful to compare to the applied mathematics uoa because of the high degree of overlap between the two disciplines .there were 45 submissions in applied mathematics entailing 850.05 individuals in groups of size to with mean group size .the 30 submissions for statistics and operational research are listed in table 1 .also listed are the numbers of staff submitted and the resultant quality score ..universities which submitted to the statistics and operational research uoa at rae 2008 , listed alphabetically together with the numbers of staff submitted and quality measurements . [ cols= " > , < , > , > " , ] to further compare statistics and operational research to applied mathematics, we plot the sets of data corresponding to both uoa s in fig .2(b ) together with the fits coming from the model ( [ nc ] ) .the similarities in their critical masses are evident , as are the similarities between slopes of the piecewise linear fits , although that for statistics and operational research is shifted slightly above that for applied mathematics , indicating a consistently better average performance for comparably sized groups or problems with the rae due to the absence of a systematic approach to normalize scores between disciplines .we believe the latter is the more likely scenario . in any case , it is clear that in comparison to applied mathematics , there are relatively few statistics and operational research teams in the uk and , of those , there are even fewer which are supercritical ( and therefore operating with sufficient resources ) in size .this suggests that greater investment in this subject area is required to achieve optimal research efficiency . given in eq.([nc ] ) .the tighter distribution of the data about the line in ( b ) demonstrates the validity of the model . in both plots ,the abscissae index the universities listed alphabetically in table 1.,title="fig : " ] given in eq.([nc ] ) .the tighter distribution of the data about the line in ( b ) demonstrates the validity of the model . 
in both plots ,the abscissae index the universities listed alphabetically in table 1.,title="fig : " ] to illustrate the superiority of the model over the alternative idea that there is no relationship between quality and quantity in research , we plot in fig .3 the deviations of the data from the predictions coming from both scenarios . in each casethe data are plotted against the index values listed in table 1 , which correspond to an alphabetical ordering of the institutes which submitted to the statistics and operational research uoa . in fig .3(a ) , the differences between the quality scores and the mean quality value of the 30 research groups are plotted .the range and standard deviation corresponding to this plot are 43.6 and 10.5 respectively ( 43.6 and 10.7 if edinburgh / heriot - watt is excluded ) . in fig .3(b ) , the deviations from the expectation values coming from the model ( [ nc ] ) are plotted .the range and standard deviation associated with this plot ( excluding edinburgh / heriot - watt ) are 26.1 and 6.7 , respectively .the tighter distribution of the data in fig .3(b ) over fig .3(a ) illustrates the validity of the model .plots of the type given in fig .3(a ) form the basis on which research groups are ranked post rae , with teams above and below the line deemed to be performing above and below average , respectively . however, such rankings do not compare like with like as they fail to take size , and hence resources , into account .we suggest that fig .3(b ) forms the basis of a better system as in this plot , performances are compared to the averages for teams of given sizes .3(b ) takes size into account and gives a better indication of which groups are punching above and below their weights .to summarise , we have applied a mean - field inspired model to examine the relationship between the quality of research teams in statistics and operational research and the quantity of researchers in those teams .our empirical data is taken from the most recent research assessment exercise in the uk .we find that , when an outlying amalgamated group is omitted the dependency of quality upon quantity for this subject area is similar to , and consistent with , a multitude of other disciplines which were reported on in ( kenna and berche , 2010b ) .the model allows the definition of two critical masses for the discipline .the research quality of small ( ) and medium ( ) teams is strongly dependent on the number of researchers in the group . beyond ,large teams tend to fragment and research quality is no longer correlated with group size .the lower critical mass for statistics and operational research is determined to be , and the upper value is about twice that .these values compare satisfactorily to the equivalent for applied mathematics which has .to further contextualize these values , we quote from kenna and berche ( 2010b ) the results for pure mathematics ( a relatively solitary research discipline ) and for medical sciences ( a highly collaborative one ) . notwithstanding the fact that some statisticians and operational researchers were submitted to rae 2008 as part of teams in other disciplines such as business , economics , engineering and epidemiology , about a quarter of statistics / operational research _groups _ submitted to rae are sub - critical , with , and therefore vulnerable .these teams need to strive to attain critical mass . 
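the like - with - like comparison of fig . 3 described above can be summarised in a few lines : residuals about the plain mean ( fig . 3(a ) ) ignore group size , while residuals about the size - dependent expectation of the model ( fig . 3(b ) ) compare each team with the average for teams of its size . the sketch below uses synthetic numbers ( an assumed breakpoint and slopes ) rather than the actual rae scores , and simply reports the range and standard deviation of both residual sets .

```python
import numpy as np

# purely illustrative numbers -- not the actual rae quality scores
rng = np.random.default_rng(1)
sizes = rng.integers(3, 35, size=30).astype(float)
nc = 18.0                                    # assumed breakpoint for the sketch
expectation = np.where(sizes <= nc, 12 + 2.2 * sizes, 32 + 1.1 * sizes)
quality = expectation + rng.normal(scale=5.0, size=sizes.size)

# fig. 3(a)-style residuals: deviations from the overall mean (size ignored)
resid_mean = quality - quality.mean()
# fig. 3(b)-style residuals: deviations from the size-dependent expectation
resid_model = quality - expectation

for label, r in (("about the mean", resid_mean), ("about the model", resid_model)):
    print(f"residuals {label}: range = {r.max() - r.min():.1f}, "
          f"std = {r.std(ddof=0):.1f}")
```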
of the 29 teams excluding the edinburgh / heriot - watt combination , only five ( 17% ) have a size above the upper critical mass of . therefore the majority of statistics and operational research teams within the uk are under - resourced in terms of staff numbers . we suggest that , to increase research efficiency for this discipline , investment is needed . this conclusion parallels that of smith and staetsky ( 2007 ) for the teaching of statistics in the uk . * references * harrison , m. ( 2009 ) does high quality research require critical mass ? in _ the question of r&d specialisation : perspectives and policy implications _ ( eds d. pontikakis , d. kriakou and r. van baval ) , pp . 57 - 59 . european commission : jrc technical and scientific reports .
using a recently developed model , inspired by mean field theory in statistical physics , and data from the uk s research assessment exercise , we analyse the relationship between the quality of statistics and operational research groups and the quantity of researchers in them . as in other academic disciplines , we provide evidence for a linear dependency of quality on quantity up to an upper critical mass , which is interpreted as the average maximum number of colleagues with whom a researcher can communicate meaningfully within a research group . the model also predicts a lower critical mass , which research groups should strive to achieve to avoid extinction . for statistics and operational research , the lower critical mass is estimated to be . the upper critical mass , beyond which research quality does not significantly depend on group size , is about twice this value .
random fluctuations in nonlinear systems in engineering and science are often non - gaussian . for instance , it has been argued that diffusion by geophysical turbulence corresponds to a series of pauses " , when the particle is trapped by a coherent structure , and flights " or jumps " or other extreme events , when the particle moves in the jet flow .paleoclimatic data also indicate such irregular processes .there are also experimental demonstrations of lvy flights in foraging theory and rapid geographical spread of emergent infectious disease .humphries et . al . used gps to track the wandering black bowed albatrosses around an island in southern indian ocean to study the movement patterns of searching food .they found that by fitting the data of the movement steps , the movement patterns obeys the power - law property with power parameter . to get the data set of human mobility that covers all length scales , brockmann collected data by online bill trackers , which give successive spatial - temporal trajectories with a very high resolution .when fitting the data of probability of bill traveling at certain distances within a short period of time ( less than one week ) , he found power - law distribution property with power parameter , and observed that lvy motions are strikingly similar to practical data of human influenza .lvy motions are thought to be appropriate models for a class of important non - gaussian processes with jumps .recall that a lvy motion , or , is a stochastic process with stationary and independent increments .that is , for any with , the distribution of only depends on , and for any , the random variables , , are independent .a lvy motion has a version whose sample paths are almost surely right continuous with left limits .stochastic differential equations ( sdes ) with non - gaussian lvy noises have attracted much attention recently . to be specific ,let us consider the following n - dimensional _ state system _ : where is a vector field ( also called a drift ) , and is a symmetric lvy motion ( ) , defined in a probability space .assume that we have either + ( i ) a discrete time m - dimensional _ observation system _ : where is a white sequence of gaussian random variables , i.e. s are mutually independent standard normal random variables , and is a sequence of nonnegative numbers ; + or + ( ii ) a continuous time m - dimensional _ observation system _: where is a given vector field and is a brownian motion . in the present paper ,we estimate system states , with help of observations , and in particular , we try to capture transitions between metastable states by examining most probable paths for system states .this paper is organized as follows .we consider state estimates with discrete time and continuous time observations in sections 2 and 3 , respectively .to demonstrate our ideas , we consider the following scalar sde driven by a symmetric -stable lvy motion together with observations are taken at discrete time instants as follows where is a white sequence of gaussian random variables . for ,a symmetric -stable lvy motion has the generating triplet , where the jump measure with given by the formula . for more information see , the generator for the solution process in is \ ; \nu_\alpha({\rm d}y).\end{aligned}\ ] ] in fact , this linear operator is a nonlocal laplace operator ( ( * ? ? ?7 ) ) , denoted also by , for .the generator carries crucial information about the system state , and hence will be useful in our investigation of state estimation . 
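sample paths of the state system ( [ state1 ] ) can be generated with a standard euler scheme in which the lévy increment over a step of length dt is drawn as dt^(1/alpha) times a standard symmetric alpha - stable variate ( chambers - mallows - stuck method ) . in the sketch below the drift , the noise intensity and the parameter values are assumptions chosen only to produce a double - well system with two metastable states ; they are not the coefficients used in the examples of the following sections .

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """standard symmetric alpha-stable variates via the chambers-mallows-stuck
    method (skewness beta = 0, unit scale)."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * v) / np.cos(v) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * v) / w) ** ((1.0 - alpha) / alpha))

def euler_levy(drift, x0, t_end, dt, alpha, eps, rng):
    """euler scheme for dX_t = b(X_t) dt + eps dL_t^alpha, the alpha-stable
    increments being scaled by dt**(1/alpha)."""
    n = int(t_end / dt)
    x = np.empty(n + 1)
    x[0] = x0
    jumps = eps * dt ** (1.0 / alpha) * symmetric_stable(alpha, n, rng)
    for k in range(n):
        x[k + 1] = x[k] + drift(x[k]) * dt + jumps[k]
    return x

# assumed double-well drift with metastable states near -1 and +1 (illustrative)
drift = lambda x: x - x ** 3
rng = np.random.default_rng(42)
path = euler_levy(drift, x0=-1.0, t_end=10.0, dt=1e-3,
                  alpha=1.5, eps=0.3, rng=rng)
crossings = int(np.sum(np.diff(np.sign(path)) != 0))
print(f"final state {path[-1]:.2f}, sign changes along the path: {crossings}")
```

counting sign changes gives a quick diagnostic of jump - induced transitions between the two wells , the very phenomenon the conditional density and the most probable orbits are meant to capture .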
also note that the non - gaussianity of the lvy noise manifests as nonlocality ( an integral term ) in the generator .the adjoint operator for the generator is \ ; \nu_\alpha(dy).\end{aligned}\ ] ] the fokker - planck equation for the sde ( [ state1 ] ) is ( ) : \ ; \nu_\alpha(dy).\end{aligned}\ ] ] denote .similarly as in , we have the following theorem which determines the time evolution of the conditional probability density function . for convenience , we often write for .( conditional density for continuous - discrete problems ) .let system ( [ state1 ] ) satisfy the hypotheses that is lipschitz in space and the initial state , with the property , is independent of \}$ ] .suppose that the prior density for([state1 ] ) exists and is once continuously differentiable with respect to and twice with respect to .let be continuous in both arguments and bounded for each with probability .+ then , between observations , the conditional density satisfies the fokker - planck equation where is the operator in ( [ ccdd ] ) . at an observation ,the conditional density satisfies the following difference equation where is ^t r_k^{-1}[y_k - h(x , t_k)]\}.\ ] ] the conditional density in the absence of observation , satisfies the fokker - planck equation .therefore , between observations , conditional density satisfies the fokker - planck equation ( [ kolmog ] ) .thus , it remains to determine the relationship between and since , we have by bayes rule now , since the noise is white , similarly , we compute therefore , this completes the proof .this theorem provides the foundation for computing conditional density for system state of sde ( [ state1 ] ) , under discrete time observations .define .+ this provides the most probable orbit ( ) starting at .these most probable orbits are the maximal likely orbits for a dynamical system under noisy fluctuations .let us consider an example .[ discrete ] let us consider a scalar system with state equation the discrete - time scalar observation is : with , and . in the absence of lvy noise , this system has two stable states : and . when the noise kicks in , these two states are no longer fixed .the random system evolution near these two states , together with possible transitions between them , is sometimes called a metastable phenomenon . for convenience ,we call and ( and random motions nearby ) metastable states .the corresponding nonlocal fokker - planck equation is computed on with a finite difference scheme ( ) under the natural boundary condition .space stepsize and time stepsize .the initial probability density is taken either as a gaussian distribution or a uniform distribution . in figures[ discrete1 ] and [ discrete2 ] , we show the conditional density , together with the corresponding most probable orbit ( taken as the state estimation for ) , together with observations and a true state path .the initial density is either gaussian or uniform , but centered at the metastable state .notice that the estimated state captures the transitions from to and then back to , during the time period . conditional probability and state estimation when .the initial probability density is gaussian centered at .online version : the red curve in the right panel is the estimate state orbit ( i.e. , most probable orbit ) . , title="fig:",width=234,height=188 ] conditional probability and state estimation when .the initial probability density is gaussian centered at .online version : the red curve in the right panel is the estimate state orbit ( i.e. 
, most probable orbit ) . , title="fig:",width=234,height=188 ] conditional probability and state estimation when .the initial probability density is uniform on .online version : the red curve in the right panel is the estimate state orbit ( i.e. , most probable orbit ) ., title="fig:",width=234,height=188 ] conditional probability and state estimation when .the initial probability density is uniform on .online version : the red curve in the right panel is the estimate state orbit ( i.e. , most probable orbit ) . , title="fig:",width=234,height=188 ]we consider the following scalar _ state system _ with a symmetric lvy motion together with a continuous time scalar _ observation system _ : where is a given vector field and is a brownian motion. where is the adjoint operator of the generator : \ ; \nu_\alpha({\rm d}y),\end{aligned}\ ] ] and is the initial density of ( say a uniform distribution near the metastable state ) . the zakai equation may be numerically solved with a finite difference method based on together with a discretization of the noisy term at the current space - time point and .the initial probability density is taken either as a gaussian distribution or a uniform distribution .for other numerical methods , see , for example , . the conditional density provides information for the system evolution . with the observation , we can infer possible transitions from the metastable state to the metastable state , within a time range .if the system starts with a probability distribution near the metastable state , then the conditional density helps us to infer whether the system will get near the other metastable state , and vice versa .this may be achieved by examining the most probable orbits for the system , under the observation .define .+ this provides the most probable orbit ( ) starting at .we take this as our state estimation for , as in .[ continuous ] let us consider the following scalar sde state equation with a symmetric lvy motion : the scalar observation equation is given by when noise is absent , the state system has two stable states : and . in figures ( [ continuous1 ] ) - ( [ continuous2 ] ) , we show the conditional density , together with the corresponding most probable orbit ( taken as the state estimation for ) , together with a true state path .the initial density is either gaussian or uniform , but centered at the metastable state .notice that the estimated state captures the transitions from to , then back to , and finally to , during the time period . conditional probability ( left ) and state estimation ( right ) when .the initial probability density is gaussian centered at .,title="fig:",width=234,height=188 ] conditional probability ( left ) and state estimation ( right ) when .the initial probability density is gaussian centered at .,title="fig:",width=234,height=188 ] conditional probability ( left ) and state estimation ( right ) when .the initial probability density is uniform on .,title="fig:",width=234,height=188 ] conditional probability ( left ) and state estimation ( right ) when .the initial probability density is uniform on .,title="fig:",width=234,height=188 ] b. grigelionis and r. mikulevicius , nonlinear filtering equations for stochastic processes with jumps . in _ the oxford handbook of nonlinear filtering _, d. crisan and b. l. rozovskii ( eds . ) , oxford university press , p. 95 - 128 , 2011 .s. popa and s. s. 
sritharan , nonlinear filtering of itô - lévy stochastic differential equations with continuous observations . _ communications on stochastic analysis _ , vol . 3 , ( 2009 ) , pp . 313 - 330 . d. schertzer , m. larchevêque , j. duan , v. yanovsky and s. lovejoy , fractional fokker - planck equation for nonlinear stochastic differential equations driven by non - gaussian lévy stable noises . _ j. math . _ , * 42 * ( 2001 ) , 200 - 212 . w. a. woyczynski , lévy processes in the physical sciences . in _ lévy processes : theory and applications _ , o. e. barndorff - nielsen , t. mikosch and s. i. resnick ( eds . ) , 241 - 266 , birkhäuser , boston , 2001 .
a goal of data assimilation is to infer stochastic dynamical behaviors with available observations . we consider transition phenomena between metastable states for a stochastic system with ( non - gaussian ) lévy noise . with either discrete time or continuous time observations , we infer such transitions by solving the corresponding nonlocal zakai equation ( and its discrete time counterpart ) and examining the most probable orbits for the state system . examples are presented to demonstrate this approach . * short title : * transitions in non - gaussian stochastic systems + * key words : * nonlocal zakai equation ; nonlocal laplace operator ; non - gaussian noise ; transitions between metastable states ; most probable orbits * pacs * ( 2010 ) : 05.40.ca , 02.50.fz , 05.40.fb , 05.40.jc
medical scanner is the most used application of tomographic reconstruction .it allows to explore the interior of a human body . in the same way, industrial tomography explores the interior of an object and is often used for non - destructive testing .we are interested here in a very specific application of tomographic reconstruction for a physical experiment described later .the goal of this experiment is to study the behavior of a material under a shock .we obtain during the deformation of the object an x - ray radiography by high speed image capture .we suppose this object is radially symmetric , so that one radiograph is enough to reconstruct the 3d object .several authors have proposed techniques ( standard in medical tomography ) for tomographic reconstruction when enough projections ( from different points of view ) are available : this allows to get an analytic formula for the solution ( see for instance or ) .these methods can not be used directly when only a few number of projections is known .some alternative methods have been proposed in order to partially reconstruct the densities ( see for instance ) .we are interesting here in single view tomographic reconstruction for radially symmetric object ( see for instance for a more complete presentation of the subject ) . as any tomographic reconstruction, this problem leads to an ill - posed inverse problem .as we only have one radiograph , data are not very redundant and the ill - posed character is even more accurate .we present here a tomographic method adapted to this specific problem , originally developed in , and based on a curve evolution approach .the main idea is to add some a priori knowledge on the object we are studying in order to improve the reconstruction .the object may then be described by a small set of characters ( in this case , they will be curves ) which are estimated by the minimization of an energy functional .this work is very close to another work by feng and al .the main difference is the purpose of the work : whereas they are seeking recovering textures , we are looking for accurate edges .it is also close to the results of bruandet and al .however , the present work handles very noisy data and highly unstable inverse problems , and shows how this method is powerful despite these perturbations .further , we take here into account the effects of blur ( which may be non - linear ) and try to deconvolve the image during the reconstruction .let us mention at this point that our framework if completely different from the usual tomographic point of view , and usual techniques ( such as filtered back - projection ) are not adapted to our case .indeed , usually , as the x - rays are supposed to be parallel ( this is also the case here ) , the `` horizontal '' slices of the object are supposed to be independent and studied separately .usual regularization techniques deal with one slice and regularize this particular slice . here , because of the radial symmetry , the slices are composed of concentric annulus and do not need any regularization .the goal of this work is to add some consistency between the slices in order to improve the reconstruction .the paper is organized as follows .first we present the physical experiment whose data are extracted and explain what are the motivations of the work .next , we introduce the projection operator . 
in section 4 ,we present a continuous model with the suitable functional framework and prove existence result .section 5 is devoted to formal computation of the energy derivative in order to state some optimality conditions . in section 6 , a front propagation point of viewis adopted and the level set method leads to a non local hamilton - jacobi equation . in the last section , we present some numerical results and give hints for numerical schemes improvement .this work is part of some physical experiments whose goal is the study of the behavior of shocked material .the present experiment consists in making a hull of well known material implode using surrounding explosives .the whole initial physical setup ( the hull , the explosives ... ) are radially symmetric .a reasonable assumption is to suppose that during the implosion , everything remains radially symmetric .physicists are looking for the shape of the interior at some fixed time of interest . at that time, the interior may be composed of several holes which also may be very irregular .figure [ fig : object ] is a synthetic object that contains all the standard difficulties that may appear .these difficulties are characterized by : * several disconnected holes . * a small hole located on the symmetry axis ( which is the area where the details are difficult to recover ) .* smaller and smaller details on the boundary of the top hole in order to determine a lower bound detection . to achieve this goal ,a x - rays radiograph is obtained . in order to extract the desired informations, a tomographic reconstruction must be performed .let us note here that , as the object is radially symmetric , a single radiography is enough to compute the reconstruction .a radiography measures the attenuation of x - rays through the object . a point on the radiographywill be determined by its coordinates in a cartesian coordinates system where the -axis will be the projection of the symmetry axis .if is the intensity of the incident x - rays flux , the measured flux at a point is given by where the integral operates along the ray that reaches the point of the detector , is the infinitesimal element of length along the ray and is the local attenuation coefficient . for simplicity, we will consider that this coefficient is proportional to the material density . to deal with linear operators ,we take the neperian logarithm of this attenuation and will call the transformation the projection operator . through the rest of the paper , in order to simplify the expression of the projection operator , we will suppose that the x - ray source is far enough away from the object so that we may consider that the rays are parallel , and orthogonal to the symmetry axis . as a consequence , the horizontal slices of the object may be considered separately to perform the projection .as the studied object is radially symmetric , we will work in a system of cylinder coordinates where the -axis is the symmetry axis .the object is then described by the density at the point , which is given by a function which depends only on by symmetry . 
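anticipating the explicit form of the projection operator derived in the next section , the line integral along parallel rays of a radially symmetric density reduces , slice by slice , to an abel - type transform p(y) = 2 * integral from |y| to R of f(r) r / sqrt(r^2 - y^2) dr . the following sketch discretises this transform with a crude midpoint rule ; the grid , the toy binary object and the step sizes are illustrative assumptions , not the settings of the actual experiment .

```python
import numpy as np

def abel_project(slice_f, dr):
    """parallel-beam projection of one horizontal slice of a radially symmetric
    object: p(y) = 2 * int_{|y|}^{R} f(r) r / sqrt(r^2 - y^2) dr, evaluated
    with a midpoint rule on a uniform radial grid (the integrable singularity
    at r = y is handled only crudely here)."""
    n = slice_f.size
    r = (np.arange(n) + 0.5) * dr            # radial bin centres
    p = np.zeros(n)
    for j, yj in enumerate(r):               # evaluate p at the same abscissae
        mask = r > yj
        integrand = slice_f[mask] * r[mask] / np.sqrt(r[mask] ** 2 - yj ** 2)
        p[j] = 2.0 * np.sum(integrand) * dr
    return p

def project_object(f_rz, dr):
    """apply the slice projection to every row of a density map f(r, z)."""
    return np.array([abel_project(row, dr) for row in f_rz])

# toy binary object: homogeneous disk of radius 0.8 with an annular hole
nr, nz, dr = 128, 64, 1.0 / 128
r = (np.arange(nr) + 0.5) * dr
slice_f = ((r < 0.8) & ~((r > 0.2) & (r < 0.4))).astype(float)
f_rz = np.tile(slice_f, (nz, 1))
radiograph = project_object(f_rz, dr)
print(radiograph.shape, float(radiograph.max()))
```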
in the text, the notation will always refer to the density of the object .a typical function is given in figure [ fig : whole - object ] .it represents an object composed of concentric shells of homogeneous materials ( called the `` exterior '' in what follows ) surrounding a ball ( called the `` interior '' ) of another homogeneous material that contains some empty holes .this figure may be viewed as a slice of the object by a plane that contains the symmetry axis . to recover the 3d - object, it suffices to perform a rotation of this image around the axis .for instance , the two round white holes in the center are in fact the slice of a torus . as the looked - after characteristic of the object is the shape of the holes, we will focus only on the interior of the object ( see figure [ fig : object ] ) .we here handle only binary objects composed of one homogeneous material ( in black ) and some holes ( in white ) .-axis ) . , width=188 ] .the homogeneous material is in black whereas the holes are in white.,width=188 ]we first explicit the projection operator and its adjoint . in the case of a radially symmetric object ,the projection operator , denoted by , is given , for every function with compact support , by _ proof - _ consider a 3d - object which is described by a function ( cartesian coordinates system ) .the projection operator is in the case of radially symmetric object , we parametrize the object by a function with cylinder coordinates .therefore then , we have , for we perform the following change of variable to get for , we have using the change of variable and the fact that is even , by symmetry , we get operator may be defined by density on measurable functions such that all the partial applications belong to .then , all the functions belong to the space of bounded mean oscillations functions : where . for more details , one can refer to . in the sequel , we will need to handle functions that are defined on ( instead of on ) .we thus define the operator for function with compact support by although this has no more physical meaning . here , the function is defined by we shall also need the back - projection that is the adjoint operator of ; it can be computed in a similar way .[ propadj]the adjoint operator ( in ) of the projection operator is given , for every function with compact support by : _ proof - _ the adjoint operator of is the unique operator such that , for every and in with compact support , using ( [ h ] ) and fubini s theorem , we get so we obtain the following expression for the back projection : thanks to the symmetry , this operator characterizes the radon transform of the object and so is invertible ; one radiograph is enough to reconstruct the object .the inverse operator is given , for an almost everywhere differentiable function with compact support , and for every , by because of the derivative term , the operator is not continuous . consequently , a small variation on the measure leads to significant errors on the reconstruction .as our radiographs are strongly perturbed , applying to our data leads to a poor reconstruction . due to the experimental setupthey are also two main perturbations : * a blur , due to the detector response and the x - ray source spot size . * a noise .others perturbations such as scattered field , motion blur ... 
also exist but are neglected in this study .+ we denote by the effect of blurs .we will consider the following simplified case where is supposed to be linear where is the usual convolution operation , is the projected image and is a positive symmetric kernel .a more realistic case stands when the convolution operates on the intensity : then is of the form where is the multiplicative coefficient between the density and the attenuation coefficient .some specific experiments have been carried out to measure the blur effect .consequently , we will suppose that , in both cases , the kernel is known .the linear blur is not realistic but is treated here to make the computations simpler for the presentation .the noise is supposed for simplicity to be an additive gaussian white noise of mean 0 , denoted by .consequently , the projection of the object will be the comparison between the theoretical projection and the perturbed one is shown on figure [ fig : real_proj ] . the reconstruction using the inverse operator applied to given by figure [ fig : inverse ] .the purpose of the experiment is to separate the material from the empty holes and consequently to precisely determine the frontier between the two areas , which is difficult to perform on the reconstruction of figure [ fig : inverse ] . of the object of figure [ fig : object ] .right - hand side : real projection of the same object with realistic noise and blur.,title="fig:",width=188 ] of the object of figure [ fig : object ] .right - hand side : real projection of the same object with realistic noise and blur.,title="fig:",width=188 ] applied to the real projection on the right - hand side.,title="fig:",width=188 ] applied to the real projection on the right - hand side.,title="fig:",width=188 ] it is clear from figure [ fig : inverse ] that the use of the inverse operator is not suitable . in order to improve the reconstruction, we must add some a priori knowledge on the object to be reconstructed .indeed the object that we reconstruct must satisfy some physical property .we chose to stress on two points : * the center of the object is composed of one homogeneous known material s density with some holes inside .* there can not be any material inside a hole . in a previous work , j.m .lagrange reconstructed the exterior of the object .in this reconstruction , the density of the material at the center of the object is known , only the holes are not reconstructed .in other words , we can reconstruct an object without holes and we can compute ( as and are known ) the theoretical projection of this reconstruction .we then act as if the blurred projection was linear and subtract the projection of the non - holed object to the data . inwhat follows , we will call experimental data this subtracted image which corresponds to the blurred projection of a `` fictive '' object of density 0 with some holes of known `` density '' .consequently , the space of admitted objects will be the set of functions that take values in .this space of functions will be denoted in the sequel .the second hypothesis is more difficult to take into account .we chose in this work to tackle the problem via an energy minimization method where the energy functional is composed of two terms : the first one is a matching term , the second one is a penalization term which tries to handle the second assumption . the matching term will be a -norm which is justified by the gaussian white noise . 
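a minimal sketch of the resulting functional is given below : the matching term compares the blurred projection of a candidate binary object with the data , and the penalisation term is the discrete perimeter ( total variation ) of the object , anticipating the length penalisation made precise in the next section . the linear gaussian blur , the weight lam and the toy object are assumptions used only for illustration ; in the actual problem the kernel is the measured one and the blur may act on the intensity rather than on the projection .

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def abel_row(row, dr):
    """projection of one slice of a radially symmetric density (midpoint rule)."""
    n = row.size
    r = (np.arange(n) + 0.5) * dr
    out = np.zeros(n)
    for j, yj in enumerate(r):
        m = r > yj
        out[j] = 2.0 * np.sum(row[m] * r[m] / np.sqrt(r[m] ** 2 - yj ** 2)) * dr
    return out

def perimeter(binary):
    """total variation of a 0/1 image: number of horizontal and vertical
    transitions, a discrete stand-in for the length of the interfaces."""
    return float(np.abs(np.diff(binary, axis=0)).sum()
                 + np.abs(np.diff(binary, axis=1)).sum())

def energy(binary, data, dr, sigma_blur, lam):
    """J(f) = || B(H f) - data ||_2^2 + lam * perimeter(f), with a linear
    gaussian blur B applied to each projected slice."""
    proj = np.array([abel_row(row, dr) for row in binary])
    blurred = gaussian_filter1d(proj, sigma=sigma_blur, axis=1)
    return float(np.sum((blurred - data) ** 2)) + lam * perimeter(binary)

# toy example: binary object with one annular hole and a noisy blurred radiograph
nz, nr, dr = 64, 128, 1.0 / 128
r = (np.arange(nr) + 0.5) * dr
truth = np.tile(((r < 0.8) & ~((r > 0.3) & (r < 0.45))).astype(float), (nz, 1))
rng = np.random.default_rng(0)
data = gaussian_filter1d(np.array([abel_row(row, dr) for row in truth]),
                         sigma=5, axis=1) + rng.normal(scale=0.02, size=truth.shape)
print("energy of the true object:", energy(truth, data, dr, sigma_blur=5, lam=1e-4))
```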
in the case where is not linear , the exact method to removethe exterior is to operate the blur function on the addition of the known exterior and the center . for the sake of simplicity, we will not use this method and will consider that the errors are negligible when subtracting the projections .let us first describe more precisely the set .this time , the functions will be defined on , with values still in , with compact support .therefore , such a function will be characterized by the knowledge of the curves that limit the two areas where is equal to and to . indeed , as the support of the function is bounded , these curves are disjoint jordan curves and the density of the inside is whereas the density of the outside is 0 .consequently , the energy that we will consider will be a function of where is a set of disjoint jordan curves . for mathematical reasons, we must add an extra - assumption : the curves are so that the normal vector of the curves is well - defined ( as an orthogonal vector to the tangent one ) . in this continuous framework ,the matching term is just the usual -norm between and the data ( where is given by ( [ h ] ) ) .so , the first term is for the penalization term , we choose where denotes the length of the curves .let us remark that this penalization term may be also viewed as the total variation ( up to a multiplicative constant ) of the function because of the binarity . eventually , the total energy functional is which is an adaptation of the well - known mumford - shah energy functional introduced in .the `` optimal '' value of may depend on the data .the previous analysis gives the mains ideas for the modelization .now , we make it precise using an appropriate functional framework .let be a bounded open subset with lipschitz boundary .we shall consider bounded variation functions . recall that the space of such functions is where where denotes the space of functions with compact support in .the space endowed with the norm is a banach space .+ if its derivative in ( distributions ) is a bounded radon measure denoted and is the total variation of on .let us recall useful properties of bv - functions ( ) : let be an open subset with lipschitz boundary .+ 1 . if , we get the following decomposition for : where is the absolutely continuous part of with respect of the lebesgue measure and is the singular part .the map from to is lower semi - continuous ( lsc ) for the topology . with compact embedding . with compact embedding .we precise hereafter an important continuity property of the projection operator .[ h1 ] the projection operator is continuous from to for every ] where and only depends on .it is clear that is defined everywhere on and , for every & \le 2\displaystyle { \left [ \int_{u}^{m}|f(r , v)|^{2+s } \ , dr \right]^{\frac{1}{2+s } } \ , \left[\int_{u}^{m } \frac{r^q}{(r + u)^{\frac{q}{2 } } } \ , \frac{1 } { ( r -u)^{\frac{q}{2 } } } \ , dr\right ] ^{\frac{1}{q } } } \end{array}\ ] ] where . 
therefore \le 2 \ , m^{\frac{q}{2 } } \|f\|_{l^{2+s } } \,\left[\int_{u}^{m } ( r -u)^{-\frac{q}{2 } } dr \right]\ ] ] the computations in the case are similare and lead to the same inequality ( with some additional absolute values ) .as we get ^\frac{s}{2(1+s)}\le c ( { \omega},s ) \|f\|_{l^{2+s}}~;\ ] ] here and in the sequel denotes a generic constant depending on and .so as is bounded , this yields \quad \|hf \|_{l^p ( { \omega } ) } \le c ( { \omega},s ) \|f\|_{l^{2+s}}~.\ ] ] as we consider the length of curves , the most suitable functional space to set a variational formulation of the reconstruction problem is .therefore , we consider the following minimization problem stands for the - norm , and * the operator is given by ( [ blur ] ) . without loss of generality , we may assume ( for simplicity ) that * at last , `` '' , is the binarity constraint .we have mentionned that the image takes its values in where . with the change of variable , we may assume that the image values belong to .a similar problem has been studied in with smoother projection operator and * convex * constraints .this is not our case .the pointwise constraint `` '' is a very hard constraint .the constraint set is not convex and its interior is empty for most usual topologies .now we may give the main result of this section : problem admits at least a solution ._ proof - _ let be a minimizing sequence .it satisfies ; so therefore the sequence is - bounded .as is bounded as well , the sequence is bounded in .thus it converges ( extracting a subsequence ) to some for the weak - star topology .+ estimate ( [ estim1 ] ) implies the weak convergence of to in for every .thanks to the continuity property of proposition [ h1 ] , we assert that weakly converges to in .we get with the lower semi - continuity of the norm .+ moreover is compactly embedded in .this yields that strongly converges to in .as is lsc with respect to - topology , we get finally as the pointwise constraint is obviously satisfied , is a solution to . we look for optimality conditions .unfortunately we can not compute easily the derivative of the energy in the framework .indeed we need regular curves and we do not know if the minimizer provides a curve with the required regularity . moreover , the set of constraints is not convex and it is not easy to compute the gteaux- derivative ( no admissible test functions ) .so we have few hope to get classical optimality conditions and we rather compute minimizing sequences .we focus on particular ones that are given via the gradient descent method inspired by .formally , we look for a family of curves such that so that decreases as .let us compute the energy variation when we operate a small deformation on the curves .in other word , we will compute the gteau derivative of the energy for a small deformation : we will first focus on local deformations . let be a point of .we consider a local reference system which center is and axis are given by the tangent and normal vectors at and we denote by the new generic coordinates in this reference system . 
with an abuse of notation , we still denote .we apply the implicit functions theorem to parametrize our curve : there exist a neighborhood of and a function such that , for every , eventually , we get a neighborhood of , a neighborhood of and a function such that the local parametrization is oriented along the outward normal to the curve at point ( see figure [ fig : deformation ] ) .more precisely , we define the local coordinate system where is the usual tangent vector , is the direct orthonormal vector ; we set the curve orientation so that is the outward normal .the function if then defined on by this parametrization is described on figure [ fig : deformation ] .we then consider a local ( limited to ) deformation along the normal vector .this is equivalent to handling a function whose support is included in . the new curve obtained after the deformation is then parametrized by this defines a new function : we will also set . this deformation is described on figure [ fig : deformation ] . . is the current point , is the neighborhood of in which the deformation is restricted to and is the new curve after deformation .the interior of the curve is the set where the gteau derivative for the energy has already been computed in and is where denotes the curvature of the curve and is the parametrization of .+ it remains to compute the derivative for the matching term .first we estimate : a simple computation shows that and now we compute where ( resp . ) is the curve associated to the function ( resp . ) : to simplify the notations , we denote by so that we need to compute as is zero out of the neighbourhood , we have in the case , we have , as the function is continuous ( and thus bounded on ) , we may pass to the limit by dominated convergence and get in the case , we have and we obtain the same limit as in the nonnegative case . finally , the energy derivative is if we set , we get as formula ( [ derivebis ] ) may be written where denotes the outward pointing normal unit vector of the curve , denotes the usual scalar product in and is a positive coefficient that depends on the curvilinear abscissa .the latter expression is linear and continuous in , this formula is also true for a non - local deformation ( which can be achieved by summing local deformations ) .the goal of the present section is to consider a family of curves that will converge toward a local minimum of the functional energy . from equation ( [ eq : derivee ] ) , it is clear that if the curves evolve according to the differential equation the total energy will decrease . to implement a numerical scheme that discretizes equation ( [ eq : edp ] ), it is easier to use a level set method ( see for a complete exposition of the level set method ) . 
indeed ,equation ( [ eq : edp ] ) may present some instabilities , in particular when two curves collide during the evolution or when a curve must disappear .all these evolutions are handled easily via the level set method .the level set method consists in viewing the curves as the 0-level set of a smooth real function defined on .the function that we are seeking is then just given by the formula we must then write an evolution pde for the functions that corresponds to the curves .let be a point of the curve and let us follow that point during the evolution .we know that this point evolves according to equation [ eq : edp ] we can re - write this equation in terms of the function recognizing that where stands for the gradient of with respect to , denotes the euclidean norm .the evolution equation becomes then , as the point remains on the curve , it satisfies . by differentiating this expression, we obtain which leads to the following evolution equation for : that is the above equation is an hamilton - jacobi equation which involves a non local term ( through and ) .such equations are difficult to handle especially when it is not monotone ( which is the case here ) .in particular , existence and/or uniqueness of solutions ( even in the viscosity sense ) are not clear .the approximation process is not easy as well and the numerical realization remains a challenge though this equation is a scalar equation which is easier to discretize than the vectorial one .here we used the discrete scheme described in to get numerical results .we briefly present the numerical scheme .we used an explicit scheme in time and the spatial discretization has been performed following .we set so that the equation ( [ eq : levelset ] ) is we set with et .the explicit euler scheme gives : the curvature term is computed as where stands for the partial derivative with respect to .the discrete approximation of the gradient is standard : are defined in the same way .a usual approximation for is given by : ^{1/2}~.\ ] ] the non local term is exactly computed .the previous scheme has been implemented on a 3.6 ghz pc .a classical reinitialization process has been used each 500 iterations .the test image size was 256 256 pixels .the other parameters of the computation were set to and the blur kernel is a gaussian kernel of standard deviation 5 pixels .the computed image is quite satisfying ( see figure 7 . )however , we note a bad reconstruction along the symmetry axis due to the problem geometry and a lack of information. moreover we have to improve the algorithme behavior .indeed , we observe numerical instability ( in spite of the regularization process ) that leads to a very small time step choice .therefore the computational time is quite long ( about 2.5 hours ) .in addition , classical stopping criteria are not useful here : the expected solution corresponds to a `` flat '' level of function and the difference between two consecutive iterates means no sense .an estimate of the cost function decrease is not appropriate as well ( we observe oscillations ) .we decided to stop after a large enough number of iterations ( here 20 000 ) . 
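for completeness , here is a sketch of one explicit update of an equation of the type considered above , phi_t = ( lambda kappa + speed ) |grad phi| : the curvature ( parabolic ) term is discretised with central differences and the advection by the non - local speed with godunov upwinding , in the spirit of the osher - sethian scheme . the toy run uses a zero non - local speed ( pure curvature shortening of a circle ) because the actual speed depends on the projection operator and the data ; signs , boundary handling , step sizes and parameters are illustrative and may differ from the conventions of the paper .

```python
import numpy as np

def curvature(phi, h):
    """mean curvature of the level sets of phi by central differences."""
    px = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * h)
    py = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * h)
    pxx = (np.roll(phi, -1, 1) - 2 * phi + np.roll(phi, 1, 1)) / h ** 2
    pyy = (np.roll(phi, -1, 0) - 2 * phi + np.roll(phi, 1, 0)) / h ** 2
    pxy = (np.roll(np.roll(phi, -1, 1), -1, 0) - np.roll(np.roll(phi, -1, 1), 1, 0)
           - np.roll(np.roll(phi, 1, 1), -1, 0) + np.roll(np.roll(phi, 1, 1), 1, 0)) / (4 * h ** 2)
    denom = (px ** 2 + py ** 2) ** 1.5 + 1e-12
    return (pxx * py ** 2 - 2 * px * py * pxy + pyy * px ** 2) / denom

def level_set_step(phi, speed, lam, h, dt):
    """one explicit step of phi_t = (lam * kappa + speed) |grad phi|, with
    periodic boundaries (np.roll) for simplicity."""
    dxm = (phi - np.roll(phi, 1, 1)) / h
    dxp = (np.roll(phi, -1, 1) - phi) / h
    dym = (phi - np.roll(phi, 1, 0)) / h
    dyp = (np.roll(phi, -1, 0) - phi) / h
    # godunov upwind gradient norms for the advective (non-local speed) part
    grad_plus = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2
                        + np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
    grad_minus = np.sqrt(np.minimum(dxm, 0) ** 2 + np.maximum(dxp, 0) ** 2
                         + np.minimum(dym, 0) ** 2 + np.maximum(dyp, 0) ** 2)
    # central gradient norm for the curvature term, with curvature clipped at 1/h
    gx, gy = (dxm + dxp) / 2, (dym + dyp) / 2
    grad_c = np.sqrt(gx ** 2 + gy ** 2)
    kappa = np.clip(curvature(phi, h), -1.0 / h, 1.0 / h)
    advection = np.maximum(speed, 0) * grad_plus + np.minimum(speed, 0) * grad_minus
    return phi + dt * (lam * kappa * grad_c + advection)

# toy run: shrink a circle of radius 0.3 under pure curvature flow (speed = 0)
n, h, dt = 128, 1.0 / 128, 1e-5
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
phi = 0.3 ** 2 - ((x - 0.5) ** 2 + (y - 0.5) ** 2)   # positive inside the circle
for _ in range(200):
    phi = level_set_step(phi, speed=np.zeros_like(phi), lam=1.0, h=h, dt=dt)
print("area fraction inside the zero level set:", float((phi > 0).mean()))
```

as noted above , in practice such an explicit step must be combined with periodic reinitialisation of phi and a small time step to keep the scheme stable .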
in spite of all these disadvantages , this method is satisfactory considering the low signal - to - noise ratio of the radiograph . these good results can be explained by the strong assumptions that we add ( in particular the binary hypothesis ) , which are verified by our synthetic object . moreover , the method has been successfully tested on `` real '' images as well , that is , images of objects with the same kind of properties ( `` almost '' binary ) , but we can not report them here ( confidential data ) . a semi - implicit version of the algorithm is currently being tested to improve stability .
this paper deals with a method of tomographic reconstruction of radially symmetric objects from a single radiograph , in order to study the behavior of shocked material . the usual tomographic reconstruction algorithms such as generalized inverse or filtered back - projection can not be applied here because data are very noisy and the inverse problem associated to single view tomographic reconstruction is highly unstable . in order to improve the reconstruction , we propose here to add some a priori assumptions on the looked after object . one of these assumptions is that the object is binary and consequently , the object may be described by the curves that separate the two materials . we present a model that lives in bv space and leads to a non local hamilton - jacobi equation , via a level set strategy . numerical experiments are performed ( using level sets methods ) on synthetic objects .
the kinetics of many chemical reactions is influenced by the transport properties of the reactants that they involve .in fact , schematically , any chemical reaction requires first that a given reactant a meets a second reactant b. this first reaction step can be rephrased as a search process involving a searcher a looking for a target b. in a very dilute regime , exemplified by biochemical reactions in cells which sometimes involve only a few copies of reactants , the targets b are sparse and therefore hard to find in this search process language .in such reactions , the first step of search for reactants b is therefore a limiting factor of the global reaction kinetics . in the general aim of enhancing the reactivity of chemical systems ,it is therefore needed to optimize the efficiency of this first step of search .recently , it has been shown that intermittent processes , combining slow diffusion phases with a faster transport , can significantly increase reactions rates .a minimal model demonstrating the efficiency of this type of search , introduced to account for the fast search of target sequences on dna by proteins is as follows ( see also ) .the pathway followed by the protein , considered as a point - like particle , is a succession of 1d diffusions along the dna strand ( called sliding phases ) with diffusion coefficient and 3d excursions in the surrounding solution .the time spent by the protein on dna during each sliding phase is assumed to follow an exponential law with dissociation rate . in this minimal model ,the 3d excursions are uncorrelated in space , which means that after dissociation from dna , the protein will rebind the dna at a random position independently of its starting position . assuming further that the mean duration of such 3d excursions is finite , it has been shown that the mean first - passage time at the target can be minimized as a function of , as soon as the mean time spent in bulk excursions is not too long .quantitatively , this condition writes in orders of magnitude as , and the minimum of the search time is obtained for in the large limit .note that in this minimal model , where the time is supposed to be a fixed exterior parameter , bulk phases are always beneficial in the large limit ( i.e. allow one to decrease the search time with respect to the situation corresponding to 1d diffusion only ) . in many practical situations however , the duration of the fast bulk excursions strongly depends on the geometrical properties of the system and can not be treated as an independent variable as assumed in the mean - field ( mf ) model introduced above .an important generic situation concerns the case of confined systems , involving transport of reactive molecules both in the bulk of a confining domain and on its boundary , referred to as surface - mediated diffusion in what follows .this type of problems is met in situations as varied as heterogeneous catalysis , or reactions in porous media and in vesicular systems . in all these examples ,the duration of bulk excursions is controlled by the return statistics of the molecule to the confining surface , which crucially depends on the volume of the system .this naturally induces strong correlations between the starting and ending points of bulk excursions , and makes the above mf assumption of uncorrelated excursions largely inapplicable in these examples . 
at the theoretical level, the question of determining mean first - passage times in confinement has attracted a lot of attention in recent years for discrete random walks and continuous processes .more precisely , the surface - mediated diffusion problem considered here generalizes the so - called narrow escape problem , which refers to the time needed for a simple brownian motion in absence of surface diffusion to escape through a small window of an otherwise reflecting domain .this problem has been investigated both in the mathematical and physical literature , partly due to the challenge of taking into account mixed boundary conditions .the case of surface - mediated diffusion brings the additional question of minimizing the search time with respect to the time spent in adsorption , in the same spirit as done for intermittent processes introduced above . the answer tothis question is _ a priori _ not clear , since the mean time spent in bulk excursions diverges for large confining domains , so that the condition of minimization mentioned previously can not be taken as granted , even in the large system limit . in this context, first results have been obtained in where , surprisingly enough , it has been found that , even for bulk and surface diffusion coefficients of the same order of magnitude , the reaction time can be minimized , whereas mf treatments ( see for instance ) predict a monotonic behavior . here, we extend the perturbative results of obtained in the small target size limit . relying on an integral equation approach, we provide an exact solution for the mean fpt , both for 2d an 3d spherical domains , and for any spherical target size .we also develop approximation schemes , numerically validated , that provide more tractable expressions of the mean fpt .the surface - mediated process under study is defined as follows .we consider a molecule diffusing in a spherical confining domain of radius ( see figure [ fig1 ] ) , alternating phases of boundary diffusion ( with diffusion coefficient ) and phases of bulk diffusion ( with diffusion coefficient ) . the time spent during each one - dimensional phaseis assumed to follow an exponential law with dissociation rate . at each desorption event , the molecule is assumed to be ejected at a distance from the frontier ( otherwise it is instantaneously readsorbed ) . although formulated for any value of this parameter smaller than , in most physical situations of real interest .the target is perfectly absorbing and defined in by the arc ] where is in this case the elevation angle in standard spherical coordinates .note that as soon as , the target can be reached either by surface or bulk diffusion . 
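before turning to the analytical treatment , the model can be checked directly by a crude monte carlo simulation : the sketch below alternates angular diffusion on the circle , desorption after an exponential waiting time , ejection at a distance a inside the boundary and bulk brownian steps until re - adsorption , and records the first time the target arc is reached ( whether from the surface or from the bulk ) . all parameter values are illustrative assumptions , the time discretisation is deliberately simple , and the estimate is slow and noisy ; it is meant only to make the ingredients of the model explicit .

```python
import numpy as np

def mean_search_time(R=1.0, D1=1.0, D2=1.0, lam=1.0, a=0.05, eps=0.3,
                     dt=2e-4, n_walkers=50, seed=0):
    """crude monte carlo estimate of the mean first-passage time to the target
    arc |theta| <= eps for the surface-mediated process in a disk of radius R.
    the molecule starts adsorbed at theta = pi (opposite the target)."""
    rng = np.random.default_rng(seed)
    times = np.empty(n_walkers)
    for w in range(n_walkers):
        t, theta, on_surface = 0.0, np.pi, True
        pos = R * np.array([np.cos(theta), np.sin(theta)])
        while True:
            if on_surface:
                # target reached by surface diffusion (or just after re-adsorption)
                if abs((theta + np.pi) % (2.0 * np.pi) - np.pi) <= eps:
                    break
                if rng.random() < lam * dt:                # desorption event
                    on_surface = False
                    pos = (R - a) * np.array([np.cos(theta), np.sin(theta)])
                else:                                      # angular diffusion, D1 / R^2
                    theta += np.sqrt(2.0 * D1 * dt) / R * rng.normal()
            else:                                          # bulk brownian excursion, D2
                pos = pos + np.sqrt(2.0 * D2 * dt) * rng.normal(size=2)
                if np.hypot(pos[0], pos[1]) >= R:          # re-adsorption on the circle
                    theta = np.arctan2(pos[1], pos[0])
                    on_surface = True
            t += dt
        times[w] = t
    return float(times.mean())

print("estimated mean search time:", mean_search_time())
```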
in what followswe calculate the mean first - passage time at the target for an arbitrary initial condition of the molecule .model , scaledwidth=45.0% ]in this section , the confining domain is a disk of radius and the target is defined by the arc , } \\ 0 & \text{if , } \end{cases}\ ] ] so that in what follows we will make use of the following quantities : and as we proceed to show , two different approaches can be used to solve this problem .( i ) the first approach , whose main results have been published in , uses the explicit form of the green function for the two - dimensional problem and relies on a small target size expansion .we recall these perturbative results below for the sake of self - consistency and give details of the derivation in appendix [ sec : approach ] .( ii ) the second approach presented next relies on an integral equation which can be derived for , and leads to an exact non - perturbative solution .it is shown in appendix that the fourier coefficients of as defined in eq.([eq : t2integre ] ) satisfy an infinite hierarchy of linear equations , which lead to the following small expansion : note that eq.([eq : alphanperturbation0 ] ) gives in particular the first terms of the perturbative expansion of the search time defined in ( [ eq : deffourier ] ) and given in . it should be stressed that since the coefficients of of this expansion diverge with , in practice one finds that the range of applicability in of this expansion is wider for small . in this section, we first show that the resolution of the coupled pdes ( [ eq : t1 ] , [ eq : t2 ] ) amounts to solving an integral equation for only . as we proceed to show , this integral equation can be solved exactly .writing eq .( [ eq : t1 ] ) as ,\ ] ] and expanding its right - hand side into a taylor series leads to substituting the fourier representation ( [ eq : t2integre ] ) for into this equation yields changing the order of summations over and , using the binomial formula and the expression ( [ eq : deffourier ] ) for give this integro - differential equation for can actually easily be transformed into an integral equation for , by integrating successively two times , which leads to or equivalently to where with defined in eq .( [ deft ] ) and .note that eq .( [ eq : inteq_2d ] ) holds for ] : , \ ] ] with the coefficients which satisfy = \omega \sum\limits_{n=1}^\infty \biggl(u_n + \sum\limits_{n'=1}^\infty q_{n , n ' } d_{n'}\biggr)\bigl[\cos(n\theta ) - \cos(n\epsilon)\bigr ] , \\ \end{split}\ ] ] where we introduced and with since eq .( [ eq : subs_2d ] ) should be satisfied for any ] . more precisely , one has from eqs .( [ eq : q_2d],[ieps ] ) : and keeping only the leading term of this expansion yields from which we obtain the desired approximation : this yields an approximation for the search time : in 2d : the exact solution ( [ eq : psi_2d ] , [ eq : dn_2d ] ) , the approximation ( [ eq : psi_part_2d ] ) and the perturbative formula ( [ eq : alphanperturbation ] ) , with , . in the * first row * ,the other parameters are : , , and the series are truncated to . on the right , the absolute error between the exact solution and the approximation ( dashed blue curve ) and between the exact solution and the perturbative formula ( solid red curve ) .the approximation is very accurate indeed . 
In the second row, the perturbative solution becomes inaccurate for large arguments, while the maximal relative error of the approximate solution stays small; in the third row the perturbative solution is evidently not applicable; in the last row the approximate solution itself deviates significantly from the exact one (yielding mostly negative values) and the perturbative solution is completely invalid (not shown).

[Figure: mean search time in 2d, computed through eqs. ([eq:dn_2d], [eq:t1mean_2d]), as a function of the desorption rate for three values of (dot-dashed blue, dashed green and solid red lines). In the first case the mean time increases monotonically with the desorption rate, so that there is no optimal value; in the two other cases it first decreases, passes through a minimum (the optimal value) and then increases monotonically. Symbols show the approximate mean time computed through eqs. ([eq:betaa_2d], [eq:t1mean_2d]), which remains accurate even for large desorption rates. Right panel: the derivative defined by eq. ([eq:t1deriv_2d]) for the same parameters.]

[Figure: mean search time in 2d as a function of , with two parameter sets (left and right panels). The exact computation through eqs. ([eq:dn_2d], [eq:t1mean_2d]) is compared to the approximation ([eq:betaa_2d], [eq:t1mean_2d]) and to the perturbative approach. For the first set the approximate solution is very close to the exact one, while the perturbative solution remains relatively close; for the second set the approximate solution shows significant deviations at intermediate values, while the perturbative solution is not applicable at all.]
In this section, we answer two important questions. When are bulk excursions favorable, i.e. when do they reduce the search time with respect to the situation with no bulk excursion (zero desorption rate)? And if they are, is there an optimal value of the desorption rate minimizing the search time? The first question can be investigated by studying the sign of the derivative of the mean search time with respect to the desorption rate at zero. The mean search time from eq. ([eq:t1mean_2d]) can also be written as , where and . Its derivative with respect to the desorption rate is then . If this derivative is negative at zero, bulk excursions are beneficial to the search. This inequality determines the critical value of the bulk diffusion coefficient (which enters through ), above which bulk excursions are beneficial: ^ 2 . Two comments are in order. (i) Interestingly, this ratio depends only on and . In the limit of , one gets ; taking next the limit finally yields , where stands for the Riemann zeta function. (ii) The dependence of the right-hand side of eq. ([eq:d2crit_2d]) on is not trivial (fig. [fig:d2crit_2d]). Indeed, it can be proved to have a maximum with respect to , which can be understood intuitively as follows: in the vicinity of , increasing makes the constraint less stringent, since the target can be reached directly from the bulk; in the opposite limit, the constraint on has to tend to , since the target is found immediately from the surface. Quantitatively, in the physical limit , one finds that, as soon as , bulk excursions can be beneficial.

[Figure: the critical bulk diffusion coefficient as a function of the target size, computed from eq. ([eq:d2crit_2d]) in 2d for three values of . When the target size approaches its maximal value (the whole surface becomes absorbing), the critical value diverges (not shown); in this limit there is no need for a bulk excursion, because the target is found immediately by surface diffusion.]

If the reaction time is a decreasing function of the desorption rate, bulk excursions are "too favorable", and the best search strategy is the purely bulk search obtained in the limit of an infinite desorption rate. For the reaction time to be an optimizable function of the desorption rate, its derivative has to be positive at some finite rate. This necessary and sufficient condition remains formal and requires a numerical analysis of eq. ([eq:t1deriv_2d]).
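Since this criterion calls for a numerical study of the mean search time as a function of the desorption rate, a minimal sketch of such a check is given below. It assumes only a user-supplied callable mean_time(lam) (for instance assembled from the truncated matrix solution of the previous section) and is not part of the analytical results; the grid, tolerance and the toy function at the end are arbitrary choices made for the example.

```python
import numpy as np

def classify_search_time(mean_time, lam_max=1e3, n_grid=200, h=1e-6):
    """Crude numerical analysis of the mean search time versus the desorption rate:
    is zero desorption locally optimal, is there an interior minimum, or does the
    time keep decreasing (purely bulk search preferred)?"""
    slope0 = (mean_time(h) - mean_time(0.0)) / h          # finite-difference slope at zero
    lams = np.concatenate(([0.0], np.logspace(-3, np.log10(lam_max), n_grid)))
    times = np.array([mean_time(l) for l in lams])
    k = int(np.argmin(times))
    if slope0 >= 0.0:
        return "zero desorption rate is locally optimal", lams[k]
    if 0 < k < len(lams) - 1:
        return "interior optimum: a finite desorption rate minimizes the time", lams[k]
    return "time still decreasing at lam_max: purely bulk search seems preferred", lams[k]

# toy stand-in for the mean search time (NOT the expression derived in this work)
toy = lambda lam: 1.0 / (1.0 + 2.0 * lam) + 0.05 * lam
print(classify_search_time(toy))
```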
a simple _ sufficient _ condition can be used instead by demanding that the search time at zero desorption rate is less than the search time at infinite desorption rate : this writes in the physically relevant limit ( using the result of ) : finally , combining eqs .( [ eq : d2crit_2d ] , [ cond2 ] ) , the search time is found to be an optimizable function of in the limit if ^ 2.\ ] ] knowing that , eq .( [ cond3 ] ) writes in the small limit : which summarizes the conditions for the search time to be an optimizable function of .this case is illustrated in fig .[ fig : optimizable ] .in this section , the confining domain is a sphere of radius and the target is the region on the boundary defined by , } \\ 0 & \text{if , } \end{cases}\ ] ] the can be written in terms of as taylor expanding the rhs of \end{aligned}\ ] ] leads to using eq .( [ eq : exp3d ] ) for yields changing the order of summations over and and using the binomial formula and eq .( [ eq : alpha_n_3d ] ) for finally give where , as in previous section , . this integro - differential equation for actually easily be transformed into an integral equation for , by integrating successively two times . indeed , multiplying first both members of eq .( [ eq : integrodiff_3d ] ) by and integrating between and gives (\cos\theta+1)\nonumber\\ & + & \frac{\omega^2}{2}\sum_{n=1}^\infty ( x^n-1 ) \left(p_{n+1}(\cos\theta)-p_{n-1}(\cos\theta)\right ) \int_\epsilon^\pi \sin\theta ' p_n(\cos\theta ' ) t_1(\theta ' ) { \rm d}\theta',\end{aligned}\ ] ] where we have used dividing eq .( [ eq : int3d ] ) by and integrating between and finally leads to where we have again used eq .( [ eq : prop_legendre ] ) , or equivalently to with the following definitions in this 3d case . iterating the integral equation eq .( [ eq : psi_3d ] ) shows that the solution writes for ] .more precisely , one has and keeping only the leading term of this expansion yields from which the mean time is then approximated as ^ 2}{n(n+1 ) + \omega ( 1-x^n)(2n+1 ) i_\epsilon(n , n)}\biggr]\biggr\}. \\ \end{split}\ ] ] in 3d : the exact solution ( [ eq : psi_3d ] , [ eq : dn_3d ] ) , the approximation ( [ eq : psi_3d_part ] ) and the perturbative formula ( [ eq : psi_3d_perturb ] ) , with , . in the * first row * , the other parameters are : , , and the series are truncated to . on the right , the absolute error between the exact solution and the approximation ( dashed blue curve ) and between the exact solution and the perturbative formula ( solid red curve ) .the approximation is very accurate indeed . in the * second row * ,the parameters are : , , and the series are truncated to .one can see that the perturbative solution is inaccurate for large values of , while the maximal relative error of the approximate solution is still small . in the * third row * , the parameters are : , , and the series are truncated to . the perturbative solution is inaccurate as expected for large .in the * last row * , the parameters are : , , and the series are truncated to . in this case , the approximate solution deviates from the exact one for .the perturbative solution is negative and not shown ., title="fig:",width=302 ] in 3d : the exact solution ( [ eq : psi_3d ] , [ eq : dn_3d ] ) , the approximation ( [ eq : psi_3d_part ] ) and the perturbative formula ( [ eq : psi_3d_perturb ] ) , with , . in the * first row * , the other parameters are : , , and the series are truncated to . 
On the right of each row, the absolute error between the exact solution and the approximation and between the exact solution and the perturbative formula confirms these trends: the approximation remains very accurate for all but the last parameter set, while the perturbative formula becomes inaccurate outside the small-target regime.

[Figure: mean search time in 3d, computed through eqs. ([eq:dn_3d], [eq:t1mean_3d]), as a function of the desorption rate for three values of (dot-dashed blue, dashed green and solid red lines). In the first case the mean time increases monotonically with the desorption rate, so that there is no optimal value; in the two other cases it first decreases, passes through a minimum (the optimal value) and then increases. Symbols show the approximate mean time computed through eq. ([eq:psi_3d_av]), which remains accurate even for large desorption rates. Right panel: the derivative defined by eq. ([eq:t1_deriv_3d]) for the same parameters.]
[Figure: mean search time in 3d as a function of , with two parameter sets (left and right panels). The exact computation through eqs. ([eq:dn_3d], [eq:t1mean_3d]) is compared to the approximation ([eq:psi_3d_av]) and to the perturbative approach. For the first set the approximate solution is very close to the exact one, while the perturbative solution remains relatively close; for the second set the approximate solution shows significant deviations at intermediate values, while the perturbative solution is not applicable at all.]

We investigate here, as in the 2d case, the dependence of the mean search time on the desorption rate. The sign of its derivative at zero desorption rate is conveniently studied by rewriting eq. ([eq:t1mean_3d]) as - \lambda \bigl(\xi \cdot ( i+\lambda \tilde{q})^{-1 } u\bigr ) \biggr\ } , where and . The derivative with respect to the desorption rate is then - \biggl(\xi \cdot \frac{(\eta^{-1 } + 2\lambda)i + \lambda^2 \tilde{q}}{(i+\lambda\tilde{q})^2 } u\biggr)\biggr\ } . If this derivative is negative at zero, bulk excursions are beneficial to the search. This inequality determines the critical value of the bulk diffusion coefficient (which enters through ): ^ 2 \biggr)^{-1 } . In the limit of , one gets , and in the physically relevant limit one has . There are similarities and differences between the behaviors of the critical value in 2d and 3d. Figure [fig:d2crit_3d] shows that the critical value from eq. ([eq:d2crit2_3d]) is not a monotonic function of the target size, with the same qualitative explanation as in the two-dimensional case.
In contrast to the analogous eq. ([eq:d2crit2_2d]) in 2d, the right-hand side of eq. ([eq:d2crit2_3d]) diverges as the target becomes point-like. This divergence reflects the fact that a point-like target, which could be found within a finite time in 2d by one-dimensional surface diffusion on the circle, is not detectable in 3d either by bulk excursions or by surface diffusion.

[Figure: the critical bulk diffusion coefficient as a function of the target size, computed from eq. ([eq:d2crit_3d]) in 3d for three values of . When the target size approaches its maximal value (the whole surface becomes absorbing), the critical value diverges (not shown): in this limit there is no need for a bulk excursion, because the target is found immediately by surface diffusion. It also diverges for a point-like target, which cannot be detected either by bulk excursions or by surface diffusion in 3d.]

For the reaction time to be an optimizable function of the desorption rate, an additional condition is needed, requiring that bulk excursions are not "too favorable" (otherwise the best strategy is obtained in the limit of an infinite desorption rate). A sufficient condition is obtained by demanding that the search time at zero desorption rate (i.e., without leaving the boundary) be less than the search time at infinite desorption rate, which reads in the physically relevant limit (using the result of) : . Finally, this condition leads, for small , to . Combining the two conditions ([eq:d2crit2_3d], [cond4]), the search time is found to be optimizable when and if .

[Figure: the optimizability diagram defined by eqs. ([eq:optim_region_2d], [eq:optim_region_3d]) in 2d and 3d, respectively. When the ratio lies between the two curves, the search time is optimizable with respect to the desorption rate; above the upper bound, surface diffusion is preferred (zero desorption rate is the optimal solution), while below the lower bound, bulk excursions may be "too favorable" (an infinite desorption rate may give the optimal solution). We recall that the lower bound was obtained from the sufficient condition ([eq:cond_suff]), so the region below the dotted line may still be optimizable.]

In the previous sections, we derived the closed matrix forms ([eq:dn_2d], [eq:dn_3d]) for the coefficients in 2d and 3d. These coefficients determine the angular dependence of the mean first-passage time through the explicit representations ([eq:psi_2d], [eq:psi_3d]) in 2d and 3d, respectively. Although the formulas ([eq:dn_2d], [eq:dn_3d]), which are based on the inversion of an infinite-dimensional matrix, remain implicit, a numerical resolution of the problem is now straightforward: one needs only to truncate the infinite-dimensional matrix and vectors and to invert the truncated matrix numerically. Six parameters determine the mean first-passage time: the radius of the disk (sphere), the two diffusion coefficients, the desorption rate, the size of the absorbing region, and the ejection distance.
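The following Python fragment sketches this truncate-and-invert procedure, including the convergence check against the truncation order discussed in the next paragraphs. It keeps the structure (I + lambda*Q)^{-1} u visible in eq. ([eq:t1mean_3d]), but the kernels below are toy placeholders: the actual matrix elements and source vector of eqs. ([eq:dn_2d], [eq:dn_3d]) are not reproduced here.

```python
import numpy as np

def solve_truncated(build_matrix, build_rhs, lam, N):
    """Solve the truncated linear system (I + lam * Q_N) d = u_N for the first N
    coefficients.  build_matrix(N) and build_rhs(N) are placeholders standing in
    for the kernels of eqs. ([eq:dn_2d]) / ([eq:dn_3d])."""
    Q = build_matrix(N)
    u = build_rhs(N)
    return np.linalg.solve(np.eye(N) + lam * Q, u)

def converged_value(reduce_coeffs, build_matrix, build_rhs, lam,
                    N0=10, N_max=640, rtol=1e-4):
    """Double the truncation order until the reduced quantity (e.g. the mean FPT
    assembled from the coefficients) stabilizes to the requested relative tolerance."""
    previous, N = None, N0
    while N <= N_max:
        d = solve_truncated(build_matrix, build_rhs, lam, N)
        value = reduce_coeffs(d)
        if previous is not None and abs(value - previous) <= rtol * abs(value):
            return value, N
        previous, N = value, 2 * N
    raise RuntimeError("truncation did not converge; increase N_max")

# toy kernels with fast-decaying entries (NOT the kernels of this work)
toy_Q = lambda N: np.array([[1.0 / (1.0 + m + n) ** 2 for n in range(N)] for m in range(N)])
toy_u = lambda N: np.exp(-np.arange(N, dtype=float))
print(converged_value(np.sum, toy_Q, toy_u, lam=2.0))
```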
from now on , we set the units of length and time by setting and .although the distance may take any value from to , the physically interesting case corresponds to the limit of small . as we mentioned previously, the limit exists but trivially leads to searching on the surface , without intermediate bulk excursions . in order to reveal the role of , we consider several values of : , , and , the latter corresponding to the specific situation when search is always restarted from the center .since the diffusion coefficient enters only through the prefactor from eq .( [ deft ] ) , its influence onto the searching time is easy to examine . inwhat follows , we take three values of : , and .the dependence of on the desorption rate and the size is the most interesting issue which will be studied below . in the previous sections, we derived several formulas for computing : * explicit representations ( [ eq : psi_2d ] , [ eq : psi_3d ] ) with the exact expressions ( [ eq : dn_2d ] , [ eq : dn_3d ] ) for the coefficients ; * approximations ( [ eq : psi_part_2d ] , [ eq : psi_3d_part ] ) which were derived by neglecting non - diagonal elements of the matrix ; * perturbative formulas ( [ eq : alphanperturbation0 ] , [ eq : psi_3d_perturb ] ) which are valid for small . for a numerical computation of the coefficients in eqs .( [ eq : dn_2d ] , [ eq : dn_3d ] ) , we truncate the infinite - dimensional matrix to a finite size and invert the matrix . in order to check the accuracy of this scheme , we compute the coefficients by taking several values of from to . for , , and , the computed mean time rapidly converges to a limit .even the computation with gives the result with four significant digits .note that other sets of parameters ( e.g. , larger values of ) may require larger truncation sizes .to conclude , we have presented an exact calculation of the mean first - passage time to a target on the surface of a 2d and 3d spherical domain , for a molecule performing surface - mediated diffusion .the presented approach is based on an integral equation which can be solved analytically , and numerically validated approximation schemes , which provide more tractable expressions of the mean fpt .this minimal model of surface - mediated reactions , which explicitly takes into account the combination of surface and bulk diffusions , shows the importance of correlations induced by the coupling of the switching dynamics to the geometry of the confinement . indeed, standard mf treatments prove to substantially underestimate the reaction time in this case , and sometimes even fail to reproduce the proper monotonicity . in the context of interfacial systems in confinement , our results show that the reaction time can be minimized as a function of the desorption rate from the surface , which puts forward a general mechanism of enhancement and regulation of chemical reactivity .in this appendix , we describe another theoretical approach which relies on the explicit form of the green function of the poisson equation in 2d case . in particular , the perturbative analysis for small becomes easier within this approach . considering as a source term in the poisson type equation ( [ eq : t1 ] ) with absorbing conditions at and whose green function is well known , writes {\rm d } \theta',\end{aligned}\ ] ] and the notations and . 
injecting eq .( [ eq : t2integre ] ) into eq .( [ eq : t1integre ] ) leads to where , for integer , so that substituting eq .( [ eq : t1complet ] ) into eq .( [ eq : deffourier ] ) gives and where eq . ( [ eq : mode0 ] ) can be rearranged into and eq .( [ eq : moden ] ) into in this case , the previous equations can be solved exactly , leading to and we note that the particular case is also described by these expressions , although it does not seem to be clear from eqs .( [ eq : mode0])-([eq : moden ] ) . here again ,( [ eq : mode0])-([eq : moden ] ) can be solved exactly , and give : and expanding and in powers of eqs .( [ eq : mode0])-([eq : moden ] ) lead , after lengthy calculations , to eq . ( [ eq : deffourier ] ) for the fourier coefficients in eq .( [ eq : t1complet ] ) leads to a second integral equation satisfied by where this equation is especially well adapted to local expansions of in the vicinity of , but it can also be rearranged into the following integral equation , useful when : where this appendix , we provide the explicit formula for the matrix in 3d case .although technical , this is an important result for a numerical computation because it allows one to avoid an approximate integration in eq .( [ eq : i_3d ] ) which otherwise could be a significant source of numerical errors . the formula ( [ eq : pmpn ] ) for non - diagonal elementsis somewhat elementary , while the derivation for diagonal elements seems to be original .we denote using the relation we obtain the second integral is given by eq .( [ eq : pmpn ] ) . in order to compute the first one , we consider + n(n+1)p_n(x)\biggr ] \\ & - x p_n(x )\biggl[\frac{d}{dx } \biggl[(1-x^2 ) \frac{d}{dx } p_{n-1}(x)\biggr ] + ( n-1)np_{n-1}(x)\biggr]\biggr\ } \\ & = 2n \int\limits_a^b dx x p_{n-1}(x ) p_n(x ) + \biggl[xp_{n-1}(x ) ( 1-x^2 ) p'_n(x ) - xp_n(x ) ( 1-x^2 ) p'_{n-1}(x)\biggr]_a^b \\ & - \int\limits_a^b dx ( 1-x^2 ) \bigl [ p'_n(x ) p_{n-1}(x ) - p'_{n-1}(x ) p_n(x)\bigr ]. \\ \end{split}\ ] ] the last integral can be written as ^b \\ & - \int\limits_a^b dx ( p_{n-1}(x)/p_n(x ) ) \bigl[-2x p_n^2(x ) + 2(1-x^2 ) p'_n(x ) p_n(x)\bigr ] \\ & = \bigl[(1-x^2 ) p_{n-1}(x ) p_n(x ) \bigr]_a^b - 2\int\limits_a^b dx \bigl[-x p_{n-1}(x)p_n(x ) + ( 1-x^2 ) p'_n(x ) p_{n-1}(x)\bigr ] . \\ \end{split}\ ] ] in the last term , we substitute to get ^b + 2\int\limits_a^b dx x p_{n-1}(x)p_n(x ) - 2\int\limits_a^b dx \bigl[-nxp_n(x ) + np_{n-1}(x)\bigr ] p_{n-1}(x ) .\ ] ] bringing these results together , we get ^b \\ & + \bigl[(1-x^2 ) p_{n-1}(x ) p_n(x ) \bigr]_a^b + 2\int\limits_a^b dx xp_{n-1}(x)p_n(x ) - 2\int\limits_a^b dx \bigl[-nxp_n(x ) + np_{n-1}(x)\bigr ] p_{n-1}(x ) \\ \end{split}\ ] ] so that + ( 1-x^2 ) p_{n-1}(x ) p_n(x)\biggr]_a^b + \frac{n}{2n+1 } k_{n-1 } .\ ] ] we obtain + ( 1-x^2 ) p_{n-1}(x ) p_n(x)\biggr]_a^b + \frac{2n-1}{2n+1 } k_{n-1 } \\ & - \frac{n-1}{n } \biggl[\frac{2xp_{n-2}(x)p_n(x ) - np_{n-1}(x)p_{n-2}(x ) + ( n-2)p_{n-3}(x)p_n(x)}{2(2n-1)}\biggr]_a^b . 
\\ \end{split}\ ] ] we can further simplify this expression by using the following identities we get + ( 1-x^2 ) p_{n-1}(x ) p_n(x)\biggr]_a^b + \frac{2n-1}{2n+1 } k_{n-1 } \\& - \frac{n-1}{2n(2n-1 ) } \biggl[(2n-1)x p_n(x)p_{n-2}(x ) - np_{n-1}(x)p_{n-2}(x ) - ( n-1)p_{n-1}(x)p_n(x)\biggr]_a^b \\ % & = - \frac{2n-1}{2n(2n+1 ) } \biggl[nx \bigl[p_{n-1}^2(x ) + p_n^2(x ) - 2x p_n(x ) p_{n-1}(x ) ] + ( 1-x^2 ) p_{n-1}(x ) p_n(x)\biggr]_a^b + \frac{2n-1}{2n+1 } k_{n-1 } \\ & - \frac{1}{2n } \biggl[((2n-1)x^2 + 1 ) p_n(x)p_{n-1}(x ) - nx ( p_{n-1}^2(x ) + p_n^2(x))\biggr]_a^b \\ & = \frac{\bigl[x ( p_{n-1}^2(x ) + p_n^2(x ) ) - 2p_n(x ) p_{n-1}(x)\bigr]_a^b}{2n+1 } + \frac{2n-1}{2n+1 } k_{n-1 } \\\end{split}\ ] ] and we know that .applying this formula recursively , one finds where \\ & - 2p_n(x)p_{n-1}(x ) - 2p_{n-1}(x)p_{n-2}(x ) - ... - 2p_1(x)p_0(x ) + x\\ & = \sum\limits_{k=1}^n \bigl[2(x-1)p_k^2(x ) + [ p_k(x ) - p_{k-1}(x)]^2\bigr ] - ( x-1)p_n^2(x ) + ( x-1)p_0 ^ 2(x ) + x.\\ \end{split}\ ] ] one can check that this function satisfies the recurrent relation - 2p_n(x ) p_{n-1}(x ) , \hskip 5 mm f_0(x ) = x .\ ] ] note that .
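As a simple cross-check of closed-form expressions of this kind, one can compare them against direct numerical quadrature of the same Legendre products. The short Python sketch below shows the quadrature that such closed forms are meant to replace; the integration bounds and indices are illustrative, and the exact definition of eq. ([eq:i_3d]) is not reproduced here.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

def legendre_product_integral(m, n, a, b):
    """Evaluate the integral of P_m(x) * P_n(x) over [a, b] by adaptive quadrature.
    The closed forms derived in this appendix are intended to replace exactly this
    kind of numerical integration when filling the truncated matrix."""
    Pm, Pn = Legendre.basis(m), Legendre.basis(n)
    value, _ = quad(lambda x: Pm(x) * Pn(x), a, b)
    return value

# orthogonality check on [-1, 1]: ~0 for m != n and 2/(2n+1) for m == n
print(legendre_product_integral(2, 3, -1.0, 1.0))
print(legendre_product_integral(3, 3, -1.0, 1.0), 2.0 / 7.0)
# the matrix elements involve the same products on a sub-interval, e.g. [-1, cos(eps)]
eps = 0.1
print(legendre_product_integral(2, 3, -1.0, np.cos(eps)))
```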
We present an exact calculation of the mean first-passage time to a target on the surface of a 2d or 3d spherical domain, for a molecule alternating phases of surface diffusion on the domain boundary and phases of bulk diffusion. The approach is based on an integral equation that can be solved analytically. Numerically validated approximation schemes, which provide more tractable expressions of the mean first-passage time, are also proposed. In the framework of this minimal model of surface-mediated reactions, we show analytically that the mean reaction time can be minimized as a function of the desorption rate from the surface.
Recently, the demand for large-scale complex optimization has been increasing in computational science, engineering, and many other fields. Such problems present many difficulties caused by noise in the function evaluation, many tuning parameters, and high computation cost. In such cases, derivatives of the objective function are unavailable or computationally infeasible. These problems can be treated by derivative-free optimization (DFO) methods. DFO is a tool for optimization without derivative information of the objective function and constraints, and it has been widely studied for decades. DFO algorithms include gradient descent methods with a finite-difference gradient estimate, direct search methods using only function values, and trust-region methods. There is, however, a more restricted setting in which not only derivatives but also values of the objective function are unavailable or computationally infeasible. In such a situation, the so-called pairwise comparison oracle, which tells us the order of the function values at two evaluation points, is used instead of derivatives and function evaluations. For example, pairwise comparisons are used in learning to rank in order to collect training samples for estimating the preference function of a ranking problem. In decision making, finding the most preferred feasible solution among a set of many alternatives is an important application of ranking methods based on pairwise comparisons. Other types of information, such as a stochastic gradient-sign oracle, have also been studied. Let us now introduce two DFO methods, the Nelder-Mead method and the stochastic coordinate descent algorithm, which are closely related to our work. In both methods, the pairwise comparison of function values is used as a building block of the optimization algorithm. Nelder and Mead's downhill simplex method was proposed in an early study of algorithms based on the pairwise comparison of function values. In each iteration of the algorithm, a simplex that approximates the objective function is constructed according to the ranking of function values on sampled points. The simplex then undergoes four operations, namely reflection, expansion, contraction, and reduction, in order to get closer to the optimal solution. Unfortunately, the convergence of the Nelder-Mead algorithm is theoretically guaranteed only in low-dimensional problems. In high-dimensional problems, the Nelder-Mead algorithm works poorly, as shown in. The stochastic coordinate descent algorithm using only noisy pairwise comparisons was proposed in. Lower and upper bounds of the convergence rate were also presented in terms of the number of pairwise comparisons of function values, i.e., the query complexity. The algorithm iteratively solves one-dimensional optimization problems, like the coordinate descent method. However, the practical performance of the optimization algorithm was not studied in that work. In this paper, we focus on optimization algorithms using the pairwise comparison oracle. In our algorithm, convergence to the optimal solution is guaranteed as the number of pairwise comparisons tends to infinity. Our algorithm can be regarded as a block coordinate descent method consisting of two steps: the direction estimate step and the search step. In the direction estimate step, the search direction is determined. In the search step, the current solution is updated along the search direction with an appropriate step length.
In our algorithm, the direction estimate step is easily parallelized. Therefore, our algorithm is expected to work effectively even on large-scale optimization problems. Let us summarize the contributions of this paper.

1. We propose a block coordinate descent algorithm based on the pairwise comparison oracle and point out that the algorithm is easily parallelized.
2. We derive an upper bound on the convergence rate in terms of the number of pairwise comparisons of function values, i.e., the query complexity.
3. We show the practical efficiency of our algorithm through numerical experiments.

The rest of the paper is organized as follows. In section [pre], we explain the problem setup and give some definitions. Section [mai] is devoted to the main results; the convergence properties and query complexity of our algorithm are shown in that section. In section [num], numerical examples are reported. Finally, in section [con], we conclude the paper with a discussion of future work. All proofs of the theoretical results are found in the appendix.

In this section, we introduce the problem setup and prepare some definitions and notations used throughout the paper. A function is said to be -strongly convex on for a positive constant if, for all , the inequality holds, where and denote the gradient of at and the euclidean norm, respectively. The function is -strongly smooth for a positive constant if holds for all . The gradient of a -strongly smooth function is referred to as a -lipschitz gradient. The class of -strongly convex and -strongly smooth functions on is denoted as . In the convergence analysis, we mainly focus on the optimization of objective functions in this class. We consider the following pairwise comparison oracle, defined in . The stochastic pairwise comparison (pc) oracle is a binary-valued random variable whose probability of returning { \rm sign } \{ f(\y) - f(\x) \ } is at least \frac{1}{2 } + \min \{\delta_0 , \mu |f(\y ) - f(\x)|^{\kappa-1 } \ } ( [ eqp3 ] ), where , and . For , is assumed without loss of generality. When this probability equals 1 for all and , we call it the deterministic pc oracle. For , the probability in ([eqp3]) is not affected by the difference , meaning that the output distribution of the pc oracle is not changed under any monotone transformation of . In , Jamieson et al. derived lower and upper bounds on the convergence rate of an optimization algorithm using the stochastic pc oracle; that algorithm is referred to as the original pc algorithm in the present paper. Under the above preparations, our purpose is to find the minimizer of the objective function by using the pc oracle. In the following section, we provide a dfo algorithm that solves this optimization problem, and we study its convergence properties, including the query complexity.

Algorithm [alg1] (block coordinate descent with pairwise comparisons). Input: an initial point and the accuracy of the line search; initialize the iteration counter. Then repeat:
1. Choose coordinates out of the coordinates according to the uniform distribution.
2. Solve the one-dimensional optimization problems within the accuracy , using the pc-based line search algorithm shown in algorithm [alg2], where denotes the -th unit basis vector, and obtain the numerical solutions.
3. Form the search direction from these solutions; if it is the zero vector, add to it.
4. Apply algorithm [alg2] to obtain a numerical solution along the search direction within the accuracy , and update the current point.

In algorithm [alg1], we propose a dfo algorithm based on the pc oracle.
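As an illustration of how these ingredients could fit together, here is a compact Python sketch of a stochastic comparison oracle with the success probability quoted above, a comparison-only line search standing in for algorithm [alg2] (here a plain ternary search with majority voting), and one block-coordinate iteration in the spirit of algorithm [alg1]. It is not the authors' implementation: every constant, the voting rule and the zero-direction fallback are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def pc_oracle(f, x, y, mu=1.0, kappa=2.0, delta0=0.3):
    """Stochastic pairwise comparison: +1 if it claims f(y) > f(x), else -1.
    Correct with probability 1/2 + min(delta0, mu * |f(y) - f(x)| ** (kappa - 1))."""
    diff = f(y) - f(x)
    truth = 1.0 if diff > 0 else -1.0
    p_correct = 0.5 + min(delta0, mu * abs(diff) ** (kappa - 1))
    return truth if rng.random() < p_correct else -truth

def comparison_line_search(f, x, d, lo=-1.0, hi=1.0, tol=1e-3, repeats=15):
    """Minimize t -> f(x + t*d) using comparisons only (majority vote over the oracle);
    a ternary-search stand-in for the PC-based line search of algorithm [alg2]."""
    def better(t1, t2):  # True if f(x + t1*d) < f(x + t2*d) according to a majority vote
        votes = sum(pc_oracle(f, x + t1 * d, x + t2 * d) for _ in range(repeats))
        return votes > 0
    while hi - lo > tol:
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if better(m1, m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def block_cd_step(f, x, m):
    """One iteration: estimate a search direction from m randomly chosen coordinates
    (computable in parallel), then line-search along that direction."""
    n = x.size
    coords = rng.choice(n, size=m, replace=False)
    direction = np.zeros(n)
    for i in coords:
        e = np.zeros(n)
        e[i] = 1.0
        direction[i] = comparison_line_search(f, x, e)
    if not np.any(direction):
        direction[coords[0]] = 1e-3   # hypothetical fallback; not the paper's exact rule
    t = comparison_line_search(f, x, direction)
    return x + t * direction

f = lambda z: np.sum((z - 1.0) ** 2)   # toy strongly convex objective
x = np.zeros(5)
for _ in range(30):
    x = block_cd_step(f, x, m=3)
print(x)                                # should approach the vector of ones
```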
in our algorithm, coordinates out of elements are updated in each iteration to efficiently cope with high dimensional problems .algorithm [ alg1 ] is referred to as ] is approximated by , where denotes the diagonal matrix , the diagonal elements of which are those of the square matrix . in the modified newton method , the hessian matrix in the newton method is replaced with a positive definite matrix to reduce the computation cost . using only the diagonal part of the hessian matrix is a popular choice in the modified newton method .current solution , search direction and accuracy in line search .set , , , . ( double - sign corresponds ) , , , , , ( double - sign corresponds ) , figure [ fg1 ] demonstrates an example of the optimization process of both the original pc algorithm and our algorithm .the original pc algorithm updates the numerical solution along a randomly chosen coordinate in each iteration .hence , many iterations are required to get close to the optimal solution . on the other hand , in our algorithm, a solution can move along a oblique direction . therefore, our algorithm can get close to the optimal solution with less iterations than the original pc algorithm . with same initialization .left panel : jamieson et al.s original pc algorithm .right panel : proposed algorithm . , title="fig : " ] with same initialization . left panel : jamieson et al.s original pc algorithm .right panel : proposed algorithm . , title="fig : " ] we now provide an upper bound of the convergence rate of our algorithm using the deterministic pc oracle .let us denote the minimizer of as .[ upper ] suppose , and define and be let us define be for , we have \leq{}\varepsilon ] .the proof of theorem [ upper ] is given in [ proof.upper ] .note that any monotone transformation of the objective function does not affect the output of the deterministic pc oracle .hence , the theorem above holds even for the function such that the composite function with a monotone function is included in .let be the output of blockcd ] , set and toss the coin with probability of heads once . in stochastic pc oracle , one needs to ensure that the correct information is obtained in high probability . in algorithm[ alg3 ] , the query is repeated under the stochastic pc oracle .the reliability of line search algorithm based on stochastic pc oracle was investigated by .[ ] [ sto_line ] for any with ] with probability , and requests no more than queries .it should be noted here that , in this paper , \}= { \rm sign } \{f(y)-f(x)\} ] of algorithm [ alg1 ] . here , the pc oracle was used in all the optimization algorithms . in blockcd ] with .numerical results are presented in figure [ fig:2_dim_problems ] .we tested optimization methods on the quadratic function , and two - dimension rosenbrock function , , where the matrix was a randomly generated 2 by 2 positive definite matrix . in two dimension problems, we do not use the parallel implementation of our method , since clearly the parallel computation is not efficient that much .the efficiency of parallel computation is canceled by the communication overhead . in our method , the accuracy of the line search is fixed to a small positive number .hence , the optimization process stops on the way to the optimal solution , as shown in the left panel of fig .[ fig:2_dim_problems ] . on the other hand , the nelder - mead method tends to converge to the optimal solution in high accuracy . 
In terms of convergence speed for the optimization of the two-dimensional quadratic function, there is no difference between the Nelder-Mead method and the blockcd method until the latter stops owing to the limited accuracy of its line search. Even for the non-convex Rosenbrock function, the Nelder-Mead method works well compared with the PC-based blockcd algorithm.

In this paper, we proposed a block coordinate descent algorithm for unconstrained optimization problems using the pairwise comparison of function values. Our algorithm consists of two steps: the direction estimate step and the search step. The direction estimate step can easily be parallelized; hence, our algorithm is effectively applicable to large-scale optimization problems. Theoretically, we obtained an upper bound on the convergence rate and the query complexity when the deterministic and stochastic pairwise comparison oracles are used. Practically, our algorithm is simple and easy to implement. In addition, numerical experiments showed that the parallel implementation of our algorithm outperformed the other methods. An extension of our algorithm to constrained optimization problems is an important direction for future work. Other interesting research directions include pursuing the relation between the pairwise comparison oracle and other kinds of oracles, such as the gradient-sign oracle.

Charles Audet and John E. Dennis Jr. Analysis of generalized pattern searches. , 13(3):889-903, 2002.
Stephen Poythress Boyd and Lieven Vandenberghe. . Cambridge University Press, 2004.
Andrew R. Conn, Katya Scheinberg, and Luis N. Vicente. , volume 8. SIAM, 2009.
Andrew R. Conn, Nicholas I. M. Gould, and Ph. L. Toint. , volume 1. SIAM, 2000.
A. D. Flaxman, A. T. Kalai, and H. B. McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. In _Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms_, SODA '05, pages 385-394, Philadelphia, PA, USA, 2005. Society for Industrial and Applied Mathematics.
M. C. Fu. Gradient estimation. In S. G. Henderson and B. L. Nelson, editors, _Handbooks in Operations Research and Management Science: Simulation_, chapter 19. Elsevier, Amsterdam, 2006.
Fuchang Gao and Lixing Han. Implementing the Nelder-Mead simplex algorithm with adaptive parameters. , 51(1):259-277, 2012.
K. G. Jamieson, R. D. Nowak, and B. Recht. Query complexity of derivative-free optimization. In _NIPS_, pages 2681-2689, 2012.
Matti Kääriäinen. Active learning in the non-realizable case. In _Algorithmic Learning Theory_, pages 63-77. Springer, 2006.
Jeffrey C. Lagarias, James A. Reeds, Margaret H. Wright, and Paul E. Wright. Convergence properties of the Nelder-Mead simplex method in low dimensions. , 9:112-147, 1998.
D. Luenberger and Y. Ye. . Springer, 2008.
Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. . MIT Press, 2012.
J. A. Nelder and R. Mead. A simplex method for function minimization. , 7(4):308-313, 1965.
. R Foundation for Statistical Computing, Vienna, Austria, 2014.
Aaditya Ramdas and Aarti Singh. Algorithmic connections between active learning and stochastic convex optimization. In _Algorithmic Learning Theory_, pages 339-353. Springer, 2013.
Aaditya Ramdas and Aarti Singh. Algorithmic connections between active learning and stochastic convex optimization.
in sanjay jain , rmi munos , frank stephan , and thomas zeugmann , editors , _ alt _ , volume 8139 of _ lecture notes in computer science _ , pages 339353 .springer , 2013 .luis miguel rios and nikolaos v sahinidis .derivative - free optimization : a review of algorithms and comparison of software implementations ., 56(3):12471293 , 2013 .the optimal solution of is denoted as .let us define be . if holds in the algorithm , we obtain , since the function value is non - increasing in each iteration of the algorithm is assured by a minor modification of pc - oracle in . ] .next , we assume .the assumption leads to in which the second inequality is derived from ( 9.9 ) in . in the following ,we use the inequality that is proved in . forthe -th coordinate , let us define the functions and as then , we have let and be the minimum solution of and , respectively . then , we obtain the inequality yields that lies between and , where and are defined as here , holds .each component of the search direction in algorithm [ alg1 ] satisfies if and otherwise . for ,let of the vector be .then , the triangle inequality leads to the assumption and the inequalities lead to hence , we obtain + , \end{aligned}\ ] ] where +=\max\{0,x\} ] holds .thus , lemma [ eqn : lemma_bound_expz ] in the below leads to & \geq \frac{k^2}{53}\frac{m}{n}\|\nabla{f}(\x_t)\|^2 . \end{aligned}\ ] ] eventually , if , the conditional expectation of for given is given as & \leq f(\x_t)-f(\x^*)- \frac{k^2}{106l}\frac{m}{n}\|\nabla{f}(\x_t)\|^2 + \frac{l\eta^2}{2}\\ & \leq \left(1-\frac{m}{n}\gamma\right)(f(\x_t)-f(\x^ * ) ) + \frac{l\eta^2}{2}. \end{aligned}\ ] ] combining the above inequality with the case of , we obtain \\ & \leq \1[f(\x_{t})-f(\x^*)\geq\varepsilon']\cdot \left [ \left(1-\frac{m}{n}\gamma\right)(f(\x_{t})-f(\x^ * ) ) + \frac{l\eta^2}{2}\right ] + \1[f(\x_{t})-f(\x^*)<\varepsilon ' ] \cdot\varepsilon ' . \end{aligned}\ ] ] the expectation with respect to all yields & \leq \left(1-\frac{m}{n}\gamma\right ) \ebb[\1[f(\x_{t})-f(\x^*)\geq\varepsilon'](f(\x_{t})-f(\x^ * ) ) ] \\ & \phantom{\leq } + \ebb[\1[f(\x_{t})-f(\x^*)\geq\varepsilon']]\frac{l\eta^2}{2 } + \ebb[\1[f(\x_{t})-f(\x^*)<\varepsilon ' ] ] \varepsilon ' \\ &\leq \left(1-\frac{m}{n}\gamma\right ) \ebb[f(\x_{t})-f(\x^ * ) ] + \max\left\ { \frac{l\eta^2}{2},\,\varepsilon ' \right\}.\end{aligned}\ ] ] since and hold , for ] .then , we have +^2}{(z+1/2)^2}\right]\geq\frac{1}{53}. \end{aligned}\ ] ] for and , we have the inequality +^2}{(z+1/2)^2 } \geq \frac{\delta^2}{(1+\delta)^2}\1[z\geq1/2+\delta ] . \end{aligned}\ ] ] then ,we get +^2}{(z+1/2)^2}\right ] & \geq \frac{\delta^2}{(1+\delta)^2}\ebb[z^2 \1[z\geq1/2+\delta]]\\ & = \frac{\delta^2}{(1+\delta)^2}\ebb[z^2(1-\1[z<1/2+\delta])]\\ & = \frac{\delta^2}{(1+\delta)^2 } \left ( 1-\ebb[z^2\1[z<1/2+\delta ] ] \right)\\ & \geq \frac{\delta^2}{(1+\delta)^2 } \left ( 1-(1/2+\delta)^2\pr(z<1/2+\delta ) \right)\\ & \geq \frac{\delta^2}{(1+\delta)^2 } \left ( 1-(1/2+\delta)^2 \right ) . \end{aligned}\ ] ] by setting appropriately , we obtain +^2}{(z+1/2)^2}\right]\geq\frac{1}{53}.\end{aligned}\ ] ]for the output of blockcd $ ] , holds , and thus , the sequence is included in since is convex and continuous , is convex and closed . 
Moreover, since the function is convex and has a non-degenerate Hessian, the Hessian is positive definite, and thus the function is strictly convex. The set is then bounded, as follows. We define the minimum directional derivative along the radial direction from over the unit sphere around as . It is strictly positive, and the following holds for any such that : . Thus we have . Since the right-hand side of ([include]) is a bounded ball, the set is also bounded; it is therefore a convex compact set. Since the function is twice continuously differentiable, the Hessian matrix is continuous with respect to its argument. By the positive definiteness of the Hessian matrix, its minimum and maximum eigenvalues are continuous and positive. Therefore, they attain a positive minimum value and a maximum value on the compact set. This means that the function is strongly convex and has a Lipschitz-continuous gradient on that set. Thus, the same argument used to obtain ([eq:quecom]) can be applied.
This paper provides a block coordinate descent algorithm for unconstrained optimization problems. In our algorithm, the computation of function values or gradients is not required; instead, pairwise comparisons of function values are used. Our algorithm consists of two steps: the direction estimate step and the search step. Both steps require only the pairwise comparison of function values, which tells us only the order of the function values at two points. In the direction estimate step, a Newton-type search direction is estimated by a computation scheme resembling block coordinate descent methods, using only pairwise comparisons. In the search step, a numerical solution is updated along the estimated direction. The computation in the direction estimate step can easily be parallelized, and thus the algorithm works efficiently to find the minimizer of the objective function. We also show an upper bound on the convergence rate. In numerical experiments, we show that our method finds the optimal solution efficiently compared with some existing methods based on the pairwise comparison.