a major shift in the field of thermodynamics in the last century was from idealized equilibrium processes to natural irreversible processes [ 1 - 4].chemical reactions continue to play a pivotal role in this development and provide significant motivation in studying the non - equilibrium thermodynamic properties of systems _ in vitro _ as well as _ in vivo _ [ 5 - 10 ] . since a closed system always tends to thermodynamic equilibrium ( te ) , a natural generalization in the theory of irreversible thermodynamics has been achieved via the concept of a steady state . in this regard , the quantity of primary importance is the entropy production rate ( epr ) .the epr vanishes for a closed system in the long - time limit that reaches a true te . on the other hand , epr is positive definite for a steady state that can emerge in an _ open _ system .the easiest way to model such a system in the context of chemical reactions is to assume that concentrations of some of the reacting species are held fixed . under this condition , aptly known as the chemiostatic condition , epr tends to a non - zero constant , reflecting a steady dissipation rate ( sdr ) to sustain the system away from equilibrium .the corresponding steady state is denoted as the non - equilibrium steady state ( ness ) .this concept has been extensively used in analyzing single - molecule kinetic experiments .the ness also includes the te as a special case when detailed balance ( db ) is obeyed , thus providing a very general framework .recently , an important progress was made in the theory and characterization of ness , considering a master equation formalism .these studies have established that the classification of ness requires _ not only _ the steady distribution ( as in te ) but _ also _ the stationary fluxes or probability currents .this approach enables one to identify _ all possible _ combinations of transition rates that ultimately lead the system to the _ same _ ness .however , these nesss in general have different values of the epr , and hence the sdr .this proposition prompts one to check ( i ) how states with the same epr at ness can be generated and ( ii ) whether there exist ways to distinguish these states . here, we shall address both the issues by considering an enzyme - catalyzed reaction under chemiostatic condition . expressing the epr as a function of experimentally measurable reaction rate , we emphasize alsothat , the quantity that identifies the various nesss having the same epr is linked with the enzyme efficiency , a useful measure that is expressible in terms of enzyme kinetic constants .the basic scheme of enzyme catalysis within the michaelis - menten ( mm ) framework with reversible product formation step is shown in fig.[fig1 ] . under chemiostatic condition , ] are kept constant by continuous injection and withdrawal , respectively .this is the simplest model to mimic an open reaction system . unlike the usual case of full enzyme recovery with total conversion of substrate into product in a closed system , here both the concentrations of free enzyme e and the enzyme - substrate complexes reach a steady value . also , instead of the rate of product formation , the progress of reaction is characterized by the rate of evolution of ] ) .we define the pseudo - first - order rate constants as ] concentration of e is denoted by and that of es is given by we have then here is a constant that stands for the total enzyme concentration. 
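The rate equation and conservation law referred to just above did not survive extraction. A hedged reconstruction from the stated scheme and the chemiostatic condition, in notation of our own choosing, is:

```latex
% Reconstruction (our notation), assuming the scheme E + S <-> ES <-> E + P with
% [S] and [P] held fixed, and pseudo-first-order constants
%   a = k_1[S],  b = k_{-1},  c = k_2,  d = k_{-2}[P].
\begin{align}
  \frac{d[\mathrm{E}]}{dt} &= -(a+d)\,[\mathrm{E}] + (b+c)\,[\mathrm{ES}], \\
  [\mathrm{E}] + [\mathrm{ES}] &= [\mathrm{E}]_0 \quad \text{(constant total enzyme concentration)} .
\end{align}
```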
then the rate of the reaction , , is written as where with the initial condition , , the time - dependent solution is given as the steady state enzyme concentration corresponds to the long - time limit of eq.([ct ] ) : at any steady state , we thus note the fluxes of the reaction system are defined pairwise as from eq.([const ] ) , eq.([remm ] ) , eq.([j1 ] ) and eq.([j2 ] ) , one gets at the steady state , eq.([rateflx ] ) leads to an ness is characterized by a non - zero flux , . at te ,the fluxes vanish for both the reactions .one may note , then the system satisfies db . the conjugate forces of the fluxes given in eqs ( [ j1])-([j2 ] ) are defined as corresponding to the scheme depicted in fig.[fig1 ] , the epr is then given by we set here ( and henceforth ) the boltzmann constant . in the present case ,the steady value of epr becomes therefore , unless the substrate and the product take part in equilibrium , the reaction system reaches an ness with a sdr equal to .the problem is now transparent .if the rate constants become different , the steady concentrations will also differ .but , one can adjust them in such a way that remains the same . in these situations, one needs an additional parameter to distinguish these states . to proceed ,we define a small deviation in around ness as it then follows from eq.([const ] ) that from eq.([remm ] ) and eq.([delmm1 ] ) , the reaction rate becomes now , putting eqs ( [ j1])-([f2 ] ) and eqs ( [ delmm1])-([vnsmm ] ) in eq.([eprmm ] ) and taking only the first terms of the logarithmic parts , we obtain the epr close to ness as here as vanishes at any steady state , the sdr at ness is given by however , at te , one may check that here db holds : inspection of eq.([eprmm1 ] ) reveals that , near ness , varies _ linearly _ with with a slope .thus , while distinguishes an ness from a true te , plays the same role in identifying systems with the _ same _ sdr but having _ different _ time profiles .in this section , we consider various situations where the reaction system reaches ness with the same sdr . focusing on eq.([x3 ] ) , the different cases that keep invariant are discussed next .case a : any parent choice of rate constants .case b : only and are exchanged .case c : only and are exchanged .case d : both and are exchanged .case e : both and are exchanged .case f : both and are exchanged .case g : changed to , changed to , changed to and changed to , such that it can be easily verified that cases d and e possess not only identical but also the same and .this is true for cases a and f as well .so , we do not consider cases e and f any further . a simple explanation of the equivalence is given in fig.[fig2 ] schematically , based on reflection symmetry . to explore the characteristics of various cases given above, we take the rate constants from the single molecule experimental study of english _ et al . _ on the _ escherichia coli _ -galactosidase enzyme .they are as follows : e07 e04 e02 we clarify that , in their study , had actually been shown to be a fluctuating quantity with a distribution .however , only an _ average _ value of will suffice our purpose .the constant substrate concentration is set at =1.0 ] e03 .we choose e-05 to make the reaction scheme almost identical to the conventional mm kinetics . here with magnitudes given above represents the parent choice of rate constants , _i.e. _ , case a. the value of the constant e01 , in case g. 
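Before turning to the numerical results, a minimal sketch of the scheme just described may help. The pseudo-first-order constants a, b, c, d follow the reconstruction given earlier; every numerical value is a placeholder (the experimental rate constants quoted in the text did not survive extraction), so this is an illustration, not the paper's calculation.

```python
import numpy as np

# Chemiostatic Michaelis-Menten scheme with pseudo-first-order constants
#   a = k1*[S], b = k-1, c = k2, d = k-2*[P]   (all values hypothetical)
a, b, c, d = 2.0, 0.5, 1.5, 0.1       # s^-1, placeholders
E_T = 1.0                             # total (scaled) enzyme concentration

k_relax = a + b + c + d               # relaxation rate of [ES](t)
ES_ss = E_T * (a + d) / k_relax       # steady complex concentration

def concentrations(t, ES0=0.0):
    """[E](t), [ES](t): exponential relaxation to the steady state."""
    ES = ES_ss + (ES0 - ES_ss) * np.exp(-k_relax * t)
    return E_T - ES, ES

def epr(E, ES):
    """Entropy production rate (k_B = 1): each step contributes flux * affinity."""
    J1 = a * E - b * ES               # substrate-binding step
    J2 = c * ES - d * E               # product-release step
    return J1 * np.log(a * E / (b * ES)) + J2 * np.log(c * ES / (d * E))

t = np.linspace(1e-3, 5.0 / k_relax, 200)
sigma_t = epr(*concentrations(t))                 # EPR along the transient
sigma_ss = epr(E_T - ES_ss, ES_ss)                # steady dissipation rate
print("epr early / late in the transient:", sigma_t[0], sigma_t[-1])
print("sigma_ss =", sigma_ss)
# closed form of the same quantity: E_T*(a*c - b*d)/(a+b+c+d) * ln(a*c/(b*d))
```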
the time - evolution of epr , determined using both the exact ( eq.([eprmm ] ) ) and the approximate ( eq.([eprmm1 ] ) ) expressions , are shown in fig.[fig3 ] , for the various cases .the concentrations are made dimensionless by scaling with respect to the total enzyme concentration .this ensures that has the unit of . from the figure ,it is evident that eq.([eprmm1 ] ) nicely approximates the behavior near ness .specifically , the curves of exact and approximate cases merge quite well for any e-04 s. the evolution of reaction rate is shown in fig.[fig4 ] for all the distinct cases .the curves are displayed over a time - span where eq.([eprmm1 ] ) is valid , as mentioned above .this gives us a quantitative understanding of the magnitude of up to which the _ close to _ ness approximation , and hence eq.([eprmm1 ] ) , is valid .we note the variation of as a function of in all the relevant cases in fig.[fig5 ] .both the exact ( fig.[fig5](a ) ) as well as the approximate results ( fig.[fig5](b ) ) are shown .two features are interesting .first , in all the situations , the system reaches an ness with identical e03 .secondly , the quantity that distinguishes one case from the other is the slope of vs. curve near the ness. this slope can be positive as well as negative .one may like to next investigate the role of the rate constants in governing the overall dissipation in various cases . specifically, we like to enquire if the efficiency of the enzyme has anything to do with the total dissipation . in this context, it may be recalled that , the conventional mm kinetics requires the rate constant to be negligible compared with the others .so , the enzyme kinetic constants , like the mm constant and catalytic efficiency , are meaningful in the limit .our choice of parent rate constants ensures that in case a , the system follows mm kinetics .case b , which leaves unchanged and case g , which changes to ( with e01 ) , can also be included within the mm scheme .but , cases c to f , which exchange with any one of the other bigger rate constants , can not follow the usual mm kinetics . therefore , we focus on cases a , b and g in finding any possible connection between the kinetic constants of the enzyme and the total dissipation .while the sdr is the same for all of them , the time - integrated epr , giving the total entropy production , is different .we define it as the upper limit is fixed at such a time when all the systems reach ness .in the present set of cases , we find that setting e-03 s is satisfactory .the values of and ( determined by integrating from eq.([eprmm ] ) ) are listed in table [ tab1 ] , along with the slope [ see eq.([eprmm1 ] ) ] .it is clear from the data that , in going from case a to case g , gradually increases , whereas falls .both these features indicate that the enzyme becomes _ less efficient_. more interesting is to note that the corresponding values also exhibit a decreasing trend from case a to case g. thus , we can say that , with identical sdr , the more efficient enzyme ( bigger and smaller ) involves higher _ total _ dissipation .this can be rationalized by the fact that , higher efficiency corresponds to a _faster _ conversion of substrate into product .this implies an increased irreversibility in the process .consequently , a higher entropy production is noted ..values of the quantities , , and for cases a , b and g. 
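Two short numerical checks related to the comparisons above; they reuse concentrations(), epr() and k_relax from the previous sketch. The mapping of the swaps below onto the text's cases B to G is our reading (the rate-constant symbols were stripped in extraction), and the values remain placeholders, so treat this only as an illustration of how distinct parameter sets can share one steady dissipation rate while accumulating different total dissipation.

```python
import numpy as np

def steady_epr(a, b, c, d, E_T=1.0):
    """Steady dissipation rate of the chemiostatic MM scheme (k_B = 1)."""
    J = E_T * (a * c - b * d) / (a + b + c + d)   # common steady flux of both steps
    return J * np.log((a * c) / (b * d))

a, b, c, d = 2.0, 0.5, 1.5, 0.1                    # hypothetical parent choice
for name, rates in {
        "parent":                (a, b, c, d),
        "swap a <-> c":          (c, b, a, d),
        "swap b <-> d":          (a, d, c, b),
        "swap a<->c and b<->d":  (c, d, a, b),
        "swap a<->d and b<->c":  (d, c, b, a)}.items():
    print(f"{name:22s} sigma_ss = {steady_epr(*rates):.6f}")   # all rows identical

# Total entropy production up to a horizon tau by which the NESS is reached;
# tau is arbitrary here (the paper fixes it where all cases have settled).
tau = 10.0 / k_relax
t = np.linspace(1e-4, tau, 4000)
s = epr(*concentrations(t))
sigma_total = float(np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(t)))   # trapezoid rule
print("total entropy production up to tau:", sigma_total)
```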
[ cols="<,^,^,^,^",options="header " , ] [ tab1 ] before ending this section , we mention briefly the fate of the different situations when db , eq.([db ] ) , gets satisfied. in this scenario , whatever be the values of the rate constants , the final epr is trivially zero as the reaction system reaches te [ see eq.([epreq ] ) ] .for the same reason , also becomes zero [ see eq.([y3 ] ) ] .however , it follows from eq.([eprmm1 ] ) that , epr varies _ quadratically _ with near te .then , in principle , _ can _ distinguish systems reaching te .it is easy to see from eq.([z3 ] ) that , cases a , b and g possess different values for and hence they can be identified by following the behavior of epr with the reaction rate .the mm kinetics , shown in fig.[fig1 ] , with a single intermediate in the form of the es complex , is _ exactly _ solvable .we now generalize this scheme to an enzyme catalysis reaction having n number of species .these include the free enzyme e and ( n-1 ) intermediates , under similar chemiostatic condition as discussed in section ii .the reaction scheme is depicted in fig.[fig6 ] .essentially , the species refer to the various conformers of the enzyme - substrate complex .the corresponding rate equations are given as with being the concentration of species at time .the following periodic boundary conditions hold : we have set ] the flux due to the i - th reaction is defined as the expression of epr then becomes it is generally not possible to solve the set of coupled equations analytically for a system of arbitrary size .however , again focusing on a situation close to the ness , one can get some insights .for that purpose , we define small deviations in species concentrations from their respective ness values as for a short time interval , using finite difference approximation , one gets putting eqs ( [ del])-([del1 ] ) in eq.([dai ] ) , we get as the reactions are coupled , so the are related to each other and can be expressed in terms of any one of them , say then , one can write next we will discuss the scheme to determine the .the set of coupled equations ( [ delness ] ) , with the help of eq.([relatn ] ) , can be cast in the matrix form here is a matrix with and is a matrix with the property the non - zero matrix elements are from eq.([mat ] ) and eq.([matel ] ) , we obtain a recursion relation with the boundary conditions : the first of the relations becomes then , it is easy to follow from eq.([recur ] ) that , all the other can be expressed in terms of from the condition we get and using eq.([relatn ] ) , we have from eqs ( [ matel1])-([recur1 ] ) and eq.([sumf ] ) , one can determine the in eq.([relatn ] ) .we are now ready to explore the epr near the ness . from eq.([dai ] ) , we have at ness . as we have chosen to express all the deviations in concentration from the ness in terms of ,so we take the reaction rate as .then , from eq.([dai ] ) with and using eq.([relatn ] ) along with the periodic boundary conditions , we get near ness where now putting eq.([del ] ) , eq.([relatn ] ) , eq.([jns ] ) and eq.([vns ] ) in eq.([eprc ] ) and also using the smallness of , the epr near ness becomes with eq.([epr1 ] ) is the generalized version of eq.([eprmm1 ] ) , confirming that expression of the epr as a functional of reaction rate possesses a universal character .the next task is , whether states having the same sdr , _i.e. 
_ , identical , can be generated for the n - cycle .an obvious clue comes from the invariance of a cycle under rotation .thus , if the steady concentrations are represented as n points uniformly placed on a circle , then rotations by an angle , defined as will just redistribute the values .this keeps the steady flux in eq.([x1 ] ) unchanged .therefore , for a n - cycle , there are _ at least _ ( n-1 ) ways to interchange the rate constants that will lead the reaction system to states with the same sdr .we illustrate this result here by taking the simplest non - trivial case of a triangular network as an example .one can see from eq.([ang ] ) that , for a triangular network with , _ at least _ two kinds of changes of the rate constants keep the sdr unchanged .they are given below : case 1 .any parent choice of rate constants .case 2 . change with the boundary condition .case 3 . change , and . one can generate additional ways to keep fixed with some added constraints on the rate constants .two pairs of situations [ cases 4 and 5 , and 6 and 7 ] are the following : case 4 .any parent choice of rate constants with .change , , , , and .any parent choice of rate constants with .change , , , , and .+ all the above variants have been numerically studied and shown in fig.[fig7 ] where the epr , determined exactly by eq.([eprc ] ) , is plotted as a function of reaction rate for each of the cases .it is evident from the figure that the sdr are identical for the respective bunch of cases .but they can be distinguished by following the vs. curve in the small- regime .in summary , the present endeavor has been to characterize steady states with the same non - zero sdr .we have found that the variation of epr with the reaction rate near completion of the reaction is a nice indicator to distinguish such states .particularly important is the role of the slope of vs. curve near .this has been substantiated by studying enzyme - catalysed reactions as an exactly - solvable test case .we have also noticed , the leading term that accounts for the variation depends on the rate constants , more specifically on the enzyme efficiency .it is gratifying to observe that the more efficient enzyme incurs higher total dissipation .the physical appeal is immediate .a more efficient enzyme approaches the steady state more quickly .this implies the process becomes more irreversible .hence , becomes higher .one more notable point is the following .the sdr is equal to the steady heat dissipation rate .our study reveals that enzymes with very different efficiencies can show the same heat dissipation rate at steady state .an extension to cases of higher complexities involving various conformers of the enzyme - substrate complex has also been envisaged .further studies along this line on enzymes with multiple sites may be worthwhile .k. banerjee acknowledges the university grants commission ( ugc ) , india for dr .d. s. kothari fellowship .k. bhattacharyya thanks crnn , cu , for partial financial support .
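As a numerical companion to the cyclic-scheme analysis of the preceding sections (placed here at the close of the article so as not to interrupt the derivation), the sketch below solves the steady state of an N-membered cycle, evaluates its dissipation rate, and checks that rotating the rate constants around the cycle permutes the steady concentrations while leaving the steady dissipation rate unchanged. The index convention is our reading of the garbled rate equations, and all rate values are placeholders.

```python
import numpy as np

def cycle_steady_state(f, r, E_T=1.0):
    """Steady concentrations of an N-membered cycle.
    f[i]: pseudo-first-order rate X_i -> X_{i+1}; r[i]: rate X_{i+1} -> X_i
    (indices mod N, chemiostatted species absorbed into the rates)."""
    N = len(f)
    W = np.zeros((N, N))                          # dc/dt = W c
    for i in range(N):
        W[(i + 1) % N, i] += f[i]
        W[i, i]           -= f[i]
        W[i, (i + 1) % N] += r[i]
        W[(i + 1) % N, (i + 1) % N] -= r[i]
    # replace one balance equation by the conservation law sum(c) = E_T
    A = np.vstack([W[:-1], np.ones(N)])
    rhs = np.append(np.zeros(N - 1), E_T)
    return np.linalg.solve(A, rhs)

def cycle_epr(f, r, c):
    """Entropy production rate (k_B = 1) for concentrations c."""
    N = len(f)
    total = 0.0
    for i in range(N):
        J = f[i] * c[i] - r[i] * c[(i + 1) % N]
        total += J * np.log(f[i] * c[i] / (r[i] * c[(i + 1) % N]))
    return total

f = np.array([2.0, 1.0, 3.0])                     # hypothetical 3-cycle
r = np.array([0.4, 0.7, 0.2])
print("steady concentrations:", cycle_steady_state(f, r))
print("sigma_ss:", cycle_epr(f, r, cycle_steady_state(f, r)))

# rotating the rate constants around the cycle leaves sigma_ss unchanged
for shift in range(1, 3):
    fs, rs = np.roll(f, shift), np.roll(r, shift)
    print("rotation by", shift, "-> sigma_ss:",
          cycle_epr(fs, rs, cycle_steady_state(fs, rs)))
```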
|
a non-equilibrium steady state is characterized by a non-zero steady dissipation rate. chemical reaction systems under suitable conditions may generate such states. we propose here a method that is able to distinguish states with identical values of the steady dissipation rate. this necessitates a study of the variation of the entropy production rate with the experimentally observable reaction rate in regions close to the steady states. as an exactly-solvable test case, we choose the problem of enzyme catalysis. a link of the total entropy production with the enzyme efficiency is also established, offering a desirable connection with the inherent irreversibility of the process. the chief outcomes are finally noted in a more general reaction network with numerical demonstrations.

*states with identical steady dissipation rate: role of kinetic constants in enzyme catalysis*

kinshuk banerjee and kamal bhattacharyya

_department of chemistry, university of calcutta, 92 a.p.c. road, kolkata 700 009, india_

pacs: 05.70.ln, 82.39.-k, 82.20.-w

keywords: entropy production rate, dissipation, enzyme efficiency, reaction network
|
a metasurface is a composite material layer , designed and optimized to control and transform electromagnetic waves .the layer thickness and the unit - cell size are small as compared to the wavelength in the surrounding space .recently , considerable efforts have been devoted to creation of metasurfaces for shaping reflected and transmitted waves , see e.g. review papers .however , nearly always the realized properties of metasurfaces have not perfectly satisfied the design goals . in particular , in most attempts to synthesise metasurfaces for `` perfect refraction '' of plane waves travelling along a certain direction into a plane wave propagating along a different direction , it was either found that the surface must be active or some reflections are inevitable . herewe summarize our recent results on synthesis of perfectly refractive metasurfaces .based on the general form of the impedance matrix of metasurfaces with the desired response , we identify a number of possible realizations of metasurfaces which exactly perform the desired operation . in the presentation, we will also discuss the synthesis theory for full and perfect control of reflection .let us set a goal to design a metasurface which perfectly ( without reflection or absorption ) refracts a plane wave in medium 1 ( wave impedance , wavenumber , and the incidence angle ) into a plane wave travelling in medium 2 ( characterized by parameters , ) in some other direction , specified by the refraction angle .the tangential field components and on the input side of the metasurface read ( te polarization as an example ) _ e_t1=_e_ie^-jk_1_iz,_n_h_t1= _ e_i1_1_i e^-jk_1_iz[plus ] where is the coordinate along the tangential component of the wavevector of the incident wave , and the unit vector is orthogonal to the metasurface plane , pointing towards the source ( fig .[ geom ] ) .the required fields on the other side of the surface are _ e_t2=_e_te^-jk_2_tz+j_t,_n_h_t2= _e_t1_2_t e^-jk_2_tz+j_t [ minus]( is the desired phase change in transmission ) .the synthesis method is based on the use of the impedance matrix , which relates the tangential fields on the two sides of the metasurface as _ e_t1=z_11 _n_h_t1 + z_12 ( -_n_h_t2),_e_t2=z_21 _ n_h_t1 + z_22 ( -_n_h_t2)[21 - 22 ] substituting the required field values , we see that the metasurface will perform the desired operation exactly , if the impedance matrix components satisfy obviously , infinitely many alternative realizations are possible , because we need to satisfy only two complex equations for four complex parameters of the metasurface. we can impose additional conditions , for example , require some extra functionality or demand that the structure be made only from reciprocal or / and passive elements , etc .some interesting possible approaches are discussed in the following .\1 . * teleportation metasurface . * inspecting ( [ eq1 ] ) and ( [ eq2a ] ) , we conclude that in general the metasurface should be non - uniform , as the parameters are functions of the coordinate .however , there is an interesting solution for which all the parameters are constants , and perfect refraction operation can be achieved using a uniform metasurface . 
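The two design conditions referred to above as Eqs. (eq1) and (eq2a) did not survive extraction. Under the stated field definitions (TE polarization, power-conserving transmitted amplitude) they can be reconstructed as below; this is a reconstruction in our notation, not a quotation of the paper's equations.

```latex
% Reconstructed design conditions (our notation and sign conventions).
% Tangential fields: E_{t1} = E_i e^{-j k_1 \sin\theta_i z},
%                    E_{t2} = E_t e^{-j k_2 \sin\theta_t z + j\phi_t},
% with the tangential magnetic fields following from \eta_{1,2}.
\begin{align}
E_i &= Z_{11}\,\frac{\cos\theta_i}{\eta_1}\,E_i
      - Z_{12}\,\frac{\cos\theta_t}{\eta_2}\,E_t\, e^{j\Phi_t(z)}, \\
E_t\, e^{j\Phi_t(z)} &= Z_{21}\,\frac{\cos\theta_i}{\eta_1}\,E_i
      - Z_{22}\,\frac{\cos\theta_t}{\eta_2}\,E_t\, e^{j\Phi_t(z)},
\end{align}
% where \Phi_t(z) = (k_1\sin\theta_i - k_2\sin\theta_t)\,z + \phi_t, and for
% perfect (reflectionless, lossless) refraction the transmitted amplitude is
% fixed by power conservation:
%   |E_t|^2 \cos\theta_t/\eta_2 = |E_i|^2 \cos\theta_i/\eta_1 .
```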
indeed , assuming that , both equations are satisfied with z_11=_1_i , z_22=-_2_t in this scenario , the metasurface is formed by a matched absorbing layer ( the input resistance , a perfect electric conductor ( pec ) sheet , and an active layer ( an `` anti - absorber '' ) on the other side .the incident plane wave is totally absorbed in the matched absorber .the negative - resistance sheet ( resistance ) together with the wave impedance of medium 2 forms a self - oscillating system whose stable - generation regime corresponds to generation of a plane wave in the desired direction ( the refraction angle ) : the sum of the wave impedance of plane waves propagating at the angle and the input impedance of the active layer is zero , which is the necessary condition for stable generation .this structure is similar to the `` teleportation metasurface '' introduced in for teleportation of waves without changing their propagation direction .* transmitarray .* alternatively , we can demand that the metasurface is matched for waves incident from medium 2 at the required refraction angle , which means that . solving ( [ eq2a ] ) with this value of , we find z_21=2 e^-j(k_2_t - k_1_i)z+j _ tfurthermore , we can set and , so that at the incidence angle the metasurface is acting as a matched receiving antenna array .this nonreciprocal realization obviously corresponds to conventional transmitarrays ( e.g. ) .the incident plane wave is first received by a matched antenna array on one side of the surface and the wave is then launched into medium 2 with a transmitting phase array antenna .* double current sheets .* one can seek a realization in form of a metasurface which maintains both electric and magnetic surface currents , each subject to the corresponding impedance sheet conditions _ j_e=_n(_h_t1- _ h_t2)=y_e_e_t = y_e(_e_t1+_e_t2)/ 2 , _j_m=-_n(_e_t1-_e_t2)=y_m_h_t = y_m(_h_t1 + _ h_t2)/2[jm ] equations ( [ plus ] ) tell that relations ( [ jm ] ) can hold only if the metasurface is symmetric and reciprocal , with and . in this case , , , and solution of ( [ eq1 ] ) and ( [ eq2a ] ) is unique .however , it gives complex - valued parameters ( * ? ? ?* suppl . mat . ) , which means that the perfectly refractive double sheet must be lossy in some areas and active in some other areas of the surface .purely reactive ( lossless ) parameters are possible only if the impedances seen from the opposite sides are equal , that is , .this result lead to a conclusion that lossless operation is possible only if there are some reflections .\4 . * perfect metasurfaces formed by lossless elements .* however , perfect refraction by lossless reciprocal metasurfaces is possible if we allow bianisotropic coupling in the metasurface .it can be proved simply by solving ( [ eq1 ] ) and ( [ eq2a ] ) in the assumption that the metasurface is lossless , that is , all -parameters are purely imaginary numbers . in this casethe solution is again unique and reads z_11=-j_1_i_t , z_22=-j_2_t_t , z_12=z_21=-j 1_t[x ] where . for the case of zero phase shift ( ) formulas ( [ x ] )agree with the recent result of , obtained within the frame of the generalized scattering parameters approach .bianisotropic metasurfaces with the required properties defined by ( [ x ] ) can be realized as arrays of low - loss particles with appropriate symmetry . 
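A numerical consistency check of the lossless, reciprocal (omega-bianisotropic) solution (x) is sketched below. Since the closed forms were garbled in extraction, the expressions in the code are our reconstruction from the impedance relations and the field definitions; the sign convention for Phi_t(z) and the exp(+jwt) time convention are assumptions. The check verifies that the required incident-side and transmitted-side tangential fields satisfy the impedance relations at several points along the surface.

```python
import numpy as np

# Lossless reciprocal Z-parameters for perfect refraction (reconstructed forms;
# sign convention of Phi_t is an assumption).  All numbers are illustrative.
eta1, eta2 = 377.0, 377.0 / 1.5          # wave impedances of media 1 and 2
k1, k2 = 2 * np.pi, 2 * np.pi * 1.5      # wavenumbers (free units)
th_i, th_t = np.deg2rad(20.0), np.deg2rad(50.0)
phi_t = 0.3                              # transmission phase, arbitrary
E_i = 1.0
E_t = E_i * np.sqrt(eta2 * np.cos(th_i) / (eta1 * np.cos(th_t)))  # power conservation

def Z(z):
    Phi = (k1 * np.sin(th_i) - k2 * np.sin(th_t)) * z + phi_t
    Z11 = 1j * eta1 / np.cos(th_i) / np.tan(Phi)
    Z22 = 1j * eta2 / np.cos(th_t) / np.tan(Phi)
    Z12 = 1j * np.sqrt(eta1 * eta2 / (np.cos(th_i) * np.cos(th_t))) / np.sin(Phi)
    return np.array([[Z11, Z12], [Z12, Z22]])

# check [E_t1, E_t2]^T = Z(z) [n x H_t1, -(n x H_t2)]^T at a few surface points
for z in (0.05, 0.2, 0.37):
    Et1 = E_i * np.exp(-1j * k1 * np.sin(th_i) * z)
    Et2 = E_t * np.exp(-1j * k2 * np.sin(th_t) * z + 1j * phi_t)
    H1 = E_i * np.cos(th_i) / eta1 * np.exp(-1j * k1 * np.sin(th_i) * z)
    H2 = E_t * np.cos(th_t) / eta2 * np.exp(-1j * k2 * np.sin(th_t) * z + 1j * phi_t)
    lhs = np.array([Et1, Et2])
    rhs = Z(z) @ np.array([H1, -H2])
    print(z, np.allclose(lhs, rhs))      # True at every z: the surface is lossless
```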
formicrowave applications , metallic canonical omega particles or double arrays of patches ( patches on the opposite sides of the substrate must be different to ensure proper magnetoelectric coupling ) can be used . for optical applications , arrays of properly shaped dielectric particleswere introduced as omega - type bianisotropic metasurfaces .we have introduced a general approach to the synthesis of metasurfaces for arbitrary manipulations of plane waves .the main ideas of the method have been explained on an example of metasurfaces which perfectly refract plane waves incident at an angle into plane waves propagating in an arbitrary direction defined by the refraction angle .the general synthesis approach shows a possibility for alternative physical realizations , and we have discussed several possible device realizations : self - oscillating teleportation metasurface , transmitarrays , double current sheets , and metasurfaces formed by only lossless components . the crucial role of omega - type bianisotropy in the design of lossless - component realizations has been revealed . knowing -parameters ( [ x ] ) , it is easy to find the unit - cell polarizabilities or surface susceptibilities and optimize the unit - cell dimensions .the method can be used also for synthesis of perfectly reflecting metasurface and for other transformations of waves ( focusing , etc . ) a. epstein and g.v .eleftheriades , passive lossless huygens metasurfaces for conversion of arbitrary source field to directive radiation , _ ieee trans .antennas propag .11 , pp . 56815695 , 2014 .y. radi , d. l. sounas , a. al , and s. a. tretyakov , parity - time symmetric tunnelling , in proc ._ 9th international congress on advanced electromagnetic materials in microwaves and optics metamaterials 2015 _ , pp . 265267 , oxford , united kingdom , 7 - 12 september 2015 .
|
_abstract._ in this talk we present and discuss a new general approach to the synthesis of metasurfaces for full control of transmitted and reflected fields. the method is based on the use of an equivalent impedance matrix which connects the tangential field components at the two sides of the metasurface. finding the impedance matrix components, we are able to synthesise metasurfaces which perfectly realize the desired response. we will explain possible alternative physical realizations and reveal the crucial role of bianisotropic coupling to achieve full control of transmission through perfectly matched metasurfaces. this abstract summarizes our results on metasurfaces for perfect refraction into an arbitrary direction.
|
the theory of dissipativity for linear dynamical systems helps in the analysis and design of control systems for several control problems , for example , lqr / lqg control , , synthesis of passive systems , and optimal estimation problems .when dealing with lti systems , it is straightforward to define dissipativity for controllable systems due to a certain property of such systems that their compactly supported system trajectories are , loosely speaking , ` dense ' in the set of all allowed trajectories .however , this is not the case for uncontrollable systems , and this situation is the central focus of this paper .we elaborate more on this point when we define dissipativity and review equivalent conditions for controllable systems in section [ sec2 ] . in this paper , we use a less - often - used definition of dissipativity for systems , possibly uncontrollable , and generalize key results using some techniques from indefinite linear algebra ( see ) for solving algebraic riccati inequalities in the context of an uncontrollable state space system .like in , we define a system as dissipative if there exists a storage function that satisfies the dissipation inequality for all system trajectories .the existential aspect of this definition raises key issues that this paper deals with .the main result we show is that if the uncontrollable poles of an lti system are such that no two of them add to zero , and if the controllable subsystem strictly dissipates energy at frequency equal to infinity , then the dissipativity of the controllable subsystem is equivalent to the system s dissipativity .we also show that , using the concatenability axiom of the state , the energy stored in a system is a static function of the state variables .further , we also show that this state is ` observable ' from the external variables , i.e. the state is a linear combination of the external variables and possibly their derivatives .this is intuitively expected in view of the fact that energy exchange between the system and its ambience takes place through the external variables .however , it appears that this may not be the case for lossless systems , i.e. systems that do nt dissipate any energy , nor contain a source within .in this section we include various definitions about the behavioral framework for studying dynamical systems ( subsection [ subsec2.1 ] ) and then introduce background results about dissipative systems ( subsection [ subsec2.2 ] ) .subsection [ subsec2.3 ] contains brief notation about indefinite linear algebra from . for this paper, denotes the set of all real numbers and ] .the space of infinitely often differentiable functions from to say is denoted by and denotes the set of compactly supported functions within this space . when dealing with linear differential systems , it is convenient to use polynomial matrices for describing a differential equation .suppose , are constant matrices of the same size such that is a linear constant coefficient ordinary differential equation in the variable .we define the polynomial matrix , and represent the above differential equation as .a linear differential behavior , denoted by , is defined as the set of all infinitely often differentiable trajectories that satisfy a system of ordinary linear differential equations with constant coefficients , i.e. , where is a polynomial matrix having number of columns , i.e. , } ] .this representation of is known as an _ image representation_. 
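A small worked example of the two representations just defined may help; it is our own illustration, not taken from the paper.

```latex
% Take w = (w_1, w_2) subject to the single law  \dot{w}_1 + w_1 = w_2.
% Kernel representation:  R(\tfrac{d}{dt})\,w = 0  with
%   R(\xi) = \begin{pmatrix} \xi + 1 & -1 \end{pmatrix}.
% Image representation:  w = M(\tfrac{d}{dt})\,\ell  with
%   M(\xi) = \begin{pmatrix} 1 \\ \xi + 1 \end{pmatrix},
% i.e.  w_1 = \ell,\; w_2 = \dot{\ell} + \ell .
% M(\lambda) has full column rank for every complex \lambda, so this image
% representation is observable in the sense defined below (\ell = w_1 is
% recovered from w); likewise R(\lambda) has full row rank for every \lambda,
% so the behavior is controllable.
```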
the variable is called a _ latent _ variable : these are auxiliary variables used to describe the behavior ; we distinguish the variable as the manifest variable , the variable of interest .it is known that always allows an image representation with such that has full column rank for every .this kind of image representation is known as an _ observable _ image representation . in this paper ,unless otherwise stated explicitly , we assume the image representations are observable .the use of the term ` observable ' is motivated by the fact that the variable is _ observable _ from the variable .this notion is defined as follows . for a behavior with variables and ,we say is observable from if whenever and both are in , we have . observability of from in a behavior implies that there exists a polynomial matrix such that for all and in the behavior .we now define relevant notions in the context of uncontrollable behaviors . for a behavior , possibly uncontrollable , the largest controllable behavior contained in called the _controllable _ part of , and denoted by .an important fact about the controllable part of is that .the set of complex numbers for which loses rank is called the set of _ uncontrollable modes _ and is denoted by . for a detailed exposition on behaviors , controllability and observability we refer the reader to . the concept of quadratic differential forms ( qdf ) ( see ) is central to this paper .consider a two variable polynomial matrix with real coefficients , ] , we define the single variable polynomial matrix by .consider , a symmetric nonsingular matrix . a behavior is said to be dissipative with respect to the supply rate ( or -dissipative ) if there exists a qdf such that the qdf is called a storage function : it signifies the energy stored in the system at any time instant .the above inequality is called the dissipation inequality .a behavior is called -lossless if the above inequality is satisfied with an equality for some qdf .notice that the storage function plays the same role as that of lyapunov functions in the context of autonomous systems ; the notion of storage functions is a generalization to non - autonomous systems of lyapunov functions , as pointed in .the following theorem from applies to controllable behaviors .consider and let be an observable image representation .suppose is symmetric and nonsingular. then , the following are equivalent . 1 .there exists a qdf such that inequality is satisfied for all . for all , the compactly supported trajectories in . for all .the significance of the above theorem is that , for controllable systems , it is possible to verify dissipativity by checking non - negativity of the above integral over all compactly supported trajectories : the compact support signifying that we calculate the ` net power ' transferred when the system starts ` from rest ' and ` ends at rest ' .the starting and ending ` at rest ' ensures that for linear systems there is no internal energy at this time .the absence of internal energy allows ruling out the storage function from this condition : in fact , this is used as the definition of dissipativity for controllable systems .the same can not be done for uncontrollable systems due to the compactly supported trajectories not being dense in the behavior ( see ) .an extreme case is an autonomous behavior , i.e. a behavior which has : while the zero trajectory is the only compactly supported trajectory , the behavior consists of exponentials corresponding to the uncontrollable poles of . 
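The formula in the third statement of the theorem above was lost in extraction; our reading of it is the frequency-domain test M(-jw)^T Sigma M(jw) >= 0 for all real w, applied to an observable image representation. A minimal numerical sketch for a passive first-order example follows; the example and the supply-rate scaling are our own.

```python
import numpy as np

# Frequency-domain dissipativity test for a controllable behavior with an
# observable image representation w = M(d/dt) l (our reading of the theorem's
# third condition).  Example: w = (u, y) with dy/dt + y = u, i.e.
# M(xi) = [xi + 1, 1]^T, and the passivity supply rate u*y,
# i.e. Sigma = [[0, 1/2], [1/2, 0]].
Sigma = np.array([[0.0, 0.5], [0.5, 0.0]])

def M(s):
    """Image representation evaluated at s."""
    return np.array([s + 1.0, 1.0])

omegas = np.logspace(-3, 3, 400)
vals = np.array([(M(-1j * w) @ Sigma @ M(1j * w)).real for w in omegas])
print("min over the frequency grid:", vals.min())   # >= 0, so the behavior is dissipative
```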
the issue of existence of storage functions is elaborated in ( * ? ? ?* remark 5.9 ) and in text following ( * ? ? ?* proposition 3.3 ) .the state variable is defined as a latent variable that satisfies the _ property of state _ , that is , if , and , then the new trajectory formed by concatenating and at , i.e. , also satisfies the system equation of in a distributional sense .it is intuitively expected that a variable has the state property if and only if and satisfy an equation that is at most first order in and zeroth order in : see for precise statement formulation and proof .when is partitioned into , with as the input and as the output , then admits the more familiar _ input / state / output _( i / s / o ) representation as one can ensure that is observable from ; this is equivalent to conventional observability of the pair . while such a state space representation is admitted by any , the pair is controllable ( in the state space sense ) if and only if is controllable ( in the behavioral sense defined above ) . in this paper, we use certain properties of matrices that are self - adjoint with respect to an _ indefinite _ inner product .we briefly review self - adjoint matrices and neutral subspaces ( see ) .let be an invertible hermitian matrix .this defines an indefinite inner product on by , where is the complex conjugate transpose of the vector .for a complex matrix , the complex conjugate transpose of is denoted by .consider matrices and with invertible and hermitian .the -adjoint of the matrix , denoted by } ] .the matrix is said to be -self - adjoint if } ] and let and ] such that is a storage function , i.e. for all .3 . there exists a matrix and an observable state variable such that for all .statement 2 tells that the storage function can be expressed as a quadratic function of the manifest variables and their derivatives .statement 3 says that the storage function is a ` state function ' , i.e. a static function of the states , and hence storage of energy requires no more memory of past evolution of trajectories than required for arbitrary concatenation of any two system trajectories .in this section we discuss two examples of uncontrollable systems that are dissipative .the riccati equations encountered in these cases are solvable by the methods proposed in this paper ; we also give solutions to the riccati equations .the first example is of an uncontrollable system with uncontrollable modes satisfying the unmixing assumption , i.e. no two of the uncontrollable poles add to zero .however , the hamiltonian matrix has eigenvalues on the imaginary axis . consider the behavior whose input / state / output representation is given by the following and matrices with . here which satisfies the unmixing assumption . an equivalent kernel representation of the behavior is given by in this case and hence , it can be checked that the controllable part is -dissipative . 
and .thus from theorem [ main ] , is -dissipative .the following real symmetric matrix induces a storage function that satisfies the dissipation inequality the 2-dimensional , -invariant , -neutral subspace which gives the solution is the next example is an rlc circuit shown in figure [ rlc.exa.fig ] .consider the rlc circuit system whose input is the current flowing into the circuit , and output is the current through the inductor .a state space representation of the system is found using the following definition of the states .the state variables are , voltage across the capacitor , and , the current through the inductor .assume . the system becomes uncontrollable when .let , and .the state representation of the system is poles of the system are and one of them is uncontrollable .the controllable part is dissipative and the corresponding hamiltonian matrix has eigenvalues on the imaginary axis .a solution to the are is given by the following symmetric matrix which induces the storage function .in this section we consider the case when all the states of the system are uncontrollable , in other words , when the only controllable part is static / memoryless . in a state space realization, this means that the matrix is zero .the case of autonomous behaviors is clearly a special case : also is zero and the assumptions in the lemma are satisfied .consider a behavior with static controllable part .let the behavior have a state representation , with the pair observable .assume and .then there does not exist a symmetric solution to the corresponding are .* proof : * the hamiltonian matrix takes the form , here , . next we use the following proposition to say that partial multiplicities of purely imaginary eigenvalues of the hamiltonian matrix are even .consider the matrix then for every purely imaginary such that , the partial multiplicities of such are even .in fact , they are twice the partial multiplicities of as an eigenvalue of .[ part ] consider , as the pair is observable , the pair is controllable . from proposition [ part ] ,the partial multiplicities of purely imaginary eigenvalues are twice the partial multiplicities of purely imaginary eigenvalues of .in this section we investigate the requirement of unobservable variables in the definition of the storage function .as has been studied / shown so far , for controllable dissipative systems ( ) , the storage function need not depend on unobservable variables .it was later shown in that for the case of strict dissipativity of uncontrollable systems , observable storage functions are enough under unmixing and maximum input cardinality conditions .theorem [ main ] shows that this is true for a more general scenario , i.e. , dissipativity ( including non - strict dissipativity ) for all input cardinality conditions under unmixing assumption . on the other hand , when relaxing the unmixing assumption elsewhere except on the imaginary axis , under certain conditions solutions to the ari exists thoughare does not have a solution ( see ) . in this section ,we investigate the need for unobservable storage functions for uncontrollable systems whose uncontrollable poles lie entirely on the imaginary axis .we discuss the case for autonomous behaviors below .consider an autonomous behavior with .let the supply rate be .then the following is true . 
if there exists a storage function satisfying the inequality then any is c - unobservable .* proof : * suppose if there exists a storage function which is a state function satisfying the dissipation lmi , then the dissipation lmi is equivalent to the lyapunov inequality now for every eigenvector of a corresponding to eigenvalue , we have which gives or but , as , we have for every eigenvector of corresponding to .this implies that the any is c - unobservable .this observation tells that for dissipativity of autonomous systems having eigenvalues on the imaginary axis , it is necessary to allow storage functions to depend on unobservable variables also .in this section we investigate the property of orthogonality of two behaviors in the absence of controllability .we propose a definition that is intuitively expected and show that by relating this definition to lossless uncontrollable behaviors , we encounter a situation that suggests an exploration whether dissipativity should be defined for behaviors for which the input - cardinality condition is not satisfied .we first review a result about orthogonality of controllable behaviors .[ prop : lossless : orthogonal ] let be nonsingular , and suppose .the following are equivalent . 1. for all and for all .2 . is lossless with respect to .3 . there exists a bilinear differential form , induced by } ] such that is identity matrix and ] conforming to the row partition of , we have \left [ \begin{array}{ccc } i & 0 & 0 \\ 0 & d & 0 \end{array } \right]\ ] ] this simplifies to \ ] ] let be the behaviour defined by the kernel representation of . for to be controllable , needs to have full row rank for all .as loses rank for , should have full row rank for every so that is controllable .this proves the existence of satisfying properties 1 and 2 of [ prob : smallestb ] . in order to satisfy property 3 in problem[ prob : smallestb ] , i.e. has to be the least , we have to choose a unimodular and free such that satisfies the three conditions . can be freely chosen because the choice of does not affect the input cardinality of . *( 2 ) : * let be controllable . then , in the above , and does not exist .this means that does not exist .thus the kernel representation matrix of would be given by ] with for all for some .similarly , choose ] with ] and such that and .this implies and .taking with and , it follows that .further , , hence because is non - zero and of compact support .further , we have such that both the above conditions can not be satisfied simultaneously for .thus , gives a contradiction .this proves and hence autonomy of .we illustrate the above theorem using an example .let .define and by image representations and respectively , with strict dissipativities is easily verified . calculating the kernel representations , we get a kernel representation for as clearly , is nonsingular and hence is autonomous . for non - strict dissipativity case , we have the following problem and theorem . [prob : plusminus2 ] given a nonsingular , symmetric and indefinite , find conditions for existence of a behavior such that * there exist and such that .* is dissipative * is - dissipative .let be nonsingular , symmetric and indefinite .then there exists such that requirements in problem [ prob : plusminus2 ] are satisfied .any such satisfies . 
in case is uncontrollable , + if , then neither nor can be strictly dissipative .the proof proceeds in the same way as the proof for the previous theorem , except for the strictness of the dissipativities .construct and as in the previous proof , but with and equal to zero .we have the above two equations imply for all . since , the behavior is dissipative with respect to both and .dissipativity with respect to implies .similarly , dissipativity with respect to implies .this implies if is uncontrollable , then from theorem [ thm : superbehavior : controllable ] , then the two inequalities leading to the inequality are both strict .hence , the input cardinality of has to be strictly less than that of and .this implies if , then from theorem [ thm : strict : embeddable : autonomou ] , and can not be strictly dissipative with respect to and respectively .this completes the proof . as one of the consequences of the above theorem , if the input cardinality condition is satisfied for an uncontrollable behavior , i.e. or , then such a behavior can not be embedded into both a -dissipative controllable behavior and a -dissipative controllable behavior .however , an observable storage function for such a situation exists when the controllable part is strictly dissipative at and when the uncontrollable modes satisfy the unmixing condition ( see [ main ] and for this situation in presence of more assumptions ) , a situation when is very familiar : we deal with rlc circuits in the next section .we use this method of defining orthogonality of two behaviors to explore further the definition of dissipativity of a behavior . herewe bring out a fundamental significance of the so - called input - cardinality condition : the condition that the number of inputs to the system is equal to the positive signature of the matrix that induces the power supply .we show that when this condition is not satisfied , then a behavior could be both supplying and absorbing net power , and is still not lossless .in this brief section we revisit a classical result : a rational transfer function matrix being positive real is a necessary and sufficient condition for that transfer matrix to be realizable using only resistors , capacitors and inductors ( see and also for the case with transformers ) .note that the transfer matrix captures only the controllable part of the behavior and positive realness of the transfer matrix is nothing but dissipativity with respect to the supply rate .this is made precise below .consider an -port electrical network ( with each port having two terminals ) and the variable , where is the vector of voltages across the -ports and is the vector of currents through these ports , with the convention that is the power flowing _ into _ the network .behaviors that are dissipative with respect to the supply rate are also called ` passive ' . given a controllable behavior whose transfer matrix with respect to a specific input / output partition , say current is the input and voltage is the output , is positive real , one can check that this behavior is passive . in this case is the impedance matrix and is square , i.e. 
the number of inputs is equal to the number of outputs .one can introduce additional laws that the variables need to satisfy , thus resulting in a sub - behavior which has a lesser number of inputs ; consider for example these additional laws as putting certain currents equal to zero : due to opening of certain ports .however , the transfer matrix for with respect to the input / output partition : input as currents through the non - open ports and output as the voltages across _ all _ the ports , is clearly not square , and in fact , tall , i.e. has strictly more rows than columns .let denote this transfer function .since , the behavior is also passive .of course , need not be controllable , even if is assumed to be controllable . as an extreme case ,suppose all the currents are equal to zero , and we obtain an autonomous .while rlc realization of such transfer matrices which are tall , and further of autonomous behaviors obtained by , for example , opening all ports , has received hardly any attention , we remark here one very well - studied sub - behavior of every passive behavior : the zero behavior .consider the single port network with and .this port which is called a nullator ( also nullor , in some literature ) behaves as both the open circuit and shorted circuit : see and .the significant fact about a nullator is that a nullator can not be realized using only passive elements , and moreover any realization ( necessarily active ) leads to the realization of both a nullator and its companion , the norator ; a norator is a two - terminal port that allows both the voltage across it and current through it to be arbitrary .we briefly review the main results in this paper .we first used the existence of an observable storage function as the definition of a system s dissipativity and proved that the dissipativities of a behavior and its controllable part are equivalent assuming the uncontrollable poles are unmixed and the dissipativity at infinity frequency is strict .this result s proof involved new results in the solvability of are and used indefinite linear algebra results .we showed that for lossless autonomous dissipative systems , the storage function can not be observable , thus motivating the need for unobservable storage functions .we then studied orthogonality / lossless behaviors in the context of using the definition of existence of a controllable dissipative superbehavior as a definition of dissipativity .in addition to results about the smallest controllable superbehavior , we showed necessary conditions on the number of inputs for embeddability of lossless / orthogonal behaviors in larger controllable such behaviors . in the context of embeddability as a definition of dissipativity, we showed that one can always find behaviors that can be embedded in both a strictly dissipative behavior and a strictly ` anti - dissipative ' behavior , thus raising a question on the embeddability definition .we related this question to the well - known result that the nullator one - port circuit is not realizable using only rlc elements .j. c. willems , dissipative dynamical systems - part i : general theory , part ii : linear systems with quadratic supply rates , _archive for rational mechanics and analysis _ ,45 , pages 321351 , 352393 , 1972 ,
|
the theory of dissipativity has been primarily developed for controllable systems / behaviors . for various reasons , in the context of uncontrollable systems / behaviors , a more appropriate definition of dissipativity is in terms of the dissipation inequality , namely the _ existence _ of a storage function . a storage function is a function such that along every system trajectory , the rate of increase of the storage function does not exceed the power supplied . while the power supplied is always expressed in terms of only the external variables , whether or not the storage function should be allowed to depend on only the external variables and their derivatives or also unobservable / hidden variables has various consequences on the notion of dissipativity : this paper thoroughly investigates the key aspects of both cases , and also proposes another intuitive definition of dissipativity . we first assume that the storage function can be expressed in terms of the external variables and their derivatives only and prove our first main result that , assuming the uncontrollable poles are unmixed , i.e. no pair of uncontrollable poles add to zero , and assuming a strictness of dissipativity at the infinity frequency , the dissipativities of a system and its controllable part are equivalent ; in other words once the autonomous subsystem satisfies a lyapunov equation solvability - like condition , it does not interfere with the dissipativity of the system . we also show that the storage function in this case is a static state function . this main result proof involves new results about solvability of the algebraic riccati equation , and uses techniques from indefinite linear algebra and hamiltonian matrix properties . we then investigate the utility of unobservable / hidden variables in the definition of storage function : we prove that lossless uncontrollable behaviors are ones which require storage function to be expressed in terms of variables that are unobservable from the external variables . we next propose another intuitive definition : a behavior is called dissipative if it can be embedded in a controllable dissipative _ super - behavior_. we show that this definition imposes a constraint on the number of inputs and thus explains unintuitive examples from the literature in the context of lossless / orthogonal behaviors . these results are finally related to rlc realizability of passive networks , specifically to the nonrealizability of the nullator one - port circuit using rlc elements .
|
complex adaptive systems have drawn attention among statistical physicists in recent years .the study of such systems not only provides invaluable insight into the non - trivial global behavior of a population of competing agents , but also has potential application in economics , biology and finance . moreover , we can study complex adaptive systems from the perspective of statistical physics . the el farol bar problem , which was proposed by w. b. arthur in 1994 , has greatly influenced and stimulated the study of complex adaptive systems in the last few years .it describes a system of agents deciding independently in each week whether to go to a bar or not on a certain night . as spaceis limited , the bar is enjoyable only if it is not too crowded .agents are not allowed to communicate directly with each other and their choices are not affected by previous decisions .the only public information available to agents is how many agents came in last week . to enjoy an uncrowded bar , each agent has to employ some hypotheses or mental models to guess whether one should go or not . with bounded rationality ,all agents use inductive reasoning rather than perfect , deductive reasoning since the system is ill - defined . in other words ,agents act upon their currently most credible hypothesis based on the past performance of their hypotheses .consequently , agents can interact indirectly with each other through their hypotheses in use .the emerging system is thus both evolutionary and complex adaptive in the presence of those inductive reasoning agents .inspired by the el farol bar problem , challet and zhang put forward the minority game ( mg ) .it is a toy model of inductive reasoning players who have to choose one out of two alternatives independently at each time step . those who end up in the minority side ( that is, the choice with the least number of players ) win .the only public information available to all players in the mg is the winning alternatives of the last passes , known as the history .players have to employ some strategies based on the history to guess the winning choice .in fact , they may employ more than one strategy throughout the game .more precisely , every player picks once and for all randomly drawn strategies from a suitably chosen strategy space before playing the game .the performance of each strategy will be recorded during the game .then players make decision according to their current best performing strategy at every time step . in spite of its simplicity ,mg displays a remarkably rich emergent collective behavior .numerical simulations showed that there is a second order phase transition between a symmetric phase and an asymmetric phase .there is no predictive information about the next minority group available to agent s strategies in the symmetric phase , whereas there is predictive information available to agent s strategies in the asymmetric phase .mg addresses the interaction between agents and public information , that is , how agents react to public information and how the feedback modify the public information itself .later works revealed that the dynamics of the system in fact minimizes a global function related to market predictability .therefore , the mg can be described as a disordered spin system .hart and his coworkers found that the fluctuations arising in the mg is controlled by the interplay between crowds of like - minded agents and their anti - correlated partners . 
since mg is a prototype to study detailed pattern of fluctuations , it plays a dominant role in economic activities like the market mechanism . in order to learn more on mg - like systems ,much effort was put on the extension of the mg model , such as the introduction of evolution and the modification of mg for modelling the real market . in the real world , however , an agent usually has more than two options .for example , people decide where to dine or which share to buy from a stock market .consequently , it is worthwhile to investigate the situation where players have more than two choices especially when the number of choices is large . indeed ,dhulst and rodgers has written a paper on the three - choice minority game . in their paper , a symmetric and an asymmetric three - sided models are introduced .their symmetric model is only a model which mimics the cyclic trading between three players using the same strategy formalism as mg and thus can not be extended to realistic cases with a large number of choices .furthermore , their asymmetric model is nothing but the original minority game with the possibility allowing players not to participate in a turn .hence , the player s choice is not symmetric .recently , ein - dor _ et al . _proposed a multichoice minority game based on neural network .their model generalizes the el farol bar problem to the case of multiple choice .nevertheless , it differs quite significantly from the mg of challet and zhang as each player has only one strategy to use and that strategy evolves according to its performance . besides, no phase transition of any kind is observed in ein - dor _ et al ._ s model .chau and chow also proposed another multichoice minority game model based on mg . in this model , players choosing their strategies from a reduced strategy space consists of anti - correlated and uncorrelated strategies only . in this paper, we propose a new model called the multiple choices minority game ( mcmg ) with a neural network flavor .it is a variation of the mg where all heterogeneous , inductive thinking players have more than two choices . just like the original mg, strategies in mcmg are not evolving and they are picked in each turn according to their current performance . in section [ secmodel ] , the mcmg model are explained in detail .results of numerical simulation of our model are presented and discussed in section [ secresults ] .we also compare our results with those of the original mg as well . in the section [ secconc ] ,we deliver a brief summary and an outlook of our work .let us consider a repeated game of a population of players . at each time step , every players has to choose one of rooms / choices _ independently_. here , we assume that is a prime power and we identify the rooms with elements in the finite field .we represent the choice of the player at time by which only `` takes on '' the different rooms . those players in the room withthe least number of players , that is , in the _ minority _ side , win . 
the winning room at each time step is the publicly known output of the game. the players in the winning room gain one unit of wealth while all the others lose one, so the wealth of each player is updated accordingly at every time step. note that the output of the last few steps, known as the history, is the _only_ public information available to all players. therefore, players can only interact indirectly with each other through the history, which can take on a finite number of different values. aiming at maximizing one's own wealth, each player has to employ some strategies to predict the trend of the output of the game. but how should a strategy be defined? for a minority game with more than two choices, it is not effective to formulate the strategy as in mg. recall that in mg, a strategy is defined as a set of choices corresponding to the different histories. in other words, a strategy is a map sending each history to a choice. therefore, the number of different strategies in the full strategy space for mg is finite but grows extremely quickly with the memory size. similar numerical results for the fluctuation arising in the mg are obtained if strategies are drawn from the reduced strategy space instead of the full strategy space. hence, the reduced strategy space plays a fundamental role in the properties of the fluctuation arising in the mg. the reduced strategy space is formed by strategies which are significantly different from each other. given a strategy, only its uncorrelated and anti-correlated strategies are significantly different from it. indeed, the reduced strategy space is composed of two ensembles of mutually uncorrelated strategies, where the anti-correlated strategy of any strategy in one ensemble always exists in the other ensemble. for mg, the reduced strategy space contains far fewer strategies than the full strategy space. for a minority game with more than two rooms using the same form of strategy as in mg, both the full and the reduced strategy spaces are correspondingly larger, provided that the number of rooms is a prime power. because the strategy space size increases rapidly as the number of rooms increases, the strategies will quickly get out of control for a large number of rooms. consequently, we would like to define the strategy in a different way. in our game, we do not restrict players to having only ``good strategies'', since each player (who thinks inductively) does not know whether a strategy is good or not before the game commences. a ``good strategy'' is time dependent. in fact, players will adapt to each other in order to use those ``good strategies''. therefore, all strategies must be _uniform_ in the following sense: any input can produce any output, and every output is produced by any given input with the same probability. indeed, it is not completely clear whether the strategies used in ein-dor _et al._'s model are uniform in the above sense. with the above consideration, a strategy consists of a set of weights and a uniformly distributed random variable called the bias, subject to a single condition, eq. ( 2 ), involving a fixed constant in the finite field. note that all the arithmetic used to generate a strategy and the corresponding choice (including eq. ( 2 ) above and eq. ( 3 ) below) is performed in the finite field. the choice is defined to be the weighted sum of the last outputs plus the bias. namely, the choice of a player using a given strategy under a given history is obtained from eq. ( 3 ). physically, each weight represents the importance of the output of the game a given number of passes before on the present choice. it is obvious that the strategies of mcmg all fulfill the uniformity criterion mentioned before.
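to make the strategy formalism concrete, the following is a minimal sketch (not the authors' code) of how such a uniform strategy could be drawn and evaluated. it assumes the number of rooms is a prime, so that arithmetic modulo the number of rooms stands in for the finite field arithmetic, and it omits the single constraint of eq. ( 2 ) on the weights, whose exact form is not reproduced above; all names are illustrative.

```python
import numpy as np

def random_strategy(memory, n_rooms, rng):
    """draw one candidate strategy: one weight per remembered output plus a bias.
    all entries live in {0, ..., n_rooms - 1}; n_rooms is assumed prime here so
    that arithmetic modulo n_rooms behaves like finite field arithmetic."""
    weights = rng.integers(0, n_rooms, size=memory)
    bias = rng.integers(0, n_rooms)
    return weights, bias

def strategy_choice(strategy, history, n_rooms):
    """choice = weighted sum of the last outputs plus the bias, modulo n_rooms."""
    weights, bias = strategy
    return int((np.dot(weights, history) + bias) % n_rooms)

rng = np.random.default_rng(0)
strat = random_strategy(memory=3, n_rooms=3, rng=rng)
print(strategy_choice(strat, history=[2, 0, 1], n_rooms=3))
```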
since the weights of a strategy are independent variables which satisfy a single constraint, namely eq. ( 2 ), the number of different combinations of weights is limited. moreover, it can be shown that strategies with the same set of weights are anti-correlated with each other while the others are uncorrelated with each other (see ref.). hence, it follows that the full and the reduced strategy space in mcmg have the same size. thus, the strategy space size of mcmg is much smaller than that of mg. in our model, every player picks and sticks to a fixed number of randomly drawn strategies before the game commences. but how does a player decide which strategy is the best? players use the virtual score, which is just the hypothetical profit for using a single strategy throughout the game, to estimate the performance of a strategy. in each pass, the virtual score of every strategy of a player is updated according to whether that strategy would have predicted the winning room under the current history. each player uses one's own strategy with the highest virtual score. although our mcmg model is quite similar to the mg model, there are two main differences, namely, in the number of choices of players and in the formalism of strategies. in our game, the aim of each player is to maximize one's own wealth, which can in turn be achieved by the maximization of the global profit. so the quantity of interest is the variance of the attendance of a room, where the attendance of a room is just the number of people choosing that room. indeed, the maximum global profit will be achieved if and only if the largest possible minority size is attained. so the expected attendance of all rooms should be equal for players to gain as much as possible. accordingly, the variance of the attendance of a room represents the loss of players in the game. in order to investigate the significance of the strategies, we would like to compare the variance with the coin-toss case value, in which all the players make their decision simply by tossing an unbiased coin. it is easy to work out the probability that the attendance takes any given value in the coin-toss case, and hence the expectation and the variance of the attendance in the coin-toss case. in all the simulations, each set of data was taken over 1,000 independent runs. in each run, we took the average values over 15,000 steps after running 10,000 steps for equilibration starting from the initial configuration. we first want to investigate whether the performance of players is different in mg and the two-room mcmg. we applied the two models to study the properties of the mean attendance as a function of the control parameter, as shown in figure 1. the control parameter measures the ratio of the reduced strategy space size to the number of strategies at play; its precise form differs slightly between the mg and the mcmg because the two models have different reduced strategy space sizes. we have only studied the properties of the attendance of one of the rooms, as the attendance of the two rooms has the same behavior due to symmetry. in both mg and the two-room mcmg, the mean attendance always fluctuates around the expected value no matter how large the control parameter is (see figure 1). therefore, we believe that players have the same performance in both mg and the two-room mcmg if we only consider the mean attendance. to further investigate the difference in the performance of players in mg and the two-room mcmg, we studied the variance of the attendance as a function of the control parameter, as shown in figure 2.
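the full update cycle (strategy predictions, minority determination, wealth and virtual score updates, history shift) can be sketched as a short simulation. this is an illustration only, not the authors' code; the weight constraint is again omitted and a prime number of rooms is assumed so that plain modular arithmetic suffices.

```python
import numpy as np

def simulate_mcmg(n_players=101, n_rooms=3, memory=2, n_strategies=2,
                  n_steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    # one weight per remembered output plus a bias, drawn uniformly mod n_rooms
    weights = rng.integers(0, n_rooms, size=(n_players, n_strategies, memory))
    biases = rng.integers(0, n_rooms, size=(n_players, n_strategies))
    scores = np.zeros((n_players, n_strategies))        # virtual scores
    wealth = np.zeros(n_players)
    history = rng.integers(0, n_rooms, size=memory)     # last winning rooms
    attendance_trace = []

    for _ in range(n_steps):
        # every strategy's prediction under the current history (mod n_rooms)
        predictions = (weights @ history + biases) % n_rooms
        # each player follows his or her currently best-scoring strategy
        best = scores.argmax(axis=1)
        choices = predictions[np.arange(n_players), best]
        attendance = np.bincount(choices, minlength=n_rooms)
        winner = attendance.argmin()                    # minority room wins
        wealth += np.where(choices == winner, 1.0, -1.0)
        scores += np.where(predictions == winner, 1.0, -1.0)
        history = np.roll(history, -1)
        history[-1] = winner
        attendance_trace.append(attendance[0])

    trace = np.array(attendance_trace)
    return trace.mean(), trace.var(), wealth

mean_att, var_att, wealth = simulate_mcmg()
# compare with n_players / n_rooms and with the coin-toss variance
print(mean_att, var_att)
```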
before making comparsion with that in the mcmg ,let us first take a brief review on the properties of the variance in mg as shown in figure 2 . in mg , there is a great variance when the control parameter is small .it is because players use very similar strategies when the reduced strategy space size ( ) is much smaller than the number of strategies at play ( ) .such overcrowding of strategies will lead to small minority size and also great fluctuation of the attendance . when the reduced strategy space size increases , the variance decreases rapidly since players are able to cooperate more with less overcrowding effect .subsequently , the variance attains a minimum when the number of strategies at play is approximately equal to the reduced strategy space size .in fact , maximum cooperation is attained when all the strategies in the reduced strategy space are used by players . as the slope of the variance changes discontinuously at the minimum point, it strongly suggests that there is a second order phase transition .the phase transition was also confirmed by analytical method .after the minimum point , the variance increases and gradually tends to the coin - toss case value when the control parameter increases further .it is due to more and more insufficient sampling of the reduced strategy space when the reduced strategy space size become much and much larger than the number of strategies at play .consequently , there is less and less difference between choosing by random or using strategies .in addition , the properties of the variance are found to be similar for different number of strategies . for the mcmg ,the variance of the attendance of a room , , exhibits similar behavior as a function of the control parameter ( for mcmg ) to that in mg no matter what the number of strategies is .for example , the variance tends to the coin - toss case value for sufficiently large control parameter .moreover , there is again an indication of a second order phase transition . to checkif the phase transition is second order or not , we calculate the order parameter ^ 2 } \right\ } } \ ] ] where denotes the conditional time average of the probability for given that .in fact , the order parameter measures the bias of player s decision to any choice for individual history .figure 3 shows that the order parameter vanishes when the control parameter is smaller than its value corresponding to minimum variance . as a result, we confirm that the phase transition is a second order one .although our two - room mcmg model does not exactly coincide with the mg model , the variance has almost the same properties as a function of the control parameter in both models .however , the behaviour of the variance as a function of the memory size are different in mg and mcmg because the reduced strategy space size are different in the two models . 
here, we want to study whether the properties of the attendance of different rooms differ in the mcmg. figures 4 and 5 show the mean attendance and the variance of the attendance of the different rooms versus the control parameter. we found that the behavior of the attendance of the different rooms is almost the same in mcmg, because every choice has the same probability of being the minority side. as we only want to focus on the study of the attendance of a room, not on their higher-order correlations, we will stick to one of the rooms from now on. now, we investigate the properties of the attendance for mcmg with different numbers of rooms. figure 6 depicts the dependence of the mean attendance on the control parameter for mcmg with several numbers of rooms. in mcmg, the mean attendance fluctuates around the expected value irrespective of the control parameter. this is reasonable, as the maximum global profit will be achieved if and only if the largest possible minority size is attained. we also studied the dependence of the variance of the attendance on the control parameter for mcmg with different numbers of rooms, as shown in figure 7. in this figure, we have divided the variance by the largest possible minority size in order to have an objective comparison of the variance for different numbers of rooms. no matter what the number of rooms is, there is always a cusp in the variance, which strongly suggests the occurrence of a second order phase transition (see figure 7). we calculate the order parameter (introduced in eq. (10)) to identify the order of the phase transition, as shown in figure 8. in mcmg with different numbers of rooms, the order parameter vanishes when the control parameter is smaller than its value at the cusp. therefore, we conclude that there is a second order phase transition in mcmg regardless of the number of rooms. moreover, the variance tends to a constant value when the control parameter becomes large. but is such a constant value consistent with the coin-toss case value as in mg? table 1 shows the value of the variance at a large control parameter in contrast with the coin-toss case value for mcmg with different numbers of rooms. table 2 summarises the estimated location and height of the minimum of the scaled variance for different numbers of rooms; these estimated values for fixed memory size are also shown in figure 9. we again divided the variance by the largest possible minority size in order to have an objective comparison for different numbers of rooms. we found that in mcmg the location of the cusp moves to smaller values of the control parameter as the number of rooms increases. we may explain this phenomenon as follows: for mcmg with a larger number of rooms, more strategies at play are required so that players can use all the strategies in the reduced strategy space. since maximum cooperation is attained when all the strategies in the reduced strategy space are used by players, the required number of strategies at play is larger for mcmg with a larger number of rooms. we also found that the scaled variance at the cusp increases as the number of rooms increases. this is due to the increased difficulty for players to cooperate with each other when there are more choices available to them. on the other hand, we also studied the behavior of the attendance as a function of the control parameter in mcmg with different numbers of strategies. figures 10 and 11 display the results. the two figures indicate that the behavior of the attendance as a function of the control parameter for different numbers of strategies is similar, just like the case of mg and mcmg with two rooms.
in conclusion, the attendance in mcmg has very similar properties with respect to the control parameter as in mg no matter how many choices and strategies players have .we proceed to study the properties of player s wealth in the mcmg .the mean and maximum player s wealth as a function of the control parameter for to were shown in figures 12 and 13 . from figures 12 and 13, we notice that player s wealth displays similar behavior for mcmg with different number of rooms .when the reduced strategy space size is much smaller than the number of strategies at play , the mean player s wealth remains almost constant as the control parameter increases .when is small , the system is in the overcrowding phase where all the rooms have the same chance to win " .therefore , all the players , on average , always win the same number of times and make the same amount of profit for small .then the mean player s wealth attains a maximum value near the point of minimum variance when the reduced strategy space size is approximately equal to the number of strategies at play .if the control parameter increases further , the number of players is comparable to the number of rooms .thus , finite size effect is important . in this case, some of the rooms will have a higher chance to win " .then some of the players will always lose while some of them will always win in the game . as a result ,the mean player s wealth decreases rapidly when the control parameter increases further .the properties of the maximum player s wealth is similar to the mean player s wealth except there is a significant peak correponding to maximum cooperation of players .although players on average always perform the same for small , but the smart players can perform much better when the small control parameter increases . therefore , those smart players is wealthier when the control parameter .although the mcmg and mg models are not exactly the same , our work shows that the attendance of a room in mcmg has similar behavior to that in mg as a function of the control parameter .besides , we found that the attendance of different rooms displays almost the same behavior in mcmg with number of choices .moreover , we observed that both the attendance and player s wealth displays similar properties as a function of the control parameter in mcmg with different .as all the above mentioned features can be explained reasonably , so we concluded that we have successfully built a computationally feasible model of multi - choice minority game mcmg .various extensions of the mcmg model could be studied .for example , the multi - choice game with zero - sum and the multi - room game with agents who can invest different amount .we hope the study of the extensions of the mcmg model can give us more insight on more realistic complex adaptive system .we would like to thank p. m. hui and kuen lee for their useful discussions and comments , especially during the annual conference of the physical society of hong kong in june 2001 .f. k. c. would also like to thank k. m. lee for his useful discussions and comments .this work is supported by in part by the rgc grant of the hong kong sar government under the contract number hku7098/00p .h. f. c. is also supported by the outstanding young researcher award of the university of hong kong .99 w. b. arthur , amer .papers and proc . *84 * , 406 ( 1994 ) .d. challet and y. c. zhang , physica a * 246 * , 407 ( 1997 ) .y. c. zhang , europhys .news * 29 * , 51 ( 1998 ) .d. challet and y. c. 
zhang , physica a * 256 * , 514 ( 1998 ) .r. savit , r. manuca and r. riolo , phys .lett . * 82 * , 2203 ( 1999 ) .n. f. johnson , s. jarvis , r. jonson , p. cheung , y. r. kwong and p. m. hui , physica a * 258 * , 230 ( 1998 ) .d. challet and m. marsili , phys .e * 60 * , r6271 ( 1999 ) .d. challet , m. marsili and r. zecchina , phys .lett . * 84 * , 1824 ( 2000 ) .m. hart , p. jefferies , n. f. johnson and p. m. hui , physica a * 298 * , 537 ( 2001 ) .m. hart , p. jefferies , n. f. johnson and p. m. hui , eur .j. b * 20 * , 547 ( 2001 ) .n. f. johnson , p. m. hui , r. jonson and t. s. lo .lett . * 82 * , 3360 ( 1999 ) .d. challet , m. marsili and y .- c .zhang , physica a * 276 * , 284 ( 2000 ) .p. jefferies , m. hart , p. m. hui and n. f. johnson , eur .j. b * 20 * , 493 ( 2001 ) .r. dhulst and g. j. rodgers , adap - org/9904003 .l. ein - dor , r. metzler , i. kanter and w. kinzel , phys .e * 63 * , 066103 ( 2001 ) .h. f. chau and f. k. chow , nlin.ao/01102049 .d. challet , m. marsili and r. zecchina , http:// www.unifr.ch/econophysics/principal/minority/psfiles /cmze99procdublin.ps ( 1999 ) .
|
the minority game is a model of heterogeneous players who think inductively . in this game , each player chooses one out of two alternatives every turn and those who end up on the minority side win . it is instructive to extend the minority game by allowing players to choose one out of many alternatives . nevertheless , such an extension is not straightforward due to the difficulty of finding a set of reasonable , unbiased and computationally feasible strategies . here , we propose a variation of the minority game where every player has more than two options . results of numerical simulations agree with the expectation that our multiple choices minority game exhibits behavior similar to that of the original two - choice minority game .
|
for a -variate random vector the covariance matrix , or variance - covariance matrix , is a fundamental descriptive measure and is one of the cornerstones in the development of multivariate methods .the covariance matrix has a number of important basic properties , for example : [ covprop ] let and be -variate continuous random vectors with finite second moments , then + 1 .the covariance matrix is symmetric and positive semi - definite .2 . the covariance matrix is affine equivariant in the sense that for all full rank matrices and all -vectors .3 . if the and components of are independent , then 4 .if x and y are independent , then the covariance matrix is additive in the sense that furthermore , for a random sample coming from a -variate normal distribution , the finite sample version of , i.e. the sample covariance matrix is the maximum likelihood estimator for the scatter parameter .also , together with the sample mean vector , the sample covariance matrix gives a sufficient summary of the data under the assumption of multivariate normality .hence any method derived assuming multivariate normality will be based solely on the the sample mean vector and sample covariance matrix .it is well known though that multivariate methods based on the sample mean and sample covariance matrix are highly non - robust to departures from multivariate normality .such methods are extremely sensitive to just a single outlier and are highly inefficient at longer tailed distributions .consequently , a substantial amount of research has been undertaken in an effort to develop robust multivariate methods which are not based on the mean vector and covariance matrix .a common approach for `` robustifying '' classical multivariate methods based on the sample mean vector and covariance matrix is the `` plug - in '' method , which means to simply modify the method by replacing the mean vector and covariance matrix with robust estimates of multivariate location and scatter .however , sometimes crucial properties of the covariance matrix are needed in order for a particular multivariate method to be valid , and investigating whether these properties hold for the robust scatter replacement is often not addressed .typically , scatter matrices are defined so that they satisfy the first two properties in lemma [ covprop ] , but not necessarily the other properties . 
in this paper , we focus on the third property above and its central role in certain multivariate procedures , in particular in independent components analysis ( section [ section - ica ] ) , in observational regression ( section [ section - obsreg ] ) and in graphical modeling ( section [ section - graphical ] ) .these cases illustrate why the use of plug - in methods should be done with some caution since not all scatter matrices necessarily satisfy this property .some counterexamples are given in section [ section - indep ] , where it is it also noted that using symmetrized versions of common robust scatter matrices can make the corresponding plug - in method more meaningful .some comments on the computational aspects of symmetrization are made in section [ section - comp ] .all computations reported in this paper were done using r 2.15.0 , and relied heavily on the r - packages ics , icsnp mass and spatialnp .proofs are reserved for the appendix .to begin , the next section briefly reviews that concepts of scatter matrices , affine equivariance and elliptical distributions , and sets up the notation used in the paper .many robust variants of the covariance matrix have been proposed within the statistics literature , with the vast majority of these variants satisfying the following definition of a scatter , or pseudo - covariance , matrix .[ scatterdef ] let be a -variate random vector with cdf .a matrix valued functional is called a scatter functional if it is symmetric , positive semi - definite and affine equivariant in the sense that for any full rank matrix and any -vector .a scatter statistic is then one that satisfies the above definition when is replaced by the empirical cdf .scatter statistics which satisfy this definition include m - estimators , minimum volume ellipsoids ( mve ) and minimum covariance determinant ( mcd ) estimators , s - estimators , -estimators , projection based scatter estimators , re - weighted estimators and mm - estimates .definition [ scatterdef ] emphasizes only the first two properties of the covariance matrix noted in lemma [ covprop ] , with the other stated properties not necessarily holding for a scatter functional in general .in addition , a scatter statistic can not be viewed as an estimate of the population covariance matrix , but rather as an estimate of the corresponding scatter functional .for some important distributions , though , a scatter functional and the covariance matrix have a simple relationship .for example , elliptically symmetric distributions are often used to evaluate how well a multivariate statistical method performs outside of the normal family . for such distributions , it is known that if possesses second moments then .this relationship also holds for a broader class of distributions discussed below .we first recall the definition of elliptical distributions ( see e.g. * ? ? ?[ elldef ] a -variate random vector is said to be spherically distributed around the origin if and only if for all orthogonal matrices .the random vector is said to have an elliptical distribution if and only if it admits the representation with having a spherical distribution , being a full rank matrix and being a -vector . if the density of an elliptical distribution exists , then it can be expressed as where is a function independent of and and .we then say that .( for a symmetric positive definite matrix , the notation refers to its unique symmetric positive semi - definite square root . 
)a generalization of the spherical distributions and of the elliptical distributions can be constructed as follows ( see * ? ? ?[ essdef ] a -variate random vector is said to have an exchangeable sign - symmetric distribution about the origin if and only if for all permutation matrices and all sign - change matrices ( a diagonal matrix with on its diagonal ) .the density ( if it exists ) of an exchangeable sign - symmetric must satisfy the property that for any and .we then denote if and only if it admits the representation where has a exchangeable sign - symmetric distribution with density , is a full rank matrix and is a -vector .note that in this model is not completely identifiable since for any and .however , is identifiable since . on the other hand , unlike the elliptical distributions , the distribution can not be completely determined from and .clearly the multivariate normal distributions are special cases of the family of elliptical distributions and the elliptical distributions in turn belong to the family of distributions .in particular , with .the distributions also contain otherwell studied distributions such as the family of -norm distributions ( see for example * ? ? ?* ) . for in general , or in particular ,the parameter provided exist , with the constant of proportionality being dependent on the function or the function respectively .to simplify notation , it is hereafter assumed that these functions are standardize so that whenever which has finite second moments .if the second moments do not exist , then still contains information regarding the linear relationship between the components of .the following lemma notes that the relationship between and extends to any scatter functional .[ diagvess ] + 1 . for any -vector y which is exchangeable sign - symmetric around the originall scatters matrices are proportional to the identity matrix , i.e. for any scatter functional which is well defined at , + where is a constant depending on the density of .2 . for with ,if the scatter functional is well - defined at , then + where is a constant depending on the function . for these models ,all scatter functionals are proportional and so any consistent scatter statistic is consistent for up to a scalar multiple .consequently , and especially when the function is not specified for the distribution , the parameter is usually only of interest up to proportionality .this motivates considering the broader class of shape functionals as defined below .lemma [ diagvess ] also holds when is taken to be a shape functional .[ shapedef ] let be a -variate random vector with cdf .then any matrix valued functional is a shape functional if it is symmetric , positive semi - definite and affine equivariant in the sense that for any full rank matrix and any -vector .an example of a shape functional which is not a scatter functional is the distribution - free m - estimate of scatter .it is worth noting that conjecture in their remark 1 that the distributions are perhaps the largest class of distributions which all scatter or shape matrices are proportional to each other . 
outside of this class, different scatter or shape statistics estimate different population quantities. this is not necessarily a bad feature, since, as noted by several authors, the comparison of different scatter / shape matrices can be useful in model selection, outlier detection and clustering. note that, due to lemma [ diagvess ], any scatter functional satisfies lemma [ covprop ] under a distribution from this class (although properties 3, 4 and 5 are vacuous for any non-normal elliptical distribution since such distributions do not have any independent components). for general distributions, however, one must check that the scatter functional used in a plug-in method has the properties of the regular covariance matrix needed for the method at hand. although a zero covariance between two variables does not imply the variables are independent, the property that independence implies a zero covariance (when the second moments exist) is of fundamental importance when one wishes to view the covariance or correlation as a measure of dependency between variables. it has been pointed out that many of the popular robust scatter matrices do not possess this property, although no concrete counterexample has been presented. this somewhat surprising observation is not well known, and so in this section we explore it in more detail. some simple counterexamples are given which not only verify this observation but also demonstrate how large a _pseudo-correlation_ can be even when the corresponding variables are independent. the first example involves the family of weighted covariance matrices, defined for a given exponent by weighting each centered observation according to its mahalanobis distance. it is easy to see that such a weighted covariance matrix satisfies definition [ scatterdef ] for a scatter matrix, and that it corresponds to the covariance matrix when the exponent is zero. the weighted covariance matrices do not necessarily have good robustness properties, especially for positive exponents, since this corresponds to ``up-weighting'' the values based on their mahalanobis distances. they serve, though, as a tractable family of scatter matrices which helps us to illustrate our main points. for simplicity, assume without loss of generality that the location is zero and the covariance is the identity, suppose now that the components are mutually independent, and consider a particular choice of the exponent for which the calculation is tractable. this yields a simple expression for the diagonal elements, while each off-diagonal element involves the skewnesses of the corresponding components (given that the components have mean zero and unit variance); it follows that an off-diagonal element is zero only if at least one of the two components has zero skewness. for example, consider the bivariate case with two independent components, each having the same skewed two-point discrete distribution. this yields a pseudo-correlation between the two components of 0.1743 even though they are independent. to demonstrate this idea further, figure [ offfig ] shows the pseudo-correlation obtained from the weighted covariance matrix for different values of the exponent in a setting where all components are mutually independent and each has the same standardized skewed distribution; thus, the components have zero mean, unit variance and a common nonzero skewness. the results were obtained by taking the average, over 2000 repetitions, of the sample version of the pseudo-correlation for samples of size 5000.
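a small numerical sketch of this effect follows; it is not the authors' code, the chi-squared marginal is just one convenient skewed choice, and the weight is taken here to be a power of the scaled squared mahalanobis distance, so that exponent zero gives the ordinary covariance matrix and exponent one gives a kurtosis-type matrix (the exact exponents at which the pseudo-correlation vanishes depend on this parametrization).

```python
import numpy as np

def weighted_cov(x, alpha):
    """weighted covariance: average of w * (x - mean)(x - mean)' with
    w = (squared mahalanobis distance / dimension) ** alpha; alpha = 0
    reproduces the ordinary covariance matrix."""
    xc = x - x.mean(axis=0)
    s_inv = np.linalg.inv(np.cov(x, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', xc, s_inv, xc)
    w = (d2 / x.shape[1]) ** alpha
    return (w[:, None, None] * np.einsum('ij,ik->ijk', xc, xc)).mean(axis=0)

rng = np.random.default_rng(1)
# two independent, skewed, standardized components
z = (rng.chisquare(df=1, size=(5000, 2)) - 1.0) / np.sqrt(2.0)
for alpha in (-0.5, 0.0, 0.5, 1.0):
    v = weighted_cov(z, alpha)
    # pseudo-correlation: near zero for exponents 0 and 1 in this parametrization
    print(alpha, v[0, 1] / np.sqrt(v[0, 0] * v[1, 1]))
```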
.the vertical lines at 0 and 2 correspond to and respectively.,scaledwidth=80.0% ] figure [ offfig ] clearly shows that the pseudo - correlations based can be fairly large especially for negative values of .curiously , it is for that has a more robust flavor since it corresponds to down - weighing values rather than up - weighting values based on their original mahalanobis distances .it can also be noticed that the pseudo - covariances are zero when , which corresponds to the covariance matrix , and for .the case , is sometimes referred to as a _ kurtosis matrix _ , or as a matrix of fourth moments , since it involves the fourth moments of .it is known in general that is always diagonal whenever the components of are independent and possess fourth moments , which is a key result needed to justify the well - known _ fobi _ algorithm in independent components analysis .the next counterexample utilizes the minimum volume ellipsoid ( _ mve _ ) estimators . for a given , the _ mve _is defined as the ellipsoid with the minimum volume covering at least of the probability mass , say .the _ mve _location functional is then taken to be the center of this ellipsoid and the _ mve _ scatter functional is taken to be proportion to , with the constant of proportionality chosen so that corresponds to the covariance function when is multivariate normal . for our admittedly artificial example , suppose the random vector has independent components with each component following a multinomial distribution with support 0 , 1 and 2 and probabilities , and respectively . for , the points covered by the _ mve _ can be shown to be , and , which then implies that hence yield as a robust pseudo - correlation of between the two independent components of .of the scatter functionals considered so far , only and are known to be diagonal whenever the components are mutually independent . refer to this property as the _ independence property _ and discuss its importance in independent components analysis .since we are to consider various notions of the independence property here , we refer to this as the _ joint independence property_. that is , [ indpropdef ] a scatter matrix is said to have the joint independence property if , provided exists , whenever has independent components and where d(x ) is a positive diagonal matrix dependent on the distribution of .a common feature of and is that both can be expressed strictly in terms of pairwise differences .let and be two independent copies of , then in general , scatter functionals usually can not be expressed as a function of pairwise differences . on the other hand ,given any scatter functional , one can generate its _ symmetrized version _ by simply applying the functional to pairwise differences .[ symvdef ] let be a scatter functional .its symmetrized version is then defined to be where and are independent copies of . symmetrized m - estimators are discussed in , while symmetrized s - estimators are discussed in . the symmetrized version of the covariance matrix is simply , whereas the symmetrized version of the kurtosis matrix is . as shown by theorem 1 of , any symmetrized scatter matrix ,provided it exists , possesses the joint independence property . 
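numerically, the symmetrization of definition [ symvdef ] amounts to applying a scatter estimator, with its location fixed at the origin, to the pairwise differences of the observations. a minimal sketch follows; the second moment matrix below is only a stand-in for a generic fixed-location scatter statistic (its symmetrized version is simply twice the covariance matrix), and in practice a robust estimator with a fixed-location option would be plugged in instead.

```python
import numpy as np

def pairwise_differences(x):
    """all differences x_i - x_j for i < j."""
    i, j = np.triu_indices(x.shape[0], k=1)
    return x[i] - x[j]

def symmetrize(scatter_about_origin, x):
    """symmetrized version of a scatter statistic: apply the statistic, with its
    location fixed at the origin, to the pairwise differences of the data."""
    return scatter_about_origin(pairwise_differences(x))

# stand-in scatter statistic about the origin: the plain second moment matrix
second_moment = lambda z: z.T @ z / len(z)

rng = np.random.default_rng(5)
x = rng.chisquare(df=1, size=(500, 3))      # independent, skewed components
print(np.round(symmetrize(second_moment, x), 2))
```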
an open question , though , is whether these exist scatter matrices possessing the joint independence property which can not expressed as a function of pairwise differences .consider again the case where consists of independent components .for and a sample size of 1000 , figure [ offscatterfig ] shows the box - plots of the simulated distribution , based upon 2000 repetitions , of the pseudo - correlations using on ( i ) the regular covariance matrix , ( ii ) the m - estimator derived as the maximum likelihood estimator of an elliptical cauchy distribution , ( iii ) the symmetrized version of denoted as , ( iv ) the m - estimator using huber s weights , ( v ) the symmetrized version of denoted , ( vi ) tyler s shape matrix , ( vii ) the symmetrized version of denoted ( also known as dmbgen s shape matrix , ) , ( viii ) the minimum volume estimator and ( ix ) the minimum determinant estimator . throughout the paper ,unless stated otherwise , the tuning constant for and is taken to be 0.7 while for and for is taken to be , where is the sample size and the dimension .dimensional random vector having mutually independent components.,scaledwidth=80.0% ] the box - plots are in agreement with our conjecture that in general only symmetrized scatter matrices have the joint independence property .the joint independence property is weaker than property 3 of lemma [ covprop ] .that is , a scatter matrix satisfying definition [ indpropdef ] does not necessarily give whenever and are independent .for example , consider the kurtosis matrix , which is known to satisfy the joint independence property .let and be mutually independent , each with zero mean and unit variance , and define , where and .it readily follows that and .moreover , and are independent , but a simple calculation gives which is non - zero even for the case when has a symmetric distribution .symmetrization does not help here since is already symmetrized .we conjecture that no scatter matrix , other than the covariance matrix , satisfies property 3 of lemma [ covprop ] in general .as noted in , if more assumptions on the distribution of other than just independence are made , then unsymmetrized scatter matrices can also yield zero pseudo - correlations .for example , if is symmetrically distributed about a center , then any scatter functional , provided it exist at , is a diagonal matrix .this result immediately implies that a symmetrized scatter matrix has the joint independence property . in the following , we state some further conditions under which independence implies a zero pseudo - correlationthe first result shows that symmetry can be slightly relaxed .[ diagvsym ] let be a -variate random vector with independent components . furthermore ,suppose components of are marginally symmetric , i.e. for at least components , for some .then any scatter matrix , provided it exists at , is a diagonal matrix .next , consider the case for which all components are are not necessarily mutually independent , but rather that the -vector consists of independent blocks of components .this means consists of sub - vectors with dimensions , , such the sub - vectors are mutually independent of each other .such a setup arises for example in independent subspace analysis ( isa ) .we refer to this property as the _ block independence property_. 
[ indblockpropdef ] let a random vector have independent blocks with given dimensions. the scatter matrix is said to have the block independence property if, provided it exists at the distribution, it is a block diagonal matrix with the corresponding block dimensions. clearly, scatter matrices having the block independence property have the joint independence property. it is not clear, though, if the converse is true, i.e. whether the joint independence property implies the block independence property. nevertheless, as the corollary to the next theorem shows, symmetrization again assures that the scatter matrix has zeros in the right places. [ diagvindblock ] let a random vector have independent blocks with given dimensions. if all but at most one of the blocks are symmetric about a symmetry center, then any scatter matrix, provided it exists at the distribution, will be block diagonal. [ diagvsymindblock ] any symmetrized scatter matrix has the block independence property. independent components analysis (ica) has become increasingly popular in signal processing and biomedical applications, where it is viewed as a practical replacement for principal components analysis (pca). ica, in its most basic form, presumes that an observable random vector is a linear mixture of a latent random vector whose components are mutually independent. hence, the ica model is commonly written as a full rank _mixing_ matrix applied to the latent _signal_. in order for the model to be identifiable, the signal can have at most one normally distributed component. even then, the mixing matrix and signal are not completely identifiable, since the observation can also be represented with the mixing matrix multiplied on the right by a permutation matrix and a full rank diagonal matrix, and the signal transformed accordingly. this, though, is the only indeterminacy in the model. the primary goal in ica is then to find an _unmixing_ matrix such that the transformed observation has independent components; consequently, the unmixing matrix recovers the inverse of the mixing matrix up to a permutation and a rescaling of its rows. a general overview of ica can be found, for example, in the often cited ica book. most approaches to ica typically begin by first whitening the data using the sample covariance matrix. this is based on the observation that the whitened observation is an orthogonal transformation of the signal whenever the signal is viewed as standardized, i.e. with zero mean and identity covariance. after whitening the data, attention can then be focused on methods for rotating the uncorrelated components of the whitened data to obtain independent components. this approach of course presumes that the observation possesses second moments. an obvious, though naive, way to make this approach more robust would be to simply replace the covariance matrix with some robust scatter matrix. this has been proposed, for example, in the ica literature (section 14.3.2 of one often cited reference), with the minimum covariance determinant (mcd) estimator being recommended for this purpose. however, in neither case is it noted that, for such an approach to be valid, either the signal must have a symmetric distribution, or more exactly have at most one skewed component, or the robust scatter matrix must satisfy the independence property ([ indpropdef ]), which for example is not satisfied by the mcd. problems in practice, when simply replacing the regular covariance matrix with the mcd in the context of the popular fastica method, have also been noted. the reason such problems can arise is that if the scatter matrix does not satisfy ([ indpropdef ]), then its value at the signal is not necessarily diagonal, and hence the signal may not correspond to any rotation of the whitened data.
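to make the whitening-plus-rotation idea concrete, here is a minimal fobi-style sketch: the covariance matrix is used to whiten and the kurtosis-type matrix of the whitened data supplies the rotation. both of these matrices have the joint independence property, and fobi additionally requires the components to have distinct kurtoses, which holds for the toy sources below; this is only an illustration of the principle, not the implementation used later in the paper.

```python
import numpy as np

def fobi_unmixing(x):
    """estimate an unmixing matrix by whitening with the covariance matrix and
    then rotating with the eigenvectors of a fourth-moment (kurtosis-type)
    scatter matrix of the whitened data."""
    xc = x - x.mean(axis=0)
    cov = np.cov(xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    whitener = evecs @ np.diag(evals ** -0.5) @ evecs.T   # cov^{-1/2}
    z = xc @ whitener.T
    d2 = np.sum(z ** 2, axis=1)                           # squared distances
    kurt = (z * d2[:, None]).T @ z / len(z)               # kurtosis-type matrix
    _, rotation = np.linalg.eigh(kurt)
    return rotation.T @ whitener                          # unmixing matrix

# toy use: two independent skewed sources, mixed linearly
rng = np.random.default_rng(2)
s = np.column_stack([rng.exponential(size=2000), rng.chisquare(3, size=2000)])
a = np.array([[1.0, 0.4], [0.3, 1.0]])                    # mixing matrix
w = fobi_unmixing(s @ a.T)
print(w @ a)   # ideally a permuted and rescaled identity matrix
```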
to quantitatively demonstrate the relevance of the independence property, we consider the bivariate case where the signal has two skewed independent components, both standardized to have mean zero and unit variance. for this example, we use the ica method based on two scatter matrices. this ica method requires two scatter (or shape) matrices, both satisfying the independence property. the method consists of using the first scatter matrix to whiten the data and then performing a principal component analysis of the whitened data with respect to the second scatter matrix; the resulting components then correspond to the independent components. the results are also the same when the roles of the two scatter matrices are interchanged. a small simulation study was conducted using samples of size 1000 and with 1000 replications. since this ica method is affine invariant, the choice of the mixing matrix has no effect on the performance of the method, and so without loss of generality we take it to be the identity. using the terminology established in the earlier sections, we consider five pairs of scatter matrices, labelled (i)-(v). cases (iii) and (v) are the symmetrized versions of (ii) and (iv), respectively. case (i) is already the same as its symmetrized version, and it corresponds to the classical fobi method. note that only for cases (i), (iii) and (v) do both scatter matrices satisfy the independence property. to measure the performance of the methods, we use the minimum distance index md, defined as the smallest distance between the product of the estimated unmixing matrix and the mixing matrix and the identity, minimized over permutation matrices and diagonal matrices with non-zero entries. the range of the index is between 0 and 1, with 0 corresponding to an optimal recovery of the independent components. box-plots for the simulations are shown in figure [ icaex ]. the plots clearly show the relevance of the independence property here when there is more than one asymmetric component, even in case (ii), which involves only one scatter matrix without the independence property. (caption of figure [ icaex ]: performance in two dimensions of the ica method based on two scatter matrices, for various choices of the scatter matrices; the first component has one skewed distribution and the second another.) in this section we consider observational multivariate linear regression, that is, linear regression for the case when the explanatory variables, as well as the responses, are randomly observed rather than controlled. the classical multivariate linear regression model expresses a multivariate response as an intercept vector plus a slope matrix applied to a random vector of explanatory variables plus a random error term which is independent of the explanatory variables.
in this setting, interest is usually still focused on estimating the intercept vector, the slope matrix and perhaps the error variance-covariance matrix if it exists. the standard least squares approach is well known to be highly non-robust, and so numerous robust regression methods have been proposed. one such method is based on the observation that, if both the response and the explanatory variables possess second moments, then the slope and intercept can be written in terms of the joint covariance matrix and mean vector; this corresponds to the population or functional version of the estimates arising from the least squares method. one can then generate a robust functional version by again simply replacing the first two moments with robust versions of scatter and location. that is, concatenate the explanatory variables and the response into one vector and consider the corresponding partitions of an affine equivariant location functional and a scatter functional. if the distribution of the error term is symmetric, then it has been observed that the intercept and slope parameters can also be identified, even if no moments exist, through the analogous equations, and so using the finite sample versions of the location and scatter functionals in the above relationship gives, under general regularity conditions, consistent estimates of the intercept and slope. this approach was first proposed for univariate multiple regression using m-estimators of multivariate location and scatter, and it was noted there that this approach, unlike m-estimates of regression, yields bounded influence regression estimates. this approach has also been studied for the oja sign covariance matrix, for the lift rank covariance matrix, for s-estimators and for the mcd. the error variance is not a robust functional itself, and is not identifiable when the error term does not have second moments. consequently, it is usually replaced by a robust scatter matrix for the residual term. also, if the error term does not have a symmetric distribution, then the intercept term is confounded with the location of the error term. it has not been previously noted, though, how the relationship for the slope is affected by asymmetric error distributions. we first note that, due to the affine equivariance property of a scatter (or shape) functional, this relationship always yields the proper equivariance properties for the slope parameters. [ regequi ] let the data follow the regression model ([ regmodel ]), assume that the scatter functional exists at the joint distribution with its explanatory-variable block being nonsingular, and define the corresponding slope functional; then this functional is regression, scale and design equivariant. despite these equivariance properties, in order for the slope functional to equal the true slope matrix, additional conditions on the scatter functional are needed, which, as shown by corollary [ diagvsymindblock ], hold for symmetrized scatter / shape matrices. [ regsymv ] let the data follow the regression model ([ regmodel ]) and assume that the scatter functional exists at the joint distribution with its explanatory-variable block being nonsingular; also, suppose the scatter functional satisfies the block independence property given by definition [ indblockpropdef ]. then the slope functional equals the true slope matrix. consistency of the slope term under asymmetric errors has also been established for rank regression estimates and for certain classes of robust regression estimates; for details see, for example, chapter 3 and chapter 4.9.2, respectively, of the corresponding monographs. in order to demonstrate the necessity of symmetrization here whenever skewness is present in both the explanatory variables and the errors, we conducted a simulation study for a model in which the explanatory variable has a log-normal distribution standardized to have mean zero and unit variance, and the error has an exponential distribution standardized to have mean zero and unit variance.
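before turning to the results, here is a minimal sketch of the plug-in slope estimate described above: a scatter statistic is computed for the concatenated vector of explanatory variables and responses, partitioned, and solved for the slope. the default scatter below is just the covariance matrix; any other scatter statistic (for instance the symmetrized estimator sketched earlier) could be passed in instead, and the toy model is an illustration, not the exact simulation design of the paper.

```python
import numpy as np

def plugin_regression_slope(x, y, scatter=lambda z: np.cov(z, rowvar=False)):
    """slope estimate from any scatter statistic applied to the joint vector
    (x, y): solve v_xx b = v_xy, the analogue of the least squares solution.
    `scatter` must accept an (n, p) array of rows and return a p x p matrix."""
    z = np.column_stack([x, y])
    v = scatter(z)
    px = x.shape[1]
    v_xx, v_xy = v[:px, :px], v[:px, px:]
    return np.linalg.solve(v_xx, v_xy)          # (dim x) by (dim y) slope matrix

# toy check with a skewed regressor and skewed errors
rng = np.random.default_rng(3)
x = rng.lognormal(sigma=1.0, size=(2000, 1))
e = rng.exponential(size=(2000, 1)) - 1.0
y = 2.0 * x + e
print(plugin_regression_slope(x, y))            # close to 2 for the covariance
```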
for samples of size 2000, the slope is estimated using (i) the regular covariance matrix, (ii) the m-estimator derived as the maximum likelihood estimator of an elliptical cauchy distribution, (iii) its symmetrized version, (iv) the m-estimator using huber's weights, (v) its symmetrized version, (vi) tyler's shape matrix, (vii) its symmetrized version, (viii) the minimum volume ellipsoid estimator and (ix) the minimum covariance determinant estimator. the results, based on 1000 replications and presented in figure [ regex ], clearly show that in this case the estimate of the slope is severely biased when non-symmetrized scatter matrices are used. the last method considered in this paper is graphical modeling for quantitative variables based on undirected graphs. in graphical models, one is usually interested in those pairs of variables which are independent conditional on all the other variables, or, in graphical modeling terminology, in those vertices (variables) which have no edges between them. in general, finding conditionally independent variables is challenging, and so finding variables with zero partial correlations often serves as a proxy. in this section, we investigate the relationship between conditional independence and robust versions of the partial correlation. for a set of random variables, consider the relationship between two of the variables given the remaining ones. the partial variance-covariance matrix of the two variables given the rest corresponds to the covariance matrix of the residuals from the orthogonal projections of the two variables onto the subspace spanned by the remaining variables, and the partial correlation between the two variables given the rest is then simply the correlation of these residuals. the partial correlation can also be expressed in terms of the precision or concentration matrix of the combined vector: writing the precision matrix as the inverse of the covariance matrix, the partial correlation between two components is, up to sign, the corresponding off-diagonal element of the precision matrix divided by the square root of the product of the two corresponding diagonal elements, and hence the partial correlation is zero if and only if that off-diagonal element of the precision matrix is zero. for gaussian graphical models, for which the joint distribution is presumed to be multivariate normal, conditional independence between two variables given the rest is equivalent to the partial correlation being zero. in general, conditional independence implies a conditional correlation of zero, presuming the second moments exist, although the converse does not hold in general. however, a perhaps lesser known result is that conditional independence does not imply a zero partial correlation in general; some additional conditions are needed. in particular, if the regression of the two variables on the remaining variables is linear, then conditional independence implies a zero partial correlation (see theorem 1 in the reference). under such conditions, variables having zero partial correlations then serve as candidates for conditionally independent variables. when used in place of conditional independence, zero partial correlations help provide a parsimonious understanding of the relationship between variables. robustness issues have been considered for graphical models in, for example, two recent papers.
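before discussing these robust approaches, the plug-in partial correlation itself can be sketched in a few lines: invert a scatter statistic of the full vector and read the partial correlations off the resulting concentration matrix. the covariance matrix below is again only a placeholder for whichever scatter statistic one prefers, and the toy example uses normal variables purely to make the conditional independence structure transparent.

```python
import numpy as np

def plugin_partial_correlations(x, scatter=lambda z: np.cov(z, rowvar=False)):
    """matrix of pairwise partial correlations, each given the remaining
    variables, computed from any scatter statistic via its inverse
    (concentration) matrix: r_ij = -k_ij / sqrt(k_ii * k_jj)."""
    k = np.linalg.inv(scatter(x))
    d = np.sqrt(np.diag(k))
    r = -k / np.outer(d, d)
    np.fill_diagonal(r, 1.0)
    return r

# toy chain x1 -> x2 -> x3: x1 and x3 are conditionally independent given x2
rng = np.random.default_rng(4)
x1 = rng.normal(size=5000)
x2 = x1 + rng.normal(size=5000)
x3 = x2 + rng.normal(size=5000)
print(plugin_partial_correlations(np.column_stack([x1, x2, x3])).round(2))
# the (1, 3) entry is close to zero even though the marginal correlation is not
```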
in both papers ,the emphasis is on finding pairs of variables for which a robust version of the partial correlations are zero .the approach used in is a robust graphical lasso .the method uses a penalized maximum likelihood approach based on an elliptical -distribution .the approach advocated in is a plug - in method based on using robust scatter matrices .they also study the asymptotic properties of the plug - in method under elliptical distributions .consequently , neither paper addresses conditional independence since conditional independence can never hold for variables following a joint elliptical distribution other than the multivariate normal . outside the elliptical family ,an important question worth addressing is under what conditions does conditional independence imply that the the plug - in version of the partial correlation equals zero ?since regression , i.e. the conditional mean of given , is itself not a robust concept and also is naturally related to covariances , the condition that regression be linear is not helpful here .we leave general conditions under which conditional independence implies a zero robust partial correlation as an open question . we can , though , obtain results for the following model where is a non - random matrix , , and , and are mutually independent . for this model, it readily follows that . also , if the first moments exist then the regression of on is linear . again , if one uses symmetrized scatter matrices than one obtains a plug - in version of the partial correlation which is equal to zero under this model .[ graphmodtheo ] suppose model ( [ graphmodel ] ) holds , and assume that exists and is nonsingular .also , suppose satisfies the block independence property given by definition [ indblockpropdef ] , then where is the element of the corresponding precision matrix .as an example for illustrating theorem [ graphmodtheo ] , consider the simple graphical model given in figure [ graph ] , where and , with having a standard normal distribution , a log - normal distribution with shape parameter standardized such that and and a distribution standardized to have and .using the same nine scatter matrices ( i)-(ix ) as in the previous section , box plots for the plug - in partial correlation of and given for sample of size 2000 based on 1000 replications are presented in figure [ graphex ] .again , the advantage to using symmetrized scatter / shape matrices is clearly shown .for various robust multivariate plug - in methods , we recommend symmetrized scatter matrices since they help protect against severe bias whenever skew components are present . a drawback to using symmetrized scatter matrices , though , is that they are more computationally intensive than their non - symmetrized counterparts . for a sample of size ,a symmetrized scatter matrix involves pairs . 
on the other hand, it does not require an estimate of location, since each pairwise difference is centered at the origin. consequently, only the differences over distinct index pairs, taken in one order, are required for its computation, and so the number of pairwise differences needed reduces somewhat. modern computers, though, have become so powerful that computational cost should not deter the use of symmetrized scatter matrices when appropriate. unfortunately, most robust scatter matrices implemented in packages such as r do not allow the option of specifying the location vector, and so cannot be applied readily in computing symmetrized scatter matrices. we hope the discussion in this paper will motivate future implementations of scatter matrices to include a fixed location option, as is the case in the r packages ics and icsnp. it may be difficult in general to develop algorithms which spread the computation of a scatter matrix over several cores. for m-estimates of scatter, though, parallelization is possible. to see this, we note that when computing a symmetrized m-estimate of scatter via the simple iteratively reweighted least squares algorithm, the update step averages the weighted outer products of the pairwise differences, where the weights are given by the weight function associated with the m-estimate evaluated at the mahalanobis distances of the differences under the current value of the scatter matrix. a simple way to compute the symmetrized scatter matrix which allows parallelization is then to split this sum over pairs into partial sums, so that the iteration update for the symmetrized version can be assembled from partial sums computed on separate cores. to illustrate computation times, we considered the symmetrized version of tyler's shape matrix, i.e. dümbgen's shape matrix, implemented as `duembgen.shape` in the r-package icsnp, and the symmetrized m-estimator of scatter using huber's weights, implemented as `symm.huber` in the r-package spatialnp. the average computing times out of 5 runs for randomly generated data, computed on an intel(r) xeon(r) cpu x5650 with 2.67ghz and 24 gb of memory running a 64-bit redhat linux, are presented in figure [ compt ]. the figure shows that the computation time as a function of sample size is close to linear when plotted on a log-log scale, with a slope of approximately 2. hence, the computation times grow approximately as the square of the sample size. also, for moderately large samples the computation times tend to be around one second, so the symmetrized m-estimates are computationally feasible for even fairly large sample sizes. as a comparison, computation times for the non-symmetrized versions of the m-estimators are also shown in the figure. (caption of figure [ compt ]: average computation times for dümbgen's shape matrix and for the symmetrized huber m-estimator of scatter for various sample sizes and dimensions; both axes are given on a log scale; the non-symmetrized versions of the m-estimators are also shown.)
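a minimal sketch of the iteratively reweighted update just described is given below; the weight function, tuning and stopping rule are illustrative only, and the sum over pairs, written here as a loop over chunks, is the part that could be split across several cores.

```python
import numpy as np

def symmetrized_m_step(x, v, weight, n_chunks=4):
    """one iteratively reweighted update for a symmetrized m-estimate of scatter:
    v_new = average over pairs i < j of  u(d_ij^2) (x_i - x_j)(x_i - x_j)',
    with d_ij the mahalanobis distance of the difference under the current v.
    each chunk's partial sum could be computed on a separate core."""
    n, p = x.shape
    i, j = np.triu_indices(n, k=1)
    v_inv = np.linalg.inv(v)
    total, count = np.zeros((p, p)), 0
    for i_part, j_part in zip(np.array_split(i, n_chunks),
                              np.array_split(j, n_chunks)):
        d = x[i_part] - x[j_part]
        d2 = np.einsum('ij,jk,ik->i', d, v_inv, d)
        w = weight(np.maximum(d2, 1e-12), p)
        total += (w[:, None] * d).T @ d
        count += len(d)
    return total / count

def symmetrized_m_scatter(x, weight=lambda d2, p: np.minimum(1.0, p / d2),
                          n_iter=50):
    v = np.cov(x, rowvar=False)        # starting value
    for _ in range(n_iter):
        v = symmetrized_m_step(x, v, weight)
    return v
```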
the goal of this paper has been to stress that some important or ``good'' properties of the covariance matrix do not necessarily carry over to affine equivariant scatter matrices. consequently, it is necessary to exercise some caution when implementing robust multivariate procedures based on the plug-in method, i.e. when substituting a robust scatter matrix for the covariance matrix in classical multivariate procedures. in particular, the validity of some important multivariate methods requires that the scatter matrix satisfy certain independence properties, which do not necessarily hold whenever the components arise from a skewed distribution. thus, we recommend the use of symmetrized scatter matrices in such situations, since they are the only known scatter matrices which satisfy the independence property, definition [ indpropdef ], or the block independence property, definition [ indblockpropdef ]. we further conjecture that the only scatter matrices that satisfy these independence properties are those which can be expressed in terms of the pairwise differences of the observations. this paper has focused on the independence properties of scatter matrices. it would also be worth considering which scatter matrices, if any, possess the additivity property of the covariance matrix, lemma [ covprop ].4. this property is relevant in factor analysis, in structural equation modeling, and in other multivariate methods. for example, in the factor analysis model the observation equals a location vector plus a matrix of factor loadings (defined up to an orthogonal transformation) applied to a vector of latent factors, plus an error term. the standard factor analysis assumptions are that the components of both the factors and the errors are mutually independent, and that the factors and the errors are also independent of each other. furthermore, if the first two moments exist, then it is further assumed without loss of generality that the factors and the errors have mean zero, the factors have identity covariance, and the errors have a diagonal covariance matrix with positive entries. consequently, one can view such a factor analysis model as a reduced rank covariance model with an additive diagonal term; this decomposition is central to the classical statistical methods in factor analysis. it is not clear, though, if one can define other scatter matrices for which an analogous reduced rank plus diagonal decomposition holds. some robust plug-in methods for factor analysis and structural equation models have been considered in the literature. turning to the proofs, let a sign-change matrix again denote a diagonal matrix with diagonal elements equal to plus or minus one, and let a permutation matrix denote a matrix obtained by permuting the rows and / or columns of the identity matrix.
for part 1 , if for all and then for all , which implies all off - diagonal elements are zero .also , since for all , it follows that all the diagonal elements are equal .hence , , where is a constant depending on the density of .part 2 of the lemma then follows from affine equivariance .let be a vector with independent components where components are marginally symmetric .let be the component which is not necessarily symmetric and let be any sign - change matrix for which the diagonal element is .hence , and due to the affine equivariance of we have for any such .this implies for and hence is a diagonal matrix .let have independent blocks with dimensions , where all but the block are symmetric in the sense that .let denote a block sign - change matrix where the signs are changed according to blocks having dimension respectively .also let denote a block sign - change matrix matrix where the diagonal block is .since for any such , it follows from the affine equivariance of that .this implies that off - diagonal block elements are zero and hence is block - diagonal with blocksizes .let have independent blocks and let and be independent identical copies of .then also has independent blocks .furthermore all blocks of are symmetric around the origin and so the corollary follows from theorem [ diagvindblock ] .due to the equivariance properties stated in lemma [ regequi ] it is sufficient to consider the case for which and . for this case two independent blocks of dimensions and , which by theorem [ diagvsymindblock ] implies is block diagonal .consequently , and so .let . by property [ indblockpropdef ], it follows that where is a diagonal matrix with positive diagonal terms , and is positive definite symmetric matrix . by affine equivariance , under model ( [ graphmodel ] ) it then follows that taking the inverse gives thus , .ilmonen , p. , nordhausen , k. , oja , h. & ollila , e. ( 2010 ) . a new performance index for ica : properties , computation and asymptotic analysis . in _ latent variable analysis and signal separation _ , v. vigneron , v. zarzoso , e. moreau , r. gribonval & e. vincent , eds . heidelberg : springer , pp . 229236 .nordhausen , k. , oja , h. & ollila , e. ( 2011 ) .multivariate models and the first four moments . in _ nonparametric statistics and mixture models : a festschrift in honor of thomas p. hettmansperger _ , d. hunter , d. richards & j. rosenberger , eds .singapore : world scientific , pp .267287 .ollila , e. , oja , h. & hettmansperger , t. p. ( 2002 ) .estimates of regression coefficients based on the sign covariance matrix ._ journal of the royal statistical society : series b ( statistical methodology ) _ * 64 * , 447466 .rousseeuw , p. j. ( 1986 ) .multivariate estimation with high breakdown point . in _ mathematical statistics and applications _ , w. grossman , g. pflug , i. vincze & w. wertz , eds .dordrecht : reidel , pp .
|
many multivariate statistical methods rely heavily on the sample covariance matrix . it is well known though that the sample covariance matrix is highly non - robust . one popular alternative approach for `` robustifying '' the multivariate method is to simply replace the role of the covariance matrix with some robust scatter matrix . the aim of this paper is to point out that in some situations certain properties of the covariance matrix are needed for the corresponding robust `` plug - in '' method to be a valid approach , and that not all scatter matrices necessarily possess these important properties . in particular , the following three multivariate methods are discussed in this paper : independent components analysis , observational regression and graphical modeling . for each case , it is shown that using a symmetrized robust scatter matrix in place of the covariance matrix results in a proper robust multivariate method . + * keywords * : factor analysis ; graphical model ; independent components analysis ; observational regression ; scatter matrix ; symmetrization .
|
research and education in astronomy and astrophysics are an international enterprise and the astronomical community has long shown leadership in creating international collaborations and cooperation : because ( i ) astronomy has deep roots in virtually every human culture , ( ii ) it helps to understand humanity s place in the vast scale of the universe , and ( iii ) it teaches humanity about its origins and evolution .humanity s activity in the quest for the exploration of the universe is reflected in the history of scientific institutions , enterprises , and sensibilities .the institutions that sustain science ; the moral , religious , cultural , and philosophical sensibilities of scientists themselves ; and the goal of the scientific enterprise in different regions on earth are subject of intense study ( pyenson and sheets - pyenson 1999 ). the decadal reports for the last decade of the 20th century ( bahcall , 1991 ) and the first decade of the 21st century ( mckee and taylor , 2001 ) have been prepared primarily for the north american astronomical community , however , it may have gone unnoticed that these reports had also an impact on a broader international scale , as the reports can be used , to some extend , as a guide to introduce basic space science , including astronomy and astrophysics , in nations where this field of science is still in its infancy .attention is drawn to the world - wide - web sites at http://www.seas.columbia.edu// + un - esa/ and http://www.unoosa.org/oosa/en/sap/bss/index.html , where the tripod concept is publicized on how developing nations are making efforts to introduce basic space science into research and education curricula at the university level .the concept , focusing on astronomical telescope facilities in developing nations , was born in 1990 as a collaborative effort of developing nations , the european space agency ( esa ) , the united nations ( un ) , and the government of japan . through annual workshops and subsequent follow - up projects , particularly the establishment of astronomical telescope facilities , this concept is gradually bearing results in the regions of asia and the pacific , latin america and the caribbean , africa , and western asia ( wamsteker et al . 2004 ) .in 1959 , the united nations recognized a new potential for international cooperation and formed a permanent body by establishing the committee on the peaceful uses of outer space ( copuos ) . in 1970 , copuos formalized the un programme on space applications to strengthen cooperation in space science and technology between developing and industrialized nations .the overall purpose of the programme `` peaceful use of outer space '' is the promotion of international cooperation in the peaceful uses of outer space for economic , social and scientific development , in particular for the benefit of developing nations .the programme aims at strengthening the international legal regime governing outer space activities to improve conditions for expanding international cooperation in the peaceful uses of outer space .the implementation of the programme will strengthen efforts at the national , regional and global levels , including among entities of the united nations system , to increase the benefits of the use of space science and technology for sustainable development . within the secretariat of the united nations , the programme is implemented by the office for outer space affairs . 
at the inter - governmental level , the programme is implemented by the committee on the peaceful uses of outer space , which addresses scientific and technical as well as legal and policy issues related to the peaceful uses of outer space .the committee was established by the general assembly in 1959 and has two subsidiary bodies , the legal subcommittee and the scientific and technical subcommittee .the direction of the programme is provided in the annual resolutions of the general assembly and decisions of the committee and its two subcommittees . as part of its programme of work ,the office provides secretariat services to the committee and its subsidiary bodies and implements the united nations programme on space applications .the activities of the programme on space applications are primarily designed to build the capacity of developing nations to use space applications to support their economic and social development . in its resolution 54/68 of 6 december 1999 ,the united nations general assembly endorsed the resolution entitled `` the space millennium : vienna declaration on space and human development '' , which had been adopted by the third united nations conference on the exploration and peaceful uses of outer space ( unispace iii ) , held in july 1999 .since then , the focus of the work undertaken by the office under this programme has been to assist the committee in the implementation of the recommendations of unispace iii . in october 2004, the united nations general assembly reviewed the progress made in the implementation of the recommendations of unispace iii and , in its resolution 59/2 , endorsed the committee s plan of action for their further implementation .the plan of action , contained in the report of the committee to the assembly for its review ( a/59/174 ) , constitutes a long - term strategy for enhancing mechanisms to develop or strengthen the use of space science and technology to support the global agendas for sustainable development .the report also provides a road map to make space tools more widely available by moving from the demonstration of the usefulness of space technology to an operational use of space - based services . in its report, the committee noted that in implementing the plan of action , the committee could provide a bridge between users and potential providers of space - based applications and services by identifying needs of member states and coordinating international cooperation to facilitate access to the scientific and technical systems that might meet them . 
to maximize the effectiveness of its resources ,the committee adopted a flexible mechanism , action teams , that takes advantage of partnerships among its secretariat , governments , and intergovernmental and international non - governmental organizations to further implement the recommendations of unispace iii .at its forty - ninth session , held in june 2006 , the committee had before it for its consideration the proposed strategic framework for the office for outer space affairs for the period 2008 - 2009 , as contained in document ( a/61/6 ( prog.5 ) ) .the committee agreed on the proposed strategic framework .the expected accomplishments and the strategy reflected in the strategic framework proposed by the office for outer space affairs for the period 2008 - 2009 ( a/61/6 ) are aimed at achieving increased international cooperation among member states and international entities in the conduct of space activities for peaceful purposes and the use of space science and technology and their applications towards achieving internationally agreed sustainable development goals . in brief , the three expected accomplishments of the office are : ( a ) greater understanding , acceptance , and implementation by the international community of the legal regime established by the united nations to govern outer space activities ; ( b ) strengthened capacities of countries in using space science and technology and their applications in areas related , in particular , to sustainable development , and mechanisms to coordinate their space - related policy matters and space activities ; and ( c ) increased coherence and synergy in the space - related work of entities of the united nations system and international space - related entities in using space science and technology and their applications as tools to advance human development and increase overall capacity development .the establishment and operation of regional centres for space science and technology , affiliated to the united nations + ( http://www.unoosa.org/oosa/en/sap/centres/index.html ) , + as well as workshops on basic space science + ( http://www.unoosa.org/oosa/en/sap/bss/index.html ) + and the international heliophysical year 2007 + ( http://www.unoosa.org/oosa/en/sap/bss/ihy2007/index.html ) + are part of the accomplishments of the office .in conjunction to the workshops , to support research and education in astronomy , the government of japan has donated high - grade equipment to a number of developing nations ( singapore 1987 , indonesia 1988 , thailand 1989 , sri lanka 1995 , paraguay 1999 , the philippines 2000 , chile 2001 ) within the scheme of oda of the government of japan ( kitamura 1999 ) . here , reference is made to 45 cm high - grade astronomical telescopes furnished with photoelectric photometer , computer equipment , and spectrograph ( or ccd ) .after the installation of the telescope facility by the host country and japan , in order to operate such high - grade telescopes , young observatory staff members from the host country have been invited by the bisei astronomical observatory for education and training , sponsored by the japan international cooperation agency [ jica ] ( kitamura 1999 , kogure 1999 , kitamura 2004 , un document a / ac.105/829 ) .similar telescope facilities , provided by the government , were inaugurated in honduras ( 1997 ) and jordan ( 1999 ) .the research and education programmes at the newly established telescope facilities focus on time - varying phenomena of celestial objects . 
the 45 cm class reflecting telescope with photoelectric photometer attachedis able to detect celestial objects up to the 12th magnitude and with a ccd attached up to the 15th magnitude , respectively .such results have been demonstrated for the light variation of the eclipsing close binary star v505 sgr , the x - ray binary cyg x-1 , the eclipsing part of the long - period binary eps aur , the asteroid no.45 eugenia , and the eclipsing variable rt cma ( kitamura 1999 ) .also in 1990 , the government of japan through oda , facilitated the provision of planetariums to developing nations ( kitamura 2004 ; smith and haubold 1992 ) .in the course of preparing the establishment of the above astronomical telescope facilities , the workshops made intense efforts to identify available material to be used in research and education by utilizing such facilities .it was discovered that variable star observing by photoelectric or ccd photometry can be a prelude to even more advanced astronomical activity .variable stars are those whose brightness , colour , or some other property varies with time . if measured sufficiently carefully , almost every star turns out to be variable. the variation may be due to geometry , such as the eclipse of one star by a companion star , or the rotation of a spotted star , or it may be due to physical processes such as pulsation , eruption , or explosion .variable stars provide astronomers with essential information about the internal structure and evolution of the stars .the most preeminent institution in this specific field of astronomy is the american association of variable star observers .the aavso co - ordinates variable star observations made by amateur and professional astronomers , compiles , processes , and publishes them , and in turn , makes them available to researchers and educators . to facilitate the operation of variable star observing programmes and to prepare a common ground for such programmes ,the aavso developed a rather unique package titled `` hands - on astrophysics '' which includes 45 star charts , 31 35 mm slides of five constellations , 14 prints of the cygnus star field at seven different times , 600,000 measurements of several dozen stars , user - friendly computer programmes to analyze them , and to enter new observations into the database , an instructional video in three segments , and a very comprehensive manual for teachers and students ( http://www.aavso.org/ ) . assuming that the telescope is properly operational , variable starscan be observed , measurements can be analyzed and sent electronically to the aavso . the flexibility of the `` hands - on astrophysics '' material allows an immediate link to the teaching of astronomy or astrophysics at the university level by using the astronomy , mathematics , and computer elements of this package .it can be used as a basis to involve both the professor and the student to do real science with real observational data . 
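the kind of analysis that such telescope facilities and the `` hands - on astrophysics '' package make possible can be illustrated with a short , self - contained python sketch of a period search on a folded variable - star light curve . the data file name , column layout and trial - period range below are hypothetical , and the string - length - like statistic is only one of several standard ways of doing this ; the fragment is an illustration , not part of the aavso software .

```python
import numpy as np

# illustrative variable-star analysis: fold magnitude measurements on trial
# periods and keep the period that gives the smoothest folded light curve.
# file name and column layout are hypothetical (time in days, magnitudes).
t, mag = np.loadtxt("observations.txt", unpack=True)

def folded_dispersion(period):
    phase = np.mod(t, period) / period
    m = mag[np.argsort(phase)]
    # string-length-like statistic: total magnitude jump between
    # phase-adjacent points (small when the folded curve is smooth)
    return np.sum(np.abs(np.diff(m)))

trial_periods = np.linspace(0.1, 10.0, 20000)      # days, illustrative range
scores = np.array([folded_dispersion(p) for p in trial_periods])
best = trial_periods[np.argmin(scores)]
print("best trial period: %.4f days" % best)
```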
after a careful exploration of `` hands - on astrophysics '' and thanks to the generous cooperation of the aavso, it was adopted by the above astronomical telescope facilities for their observing programmes ( mattei and percy 1999 , percy 1991 , wamsteker et al .2004 ) .the aavso is currently undertaking a massive effort to translate its basic visual observing manual into many languages such as spanish and russian , to make this basic material available in the native language of any developing nation .the aavso is actively pursuing translations in arabic and chinese so as to have versions available in all the official united nations languages .various strategies for introducing the spirit of scientific inquiry to universities , including those in developing nations , have been developed and analyzed ( wentzel 1999a ) .the workshops on basic space science were created to foster scientific inquiry . organized and hosted by governments and scientific communities ,they serve the need to introduce or further develop basic space science at the university level , as well as to establish adequate facilities for pursuing a scientific field in practical terms .such astronomical facilities are operated for the benefit of the university or research establishment , and will also make the results from these facilities available for public educational efforts .additional to the hosting of the workshops , the governments agreed to operate such a telescope facility in a sustained manner with the call on the international community for support and cooperation in devising respective research and educational programmes .organizers of the workshops have acknowledged in the past the desire of the local scientific communities to use educational material adopted and available at the local level ( prepared in the local language ) .however , the workshops have also recommended to explore the possibility to develop educational material ( additional to the above mentioned `` hands - on astrophysics '' package ) which might be used by as many as possible university staff in different nations while preserving the specific cultural environment in which astronomy is being taught and the telescope is being used .a first promising step in this direction was made with the project `` astrophysics for university physics courses '' ( wentzel 1999b , wamsteker et al .this project has been highlighted at the iau / cospar / un special workshop on education in astronomy and basic space science , held during the unispace iii conference at the united nations office vienna in 1999 ( isobe 1999 ) .additionally , a number of text books and cd - roms have been reviewed over the years which , in the view of astronomers from developing nations , are particularly useful in the research and teaching process : bennett et al .2007 , for teaching purposes and bennett , 2001 , lang , 1999 , 2004 reference books in the research process . 
as part of the 15th anniversary celebrations of the hubble space telescope ,the european space agency has produced an exclusive , 83-minute dvd film , entitled `` hubble 15 years of discovery '' .the documentary also mentions the role of the hubble space telescope project in facilitating some of the activities of the united nations office for outer space affairs , particularly processing of hubble imagery as part of the education and research activities of the un - affiliated regional centres for space science and technology and the workshops on basic space science .the hubble dvd was distributed world - wide , through the office , as a unique educational tool for astronomy and astrophysics .in 1957 a programme of international research , inspired by the international polar years of 1882 - 83 and 1932 - 33 , was organized as the international geophysical year ( igy ) to study global phenomena of the earth and geospace .the igy involved about 66,000 scientists from 60 nations , working at thousands of stations , from pole to pole to obtain simultaneous , global observations on earth and in space .the fiftieth anniversary of igy will occur in 2007 .it was proposed to organize an international programme of scientific collaboration for this time period called the international heliophysical years ( ihy ) in 2007 ( http://ihy2007.org/ ) . like igy , and the two previous international polar years, the scientific objective of ihy is to study phenomena on the largest possible scale with simultaneous observations from a broad array of instruments . unlike previous international years, today observations are routinely received from a vast armada of sophisticated instruments in space that continuously monitor solar activity , the interplanetary medium , and the earth .these spacecraft together with ground level observations and atmospheric probes provide an extraordinary view of the sun , the heliosphere , and their influence on the near - earth environment .the ihy is a unique opportunity to study the coupled sun - earth system .future basic space science workshops will focus on the preparation of ihy 2007 world - wide , particularly taking into account interests and contributions from developing nations .currently , in accordance with the united nations general assembly resolution 60/99 , the scientific and technical subcommittee of the uncopuos is considering an agenda item on the ihy 2007 under the three - year work plan adopted at the forty - second session of the subcommittee + ( http://www.unoosa.org/oosa/en/sap/bss/ihy2007/index.html ) . a major thrust of the ihy 2007 is to deploy arrays of small , inexpensive instruments such as magnetometers , radio antennas , gps receivers , all - sky cameras , etc . around the world to provide global measurements of ionospheric , magnetospheric , and heliospheric phenomena .this programme is implemented by collaboration between the ihy 2007 secretariat and the united nations office for outer space affairs .the small instrument programme consists of a partnership between instrument providers and instrument host nations .the lead scientist or engineer provides the instrumentation ( or fabrication plans for instruments ) in the array ; the host nation provides manpower , facilities , and operational support to obtain data with the instrument , typically at a local university . 
in preparation of ihy 2007, this programme has been active in deploying instrumentation , developing plans for new instrumentation , and identifying educational opportunities for the host nation in association with this programme + ( http://ihy2007.org/observatory/observatory.shtml ; + un document a / ac.105/856 ) .currently , a tripod concept is being developed for the international heliophysical year 2007 , consisting of an instrument array , data taking and analysis , and teaching space science .in 2006 , 27 november - 1 december , the indian institute of astrophysics will host the second un / nasa workshop on the international heliophysical year and basic space science in bangalore , india ( http://www.iiap.res.in/ihy/ ) . in 2007 , 11 - 15 june , the national astronomical observatory of japan , tokyo , will host a workshop on basic space science and the international heliophysical year 2007 , co - organized by the united nations , european space agency , and the national aeronautics and space administration of the united states of america , and will use this opportunity to commemorate the cooperation between the government of japan and the united nations , as highlighted in this article , since 1990 .bahcall , j. , the decade of discovery in astronomy and astrophysics , national academy press , washington d.c ., 1991 ; and astronomy and astrophysics : panel reports , national academy press , washington d.c . , 1991 .bennett , j. , donahue , m. , schneider , n. , and voit , m. , the cosmic perspective , addison wesley longman inc ., menlo park , california , fourth edition , 2007 ; cd - roms and a www site , offering a wealth of additional material for professors and students , specifically developed for teaching astronomy with this book and upgraded on a regular basis are also available : http://www.masteringastronomy.com/. haubold , h.j ., `` un / esa workshops on basic space science : an initiative in the world - wide development of astronomy '' , journal of astronomical history and heritage 1(2):105 - 121 , 1998 ; space policy 19:215 - 219 , 2003 .mckee , c.f . andtaylor , jr . , j.h . ,astronomy and astrophysics in the new millennium , national academy press , washington d.c ., 2001 ; and astronomy and astrophysics in the new millennium : panel reports , national academy press , washington d.c . , 2001 ; see also g. brumfield , wishing for the stars , nature 443(2006)386 - 389 .kitamura , m. , `` provision of astronomical instruments to developing countries by japanese oda with emphasis on research observations by donated 45 cm reflectors in asia '' , in conference on space sciences and technology applications for national development : proceedings , held at colombo , sri lanka , 21 - 22 january 1999 , ministry of science and technology of sri lanka , pp .147 - 152 .kogure , t. 
, `` stellar activity and needs for multi - site observations '' , in conference on space sciences and technology applications for national development : proceedings , held at colombo , sri lanka , 21 - 22 january 1999 , ministry of science and technology of sri lanka , pp .124 - 131 .un document a / ac.105/856 : report on the united nations / european space agency / national aeronautics and space administration of the united states of america workshop on the international heliophysical year 2007 , abu dhabi and al - ain , united arab emirates , 20 - 23 november 2005 , united nations , vienna 2005 .wentzel , d.g ., astrofisica para cursos universitarios de fisica , la paz , bolivia , 1999b , english language version available from the united nations in print and electronically at + http://www.seas.columbia.edu//un-esa/astrophysics ; printed version also contained in wamsteker at al .
|
since 1990 , the united nations has been holding an annual workshop on basic space science for the benefit of the worldwide development of astronomy . in addition to the scientific benefits of the workshops and the strengthening of international cooperation , the workshops have led to the establishment of astronomical telescope facilities through the official development assistance ( oda ) of japan . teaching material , hands - on astrophysics material , and variable star observing programmes have been developed for the operation of such astronomical telescope facilities in a university environment . this combination of an astronomical telescope facility , an observing programme , and the teaching of astronomy has become known as the basic space science tripod concept . currently , a similar tripod concept is being developed for the international heliophysical year 2007 , consisting of an instrument array , data taking and analysis , and teaching space science .
|
the ability of neural nets to be universal approximators has been proved by and studied by further authors in different contexts .for instance , neurons or small neuronal groups implementing `` plane wave responses '' have been considered by and .as well , pairs of neurons implementing `` windows '' have been investigated by .any `` complete enough '' basis of functions which is able to span a sufficiently large vector space of response functions is of interest , and , for instance , the wavelet analysis has been the subject of a complete investigation by and . in this paper , we visit again the subject of a linear reconstruction of tasks , but with an emphasis upon neglecting the usual `` translational '' parameters .we mainly use a scale parameter only .this is somewhat different from the usual wavelet approach , which takes advantage of both translation and scale .but we shall find that a multifrequency reconstruction of tasks occurs as well .simultaneously , we separate a `` radial '' from an `` angular '' analysis of the task . finally , for the sake of robustness and biological relevance , we introduce a significant amount of randomness , corrected by training , in the choice of the implemented neuronal parameters .furthermore , our basic neuronal units can be those `` window - like '' pairs advocated earlier , because of biological relevance too . such deviations from the more rigorous approaches of and are expected to make cheaper the practical implementation of such neural nets .we also investigate two training operations .the first one consists in a trivial optimization of the output synaptic layer connecting a layer of intermediate , `` elementary task neurons '' to an output , purely _ linear _the second training consists in optimizing the scale parameters of such a layer of intermediate neurons .it will be found that one may start from random values of such parameters and , however , sometimes reach solutions where some among the intermediate neurons are driven to become identical .this `` dynamical identification '' training will be discussed . in sectionii we describe our formalism , including a traditional universality theorem .we also reduce the realistic , multi - dimensional situations to a one - dimensional problem . in section iiiwe illustrate such considerations by numerical examples of network training .finally section iv contains our discussion and conclusion .consider an input which must be processed into an output ( a task ) this input is here taken to be a positive number , such as the intensity of a spike or the average intensity ( or frequency ) of a spike train .one may view as a `` radial '' coordinate in a suitable space .there is no loss of generality in restricting to be a positive number , because , should negative values of be necessary for the argument , then could always be split into an even and odd parts , /2, ] the same approach expands in this set , ,\ ] ] where the integral is most often reduced to a discrete sum . 
also , and do not need to be independent parameters .the expansion coefficients , are output synaptic weights and are the unknowns of the problem .this well known architecture is shown in figure 1 .the following , seemingly poorer , but simpler expansion , does not use the translation parameter here it is assumed that there exists a suitable electronic or biological tuning mechanism , able to recruit or adjust fn s with suitable gains but no threshold tuning .such gains are positive numbers , naturally .the outputs of such fn s are then added , via synaptic output efficiencies which can be both positive and negative , namely excitatory and inhibitory , respectively .the coefficient is introduced in eq .( [ basicinte ] ) for convenience only .it can be absorbed in this expansion , eq .( [ basicinte ] ) allows a universality theorem .define and the same expansion becomes , where and this reduces the `` scale expansion '' , eq .( [ basicinte ] ) , into a `` translational expansion '' where a basis is generated by arbitrary translations of a given function .the solution of this inverse convolution problem is trivially known as where the superscript refers to the fourier transforms of and respectively , and is the relevant `` momentum '' .this result will make our claim for universality . in the following ,this paper empirically assumes that the needed analytical properties of , , ... are satisfied . actually , for the sake of biological or industrial relevance , we are only concerned with discretizations of eq .( [ basicinte ] ) , with units , where we now let include the coefficient obviously , input patterns to be processed by a net can not be reduced to one degree of freedom only .rather , they consist of a vector with many components these may be considered as , and recoded into , a radial variable and , to specify a direction on the suitable hypersphere , angles enough special functions ( legendre polynomials , spherical harmonics , rotation matrices , etc . )are available to generate complete functional bases in angular space and one might invoke some formal neurons as implementing such base angular functions . the design of such fn s , and as well the design of such a polar coordinate recoding , is a little far fetched , though . in this paperwe prefer to take advantage of the following argument , based upon the synaptic weights of the input layer , shown in figure 2 . in the left part of the figure , fig .2 , all the fn s have the same input synaptic weights hence receive the same input when contributing to a global task for the right part of fig .2 it is again assumed that all fn s have equal input weights , with , however , weights deduced from by a sheer rotation , accordingly , if the output weights of the left part are the same as those of the right one , the global task performed by the right part is a rotated task , an expansion of any task upon the -rotation group is thus available , where discretizations are in order , naturally , with suitable output weights here plays the rle of an elementary task , and it might be of some interest to study cases where belongs to specific representations of the rotation group .this broad subject exceeds the scope of the present paper , however , and , in the following , we restrict our considerations to scalar tasks of a scalar input according to fig . 
1 only .let us return to eq .( [ pratiq ] ) , in an obvious , short notation two kinds of parameters can be used to best reconstruct : the output synaptic weights and , hidden inside the elementary tasks the scales let denote a suitable scalar product in the functional space spanned by all the s of interest .we assume , naturally , that the same scalar product makes sense for the s .incidentally , there is no loss of generality if is normalized , since the final neuron is linear .one way to define the `` best '' is to minimize the square norm of the error in terms of the s , this consists in solving the equations , let be that matrix with elements its inverse usually exists . even in those rare cases when isvery ill - conditioned , or its rank is lower than it is easy to define a pseudoinverse such that , in all cases , the operator is the projector upon the subspace spanned by the s .then a trivial solution , is found for eqs .( [ linear ] ) , given and the s , this projection , which can be achieved by elementary trainings of the output layer of synaptic weights , will be understood in the following .it makes the s functions of the s .now we are concerned with the choice of the parameters of the fn s performing elementary tasks .this is of some importance , for the number of fn s in the intermediate layer is quite limited in practice .the subspace spanned by the s is thus most undercomplete .hence , every time one requests an approximator to a new an optimization with respect to the intermediate layer is in order , to patch likely weaknesses of the `` projector '' solution , eqs ( [ linear ] ) .let us again minimize the square norm of the error .we know from eqs .( [ linear ] ) that the s are functions of the s , but there is no need to use chain rules because the same equations , eqs .( [ linear ] ) , cancel the corresponding contributions , the s being optimal .derivatives of with respect to their scales are enough .the gradient of to be cancelled , reads , here is the straight derivative of the reference elementary task , before any scaling .there is no difficulty in implementing a training algorithm for a gradient descent in the -space . the next section , sec .iii , gives a brief sample of the results we obtained when solving eqs .( [ linear ] ) and ( [ gradien ] ) for many choices of the global task and elementary task for instance the scalar product in the functional space as , among many numerical tests we show here the results obtained when the target task reads , - 4.33575 \tanh[4(x-9.56591 ) ] \}. ] a window - like elementary response , and the target task reads - 1/[1+(x^2/16)], ] with randomized choices of the number of terms , the coefficients the large slope coefficient and the positions of the steep areas .the set of initial values for the s before gradient descent was also sometimes taken at random .it was often found that a traditional sequence is not a bad choice for a start .all our runs converge reasonably smoothly to a saturation of the norm provided those cases where becomes ill - conditioned are numerically processed .there is a significant proportion of runs where the optimum seems to be quite flat , hence some robustness of the results .local minima where the learning gets trapped do not seem to occur very often , but this problem deserves the usual caution , with the usual annealing if necessary .we did not find clear criteria for predicting whether a given leads to a merging of some s , however . 
despite this failure , all these results advocate a reasonably positive case for the learning process described by eqs .( [ linear ] ) and ( [ gradien ] ) and the emergence of `` derivative tasks '' .this paper tries to relate several issues .most of them are well known in the theory of neural nets , but two of our considerations , the question of symmetries and the rotational analysis , might give reasonably original results , up to our knowledge at least .the most important and well known issue is that of the universality offered by nets whose architecture is described by figures 1 and 2 , namely four layers : input weights fn s for elementary tasks with adjustable parameters output weights linear output neuron(s ) . the linearity of the output(s ) can be summarized in any dimensions by the linear transform ( we use here boldface symbols to stress that the linearity generalizes to any suitable vector and tensor situations for multiparameter inputs , intermediate tasks and outputs . )this linearity actually reduces the theory of such an architecture to a special case of the `` generator coordinate '' theory , well known in physics .as well , from a mathematical point of view , this boils down to the only question of the invertibility of the kernel actually , the invertibility problem boils down into identifying those classes of global tasks which belong to the functional ( sub)space spanned by the s . for the sake of definiteness , we proved a universality theorem for the very special case of `` scaling without translating '' , inspired by wavelets .but most of the considerations of this paper clearly hold if one replaces , _ mutatis mutandis _ ,wavelets by other responses and scaling parameters by any other parameters . the parameters can be defined as including the input synaptic weight vectors whose dimension is necessarily the same as that of the inputs in order to generate the actual inputs received by the intermediate fn s .when also explicitly includes scale parameters there is no loss of generality in restricting the s to be unitary vectors .hence the linear kernel can imply , in a natural way , an integration upon the group of rotations transforming all the s into one another .this part of the theory relates to the angular momentum projections which are so familiar in the theory of molecular and nuclear rotational spectra .the well known issue of the discretization of a continuous expansion converts kernels into finite matrices , naturally .this paper studied what happens if one trains for a temporary optimum of the approximate task , while is not yet optimized .this implies a prejudice on training speeds : fast learner , slower .other choices , such as slower learner and faster , for instance , are as legitimate , and should be investigated too .the question is of importance for biological systems , because of obvious likely differences in the time behaviors and biochemical and metabolic factors of synapses and cell bodies .the training speed hierarchy we chose points to one technical problem only , namely whether the gram - schmidt matrix of scalar products is easily invertible or not .we do not use a gram - schmidt orthogonalization of the finite basis of such s , but the ( pseudo ) inversion of amounts to the same .once is obtained , temporarily optimal are easily derived .our further optimization of with respect to the parameters of the intermediate fn s takes advantage of the linearity of the output(s ) and the symmetry of the problem under any permutation of the fn s .let 
label such fn s , and denote the parameters of the -th fn .we found cases where the gradient descent used to optimize induces a few s to become quite close to one another .such functional clusters , because of the output linearity , may yield elementary tasks corresponding to derivatives of with respect to components of this derivative process may look similar to a gram - schmidt orthogonalization , but it is actually distinct , because no rank is lost in the basis . for those s which induce such mergings of fn s , industrial applications should benefit from a preliminary simulation of training as a useful precaution , because , besides straight fn s implementing additional , more specific implementing `` derivative s '' will be necessary . for biological systems , diversifications of neurons , or groups of such , between tasks and `` derivative tasks ''might also be concepts of interest .it may be noticed that the word `` derivative '' may hold with respect to inputs as well as parameters . indeed , as found at the stage of eq .( 3 ) , scale parameters reduce , in a suitable representation , to translational parameters in a task the sign difference between and is obviously inconsequential . to conclude , this emergence of `` derivative elementary tasks '' prompts us into a problem yet unsolved by our numerical studies with many different and many different s : given the shape of one predict whether a given leads to a full symmetry breaking or to a partial merging of the fn s ?a.t . thanks service de physique thorique , saclay , for its hospitality during this work .k. hornik , m. stinchcombe , h. white , _ multilayer feedforward networks are universal approximators _ , neural networks * 2 * ( 1989 ) 359 - 366 ; k. hornik , m. stinchcombe , h. white and p. auer , _ degree of approximation results for feedforward networks approximating unknown mappings and their derivatives _ , neural computation , * 6 * ( 6 ) ( 1994 ) 1262 - 1275 b.g .giraud , l.c .liu , c. bernard and h. axelrad , _ optimal approximation of square integrable functions by a flexible one - hidden - layer neural network of excitatory and inhibitory neuron pairs _ , neural networks , * 4 * ( 1991 ) 803 - 815 d.l .hill and j.a .wheeler , _ nuclear constitution and the interpretation of fission phenomena _ , phys* 89 * ( 1953 ) 1102 - 1146 ; j.j .griffin and j.a .wheeler , _ collective motion in nuclei by the method of generator coordinates _, phys . rev .* 108 * ( 1957 ) 311 - 327
|
neural nets are known to be universal approximators . in particular , formal neurons implementing wavelets have been shown to build nets able to approximate any multidimensional task . such very specialized formal neurons may be , however , difficult to obtain biologically and/or industrially . in this paper we relax the constraint of a strict `` fourier analysis '' of tasks . rather , we use a finite number of more realistic formal neurons implementing elementary tasks such as `` window '' or `` mexican hat '' responses , with adjustable widths . this is shown to provide a reasonably efficient , practical and robust multifrequency analysis . a training algorithm , optimizing the approximation of the task with respect to the widths of the responses , reveals two distinct training modes . the first mode induces some of the formal neurons to become identical , and hence promotes `` derivative tasks '' . the other mode keeps the formal neurons distinct .
|
several earth - based interferometric experiments for the detection of gravitational waves ( gw ) are currently under development , and expected to reach the data - taking stage in the near future . on a longer timescale , space - based experiments are foreseen .these experiments will search , among other , for gw generated by inspiralling compact binary - star systems .the expected functional form of the signal produced by a coalescing system is known to good approximation , so matched filtering is an effective strategy to extract gw signals from the noise background .matched filtering is basically obtained by projecting the experimental output ( signal plus noise ) onto the expected theoretical signal , and is best done in fourier space , using fast fourier transform ( fft ) techniques ( see later for more details ) .the functional form of the expected signal depends however on the physical parameters ( e.g. , masses , angular momenta , eccentricity ) of the inspiralling system .it is necessary to match the experimental output to a set of expected signals ( so - called templates ) corresponding to points in the parameter space that cover the physical region of interest and are close enough ( under some appropriate metric ) to ensure sufficient overlap with any expected gw event .the number of needed templates for e.g. the virgo experiment is of order of , so the corresponding computational cost is huge by current standards .a nice requirement is the possibility of real - time analysis of the experimental data , which means that the available computational power is enough to process experimental data at the rate at which they are produced , so a prompt `` trigger '' of a gw event is possible .matched filtering to a ( large ) set of templates is an obvious candidate for parallel processing of the simplest form , e.g. , data farming with all elements of the farm performing the same computation ( single program multiple data ( spmd ) processing ) .indeed , the experimental data stream is sent to all processors in the farm , each element performing the matching procedures for a subset of the physical templates .massively parallel specialized spmd architectures , with peak processing power of the order of 1 tflops have been developed by several groups to fulfill the computational requirements of lattice gauge theories ( lgt ) . 
in this paperwe want to analyze the performance of one such system ( the apemille system ) for matched filtering of gw signals .this paper is not a proposal to use ape - like systems in an actual experimental ( the relative merits of different computer systems in a large experiment have so many facets that they can only be assessed by those directly working on it ) .rather , the potential usefulness of our work lies in the following : given the fast pace of development in the computer industry , an experiment will try to delay the commissioning of a production system to as late a point in time as possible , since huge gains in price and/or price / performance can be expected .this means that very large computing capabilities will not be available for much needed early tests and simulations .ape systems might provide an answer to this problem .the focus of this paper is the measurement of the performance of ape systems for matched filtering .some parts of the paper have however a more general scope and refer to general parallelization criteria for the problem at hand .this paper is structured as follows : section 2 briefly reviews the formalism of matched filtering .section 3 evaluates the associated computational cost in general terms and discusses some strategies to minimize this quantity .section 4 discusses the features of the ape systems relevant for the problem , while section 5 presents a procedure for allocation of templates to processors suitable for ape and general enough to adapt to other computer systems .section 6 presents the result of actual performance measurements made on ape , while section 7 contains our concluding remarks .in this section we briefly summarize the mathematical formalism recently developed to analyze matched filtering of gw signals from coalescing binaries .we closely follow the notation presented in .we call the interferometer output , which is the sum of the signal and the noise , while is a template . is characterized by its one - sided spectral density : = \frac{1}{2}\ \delta(f_{1 } - f_{2})\ s_{n}(|f_{1}|)\ ] ] where $ ] means ensemble expectation value , tilde ( ) stands for fourier transformed functions and asterisk ( ) for complex conjugation . for the sake of definiteness , we consider in the following templates computed to second post - newtonian expansion .they depend , in principle , on several parameters : the coalescing phase and coalescing time , and the parameters corresponding to the physical characteristics of the system , called intrinsic parameters and globally referred to by the vector .a template is precisely identified by .it is believed that the most relevant intrinsic parameters are the masses of the binary systems , so as a first approximation it is usual to neglect all intrinsic parameters except masses . in this approximation, is a vector of two components . in a matched filter the signal to noise ratio ( snr )is usually defined by where is a particular inner product defined as : it can be shown that , so ( [ s / n ] ) simplify to , if normalized templates are used . 
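in discretized form , the inner product ( [ prod ] ) and the signal to noise ratio can be transcribed directly into code . the short python sketch below assumes a real sampled data stream , a one - sided noise spectral density evaluated at the positive discrete fourier frequencies , and the usual convention relating the discrete and continuous fourier transforms ; it is meant only to fix the conventions used in the cost estimates that follow , not to reproduce any production code .

```python
import numpy as np

def inner_product(a, b, psd, dt):
    # noise-weighted inner product, 4 Re sum_k A_k conj(B_k) / S_n(f_k) * df,
    # where A, B approximate continuous fourier transforms (dt * rfft) and
    # psd is the one-sided spectral density sampled at the rfft frequencies
    n = len(a)
    df = 1.0 / (n * dt)
    A = dt * np.fft.rfft(a)
    B = dt * np.fft.rfft(b)
    return 4.0 * df * np.real(np.sum(A * np.conj(B) / psd))

def snr(data, template, psd, dt):
    # eq. [s/n], with the template normalized under the same inner product
    norm = np.sqrt(inner_product(template, template, psd, dt))
    return inner_product(data, template, psd, dt) / norm
```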
filtering a signal means to look for local maxima of the signal to noise ratio , in terms of its continuous parameters .the maximization over the phase can be done analytically ( it can be seen that the maximum value is obtained computing two inner product as in ( [ prod ] ) on two real templates with opposite phases and then summing their square values ) .maximization over instead is achieved at low computational cost calculating the cross correlations by the fft algorithm .maximizations over the intrinsic parameters are not possible analytically .for this reason the normal procedure consists in a discretizations of templates in the space of the intrinsic parameters .the obvious question concerns the number of templates needed to cover the whole parameter space .a differential geometrical approach has been developed recently .one introduces a new function , the _ match _ , which is the product of two templates with different intrinsic parameters , where a maximization is assumed over and : the match between two templates with near equal parameters may be taylor expanded suggesting the definition of a metric in the limit of close template spacing we have an analytical function able to measure the distance between templates in the intrinsic parameter space .the metric depends on the intrinsic parameters so the real volume covered by a template varies locally .this effect can be reduced writing the templates in terms of some new variables for which the metric is more regular .one suitable choice is the following : where is the total mass of the binary system , the reduced mass , and an arbitrary frequency .this change of variables makes the metric tensor components constant at the first post - newtonian order , so only small -dependent contributions are present at the second order approximation .it is now possible to simply estimate the total number of templates necessary to recover the signal at a given level of accuracy .we calculate the volume covered by a single template in the parameter space in term of a minimal value for the match , the so called _ minimal match _ which states a minimal requirement on signal recovering capabilities .for example , if we simply use a face centered hyper - cubic lattice , we can write the maximum covering volume with : where is the dimension of the parameter space ( 2 in our example ) . an approximate estimation of the total template number , applicable when is very large ,is given taking the ratio between the total volume of the physically relevant parameter space and the volume covered by a template placed in the center of a lattice tile using ( [ numtot ] ) we estimate that in the range from to solar masses the total template number is roughly for ligo and for virgo ( see section 5 for the additional assumptions involved in this calculation ) .a last remark we want to make is that the minimal match requirement also determines a threshold value for the signal ( and templates ) sampling frequency .this frequency can be simply estimated and will be take into account later on in our computational estimates .in this section we present some observations about a general strategy to compute correlations . 
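as a preliminary illustration , the analytic maximization over the coalescence phase ( two real templates in quadrature , squares summed ) and the fft - based maximization over the coalescence time described above can be sketched in a few lines of python . the normalization follows the same discrete conventions as the inner product given earlier , the two quadrature templates are assumed orthogonal and unit - normalized , and the fragment is an illustration rather than the filtering code actually used .

```python
import numpy as np

def correlation_series(data, templ, psd, dt):
    # overlap of eq. [prod] as a function of the time shift, with one inverse
    # fft: 4 Re int D(f) H*(f) e^{2 pi i f t} / S_n(f) df  ->  (2/dt) * irfft(...)
    n = len(data)
    D = dt * np.fft.rfft(data)
    H = dt * np.fft.rfft(templ)
    return 2.0 / dt * np.fft.irfft(D * np.conj(H) / psd, n)

def phase_time_maximized_snr(data, templ_0, templ_90, psd, dt):
    # templ_0, templ_90: the two real templates with orthogonal phases, both
    # assumed unit-normalized; the quadrature sum maximizes over the phase,
    # the argmax over the series maximizes over the coalescence time
    c0 = correlation_series(data, templ_0, psd, dt)
    c90 = correlation_series(data, templ_90, psd, dt)
    rho = np.sqrt(c0**2 + c90**2)
    k = int(np.argmax(rho))
    return rho[k], k * dt
```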
herewe consider an _ ideal _case in which most computer - related issues are neglected .we also limit our treatment only to the _ stored templates strategy _ , where templates are pre - calculated , then fourier transformed and prepared to be processed and finally stored in memory .this ideal case is not unrealistic , given the pace at which actual memory sizes increase in real computers .the quantity to be evaluated on every template is given by where is the fourier transform of a complex template .+ at present the best way of compute uses a fft algorithm , reducing the number of needed operations from to .the fft algorithm assumes input periodicity , while in our case signal and templates are not repeated data .the usual trick to overcome this problem consists in _ padding _ with a certain number of zeros the tail of the templates to be processed .assume that the template has points .we pad it so its total length become , and then compute the correlation by using the padded template and signal points .the resulting correlations are only valid in their first points , all remaining points being affected by the periodicity assumption implied in the fft technique .we define padding - ratio the quantity .the result obtained in this way covers a time - period of length , where is the sampling frequency of the experimental signal .the last data - points will have to be re - analyzed in a successive analysis .the computing power necessary for an on - line analysis of templates of given and ( floating point operations per second ) is given by : and are constants , usually of the same order , depending on the specific algorithm used .in this paper we use a simple - minded fft algorithm for power of two length vectors that involves and for the whole analysis .although more general and efficient algorithms exist , our choice does not influences strongly the following observations and final results .one interesting question concerns the optimal padding that minimizes computing requirements . if one disregards the fact that ( [ costab ] ) holds only for values that are powers of 2 , the answer is given by fig.[minimum ] , where the minimum in of eq .14 is plotted as a function of , for .the behavior is very close to a logarithmic function in , so computing costs depend very weakly on .this result is obtained for an optimal choice of , as discussed above . as shown in fig.[best ] , the optimal value for grows with , implying in principle very large memory requests . in practicehowever ( see again fig.[best ] ) a value of is very close to the optimal case for reasonable values of .this finally means that deviations from the optimal padding length do not produce drastic consequences on the computing power needed to perform the analysis , and that can be easily adjusted to a suitable power of two .the ape family of massively parallel processor has been developed in order to satisfy the number crunching requirements of lattice gauge theories ( lgt) . machines of the present ape generation ( apemille ) are installed at several sites , delivering an overall peak processing power of about 2 tflops .the largest sites have typically 1000 processing nodes ( i.e. , 520 gflops ) . sustained performance on production - grade lgt codes is about 45 % of peak performance .a new ape generation ( apenext ) is under development , and expected to reach the physics - production stage in early 2004 . 
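the padding trade - off discussed above is easy to reproduce numerically . in the sketch below the cost model follows the structure of eq . ( 14 ) , with the fft constants a and b , the sampling frequency and the template length chosen purely for illustration , and the search restricted to power - of - two total lengths as required by the simple fft algorithm .

```python
import numpy as np

def flops_per_second(n_templates, n_signal, n_pad, fs, a=1.0, b=3.0):
    # every stretch of n_pad new samples requires, for each template, one
    # fft-based correlation over n_signal + n_pad points; a and b are
    # implementation-dependent constants (the values here are illustrative)
    n_tot = n_signal + n_pad
    cost_per_template = (a + b * np.log2(n_tot)) * n_tot
    return n_templates * cost_per_template * fs / n_pad

def best_power_of_two_padding(n_signal, fs, n_templates=1):
    # scan total lengths that are powers of two and keep the cheapest padding
    candidates = [2**k - n_signal for k in range(1, 40) if 2**k > n_signal]
    costs = [flops_per_second(n_templates, n_signal, p, fs) for p in candidates]
    i = int(np.argmin(costs))
    return candidates[i], costs[i]

# illustrative numbers: a 2**20-point template sampled at about 4 khz
print(best_power_of_two_padding(n_signal=2**20, fs=4096.0))
```

with these illustrative constants the minimum is reached for a padding - ratio of order ten , while a padding - ratio of a few already comes close to the optimal cost , consistent with the weak dependence noted above .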
peak performance installations are being considered .apemille systems are based on a building block containing 8 processing nodes ( processor and memory ) running in single instruction multiple data ( simd ) mode .each processor is optimized for floating point arithmetics and has a peak performance of 500 mflops in ieee single precision mode .the processors are logically assembled as the sites of a mesh , with data links connecting the edges .this arrangement is called a `` cluster '' or a `` cube '' .large apemille systems are based on a larger 3-dimensional mesh of processor , based on replicas of the above - described building block .the resulting mesh has a full set of first neighbor communication links . in a typical lgt applicationthe whole system works in lock - step mode as a single simd system .more important for the present application , each cube is able to operate independently , running its own program under the control of a linux - based personal - computer acting as a host .there is one host machine every 4 cubes .a set of up to 32 cubes ( i.e. , 256 nodes ) and the corresponding 8 host machines is a fully independent unit housed in a standard - size mechanical enclosure .each cube has access to networked disks with a bandwidth of about 4 mbyte / sec . in some apemille installations ,disks have been mounted directly on the host pcs .in this case , bandwidth increases approximately by a factor 4 .the next generation ape system ( apenext ) is , for the purposes of the present discussion , just a faster version of the same architecture .the only ( welcome ) architectural difference is the fact that the basic logical building block ( capable of independent operation ) is now just one processing node .a large apemille system can be seen as a large farm of processors , whose basic element is a simd machine of dimension 8 .a better way to look at the simd cluster in our case follows the paradigm of vector computing : the simd cluster applies the input signal to a vector of 8 templates and produces a vector of 8 correlations . in a variation of the same method, the same template could be present on all nodes of the simd cluster , and correlations at 8 staggered time points could be computed .since the number of correlations is of the order of , each element of a large farm ( say simd clusters ) takes responsibility for several hundreds or thousands of templates .this is good news , since ape processors can exploit vector processing within the node to reach high efficiency ( we just recall here for reader interested in architectural details that vector processing effectively helps to hide memory access latencies ) .we have written an ape code performing all the steps needed for matched filtering on a pre - calculated ( and pre - fft transformed ) set ( vector ) of templates each of length , and measured its performance on an ape cluster .an analysis of the details of the apemille processor suggest to model the computation time as is related to the complexity of the computation , that we model as , following eq.[costab ] and introducing one more parameter ( ) covering machine effects . 
is a measure of the processor efficiency as a function of the vector length , that we normalize to .taking into account that the computation is memory - bandwidth limited ( as opposed to processing - power limited ) , we adopt the following functional form for : measured and fitted values for and are shown in fig.[f_n ] and fig.[g_k ] respectively .apemille efficiencies are smooth functions of and .a rather good value of , including all computational overheads , is possible when large sets of templates ( ) are used .a general templates allocation strategy on real computers has to take into account the limited size in memory and the available computing power available . herewe present some quantitative aspects of memory and cpu usage involved in our analysis , then we give our allocation criteria for the optimal template number manageable by a single processor .this discussion focuses on criteria that are appropriate for the ape family of processors .the focus is to exploit vectorization as much as possible and to find ways to reduce input - output bandwidth requirements , so our discussion can be applied to a larger class of processors .we start from memory .each processor has stored templates of similar length .( in the apemille case , the term processor must be understood to refer to the basic cluster of 8 processing element ) .vector processing of all the templates requires that they are all padded to the same , so we need arrays of complex words , and matching space for the final correlation results .there are two basic memory allocation strategies : we may assign different sets of templates to each element in a basic 8 processor cluster , and have all of them compute the corresponding correlations for the same time stretch , so each cluster computes correlations .alternatively , we may assign the same set of templates to all processing elements and have each of them compute correlations for different time intervals . with this choice correlationsare computed for a longer time stretch .the best choice between these two nearly equivalent cases is based on bandwidth constraints . in apemille, data items reaching the cluster can be delivered to just one element , or broadcast to all of them . in the latter case , bandwidth is effectively multiplied by a large factor ( ) , so there is an advantage if large data blocks must be broadcast to the complete cluster .we will use quantitatively these observations later on in this section .we now consider processing power .the real - time requirement stipulates that each processor cluster completes processing all its templates within an elapsed time ( or ) .as shown later on , for several realistic templates sizes , the processing time is much shorter that the elapsed time for the value allowed by memory constraints .we may therefore try to use the same cluster for a different set of templates .this may become inefficient since loading a large data base ( the new set of templates ) may be a lengthy procedure .this cost may be reduced by using the same templates several times ( corresponding to longer elapsed times ) before loading a new set of templates .we disregard the overhead associated to the output of the computer correlations , that can be made very small taking into account the gaussian character of the noise ( e.g. 
a -cut could reduce the number of the output correlations to the order of ) .more interestingly a cross correlation among closely spaced templates could be performed on line packing more densily the available information .we would like to optimize among these conflicting requirements .let us consider the total compute time both for different sets of templates ( case 1 ) or the same set of templates ( case 2 ) on each cluster element .we have * case 1 : we want to compute sets of correlations each on templates of length , corresponding to the same time interval .we compute correlations on adjoining time intervals before switching to a new set of templates .the computation time can be modeled as + where b is the cluster input - output bandwidth ( measured in words per unit time ) .the first term in ( [ cost1 ] ) is the time required to load the templates on all processors , the second term is the time needed to broadcast signal points to all cluster elements while the third term refers to the actual computation , to be performed times .templates , correlations and input data must fit inside the memory , implying that , where is the available memory on each node ( measured in units of complex words ) .also , the computation must complete in a time interval .in ( [ cost1 ] ) we assume that all data - points are loaded once .this reduces input - output time but reserves a large fraction of memory space to data - points ( as opposed to templates ) . alternatively ( case 1b ), we may load a smaller set of data - points every time we start a new computation .the corresponding compute time becomes + + while the memory constraint changes to . for any physical template of length , we must maximize in terms of , , and satisfying all constraints .* case 2 : the procedure discussed above can be applied also in this case .the corresponding processing time is given by this equation differs from ( [ cost1 ] ) since we now broadcast templates while we load different data - points to each processing elements .the memory constraint is the same as in case 1 , while the maximum allowed processing time is .case 2b ( multiple data loads ) is also easily computed as in case 2 , we are interested in optimizing in terms of the same parameters as in the previous case .there is one free parameter in the optimization process ( ) .if we increase we reduce the relative cost associated with template loading , but increase the latency associated to the computation .we arbitrarily decide to keep small enough so the latency for any is not longer that a fixed amount of time .we choose as the time length of the longest template contained in the set .this choice may be useful also for data - organization purposes : every time interval all correlations corresponding to templates of all lengths are made available .the result of the optimization process are given in table 1 for apemille and table 2 for apenext .results depend weakly on the allocation procedure discussed above , and are largely dominated by the sustained processing power .bandwidth limitations are neatly dealt with : if we increase the available bandwidth by a factor four ( e.g. , using local disks ) the number of templates handled by each cluster increases by less than 10% . 
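to summarise the model quantitatively, the following hedged python sketch puts together the timing model with its efficiency factors and a "case 1b"-style allocation search. every functional form and constant below (the cluster peak speed, memory size, bandwidth, the saturation constants inside f and g, the fft constants c1 and c2) is a placeholder chosen for illustration, so the numbers it produces are indicative only and do not reproduce table 1 or table 2.

```python
import math

PEAK_FLOPS = 8 * 500e6    # one apemille cluster: 8 nodes x 500 Mflops peak (placeholder)

def f_eff(n, n0=4096.0):
    """assumed saturating efficiency vs. template length n (placeholder form,
    standing in for the measured curve of fig.[f_n])."""
    return n / (n + n0)

def g_eff(k, k0=16.0):
    """assumed saturating efficiency vs. number of templates k per node
    (placeholder form for the memory-bandwidth-limited curve of fig.[g_k])."""
    return k / (k + k0)

def compute_time(n, k, x, c1=5.0, c2=5.0):
    """time (s) for one cluster to correlate k stored templates of padded
    length x*n against one signal stretch, dividing the ideal operation count
    by the peak speed and the two efficiency factors."""
    length = x * n
    ops = k * (c1 * length * math.log2(length) + c2 * length)
    return ops / (PEAK_FLOPS * f_eff(n) * g_eff(k))

def templates_per_cluster(n, fs=4096.0, mem_words=32 * 2**20, bandwidth=1e6,
                          x=2.0, s_max=64):
    """brute-force allocation search in the spirit of case 1b: maximise the
    number k of templates of length n handled by one cluster, over the number
    s of consecutive signal stretches processed before a new template set is
    loaded, subject to a crude memory bound and the real-time constraint."""
    length = int(x * n)
    stretch_seconds = (x - 1.0) * n / fs      # fresh data covered per stretch
    k_mem = int(mem_words // length) - 2      # leave room for signal + results
    best_k = 0
    for s in range(1, s_max + 1):
        for k in range(k_mem, 0, -1):
            t_load = k * length / bandwidth   # load the k templates once
            t_sig = s * length / bandwidth    # load s signal stretches
            t_cpu = s * compute_time(n, k, x)
            if t_load + t_sig + t_cpu <= s * stretch_seconds:
                best_k = max(best_k, k)
                break
    return best_k
```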
with our choice of parameters case 1bis the preferred one for almost all template lengths .+ .number of templates handled by each apemille processor cluster , as a function of the template length .parameters are ( see the text for definitions ) , , .numbers in bold flag the best case , while mark cases where allocation can not be performed due to memory limits . [ cols="<,>,>,>,>",options="header " , ] first , we show in fig.[ideal ] the total computational cost to compute the correlation for binary systems whose masses are in a range of to solar masses , as a function of , under the assumption of optimal padding .we use the parameters listed in tables [ noise ] and [ cuts ] .the computational cost roughly follows a ( fitted ) power - law behavior , with exponent of the order of .this behavior can be easily guessed , taking advantage of the fact that the computational load of each template depends very weakly on its length , and that the depends weakly on the variables . under these assumptionsthe computational cost scales up to log - corrections as the area of the region in space corresponding to a given interval of allowed star - masses .the latter can be easily shown by power counting to behave as .+ the large difference in computational cost between the two experiment , clearly noticeable in fig.[ideal ] , derives , although in a complex way , from the different noise spectra and from the correspondingly different frequency cuts .we now specialize the discussion to ape systems .we proceed establishing a mass interval , then generating its template distribution .we `` stretch '' template lengths to the nearest power of two larger than the actual length ( a slightly pessimistic assumption ) .finally we divide each group of templates of equal length by the corresponding number of templates handled by one processor cluster ( the bold numbers in tab.1 and tab.2 ) , and sum all the resulting quotients . the final resultrepresent the number of ape processor needed to satisfy the real time requirement on the given mass interval .the computational cost of this matching filter analysis is particularly sensible to the lower mass limit because of the increasing template length and of the irregular behavior of the metric tensor in that region of the parameter space .for this reason it is useful to plot the number of processor versus the lower mass limit . the number of nodes ( one cluster consist of 8 nodes ) for a mass interval from to is plotted in fig . [ proc ] , where we use noise spectra relevant for ligo and virgo .this complete our analysis .in this paper we have developed a reliable estimate of the computational costs for real - time matched filters for gw search from binary star systems , in a massively parallel processing environment .we have analyzed some criteria to optimally allocate the processing load to a farm of processors .we have written a code performing the analysis on an ape system and we have measured its performances .our result is that available ( apemille ) systems are able to satisfy the requirements of a real - time analysis of the complexity corresponding to the ligo experiment in the mass range between 0.25 and 10 . the virgo experiment ( with its lower and wider noise curve ) has substantially larger computing requirements that can not be fulfilled by an apemille system in the same mass range .the new ape generation , expected to be available in early 2004 , partially closes this performance gap .we thank f. vetrano for reading our manuscript .t. giorgino and f. 
toschi wrote the fft code for apemille .this work was partially supported by neuricam spa , through a doctoral grant program with the university of ferrara . for a review ,see for instance : a.rudiger a.brillet k.danzmann a.giazotto and j.hough c.r.acad.sci .paris t.2 , series iv , 1331 ( 2001 ) , and `` proceedings of the 4th e.amaldi conference , perth(2001 ) '' , in class . quantum gravity , 19 ( 7 ) ( 2002 ) and references therein .l.blanchet , t.damour , b.r.iyer , c.m.will and a.g.wiseman , phys.rev.lett . 3515 ( 1995 ) , see also l.blanchet c.r.acad.sci .paris , series iv 2 , 1343 ( 2001 ) .b.allen _ et al ._ phys.rev.lett . 1498 ( 1999 ) .n. h. christ , nucl .b ( proc . suppl . ) , 111 ( 2000 ) .r. tripiccione , parallel computing , 1297 ( 1999 ) .c.cutler and .e.flanagan , phys.rev.d , 2658 ( 1994 ) .b.j.owen , phys.rev.d , 6749 ( 1996 ) . b.j.owen and b.s.sathyaprakash , phys.rev.d , 2002 ( 1999 ) .et al . _ , numerical recipes in c _ the art of scientific computing _ , cambridge university press .p.canitrot , l.milano , a.vicer _ computational costs for coalescing binaries detection in virgo using matched filters _ , vir - not - pis-1390 - 149 , issue 1 , 5/5/2000 .a.vicer _ optimal detection of burst events in gravitational wave interferometric observatories _ arxiv : gr - qc/0112013 , phys.rev.d in press . see table ii and reference [ 43 ] therein .a.bartoloni _ et al . _nucl.phys.b ( proc.suppl ) 106 - 107 1043 ( 2002 ) .
in this paper we discuss some computational problems associated with matched filtering of experimental signals from gravitational wave interferometric detectors in a parallel-processing environment. we then specialize our discussion to the use of the apemille and apenext processors for this task. finally, we accurately estimate the performance of an apemille system on a computational load appropriate for the ligo and virgo experiments, and extrapolate our results to apenext.

keywords: gw interferometric detectors; coalescing binaries; parallel computing
the lovsz local lemma ( lll ) is a powerful tool with numerous uses in combinatorics and theoretical computer science .if a given probability space and collection of events satisfy a certain condition , then the lll asserts the existence of an outcome that simultaneously avoids those events .the classical formulation of the lll is as follows .let be a probability space with probability measure .let be certain `` undesired '' events in that space .let be an undirected graph with vertex set ={\left \{ 1,\ldots , n \right \}} ] .[ thm : lll ] suppose that the events satisfy the following condition that controls their dependences ~=~ \pr_\mu[e_i ] \qquad\forall i \in [ n ] , \ , j \subseteq [ n ] \setminus \gamma^+(i)\ ] ] and the following criterion that controls their probabilities ~\leq~ x_i \prod_{j \in \gamma(i ) } ( 1-x_j ) ~\quad\forall i \in [ n].\ ] ] then > 0 ] with neighborhood structure ( not necessarily satisfying the condition ) .the three subroutines required by our algorithm are as follows . _ sampling from : _ there is a subroutine that provides an independent random state ._ checking events : _ for each ] , there is a randomized subroutine with the following properties . if is an event and , then .( the oracle removes conditioning on . ) for any , if then also .( resampling an event can not cause new non - neighbor events to occur . )when these conditions hold , we say that is a resampling oraclefor events and graph .if efficiency concerns are ignored , the first two subroutines trivially exist .we show that ( possibly inefficient ) resampling oracles exist if and only if a certain relaxation of holds ( see section [ sec : lopsided - intro ] ) . *main result .* our main result is that we can find a point in efficiently , whenever the three subroutines above have efficient implementations .consider any probability space , any events , and any undirected graph on vertex set ] .however that restriction is ultimately unnecessary because , in the context of the lll , the theorem of erds and spencer implies that } \overline{e_j } ] > 0 ] to obtain that \geq \pr_\mu[e_i] ] was shown to be a sufficient condition to ensure that > 0 ] .section [ sec : analysis ] formally defines shearer s criterion and uses it in a fundamental way to prove theorem [ thm : lll - tight - result ] .moreover , we give an algorithmic proof of the lll under shearer s criterion instead of the criterion .this algorithm is efficient in typical situations , although the efficiency depends on shearer s parameters .the following simplified result is stated formally and proven in section [ sec : shearer - automatic - slack ] .suppose that a graph and the probabilities ,\ldots,\pr_\mu[e_n] ] .the entropy method roughly shows that , if the algorithm runs for a long time , a transcript of the algorithm s actions provides a compressed representation of the algorithm s random bits , which is unlikely due to entropy considerations .following this , moser and tardos showed that a similar algorithm will produce a state in , assuming the independent variable model and the criterion .this paper is primarily responsible for the development of witness trees , and proved the `` witness tree lemma '' , which yields an extremely elegant analysis in the variable model .the witness tree lemma has further implications .for example , it allows one to analyze separately for each event its expected number of resamplings .moser and tardos also extended the variable model to incorporate a limited form of lopsidependency , and showed that 
their analysis still holds in that setting .the main advantage of our result over the moser - tardos result is that we address the occurrence of an event through the abstract notion of resampling oraclesrather than directly resampling the variables of the variable model .furthermore we give efficient implementations of resampling oraclesfor essentially all known probability spaces to which the lll has been applied .a significant difference with our work is that we do not have an analogue of the witness tree lemma ; our approach provides a simpler analysis when the lll criterion has slack but requires a more complicated analysis to remove the slack assumption . as a consequence ,our bound on the number of resampling oraclecalls is larger than the moser - tardos bound .our lack of a witness tree lemma is inherent .appendix [ app : witness - trees ] shows that the witness tree lemma is false in the abstract scenario of resampling oracles .the moser - tardos algorithm is known to terminate under criteria more general than , while still assuming the variable model .pegden showed that the cluster expansion criterion suffices , whereas kolipaka and szegedy showed more generally that shearer s criterion suffices .we also extend our analysis to the cluster expansion criterion as well as shearer s criterion , in the more general context of resampling oracles .our bounds on the number of resampling operations are somewhat weaker than those of , but the increase is at most quadratic .kolipaka and szegedy present another algorithm , called generalizedresample , whose analysis proves the lll under shearer s condition for arbitrary probability spaces .generalizedresample is similar to maximalsetresamplein that they both work with abstract distributions and that they repeatedly choose a maximal independent set of undesired events to resample .however , the way that the bad events are resampled is different : generalizedresample needs to sample from , which is a complicated operation that seems difficult to implement efficiently . thus maximalsetresamplecan be viewed as a variant of generalizedresample that can be made efficient in all known scenarios .harris and srinivasan show that the moser - tardos algorithm can be adapted to handle certain events in a probability space involving random permutations .their method for resampling an event is based on the fischer - yates shuffle .this scenario can also be handled by our framework ; their resampling method perfectly satisfies the criteria of a resampling oracle .the harris - srinivasan s result is stronger than ours in that they do prove an analog of the witness tree lemma .consequently their algorithm requires fewer resamplings than ours , and they are able to derive parallel variants of their algorithm .the work of harris and srinivasan is technically challenging , and generalizing it to amore abstract setting seems daunting .achlioptas and iliopoulos proposed a general framework for finding `` flawless objects '' , based on actions for addressing flaws .we call this the a - i framework .they show that , under certain conditions , a random walk over such actions rapidly converges to a flawless object .this naturally relates to the lll by viewing each event as a flaw . 
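before continuing the comparison with these frameworks, it may help to fix ideas with a schematic python rendering of the two ingredients used throughout: the resampling-oracle interface (conditions (r1) and (r2), stated formally in section [sec:resample-existence]) and the maximalsetresample loop, which in each iteration greedily builds a maximal independent set of occurring events and resamples each of them. this is a sketch for orientation only; the names, the tie-breaking by smallest index and the iteration cap are our choices, not part of the formal description of the algorithm.

```python
from typing import Any, Sequence, Set

class ResamplingOracle:
    """interface only: sample() draws a state from mu, occurs(i, state) checks
    the event e_i, and resample(i, state), applied to a state distributed as
    mu conditioned on e_i, must return a state distributed as mu (r1) without
    causing any new event outside the inclusive neighbourhood of i (r2)."""
    def sample(self) -> Any: raise NotImplementedError
    def occurs(self, i: int, state: Any) -> bool: raise NotImplementedError
    def resample(self, i: int, state: Any) -> Any: raise NotImplementedError

def maximal_set_resample(oracle: ResamplingOracle,
                         gamma_plus: Sequence[Set[int]],
                         n: int, max_iterations: int = 10**6) -> Any:
    """schematic rendering of maximalsetresample: in each iteration, repeatedly
    pick the smallest-index occurring event not in the inclusive neighbourhood
    of anything already picked, and resample it immediately; stop as soon as an
    iteration picks nothing.  gamma_plus[i] is assumed to contain i itself."""
    state = oracle.sample()
    for _ in range(max_iterations):
        picked: Set[int] = set()
        while True:
            candidate = next((i for i in range(n)
                              if not any(i in gamma_plus[j] for j in picked)
                              and oracle.occurs(i, state)), None)
            if candidate is None:
                break
            picked.add(candidate)
            state = oracle.resample(candidate, state)
        if not picked:
            return state    # no event occurs: the state avoids all e_i
    raise RuntimeError("exceeded the iteration cap")
```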
at the same time, the a - i framework is not tied to the probabilistic formulation of the lll , and can derive results , such as the greedy algorithm for vertex coloring , that seem to be outside the scope of typical lll formulations , such as theorem [ thm : lll ] .the a - i framework has other restrictions and does not claim to recover any particular form of the lll .nevertheless , the framework can accommodate applications of the lll where lopsidependency plays a role , such as rainbow matchings and rainbow hamilton cycles .in contrast , our framework embraces the probabilistic formulation and can recover the original existential lll ( theorem [ thm : lll ] ) in full generality , even incorporating shearer s generalization .the a - i analysis is inspired by moser s entropy method .technically , it entails an encoding of random walks by witness forests " and combinatorial counting thereof to estimate the length of the random walk .the terminology of witness forests is reminiscent of the witness trees of moser and tardos , but conceptually they are different in that the witness forests grow forward in time " rather than backward .this is conceptually similar to forward - looking combinatorial analysis " , which we discuss next .giotis et al . show that a variant of moser s algorithm gives an algorithmic proof in the variable model of the symmetric lll .while this result is relatively limited when compared to the results above , their analysis is a clear example of forward - looking combinatorial analysis . whereas moser and tardos use a _ backward - looking _ argument to find witness trees in the algorithm s `` log '' , giotis et al .analyze a _ forward - looking _ structure : the tree of resampled events and their dependencies , looking forward in time .this viewpoint seems more natural and suitable for extensions .our approach can be roughly described as _ forward - looking analysis _ with a careful modification of the moser - tardos algorithm , formulated in the framework of resampling oracles .our main conceptual contribution is the simple definition of the resampling oracles , which allows the resamplings to be readily incorporated into the forward - looking analysis .our modification of the moser - tardos algorithm is designed to combine this analysis with the technology of `` stable set sequences '' , defined in section [ sec : stable - set - sequences ] , which allows us to accommodate various lll criteria , including shearer s criterion .this plays a fundamental role in the full proof of theorem [ thm : lll - tight - result ] .our second contribution is a technical idea concerning slack in the lll criteria .this idea is a perfectly valid statement regarding the existential lll as well , although we will exploit it algorithmically .one drawback of the forward - looking analysis is that it naturally leads to an exponential bound on the number of resamplings , unless there is some slack in the lll criterion ; this same issue arises in .our idea eliminates the need for slack in the and criteria .we prove that , even if or are tight , we can instead perform our analysis using shearer s criterion , which is never tight because it defines an open set .for example , consider the familiar case of theorem [ thm : lll ] , and suppose that holds with equality , i.e. , = x_i \prod_{j \in \gamma(i ) } ( 1-x_j) ] .the proof of this fact crucially uses shearer s criterion and it does not seem to follow from more elementary tools . * follow - up work . 
*subsequently , achlioptas and iliopoulos generalized their framework further to incorporate our notion of resampling oracles . this subsequent work can be viewed as a unification of their framework and ours ; it has the benefit of both capturing the framework of resampling oracles and allowing some additional flexibility ( in particular , the possibility of regenerating the measure approximately rather than exactly ) .we remark that this work is still incomparable with ours , primarily due to the facts that our analysis is performed in shearer s more general setting , and that our algorithm is efficient even when the lll criteria are tight . * organization . *the rest of the paper is organized as follows . in section [ sec : resample - existence ] , we discuss the connection between resampling oracles and the assumptions of the lovsz local lemma .we also show here that resampling oracles as well as the lll itself can be computationally hard in general . in section [ sec : implementation ], we show concrete examples of efficient implementations of resampling oracles . in section [ sec : applications ] we discuss several applications of these resampling oracles. finally , in section [ sec : analysis ] we present the full analysis of our algorithm .[ sec : resample - existence ] the algorithms in this paper make no reference to the lopsidependency condition and instead assume the existence of resampling oracles . in section [ sec : existence ] we show that there is a close relationship between these two assumptions : the existence of a resampling oraclefor each event is equivalent to the condition , which is a strengthening of .we should emphasize that the _ efficiency of an implementation _ of a resampling oracleis a separate issue .there is no general guarantee that resampling oraclescan be implemented efficiently . indeed , as we show in section [ sec : hardness ] , there are applications of the lll such that the resampling oracles are hard to implement efficiently , and finding a state avoiding all events is computationally hard , under standard computational complexity assumptions . nevertheless , this is not an issue in common applications of the lll : resampling oraclesexist and can be implemented efficiently in all uses of the lll of which we are aware , even those involving lopsidependency .section [ sec : implementation ] has a detailed discussion of several scenarios .[ sec : existence ] this section proves an equivalence lemma connecting resampling oracleswith the notion of lopsided association .first , let us define formally what we call a resampling oracle .[ def : resampling - oracle ] let be events on a space with a probability measure , and let , e) ] denoted by .let be a randomized procedure that takes a state and outputs a state .we say that is a resampling oracle for with respect to , if for , we obtain .( the oracle removes conditioning on . ) for any , if then also .( resampling an event can not cause new non - neighbor events to occur . ) next , let us define the notion of a lopsided association graph .we denote by ] is a monotone non - decreasing function of the functions \,:\ , j \notin \gamma^+(i ) \,) ] and assume > 0 ] for any event . resampling oracles exist for events with respect to a graph if and only if is a lopsided association graphfor .both statements imply that the lopsidependency condition holds .lemma [ lem : resample - existence ] ( a ) ( b ) : consider the coupled states where and . by ( r1 ) , . 
for any event ,if does not occur at then it does not occur at either , due to ( r2 ) .this establishes that ~=~ \e_{\omega ' \sim \mu}[f[\omega ' ] ] ~\leq~\e_{\omega \sim \mu|e_i}[f[\omega ] ] ~=~\pr_\mu[f \mid e_i],\ ] ] which implies \geq \pr_\mu[f ] \cdot \pr_\mu[e_i] ] , and for , / \pr_\mu[e_i] ] is a _ non - increasing _ function of : j \notin\gamma^+(i ) \,) ] , we can rewrite this as \leq \pr_\mu[\overline{f } ] ~\forall f \in \cf_i ] and = \sum_{w \in \gamma(a ) } p_w ] be a probability distribution , i.e. , .we assume that is _ log - supermodular _ , meaning that as an example , any product distribution is log - supermodular . consider monotone increasing events , i.e. , such that .note that any monotone increasing function of such events is again monotone increasing .it follows directly from the fkg inequality that condition ( b ) of lemma [ lem : resample - existence ] is satisfied for such events with an _ empty _ lopsided association graph . therefore , a resampling oracleexists in this setting. however , the explicit description of its operation might be complicated and we do not know whether it can be implemented efficiently in general .alternatively , the existence of the resampling oraclecan be proved directly , using a theorem of holley ( * ? ? ?* theorem 6 ) .the resampling oracleis described in algorithm [ alg : monotone ] .the reader can verify that this satisfies the assumptions ( r1 ) and ( r2 ) , using holley s theorem . : if , * fail*. randomly select with probability . .[ thm : holley ] let and be probability measures on satisfying then there exists a probability distribution let be an arbitrary set and let . then .define to be if , and otherwise zero .define .we now argue that holds . if then the right - hand side zero , whereas the left - hand side is non - negative .otherwise , we have by log supermodularity of .so there exists a distribution satisfying the conclusion of theorem [ thm : holley ] .recall that .then , for each fixed , we have & ~=~ \sum_{x ' \supseteq a } \pr [ x = x ' ] \cdot \pr [ r_a(x)=y \mid x = x ' ] \\ & ~=~ \sum_{x ' \supseteq a } \mu_1(x ' ) \cdot \frac{\nu(x',y)}{\sum_{y ' } \nu(x',y ' ) } ~=~ \mu_2(y),\end{aligned}\ ] ] by and .this shows that .the resampling oracle applied to a set satisfying does not cause any new event .this follows since equals with probability proportional to , which is zero unless by .[ sec : hardness ] this section considers whether the lll can always be made algorithmic .we show that , even in fairly simple scenarios where the lll applies , finding the desired output can be computationally hard , a fact that surprisingly seems to have been overlooked .we first observe that the question of algorithmic efficiency must be stated carefully otherwise hardness is trivial ._ a trivial example ._ given a boolean formula , let the probability space be , and let be the uniform measure on .there is a single event defined to be if is satisfiable , and if is not satisfiable .since =1/2 ] , there is a subroutine which determines for any given whether , in time .as far as we know , no prior work refutes the possibility that there is an algorithmic form of the lll , with running time , in this general scenario .our results imply that resampling oraclesdo _exist _ in this general scenario , so it is only the question of whether these resampling oraclesare _ efficient _ that prevents theorem [ thm : lll - tight - result ] from providing an efficient algorithm . 
nevertheless , we show that there is an instance of the lll that satisfies the reasonable assumptions stated above , but for which finding a state in requires solving a problem that is computationally hard ( under standard computational complexity assumptions ) . as a consequence , we conclude that the resampling oraclescannot always be implemented efficiently , even under the reasonable assumptions of this general scenario .we remark that np - completeness is not the right notion of hardness here .problems in np involve deciding whether a solution exists , whereas the lll _ guarantees that a solution exists _ , and the goal is to explicitly find a solution .our result is instead based on hardness of the _ discrete logarithm _problem , a standard belief in computational complexity theory . in the following, for a prime and integer denotes a finite field of order , and its multiplicative group of nonzero elements .[ thm : lll - hardness ] there are instances of events on a probability space under the uniform probability measure , such that the events are mutually independent ; for each ] ; but finding a state in is as hard as solving the discrete logarithm problem in ._ superficially , this result seems to contradict the fact that the lll can be made algorithmic in the variable model , where events are defined on underlying independent random variables .the key point is that the variable model also relies on a particular type of dependency graph ( defined by shared variables ) which might be more conservative than the true dependencies between the events .theorem [ thm : lll - hardness ] shows that , even if the probability space consists of independent random variables , the lll can not in general be made algorithmic if the true dependencies are considered .consider an instance of the discrete logarithm problem in the multiplicative group .the input is a generator of and an element .the goal is to find an integer such that .we define an instance of events on as follows .we identify with ] by and for , where the exponentiation is performed in . for each ] .further , the events are mutually independent , since for any ] , with a uniform measure .the bad events are assumed to be simple " in the following sense : each bad event is defined by a pattern " .the event occurs if for each .let denote the variables of relevant to event .let us define a relation to hold iff there are pairs such that or ; i.e. , the two events entail the same value in either the range or domain .this relation defines a lopsidependency graph .it is known that the lopsided lll holds in this setting . ) : , i.e. , the variables in affecting event ; fix an arbitrary order ; swap with for uniformly random among \setminus \{x_1,\ldots , x_{i-1}\} ] can be viewed as perfect matchings in . )a state here is a perfect matching in , which we denote by .we consider bad events of the following form : for a set of edges occurs if .obviously , > 0 ] .this has the surprising consequence that for vertex - disjoint forests , we have = \pr[f_1 \subseteq t ] \cdot \pr[f_2 \subseteq t] ] .this implies that the distribution of is exactly the same for a uniformly random spanning tree as it is for one conditioned on ( formally , by applying the inclusion - exclusion formula ) .therefore , the forest is distributed as it should be in a random spanning tree restricted to .the final step is that we extend to a spanning tree , where is a uniform spanning tree in .note that is a multigraph , i.e. 
, it is important that we preserve the multiplicity of edges after contraction .the spanning trees in are in a one - to - one correspondence with spanning trees in conditioned on .this is because each such tree extends to a different spanning tree of , and each spanning tree where can be obtained in this way .therefore , for a fixed , is a uniformly random spanning tree conditioned on . finally , since the distribution of is equal to that of a uniformly random spanning tree restricted to , is a uniformly random spanning tree .the resampling oracle applied to a spanning tree satisfying does not cause any new event such that .note that the only edges that we modify are those incident to .therefore , any new event that the operation of could cause must be such that contains an edge incident to and not contained in .such an edge shares exactly one vertex with some edge in and hence .[ sec : product - resampling ] suppose we have a product probability space , where on each we have resampling oracles for events , with respect to a graph .our goal is to show that there is a natural way to combine these resampling oracles in order to handle events on that are obtained by taking intersections of the events .the following theorem formalizes this notion .[ thm : thm : product - resampling ] let be probability spaces , where for each we have resampling oracles for events with respect to a graph .let be a product space with the respective product probability measure .for any set of pairs where each ] and two edges in of the same color , occurs if ; * : for each ] if and = 4/n^2 ] , since each of the two trees contains independently with probability .hence , the probability of each bad event is upper - bounded by .in section [ sec : resample - trees ] we constructed a resampling oracle for a single spanning tree . by theorem [ thm : thm : product - resampling ] , this resampling oracleextends in a natural way to the setting of independent random spanning trees . in particular , for an event , we define as an application of the resampling oracle to the tree . 
for an event , we define as an application of the resampling oracle independently to the trees and .it is easy to check using theorem [ thm : thm : product - resampling ] that for independent uniformly random spanning trees conditioned on either type of event , the respective resampling oraclegenerates independent uniformly random spanning trees .let us define the following dependency graph ; we are somewhat conservative for the sake of simplicity .the graph contains the following kinds of edges : * whenever intersects ; * whenever intersects ; * whenever intersects .we claim that the resampling oraclefor any bad event can cause new bad events only in its neighborhood .this follows from the fact that the resampling oracleaffect only the trees relevant to the event ( in the superscript ) , and the only edges modified are those incident to those relevant to the event ( in the subscript ) .let us now verify the cluster expansion criterion , introduced as in section [ sec : generalizinglll ] , so that we may apply theorem [ thm : cluster - with - slack ] .let us assume that each color appears on at most edges , and we generate random spanning trees .we claim that the neighborhood of each bad event can be partitioned into cliques of size and cliques of size .first , let us consider an event of type .the neighborhood of consists of : ( 1 ) events where or shares a vertex with ; these events form cliques , one for each vertex of , and the size of each clique is at most , since the number of incident edges to a vertex is , and the number of other edges of the same color is at most .( 2 ) events where intersects ; these events form cliques , one for each vertex of , and each clique has size at most , since its events can be identified with the edges incident to a fixed vertex and the remaining trees .second , let us consider an event of type .the neighborhood of consists of : ( 1 ) events and where intersects ; these events form cliques , one for each vertex of and either or in the superscript , and the size of each clique is at most by an argument as above .( 2 ) events where intersects ; these events form cliques , one for each vertex of and either or in the superscript .the size of each clique is at most , since the events can be identified with the edges incident to a vertex and the remaining trees . considering the symmetry of the dependency graph, we set the variables for all events equal to .the cluster expansion criteria will be satisfied if we set the parameters so that where denotes either or .the second inequality holds due to the structure of the neighborhood of each event that we described above .we set and assume .the reader can verify that with the settings and , we get .therefore , which verifies the assumption of theorem [ thm : cluster - with - slack ] .theorem [ thm : cluster - with - slack ] implies that maximalsetresample terminates after resampling oraclecalls with high probability .the total number of events here is and for each event the respective variable is .therefore , the expected number of resampling oraclecalls is . 
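two primitives recur in these applications and are simple enough to state as code: the coordinate-wise composition of resampling oracles on a product space (in the spirit of theorem [thm:thm:product-resampling]) and the swap-based resampling step used for events on random permutations. both sketches below are schematic (positions are 0-indexed and the component oracles are passed in explicitly) rather than transcriptions of the formal procedures.

```python
import random

def product_resample(state, involved, oracles):
    """coordinate-wise composition on a product space: `involved` lists pairs
    (coordinate k, event index i on that coordinate); the oracle of every
    involved coordinate is applied independently and all other coordinates are
    left untouched, exactly as the tree oracles were combined above."""
    new_state = list(state)
    for k, i in involved:
        new_state[k] = oracles[k].resample(i, new_state[k])
    return tuple(new_state)

def swap_resample_permutation(pi, pattern_positions):
    """swap-based resampling for a permutation event that pins the values at
    positions x_1,...,x_r (taken in a fixed order): for each x_i in turn, swap
    pi[x_i] with pi[y] for y chosen uniformly outside {x_1,...,x_{i-1}}.
    returns a fresh permutation; the input list is not modified."""
    pi = list(pi)
    n = len(pi)
    for i, x in enumerate(pattern_positions):
        forbidden = set(pattern_positions[:i])
        y = random.choice([z for z in range(n) if z not in forbidden])
        pi[x], pi[y] = pi[y], pi[x]
    return pi
```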
given an edge - coloring of , a perfect matching is called rainbow if each of its edges has a distinct color .this can be viewed as a non - bipartite version of the problem of latin transversals .it is known that given any _ proper _-edge - coloring of ( where each color forms a perfect matching ) , there exists a rainbow perfect matching .however , finding rainbow matchings algorithmically is more difficult .achlioptas and iliopoulos showed how to find a rainbow matching in efficiently when each color appears on at most edges , .our result is that we can do this for .the improvement comes from the application of the cluster expansion " form of the local lemma , which is still efficient in our framework .( we note that an updated version of the achlioptas - iliopoulos framework also contains this result . )[ thm : rainbow - matching ] given an edge - coloring of where each color appears on at most edges , a rainbow perfect matching exists and can be found in resampling oraclecalls with high probability .in fact , we can find many disjoint rainbow matchings up to a linear number , if we replace above by a smaller constant .[ thm : rainbow - matchings ] given an edge - coloring of where each color appears on at most edges , at least edge - disjoint rainbow perfect matchings exist and can be found in resampling oraclecalls whp .we postpone the proof to section [ sec : latin ] , since it follows from our result for latin transversals .we apply our algorithm in the setting of uniformly random perfect matchings , with the following bad events ( identical to the setup in ) : for every pair of edges of the same color , occurs if .if no bad event occurs then is a rainbow matching .we also define the following dependency graph : unless are four disjoint edges .note that this is more conservative than the dependency graph we considered in section [ sec : resample - matchings ] , where two events are only connected if they do not form a matching together .the more conservative definition will simplify our analysis . in any case , our resampling oracleis consistent with this lopsidependency graphin the sense that resampling can only cause new events such that .we show that this setup satisfies the criteria of the cluster expansion lemma .let , and .consider the neighborhood of a bad event .it contains all events such that there is some intersection among the edges .such events can be partitioned into cliques : for each vertex , let denote all the events such that and has the same color as .the number of edges incident to is , and for each of them , the number of other edges of the same color is by assumption at most .therefore , the size of is at most . in the following ,we use the short - hand notation .consider the assumptions of the cluster expansion lemma : for each event , we should have \leq \frac{y_{ef}}{\sum_{i \subseteq \gamma^+(e_{ef } ) , i \in \ind } y^i}.\ ] ] we have = p = \frac{1}{(2n-1)(2n-3)} ] .we consider the following two types of bad events : * : for each ] such that , the event occurs if and ; * : for each ] , the event occurs if . clearly ,if none of these events occurs then the permutations correspond to pairwise disjoint latin transversals .the probability of a bad event of the first type is = \frac{1}{n(n-1)} ] .thus the probability of each bad event is at most . it will be convenient to think of the pairs \times [ n] ] , ] , . 
for a stableset sequence , .we relate stable set sequences to executions of the algorithm by the following coupling argument .although the use of stable set sequences is inspired by , their coupling argument is different due to its backward - looking nature ( similar to ) , and their restriction to the variable model .[ lem : prod - bound ] for any proper stable set sequence , the probability that the maximalsetresamplealgorithm follows is at most . given ,let us consider the following -checking " random process .we start with a random state . in iteration , we process the events of in the ascending order of their indices .for each , we check whether satisfies ; if not , we terminate . otherwise , we apply the resampling oracle and replace by .we continue for .we say that the -checking process succeeds if every event is satisfied when checked and the process runs until the end . by induction, the state after each resampling oraclecall is distributed according to : assuming this was true in the previous step and conditioned on satisfied , we have . by assumption , the resampling oracle removes this conditioning and produces again a random state .whenever we check event , it is satisfied with probability ] . to conclude , we argue that the probability that maximalsetresamplefollows the sequence is at most the probability that the -checking process succeeds . to see this ,suppose that we couple maximalsetresampleand the -checking process , so they use the same source of randomness .in each iteration , if maximalsetresampleincludes in , it means that is satisfied .both procedures apply the resampling oracle and by coupling the distribution in the next iteration is the same .therefore , the event that maximalsetresamplefollows the sequence is contained in the event that the -checking process succeeds , which happens with probability .we emphasize that we do _ not _ claim that the distribution of the current state is after each resampling oraclecall performed by the maximalsetresamplealgorithm .this would mean that the algorithm is not making any progress in its search for a state avoiding all events .it is only the -checking process that has this property .let denote the set of all stable set sequences and the set of proper stable set sequences .let us denote by the set of stable set sequences of length , and by the subset of such that the first set in the sequence is .similarly , denote by the set of proper stable set sequences of length , and by the subset of such that the first set in the sequence is . for ,let us call the total size of the sequence .[ lem : iteration - bound ] the probability that maximalsetresampleruns for at least iterations is at most .the probability that maximalsetresampleresamples at least events is at most .if the algorithm runs for at least iterations , it means that it follows some proper sequence . by lemma [ lem: prod - bound ] , the probability that the algorithm follows a particular stable set sequence is at most . by the union bound ,the probability that the algorithm runs for at least iterations is at most .similarly , if the algorithm resamples at least events , it means that it follows some proper sequence of total size . 
by the union bound ,the probability of resampling at least events is upper - bounded by .we note that these bounds could be larger than and thus vacuous .the events that the algorithm follows " are disjoint for different sequences of fixed total size , while they could overlap for a fixed length ( because we can take to be different prefixes of the sequence of events resampled in iteration ) . in any case, the upper bound of on each of the events could be quite loose .[ lem : overkill - runtime ] the expected number of events resampled by maximalsetresampleis at most . by a standard argument , = \sum_{s=1}^{\infty } \pr[\mbox{at least } s\mbox { events are resampled}].\ ] ] by lemma [ lem : iteration - bound ] , this is upper - bounded by [ sec : lllslack ] in this section we will analyze the algorithm under the assumption that the criterion holds with some `` slack '' .this idea of exploiting slack has appeared in previous work , e.g. , .this analysis proves only a weaker form of theorem [ thm : lll - tight - result ] .the full proof , which removes the assumption of slack , appears in section [ sec : lllimpliesshearer ] .to begin , let us prove the following ( crude ) bound on the expected number of iterations .we note that this bound is typically exponentially large .[ lem : crude - bound ] provided that the satisfy the criterion , , we have it will be convenient to work with sequences of fixed length , where we pad by empty sets if necessary .note that by definition this does not change the value of : e.g. , . recall that denotes the set of all stable set sequences of length where the first set is .we show the following statement by induction on : for any and any , this is true for , since by the lll assumption .let us consider the expression for .we have by the inductive hypothesis .this can be simplified using the following identity : we use this with .therefore , now we use the lll assumption : because each element of appears in for at least one .we conclude that this proves ( [ eq : stab - induction ] ) . adding up over all sets ] over all events in the sequence . by the slack assumption, we have and , where . using lemma [lem : crude - bound ] , we obtain for , we obtain therefore , the probability of resampling more than events is at most .[ sec : shearer ] in this section we discuss a strong version of the local lemma due to shearer .shearer s lemma is based on certain forms of the multivariate independence polynomial .we recall that denotes .given a graph and values , define for each ] .if then \geq q_\emptyset ] is exactly the probability that no event occurs .( see for more details . ) in this section we summarize some of the important properties of these polynomials , most of which may be found in earlier work .since some of the proofs are not easy to recover due to different notation and/or their analytic nature ( in case of ) , we provide short combinatorial proofs for completeness .[ clm : breve1 ] for any , we have every independent set either contains or does not .in addition , if then is independent iff is an independent subset of .[ clm : breve2 ] for every ] as required .[ clm : qandqdown ] assume that ^n ] . clearly , } = \min_i \ , p^i ] .[ clm : srequiv ] the two characterizations of the shearer region , and , are equivalent . 
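before giving the proofs of these claims, a brute-force rendering of the polynomials may be useful for intuition and for checking shearer's criterion on small instances; the enumeration below is exponential in the number of events, so it is a verification aid only, not part of the algorithm.

```python
from itertools import combinations
from math import prod

def independent_subsets(vertices, graph):
    """all subsets of `vertices` that are independent in `graph`
    (adjacency given as a dict of sets)."""
    vertices = list(vertices)
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            if all(v not in graph[u] for u, v in combinations(subset, 2)):
                yield subset

def q(S, p, graph):
    """q_S(p) = sum over independent I contained in S of (-1)^|I| prod_{i in I} p_i."""
    return sum((-1) ** len(I) * prod(p[i] for i in I)
               for I in independent_subsets(S, graph))

def in_shearer_region(p, graph):
    """brute-force check of shearer's criterion: q_S(p) > 0 for every subset S."""
    n = len(p)
    return all(q(S, p, graph) > 0
               for r in range(n + 1) for S in combinations(range(n), r))

# a triangle of events with equal probabilities: q_{0,1,2}(p) = 1 - 3p, so the
# criterion holds exactly when p < 1/3
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(in_shearer_region([0.3, 0.3, 0.3], triangle))   # True
print(in_shearer_region([0.4, 0.4, 0.4], triangle))   # False
```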
by claim [ clm : qandqdown ] , if and ] .conversely , if for all ] for all .[ clm : brevemonotone ] let ^n ] , claim [ clm : breve1 ] implies that and .thus , the case that and differ in multiple coordinates is handled by induction .[ clm : breve - submodular ] for any and ] , we have . we can assume ; otherwise the right - hand side is zero . by claim [ clm : breve3 ], we have \setminus \gamma^+(a ) } \cdot p^{b } { \breve{q}}_{[n ] \setminus \gamma^+(b)}.\ ] ] by claim [ clm : breve - submodular ] , \setminus \gamma^+(a ) } \cdot { \breve{q}}_{[n ] \setminus \gamma^+(b ) } ~\geq~ { \breve{q}}_{[n ] \setminus ( \gamma^+(a ) \cup \gamma^+(b ) ) } \cdot { \breve{q}}_{[n ] \setminus ( \gamma^+(a ) \cap \gamma^+(b))}.\ ] ] here we use the fact that , and . therefore , by the monotonicity of , \setminus \gamma^+(a ) } \cdot { \breve{q}}_{[n ] \setminus \gamma^+(b ) } ~\geq~ { \breve{q}}_{[n ] \setminus \gamma^+(a \cup b ) } \cdot { \breve{q}}_{[n ] \setminus \gamma^+(a \cap b)}.\ ] ] also , . using claim [ clm : breve3 ] one more time , we obtain \setminus \gamma^+(a \cup b ) } \cdot p^{a \cap b } { \breve{q}}_{[n ]\setminus \gamma^+(a \cap b ) } ~=~ q_{a \cup b } \cdot q_{a \cap b}.\ ] ] [ clm : sumqj ] suppose that .for any set ] . by claim [ clm : breve2 ] , \setminus \{i\}}}{{\breve{q}}_{[n]}}.\ ] ] [ clm : shearerslack ] if then for each ] does not depend on , while }(p) ] .since , we know that }(p_1,\ldots , ( 1+\epsilon ) p_i , \ldots , p_n ) \geq 0 ] .claim [ clm : qsingleton ] then implies that .kolipaka and szegedy showed that stable set sequences relate to the independence polynomials .the following is the crucial upper - bound for stable set sequences when shearer s criterion holds .in fact , this result is subsumed by lemma [ lem : shearer - sum - equality ] but we present the upper bound first , with a shorter proof . [lem : shearer - sum - bound ] if for all ] . ) hence , .the inductive step : every stable set sequence starting with has the form where .therefore , by the inductive hypothesis , .also , recall that if . therefore , using lemma [ lem : q - expansion ] to obtain the last equality .the inequality in lemma [ lem : shearer - sum - bound ] actually becomes an equality as , as shown in lemma [ lem : shearer - sum - equality ] .this stronger result is used only tangentially in section [ sec : cllsss ] , but we provide a detailed proof in order to clarify the arguments of kolipaka and szegedy .[ lem : shearer - sum - equality ] for a dependency graph and , the following statements are equivalent : 1 . and for all ] and .lemma [ lem : shearer - sum - bound ] proves that this implies .clearly , so this also implies that for all .note that is the column of corresponding to : for each .therefore , we can write , where is the canonical basis vector in corresponding to .we have .we may subtract these two limits since we have shown that every is finite , obtaining .we note that has strictly positive coordinates for , and for . by lemma [ lem : q - expansion ], we have for the vector with coordinates .consider , a nonnegative vector with in the coordinate corresponding to .we can choose large enough so that coordinate - wise , .from this we derive that so equality holds throughout . recalling the definition of , we conclude that . : trivial . 
: let be the vector .we can assume that , otherwise we are done by claim [ clm : qandqdown ] .let us consider the values of on the line \;\right\}} ] .we observe that for , which can be verified directly by considering the alternating sum defining .( intuitively , shearer s lemma holds in this region just by the union bound . )therefore , we have .furthermore continuity also implies , so claim [ clm : qandqdown ] yields }(\lambda^ * p ) = 0 ] and , * remark . * an equivalent statement using the language of `` traces '' appears in the recent manuscript of knuth ( * ? ? ?* page 86 , theorem f ) , together with a short proof using generating functions .furthermore , using claim [ clm : breve2 ] , we may derive \setminus a}}{{\breve{q}}_{[n]}},\ ] ] for any ] so , in expectation , independent samples from would also suffice to find a state in .section [ sec : shearerslack ] improves this analysis by assuming that shearer s criterion holds with some slack , analogous to the result in section [ sec : lllslack ] .section [ sec : shearer - automatic - slack ] then removes the need for that assumption it argues that shearer s criterion always holds with some slack , and provides quantitative bounds on that slack .[ sec : shearerslack ] [ sec : shearer - slack ] in this section we consider scenarios in which shearer s criterion holds with a certain amount of slack . to make this formal, we will consider another vector of probabilities with .for notational convenience , we will let denote the value and let denote as before .let us assume that shearer s criterion holds with some slack in the following natural sense .[ def : shearerslack ] we say that satisfies shearer s criterion with coefficients at a slack of , if is still in the shearer region and .[ thm : shearer - slack ] recall that ] by claim [ clm : breve2 ] , claim [ clm : breve3 ] and claim [ clm : brevemonotone ] .let us define by the chain rule and claim [ clm : breve - diff ] , we have \setminus \gamma^+(i ) } ~=~ -\sum_{i=1}^{n } q_{\{i\}}\ ] ] where we used claim [ clm : breve3 ] in the last equality .assuming that is in the shearer region , we also have by claim [ clm : breve - diff ] that is , is a convex function for as long as is in the shearer region .our goal is to prove that this indeed happens for ] , and let be the minimum such value ( which exists since the complement of the shearer region is closed ) . by claim [ clm : qandqdown ] , anywhere in the shearer region , } ] is the minimum coefficient among for all ] .on the other hand , by the minimality of , is positive and convex on and therefore which is a contradiction .therefore , is positive and convex for all ] .suppose that the three subroutines described in section [ sec : algass ] exist .if then the probability that maximalsetresampleresamples more than events is at most .we note that the corresponding result in the variable model was that the expected number of resamplings is at most . here, we obtain a bound which is at most quadratic in this quantity .directly from theorem [ thm : shearer - slack ] and lemma [ lem : shearer - automatic - slack ] : given in the shearer region , lemma [ lem : shearer - automatic - slack ] implies that in fact satisfies shearer s criterion with a bound of at a slack of . 
by theorem [ thm : shearer - slack ] , the probability that maximalsetresampleresamples more than events is at most , where using claim [ clm : sumqj ] , we can replace by .[ sec : lllimpliesshearer ] shearer s lemma ( lemma [ lem : shearer - lemma ] ) is a strengthening of the original lovsz local lemma ( theorem [ thm : lll ] ) : if satisfy then they must also satisfy shearer s criterion .nevertheless , there does not seem to be a direct proof of this fact in the literature .shearer indirectly proves this fact by showing that , when it is possible that = 0 ] and , we have [ cor : shearer->lovasz ] if satisfies then . for any ] , so the result follows from claim [ clm : shearerslack ] .lemma [ lem : gllimpliesshearer ] we proceed by induction on .the base case , , is trivial : there is no to choose .consider and an element . by claim [ clm : breve1 ], we have . by the inductive hypothesis applied iteratively to the elements of , we have therefore , we can write by the claim s hypothesis , , so we conclude that . these results , together with our analysis of shearer s criterion with slack ( corollary [ cor : shearer - no - q0 ] ) ,immediately provide an analysis under the assumption that holds with slack , similar to theorem [ thm : gll - with - slack ] .however , this connection to shearer s criterion allows us to prove more .we show that our algorithm is in fact efficient even when the criterion is tight . this might be surprising in light of corollary [ cor : shearer - bound ] , which does not use any slack and gives an exponential bound of } } \leq \prod_{i=1}^{n } \frac{1}{1-x_i} ] , we are in the shearer region and by claim [ cl : breve - diff ] , we have i.e. , is a convex function . by claim [ cl : breve - diff ] andclaim [ clm : breve4 ] , for any we have therefore , since , this implies as we argued the function is convex as long as for all ] .( we know certainly that this is true for , by lemma [ lem : shearer->lovasz ] . )then we have using our choice of .this proves in fact that for all ] : suppose not and take ] . by the arguments above , .however , for any we have which contradicts the continuity of .therefore , we can conclude that .in particular , }(\epsilon ) \geq \frac12 \breve{q}_{[n]}(0 ) \geq \frac12 \prod_{i=1}^{n } ( 1-x_i)\ ] ] where we used claim [ clm : breve2 ] and lemma [ lem : shearer->lovasz ] in the last step .[ thm : lovasz - no - slack ] let be events and let ] , with respect to a graph , is this criterion was introduced in the following non - constructive form of the lll .[ thm : bissacot ] let be events with a ( lopsi-)dependency graph , and let ] . to see that this strengthens the original lll ( theorem [ thm : lll ] ) , one may verify that implies : if , we can take ( so ) and then use the simple bound on the other hand , shearer s lemma ( lemma [ lem : shearer - lemma ] ) strengthens theorem [ thm : bissacot ] , in the sense that implies .this fact was established by bissacot et al . by analytic methods that relied on earlier results . in this sectionwe establish this fact by a new proof that is elementary and self - contained .an algorithmic form of theorem [ thm : bissacot ] in the variable model was proven by pegden .in fact , that result is subsumed by the algorithm of kolipaka and szegedy in shearer s setting , since implies . in this section , we prove a new algorithmic form of theorem [ thm : bissacot ] in the general framework of resampling oracles .to begin , we establish the following connection between the parameters and the polynomials . 
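before stating that connection formally, note that the cluster expansion criterion itself is straightforward to check numerically; the sketch below evaluates, for given p, y and a dependency graph, the condition that each p_i is at most y_i divided by the sum of products of y over independent subsets of the inclusive neighbourhood of i, again by brute-force enumeration and hence only for small instances.

```python
from itertools import combinations
from math import prod

def independent_subsets(vertices, graph):
    vertices = list(vertices)
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            if all(v not in graph[u] for u, v in combinations(subset, 2)):
                yield subset

def cluster_expansion_holds(p, y, graph):
    """check, for every event i, that
       p_i <= y_i / sum_{independent I within Gamma^+(i)} prod_{j in I} y_j."""
    for i in range(len(p)):
        gamma_plus = set(graph[i]) | {i}
        denom = sum(prod(y[j] for j in I)
                    for I in independent_subsets(gamma_plus, graph))
        if p[i] * denom > y[i]:
            return False
    return True

# triangle again: the inclusive neighbourhood of each event is a clique, so the
# denominator is 1 + y_0 + y_1 + y_2 and the condition reads p_i <= y_i / (1 + 3y)
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(cluster_expansion_holds([0.2, 0.2, 0.2], [0.5, 0.5, 0.5], triangle))   # True
```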
for convenience , let us introduce the notation ] and , we have the proof is in section [ sec : cllimpliesshearer ] below .[ cor : pins ] if satisfies then . for any ] under the criterion .recall that .hence for all ] , so the result follows from claim [ clm : shearerslack ] .these corollaries lead to our algorithmic result under the cluster expansion criterion .the following theorem subsumes theorem [ thm : cluster - no - slack ] and adds a statement under the assumption of slack .[ thm : cluster - with - slack ] let be events and let ] .it suffices to consider the case that and are disjoint , as replacing with decreases the right - hand side and leaves the left - hand side unchanged .every summand on the left - hand side can be written as with and .the product appears as a summand on the right - hand side , and all other summands are non - negative .lemma [ lem : cllimpliesshearer ] we proceed by induction on .the base case is . in that casewe have on the other hand , by the two claims above and , we have } ~=~ y_{[n]-a } + y_a y_{[n ] \setminus \gamma^+(a ) } ~\geq~ y_{[n ] - a } + p_a y_{\gamma^+(a ) } y_{[n ] \setminus \gamma^+(a ) } ~\geq~ y_{[n ] - a } + p_a y_{[n]}.\ ] ] therefore , -a}}{y_{[n ] } } \leq 1 - p_a ] , a witness tree is a finite rooted tree , with each vertex in given a label ] .we show here that this can be grossly violated in the setting of resampling oracles .our example actually uses the independent variable setting but resampling oracles different from the natural ones considered by moser and tardos .consider independent bernoulli variables and where and .the probability distribution is uniform on the product space of these random variables . consider the following events : these events are mutually independent .however , let us consider a dependency graph where for each ; this is a conservative choice but nevertheless a valid one for our events .( one could also tweak the probability space slightly so that neighboring events are actually dependent . ) in any case , is an isolated vertex in the graph . switches the variables and and thus can cause to occur ( which is consistent with the dependency graph ) . conditioned on , it makes uniformly random and preserves a uniform distribution on . affects the values of but no event depends on , so can not cause any event except to occur .conditioned on , since are distributed uniformly , it produces again the uniform distribution . first , let us consider the moser - tardos algorithm : in the most general form , it resamples in each step an arbitrary occurring event . for concreteness , let s say that the algorithm always resamples the occurring event of minimum index ( in some fixed ordering ) .if the moser - tardos algorithm considers events in the order , then at the time it gets to resample , the variables are independent are equal to with probability each .let us fix .whenever some variable is initially equal to , we have to resample at some point .however , we only resample if does not occur , which means that must be at that time .so the resampling oracle forces to be equal to .the only way could remain equal to is that it is initially equal to and none of the events need to be resampled , which happens with probability .therefore , when we re done with and for , is equal to with probability .this happens independently for each . 
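For orientation, here is a minimal sketch of the lowest-index Moser-Tardos loop in the variable model that this example refers to; the variables, events and probabilities below are illustrative assumptions and not the construction analysed in this section.

```python
import random

random.seed(1)

# Variable model: independent fair bits; each event depends on a small set of variables.
# Both the instance and the events are illustrative assumptions.
num_vars = 12
events = [
    # event j occurs iff all listed variables are equal to 1
    {"vars": [3 * j, 3 * j + 1, 3 * j + 2]} for j in range(4)
]

def sample_var():
    return random.randint(0, 1)

assignment = [sample_var() for _ in range(num_vars)]

def occurs(event):
    return all(assignment[v] == 1 for v in event["vars"])

resamplings = 0
while True:
    bad = [j for j, e in enumerate(events) if occurs(e)]
    if not bad:
        break
    j = min(bad)                      # always resample the occurring event of minimum index
    for v in events[j]["vars"]:       # resampling in the variable model:
        assignment[v] = sample_var()  # redraw exactly the variables the event depends on
    resamplings += 1

print("assignment avoiding all events found after", resamplings, "resamplings")
```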
by the ordering of events, is resampled only when all other events have been fixed .also , resampling can not cause any other event , so the algorithm will terminate afterwards .however , as we argued above , when we get to resampling , each variable is equal to independently with probability . considering the resampling oracle ,if as well as all the variables are equal to , it will take at least resamplings to clear the queue and get a chance to avoid event .this happens with probability .let consist of a path of vertices labeled . for , we conclude that the witness tree appears with constant probability in the log of the moser - tardos algorithm , as opposed to which would follow from the witness tree lemma .a slightly more involved analysis is necessary in the case of maximalsetresample . by nature of this algorithm , we would resample in parallel " with the other events and so the variables evolve somewhat differently . for each independently , after 2 iterations of the maximalsetresamplealgorithm , with probability .any further updates of other than those caused by resampling can only change the variable from to . the claim is that unless and initially , in the first two iterations we will possibly resample and then one of the events , which makes equal to . any further update to occurs only when is resampled ( which shifts the sequence ) or when is resampled , which makes equal to . in the first two iterations , the probability that is resampled twice is at least ( the values of and are initially uniform , and if is updated , it can only increase the probability that we resample ) . independently ,the probability that after the first two iterations is , by the preceding claim .( we are not using which is possibly correlated with the probability of resampling in the second iteration , and which would be refreshed by this resampling in the second iteration . )if this happens , we will continue to resample at least additional times , because it will take executions of before a zero can reach the variable .again , consider setting .the total number of events is , so and . with constant probability , the witness tree consisting of a path of vertices labeled will appear in the log of maximalsetresamplealgorithm .thus , with constant probability , the algorithm will require a stable set sequence of length at least .
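For comparison, a schematic rendering of maximalsetresample with abstract resampling oracles is sketched below; the toy state, event scopes and oracles are illustrative assumptions, and the bookkeeping (for example, how the log is recorded) may differ in detail from the algorithm analysed above.

```python
import random

random.seed(2)

# Each event j "occurs" iff all the variables in its scope equal 1; two events are
# dependent iff their scopes share a variable. All of this is illustrative only.
num_vars = 9
state = [random.randint(0, 1) for _ in range(num_vars)]
scopes = [[0, 1, 2], [2, 3, 4], [4, 5, 6], [6, 7, 8]]

def adjacent(i, j):
    return bool(set(scopes[i]) & set(scopes[j]))

def occurs(j):
    return all(state[v] == 1 for v in scopes[j])

def resample(j):
    # resampling oracle for event j: redraw the variables it depends on
    for v in scopes[j]:
        state[v] = random.randint(0, 1)

log = []  # the resampled events, organised as a stable set sequence
while True:
    J = []
    while True:
        # occurring events that are independent of everything already chosen in this round
        candidates = [j for j in range(len(scopes))
                      if occurs(j) and all(not adjacent(j, k) for k in J)]
        if not candidates:
            break
        j = min(candidates)
        resample(j)
        J.append(j)
    if not J:
        break
    log.append(J)

print("stable set sequence:", log)
```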
|
The Lovász local lemma is a seminal result in probabilistic combinatorics. It gives a sufficient condition on a probability space and a collection of events for the existence of an outcome that simultaneously avoids all of those events. Finding such an outcome by an efficient algorithm has been an active research topic for decades. Breakthrough work of Moser and Tardos (2009) presented an efficient algorithm for a general setting primarily characterized by a product structure on the probability space. In this work we present an efficient algorithm for a much more general setting. Our main assumption is that there exist certain functions, called _resampling oracles_, that can be invoked to address the undesired occurrence of the events. We show that, in _all_ scenarios to which the original Lovász local lemma applies, there exist resampling oracles, although they are not necessarily efficient. Nevertheless, for essentially all known applications of the Lovász local lemma and its generalizations, we have designed efficient resampling oracles. As applications of these techniques, we present new results for packings of Latin transversals, rainbow matchings and rainbow spanning trees.
|
most species , and humans in particular , exhibit striking changes in social style across the lifecycle , in most cases as a consequence of a shift in emphasis from development to reproduction . in humans , a greatly extended period of socialization , combined with a virtually unique period of post - reproductive ( grandparental ) investment , adds significant complexity to this .although this much is obvious from casual observation , we actually know very little about the relative investment that individuals make as they age , or how this differs between the sexes . the last decade has seen a rapid growth and development in the information and communications technology ( ict ) , which has increasingly aided humans to connect to each other . among the different channels that have become accessible ,mobile phone communication is perhaps the most prominent as regards the number of users .this is the reason mobile phone call data records ( cdrs ) have increasingly been used to study various aspects of human behaviour .for example , from these cdrs one can construct egocentric networks that in turn allow one to undertake detailed studies of ego - alter relationships and the patterns of social investment that individuals make in the different members of their social networks .previous studies have shown that individuals telephone communication ( landline and mobile ) rates correlate with their face - to - face interactions .both the age and gender of individuals have been found to be important factors influencing their communication patterns in mobile phone networks : the gender and age preferences of egos for their alters , for example , have been found to correlate with their geographic proximity .furthermore , the dynamics of activation and deactivation of ties between individuals have been found to be different for the two genders and across age . in general ,homophily and heterophily , which are known to be factors shaping human social interactions , have turned out to be important in mobile phone communications , at least as regards to the gender preferences of an ego . in a previous studyit was found that for younger egos , the most contacted alter is of the opposite sex .taken together , this suggests that , whatever their limitations might be , mobile phone data provide valid and reliable insights into human social patterns . in this paper, we analyze a large mobile phone dataset and study the structure of the individual level or egocentric networks . in general , we focus on their static structure for different non - overlapping periods , ranging from a month to a full year . in everyday life for both humans and other primates , time represents a direct measure of relationship quality . and ,because time is limited and social investment is costly in terms of time , individuals are forced to choose how to distribute that time across the members of their network . here, we use cross - sectional data on the frequency and duration of phone calls to examine how the pattern of social investment varies across the lifecycle in the two sexes . b. 
the figures in the inset focus on the region where the crossover in behaviour for males and females is found .the dashed lines are used to denote the age of the crossover in each case.,scaledwidth=90.0% ]we analyze anonymized cdrs from a particular operator in a european country during 2007 .the cdrs contain full calling histories for the subscribers of this operator ( we term them ` company users ' and subscribers of other operators ` non - company users ' ) .there are million company users and around million non - company users appearing in the cdrs in the full one year period . out of the total set of company usersthere are million users for whom both the age and the gender are available and only a single subscription is registered . in this study , we have only focused on the voice calls and excluded sms entries from the cdrs .we construct an ego - alter pair if there is at least one call event between them during the observed time period . in general, we study calling patterns pertaining to pairs for whom age and gender are known .however , when the demographic information of the alters is not important for the analysis , we include individuals for whom this information is not known . * additional filtering . * in the data set , there are company users for whom multiple subscriptions are found under the same contract numbers . for such usersit is difficult to determine their real age and gender .we bypass this issue by considering the gender and age to be unknown for such users .the stored age of each company user corresponds to the year when the contract was signed ; as the starting year of users contracts ranged from 1998 to 2007 , we updated the age of each user according to the number of years between the beginning of the year when the contract was signed and the first day of 2007 .for some users , their contract starting date is unknown , so we add the average age - correction in the population , which was years ( rounded from the actual value of ) .first we show the variation in the average number of alters that egos contact in a month , as a function of ego s age . from fig .[ fig-1]a , we find that the number of alters reaches a maximum at an age of around .this is followed by a decrease till an age of around . from age , the number of alters contacted stabilizes for about a decade .after , there is again a steady decrease . in fig .[ fig-1]b , we partition these data by gender . from the plot, it is clear that the average number of alters for males is greater than that for females for ages below .but from the age of onwards we observe that the number of alters for females is greater than that for males .to check the robustness of this finding , we use time windows of different length , as shown in fig .[ fig-2 ] .we observe a consistent pattern and that there is a crossover age at around , irrespective of the time window used .months . 
only those cdrs were used where the demographic information of the egos as well as the alters was available.,scaledwidth=80.0% ] ( a1 , b1 , c1 ) , ( a2 , b2 , c2 ) and ( a3 , b3 , c3 ) .the different symbols and colours denote the sexes in the ego - alter pairs and are similar to that used in fig .[ fig-3 ] .the quantities are obtained from monthly call patterns and are averaged over the months period.,scaledwidth=80.0% ] months : ( a ) total time ( sec ) per ego for all calls aggregated in the period , ( b ) time spent ( sec ) per alter per ego ( sec ) , ( c ) time spent ( sec ) per ego with the first ranked alter , and ( d ) the fraction of the total time per ego that is spent with the first rank . red circles and blue squaresindicate female and male egos , respectively.,scaledwidth=80.0% ] to investigate the interaction pattern of the egos belonging to different age groups , we measure the probability of interaction as a function of the age of alters . for egos of a given age, we find this probability by calculating the number of alters of any age and sex and divide by the total number of alters ( male and female ) . in fig .[ fig-3 ] we plot the distribution for egos belonging to six different decadal age classes , namely , , , , and .in general , the distributions appear to be have double peaks .the difference between the ages at which the peaks appear is around years .this is roughly a generation gap and is similar to the results in where the age distribution of the most frequently contacted alter was investigated .notice that the focus of the peaks differs with increasing age : in the younger age groups , the main peak is on individuals of the same age ( peers ) , but from age this starts to be replaced by an increasingly large peak that is a generation younger than ego ( presumably ego s now adult offspring ) .note how these peaks track each other across the age space as ego ages .note also the asymmetry in calling pattern between parent and child : -year - olds ( the parents ) call -year - olds ( their adult children ) more than twice as often as the -year - olds call them .this probability distribution is based on counting the number of alters at any given age . to examine in detail the appearance of the third peak , we quantify the strength of the interaction between the egos of age around and their alters at different ages .we consider egos of age , and years and measure the following quantities in the time window of a month : ( i ) number of calls per alter , ( ii ) number of distinct days each alter is contacted , and ( iii ) calling time per alter ( time of all calls aggregated within monthly window ) .however , the total calling time fluctuates strongly , so in lieu of ( iii ) , we express the monthly aggregated duration of phone calls to an alter for a given ego as a fraction of the total calling time of that ego . in fig .[ fig-4 ] we show these three quantities as a function of the age of the alters , averaged over months .the plot shows the conspicuous presence of three peaks of comparable heights . for older alters ( those aged years or more ) the averagesare inevitably affected by the small number of older mobile phone users .nonetheless , in general , the strength of communication appears to be larger when the alter is of the opposite gender and of similar age ( compare the plots corresponding to female - ego - to - male - alters [ red circles ] and male - ego - to - female - alters [ blue squares ] ) . 
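As a concrete illustration of how such quantities are extracted from CDRs, the sketch below computes (i) the average number of distinct alters per ego per month and (ii) the alter-age distribution for egos in a given age bracket; the simplified record format (ego, alter, month) and the toy values are assumptions made for illustration, not the operator's actual schema.

```python
from collections import defaultdict

# Toy CDR rows: (ego_id, alter_id, month). Real CDRs carry timestamps, call durations
# and more fields; this reduced schema is an assumption made for illustration only.
calls = [
    ("e1", "a1", 1), ("e1", "a2", 1), ("e1", "a1", 2),
    ("e2", "a3", 1), ("e2", "a4", 1), ("e2", "a5", 2),
]
age = {"e1": 27, "e2": 45, "a1": 25, "a2": 52, "a3": 44, "a4": 18, "a5": 46}

# (i) average number of distinct alters contacted per ego per month
alters = defaultdict(set)                       # (ego, month) -> set of alters contacted
for ego, alter, month in calls:
    alters[(ego, month)].add(alter)
per_ego = defaultdict(list)
for (ego, month), contacted in alters.items():
    per_ego[ego].append(len(contacted))
avg_alters = {ego: sum(counts) / len(counts) for ego, counts in per_ego.items()}

# (ii) ages of the alters contacted by egos in a chosen age bracket (here 25-29)
alter_ages = [age[alter] for ego, alter, _ in calls if 25 <= age[ego] < 30]

print(avg_alters)
print(alter_ages)
```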
the structure of the communication pattern of egos is also reflected in the variation in the monthly aggregated call durations . in fig . [ fig-5]a and [ fig-5]b we plot the total calling time of egos and the calling time per alter , respectively .the plots show that females have larger total calling time as well as larger time per alter than males .interestingly , the crossover in the number of alters ( fig .[ fig-1]b ) does not get translated to the calling time per alter . for a given ego ,we rank the alters in terms of the monthly calling times and plot the calling time to the first rank alter ( fig . [fig-5]c ) .the time spent by an ego with the first rank is approximately times the time spent with an average alter .however , the variation in these quantities is very similar to that of the dependence of number of alters on the age of the ego .when the calling time to the first rank is expressed as a fraction of the total calling time , we observe three broad regimes , a rapid decrease till years of age , a slow variation in the range years and a steady rise from years onwards .note that the variation over the whole age range is only of the average value which is around .additionally , a crossover in the behaviour of males and females is visible at the age of .months.,scaledwidth=80.0% ] having discussed the dependence of mobile communication upon the gender and age of the egos , we provide a different perspective by measuring the inequality in the way social effort ( indexed as calling times ) is partitioned among the alters through the gini coefficient for each ego .the gini coefficient is mainly used to quantify income inequality and its value varies from ( implying perfect equality ) to ( implying extreme inequality ) .we here use it as an evenness index : a gini value of implies that ego devotes equal amounts of time with all the alters and a value of implies that the ego spends all the time with only one alter .it is natural to expect that there should be a strong bias among the egos with regard to the calling time spent with the alters , during a certain period of time .here we analyse the nature of this bias by using the gini coefficient in two different ways .we note that , for egos of a given age , there is a typical value for the number of alters .this poses a difficulty in comparing two egos of different ages because the value of the coefficient is known to depend upon the size of the sample set .we circumvent this issue in the following way .first , we consider egos irrespective of their ages but having a fixed number of alters , and calculate the gini coefficient for the set of their monthly aggregated call times to the alters . fig .[ fig-6]a plots the distributions for sets of egos having different genders .comparison between the locations of the peaks suggest that females have overall higher gini values compared to males .next , we consider egos irrespective of their gender . we choose egos in the following age brackets : ( i ) , ( ii ) and ( iii ) . for each egowe rank the alters with respect to the time of monthly aggregated call durations .then we choose the call times belonging to the top alters . 
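A minimal sketch of the Gini coefficient used as an evenness index over an ego's monthly call times is given below; the estimator based on sorted values is one standard variant and is an assumption insofar as the text does not spell out which formula is used.

```python
def gini(times):
    """Gini coefficient of a list of non-negative call times.
    0 means the time is spread equally over the alters; values near 1 mean the
    time is concentrated on a single alter."""
    xs = sorted(times)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0.0:
        return 0.0
    # Standard estimator from sorted values: G = 2 * sum(i * x_i) / (n * sum x) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

# Example: an ego spending most of the month's calling time on one alter (clearly uneven),
# versus an ego splitting the time equally (perfectly even).
print(gini([3600, 60, 45, 30, 10]))   # large value, uneven allocation
print(gini([100, 100, 100, 100]))     # 0.0, perfectly even allocation
```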
for egos having less than alterswe assume the missing call times to be zero .however , in the analysis we exclude all egos who have less than alters .the resulting distribution of gini values is shown in fig .[ fig-6]b .we observe that the inequality among alters is larger for older people than for younger ones .the distributions in fig .[ fig-6]b suggest that social effort becomes progressively less evenly distributed as people get older , and that this is true for both genders . in other words ,older people devote more attention to their first ranked alters than younger people do . in effect , younger people are socially more promiscuous , but as they age they focus more and more of their effort , or social capital , on a smaller subset of meaningful relationships . as it is likely that most of an ego s first few rank alters are family members , this might suggest that older people become more attached to their family compared to younger people .overall , the female egos exhibit higher inequality values than males do , and this suggests that females may be not only more socially focused than males , but also more attached to their family ( as folk wisdom would also suggest ) .in order to explore the patterns of social investment across the lifespan in humans , we studied the records of mobile communication belonging to a particular european operator over a one year period . as these records include information on the service subscribers age and gender , we are able to elucidate the nature of the interactions across the lifecycle . one important conclusion we can drawis that the average number of contacts is quite modest : in most cases , people focus their ( phone - based ) social effort each month on around people .this corresponds rather closely to the size of the second layer of egocentric personal networks in the face - to - face world . in the face - to - face world, this layer also represents the number of alters contacted at least once a month .thus , we provide some evidence that the use of mobile phone technology does not change our social world. it also provides further indirect evidence for the fact that we use the phone to contact those who are emotionally closest to us rather than simply those who live furthest away ( see also ) .our main finding , however , is the fact that the maximum number of connections for both males and females occurs at the age of around ( fig .[ fig-1 ] ) . 
during this early phase, males appear to be more connected than females .after this , the number of alters decreases steadily for both genders , although the decrease is faster for males than for females .the different rates of decrease result in a crossover around the age of such that after females become more connected than males .note , however , in the age group , the number of alters stabilizes to a very conspicuous plateau for both males and females .projecting the slopes for the two graphs before the plateau suggests that the plateau represents a ` saving ' of around two alters who are retained as monthly alters rather than being lost to the next layer of less frequent contact .the difference between the plateau heights for females and males is around alters when the time window corresponds to one month .this difference grows to and alters when the window size is increased to four and twelve months , respectively .thus , there are two separate but interrelated phenomena : the plateau that appears in both sexes during this period and the difference between males and females in the number of alters contacted .since this age cohort is that in which ego s children typically marry and begin to reproduce in their turn , one likely explanation for this plateau is that it reflects the fact that parents are maintaining regular interaction with their adult children at a time when some of these might otherwise be lost .the difference between the sexes seems to be primarily due to the more frequent interactions by the females with their adult children and the children s spouses .also , females intimately interact with their own close family members ( e.g. keeping grandparents up - dated on the children s activities ) and the new in - laws created by their children s marital arrangements .this shift in women s social focus once her offspring reach adulthood and start to reproduce themselves is suggested by the appearance of a rather clear secondary peak in the number of alters aged about a generation ( years ) younger that appears in the contacts of -year - olds ( fig .[ fig-3]d ) .this is in contrast with the profiles of younger cohorts ( those aged years ) who show a small , but distinct , secondary peak about a generation older than themselves ( presumably their own parents ) .the positions of the peaks in fig .[ fig-4 ] tell us quite a lot about domestic arrangements .for example , for this same - age cohort the peaks in the f - m ( circles ) and m - f ( squares ) curves in fig .[ fig-4 ] are slightly offset , with the m - f leading by about years .in other words , on average a woman s main same - age alter is three years older than she is , while that for a man is about three years younger .this is almost exactly the typical age difference between spouses in contemporary europe , including the country from which our sample derives .it seems likely that in fig .[ fig-4 ] the peaks to the left are ego s children and the peaks to the right are ego s own parents .this suggestion is reinforced by the fact that these peaks track each other across the age space as ego ages .in addition , we found another crossover when we looked at the fraction of the total calling time devoted to the top ranked alter .this crossover occurs during the reproductively active period and its location roughly corresponds to the maxima in fig .[ fig-1 ] .note , that before the crossover , the fraction for females , in fig .[ fig-5]d , is larger than that for males , even though their maximum number of alters is actually 
lower . as the most frequently contacted alter is typically of the opposite sex , we assume this to be the spouse . because the time costs of reproduction in humans are very high ( and may continue to be high for nearly two decades until the children reach marriageable age ), we expect that females give priority to their spouses rather than other kinds of peers ( siblings , cousins , friends ) during this period when their time ( and energy ) budgets are under intense pressure . as a consequence ,they maintain fewer relationships compared to males of the same age whose investment in their preferred alter seems to be much lower .a similar pattern of withdrawal from casual relationships so as to invest their increasingly limited available time in core relationships as time budgets are squeezed by the foraging demands of parental investment ( in this case , lactation ) has been noted in baboons .these results also seem to reflect female mate choice , with females persistently targeting their spouse in order to maintain investment in their chosen mate once they have made a choice ( see also ) .note that , when examined over the whole age range , the fraction varies little and remains around ( fig .[ fig-5]d ) .this observation suggests that across the lifespan , the fractional allocation for the top ranked alter ( the spouse ) remains conserved even though the absolute time budget decreases ( as can be seen from fig .[ fig-5]a ) .this is reminiscent of the finding by , who reported , for a much smaller dataset , that the proportional distribution of social effort across all alters in an ego s network remains remarkably constant over time despite considerable change in network membership .more generally , fig .[ fig-6 ] suggest that there was a marked difference in the evenness with which the two genders distributed their social effort , as well as a progressive shift towards being less even with age .females seemed to be generally more focused in their social arrangements than males , targetting more of their social effort onto fewer alters .this is reminiscent of the finding in that women appear to have a small number of extremely close same - sex friendships , whereas males do not ( they typically have a larger number of more casual same - sex friendships ) .in addition , both genders exhibit the same tendency to shift from being more socially promiscuous ( a more even gini value ) early in life to a more uneven ( higher gini value ) in their . since family dominate the inner layers of most people s social networks , this would suggest an increasing focus on family and close friendship relationships with age .this might reflect the fact that family relationships are more robust and resilient than friendships , as well as the fact that they are much more important as sources of lifelong support .in contrast , the greater social promiscuity of younger individuals could be interpreted as a phase of social sampling in which individuals explore the range of opportunities ( both for friendships and for reproductive partners ) available to them before finally settling down with those considered optimal or most valuable . 
in this respect, the younger individuals may be viewed as ` careful shoppers ' who continue to check out the availability of options , only later concentrating their social effort on a select set of preferred alters .one implication of this is that turnover ( or churn ) in network membership might start to fall dramatically at a particular point in the life cycle marked by a shift from this more promiscuous phase to the more stable phase associated with a reduced social network .[ fig-1 ] suggests that the mean number of alters contacted falls from during this early phase to after age [ fig-1 ] and [ fig-5]d suggest that this switch in social focus may start to occur by the end of the third decade of life , and may thus coincide with the onset of reproduction .the average age of women at first birth in europe for the currently reproducing generation is around and would fit well with this prediction .kb , ag and dm acknowledges project cosdyn , academy of finland for financial support .dm also acknowledges conacyt , mexico for support .rd is supported by an erc advanced grant .we thank jnos kertsz , tamas david - barrett and hang - hyun jo for helpful discussions .kb , ag and dm carried out the analysis of the data .all the authors were involved in designing the project and the preparation of the manuscript .the authors declare no competing financial interests .dashun wang , dino pedreschi , chaoming song , fosca giannotti , and albert - laszlo barabasi . human mobility , social ties , and link prediction . in _ proceedings of the 17th acmsigkdd international conference on knowledge discovery and data mining _ , pages 11001108 .acm , 2011 .giovanna miritello , esteban moro , rubn lara , roco martnez - lpez , john belchamber , sam gb roberts , and robin i m dunbar .time as a limited resource : communication strategy in mobile phone networks ., 35(1):8995 , 2013 .tamas david - barrett , anna rotkirch , james carney , isabel behncke izquierdo , jaimie a krems , dylan townley , and ri dunbar .women favour dyadic relationships , but men prefer clubs : cross - cultural evidence from social networking ., 10(3):e0118329 , 2015 .
|
Age and gender are two important factors that play crucial roles in the way organisms allocate their social effort. In this study, we analyse a large mobile phone dataset to explore the way life history influences human sociality and the way social networks are structured. Our results indicate that these aspects of human behaviour are strongly related to age and gender, such that younger individuals have more contacts and, among them, males more than females. However, the rate of decrease in the number of contacts with age differs between males and females, such that there is a reversal in the number of contacts around the late 30s. We suggest that this pattern can be attributed to the difference in the reproductive investments made by the two sexes. We analyse the inequality in social investment patterns and suggest that the age- and gender-related differences that we find reflect the constraints imposed by reproduction in a context where time (a form of social capital) is limited.
|
information processing using the rules of quantum mechanics may allow tasks that can not be performed using classical laws .the efficient factorization algorithm of shor and secure quantum cryptography are two examples . of the many possible realizations of quantum information processes , optical realizations have the advantange of negligible decoherence: light does not interact with itself , and thus a quantum state of light can be protected from becoming entangled with the environment .several proposed optical schemes offer significant potential for quantum information processing . in order to prove theorems regarding the possibilities and limitations of optical quantum computation, one must construct a framework for describing all types of physical processes ( unitary transformations , projective measurements , interaction with a reservoir , etc . )that can be used by an experimentalist to perform quantum information processing .most frameworks currently employed ( e.g. , ) are restricted to describing only unitary transformations . however , such transformations are a subset of all possible physical processes. non - unitary transformations such as dissipation , noise , and measurement must also be described within a complete framework .the new results of knill et al show that photon counting measurements allow for operations that are `` difficult '' with unitary transformations alone ; thus , non - unitary processes may be a powerful resource in quantum information processing and must be considered in any framework that attempts to address the capabilities of quantum computation with optics . in this paper , we show that unitary transformations , measurements and any other physical process can be described in the unified formalism of completely positive ( cp ) maps .also , a broad class of these maps which includes linear optics and squeezing transformations , noise processes , amplifiers , and measurements with feedforward that are typical to quantum optics experiments can be described within the framework of a _gaussian semigroup_. this framework allows us to place limitations on the potential power of certain quantum information processing tasks .one important goal is to identify classes of processes that can be efficiently simulated on a classical computer ; such processes can not possibly be used to provide any form of `` quantum speedup '' .the gottesman - knill ( gk ) theorem for qubits and the cv classical simulatability theorems of bartlett et al provide valuable tools for assessing the classical complexity of a quantum optical process .it is shown here that semigroup techniques provide a powerful formalism with which one can address issues of classical simulatability .in particular , a classical simulatability result is presented for a general class of quantum optical operations , and thus a no - go theorem for quantum computation with optics is proven using semigroup techniques .consider an optical quantum information process involving coupled electromagnetic field modes , with each mode described as a quantum harmonic oscillator .the two observables for the ( complex ) amplitudes of a single field mode serve as canonical operators for this oscillator . a system of coupled oscillators , then , carries an irreducible representation of the heisenberg - weyl algebra hw( ) , spanned by the canonical operators along with the identity operator .these operators satisfy the commutation relations = i\hbar \delta_{ij } i ] , with the skew - symmetric matrix and the identity matrix . 
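A small numerical sketch of this symplectic structure is given below; it assumes the ordering q_1, p_1, ..., q_n, p_n of the canonical operators, builds the corresponding skew-symmetric form, and checks that a one-mode squeezing matrix and a phase rotation are symplectic. The specific parameter values are illustrative.

```python
import numpy as np

def symplectic_form(n_modes):
    """Block-diagonal skew-symmetric form, one 2x2 block [[0, 1], [-1, 0]] per mode
    (assumes the ordering q1, p1, ..., qn, pn of the canonical operators)."""
    block = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(n_modes), block)

def is_symplectic(S, tol=1e-12):
    n_modes = S.shape[0] // 2
    Omega = symplectic_form(n_modes)
    return np.allclose(S @ Omega @ S.T, Omega, atol=tol)

# One-mode squeezing by r: q -> exp(-r) q, p -> exp(r) p.
r = 0.7
S_squeeze = np.diag([np.exp(-r), np.exp(r)])
print(is_symplectic(S_squeeze))   # True

# A rotation (phase shift) is also symplectic.
theta = 0.3
S_rot = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
print(is_symplectic(S_rot))       # True
```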
for a state represented as a density matrix, the _ means _ of the canonical operators is a vector defined as the expectation values , and the _ covariance matrix _ is defined as a _ gaussian state _ ( a state whose wigner function is gaussian and thus possesses a quasiclassical description ) is completely characterized by its means and covariance matrix .coherent states , squeezed states , and position- and momentum - eigenstates are all examples of gaussian states .we define to be the group of linear transformations of the canonical operators ; this group corresponds to the infinite - dimensional ( oscillator ) representation of the `` clifford group '' employed by gottesman . for a system of oscillators , it is the unitary representation of the group isp( ) ( the inhomogeneous linear symplectic group in phase space coordinates ) which is the semi - direct product of phase - space translations ( the heisenberg - weyl group hw( ) ) plus one- and two - mode squeezing ( the linear symplectic group sp( ) ) .phase space displacements are generated by hamiltonians that are linear in the canonical operators ; a displacement operator hw( ) is defined by a real -vector .a symplectic transformation sp( ) , with a real matrix satisfying , is generated by a hamiltonian that is a homogeneous quadratic polynomial in the canonical operators . a general element be expressed as a product , and transforms the canonical operators as the group consists of unitary transformations that map gaussian states to gaussian states ; however , unitary transformations do not describe all physical processes . in the following ,we include other ( non - unitary ) cp maps that correspond to processes such as dissipation or measurement .we define the _ gaussian semigroup _, denoted , to be the set of gaussian cp maps on modes : a gaussian cp map takes any gaussian state to a gaussian state . because gaussian cp maps are closed under composition but are not necessarily invertible , they form a semigroup . a general element defined by its action on the canonical operators as where is a real -vector , and are real matrices , and is no longer required to be symplectic .( [ eq : actionofsemigrouponcanonical ] ) includes the transformations ( [ eq : clifford ] ) plus additive noise processes described by quantum stochastic noise operators ( the vector ) with expectation values equal to zero and covariance matrix here , is a gaussian ` reservoir ' state which , in order to define a cp map , must be chosen such that the noise operators satisfy the quantum uncertainty relations .this condition is satisfied if the noise operators define a positive definite density matrix , which leads to the condition the group is recovered for .the action of the gaussian semigroup on the means and covariance matrix is straightforward and given by because the means and covariance matrix completely define a gaussian state , the resulting action of the gaussian semigroup on gaussian states can be easily calculated via this action .the gaussian semigroup represents a broad framework to describe several important types of processes in a quantum optical circuit .the group comprises the unitary transformations describing phase space displacements and squeezing ( both one and two mode ) .introduction of noise to the circuit ( e.g. 
, via linear amplification ) is also in .furthermore , the gaussian semigroup describes certain measurements in the quantum circuit .these include measurements where the outcome is discarded ( thus evolving the system to a mixed state ) or retained ( where the system follows a specific quantum trajectory defined by the measurement record ) .finally , the gaussian semigroup includes gaussian cp maps conditioned on the outcome of such measurements . for details and examples of all of these types of gaussian semigroup transformations ,see .using the framework of the gaussian semigroup , it is straightforward to prove the classical simulatability result of bartlett and sanders . _theorem : _ any quantum information process that initiates in a gaussian state and that performs only gaussian semigroup maps can be _ efficiently _ simulated using a classical computer . _ proof : _ recall that any gaussian state is completely characterized by its means and covariance matrix . for any quantum information process that initiates in a gaussian state and involves only gaussian semigroup maps , one can follow the evolution of the means and the covariance matrix rather than the quantum state itself . for a system of oscillators , there are independent means and elements in the ( symmetric ) covariance matrix ; thus , following the evolution of these values requires resources that are polynomial in the number of coupled systems ._ qed _ because most current experimental techniques in quantum optics are describable by gaussian semigroup maps ,this theorem places a powerful constraint on the capability of achieving quantum computational speedups ( tasks that are not efficient on any classical machine ) using quantum optics .semigroup techniques provide a powerful tool for constructing and assessing new quantum information protocols using quantum optics .these techniques have been used to show that algorithms or circuits consisting of only gaussian semigroup maps can be efficiently simulated on a classical computer , and thus do not provide the ability to perform quantum information processing tasks efficiently that can not be performed efficiently on a classical machine .eisert et al use related techniques to show that local gaussian semigroup transformations are insufficient for distilling entanglement : an important process for quantum communication and distributed quantum computing .most current quantum optics experiments consist only of gaussian semigroup transformations ; thus , the challenge is to exploit this semigroup to prove new theorems , limitations and possibilities for quantum information processing using optics . this project has been supported by macquarie university and the australian research council .the author thanks b. c. sanders for helpful discussions .99 nielsen m a and chuang i l 2000 quantum computation and quantum information ( cambridge : cambridge university press )
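To make the efficiency claim concrete, the sketch below tracks only the means and the covariance matrix of a single-mode Gaussian state under a map of the form described above, mean -> A mean + d and covariance -> A cov A^T + N; the lossy-channel parameters and the convention that the vacuum covariance is the identity are illustrative assumptions.

```python
import numpy as np

def apply_gaussian_map(mean, cov, A, d, N):
    """Evolve a Gaussian state, stored as (means, covariance matrix),
    under a Gaussian CP map acting as x -> A x + d with additive noise covariance N."""
    return A @ mean + d, A @ cov @ A.T + N

# Single mode, vacuum-like initial state (covariance normalised to the identity,
# an assumption of this sketch).
mean = np.zeros(2)
cov = np.eye(2)

# Example: a lossy channel of transmissivity eta, modelled as A = sqrt(eta) * I with
# added vacuum noise N = (1 - eta) * I, preceded by a phase-space displacement d.
# These are standard textbook choices used here purely for illustration.
eta = 0.8
A = np.sqrt(eta) * np.eye(2)
d = np.array([1.0, 0.0])
N = (1.0 - eta) * np.eye(2)

mean, cov = apply_gaussian_map(mean, cov, A, d, N)
print(mean)   # displaced mean
print(cov)    # updated covariance; only O(n^2) numbers are ever tracked per step
```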
|
a framework to describe a broad class of physical operations ( including unitary transformations , dissipation , noise , and measurement ) in a quantum optics experiment is given . this framework provides a powerful tool for assessing the capabilities and limitations of performing quantum information processing tasks using current experimental techniques . the gottesman - knill theorem is generalized to the infinite - dimensional representations of the group stabilizer formalism and further generalized to include non - invertable semigroup transformations , providing a theorem for the efficient classical simulation of operations within this framework . as a result , we place powerful constraints on obtaining computational speedups using current techniques in quantum optics .
|
fully nonlinear pdes arise in many areas , including differential geometry ( the monge ampre equation ) , mass transportation ( the monge kantorovich problem ) , dynamic programming ( the bellman equation ) and fluid dynamics ( the geostrophic equations ) .the computer approximation of the solutions of such equations is thus an important scientific task .there are at least three main difficulties apparent to someone attempting to derive numerical methods for fully nonlinear equations : first , the strong nonlinearity on the highest order derivative which generally precludes a variational formulation , second , a fully nonlinear equation does not always admit a classical solution , even if the problem data is smooth , and the solution has to sought in a generalised sense ( e.g. , viscosity solutions ) , which is bound to slow down convergence rates , and third , a common problem in nonlinear solvers , the exact solution may not be unique and constraints , such as convexity requirements must be included in the constraints to ensure uniqueness .regardless of the problems , the _ numerical approximation of fully nonlinear second order elliptic equations _, as described in , have been the object of considerable recent research , particularly for the case of monge ampre of which are selected examples . for more general classes of fully nonlinear equations some methods have been presented , most notably , at least from a theoretical view point , in where the author presents a finite element method shows stability and consistency ( hence convergence ) of the scheme , following a classical `` finite difference '' approach outlined by which requires a high degree of smoothness on the exact solution . from a practical point of viewthis approach presents difficulties , in that the finite elements are hard to design and complicated to implement , in a useful overview of bzier - bernestein splines in two spatial dimensions is provided and a full implementation in .similar difficulties are encountered in finite difference methods and the concept of _ wide - stencil _ appears to be useful , for example by . in the authors give a method in which they approximate the general second order fully nonlinear pde by a sequence of fourth order quasilinear pdes .these are quasilinear biharmonic equations which are discretised via mixed finite elements , or using high - regularity elements such as splines .in fact for the monge ampre equation , which admits two solutions , of which one is convex and another concave , this method allows for the approximation of both solutions via the correct choice of a parameter . on the other handalthough computationally less expensive than finite elements ( an alternative to mixed methods for solving the biharmonic problem ) , the mixed formulation still results in an extremely large algebraic system and the lack of maximum principle for general fourth order equations makes it hard to apply vanishing viscosity arguments to prove convergence .a somewhat different approach , based on -penalty , has been recently proposed by , as well as `` pseudo time '' one by .it is worth citing also a _ least square _ approach described by .this method consists in minimising the mean - square of the residual , using a lagrange multiplier method . also here a fourth order elliptic term appears in the energy . 
in this paper , we depart from the above proposed methods and explore a more `` direct '' approach by applying the _ nonvariational finite element method _ , introduced in , as a solver for the newton iteration directly derived from the pde . to be more specific , consider the following model problem } : = { f}({\ensuremath{{\ensuremath{\,\mathrm{d}}}^2}}u ) - f = 0\ ] ] with homogeneous dirichlet boundary conditions where is prescribed function and is a real - valued algebraic function of symmetric matrixes , which provides an elliptic operator in the sense of , as explained below in definition [ def : ellipticity ] .the method we propose , consists in applying a newton s method , given below by equation of the pde , which results in a sequence of linear nonvariational elliptic pdes that fall the framework of the nonvariational finite element method ( nvfem ) proposed in .the results in this paper are computational , so despite not having a complete proof of convergence , we test our algorithm various problems that are specifically constructed to be well posed .in particular , we test our method on the monge ampre problem , which is the de - facto benchmark for numerical methods of fully nonlinear elliptic equations .this is in spite of monge ampre having an extra complication , which is conditional ellipiticity ( the operator is elliptic only if the function is convex or concave .a crucial , empirically observed feature of our method is that the convexity ( or concavity ) is automatically preserved if one uses elements or higher . for elementsthis is not true and the scheme must be stabilized by reenforcing convexity ( or concavity ) at each timestep .this was achieved in using a semidefinite programming method . in a different spirit , but somewhat reminiscent , a stabilization procedure was obtained in by adding a penalty term .the rest of this paper is set out as follows . in [ sec : notation_and_discretisation ] we introduce some notation , the model problem , discuss its ellipticity and newton s method , which yields a sequences of nonvariational linearised pde s . in [sec : ndfem ] we review of the nonvariational finite element method proposed in and apply it to discretise the nonvariational linearised pde s in newton s method . in [ sec : unconstrained - fnl - pde ] we numerically demonstrate the performance of our discretisation on a class of fully nonlinear pde , those that are elliptic and well posed without constraining our solution to a certain class of functions . in [ sec : ma ] we turn to conditionally elliptic problems by dealing with the prime example of such problems , i.e. , monge ampre .we apply the discretisation to the monge ampre equation making use of the work to check _ finite element convexity _ is preserved at each iteration . finally in [ sec : pucci ] we address the approximation of pucci s equation , which is another important example of fully nonlinear elliptic equation . 
all the numerical experiments for this research , were carried out using the dolfininterface for fenics and making use of gnuplotand paraviewfor the graphics .let be an open and bounded lipschitz domain .we denote to be the space of square ( lebesgue ) integrable functions on together with its inner product and norm .we denote by the action of a distribution on the function .we use the convention that the derivative of a function is a row vector , while the gradient of , is the derivatives transpose ( an element of , representing in the canonical basis ) .hence for second derivatives , we follow the common innocuous abuse of notation whereby the hessian of is denoted as ( instead of the more consistent ) and is represented by a matrix .the standard sobolev spaces are where is a multi - index , and derivatives are understood in a weak sense .we consider the case when the model problem is uniformly elliptic in the following sense .[ def : ellipticity ] the operator } ] is _ conditionally elliptic_. the operator } ] is conditionally elliptic on and unless otherwise stated we will also assume that .the smoothness assumption [ hyp : smooth - elliptic - operator ] allows to apply newton s method to solve problem ( [ eq : modelproblem ] ) .given the initial guess , with , for each , find with such that },\ ] ] where indicates the ( frchet ) derivative , which is formally given by } - { { \ensuremath{{\mathscr}n}\xspace}[u]}}{\epsilon } \\ & = \lim_{\epsilon\rightarrow 0 } \frac{{f}({\ensuremath{{\ensuremath{\,\mathrm{d}}}^2}}u + \epsilon { \ensuremath{{\ensuremath{\,\mathrm{d}}}^2}}v ) - { f}({\ensuremath{{\ensuremath{\,\mathrm{d}}}^2}}u)}{\epsilon } \\ & = { f}'({\ensuremath{{\ensuremath{\,\mathrm{d}}}^2}}u ) : { \ensuremath{{\ensuremath{\,\mathrm{d}}}^2}}v , \end{split}\ ] ] for each . combining ( [ eq : newtonsmethod ] ) and ( [ eq : newtonsmethod2 ] ) then results in the following nonvariational sequence of linear pdes .given for each find such that the pde ( [ eq : system - of - nonvar - pdes ] ) comes naturally in a nonvariational form .if we attempted to rewrite into a variational form , in order , say , to apply a `` standard '' galerkin method , we would introduce an advection term which would depend on derivatives of , generic }}- { \operatorname{div}}{\ensuremath{\!\left [ { { f}'({\ensuremath{{\ensuremath{\,\mathrm{d}}}^2}}v ) } \right]}}\nabla w.\ ] ] where the matrix - divergence is taken row - wise : } } : = { \ensuremath{\!\left ( { \sum_{i=1}^d \frac{\partial}{\partial x_i } { \ensuremath{\!\left[{{{\ensuremath{{\ensuremath{\!\left[{{{{\ensuremath{\boldsymbol{f'}}}}}}\right]}}_{i}^{1}}}}({\ensuremath{{\ensuremath{\,\mathrm{d}}}^2}}v({\ensuremath{\boldsymbol{x}}}))}\right ] } } , \dotsc , \sum_{i=1}^d \frac{\partial}{\partial x_i } { \ensuremath{\!\left[{{{\ensuremath{{\ensuremath{\!\left[{{{{\ensuremath{\boldsymbol{f'}}}}}}\right]}}_{i}^{d}}}}({\ensuremath{{\ensuremath{\,\mathrm{d}}}^2}}v({\ensuremath{\boldsymbol{x}}}))}\right ] } } } \right)}}\ ] ] and the chain rule provides us , for each }}}}}} ] .let be a conforming , shape regular triangulation of , namely , is a finite family of sets such that 1 . implies is an open simplex ( segment for , triangle for , tetrahedron for ) , 2 . for any we havethat is a full subsimplex ( i.e. , it is either , a vertex , an edge , a face , or the whole of and ) of both and and 3 . .we use the convention where denotes the _ meshsize function _ of , i.e. 
, we introduce the _ finite element spaces _ where denotes the linear space of polynomials in variables of degree no higher than a positive integer .we consider to be fixed and denote by and .the discretisation of problem then reads : find ) \in { \mathring{{\ensuremath{{{\ensuremath{{\mathbb}v}\xspace}}^{}}}}}\times { \ensuremath{{{\ensuremath{{\mathbb}v}\xspace}}^{^}}}{d\times d} ] is a regular function , is a _ regular function _ , or just _ regular _ , if it can be represented by a lebesgue measurable function such that for all .we follow the customary and harmless abuse in identifying with . ]which the generalised hessian might fail to be .this allows to apply nonlinear functions such as to ] such that ,\phi}\right\rangle } } } } + \int_{\ensuremath{\varomega}\xspace}{\nabla u^{n+1}}\otimes{\nabla \phi } - \int_{\partial { \ensuremath{\varomega}\xspace } } { \nabla u^{n+1}}\otimes{{{\ensuremath{\boldsymbol{n}}}}}\,\phi = { { \ensuremath{\boldsymbol{0 } } } } { \quad{\:\forall\:}}\phi\in{\ensuremath{{{\ensuremath{{\mathbb}v}\xspace}}^{\\ } } } { \ensuremath{\text { and } } } { \ensuremath{{\ensuremath{\left\langle{{\ensuremath{{{\ensuremath{{\ensuremath{\boldsymbol{n}}}}}({\ensuremath{{\ensuremath{\boldsymbol{h}}}}}[u^{n}])}{:}{{\ensuremath{{\ensuremath{\boldsymbol{h}}}}}[u^{n+1}]}}},\psi}\right\rangle } } } } = { \ensuremath{{\ensuremath{\left\langle{g({\ensuremath{{\ensuremath{\boldsymbol{h}}}}}[u^n]),\psi}\right\rangle } } } } { \quad{\:\forall\:}}\psi\in{\mathring{{\ensuremath{{{\ensuremath{{\mathbb}v}\xspace}}^{}}}}}. \end{gathered}\ ] ] in this section we detail numerical experiments aimed at demonstrating the application of to a simple model problem .[ ex : fullynl - abs ] the first example we consider is a fully nonlinear pde with a very smooth nonlinearity . the problem is } : = { \ensuremath{\operatorname{sin}\left(\delta u\right ) } } + 2\delta u -f & = 0 \text { in } { \ensuremath{\varomega}\xspace } , \\u & = 0 \text { on } { \ensuremath{\partial_{}}\xspace}{\ensuremath{\varomega}\xspace}. \end{split } \ ] ] which is specifically constructed to be uniformly elliptic .indeed which is uniformly positive definite .the newton linearisation of the problem is then : given , for find such that and our approximation scheme is nothing but [ eq : discretenewtonsmethod ] with figure [ fig : nonlin - regular - abs - lap ] details a numerical experiment on this problem when and when ^ 2 ] triangulated with a criss - cross mesh .a similar example is also studied in ( * ? ? ?* ex 5.2 ) using bhmers method .in this section we propose a numerical method for the monge ampre dirichlet(mad ) problem our numerical experiments exhibit robustness of our method when computing ( smooth ) classical solutions of the mad equation .most importantly we noted the following facts : 1 .the use of elements with is essential as do not work , 2 .the convexity of the newton iterates is conserved throughout the computation , in a similar way to the observations in , where the authors prove this convexity - conservation property .our observations are purely empirical from computations , which leaves an interesting open problem of proving this property . 
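The structure of the Newton iteration for this example can be seen already in one space dimension with finite differences; the sketch below is an illustrative analogue only (the paper uses the nonvariational finite element method in two dimensions), with the grid size, the manufactured solution sin(pi x) and the stopping tolerance chosen arbitrarily.

```python
import numpy as np

# One-dimensional finite-difference analogue of the example above:
#   sin(u'') + 2 u'' = f on (0, 1), u(0) = u(1) = 0,
# solved by Newton's method; each step solves (cos(u_n'') + 2) * delta'' = residual.
M = 200                          # number of interior grid points (arbitrary choice)
h = 1.0 / (M + 1)
x = np.linspace(h, 1.0 - h, M)   # interior nodes

# Manufactured solution u_ex = sin(pi x), so u_ex'' = -pi^2 sin(pi x).
u_ex = np.sin(np.pi * x)
lap_ex = -np.pi ** 2 * np.sin(np.pi * x)
f = np.sin(lap_ex) + 2.0 * lap_ex

# Second-difference matrix with homogeneous Dirichlet data built in.
D2 = (np.diag(-2.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1)
      + np.diag(np.ones(M - 1), -1)) / h ** 2

u = np.zeros(M)                  # initial guess u^0 = 0
for n in range(20):
    lap_u = D2 @ u
    residual = f - np.sin(lap_u) - 2.0 * lap_u
    if np.linalg.norm(residual, np.inf) < 1e-10:
        break
    J = np.diag(np.cos(lap_u) + 2.0) @ D2   # Newton linearisation of the operator
    delta = np.linalg.solve(J, residual)
    u += delta                               # delta vanishes on the boundary by construction

print("Newton steps:", n, " max nodal error:", np.max(np.abs(u - u_ex)))
```

The nodal error is dominated by the O(h^2) consistency error of the second difference, while the Newton residual drops to machine precision in a handful of steps.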
to clarify assumption [ ass : well - posed ] for the mad problem ( [ eq : ma ] ) , in view of the characteristic expansion of determinant if where is the matrix of cofactors of .hence this implies that the linearisation of mad is only well posed if we restrict the class of functions we consider to those that satisfy for some ( -dependent ) .note that ( [ eqn : mad - ellipticity - via - derivative ] ) is equivalent to the following two conditions as well have shown that for the _ continuous _ ( infinite dimensional ) newton method described in [ sec : newtons - method ] , given an strictly convex initial guess , each iterate will be convex .it is crucial that this property is preserved at the discrete level , as it guarantees the solvability of each iteration in the _ discretised _ newton method . for this itthe right notion of convexity turns out to be the _ finite element convexity _ as developed in . in , an intricate method based on semidefinite programming provided a way to constrain the solution in the case of elements .here we observe that the finite element convexity is automatically preserved , provided we use or higher conforming elements . in view of ( [ eq : der - of - det - is - cof ] ) it is clear that applying the methodology set out in [ sec : unconstrained - fnl - pde ] we set [ rem : cof - to - det ] for a generic ( twice differentiable ) function it holds that using this formulation we could construct a simple fixed point method for the monge ampre equation . in view of remark [ rem :cof - to - det ] can be further simplified newton s method reads : given for each find such that in this section we study the numerical behaviour of the scheme presented in definition [ the : nlfem ] applied to the mad problem .we present a set of benchmark problems constructed from the problem data such that the solution to the monge ampre equation is known .we fix to be the square ^ 2 ] ( specified in the problem ) and test convergence rates of the discrete solution to the exact solution .figures [ fig : eoc - ma1][fig : eoc - ma2 ] details the various experiments and shows numerical convergence results for each of the problems studied as well as solution plots , it is worthy of note that each of the solutions seems to be convex , however this is not necessarily the case .they are all though _ finite element convex _ . in each of these casesthe dirichlet boundary values are not zero .the implementation of nontrivial boundary conditions is described in or in more detail in . as with any newton method we require a starting guess , not just for but also of ] was sufficient . the initial guess to the mad problem must be more carefully sought .since we restrict our solution to the space of convex functions , it is prudent for the initial guess to also be convex .moreover we must rule out constant and linear functions over , since the hessian of these objects would be identically zero , destroying ellipticity on the initial newton step .hence we specify that the initial guess to ( [ eq : linearised - ma ] ) must be strictly convex . rather than postprocessing the finite element hessian from a initial project ( although this is an option ) to initialise the algorithm we solve a linear problem using the nonvariational finite element method . 
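Returning to the linearisation used above, the identity behind it, namely that the directional derivative of the determinant is the Frobenius product of the cofactor matrix with the perturbation, can be checked numerically; the sketch below does this for a random symmetric 2x2 matrix, with all data chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cofactor(H):
    """Cofactor matrix of a small square matrix: Cof(H)_{ij} = d det(H) / d H_{ij}."""
    n = H.shape[0]
    C = np.empty_like(H)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(H, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

# Random symmetric "Hessian" H (kept positive definite) and symmetric perturbation V.
A = rng.standard_normal((2, 2)); H = A + A.T + 4.0 * np.eye(2)
B = rng.standard_normal((2, 2)); V = B + B.T

t = 1e-6
finite_diff = (np.linalg.det(H + t * V) - np.linalg.det(H - t * V)) / (2.0 * t)
cof_product = np.sum(cofactor(H) * V)     # Cof(H) : V, the Frobenius inner product

print(finite_diff, cof_product)            # the two numbers agree to high accuracy
```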
following a trick , described in , we chose to be the standard -finite element approximation of such that [ rem : degree - of - fe - space ] in the previous example the lowest order convergent scheme was found by taking to be the space of piecewise linear functions ( ) .for the mad problem we require a higher approximation power , hence we take to be the space of piecewise quadratic functions , .although the choice of gives a stable scheme , convergence is not achieved .this can be characterised by ( * ? ? ?* thm 3.6 ) that roughly says you require more approximation power than what piecewise linear functions provide to be able to approximate all convex functions .compare with figure [ fig : eoc - ma4 ] .the numerical examples given in figures [ fig : eoc - ma1][fig : eoc - ma2 ] both describe the numerical approximation of classical solutions to the mad problem . in the case of figure[ fig : eoc - ma1 ] whereas in figure [ fig : eoc - ma2 ] .we now take a moment to study less regular solutions , solutions which are not classical . in this testwe the solution for .the solution . in figures [ fig : eoc - alpha-0 - 55][fig : eoc - alpha-0 - 7 ] we vary the value of and study the convergence properties of the method .we note that the method fails to find a solution for .finally in figure [ fig : monge - adaptive ] we conduct an adaptive experiment based on a gradient recovery aposteriori estimator .the recovery estimator we make use of is the zienkiewicz zhu patch recovery technique see , or for further details .in this section we look to discretise the nonlinear problem , in this case pucci s equation as a system of nonlinear equations .pucci s equation arises as a linear combination of pucci s extremal operators .it can nevertheless be written in an algebraically accessible form , without the need to compute the eigenvalues .let and be it s spectrum , then the extremal operators are with .the maximal ( minimal ) operator , commonly denoted ( ) , has coefficients that satisfy respectively . in the case the normalised pucci s equation reduces to finding such that where .note that if ( [ eq : pucci ] ) reduces to the poisson dirichlet problem .this can be easily seen when reformulating the problem as a second order pde . making use of the characteristic polynomial, we see thus pucci s equation can be written as which is a nonlinear combination of monge ampre and poisson problems .however owing to the laplacian terms , and unlike the monge ampre dirichletproblem ,pucci s equation is ( unconditionally ) uniformly elliptic for the discrete problem we use is a direct approximation of ( [ eq : pucci - pde ] ) , we seek }\right)}} ] . in figure[ fig : pucci - converence ] we detail a numerical experiement considering the case ] and the boundary data be given as figure [ fig : pucci - piecewise ] details the numerical experiment on this problem with various values of .since the solution to the pucci s equation with piecewise boundary ( [ eq : pucci - pw - boundary ] ) is clearly singular near the discontinuities we have also conducted an adaptive experiment based on a gradient recovery aposteriori estimator ( as in [ sec : nonclassical - monge ] ) . 
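before the adaptive results are discussed , the algebraic reformulation of pucci's operator used above can be checked numerically : for a symmetric 2x2 hessian the operator is a function of the trace ( the laplacian ) and the determinant ( the monge - ampère operator ) alone , so no eigendecomposition is needed . in the python sketch below the convention that the parameter weights the largest eigenvalue , as well as the random test matrix , are assumptions , since the normalisation is garbled in the source text .

import numpy as np

def pucci_eigen(h, a):
    l2, l1 = np.sort(np.linalg.eigvalsh(h))     # eigenvalues with l1 >= l2
    return a * l1 + l2

def pucci_algebraic(h, a):
    tr, det = np.trace(h), np.linalg.det(h)
    # nonlinear combination of the laplacian (trace) and monge-ampere (determinant) operators
    return 0.5 * (a + 1.0) * tr + 0.5 * (a - 1.0) * np.sqrt(tr**2 - 4.0 * det)

rng = np.random.default_rng(1)
m = rng.standard_normal((2, 2))
h = m + m.T                                      # random symmetric "hessian" (assumption)
for a in (1.0, 1.5, 3.0):                        # a = 1 recovers the laplacian / poisson case
    print(np.isclose(pucci_eigen(h, a), pucci_algebraic(h, a)))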
as can be seen from figure [ fig : pucci - adaptive ]we regain qualitively similar results using far fewer degrees of freedom .in this work we have proposed a novel numerical scheme for fully nonlinear and generic quasilinear pdes .the scheme was based on a previous work for nonvariational pdes ( those given in nondivergence form ) .we demonstrated for classical solutions to the monge ampre equation the method is robust again showing numerical convergence . for less regular viscocity solutionswe have found that the method must be augmented with a penalty term in a similar light to .n. e. aguilera and p. morin .on convex functions and the finite element method ._ siam j. numer ._ , 470 ( 4):0 31393157 , 2009 .issn 0036 - 1429 .doi : 10.1137/080720917 .. m. ainsworth and j. t. oden . _ a posteriori error estimation in finite element analysis_. pure and applied mathematics ( new york ) .wiley - interscience [ john wiley & sons ] , new york , 2000 .isbn 0 - 471 - 29411-x .k. bhmer . on finite element methods for fully nonlinear elliptic equations of second order ._ siam j. numer ._ , 460 ( 3):0 12121249 , 2008 .issn 0036 - 1429 .doi : 10.1137/040621740 .url http://dx.doi.org/10.1137/040621740 .s. c. brenner , t. gudi , m. neilan , and l .- y . sung .penalty methods for the fully nonlinear monge - ampre equation . _ math . comp ._ , 800 ( 276):0 19791995 , 2011 .issn 0025 - 5718 .doi : 10.1090/s0025 - 5718 - 2011 - 02487 - 7 .url http://dx.doi.org/10.1090/s0025-5718-2011-02487-7 .l. a. caffarelli and x. cabr ._ fully nonlinear elliptic equations _ , volume 43 of _ american mathematical society colloquium publications_. american mathematical society , providence , ri , 1995 .isbn 0 - 8218 - 0437 - 5 .o. davydov and a. saeed .stable splitting of bivariate splines spaces by bernstein - bzier methods .online preprint , university of strathclyde , department of mathematics and statistics , university of strathclyde , glasgow , scotland gb , 11 2010 .url http://personal.strath.ac.uk/oleg.davydov/stable_splbb.html . to appear on lncs .o. davydov and a. saeed .numerical solution of fully nonlinear elliptic equations by bhmer s method . technical report , university of strathclyde , 26 richmond street , glasgow gb - g1 1xh , scotland , uk , january 2012 .url http://personal.strath.ac.uk/oleg.davydov/fully_nonlin.html .e. j. dean and r. glowinski .numerical solution of the two - dimensional elliptic monge - ampre equation with dirichlet boundary conditions : an augmented lagrangian approach . _ c. r. math .3360 ( 9):0 779784 , 2003 .issn 1631 - 073x .e. j. dean and r. glowinski . on the numerical solution of a two - dimensional pucci s equation with dirichletboundary conditions : a least - squares approach ._ c. r. math .paris _ , 3410 ( 6):0 375380 , 2005 .issn 1631 - 073x .x. feng and m. neilan .mixed finite element methods for the fully nonlinear monge - ampre equation based on the vanishing moment method ._ siam j. numer ._ , 470 ( 2):0 12261250 , 2009 .issn 0036 - 1429 .doi : 10.1137/070710378 .url http://dx.doi.org/10.1137/070710378 .x. feng and m. neilan .vanishing moment method and moment solutions for fully nonlinear second order partial differential equations ._ , 380 ( 1):0 7498 , 2009. issn 0885 - 7474 .doi : 10.1007/s10915 - 008 - 9221 - 9. url http://dx.doi.org/10.1007/s10915-008-9221-9 .c. t. kelley . _ iterative methods for linear and nonlinear equations _ , volume 16 of _ frontiers in applied mathematics_. 
society for industrial and applied mathematics ( siam ) , philadelphia , pa , 1995 .isbn 0 - 89871 - 352 - 8 . with separately available software .h. j. kuo and n. s. trudinger .discrete methods for fully nonlinear elliptic equations ._ siam j. numer ._ , 290 ( 1):0 123135 , 1992 .issn 0036 - 1429 .doi : 10.1137/0729008 .url http://dx.doi.org/10.1137/0729008 .kuo and n. s. trudinger .estimates for solutions of fully nonlinear discrete schemes . in_ trends in partial differential equations of mathematical physics _ , volume 61 of _ progr .nonlinear differential equations appl ._ , pages 275282 .birkhuser , basel , 2005 .doi : 10.1007/3 - 7643 - 7317 - 2_20 .url http://dx.doi.org/10.1007/3-7643-7317-2_20 .o. lakkis and t. pryer . a finite element method for second order nonvariational elliptic problems ._ siam j. sci ._ , 330 ( 2):0 786801 , 2011 .issn 1064 - 8275 .doi : 10.1137/100787672 .url http://dx.doi.org/10.1137/100787672 .a. logg and g. n. wells .dolfin : automated finite element computing ._ acm trans .math . software _ , 370 ( 2):0 art . 20 , 28 , 2010 .issn 0098 - 3500 .doi : 10.1145/1731022.1731030 .url http://dx.doi.org/10.1145/1731022.1731030 . a. m. oberman .wide stencil finite difference schemes for the elliptic monge - ampre equation and functions of the eigenvalues of the hessian ._ discrete continb _ , 100 ( 1):0 221238 , 2008 . issn 1531 - 3492 .
|
we present a continuous finite element method for some examples of fully nonlinear elliptic equations . a key tool is the discretisation proposed in lakkis & pryer ( 2011 ) , which allows us to work directly on the strong form of a linear pde . an added benefit of this discretisation method is that a _ recovered ( finite element ) hessian _ is a by - product of the solution process . we build on the linear basis and ultimately construct two different methodologies for the solution of second order fully nonlinear pdes . benchmark numerical results illustrate the convergence properties of the scheme for some test problems as well as for the monge - ampère equation and the pucci equation .
|
interferometers are widely used for astronomical observations as they provide high angular resolutions and large collecting areas .existing interferometers in the radio ( e.g. the very large array ( vla ) , the berkeley - illinois - maryland association ( bima ) , etc . ) and in the optical band ( e.g. , the center for high angular resolution astronomy ( chara ) , the palomar testbed interferometer , the keck interferometer , the very large telescope ( vlt ) interferometer , etc . )will soon be complemented by new facilities such as the expanded very large array ( evla ) , the square kilometer array ( ska ) , the atacama large millimeter array ( alma ) , and the low frequency array ( lofar ) .interferometers are now also being developed to produce maps of the cosmic microwave background on small scales ( e.g. , the cosmic background imager ( cbi ) , the array for microwave background anisotropy ( amiba ) , the very small array ( vsa ) , etc ) .interferometric arrays , however , do not provide a direct image of the observed sky , but instead measure its fourier transform at a finite number of discrete samplings , or ` ' points , corresponding to each antenna pair in the array .the image in real space must therefore be reconstructed from the plane by inverse fourier transform while deconvolving the effective beam arising from the finite sampling ( see thompson et al .1986 , perley et al .1989 , and taylor et al .1999 for reviews ) .for this purpose several elaborate methods have been developed .for instance , the commonly used cleaning algorithm implemented in the nrao aips software package ( hogbom 1974 ; schwarz 1978 ; clark 1980 ; cornwell 1983 ) , relies on successive subtraction of real - space delta functions from the plane .another method is based on maximum entropy ( e.g. , cornwell & evans 1985 ) and consists of finding the simplest image consistent with the data .these methods are well - tested and appropriate for various applications ; however , the methods are non - linear and do not necessarily converge in a well - defined manner .consequently , they are not well - suited for quantitative image shape measurements requiring high precision . in particular , weak gravitational lensing ( see mellier 1999 ; bartelmann & schneider 2000 for reviews ) requires the statistical measurements of weak distortions in the shapes of background objects and thus can not afford the instabilities and potential biases inherent in these methods . while interferometric surveys offer great promises for weak lensing ( kamionkowski et al .1998 ; refregier et al 1998 ; schneider 1999 ) , a different approach for shape measurements is therefore required to achieve the necessary accuracy and control of systematics . 
in this paper , we present a new method for reconstructing images from interferometric observations .it is based on the formalism introduced by refregier ( 2001 , paper i ) and refregier & bacon ( 2001 , paper ii ) , in which object shapes are decomposed into orthonormal shape components , or ` shapelets ' .the hermite basis functions used in this approach have a number of remarkable properties which greatly facilitate the modeling of object shapes .in particular , they are invariant under fourier transformation ( up to a rescaling ) and are thus a natural choice for interferometric imaging .we show how shapelet components can be directly fitted on the plane to reconstruct an interferometric image .the fit is linear in the shapelet coefficients and can thus be performed by simple matrix multiplications . since the shapelet components of all sources are fitted simultaneously ,cross - talk between different sources ( e.g. , when the sidelobe from one source falls at the position of a second source ) are avoided , or at least quantified .the method also provides the full covariance matrix of the shapelet coefficients , and is robust .we also show how the complex effects of bandwidth smearing , time averaging and non - coplanarity of the array can be easily and fully corrected for in our method .our method is thus well - suited for applications requiring unbiased , high - precision measurements of object shapes .in particular , we show how the method can be combined with the results of paper ii to provide a clean measurement of weak gravitational lensing with interferometers .we test our methods using both observations from the first radio survey ( becker et al .1995 ; white et al . 1997 ) and numerical simulations corresponding to the observing conditions of that survey .we also show how our method can be implemented on parallel computers and discuss its performance in comparison with the cleaning method .our paper is organized as follows . in [ shapelets ] , we first summarize the relevant features of the shapelet method . in [ method ] , we describe how shapelets can be applied to image reconstruction with interferometers . in [ results ] , we discuss tests of the method using both simulated and real first observations . in [ lensing ] we show how our method can be used for weak lensing applications . our conclusions are summarized in [ conclusion ] .we begin by summarizing the relevant components of the shapelet method described in paper i. 
in this approach , the surface brightness of an object is decomposed as where ^{\frac{1}{2}}}\ ] ] are the two - dimensional orthonormal hermite basis functions of characteristic scale , is the hermite polynomial of order m , and .the basis is complete and yields fast convergence in the expansion if and are , respectively , close to the size and location of the object .the basis functions can be thought of as perturbations around a two - dimensional gaussian , and are thus natural bases for describing the shapes of most astronomical objects .they are also the eigenfunctions of the quantum harmonic oscillator ( qho ) , allowing us to use the powerful formalism developed for that problem .a similar decomposition scheme using laguerre basis functions has been independently proposed by bernstein & jarvis ( 2001 ) .the hermite basis functions have remarkable mathematical properties .in particular , let us consider the fourier transform of an object intensity , .it can be decomposed as , where are the fourier - transforms of the basis functions , which obey the dual property from the orthonomality of the basis functions , the coefficients are given by this invariance ( up to a rescaling ) under fourier transformation ( eq . [[ eq : duality ] ] ) makes this basis set a natural choice for interferometric imaging .in this section , we describe how shapelets can be applied to interferometric imaging .we first briefly discuss how images are mapped onto the plane by interferometers .we also show how the plane can be binned into cells to reduce computation time and memory requirements .we then describe how the shapelet coefficients can be directly fit onto the binned plane using a linear procedure .finally , we describe how the resulting shapelet coefficients can be optimally combined to reconstruct the image , to co - add several pointings , and to measure shape parameters .an interferometer consists of an array of antennae whose output signals are correlated to measure a complex ` visibility ' for each antenna pair ( see thompson et al .1986 , perley et al .1989 , and taylor et al .1999 for reviews ) .each visibility is then assigned a point on the ` plane ' corresponding to the two - dimensional spacings between the antennae . in practice ,the visibilities are close to , but not exactly equal to a two - dimensional fourier transform of the sky brightness . within the conventions of perley ,schwab & bridle ( 1989 ) for the vla , the visibility measured for the antenna pair at time and at frequency is indeed given by },\ ] ] where is the surface brightness of the sky at position with respect to the phase center , and is the ( frequency - dependent ) primary beam .for the vla , the primary beam power pattern can be well - approximated as the bessel function , where , is the observation frequency and is the position offset from the phase center ( condon et al .the coordinates are given by where is the wavelength of observation , and are the hour angle and declination of the phase center , and are the coordinate differences for the two antennas .the latter are measured in a fixed - earth coordinate system , for which the sky rotates about the axis .note that the positions of the visibilities define the synthesized beam pattern .since the coordinates are entirely determined by the antenna positions , source coordinates , and time and frequency of the observations , the synthesized beam is precisely known for interforemeters . 
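as a concrete illustration of the decomposition just summarized , the following python sketch builds the two - dimensional hermite ( shapelet ) basis on a pixel grid , projects a toy image onto it using the orthonormality relation , and reconstructs the image ; the same coefficients are what the uv - plane fit of the next sections recovers , up to the fourier duality of the basis . the toy source , pixel scale , shapelet scale and truncation order are assumptions chosen only for the demonstration and are not the survey values used later .

import math
import numpy as np
from numpy.polynomial.hermite import hermval

def phi_1d(n, x, beta):
    # dimensional 1d basis: (2^n sqrt(pi) n! beta)^(-1/2) h_n(x/beta) exp(-x^2 / (2 beta^2))
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = (2.0**n * np.sqrt(np.pi) * math.factorial(n) * beta) ** -0.5
    return norm * hermval(x / beta, c) * np.exp(-0.5 * (x / beta) ** 2)

npix, beta, nmax = 64, 1.0, 8                          # grid and expansion choices (assumptions)
x = (np.arange(npix) - npix / 2 + 0.5) * 0.25          # pixel centres
dx = x[1] - x[0]
xx, yy = np.meshgrid(x, x, indexing="ij")

# toy "source": an offset elliptical gaussian
image = np.exp(-0.5 * (((xx - 0.3) / 1.2) ** 2 + ((yy + 0.2) / 0.7) ** 2))

coeff = np.zeros((nmax + 1, nmax + 1))
model = np.zeros_like(image)
for n1 in range(nmax + 1):
    for n2 in range(nmax + 1):
        basis = np.outer(phi_1d(n1, x, beta), phi_1d(n2, x, beta))
        coeff[n1, n2] = np.sum(image * basis) * dx**2   # coefficient from the orthonormality relation
        model += coeff[n1, n2] * basis

print(np.max(np.abs(image - model)))                    # truncation error of the expansion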
only in the absence of a primary beam ( ) , for observations at zenith ( ) , and for small displacements from the phase center ( ) , does the visibility reduce to an exact fourier transform of the intensity . furthermore , the visibilities are measured in practice by averaging over small time and frequency intervals .the resulting averaged visibility is given by where and are the time and frequency window functions , respectively , and are normalized as . because the time and frequency intervals are typically very small , this double integralcan be evaluated by taylor expanding about the central values and of the window functions . for square - hat window functions of width ( exact ) and ( approximate ) , respectively , we obtain + \cdots\end{aligned}\ ] ] when the telescope points to a fixed location on the sky , the hour angle of the phase center changes as , where is the angular frequency of the earth . on the other hand ,the declination of the phase center remains constant .the above expression for can thus be computed analytically , leaving the two - dimensional -integral to evaluate numerically .note that this provides a direct and complete treatment of primary beam attenuation , time - averaging , bandwidth smearing and non - coplanarity of the array .these effects are difficult to correct for in the context of the standard cleaning method . in practice, the number of visibilities per observation is large ( ) . directly fitting the shape parameters to all pointswould thus require prohibitively large computing time and memory .instead , we use a binning scheme to reduce the effective number of points without loosing information . in the plane, we set a grid of size and average the visibilities inside each cell , where is one - half of the intended field of view , and the factor accounts for the nyquist frequency .the choice of is designed both to minimize the number of cells and to avoid smearing at large angular scales , which would otherwise act like an effective primary beam attenuation .we thus calculate the average visibility in the cell ( of size ) as where is the number of visibilities in the cell .this is the data we will use to reconstruct the image .we now wish to model the intensity of each source as a sum of shapelet basis functions centered on the source centroid , and scale .our goal is to estimate the shapelet coefficients of the sources given the binned data .( we will describe how the centroid and shapelet scales are chosen in practice in [ sim ] ) . in principle , the full plane provides complete shape information for the sources .however , due to the finite number and non - uniform spacings of the antennae , the ( fourier ) space is poorly sampled , thus hampering the decomposition .this prevents us from performing a simple linear decomposition as is done with optical images in real space ( see paper i ) .this problem can be largely resolved by making a linear fit to the plane with the shapelet coefficients as the free parameters . for this purpose ,the first step is to compute the binned visibilities corresponding to each shapelet basis functions for each source .this can be done by first computing the time- and frequency - averaged visibility by setting in equations ( [ eq : v_ij ] ) and ( [ eq : v_ij_bar ] ) . 
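the visibility relation and the cell - averaging step described above can be sketched as follows : generate toy visibilities for a pair of point sources through the two - dimensional fourier phase factor , add gaussian noise , assign each visibility to a cell of a regular uv grid and average within cells . the source positions and fluxes , the noise level and the cell size are assumptions ; primary beam attenuation , bandwidth smearing and time averaging are deliberately left out of this sketch .

import numpy as np

rng = np.random.default_rng(2)
nvis = 50_000
u = rng.uniform(-5e3, 5e3, nvis)             # baseline coordinates in wavelengths (toy)
v = rng.uniform(-5e3, 5e3, nvis)

# two point sources: (flux, offset l, offset m) in radians from the phase centre (assumption)
sources = [(1.0, 1.0e-4, -0.5e-4), (0.4, -2.0e-4, 1.5e-4)]
vis = np.zeros(nvis, dtype=complex)
for flux, l, m in sources:
    vis += flux * np.exp(-2j * np.pi * (u * l + v * m))
vis += 0.05 * (rng.standard_normal(nvis) + 1j * rng.standard_normal(nvis))   # thermal noise

delta = 100.0                                # uv cell size in wavelengths (assumption)
iu = np.floor(u / delta).astype(int)
iv = np.floor(v / delta).astype(int)
cells = {}                                   # (iu, iv) -> (sum of visibilities, count)
for i, j, w in zip(iu, iv, vis):
    s, n = cells.get((i, j), (0j, 0))
    cells[(i, j)] = (s + w, n + 1)

binned = {key: s / n for key, (s, n) in cells.items()}   # average visibility per occupied cell
print(len(binned), "occupied uv cells from", nvis, "visibilities")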
to prevent potential biases introduced by the binning scheme, we evaluate the basis functions at every visibility point and then average them inside each cell to compute just as in equation ( [ eq : v_c ] ) .note that this ensures that the systematic distortions induced by the primary beam , bandwidth smearing , time - averaging and non - coplanarity can all be fully corrected in our method .the next step is to form and minimize where is the data vector , is the theory matrix , and is the parameter vector . the covariance error matrix = \left\langle ( { \mathbf d } - \langle { \mathbf d } \rangle)^{t}{(\mathbf d } - \langle { \mathbf d } \rangle ) \right\rangle\ ] ] for the binned visibilities can be estimated in practice either from the distribution of the visibilities in each bin or from the error tables provided by the interferometric hardware .because the model is linear in the fitting parameters , the best - fit parameters can be computed analytically as ( e.g. , lupton 1993 ) the covariance error matrix ] of the co - added coefficients are then given by we can then find an optimal weighting to reconstruct the image of a source from the estimated coefficients . to do so we seek the reconstructed coefficients given by the weights are chosen so that the reconstructed image is ` as close as possible ' to the true image , in the sense that the least - square difference ^ 2 = \sum_{\mathbf n } [ f^{r}_{\mathbf n } - f_{\mathbf n } ] ^{2}\ ] ] is minimized .it is easy to show that this will be the case when where the right - hand side provides an approximation which can be directly derived from the data .this weighting amounts to wiener filtering in shapelet space , in analogy with that performed in fourier space ( see , e.g. , press et al .figures [ fig : sim ] and [ fig : data ] show several reconstructed images using this weighting scheme . note that this produces an estimate for the _ deconvolved _ image of the source . for display purposes , it is sometimes useful to smooth the reconstructed image by a gaussian kernel ( the restoring beam in radio parlance ) .this can easily be done in shapelet space by multiplying the coefficients by the analytic smoothing matrix described in paper i. while wiener filtering yields an optimal image reconstruction , it is _ not _ to be used to measure source parameters such as flux , centroid , size , etc . instead , an unbiased estimator for shape parameters can be derived directly from the shapelet coefficients ( see paper i ) .for instance , an estimate for the flux of a source is given by where if and are both even ( and vanishes otherwise ) .the variance uncertainty in the flux is then simply = { \mathbf a}^{t } { \mathbf w } { \mathbf a},\ ] ] which provides a robust estimate of the signal - to - noise snr $ ] of the source .similar expressions can be used to compute the centroid and rms size of the source .this can be easily generalized to compute in addition the major and minor axes of the source and its position angle .note that these expressions are , again , estimates for deconvolved quantities .as an application , we consider the first radio survey ( becker et al . 1995 ; white et al .1997 ) , being conducted with the vla at 1.4 ghz in the b configuration . for this survey ,the primary beam fwhm is 30 and the angular resolution is ( fwhm ) .the survey currently contains about sources with a flux - density limit of 1.0 mjy over deg ; the mean source redshift is .observing time has been allocated to extend its coverage to 9,000 deg . 
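the linear fit described above ( minimising the quadratic form in the binned visibilities and reading off the best - fit coefficients together with their covariance ) is ordinary generalised least squares , and can be sketched as follows . the random design matrix , the diagonal noise covariance and the problem sizes are assumptions ; with complex visibilities the real and imaginary parts would simply be stacked into the data vector .

import numpy as np

rng = np.random.default_rng(3)
ncell, ncoef = 500, 12
m = rng.standard_normal((ncell, ncoef))          # binned basis-function visibilities (theory matrix)
f_true = rng.standard_normal(ncoef)              # "true" shapelet coefficients (toy)
sigma = 0.1 * np.ones(ncell)                     # per-cell noise rms (assumption)
d = m @ f_true + sigma * rng.standard_normal(ncell)

cinv = np.diag(1.0 / sigma**2)                   # inverse data covariance
a = m.T @ cinv @ m
cov_f = np.linalg.inv(a)                         # covariance of the fitted coefficients
f_hat = cov_f @ (m.T @ cinv @ d)                 # best-fit coefficients

chi2 = (d - m @ f_hat) @ cinv @ (d - m @ f_hat)
print(chi2 / (ncell - ncoef))                    # reduced chi-squared, expected to be near 1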
the survey is composed of 165-second ` grid - pointings ' with a time - averaging interval seconds .it was conducted in the spectral synthesis mode , with a channel bandwidth of 3 mhz . because this wide - field survey was performed in the snapshot mode ,its sampling is very sparse .this makes shape reconstruction particularly challenging for first , providing a good test for our method . as explained in [ interferometers ] , higher order effects such as bandwidth smearing and time - averaging produce small distortions in the reconstructed image shapes if they are left uncounted for .these must be carefully corrected for high - precision statistical measurements of object shapes such as those required in weak lensing surveys .the effects are , however , very small and not noticeable on an object - by - object basis .for the purpose of this test , we thus ignore these effects and instead focus on the dominant factor in shape reconstruction , the finite and discrete sampling . as a first test, we generated simulated vla data using the observational parameters of first .a grid pointing was generated at zenith with 33 5-second time - averaging intervals and 14 3-mhz channels in the b configuration .simulated sources were randomly distributed within 23.5 of the phase center , the cutoff adopted for creating the final co - added first maps ( becker et al .1995 ) ; the number density , flux density and size distributions chosen for the sources were similar to sources in the first catalog .after generating the visibilities , we added uncorrelated gaussian noise to the real and imaginary component of each data point , with an rms of , where is the total number of visibilities .the real - space rms noise was set to mjy beam , which is somewhat higher than the typical first noise level , 0.2 mjy beam .we then simultaneously fitted all 23 sources in the grid pointing directly in the plane .we imposed the constraint that that source intensities are real ( i.e. , non - imaginary ) .each source was modeled as a shapelet with scale , maximum shapelet order , and center position . in principle , it is possible to determine these parameters with a source detection algorithm which directly uses shapelets .one can , for instance , tile ground - state shapelets with different sizes in the plane , and thus detect sources with different sizes .however , this is computationally expensive and , since the first catalog is conveniently available , we have not implemented this algorithm . 
instead, good choices for these parameters were derived from the first catalog , which lists basic shape parameters for each source , such as its centroid , flux density , major and minor axes , and position angle , all obtained from an elliptical gaussian fit .the shapelet position was simply set to the centroid position from the catalog .the choices for the shapelet scales and maximum shapelet orders were derived as follows .as described in paper i , the hermite basis functions have two natural scales : corresponding to the overall extent of the basis functions , and corresponding to the smallest - scale oscillations in the basis functions .these scales are related to the shapelet scale and maximum order by and .as increases , the large - scale size of the shapelet grows , while its small - scale features become finer .the shapelet thus becomes more extended both in real and in fourier space .we therefore choose to be the rms major axis from the first catalog , and to correspond to the longest baseline of the vla : 1.8 ( rms ) in real space .this provides us with a choice for and for for each source . solving equation ( [ eq : chi2 ] ) ,we then obtain the shapelet coefficients and the covariance matrix using equations ( [ eq : f ] ) and ( [ eq : cova ] ) .the results are presented in fig .[ fig : sim ] , where the input images ( before the addition of noise ) , inverse fourier - transformed data ( ` dirty ' images ) , and shapelet - reconstructed images ( with weiner filtering , see [ weighting ] ) of three of the sources are shown .each image shown is 32 across and the resolution is about 5.4 ( fwhm ) .the poor sampling of first and the effect of noise are evident in the dirty images .for both resolved ( left panels ) and unresolved or marginally resolved ( right panels ) sources , the reconstructions agree with the inputs very well .the more complicated structure in the central panel is not fully recovered by shapelets .this is expected , since the small - scale structure of the source is not resolved and therefore can not be fully recovered in the reconstruction .the comparison between the input and shapelet - reconstructed flux density for all sources in the grid pointing is shown in figure [ fig : flux ] .the shapelet flux density is given by equation ( [ eq : flux ] ) and its 1 error by equation ( [ eq : error ] ) .the source flux densities are well - recovered by the shapelets in an unbiased manner .note the range of error bars at a given input flux is due to the range of source sizes .for instance , for an input flux density of about 2 mjy , the source with a relatively large error bar has a major axis of about ( fwhm ) , while those with small error bars are unresolved or barely resolved ( major axis fwhm ) . in general, we find the shapelet reconstruction from the sparsely sampled and noisy simulated data to be in good agreement with the input ( noise - free ) image .note that our method can be used to identify and discard spurious sources arising from sidelobes and other artifacts in the dirty image .indeed , when we place an extra shapelet centered at a random positions in the field , the coefficients of that shapelet are consistent with zero . 0.1 in 0.1 in next ,we test our method by applying it to one of the first grid pointings ( 14195 + 38531 ) . for this purpose, we selected all sources within 23.5 of the phase center from the first catalog with a measured flux density limit ( i.e. 
, including the primary beam response ) of 0.75 mjy .for each of the resulting 23 sources , we use the source major axis to estimate and as described in the previous section .we then simultaneously fit all the sources for the shapelet coefficients directly in the plane .note that the shapelet coefficients obtained are deconvolved coefficients .figure [ fig : data ] shows the reconstruction of three representative sources in the bottom panels . also shown for comparisonare the images of the sources constructed using the standard aips clean algorithm with a cleaning limit of 0.5 mjy ( central panel ) , along with the dirty images ( top panel ) .each panel is 32 across and the fwhm of the first resolution is 5.4. the shapelet method does not involve image pixels in the modeling ; one is therefore free to specify the pixel size when constructing the images . herethe dirty and cleaned images have pixel sizes of 1.8 , while the shapelet images have pixel sizes of 1 and thus show finer details . for demonstration ,the shapelet reconstructions have been weiner - filtered using the resulting covariance matrix . for a direct comparison , they have also been smoothed with a gaussian kernel with a standard deviation of 2.3 , reproducing the restoring beam of the cleaned image .we find that the shapelet reconstructions compare well with the cleaned images .in further tests , we have encountered cases where a bright source ( mjy ) lies in or near a grid pointing .we have found that the presence of the bright source does not affect the fit of the other sources in the grid in a noticeable way .our method can thus well handle the dynamical range of the first survey , which spans more than 3-orders of magnitude . for fainter sources ( mjy ;i.e. , detection snr ) , the reconstructions are rather poor at times , in contrast to those for brighter sources ( which are almost always well fitted ) .this is of course reasonable , given the larger impact of noise for faint sources .in figure [ fig : cova ] we display a portion of the covariance matrix for the shapelet coefficients for the nine sources in the pointing with the highest peak flux densities .the horizontal and vertical lines separate the nine sources .the diagonal line from the lower - left to the upper - right corner represents the variance of the shapelet coefficients .the block - diagonal boxes are the covariance matrix of the coefficients of the nine sources .the off - diagonal blocks quantify the cross - talk between sources .note that the correlation between coefficients are roughly an order of magnitude smaller than the variance .figure [ fig : cova_s4 ] shows the error in the shapelet coefficients ( n1,n2 ) of the source shown in the left panels of figure [ fig : data ] .( these errors are the diagonal segment of the 4 diagonal box counting from the lower left in fig .[ fig : cova ] ) .in general , we find that higher- coefficients tend to be noisier .this is expected since convolution ( or , equivalently , sampling ) suppresses the small scale information encoded by coefficients with large ( see paper i ) .the covariance matrix thus provides us with useful information on the error in each coefficient , and quantifies cross - talk between coefficients both within and among sources .since the shapelet coefficients of all sources are simultaneously fit to a large number of visibilities , the computing memory required for the calculation is not negligible .we have implemented the method on the uk cosmos sgi origin 2000 supercomputer , which has 64 
r10000 mips processors with a shared - memory structure .numerically , the shapelet coefficients can be obtained by performing simple matrix operations as in equation ( [ eq : f ] ) , or by solving the linear least - squares problem , , using matrix factorization or singular value decomposition , and assuming that the data covariance matrix is diagonal .both methods can be efficiently parallelized . with our binning scheme , the run - time memory required for this particular first grid pointing was about 700 mb , for 23 sources and a total of 177 shapelet parameters .the cpu time required was about 26 seconds with 10 processors or about 5 minutes in scalar mode .for other grid pointings with different numbers of sources , the computation time ranges between 20 and 60 seconds with 10 processors , with a run time memory between 0.5 to 1.5 gb .weak gravitational lensing is now established as a powerful method for mapping the distribution of the total mass in the universe ( for reviews see mellier 1999 ; bartelmann & schneider 2000 ) .this technique is now routinely used to study the dark matter distribution of galaxy clusters and has recently been detected in the field ( wittman et al 2000 ; van waerbeke et al 2000 ; bacon , refregier & ellis 2000 ; kaiser et al 2000 ; maoli et al 2001 ; rhodes , refregier & groth 2001 ; van waerbeke et al 2001 ) .all studies of weak lensing have been performed in the optical and ir bands , where the images are directly obtained in real space . 0.1 in 0.2 inthere are a number of reasons to try to extend these studies to interferometric images in the radio band .firstly , the brightest radio sources are at high redshift , thereby increasing the strength of the lensing signal .secondly , radio interferometers have a well - known and deterministic convolution beam , and thus do not suffer from the irreproducible effects of atmospheric seeing .thirdly , existing surveys such as the first radio survey ( becker et al .1995 ; white el al .1997 ) provide a sparsely sampled but very wide - area survey , which offers the unique opportunity to measure a weak lensing signal on large angular scales ( kamionkowski et al .1998 ; refregier et al . 1998; see also schneider ( 1999 ) for the case of ska ) .finally , surveys at higher frequencies or in more extended antenna configurations could potentially yield very high angular resolution and are not limited by the irreducible effects of the seeing disk in ground - based optical surveys .because the distortions induced by lensing are only on the order of 1% , the shapes of background objects must be measured with high precision .in addition , systematic effects such as the convolution beam and instrumental distortions must be tightly controlled . 
for this purpose, a number of shear measurement methods have been developed .the original method of kaiser , squires & broadhurst ( 1995 ) was found to be acceptable for current cluster and large - scale structure surveys ( bacon et al .2000b ; erben et al .2000 ) , but are not sufficiently reliable for future high - precision surveys .consequently , several other methods have been proposed ( kuijken 1999 ; kaiser 2000 ; rhodes , refregier & groth 2000 , berstein & jarvis 2001 ) .recently , refregier & bacon ( 2001 , paper ii ) developed a new method based on shapelets and demonstrated its simplicity and accuracy for ground - based surveys .it is thus straightforward to apply this method to interferometric measurements .indeed , the shapelet coefficients which we derive from the fit on the plane ( after co - adding if required ) are already deconvolved from the effective beam and can thus be directly used to estimate the shear .this can be done using the estimators for the shear components and which are given by ( see paper ii ) where the sum is over even ( odd ) shapelet coefficients for ( ) and the brackets denote an average over an ( unlensed ) object ensemble .the matrix is the shear matrix , and can be expressed as simple combinations of ladder operators in the qho formalism .these estimators for individual shapelet components are then optimally weighted and combined to provide a minimum - variance estimator for the shear .this permits us to achieve the highest possible sensitivity ( while remaining linear in the surface brightness ) by using all the available shape information of the lensed sources . in kamionkowski( 1998 ) and refregier et al .( 1998 ) , it has been shown that the first radio survey is a unique database for measuring weak lensing by large - scale structure on large angular scales . in a future paper , we will apply the method described here to this survey , search for the lensing signal , and , from its amplitude , derive constraints on cosmological parameters .we have presented a new method for image reconstruction from interferometers .our method is based on shapelet decomposition and is simple and robust .it consists of a linear fit of the shapelet coefficients directly in the plane , and thus permits a full correction of systematic shape distortions caused by bandwidth smearing , time - averaging and non - coplanarity .because the fit is linear in the shapelet coefficients it can be implemented as simple matrix multiplications .it provides the full covariance matrix of the shapelet coefficients which can then be used to estimate errors and cross - talk in the recovered shapes of sources .we have shown how source shapes from different pointings can be easily co - added using a weighted sum of the recovered shapelet coefficients .we have also described how the shapelet parameters could be combined to derive optimal image reconstruction , photometry , astrometry and pointing co - addition .our method can be efficiently implemented on parallel computers .we find that a fit to all the sources in a first grid pointing takes about 1 minute on an origin 2000 supercomputer with 10 processors ( 10 minutes in scalar mode ) . because we are fitting all sources simultaneously, 0.5 to 1.5 gb of memory is required . to test our methods , we considered the observing conditions of the first radio survey ( becker et al .1995 ; white et al .1997 ) whose snapshot mode yields a sparse sampling in space . 
using numerical simulations tuned to reproduce the conditions of first, we find that the sources are well - reconstructed with our method .we have also applied our method to a first snapshot pointing and found that the shapes are well - recovered .the reconstruction of our method compares well with the clean reconstruction , without suffering the potential biases inherent in the latter method .our method is thus well - suited for applications requiring quantitative and high - precision shape measurements .in particular , our method is ideal for the measurement of the small distortions induced by gravitational lensing in the shape of background sources by intervening structures .such a measurement from cleaned images may well not be practical since the systematic distortions induced by that method are very difficult to control .( one could perhaps imagine running numerical simulations to calibrate the shear estimator , but this would be both computationally expensive and rather uncertain ) .we have shown how our results can be combined with the shear measurement method described in refregier & bacon ( 2001 ) to derive a measurement of weak lensing with interferometers .this is facilitated both by the fact that our recovered shapelet coefficients are already deconvolved from the effective ( dirty ) beam , and as a consequence of the remarkable properties of shapelets under shears .our method therefore opens the possibility of high - precision measurements of weak lensing with interferometers . while to date all weak - lensing studieshave been carried using optical data ( and therefore in real space ) , an interferometric measurement of weak lensing in the radio band is very attractive ( kamionkowski et al .1998 ; refregier et al . 1998; schneider 1999 ) .indeed , the lensing signal is expected to be larger because radio sources have a higher mean redshift .in addition , such a measurement would not suffer from the irreproducible effects of atmospheric seeing .instead , the effective ( dirty ) beam is fully known for interferometers and the noise properties of the antennas are well - understood . as a result, the impact of systematic effects , the crucial limitation in the search for weak lensing , are expected to be lower with radio interferometers . in a future paper, we will describe our measurement of weak lensing by large - scale structure with the first survey using the present method .99 bacon , d. , refregier , a. , ellis , r. , 2000 , mnras , 318 , 625 bacon , d. refregier , a. , clowe , d. , & ellis , r. , 2000b , to appear in mnras , preprint astro - ph/0007023 bartelmann , m. , & schneider , p. , 2000, preprint astro - ph/0007023 becker , r.h . , white , r.l . ,helfand , d.j . , 1995 , , 450 , 559 bernstein , g.m . , & jarvis , m. , 2001 , accepted by aj , astro - ph/0107431 clark , b.g . , 1980 , a&a , 89 , 377 condon , j.j . , cotton , w.d . , greisen , e.w ., yin , q.f . , perley , r.a . ,taylor , g.b . ,broderick , j.j ., 1998 , , 115 , 1695 cornwell , t , j ., 1983 , a&a , 121 , 281 cornwell , t.j . & evans , k.f . , 1985 ,a&a , 143 , 77 erben t. , van waerbeke , l. , bertin , e. , mellier , y. , schneider , p. , 2001, a&a , 366 , 717 hogbom , j.,a . , 1974 , a&as , 15 , 417 kaiser , n. , 2000 , apj , 537 , 555 kaiser , n. , wilson , g. , luppino , g. a. , 2000 , preprint astro - ph/0003338 kamionkowski , m. , babul , a. , cress , c. , refregier , a. , 1998 , mnras , 301 , 1064 kuijken , k. , 1999 , a&a , 352 , 355 lupton , r. 
, 1993 , statistics in theory and practice , princeton university press maoli , r. et al , 2001 , a&a , 368 , 766 mellier , y. , 1999 , ara&a , 37 , 127 narayan , r. , & bartelmann , m. , 1999 , in formation of structure in the universe . ed . by dekel ,a. and ostriker , j.p . , p.360 ( preprint astro - ph/9606001 ) perley , r.a . ,schwab , f.r . , & bridle , a.h ., 1989 , synthesis imaging in radio astronomy , a.s.p.c.s .vol . 6 press , w.h . ,teukolsky , s.a . , vetterling , w.t . ,flannery , b.p ., 1987 , numerical recipes , cambridge university press refregier et al .1998 , in proc . of the xivth iap meeting ,wide - field surveys in cosmology , held in paris in may 1998 , eds .mellier , y. & colombi , s. ( paris : frontieres ) , preprint astro - ph/9810025 refregier , a. , 2001 , ( paper i ) submitted to mnras , preprint astro - ph/0105178 refregier , a. & bacon , d.j ., 2001 , ( paper ii ) submitted to mnras , preprint astro - ph/0105179 rhodes , j. , refregier , a. , & groth , e. , 2000 , apj , 536 , 79 rhodes , j. , refregier , a. , & groth , e. , 2001 , to appear in apjl , preprint astro - ph/0101213 schneider , p. , 1999, in perspectives on radio astronomy , scientific imperatives at cm and m wavelengths , proceedings of a workshop in amsterdam , april 1999 , preprint astro - ph/9907146 schwarz , u.j . , 1978 , a&a , 65 , 345 taylor , g.b . , carilli , c.l . , & perley , r.a . , 1999 ,synthesis imaging in radio astronomy ii , a.s.p.c.s . vol .180 thompson , a.r . ,moran , j. & swenson , jr . , g.w , 1986 , interferometry and synthesis in radio astronomy ( wiley - interscience ) van waerbeke , l. et al , 2000 , a&a , 358 , 30 .van waerbeke , l. et al , 2001 , submitted to a&a , preprint astroph/0101511 .white , r.l . ,becker , r.h . , helfand , d.j . ,gregg , m.d ., 1997 , , 475 , 479 wittman , d. , tyson , j. a. , kirkman , d. , dellantonio , i. , bernstein , g. , 2000 , nature , 405 , 143 .
|
we present a new approach for image reconstruction and weak lensing measurements with interferometers . based on the shapelet formalism presented in refregier ( 2001 ) , object images are decomposed into orthonormal hermite basis functions . the shapelet coefficients of a collection of sources are simultaneously fit on the uv plane , the fourier transform of the sky brightness distribution observed by interferometers . the resulting chi - squared fit is linear in its parameters and can thus be performed efficiently by simple matrix multiplications . we show how the complex effects of bandwidth smearing , time averaging and non - coplanarity of the array can be easily and fully corrected for in our method . optimal image reconstruction , co - addition , astrometry , and photometry can all be achieved using weighted sums of the derived coefficients . as an example we consider the observing conditions of the first radio survey ( becker , white & helfand 1995 ; white et al . 1997 ) . we find that our method accurately recovers the shapes of simulated images even for the sparse sampling of this snapshot survey . using one of the first pointings , we find our method compares well with clean , the commonly used method for interferometric imaging . our method has the advantage of being linear in the fit parameters , of fitting all sources simultaneously , and of providing the full covariance matrix of the coefficients , which allows us to quantify the errors and cross - talk in image shapes . it is therefore well - suited for quantitative shape measurements which require high precision . in particular , we show how our method can be combined with the results of refregier & bacon ( 2001 ) to provide an accurate measurement of weak lensing from interferometric data .
|
people making a decision in a ballot are expected to follow a rational behavior .rational arguments based on utility functions ( payoff ) have been considered in the literature regarding vote modeling .the rational hypothesis , however , tends to consider the individuals as isolated entities .this might actually be the reason why it fails to account for relatively high turnout rates in elections .the presence of a social context increases the incentive for a voter to actually vote , as he or she can influence several other individuals towards the same option .this effect is not only restricted to turnout , but also applies to the choices expressed in the election .it is easy to find examples of people showing their electoral preferences in public in the hope of influencing their peers .still social influence can also act in more subtle ways , without the explicit intention of the involved agents to influence each other .the collective dynamics of social groups notably differs from the one observed from simply aggregating independent individuals .social influence is thus an important ingredient for modeling opinion dynamics , but it requires as well the inclusion of a social context for the individuals . even though nowadays the pervasive presence of new information technologies has the potential to change the relation between distance and social contacts , we assume that daily mobility still determines social exchanges to a large extent .human mobility has been studied in recent years with relatively indirect techniques such as tracking bank notes or with more direct methods such as tracking cell phone communications . a more classical source of information in this issue is the census . among other data ,respondents are requested by the census officers their place of residence and work .census information is less detailed when considered at the individual level , but it has the advantage of covering a significant part of the population of full countries .recent works analyzing mobile phone records have shown that people spend most of their time in a few locations .these locations are likely to be those registered in the census and , indeed , census - based information has been also used recently to forecast the propagation patterns of infectious diseases such as the latter influenza pandemic . in this work ,we follow a similar approach and use the recurrent mobility information collected in the us census as a proxy for individual social context . this localized environment for each individual accounts mostly for face - to - face interactions and leavesaside other factors , global in nature , such as information coming from online media , radio and tv .our results show that a model implementing face - to - face contacts through recurrent mobility and influence as imperfect random imitation is able to reproduce geographical and temporal patterns for the fluctuations in electoral results at different scales .we have investigated the voting patterns in the us on the county level .we have used the votes for presidential elections in years 1980 - 2012 . for each county , we have data about the county geographic position , area , adjacency with other counties as well as data regarding population , and number of voters for each party for every election year ( note that , since not everybody is entitled to vote ) .raw vote counts are not very useful for comparing the counties as populations are distributed heterogeneously. 
therefore we switch from vote counts to vote shares since the votes received by parties others than republicans and democrats are minority , we have focused on the two main parties .we have further considered mostly relative voteshares , which are absolute vote shares minus the national average for a given party every electoral year where i stands for the number of counties .the relative voteshares show how much above ( positive ) or below ( negative ) the national average are the results in given county .+ our main focus is on the persistence of voting patterns . we define a _ stronghold _ to be a county which relative voteshare remains systematically positive ( or negative ) . in the sirm model agents live in a spatial system divided in non - overlapping cells .the agents are distributed among the different cells according to their residence cell .the number of residents in a particular cell will be called . while many of these individuals may work at , some others will work at different cells .this defines the fluxes of residents of recurrently moving to for work . by consistency , .the working population at cell is and the total population in the system ( country ) is . in this work ,the spatial units correspond to the us counties and the population levels , , and commuting flows , , are directly obtained from the 2000 census .we describe agents opinion by a binary variable with possible values or .the main variables are the number of individuals holding opinion , living in county and working at .correspondingly , stands for the number of voters living in holding opinion and for the number of voters working at holding opinion .we assume that each individual interacts with people living in her own location ( family , friends , neighbors ) with a probability , while with probability she does so with individuals of her work place . 
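a minimal agent - based sketch of this interaction scheme is given below ; the imperfect - imitation step anticipates the noisy voter update made precise just below . the toy population , the commuting assignment and the values of the interaction probability and noise rate are assumptions , and the heterogeneous census - based populations and commuting fluxes of the full sirm model are not reproduced here .

import numpy as np

rng = np.random.default_rng(5)
ncell, nagent, alpha, eta = 20, 5000, 0.5, 0.02        # toy sizes and rates (assumptions)
home = rng.integers(0, ncell, nagent)                  # residence cell of each agent
work = np.where(rng.random(nagent) < 0.3,              # roughly 30% commute to another cell
                rng.integers(0, ncell, nagent), home)
opinion = rng.integers(0, 2, nagent)                   # binary opinion

by_home = [np.flatnonzero(home == c) for c in range(ncell)]
by_work = [np.flatnonzero(work == c) for c in range(ncell)]

def sweep(opinion):
    for i in rng.permutation(nagent):
        # partner from the residence cell with probability alpha, else from the work cell
        pool = by_home[home[i]] if rng.random() < alpha else by_work[work[i]]
        j = pool[rng.integers(len(pool))]
        if rng.random() < eta:
            opinion[i] = rng.integers(0, 2)            # imitation "mistake" (noise)
        else:
            opinion[i] = opinion[j]                    # copy the partner's opinion
    return opinion

for _ in range(10):
    opinion = sweep(opinion)
print(opinion.mean())                                  # national vote share after 10 sweeps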
once an individual interacts with others , its opinion is updated following a noisy voter model : an interaction partner is chosen and the original agent copies her opinion imperfectly ( with a certain probability of making mistakes ) .a more detailed description of the sirm model can be found in .pursuing the topic of persistence and changes in opinion , as expressed by voting results , we have investigated the _ strongholds _ of both parties in usa .figure [ strongholds ] shows the spatial arrangement and the duration of the strongholds ( measured in elections ) for data of the us presidential elections during the period 1980 - 2012 and simulations using the electoral results of 1980 as initial condition .it can be seen at a glance , that the strongholds are not randomly distributed across the country , but clustered .this indicates that some form of correlation is present between the voting patterns .the republican strongholds seem to be concentrated mostly in the central - west , while democrat strongholds are dispersed mostly through the eastern parts , including urbanized areas .this is in agreement with the population distributions , republican strongholds being mostly lower populated counties , while democrats strongholds include some significant cities .we quantify in figure [ strongholds_decay ] the temporal evolution of the strongholds of the election data for the period 1980 - 2012 as well as for the strongholds forecasted by simulations after 9 elections .the influence of the commuting network and their interactions was revealed by contrasting the simulations results with and without network interaction .the evolution of the number of strongholds observed in the data at early stages is well described by the model with commuting interactions . for longer times, the model predicts that the number of strongholds will decay with time following an exponential law as shown in figure [ strongholds_decay ] . in the absence of commuting interaction ,the number of strongholds decreases at a slower rate .furthermore , the model overestimate the number of strongholds in this case since no other mechanisms than internal fluctuations act driving the county relative voteshare towards the average .counties set as strongholds at the beginning of the simulation remains strongholds for a longer time .we further test the accuracy of our model by computing the percentage of strongholds accurately predicted .as figure [ percenthits ] shows , the model with no commuting interactions reproduces a higher percentage of strongholds , however , it also gives higher and increasing number of false positives . 
on the contrary ,the model with commuting interactions maintain a flat rate of false positive below 20% .the accuracy of the model in this case remain higher than 50% after elections .another question is what determines how long a county would be a stronghold for ?we try to answer this question for the model by looking into the dependence of the strongholds duration with respect to the initial voteshare of the counties .as figure [ strongholds_cidependence ] shows , there is a linear dependence for the duration of being a stronghold with the distance to the mean voteshare .counties with initial larger deviation from the mean voteshare tend to be strongholds longer time .the fitting reveals that every tenth of relative voteshare corresponds on average to a stronghold duration of elections .we have studied the persistence on the electoral system using the recently introduced social influence and recurrent mobility ( sirm ) model for opinion dynamics which includes social influence with random fluctuations , mobility and population heterogeneities across the u.s .the model accurately predicts generic features of the background fluctuations of evolution of vote - share fluctuations at different geographical scales ( from the county to the state level ) , but it does not aim at reproducing the evolution of the average vote - share . we have contrasted the evolution of the number of strongholds of the election data for the period 1980 - 2012 with the strongholds forecasted by simulations finding a good agreement between them .the evolution of the number of strongholds observed in the electoral data at early stages is well described by the model with commuting interactions . however , the number of strongholds observed in the data changes abruptly after the presidential election of 2008 .our model is not able / does not intend to reproduce this behavior since it involves external driving forces not included in the model .our results also shows a good agreement between data and simulations for the location and duration of the strongholds .strongholds are not randomly distributed across the country , but clustered .this indicates that some form of correlation is present between the voting patterns .the model reproduces nicely the spatial concentration of the both types of strongholds .republican strongholds are mostly concentrated in the central - west , while democrat strongholds are dispersed mostly through the eastern parts , including urbanized areas . as for the duration of the strongholds , we have found a linear dependence for the duration of being a stronghold with the distance to the mean voteshare .counties with initial larger deviation from the mean voteshare tend to be strongholds for a longer time , on average , 5 elections for each tenth of relative voteshare . 
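the quantities discussed in this section can be sketched end to end in a few lines of python : compute relative vote shares ( county share minus the average of that election ) , flag strongholds as counties whose relative share keeps its sign , and fit the duration of the stronghold condition against the initial deviation from the mean . the random vote counts are an assumption standing in for the county data , the unweighted mean over counties is used as the national reference , and the toy numbers will not reproduce the reported value of five elections per tenth of vote share .

import numpy as np

rng = np.random.default_rng(4)
ncounty, nelect = 3000, 9
dem = rng.integers(100, 10_000, (ncounty, nelect)).astype(float)   # democrat votes (toy)
rep = rng.integers(100, 10_000, (ncounty, nelect)).astype(float)   # republican votes (toy)

share = dem / (dem + rep)                       # two-party democrat vote share per county and election
rel = share - share.mean(axis=0)                # relative vote share (mean over counties, assumption)

sign0 = np.sign(rel[:, 0])
same_sign = np.sign(rel) == sign0[:, None]
# duration: consecutive elections from the first one with an unchanged sign
duration = np.argmin(same_sign, axis=1)
duration[np.all(same_sign, axis=1)] = nelect

strongholds = duration == nelect                # sign never flips within the period
slope, intercept = np.polyfit(np.abs(rel[:, 0]), duration, 1)
print(strongholds.sum(), "strongholds;", slope * 0.1, "extra elections per tenth of vote share")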
When commuting interactions are taken into consideration, our model exhibits an accuracy higher than 50% for up to elections. The lack of these interactions causes an overestimation of the number of strongholds, with an associated increase in the number of false positives that decreases the prediction accuracy of the model. Our contribution sets the ground for including other important aspects of voting behavior and demands further investigation of the role played by heterogeneities in the micro-macro connection. Further elements will have to be included in order to produce predictions that mimic real electoral results more accurately. Some examples are the effects of social and communication media or the erosion of the governing party. The use of alternative communication channels is expected to affect voting behavior.
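For concreteness, the imperfect-copy update at the heart of the noisy voter dynamics used above can be sketched in Python as follows. This is a simplified illustration rather than the authors' implementation: the function and parameter names are assumed, and the full SIRM model additionally selects interaction partners according to the recurrent commuting flows between home and work counties.

    import random

    def noisy_voter_update(agent_opinion, partner_opinion, noise=0.01):
        """Copy the partner's opinion, but make a mistake with probability `noise`."""
        if random.random() < noise:
            # imperfect imitation: pick an opinion at random instead of copying
            return random.choice([0, 1])
        return partner_opinion

    # usage: one update of agent i after meeting a randomly chosen partner j
    opinions = [random.choice([0, 1]) for _ in range(1000)]
    i = random.randrange(len(opinions))
    j = random.randrange(len(opinions))
    opinions[i] = noisy_voter_update(opinions[i], opinions[j])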
|
influence among individuals is at the core of collective social phenomena such as the dissemination of ideas , beliefs or behaviors , social learning and the diffusion of innovations . different mechanisms have been proposed to implement inter - agent influence in social models from the voter model , to majority rules , to the granoveter model . here we advance in this direction by confronting the recently introduced social influence and recurrent mobility ( sirm ) model , that reproduces generic features of vote - shares at different geographical levels , with data in the us presidential elections . our approach incorporates spatial and population diversity as inputs for the opinion dynamics while individuals mobility provides a proxy for social context , and peer imitation accounts for social influence . the model captures the observed stationary background fluctuations in the vote - shares across counties . we study the so - called political strongholds , i.e. , locations where the votes - shares for a party are systematically higher than average . a quantitative definition of a stronghold by means of persistence in time of fluctuations in the voting spatial distribution is introduced , and results from the us presidential elections during the period 1980 - 2012 are analyzed within this framework . we compare electoral results with simulations obtained with the sirm model finding a good agreement both in terms of the number and the location of strongholds . the strongholds duration is also systematically characterized in the sirm model . the results compare well with the electoral results data revealing an exponential decay in the persistence of the strongholds with time .
|
astronomers use telescopes to investigate a wide range of scientific problems with almost as diverse a range of instrumentation .the output from these investigations provides an immense amount of data that needs to be reduced and analyzed .the results from the investigation are published in scientific journals , and these papers have an impact , small or large , on future observational and/or theoretical investigations .two measures of the effectiveness of a telescope are the number of papers published in refereed journals that are based on data obtained by the telescope , and the citation count of those papers .the effectiveness , or lack therein , of a telescope can have far - reaching consequences .for example , in canada the effectiveness of a single major telescope , the canada - france - hawaii telescope ( cfht ) , may have a significant impact on the funding of future telescopes . compared the impact of two facility telescopes ( the ctio and kpno 4-m ) with that of two telescopes run by private observatories ( the lick 3-m and the palomar 5-m ) .he found no significant difference . compared the impact of large us optical telescopes for papers published in 1990 - 1991 .more recently compared the scientific impacts of telescopes world - wide based on their contributions to the 1000 most - cited papers ( 1991 - 98 ) and the number of papers published in nature between 1989 - 1998 .they found that cfht was the most productive and most highly - cited of all 4-m class telescopes during this time period .productivity , as measured by the number of papers , and impact , as measured by citation numbers are the two measures we will use to assess the effectiveness of the cfht over its approximately twenty - year history . simply counting the number of papers in refereed journals is an easy way to measure effectiveness but misses completely the influence these papers have on the field .it should be noted that citation numbers are not a perfect measure of a paper s impact , nor are they necessarily a measure of the paper s scientific value . 
In this contribution we will examine the productivity and impact history of CFHT papers. We will also examine the productivity and impact of the various instruments that have been used at CFHT during its twenty years of operation. Finally, we will look at how the citation counts for published papers are related to the grade assigned the original observing proposal by the time allocation committee. CFHT maintains a database of publications in refereed journals that are based on data obtained with the telescope. The database contains information on 1065 papers published between 1980-1999. Papers are identified from four main sources: reprints submitted by authors, scanning of all major journals, observers' time request forms, and searching NASA's Astrophysics Data System (ADS) for papers referring to CFHT in the abstract. The following criteria are used to judge whether a paper is considered a CFHT publication: "a paper must report new results based on significant observational data obtained at CFHT or be based on archival data retrieved from the CFHT archive. If data from multiple telescopes are included, the CFHT data should represent a significant fraction of the total data". A staff astronomer examines each paper to judge whether it meets these criteria. Although an author may footnote a paper to indicate that it is based on CFHT observations, the paper may not meet the criteria for inclusion in the database. In our view, this rigorous emphasis on validation of all papers by astronomers within CFHT makes the database unique. The CFHT publication information is maintained within a Microsoft Access database. Several routines, written in Visual Basic for Applications within the database, query the ADS for information on each publication. These routines utilize an internet data transfer library downloaded from the internet. The software generates the appropriate query as a URL, sends the URL to the ADS, and parses the returned text to extract the relevant information. The information for each publication in the ADS is accessed by a publication bibliography code, or bibcode, which is generated from the year, journal, volume and page information for a publication. One of the many services the ADS provides is a verification utility that returns a yes/no as to whether a particular bibcode is valid.
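A minimal Python sketch of this kind of bibcode construction and verification is given below. The 19-character bibcode layout (year, journal abbreviation, volume, page, first-author initial) follows the standard ADS convention, but the query URL is a placeholder and not the actual service endpoint used in the CFHT database routines.

    import urllib.request

    def make_bibcode(year, journal, volume, page, author_initial):
        """Assemble a 19-character bibcode: YYYYJJJJJVVVVMPPPPA."""
        return (f"{year:04d}"
                f"{journal:.<5s}"          # journal abbreviation, dot-padded to 5 chars
                f"{str(volume):.>4s}"      # volume, right-justified with dots
                "."                        # qualifier field (e.g. 'L' for Letters); '.' if none
                f"{str(page):.>4s}"        # starting page, right-justified with dots
                f"{author_initial:1s}")    # first initial of the first author

    def bibcode_is_valid(bibcode, base_url="https://example.org/ads/verify?bibcode="):
        """Ask a (hypothetical) verification service whether the bibcode exists."""
        with urllib.request.urlopen(base_url + bibcode) as response:
            return response.read().strip() == b"yes"

    # e.g. make_bibcode(1996, "AJ", 112, 839, "C") -> '1996AJ....112..839C'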
for each entry in our database ,the ads bibcode is generated from the publication information and verified with the ads .the information for those entries with invalid bibcodes is checked and updated , then a new bibcode is generated and verified .this verification of each paper s bibcode ensures that we have the correct publication information for each entry .once each publication has a valid bibcode , the ads is queried for the full title , list of authors , the number of citations , and the number of self - citations ( ones in which the first author of the cited and citing paper are the same person ) .finally , the bibcodes of each citing paper and the number of citations by year of the citing paper are recorded for each publication . the instrument , or instruments , used to acquirethe data used for each publication was identified by browsing each of the papers .this use of the ads allows us to verify the basic bibliographic information , obtain a complete list of authors , and collect the citation data for each publication .the citation information in the ads is incomplete .much of the citation information in the ads is based upon a subset of the science citation index purchased from the institute for scientific information ( isi ) by the ads .this subset is seriously incomplete in referring to articles in the non - astronomical literature , as it only contains references that were in the ads when the subset was purchased .the ads currently builds citation links itself for all publications in its database .the ads does not include many physics journals but does include a subset of conference proceedings .isi , an established and reputable commercial firm , has been considered the best resource for citation information among astronomers and librarians for many years .however , the ads provides publication and citation information from the web at no cost . how does the citation information obtained from these two sources compare ?we selected three highly cited cfht papers : ; ; , and performed a detailed analysis of citations to these papers using both the ads and isi ( through the online service dialogweb ) .while the total number of citations to the three papers from isi / ads are remarkably similar ( 146/153 , 125/124 , 172/177 ) , there are interesting differences in the details of the citing papers .the number of citing papers in common to ads and isi for the three papers is 142 , 109 and 165 .each database missed several citing papers that the other one included .isi tended to find citations from physics journals missed by ads , while ads had some conference citations and citations from the major journals that were missed by isi .the citing papers in the major journals were missed by isi primarily due to incorrect citations ( e.g. 
wrong year or volume ) in the citing papers .our conclusion from this detailed look at a small number of papers is that , on average , the ads provides citation numbers that are consistent with those obtained from isi and any differences will have a minimal impact on our study .we define two terms that we will use throughout the rest of the paper : productivity and impact .productivity refers to the number of publications in the context of the telescope , an instrument or a particular researcher .productivity is not the same as scientific impact .impact is usually measured by using citation numbers .overall impact is measured by summing citation numbers of all the relevant papers .the average number of citations per paper ( cpp ) measures the average impact . for each entry in the cfht database, we have retrieved the year of every citing paper and stored the total number of citations for each year in the database . a paper published in 1990 , for example ,has the number of citations received for each year from 1990 to 1999 .these data allow us to investigate the citation rate as a function of the number of years since publication .the solid curve in figure 1 shows the average citations per paper ( cpp ) as a function of the number of years after publication for all papers in the database with citations .there are some citations in the year of publication for papers published early in the year ; for example , a paper published in january may receive a citation in november . as the number of years since publication increases , the number of papers included decreases since the relevant data for all papers does nt yet exist . the papers published in 1999are included in only the data point for zero years after publication , and 1998 papers are included in the zero and one year data points , etc .this curve peaks at two years after publication and has a fairly smooth decay after that .it has been known for many years that the number of citations a paper receives declines exponentially with the age of the paper ( e.g. ) .this is true of cfht publications as well .the dashed line in figure 1 is the fit of a simple exponential decline in citations with a half - life of 4.93 years beginning two years after publication .our analysis does not include a correction for a growth in publication numbers over the period 1982 - 1999 . found a half - life of around twenty years for papers published in the 1961 issues of , and .however , he pointed out that the growth in the number of papers published over the eighteen year - period he gathered citation numbers was part of the reason the half - life was so long . shows that number of papers published in those three journals increased by 4.6 times over this period .kurtz et al .( 2000 ) show that the number of papers in their big8 journals ( , apjl , , , , , and ) increased at approximately a 3.7% yearly rate between 1976 and 1998 .the astronomical literature has doubled between 1982 and 1999 , the years covered in our study .has the electronic distribution of preprints and journal articles changed the citation history of papers ? to examine this question , we divided the cfht papers in two groups : those published between 1984 and 1991 , and those published between 1992 and 1999 .there were 428 and 522 papers in these two groups . 
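The half-life fits quoted in this section can be reproduced with a short script of the following kind: fit c(t) = c0 * 2**(-(t - t0)/t_half) to the mean citations-per-paper curve, starting two years after publication. This is a sketch rather than the authors' code, and the example numbers are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def exp_decay(t, c0, t_half, t0=2.0):
        """Citations per paper t years after publication, halving every t_half years."""
        return c0 * 2.0 ** (-(t - t0) / t_half)

    years = np.arange(2, 16)                          # years since publication (>= 2)
    cpp = np.array([3.1, 2.8, 2.4, 2.1, 1.9, 1.6,     # illustrative mean citations per paper
                    1.4, 1.2, 1.1, 0.9, 0.8, 0.7, 0.6, 0.5])
    (c0, t_half), _ = curve_fit(exp_decay, years, cpp, p0=(3.0, 5.0))
    print(f"fitted half-life: {t_half:.2f} yr")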
in figure 2 the average citation rate for these two periodsis shown along with the fit of a simple exponential model for each period .the citation rate for the newer papers clearly declines more rapidly than that of the older papers .the half - life of the older papers is 7.11 years while the half - life for the newer papers is 2.77 years .we believe that if one were able to sample citation rates monthly , the citations for an average paper in the more recent dataset would peak less than two years after publication .this is the result of more rapid dissemination of results by the electronic distribution of pre - prints ( astro - ph ) and journal articles .the faster decline in citations for the recent subset also indicate that new results supersede earlier results more quickly than in the past . comparing the citation numbers for papers published in different yearsis difficult since the number of citations to a paper increases with time .we have established a method for estimating the total number of citations that a paper can be expected to achieve after a suitably long period .how do we compare papers published over almost twenty years , given the natural growth in citations with time ? we have used the average citation history of all cfht papers to define a growth curve for citations ( figure 3 ) .this curve shows the percentage of the final number of citations , defined as the number eighteen years after publication , for an average paper versus the years since publication . using this curve we can estimate the final citation count ( fcc ) for each paper given a citation count and the number of years since publication .the first cfht paper was submitted in may 1980 and was published in august of that year .cfht s productivity ( figure 4 ) rose more or less continuously through the 1980s until it reached a fairly constant level of around seventy - five papers per year between 1991 to 1997 .it took approximately ten years for cfht to hit its stride and reach a consistently high level of paper production .a telescope s productivity in any one year is linked to many factors such as weather , competitiveness of the available instruments , and the reliability of instruments and the telescope , all in the several years before the year of publication .we attribute the increase in publications during the first ten years of cfht to the increase in the reliability of both the instruments and the telescope and to the development of more competitive instruments .there are two possible reasons the number of cfht publications may be in a slow decline .first , as more 8 - 10 meter telescopes come on - line , cfht is no longer a forefront facility .second , the use of large mosaic ccd cameras has increased at cfht .these generate a tremendous amount of data , and the time from acquisition of data to the publication of results has likely increased .trimble ( 1995 ) studied the productivity of large , american optical telescopes including cfht .she compiled publication data for an eighteen month period beginning january 1990 , by examining the major north american journals : , apjl , , , . according to trimble s list, cfht ranked fourth in productivity behind the ctio 4-meter , palomar and the kpno 4-meter ; and , as trimble notes , many cfht publications appear in journals not included in her study . 
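The growth-curve correction described here amounts to a simple rescaling: if an average paper has accumulated a fraction g(t) of its "final" (18-year) citation count after t years, the FCC of an individual paper is its current count divided by g(t). The sketch below illustrates the idea; the growth fractions are invented placeholders, whereas the actual correction uses the empirical curve of figure 3.

    import numpy as np

    years_since_pub = np.arange(0, 19)
    growth_fraction = np.linspace(0.02, 1.0, 19)   # placeholder for the empirical growth curve

    def estimate_fcc(citations_now, age_years):
        """Scale the current citation count up to an estimated 18-year total."""
        g = np.interp(age_years, years_since_pub, growth_fraction)
        return citations_now / g

    # e.g. a 6-year-old paper with 40 citations
    print(round(estimate_fcc(40, 6)))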
taking all of the 1990 papers and half of the 1991 papers , we count sixty - seven cfht papers ( trimble counted 58.6 ) that were published in the major north american journals during this period .( trimble pro - rated each paper based upon the number of telescopes used in the paper , which we have not done . )our database contains one hundred one cfht papers published in _ all _ refereed journals during this period .if we correct this number by the same factor that our earlier number differs from trimble s for only north american journals , we end up with a total of 88.3 papers . the total number of cfht publication changed significantly by including publications from all journals . while the other telescopes undoubtedly had publications in non - north american journals , except for the anglo - australian telescope , their numbers would not have increased as significantly .thus , any future study of papers and citations , especially those that compare different facilities , should include all major journals .the average cpp for all papers in a given year , by year of publication , is shown in figure 5 .one would expect the average cpp to grow smoothly with time since publication .however , due to the relatively small number of papers in any given year , the average cpp can be influenced by a small number of highly cited papers .for example , the bump in 1996 is due to two highly cited papers ( lilly et al .1996 , carlberg et al .1996 ) that are based on data taken with mos , the multi - object spectrograph .the fluctuations in citation numbers are much higher in earlier years when the number of papers was smaller .most cfht observers are from canada , france or the university of hawaii ( uh ) .the french tend to publish in european journals , mainly , while canadian and uh researchers favor north american journals . how are cfht publications distributed across the major journals ?the distribution of publications across eight journals ( we include apjl with ) is shown on the left side of table 1 .in addition , each paper has been tagged as belonging to one of the three partners based upon the affiliation of the first author or the agency that granted time for the observations .( canada grants some time to international researchers ) .the majority of cfht papers have been published in the three major journals - , and account for more than 78% of cfht papers .apj has the most publications with 33% of all cfht publications , while receives 25.8% of the publications .the breakdown of publications by journal for different years shows an interesting change . 
in 1996/1997 33% and 25% of paperswere published in and respectively , while for 1998/1999 the numbers were 25% and 45% .one explanation for this change is that the french are publishing more and the canadians / uh , less in recent years .only 75 ( 18.9% ) of the french papers appeared in american journals while only 53 ( 9.7% ) canadian papers , and 6 uh papers ( 5.1% ) , appeared in non - north american journals .there is a very strong trend for european authors to publish in european journals and north american authors to publish in north american journals .this may be a result of the fact that has no page charges and subsequently the french do not have a large budget for page charges .this tendency for authors to publish on their side of the `` atlantic ocean '' is particularly meaningful for any comparison of publication activity levels between north american ( only ) and multinational observatories .the distribution of citations per paper ( cpp ) across the journals is shown in table 2 .the three major journals , ( including the letters ) , and , account for 84.7% of the citations to cfht papers . while apj papers account for 33% of the cfht total , these papers received almost half ( 48.1% ) of the citations ., , , and all have a citation rate lower than the average cpp of 20.35 .nature has the highest cpp of any journal ( 2.8% ) ; and yet these represent only 1.6% of cfht papers .the primary instrument used to acquire the data was identified for each publication . in a few cases , several instruments were grouped together into a single category .for example , fp refers to several different `` fabry - perot '' instruments , coud refers to the two coud spectrographs that have been used at cfht , and `` direct imaging '' combines several different direct imaging cameras that have been used at cfht over the years .hrcam , which incorporated fast tip - tilt correction , and , mocam and uh8k two mosaic cameras are identified separately from other direct cameras because they represent new technologies , and we wish to track their impact directly .a total of 39 distinct instrument or instrument categories were identified ; however , a large number of instruments produced a very few papers and approximately 70% of cfht papers were produced by the top five instrument / instrument categories .table 2 shows the number of papers , the fcc per paper and the fcc per night of scheduled telescope time for the top - ten paper producing instruments .the number of nights of scheduled telescope time was determined by looking at each semester s schedule from 1982 onward and counting the number of nights for each instrument / instrument group .cfht is known for its exceptional image quality , and it is no surprise that direct imaging has produced the highest number of papers .it also has the highest efficiency of turning scheduled nights into citations .the two coud spectrographs and the multi - object spectrograph ( mos ) produced the 2nd and 3rd highest number of papers .the cfrs and cnoc studies are a large contributing factor to the high impact of mos .who have been the most prolific authors over the twenty years of cfht publications ?table 3 shows the top nine most prolific authors with the total number of publications , the number of publications in four of the major journals , the total number of citations to their papers , their average cpp and their projected fcc assuming they publish no more papers based on cfht data .the most prolific authors have favoured north american journals .the two 
french authors in this list have 45% of their papers in north american journals as compared to only 18.9 % of all papers designated as french .citations will be discussed more thoroughly in the next section .however , it is clear that the average cpp for these authors varies significantly .the issue of self - citations ( ones where the first author of cited and citing paper are the same ) is one frequently asked of ( and discussed by ) librarians .what is the average self - citation rate ? as trimble points out , this number is difficult to determine exactly .authors do not always use a consistent name ( first name or first initial , for example ) which may lead to the incorrect counting of self - citations .we have counted self - citations for cfht papers by matching first authors on the cited and citing papers .the average self - citation rate for all cfht papers is 6.3% . on average , corrections to citation numbers for self - citationsare not important. however , the self - citation rate for individual papers can be much higher .there are almost ninety papers with a self - citation rate of 30% or more and many of these have ten or more citations .also , certain authors tend to favour their own work .several authors with four papers , or more , have average self - citation rates of 20% or higher . highly cited papers had much lower than average self - citation rates .the twenty most cited papers in the database have an average self - citation rate of 3% , less than half of the average rate for all papers .we computed the final citation count ( fcc ) for each cfht paper in the database using the growth curve described above .the ten papers with the highest fcc are listed in table 4 .it is interesting to note that the top three papers in this list have the same first author , simon lilly .lilly also has the highest total fcc , summed over all papers , of any author in the cfht database .two of the top three papers are based on data from a large , ambitious project undertaken with a new forefront instrument .the top three papers all have the word `` survey '' in their title as well .the observing time requested by proposals to use any large telescope such as the cfht , generally outnumbers the available time by a significan t factor .a time allocation committee ( tac ) is established to review and to rank the submitted proposals , and , in classical scheduling , only the highest - ranked proposals make it to the telescope . however , in queue scheduling , the relative ranking of the proposals will be an important factor in determining which programs are actually executed .the role of the tac becomes even more critical in the era of queue scheduling .how effective is the tac in judging the scientific merit of proposals ? most would agree that almost all programs that reach the telescope will likely produce a scientific publication if the weather and the equipment co - operate .however , one would expect that the more highly ranked proposals will , on average , produce publications with a higher impact , i.e. number of citations .we feel this evaluation of the tac process is important as several large telescopes undertake queue scheduling . observing time at cfht is allocated by country : 42.5% for each of canada and france , and 15% for the university of hawaii. 
each country runs its own tac , which assigns the grades .the international tac meets to deal with scheduling conflicts and program overlaps .we have identified the original proposal associated with twenty - two cfht papers published between 1997 and 1999 .one of us ( d.c . )has access to the tac ranking for these proposals as he served as senior resident astronomer for three years .we are thus able to look at the correlation between the tac ranking of the proposal and the predicted fcc for the papers resulting from those observations .we selected only those papers that were based on a proposal that used only data from cfht and data from a single observing run .figure 5 shows the fcc versus tac ranking ( a small number is a higher ranking ) for these twenty - two proposals . except for onehighly ranked , highly cited study , there is a weak inverse correlation from a simple linear fit to the data .another interpretation of the data is that highly - ranked studies ( 0.4 ) show small scatter around a constant number of citations , while lower - ranked studies show a much larger scatter .is the tac being conservative and ranking sure bets higher while riskier studies end up with lower rankings ?we want to emphasise that this result is very preliminary , as only twenty - two papers are included , and it relies on the predicted final citation counts .we found no dependence of the fcc on the number of nights of telescope time awarded for the same twenty - two programs .we have studied the productivity and impact of the cfht over its twenty - year history by looking at the number of papers in refereed journals and the number of citations to these papers .it took ten years for cfht to achieve and maintain a high level of paper production .we attribute this to a fairly long commissioning period for the telescope and the time to develop a competitive suite of instrumentation .direct imagers ( photographic plates , ccd imagers ) have been cfht s most productive instruments , both in the number of papers and the number of papers per night of scheduled telescope time .the excellent image quality at cfht is a significant factor in direct imaging s high productivity .we retrieved citation counts and the years of the citing papers from the ads for all cfht papers in our database . using this data, we developed a procedure for estimating the number of citations that a paper can be expected to receive after a period of almost twenty years .this estimation allowed us to compare the citation numbers for papers from different years and to compare the impact of different instruments .the instrument that produced the papers with the highest impact ( average citations / paper ) was the multi - object spectrograph ( mos ) , which was used in the highly cited cfrs and cnoc studies .direct imaging had the second highest impact . in looking at the number of citations / night of allocated time ,direct imaging had the highest impact , followed by mos and the two coud spectrographs .the efficiency of converting observing nights into papers or citations varies considerably between instruments .for example , there is a factor of five difference in the average final citation count per night between `` direct imaging '' and the fts . in order to maximize a telescope s impact, one might consider offering only the `` high - efficiency '' instruments .finally , a look at the correlation between the predicted final citation count and the tac ranking of the observing proposal , showed a weak negative correlation , i.e. 
lower-ranked proposals end up with a higher number of citations. An alternative interpretation is that higher-ranked proposals show a lower number of citations with a small scatter, while lower-ranked proposals have more scatter in the number of citations, and some of these end up with significantly more citations than most of the higher-ranked proposals.
We acknowledge the Canada-France-Hawaii Telescope and the Herzberg Institute of Astrophysics for their support of this project. We thank Pierre Couturier for his impetus in initiating the analysis of CFHT publications. We also thank Gordon W. Bryson for his editing and Virginia Smith for her assistance during her mentorship at CFHT. This research has made use of NASA's Astrophysics Data System bibliographic services.
References
Abt, H.A. 1981, , 93, 207
Abt, H.A. 1985, , 97, 1050
Ashish, D., & Kreft, T. 1998, http://www.mvps.org/access/modules/mdl0037.htm
Benn, C.R., & Sánchez, S.F. 2001, , 113, 385
Burton, R.E., & Kebler, R.W. 1960, American Documentation, 11, 18
Carlberg, R.G., Yee, H., Ellingson, E., Abraham, R., Gravel, P., Morris, S., & Pritchet, C.J. 1996, , 462, 32
Cowie, L.L., Songaila, A., Hu, E.M., & Cohen, J.G. 1996, , 112, 839
Cuillandre, J.-C., Mellier, Y., Dupin, J.-P., Tilloles, P., Murowinski, R., Crampton, D., Wooff, R., & Luppino, G.A. 1996, , 108, 1120
Kormendy, J. 1985, , 295, 73
Kurtz, M.J., Eichhorn, G., Accomazzi, A., Grant, C., Murray, S.S., & Watson, J.M. 2000, , 143, 41
Lilly, S.J., Cowie, L.L., & Gardner, J.P. 1991, , 369, 79
Lilly, S.J., Tresse, L., Hammer, F., Crampton, D., & Le Fèvre, O. 1995, , 455, 108
Lilly, S.J., Le Fèvre, O., Hammer, F., & Crampton, D. 1996, , 460, L1
McCarthy, P.J., van Breugel, W., Spinrad, H., & Djorgovski, S. 1987, , 321, 29
McClure, R.D., Grundmann, W.A., Rambold, W.N., Fletcher, M.J., Richardson, E.H., Stilburn, J.R., Racine, R., Christian, C.A., & Waddell, P. 1989, , 101, 1156
Metzger, M.R., Luppino, G.A., & Miyazaki, S. 1995, AAS, 187, 7305
Peterson, C. 1988, , 100, 106
Pierce, M.J., Welch, D.L., McClure, R.D., van den Bergh, S., Racine, R., & Stetson, P.B. 1994, , 371, 385
Spite, F., & Spite, M. 1982, , 115, 357
Trimble, V. 1995, , 107, 977
Tyson, J.A., Wenk, R.A., & Valdes, F. 1990, , 349, 1
van den Bergh, S. 1980, , 92, 409
Table 1 summarizes the number of papers and the average citations per paper for eight journals, split by partner (Canada / France / UH); totals: 1065 papers (549 / 397 / 117) with an overall average cpp of 20.35 (21.42 / 15.75 / 31.30 for the three partners, respectively).
Table 2: productivity and impact of the top paper-producing instruments.
instrument | papers | papers per night | FCC per paper | FCC per night
direct imaging | 358 | 0.36 | 35.47 | 12.86
coudé spectrographs | 169 | 0.25 | 28.37 | 6.97
multi-object spectrograph (MOS) | 75 | 0.19 | 48.38 | 9.92
Fourier transform spectrometer | 64 | 0.16 | 17.28 | 2.70
HRCam | 49 | 0.24 | 27.60 | 6.61
Herzberg spectrograph | 34 | 0.24 | 29.11 | 6.86
Fabry-Perot | 24 | 0.16 | 19.84 | 3.11
adaptive optics near-IR imaging | 21 | 0.21 | 20.76 | 4.45
SIS | 18 | 0.12 | 26.82 | 3.24
Table 3: the most prolific first authors (total papers; papers in four major journals; total citations; average cpp; projected FCC).
Hutchings, J. | 38 | 6 / 0 / 21 / 8 | 909 | 23.9 | 1106
Davidge, T. | 35 | 12 / 0 / 19 / 1 | 310 | 8.9 | 492
Nieto, J.-L. | 16 | 11 / 0 / 1 / 0 | 282 | 17.6 | 333
Le Fèvre, O. | 15 | 3 / 9 / 0 / 0 | 333 | 22.2 | 525
Kormendy, J. | 14 | 12 / 2 / 0 / 0 | 719 | 51.4 | 1024
Boesgaard, A. | 13 | 1 / 0 / 0 / 3 | 577 | 44.4 | 682
Crampton, D. | 13 | 3 / 1 / 7 / 2 | 200 | 15.4 | 239
Richer, H. | 13 | 10 / 1 / 0 / 0 | 423 | 32.5 | 515
Harris, W. | 12 | 3 / 6 / 2 / 0 | 389 | 32.4 | 513
Table 4: the ten papers with the highest projected final citation counts (titles):
- The Canada-France Redshift Survey. VI. Evolution of the galaxy luminosity function to z ~ 1
- A deep imaging and spectroscopic survey of faint galaxies
- The Canada-France Redshift Survey: the luminosity density and star formation history of the universe to z ~ 1
- A correlation between the radio and optical morphologies of distant 3CR radio galaxies
- Galaxy cluster virial masses and Omega
- Abundance of lithium in unevolved halo stars and old disk stars - interpretation and consequences
- New insight on galaxy formation and evolution from Keck spectroscopy of the Hawaii deep fields
- Families of ellipsoidal stellar systems and the formation of dwarf elliptical galaxies
- Detection of systematic gravitational lens galaxy image alignments - mapping dark matter in galaxy clusters
- The Hubble constant and Virgo cluster distance from observations of Cepheid variables
|
we have investigated the productivity and impact of the canada - france - hawaii telescope ( cfht ) during its twenty - year history . cfht has maintained a database of refereed publications based on data obtained with cfht since first light in 1979 . for each paper , we analysed the cumulative number of citations and the citation counts for each year , from data supplied by the nasa astrophysics data system ( ads ) . we have compared citation counts retrieved from the ads with those from the institute for scientific information ( isi ) for a small sample of papers . we have developed a procedure that allows us to compare citation counts between older and newer papers in order to judge their relative impact . we looked at the number of papers and citations not only by year , but also by the instrument used to obtain the data . we also provide a preliminary look as to whether programs given a higher ranking by the time allocation committee ( tac ) produced papers with a higher number of citations .
|
nearly all the physical processes that determine the structure and evolution of stars occur in their ( deep ) interiors .the production of nuclear energy that powers stars takes place in their cores for most of their lifetime .the effects of the physical processes that modify the simplest models of stellar evolution , such as mixing and diffusion , also predominantly take place in the inside of stars .the light that we receive from the stars is the main information that astronomers can use to study the universe .however , the light of the stars is radiated away from their surfaces , carrying no memory of its origin in the deep interior .therefore it would seem that there is no way that the analysis of starlight tells us about the physics going on in the unobservable stellar interiors . however , there are stars that reveal more about themselves than others ._ variable stars _ are objects for which one can observe time - dependent light output , on a time scale shorter than that of evolutionary changes .there are two major groups of variable star , extrinsic and intrinsic variables .extrinsic variables do not change their light output by themselves .for example , the light changes of eclipsing binary stars are caused by two stars passing in front of each other , so that light coming from one of them is periodically blocked .the individual components of eclipsing binary stars are not necessarily variable . by analysing the temporal light variations and orbital motion of eclipsing binaries, one can determine their fundamental properties , and by assuming that their components are otherwise normal stars , determine fundamental properties of all stars , most importantly their masses . in this way , stars and stellar systems can be understood better .intrinsic variables , on the other hand , change their light output physically .supernovae , which are stellar implosions / explosions , can become brighter than their host galaxies because of the ejection of large amounts of material .even more revealing are stars that vary their sizes and/or shapes : _pulsating variables_. the first pulsating star was discovered more than 400 years ago . in 1596 david fabriciusremarked that the star ceti ( subsequently named `` mira '' , the wonderful ) disappeared from the visible sky .about 40 years later , it was realized that it did so every 11 months ; the first periodic variable star was known ( although we know today that in this case , the term `` periodic '' is not correct in a strict sense ) . in 1784john goodricke discovered the variability of cephei , and in 1914 enough evidence had been collected that harlow shapley was able to demonstrate that the variations of cephei and related stars ( also simply called `` cepheids '' ) was due to radial pulsation .also in the teens of the previous century , henrietta leavitt pointed out that the cepheids in the small magellanic clouds follow a period - luminosity relation , still one of the fundamental methods to determine distances in the visible universe - and one of the major astrophysical applications of pulsating stars . 
with the ever increasing precision in photometric and radial velocity measurements , a large number of groups of pulsating star is nowadays known .figure 1 shows theoretical ( in the sense that the logarithm of the effective temperature is plotted versus the logarithm of the stellar luminosity ) hr diagrams containing the regions in which pulsating stars were known some 40 years ago and today .table 1 gives a rough overview of the classes of pulsator contained in fig ..selected classes of pulsating star [ cols="<,^,<",options="header " , ] the different types of pulsator have historically been classified on a phenomenological basis .the separation between those types has usually later turned out to have a physical reason .the individual classes are different in terms of types of excited pulsation mode ( or , less physical , pulsation period ) , mass and evolutionary state , hence temperature and luminosity .the names of these classes are assigned either after the name of a prototypical star or give some description of the type of variability and star .it must be pointed out that the present overview does by far not contain all types and subgroups of pulsating star that have been suggested .the cepheids are subdivided according to population and evolutionary state , into cephei , w vir , rv tau and bl her stars .jeffery ( 2008 ) proposed a number of types of evolved variable , there are the luminous blue variables , and there may be new classes of white dwarf pulsator , oscillating red and brown dwarfs , etc . furthermore ,some of the instability domains of different pulsators overlap and indeed , some objects called `` hybrid '' pulsators that show oscillations of two distinct types , have been discovered .also , the instability boundaries of some of these variables may need to be ( considerably ) extended and/or revised in the near future .for instance , there may be supergiant spb stars , and solar - like oscillations are expected in all stars having a significant surface convection zone . whereas the classification of and distinction between the different classes of pulsating star , that are historically grown and modified designations , can in some casesbe called arbitrary today , one recognizes an important fact : pulsating stars populate almost the entire hr diagram , and this means that they can be used to learn something about the physics of most stars .what can make a star oscillate ?after all , stars are in hydrostatic equilibrium : the gravitational pull on the mass elements of normal stars is balanced by gas pressure . if something would hit a star , the inwards moving regions will be heated , and the increased heat loss damps the motion . consequently , self - excited pulsations require a driving mechanism that overcomes this damping and results in a periodic oscillation .four major driving mechanisms have been proposed . mechanism _( rosseland & randers 1938 ) assumes a variation in the stellar nuclear reaction rate : if a nuclear burning region is compressed , the temperature rises and more energy is produced .this gives causes expansion , the pressure drops , and so does the energy generation : the motion is reversed and oscillations develop .the mechanism ( where is the usual designator for the nuclear reaction rate in formulae ) , that operates similar to a diesel engine , has been proposed for several different types of pulsating star , such as our sun and pulsating white dwarfs , but observational proof for oscillations driven by it is still lacking . 
considerably more successful in explaining stellar oscillations is the _ mechanism _( baker & kippenhahn 1962 and references therein ) . in layers where the opacity increases and/or the third adiabatic exponent decreases with increasing temperature , flux coming from inner layers can be temporally stored .such layers in the stellar interior are generally associated with regions where ( partial ) ionization of certain chemical elements occurs .the energy accumulated in this layer during compression is additionally released when the layer tries to reach its equilibrium state by expanding .therefore , the star can expand beyond its equilibrium radius .when the material recedes , energy is again stored in the stellar interior , and the whole cycle repeats : a periodic stellar oscillation evolves .this mechanism is also called the _ eddington valve _ , and it explains the variability of most of the known classes of pulsating star .the classical pulsators in the instability strip , ranging from the cephei stars to the rr lyrae stars and the scuti stars draw their pulsation power from the heii ionization zone , whereas the oscillations of the roap stars are believed to be excited in the hi and hei ionization zones , those of the mira variables in the hiionization zone , and those of the cephei and spb stars are triggered in the ionization zone of the iron - group elements . a very similar mechanism , in the sense that it is also due to a region in the star behaving like a valve , is convective blocking ( or _ convective driving _ ) . in this scheme ( e.g. , brickhill 1991 ) , the base of a convection zone blocks the flux from the interior for some time , releasing the energy stored during compression in the subsequent expansion phase .the pulsations of white dwarf stars of spectral types da and db as well as doradus stars are thought to be excited ( at least partly ) via this mechanism , that may also be of importance in cepheids and mira stars. finally , the pulsations of the sun and solar - like stars , that are intrinsically stable and therefore not called self - excited , are _ stochastically excited _ due to turbulence in their surface convection zones .the vigorous convective motion in the outer surface layers generates acoustic noise in a broad frequency range , which excites solar - like oscillation modes . due to the large number of convective elements on the surface , the excitation is of random nature , and the amplitudes of the oscillations are temporally highly variable .given the physical nature of these driving mechanisms , the existence of the different instability domains in the hr diagram ( cf .1 ) easily follows .a star must fulfil certain physical conditions that it can pulsate , as the driving mechanism must be located in a specific part of the star to give rise to observable oscillations .more physically speaking , in the case of self - excited pulsations the driving region must be located in a region where the thermal , and/or convective time scale closely corresponds the dynamical ( pulsational ) time scale .the consequence of the previous requirement is a constraint on the interior structure of a pulsating variable : if the instability region of some class of pulsating variable is accurately known , models of stars incorporating its excitation mechanism must be able to reproduce it . in this way, details of the input physics describing the interior structures of stars can be modified to reflect the observations . 
however , this is not the only method available to study stellar structure and evolution from stellar pulsation .models also need to explain the oscillation properties of individual stars .we are fortunate to be in the presence of stars having very complex pulsation patterns , multiperiodic radial and nonradial oscillators .the research field that determines the internal constitution of stars from their pulsations is called _it is now due to make clear that the star whose interior structure is best known is the star closest to us , the sun .its surface can be resolved in two dimensions and millions of pulsation modes can be used for seismic analyses .the related research field is _ helioseismology _ , and it has been extensively reviewed elsewhere ( e.g. , christensen - dalsgaard 2002 , gizon et al .the present article will not touch upon helioseismology .the basic idea of asteroseismology ( see gough 1996 for a discussion of why astero " ) is analogous to the determination of the earth s inner structure using earthquakes : these generate seismic waves that travel through rock and other interior structures of our planet , which can then be sounded . today, the earth s interior has been completely mapped down to scales of a few hundred kilometres .the strongest earthquakes can even cause _ normal mode oscillations _( e.g. , see montagner & roult 2008 ) , that is , the whole planet vibrates with its natural frequencies ( also called _ eigenfrequencies _ in theoretical analyses ) .these normal modes are most valuable in determining the deep interior structure of the earth .asteroseismology does just the same : it uses the frequencies of the normal modes of pulsating stars ( that may be seen as `` starquakes '' ) as seismic waves .the eigenfrequencies of stellar models , that are dependent on their physical parameters and interior structures , are then matched to the observed ones .if a model succeeds in reproducing them , it is called a _ seismic model_. the pulsation modes are waves in the stellar interior , just like the waves that musical instruments resonate in .popular articles often compare the frequency pattern of stellar oscillations to the sounds of musical instruments .the superposition of the normal mode frequencies in which a star oscillates can therefore be seen as its sound , although it generally does not involve frequencies the human ear is susceptible to .there is a large variety of normal modes that stars can pulsate in .the simplest are _ radial modes _ : the star periodically expands and shrinks , and its spherical symmetry is preserved . the mathematical description of the displacement due to the oscillations results in differential equations of the sturm - liouville type that yields discrete eigensolutions : the radial mode frequencies of the given model .pulsation in _nonradial modes _ causes deviations from spherical symmetry : the star changes its shape .mathematically , this no longer results in an eigenvalue problem of the sturm - liouville type , and a large number of possible oscillation modes originates .the eigenfunctions are proportional to _ spherical harmonics _ : where is the angle from the polar axis ( colatitude ) , is the longitude , is the associated legendre polynomial , is a normalization constant and and are the spherical degree and azimuthal order of the oscillation . with obtain etc . etc . 
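For reference, the spherical harmonics referred to above take the standard form (in one common sign and normalization convention, written with a generic normalization constant; this is a textbook expression rather than a quotation from the original):
\[
Y_\ell^m(\theta,\phi) = N_\ell^m\, P_\ell^m(\cos\theta)\, e^{im\phi},
\qquad
N_\ell^m = (-1)^m \sqrt{\frac{2\ell+1}{4\pi}\,\frac{(\ell-m)!}{(\ell+m)!}} ,
\]
and, for the lowest degrees,
\[
P_0^0(\cos\theta)=1,\quad
P_1^0(\cos\theta)=\cos\theta,\quad
P_1^1(\cos\theta)=\sin\theta,\quad
P_2^0(\cos\theta)=\tfrac12\bigl(3\cos^2\theta-1\bigr),\quad
P_2^1(\cos\theta)=3\cos\theta\sin\theta,\quad
P_2^2(\cos\theta)=3\sin^2\theta .
\]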
what does this mean in practice ?nonradial pulsation modes generate distortions on the stellar surface described by these spherical harmonics .the oscillations separate the stellar surface into expanding and receding as well as heating or cooling areas .a graphical example of these is shown in fig . 2 . ,arranged in the same way as the previous expressions for the associated legendre polynomials . whilst the outward moving areas of the star are coloured in dark grey , the light grey areas move inward , and vice versa .the pole and equator of the star are indicated .adapted from telting & schrijvers ( 1997).,width=566 ] between the expanding and receding surface areas , no motion takes place .the lines along which this is the case are called the _ node lines _ ; the nulls of the previously specified associated legendre polynomials specify the locations of these node lines in polar direction .the total number of node lines on the stellar surface is the _ spherical degree _ , which must in all cases be larger than or equal to zero .the number of node lines that are intersected when travelling around the stellar equator is the _ azimuthal order _ . and are the quantities that appear in the expressions for the eigenfunctions and for the associated legendre polynomial given above , and they are used for the classification of the pulsation modes . pulsation modes with are travelling waves (as can also be seen in the defining equation for spherical harmonics ) , and as they can run either with or against the rotation of the star , m can lie in the interval $ ] .modes with are called _ axisymmetric _ , modes with are named _ sectoral _ , and all other modes are referred to as _ tesseral _ modes .the third quantity needed to describe pulsation modes is the _ radial overtone _ ( sometimes also denoted in the literature ) , which is the number of nodes in the stellar interior .a mode that has no node in the interior is called a _ fundamental mode_. a mode with one interior node is called the _ first overtone _ , modes with two interior nodes are the _ second overtone _ , etc .an accurate account of mode classification from the theoretical point of view is given by deubner & gough ( 1984 ) .historically , observationally , and inconsistently with the theoretical definition , radial overtone modes have also been called the first and second harmonics , respectively , and have been abbreviated with f for the fundamental , as well as 1h ( or 1o ) , 2h ( or 2o ) etc . for the overtones .radial pulsations can be seen as modes with ( remember that ) ; all other modes are nonradial oscillations . modes with are also called _ dipole modes _ ; modes are _ quadrupole modes_. there are two major restoring forces for stellar oscillations that attempt to bring the star back in its equilibrium configuration : pressure and buoyancy ( gravity ) . for radial motion ,the gravitational force in a star increases during compression , so it would actually accelerate , and not restore , the oscillation . therefore, pressure must be the restoring force .on the other hand , for a predominantly transverse motion , gravity restores the motion through buoyancy , similar to what can be observed when throwing a stone into a pond .therefore , aside from their identification with the pulsational quantum numbers and , nonradial pulsation modes are also classified into p _ pressure ( p ) modes _ and _ gravity ( g ) modes_. 
these two sets of modes thus differ by the main direction of their motion ( radial / transverse ) , and their frequencies .pulsation modes with periods longer than that of the radial fundamental mode are usually g modes , whereas p modes have periods equal or shorter than that ; radial pulsations are always p modes .the different modes are often labelled with their radial overtone number , e.g. a p mode is a pressure mode with three radial nodes , and a g mode is a gravity mode with eight radial nodes .modes with no interior nodes are fundamental modes , or f modes . note that the f mode for does not exist , as a dipole motion of the entire star would require a movement of the stellar centre of mass , which is physically impossible .the propagation of pulsation modes in the stellar interior is governed by two frequencies .one of these is the _ lamb frequency _ , which is the inverse of the time needed to travel one horizontal wavelength at local sound speed .the other frequency describes at what rate a bubble of gas oscillates vertically around its equilibrium position at any given position inside a star ; it is called the _ brunt - vaisl _ frequency .these two quantities are defined as : where is the spherical degree , is the local velocity of sound , is the radius , is the local gravitational acceleration , and are local pressure and density in the unperturbed state , respectively , and is the first adiabatic exponent .the lamb and brunt - vaisl frequencies have the following implications : an oscillation with a frequency higher than both experiences pressure as the main restoring force in the corresponding part of the star .on the other hand , a vibration with a frequency lower than both and is restored mostly by buoyancy .in other words , if we have a stellar oscillation with an angular frequency , it is a p mode wherever , and it is a g mode wherever . in stellar interior regions where lies between the lamb and brunt - vaisl frequencies , the amplitude of the wave decreases exponentially with distance from the p and g mode propagation regions ; such parts in the stellar interior are called evanescent regions .a _ propagation diagram _ aids the visualization of this discussion ( fig . 3 ) . stellar model .the run of the lamb ( dashed line ) and brunt - vaisl ( full line ) frequencies with respect to fractional stellar radius is shown .some stellar pulsation modes are indicated with thin horizontal dashed lines , the circles are interior nodes . the lowest frequency oscillation ( lowest dashed line ) shown is a mode ; that with the highest frequency is a mode .the oscillation with intermediate frequency is a mixed mode .data kindly supplied by patrick lenz.,width=434 ] whereas the lamb frequency decreases monotonically towards the model s surface , the brunt - vaisl frequency shows a sharp peak near the model s centre and then rapidly drops to zero .this is because this model possesses a convective core , where , and the spike in is due to a region of chemical inhomogeneity . over a range of stellar models ,the behaviour of in the interior is usually simple , whereas may show considerable changes with evolutionary state and mass .however , is always zero in the stellar centre . in fig . 
3 ,the g mode is confined to the innermost parts of the star .it is trapped in the interior , and therefore unlikely to be observed on the surface .the p mode is concentrated near the stellar surface and may be observable .the intermediate frequency mode shows remarkable behaviour : it has three nodes in the outer regions of the star , but also one node in the g mode propagation region .this particular mode is capable of tunnelling through its narrow evanescent region ; it is a g mode in the deep interior , but a p mode closer to the surface . such modes are called _mixed modes_. as mentioned before , stellar pulsation modes can be excited in certain parts of the stellar interior and they can propagate in some regions , whereas in other regions they are damped .work integral _ is the energy gained by the pulsation mode averaged over one oscillation period .an evaluation of the work integral from the stellar centre to the surface is used to determine whether or not a given mode is globally excited in a stellar model . for excitation to occur ,the exciting forces must overcome those of damping and the work integral will be positive . the _ growth rate _ , where is the pulsation frequency and is the mode inertia , parametrizes the increase of oscillation energy during a pulsation cycle , and also indicates how rapidly the amplitude of a given mode increases . the normalized growth rate ( or stability parameter ) , which is the ratio of the radius - integrated work available for excitation to the radius - integrated total work ,is used to evaluate which pulsation modes are excited in the given model . if , a mode is driven and may reach observable amplitude ; if , a mode is driven in the entire stellar model , and if , a mode is damped everywhere in the model .the most widely used application of this stability parameter is the comparison of the excited modal frequency ranges as determined from observations and those predicted by theory .figure 4 shows how the frequency domains predicted to be excited change with the evolution of a 1.8 main sequence model . on the zero - age main sequence ( zams ) , only pure p modes of high radial overtone , and corresponding high frequencies , are excited in the model .later on , some of these become stable again , but modes with lower overtone become excited . at the end of main sequence evolution , a large range of p modes , mixed modes and g modesis predicted to be excited , and the frequency spectrum becomes dense ( and even denser as the model leaves the main sequence ) .the range of excited frequencies is an observable that , when compared with models , can give an estimate of the evolutionary state of the star .oscillation spectra of a 1.8 main sequence model , evolving from hotter to cooler effective temperature .pulsation modes excited in this model are shown with filled circles , stable modes with open circles .the types of mode on the zero - age main sequence ( zams ) are given .note the g modes intruding into the p mode domain.,width=529 ] the reason for the evolutionary change in the pulsation frequencies is that as the model evolves , its convective core , in which the g modes are trapped at the zams , shrinks and the g mode frequencies increase . at the same time the envelope expands due to the increased energy generation in the contracting nuclear burning core , causing a decrease of the p mode frequencies . 
at some point, some p and g modes attain the same frequencies and the modes begin to interact: they become mixed modes, with g mode character in the core and p mode character in the envelope ( see also fig. 3 ). the frequencies of these modes never reach exactly the same value; the modes just exchange physical properties. this effect is called _ avoided crossing _ or mode bumping ( aizenman, smeyers & weigert 1977 ). of the individual modes, particular astrophysical potential is carried by the mode that originates as g1 on the zams, as pointed out by dziembowski & pamyatnykh ( 1991 ). this mode is mostly trapped in the shrinking convective core ( for stars sufficiently massive to possess a convective core ), because the rapid change in mean molecular weight at its edge causes a spike in the brunt-väisälä frequency ( cf. fig. 3 ). the frequency of this mode is thus dependent on the size of the convective core. the most important parameter determining the evolution of main sequence stars is the amount of nuclear fuel available. stellar core convection can mix material from the radiative layer on top of the core into it, providing more nuclear fuel. this mixing is often named _ convective core overshooting _ and is parametrized in theoretical models. overshooting also decreases the gradient in mean molecular weight at the edge of the core, which means that the frequency of the mode that is g1 on the zams measures the convective core size, a most important quantity for astrophysics in general. to emphasize its sensitivity to the extent of stellar core convection, this mode has also been given a name of its own.

in the following, only p and g modes, and mixed modes of these types, will be considered. other types of mode, such as r modes ( torsional oscillations that may occur in rotating stars ), convectively excited g modes in rotating stars, strange modes ( showing up in calculations of highly nonadiabatic environments ) or gravitational-wave w modes, will not be discussed, as they have been of little practical importance for the seismic sounding of stars at the time of this writing.

the frequencies of the p and g modes of pulsating stars depend strongly on their structure. however, when high radial overtones are considered, some simple relations between mode frequencies emerge. these are derived from _ asymptotic theory _. the classical reference on the subject is tassoul ( 1980 ); a particularly instructive one is gough ( 1986 ). in the high-overtone limit one finds for p modes

$$ \nu_{n,\ell} \simeq \Delta\nu \left( n + \frac{\ell}{2} + \epsilon \right) - \ell(\ell+1)\, D_0 , \qquad (7) $$

whereas for g modes

$$ P_{n,\ell} \simeq \frac{\Pi_0}{\sqrt{\ell(\ell+1)}} \left( n + \delta \right) , \qquad (8) $$

where Δν is the inverse of the sound travel time through the centre of the star, D₀ is a frequency separation dependent on the stellar evolutionary state, Π₀ is the asymptotic period spacing set by the integral of the brunt-väisälä frequency throughout the star, ℓ and n are the spherical degree and radial overtone, respectively, and ε and δ are stellar structure parameters. these relations have important consequences: low-degree p modes of consecutive high radial overtones of the same spherical degree are equally spaced in frequency, whereas low-degree g modes of consecutive high radial overtones of the same spherical degree are equally spaced in period. furthermore, if the parameter D₀ were zero, eq. 7 indicates that the frequencies of high-overtone p modes of even degrees would coincide, as would those of odd degrees, and the odd-degree modes would have frequencies intermediate between those of even degree. in realistic stellar models, these relations hold approximately, but not exactly.
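as a quick illustration of eqs. 7 and 8, the sketch below ( python; all numerical values are illustrative assumptions, not fits to any particular star ) generates a synthetic high-overtone p mode spectrum that is equally spaced in frequency and a g mode spectrum that is equally spaced in period.

```python
import numpy as np

def asymptotic_p_freqs(n, ell, dnu, eps=1.4, d0=1.5):
    """eq. 7: high-overtone p-mode frequencies, equally spaced by dnu."""
    return dnu * (n + ell / 2.0 + eps) - ell * (ell + 1) * d0

def asymptotic_g_periods(n, ell, pi0, delta=0.5):
    """eq. 8: high-overtone g-mode periods, equally spaced by pi0/sqrt(l(l+1))."""
    return pi0 / np.sqrt(ell * (ell + 1)) * (n + delta)

n = np.arange(15, 26)                                  # radial overtones considered
nu_l0 = asymptotic_p_freqs(n, ell=0, dnu=106.0)        # sun-like spacing, microhertz
nu_l1 = asymptotic_p_freqs(n, ell=1, dnu=106.0)
per_l1 = asymptotic_g_periods(n, ell=1, pi0=56.0)      # white-dwarf-like spacing, seconds

print("l=0 p-mode frequency spacings:", np.diff(nu_l0))   # constant, = dnu
print("l=1 g-mode period spacings:   ", np.diff(per_l1))  # constant, = pi0/sqrt(2)
```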
in a nonrotating star, the frequencies of modes with m ≠ 0 are the same as those of the m = 0 modes. however, as the m ≠ 0 modes are travelling waves, their frequencies separate in the observer's frame when looking at a rotating star: the mode moving with rotation appears at higher frequency, the mode moving against rotation appears at lower frequency, and the frequency difference with respect to the nonrotating value is m times the rotation frequency ( e.g., see cox 1984 ). this effect is called _ rotational frequency splitting _ and is one basic tool of asteroseismology: if such splittings are observed, the rotation frequency of the star can be determined. unfortunately, reality is not quite as simple as that. the coriolis force acts on the travelling waves and modifies their frequencies. in addition, the modes cause tidal bulges, on which centrifugal forces act. therefore, the frequencies of these modes are often expressed as

$$ \nu_{n \ell m} = \nu_{n \ell 0} + m \, ( 1 - C_{n\ell} ) \, \nu_{\rm rot} + D_{n\ell} \, m^2 \, \frac{\nu_{\rm rot}^2}{\nu_{n \ell 0}} $$

in the case of ( moderately ) slow stellar rotation ( dziembowski & goode 1992 ). here ν_{nℓm} is the observed frequency of the mode, ν_rot is the stellar rotation frequency, and C_{nℓ} and D_{nℓ} are constants that describe the effects of the coriolis and centrifugal forces, respectively; C_{nℓ} is also called the ledoux constant. these constants are usually determined from stellar model calculations. the rotational splitting constant also approaches asymptotic values for high radial overtones: for g modes, C_{nℓ} becomes 1/(ℓ(ℓ+1)), and for p modes C_{nℓ} tends to zero in the asymptotic limit.

it should be made clear that the previous formula is only an approximation for the case that rotation can be treated as a perturbation to the equilibrium state: stellar rotation distorts the spherical shape of a star. as a consequence, the individual modes can no longer be described with single spherical harmonics. for instance, radial modes receive contamination from ℓ = 2 and other modes with even ℓ; vice versa, ℓ = 1 modes obtain some ℓ = 3, ℓ = 5, etc. contributions. in the previous formula this means that for rapid rotation higher-order terms may be added, but to arrive at reliable results, two-dimensional numerical calculations are really required. _ rotational mode coupling _ can affect the properties of oscillation modes with close frequencies ( e.g., see daszynska-daszkiewicz et al. ). furthermore, the rotational distortion causes the stellar temperature to increase at the flattened poles and to decrease at the equatorial bulge. it follows that asteroseismology of rapidly rotating stars has an additional degree of complexity.
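the first-order part of the splitting expression above is trivial to evaluate, and it is the part used in practice to read a rotation frequency off an observed multiplet. a minimal sketch ( python ); all numerical values are illustrative assumptions only.

```python
import numpy as np

def split_multiplet(nu0, ell, nu_rot, c_nl, d_nl=0.0):
    """frequencies of the 2l+1 components of a rotationally split mode:
    first-order (coriolis) term plus an optional second-order (centrifugal) term."""
    m = np.arange(-ell, ell + 1)
    return nu0 + m * (1.0 - c_nl) * nu_rot + d_nl * m**2 * nu_rot**2 / nu0

def rotation_from_splitting(delta_nu_split, c_nl):
    """rotation frequency from the observed spacing between adjacent m components."""
    return delta_nu_split / (1.0 - c_nl)

# illustrative l = 1 g mode: asymptotic ledoux constant C = 1/(l(l+1)) = 0.5
nu0 = 1000.0          # microhertz, assumed mode frequency
nu_rot = 5.0          # microhertz, assumed rotation frequency
triplet = split_multiplet(nu0, ell=1, nu_rot=nu_rot, c_nl=0.5)
print("triplet:", triplet)                                   # spacing = (1 - C) * nu_rot = 2.5
print("recovered nu_rot:", rotation_from_splitting(2.5, c_nl=0.5))
```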
even more complexity is added to the theoretical treatment of stellar pulsation by the presence of a magnetic field. a weak magnetic field would generate a second-order perturbation to the pulsation frequencies, just like the centrifugal force in the case of rotation, but with opposite sign, i.e. the observed oscillation frequencies would increase with respect to the non-magnetic case ( e.g., jones et al. 1989 ). in the presence of a strong magnetic field, the effect would be more severe: if the field were oblique to the rotation axis ( just like the earth's magnetic field! ), the pulsation axis would align with the magnetic axis, and no longer with the rotation axis, as implicitly assumed so far. this means that an observer would see each pulsation mode at a varying angle over the stellar rotation period, creating amplitude and phase variations with exactly that period, which are therefore separable from the effects of rotational m-mode splitting. a star oscillating in this way is called an _ oblique pulsator _ ( kurtz 1982 ). we note for completeness that the most general case of rotational splitting is the _ nonaligned pulsator _ ( pesnell 1985 ), where additional components of a given mode appear, with frequency separations proportional to the rotation period and to the angle between the rotation and pulsation axes.

to summarize, different stellar pulsation modes propagate in different interior regions, and their energy within those regions is not equally distributed. each single pulsation mode has a different cavity, and its oscillation frequency is determined by the physical conditions in that cavity. this means that different modes are sensitive to the physical conditions in different parts of the stellar interior. some modes teach us more about stellar envelopes, whereas other modes tell us about the deep interior. the more modes of different type are detected in a given star, the more complete our knowledge about its inner structure can become. some stars do us the favour of oscillating in many of these radial and/or nonradial modes simultaneously. interior structure models of the stars can then be refined by measuring the oscillation frequencies of these stars, identifying them with their pulsational quantum numbers, and reproducing all of these with stellar models. this method is very sensitive because stellar oscillation frequencies can be measured to extremely high precision ( kepler et al. 2005 ). in the following it will be described what observables are available to base asteroseismic models on, how the measurements can be interpreted, and how observations and theory are used to develop methods for asteroseismic interpretations.

because stellar oscillations generate motions and temperature variations on the surface, they result in observable variability. the interplay of these variations causes light, radial velocity and line profile changes. pulsating stars can thus be studied both photometrically and spectroscopically, via time series measurements. these time series are subjected to _ frequency analysis _, meaning that the constituent signals are extracted from the data. in many cases, this is done by harmonic analysis, transforming the time series into frequency / amplitude space, e.g. by using the discrete fourier transformation of the input function, corresponding to the time series of the measurements. periodograms, amplitude or power spectra are means of visualizing the results; an example is given in fig. 5. the amplitude spectrum in fig. 5 can be used to estimate the frequencies of the dominant signals in the time series. in many cases, the analysis is carried forward by fitting sinusoids to the data, determining and optimizing their frequencies, amplitudes and phases, often by least squares methods. it has also become common practice to subtract this optimized fit from the data and to compute periodograms of the residuals, a procedure called _ prewhitening _. this process is repeated until no more significant signals can be found in the data ( a most delicate decision! ).
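the fourier-plus-prewhitening procedure just described can be sketched in a few lines of code. the following ( python, numpy only ) computes a discrete fourier amplitude spectrum on an arbitrary time sampling, picks the highest peak, fits a sinusoid at that frequency by linear least squares, subtracts it, and repeats; the synthetic two-frequency light curve and the crude signal-to-noise stopping criterion are illustrative assumptions, not a recommendation for real data.

```python
import numpy as np

def amplitude_spectrum(t, y, freqs):
    """discrete fourier amplitude spectrum of (possibly unevenly sampled) data."""
    ft = np.array([np.sum(y * np.exp(-2j * np.pi * f * t)) for f in freqs])
    return 2.0 * np.abs(ft) / len(t)

def fit_sinusoid(t, y, f):
    """least-squares fit of a + b sin(2 pi f t) + c cos(2 pi f t); returns model and amplitude."""
    A = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef, np.hypot(coef[1], coef[2])

def prewhiten(t, y, freqs, snr_stop=4.0, max_iter=10):
    """iteratively extract dominant frequencies until the highest peak drops
    below snr_stop times the mean amplitude level of the residual spectrum."""
    found, resid = [], y.copy()
    for _ in range(max_iter):
        amp = amplitude_spectrum(t, resid, freqs)
        if amp.max() < snr_stop * np.mean(amp):
            break
        f_peak = freqs[np.argmax(amp)]
        model, a_fit = fit_sinusoid(t, resid, f_peak)
        found.append((f_peak, a_fit))
        resid = resid - model
    return found

# synthetic test data: two signals (5.1 and 7.3 cycles/day) plus noise
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 20, 600))                           # days
y = 3.0 * np.sin(2 * np.pi * 5.1 * t) + 1.2 * np.sin(2 * np.pi * 7.3 * t + 0.4)
y += rng.normal(0, 0.5, t.size)
print(prewhiten(t, y, freqs=np.linspace(0.1, 12, 3000)))
```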
depending on the specific case and the requirements of the data set, a large number of alternative frequency analysis methods can also be applied, such as phase-dispersion minimization, autocorrelation methods, wavelet analysis, etc. care must be taken to keep the limitations of the data sets and of the applied methods in mind. periods no longer than the data set itself can be reliably determined, and adjacent frequencies can only be resolved if they are separated by more than the inverse length of the data set. nonsinusoidal signals cause harmonics and combination frequencies in fourier amplitude spectra that must not be mistaken for independent mode frequencies. no frequencies higher than the inverse of twice the median spacing between consecutive data points can be unambiguously detected; this highest retrievable frequency of a given set of measurements is also called its _ nyquist frequency _.

having determined the frequencies characterizing the stellar variability, the next step is their interpretation. measurements of distant stars have an important limitation: nonradial oscillations create patterns of brighter and fainter, approaching and receding areas on the stellar surface. however, as a distant observer can usually not resolve the stellar surface, she or he can only measure the joint effect of the pulsations in light and radial velocity. as a consequence, the effects of oscillations with high spherical degree average out in disk-integrated measurements, and their observed amplitudes are reduced with respect to the intrinsic value. this effect is called _ geometric cancellation _ ( dziembowski 1977 ); calculations show that the observed amplitude drops rapidly with increasing ℓ. in ground-based observational studies it is mostly assumed ( and confirmed ) that only modes with ℓ ≤ 2 are observed in light and radial velocity, with a few exceptions of ℓ up to 4. radial velocity measurements are somewhat more sensitive to higher ℓ than photometric observations.

once the oscillation frequencies of a given star have been determined, how can asteroseismic use be made of them? as an example, high-precision radial velocity measurements of the nearby star α centauri a showed the presence of solar-like oscillations; the power spectrum ( square of amplitude vs. frequency ) of these data is shown in fig. 6.

[ fig. 6 — power spectrum of the radial velocity measurements of α centauri a ( bedding et al. 2004 ); the vertical dotted lines are separated by 106.2 μHz. ]

this graph contains a series of maxima equally spaced in frequency: a high-overtone p mode spectrum, as predicted by asymptotic theory. the mean frequency spacing is 106.2 μHz. however, it is obvious that only every other one of the strongest peaks conforms to this spacing; there are other signals in between. the signals halfway between the vertical lines in fig. 6 are a consequence of eq. 7: these are modes of odd spherical degree, which lie halfway between those of even degree. given the effects of geometrical cancellation, it is straightforward to assume that these are the modes of lowest ℓ, viz. ℓ = 0 and 1. however, there is more information present. a close look at fig. 6 reveals that many of the strongest peaks seem to be split.
again, the explanation for this finding lies within eq. 7: the close neighbours are modes with ℓ = 2 and ℓ = 3. these frequencies no longer have exactly the same values as those of degree ℓ = 0 and 1 because the stellar interior has structure. as stars evolve on the main sequence, their nuclear burning cores shrink, increase in density, and change in chemical composition as evolution progresses. this alters the acoustic sound speed in the core and is reflected in the frequency differences between modes whose degrees differ by two, also called the _ small frequency separation _ δν. on the other hand, evolution causes expansion of the outer regions of stars, which become more tenuous; this increases the sound travel time through the star. it is measurable via the frequency difference between consecutive radial overtones and is called the _ large frequency separation _ Δν. the large and the small separations can be computed for a range of theoretical stellar models. it turns out that a plot of δν vs. Δν allows an unambiguous determination of stellar mass and evolutionary state, and this method works particularly well for main sequence models with parameters similar to our sun. this diagnostic is called an _ asteroseismic hr diagram _ ( christensen-dalsgaard 1988 ).

another important tool to analyse high-overtone p mode pulsation spectra is the _ echelle diagram _. this diagram plots the oscillation frequencies versus their modulus with respect to the large separation. figure 7 shows the echelle diagram for α cen a constructed from the frequencies of the signals apparent in fig. 6.

[ fig. 7 — echelle diagram for α cen a; the ridges correspond to modes of the same spherical degree ℓ. ]

the frequencies fall onto four distinct ridges, corresponding to modes of the same spherical degree; in this way, ℓ can be identified. from left to right, the ridges correspond to ℓ = 2, 0, 3 and 1, respectively, and it can be seen that, within the errors, the separations between the ℓ = 0 / 2 and the ℓ = 1 / 3 ridges are consistent with eq. 7. the scatter in this diagram is due to a combination of the temporal resolution of the data and, more importantly, the finite lifetimes of the stochastically excited modes. the example of α cen a shows that once a sufficient number of intrinsic pulsation frequencies of a given star has been determined, one may identify their spherical degrees by _ pattern recognition _.

this method is also applicable to the oscillation spectra of pulsating white dwarf stars, as exemplified in fig. 8. the series of peaks discernible in fig. 8 does not form a pattern of equally spaced frequencies, but of equally spaced periods. this is the signature of a high-overtone g mode pulsation spectrum ( eq. 8 ). in this case, all the strong peaks are roughly aligned with the vertical lines; there are no strong modes in between. some of the expected signals are missing, but in most cases only apparently: they are just much weaker in amplitude. a mean period spacing of 39.5 seconds has been determined from this analysis, and the obvious identification of the strongest modes is ℓ = 1. equation 8 then results in Π₀ ≈ 56 s. for pulsating white dwarf stars, Π₀ is a measure of the stellar mass, which was consequently determined.
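both kinds of pattern — equal frequency spacings for high-overtone p modes and equal period spacings for high-overtone g modes — are easy to quantify once a list of frequencies or periods has been extracted. the sketch below ( python ) uses synthetic input built from eqs. 7 and 8 with illustrative numbers, not the α cen a or gd 358 measurements.

```python
import numpy as np

def large_separation(nu_l0):
    """mean large frequency separation from consecutive l = 0 overtones."""
    return np.mean(np.diff(np.sort(nu_l0)))

def small_separation(nu_l0, nu_l2):
    """mean small separation nu(n,0) - nu(n-1,2), for matched overtone lists."""
    return np.mean(np.sort(nu_l0)[1:] - np.sort(nu_l2)[:-1])

def echelle_coords(nu, dnu):
    """echelle-diagram coordinates: frequency modulo dnu, and frequency."""
    return nu % dnu, nu

def mean_period_spacing(periods):
    """mean period spacing of consecutive high-overtone g modes."""
    return np.mean(np.diff(np.sort(periods)))

# --- synthetic p-mode spectrum from eq. 7 (illustrative numbers, microhertz) ---
n = np.arange(15, 25)
dnu_in, d0 = 106.0, 1.5
nu_l0 = dnu_in * (n + 1.4)                    # l = 0
nu_l2 = dnu_in * (n + 1 + 1.4) - 6 * d0       # l = 2
print("large separation :", large_separation(nu_l0))          # ~ dnu_in
print("small separation :", small_separation(nu_l0, nu_l2))   # ~ 6 * D0
print("echelle abscissae:", echelle_coords(nu_l0, dnu_in)[0])

# --- synthetic g-mode spectrum from eq. 8 (illustrative numbers, seconds) ---
k = np.arange(10, 20)
periods = 56.0 / np.sqrt(2.0) * (k + 0.5)     # l = 1, Pi_0 = 56 s
dp = mean_period_spacing(periods)
print("mean period spacing:", dp, "-> Pi_0 =", dp * np.sqrt(2.0))
```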
furthermore, several of the strongest peaks in fig. 8 are split into triplets. these triplets are equally spaced in frequency, and they are the signature of rotationally split m modes ( cf. the rotational splitting expression above ). the rotation period of gd 358 could therefore also be determined, just from an inspection of the power spectrum of its photometric time series and a basic application of theory. the fact that the m-mode splitting only results in triplets strengthens the previous identification of the spherical degree as ℓ = 1. the relative amplitudes within those triplets depend on the inclination of the stellar pulsation axis ( eq. 4 ), which may then be determined, but in reality there are other, presently unknown, effects that modify the relative multiplet amplitudes.

there is even more information present. it is noticeable that the peaks in the amplitude spectrum do not perfectly conform to their asymptotically predicted locations. this is a sign of _ mode trapping _. white dwarf stars consist of a degenerate core, with subsequent outer layers of different chemical elements. the transition regions between these layers create spikes in the brunt-väisälä frequency. the pulsation modes prefer to place their radial nodes in these transition regions and to be standing waves on both sides of the nodes. this modifies their frequencies compared to the case of a homogeneous interior structure and gives rise to the observed deviations from equal period spacing.

the observant reader will have noticed three apparent inconsistencies in the previous paragraphs. how can two stars with similar oscillation frequencies be high-overtone p and g mode pulsators, respectively? how does one know that 106.2 μHz is the mean frequency spacing for α cen a, and not half this value? why can the claimed small frequency separation not be the effect of rotation? this is because some additional constraints are available that help in the interpretation of pulsational mode spectra. as the p mode pulsation periods depend on the sound travel time through the star, they must be related to its size, or more precisely, to its mean density. the _ pulsation constant _

$$ Q = P \sqrt{\frac{\bar{\rho}}{\bar{\rho}_\odot}} , \qquad (11) $$

where P is the pulsation period, ρ̄ is the mean stellar density and ρ̄_⊙ that of the sun, is a useful indicator of what type of mode one sees in a given star. over the whole hr diagram, the q value for the radial fundamental mode varies only little; for the sun, it is 0.033 d ( 0.8 hr ). as mentioned before, radial modes can only be p modes, ( pure ) p modes always have frequencies of the same order as or larger than that of the radial fundamental, and ( pure ) g modes always have frequencies lower than that.
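the q diagnostic can be turned into a couple of lines of code. the sketch below ( python ) computes q from a period and a mean density and applies the rough rule just stated; the density values and the factor-of-two threshold are illustrative assumptions, and it anticipates the comparison made in the next paragraph.

```python
import numpy as np

RHO_SUN = 1.408   # mean solar density, g/cm^3

def pulsation_constant(period_days, mean_density):
    """pulsation constant Q = P * sqrt(rho_bar / rho_sun), in days (eq. 11)."""
    return period_days * np.sqrt(mean_density / RHO_SUN)

def rough_mode_type(period_days, mean_density, q_fundamental=0.033):
    """very rough classification: Q well above the radial-fundamental value
    suggests g modes, Q at or below it suggests p modes."""
    q = pulsation_constant(period_days, mean_density)
    return ("g-mode-like" if q > 2 * q_fundamental else "p-mode-like"), q

# illustrative examples: the same ~8-minute period in a white dwarf and a sun-like star
print(rough_mode_type(period_days=500 / 86400.0, mean_density=1.0e6))  # dense white dwarf
print(rough_mode_type(period_days=500 / 86400.0, mean_density=1.4))    # sun-like star
```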
because gd 358 is a white dwarf star and therefore has a high mean density, its radial fundamental mode period would be of the order of 4 s. on the other hand, α cen a, a little more massive and more evolved than our sun, has a radial fundamental mode period of about 1 hr. therefore, the similar 5 - 10 minute pulsation periods of the two stars correspond to completely different types of mode. the asymptotic p mode frequency separation also relates to the stellar mean density, Δν ∝ √ρ̄, with the constant of proportionality fixed, for example, by the solar value of about 135 μHz. given the knowledge of the mass and evolutionary state of α cen a, it can immediately be inferred that 106.2 μHz must be the large frequency spacing. finally, the rotation period of α cen a is much too long to generate m-mode splitting as large as the observed frequency separations.

apart from these examples, there is another clue towards the nature of observed pulsation modes just from the observed frequency spectra. radial modes are global oscillations that may reveal themselves because the period ratios of consecutive overtones are well known. on the main sequence, the period ratio of the radial fundamental mode to the first overtone is around 0.773, and the period ratio of the first to the second overtone radial mode is 0.810. for more evolved stars, such as the cepheids, these ratios change to 0.705 and 0.805, respectively. observed period ratios close to these values are therefore suggestive of radial modes.

it is now clear that besides the pulsational mode spectra of stars themselves, the incorporation of other constraints is useful for their interpretation. this is particularly important for oscillation spectra that do not show obvious imprints of the underlying modes, unlike those of stars pulsating in high radial overtones. examples are pulsations of low radial overtone, or stars rotating so rapidly that the rotational splitting is of the same order as the frequency spacing of consecutive radial overtones of the same ℓ, or of modes of the same overtone but different ℓ.
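the period-ratio test mentioned above is easily automated. the sketch below ( python ) checks all period pairs in a list against the quoted main sequence ratios; the tolerance and the example periods are illustrative assumptions.

```python
from itertools import combinations

# characteristic radial period ratios on the main sequence (from the text)
RADIAL_RATIOS = {"P1/P0": 0.773, "P2/P1": 0.810}

def radial_ratio_candidates(periods, tol=0.005):
    """return period pairs whose ratio lies within tol of a known radial ratio."""
    hits = []
    for p_long, p_short in combinations(sorted(periods, reverse=True), 2):
        ratio = p_short / p_long
        for name, ref in RADIAL_RATIOS.items():
            if abs(ratio - ref) < tol:
                hits.append((name, p_long, p_short, ratio))
    return hits

# illustrative period list (days): two radial modes plus an unrelated one
periods = [0.180, 0.139, 0.091]
print(radial_ratio_candidates(periods))   # 0.139/0.180 ~ 0.772 -> fundamental + first overtone
```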
in practice, most observed stellar oscillation spectra are incomplete, either because the star chooses so or because the observations do not have sufficient sensitivity, which makes it even more difficult to recognize patterns in the observed frequencies or periods and to type the modes accordingly. therefore, _ mode identification methods _ have been developed, of which a variety is available.

the first method uses photometric data only. in the linear regime, the flux change for nonradial pulsation can be expressed ( watson 1988 ) as a sum of several terms: the local temperature change on the surface, the temperature-dependent limb darkening variation, the local geometry change on the stellar surface, the local surface pressure change, and the gravity-dependent limb darkening variation. the time- and wavelength-dependent magnitude variation of an oscillation is written in terms of an amplitude parameter transformed from fluxes to magnitudes, the associated legendre polynomial, the cosine of the inclination of the stellar pulsation axis with respect to the observer, the angular pulsation frequency, the time, and the phase lag between the changes in temperature and local geometry ( the latter mostly originating in convection zones near the stellar surface ). these terms can be determined for different types of pulsator from theoretical model atmospheres, and the observables best suited to reveal the types of mode present can also be deduced. these would for instance be the photometric amplitude ratios or phase shifts between different filter passbands, but also the optimal passbands themselves. an example of such a mode identification is shown in fig. 9.

[ fig. 9 — mode identification for the β cephei star 12 lacertae from multicolour photometry ( taken from handler et al. ): observed amplitudes, normalized to unity in the ultraviolet, are compared with theoretical predictions for different spherical degrees ℓ, shown with different line styles. ]

from photometry alone, only the spherical degree of a given pulsation mode can be identified. the method can be supported by adding radial velocity measurements, which increases its sensitivity ( daszynska-daszkiewicz, dziembowski, & pamyatnykh 2005 ), but still does not supply a determination of the azimuthal order. to this end, high-resolution spectroscopy must be invoked. the lines in a stellar spectrum are broadened by rotation through the doppler effect: the intrinsic line profile is blueshifted on the parts of the stellar surface approaching the observer, and redshifted on the areas moving away. the effect is strongest on the stellar limb and decreases towards the centre.
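returning briefly to the photometric identification method described above: observationally it amounts to measuring, for each pulsation frequency, the amplitude ratio and phase difference between two passbands. a minimal sketch of that measurement step ( python, numpy only ); the passband labels, the frequency and the synthetic light curves are illustrative assumptions.

```python
import numpy as np

def sine_fit(t, mag, freq):
    """least-squares amplitude and phase of a sinusoid with fixed frequency."""
    A = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t)])
    coef, *_ = np.linalg.lstsq(A, mag, rcond=None)
    return np.hypot(coef[1], coef[2]), np.arctan2(coef[2], coef[1])

def amplitude_ratio_phase_shift(t, mag_u, mag_v, freq):
    """amplitude ratio A_v/A_u and phase difference (radians) between two filters."""
    a_u, ph_u = sine_fit(t, mag_u, freq)
    a_v, ph_v = sine_fit(t, mag_v, freq)
    return a_v / a_u, ph_v - ph_u

# synthetic two-filter light curves of one mode at 5.49 cycles/day
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 400))
freq = 5.49
mag_u = 0.020 * np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.002, t.size)
mag_v = 0.015 * np.sin(2 * np.pi * freq * t + 0.05) + rng.normal(0, 0.002, t.size)
print(amplitude_ratio_phase_shift(t, mag_u, mag_v, freq))   # ~ (0.75, 0.05)
```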
as a consequence, a rotationally broadened line profile contains spatial information about the stellar surface. for instance, in a dark starspot flux is missing, and a spike will move through the line profile as the spot rotates across the visible disk. the method of reconstructing stellar surface structure from line profile variations is called _ doppler imaging _ ( vogt & penrod 1983 ). apart from radial velocity changes, stellar oscillations also cause variations in rotationally broadened line profiles. the areas on the stellar surface that have an additional approaching component to their motion with respect to the observer have their contribution to the line profile blueshifted, whereas the receding parts are redshifted by the corresponding amount of pulsational doppler velocity. the net result of all these motions is bumps travelling through the line profile, and their shapes are governed by the oscillation mode causing them ( e.g., see telting 2003 for a review ). by examining stellar line profiles, pulsation modes of much higher spherical degree can be observed and identified, a vast extension in ℓ compared to photometric and radial velocity techniques. some examples of pulsational line profile variations are shown in fig. 10.

[ fig. 10 — examples of pulsational line profile variations: each mode generates a different distortion of the line profile. adapted from telting & schrijvers ( 1997 ). ]

the task now is to extract the correct values of ℓ and m from the observed variations. the principle is to fit the theoretically calculated 3-d velocity field to the observed line profiles, and a wide range of spectroscopic mode identification methods is available. some of the most commonly used are the moment method ( most suitable for low ℓ ), the pixel-by-pixel method, the fourier parameter fit method, or doppler reconstruction of the surface distortions. high-resolution spectroscopy is better suited to the determination of m than of ℓ, which makes it complementary to photometric and radial velocity methods.

when resorting to photometric or spectroscopic methods, it is not required to arrive at unique identifications of all observed pulsation modes in each given star. what is needed is the secure identification of a sufficient number of modes to rule out all possible alternative interpretations. an example will be presented later. the observed pulsation frequencies and their identifications are then matched to theoretical models ( see kawaler, this volume, for details ).
these would ideally be full evolutionary models with pulsation codes operating on them, although a few codes based on envelope models are still in use. most pulsation codes use the _ linear approximation _: the oscillations are treated as linear perturbations around the equilibrium state. this allows the evaluation of the excitation of oscillation modes and thus the computation of theoretical domains of pulsation in the hr diagram. nonlinear computations, which would allow predictions of oscillation amplitudes, are still rather the exception than the rule because they are, even today, expensive in terms of computing time, as are numerical hydrodynamical simulations. many stellar pulsation codes employ the _ adiabatic approximation _ ( sometimes called the isentropic approximation ) to compute oscillation frequencies, which is the assumption that no energy exchange between the oscillating element and its surroundings takes place. other codes perform nonadiabatic frequency computations. a wide variety of stellar oscillation codes is available, and most theory groups use their own routines, optimized for application to their objects of main interest.

there are several strategies to find seismic models from observations. some compare the observed oscillation frequencies with those of a grid of stellar models and perform automatic matching between them. this is computationally expensive, which means that supercomputers or parallel processing are invoked, or that intelligent optimization methods such as genetic algorithms are employed, or both. other strategies start by first imposing observational constraints. besides the observed oscillation frequencies themselves and the identification of the underlying pulsation modes, these would often be estimates of the objects' positions in the hr diagram. as an example, these are shown for the β cephei star ν eridani in fig. 11.

[ fig. 11 — left: frequency spectrum of ν eridani; the numbers on top of each mode ( group ) are their identifications, consistent in photometry and spectroscopy ( de ridder et al. ). right: the star's position in the theoretical hr diagram ( star symbol ) with its error bars, and lines of equal mean density for the observed radial mode period assuming it is the fundamental, first or second overtone, respectively ( thick lines ); some model evolutionary tracks labelled with their masses, the zero-age main sequence, and the borders of the β cephei ( dashed-dotted line ) and spb star ( dashed lines ) instability strips are also shown. ]

the detection of a radial mode in the frequency spectrum is an asset for asteroseismic studies of this star: there are only three possibilities for the value of its mean density. these depend on whether its frequency corresponds to the fundamental, the first or the second radial overtone ( eq. 11 ). a comparison with the observed position of the star in the hr diagram ( right panel of fig. 11 ) and its error bars leads to the rejection of the second overtone hypothesis, and to the elimination of models with masses below 8.5 M_⊙. now the ℓ = 1 modes come into play: moving along the lines of constant mean density in the hr diagram, a comparison between their observed and theoretically predicted frequencies can be made. this is done on the left side of fig. 12.
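the three mean-density possibilities just mentioned follow directly from the pulsation constant ( eq. 11 ). a minimal sketch ( python ); the q values adopted for the radial fundamental, first and second overtones are rough illustrative numbers, not those of the original ν eridani analysis, and the input period is likewise an assumption.

```python
RHO_SUN = 1.408                       # g/cm^3, mean solar density
Q_RADIAL = {"fundamental": 0.033,     # rough pulsation constants in days
            "first overtone": 0.025,
            "second overtone": 0.020}

def mean_density_from_radial(period_days):
    """mean density (g/cm^3) implied by a radial-mode period for each assumed
    overtone, via eq. 11 rearranged to rho_bar = rho_sun * (Q / P)^2."""
    return {name: RHO_SUN * (q / period_days) ** 2
            for name, q in Q_RADIAL.items()}

# illustrative beta-cephei-like radial mode period of about 4.6 hours
for name, rho in mean_density_from_radial(0.19).items():
    print(f"{name:16s} -> mean density {rho:.4f} g/cm^3")
```

each of the three printed densities corresponds to one thick line in the right panel of fig. 11; comparison with the star's position in the hr diagram then eliminates the hypotheses that are incompatible with it.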
given the uncertainties and assumptions in the asteroseismic model construction, all modes are reproduced by models within a narrow mass range that have the radial mode as the fundamental. however, the frequency of the highest-overtone dipole mode can not be explained within the errors for models that assume the first overtone for the radial mode. this interpretation can therefore also be rejected. the modal assignment to the observed frequencies is now unambiguous, and the range of models to be explored for seismic fitting is severely reduced. the reason why this way of mode identification was successful is that the g1 mode and its respective p mode neighbour are in the process of avoided crossing in the parameter space of interest. small changes in the evolutionary states of the models, depending on mass, therefore lead to significant changes in their frequencies. one of the observed modes has been excluded from the mode identification procedure, as all models in the parameter domain under consideration reproduce its frequency correctly: it is nearly a pure p mode and its frequency hardly changes in models of the same mean density.

[ fig. 12 — left: modes of ν eri for models of different mass but the same mean density; the full horizontal lines are the observed frequencies, the dotted lines are theoretical model frequencies. upper panel: assuming that the radial mode is the fundamental; the vertical dashed-dotted lines show the mass range in which the first g mode and the first p mode fit the observed frequencies. lower panel: the same, but assuming the radial mode is the first overtone; here it would be the first g mode and the second p mode that give an acceptable fit, but the observed mode with the highest frequency is not compatible with the more massive models. right: a comparison of the rotational splittings of a rigidly rotating model, with the rotation rate chosen to fit one observed triplet, with the observations; the observed splitting of the other triplet is not reproduced under this assumption, demonstrating the presence of differential interior rotation. ]

in this restricted parameter space, an interesting observation can be made: the observed m-mode splittings do not agree with those predicted by uniformly rotating models ( fig. 12, right-hand side ). fitting the rotational splitting of the g1 mode, which samples the deep stellar interior, results in a predicted splitting about 30% larger than that observed for the p1 mode, which is concentrated closer to the surface. this means that the star's rotation rate increases towards its interior, as predicted by theory ( talon et al. ). in addition, as the frequency of the g1 mode is sensitive to the size of the convective core ( it is the core-sensitive mode discussed earlier! ), a constraint on the convective core overshooting parameter was also obtained ( pamyatnykh, handler, & dziembowski 2004 ). this example shows the potential of asteroseismology, even if only a few pulsation modes are available. radial pulsations are most valuable in the identification process, as they immediately provide accurate constraints on the stellar mean density that can become unambiguous if the radial overtone of the mode can be inferred, supported by other estimates such as effective temperature and luminosity. theory and observations work hand in hand. once a unique identification of the normal modes is achieved, firm results on stellar structure can be obtained, even given uncertainties in the modelling procedure.
in the case of eridani , the most severe problem is that the mode is hard to be excited and accurately matched in frequency given the observed effective temperature of the star and its chemical surface composition . only changes to the input physics would solve this problem .this is just the goal of asteroseismology : to improve our knowledge about stellar physics .such an improvement can only be achieved using observational results that present models can not account for !the first pulsators that could be studied asteroseismically were white dwarf and pre - white dwarf stars , pulsating in high - overtone low - degree g modes .a remarkable initial result was the theoretical prediction of a then new class of pulsating white dwarfs of spectral type db , and their subsequent observational confirmation ( winget et al .1982 ) . upon the realization that some pulsating white dwarf stars have very complicated frequency spectra , andhaving a well - developed theory of white dwarf pulsation available , the main obstacle to extracting correct mode frequencies from observational data was the daily interruptions of the measurements by sunlight .figure 13 explains why .the interruptions cause ambiguities in the determination of the frequencies of the signals present in the data : a frequency different by one cycle per sidereal day from the real oscillation frequency would generate a fit of comparable quality . in the presence of complicated variability and most notably for signals of low signal - to - noisethis could lead to erroneous frequency determinations .any seismic model based on incorrect observational input is misleading .the solution to this problem is to avoid , or at least minimize , daytime gaps in time resolved measurements .this can be accomplished by concerted observational efforts , involving interested colleagues over the whole globe , passing on the asteroseismic target from one observatory to the next .the best known of these collaborations is the whole earth telescope ( wet , nather et al .1990 ) , invented for the study of pulsating white dwarf stars .one of the first wet runs was devoted to the prototypical pulsating pre - white dwarf star pg 1159 - 035 = gw vir .it resulted in the detection of over 100 g - mode frequencies of and 2 modes of high radial overtone , leading to precise determinations of the stellar mass and rotation period , an asteroseismic detection of compositional stratification , and an upper limit to the magnetic field strength ( winget et al .subsequent wet observations of the prototypical pulsating db white dwarf star gd 358 = v777 her showed a mode spectrum dominated by high - overtone g modes ( cf .8) , resulting in determinations of its total and surface helium layer mass , luminosity and rotation rate ( winget et al .1994 ) .the evolution of white dwarf stars is dominated by cooling , at ( nearly ) constant radius . 
as they cool , they pass through a number of instability strips .evidence is that all white dwarf stars located in such instability domains in the hr diagram do pulsate .this has an important consequence : the interior structures of the pulsators must be representative of all white dwarf stars .thus asteroseismic results for white dwarf pulsators can be extended to the stars that do not oscillate .cooling of pulsating white dwarf stars changes their oscillation periods , and the rate of period change is directly related to their energy loss .the hottest db pulsators have reasonably high neutrino emission rates , and their evolutionary period changes may be able to tell us whether their neutrino emission is compatible with the standard model of particle physics ( winget et al .2004 ) . measurements to detect such a period change are ongoing . on the other hand , the period changes of da pulsators ( some of which are the most stable optical clocks in the universe , kepler et al .2005 ) could reveal the masses of axions , if the latter existed ( kim , montgomery , & winget 2006 ) .asteroseismology of white dwarf pulsators does not only allow to detect stratification in their chemical profiles near the surface , it also gives evidence of their core composition .this , in turn , is dependent on their history of evolution on the asymptotic giant branch ( agb ) , and can be used to obtain constraints on the nuclear reaction rates in agb stars .present results ( metcalfe 2005 ) indicate consistency with evolutionary models . as white dwarf stars cool ,their cores become crystallized . being composed mainly of carbon and oxygen , such corescan be seen as giant diamonds !massive white dwarf stars begin to crystallize when still in the dav ( zz ceti ) instability strip , and asteroseismic investigations of one such massive pulsator have proven substantial crystallization in its interior ( metcalfe , montgomery & kanaan 2004 , brassard & fontaine 2005 ) .the light curve shapes of pulsating white dwarf stars of spectral types db and da are often nonsinusoidal .the nonlinearities originate in their convection zones , that can not instantly adjust to the pulsational variations ( the g mode pulsations of pulsating white dwarf stars are almost exclusively due to temperature changes ; robinson , kepler , & nather 1982 ) .as the light curve shapes of such pulsators depend on the thermal response time of the convection zone , the latter parameter can be determined from nonlinear light curve fits ( montgomery 2005 ) . as a final example , there are pulsating white dwarf stars in mass accreting close binary systems .if the mass transfer rate is in a certain range , the surface temperature of the accreting white dwarf places it in an instability strip .a handful of such oscillators is known to date ( mukadam et al .2007 ) , but attempts at asteroseismology have proven difficult due to low amplitudes and unstable mode spectra . among all classes of pulsating star ( aside from the sun itself ) , asteroseismology of pulsating white dwarf starsis certainly in the most advanced state .a new class of such oscillators has recently been proposed , hot dq stars with carbon - dominated atmospheres and temperatures similar to that of the db pulsators ( montgomery et al .2007 , dufour et al . 
2009 ) .there is little doubt remaining that the variability of these stars is due to pulsation .theory predicts yet another new type of white dwarf oscillator , da white dwarf stars somewhat hotter than the db pulsators , but observational searches for them have so far been inconclusive ( kurtz et al .we refer to montgomery ( 2009 ) , and references therein , for more information on asteroseismology of pulsating white dwarf stars . at about the same time when the necessity for worldwide observing efforts for pulsating white dwarf stars was realized, the same conclusion was reached for scuti stars , some of which also exhibit complex oscillation spectra composed of g , p and mixed modes of low overtone .the delta scuti network , founded over 25 years ago , was the first multisite observing collaboration for these stars , followed by a number of others such as stephi or stacc ; a few wet runs were also devoted to scuti stars .the asteroseismic potential of scuti stars is enormous , but could so far not be fully exploited .part of the reason is visible in fig . 4 :the pure p mode spectrum on the zams mainly allows a determination of the stellar mean density . when the scientifically more interesting mixed modes appear , the frequency spectrum is fairly dense and requires a large amount of data and long time base to be resolved observationally .however , the real stars seldom co - operate in showing many of the potentially excited modes at observable amplitude , inhibiting mode identification by pattern recognition .in addition , mode amplitudes are often small , counteracting reliable identifications of many modes by applying the methods discussed before .even though dozens or even hundreds of pulsation modes have been detected in some scuti stars , little has been learnt on their interior structures from asteroseismology so far .pulsational amplitude limitation of scuti stars is a major problem for theory : what makes the star excite only certain modes , which modes would these be and what determines their amplitudes ?observational evidence suggests that in evolved stars mode trapping is ( part of ) the answer , and that oscillations with frequencies around those of radial modes are preferentially excited ( breger , lenz , & pamyatnykh , 2009 ) . 
for slowly rotating scuti stars gravitational settling and radiative levitationgive rise to chemical surface peculiarities and are believed to deplete the pulsational driving regions .consequently , am and ap stars are not expected to pulsate , although a few of them do ( kurtz 2000 ) .this also means that many scuti stars rotate rapidly , which requires special calculations to extract information from their distorted pulsation modes , a field that has made considerable progress in the recent past ( reese et al .2009 ) .some pre - main sequence stars cross the scuti instability strip on their way to the zams , and consequently pulsate .the interior structures of these stars are fairly simple , which may make them more accessible to asteroseismic investigation compared to their main sequence counterparts .the oscillation spectra of pre - main sequence and main sequence scuti stars in the same position in the hr diagram are predicted to be different , which may allow an asteroseismic separation ( suran et al .2001 ) .these two classes of high - overtone g mode pulsator , although well separated in effective temperature , share most of their asteroseismic characteristics .they also share the problems with respect to observations , caused by their long periods : resolving their oscillation spectra requires measurements over a long time baseline , possibly many years .it is therefore no surprise that most of the known members of these two groups of pulsator were discovered with the hipparcos ( high precision parallax collecting satellite ) mission from its data set spanning over three years ( waelkens et al .1998 , handler 1999 ) . as in pulsating white dwarf stars ,effects of inhomogeneities in stellar structure would manifest themselves in mode trapping and thus in oscillatory behaviour in the g mode periods ( miglio et al .the dominant inhomogeneity is the change in mean molecular weight at the edge of the convective core , whose size can be measured .this is a method alternative to measuring the frequency of the mode in p mode pulsators .several of the spb and doradus stars rotate with periods comparable to their oscillation periods .this , again , calls for models that take rotation into account with a more sophisticated approach than perturbation theory ; the corresponding work is in progress .the doradus stars are located in a domain where the influence of convection on the pulsations is significant ; convection is also responsible for the red edge of the scuti instability domain .modelling with a time dependent convection approach allowed dupret et al .( 2005 ) to reproduce the observed boundaries of these instability strips , and also to perform predictions of mode excitation in these stars .the situation for the spb and doradus stars with respect to asteroseismology is therefore the same as for scuti stars : the basic theory is in place , the difficulty remains to find stars permitting the extraction of the required information from the observations .the cephei stars are massive ( ) early - b main sequence stars that oscillate radially and nonradially in p , g and mixed modes of low radial overtone .this is roughly the same type of modes as excited in the scuti stars , but asteroseismology has been more successful for cephei stars in the recent past due to several reasons .the observed frequency spectra are simple enough to provide initial clues for mode identification , yet complicated enough to reveal information about the stars interior structures .photometric and spectroscopic 
mode identification methods ( and combinations of both ) could be applied successfully to some cephei stars ( e.g. , de ridder et al .2004 ) , additionally aided by the large radial velocity to light amplitude ratios ( of the order of several hundreds km / s / mag ) of their pulsation modes .sufficient information for unique identifications of all modes was obtained ; an example was shown earlier .asteroseismic modelling was also eased for cephei stars , as radial modes have sometimes been identified ; an example was shown earlier .this immediately reduces the parameter space in which a seismic model must be sought by one dimension .due to the evolutionary state of cephei stars ( near the centre of the main sequence ) , a few of the observed nonradial modes are of mixed p / g type , which has provided information about the convective core size and/or differential interior rotation in a number of stars ( e.g. , aerts et al .2003 , pamyatnykh et al .2004 ) .apart from learning about stellar interiors , asteroseismology of cephei stars has interesting astrophysical implications . given their high masses , they are progenitors of supernovae of type ii which are largely responsible for the chemical enrichment of galaxies .the evolution of massive stars is strongly affected by rotational mixing and angular momentum transport ( maeder & meynet 2000 ) ; their internal rotation profile is testable by asteroseismology of cephei stars .however , the field has not yet matured to a point where we can claim satisfactory understanding of all aspects of the physics governing the interior structures of massive main sequence stars .several questions may be answered by seismic sounding of cephei stars : how strong is differential interior rotation ? how efficient is internal angular momentum transport ( townsend 2009 ) ?how strong is convective core overshooting ? 
can all stars between be modelled with the same convective overshooting parameter ?there are additional questions related to the pulsation physics of cephei stars that need to be addressed .only the centre of the theoretically predicted cephei instability strip is populated by observed pulsators .is this an observational shortcoming or a weakness of theory ?what is the upper mass limit of the cephei stars ?are there post - main sequence cep stars , contrary to theoretical predictions ?five different observables strongly depend on the opacities and element mixtures used for theoretical modelling of cephei stars : the radial fundamental to first overtone period ratio , the excited range of pulsation modes , the frequencies of p modes with radial overtones larger than two , the dependence of bolometric flux amplitude on the surface displacement ( daszyska - daszkiewicz & walczak 2009 ) and in case of a hybrid " pulsator ( see below ) the excited range of g modes .most of these observables are largely independent of each other ; modelling of some stars shows that no standard input opacities and element mixtures can explain the pulsation spectra in detail .therefore , the last question that may be answered from asteroseismology of cephei stars is : where must we improve the input physics for stellar modelling ?some cephei and spb stars have emission - line spectra and are thus be stars .these objects rotate rapidly and have circumstellar disks , occasional outbursts etc .they may also be studied asteroseismically , but their oscillations are hard to be detected and identified , and their rapid rotation requires special theoretical treatment - that is underway . three types of pulsating subdwarf star are known : long - period subdwarf b ( sdb ) stars that oscillate in high - overtone g modes ( v1093 herculis stars ) , short - period sdb stars pulsating in low - overtone p and g modes ( v361 hydrae stars ) , and the only oscillating subdwarf o ( sdo ) star known to date , a low - overtone p mode pulsator . although their g modes would allow the sounding of deep interior regions , their faintness , long periods and low amplitudes made v1093 herculis stars escape asteroseismic study so far .theoretical studies of the sdo pulsator have so far been focused on the problem of mode excitation .therefore , the only subdwarf pulsators that have been asteroseismically modelled are among the v361 hydrae stars .this is no easy undertaking as the problem with mode identification and mostly sparse ( but for a few stars rich and highly variable ) frequency spectra again occurs .it is possible that these objects have steep interior rotation gradients .pulsation models must also be built upon evolutionary models including the effects of gravitational settling and radiative levitation . 
in practice , modelling is carried out by surveying parameter space in effective temperature , gravity , mass and hydrogen mass fraction and by seeking best agreement between observed and theoretically predicted oscillation frequencies .results have been obtained on about a dozen of those stars , and a mass distribution consistent with that expected from a double star evolutionary scenario has been obtained .charpinet et al .( 2009 ) elaborated on many aspects of asteroseismic modelling of pulsating subdwarf stars .the rapidly oscillating ap ( or short : roap ) stars are special among the pulsating stars because their high - overtone p mode oscillations are predominantly governed by a magnetic field that aligns their pulsation axis with the magnetic , and not with the rotation , axis of the stars .the magnetic field also distorts the pulsations modes , so they can no longer be described with a single spherical harmonic . because of their short periods and very low amplitudes ( below 1% )these stars have for many years been studied photometrically only ; an extensive review was given by kurtz & martinez ( 2000 ) .spectroscopic observations of roap stars , however , provided a new level of insight , particularly into atmospheric physics of these pulsators .the reason is that the vertical wavelength of the pulsation modes is shorter than or about the same order of the layer thicknesses of the chemical elements in their atmospheres that are highly stratified by radiative levitation .therefore the radial velocity amplitudes of the oscillations change with line depth and from one chemical species to the other .the chemical elements are also inhomogeneously distributed over the surface , allowing three - dimensional studies of the abundances and pulsational geometry ( kurtz 2009 ) . because of the unique possibilities offered by the atmospheric structure of roap stars , spectroscopy is also much more sensitive in detecting oscillations compared to photometry . as an outstanding example , mkrtichian et al .( 2008 ) detected a complete mode spectrum for przybylski s star over radial orders and performed some initial seismic modelling .the low amplitudes of the stochastically excited oscillations of solar - type stars made their observational detection elusive for a long time .in retrospect , the first detection was made by kjeldsen et al .( 1995 ) , but confirmed only several years later .meanwhile , the observational accuracy has improved to an extent that detections were made in hundreds of stars ( mostly giants ) , and seismic analyses of several were performed .the potential of solar - like oscillations for seismic sounding is large .once detected , the pulsation modes are rather easy to be identified because they are high - overtone p modes ; an example was provided earlier .so are the large and the small frequency separations ( if detectable ) , immediately placing main sequence stars on the ( asteroseismic ) hr diagram .most interesting , as for all pulsators with nearly - asymptotic frequency spectra , are irregularities in the latter , caused by features in the stellar interiors. 
these would for instance be the base of their envelope convection zones , or the helium ionization region .houdek ( 2009 ) gave an overview of the expected seismic signatures of such features and their astrophysical importance .the phenomenon of avoided crossing ( fig .4 ) also takes place in solar - like oscillators once they have reached the subgiant stage , and it makes itself obvious in echelle diagrams ( bedding et al .2007 ) . as for other types of pulsator, this would allow an asteroseismic determination of the convective core size ( for stars sufficiently massive to possess a convective core ) .the limited lifetimes of the intrinsically damped solar - like oscillations enable inferences concerning pulsation mode physics .the observed power spectra at individual mode frequencies show a multitude of peaks whose overall shape would correspond to a lorentzian .the half - widths of these lorentzians yield a determination of the mode damping rates ; the _ mode lifetimes _ are inversely proportional to those .the mode lifetimes are in turn dependent on properties of the surface convection zone .theoretical predictions of the amplitudes of solar - like oscillators are important not only for understanding their physics , but also for planning observational efforts .after years of predictions resulting in amplitudes larger than were observed , the incorporation of the mode lifetimes and subsequent computation of _ mode heights _ appear to result in a scaling law that estimates observed amplitudes well ( chaplin et al .2009 ) , and seems in agreement with measurements up to oscillating giants ( hekker et al .2009 ) .finally , it is worth to note that as all cool stars possess a convective envelope , it can be expected the solar - like oscillations are excited in all of them , up to red supergiants ( kiss et al .2006 ) . some of the pulsational instability strips shown in fig .1 partly overlap .it is therefore logical to suspect that stars that belong to two different classes of pulsating star , having two different sets of pulsational mode spectra excited simultaneously , may exist .indeed , a number of those have been discovered .this is good news for asteroseismology as the information carried by both types of oscillation can be exploited .the confirmed cases have so far always been high - overtone g modes and low - overtone p modes , as evaluations of the pulsation constants of the oscillations show .mixed - mode pulsations by themselves are not `` hybrid '' pulsations because they occupy the same frequency domain as pure p modes. there are scuti/ doradus stars , cephei / spb stars and long / short - period subdwarf b pulsators .the main physical difference between the b - type and a / f - type `` hybrid '' pulsators is that in the first group the same driving mechanism excites both types of oscillation , whereas in the scuti/ doradus stars two main driving mechanisms are at work .the cooler scuti stars and all doradus stars should have thin surface convection zones that would support the excitation of solar - like oscillations ( e.g. , samadi et al . 
2002 ) .ongoing searches for such oscillations have so far remained inconclusive , but there has been a report of solar - like oscillations in a cephei star ( belkacem et al .2009 ) .the revision in our knowledge of the solar chemical element mixture ( asplund et al .2004 ) resulted in the prediction of a much larger overlap region between cephei and spb stars in the hr diagram , and the frequency ranges of the excited long and short period modes in `` hybrid '' b - type pulsators suggest that the heavy - element opacities used for stellar model calculations are still too low ( dziembowski & pamyatnykh 2008 , handler et al .intriguingly , a similar conclusion has been independently obtained from helioseismology ( guzik , keady , & kilcrease 2009 ) .one of the first hybrid pulsators reported in the literature ( hd 209295 , handler et al .2002 ) turned out to have its g mode oscillations tidally excited by a another star in a close eccentric orbit .the changing gravitational influence of the companion gives rise to forced oscillations with frequencies that are integer multiples of the orbital frequency .the tidal deformation of the pulsating star is similar to a sectoral mode , and such modes are therefore most easily excited .there are also several known cases of pulsation in ellipsoidal variables .whereas close binarity represents an additional complication for theoretical asteroseismic studies due to the gravitational distortion of the pulsator , some other cases of binarity can be used to obtain additional constraints for seismic modelling . the fundamental way to determine stellar physical parameters , in particular masses , is the analysis of detached eclipsing binary systems whose components can be assumed to have evolved as if they were single ( torres et al .such constraints are most welcome for asteroseismology and therefore it is self - evident that pulsators be sought for in such binaries . to date , several dozens of such systems are known .most of these would be scuti pulsators , some are cephei stars , but the best studied case is a short - period sdb pulsator .these objects provide another possibility for mode identification ( nather & robinson 1974 ) : throughout the eclipse , different parts of the stellar surface become invisible . in case of nonradial oscillations , only part of the pulsation modesis seen , and the light amplitudes and phases change according to the types of mode . in this way, the oscillation mode can be identified , a method known as _eclipse mapping _ or _ spatial filtration _( e.g. , reed , brondel , & kawaler 2005 ) .another way to gain support for asteroseismic modelling is to study pulsators in stellar clusters . besides the possibility to observe several objects simultaneously ( e.g. via ccd photometry or multifibre spectroscopy ), cluster stars can be assumed to have originated from the same interstellar cloud .therefore , they should be of the same age and chemical composition .these parameters can be well determined from the properties of the cluster itself , and be imposed as a constraint on the seismic modelling procedure of all pulsating cluster members , also known as _ ensemble asteroseismology_. 
most observational results reported in this article so far were based on classical ground - based observing methods , such as single - and multisite photometry and spectroscopy . however , a new era in observational asteroseismology has begun . asteroseismology requires knowledge of as many intrinsic stellar oscillation frequencies as possible , and these often have low amplitude , calling for measurements with the highest accuracy . as this is also a requirement for the search for extrasolar planets , synergies between the two fields have emerged . spectroscopically , asteroseismology has benefited from high - precision radial velocity techniques , such as the iodine cell method , originally invented to find extrasolar planets . measurements of oscillations in distant stars with amplitudes down to 20 cm / s , about one - tenth of human walking speed , have become possible in that way . only about a dozen spectrographs worldwide are capable of reaching the required precision . such observations are therefore still expensive in terms of observing time and complexity of data analysis , but new observing networks , such as the stellar observations network group ( song , http://astro.phys.au.dk/song/ ) , aim at achieving similar precision on a regular basis . siamois ( sismomètre interférentiel à mesurer les oscillations des intérieurs stellaires , mosser et al . 2009 ) is expected to work at the same precision and duty cycle with its single node placed in antarctica . concerning photometry , the main problem of ground - based observations is scintillation , irregular changes in the measured intensity due to anomalous atmospheric refraction caused by turbulent mixing of air with different temperatures . the solution to this problem is to observe stellar variability from space . here the synergy with extrasolar planet research is that the measurements have sufficient precision to detect the signature of transits of planets when they pass in front of their host star . aside from fine - guidance sensor hubble space telescope photometry , the first asteroseismic data from space were due to an accident . the main science mission of the wire ( wide - field infrared explorer ) satellite failed due to loss of cooling fluid for the main instrument , but the star trackers , of 52 mm aperture , were consequently used for time - resolved photometry . an overview of the results can be found in the paper by bruntt & southworth ( 2008 ) ; wire fully ceased operation in october 2006 . the first dedicated asteroseismic space mission that was successfully launched is most ( microvariability and oscillations of stars , http://www.astro.ubc.ca/most/ , walker et al . 2003 , matthews 2006 ) . the spacecraft has been in orbit since june 2003 and still continues to provide asteroseismic data with its 15 - cm telescope as main instrument . one of the most interesting results from most was the discovery of high - overtone g mode oscillations in a supergiant b star ( saio et al .
2006 ) , suggesting an extension of the spb star instability domain to the highest luminosities .the corot ( convection , rotation and transits , baglin 2003 ) mission was successfully launched in december 2006 and hosts a 27-cm telescope .it observes two fields of degrees on the sky each , one devoted to asteroseismology of a few bright stars , and the other searching for planetary transits in many stars , at the same time performing a high - precision stellar variability survey .a special volume of astronomy & astrophysics ( 2009 ) reports some of the early corot science .asteroseismically , the detection of solar - like oscillations in almost a thousand giant stars is remarkable ( hekker et al . 2009 ) .in addition , one of the most intriguing corot results was the detection of large numbers of variability frequencies in two scuti stars .interpreted as independent pulsation modes , these would supply hundreds of oscillations to be asteroseismically modelled , reverting the basic problem for the study of these objects : first , there were too few modes available , now there would be too many !however , doubts have been raised whether all the frequencies extracted from those data would really correspond to normal modes of pulsation , or would rather be a signature of granulation ( kallinger & matthews 2010 ) .originally designed for the search for earth - like planets in the habitable zone , the latest addition to the asteroseismic space fleet became kepler ( http://www.kepler.nasa.gov ) , launched in march 2009 .asteroseismology can measure the radii of solar - like oscillators , among which should be planet - hosting stars , to a relative accuracy of 3% .thus the radii of transiting planets would be known to the same precision .therefore it was decided to devote a small percentage of the observing time of this space telescope with an effective 95-cm aperture to asteroseismology .kepler is the most powerful photometry tool for asteroseismology to date and observes a degree field for three years , practically without interruption , providing a time base considerably longer than all other present missions .initial results of the kepler asteroseismic investigation , based on the first 43 days of science operations , were summarized by gilliland et al .( 2010 ) .a possible shortcoming of all these asteroseismic space missions is that they observe in only one passband .therefore all information available for seismic modelling are the targets oscillation frequencies , unless mode identifications are provided by ground - based support observations .these are often difficult because the larger space telescopes mostly observe faint targets .the brite - constellation ( bright target explorer , http://www.brite-constellation.at ) mission therefore adopts quite a different strategy .this mission consists of three pairs of nanosatellites hosting a 3-cm telescope each that will observe in at least two different passbands ( one per satellite ) , facilitating mode identification with the photometric method .brite - constellation will preferentially observe stars brighter than 5th magnitude and has a large degree field of view .given the brightness of the science targets , mode identification from high - resolution spectroscopy can also easily be done .the first pair of satellites are to be launched in early 2011 . 
finally , plato ( planetary transits and oscillations , http://sci.esa.int/plato ) is a mission designed to provide a full statistical analysis of exoplanetary systems around nearby and bright stars .currently in the definition phase , it will host 28 telescopes of 10 cm aperture that will observe two 557 square degree fields for 2.5 years each .it is intended to observe 100000 stars to a precision of 1 ppm per month and 500000 stars to somewhat poorer accuracy to determine stellar and planetary masses to 1% .asteroseismology is a research field evolving so rapidly that some of the results reported here will already be outdated when this book appears in print .given the 30-year headstart of helioseismology in comparison , the field now is in its teenage years , but matures rapidly .the theoretical basis for asteroseismic studies is laid , although far from being perfect .some of the problems that require solution comprise improved treatment of magnetic fields , convection , internal flows , and fast rotation .it is still poorly known what causes stellar cycles , and what makes certain classes of pulsator select the types of mode they oscillate in . some asteroseismic results point towards a requirement of still higher heavy - element opacities , which current calculations do not seem capable of providing .observationally , pulsating white dwarf stars , cephei stars and solar - like oscillators have been studied asteroseismically , and continue to be. there are high hopes that present asteroseismic space missions will open the scuti , spb , doradus and v1093 herculis stars for interior structure modelling , and further improve the situation for v361 hydrae stars .future high - precision radial velocity networks and sites will improve our knowledge mostly for solar - like oscillators and roap stars , with the latter guiding theory of stellar pulsation under the influence of rotation and magnetic fields .the kepler mission will provide asteroseismic results for solar - like oscillators en masse , and a large number of massive stars are expected to be studied with brite - constellation .the gaia mission ( http://www.rssd.esa.int/gaia ) is expected to provide accurate luminosity determinations for a vast number of asteroseismic targets , tightly constraining the modelling .it is therefore only appropriate to finish with a quote by eyer & mowlavi ( 2008 ) : now is the time to be an asteroseismologist !i am grateful to victoria antoci , tim bedding , gnter houdek and mike montgomery for their comments on this manuscript , as well as to joris de ridder and thomas lebzelter for helpful input .tim bedding and patrick lenz provided some results reproduced here .i apologize to all colleagues whose work was not properly cited here due to a rigorous restriction on the total number of literature sources to be quoted in this article .aerts , c. , 2007 , _ lecture notes on asteroseismology _ , katholieke universiteit leuven charpinet , s. , brassard , p. , fontaine , g. , green , e. m. , van grootel , v. , randall , s. k. , chayer , p. , 2009, in _ stellar pulsation : challenges for observation and theory _ , eds .j. a. guzik & p. a. bradley , aip conference proceedings , vol . 1170 , p. 585
|
asteroseismology is the determination of the interior structures of stars by using their oscillations as seismic waves . simple explanations of the astrophysical background and some basic theoretical considerations needed in this rapidly evolving field are followed by introductions to the most important concepts and methods on the basis of examples . previous and potential applications of asteroseismology are reviewed , and an attempt is made to foresee future trends .
|
in this paper we consider the pattern formation problem in developmental biology . mathematical approaches to this problem start with the seminal work by a. m. turing devoted to pattern formation from a spatially uniform state . turing's model is a system of two reaction - diffusion equations . afterwards , similar phenomenological models were studied in numerous works ( see for a review ) . computer simulations based on this mathematical approach give patterns similar to those actually observed . however , there is no direct evidence of turing's patterning in any developing organism ( , p.347 ) . the mathematical models are often selected to be mathematically tractable , and they do not take into account actual experimental genetic information . moreover , within the framework of the turing - meinhardt approach some important theoretical questions are left open , for example , whether there exist `` universal '' mathematical models and patterning algorithms that allow one to obtain any , even very complicated , patterns . in fact , a difficulty in using simple reaction - diffusion models with polynomial or rational nonlinearities is that we have no patterning algorithms . to obtain a given pattern , first we choose a reasonable model ( often using intuitive ideas ) and later we adjust coefficients or nonlinear terms by numerical experiments ( an excellent example of this approach is given by the book of h. meinhardt on pigmentation in shells ) . to overcome this algorithmic difficulty we use genetic circuit models . we are going to show that they can serve as `` universal models '' , which are capable of generating any spatio - temporal patterns by algorithms . gene circuits were proposed and investigated in many works ( for a review see ) in order to use available genetic information , to take into account some fundamental properties of gene interaction , and to understand mechanisms of cell gene regulation . in this paper we investigate the model from , which is similar to the well studied hopfield neural networks . this model describes activation or depression of one gene by another and has the following form : where is the number of genes included in the circuit , are the concentrations of the -th protein , are the protein decay rates , are some positive coefficients and are the protein diffusion coefficients . we consider ( [ genemodel ] ) in some bounded domain with a boundary . the real number measures the influence of the -th gene on the -th one . the assumption that gene interactions can be expressed by a single real number per pair of genes is a simplification excluding complicated interactions between three , four and more genes . clearly such interactions are possible ; however , in this case the problem becomes mathematically much more complicated . since the pair interaction is capable of producing any patterns , it seems reasonable to restrict our consideration to such interactions only . the parameters are activation thresholds and is a monotone function satisfying the following assumptions ; the well known example is . the functions are other activation thresholds depending on .
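to make the dynamics concrete , the following sketch integrates a small instance of such a gene circuit with an explicit euler / finite - difference scheme . it only illustrates the verbal description above : the decay rates , diffusion coefficients , interaction matrix , sigmoid and space - dependent thresholds are hypothetical choices made for illustration , not values taken from the cited works .

```python
import numpy as np

def sigma(z):
    # monotone activation: sigma(-inf) = 0, sigma(+inf) = 1
    return 1.0 / (1.0 + np.exp(-z))

def simulate_circuit(K, lam, r, d, theta, T=50.0, dt=0.01, nx=100, length=1.0):
    """Explicit Euler / finite-difference integration of
    du_i/dt = -lam_i*u_i + d_i*d2u_i/dx2 + r_i*sigma(sum_j K_ij*u_j - theta_i(x)).
    (The exact placement of the thresholds inside sigma is an assumption.)"""
    m = K.shape[0]
    x = np.linspace(0.0, length, nx)
    dx = x[1] - x[0]
    th = np.stack([theta(xi) for xi in x], axis=1)        # thresholds, shape (m, nx)
    u = np.zeros((m, nx))                                  # initial protein concentrations
    for _ in range(int(T / dt)):
        lap = (np.roll(u, 1, axis=1) - 2.0 * u + np.roll(u, -1, axis=1)) / dx ** 2
        lap[:, 0], lap[:, -1] = lap[:, 1], lap[:, -2]      # crude zero-flux boundaries
        drive = sigma(K @ u - th)
        u = u + dt * (-lam[:, None] * u + d[:, None] * lap + r[:, None] * drive)
    return x, u

# illustrative two-gene circuit with spatially graded ("maternal") thresholds
K = np.array([[2.0, -1.5],
              [1.5,  0.5]])
lam = np.array([1.0, 1.0])
r = np.array([1.0, 1.0])
d = np.array([1e-4, 1e-4])
theta = lambda xi: np.array([xi, 1.0 - xi])
x, u = simulate_circuit(K, lam, r, d, theta)
```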
they can be interpreted as densities of proteins associated with the maternal genes .this model takes into account only three fundamental processes : ( a ) decay of gene products ( the term ) ; ( b ) exchange of gene products between cells ( the term with ) and ( c ) gene regulation and protein synthesis .notice that this model of gene circuit can be considered as a hopfield s neural network with thresholds depending on and where diffusion is taken into account .the hopfield system is the first model of so - called attractor neural network , both fundamental and simple .analytical methods for the hopfield models were developed in .let us fix a function satisfying ( [ sigma1 ] ) , ( [ sigma2 ] ) and functions . on the contrary , we consider and as parameters to be adjusted .we denote the set of these parameters by : model ( [ genemodel ] ) allows to use data on gene regulation ( see , where the least square approximation of experimental data and simulated annealing were used to determine the values of the parameters ) . in order to study ( [ genemodel ] ) , many previous works used numerical simulations . for example , the work is devoted to the segmentation in _ drosophila _ , in the authors analyse complex patterns occurring under a random choice of the coefficients .let us formulate now mathematically our main problem .let and . given a function ] and .for example , we can assume that ] can be performed as well by genetic networks . in other words , the pattern capacity of the gene circuits on bounded time intervalsare not less than the pattern capacity of reaction - diffusion systems . to conclude this section , let us notice that an inverse problem , namely an approximation of a neural network by a reaction - diffusion system has been considered in and .in this section we state an analytical algorithm resolving the following problem : given spatio - temporal pattern , to find a gene circuit generating this pattern . we show that this problem can be solved even without diffusion ( ) . in our approachthe space signalling is provided by space - depending activation thresholds .it is important from the biological point of view since the molecular transport is often performed by non - diffusional mechanisms . for time discrete networks ,similar results were obtained in . beside multilayered network theory ( lemma [ approxlemma ] )we also use the following result .[ superposition theorem ] [ superp ] let us consider a family of gene circuits ( [ genemodel ] ) with the parameters , where the functions are fixed and identical for all the circuits .assume these networks generate the output patterns .then , for any and for any continuous positive function , there is a network ( [ genemodel ] ) generating an output pattern such that this result can be interpreted as a _superposition principle_. if given circuits are capable to produce patterns , for any function there is a new circuit , which can approximate the pattern of the form , in other words , `` superposition by '' of these previous patterns .this result also has interesting biological corollaries ; we discuss it in sect .[ sec - ccl ] .let us describe first the outline of the proof .the proof is based on _modular principle_. 
we suppose that an unknown interaction matrix of the network can be decomposed in blocks .some blocks contain the known matrices corresponding to -th network of given network family .an additional block determines an interaction between new genes and the genes involved in the networks of the family .this structure allows us to apply the approximation results of the multilayered network theory ( see lemma [ approxlemma ] ) .this assumption about the structure of the matrix also is in agreement with contemporary ideas in molecular biology .the proof ( which , by _ modular principle _, is quite straightforward ) can be found in the appendix .since the basic element of the proof of superposition principle is lemma [ approxlemma ] , and the proof of this lemma gives us an algorithm , therefore we obtain a complicated but quite constructive algorithm resolving the patterning problem .moreover , we can estimate the number of the genes involved in patterning process as a function of the pattern complexity defined by ( [ comp ] ) .namely , using the results of the work , we find that depends polynomially on , where is a conditional complexity of respectively given patterns . to explain this relation and its biological meaning , let us consider a simple example .suppose our problem is to construct a periodic one - dimensional pattern , where is a large number .our target pattern therefore is sharply oscillating .moreover , we have no stored ( old ) patterns and thus is proportional to . in this case , to resolve the pattern approximation problem , the network have to involve many genes .assume now that there are old patterns and , in particular , the patterns of the form , where but .in this case the function can be expressed through as a polynom of degree .thus is much less for large and . roughly speaking , a complex target pattern may be simple respectively to another complex pattern .we discuss a biological interpretation of this property in sect .[ sec - ccl ] .using theorem [ superp ] , we can resolve now the pattern programming problem .suppose the functions possess the following property .they can be considered as `` coordinates '' in the domain , i.e. , there exist continuous functions such that this condition holds , for example , if and for each , the function is a strictly monotone function of only one variable .a biological example can be given by the distribution of maternal genes in drosophila .let us prove first an auxiliary mathematical result .[ changevar ] suppose that condition ( [ gy ] ) holds .then any continuous function + can be represented as a function of variables for , where and are two different positive constants . to prove this lemma , let us observe that where is a strictly monotone function of .therefore , can be written as a function of and .then any can be presented as a function of and . using ( [ gy ] ), one obtains that each is a function of the variables and . the lemma is proved .let us formulate the main result of this work .this result means that any patterning process can be realized by a gene circuit .[ main ] suppose that condition ( [ gy ] ) holds .then for any continuous positive ] with , and with , , this equilibrium is unstable with respect to some non - homogeneous perturbations . 
using an initial perturbation on at the left side ( for ] by the map , which is a one - to - one map of .[ und1 ] and [ und2 ] present the output of system ( [ sys1d1])([sys1d3 ] ) approximating the function and , respectively , for ] .we have used sigmoidal functions for these simulations .also we have generated spatio - temporal patterns with space dimensions .the corresponding gene circuit is the time , the spatial coordinates and can be expressed as functions of : and hence , any continuous function can be represented as a function of , which has to be approximated by jones method in order to solve the pattern generation problem . since , and are singular in and , these functions were approximated in the image of the cubic domain \times[x_{1,0},x_{1,1}]\times[x_{2,0},x_{2,1}] ] .this function is independent of time , but time - dependent functions have also been approximated ( it is not shown ) .we have used sigmoidal functions for this simulation .the last point we illustrate is the superposition principle and its relation with the conditional complexity ( see sect .[ sec - prog ] ) .the superposition theorem [ superp ] states that a given network generating a pattern and a given continuous function , one can device a new network generating .the number of the genes involved in this new network depends on the complexity of the target pattern .this complexity can be defined by the fourier transform of the pattern .we define the conditional complexity as the complexity of _ considered as a function of . the point is that can be much less than .so generating through we may use much less genes than generating directly ( or , if the same gene number is involved , a better precision may be achieved ) .we illustrate this fact by generating for ] , positive numbers and , there are a function ] , since any continuous function can be approximated by a smooth function .moreover , if given is a superposition of the form , where are defined by some system of autonomous differential equations , then can also be represented as a superposition : .to finish the proof of theorem [ superp ] , it is sufficient to prove that for any continuous function of the form , where ] . using the monotonicity of and choosing a sufficiently large , we simplify the last estimate and obtain , \label{space - threshold16}\ ] ] where is givenwe are thankful to dr .j. reinitz ( new york ) for fruitful discussion and dr .v. volpert ( lyon ) for the help .the paper was supported by program pics ( cnrs and russia academy of sciences ) .the second author was supported by the grant past ( france ) .anosov , d.v . , grines , v.z . ,arason , s.k . ,plykin , r.v . ,safonov , a.v . :dynamical systems ix : dynamical systems with hyperbolic behavior , volume 66 of encyclopedia of mathematical sciences .new york : springer verlag , 1995 meinhardt , h. : beyond spots and stripes : generation of more complex patterns by modifications and additions of the basic reaction . in : mathematical models for biological pattern formation ( p.k .maini , h.g .othmer , ed . )new york : springer verlag , 2001 , pp .
|
we consider here the morphogenesis ( pattern formation ) problem for some genetic network models . first , we show that any given spatio - temporal pattern can be generated by a genetic network involving a sufficiently large number of genes . moreover , the patterning process can be performed by an effective algorithm . we also show that reaction - diffusion models of turing's or meinhardt's type can be approximated by genetic networks . these results exploit the fundamental fact that the genes form functional units and are organised in blocks ( modular principle ) . due to this modular organisation , the genes are always capable of constructing any new patterns , and even any time sequences of new patterns , from old patterns . computer simulations illustrate the analytical results .
|
recently , _ quantum computing _ has attracted great attention because of its potential capabilities . to realize a quantum algorithm , it is necessary to design the corresponding _ quantum circuit _ to be as small as possible . thus , it should be very important to study quantum circuit design methods even before quantum computing is physically realized . indeed , there has been a great deal of research on quantum circuit design . typical quantum circuit design methods are based on _ matrix decomposition _ since a quantum algorithm is expressed by a matrix . they can treat any kind of quantum circuit , but they cannot treat large ( hence , practical ) size problems since they need to express matrices explicitly and thus they need exponential time and memory . ( note that a matrix for an -bit quantum circuit is , which will be explained later . ) there is a different approach for quantum circuit design . the approach is to focus on quantum circuits calculating only ( classical ) boolean functions , motivated by the following observation : standard quantum algorithms usually consist of two parts , which we call _ common parts _ and _ unique parts _ below . _ common parts _ do not differ for each problem instance . on the other hand , _ unique parts _ differ for each problem instance . for example , grover's search algorithm , one of the famous quantum algorithms , consists of a so - called _ oracle _ part and a remaining part . an _ oracle _ part calculates ( classical ) boolean functions depending on the specification of a given problem instance , while the other part consists of some quantum specific operations and does not change for all the problem instances . the common part is designed once , when a new quantum algorithm is developed ; therefore , we do not need to design the common part for individual problem instances . on the other hand , since the unique part of a quantum algorithm differs for each problem instance , we need to have efficient design and verification methods for that part . since the unique part calculates classical boolean functions , by focusing only on unique parts , we may obtain a design method that handles practical size problems based on ( classical ) logic synthesis techniques , especially reversible logic synthesis techniques . indeed , there has been a great deal of research focusing on quantum circuits that calculate classical boolean functions in the conventional logic synthesis research community . we also focus on this type of quantum circuit in this paper . it should be noted that there are many differences between our target quantum circuits and conventional logic circuits ( as will be explained later ) , although our target quantum circuits calculate only classical boolean functions . this is because we need to implement circuits with quantum specific operations ( as will be explained later ) . therefore , we definitely need quantum specific design and verification methods even for our target quantum circuits . recently , a paper discussed the problem of equivalence checking of _ general _ quantum circuits and quantum states considering the so - called _ phase equivalence _ property of quantum states .
even for quantum circuits calculating_ only _ boolean functions , it should be very important to verify and analyze the functionalities of designed circuits as in the case of classical logic synthesis .for example , we may consider the following situation : one of the possible realizations of quantum computation is considered to be so called a _ linear - nearest - neighbor ( lnn ) _ architecture in which the _ quantum bits ( qubits ) _ are arranged on a line , and only operations to neighboring qubits are allowed .thus , we need to modify a designed quantum circuit so that it uses only gates that operate to two adjacent qubits . in such a case, we may use some complicated transformations by hand , and thus it is very convenient if we have a verification tool to confirm that the original and the modified quantum circuits are functionally equivalent . if we consider only the classical type gates , it is enough to use the conventional verification technique such as binary decision diagrams ( bdds ) for the verification . however , even if we consider quantum circuits calculating only boolean functions , it is known that non - classical ( quantum specific ) gates are useful to reduce the circuit size .thus we need to verify quantum circuits with non - classical gates . in such cases , a classical technique is obviously not enough .as for simulating quantum circuits , efficient techniques using decision diagrams such as quantum information decision diagrams ( quidds ) and quantum multiple - valued decision diagrams ( qmdds ) have been proposed . by using these efficient diagrams , we can express the functionalities of two quantum circuits , and then verify the equivalence of the two circuits .however , they are originally proposed to simulate _general _ quantum circuits , and thus there may be a more efficient method that is suitable for verifying the functionalities of quantum circuits only for boolean functions . * our contribution described in this paper . * considering the above discussion , we introduce a new quantum circuit class : _ semi - classical quantum circuits ( scqcs)_. although scqcs have a restriction , the class of scqcs covers all the quantum circuits ( for calculating a boolean function ) designed by the existing methods . moreover , because of the restriction of scqcs , we can express the functionalities of scqcs very efficiently as in the case of conventional verifications by bdds .for that purpose , we introduce a new decision diagram structure called _ a decision diagram for a matrix function ( ddmf)_. then , we show that the verification method based on ddmfs are much more efficient than the above mentioned methods based on previously known techniques .we provide an analytical comparison between ddmfs and quidds , and reveal the essential difference : ( 1 ) we show that their ability to express the functionality of one quantum gate is essentially the same , but ( 2 ) we also show that our approach based on ddmfs is much more efficient for the verification of scqcs than a method based on quidds . (note that this does not mean that ddmfs are better than quidds : ddmfs are only for scqcs , whereas quidds can treat all kinds of quantum circuits . )moreover , we show by preliminary experiments that ddmfs can be used to verify scqcs of practical size ( 60 inputs and 400 gates ) . 
in order to introduce ddmfs, we also introduce new concepts , _ quantum functions _ and _ matrix functions _ , which may be interesting and useful on their own for designing quantum circuits with quantum specific gates .this section introduces new concepts : scqcs together with quantum functions , matrix functions and ddmfs . before introducing our new concepts ,let us briefly explain the basics of quantum computation . in quantum computation, it is assumed that we can use a _ qubit _ which is an abstract model of a _ quantum state . _a qubit can be described as , where and are two basic states , and and are complex numbers such that .it is convenient to use the following vectors to denote and , respectively : and , thus , can be described as a vector : then , any quantum operation on a qubit can be described as a 2 matrix . by the laws of quantum mechanics ,the matrix must be _ unitary ._ we call such a quantum operation a _ quantum gate_. for example , the operation which transforms and to and , respectively , is called a gate whose matrix representation is as shown in fig .[ fig : matrix ] .in addition to the above gates , we can also use any quantum specific unitary matrix in quantum circuits .for example , _ rotation gates _ denoted by are often used in quantum computation .the matrix for the gates is as shown in fig .[ fig : matrix ] .although the functionality of rotation gates is not classical , they are useful to design quantum circuits even for ( classical ) boolean functions .another quantum specific gate called gate is also utilized to design quantum circuits for boolean functions .the matrix for the gate is as shown in fig .[ fig : matrix ] .this gate has the interesting property that . in the following ,our primitive gates are ( generalized ) _ controlled - u gates _ which are defined as follows : a controlled - u gate has ( possibly many ) positive and negative control bits , and one target bit .it applies a 2 unitary matrix to the target qubit when the states of all the positive control bits are the states and the states of all the negative control bits are the state .a controlled - u gate may not have a control bit .in such a case , it always applies to the target qubit .see an example of a quantum circuit consisting of two controlled- gates in fig .[ fig : circuit1 ] .this circuit has three qubits , , and , each of which corresponds to one line . in quantum circuits ,each gate works one by one from the left to the right . for the first gate ,the target bit is and the symbol means the operation .the positive control bits are and denoted by black circles .this gate performs on only when both and are the state .consider the second gate in the same figure .the white circles denote negative controls , which means the gate performs only when both and are the states .in addition to controlled- gates which are essentially classical gates , we can consider any ( quantum specific ) unitary operation for controlled gates .for example , the functionalities of controlled gates in figs .[ fig : adder ] and [ fig : non - scqc ] are various ( e.g. , not , , , and ) .consider fig .[ fig : circuit1 ] again .this circuit transforms the state of the third bit into , where .( throughout the paper , we use to mean the logical negation of . )thus , we can use this circuit ( as a part of a quantum algorithm ) to calculate the boolean function . 
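the gate matrices and the controlled - gate semantics described above can be checked numerically . the short python / numpy sketch below builds the not and v matrices , verifies the property that v applied twice equals not , and replays the two - gate circuit of fig . [ fig : circuit1 ] on classical control inputs ; the helper function and its interface are our own illustration , not part of the paper .

```python
import numpy as np

I2  = np.eye(2, dtype=complex)
NOT = np.array([[0, 1], [1, 0]], dtype=complex)
V   = 0.5 * np.array([[1 + 1j, 1 - 1j],
                      [1 - 1j, 1 + 1j]])          # a square root of NOT

assert np.allclose(V @ V, NOT)                    # the property V*V = NOT

def apply_controlled_u(register, target, U, pos=(), neg=()):
    """Apply a controlled-U gate when the control qubits hold classical bits.
    `register` mixes classical bits (0/1) and 2-vectors; only the target changes."""
    ket = {0: np.array([1, 0], dtype=complex), 1: np.array([0, 1], dtype=complex)}
    tgt = register[target]
    tgt = ket[tgt] if isinstance(tgt, (int, np.integer)) else tgt
    active = all(register[c] == 1 for c in pos) and all(register[c] == 0 for c in neg)
    return U @ tgt if active else tgt

# replay of the two-gate circuit described above: NOT is applied to the third
# qubit when x1 = x2 = 1 (first gate) or x1 = x2 = 0 (second gate)
for x1 in (0, 1):
    for x2 in (0, 1):
        reg = [x1, x2, np.array([1, 0], dtype=complex)]     # |x1>|x2>|0>
        reg[2] = apply_controlled_u(reg, 2, NOT, pos=(0, 1))
        reg[2] = apply_controlled_u(reg, 2, NOT, neg=(0, 1))
        print(x1, x2, "third qubit |1> amplitude:", reg[2][1])
```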
as mentioned before , although our goal is to construct such a quantum circuit that calculates a boolean function , quantum specific gates ( such as and ) are useful to make the circuit size smaller .for example , the circuit as shown in fig .[ fig : adder ] ( reported in ) utilizes controlled- and controlled- gates to become much smaller than the best one with only classical type gates , i.e. , controlled- gates .( that was confirmed by an essentially exhaustive search . ) in order to characterize such a quantum circuit that calculates a classical boolean function with non - classical gates , we introduce a _ semi - classical quantum circuit ( scqc ) _ whose definition is as follows .a semi - classical quantum circuit ( scqc ) is a quantum circuit consisting of controlled - u gates with the following restriction . * restriction . *if all the initial input quantum states of the circuit are or ( i.e. , just classical values ) , the quantum states of the control qubits of all the gates in the circuit should be or at the time when the gate is being operated .the circuit in fig .[ fig : adder ] is an scqc .this is because the quantum states of the control qubits of all the gates are either or when the gate is being operated if the initial input states , and are either or .it is not trivial to see the condition for the quantum state of the control qubit of the last gate ( i.e. , ) in fig .[ fig : adder ] .however , by using our new concepts ( explained in the next section ) , it is easy to verify that the state is indeed the classical value if the input states of the circuit are classical values .on the contrary , the circuit as shown in fig .[ fig : non - scqc ] is not an scqc .again , by using our new concept it is easily verified that the condition is not satisfied for the quantum state of the control qubit of the last gate ( i.e. , ) in fig .[ fig : non - scqc ] .our motivation to introduce scqcs is based on the following observations . *although scqcs are in a subset of all the possible quantum circuits , quantum circuits ( for calculating a boolean function ) designed by the existing methods are all scqcs to the best of our knowledge .* even in the future , it is very unlikely that we come up with a _ tricky _ design method that produces a non - scqc to calculate ( classical ) boolean functions .the reason is as follows .if the circuit is not an scqc , there is a gate such that the quantum state of its control bit is not a simple classical value ( nor ) . in such a case ,the quantum states of the control bit and the target bit after the gate can not be considered separately : their states are not only non - classical values but also correlated with each other .such a situation is called quantum _ superposition _ and _ entanglement _ .since the whole circuit should calculate a classical boolean function , all of the final output quantum states should be again restored to simple classical values ( i.e. 
, or ) if all the initial input quantum states of the circuit are simple classical values .the reverse operations of creating quantum superposition and entanglement seems to be the only method to restore to a simple classical value .thus , it seems nonsense to consider non - scqc circuits when we consider practical design methods of quantum circuits to calculate boolean functions .* important note : * the restriction of scqcs means that we can not make _ entanglement _ if all the initial input quantum states of an scqc are just classical values .it is well - known that quantum computation without entanglement has no advantage over classical computation .however , this does not mean that scqcs are meaningless by the following reason : as mentioned , an scqc is used as a sub - circuit to calculate a boolean function for some quantum algorithms .thus , in the real situation where an scqc is used as a sub - circuit , the inputs to the scqc are not simple classical values , and so it indeed creates entanglement which should give us the advantage of quantum computation .in other words , the restriction of scqcs in the definition is considered when we suppose the inputs of scqcs are just classical values , which is not a real situation where scqcs are really used .therefore , scqcs should be enough if we consider designing a quantum circuit to calculate a boolean function from the practical point of view .moreover , the restriction of scqcs provides us an efficient method to analyze and verify quantum circuits as we will see in sec .[ sec : veri ] .that is our motivation to introduce the new concept in this paper . before introducing our new representation of the functionalities of scqcs, we need the following definitions . a quantum function with respect to boolean variables is a mapping from to qubit states .see the third bit after the first gate in the circuit in fig .[ fig : adder ] again .if the initial state of is , the resultant state of the third bit can be seen as a quantum function described as in the second column of table [ tb ] .for example , the resultant quantum state becomes when .thus , is defined as as shown in the table . note that a boolean function can be seen as a special case of quantum functions .for example , the third column ( ) of table [ tb ] shows the quantum function of the resultant third qubit after the two gates of the circuit in fig .[ fig : circuit1 ] when the initial state of is .this can be considered as the output of a boolean function when and are considered as boolean values and , respectively .( as mentioned before , the circuit is considered to calculate the boolean function : , which we consider essentially the same as ( ) in table [ tb ] . )the value of a quantum function can always be expressed as , where is a mapping from to 2 unitary matrices .it is convenient to consider instead of itself , thus we introduce the following definition .a matrix function with respect to boolean variables is a mapping from to 2 ( unitary ) matrices .the fourth and the fifth columns of table [ tb ] show the matrix function and for the quantum function and , respectively , in the same table . in this paper , we treat a matrix function whose output values are only or as a classical boolean function by considering that and of the matrix function correspond to and , respectively , of the boolean function . 
in other words , we represent a boolean function by a matrix function as a special case .we define a special type of matrix function called _ constant matrix function _ as follows .a matrix function is called a constant matrix function if are the same for all the assignments to . denotes a constant matrix function that always equals to the matrix .the sixth and the seventh columns of table [ tb ] show the truth tables for constant matrix functions , and , respectively . by using the matrix function in the fourth column of table [ tb ] , we can easily see how the first gate in fig .[ fig : adder ] transforms the third qubit : is transformed to .for example , when , is transformed to .we would like to stress again the following point : the above means that the representation ( and so the analysis ) by matrix functions works even when is any general quantum state .indeed , we can use an scqc even when the input states are not simple classical values , i.e. , the restriction of scqcs does not say that scqcs can not be used when the inputs are not classical .( if so , we may not be able to use an scqc for a part of a quantum algorithm . ) for matrix functions , we introduce two operators `` '' and ` ,' which are used to construct ddmfs for a quantum circuit in the following sections .[ def1 ] let , and be matrix functions with respect to to .then is defined as a matrix function such that where means normal matrix multiplication .let also be a boolean function with respect to to .then is a matrix function which equals to when , and equals to when .note that the operator is defined as asymmetric , i.e. , the first argument should be a boolean function whereas the second argument can be any matrix function .this is due to the restriction of scqcs such that the state of a control bit should be or ( i.e. , just classical value ) whereas the state of a target bit can be any quantum state .see examples in table [ operators ] . 
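a direct , truth - table based implementation of matrix functions and of the two operators defined above may help to fix ideas . the sketch below stores a matrix function as an explicit mapping from assignments to 2x2 matrices ( it does not use the shared ddmf structure introduced next ) , and the multiplication order chosen in mf_mul ( apply the first argument , then the second ) is an assumption , since the exact convention follows the paper's gate - composition order .

```python
import numpy as np
from itertools import product

I2  = np.eye(2, dtype=complex)
NOT = np.array([[0, 1], [1, 0]], dtype=complex)

def const_mf(n, U):
    """Constant matrix function: every assignment of the n variables maps to U."""
    return {bits: U for bits in product((0, 1), repeat=n)}

def mf_mul(F1, F2):
    """Pointwise product of two matrix functions (the first operator above).
    The order 'apply F1 first, then F2' is an assumption about the convention."""
    return {bits: F2[bits] @ F1[bits] for bits in F1}

def mf_control(f, F):
    """The asymmetric operator: equals F(bits) where the Boolean function f is 1,
    and the identity matrix where f is 0."""
    return {bits: (F[bits] if f[bits] else I2) for bits in F}

# example: control function f = x1 AND x2 applied to the constant matrix function NOT
n = 2
f_and = {bits: int(bits[0] and bits[1]) for bits in product((0, 1), repeat=n)}
G = mf_control(f_and, const_mf(n, NOT))
# G maps (1, 1) to NOT and every other assignment to the identity,
# i.e. the matrix function realized by a controlled-NOT with two positive controls
```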
note that if both of and are considered to be boolean functions like in table [ tb ] , the operator corresponds to the exor of the two boolean functions .note also that if is essentially a boolean function like in table [ tb ] , the operator corresponds to the and of the two boolean functions .a matrix function for a quantum function can be expressed efficiently by using an edge - valued binary decision diagram structure , which we call a ddmf whose definition is as follows : a decision diagram for a matrix function ( ddmf ) is a directed acyclic graph with three types of nodes : ( 1 ) a single terminal node corresponding to the identity matrix , ( 2 ) a root node with an incoming edge having a weighted matrix , and ( 3 ) a set of non - terminal ( internal ) nodes .each internal and the root node are associated with a boolean variable , and have two outgoing edges which are called 1-edge ( solid line ) leading to another node ( the 1-child node ) and 0-edge ( dashed line ) leading to another node ( the 0-child node ) .every edge has an associated matrix .the matrix function represented by a node is defined recursively by the following three rules .\(1 ) the matrix function represented by the terminal node is the constant matrix function .\(2 ) the matrix function represented by an internal node ( or the root node ) whose associated variable is is defined as follows : , where and are the matrix functions represented by the 1-child node and the 0-child node , respectively , and and are the matrices of the 1-edge and the 0-edge , respectively .( see an illustration of this structure in fig .[ fig : ddmf1 ] . )\(3 ) the root node has one incoming edge that has a matrix .then the matrix function represented by the whole ddmf is , where is a matrix function represented by the root node . like conventional bdds, we achieve the canonical form for a ddmf if we impose the following restriction on the matrices on all the edges .a ( ddmf ) is canonical when ( 1 ) all the matrices on 0-edges are , ( 2 ) there are no redundant nodes : no node has 0-edge and 1-edge pointing to the same node with as the 1-edge matrix , and ( 3 ) common sub - graphs are shared : there are no two identical sub - graphs .any ddmf can be converted to its canonical form by using the following transformation from the terminal node to the root node : suppose the matrices on incoming edge , 0-edge and 1-edge of a node be , and , respectively .then , if is not , we modify these three matrixes as follows : ( 1 ) the matrix on the incoming edge is changed to be .( 2 ) the matrix on the 1-edge is changed to be .( 3 ) the matrix on the 0-edge is changed to be .it is easily verified that this transformation does not change the matrix function represented by the ddmf .see the example in fig .[ fig : canonicali ] where the matrix on 0-edge of the node is converted to . 
in the example , the matrices on edges are omitted .* note : * the concepts of _ quantum functions _ and _ matrix functions _ may be used implicitly in the design method of , and the decision diagram structure is similar between ddmfs and the quantum decision diagrams used in .however , the quantum decision diagrams in are used to represent conventional boolean functions whereas ddmfs are used for representing matrix functions : the terminal node of a ddmf is a matrix .also a weight on an edge in ddmfs is generalized to any matrix .thus , ddmfs can be considered as a generalization of quantum decision diagrams to treat matrix functions rather than boolean functions .( as we have seen in table [ tb ] , boolean functions can be seen as a special case of quantum functions . )we will use the same operators , and , for ddmfs as for matrix functions : let , and be ddmfs that represent matrix functions , and , respectively .then is defined as a ddmf that represents a matrix function .let also be a ddmf that represents a boolean function .then is defined as a ddmf that represents a matrix function two scqcs in fig . [fig : scqc1 ] and fig .[ fig : scqc2 ] .it is easy to see that their functionalities are the same .however , the problem is how to verify the equality for much larger circuits .thanks to the introduction of ddmfs , we propose a method to verify the equality of given two -qubit scqcs in the following .* we construct a ddmf to represent the matrix function that expresses the functionality for each qubit state at the end of each circuit . * step 2 .* we compare two ddmfs for the corresponding qubits of the two circuits .the comparison of two ddmfs can be done in time as in the case of bdds .1 is performed in a similar manner of constructing bdds to represent each boolean function in a logic circuit : ( 1 ) we first construct a ddmf for each primary input , and then ( 2 ) we pick a gate one by one from the primary inputs , and construct a ddmf for the output function of the gate from ddmfs for the input functions of the gate . the construction of a ddmf from two ddmfs can be done recursively as exactly the same as the construction of a bdd from two bdds . in the below , we use a notation to express the ddmf for the -th quantum qubit state right after the -th gate .we also use a notation to denote the matrix function ( or the boolean function in a special case ) represented by a ddmf .* initialization .* for each input , we construct a as a ddmf for .this is the ddmf for the matrix function ( in fact , essentially a boolean function ) which is when .* construction of the ddmfs right after the -th gate .* from the first gate to the last gate , we construct from as follows .if the -th bit is not the target bit of the -th gate , . if the -th bit is the target bit of the -th gate where is constructed by the following two steps .\(1 ) for the -th gate , let us suppose that the positive control bits be the -th bits , and the negative control bits be the -th bits .then , by the restriction of scqcs , all the matrix functions for are essentially classical boolean functions .( therefore , in the following expression , we treat as boolean functions , and perform logical operations on them . 
) thus we can calculate a logical and of them : . note that this boolean function can be obtained by ddmf operations since a ddmf represents a boolean function in a special case . ( 2 ) we construct for , where is a unitary matrix associated with the -th gate . note that all the ddmf operations in the above should be performed efficiently by using _ apply _ operations and _ operation and node hash tables _ , as in the case of conventional bdd operations . we show an example of ddmfs for the quantum circuit shown in fig . [ fig : scqc1 ] . at the initialization step , we construct ddmfs for the functions , and , which are , and , respectively , as shown in fig . [ fig : spte0 ] . then we construct the ddmfs for the quantum states right after the first gate . since the target bit is the second bit for the first gate , , and . to construct , we first calculate a boolean function . this is because the first bit and the third bit are negative and positive controls , respectively . then we construct for , whose matrix function is shown in table [ ddmf - control ] . finally , we construct whose matrix function is as shown in table [ ddmf-2 ] . the constructed ddmfs after the first gate are shown in fig . [ fig : spte1 ] .
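the whole verification procedure of step 1 and step 2 can be prototyped by brute force over all 2^n input assignments , using the explicit matrix - function representation sketched earlier instead of shared ddmf nodes . the code below is therefore exponential in n and only illustrates the semantics ( it also compares matrices exactly rather than up to phase ) ; the efficiency of the actual method comes from the canonical , shared ddmf structure . the function names and the gate description format are our own .

```python
import numpy as np
from itertools import product

I2  = np.eye(2, dtype=complex)
NOT = np.array([[0, 1], [1, 0]], dtype=complex)

def as_boolean(F):
    """Read a matrix function as a Boolean function; valid only when every value
    is I or NOT, which the SCQC restriction guarantees for control qubits."""
    out = {}
    for bits, M in F.items():
        if np.allclose(M, I2):
            out[bits] = 0
        elif np.allclose(M, NOT):
            out[bits] = 1
        else:
            raise ValueError("control qubit is not classical: not an SCQC")
    return out

def circuit_matrix_functions(n, gates):
    """gates: list of (target, U, positive_controls, negative_controls).
    Returns, for each qubit, its matrix function after the last gate,
    i.e. the mapping x -> M such that the qubit state is M|0>."""
    assigns = list(product((0, 1), repeat=n))
    mf = [{b: (NOT if b[j] else I2) for b in assigns} for j in range(n)]  # inputs |x_j>
    for target, U, pos, neg in gates:
        ctrl = {b: int(all(as_boolean(mf[c])[b] == 1 for c in pos) and
                       all(as_boolean(mf[c])[b] == 0 for c in neg)) for b in assigns}
        mf[target] = {b: ((U if ctrl[b] else I2) @ mf[target][b]) for b in assigns}
    return mf

def equivalent(n, gates_a, gates_b):
    A = circuit_matrix_functions(n, gates_a)
    B = circuit_matrix_functions(n, gates_b)
    return all(np.allclose(A[j][b], B[j][b]) for j in range(n) for b in A[j])

# e.g. the two-gate circuit of fig. [fig:circuit1], compared against itself
gates = [(2, NOT, (0, 1), ()), (2, NOT, (), (0, 1))]
print(equivalent(3, gates, gates))   # True
```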
|
recently much attention has been paid to quantum circuit design to prepare for the future `` quantum computation era . '' like the conventional logic synthesis , it should be important to verify and analyze the functionalities of generated quantum circuits . for that purpose , we propose an efficient verification method for quantum circuits under a practical restriction . thanks to the restriction , we can introduce an efficient verification scheme based on decision diagrams called _ decision diagrams for matrix functions ( ddmfs)_. then , we show analytically the advantages of our approach based on ddmfs over the previous verification techniques . in order to introduce ddmfs , we also introduce new concepts , _ quantum functions _ and _ matrix functions _ , which may also be interesting and useful on their own for designing quantum circuits . quantum circuit , verification , decision diagram
|
the field of channel coding was started with shannon s famous theorem proposed in 1948 , which shows that the channel capacity upper bounds the amount of information that can be reliably transmitted over a noisy communication channel .after this result , seeking for practical coding schemes that could approach channel capacity became a central objective for researchers . on the way from theory to practice , many coding schemes are proposed .different types of codes emerge in improving the performance , giving consideration to the trade - off between coding complexity and error decay rate .the history of channel coding traces back to the era of algebraic coding , including the well - known hamming codes , golay codes , reed - muller codes , reed - solomon codes , lattice codes , and others .however , despite enabling significant advances in code design and construction , algebraic coding did not turn out to be the most promising means to approach the shannon limit .the next era of probabilistic coding considered approaches that involved optimizing code performance as a function of coding complexity .this line of development included convolutional codes , and concatenated codes at earlier times , as well as turbo codes and low - density parity - check ( ldpc ) codes afterwards .recently , polar codes have been proved to achieve shannon limit of binary - input symmetric channels with low encoding and decoding complexity . in another recent study , new types of rateless codes , viz .spinal codes , are proposed to achieve the channel capacity .another well - studied ( and practically valuable ) research direction in information theory is the problem of compression of continuous - valued sources .given the increased importance of voice , video and other multimedia , all of which are typically `` analog '' in nature , the value associated with low - complexity algorithms to compress continuous - valued data is likely to remain significant in the years to come . for discrete - valued `` finite - alphabet '' problems , the associated coding theorem and practically - meaningful coding schemes are well known .trellis based quantizers are the first to achieve the rate distortion trade - off , but with encoding complexity scaling exponentially with the constraint length .later , matsunaga and yamamoto show that a low density parity check ( ldpc ) ensemble , under suitable conditions on the ensemble structure , can achieve the rate distortion bound using an optimal decoder . shows that low density generator matrix ( ldgm ) codes , as the dual of ldpc codes , with suitably irregular degree distributions , empirically perform close to the shannon rate - distortion bound with message - passing algorithms .more recently , polar codes are shown to be the first provably rate distortion limit achieving codes with low complexity . in the case of analog sources , although both practical coding schemes as well as theoretical analysis are very heavily studied , a very limited literature exists that connects the theory with low - complexity codes .the most relevant literature in this context is on lattice compression and its low - density constructions . yet ,this literature is also limited in scope and application .the problem of coding over analog noise channels is highly non - trivial in general . to this end , a method of modulation is commonly utilized to map discrete inputs to analog signals for transmission through the physical channel . 
in this paper , we focus on designing and coding over such mappings .in particular , we propose a new coding scheme for general analog channels with moderate coding complexity based on an expansion technique , where channel noise is perfectly or approximately represented by a set of independent discrete random variables ( see fig .[ fig : expansion_framework ] ) . via this representation, the problem of coding over an analog noise channel is reduced to that of coding over parallel discrete channels .we focus on additive exponential noise ( aen ) , and we show that the shannon limit , i.e. , the capacity , is achievable for aen channel in the high snr regime . more precisely , for any given , it is shown that the gap to capacity is at most when at least number of levels are utilized in the coding scheme together with embedded binary codes .generalizing results to -ary alphabets , we show that this gap can be reduced more .the main advantage of the proposed method lies on its complexity inheritance property , where the encoding and decoding complexity of the proposed schemes follow that of the embedded capacity achieving codes designed for discrete channels , such as polar codes and spinal codes . to .channel noise is considered as its binary expansion , and similar expansions are adopted to channel input and output .carries exist between neighboring levels . ] in the second part of this paper , we present an expansion coding scheme for compressing of analog sources .this is a dual problem to the channel coding case , and we utilize a similar approach where we consider expanding exponential sources into binary sequences , and coding over the resulting set of parallel discrete sources . we show that this scheme s performance can get very close to the rate distortion limit in the low distortion regime ( i.e. , the regime of interest in practice ) .more precisely , the gap between the rate of the proposed scheme and the theoretical limit is shown to be within a constant gap ( ) for any distortion level when at least number of levels are utilized in the coding scheme ( where , is the mean of the underlying exponential source ) . moreover, this expansion coding scheme can be generalized to laplacian sources ( two - sided symmetric exponential distribution ) , where the sign bit is considered separately and encoded perfectly to overcome the difficulty of source value being negative .the rest of paper is organized as follows .related work is investigated and summarized in section [ sec : related_work ] .the expansion coding scheme for channel coding is detailed and evaluated in section [ sec : channel_coding ] . the expansion source coding framework and its application to exponential sourcesare demonstrated in section [ sec : source_coding ] . finally , the paper is concluded in section [ sec : conclusion ] .multilevel coding is a general coding method designed for analog noise channels with a flavor of expansion . in particular ,a lattice partition chain is utilized to represent the channel input , and , together with a shaping technique , the reconstructed codeword is transmitted to the channel .it has been shown that optimal lattices achieving shannon limit exist . however , the encoding and decoding complexity for such codes is high , in general . in the sense of representing the channel input , our scheme is coincident with multilevel coding by choosing , , , for some , where coding of each level is over -ary finite field ( see fig . 
[fig : multilevel_framework ] ) .the difference in the proposed method is that besides representing the channel input in this way , we also `` expand '' the channel noise , such that the coding problem for each level is more suitable to solve by adopting existing discrete coding schemes with moderate coding complexity .moreover , by adapting the underlying codes to channel - dependent variables , such as carries , the shannon limit is shown to be achievable by expansion coding with moderate number of expanded levels . , only channel input is expressed by multi - levels , but not the channel noise . ]the deterministic model , proposed in , is another framework to study analog noise channel coding problems , where the basic idea is to construct an approximate channel for which the transmitted signals are assumed to be noiseless above a certain noise level .this approach has proved to be very effective in analyzing the capacity of networks . in particular , it has been shown that this framework perfectly characterizes degrees of freedom of point - to - point awgn channels , as well as some multi - user channels . in this sense ,our expansion coding scheme can be seen as a generalization of these deterministic approaches . here , the effective noise in the channel is carefully calculated and the system takes advantage of coding over the noisy levels at any snr .this generalized channel approximation approach can be useful in reducing the large gaps reported in the previous works , because the noise approximation in our work is much closer to the actual distribution as compared to that of the deterministic model ( see fig .[ fig : expansion_deterministic_compare ] ) .there have been many attempts to utilize discrete codes for analog channels ( beyond simple modulation methods ) .for example , after the introduction of polar codes , considerable attention has been directed towards utilizing their low complexity property for analog channel coding .a very straightforward approach is to use the central limit theorem , which says that certain combinations of i.i.d .discrete random variables converge to a gaussian distribution .as reported in and , the capacity of awgn channel can be achieved by coding over large number of bscs , however , the convergence rate is linear which limits its application in practice . to this end, proposes a mac based scheme to improve the convergence rate to exponential , at the expense of having a much larger field size . a newly published result in attempts to combine polar codes with multilevel coding, however many aspects of this optimization of polar - coded modulation still remain open . along the direction of this research, we also try to utilize capacity achieving discrete codes to approximately achieve the capacity of analog channels .the additive exponential noise ( aen ) channel is of particular interest as it models worst - case noise given a mean and a non - negativity constraint on noise .in addition , the aen model naturally arises in non - coherent communication settings , and in optical communication scenarios .( we refer to and for an extensive discussion on the aen channel . )verd derived the optimal input distribution and the capacity of the aen channel in .martinez , on the other hand , proposed the pulse energy modulation scheme , which can be seen as a generalization of amplitude modulation for the gaussian channels . 
in this scheme ,the constellation symbols are chosen as , for with a constant , and it is shown that the information rates obtained from this constellation can achieve an energy ( snr ) loss of db ( with the best choice of ) compared to the capacity in the high snr regime . another constellation technique for this coded modulation approach is recently considered in , where log constellations are designed such that the real line is divided into ( ) equally probable intervals . of the centroids of these intervals are chosen as constellation points , and , by a numerical computation of the mutual information , it is shown that these constellations can achieve within a db snr gap in the high snr regime .our approach , which achieves arbitrarily close to the capacity of the channel , outperforms these previously proposed modulation techniques . in the domains of image compression and speech coding , laplacian and exponential distributionsare widely adopted as natural models of correlation between pixels and amplitude of voice .exponential distribution is also fundamental in characterizing continuous - time markov processes .although the rate distortion functions for both have been known for decades , there is still a gap between theory and existing low - complexity coding schemes .the proposed schemes , primarily for the medium to high distortion regime , include the classical scalar and vector quantization schemes , and markov chain monte carlo ( mcmc ) based approach in .however , the understanding of low - complexity coding schemes , especially for the low - distortion regime , remains limited . to this end, our expansion source coding scheme aims to approach the rate distortion limit with practical encoding and decoding complexity . by expanding the sources into independent levels , and using the decomposition property of exponential distribution ,the problem has been remarkably reduced to a set of simpler subproblems , compression for discrete sources .in general , expansion channel coding is a scheme of reducing the problem of coding over an analog channel to coding over a set of discrete channels . in particular , we consider the additive noise channel given by where are channel inputs with alphabet ( possibly having channel input requirements , such as certain moment constraints ) ; are channel outputs ; are additive noises independently and identically distributed with a continuous probability density function ; is block length . we represent the inputs as .( similar notation is used for other variables throughout the sequel . ) when communicating , the transmitter conveys one of the messages , , which is uniformly distributed in ; and it does so by mapping the message to the channel input using encoding function such that .the decoder uses the decoding function to map its channel observations to an estimate of the message .specifically , , where the estimate is denoted by .a rate is said to be achievable , if the average probability of error defined by can be made arbitrarily small for large .the capacity of this channel is denoted by , which is the maximum achievable rate , and its corresponding optimal input distribution is denoted as .our proposed coding scheme is based on the idea that by `` expanding '' the channel noise ( i.e. , representing it by its -ary expansion ) , an approximate channel can be constructed , and proper coding schemes can be adopted to each level in this representation . 
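as a preview of the formal construction that follows, the sketch below shows what a truncated binary (q = 2) expansion of a real-valued sample looks like in practice. the level range from -L1 to L2 and the test value are arbitrary illustrative choices, not parameters from the paper; the point is simply that the reconstruction error of the truncated expansion is below 2^(-L1).

```python
def binary_expand(x, L1, L2):
    """Truncated binary expansion of a real number x.

    Returns (sign, bits), where bits[k] is the digit of weight 2**l for
    l = L2 - k (most significant level first), so that
    sign * sum_l bits_l * 2**l approximates x.  Levels above L2 are assumed
    to carry no weight, i.e. |x| < 2**(L2 + 1).
    """
    sign = -1 if x < 0 else 1
    r = abs(x)
    bits = []
    for l in range(L2, -L1 - 1, -1):        # levels L2, L2-1, ..., -L1
        b = int(r >= 2.0 ** l)
        bits.append(b)
        r -= b * 2.0 ** l
    return sign, bits

def binary_reconstruct(sign, bits, L1, L2):
    levels = range(L2, -L1 - 1, -1)
    return sign * sum(b * 2.0 ** l for b, l in zip(bits, levels))

if __name__ == "__main__":
    x = 5.4375                               # arbitrary test value
    for L1 in (2, 6, 12):
        s, bits = binary_expand(x, L1, L2=4)
        xhat = binary_reconstruct(s, bits, L1, L2=4)
        # the truncation error is bounded by 2**(-L1)
        print(L1, xhat, abs(x - xhat), 2.0 ** (-L1))
```

adding more levels below the binary point shrinks the approximation error geometrically, which is the sense in which the approximate channel can be made arbitrarily close to the original one.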
if the approximation is close enough , then the coding schemes that are optimal for each level can be translated to an effective one for the original channel .more formally , consider the original noise and its approximation , which is defined by the truncated -ary expansion of .for this moment , we simply take ( i.e. , considering binary expansion ) , and leave the general case for later discussion . where represents the sign of , taking a value from ; s are mutually independent bernoulli random variables . by similarly expanding the channel input ,we convert the problem of coding over analog channels to coding over a set of binary discrete channels .this mapping is highly advantageous , as capacity achieving discrete codes can be adopted for coding over the constructed binary channels .assume the input distributions for sign channel and discrete channel at are represented by and correspondingly , then an achievable rate ( via random coding ) for the approximated channel is given by where by adopting the same coding scheme over the original channel , one can achieve a rate given by the following result provides a theoretical basis for expansion coding .( here , denotes convergence in distribution . )[ thm : channel_expansion_coding ] if and , as , where , i.e. , the optimal input distribution for the original channel , then .the proof of this theorem follows from the continuity property of mutual information . in other words , if the approximate channel is close to the original one , and the distribution of the input is close to the optimal input distribution , then the expansion coding scheme will achieve the capacity of the channel under consideration . in this section, we consider an example where expansion channel coding can achieve the capacity of the target channel .the particular channel considered is an additive exponential noise ( aen ) channel , where the channel noise in is independently and identically distributed according to an exponential density with mean , i.e. , omitting the index , noise has the following density : where for and otherwise .moreover , channel input in is restricted to be non - negative and satisfies the mean constraint \leq e_{\mathsf{x}}.\label{equ : aen_input_constraint}\ ] ] the capacity of aen channel is given by , where , and the capacity achieving input distribution is given by where if and only if . here ,the optimal input distribution is not exponentially distributed , but a mixture of an exponential distribution with a delta function .however , we observe that in the high snr regime , the optimal distribution gets closer to an exponential distribution with mean , since the weight of delta function approaches to as snr tends to infinity .the basis of the proposed coding scheme is the expansion of analog random variables to discrete ones , and the exponential distribution emerges as a first candidate due to its decomposition property .we show the following lemma , which allows us to have independent bernoulli random variables in the binary expansion of an exponential random variable .[ lem : exponential_expansion ] let s be independent bernoulli random variables with parameters given by , i.e. , , and consider the random variable defined by then , the random variable is exponentially distributed with mean , i.e. 
, its pdf is given by the exponential density with the corresponding mean, if and only if the parameters of the bernoulli levels are chosen accordingly; the explicit choice and the proof are given in appendix [app:exponential_expansion_proof]. this lemma reveals that one can reconstruct an exponential random variable perfectly from a set of independent bernoulli random variables. fig. [fig:exponential_recovery] illustrates that the distribution of the random variable recovered from the expanded levels (obtained from the statistics of independent samples) is a good approximation of the original exponential distribution. [fig:exponential_recovery caption: samples are generated from the expansion form of discrete random variables, with the expansion levels truncated to a finite range.] a set of typical numerical values of the level parameters is shown in fig. [fig:exponential_expansion_parameter]. it is evident that the parameter approaches 0 for the ``higher'' levels and approaches 1/2 for what we refer to as the ``lower'' levels. hence, the primary non-trivial levels for which coding is meaningful are the so-called ``middle'' ones, which provides the basis for truncating the number of levels to a finite value without a significant loss in performance. [fig:exponential_expansion_parameter caption: the x-axis is the level index for the binary expansion (a level index l means the weight of the corresponding level is 2^l), and the y-axis shows the corresponding probability of the digit taking the value 1 at each level.] we consider the binary expansion of the channel noise, where the levels are i.i.d. bernoulli random variables whose parameters are given by lemma [lem:exponential_expansion]. in this sense, we approximate the exponentially distributed noise perfectly by a set of discrete bernoulli distributed noises. similarly, we also expand the channel input and output, where the input and output levels are also bernoulli random variables with corresponding parameters. here, the channel input is chosen to be zero outside the set of coded levels. noting that the summation in the original channel is a sum over real numbers, we do not have a binary symmetric channel (bsc) at each level. if we could replace the real sum by a modulo-2 sum, so that each level poses an independent coding problem, then any capacity-achieving bsc code could be utilized over this channel. (here, instead of directly using the capacity-achieving input distribution of each level, we can combine it with the method of gallager to achieve a rate corresponding to the mutual information evaluated with a bernoulli input distribution of the desired parameter. this helps to approximate the optimal input distribution of the original channel.) however, due to the addition over real numbers, carries exist between neighboring levels, which further implies that the levels are not independent. every level, except for the lowest one, is impacted by the carry from the lower levels. in order to alleviate this issue, two schemes are proposed in the following to ensure independent operation of the levels. in both models of coding over independent parallel channels, the total achievable rate is the summation of the individual achievable rates over all levels. the carry seen at a level is itself a bernoulli random variable, and the remaining channels can be represented with an effective noise that is the convolution of the actual noise and the carry. here, the carry probability is obtained recursively over the levels: no carry enters the lowest level, and the carry parameter of each higher level is determined by the input digit, the noise digit, and the carry of the level below.
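the two ingredients just described, the bernoulli parameters of the expanded noise and the carries that couple neighboring levels, can be made concrete with a short sketch. two caveats: the logistic form used for the level parameters, 1/(1 + exp(lambda * 2^l)), is the q = 2 special case of the q-ary digit probability quoted later in the text and is used here as an assumption; and the carry update is the generic probability that adding two digits and an incoming carry produces an outgoing carry, assuming independent bernoulli digits. all numerical values are illustrative.

```python
import math

def noise_level_params(mean, L1, L2):
    """Bernoulli parameters of the binary expansion of an Exp(1/mean) variable,
    ordered from level -L1 up to level L2 (logistic form, assumed; see lead-in)."""
    lam = 1.0 / mean
    return [1.0 / (1.0 + math.exp(lam * 2.0 ** l)) for l in range(-L1, L2 + 1)]

def xor_param(a, b):
    """Parameter of the XOR (mod-2 sum) of two independent Bernoulli bits."""
    return a * (1.0 - b) + b * (1.0 - a)

def carry_out(p, q, c):
    """P(x + z + carry >= 2) for independent Bernoulli digits: the probability
    that adding the input, noise and incoming-carry bits of one level over the
    reals generates a carry into the next level."""
    return p * q + p * c + q * c - 2.0 * p * q * c

def effective_noise(p_levels, q_levels):
    """Carry probabilities and effective (noise XOR carry) parameters,
    lowest level first, when the carry is treated as part of the noise."""
    c, carries, q_eff = 0.0, [], []
    for p, q in zip(p_levels, q_levels):
        carries.append(c)
        q_eff.append(xor_param(q, c))
        c = carry_out(p, q, c)
    return carries, q_eff

if __name__ == "__main__":
    p = noise_level_params(mean=8.0, L1=4, L2=6)   # input levels (illustrative mean)
    q = noise_level_params(mean=1.0, L1=4, L2=6)   # noise levels
    carries, q_eff = effective_noise(p, q)
    for l, (ql, qe, c) in enumerate(zip(q, q_eff, carries), start=-4):  # levels start at -L1
        print(f"level {l:+d}: q = {ql:.3f}  carry = {c:.3f}  q_eff = {qe:.3f}")
```

these effective noise parameters are what the first of the two schemes below codes against.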
using capacity-achieving codes for the bsc, e.g. polar codes or spinal codes, combined with gallager's method, expansion coding achieves the following rate by considering carries as noise. [thm:aen_expansion_coding_schemei] expansion coding, considering carries as noise, achieves the rate for the aen channel given in eq. [equ:aen_achievable_rate_schemei] for any choice of the input level parameters satisfying the mean constraint, i.e.,
\[
\frac{1}{n}\sum_{i=1}^{n}\sum_{l=-L_1}^{L_2} 2^{l}\,\mathbb{E}[\mathsf{x}_{i,l}]
  \;=\; \sum_{l=-L_1}^{L_2} 2^{l}\, p_l \;\leq\; E_{\mathsf{x}}.
\]
a corresponding result, theorem [thm:aen_expansion_coding_schemeii], gives the rate achievable when the carries are decoded at each level. compared to the previous case, the optimization problem is simpler here, as the rate expression is simply the sum of the rates obtained from a set of parallel channels. optimizing these two theoretical achievable rates requires choosing proper values for the input level parameters. note that the optimization problems given by theorems [thm:aen_expansion_coding_schemei] and [thm:aen_expansion_coding_schemeii] are not easy to solve in general. here, instead of searching for the optimal solutions directly, we utilize information from the optimal input distribution of the original channel. recall that this distribution can be approximated by an exponential distribution at high snr. hence, one can simply choose the input level parameters from the binary expansion of that exponential distribution as an achievable scheme (eq. [equ:aen_input_parameter]). we now show that this proposed scheme achieves the capacity of the aen channel in the high snr regime for a sufficiently large number of levels. for this purpose, we first characterize the asymptotic behavior of the entropy at each level for the noise and for the equivalent noise, where the latter is closely related to the carries. [lem:aen_entropy_bound] the entropy of the noise seen at level l satisfies exponential bounds; see appendix [app:aen_entropy_bound]. [lem:aen_equivalent_entropy_bound] the entropy of the equivalent noise at level l satisfies analogous exponential bounds; see appendix [app:aen_equivalent_entropy_bound]. the intuition behind these lemmas is given by the example scenario in fig. [fig:aen_expanded_level], which shows that the bounds on the noise tails are both exponential. now, we state the main result indicating the capacity gap of the expansion coding scheme over the aen channel. [thm:aen_achivable_rate_main_result] for any positive constant, under conditions requiring sufficiently many expansion levels and sufficiently high snr, and with the input level parameters chosen from the binary expansion of the exponential distribution as above, 1. considering carries as noise, the achievable rate of eq. [equ:aen_achievable_rate_schemei] is within a constant gap of the capacity, where the constant is independent of the snr and the number of levels; 2. decoding carries, the achievable rate approaches the capacity. the proof of this theorem is based on the observation that the sequence of input level parameters is a left-shifted version of the noise level parameters in the high snr regime. as limited by the power constraint, the number of levels shifted is bounded, which further implies that the rate gained is of the corresponding order when carries are decoded. if carries are instead considered as noise, then there is an apparent gap between the two versions of the noise, which leads to a constant gap in the achievable rate. fig. [fig:aen_expanded_level] helps to illustrate the key steps of the intuition, and a detailed proof with precise calculations is given in appendix [app:aen_achivable_rate_main_result]. [fig:aen_expanded_level caption: the level parameters of the input and of the noise, and the rates at each level, are shown. in this example, the input parameters form a left-shifted version of the noise parameters, and a coding scheme restricted to a finite window of levels covers the significant portion of the rate obtained by using all of the parallel channels.]
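the two achievable rates can also be evaluated numerically, in the spirit of the theorems above. the per-level expressions used below, h(p xor q) - h(q) when the carries are decoded and h(p xor q_eff) - h(q_eff) when they are treated as noise, follow the per-level terms appearing in the appendix; choosing the input level parameters from the expansion of an exponential with mean E_X, and the specific snr values, are illustrative assumptions rather than values from the paper.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p <= 0.0 or p >= 1.0 else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def levels(mean, L1, L2):
    """Assumed logistic level parameters of the expansion (see the previous sketch)."""
    lam = 1.0 / mean
    out = []
    for l in range(-L1, L2 + 1):
        a = lam * 2.0 ** l
        out.append(0.0 if a > 700.0 else 1.0 / (1.0 + math.exp(a)))
    return out

def xor_param(a, b):
    """Parameter of the XOR of two independent Bernoulli bits."""
    return a * (1.0 - b) + b * (1.0 - a)

def achievable_rates(Ex, Ez, L1, L2):
    p, q = levels(Ex, L1, L2), levels(Ez, L1, L2)
    c, q_eff = 0.0, []                       # carry recursion, lowest level first
    for pl, ql in zip(p, q):
        q_eff.append(xor_param(ql, c))
        c = pl * ql + pl * c + ql * c - 2.0 * pl * ql * c
    r_carries_as_noise = sum(h2(xor_param(pl, qe)) - h2(qe) for pl, qe in zip(p, q_eff))
    r_decode_carries = sum(h2(xor_param(pl, ql)) - h2(ql) for pl, ql in zip(p, q))
    capacity = math.log2(1.0 + Ex / Ez)      # log(1 + E_X / E_Z), the target in the appendix
    return r_carries_as_noise, r_decode_carries, capacity

if __name__ == "__main__":
    for snr in (10.0, 100.0, 1000.0):
        r1, r2, cap = achievable_rates(Ex=snr, Ez=1.0, L1=12, L2=24)
        print(f"snr {snr:7.1f}: carries-as-noise {r1:5.2f}, decode-carries {r2:5.2f}, capacity {cap:5.2f} bits")
```

with enough levels, the decode-carries rate should track the capacity closely at high snr, while treating carries as noise leaves a larger gap, in line with the qualitative statements of the theorems.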
by lemma [lem:exponential_expansion], the truncated expansion converges to the exponential noise as the number of levels grows, and combined with the argument in theorem [thm:channel_expansion_coding], the achievable rate approaches that of the original channel. hence, the coding scheme also works well for the original aen channel. more precisely, the expansion coding scheme achieves the capacity of the aen channel in the high snr regime using a moderately large number of expansion levels. we calculate the rates obtained from the two schemes above with the input probability distribution given by eq. [equ:aen_input_parameter]. numerical results are given in fig. [fig:aen_achievable_rate]. it is evident from the figure (and also from the analysis given in theorem [thm:aen_achivable_rate_main_result]) that the proposed technique of decoding carries, when implemented with a sufficiently large number of levels, achieves the channel capacity in the high snr regime. another point is that neither of the two schemes works well in the low snr regime, which mainly results from the fact that the input approximation is only accurate at sufficiently high snr. nevertheless, the scheme that decodes carries performs close to optimal in the moderate snr regime as well. [fig:aen_achievable_rate caption: the rate obtained by considering carries as noise and the rate obtained by decoding the carry at each level are compared. solid lines represent adopting a sufficient number of levels, as indicated in theorem [thm:aen_achivable_rate_main_result], while dashed lines represent adopting only a constant number of levels (not scaling with snr).] in the previous section, only the binary expansion was considered. the generalization to q-ary expansion with q > 2 is discussed here. note that this change does not affect the expansion coding framework; the only difference is that each level after expansion should be modeled as a q-ary discrete memoryless channel. for this, we need to characterize the q-ary expansion of the exponential distribution. mathematically, the probability that the level-l digit of an exponential random variable with parameter \lambda equals s is given by
\[
\frac{\left(1-e^{-\lambda q^{l}}\right)e^{-\lambda q^{l} s}}{1-e^{-\lambda q^{l+1}}},
\qquad s \in \{0,1,\ldots,q-1\}.
\]
based on this result, the channel input and noise are expanded in the same way, and the achievable rate by decoding carries (note that in the q-ary expansion case, carries are still bernoulli distributed) can be expressed as a sum over the levels (eq. [equ:aen_achivable_rate_baseq]), where the per-level terms involve the distributions of the expanded input and noise digits at level l and the vector convolution of these distributions. when implemented with a sufficient number of levels, the achievable rate given by eq. [equ:aen_achivable_rate_baseq] achieves the capacity of the aen channel for any choice of q. more precisely, as shown in the numerical results in fig. [fig:aen_achievable_rate_baseq], expansion coding with larger q can achieve a higher rate (although this enhancement becomes limited once q grows beyond a small value). this property of the coding scheme can be utilized to trade off the number of levels and the alphabet size q to achieve a certain rate at a given snr. [fig:aen_achievable_rate_baseq caption: the achievable rates using q-ary expansion coding by decoding carries are illustrated.] expansion source coding is a scheme of reducing the problem of compressing analog sources to compressing a set of discrete sources.
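before developing the source coding scheme in detail, the q-ary digit probabilities quoted above can be checked numerically. the sketch below samples the digits independently from that pmf, rebuilds the analog value, and compares its mean with the target mean 1/lambda; the values of lambda, q and the level range are arbitrary illustrative choices.

```python
import math, random

def level_pmf(lam, q, l):
    """pmf of the level-l digit in the q-ary expansion of an Exp(lam) variable,
    using the expression quoted in the text:
    P(s) = (1 - exp(-lam*q^l)) * exp(-lam*q^l*s) / (1 - exp(-lam*q^(l+1)))."""
    a = lam * q ** l
    norm = 1.0 - math.exp(-a * q)
    return [(1.0 - math.exp(-a)) * math.exp(-a * s) / norm for s in range(q)]

def sample_digit(pmf):
    u, acc = random.random(), 0.0
    for s, prob in enumerate(pmf):
        acc += prob
        if u < acc:
            return s
    return len(pmf) - 1

def reconstructed_mean(lam, q, L1, L2, n_samples=50_000):
    """Draw independent digits at levels -L1..L2 and rebuild x = sum_l s_l * q^l."""
    pmfs = {l: level_pmf(lam, q, l) for l in range(-L1, L2 + 1)}
    total = 0.0
    for _ in range(n_samples):
        total += sum(sample_digit(pmfs[l]) * q ** l for l in pmfs)
    return total / n_samples

if __name__ == "__main__":
    lam = 0.5                                # target exponential has mean 1/lam = 2
    for q in (2, 3, 4):
        print(q, round(reconstructed_mean(lam, q, L1=8, L2=6), 3), "target", 1.0 / lam)
```

the check also makes plain why each level can be treated as an independent q-ary discrete source or channel: the digits are mutually independent. we now return to the expansion source coding problem.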
in particular , consider an i.i.d .source .a -rate distortion code consists of an encoding function , where , and a decoding function , which together map to an estimate .then , the rate and distortion pair is said to be achievable if there exists a sequence of -rate distortion codes with \leq d ] for , where is given by .see appendix [ app : expsc_achievable_rate_z ] .note that , the last two terms in are a result of the truncation and vanish in the limit of large number of levels . in later parts of this section , we characterize the number of levels required in order to bound the resulting distortion within a constant gap .note that it is not necessary to make sure for every to guarantee . to this end, we introduce successive coding scheme , where encoding and decoding start from the highest level to the lowest . at a certain level , if all higher levels are encoded as equal to the source , then we must model this level as binary source coding with the one - sided distortion .otherwise , we formulate this level as binary source coding with the symmetric distortion ( see figure [ fig : expsc_successive_coding ] for an illustration of this successive coding scheme ) .in particular , for the later case , the distortion of concern is hamming distortion , i.e. . denoting the equivalent distortion at level as , i.e. =d_l ] for . here , is given by , and the values of are determined by : * for , * for , see appendix [ app : expsc_achievable_rate_x ] . in this sense ,the achievable pairs in both theorems are given by optimization problems over a set of parameters .however , the problems are not convex , so an effective theoretical analysis may not be performed here for the optimal solution .but , by a heuristic choice of , we can still get a good performance .inspired by the fact that the optimal scheme models noise as exponential with parameter in the test channel , we design as the expansion parameter from this distribution , i.e. , we consider we note that higher levels get higher priority and lower distortion with this choice , which is consistent with the intuition .this choice of may not guarantee any optimality , although simulation results imply that this can be an approximately optimal solution . in the following ,we show that the proposed expansion coding scheme achieves within a constant gap to the rate distortion function ( at each distortion value ) .[ thm : expsc_main_result ] for any ] holds for any . to this end , the assertion also holds for level , and this completes the proof of . using, we obtain that for any where the last inequality holds due to for any .finally , we obtain where * is from and the monotonicity of entropy ; * is from for any ; * is from for any . from the proof ,the information we used for is that , so this bound holds uniformly for any snr .we first prove that achieves capacity .denote and .then , we have an important observation that which shows that channel input is a shifted version of noise with respect to expansion levels ( see fig .[ fig : aen_expanded_level ] for intuition ) . 
based on this , we have \nonumber\\ & \stackrel{(a)}{\geq}\sum_{l =- l_1}^{l_2}\left [ h(p_l)-h(q_l)\right]\nonumber\\ & \stackrel{(b)}{=}\sum_{l =- l_1}^{l_2}\left [ h(q_{l+\eta-\xi})-h(q_l)\right]\nonumber\\ & = \sum_{l =-l_1+\eta-\xi}^{l_2+\eta-\xi}h(q_l ) -\sum_{l =- l_1}^{l_2}h(q_l)\nonumber\\ & = \sum_{l =- l_1+\eta-\xi}^{-l_1 - 1}h(q_l ) -\sum_{l = l_2+\eta-\xi+1}^{l_2}h(q_l)\nonumber\\ & \stackrel{(c)}{\geq}\sum_{l =- l_1+\eta-\xi}^{-l_1 - 1}\left[1 - 2^{l-\eta}\log e \right ] -\sum_{l = l_2+\eta-\xi+1}^{l_2}2^{\eta - l}3\log e\nonumber\\ & \stackrel{(d)}{\geq}(\xi-\eta)-2^{-l_1-\eta}\log e -2^{-l_2+\xi}3\log e \nonumber\\ & \stackrel{(e)}{\geq}\log \left(\frac{e_{\mathsf{x}}}{e_{\mathsf{z}}}\right)-\epsilon\log e-3\epsilon\log e\epsilon\nonumber\\ & \stackrel{(f)}{\geq}\log \left(1+\frac{e_{\mathsf{x}}}{e_{\mathsf{z}}}\right)-\frac{e_{\mathsf{z}}}{e_{\mathsf{x}}}\log e-4\epsilon\log e\nonumber\\ & \stackrel{(g)}{\geq}\log \left(1+\frac{e_{\mathsf{x}}}{e_{\mathsf{z}}}\right)-5\epsilon\log e,\label{equ : aen_rate_gap_proof2}\end{aligned}\ ] ] where * is due to , and monotonicity of entropy ; * follows from ; * follows from and in lemma [ lem : aen_entropy_bound ] ; * holds as and * is due to the assumptions that , and ; * is due to the fact that as for any ; * is due to the assumption that .next , we show the result for . observe that \nonumber\\ & \stackrel{(h)}{\geq } \sum_{l =- l_1}^{l_2}\left [ h(p_l\otimes q_l)-h(\tilde{q}_l ) \right]\nonumber\\ & = \sum_{l =- l_1}^{l_2}\left [ h(p_l\otimes q_l)-h(q_l ) \right]+\sum_{l =- l_1}^{l_2}\left [ h(q_l)-h(\tilde{q}_l ) \right]\nonumber\\ & = \hat{r}_2-\sum_{l =- l_1}^{l_2}\left [ h(\tilde{q}_l)-h(q_l ) \right]\nonumber\\ & \stackrel{(i)}{\geq}\hat{r}_2-\sum_{-l_1}^{\eta}\left[1-\left(1 - 2^{l-\eta}\log e \right)\right]-\sum_{\eta+1}^{l_2}\left[6(l-\eta)2^{-l+\eta}\log e-0\right]\nonumber\\ & = \hat{r}_2-\sum_{-l_1}^{\eta } 2^{l-\eta } \loge -\sum_{\eta+1}^{l_2}6(l-\eta)2^{-l+\eta}\log e\nonumber\\ & \stackrel{(j)}{\geq}\hat{r}_2- 2\log e -12\log e\nonumber\\ & \stackrel{(k)}{\geq}\log \left(1+\frac{e_{\mathsf{x}}}{e_{\mathsf{z}}}\right)-5\epsilon\log e-14\log e,\nonumber\end{aligned}\ ] ] where * is due to , which further implies ; * follows from and , together with the fact that and for any ; * follows from the observations that and * is due to .thus , choosing completes the proof .note that , in the course of providing these upper bounds , the actual gap might be enlarged .the actual value of the gap is much smaller ( e.g. , as shown in fig .[ fig : aen_achievable_rate ] , numerical result for the capacity gap is around bits ) .note that the maximum entropy theorem implies that the distribution maximizing differential entropy over all probability densities on support set satisfying is exponential distribution with parameter . based on this result , in order to satisfy \leq d ] , hence , ; * follows from the observation that for any ] ( due to ) , and the last inequality holds for any .2 . on the other hand , for , tends to , so as and get close .more precisely , we have where * follows from the fact ; * follows from the observation that for any $ ] ; * follows from the fact that where the second inequality holds from for any ( due to ) .* follows from is convex such that for any and , where is the derivative , and setting , completes the proof of this step ; * follows from ; * follows from theorem assumptions that and .
a general method of coding over expansions has been proposed, which allows one to reduce the highly non-trivial problems of coding over analog channels and compressing analog sources to a set of much simpler subproblems: coding over discrete channels and compressing discrete sources. the focus of this paper is on the additive exponential noise (aen) channel and on lossy compression of exponential sources. taking advantage of the essential decomposability of these channels (and sources), the proposed expansion method maps these problems to coding over parallel channels (respectively, sources), where each level is modeled as an independent coding problem over a discrete alphabet. any feasible solution to the resulting optimization problem after expansion corresponds to an achievable scheme for the original problem. utilizing this mapping, even in cases where the optimal solutions are difficult to characterize, it is shown that the expansion coding scheme still performs well with appropriate choices of parameters. more specifically, theoretical analysis and numerical results reveal that expansion coding achieves the capacity of the aen channel in the high snr regime, and that, for lossy compression, the rate-distortion pair achievable by expansion coding approaches the shannon limit in the low-distortion regime. remarkably, by using capacity-achieving codes with low encoding and decoding complexity that were originally designed for discrete alphabets, for instance polar codes, the proposed expansion coding scheme allows for the design of low-complexity analog channel and source codes.
consider a cell with receptors on its surface that independently bind ligand , .the cell senses the ligand concentration based on the instantaneous level of a downstream read - out molecule at some time . via error propagation , the cell s uncertainty about is then ( ) : where is the ligand s chemical potential .the uncertainty is low if the average read - out level responds sensitively to changes in ligand concentration , as measured by the gain , but is not noisy , as measured by the variance . if the receptor - ligand complex itself is taken as the read - out , then the error is : since , where is the probability a receptor is bound to ligand .indeed , is the `` instantaneous error '' , _i.e. _ the sensing error based on a single concentration estimate via a single receptor .because each receptor provides an independent concentration measurement ( ) , the total number of independent measurements is .clearly , the sensing error is limited by the total number of receptors on the membrane .cells can reduce the error in eq .[ eq : receperr ] with downstream networks that time - integrate over the history of receptor states ( ) .key to the ability of networks to time - integrate is a memory of these past states , implemented , for example , by a long - lived molecular species or a signaling cascade that delays the signal ( ) .equilibrium systems can have these and hence have memory of the past receptor states .thus , we might expect that equilibrium networks can reduce the sensing error past the bound set by the number of receptors at the expense of downstream signaling molecules .we consider cytoplasmic read - out molecules that bind ligand - free receptors : , .solving the associated langevin equations ( _ materials and methods _ ) shows that the dynamics of the output around its mean is given by the time - integrated fluctuations in the receptor state plus noise due to the receptor - read - out binding : ,\ ] ] where and is the integration time .the latter can be made arbitrarily large by slowing down the read - out dynamics , i.e. by lowering and .this suggests that equilibrium networks can completely filter the extrinsic noise in the receptor states and reduce the sensing error to zero .however , the idea that the sensing error can be reduced to zero ignores the fact that in these equilibrium systems ligand - receptor binding and receptor - read - out binding are coupled . in this specific system , these reactions are coupled because the read - out and the ligand compete for binding to the receptor . to elucidate how the coupling between receptor - ligand binding and receptor - read - out binding compromises sensing in equilibrium networks , we determine the total sensing error . from eq .[ eq : lang ] , the variance of the output can be written as the sum of the extrinsic noise and the intrinsic noise , where with the correlation function .combining with the gain gives the sensing error for this network ( eq . [ eq : error ] ) . analytically minimizing the result, we find that it is never lower than the bound set by the number of receptors ( i.e. in eq .[ eq : receperr ] ) , regardless of the integration time or other parameters of the network ( _ si text _ ) .this raises the paradox of a network that time - integrates the receptor fluctuations yet can not reduce the sensing error with it .the resolution of the paradox is that in equilibrium systems the intrinsic and extrinsic noise are not independent , precisely because receptor - ligand and receptor - readout binding are coupled . 
as a result , the fluctuations in the receptor state and the read - out become correlated ; is not zero ( ) . because of these correlations , equilibrium networks face a fundamental trade - off between the removal of extrinsic noise in the receptor state and the suppression of intrinsic in the downstream signaling network . in an optimally designed network that minimizesthe sensor error , increasing the integration time reduces the extrinsic noise , but also increases the intrinsic noise by at least the same amount . signaling networks are usually far more complicated than a single read - out molecule that binds the receptor , and it has been shown that additional network layers can reduce the sensing error ( ) .this raises the question whether a more complicated equilibrium network can overcome the limit set by the number of receptors .searching over all possible network topologies to systematically address this question is difficult , if not impossible .however , equilibrium systems are fundamentally bounded by the laws of equilibrium thermodynamics , regardless of their topology .one such law is the fluctuation - dissipation theorem .just as a decrease in the viscosity of a fluid increases both the noise in a particle s brownian motion and the sensitivity of its response to an applied force , so too do modifications in equilibrium networks affect both the noise in the read - out and the sensitivity of its response to changes in the ligand concentration , _i.e. _ the gain .specifically , for any read - out in an equilibrium system , the fluctuation - dissipation theorem implies that the gain is equal to the covariance of the fluctuations in the read - out and the ligand - bound receptor : ( ) .then , the sensing error from any read - out is ( eq . [ eq : error ] ) : . if the receptors themselves are taken as the read - out , the sensing error is . by combining these expressions, it follows that no read - out is better for sensing than the receptors : since the correlation coefficient this relation leads to quantitative bounds on the sensing capacity of equilibrium networks .in general , the variance , and hence , depends on the particular network .however , for any network , since .thus , for equilibrium systems , the fundamental lower bound on the fractional error in the concentration estimate is : this proves that in equilibrium systems , which are not driven by fuel turnover , the precision of sensing is fundamentally limited by the number of receptors ( fig .[ fig : eqneqdiag ] , upper box ) ; a downstream signaling network can never improve the accuracy of sensing. networks in which the receptors cooperatively bind the ligand can achieve the bound of eq .[ eq : bound_eq_net ] ( _ si text _ ) . for networks without cooperative ligandbinding , as in the simple example above , the sensing error is worse : , so ( _ si text _ ) .the sensing error for independent receptor binding is most easily understood for receptors with identical affinity for the ligand , as in our simple example ( eq . 
[eq:receperr]), but holds generally: different affinities do not break this bound. the different species in a network can also be viewed as nodes through which information about the ligand flows. we can show that the data processing inequality ( ) also guarantees, for an equilibrium system, that no read-out has more information about the ligand than the receptors at any given time, where the relevant quantity is the mutual information between the instantaneous levels of the ligand and of the species in question (_si text_). the history of receptor states does contain more information about the ligand concentration than the instantaneous receptor state, but our results show that an equilibrium signaling network cannot exploit this: its output contains only as much information as the instantaneous receptor state; it does not encode the history of receptor states in any informative way, whether by time-integration or any other method. ultimately, equilibrium systems sense by harvesting the energy of ligand binding. this energy is used to propagate the signal through the downstream network; in the simple system studied here, for example, the energy of ligand binding is used to expel the read-out molecule from the receptor. however, detailed balance then dictates that receptor-read-out binding also influences receptor-ligand binding, thus perturbing the signal. indeed, the trade-offs faced by equilibrium networks are all different manifestations of their time-reversibility ( ). the only way for a time-reversible system to ``integrate'' the past is for it to perturb the future. concomitantly, in a time-reversible system there is no sense of ``upstream'' and ``downstream'', concepts which rely on a direction of time. although we have referred to the downstream molecule as a ``readout'' of the ligand concentration, the ligand is just as much a readout of that molecule. while in equilibrium systems the read-out encodes the receptor state, the read-out is not a stable memory that is decoupled from changes in the receptor state; it merely passively lags. in an equilibrium system the sensing error, like any static quantity, can only depend on ratios of time scales, which is another way of seeing that increasing the ``integration time'' cannot improve sensing. these results show that in an equilibrium system each receptor provides at most one independent measurement of the ligand, regardless of how much information is encoded in the history of the receptor state, how complicated the signaling machinery is downstream, how many molecules are devoted to signaling downstream, or how long the apparent integration time of the network is. energy dissipation, through fuel turnover, is required to break the trade-offs between noise and sensitivity, between intrinsic and extrinsic noise, and, ultimately, between the accuracy of sensing and space on the membrane. networks that can reduce error via time-integration must be non-equilibrium systems.
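before turning to non-equilibrium networks, the chain of relations behind the statement that no equilibrium read-out outperforms the receptors can be written out explicitly. the symbols below are introduced for this sketch and may differ from those in the original equations: n is the number of ligand-bound receptors, x an arbitrary read-out, R_T the total receptor number, sigma^2 denotes variances, rho the correlation coefficient, and mu = ln c is the ligand chemical potential in units of k_B T (so that errors in mu and in ln c coincide).

\begin{align}
\frac{d\bar{x}}{d\mu} &= \mathrm{cov}(x,n) \quad\text{(fluctuation-dissipation relation quoted above)},\nonumber\\
\left(\frac{\delta c}{c}\right)^{2}_{x} &= \frac{\sigma_x^{2}}{\mathrm{cov}(x,n)^{2}}
 = \frac{1}{\rho_{x,n}^{2}\,\sigma_n^{2}}
 \;\geq\; \frac{1}{\sigma_n^{2}}
 = \left(\frac{\delta c}{c}\right)^{2}_{n},\nonumber\\
\left(\frac{\delta c}{c}\right)^{2}_{n} &= \frac{1}{R_T\,p(1-p)} \;\geq\; \frac{4}{R_T}
 \quad\text{(independent two-state receptors, since } p(1-p)\leq\tfrac14\text{)}.\nonumber
\end{align}

the second line shows that any read-out is at best as informative as the receptors themselves, and the third gives the receptor-limited error for independent receptors discussed in the text; with this equilibrium bound in hand, we turn to networks that are driven out of equilibrium.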
to understand the resources required to reduce the sensing error in these systems, we need to understand how they sense at the molecular level. berg and purcell pointed out that by integrating the receptor signal over a time, the cell can take a number of independent samples of the receptor state that grows with the integration time measured in units of the receptor correlation time ( ). we will show that the cost of sensing depends on how many of these samples the cell actually takes. we therefore view the downstream network, which consists of discrete components, as a system that discretely samples the receptor state, rather than integrating it. [fig:sensing caption: (b) the biochemical network in (a) discretely samples the receptor state, illustrated for one receptor. the states of the receptor over time are encoded in the states of the molecules that collided with it: the readout is modified if the receptor is bound; otherwise it is unmodified. molecules that collide with the unbound receptor are indistinguishable from those that have never collided, leading to an additional error. (c) active molecules can be degraded. some samples are erased, and the remaining samples are, on average, further apart (more independent). (d) all reactions are in principle reversible, compromising the encoding of the receptor state into the readout. the sensing error is determined by parameters that describe the energy flow in the network, including the flux and the free-energy drops across the activation and deactivation reactions of the readout.] to gain intuition about the resources required to build and operate these networks, we construct, step by step, a model of a receptor that drives a push-pull network, which is a canonical non-equilibrium motif in prokaryotic and eukaryotic cell signaling ( ). in these systems, the receptor itself or the enzyme associated with it, such as chea in bacterial chemotaxis ( ), catalyzes the (chemical) modification of a read-out protein. the general principle is that these networks take samples of the receptor by storing its state in the stable modification states of the read-out molecules (fig. [fig:sensing]a,b). each read-out molecule that interacted with the receptor provides a memory of the ligand-occupation state of that receptor molecule; collectively, the read-out molecules encode the history of the receptor states. quantitatively, if there are receptor-readout interactions, then the cell has that many samples of the receptor state, and the error is reduced by the corresponding factor, as in eq. [eq:receperr], or by less if the samples are not independent. by building up the model step by step, we seek to understand how different features of the network affect the number of samples, their independence, and their accuracy; a small numerical illustration of how sample number and independence set the error is given next.
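as a concrete illustration of how the number of samples and their independence set the error, the following sketch computes the variance of a receptor-occupancy estimate built from n samples taken delta apart. the receptor is modeled as a standard two-state (telegraph) process whose occupancy autocovariance decays exponentially with the correlation time; this model, and the parameter values, are illustrative assumptions rather than quantities taken from the text.

```python
import math

def var_sample_mean(p, tau_c, delta, n):
    """Variance of the mean of n occupancy samples taken delta apart.

    Assumes a two-state (telegraph) receptor with stationary occupancy p and
    autocovariance p*(1-p)*exp(-|t|/tau_c) between samples a time t apart.
    """
    r = math.exp(-delta / tau_c)
    c0 = p * (1.0 - p)
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += c0 * r ** abs(i - j)
    return total / n ** 2

def effective_independent_samples(p, tau_c, delta, n):
    """Number of independent samples that would give the same variance."""
    return p * (1.0 - p) / var_sample_mean(p, tau_c, delta, n)

if __name__ == "__main__":
    p, tau_c, n = 0.5, 1.0, 200          # illustrative values
    for delta in (0.1, 0.5, 1.0, 2.0, 5.0, 20.0):
        n_i = effective_independent_samples(p, tau_c, delta, n)
        print(f"delta/tau_c = {delta:5.1f}   samples taken = {n}   effectively independent ~ {n_i:7.1f}")
```

for samples spaced much further apart than the correlation time, essentially all of them count as independent; for rapid sampling, the effective number saturates at a value set by the total sampling time and the correlation time, which is the behavior the following paragraphs quantify.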
additionally , reactions are microscopically reversible ( fig .[ fig : sensing]d ) , which means that read - out modifications can occur independently of the receptor and receptor - mediated modifications can occur in the wrong direction ; both effects reduce the reliability of a sample .energy is needed to break time - reversibility and to protect the coding .we arrive at an expression for the sensing error that combines these effects .it reveals trade - offs between cellular resources and performance : speed , accuracy , energy , and the number of receptor and downstream molecules .for intuition , we first consider a cell that responds after a time to a change in a ligand s concentration at some time , based on the output of the simple reaction network ( fig .[ fig : sensing]a ) .we assume that the cell starts with a large pool of inactive read - out molecules and that activated molecules are never deactivated . for descriptive ease , we assume the reaction is diffusion - limited , so that each collision between an inactive molecule and a ligand - bound receptor leads to activation of . the resulting sensing error can be derived via eq .[ eq : error ] from the master equation , which describes fluctuations in the network ( see materials and methods ) .however , to understand the required resources , we calculate the error instead by viewing the molecular network as one that discretely samples the receptor state . at the molecular level ,readout molecules collide with the receptor over time and are modified depending on the ligand - occupation state of the receptor. the total rate at which inactive molecules collide with receptor molecules in any state is for a large readout pool , and the total number of such collisions after time is , with on average .if a receptor molecule is bound to ligand at the time of a collision , the read - out molecule is converted to its active form , while if it is not the read - out remains unchanged . in this way, the state of the receptor at the time of a collision is encoded in the state of the read - out molecule that collided with it , and the history of the receptor states is encoded in the states of the read - out molecules at the time ( fig .[ fig : sensing]b ) .the read - out molecules that collided with the receptor thus constitute samples of the receptor state . the average number of samples after time is product of the total number of receptors and the number of samples per receptor during the integration time . the sensing errorcan then be derived by viewing the system as one that employs the sampling process described above , estimating the average receptor occupancy from samples of the receptor state taken at the times of readout - receptor collisions , _i.e. _ as ( _ si text _ ) .this yields for the sensing error : this expression has a clear interpretation in terms of sampling .the first term is exactly the error expected from stochastically taken samples of the receptor over the time . 
specifically , it is the error of an estimate based on a single sample , , divided by the average number of samples that are independent , , where is the total number of samples times the fraction that is independent : when .clearly , depends on the receptor correlation time and on the time interval between samples of the same receptor ; samples farther apart are more independent .this expression shows that the finite sampling rate reduces the number of independent samples below the berg - purcell factor , the maximum number of independent samples that can be taken during .the latter is reached only when the sampling rate is infinite ( e.g. the number of downstream molecules ) , so that and . the second term in eq .[ eq : twoterm ] accounts for the fact that the cell can not distinguish between those molecules that have collided with an unbound receptor ( and hence provide information on the receptor occupancy ) , and those that have not collided with the receptor at all ( fig .[ fig : sensing]b ) .if the cell could distinguish between those molecules , it could estimate the average receptor occupancy from rather than ; then the second term would be zero ( _ si text _ ) . indeed , the second term arises from the biochemical noise that makes the actual number of samples , , different from its average , .however , when is small and/or is large , the second term is small compared to the first and the sensing error is given by the error of a single measurement , , divided by the average number of independent measurements , . the error in eq .[ eq : twoterm ] decreases with the time , suggesting that the cell can sense perfectly if it waits long enough before responding to a change in its environment. however , modification states of molecules decay , and their finite lifetime , , limits sensing , regardless of how long the cell waits . to explore this at the molecular level , we consider the network in the previous paragraph augmented with the deactivation reaction , ( fig .[ fig : sensing]c ) .we consider the sensing error after long times ( ) , in steady state , again for a large pool of inactive read - out molecules . for pedagogical clarity, we imagine the deactivation is mediated by a phosphatase and that the reaction is diffusion - limited .we calculate the sensing error by solving the master equation or by viewing the system as one that discretely samples the receptor state , as before ( _ si text _ ) .we find that also with deactivation the sensing error is given by eqs .[ eq : twoterm ] and [ eq : ni ] , yet with fewer samples , , spaced effectively farther apart , .the molecular picture of sampling provides a clear interpretation . as before ,the readout molecules encode the state of a receptor and serve as samples of the receptor state . with deactivation ,however , only those readout molecules which have collided with the receptor more recently than with the phosphatase reflect the receptor state . atany given time , the average number of such readout molecules , and hence samples , is ; the lifetime thus sets an effective integration time . as without deactivation ,the fraction of samples that are independent is determined by the effective spacing between them , see eq .[ eq : ni ] . though the time between the creation of samples is still , _i.e. _ the spacing without readout deactivation , some of the samples are erased via collision with the phosphatase .we therefore expect that the spacing between remaining samples is larger . 
indeed , calculating the effective spacing between samples taking this effect into account yields , which is twice that without decay ( _ si text _ ) .the fact that the remaining samples are more independent explains a previously noted correspondence ( ) between the sensing error in a system with deactivation , , and that in a system without deactivation , , in the infinite sampling limit : they are equal for , and not for as would be expected if their samples were just as independent . the copy numbers of signaling molecules are often small .to take this into account , we compute the sensing error from eq .[ eq : error ] for a finite number of read - out molecules using the linear - noise approximation to the master equation describing the biochemical fluctuations ( _ materials and methods _ ) , and compare the result with eq .[ eq : twoterm ] .this defines an effective number of samples , , where is the relaxation time of the network . for this network , . in essence , cells count only those samples created less than a relaxation time in the past ; nothing that happened earlier can influence the current state , including its ability to sense .the fraction of samples that is independent is given by eq .[ eq : ni ] with , analogously to the previous section .all reactions are in principle microscopically reversible . taking this into account, we recognize that active molecules that collide with the bound receptor sometimes become inactive , , and that inactive molecules that collide with the phosphatase are sometimes activated , ( fig .[ fig : sensing]d ) .these reverse reactions compromise the encoding of the receptor state into the read - out : an active molecule no longer encodes the ligand - bound state of the receptor at a previous time with 100% fidelity , since it can also result from a collision with the phosphatase ; similarly , , rather than , may reflect a collision with the ligand - bound receptor .we compute the sensing error for the reversible network from eq .[ eq : error ] using the linear - noise approximation to the master equation ( see _ materials and methods _ ) . as before ,it can be written as eqs .[ eq : twoterm ] and [ eq : ni ] .the effective number of independent samples is a complicated expression of the 8 fundamental variables in the system : the 6 rate constants describing the forward and reverse rates of the 3 reactions ( including ligand - receptor binding ) , and the total copy numbers and .however , the expression has a particularly simple and illuminating form in terms of variables that describe , as we will show , the resource limitations of the cell .in addition to variables already defined ( , , , and ) , these include : the flux of across the cycle in which it is created by the receptor and deactivated via the phosphatase ; and the average free - energy drops , and , across the receptor - catalyzed pathway and the phosphatase - catalyzed pathway , respectively , in units of ( fig .[ fig : sensing]d ) .each of these variables depends in a complicated way on the fundamental parameters of the system , the rate constants and the copy numbers .in particular , the free - energy drops are related to the propensities and of the reactions in the forward and backward directions , respectively : ( ) .however , the variables can all be varied independently , except that in equilibrium . 
in terms of these variables ,the effective number of independent samples taken by the push - pull network is : where is the total free - energy drop across the cycle ; is also known as the affinity of the cycle ( ) .[ eq : neff ] is our principle result for non - equilibrium systems .it takes into account readout deactivation , the finite number of readout molecules , and the reversibility of reactions .the equation has a clear interpretation .the product is the number of cycles of read - out molecules involving collisions with ligand - bound receptor molecules during the system s relaxation time .the quantity is the total number of read - out cycles involving collisions with receptor molecules , be they ligand bound or not ; it is thus the total number of receptor samples taken during , .the factor , involving , reflects the quality of each sample . when , an active read - out molecule is as likely to be created by the ligand - bound receptor as by the phosphatase and there is no coding and no sensing ; indeed , in this limit , and the effective number of samples .note also that when the backward reactions are faster than the forward reactions , corresponding to becoming negative , encodes the ligand - bound receptor instead of .this symmetry is reflected by the symmetry of eq .[ eq : neff ] : the number of samples is the same when the signs of , , and are all flipped .the effective number of accurate samples is , less than the total number taken .the fraction of the samples that are independent is , as before , ) with reflecting the time interval between effective samples .. [ eq : neff ] reveals trade - offs among the different resources for sensing , and between these resources and the accuracy of sensing .is increased by increasing the readout copy number , the number of independent measurements saturates at the berg - purcell limit , but the energy consumption and protein cost ( ) continue to rise .( b ) the energy requirements for sensing . in the irreversible regime ( ) , the work to take one sample of a ligand - bound receptor , , equals , because each sample requires the turnover of one fuel molecule , consuming of energy . in the quasi - equilibrium regime ( ) , each effective sample of the bound receptor requires , which defines the fundamental lower bound on the energy requirement for taking a sample . when , the network is in equilibrium and both and are .atp hydrolysis provides , showing that phosphorylation of read - out molecules makes it possible to store the receptor state reliably .the results are obtained from eq .[ eq : neff ] with .( c ) when two resources a and b compensate each other , one resource can always be decreased without affecting the sensing error , by increasing the other resource ; concomitantly , increasing a resource will always reduce the sensing error . when both resources are instead fundamental , the sensing error is bounded by the limiting resource and can not be reduced by increasing the other resource .( d , e ) the three resources time / receptor copies , copies of downstream molecules , and energy are all required for sensing , with no trade - offs among them ( see fig . [fig : eqneqdiag]b ) .the minimum sensing error obtained by minimizing eq .[ eq : twoterm ] is plotted for different combinations of ( d ) and , and ( e ) number of samples were and infinite and .the curves track the bound for the limiting resource indicated by the grey lines , showing that the resources do not compensate each other . 
the plot for the minimum sensing error as a function of and is identical to that of ( e ) with replaced by .[ fig : efficiency],width=321 ] there is no fundamental relationship between receptor copy number and sensing , as in equilibrium systems .essentially , the error is determined by the total number of samples , and it does not matter , as long as the samples are independent , whether these samples are from the same receptor over time or from many receptors at the same time .an independent sample of the same receptor can be taken roughly every ( eq . [ eq : ni ] ) .naturally , samples can be taken more frequently .in fact , cells can time - integrate : if , the receptors are sampled infinitely fast and and ( eq . [ eq : ni ] ) ; then the number of independent samples taken over reaches its maximum , the berg - purcell factor , and the error can achieve its minimum , . however , the benefit of sampling faster by increasing in reducing the sensing error is rapidly diminishing , while the total protein and energetic costs increase ( see fig . [fig : efficiency]a ) .while the effect of copy number on intrinsic noise has been studied extensively , how copy numbers of read - out signaling molecules affect the fundamental sensing limit has not been elucidated .[ eq : neff ] the factor containing the chemical potentials is always less than 1 ; also , because the system has relaxed when all read - out molecules have completed an activation - deactivation cycle .hence , eq . [ eq : neff ] shows that the number of samples of a ligand - bound receptor , , is always less than the number of downstream molecules , .each read - out molecule provides at most one sample , because at any given time it exists in only one modification state , regardless of how many times it has collided with the receptor or how long the integration time is .there is no mechanistic sense in which a single molecule `` integrates '' the receptor state . as a consequence ,no matter how the network is designed , how much time or energy it uses , or how many receptors it has , cells are fundamentally limited by the pool of read - out molecules : the sensing error , obtained by analytically minimizing eq .[ eq : twoterm ] at fixed .the free - energy drop across a cycle , , must be provided by a fuel molecule such as atp .this free energy represents the maximum work the fuel molecule could do if used by an ideal engine .[ eq : neff ] defines three regimes of sensing with respect to the energy consumption of the network . when the system is in equilibrium and the sensing error diverges ( ) , as discussed above ; indeed , this system employs a fundamentally different signaling strategy than equilibrium systems use to sense .two other regimes are defined by the work that the fuel molecules need to do in order to take a sample of the receptor .the power , the rate at which the fuel molecules do work , is and the total work performed during the relaxation time is .this work is spent on taking samples of receptor molecules that are bound to ligand , because only they can modify downstream read - out molecules .the total number of effective samples of ligand - bound receptors obtained during , is .hence , the work needed to take one effective sample of a ligand - bound receptor is , with given by eq .[ eq : neff ] .[ fig : efficiency]b shows this quantity as a function of .two regimes can be observed . when , the work to take one effective sample of a ligand - bound receptor is simply , independent of kinetic time scales . 
in this regime ,the read - out reactions are essentially irreversible and the sample quality factor in eq .[ eq : neff ] reaches unity , meaning that each read - out molecule reliably encodes the receptor state at an earlier time .the effective number of samples therefore equals the total number taken , and is given by that of the irreversible case already studied , .the work per sample of a ligand - bound receptor , , equals , because each sample requires the turnover of one fuel molecule , using of energy .the total number of samples is thus limited by the work as . in this regime ,energy limits sensing not because it limits the reliability of each sample , but because it limits the total number of samples that could be taken during by limiting the receptor sampling frequency , the flux : increasing necessarily requires more work . inserting this expression into eq .[ eq : twoterm ] and optimizing puts a lower bound on the sensing error : intriguingly , eq .[ eq : fundlim_irr ] suggests that for a fixed amount of energy , , spent during the relaxation time , the sensing error can be reduced to zero by reducing to zero .however , the lower bound in eq .[ eq : fundlim_irr ] is only achievable ( and eq .[ eq : fundlim_irr ] thus only applies ) , when .when , the system transitions to a quasi - equilibrium regime in which each fuel molecule provides a small but nonzero amount of energy . in this regime , the system can still consume significant amounts of energy when the fuel molecules are consumed at a rapid rate by many distinct read - out molecules . in the limit that and at fixed , the effective number of samples given by eq .[ eq : neff ] reduces to in the quasi - equilibrium regime , each readout - receptor interaction corresponds to an increasingly noisy measurement of the receptor state ( ) , but many noisy measurements ( ) contain the same information as 1 perfect measurement provided that collectively at least was spent on them . indeed , as fig .[ fig : efficiency]b shows , is the fundamental lower bound on the work needed to take one accurate sample of a ligand - bound receptor .it puts another lower bound on the sensing error : inserting eq .[ eq : fundlim ] into eq .[ eq : twoterm ] and optimizing shows that : this power - law bound relates energy to information .the bound can be reached when time and are not limiting , and . when , the lower bound is higher and given by eq .[ eq : fundlim_irr ] .[ eq : fundlim_irr ] and [ eq : ener_req ] show that the sensing precision increases with the work done in the past relaxation time , , setting up a trade - off among speed , power , and accuracy , as found in adaptation ( ) .the trade - off emerges naturally from a molecular picture of sensing .when the response needs to be rapid , needs to be small and the power demand is high : the samples , which require energy , must be taken close together in time . however , when the cell can wait a long time before responding , the power required to make large can be infinitesimal : the samples can be created far apart in time .there is no minimum power requirement for sensing .the above analysis shows that each of the fundamental resource categories time / receptor copy number , downstream read - out molecules , and power / time ( fuel ) has its own trade - off with sensing accuracy ( figs .[ fig : efficiency]c , d , e ) . to a good approximation ,the worst bound is active : , corresponding to . 
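as an illustrative aside ( not part of the original analysis ) , the following minimal python sketch captures the weakest - link picture described above : each resource class sets its own lower bound on the squared relative sensing error , and the achievable error tracks the largest of these bounds . the reciprocal forms and unit prefactors used here are placeholder assumptions chosen only to illustrate the max - of - bounds structure ; they are not the exact prefactors of the bounds derived in the text .

```python
import numpy as np

# Illustrative "weakest link" picture: each resource class sets its own lower
# bound on the squared relative sensing error, and the achievable error is
# governed by the largest of the three.  The 1/x forms and unit prefactors are
# placeholder assumptions, not the exact prefactors derived in the text.

def error_bound(n_indep, x_total, work_over_kT):
    """Illustrative lower bound on the squared relative sensing error.

    n_indep      : independent receptor samples (time / receptor class)
    x_total      : downstream read-out molecules (copy-number class)
    work_over_kT : work (in units of kT) dissipated during the relaxation time
    """
    bound_time_receptors = 1.0 / n_indep        # Berg-Purcell-like term
    bound_readout_copies = 1.0 / x_total        # at most one sample per read-out molecule
    bound_energy = 1.0 / work_over_kT           # of order kT needed per reliable sample
    return max(bound_time_receptors, bound_readout_copies, bound_energy)

# Increasing a single resource eventually stops helping: here the error
# plateaus at 1/100 once the read-out pool exceeds the other two resources.
for x_total in [10, 100, 1000, 10000]:
    print(x_total, error_bound(n_indep=100, x_total=x_total, work_over_kT=100))
```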
indeed ,one of the most important conclusions of our analysis is that increasing a single resource ( e.g. ) can not reduce the sensing error indefinitely .the sensing accuracy will eventually plateau , namely when it becomes fundamentally limited by another resource ( e.g. ) .clearly , there is no trade - off among these classes of resources : no amount of one resource can overcome a limiting amount of another , as illustrated in figs .[ fig : efficiency]d , e .the reason is clear : taking a sample requires time and receptors , read - out molecules , and fuel .adding receptors and read - out molecules does not improve sensing if not enough energy is available to take the samples ( fig .[ fig : efficiency]d ) .similarly , waiting more time to take another sample is not beneficial if the cell has no more read - out molecules left to write the sample to , or can not expend energy fast enough to accomplish the writing ( fig . [ fig : efficiency]e ) . the picture that emerges from our analysis is summarized in the lower box of fig .[ fig : eqneqdiag ] .the resource classes time / receptors , downstream readout molecules , and energy , act like weak links in a chain that can not compensate each other in achieving a required sensing precision . within these classes trade - offs are possible : time can be traded against the number of receptors to reach a required number of measurements , while power can be traded against speed to meet the energy requirement for a desired sensing accuracy ., , .the blue dots show the sensing error for different parameter values and the red guideline shows the minimum sensing error ( _ si text _ ) .when the energy per receptor is less than a few , the optimized system employs the equilibrium strategy of sequestration , achieving the bound . if the energy input is higher , it uses the non - equilibrium strategy of catalysis to transmit the signal , achieving the bound .there is an intermediate regime around per receptor in which the network modestly outperforms both full catalysis and full binding by partially utilizing the receptor - read - out state.[fig : eqneqmin],width=321 ] to increase the number of measurements , equilibrium networks must increase the number of receptors .non - equilibrium networks may instead use more downstream readouts and energy to take more measurements with the same receptors over time . which sensing strategy is better ?the strategy adopted by the cell will depend on the relative fitness costs of the different resources .if the resources are of similar costs , then , quantitatively , our analysis predicts that an equilibrium strategy will be adopted if its minimum error , for non - cooperative receptors , is less than that of the non - equilibrium strategy , ( fig .[ fig : eqneqdiag ] ) .for example , when the accuracy of the non - equilibrium strategy is limited by energy , meaning that , the predicted transition between the two strategies occurs when the work per receptor .
to address this , we have considered a network that combines both strategies .the read - out binds the ligand - bound receptor , which can then boot off the read - out in a modified or unmodified state ( _ si text _ ) : , , .this system combines both modes of sensing , because the chemical modification of the readout enables non - equilibrium sensing , while sequestration of the unmodified readout by the receptor upon ligand binding enables equilibrium sensing .optimizing this system over all parameters confirms that when the energy per receptor is less than a few , the optimized system employs the equilibrium strategy of sequestration , while if it is higher it uses the non - equilibrium strategy of catalysis to transmit the signal ( fig .[ fig : eqneqmin ] ) .in addition , the networks that optimize sensing in these two regimes are the networks that we have studied ; a network that combines the two sensing modes does not perform better than the two individually .fundamentally there are only two distinct mechanisms to transmit the information from the receptor to the downstream readout ( fig .[ fig : eqneqdiag ] ) .these two mechanisms , described as equilibrium and non - equilibrium sensing , have different resource requirements ( table [ tab : comparison ] ) . cells face a trade - off with respect to their resources in choosing between these two distinct sensing strategies . in the equilibrium strategy the signal is transmitted from the receptor to the read - out via sequestration reactions , in which binding of an upstream component causes unbinding of a downstream component , or via adaptor proteins , which bind the up- and downstream component simultaneously .both motifs are ubiquitous in cellular signaling .g - protein - coupled receptor ( gpcr ) signaling employs protein sequestration ( ) , while ras signaling uses adaptor proteins like grb2 ( ) .[ tab : comparison ] equilibrium systems do not require fuel turnover .they respond to changes in the environment by harvesting the energy of ligand binding , thereby capitalizing on the work that is performed by the environment to change the ligand concentration .while the response speed is determined by the rate constants , the accuracy of sensing is only limited by their ratio ; there is no trade - off between speed and accuracy . the sensing precision is , however , limited by the number of receptors .this is because the energy of receptor - ligand binding is used to expel or bind the messenger protein , thus coupling receptor - ligand binding to receptor - readout binding .this inevitably leads to correlations between the extrinsic noise in the receptor and the intrinsic noise of the processing network ( ) .these correlations lead to a fundamental trade - off between these sources of noise in equilibrium systems .
in nonequilibrium sensing , fuel turnover allows the receptor to transmit information as a catalyst .this makes it possible to remove the correlations and the concomitant trade - off between extrinsic and intrinsic noise , and reach a sensing precision that is not limited by the number of receptors .arguably the most important signaling motif that relies on fuel turnover is the goldbeter - koshland push - pull network studied here .this motif is used in most , if not all , signal transduction pathways .our analysis reveals why it may be beneficial to use this energy consuming motif : it makes it possible to store the history of the receptor state in stable chemical modification states of downstream molecules . in equilibrium sensing the stability of the downstream signaling proteins relies on physical interactions with the receptor molecules , which means that the state of the readout molecules reflects the instantaneous state of the receptor .in contrast , in non - equilibrium sensing the energy to change the state of the signaling proteins is not provided by the physical interactions with the receptor , but by the chemical fuel .the receptor catalyzes the modification of the read - out . after modification , however , the receptor and read - out become decoupled and each read - out molecule provides a stable memory of the receptor state when it was modified .it is this feature that allows these non - equilibrium systems to take samples of the receptor state over time and perform a discrete time integration .this increases the number of measurements per receptor , making it possible to beat the equilibrium sensing limit set by the number of receptors .taking samples fundamentally requires time so that the samples are independent ; downstream molecules to store the samples ; and energy to store them reliably , to protect the coding .we find that at least is needed for reliable encoding , quantifying a relationship between energy and information .one of the most widely used coding strategies is phosphorylation , which requires atp . _ in vivo _ , atp hydrolysis provides about .this is sufficient to take one receptor sample essentially irreversibly ( fig .[ fig : efficiency]b ) , which means that the quality factor reaches unity .readout phosphorylation thus makes it possible to store the receptor state reliably .non - equilibrium networks can exhibit more complicated features than those of the simple push - pull motif , as in the mapk cascade .the molecular picture for time - integration suggests that our results for the push - pull network hold generally , even in these more complicated systems .indeed , we find the same or more severe resource limitations in cascades and networks with simple negative or positive feedback ( _ si text _ ) .although cascades can increase the response time ( ) , which increases information transfer , they do not make sensing more efficient in terms of energy or readout molecules .one- and two - component signaling networks provide a case study for the trade - off between equilibrium and non - equilibrium sensing .one - component systems consist of adaptor proteins which can bind an upstream ligand and a downstream effector , while two - component systems are similar to the push - pull network studied here , consisting of a kinase ( receptor ) and its substrate .interestingly , some adaptor proteins , like rocr , contain the same ligand - binding domain as the kinase and the same effector - binding domain as the substrate of a two - component system , _i.e.
_ ntrb - ntrc ( ) .it has been suggested that one - component systems have evolved into two - component systems to facilitate transfer of signals from the membrane to the nucleus ( ) .however , equilibrium networks can also transmit signals across space ( table [ tab : comparison ] ) .our results thus suggest that these one - component systems are really alternative , equilibrium solutions to the problem of signal transduction , selected because of different resource selection pressures . which resource sets the fundamental limit for non - equilibrium sensing ?although it has often been assumed that time / receptors are limiting ( ) , our results , in contrast , show how the accuracy of sensing can instead be limited by energy or downstream copy numbers .interestingly , experiments suggest that some key networks are not time / receptor limited . cheong _ et al . _ have measured the information transmission of several important networks , and have shown that all transmit about 1 bit of information , or less ( ) .this amount is far less than the networks would transmit if they were time / receptor limited ( see _ si text _ ) .this suggests that another resource , such as copy numbers of signaling components or energy , limits sensing . in such scenarios , characterizing the response time of the network is less important for understanding sensing than characterizing protein expression levels and their energy usage .it seems natural to expect that the resources which are limiting sensing are those that affect cell growth or fitness , while the resources that are in excess and thus wasted are those that do not significantly affect cell growth or fitness .this prediction could be tested experimentally , for example by studying the growth and chemotactic performance of bacterial populations with different expression levels of functional and non - functional signaling proteins ( ) . to the extent that all resources affect growth ,evolutionary pressure should tend to drive systems so that no resource is wasted , which occurs when all are equally limiting .resource - optimal systems sample the receptor about once per correlation time and use just enough fuel and downstream molecules to do so .quantitatively , all resources are equally limiting when , in an optimal sensing system , the number of independent concentration measurements equals the number of readout molecules that store these measurements and equals the work ( in units of ) to create the samples . in two - component signaling systems , including that of bacterial chemotaxis , the downstream component is typically in excess of the receptor ( ) .for the _ e. coli _ chemotaxis system , ( ) .[ eq : opt_sys ] thus predicts that .this prediction can be tested , assuming that the correlation time of the receptor - chea complex is that of receptor - ligand binding . in _e. coli _ , the lifetime of the active ( phosphorylated ) readout , cheyp , is ( ) , which means that , since about a third of the total amount of chey is phosphorylated in steady - state .[ eq : opt_sys ] thus predicts that .
to test this prediction, we estimate from the receptor - ligand dissociation rate as , ( ) .the dissociation constant of tar - aspartate ( receptor - ligand ) binding ( ) and with an association rate ( ) , this yields and an estimated correlation time , in line with the prediction of eq .[ eq : opt_sys ] .[ eq : opt_sys ] also predicts that the fundamental resources should vary proportionally to each other across different networks .for example , the relation predicts that the lifetime of the modified state of a readout molecule should increase , _ ceteris paribus _ , with its expression level .two - component systems can provide a large data set for testing these predictions once kinetic data and protein expression levels for many of them become available ( ) .our results are also important for synthetic biology , which uses two - component signaling networks as a building block ( ) .the design principles instruct how such networks should be constructed at the molecular level to minimize resource consumption . in turn ,synthetic networks may provide a platform for testing key predictions .a major question in cell signaling is to what extent the design of signaling pathways is shaped by the same limits that apply to other sorts of machines , and to what extent they face unique limitations because they are constructed out of molecular networks .the process of sampling a time series , like the receptor state over time , defines a specific , familiar computation that could be conducted by any machine ; it is instantiated in the biochemical system by the readout - receptor pair .we find that the free - energy drops across the `` measurement '' and `` erasure '' steps , and , should be identical to minimize the energetic cost , even though the fuel molecule need only be involved in one of the reactions , preparing a non - equilibrium state that relaxes via the other .this allocation of energy differs from that typically considered in the computational literature , in which only the erasure step requires energy ( ) . in the cellular system both steps are computational erasures : though only the `` erasure '' step erases memory of the receptor state , both steps erase the state of the molecule involved in the collision .interestingly , when , the average work to measure the state of the receptor is , which is perhaps surprisingly close to the landauer bound , ( ) .from eq . 1 in the main text ,the sensing error for a biochemical network depends on the gain and the variance of the readout molecule . for all networks studied in this paper ,we have calculated the gains using a mean - field approximation for the steady state level of the readout , which is exact for linear networks ( the base model of the main text and the base model plus deactivation ) . except where otherwise noted ,we have calculated all variances using a linear - noise approximation ( ) , which is , again , exact for the linear networks .
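as a hedged illustration ( not taken from the original study ) , the sketch below shows how the stationary covariance of a linear - noise approximation can be obtained from a lyapunov equation of the form a c + c a^t + d = 0 , with d = s diag(f) s^t built from the stoichiometric matrix and the propensity vector , for a toy one - species push - pull read - out ; all rate constants and copy numbers are arbitrary illustrative values .

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy one-species linear-noise calculation: read-out activation x -> x* driven
# by a fixed number RL of ligand-bound receptors, and deactivation x* -> x.
# Rate constants and copy numbers are arbitrary illustrative values.
kf, kr = 0.1, 1.0        # activation (per bound receptor) and deactivation rate
RL, XT = 5.0, 100.0      # bound receptors and total read-out copies

# Mean-field steady state of the active read-out x*
xs = kf * RL * XT / (kf * RL + kr)

# Stoichiometric matrix S (one species x*; reactions: activation, deactivation)
S = np.array([[1.0, -1.0]])
# Reaction propensities evaluated at the steady state
f = np.array([kf * RL * (XT - xs), kr * xs])

# Jacobian A of the deterministic drift and diffusion matrix D = S diag(f) S^T
A = np.array([[-(kf * RL + kr)]])
D = S @ np.diag(f) @ S.T

# Stationary covariance C solves the Lyapunov equation A C + C A^T + D = 0
C = solve_continuous_lyapunov(A, -D)

p = kf * RL / (kf * RL + kr)
print(C[0, 0], XT * p * (1 - p))   # LNA variance equals the binomial value XT*p*(1-p) here
```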
for nonlinear networks ,the quality of the approximation improves with system size ; it can already be quite good for systems with only 10 copies of each molecule ( ) .for the base model and the base model with deactivation , we have used tools of discrete stochastic processes to independently calculate the error by viewing the signaling network as a system that samples the receptor state ( see _ si text _ ) the linear - noise approximation gives the covariance matrix for stationary fluctuations in species levels as the solution to the lyapunov equation : where and in terms of the stoichiometric matrix and the reaction propensity vector .the stoichiometric matrix describes how many molecules of each species are consumed or produced in each reaction , and the propensity vector describes the propensity ( rate ) of each reaction . for a network out of steady state ( the base model ) ,a non - stationary version must be used ( ) .the langevin approximation to the dynamics of a biochemical network draws on the same framework as the linear - noise approximation ( ) .it expresses the fluctuations in species copy numbers as : where is a vector containing the copy numbers of all species and are gaussian noises , uncorrelated in time , with covariance . and are the matrices defined in the section `` calculating the sensing error for a biochemical network . ''the equation can be solved ( e.g. by integrating factors ; ) , yielding the result in the main text , eq .3 , for the biochemical network considered there .we thank tom shimizu , andrew mugler and thomas ouldridge for a critical reading of the manuscript . this work is part of the research programme of the foundation for fundamental research on matter ( fom ) , which is part of the netherlands organisation for scientific research ( nwo ) .in the main text , we considered a simple equilibrium system in which the read - out binds the unbound receptor : , . here , we show that the sensing error for this network is limited by the number of receptors on the surface of the cell , as stated in the main text .calculating the variance as described in the main text ( or directly via the linear - noise approximation ) yields for the sensing error : where we have written the result ( following the lagrange multiplier approach from , for example , ) in a form that makes it easy to show that the error is bounded by the number of receptors ; a direct expression in terms of the rate constants is quite complicated . indeed , minimizing the expression over , , , and such that all are positive and and shows the result in the main text , that the error is always greater than ( eq . 2 of the main text ) .first we show that cooperative binding of the ligand to the receptors can achieve the fundamental equilibrium bound .one way in which receptors can cooperatively bind ligand is when the receptors are in clusters .consider clusters , each containing receptors that cooperatively bind ligand molecules , .the number of bound clusters , , is binomially distributed , giving variance , where is the probability a cluster is bound .the fluctuation - dissipation theorem gives the gain as , since each cluster binds ligand molecules . the sensing error is then ( eq . 
1 in the main text ): .when all the receptors are in a single cluster ( ) , this can be as low as , achieving the fundamental bound .equilibrium systems without positive cooperativity at the level of the receptors can not achieve this bound , at least under the linear - noise approximation .we prove this in the general case that multiple different receptor species , , can bind the ligand , possibly with different affinities but not cooperatively .the fluctuation - dissipation theorem guarantees that the best readout is the total number of bound receptors , , since that is the variable conjugate to the chemical potential of the ligand . in general , the variance is just the sum of the variances of the species , plus corrections for the correlations between the species : where the inequality follows from the lack of ( positive ) cooperativity .( negative cooperativity can emerge naturally in equilibrium networks due to competition of downstream molecules for binding to the receptors . ) for an equilibrium system , the variance of a species is always less than the mean level of that species , at least under the linear - noise approximation , so : thus : combining this bound with the general bound for all equilibrium systems , , yields the result in the main text : .when is large , systems without cooperativity perform worse than the fundamental bound by about .these arguments show that the absolute bound for equilibrium systems , , can only be achieved in systems which cooperatively bind the ligand or in which multiple ligand bound to a single receptor cooperatively activate the receptor . without cooperativity ,the bound is given by .we consider an arbitrary equilibrium biochemical network in which receptors bind ligand and the cell uses a read - out to sense the environment .we denote the copy numbers of the species in the system by the vector .the copy numbers of , , and the read - out are elements of this vector , along with any other species in the network . since only the receptor binds the ligand , the distribution for species copy numbers in the equilibrium system with species is given in general by ( ) , where is the ( canonical ) partition function in terms of the molecular partition functions . the grand canonical partition function , , normalizes the distribution by summing the numerator over all possible states consistent with the stoichiometric constraints : from this distribution , it is clear that for any read - out , so that forms a markov chain .that is , the chemical potential of the ligand affects the read - out only via the instantaneous state of the receptors .the data processing inequality then leads to the conclusion in the main text , .the information the number of bound receptors , , has about the chemical potential of the ligand is easily bounded , since one of the few restrictions we have imposed on the equilibrium system is that the number of receptors is finite ( less than ) . for any random variables and , where is the entropy of a random variable .furthermore , the maximum entropy distribution on a bounded support is the uniform distribution and the entropy of a discrete uniform distribution is where is the number of possible states for the variable .thus , . the extensions to these proofs when multiple types of receptors bind the ligand or when each receptor molecule binds multiple ligand molecules are straightforward .then , the quantity is replaced in the proofs above by the total number of ligand molecules that can be bound to receptors at any time , . 
if multiple types of receptors can bind ligand , is just the total number of receptors of any type .if each receptor molecule binds more than one ligand molecule , is just the total number of receptors times the number of ligand molecules each receptor can bind .in this section we show how the sensing error of the biochemical network can be calculated by viewing the network as a discrete sampling process .the important quantities in a sampling protocol are the number of samples taken and the spacing between them , in addition to the properties of the sampled signal . by viewing the biochemical process as a sampling process , we mean that the underlying parameters of the biochemical network affect the sensing error only insofar as they affect these quantities , or the stochasticity in these quantities . the benefit of viewing the network as a sampling process is that the number of samples and the spacing between them have intuitive , and well - known , effects on the sensing error : the more samples , the lower the error ; the further apart the samples are , the more independent they are .perhaps less well known are the effects on the sensing error of stochasticity in the number of samples or the spacing between samples ; these effects emerge in the process of determining the error for a discrete sampling protocol , which we do below .we consider the biochemical networks described by the base model in the main text and the base model plus deactivation ( the push - pull network ) .for the base model , we identified the molecules that had collided with the receptors as samples , since these molecules' states reflect the receptor states at the times of their collisions with the receptor .for the base model with deactivation , we identified the molecules that collided with the receptor more recently than with the phosphatase as samples .when we refer to the number of samples , we mean the number of these molecules ; when we refer to the times of the samples , we mean the times at which these molecules collided with the receptor .we begin by rewriting the equation for the sensing error in a form that makes the connection to discrete sampling explicit , eq . [ eq : noiseadd ] below .the cell senses its environment through the level of its readout .however , this is no different from estimating the ligand concentration from : since is a constant , independent of .note that the gain is .we first consider the effect of the stochasticity in the total number of samples , .the law of total variance allows us to decompose the variance in the estimate into terms arising from different sources : \text{var } ( \hat{p } ) = e \left [ \text{var } ( \hat{p } | n ) \right ] + \text{var } \left [ e ( \hat{p } | n ) \right ] the first term of eq .[ eq : noiseaddv1 ] reflects the mean of the variance in given the number of samples ; the second term reflects the variance of the mean of given the number of samples .the mean and variance of given the number of samples are more familiar quantities than their unconditioned counterparts , as we see below . since , by definition , the samples reflect the state of the receptor at the times of their collisions with the receptor , we can write the number of at the final time as : where denotes the value of the i sample the state of the receptor involved in the i collision at the time of that collision , 1 if bound to ligand , 0 otherwise .
in the following ,we consider a single receptor , and .the results generalize to multiple receptors .we can then rewrite eq .[ eq : noiseaddv1 ] : \\ & + & \text{var } \left [ \frac{n}{\bar{n } } e \left ( \frac{\sum_{i=1}^n n(t_i)}{n } \middle\vert n \right ) \right ] \nonumber\end{aligned}\ ] ] the equation is a bit complicated , but what is important is that it fully specifies the sensing error in terms of the number of samples , the spacings between them , and the stochasticity in these quantities .that is , this equation shows that the sensing error is the error of a sampling process. we can use it to calculate the sensing error independently from , for example , the master equation or the linear - noise approximation .the first term describes the error of a very standard sampling process , one with a fixed number of samples .we recognize the variance as the error of a statistical sampling protocol in which exactly samples are taken at random times .this is shown explicitly in the section `` error of discrete sampling protocols with a fixed number of samples . '' in that section , it is shown that the error for such a sampling protocol is : where is the fraction of the samples that are independent , as given by eq . 7 in the main text .then the first term in eq .[ eq : noiseadd ] is just : = & e & \left [ \frac{n^2}{\bar{n}^2 } p ( 1-p ) \frac{1}{f_i n } \right ] \nonumber \\ & = & p ( 1-p ) \frac{1}{f_i \bar{n}}\end{aligned}\ ] ] that is , the first term in eq .[ eq : noiseadd ] is the error of a discrete sampling protocol with exactly samples , as stated in the main text .the only effect of the expectation in the first term is to swap for .dividing by the squared gain ( see eq .[ eq : fromp ] ) , , gives the first term in eq .6 in the main text .we now turn to the second term in eq .[ eq : noiseadd ] . from the law of total variance, this term describes how stochasticity in the number of samples , , contributes to the sensing error . because the number of samples is poisson with mean and variance equal to : & = & \text{var } \left [ \frac{n}{\bar{n } } p \right ] \\ & = & \frac{p^2}{\bar{n}^2 } \text{var } \left [ n \right ] \\ & = & \frac{p^2}{\bar{n}}\end{aligned}\ ] ] where the probability a receptor is bound is =p$ ] .dividing by the squared gain gives the second term in eq .6 in the main text .thus , we have derived eq . 6 in the main text as the result of a discrete sampling protocol .the derivations leading to eq .[ eq : noiseadd ] show that the sampling error for the sampling protocol must be the same as the sensing error for the biochemical network . to check this, we can calculate the sensing error for the biochemical network , eq . 6 in the main text , in a more standard way , determining the gain and the variance of the output and using eq . 1 in the main text .we do this for the base model with deactivation ; results for the base model follow similarly .the mean level of is just .the variance in can be calculated using standard methods ( e.g. the linear - noise approximation ; see section `` calculating the sensing error for a biochemical network '' ) : the gain is : assembling the results , eq .6 in the main text follows , just as it did from the sampling protocol .the second term in eq . 6 in the main text emerges in the derivations above as a consequence of the stochasticity in the number of samples . 
however , it is more fundamentally a consequence of the fact that the cell does not distinguish between samples of the unbound receptor from blank samples that do not represent a receptor state i.e. it does not distinguish molecules that collided with the unbound receptor from those that never collided with the receptor in any state .a more standard sampling procedure would distinguish between these , and so would estimate as , not , as above . as we show below, this procedure gives rise to only the first term of eq .6 in the main text , allowing us to interpret the second term as the price the cell pays for not distinguishing readout molecules that collide with the unbound receptor from those that have never collided with the receptor in any state .one way to arrive at this conclusion is to imagine that all collisions with the receptor lead to modifications of . yet , while the ligand - bound receptor modifies into state , the unbound receptor modifies into another state .hence , in addition to the reaction we consider the reaction . then , .analogously to eq . 1 in the main text, we can then estimate the variance of by expanding to first order : where the gains are : the variance is then : where the last term accounts for the covariance .the variances can be calculated in many ways since the system is linear .for example , they can be calculated exactly via the linear - noise approximation .the result is the first term of eq . 6 in the main text , as claimed .indeed , there is no second term for the model described here .this is precisely because with this scheme the number of samples is known .while in the scheme of the main text ( see fig .2 ) , the system can not discriminate between the molecules that have collided with an unbound receptor and the molecules that have not collided with the receptor at all , in this scheme the system knows exactly how many collisions there have been with the receptor : + .in this section , we derive the first term of eqs .[ eq : noiseaddv1 ] and [ eq : noiseadd ] , corresponding to eq . 6 in the main text , as the error of a discrete sampling protocol with a fixed number of samples taken of receptor states over time .the average receptor occupancy is estimated as : where is the state of the receptor involved in the sample at the time of that sample , 1 if the receptor was bound at time and 0 otherwise . in what follows, we consider a single receptor , and .the results generalize to multiple receptors .the times of the samples represent the times at which the molecules that store the samples of the receptor collided with the receptor .therefore , we choose the distribution of times between the samples to match the distribution of times between those collisions , which depends on the particular network under consideration , described below .we count time backwards from the present time , . 
the number of samples and the distribution of times at which they were taken specifies a sampling protocol , independent of the chemical implementation .the variance in the estimate of receptor occupancy is : \end{aligned}\ ] ] since is fixed , where is the variance of the instantaneous occupancy of a single receptor ._ base model _ : we first consider a statistical sampling protocol that matches the distribution of receptor - collision times of samples in the base model .the collisions occur at random times in the interval [ 0,t ] , so we model randomly placed samples .the time between a randomly chosen pair of uniformly distributed samples , not necessarily consecutive , is distributed as : changing variables from and to , we have .the expectation of the covariance is then : = \sigma^2 \int e^{-\delta/\tau_c } p(\widetilde \delta ) d \widetilde \delta\ ] ] assembling the equations above yields the first term in eq . 6 in the main text with ( ) , where we have simplified the result with the standard assumption that and ( it does not make sense to discuss the spacing between a single sample ) ._ deactivation _ : to take into account deactivation , we consider sampling times which match the distribution of the receptor collisions of only those molecules storing samples .we thus have to take into account that some of the samples that have been taken are thrown away due to the deactivation process .we begin with an alternative expression for the expected covariance : = \sigma^2 \int \int e^{-|t_j - t_i|/\tau_c } p(t_i , t_j ) dt_i dt_j\ ] ] to match the biochemical network , the sample times of two samples must be independent from each other , since the collisions of different molecules with the receptor and phosphatase are uncoupled .therefore , .the marginal probability is the probability that the collision time with the receptor of a given molecule storing a sample was , i.e. .this can be written in terms of , the probability that there was a collision with the receptor at the time times the probability that , given a collision at that time , the associated molecule did not subsequently collide with the phosphatase : then : since is uniform . assembling results : = \sigma^2 \int \int e^{-|t_j - t_i|/\tau_c } \frac{e^{-t_i/\tau_{\ell}}}{\tau_{\ell } } \frac{e^{-t_j/\tau_{\ell}}}{\tau_{\ell } } dt_i dt_j\ ] ] it is instructive to change variables , defining , as before . then : = \int_0^{\infty } e^{-\widetilde \delta/\tau_c } \frac{e^{-\widetilde \delta/\tau_{\ell}}}{\tau_{\ell } } d \widetilde \delta\ ] ] from this expression we can identify as the distribution of times between two randomly chosen ( not necessarily consecutive ) samples , when molecules can decay .simulations confirm this distribution .completing the integral and using it in the expression for the sensing error gives the first term in eq .6 in the main text for the effective spacing ( here , ) .we have made the simplifying assumptions that ( it does not make sense to talk of the spacing between just one sample ) and , a standard assumption .the effective spacing is not the mean nearest - neighbor spacing , but it is qualitatively similar and serves to summarize the fact that samples taken further apart in time are more independent . clearly , from eq .[ eq : withdistr ] , the error depends on the distribution of all - pairs spacings , not necessarily nearest - neighbor spacings , and it depends on the full distribution , not just the mean . 
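the following monte carlo sketch ( an illustration added here , with arbitrary parameter values , and not part of the original derivation ) checks the uniform - sampling result derived above : it simulates a two - state receptor with occupancy p and correlation time tau_c , estimates p from n samples taken at uniformly random times in the observation window , and compares the empirical variance of the estimate with the double - sum expression var = sigma^2/n + ( ( n-1)/n ) sigma^2 e [ exp(-delta/tau_c ) ] , with sigma^2 = p(1-p ) .

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state receptor with stationary occupancy p and correlation time tau_c;
# parameter values are arbitrary.
p, tau_c = 0.3, 1.0
T, N = 20.0, 50                  # observation window and number of samples
n_trials = 5000

k_on, k_off = p / tau_c, (1.0 - p) / tau_c   # switching rates of the telegraph process

def sample_receptor(times):
    """Receptor state (0/1) at the requested times, for one realization."""
    times = np.sort(times)
    t, state = 0.0, rng.random() < p         # start from the stationary distribution
    out = np.empty(len(times))
    i = 0
    while i < len(times):
        dwell = rng.exponential(1.0 / (k_off if state else k_on))
        while i < len(times) and times[i] < t + dwell:
            out[i] = state                   # record samples taken before the next switch
            i += 1
        t += dwell
        state = not state
    return out

# Empirical variance of the estimator p_hat = mean of N uniformly placed samples
p_hats = [sample_receptor(rng.uniform(0.0, T, N)).mean() for _ in range(n_trials)]
var_empirical = np.var(p_hats)

# Double-sum prediction: sigma^2/N + (N-1)/N * sigma^2 * E[exp(-|ti-tj|/tau_c)],
# with the expectation over pairs of uniformly distributed sample times.
sigma2 = p * (1.0 - p)
ti, tj = rng.uniform(0.0, T, 200000), rng.uniform(0.0, T, 200000)
var_predicted = sigma2 / N + (N - 1) / N * sigma2 * np.mean(np.exp(-np.abs(ti - tj) / tau_c))

print(var_empirical, var_predicted)   # the two agree within sampling error
```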
finally , we reiterate that we can perform an independent check on the derivation in this section by computing the sensing error using the linear - noise approximation , which is exact for this linear network .as mentioned , this gives exactly the same result . in fig .3d of the main text , we show how the sensing error depends on the pair of resources ( readout copy number , energy ) .these results were obtained via numerical minimization of eq .6 subject to constraints on and . in fig .3e of the main text , we show how the sensing error depends on the pair of resources ( time / receptor copy number , energy ) .the plot for ( time / receptor copy number , readout copy number ) is the same . in this section ,we describe the derivation of the results shown in this figure . in order to consider not necessarily large , we need to use a form of the berg - purcell bound that is valid for short integration times ( ) : which identifies as a limiting resource , rather than the result of the main text , , which only holds in the limit .to elucidate how the sensing error depends on ( time / receptor copy number , energy ) and ( time / receptor copy number , readout copy number ) , we calculate the minimum sensing error by optimizing over all parameters while fixing and either or , respectively . for a fixed and a fixed work , the minimum sensing error is : the equation for the dependence of the sensing error on ( time / receptor copy number , readout copy number ) is the same , with replaced by .the minimum is plotted in fig .the minimum tracks the worst bound , again showing that the resources do not compensate each other .additional constraints on the values of rate constants will generally prevent the network from achieving these bounds .in particular , it is common to consider that the binding of ligand to receptor is diffusion - limited , so that the bound is never achieved .of course , additional constraints can not improve the performance of the network beyond the bounds required here , nor can they alter the fact that all the resources are needed for sensing .networks are often more complicated than a simple one - level push - pull cascade . we investigate some common motifs to understand whether they relax the trade - offs faced by sensory networks . _ multi - level cascades _ : often the signaling molecule activated by the receptor is not taken as the final read - out ; rather that molecule catalyzes the activation of another molecule , and so on in a signaling cascade .all of the molecules are reversibly degraded . using the same approach as for the one - level cascade , we find that the sensing error is bounded by the work done driving just the last step of the cascade : , where is the product of the flux of the last molecule through its cycle and the free - energy drop across that cycle , and is the slowest relaxation time in the cascade ( i.e. the reciprocal of the largest eigenvalue of the relaxation matrix . ) even more work is done at other levels of the cascade .the results suggest that cascades do not enable more energy efficient sensing .additionally , each sample of an active state ( bound receptor or active molecule upstream ) still requires a molecule to store it . _ positive and negative feedback _ : a simple model of positive feedback is autocatalysis , in which the receptor - catalyzed activation of the read - out is enhanced by the activated form of the read - out , : .a simple model of negative feedback can be implemented by requiring inactive for the activation : .
in both cases , degrades according to . neither positive feedback nor negative feedback changes the energetic requirements for sensing : . as before , the free - energy drops across the reactions were calculated as the ratio of mass - action propensities ._ cooperative activation of the read - out _ : if the catalytic activation of the read - out is mediated cooperatively by the receptors ( i.e. ) , then the error is reduced by a factor for the same amount of energy .one way to interpret the result is that each sample requires the same amount of energy as before , but the samples are individually more informative because they reflect ligand bindings , instead of one ; indeed , the instantaneous error is lower .to understand how energy shapes the design of a network , we modify the push - pull network so that the read - out actually binds the ligand - bound receptor , which can boot the read - out off in a modified state : , .the active read - out decays , as before : .the reaction coarse - grains the reactions and ; explicitly adding these reactions gives the same results because they essentially can always be integrated out .this network interpolates between the equilibrium and non - equilibrium networks considered in the main text . choosing the rate constants of the booting and decay reactions to be 0, the network reduces to the sequestration network studied in the equilibrium section .choosing the rate constants so that the read - out is rarely bound to the receptor , the network reduces to the push - pull network studied in the non - equilibrium section .no resources are coarse - grained in these reductions , though the latter breaks the retroactivity of receptor - read - out binding : energy is required to break reversibility , not retroactivity .we focus on the relationship between the number of receptors ( the equilibrium resource ) and the work ( a non - equilibrium resource ) , as the network shifts from binding to catalysis .the work is defined as , as in the main text , where the relaxation time is chosen as the negative reciprocal of the smallest eigenvalue of the regression matrix of the network . from a scaling argument and dimensional analysis, the relationship between these resources must take the form : for some function independent of any parameters .we probe this function numerically ( fig .4 ) . the figure shows results from 2.5 million explicit parameter evaluations and from about 25,000 numerical minimization trials .minimization trials were constrained steepest descent minimizations , randomly initialized for logarithmically distributed rate constants between and . to promote uniform sampling of the space , we minimized estimation error subject to constraints on the work ; we minimized work subject to constraints on the estimation error ; and we minimized the product of the work and the estimation error subject to constraints on either .we also continued the best solutions over variations in the constraints to probe the global minima .as seen in the figure , when the work per receptor is less than about , the equilibrium scheme of binding is optimal , recovering the equilibrium bound for the sensing error , ( eq . 2 in the main text with ) .when the work per receptor is greater than about , the non - equilibrium scheme of catalysis is optimal , recovering the bound from the main text , .
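purely as an illustration of the random - restart constrained minimization procedure described above ( not the actual objective used in the study , whose expression is not reproduced here ) , a sketch of such a harness could look as follows ; the functions sensing_error and work are hypothetical stand - ins , and slsqp is used as a generic constrained local optimizer in place of constrained steepest descent .

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the (not reproduced) sensing-error and work
# expressions of the combined binding/catalysis network; only the
# random-restart constrained-optimization harness is illustrated.
def sensing_error(log_k):
    k = np.exp(log_k)
    return (k[0] * k[1] + k[2]) / (k[0] * k[1])      # placeholder objective

def work(log_k):
    k = np.exp(log_k)
    return k[0] / k[3]                               # placeholder work per receptor

w_budget = 10.0                                      # constraint: work <= w_budget
n_restarts, n_params = 200, 4
best = None

for _ in range(n_restarts):
    # rate constants initialized log-uniformly over several decades
    x0 = rng.uniform(np.log(1e-3), np.log(1e3), n_params)
    res = minimize(
        sensing_error, x0, method="SLSQP",           # generic constrained local optimizer
        constraints=[{"type": "ineq", "fun": lambda x: w_budget - work(x)}],
    )
    if res.success and (best is None or res.fun < best.fun):
        best = res

print(best.fun, np.exp(best.x))
```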
roughly , it only makes sense to use the nonequilibrium catalysis scheme if the energy budget is sufficient to take more than one sample per receptor ( per sample of the bound receptor ) , since the equilibrium scheme can take one sample of the bound receptor without any energy . around 1 is an intermediate regime in which the network outperforms both these regimes by partially utilizing the bound receptor - read - out state . in the main text , we argue that the tnf network could transmit much more than one bit if it were time / receptor limited . here , we describe how we arrived at that conclusion .even if the integration time of the network were zero and the network did not integrate the receptor state , it would still be able to transmit the information in the instantaneous receptor state .the information about the ligand concentration , , in the instantaneous receptor occupancy , , is given by : to arrive at this result , we calculated the information transfer of a biochemical system that takes the receptor occupancy , and not a downstream readout , as the final output .we assumed simple ligand - binding kinetics , , and assumed that ligand binding is not affected by any downstream processes .more complicated kinetics ( e.g. cooperativity ) would likely increase the instantaneous information transfer .the result assumes that the ligand - binding kinetics are optimized with respect to the distribution of input concentrations of the ligand ; i.e. the information transfer calculated is the channel capacity of the network .the channel capacity is the appropriate quantity to consider , because it is the experimentally reported quantity in the paper by ( ) .we followed the method in to calculate the channel capacity .tnf signaling utilizes receptors on the cell surface ( ) , corresponding to bits of information .if the network integrates the receptor state , the information could be even higher .the fact that the actual information transfer is instead much less than 5 bits suggests that receptors / time do not limit the accuracy of sensing , but rather another resource , such as copy numbers of signaling components or energy .the following paragraphs address various nuances to the above argument .first , note that restrictions on the probability distribution of inputs can prevent the system from achieving the channel capacity .this is true both for our bound and for the calculated information transmission through the entire network in the paper by cheong _ et al . _ . one biologically relevant restriction on the probability distribution of inputs is the support of the distribution , particularly the maximum biologically relevant concentration of the ligand ; if achieving the channel capacity requires input distributions with large probability for concentrations that are much higher than those biologically observed , then the channel capacity is not really a relevant measure for the capacity of the network .important in this context is that the dissociation constant for tnf binding is 0.323 ng / ml ( ) , about the same as the half - saturation for the tnf response as measured by cheong _ et al._.
so achieving the channel capacity at the level of the receptors does not require higher concentrations than achieving the channel capacity of the whole network .this means that , while restrictions on the maximum input concentration would prevent the system from achieving the channel capacity of 5.5 bits at the level of the receptors , they would also prevent the system from achieving the channel capacity of 1 bit at the level of the output , maintaining the discrepancy .the above arguments assume that a ) the principal role of the signaling network is to time integrate the receptor , and b ) that this improves information transmission if energy and the copy numbers of the signaling molecules are not limiting , and the network is hence not too noisy . however , signaling systems with enough fuel and signaling molecules that time - integrate the receptor do not necessarily increase information transmission .they can also reduce information transmission by collapsing many input states onto the same output state .this can happen when the input - output relation is ( strongly ) non - linear .however , the experimental data in the paper by cheong _ et al . _ suggest that this is not the case for the tnf network , as the response increases mono - modally and gradually with the input .in fact , the output is also noisy .indeed , the authors attribute the loss of information transmission to biochemical noise , which , according to our analysis , could be due to limiting amounts of readout molecules or energy .a final note is that while the above arguments show that the number of receptors and time are not limiting and suggest that downstream molecules or energy are limiting , it can not be ruled out that other sources of noise , which we have not modeled , are instead limiting . for example , the sensing precision could be limited by cell - to - cell variability in the copy numbers of signaling molecules ( expression or capacity noise ( ) ) .these could even involve variations in the number of receptors themselves .however , back - of - the - envelope calculations suggest that such variations are not enough to explain the discrepancy above . moreover , many biological systems , including some two - component systems , are insulated against fluctuations in protein expression ( ) , supporting the idea that in these cases energy or protein copy numbers are indeed limiting the accuracy of sensing .
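as a concrete illustration of the kind of channel - capacity estimate invoked earlier in this section ( a sketch under stated assumptions , not the calculation of the original reference ) , the snippet below computes , via the blahut - arimoto algorithm , the capacity of the channel from ligand concentration to the instantaneous occupancy of r independent receptors with simple binding p = c/(c+k_d ) ; the values of r , k_d and the concentration grid are placeholders .

```python
import numpy as np
from scipy.stats import binom

# Channel from ligand concentration c to the instantaneous occupancy n of R
# independent receptors with simple binding, p = c / (c + Kd).  R, Kd and the
# concentration grid are placeholder values.
R, Kd = 100, 0.3
c_grid = np.logspace(-2, 2, 200) * Kd
p_bound = c_grid / (c_grid + Kd)

n_vals = np.arange(R + 1)
P_n_given_c = binom.pmf(n_vals[None, :], R, p_bound[:, None])   # shape (inputs, outputs)

def blahut_arimoto(P_y_x, tol=1e-10, max_iter=10000):
    """Channel capacity (in bits) of a discrete channel P(y|x) via Blahut-Arimoto."""
    nx = P_y_x.shape[0]
    q = np.full(nx, 1.0 / nx)                      # input distribution, updated iteratively
    for _ in range(max_iter):
        p_y = q @ P_y_x                            # marginal output distribution
        mask = P_y_x > 0
        ratio = np.ones_like(P_y_x)
        ratio[mask] = P_y_x[mask] / np.broadcast_to(p_y, P_y_x.shape)[mask]
        D = np.sum(P_y_x * np.log(ratio), axis=1)  # KL( P(y|x) || p_y ) for each input
        q_new = q * np.exp(D)
        q_new /= q_new.sum()
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    return np.sum(q * D) / np.log(2.0)

print("capacity of the instantaneous-occupancy channel (bits):",
      blahut_arimoto(P_n_given_c))
```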
|
living cells deploy many resources to sense their environments , including receptors , downstream signaling molecules , time and fuel . however , it is not known which resources fundamentally limit the precision of sensing , like weak links in a chain , and which can compensate each other , leading to trade - offs between them . we show by modeling that in equilibrium systems the precision is limited by the number of receptors ; the downstream network can never increase precision . this limit arises from a trade - off between the removal of extrinsic noise in the receptor and intrinsic noise in the downstream network . non - equilibrium systems can lift this trade - off by storing the receptor state over time in chemical modification states of downstream molecules . as we quantify for a push - pull network , this requires i ) time and receptors ; ii ) downstream molecules ; iii ) energy ( fuel turnover ) to drive modification . these three resource classes can not compensate each other , and it is the limiting class which sets the fundamental sensing limit . within each class , trade - offs are possible . energy allows a power - speed trade - off , while time can be traded against receptors . biochemical networks are the information processing devices of life . like any device , they require resources to be built and run . components are needed to construct the network , space is required to accommodate the components , time is needed to process the information , and energy is required to make the components and operate the network . these resources constrain the design and performance of any biochemical network . yet , it is not clear which resources are indispensable , thus fundamentally limiting the performance of the network , and which resources might trade - off against each other . here we consider the interplay among cellular resources , network design , and performance in a canonical biochemical function , namely sensing the environment . living cells can measure chemical concentrations with extraordinary precision ( ) , raising the question what sets the fundamental limit to the accuracy of chemical sensing ( ) . cells measure chemical concentrations via receptors on their surface . these measurements are inevitably corrupted by noise that arises from the stochastic arrival of ligand molecules by diffusion and from the stochastic binding of the ligand to the receptor . berg and purcell pointed out that the sensing error is fundamentally bounded by this noise extrinsic to the cell , but that cells can reduce the error by taking multiple independent measurements , mitigating the risk that any one is corrupted by a noisy fluctuation ( ) . one way to increase the number of measurements is to add more receptors to the surface ( ) . another is to take more measurements per receptor over time ; in this approach , the cell infers the concentration not from the instantaneous number of ligand - bound receptors but rather from the time - average receptor occupancy over an integration time ( ) . this time integration has to be performed by the signaling networks that transmit the information from the surface of the cell to its interior ( ) . to reach the fundamental limit on the accuracy of sensing , these networks have to remove the extrinsic noise in the receptor state as much as possible . signaling networks , however , are also stochastic in nature , which means that they will also add noise to the transmitted signal . 
most studies on the accuracy of sensing have ignored this intrinsic noise of the signaling network . they essentially assume that the intrinsic noise can be made arbitrarily small and that the extrinsic noise in the input signal can be filtered with arbitrary precision by simply integrating the receptor signal for longer . yet , the extrinsic and intrinsic noise are not generally independent ( ) . indeed , what resources are required to simultaneously remove the extrinsic and intrinsic noise is not understood . while the work of berg and purcell and subsequent studies identify time and the number of receptors as resources that limit the accuracy of sensing , the fundamental limits that have emerged ignore the cost of making and operating the signaling network . making proteins is costly ; producing proteins that confer no benefit to the cell has been shown to slow down bacterial growth ( ) . they also take up valuable space that might be used for other important processes , either on the membrane or inside the cytoplasm . both are highly crowded , with proteins occupying of the membrane area ( ) and of the cytoplasmic volume ( ) . moreover , many signaling networks must be driven out of thermodynamic equilibrium by the continuous turnover of fuel molecules such as atp , leading to the dissipation of heat . fuel is essential for network functions such as bistability , oscillations , and kinetic proofreading ( ) , and can be important for adaptation ( ) . however , whether there exists a fundamental relationship between energy and sensing , independent of the design of the signaling network inside the cell , remains unclear ( ) . in this manuscript we derive how the accuracy of sensing depends on not only time and the number of receptors , but also on the resources required to build and operate the downstream signaling network : the copies of signaling molecules and fuel . this allows us to address the following questions : how do the sensing limits set by the latter resources compare to the canonical limit of berg and purcell , which is set by the resources time and the number of receptors ? how does the limit set by one resource depend on the levels of the other resources ? can resources compensate each other to achieve a desired sensing precision , leading to trade - offs between them , or are the limits set by the respective resources fundamental , _ i.e. _ independent of the levels of the other resources ? and how do the limits depend on the design of the signaling network ? the relationship between the accuracy of sensing , the design of the network , and the resources required to build and operate it time , energy and protein copies underlies the design principles of biochemical sensing systems . we first study the relationship between sensing precision , network design , and resources , for systems that are not driven out of thermodynamic equilibrium , consuming no fuel . we find that these equilibrium networks can time - integrate the receptor signal to remove the extrinsic noise in it , analogous to the mechanism described by berg and purcell . clearly , fuel is not a fundamental resource for sensing or removing extrinsic noise . however , using the fluctuation - dissipation theorem , we will show that equilibrium networks face a fundamental trade - off between the removal of extrinsic noise in the receptor state and the suppression of intrinsic noise in the processing network : decreasing one source of noise necessarily increases the other . 
as a result , the accuracy of sensing is fundamentally limited by the number of receptors ; in equilibrium networks , adding downstream components can never improve the sensing precision . to improve the sensing accuracy beyond the limit set by the number of receptors , it is essential to break the trade - off between extrinsic and intrinsic noise . as we show , this requires a fundamentally different sensing mechanism . instead of using the receptors to harvest the energy of ligand binding , as in the equilibrium sensing mode , the receptors should be used as catalysts to modify downstream read - out molecules . this non - equilibrium strategy , however , uses not only receptors but requires also time , copies of downstream read - out molecules , and fuel turnover . we quantify the limits that arise from each of the resources copies of receptors and downstream molecules , time , and fuel for a canonical signaling motif , a receptor that drives a goldbeter - koshland push - pull network ( ) . push - pull networks are ubiquitous in prokaryotic and eukaryotic cell signaling ( ) : examples include gtpase cycles , as in the ras system ( ) , and phosphorylation cycles , as in mitogen - activated - protein - kinase ( mapk ) cascades ( ) or in two - component systems like the chemotaxis system of _ escherichia coli _ ( ) . we find that the resource limitations of these systems emerge naturally when the signaling networks are viewed as devices that discretely , rather than continuously , sample the receptor state via collisions of the signaling molecules with the receptor proteins . this analysis reveals that three classes of resources are required : i ) time and receptors ; ii ) copies of downstream molecules ; and iii ) fuel . indeed , these classes can not compensate each other : each imposes a sensing limit , and it is the limiting class that imposes the fundamental limit on the accuracy of sensing . however , there can be trade - offs within each class of resources . receptors and time trade off against each other in achieving a desired sensing accuracy and power and response time trade off against each other to meet the energy requirement for taking a measurement . we end by discussing how our findings apply to specific signaling systems and how our results on push - pull networks generalize to more complex networks involving cascades , and positive and negative feedback . in particular , our analysis leads to a concrete prediction for the optimal resource allocation , which we test for the _ e. coli _ chemotaxis system .
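The receptor-and-time trade-off invoked above can be illustrated with a small numerical sketch. The two-state receptor model, the rate values, and the Berg-Purcell-style scaling used for comparison are illustrative assumptions, not quantities taken from this paper; the only point is that the fractional error of the occupancy estimate falls with both the number of receptors and the integration time measured in receptor correlation times.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state (bound/unbound) receptor with mean occupancy p and
# correlation time tau_c; all parameter values are placeholders.
p, tau_c, dt = 0.5, 1.0, 0.05
k_on, k_off = p / tau_c, (1.0 - p) / tau_c

def estimate_occupancy(n_receptors, t_int):
    """Time-averaged occupancy of n_receptors independent receptors over t_int."""
    n_steps = max(int(t_int / dt), 1)
    bound = rng.random(n_receptors) < p          # start from the steady state
    acc = np.zeros(n_receptors)
    for _ in range(n_steps):
        turn_on = (~bound) & (rng.random(n_receptors) < k_on * dt)
        turn_off = bound & (rng.random(n_receptors) < k_off * dt)
        bound = (bound | turn_on) & ~turn_off
        acc += bound
    return acc.mean() / n_steps

def fractional_error(n_receptors, t_int, trials=300):
    estimates = [estimate_occupancy(n_receptors, t_int) for _ in range(trials)]
    return np.std(estimates) / p

for n_r, t_int in [(10, dt), (10, 10.0), (100, dt), (100, 10.0)]:
    # rough Berg-Purcell-type scaling: error^2 ~ (1-p) / (p * N_R * max(T / 2 tau_c, 1))
    scaling = np.sqrt((1 - p) / (p * n_r * max(t_int / (2 * tau_c), 1.0)))
    print(f"N_R={n_r:4d}  T={t_int:6.2f}  simulated={fractional_error(n_r, t_int):.3f}"
          f"  scaling~{scaling:.3f}")
```

Increasing either the receptor number or the integration time lowers the error in this toy model; what the main text argues is which downstream resources are needed for a real network to actually realize the time-integration part of this gain.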
|
the evolution of internet has long been a heat subject since the dawn of complex network theory , due to its rich data , wide application and nontrivial properties .while models based on either stochastic process or optimal strategy are continually proposed , an urgent question to be addressed is that which of them is really applicable or is validated to describe the actual evolution of internet .this question concerns not only our understanding on the process of internet evolution , but also the possibility of our further goal of control and prediction of this large - scale system .most popular models of internet base on the mechanism of preferential attachment(pa ) , which describes that the probability of a node to capture links is proportional to its current degree .it is considered to be essential for producing the power - law degree distribution . while some evidences suggest pa is unapplicable for route - level internet , other empirical studies based on mean - field approach support its validation for as level ( autonomous system level ) and other types of networks .another approach to model internet follows its statistical law instead of the detailed descriptions as pa .the representative case is gibrat law which has been introduced as the candidate of internet model to characterize the dynamics of the constant appearance and disappearance of links and nodes .the traditional gibrat law assumes that the growth rate of a variable such as population , the number of messages sent by a person or the degree of a node has an independent identically distributed(i.i.d ) structure so that both its mean and standard deviation are independent of the initial value of the variable .although this assumption seems rejected by a variety of recent empirical studies , it succeeds in reproducing the exact power - law exponent of the degree distribution of internet .while the validation of both model is still controversial , a more serious problem is that there exists an inconsistency even between themselves . as is indicated in ref and will be specified in section ii in the present paper , the conditional standard deviation of degree growth rate of pa decays with initial degree as a power law of exponent , which contradicts with gibrat assumption .this raises the question that which model is more appropriate for describing the evolution of internet not only at a mean - field level but also on a fluctuation aspect .unfortunately previous empirical studies based on mean - field method can not distinguish pa and gibrat law since both them cause the similar proportionate effect .while the fluctuation property may uncover some important nature of internet , it has been rarely empirically studied .the main purpose of the present paper is to determine the actual fluctuation property of internet topology and the scope of the validation of the two models , which is significant both theoretically and practically .the paper is organized as follow . in sectionii we show the inconsistency between pa and gibrat law by deriving the relation between the standard deviation of degree growth rate and initial degree . 
in sectioniii we empirically study the fluctuation of internet topology for three different time scale(daily , monthly and yearly ) .we find that the fluctuation of internet experience a crossover transition from pa model to gibrat s law with the increase of the observed period .we determine the validated period for both pa and gibrat s law respectively and discuss the possible cause of the emergence of gibrat law . in section iv we draw the conclusion .the proportionate effect described by gibrat law can be formalized by the following random multiplicative process : k_i(t),\ ] ] where and are the degree of node at time and , and is a random process .the degree growth rate is defined as more generally , if we observe the system by interval , the growth rate is given by the basic assumptions of gibrat law are that is independent of its initial degree and uncorrelated in time .the two assumptions indicate that the fluctuation property of degree growth , characterized by the standard deviation of conditional to initial degree follows on the other hand the fluctuation of degree growth of pa behaves differently .the pa rule describes that the probability of a new link to connect to a node relates to nothing else but the node s current degree , which is given by in other words the creation of links are uncorrelated with each other and the evolution of degree is a memoryless markov process . by mean - field method , the evolution of the degree of a node is , where is usually a function related to the growth pattern of network size .solving the equation we have where is the birth time of the node and .now let us denote random variable as the number of new links connecting to a node at time .its i.i.d structure indicates that it follows the binomial distribution , whose variance is proportional to . considering , we have the degree increment of the node from to is .according to the definition of the growth rate , we have reminding that the creation of links are uncorrelated in time , the conditional variance of is substituting eq(5)(7 ) to eq(9 ) and replacing with , we finally derive the fluctuation property of degree growth rate for pa note that eq(10 ) is valid for other events such as rewiring and link deletion as long as they do not break the memoryless property .eq(4 ) and eq(10 ) indicate a basic contradiction between pa and gibrat law even though both of them are reported to be validated at mean - field level .our question is which model is closer to the reality and what is the real fluctuation property of internet on earth . on the other hand pa and gibrat law share common stationary property in the sense that both the scaling properties of eq(4 ) and eq(10 ) are independent of the observed period .as will be presented in the next section , neither of the models can totally characterize the real fluctuation but is validated for two different periods . with the increase of the periods ,the fluctuation pattern changes gradually , which contrasts to the stationary property of both models .in this section we will empirically study the fluctuation of degree growth rate of internet and determine the periods , for which pa and gibrat law are validated respectively .in addition we will briefly discuss the origin of the emergence of gibrat law .our empirical data come from the oregon route views project .they include snapshots of three different time scales , i.e. 
daily( days : ) , monthly( months : ) and yearly( years : ) .the original data are collected in the form of border gateway protocol routing tables , from which an internet graph can be constructed . as usual, each node represents a specific as while each edge is the logical link between the inter - connected ases , so that we obtain a network of size of o( ) nodes and of an almost constant average degree about .the topological properties that we measured are stationary for all the three time scale and are consistent with previous empirical studies .the degree distribution is power law as with exponent .we also check the dynamics of the preferential attachment as done in ref .we find for all the three time scale , the linear pa is always valid .the fluctuation property can be calculated by where represents the average taken for the same and the observed period can be one year , one month or one day in the present paper . in fig .[ fig1](a ) , we plot the conditional mean of for different periods .all of them are around constant zero , which is independent of the initial degree .however the conditional deviation of the three periods display different behaviors , as is shown in fig .[ fig1](b ) fig .[ fig1](d ) . for daily fluctuation, decays as power law with exponent about , which coincides with the prediction of pa rule . for monthly fluctuation ,the small - degree region of becomes flat while the rest of region remains unchanged . with the increase of ,the flat area extends gradually and gibrat law becomes dominated for a large region of for yearly fluctuation .these results indicate a crossover transition from pa to gibrat phase , which clearly rejects the stationarity of the fluctuation .therefore neither pa nor gibrat law can characterize the overall fluctuation property of internet .they validate only for a specific period . for short period pa matches while for long period gibrat law takes over .note that our finding is different from those of human dynamics and firm growth , where a single universal scaling law is reported for the whole conditional deviation . and stay around constant .( b)the conditional standard deviation for daily data .it decreases with as power law of exponents , as predicted by pa .( c)the conditional deviation for monthly data .the small - degree region of becomes flat compared to daily data , while the rest of region remains unchanged .( d)the conditional deviation for yearly data .gibrat law dominates for a large range of .all the data are logarithmic binned and are plotted on a log - log scale .red lines represent fitted results.,width=336 ] to better understand the scope of the application of the two classical models , we need to determine their validated periods . for pa, we find the corresponding is no more than several - day magnitude and we can affirm that pa is always valid for as is indicated in fig .[ fig1](b ) . 
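Before turning to the time scale on which Gibrat's law takes over, we note that the conditional-deviation measurement used in fig.[fig1] is easy to reproduce on synthetic data. The sketch below is not the Route Views analysis: it grows degrees by (i) a memoryless linear preferential-attachment rule and (ii) an i.i.d. multiplicative (Gibrat-type) update, with purely illustrative sizes, and then computes the standard deviation of the growth rate in logarithmic bins of the initial degree; the fitted slope comes out close to -1/2 in the first case and close to 0 in the second, mirroring the two limiting behaviours discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes = 5000

def grow_pa(deg, n_links, batch=1000):
    """Memoryless linear preferential attachment: each new link endpoint lands on a
    node with probability proportional to its current degree."""
    deg = deg.copy()
    for _ in range(n_links // batch):
        deg += rng.multinomial(batch, deg / deg.sum()).astype(float)
    return deg

# Warm-up to build a broad degree distribution, then one observation window.
k_start = grow_pa(np.ones(n_nodes), 200_000)
k_end = grow_pa(k_start, 50_000)
r_pa = np.log(k_end / k_start)

# Gibrat-type control: growth rate drawn independently of the initial degree.
k_start_g = rng.pareto(1.5, n_nodes) + 1.0
r_gibrat = rng.normal(0.0, 0.3, n_nodes)

def conditional_sigma(k0, r, n_bins=12, min_count=30):
    """Standard deviation of the growth rate r in logarithmic bins of the initial degree."""
    edges = np.logspace(np.log10(k0.min()), np.log10(k0.max()), n_bins + 1)
    pts = [(np.sqrt(lo * hi), r[(k0 >= lo) & (k0 < hi)].std())
           for lo, hi in zip(edges[:-1], edges[1:])
           if ((k0 >= lo) & (k0 < hi)).sum() >= min_count]
    return np.array(pts)

for label, k0, r in [("PA", k_start, r_pa), ("Gibrat", k_start_g, r_gibrat)]:
    pts = conditional_sigma(k0, r)
    slope = np.polyfit(np.log(pts[:, 0]), np.log(pts[:, 1]), 1)[0]
    print(f"{label:6s}: slope of log sigma(r|k0) vs log k0 = {slope:+.2f}")
```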
for gibrat law, corresponds to when the correlation coefficient of and is zero .therefore we study the relation between the correlation coefficient and the observed period by using monthly data .specifically , for a particular we calculate the correlation coefficients for all and average them so that we consider characterize the general correlation coefficient of and for the observed interval .we calculate eq(12 ) for and present its absolute value in fig .[ fig2 ] .we find that despite large deviation for the results of (not plotted ) , the main body of displays a linear decay which is fitted as then let , we can evaluate .therefore the gibrat law is expected to be totally valid for at least -year period .note that of gibrat law estimated by using yearly data gives the similar result even though the quality of the fitting is poorer due to much smaller length of both and . and versus observed period .it follows approximately a linear decrease , which is fitted by the red dashed line as .,width=240 ] , as is indicated by the black dashed line .the result contrasts to that of the original yearly data but is consistent with that of pa and daily data .inset : the empirical result of the proportionate effect before(blue circle ) and after(red triangle ) the reshuffling operation .the statistical analysis is based on mean - field treatment as was done in previous studies .the black solid line is of slope , which is a guide for eyes.,width=240 ] after reshuffling the external links appears no significant difference from that of the original yearly data .however after reshuffling the internal links decays with power - law exponent about .the dotted line is horizontal while the dashed one is the fit line.,width=254 ] the crossover transition indicates that there are some underlying mechanism that give rise to the emergence of gibrat law .reminding that memoryless and independent creation of links can only cause a power - law decay with an exponent of of , gibrat law with constant conditional standard deviation probably indicates the existence of strong correlation in the evolution of internet .indeed studies on population growth and human communication dynamics demonstrated that correlation could lower the related power - law exponent .this speculation can be confirmed by reshuffling the creation of links for yearly data . in specific , we change randomly the order of the creation of links while maintain the topology of first year( ) and last year( ) .we first check whether the reshuffling operation changes the basic evolution mechanism , i.e. the proportionate effect .surprisingly , the proportionate effect maintains as before(inset of fig .[ fig3 ] ) , but the fluctuation pattern of the degree growth rate changes from a constant value to a power - law decay of exponent , which is exactly the behavior of pa and daily data(fig .[ fig3 ] ) .this is a direct evidence for the existence of the correlation and its contribution to gibrat law . indeedthe reshuffling process does not change pa at mean - field level at all but only destroys any possible correlation between the creation of links .therefore we draw the conclusion that correlation is the essential ingredient responsible for the emergence of gibrat law .the crossover transition thus indicates that such a correlation occurs first at small - degree nodes and spreads to large - degree nodes gradually . 
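Operationally, the reshuffling test just described amounts to permuting the creation times of the links while keeping their endpoints, so that the first and last snapshots (and hence the mean-field proportionate effect) are untouched while any temporal correlation in the order of link creation is destroyed. A hedged sketch of the procedure on hypothetical timestamped link data (not the Route Views tables) follows; with such uncorrelated synthetic input the reshuffling changes nothing, which is exactly what makes it a useful control.

```python
import numpy as np

rng = np.random.default_rng(2)

def degree_before(t_cut, link_times, link_nodes, n_nodes):
    """Degree of every node counting only links created before t_cut."""
    ends = link_nodes[link_times < t_cut].ravel()
    return np.bincount(ends, minlength=n_nodes).astype(float)

def growth_rates(link_times, link_nodes, n_nodes, snapshots):
    """Pairs (k0, r) of initial degree and log growth rate over successive snapshots."""
    k0_all, r_all = [], []
    for t0, t1 in zip(snapshots[:-1], snapshots[1:]):
        k0 = degree_before(t0, link_times, link_nodes, n_nodes)
        k1 = degree_before(t1, link_times, link_nodes, n_nodes)
        keep = k0 > 0
        k0_all.append(k0[keep])
        r_all.append(np.log(k1[keep] / k0[keep]))
    return np.concatenate(k0_all), np.concatenate(r_all)

# Hypothetical input: endpoints and creation times of each link over six "years".
n_nodes, n_links = 2000, 30000
link_nodes = rng.integers(0, n_nodes, size=(n_links, 2))
link_times = rng.uniform(0.0, 6.0, n_links)
snapshots = np.arange(1.0, 7.0)

k0, r = growth_rates(link_times, link_nodes, n_nodes, snapshots)

# Reshuffle: permute creation times only; endpoints, and therefore the initial and
# final topologies, stay exactly as they were.
k0_s, r_s = growth_rates(rng.permutation(link_times), link_nodes, n_nodes, snapshots)

print("std of r, original  :", r.std().round(3))
print("std of r, reshuffled:", r_s.std().round(3))
```

Binning the pairs (k0, r) logarithmically, as in the previous sketch, then gives the conditional deviation before and after reshuffling, which is the comparison made in fig.[fig3].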
to further identify the origin of gibrat law, we separate the external links(links created between new - coming node and old existing node ) from internal links(links created between existing old nodes ) and reshuffle one while maintain the other . as shown in fig .[ fig4 ] , reshuffling the external links has little effect on the fluctuation pattern . on the other hand reshuffling the internal linkscauses a clear power law decay of the conditional standard deviation with exponents about .therefore we conclude that the major part of the correlation comes from the internal links , which has more contribution to the emergence of gibrat law .we have shown the inconsistency between pa and gibrat law and determine their scope of application to internet . by analyzing the conditional standard deviation of the degree growth rate ,we find that the actual fluctuation of internet exhibits a crossover transition from pa to gibrat law with the increase of the observed period .we have determined that the scope of the validation is about several - day magnitude of period for pa while -year of period for gibrat law .we briefly study the origin of the emergence of gibrat law and find it most related to the correlation between the internal links .there has been an argument that whether the construction of internet is governed by the randomness of self - organized nature or highly designed order of engineered nature .although self - organized system does not rule out the possibility of correlation , the strong correlation found in the evolution of internet consists with the designed order of the engineered intuition .the present empirical results indicate that purely random description based on mean - field approach , which ignores correlation , might match short - term(daily ) internet fluctuation , but is very insufficient to characterize the long - term(yearly ) evolution .the crossover paradigm of the dynamical fluctuation provides a test that any future model should pass .therefore the consideration of memory effect as well as how such effect works is critical for a complete internet model theory .s. h. yook , h. jeong , a. l. barabsi , proc .usa * 99 * , 13382 ( 2002 ) .john c. doyle , d. l. alderson , l. li , s. low , m. roughan , s. shalunov , r. tanaka , w. willinger , proc .natl . acad .usa * 102 * , 14497 ( 2005 ) .r.pastor-satorras , a.vespignani , phys .lett . * 87 * , 258701 ( 2001 ) .j. park , m. e. j. newman , phys .e * 68 * , 026112 ( 2003 ) .l. dallasta , i. alvarez - hamelin , a. barrat , a. vzquez , a. vespignani , phys .e , * 71 * , 036135 ( 2005 ) .s. zhou , phys .e , * 74 * , 016124 ( 2006 ) .r. albert , h. jeong , and a. l. barabasi , nature , * 406 * , 378 ( 2000 ) .k. i. goh , b. kahng , d. kim , phys .lett , * 88 * , 108701 ( 2002 ) .serrano , m. bogun , and a. daz - guilera , phys .lett , * 94 * , 038701 ( 2005 ) .a. l. barabsi , r. albert , science , * 286 * , 509 ( 1999 ) .m. mitzenmacher , internet mathematics , * 2 * , 525 ( 2005 ) .m. mitzenmacher , internet mathematics , * 1 * , 226 ( 2004 ) .m. e. j. newman , siam review , * 45 * , 167 ( 2003 ) .d. d. han , j. h. qian and y. g. ma , europhys .lett * 94 * , 28006 ( 2011 ) .j. h. qian , d. d. han , physica a * 388 * , 4248 ( 2009 ) . s. n. dorogovtsev and j. f. f. mendes , phys .e , * 62 * , 1842 ( 2000 ) .g.bianconi , a. l. barabsi , europhys .* 54 * , 436 ( 2001 ) .s. n. dorogovtsev and j. f. f. mendes , a. n. samukhin , phys .lett , * 85 * , 4633 ( 2000 ) . h. jeong , z. neda , a. l. 
barabsi , europhysics letters , * 61 * , 567 ( 2003 ) . m. e. j. newman , phys . rev .e , * 64 * , 025102(r ) ( 2001 ) .a. capocci , v. d. p. servedio , f. colaiori , l. s. buriol , d. donato , s. leonardi , g. caldarelli , physical review e * 74 * , 036116 ( 2006 ) gautreau , a. barrat , m. barthlemy , proc .usa , * 106 * , 8847 ( 2009 ) .b. a. huberman , l. a. adamic , nature * 401 * , 131 ( 1999 ) .r. gibrat , les ingalits conomiques , recueil sirey , 1931 .m. h. r. stanely , l. a. n. amaral , s. v. buldyrev , s. havlin , h. leschhorn , p. maass , m. a. sallnger , h. e. stanely nature , * 379 * , 804 ( 1996 ) . h. d. rozenfeld , d. rybski , j. s. andrade , jr . , m. batty , h. e. stanley , h. a. makse , proc .usa * 105 * , 18702 ( 2008 ) .v. plerou , l. a. n amaral , p. gopikrishnan , m. meyer h. e. stanley , nature , * 400 * , 433 ( 1999 ) .m. riccaboni , f. pammolli , s. v. buldyrevd , l. pontac , h. e. stanley , proc .usa * 105 * , 19599 ( 2008 ) .l. a. n. amaral , s .v. buldyrev , s. havlin , m. a. salinger , h.e.stanley , phys .lett , * 80 * , 1385 ( 1998 ) .d. fu , f. pammolli , s. v. buldyrev , m. riccaboni , k. matia , k. yamasaki , h. e. stanley , proc .usa * 102 * , 18801 ( 2005 ) .d. rybski , _et.al._ , proc .usa * 106 * , 12640 ( 2009 ) .route views project , https://www.routeviews.org .as expected the correlation coefficients are all negative .taking the absolute values does not change its trends at all .we choose because larger causes the length of too small to apply statistical analysis .d. rybski , _et.al._ , scientific reports , * 2 * , 560 ( 2012 ) .d. rybski , s. v. buldyrev , s. havlin , f. liljeros , h. a. makse , eur .j. b , * 84 * , 147 ( 2011 ). r. a. bentley , p. ormerod , m. batty , behav ecol sociobiol , * 65 * , 537 ( 2011 )
|
Gibrat's law predicts that the standard deviation of the growth rate of a node's degree is constant. The preferential attachment (PA) rule, on the other hand, implies that this standard deviation decays with the initial degree as a power law. Since both models have been applied to Internet modeling, this inconsistency calls for a test of their validity. We therefore study empirically the fluctuation of the Internet topology over three different time intervals (daily, monthly and yearly). We find a previously unreported crossover transition from the PA model to Gibrat's law: the Gibrat regime appears first in the small-degree region and extends gradually as the observation period increases. We determine the periods over which each model is valid and find that the correlation between internal links contributes strongly to the emergence of Gibrat's law. These findings indicate that neither PA nor Gibrat's law alone is applicable to the actual Internet, which calls for a more complete model.
|
consider a -dimensional nonhomogeneous stochastic differential equation ( sde ) where , is a standard -dimensional wiener process , the drift coefficient is borel measurable and bounded , and the diffusion coefficient is bounded and continuous . inwhat follows we suppose that satisfies the following conditions : 1 .2 . _ uniform ellipticity _ : for each , there exists an ellipticity constant such that for all ] .the authors of based on the malliavin calculus proved that the solution of equation with a bounded measurable drift vector and an identity diffusion matrix belongs to the space for each and any open and bounded .the malliavin calculus is used also in .unfortunately , in these works no representations for the derivatives are given .the one - dimensional case was considered in and explicit expressions for the sobolev derivative were obtained .the formulas involve the local time of the initial process .there are no direct generalizations of these formulas to the multidimensional case because the local time at a point does not exist in the multidimensional situation .the aim of the present paper is to get a natural representation for the derivative of the solution to equation .we assume that satisfies ( c1),(c2 ) , the hlder condition , and for some and all , , the function belongs to the kato - type class , i. e. , we show that the derivative is a solution to the sde where is the -dimensional identity matrix , is a continuous additive functional of the process , which is equal to if is differentiable .this representation is a natural generalization of the expressions for the smooth case .we prove the main result for such that for each and all , is a function of bounded variation on , i.e. , for each the generalized derivative is a signed measure on .besides , we suppose that for all , is of the class , i.e. , where is the variation of ; are measures from the hahn - jordan decomposition .the similar results for a homogeneous sde with an identity diffusion matrix and a drift being a vector function of bounded variation were obtained in . in this casethere is no martingale member in the right - hand side of .this essentially simplifies the proof .the argument is based on the theory of additive functionals of homogeneous markov processes developed by dynkin . in the same method was applied to a homogeneous sde with lvi noise and a drift being a vector function of bounded variation .the existence of a strong solution and the differentiability of the solution with respect to the initial data were proved .unfortunately , the theory by dynkin can not be directly applied to our problem because is not homogeneous .the paper is organized as follows . 
in section [ section_preliminaries ]we collect some facts from the theory of additive functionals of homogeneous markov processes by dynkin .we intend to consider a homogeneous process and adapt dynkin s theory to the functionals of this process .the main result is formulated in section [ section_main_result ] and proved in section [ proof of theorem_main ] .the idea of the proof is to approximate the solution of equation by solutions of equations with smooth coefficients .the key point is the convergence of continuous homogeneous additive functionals of the approximating processes to a functional of the process being the solution to ( lemma [ lemma_converg_w_functionals ] ) .the proof of the corresponding statement uses essentially the result on the convergence of the transition probability densities of the approximating processes , which is obtained in section [ section_conv_densities ] .the method proposed can be considered as a generalization of the local time approach used in the one dimensional case .let be a cdlg homogeneous markov process with a phase space , where -algebra contains all one - point sets ( see notations in ) .assume that has the infinite life - time .denote [ def_w_func ] a random function adapted to the filtration is called a non - negative continuous additive functional of the process if it is * non - negative ; * continuous in ; * homogeneous additive , i.e. , for all where is the shift operator .if additionally for each then is called a w - functional .it follows from definition [ def_w_func ] that a w - functional is non - decreasing in , and for all the function is called the characteristic of a -functional [ remark_triangle_ineq](see , properties 6.15 ) . for all , , where .[ proposition_uniquely_defined ] a w - functional is defined by its characteristic uniquely up to equivalence .the following theorem states the relation between the convergence of w - functionals and the convergence of their characteristics .[ theorem_convergence_characteristics ] let be w - functionals of the process and be their characteristics .suppose that for each , a function satisfies the condition then is the characteristic of a w - functional .moreover , where denotes the convergence in mean square ( for any initial distribution ) .[ proposition_uniform_convergence ] if for any the sequence of non - negative additive functionals of the markov process converges in probability to a continuous functional , then the convergence in probability is uniform , i.e. }|a_{n , t}-a_t|\to 0 , \n\to\infty , \\mbox{in probability}.\ ] ] [ example_general ] let , be a non - negative bounded measurable function on , let the process has a transition probability density .then is a -functional of the process and its characteristic is equal to where let a measure be such that is well defined . if we can choose a sequence of non - negative bounded continuous functions such that for each }\sup_{z\in e}\left|\int_{e}k_t(z , v)h_n(v)dv-\int_{e}k_t(z , v)\nu(dv)\right|=0,\ ] ] then by theorem [ theorem_convergence_characteristics ] there exists a w - functional corresponding to the measure with its characteristic being equal to . 
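Example [example_general] also has a direct numerical counterpart: for an integral functional A_t = int_0^t h(X_s) ds, the characteristic f_t(x) = E_x A_t can be estimated by averaging over simulated paths. The sketch below only illustrates these definitions; the underlying process is taken to be a plain d-dimensional Brownian motion and h a bounded Gaussian bump, neither of which is prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
d, dt, horizon = 2, 1e-3, 1.0

def h(x):
    """A bounded continuous density: a Gaussian bump centred at the origin (illustrative)."""
    return np.exp(-0.5 * np.sum(x * x, axis=-1))

def characteristic(x0, n_paths=2000):
    """Monte Carlo estimate of f_t(x0) = E_x0[ int_0^t h(X_s) ds ] at t = horizon,
    with X a standard d-dimensional Brownian motion started at x0."""
    x = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    a = np.zeros(n_paths)                   # the additive functional along each path
    for _ in range(int(horizon / dt)):
        a += h(x) * dt                      # Riemann-sum approximation of the integral
        x += np.sqrt(dt) * rng.standard_normal((n_paths, d))
    return a.mean()

for x0 in ([0.0, 0.0], [1.0, 0.0], [3.0, 0.0]):
    print(f"f_1({x0}) ~ {characteristic(x0):.4f}")
```

Replacing h by mollified approximations h_n of the density of a measure, and letting n grow, mimics numerically the convergence of characteristics that theorem [theorem_convergence_characteristics] turns into convergence of the corresponding w-functionals.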
given a measure , a sufficient condition for the existence of a corresponding w - functional is as follows .[ theorem_sufficient_condition ] let the condition hold .then is the characteristic of a w - functional .moreover , and the sequence of characteristics of integral functionals converges to in sense of the relation .cosider a process which is a ( unique ) solution to the system of sdes : giving the initial condition , , we denote the corresponding distribution of the process by .the theory of additive functionals can be applied to because it is a homogeneous markov process .let be a non - negative bounded measurable function on .then ( c.f .example [ example_general ] ) is a w - functional of the process .its characteristic is equal to where , is the transition probability density of the process .let a measure on be such that for all , , .if there exists a sequence of non - negative bounded continuous functions such that for each , }\sup_{t_0\in[0,\infty ) , x_0\in{\mathds{r}^d}}\left|\int_{t_0}^{t_0+t } ds\int_{{\mathds{r}^d}}g(t_0,x_0,s , y)h_n(s , y)dy-\right.\\ \left.\int_{t_0}^{t_0+t } \int_{{\mathds{r}^d}}g(t_0,x_0,s , y)\nu(ds , dy)\right|=0,\end{gathered}\ ] ] then by theorem [ theorem_convergence_characteristics ] there exists a w - functional corresponding to the measure with its characteristic being equal to .[ theorem_sufficient_condition_1 ] let the condition hold. then , is the characteristic of a w - functional .moreover , and the sequence of characteristics of integral functionals converges to in sense of the relation .let be a solution of equation ( [ eq_eta ] ) starting from the point and defined on a probability space .let be the distribution of the process , where , . in dynkin s notation ( see ) , ,is called a markov family of random functions .let a measure satisfy the condition of theorem [ theorem_sufficient_condition_1 ] .then there exists a w - functional of the process .according to the definition of w - functionals , the functional is measurable w.r.t .-algebra generated by the process . since the process is continuous and has the infinite life - time , we can consider as a measurable function on that depends only on the behavior of the process on ] .denote by the transition probability density of a wiener process : by analogy with the kato class ( c.f . ) , we introduce the following definition .[ def_kato_type ] a measure on is a measure of the class if taking into account ( [ eq_gaussian_estimates ] ) it is easy to see that satisfies the condition ( [ cond_a ] ) if and only if it is of the class . a signed measure is of the class if the measure is of the class , where is the variation of ; are the measures from the hahn - jordan decomposition .let be a signed measure belonging to the class .then by theorem [ theorem_sufficient_condition ] there exist w - functionals .denote .[ remark_hahn_decomp ] suppose that the signed measure can be represented in the form , where , are of the class but are not necessarily orthogonal .then one can see that . in what followswe will often deal with measures which have densities with respect to the lebesgue measure on . a measurable function on called a function of the class if the signed measure is of the class .[ remark_homogeneous_coef ] let , where is a measure on .then the relation ( [ cond_a_prime ] ) transforms into the following one it was shown ( e.g. 
, theorem 2.1 in ) that satisfies the condition if and only if consider now a measure of the form .similarly to - one can obtain that if for each , satisfies the condition then the measure is of the class .[ remark_measure_finite ] let the measure satisfy one of the conditions -. then it can be verified ( c.f . , lemma 8.3 ) that for each , , there exists such that for all , ] .then by theorem [ theorem_convergence_characteristics ] there exists a functional in particular , note that if , the measure is not of the class .this agrees with the well - known fact that the local time for a multidimensional wiener process does not exist . the following lemma deals with the convergence of w - functionals of , generally speaking , different random functions . [ lemma_converg_functionals ]let be a sequence of homogeneous markov random functions defined on a common probability space with the common phase space , where is a metric space , is the borel -algebra . for ,let be a w - functional of the random function with the characteristic .assume that 1 .for each , is continuous in ; 2 . for each , , , in probability ; 3 . for all , where 4 . for each , , .then for each , }|a_{n , t}(\xi_n)-a_{0,t}(\xi_0)|\to 0 , \n\to\infty , \ \mbox{in probability } \ p.\ ] ] note that is a w - functional of the process .denote its characteristic by .then by , lemma 6.5 , for all , similarly to the proof of , theorem 6.6 , we get so for all , using the calculations of the proof of , theorem 6.6 , once more we obtain the inequalities and give us the relation further , we have = 4[i+ii+iii+iv].\end{gathered}\ ] ] for any , by assumption 3 ) we can choose such that . according to 4 )there exists such that for all , then for all , notice that for each , , .this implies that for any , .taking into account ( [ abcd ] ) , we obtain that for all , and the same estimate holds for . by the hlder inequality , the assumptions 4 )yields the estimate valid for all .similarly , the continuity of the function and assumption 2 ) provide the convergence to as tends to in probability . this convergencetogether with 3 ) allow us to use the dominated convergence theorem and prove that as .then the right - hand side of tends to as tends to .the uniform convergence follows from proposition [ proposition_uniform_convergence ] .this completes the proof .the main result on differentiability with respect to the initial data of a flow generated by equation ( [ eq_main ] ) is given in the following theorem .[ theorem_main ] let measurable bounded function be such that for each and all is a function of bounded variation on , i.e. , for each , the generalized derivative is a signed measure on .assume that the signed measures are of the class .let be a bounded continuous function satisfying ( c1 ) , ( c2 ) , and the following conditions 1 . _ hlder continuity _ : for each , there exist , such that for all ] almost surely .the existence and uniqueness of solution for equation ( [ eq_derivative_main ] ) follows from , ch .v , theorem 7 . indeed , condition ( c4 ) provides that for all , a.s .and consequently is a semimartingale .it is well known that the statement of the theorem is true in the case of smooth coefficients , and the derivative satisfies equation . to prove the theorem in general case we approximate the initial equation by equations with smooth coefficients .the proof is divided into two steps .[ [ subsction_compact_case ] ] in the first step , we assume that there exists such that for all , , , , , . 
for let be a non - negative function such that , and . for all , , , and , put note that for each , where }\sup_{x\in{\mathds{r}^d}}|a(t , x)|.\ ] ] besides , for all , satisfies ( c2 ) , and the ellipticity constant can be chosen uniformly in .[ remark_uniform_gaussian_estimates ] for all the transition probability density of the process satisfies the inequality .it follows from and ( c2 ) , which holds uniformly in , that the constants in can be chosen uniformly in . for each , we have , , in \times{\mathds{r}^d}). ] .consider the sde for each there exists a unique strong solution of equation ( [ eq_main_n ] ) .[ lemma_converg_solutions ] _ for each , _ 1 . for all and any compact set , 2 .for all the first statements follows from the uniform boundedness of the coefficients , the second one is a consequence of , theorem 3.4. for , put , .denote by the matrix of derivatives of in , i.e. , , .then satisfies the equation where is the -dimensional identity matrix . by the properties of convolution of a generalized function ( see , ch .2 , 7 ) , note that for all , , is a bounded measurable function on . then ( see example 1 ) there exists a continuous homogeneous additive functional corresponding to the signed measure .denote . for each , put ( recall that ) . then .it can be easily seen that the measures , , are of the class . by remark [ remark_uniform_gaussian_estimates ] , for each there exist w - functionals , which we will denote by . generally speaking , but , by remark [ remark_hahn_decomp ] , , , , , .[ proposition_moments_a ] for all , there exists a constant such that the statement of lemma follows from lemma [ lemma_exp_moment ] and remark [ remark_uniform_gaussian_estimates ] .[ lemma_converg_derivatives ] for all , , , for all , , define the variation of on ] , and ). ] as ] , the matrix is invertible , and ,\\ z_n^{-1}(0)&=e , \end{aligned } \right.\ ] ] we get it follows from proposition [ prop_gronwall_lemma ] that }|z_n(t)|\leq d^{1/2 } \exp\left\{{\mathop \mathrm{var}}a_{n , t}(\varphi_n(x))\right\}.\ ] ] here we use that . similarly ,}|z_n^{-1}(t)|\leq d^{1/2 } \exp\left\{{\mathop \mathrm{var}}a_{n , t}(\varphi_n(x))\right\}.\ ] ] let us prove that } |z_n(t)-z_0(t)|+\sup_{t\in[0,t ] } |z_n^{-1}(t)-z_0^{-1}(t)|\to 0 , \n\to \infty , \\mbox{in probability } \mathds{p}.\ ] ]we have by proposition [ prop_gronwall_lemma ] , let us apply proposition [ proposition_monot_convergence ] .put , , .taking into account lemma [ proposition_moments_a ] we get that the first summand in the right - hand side of ( [ eq_kkk ] ) tends to as in probability uniformly in ] , , , in we can suppose without lost of generality that , , for each and almost all ] , , , in the measure .notice that the processes , , possess transition probability densities .thus the distributions , , are absolutely continuous w.r.t .the lebesgue measure on and , consequently , w.r.t .the measure . making use of the estimatesit is easily seen that the sequence of densities is uniformly integrable w.r.t .the measure . therefore , all the assumptions of proposition [ proposition_kulik ] are fulfilled , and for almost all ] .recall that we can assume that for all , , and such that . 
denote , , and for put then , uniformly in \times{\mathds{r}^d} ] for all .fix .making use of the hlder inequality and the estimate we have where are positive constants , .it follows from ( [ eq_g_n_to_g_01 ] ) and , ch .ii , lemma 2 that , uniformly on for any .the relation ( [ eq_g_n_to_g_01 ] ) gives also that , , uniformly on .consider .we have where is a constant , , , \times{\mathds{r}^d})} ] the right - hand side of ( [ eq_l123 ] ) does not exceed .the same estimate for the second summand in the right - hand side of can be obtained similarly . to prove the convergence of the last item in the right - hand side of to zero we note that for each and compact set there exists such that indeed , let be such that .we have that for all $ ] , , and such that , where , , , is a transition probability density of a -dimensional wiener process .for each , fixed , by the relation there exists such that then remark [ remark_triangle_ineq ] implies that there exists such that which entails .now by lemma [ lemma_converg_densities ] and the last summand in the right - hand side of tends to zero uniformly on , .
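The approximation strategy of the proof, namely smoothing the drift by convolution, solving the regularized equation, and tracking the derivative of the flow in the initial point, can also be mimicked numerically. The sketch below is purely illustrative: it uses a hypothetical bounded drift of bounded variation (a sign-type jump applied coordinate-wise) and an identity diffusion, an Euler scheme for the regularized process, and a central finite difference in the initial condition in place of the derivative process; none of these concrete choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
d, dt, horizon = 2, 1e-3, 1.0
n_steps = int(horizon / dt)

def a_raw(x):
    """Hypothetical bounded drift of bounded variation: a sign-type jump per coordinate."""
    return np.where(x > 0.0, -1.0, 1.0)

def mollify(f, eps, n_quad=41):
    """Coordinate-wise convolution of f with a Gaussian kernel of width eps (a_n = a * w_n)."""
    z, w = np.polynomial.hermite_e.hermegauss(n_quad)   # Gauss-Hermite nodes/weights
    w = w / w.sum()
    return lambda x: sum(wi * f(x + eps * zi) for zi, wi in zip(z, w))

def flow(x0, drift, noise):
    """Euler scheme for dX = drift(X) dt + dW with a frozen Wiener path (common random numbers)."""
    x = np.array(x0, dtype=float)
    for k in range(n_steps):
        x = x + drift(x) * dt + noise[k]
    return x

noise = np.sqrt(dt) * rng.standard_normal((n_steps, d))
x0 = np.array([0.2, -0.3])
delta = 1e-4

for eps in (0.5, 0.1, 0.02):
    a_n = mollify(a_raw, eps)
    # finite-difference surrogate for the first column of the flow derivative,
    # evaluated along the same noise path for x0 + delta e_1 and x0 - delta e_1
    xp = flow(x0 + np.array([delta, 0.0]), a_n, noise)
    xm = flow(x0 - np.array([delta, 0.0]), a_n, noise)
    print(f"eps={eps:4.2f}  d phi_T / d x1 ~ {(xp - xm) / (2 * delta)}")
```

As eps shrinks, the regularized drift steepens near the discontinuity and the finite-difference Jacobian reflects the extra contribution that, in the limit, is carried by the additive functional built from the derivative measure of the drift.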
|
We consider a multidimensional SDE with Gaussian noise and a drift vector that is a function of bounded variation. We prove the existence of a generalized derivative of the solution with respect to the initial conditions and represent this derivative as the solution of a linear SDE whose coefficients depend on the initial process. The representation obtained is a natural generalization of the expression for the derivative in the smooth case. The theory of continuous additive functionals is used throughout.
|
multiple - input multiple - output ( mimo ) wireless communications systems have been a focus of academic and industrial research in the last decade due to their potentially higher data rates in comparison with single - input single - output ( siso ) systems .theoretically , the overall channel capacity can be increased linearly with the number of transmit and receive antennas by using spatial multiplexing schemes .current focus on satellite communication ( satcom ) systems recognizes a demand for higher data rates .hence , it appears to be appropriate to apply mimo to satcom systems in order to increase the available data rate and bandwidth efficiency .the quality of service ( qos ) and data rates requirements of satellite communication systems is recently on the increase .hence , the application of multiple input multiple output techniques to satellite communication systems appear to be appropriate in order to achieve increased spectral and bandwidth efficiency .spatial multiplexing and diversity maximization schemes can be deployed to achieve better spectral efficiencies and bit error rates ( ber ) when compared to the classical single satellite single receive station systems . in ,mimo satellite uplinks and downlinks channel that are optimal in terms of achievable data rates were analyzed .the authors showed that capacity optimization is generally possible for regenerative payload designs using line of sight ( los ) channel models .these analysis were extended to a number of mimo satellite communication systems in and the scope was further extended to general case of satellites with transparent communication payloads component .a cluster based channel model was proposed for mimo satellite formation systems in .based on the standardized models for terrestrial multiple input multiple output ( mimo ) systems , the authors proposed a spatial model and analysed the capacity of formation flying satellite systems . in this contribution , we analyse the performance of satellite communication systems with multiple cooperating satellites in geostationary orbit ( geo ) and single or multiple antennas at the ground receiving station .the analysis in this paper is based on three different modelling approaches for land mobile satellite systems .the remaining part of this paper is organized as follows . in sectionii , we present the system model for mimo satellite systems . a review of the propagation channel models considered in the paper is presented in section iii . in section iv, we derive expressions for channel capacity and bit error rates with mpsk modulation scheme .simulation results and discussions are presented in section v. 
finally , we draw conclusion in section vi .in this section , we present the system model for single satellite , multiple receive antenna systems ( ss - mra ) and multiple satellite multiple receive antenna systems ( ms - mra ) .consider the downlink of a land - mobile satellite receive diversity system consisting of a single dual polarized satellite antenna and a mobile receive station with non - polarized antennas .the channel impulse response between the satellite and the mobile receive station can be modelled as an mimo communication channel where is the channel between the -th transmit polarization and the -th receive antenna .the received signal at the mobile receive antennas is given by a matrix representation for the receive signal model in is thus where ^t ] is a vector of transmitted symbols on the two polarizations of the satellite antenna and ^t ] are the received signals , ^t r>>c_o r<<c_o ] denotes the smallest integer greater than or equal to . assuming that the mobile ground receive station uses a zero forcing ( zf ) receiver , the mpsk ber can be obtained by integrating the error probability in over where is the chi - square probability distribution function .it can be shown that a closed form expression for is )}\left[\frac{1}{2}(1-\mu_k)\right]^{u}\nonumber\\ & .\sum_{\ell=1}^{u-1}\begin{pmatrix } u-1+\ell\\ \ell \end{pmatrix}\left[\frac{1}{2}(1+\mu_k)\right]^\ell\end{aligned}\ ] ] where and is given by this section we present simulation results for the capacity and ber of different configurations of mimo satellite systems with the models present in section iii .the simulation parameters for the simulations are shown in table [ tab : simulationpar ] except where otherwise stated ..simulation parameters [ cols="^,^",options="header " , ] [ tab : simulationpar ] the intersatellite spacing for systems with receive antennas is calculated using the equation in figure [ capacity1 ] , we present the capacity ( in bps / hz ) as a function of snr for linear formation multiple satellite system using the cluster based spatial channel model .the number of satellites and receive antenna elements is varied between 1 and 8 . as shown in the figure , increasing the signal to noise ratio ( snr ) increases the channel capacity for all antenna sizes as expected .the capacity also increases with increase in the number of satellites and/or receive station antenna elements .for instance , the capacity difference between a and satellite system at is about 10db .figure [ capacity2 ] present the complementary capacity cumulative distribution function ( ccdf ) for a dual polarized satellite system and a mobile ground receive station with four antenna elements ( corresponding to a mimo system ) at different signal to noise ratio ( snr ) levels .the cdf plots show that the variance of the channel capacity is considerably small for each snr level .the capacity increase with snr can also be clearly observed from fig .[ capacity2 ] . in figure[ capacity3 ] , we compare the capacity for different number of satellites and receive antennas using the loo - distribution based analytical satellite channel model for single and multi - satellite scenarios . 
clearly , the channel capacity also shows an increasing trend with both increase in snr and antenna sizes .we present a plot of the mimo satellite channel capacity versus snr for both single satellite multiple receive antenna ground station ( ss - mra ) and multiple satellites multiple receive antenna ground station ( ms - mra ) using the line of sight ( los ) approximation model in figure [ capacity4 ] .as can be observed from the figure , the channel capacity obtained using the los approximation model shows a similar trend and compare well with the capacity for similar scenarios using the cluster based and analytical channel models . in figure [ capacity5 ]present the complementary capacity cumulative distribution function ( ccdf ) for a dual polarized satellite system and a mobile ground receive station with four antenna elements ( corresponding to a mimo system ) at different signal to noise ratio ( snr ) levels using the line of sight ( los ) approximation model .finally , we plot the bit error rate ( ber ) versus signal to noise ratio ( snr ) for a two - satellite two receive antenna system using the three types of model described in section iii .as shown in the figure , the cluster based model gives lower ber at higher snr .however , no significant difference is observed between the ber curves for the three channel models at low snr region .summarily , the results presented in this section shows that the spectral efficiency of satellite systems can be significantly improved by having multiple satellites and multiple antennas at the ground station .multiple input multiple output dual polarized satellite systems can provide increased spectral efficiency and improved bit error rate ( ber ) compared to the classical single satellite systems . in this paper , we analyzed the capacity and ber of different multiple satellite scenarios using different models .simulation results showed that increasing the number of satellite and/or ground receive station antennas can significantly increase the capacity and decrease the bit error rate .99 p. driessen and g. foschini , _ on the capacity formula for multiple input multiple output wireless channels : a geometric interpretation _ , ieee transactions on communications , vol .2 , pp . 173176 , feb 1999 r. schwarz and a. knopp and d. ogermann and c. hofinann and b. lankl,_optimum - capacity mimo satellite link for fixed and mobile services _ , feb . 2008 , pp .209216 a. knopp and r. schwarz and d. ogermann and c. hofmann , and b. lankl , _ satellite system design examples for maximum mimo spectral efficiency in los channels _ , 30 2008-dec . 4 2008 , pp .r. adeogun _ cluster based channel model and capacity analysis for mimo satellite formation flying communication systems _ , international journal of computer applications ( ijca ) , june 2013 .jinhua lu and k.b letaief and j.c.i chuang and m.l liou , _ m - psk and m - qam ber computation using signal - space concepts _ , ieee trans comm ., vol 47 , pp181 - 184 , feb . 1999 .cheng wang and edward k. s. au and ross d. murch and vincent k. n. lau , _ closed - form outage probability and ber of mimo zero - forcing receiver in the presence of imperfect csi _ , spawc 2006 loo , c .. _ a statistical model for a land mobile satellite links ._ ieee transactions on vehicular technology , vol .34 , no . 3,1985 , pp . 122 - 127 .loo , c. , and butterworth , j. s. _ land mobile satellite measurements and modelling_. ieee proc . ,86(7 ) , 1998 , pp .1442 - 14462 .e. 
telatar , _ capacity of multi - antenna gaussian channels _ , european transactions on telecommunications , vol . 10 , no . 6 , pp . 585595 , nov .- dec .1999 r. t , schwarz and a. knopp and b. lanki , _ the channel capacity of mimo satellite links in fading environment : a probabilistic analysis _ , iwssc 2009
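For readers who wish to reproduce the qualitative trends of the capacity and BER figures above, the following sketch evaluates the two standard quantities used throughout this paper: the ergodic capacity C = log2 det(I + (rho/N_t) H H^H) with equal power allocation, and the average MPSK BER of a zero-forcing receiver obtained by averaging the per-stream post-processing SNR over channel realizations. The Rician (LOS plus scattered) channel generator below is a generic stand-in, not the cluster-based or Loo models of section III, and all parameter values are illustrative.

```python
import numpy as np
from math import erfc, pi, log2, sin, sqrt

rng = np.random.default_rng(5)

def rician_channel(nr, nt, k_factor):
    """Generic Rician flat-fading channel with unit-power entries (illustrative only)."""
    los = np.exp(1j * 2 * pi * rng.random((nr, nt)))                    # LOS phase terms
    nlos = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / sqrt(2)
    return sqrt(k_factor / (k_factor + 1)) * los + sqrt(1 / (k_factor + 1)) * nlos

def ergodic_capacity(nr, nt, snr_db, k_factor=10.0, trials=500):
    rho = 10 ** (snr_db / 10)
    cap = 0.0
    for _ in range(trials):
        h = rician_channel(nr, nt, k_factor)
        cap += np.log2(np.linalg.det(np.eye(nr) + rho / nt * h @ h.conj().T).real)
    return cap / trials

def zf_mpsk_ber(nr, nt, snr_db, m=4, k_factor=10.0, trials=2000):
    rho = 10 ** (snr_db / 10)
    q = lambda x: 0.5 * erfc(x / sqrt(2))
    ber = 0.0
    for _ in range(trials):
        h = rician_channel(nr, nt, k_factor)
        g = np.linalg.inv(h.conj().T @ h)          # ZF post-processing SNR: rho / (nt * g_kk)
        gam = rho / (nt * np.real(np.diag(g)))
        # standard high-SNR MPSK symbol-error approximation, Gray-coded bits
        ser = np.mean([2 * q(sqrt(2 * gk) * sin(pi / m)) for gk in gam])
        ber += ser / log2(m)
    return ber / trials

for snr in (0, 10, 20):
    print(f"SNR={snr:2d} dB  C(2x2)={ergodic_capacity(2, 2, snr):5.2f} bps/Hz"
          f"  QPSK ZF BER(2x2)={zf_mpsk_ber(2, 2, snr):.2e}")
```

With a realistic channel model substituted for the generator above, the same two routines reproduce the increase of capacity with SNR and antenna numbers and the decrease of BER reported in the figures.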
|
In this paper, we investigate the capacity and bit error rate (BER) performance of multiple-input multiple-output (MIMO) satellite systems with single or multiple dual-polarized satellites in geostationary orbit and a mobile ground receiving station with multiple antennas. We evaluate the effects of system parameters, such as the number of satellites, the number of receive antennas and the SNR, as well as environmental factors, including atmospheric signal attenuation and signal phase disturbances, on the overall system performance, using both analytical and spatial models for MIMO satellite channels. Keywords: MIMO, satellite channels, geostationary orbit, capacity, bit error rate.
|
asms have played an important role in the development of x - ray astronomy . a large number of transients have been discovered and a large number of bright persistent sources have been monitored with them .their role as watchdog to alert pointing instruments has been most prominent .previous all sky monitors suffer from several common shortcomings .first , they are only sensitive to x - rays above 2 kev .second , they have relatively small effective areas or very small time - averaged effective areas .third , they do not have any focusing capability . combining all these factors , their measurement limit is , at best , of the order of several mcrab in several hours or in one day . in this paperwe further develop a concept originally put forth by schmitt in 1975 . with advances in both x - ray optics and detector technology in the past two decades ,this concept now is feasible .we will present simulations to show the salient features of such an asm .it can either fly as a small free - flyer or as one of instruments on a large satellite as previous asms .table [ zand.table ] summarizes the most important characteristics of past asms .the last row shows the expected capabilities of blosm .it represents a significant step in improving the capability and sensitivity of x - ray asm .the large field of view ( fov ) optic proposed by schmitt in 1975 is focusing in one dimension ( see peele _ et al . _ 1997 for an illustration of this optic ) by flat reflectors mounted along the radial directions of a cylinder . with one module, it focuses a point source in its fov to a line on the focal surface . in our conceptual design here , we will have two modules situated in perpendicular orientations so that the two coordinates of a source can be obtained simultaneously . as shown in figure [ lobster.ps ] , the instrument consists of two modules .module 1 is a cylinder 100 cm in diameter at its outer edge and 50 cm in height .its focal length is 25 cm .module 2 is 83% ( i.e. , 300 out of 360 degrees ) of a cylinder 70 cm ( i.e. , ) in diameter and 70 cm in height .its focal length is 17.5 cm .module 2 sits about 50 cm away on top of module 1 .the 50 cm distance is to ensure that the vanes at the lower side of module 2 are not blocked by module 1 . for the purpose of discussion in this paper , without loss of generality , we will adopt a fake equatorial coordinate system in which the -axis points to direction ( , and -axis , and -axis , and the sun is in position ( , ) , as shown in figure [ lobster.ps ] .the cylindrical surface of module 1 is defined by : and , and module 2 by : and where all numbers are in cm .the entire satellite spins around the direction to the sun , which , in the specific coordinate system , corresponds to the -axis . the cylindrical position - sensitive detectors need only to measure position along the circumferential direction .every time interval , each model produces a map similar to figure [ mod01_1s.ps ] .for the purpose of illustration , we need to define two angular variables which correspond to the position of a source on the focal surfaces . for module 1 ,the angle measures from the positive -axis with -axis at , i.e. , is the azimuthal angle of module 1 . for module 2 ,the angle measures from the -axis with the -axis at , i.e. , is the azimuthal angle of module 2 .these two variables are fixed with respect to the modules . 
at time for a source at , it has the following angles in the two modules : which should be interpreted as and .for example , two sources with identical , but different , will have one image in module 1 , but two separate images in module 2 .a new source and its two coordinates can be easily identified this way .the rotation of the satellite can be taken into account by replacing in the above equations with , where is the angular velocity of the satellite , and time .in the ideal case where reflectors are infinetly thin and they are mounted perfectly along their local radial directions , the angular resolution of the system is the same as the angle between two adjacent reflectors .figure [ psf.ps ] compares the ideal situation and a realistic situation where the reflector thickness is 0.05 cm . in both casesthe angular spacing and other dimensions are the same . assuming that the mirrors are coated with ni with perfect smoothness and the length of each reflector along the radial direction is 3.3 cm ( see peele _et al . _1997 ) for module 1 , figure [ aeff.ps ] shows the effective area of module 1 as a function of energy for a source perpendicular to the -axis .module 2 has exactly the same effective area for a source perpendicular to the -axis . though its radius is smaller by a factor of , it is longer by the same factor .there are three sources of background : ( 1 ) detector internal ( non - xray ) background , ( 2 ) background contribution from sources in the same annulus of the sky that share the same spot on the focal surface , and ( 3 ) the diffuse x - ray background . for the systemas outlined in this paper , the detector internal background is much lower than the other two , thus we will ignore it for now .the background contributions of other sources , equivalent to source confusion , though complicated , should not be a problem for relatively bright sources .for the purpose of this paper , we will only consider the diffuse x - ray background .the diffuse x - ray background flux can be parameterized ( priedhorsky _ et al . _1996 ) as where is in and in .the result of simulations with this flux is shown in figure [ bkgnd.ps ] .we have been systematically investigating different materials to characterize their suitability as reflectors .we have studied mylar , kapton , and plexiglas and have found that those materials , even though they can be made very thin and strong , do not satisfy flatness requirements . we have found that thin glass ( 0.05 cm ) which is commercially produced for flat - panel lcd displays has the desired flatness .we have optically tested a few pieces and found that , even though not all of them satisfy the flatness requirements , a significant fraction of them have flatness better that one minute of arc .we are continuing this investigation to identify the best material .for the purpose of this paper , we will assume that we use glass reflectors with a thickness of 0.05 cm .the angular spacing between two adjacent reflectors is assumed to be 10 minutes of arc , which corresponds to a linear distance of 0.12 and 0.08 cm for modules 1 and 2 , respectively .these linear spacings are quite comfortable for mechanically mounting those reflectors to a structure . in general , these glass sheets are quite smooth. coated with either gold or nickel , their measured x - ray reflectivity is nearly identical to theoretical expectations . 
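The way the two modules jointly pin down both coordinates of a source can be illustrated with a short geometric sketch. The routine below implements the idea described above: each module measures only the azimuthal angle of the source image about its own symmetry axis, and intersecting the two constraints (after undoing the spin) recovers the source direction. The axis assignments and sign conventions (module 1 axis along z, module 2 axis along x, spin about the x-axis) are our reading of the geometry, not taken verbatim from the text, and the spin period is arbitrary.

```python
import numpy as np

def unit_vector(alpha_deg, delta_deg):
    """Unit vector from the (fake) equatorial coordinates used in the text."""
    a, d = np.radians(alpha_deg), np.radians(delta_deg)
    return np.array([np.cos(d) * np.cos(a), np.cos(d) * np.sin(a), np.sin(d)])

def module_azimuths(src, t, omega):
    """Azimuthal image positions (phi1, phi2) of a source in the two modules, with the
    instrument spinning about the (assumed) x-axis at angular velocity omega."""
    c, s = np.cos(-omega * t), np.sin(-omega * t)
    rot = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])   # spin expressed as a source rotation
    v = rot @ src
    phi1 = np.arctan2(v[1], v[0])        # azimuth about the z-axis (module 1)
    phi2 = np.arctan2(v[2], v[1])        # azimuth about the x-axis (module 2)
    return phi1, phi2

def locate(phi1, phi2, t, omega):
    """Closed-form inversion of the two azimuths back to sky coordinates
    (degenerate geometries such as cos(phi2) = 0 are ignored here)."""
    v = np.array([np.cos(phi1), np.sin(phi1), np.sin(phi1) * np.tan(phi2)])
    v /= np.linalg.norm(v)
    c, s = np.cos(omega * t), np.sin(omega * t)
    v = np.array([[1, 0, 0], [0, c, -s], [0, s, c]]) @ v   # undo the spin
    return np.degrees(np.arctan2(v[1], v[0])) % 360, np.degrees(np.arcsin(v[2]))

omega = 2 * np.pi / 600.0                # illustrative spin period of 10 minutes
src = unit_vector(75.0, 30.0)
p1, p2 = module_azimuths(src, t=120.0, omega=omega)
print("measured azimuths (deg):", np.degrees([p1, p2]))
print("recovered source  (deg):", locate(p1, p2, t=120.0, omega=omega))
```

Running the snippet recovers the input coordinates from the two azimuth measurements alone, which is the sense in which a new source and its two coordinates can be identified simultaneously by the perpendicular modules.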
a challenging part of the building the system as outlined in this paper is the detectors .we require two detectors each with an active area on the order of 7,000 .they must have thin window and good one - dimensional position resolution .we have been investigating the possibility of using the micro - strip proportional counter technology .the glass plate will have anode and cathode traces laid on them photolithographically with a pitch of , say , 300 m .each trace is read out by its own analog electronics to achieve maximum position resolution .since we do not require position resolution along the longitudinal directions of the cylinders , the readout electronics can be made simple .with all these parameters , table [ mass.table ] list the key components of this system and their estimated masses .like previous asms , blosm is capable of monitoring long - term behavior of bright sources .but the most significant aspect of blosm is that it covers the soft band below 2 kev and its coverage of every part of the sky except for sources within of the sun . with these two features ,we expect it to discover many fast ( lasting on the order of seconds or minutes ) and/or soft transients . in this section ,we give quantitative estimates of blosm count rates for several known phenomena .note that in doing these estimates , we will assume the source is ` on - axis , ' i.e. , it is in a direction perpendicular to the symmetry axis of module 1 .the count rates are for module 1 only . for a source that is not on - axis, it may get more or less counts than these estimates , depending on its position .figure [ aeff_factor.ps ] shows how the effective area varies as a function of sun angle , where a factor of 1 corresponds to the estimate here . 1 .as a point of reference , we have included the expected count rates from the crab nebular / pulsar .we have used a power law spectrum with a photon index of -2.05 and an .gamma ray bursts : since their discovery nearly three decades ago , gamma ray bursts have been the most enigmatic astrophysical phenomenon .it has been generally agreed upon in the last few years that one of potentially very useful measurements is to measure the galactic absorption at low energies .for the estimates in table [ rate.table ] , we have used the flux measured with the ginga gamma ray burst detector ( t.e .strohmayer , personal communication ) which corresponds to the spectral characterization of band _ et al . _( 1993 ) with and .we have considered two cases . in the first case, we extrapolate this spectrum all the way down to 0.1 kev with no interstellar absorption . in the second case, we have assumed an interstellar absorption corresponding to and the morrison and mccammon ( 1983 ) cross sections .it is clear that blosm is well suited for differentiating the galactic and extra - galactic origin theories of gamma ray bursts .blosm is perhaps the first asm capable of systematic detection of x - ray bursts from galactic sources .of the 120 or so cataloged galactic low mass x - ray binaries , only 40 or so have been observed to emit type - i bursts . 
with a few years of operation , blosm should be able to detect x - ray bursts from most of these sources .on the other hand , if no type - i bursts are detected from a significant number of those sources , it may indicate that some of them may very well be black hole systems .for the estimates in table [ rate.table ] , we have assumed that the neutron star has a radius of 10 km and is at a distance of 10 kpc .it is clear that blosm can detect most of type - i x - ray bursts . with its position measurement accuracy of 0.1 degrees ,blosm can associate detected bursts with their persistent counterparts .active galactic nuclei : long - term variability of agns has been the subject of intense organized campaigns in recent years . in table[ rate.table ] we show the blosm count rate for a 1 mcrab agn .the spectrum we have used is a power law with a photon index of -1.7 with ( mushotzky _ et al . _ 1993 ) .it takes about 50,000 seconds of observation to detect a 1 mcrab agn at 10 level .therefore blosm probably can monitor a few bright agns on a daily basis .wga catalog / rosat transients : a large number of soft transients have been detected by rosat during its observations in the last five years ( white 1997 and angelini , giommi , & white 1996 ) . with its all sky coverage, blosm is expected to detect the brighter ones and monitor them on a daily basis .for example , for a source with a flux of at the detector in the band of 0.5 - 1.5 kev , blosm will detect 0.1 counts / s over a background of 4 counts / s .in summary , we have demonstrated that a one - dimensional focusing all sky monitor as outlined in this paper stands a significant step forward in the direction of larger area and true all sky coverage .compared with previously - flown and currently flying asms , its improvement in source location precision and monitoring sensitivity is well over an order of magnitude .in addition to the capabilities of traditional x - ray all sky monitors , it is capable of detecting gamma ray bursts , soft transients , x - ray bursts , and monitoring a number of agns on a daily basis .we conclude by pointing out that the instrument as outlined here is suitable to fly either as a free flyer , such as a small explorer , or as one of many instruments on a larger satellite .we would like to thank scott murphy for help in preparing this paper .l. angelini , p. giommi , and n.e .white , 1996 , in _ rontgendtrahlung from the universe _ ,zimmermann , j.e .trumper , and h. york , pp .645 - 646 .
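the sensitivity figures quoted above (roughly 50,000 s for a 10-sigma detection of a 1 mcrab agn, and 0.1 counts/s from a faint soft transient against 4 counts/s of diffuse background) follow from a simple background-dominated signal-to-noise estimate, significance ~ s t / sqrt(b t). a short sketch using the numbers from the text:

```python
def exposure_for_sigma(src_rate, bkg_rate, n_sigma=10.0):
    """Exposure t such that src_rate*t / sqrt(bkg_rate*t) = n_sigma
    (Gaussian, background-dominated approximation)."""
    return n_sigma ** 2 * bkg_rate / src_rate ** 2

# faint ROSAT-type transient: ~0.1 counts/s on top of ~4 counts/s of background
t = exposure_for_sigma(0.1, 4.0, n_sigma=10.0)
print(f"~{t:.0f} s for a 10-sigma detection")   # ~4e4 s, the same order as the
                                                # ~50,000 s quoted for a 1 mCrab AGN
```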
|
we present a conceptual design for a new x - ray all sky monitor ( asm ) . compared with previous asms , its salient features are : ( 1 ) it has a focusing capability that increases the signal to background ratio by a factor of 3 ; ( 2 ) it has a broad - band width : 200 ev to 15 kev ; ( 3 ) it has a large x - ray collection area : ; ( 4 ) it has a duty cycle of nearly 100% , and ( 5 ) it can measure the position of a new source with an accuracy of a few minutes of arc . these features combined open up an opportunity for discovering new phenomena as well as monitoring existing phenomena with unprecedented coverage and sensitivity .
|
the solution of polarized line transfer equation with angle - dependent ( ad ) partial frequency redistribution ( prd ) has always remained one of the difficult areas in the astrophysical line formation theory .the difficulty stems from the inextricable coupling between frequency and angle variables , which are hard to represent using finite resolution grids .equally challenging is the problem of polarized line radiative transfer ( rt ) equation in multi - dimensional ( multi - d ) media .there existed lack of formulations that reduce the complexity of multi - d transfer , when prd is taken into account . in the first three papers of the series on multi - d transfer ( see anusha & nagendra 2011a - paper i ; anusha et al 2011a - paper ii ; anusha & nagendra 2011b - paper iii ) , we formulated and solved the transfer problem using angle - averaged ( aa ) prd .the fourier decomposition technique for the ad prd to solve transfer problem in one - dimensional ( 1d ) media including hanle effect was formulated by . in anusha & nagendra ( 2011c - hereafter paper iv ) , we extended this technique to handle multi - d rt with the ad prd . in this paperwe apply the technique presented in paper iv to establish several benchmark solutions of the corresponding line transfer problem .a historical account of the work on polarized rt with the ad prd in 1d planar media , and the related topics is given in detail , in table 1 of paper iv .therefore we do not repeat here .in section [ frte ] we present the multi - d polarized rt equation , expressed in terms of irreducible fourier coefficients , denoted by and , where is the index of the terms in the fourier series expansion of the stokes vector and the stokes source vector .section [ numerics ] describes the numerical method of solving the concerned transfer equation .section [ results ] is devoted to a discussion of the results .conclusions are presented in section [ conclusions ] .the multi - d transfer equation written in terms of the stokes parameters and the relevant expressions for the stokes source vectors ( for line and continuum ) in a two - level atom model with unpolarized ground level , involving the ad prd matrices is well explained in section 2 of paper iv .all these equations can be expressed in terms of ` irreducible spherical tensors ' ( see section 3 of paper iv ) .further , in section 4 of paper iv we developed a decomposition technique to simplify this rt equation using fourier series expansions of the ad prd functions . herewe describe a variant of the method presented in paper iv , which is more useful in practical applications involving polarized rt in magnetized two - dimensional ( 2d ) and three - dimensional ( 3d ) atmospheres .let be the stokes vector and denote the stokes source vector ( see * ? ? ?we introduce vectors and given by these quantities are related to the stokes parameters ( see e.g. , * ? ? ?* ) through we note here that the quantities , , , , and also depend on the variables , and ( defined below ) . for a given ray defined by the direction , the vectors and satisfy the rt equation ( see section 3 of paper iv ) .\label{rte - reduced}\end{aligned}\ ] ] it is useful to note that the above equation was referred to as ` irreducible rt equation ' in paper iv . indeed , for the aa prd problems , the quantities and are already in the irreducible form . but for the ad prd problems , and can further be reduced to and using fourier series expansions . 
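the fourier reduction referred to here replaces an azimuth-dependent quantity by a short series in cos(k phi) and sin(k phi); the k = 0 coefficient is just the azimuthal average, while the higher orders carry the explicit azimuth dependence that the aa treatment discards. a minimal numerical sketch of such a decomposition is given below; the test function is an arbitrary stand-in for an ad redistribution function at fixed frequencies and colatitudes, not one of the actual prd functions.

```python
import numpy as np

def azimuthal_fourier_coeffs(f, kmax=4, n=512):
    """Cosine/sine Fourier coefficients of a 2*pi-periodic function f(phi)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    vals = f(phi)
    coeffs = {}
    for k in range(kmax + 1):
        ak = 2.0 / n * np.sum(vals * np.cos(k * phi))
        bk = 2.0 / n * np.sum(vals * np.sin(k * phi))
        if k == 0:
            ak *= 0.5          # the k = 0 term is the plain azimuthal average
        coeffs[k] = (round(ak, 4), round(bk, 4))
    return coeffs

# toy azimuth dependence: constant plus first and second harmonics
print(azimuthal_fourier_coeffs(lambda p: 1.0 + 0.3 * np.cos(p) + 0.1 * np.sin(2 * p)))
```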
here is the position vector of the point in the medium with coordinates .the unit vector defines the direction cosines of the ray with respect to the atmospheric normal ( the -axis ) , where and are the polar and azimuthal angles of the ray .total opacity is given by where is the frequency averaged line opacity , is the voigt profile function and is the continuum opacity .frequency is measured in reduced units , namely where is the doppler width .for a two - level atom model with unpolarized ground level , has contributions from the line and the continuum sources .it takes the form with the line source vector is written as with and the unpolarized continuum source vector = .we assume that with being the planck function .the thermalization parameter with and being the inelastic collision rate and the radiative de - excitation rate respectively .the damping parameter is computed using ] , ] , ] , ] and ] differ noticeably from {\rm aa} ] and {\rm aa} ] and {\rm aa} ] and {\rm ad} ] and {\rm ad} ] and ] and ] , ] , ] and {\rm aa} ] and {\rm aa} ] and {\rm aa} ] , {\rm aa}>0 ] and {\rm ad}<0 ] and {\rm aa} ] , {\rm ad} ] and {\rm ad} ] and {\rm aa} ] and {\rm ad} ] differs from {\rm aa} ] makes a significant contribution to and with other values of vanish ( graphically ) . ] .also , the components and have the same sign for both the values of .therefore from equations ( [ transform-2f1 ] ) and ( [ transform-2f2 ] ) we can see that and have opposite signs for but have the same signs for .the ad and the aa values of sometimes coincide well and sometimes differ significantly .this is because , the fourier components of the ad prd functions with essentially represent the azimuthal averages of the ad functions and are not same as the explicit angle - averages of the ad functions .the latter are obtained by averaging over both co - latitudes and azimuths ( i.e. , over all the scattering angles ) .the -dependence of the ad functions are contained dominantly in the terms and the -dependence is contained dominantly in the higher order terms in the fourier expansions of the ad functions .for this reason the aa prd can not always be a good representation of the ad prd , especially in the 2d polarized line transfer .this can be attributed to the strong dependence of the radiation field on the azimuth angle ( ) in the 2d geometry . as will be shown below, the differences between the ad and the aa solutions get further enhanced in the magnetic case ( hanle effect ) .when , {\rm ad} ] profiles for both values of ( 0.5 and 89 ) do not differ significantly . equations ( [ star2 ] ) and ( [ star4 ] ) suggest that has dominant contribution from for =0.5 and for 89 .looking at the first two columns of figure [ fig - i-1to6-nonmag - fourier ] , it can be seen that nearly coincide with {\rm aa} ] .thus {\rm aa} ] nearly coincide for =0.5 ( see the first two columns of figure [ fig - i-1to6-nonmag ] ) .thus {\rm ad} ] are nearly the same for =0.5 . when =89 ( the first two columns of figure [ fig - i-1to6-mag ] ) , {\rm aa} ] , which is a combination of .thus {\rm ad} ] both are nearly zero for =89. 
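the voigt profile function that enters the total opacity defined earlier in this section can be evaluated from the faddeeva function, h(a, x) = re[w(x + i a)]. a short sketch follows; the damping parameter below is only a typical value assumed for illustration, not necessarily the one used for the slab models discussed here.

```python
import numpy as np
from scipy.special import wofz

def voigt_H(a, x):
    """Voigt function H(a, x) = Re[w(x + i*a)], normalised so that its
    integral over x equals sqrt(pi)."""
    return wofz(x + 1j * a).real

a = 1.0e-3                       # assumed damping parameter
x = np.linspace(-5.0, 5.0, 11)   # frequency from line centre, in Doppler widths
print(np.round(voigt_H(a, x), 5))
```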
we can carry out similar analysis and find out which are the irreducible fourier components of that contribute to the construction of and which of the components of contribute to generate and to interpret their behaviors .the presence of a weak , oriented magnetic field modifies the values of and in the line core ( ) to a considerable extent , due to hanle effect .further , it is for that the differences between the aa and the ad prd become more significant . in both the figures [ fig - varphi - q ] and [ fig - varphi - u ] , the dashed and dot - dashed curves represent case . as usual, there is either a depolarization ( decrease in the magnitude ) or a re - polarization ( increase in the magnitude ) of both and with respect to those in the case .the ad prd values of and are larger in magnitude ( absolute values ) than those of the aa prd , for the chosen set of model parameters ( this is not to be taken as a general conclusion ) .the differences depend sensitively on the value of . in figures [ fig-2d - t20](a ) and( b ) we present the emergent profiles for 1d and 2d media for and . for 2d rt, we present the spatially averaged profiles .the effects of a multi - d geometry ( 2d or 3d ) on linear polarization for non - magnetic and magnetic cases are discussed in detail in papers i , ii and iii , where we considered polarized line formation in multi - d media , scattering according to the aa prd .we recall here that the essential effects are due to the finite boundaries in multi - d media , which cause leaking of radiation and hence a decrease in the values of stokes , and a sharp rise in the values of and near the boundaries .multi - d geometry naturally breaks the axisymmetry of the medium that prevails in a 1d planar medium .this leads to significant differences in the values of and formed in 1d and multi - d media ( compare solid lines in panels ( a ) and ( b ) of figure [ fig-2d - t20 ] ) .as pointed out in papers i , ii and iii , for non - magnetic case , is zero in 1d media while in 2d media a non - zero is generated due to symmetry breaking by the finite boundaries .for the values chosen in figure [ fig-2d - t20](b ) {\rm aa} ] in the non - magnetic case and of {\rm 1d} ] in the magnetic case are larger in comparison with the corresponding spatially averaged {\rm 2d} ] .this is again due to leaking of photons from the finite boundaries and the effect of spatial averaging ( which causes cancellation of positive and negative quantities ) . in figures[ fig-2d - surface - aa - ad - x0 ] and [ fig-2d - surface - aa - ad - x2.5 ] we present spatial distribution of , and on the plane of the 2d slab for two different frequencies ( and respectively ) .the spatial distribution of source vector components and represent the anisotropy of the radiation field in the 2d medium .it shows how inhomogeneous is the distribution of linear polarization within the 2d medium . in figure[ fig-2d - surface - aa - ad - x0 ] we consider ( line center ) . for the chosen values of spatial distribution of is not very different for the aa and the ad prd . 
and for both the aa and the ad prd have similar magnitudes ( figures [ fig-2d - surface - aa - ad - x0](b),(c ) and [ fig-2d - surface - aa - ad - x0](e),(f ) ) , but different spatial distributions .the spatial distribution of and is such that the positive and negative contributions with similar magnitudes of and cancel out in the computation of their formal integrals .therefore , the average values of and resulting from the formal integrals of and are nearly zero at for both the aa and the ad prd ( see dashed and dot - dashed lines at in figure [ fig-2d - t20](b ) ) . in figure [ fig-2d - surface - aa - ad - x2.5 ] we consider ( near wing frequency ) .again , does not show significant differences between the aa and the ad prd . for ,the aa prd has a distribution with positive and negative values equally distributed in the 2d slab but the ad prd has more negative contribution .this reflects in the average values of , where {\rm aa} ] values are more negative ( see dashed and dot - dashed lines at in figure [ fig-2d - t20](b ) ) .the positive and negative values of are distributed in a complicated manner everywhere on the 2d slab for the aa prd . for the ad prd ,the distribution of is positive almost everywhere , including the central parts of the 2d slab .such a spatial distribution reflects again in the average value of ( shown in figure [ fig-2d - t20](b ) ) , where {\rm aa} ] .in this paper we have further generalized the fourier decomposition technique developed in paper iv to handle the ad prd in multi - d polarized rt ( see section [ rteff ] ) . we have applied this technique and developed an efficient iterative method called pre - bicg - stab to solve this problem ( see section [ numerics ] ) .we prove in this paper that the symmetry of the polarized radiation field with respect to the infinite axis , that exists for a non - magnetic 2d medium for the aa prd ( as shown in paper ii ) breaks down for the ad prd ( see appendix [ appendixa ] ) .we present results of the very first investigations of the effects of the ad prd on the polarized line formation in multi - d media .we restrict our attention to freestanding 2d slabs with finite optical thicknesses on the two axes ( and ) .the optical thicknesses of the isothermal 2d media considered in this paper are very moderate ( ) .we consider effects of the ad prd on the scattering polarization in both non - magnetic and magnetic cases .we find that the relative ad prd effects are prominent in the magnetic case ( hanle effect ) .they are also present in non - magnetic case for some choices of .we conclude that the ad prd effects are important for interpreting the observations of scattering polarization in multi - d structures on the sun .practically , even with the existing advanced computing facilities , it is extremely difficult to carryout the multi - d polarized rt with the ad prd in spite of using advanced numerical techniques .therefore in this paper we restrict our attention to isothermal 2d slabs. the use of the ad prd in 3d polarized rt in realistic modeling of the observed scattering polarization on the sun will be numerically very expensive and can be taken up in future only with highly advanced computing facilities .* erratum * : in the previous papers of this series ( papers i , iii and iv ) the definitions of the formal solutions expressed in terms of the optical thicknesses have a notational error . 
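the near-cancellation described above, a signed and roughly antisymmetric distribution of the polarization quantities over the slab surface whose spatial average is almost zero even though its local values are not small, can be illustrated with a toy map; the functional form below is an arbitrary stand-in, not the computed source-vector distribution.

```python
import numpy as np

# toy signed Q/U-like quantity on the surface of a 2d slab: antisymmetric
# about the slab centre, so spatial averaging cancels it.
y, z = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101))
q_map = 0.01 * np.sin(np.pi * y) * np.cos(0.5 * np.pi * z)

print("spatial average     :", round(q_map.mean(), 6))         # ~0 by cancellation
print("mean absolute value :", round(np.abs(q_map).mean(), 6))  # clearly non-zero
```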
in equation( 20 ) of paper i , equations ( 14 ) and ( 20 ) of paper iii , equation ( 14 ) of paper iv , the symbol should have been as explicitly given in equation ( [ i - out - tau ] ) of this paper . is defined in equation ( [ tau ] ) in this paper . in the previous papers of this series( papers i to iv ) the vector was incorrectly defined as .we note here that the numerical results and all other equations presented in papers i iv are correct , and are unaffected by this error in the above mentioned equations .we thank the anonymous referee for very useful comments and suggestions that helped improve the manuscript to a great extent .the reports by the referee helped to correct some of the mistakes that were present in the previous papers of this series , and the corrections are now presented in the form of an erratum in this paper .we also thank the referee for providing figure [ fig - fs ] .in this appendix we show that the symmetry properties that are valid for the aa prd ( proved in paper ii ) break down for the ad prd .we present the proof in the form of an algorithm .+ step ( 1 ) : first we assume that the medium contains only an unpolarized thermal source namely , .+ step ( 2 ) : use of this source vector in the formal solution expression yields .+ step 3 : using this we can write the expressions for the irreducible polarized mean intensity components as where and are positive numbers ( see appendix d of paper iii ) .we recall that , ] . here , \label{redis - nonmag}\end{aligned}\ ] ] is the non - magnetic , polarized redistribution matrix .+ step 4 : a fourier expansion of the ad prd functions with respect to ( instead of ) gives with the fourier coefficients substituting equation ( [ 2d - fourier - series - r23-pf ] ) in equation ( [ jkq - approx ] ) we can show that the components and do not vanish irrespective of the symmetry of with respect to the infinite spatial axis . in other words , to a first approximation , even if we assume that is symmetric with respect to the infinite spatial axis ( as in the aa prd ) , the -dependence of the ad prd functions is such that the integral over leads to non - zero and .this stems basically from the coefficients with in the expansion of the ad prd functions . following an induction proof as in paper ii, it follows that and are non - zero in general because the symmetry breaks down in the first step itself .it follows from equation ( [ transform-1 ] ) , and from the above proof that the stokes parameter is not symmetric with respect to the infinite spatial axis in a non - magnetic 2d media , in the ad prd case , unlike the aa prd and crd cases ( see appendix b of paper ii for the proof for the aa prd ) .anusha , l. s. , & nagendra , k. n. 2011a , , 726 , 6 ( paper i ) anusha , l. s. , & nagendra , k. n. 2011b , , 738 , 116 ( paper iii ) anusha , l. s. , nagendra , k. n. , & paletou , f. , 2011a , 726 , 96 ( paper ii ) anusha , l. s. , nagendra , k. n. , bianda , m. , stenflo , j. o. , holzreuter , r. , sampoorna , m. , frisch , h. , ramelli , r. & smitha , h. n. , 2011b , , 737 , 95 anusha , l. s. , & nagendra , k. n. 2011c , , 739 , 40 ( paper iv ) bommier , v. 1997a , , 328 , 706 bommier , v. 1997b , , 328 , 726 chandrasekhar , s. 1960 , radiative transfer ( new york : dover ) faurobert - scholl , m. 1992 , , 258 , 521 frisch , h. 2007 , , 476 , 665 ( hf07 ) frisch , h. 2009 , in asp conf .405 , solar polarization 5 , ed .s. v. berdyugina , k. n. nagendra & r. ramelli ( san francisco : asp ) , 87 ( hf09 ) hummer , d. g. 
1962, , 125, 21 landi degl'innocenti, e., & landolfi, m. 2004, polarization in spectral lines (dordrecht: kluwer) nagendra, k. n., frisch, h., & faurobert, m. 2002, , 395, 305 nagendra, k. n., & sampoorna, m. 2011, (in press) saad, y. 2000, iterative methods for sparse linear systems (2nd ed.) (ebook: http://www-users.cs.umn.edu/~saad/books.html)
1962, , 125, 21 landi degl'innocenti, e., & landolfi, m. 2004, polarization in spectral lines (dordrecht: kluwer) nagendra, k. n., frisch, h., & faurobert, m. 2002, , 395, 305 nagendra, k. n., & sampoorna, m. 2011, (in press) saad, y. 2000, iterative methods for sparse linear systems (2nd ed.) (ebook: http://www-users.cs.umn.edu/~saad/books.html)
|
the solution of polarized radiative transfer equation with angle - dependent ( ad ) partial frequency redistribution ( prd ) is a challenging problem . modeling the observed , linearly polarized strong resonance lines in the solar spectrum often requires the solution of the ad line transfer problems in one - dimensional ( 1d ) or multi - dimensional ( multi - d ) geometries . the purpose of this paper is to develop an understanding of the relative importance of the ad prd effects and the multi - d transfer effects and particularly their combined influence on the line polarization . this would help in a quantitative analysis of the second solar spectrum ( the linearly polarized spectrum of the sun ) . we consider both non - magnetic and magnetic media . in this paper we reduce the stokes vector transfer equation to a simpler form using a fourier decomposition technique for multi - d media . a fast numerical method is also devised to solve the concerned multi - d transfer problem . the numerical results are presented for a two - dimensional medium with a moderate optical thickness ( effectively thin ) , and are computed for a collisionless frequency redistribution . we show that the ad prd effects are significant , and can not be ignored in a quantitative fine analysis of the line polarization . these effects are accentuated by the finite dimensionality of the medium ( multi - d transfer ) . the presence of magnetic fields ( hanle effect ) modifies the impact of these two effects to a considerable extent .
|
[ [ background . ] ] background .+ + + + + + + + + + + _ erasure coding _ is a key technology that saves space and retains robustness against faults in distributed storage systems . in short ,an erasure code splits a large data object into fragments such that from any of them the input value can be reconstructed .the utility of erasure coding is demonstrated by large - scale erasure - coding storage systems that have been deployed today .these distributed storage systems offer large capacity , high throughput , and resilience to faults . whereas the storage systems in production use today only tolerate component crashes or outages , storage systems in the _ byzantine failure model _ survive also more severe faults , ranging from arbitrary state corruption to malicious attacks on components . in this paper, we consider a model where _ clients _ directly access a storage service provided by distributed servers , called _ nodes _ a fraction of the nodesmay be byzantine , whereas clients may fail as well , but only by crashing .although byzantine - fault tolerant ( bft ) erasure - coded distributed storage systems have received some attention in the literature , our understanding of their properties lies behind that of replicated storage .in fact , most existing bft erasure - coded storage approaches have drawbacks that prevented their wide - spread use .for example , they relied on the nodesstoring an unbounded number of values , required the nodesto communicate with each other , used public - key cryptography , or might have blocked clients due to concurrent operations of other clients .we consider an abstract _ wait - free _ storage register with _semantics , accessed concurrently by multiple readers and writers ( mrmw ) .wait - free termination means that any client operation terminates irrespective of the behavior of the byzantine nodesand of other clients .this is not easy to achieve with byzantine nodes even in systems that replicate the data .therefore , previous works have often used a weaker notion of liveness called _ finite - write ( fw ) termination _ , which ensures that read operations progress only in executions with a finite number of writes .[ [ contribution . ] ] contribution .+ + + + + + + + + + + + + this paper introduces awe , the _ first _ asynchronous , wait - free distributed bft erasure - coded storage protocol with optimal resilience . as in previous work, we assume there are nodesand up to of them may exhibit non - responsive ( nr-)arbitrary faults , that is , byzantine corruptions .the best resilience that has been achieved so far is , which is optimal for byzantine storage .however , our protocol features a separation of metadata and erasure coded fragments ; with this approach our protocol may reduce the number of _ data nodes _, i.e. , those that store a fragment , to lower values than for .in particular , our protocol takes only data nodes ; this idea saves resources , as in the separation of agreement and execution for bft services . for implementing the metadata service , nodesare still needed .our protocol employs simple , passive data nodes ; they can not execute code and they only support read and write operations , such as the key - value stores ( kvs ) provided by popular cloud storage services .the metadata service itself is an atomic snapshot object , which has only weak semantics and may be implemented in a replicated asynchronous system from simple read / write registers .the protocol is also _ amnesic _ , i.e. 
, the nodesstore a bounded number of values and may erase obsolete data .the protocol uses only simple cryptographic hash functions but no ( expensive ) public - key operations . in summary , protocol awe , introduced in section [ sec : protocol ] , is the first erasure - coded distributed implementation of a mrmw storage object that is , at the same time : ( 1 ) asynchronous , ( 2 ) wait - free , ( 3 ) atomic , ( 4 ) amnesic , ( 5 ) tolerates the optimal number of byzantine nodes , and ( 6 ) does not use public - key cryptography .furthermore , awecan be implemented from non - programmable nodes(kvs ) that only support reads and writes ( in the vein of disk paxos ) . in practice ,the kvs interface is offered by commodity cloud storage services , which could be used as awedata nodesto reduce the cost of awedeployment and ownership .while some of these desirable properties have been achieved in different combinations so far , they have never been achieved together with erasure - coded storage , as explained later . combining these propertieshas been a longstanding open problem .[ [ related - work . ] ] related work .+ + + + + + + + + + + + + we provide a brief overview of the most relevant literature on the subject .table [ tab : comparison ] summarizes this section .earlier designs for erasure - coded distributed storage have suffered from potential aborts due to contention or from the need to maintain an unbounded number of fragments at data nodes . in the crash - failure model ,orcas and casgc achieve optimal resilience and low communication overhead , combined with wait - free ( orcas ) and fw - termination ( casgc ) , respectively . in the model with byzantine nodes , cachin and tessaro ( ct ) introduced the first wait - free protocol with atomic semantics and optimal resilience .ct uses a verifiable information dispersal protocol but needs node - to - nodecommunication , which lies outside our model .hendricks et al .( hgr ) present an optimally resilient protocol that comes closest to our protocol among the existing solutions .it offers many desirable features , that is , it has as low communication cost , works asynchronously , achieves optimal resilience , atomicity , and is amnesic . compared to our work , it uses public - key cryptography , achieves only fw - termination instead of wait - freedom , and requires _ processing _ by the nodes , i.e. , the ability to execute complex operations beyond simple reads and writes .to be fair , much of the ( cryptographic ) overhead inherent in the ct and hgr protocols defends against poisonous writes from byzantine clients , i.e. 
, malicious client behavior that leaves the nodesin an inconsistent state .we do not consider byzantine clients in this work , since permitting arbitrary client behavior is problematic .such a client might write garbage to the storage system at any time and wipe out the stored value .furthermore , the standard formal correctness notions such as linearizability fail when clients misbehave ( apart from crashing ) .hendricks discusses correctness notions in the presence of byzantine clients .however , even without the steps that protect against poisonous writes , hgr still requires processing by the nodesand is not wait - free .the m - powerstore protocol employs a cryptographic `` proof of writing '' for wait - free atomic erasure - coded distributed storage .it is the first wait - free bft solution without node - to - nodecommunication .similar to other protocols , m - powerstore uses nodeswith processing capabilities and is not amnesic .several systems have recently addressed how to store erasure - coded data on multiple redundant cloud services but only few of them focus on wait - free concurrent access .hail , for instance , uses byzantine - tolerant erasure coding and provides data integrity through proofs of retrievability ; however , it does not address concurrent operations by different clients .depsky achieves regular semantics and uses lock - based concurrency control ; therefore , one client may block operations of other clients . a key aspect of awelies in the differentiation of ( small ) metadata from ( large ) bulk data : this enables a modular protocol design and an architectural separation for implementations .the farsite system first introduced such a separation for replicated storage ; their data nodesand their metadata abstractions require processing , however , in contrast to awe .non - explicit ways of separating metadata from data can already be found in several previous erasure coding - based protocols .for instance , the cross checksum , a vector with the hashes of all fragments , has been replicated on the data nodes to ensure consistency . finally , a recent protocol called mdstore has shown that separating metadata from bulk data permits to reduce the number of data nodesin asynchronous wait - free bft distributed storage implementations to only . when protocol aweis reduced to use replication with the trivial erasure code ( ), it uses as few nodesas mdstore to achieve the same wait - free atomic semantics ; unlike awe , however , mdstore is not amnesic and uses processing nodes ..comparison of erasure - coded distributed storage solutions . 
an asterisk ( )denotes optimal properties .the column labeled _ type _ states the computation requirements on nodes : _ proc ._ denotes processing ; _ msg ._ means sending messages to other nodes , in addition to processing ; _ r / w _ means a register object supporting only read and write .[ cols="<,^,^,^,^,^,^",options="header " , ] table [ tab : complexity ] shows the communication and storage costs of protocol aweand the related protocols .we use the wait - free semantics achieved by aweand others as the base case ; in casgc and hgr , a read operation concurrent with an unbounded number of writes may not terminate , hence we state their cost as .in contrast to awe , depsky is neither wait - free nor amnesic and m - powerstore is not amnesic .it is easy to see that aweperforms better than most storage solutions in terms communication complexity .in this section we prove that protocol awe , given by algorithms [ alg : client-1][alg : datareplica ] , emulates an atomic read / write register and is wait - free .whenever the metadata directory _ dir _ contains an entry .{\textit{frozenptrlist}}[p].{\textit{ts}} ] ( this includes that _ ts _ is frozen by for ) or when .{\textit{ts } } = { \textit{ts}} ] the _ written _ timestamp of .[ lem : timestamps ] at any time the timestamps that a client has frozen are no larger than its written timestamp .more precisely , for all , .{\textit{writeptr}}.{\textit{ts } } \ > \ m[c].{\textit{frozenptrlist}}[p].{\textit{ts}}.\ ] ] moreover , during any _ dir_-_update _ operation of , the timestamp .{\textit{writeptr}}.{\textit{ts}} ] may only increase . from algorithm [ alg : client-1 ]it follows that for any client , the timestamps stored in .{\textit{writeptr}}.{\textit{ts}} ] is only updated through a _r_-_write _ operation of , and is set to the written timestamp of the preceding _r_-_write _ operation of , which is strictly smaller than the written timestamp stored in .{\textit{writptr}}.{\textit{ts}} ] only increase .we define the _ timestamp of a register operation _ as follows : ( i ) for an _ r_-_write _operation , the timestamp of the value assigned to variable _ writeptr_._ts _ during ; ( ii ) when an _r_-_read _ operation , then its timestamp is the value assigned to variable _ readptr_._ts _ by _ highestread_. note that the timestamp of an _ r_-_read _operation is if and only if .furthermore , we say that a value is _ associated _ to a timestamp _ ts _ whenever the timestamp of the register operation that writes is _ts_. according to _ highestread _ , the timestamp in the returned pointer may be frozen ( taken from the _ frozenptrlist _ field of ) or written ( taken from the _ writeptr _ field of ) , but not both .[ lem : readfrozen ] if the timestamp _ ts _ of a _ r_-_read_ operation by client has been frozen for by a client , then executes two _ r_-_write _ operations concurrently to , where the _dir_-_scan _ operation of the former _r_-_write _ operation and the _ dir_-_update _ operation of the latter _r_-_write _ operation occur between _dir_-_update _ and _ dir_-_scan _ operations of . moreover , the timestamp of the _ r_-_read _ operation is _ ts _ , the one associated with the value written by . from algorithm [ alg : client-2 ]it follows that for _ highestread _ within to return a frozen timestamp , then , if is the metadata snapshot returned by the _ dir_-_scan _ operation during , it holds .{\textit{frozenindex}}[c ] = { \textit{readindex}} ] caused by through the _ dir_-_scan _ operation invoked during . 
from algorithm [ alg : client-1 ] , this can only be the operation through which wrote the value associated to _ts_. [ lem : partorder ] let two distinct operations on register _ r _ with timestamps , respectively , such that precedes .then .furthermore , if is of type _ r_-_write _ , then .we distinguish between two cases , depending on the type of .case 1 : : : if of type _ r_-_write _ , the claim follows directly from lemma [ lem : timestamps ] and from the algorithm of the writer . in particular ,if of type _ r_-_read _ , then , if there is no concurrent _r_-_write _ operation of the same client as , is returned as written timestamp by the _ readfrom _function when called for and reader of . in addition , if concurrently with a_ r_-_write _ of , then one of the two hold : ( i ) ( or a higher timestamp if many _ r_-_write _ operations have intervened ) is frozen for and is returned by the _ readfrom _operation invoked by _ highestread _ in , ( ii ) (or a higher timestamp if many _ r_-_write _operations have intervened ) has not yet been frozen by , in which case a written timestamp greater or equal to ( by lemma [ lem : timestamps ] ) is returned by the _ readfrom _ operation invoked by _ highestread _ in .case 2 : : : if of type _ r_-_read _ , then let be the maximum value of the timestamp field _ ts _ in a _ writeptr _ at the time when the _dir_-_scan _ operation invoked by .note that _ highestread _ obtains as this maximum or as a frozen timestamp .lemma [ lem : timestamps ] implies now that .+ we now show that by distinguishing two cases .first , if of type _ r_-_write _ , the writer calls _ dir_-_scan _ after completes and determines the maximum value of the _ ts _ field in any _writeptr_. then it increments that timestamp to obtain .this ensures that , as claimed .+ second , if of type _ r_-_read _ , then either have been a written timestamp or a frozen timestamp ( at the time when the client obtained the response of its _dir_-_scan _ ) .if been written , then the maximum value of the _ ts _ field in any _ writeptr _ , which is at least as large as by lemma [ lem : timestamps ] and by the atomicity of _dir_. + alternatively , if been frozen by writer , then lemma [ lem : readfrozen ] applies and shows that there exist two _operations by that are concurrent to , of which the first writes the value associated to . as such , if is the timestamp returned by the _ readfrom _ function invoked by any _ r_-_read _ operation that precedes and for writer , then .since this can be extended to all writers , it holds that .[ lem : unqwrites ] if two distinct operations of type _ r_-_write _ with timestamps , respectively , then .if executed by different clients , then the two timestamps differ in their second component . if executed by the same client , then the client executed them sequentially . by lemma[ lem : partorder ] , it holds .[ lem : integr ] let be an operation of type _r_-_read _ with timestamp that returns a value . then there is a unique operation of type _r_-_write _ that writes with timestamp .operation by client returns and is , thus , complete .this means that the client has processed events of type -_readresp _ from distinct nodesin a set ; according to the protocol , the client has verified that the response from every contains a timestamp and a fragment such that and ] entry stored in the metadata directory _dir_. 
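the verification step mentioned here, in which the reader accepts a fragment only if its hash matches the corresponding entry of the cross checksum stored with the metadata, can be sketched as follows. the fragments below are plain byte strings standing in for the output of the protocol's k-out-of-n erasure code, and the function and field names are assumptions for illustration, not the names used in the algorithms.

```python
import hashlib

def cross_checksum(fragments):
    """Metadata stored alongside a write: one SHA-256 digest per coded fragment."""
    return [hashlib.sha256(frag).hexdigest() for frag in fragments]

def verify(fragment, index, checksums):
    """Reader-side check: accept a fragment received from data node `index`
    only if its hash matches the stored cross-checksum entry."""
    return hashlib.sha256(fragment).hexdigest() == checksums[index]

# illustrative fragments; in the protocol these come from the erasure code
frags = [b"fragment-0", b"fragment-1", b"fragment-2", b"fragment-3"]
meta = cross_checksum(frags)

print(verify(frags[2], 2, meta))    # True: untampered fragment is accepted
print(verify(b"garbage", 2, meta))  # False: a Byzantine node's reply is rejected
```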
this pointer must have been computed during the write operation with timestamp and was later stored in _dir_by the same client .note that by lemma [ lem : unqwrites ] , no other write has timestamp . from the algorithm of the writer, it follows that the entries in _ readhash _ were generated as hash values of the fragments , i.e. , = h(\bar{v}_i) ] to _+ ( case 1.a ) if the _dir_-_scan _ operation of the reader during , denoted by _ dir_- , precedes _ dir_- , then obtains as the highest timestamp stored in by the algorithm .+ ( case 1.b ) otherwise , _ dir_- _ dir_- ; then the reader obtains such that .{\textit{frozenindex}}[c] ] , according to _ readfrom _ in the protocol and because .{\textit{frozenindex}}[c] ] and thus sets .alternatively ( case 2.b.ii ) , suppose that _ dir_- _ dir_- ; then , according to the protocol , the writer has already set .{\textit{frozenindex}}[c ] = { \textit{readindex}} ] .it remains to argue why these nodesdo not free this fragment before completes . in case 1.a, the writer detects the concurrent read during and therefore excludes the data fragments associated to from garbage collection for , by setting .{\textit{ts}} ] to in its state . again according to the protocol, remains reserved and the writer retains the corresponding fragments at least until invokes a subsequent read . intuitively ,cases 1.a and 2.a demonstrate why retains two values during a write ( the one being written and the one written before ) : does not know which one of the two the reader is about to access . in case 2.b.i , if the writer detects the concurrent read during , then it reserves and retains and the claim follows analogously to case 2.a . in cases 1.b and 2.b.ii, the reader accesses a frozen value . again , according to the protocol , remains frozen and is retained at least until invokes a subsequent read operation .the lemma follows .[ thm : atomic ] given a atomic snapshot object _ dir _, protocol aweemulates an atomic mrmw register _r_. we show that every execution of the protocol is linearizable with respect to an mrmw register . by lemma [ lem : integr ] , the timestamp of a _r_-_read _ either has been written by some _r_-_write _ operation or _r_- returns .we first construct an execution from by completing all operations of type _r_-_write _ for those values that have been returned by some _r_-_read _ operation .then we obtain a sequential permutation from as follows : ( 1 ) order all operations according to their timestamps ; ( 2 ) among the operations with the same timestamp , place the _r_-_read _ operations immediately after the unique _r_-_write _ with this timestamp ; and ( 3 ) arrange all non - concurrent operations in the same order as in .note that concurrent _r_-_read _ operations with the same timestamp may appear in arbitrary order . for proving that is a view of at a client w.r.t .a register , we must show that every _r_-_read _ operation returns the value written by the latest preceding _r_-_write _ that appears before in or if there is no such operation .let be an operation of type _r_- timestamp that returns a value .if , then by construction is ordered before any write operation in . otherwise , it holds and according to lemma [ lem : integr ] , there exists an _operation that writes with the same timestamp . 
in this case, is placed in before by construction .r_-_write _ operation appears between and because all other write operations have a different timestamp and therefore appear in either before or after .it remains to show that preserves the real - time order of .consider two operations in with timestamps and , respectively , such that . from lemma [ lem : partorder] , we have . if then after in by construction .otherwise and be an operation of type _r_-_read_. if is of type _r_-_write _ , then after we placed each _ r_-_read _ after the _r_-_write _ with the same timestamp .otherwise , a _ r_-_read _ and the two _ r_-_read _operations appear in in the same order as in by construction . 0 consider an r - read operation by client with timestamp / pointer ; then at least distinct data nodesthat store a data fragment such that etc . matches and they do not `` free '' these before invokes its subsequent r - read op .adapt from below [ lem : concwr ] let be an operation of type _r_-_read _ with timestamp invoked by a reader , and be operations of type _r_-_write _ invoked by the only writer , and their respective timestamps .we further assume that some of the _ r_-_write _ operations are concurrent with .now , let be the most recent _r_-_write _ operation whose_ dir_-_update _ ( we call the latter_ dir_- ) completes before the _ dir_-_update _ of the ( we call the latter _ dir_- ) .then , or .+ [ [ ] ] ( 1 ) if there is no_ dir_- call that has completed before _ dir_- , then or .( 2 ) if , then .+ furthermore , these two values are _ frozen _ ,i.e. , excluded from garbage collection by until * * completes**. we distinguish between the two cases : * _ dir_- precedes _ dir_- : here , the writer detects the ongoing , and updates its local _ frozenindex _ variable accordingly .if _ dir_- precedes _dir_- , then . otherwise , if _dir_- is invoked at any point after _dir_- , forces ; the latter is because from this point onwards _ readindex _ within is equal to .{\textit{frozenindex}}[c_r] ] , and thus reads sets .otherwise , _ readindex _dir_- equals to .{\textit{frozenindex}}[c_r]$ ] , and forces . clearly , if , sets .it is easy to see , that if , then sets .similarly , if , i.e. , _ dir_- takes place before _dir_- , then if _ dir_- invoked before _ dir_- , sets , while if _ dir_- invoked after _dir_- , sets .in addition , it is easy to see that in both cases , is frozen by . [ [ ] ][ lem : concmwr ] let be an operation of type _ r_-_read _ with timestamp invoked by a reader , and be operations of type _r_-_write _ invoked by each writer , and their respective timestamps .we further assume that some of the _ r_-_write _ operations are concurrent with , while _r_-_write _ operations of different clients can also be concurrent with each other and/or with .now , assume that for each writer , denotes the the most recent _ r_-_write _ operation of whose_ dir_-_update _ ( we call the latter ) completes before the _ dir_-_update _ of the ( we call the latter _ dir_- ) .then , it derives directly from lemma [ lem : concwr ] .+ [ [ ] ] + [ thm : waitfree ] given an atomic snapshot object _dir_and assuming that , protocol aweis wait - free . 
as the atomic snapshot _dir_operates correctly , all its operations eventually complete independently of other processes .it remains to show that no _ r_-_write _ and no _ r_-_read _ operation blocks .operation , the client needs to receive _-_writeack _ events from distinct data nodes before completing .as there are nodesand up to may be faulty , the assumption implies this . during a _ r_-_read _operation , the reader needs to obtain valid fragments , i.e. , fragments that pass the verification of their hash value . according to lemma [ lem : concurrent ] ,there are at least correct data nodesdesignated by _readptr_._set _ that store a fragment under timestamp until the operation completes .as the reader contacts these nodesand waits for fragments , these fragments eventually arrive and can be reconstructed to the value written by the writer by the completeness of the erasure code .this paper has presented awe , the first _ erasure - coded _ distributed implementation of a multi - writer multi - reader read / write storage object that is , at the same time : ( 1 ) asynchronous , ( 2 ) wait - free , ( 3 ) atomic , ( 4 ) amnesic , ( i.e. , with data nodesstoring a bounded number of values ) and ( 5 ) byzantine fault - tolerant ( bft ) using the optimal number of nodes .aweis efficient since it does not use public - key cryptography and requires data nodesthat support only reads and writes , further reducing the cost of deployment and ownership of a distributed storage solution .notably , awestores metadata separately from -out - of- erasure - coded fragments .this enables aweto be the first bft protocol that uses as few as data nodesto tolerate byzantine nodes , for any .future work should address how to optimize protocol aweand to reduce the storage consumption for practical systems ; this could be done at the cost of increasing its conceptual complexity and losing some of its ideal properties .for instance , when the metadata service is moved from a storage abstraction to a service with processing , it is conceivable that fewer values have to be retained at the nodes .we thank alessandro sorniotti , nikola kneevi , and radu banabic for inspiring discussions during the early stages of this work .this work is supported in part by the eu cloudspaces ( fp7 - 317555 ) and seccrit ( fp7 - 312758 ) projects .10 i. abraham , g. chockler , i. keidar , and d. malkhi .byzantine disk paxos : optimal resilience with byzantine shared memory . , 18(5):387408 , 2006 .a. adya , w. j. bolosky , m. castro , g. cermak , r. chaiken , j. r. douceur , j. howell , j. r. lorch , m. theimer , and r. p. wattenhofer . : federated , available , and reliable storage for an incompletely trusted environment . in _ proc .5th symp .operating systems design and implementation ( osdi ) _ , 2002 .y. afek , h. attiya , d. dolev , e. gafni , m. merritt , and n. shavit .atomic snapshots of shared memory . , 40(4):873890 , 1993 .a. bessani , m. correia , b. quaresma , f. andr , and p. sousa . : dependable and secure storage in a cloud - of - clouds . in _ proc .6th european conference on computer systems ( eurosys ) _ , pages 3146 , 2011 .k. d. bowers , a. juels , and a. oprea . : a high - availability and integrity layer for cloud storage . in _ proc .16th acm conference on computer and communications security ( ccs ) _ , pages 187198 , 2009 . c. cachin , d. dobre , and m. vukoli .storage with data replicas .report arxiv:1305.4868 , corr , 2013 . c. cachin , r. guerraoui , and l. rodrigues . .springer , 2011 . c. cachin , b. 
junker , and a. sorniotti . on limitations of using cloud storage for data replication .wraits , 2012 . c. cachin and s. tessaro .optimal resilience for erasure - coded byzantine distributed storage . in _ proc .international conference on dependable systems and networks ( dsn - dccs ) _ , pages 115124 , 2006 . v. r. cadambe , n. lynch , m. medard , and p. musial . codedatomic shared memory emulation for message passing architectures .csail technical report mit - csail - tr-2013 - 016 , mit , 2013 .g. chockler , r. guerraoui , and i. keidar .amnesic distributed storage . in g.taubenfeld , editor , _ proc .21th international conference on distributed computing ( disc ) _ , volume 4731 of _ lecture notes in computer science _ , pages 139151 .springer , 2007 .d. dobre , g. karame , w. li , m. majuntke , n. suri , and m. vukoli . : proofs of writing for efficient and robust storage . in _ proc .acm conference on computer and communications security ( ccs ) _, 2013 .d. dobre , m. majuntke , and n. suri . on the time - complexity of robust and amnesic storage . in t.p. baker , a. bui , and s. tixeuil , editors , _ proc .12th conference on principles of distributed systems ( opodis ) _ , volume 5401 of _ lecture notes in computer science _ , pages 197216 .springer , 2008 .p. dutta , r. guerraoui , and r. r. levy .optimistic erasure - coded distributed storage . in g.taubenfeld , editor , _ proc .22th international conference on distributed computing ( disc ) _ , volume 5218 of _ lecture notes in computer science _ , pages 182196 .springer , 2008 .s. frlund , a. merchant , y. saito , s. spence , and a. veitch . a decentralized algorithm for erasure - coded virtual disks . in _ proc .international conference on dependable systems and networks ( dsn - dccs ) _ , pages 125134 , 2004 .g. r. goodson , j. j. wylie , g. r. ganger , and m. k. reiter .efficient byzantine - tolerant erasure - coded storage . in _ proc .international conference on dependable systems and networks ( dsn - dccs ) _ , pages 135144 , 2004 .r. guerraoui , r. r. levy , and m. vukoli .lucky read / write access to robust atomic storage . in _ proc .international conference on dependable systems and networks ( dsn - dccs ) _ , pages 125136 , 2006 .j. hendricks . .phd thesis , school of computer science , carnegie mellon university , july 2009 .j. hendricks , g. r. ganger , and m. k. reiter . low - overhead byzantine fault - tolerant storage . in _ proc .21st acm symposium on operating systems principles ( sosp ) _ , 2007 .m. herlihy .wait - free synchronization ., 11(1):124149 , jan . 1991 .m. p. herlihy and j. m. wing .linearizability : a correctness condition for concurrent objects . , 12(3):463492 , july 1990 .c. huang , h. simitci , y. xu , a. ogus , b. calder , p. gopalan , et al .erasure coding in windows azure storage . in _ proc .usenix annual technical conference _ , 2012 .n. a. lynch . .morgan kaufmann , san francisco , 1996 .martin , l. alvisi , and m. dahlin . minimal byzantine storage . in d.malkhi , editor , _ proc .16th international conference on distributed computing ( disc ) _ , volume 2508 of _ lecture notes in computer science _ , pages 311325 .springer , 2002 .j. s. plank .erasure codes for storage applications .tutorial , presented at the usenix conference on file and storage technologies ( fast ) , 2005 .m. o. rabin .efficient dispersal of information for security , load balancing , and fault tolerance .36(2):335348 , 1989 . w. 
wong .cleversafe grows along with customers data storage needs .chicago tribune , nov .j. yin , j .-martin , a. v. l. alvisi , and m. dahlin . separating agreement from execution in byzantine fault - tolerant services . in _ proc .19th acm symposium on operating systems principles ( sosp ) _ , pages 253268 , 2003 .
|
although many distributed storage protocols have been introduced, a solution that combines the strongest properties in terms of availability, consistency, fault-tolerance, storage complexity and the supported level of concurrency has been elusive for a long time. combining these properties is difficult, especially if the resulting solution is required to be efficient and incur low cost. we present awe, the first _erasure-coded_ distributed implementation of a multi-writer multi-reader read/write storage object that is, at the same time: (1) asynchronous, (2) wait-free, (3) atomic, (4) amnesic (i.e., with data nodes storing a bounded number of values), and (5) byzantine fault-tolerant (bft) using the optimal number of nodes. furthermore, awe is efficient since it does not use public-key cryptography and requires data nodes that support only reads and writes, further reducing the cost of deployment and ownership of a distributed storage solution. notably, awe stores metadata separately from -out-of- erasure-coded fragments. this enables awe to be the first bft protocol that uses as few as data nodes to tolerate byzantine nodes, for any .
|
motility is the hallmark of life . from intracellular molecular transport and crawling of amoebae to the swimming of fish and flight of birds ,movement is one of life s central attributes .all these motile elements generate the forces required for their movements by _ actively _ converting some other forms of energy into mechanical energy. however , in this review we are interested in a special type of collective movement of these motile elements .what distinguishes a _ traffic - like _ movement from all other forms of movements is that traffic flow takes place on _`` tracks '' _ and _ `` trails '' _ ( like those for trains and street cars or like roads and highways for motor vehicles ) for the movement of the motile elements . from nowonwards , the term `` element '' will mean the motile element under consideration .we are mainly interested in the _ general principles _ and common trends seen in the mathematical modeling of collective traffic - like movements at different levels of biological organization .we begin at the lowest level , starting with intracellular biomolecular motor traffic on filamentary rails and end our review by discussing the collective movements of social insects ( like , for example , ants and termites ) and vertebrates on trails .some examples of motile elements and the corresponding tracks are shown in fig.[fig - table ] .now we shall give a few examples of the traffic - like collective phenomena in biology to emphasize some dynamical features of the tracks which makes biological traffic phenomena more exotic as compared to vehicular traffic .in any modern society , the most common traffic phenomenon is that of vehicular traffic .the changes in the roads and highway networks take place over periods of years ( depending on the availability of funds ) whereas a vehicle takes a maximum of a few hours for a single journey .therefore , for all practical purposes , the roads can be taken to be independent of time while studying the flow of vehicular traffic . in sharp contrast , the tracks and trails , which are the biological analogs of roads , can have nontrivial dependence on time during the typical travel time of the motile elements .we give a few examples of such traffic . _ time - dependent track whose length and shape can be affected by the motile element : _ microtubules , a class of filamentary proteins , serve as tracks for two superfamilies of motor proteins called kinesins and dyneins .interestingly , microtubules are known to exhibit an unusual polymerization - depolymerization dynamics even in the absence of motor proteins . moreover , in some circumstances , the motor proteins interact with the microtubule tracks so as to influence their length as well as shape ; one such situation arises during cell division ( the process is called _ mitosis _ ) . _ time - dependent track / trail created and maintained by the motile element : _ a dna helicase unwinds a double - stranded dna and uses one of the single strands thus opened as the track for its own translocation .ants are known to create the trails by dropping a chemical which is generically called _pheromone _ .since the pheromone gradually evaporates , the ants keep reinforcing the trail in order to maintain the trail networks . 
_ time - dependent track destroyed by the motile element : _ a class of enzymes , called mmp-1 , degrades their tracks formed by collagen fibrils .our aim is to present a critical overview of the common trends in the mathematical modelling of these traffic - like phenomena .although the choice of the physical examples and modelling strategies are biased by our own works and experiences , we put these in a broader perspective by relating these with works of other research groups .this review is organized as follows : the general physical principles and the methods of modelling traffic - like collective phenomena are discussed in sections [ sec - approach]-[sec - intra ] while specific examples are presented in the remaining sections .a summary of the various theoretical approaches followed so far in given in section [ sec - approach ] .the totally asymmetric simple exclusion process ( tasep ) , which lies at the foundation of the theoretical formalism that we have used successfully in most of our own works so far , has been described separately in section [ sec - asep ] .the brownian ratchet mechanism , an idealized generic mechanism of directed , albeit noisy , movement of single molecular motors , is explained in section [ sec - single ] .traffic of ribosomes , a class of nucleotide - based motors , is considered in section [ sub - protein ] .intracellular traffic of cytoskeletal motors is discussed in detail in section [ sec - intra ] while those of matrix metalloproteases in the extra - cellular matrix is summarized in section [ sec - extra ] .models of traffic of cells , ants and humans on trails are sketched in sections [ sec - cellular ] , [ sec - ants ] and [ sec - ped ] .the main conclusions regarding the common trends of modelling the traffic - like collective phenomena in diverse systems over a wide range of length scales are summarized in section [ sec - sum ] .first of all , the theoretical approaches can be broadly divided into two categories : ( i ) `` individual - based '' and ( ii ) `` population - based '' .the _ individual - based _ models describe the dynamics of the individual elements explicitly .just as `` microscopic '' models of matter are formulated in terms of molecular constituents , the individual - based models of transport are also developed in terms of the constituent elements .therefore , the individual - based models are often referred to as `` microscopic '' models .in contrast , in the _ population - based _ models individual elements do not appear explicitly and , instead , one considers only the population densities ( i.e. , number of individual elements per unit area or per unit volume ) .the spatio - temporal organization of the elements are emergent collective properties that are determined by the responses of the individuals to their local environments and the local interactions among the individual elements .therefore , in order to gain a deep understanding of the collective phenomena , it is essential to investigate the linkages between these two levels of biological organization .usually , but not necessarily , space and time are treated as continua in the population - based models and partial differential equations ( pdes ) or integro - differential equations are written down for the time - dependent local collective densities of the elements .the individual - based models have been formulated following both continuum and discrete approaches . 
in the continuum formulation of the lagrangian models , differential equationsdescribe the individual trajectories of the elements .suppose is the local density of the population of the motile elements at the coarse - grained location at time .if the elements are conserved , one can write down an equation of continuity for : + where is the current density corresponding to the population density .in addition , depending on the nature of the motile elements and their environment , it may be possible to write an analogue of the navier - stokes equation for the local dynamical variable .however , in this review we shall focus almost exclusively on works carried out following individual - based approaches . for developing an individual - based model , one must first specify the _ state _ of each individual element .the dynamical laws governing the time - evolution of the system must predict the state of the system at a time , given the corresponding state at time .the change of state should reflect the response of the system in terms of movement of the individual elements . langevin equation : one possible framework for the mathematical formulation of such models is the deterministic newton s equations for individual elements ; each element is modelled as a `` particle '' subjected to some `` effective forces '' arising out of its interaction with the other elements .in addition , the elements may also experience viscous drag and some random forces ( `` noise '' ) that may be caused by the surrounding medium . in that case , instead of the newton s equation , one can use a langevin equation . in casethe element is an organism that can think and take decision , capturing inter - element interaction via effective forces becomes a difficult problem . for a particle of mass and instantaneous velocity , the langevin equation describing its motion in one - dimensional spaceis written as where is the external force acting on the particle , and is the random force ( noise ) while the second term on the right hand side represents the viscous drag on the particle . in orderthat the average velocity satisfies the newton equation for a particle in a viscous medium , we further assume that and where , and , at this level of description , is a phenomenological parameter .the prefactor on the right hand side of equation ( [ eq - noi2 ] ) has been chosen for convenience .an alternative , but equivalent approach is to write down what is now generally referred to as a fokker - planck equation . in this approach ,one deals with a _ deterministic _ partial differential equation for a probability density .for example , suppose be the conditional probability that , at time , the motile element is located at and has velocity , given that its initial ( i.e. , at time ) position and velocity were . since the total probability integrated over all space and all velocities is conserved ( i.e , , does not change with time ) , the probability density satisfies an equation of continuity .the probability current density gets contribution not only from a diffusive motion of the motile elements but also a drift caused by the external force .often it turns out that real forces ( i.e. 
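As a concrete illustration of the Langevin-equation approach sketched above, the following minimal example integrates an overdamped Langevin equation by the Euler-Maruyama method. The harmonic external force, the parameter values and the function name are illustrative assumptions of this sketch and are not taken from any particular model discussed in this review.

```python
import numpy as np

def simulate_overdamped_langevin(n_steps=10000, dt=1e-3, gamma=1.0, kBT=1.0,
                                 k_trap=1.0, x0=0.0, seed=0):
    """Euler-Maruyama integration of gamma*dx/dt = F_ext(x) + xi(t),
    with <xi(t)> = 0 and <xi(t) xi(t')> = 2*gamma*kBT*delta(t-t').
    The harmonic force F_ext = -k_trap*x is only an illustrative choice."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    noise_amp = np.sqrt(2.0 * kBT * dt / gamma)
    for i in range(n_steps):
        force = -k_trap * x[i]          # deterministic (external) force
        x[i + 1] = x[i] + force * dt / gamma + noise_amp * rng.standard_normal()
    return x

if __name__ == "__main__":
    traj = simulate_overdamped_langevin()
    print("mean position:", traj.mean(), "variance:", traj.var())
    # For the assumed harmonic trap the stationary variance should
    # approach kBT/k_trap, a quick consistency check of the integrator.
```

For the harmonic force assumed here the stationary variance of the position should approach kBT/k_trap, which provides a simple check that the stochastic integrator and the noise strength are mutually consistent.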
, forces arising from real physical interactions ) alone can not account for the observed dynamics of the motile elements ; in such situations , `` social forces '' have been incorporated in the equation of motion .however , a priori justification of the forms of such social forces is extremely difficult .it is also worth pointing out that , in contrast to passive brownian particles , the motile agents are active brownian particles . hybrid approaches : suppose a set of `` particles '' , each of which represents a motile element , move in a potential field ] ; one possible form assumed in the case of ants is = - \ln\biggl(1 + \frac{\sigma}{1+\delta \sigma}\biggr)\ ] ] where is called the capacity . stochastic cellular automata : numerical solution of the newton - like or langevin - like equations require discretization of both space and time . therefore , the alternative discrete formulations may be used from the beginning . in recent years many individual - based models ,however , have been formulated on discretized space and the temporal evolution of the system in discrete time steps are prescribed as dynamical update rules using the language of cellular automata ( ca ) or lattice gas ( lg ) .since each of the individual elements may be regarded as an agent , the ca and lg models are someties also referred to as agent - based models .there are some further advantages in modeling biological systems with ca and lg .biologically , it is quite realistic to think in terms of the way each individual motile element responds to its local environment and the series of actions they perform .the lack of detailed knowledge of these behavioral responses is compensated by the rules of ca .usually , it is much easier to devise a reasonable set of logic - based rules , instead of cooking up some effective force for dynamical equations , to capture the behaviour of the elements .moreover , because of the high speed of simulations of ca and lg , a wide range of possibilities can be explored which would be impossible with more traditional methods based on differential equations .most of the models we review in this article are based on ca and lg ; this modelling strategy focusses mostly on generic features of the system .the average number of motile elements that arrive at ( or depart from ) a fixed detector site on the track per unit time interval is called the _flux_. one of the most important transport properties is the relation between the flux and the density of the motile elements ; a graphical representation of this relation is usually referred to as the _fundamental diagram_. if the motile elements interact mutually only via their steric repulsion their average speed would decrease with increasing density because of the hindrance caused by each on the following elements . on the other hand , for a given density , the flux is given by , where is the corresponding average speed . at sufficiently low density ,the motile elements are well separated from each other and , consequently , is practically independent of . therefore , is approximately proportional to if is very small .however , at higher densities the increase of with becomes slower . at high densitits ,the sharp decrease of with leads to a decrease , rather than increase , of with increasing .naturally , the fundamental diagram of such a system is expected to exhibit a maxium at an intermediate value of the density .the _ asymmetric simple exclusion process ( asep ) _ is a simple particle - hopping model . 
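Before turning to the details of the ASEP, the expected qualitative shape of such a fundamental diagram can be illustrated in a few lines of code. The linear speed-density relation assumed below is purely illustrative; it is not the fundamental diagram of any of the models reviewed here.

```python
import numpy as np

def flux(density, v_free=1.0, rho_max=1.0):
    """Flux J = rho * v(rho) for an assumed linear speed-density relation
    v(rho) = v_free * (1 - rho/rho_max).  This closure is only an
    illustrative assumption, not a result of any model in this review."""
    return density * v_free * (1.0 - density / rho_max)

densities = np.linspace(0.0, 1.0, 11)
for rho in densities:
    print(f"rho = {rho:.1f}   J = {flux(rho):.3f}")
# The flux rises roughly linearly at small rho, peaks at an intermediate
# density (here rho_max/2), and falls back to zero in the jammed limit.
```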
in the asep particles can hop ( with some probability or rate ) from one lattice site to a neighbouring one , but only if the target site is not already occupied by another particle . `` simple exclusion '' thus refers to the absence of multiply occupied sites .generically , it is assumed that the motion is `` asymmetric '' such that the particles have a preferred direction of motion . for a full definition of a model ,it is necessary to specify the order in which the local rule described above is to be applied to the sites .the most common update types are _ random - sequential dynamics _ and _ parallel dynamics_. in the random - sequential case , sites are chosen in random order and then updated .in contrast , updating for the parallel case is done in a synchronous manner ; here all the sites are updated at once .most often the one - dimensional case is studied , where particles move along a linear chain of sites .this is rather natural for many applications , e.g. for modelling highway traffic .if motion is allowed in only one direction ( e.g. `` to the right '' ) , the corresponding model is sometimes called totally asymmetric simple exclusion process ( tasep ) .the probability of motion from site to site will be denoted by ; in the simplest case , where all the sites are treated on equal footing , is assumed to be independent of the position of the particle . for such driven diffusive systems the boundary conditions turn out to be crucial .if periodic boundary conditions are imposed , i.e. , the sites and are made nearest - neibours of each other , all the sites are treated on the same footing . for this system the fundamental diagram has been derived exactly both in the cases of parallel and random - sequential updating rules ; these are shown graphically in fig.[fig - nsfunda ] .if the boundaries are open , then a particle can enter from a reservoir and occupy the leftmost site ( ) , with probability , if this site is empty . in this systema particle that occupies the rightmost site ( ) can exit with probability .the asep has been studied extensively in recent years and is now well understood ( see e.g. and references therein ) .in fact its stationary state for different dynamics can be obtained exactly .it shows an interesting phase diagram ( see fig .[ fig_asepphase ] ) and is the prototype for so - called boundary - induced phase transitions .[ fig_asepphase ] shows the generic form of the phase diagram obtained by varying the boundary rates and .one can distinguish three phases , namely ( a ) a low - density phase ( ) , ( b ) a high - density phase ( ) and ( c ) a maximal - current phase ( and ) .the appearance of these three phases can easily be understood . in the low - density phasethe current depends only on the input rate .the input is less efficient than the transport in the bulk of the system or the output and therefore dominates the behaviour of the whole system . in the high - density phasethe output is the least efficient part of the system .therefore the current depends only on . in the maximal current phase , input and outputare more efficient than the transport in the bulk of the system . 
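The three phases and their dependence on the boundary rates, whose physical origin is discussed in the remainder of this subsection, can be reproduced with a short Monte Carlo sketch of the TASEP with open boundaries and random-sequential updating. The system size, run length and the particular entry and exit probabilities used below are illustrative choices.

```python
import numpy as np

def tasep_open(L=100, alpha=0.3, beta=0.8, q=1.0, sweeps=10000, seed=1):
    """Random-sequential TASEP with open boundaries (a minimal sketch).
    alpha: entry probability at the left end, beta: exit probability at the
    right end, q: bulk hopping probability.  Returns the steady-state current
    and density profile, both measured over the second half of the run."""
    rng = np.random.default_rng(seed)
    lattice = np.zeros(L, dtype=np.int8)
    exits, profile, samples = 0, np.zeros(L), 0
    warmup = sweeps // 2
    for sweep in range(sweeps):
        measuring = sweep >= warmup
        for _ in range(L + 1):                 # one sweep = L+1 elementary updates
            i = rng.integers(-1, L)            # i = -1 represents the left reservoir
            if i == -1:
                if lattice[0] == 0 and rng.random() < alpha:
                    lattice[0] = 1
            elif i == L - 1:
                if lattice[i] == 1 and rng.random() < beta:
                    lattice[i] = 0
                    if measuring:
                        exits += 1
            elif lattice[i] == 1 and lattice[i + 1] == 0 and rng.random() < q:
                lattice[i], lattice[i + 1] = 0, 1
        if measuring:
            profile += lattice
            samples += 1
    return exits / (samples * (L + 1)), profile / samples

if __name__ == "__main__":
    for a, b, label in [(0.2, 0.8, "low-density phase"),
                        (0.8, 0.2, "high-density phase"),
                        (0.8, 0.8, "maximal-current phase")]:
        J, rho = tasep_open(alpha=a, beta=b)
        print(f"{label:22s} J = {J:.3f}  bulk density = {rho[len(rho)//2]:.3f}")
```

For random-sequential dynamics one expects J ≈ α(1-α) in the low-density phase, J ≈ β(1-β) in the high-density phase and J ≈ 1/4 in the maximal-current phase, which the sketch reproduces within statistical accuracy.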
In the maximal-current phase the current has reached the largest possible value, corresponding to the maximum of the fundamental diagram of the periodic system. Mean-field theory predicts the existence of a shock or domain wall that separates a macroscopic low-density region at the start-end of the chain from a macroscopic high-density region at the stop-end. The exact solution, on the other hand, gives a linearly increasing density profile. These two results do not contradict each other since the sharp domain wall, due to current fluctuations, performs a random walk along the lattice. The mean-field result therefore corresponds to a snapshot at a given time whereas the exact solution averages over all possible positions of the shock. In ref. a nice physical picture has been developed which explains the structure of the phase diagram not only qualitatively, but also (at least partially) quantitatively. It remains correct even for more sophisticated models. It relates the phase boundaries to properties of the periodic system which can be derived from the fundamental diagram, namely the so-called shock velocity $v_s$ and the collective velocity $v_c$. The shock velocity $v_s$ is the velocity of a domain wall, which in nonequilibrium systems denotes an object connecting two possible stationary states; here these stationary states have densities $\rho_-$ and $\rho_+$, respectively. The collective velocity $v_c$ describes the velocity of the center-of-mass of a local perturbation in a homogeneous, stationary background of density $\rho$. The phase diagram of the open system is then completely determined by the fundamental diagram of the periodic system through an extremal-current principle and is therefore independent of the microscopic dynamics of the model.

(Figure [fig-disorder]: three types of randomness in the hopping probabilities. In (a) the hopping probability at the bottleneck (partially hatched region) is smaller than the normal hopping probability; in (b) the randomness is associated with the particles, each of which carries its own time-independent hopping probability; in (c) the randomness arises from the coupling of the dynamics of the hopping particles (filled circles) with another species of particles that represent a specific type of signal molecule, the two possible states of the latter being represented by open and filled squares.)

At least three different types of randomness of the hopping rates have been considered so far in the context of ASEP-type models.

(a) First, the randomness may be associated with the _track_ on which the motile elements move; typical examples are the bottlenecks created, in intra-cellular transport in neurons, by _tau_, a microtubule-associated protein. The inhomogeneities of some DNA and m-RNA strands can be well approximated as random and, hence, so can the hopping rates of the motile elements on the nucleotide-based tracks. As shown schematically in fig.[fig-disorder](a), the normal hopping probability at unblocked sites is larger than that at the bottleneck. This type of randomness in the hopping probabilities, which may be treated as a quenched (i.e., time-independent or "frozen") defect of the track, leads to interesting phase-segregation phenomena (see ref. for a review).

(b) The second type of randomness is associated with the hopping _motile elements_, rather than with the track. For example, the normal hopping probabilities of the motile elements may vary randomly from one element to another (see fig.[fig-disorder](b)), e.g., in randomly mutated kinesins; the hopping rate of each motile element is, however, "quenched", i.e., independent of time. In this case the system is known to exhibit coarsening of queues of the motile elements, and the phenomenon has some formal similarities with Bose-Einstein condensation (reviewed in ref.). Note that in the case of randomness of type (a) the hopping probability depends only on the spatial location on the track, independent of the identity of the hopping motile element. On the other hand, in the case of randomness of type (b) the hopping probability depends on the hopping motile element, irrespective of its spatial location on the track.

(c) In contrast to the two types of randomness ((a) and (b)) considered above, the randomness in the hopping probabilities of the motile elements in some situations arises from the coupling of their dynamics with that of another non-conserved dynamical variable. For example, the hopping probability of a motile element may depend on the presence or absence of a specific type of signal molecule in front of it (see fig.[fig-disorder](c)); such situations arise in traffic of ants, whose movements are strongly dependent on the presence or absence of pheromone on the trail ahead of them. Therefore, in such models with periodic boundary conditions, a given motile element may hop from the same site, at different times, with different hopping probabilities.

Two extremely idealized mechanisms of motility of single motors have been developed in the literature. The _power-stroke_ mechanism is analogous to the power strokes that drive macroscopic motors. On the other hand, the _Brownian ratchet_ mechanism is unique to the microscopic molecular motors. Let us now consider a Brownian particle subjected to a _time-dependent_ potential, in addition to the viscous drag (or frictional force). The potential switches between the two forms (i) and (ii) shown in fig.[fig-ratgauss].
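The rectification produced by such a flashing potential, whose physical origin is explained in detail in the following paragraphs, can be illustrated with a minimal numerical sketch. The sawtooth shape, barrier height, switching times, temperature and friction coefficient used below are illustrative assumptions of this sketch, not parameters of any real motor.

```python
import numpy as np

def flashing_ratchet(n_periods=500, t_on=1.0, t_off=1.0, dt=1e-3,
                     ell=1.0, a=0.2, U0=5.0, gamma=1.0, kBT=0.1, seed=3):
    """Overdamped Brownian particle in a sawtooth potential of period ell that
    is periodically switched on (duration t_on) and off (duration t_off).
    The asymmetry parameter a < 0.5 places each potential minimum close to the
    barrier on its right, so the flashing produces a net rightward drift.
    All parameter values are illustrative, not fitted to any motor."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2.0 * kBT * dt / gamma)

    def force(x):                              # force from the sawtooth potential
        u = x % ell
        return -U0 / (a * ell) if u < a * ell else U0 / ((1.0 - a) * ell)

    x, t_cycle = 0.0, t_on + t_off
    steps_per_cycle = int(round(t_cycle / dt))
    for _ in range(n_periods):
        for step in range(steps_per_cycle):
            on = (step * dt) < t_on            # potential on during the first t_on
            f = force(x) if on else 0.0
            x += f * dt / gamma + noise * rng.standard_normal()
    return x / (n_periods * t_cycle)           # mean drift velocity

if __name__ == "__main__":
    v_flashing = flashing_ratchet()
    v_always_on = flashing_ratchet(t_off=0.0)  # no flashing: no net drift expected
    print("drift with flashing     :", round(v_flashing, 4))
    print("drift, potential fixed  :", round(v_always_on, 4))
```

With the asymmetry parameter a < 0.5 the sketch yields a positive drift when the potential is flashed, and essentially no drift when the sawtooth potential is kept permanently switched on, in line with the mechanism described below.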
the sawtooth form ( i ) is spatially _ periodic _ where each period has an _ asymmetric _ shape .in contrast , the form ( ii ) is flat so that the particle does not experience any external force imposed on it when the potential has the form ( ii ) .note that , in the left part of each well in ( i ) the particle experiences a rightward force whereas in the right part of the same well it is subjected to a leftward force .moreover , the spatially averaged force experienced by the particle in each well of length is because of the spatially periodic form of the potential ( i ) .what makes this problem so interesting is that , in spite of vanishing average force acting on it , the particle can still exhibit directed , albeit noisy , rightward motion . in order to understand the underlying physical principles ,let us assume that initially the potential has the shape ( i ) and the particle is located at a point on the line that corresponds to the bottom of a well .now the potential is switched off so that it makes a transition to the form ( ii ) .immediately , the free particle begins to execute a brownian motion and the corresponding gaussian profile of the probability distribution begins to spread with the passage of time . if the potential is again switched on before the gaussian profile gets enough time for spreading beyond the original well , the particle will return to its original initial position .but , if the period during which the potential remains off is sufficiently long , so that the gaussian probability distribution has a non - vanishing tail overlapping with the neighbouring well on the right side of the original well , then there is a small non - vanishing probability that the particle will move forward towards right by one period when the potential is switched on . in the case of cytoskeleton - based motors like kinesin and dynein ,this energy is supplied by the hydrolysis of atp molecules to adp ; thus , the mechanical movement is coupled to a chemical reaction . in this mechanism , the particle moves forward not because of any force imposed on it but because of its brownian motion .the system is , however , not in equilibrium because energy is pumped into it during every period in switching the potential between the two forms . in other words ,the system works as a rectifier where the brownian motion , in principle , could have given rise to both forward and backward movements of the particle in the multiples of , but the backward motion of the particle is suppressed by a combination of ( a ) the time dependence and ( b ) spatial asymmetry ( in form ( i ) ) of the potential .in fact , the direction of motion of the particle can be reversed by replacing the potential ( i ) by the potential ( iii ) shown in fig.[fig - ratbidir ] .the spatial asymmetry of the sawtooth potential arises from the polar nature of the microtubule and actin filamentary tracks .the mechanism of directional movement discussed above is called a brownian ratchet .the concept of brownian ratchet was popularized by feynman through his lectures although , historically , it was introduced by smoluchowski .effects of quenched ( i.e. 
, time - independent ) disorder on the properties of brownian ratchets have been considered by several authors .quenched disorder can arise in brownian ratchets , for example , from + ( i ) random variation of the heights ( or depths ) of the sawtooth potential from one site to another where all the sawteeth have the same type of asymmetry , + ( i ) a random mixture of forward and reversed sawteeth where the heights of all the sawteeth is identical .the nature of disorder in real molecular motors , even if driven by a brownian - ratchet mechanism , may be a combination of these two types of idealized disorder .suppose is the frequency of both the transitions from ( i ) to ( ii ) and ( ii ) to ( i ) forms of the potential .also , let be the probability of finding a defect , i.e. , a reversed sawtooth in case ( ii ) . in that case , the effective drift and effective diffusion coefficient exhibit three different regions on the phase diagram including some anomalous behaviour .helicases and polymerases are the two classes of nucleotide - based motors that have been the main focus of experimental investigations . in this section ,we discuss only the motion of the ribosome along the m - rna track . historically , this problem is one of the first where tasp - like model was successfully applied to a biological system .the synthesis of proteins and polypetids in a living cell is a complex process .special machines , so - called _ ribosomes _ , translate the genetic information ` stored ' in the _ messenger - rna ( mrna ) _ into a program for the synthesis of a protein .mrna is a long ( linear ) molecule made up of a sequence of triplets of nucleotides ; each triplet is called a _codon_. the genetic information is encoded in the sequence of codons . a ribosome , that first gets attached to the mrna chain , `` reads '' the codons as it moves along the mrna chain , recruits the corresponding amino acids and assembles these amino acids in the sequence so as to synthesize the protein for which the `` construction plan '' was stored in the mrna .after executing the synthesis as per the plan , it gets detached from the mrna . thus , the process of `` translation '' of genetic information stored in mrna consists of three steps : ( i ) _ initiation _ : attachment of a ribosome at the `` start '' end of the mrna , ( ii ) _ elongation _ : of the polypeptide ( protein ) as the ribosome moves along the mrna , and ( iii ) _ termination _ : ribosome gets detached from the mrna when it reaches the `` stop '' codon .let us denote each of the successive codons by the successive sites of a one - dimensional lattice where the first and the last sites correspond to the start and stop codons .the ribosomes are much bigger ( 20 - 30 times ) than the codons .therefore , neighbouring ribosomes attached to the same mrna can not read the same information or overtake each other .in other words , any given site on the lattice may be covered by a single ribosome or none .let us represent each ribosome by a rigid rod of length .if the rod representing the ribosome has its left edge attached to the i - th site of the lattice , it is allowed to move to the right by one lattice spacing , i.e. , its left edge moves to the site provided the site is empty . 
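The translation process described above can be sketched as a TASEP-like model with extended particles, each ribosome covering several codons. In the code below the lattice length, the ribosome size and the initiation, elongation and termination probabilities are illustrative assumptions, and the bookkeeping in terms of left edges is merely one convenient implementation.

```python
import numpy as np

def ribosome_tasep(L=300, ell=12, alpha=0.1, beta=0.5, q=1.0,
                   sweeps=20000, seed=4):
    """TASEP with extended particles as a sketch of ribosome traffic: each
    'ribosome' covers ell consecutive codons and is tracked by its left edge.
    It enters at the start codon when the first ell sites are empty
    (probability alpha), hops forward when the site just beyond its right
    edge is empty (probability q), and leaves from the stop codon with
    probability beta.  Returns the protein production rate per time step."""
    rng = np.random.default_rng(seed)
    covered = np.zeros(L, dtype=np.int8)     # 1 if a codon is covered by a ribosome
    left_edges = []                          # left-edge positions of bound ribosomes
    completed, warmup = 0, sweeps // 2
    for sweep in range(sweeps):
        for _ in range(len(left_edges) + 1):
            k = rng.integers(len(left_edges) + 1)
            if k == len(left_edges):                         # initiation attempt
                if covered[:ell].sum() == 0 and rng.random() < alpha:
                    left_edges.append(0)
                    covered[:ell] = 1
            else:                                            # elongation/termination
                i = left_edges[k]
                if i == L - ell:                             # at the stop codon
                    if rng.random() < beta:
                        covered[i:] = 0
                        left_edges.pop(k)
                        if sweep >= warmup:
                            completed += 1
                elif covered[i + ell] == 0 and rng.random() < q:
                    covered[i] = 0
                    covered[i + ell] = 1
                    left_edges[k] = i + 1
    return completed / (sweeps - warmup)

if __name__ == "__main__":
    print("protein production rate:", round(ribosome_tasep(), 3))
```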
in the special case this model reduced to the tasep .although the model was originally proposed in the late sixties , significant progress in its analytical treatment for the general case of arbitrary could be made only three decades later ; even the effects of quenched disorder has also been considered in the recent literature . as mentioned above ,a ribosom is much bigger than a base triplet .however , modifying the asep by taking into account particles that occupy more than one lattice site does not change the structure of phase diagram .physically this can be understood from the domain - wall picture and the extremal - current principle .intracellular transport is carried by molecular motors which are proteins that can directly convert the chemical energy into mechanical energy required for their movement along filaments constituting what is known as the cytoskeleton .three superfamilities of these motors are kinesin , dynein and myosin .members of the majority of the familities have two heads whereas only a few families have single - headed members .most of the kinesins and dyneins are like porters in the sense that these move over long distances along the filamentary tracks without getting completely detached ; such motors are called _processive_. on the other hand , the conventional myosins and a few unconventional ones are nonprocessive ; they are like rowers .but , a few families of unconventional myosins are processive .these cytoskeleton - based molecular motors play crucially important biological functions in axonal transport in neurons , intra - flagellar transport in eukaryotic flagella , etc .the relation between the architectural design of these motors and their transport function has been investigated both experimentally and theoretically for quite some time . however , in this review we shall focus mostly on the effects of mutual interactions ( competition as well as cooperation ) of these motors on their collective spatio - temporal organisation and the biomedical implications of such organisations .often a single microtubule ( mt ) is used simultaneously by many motors and , in such circumstances , the inter - motor interactions can not be ignored .fundamental understanding of these collective physical phenomena may also expose the causes of motor - related diseases ( e.g. , alzheimer s disease ) thereby helping , possibly , also in their control and cure .the bio - molecular motors have opened up a new frontier of applied research- `` bio - nanotechnology '' .a clear understanding of the mechanisms of these natural machines will give us clue as to the possible design principles that can be utilized to synthesize artificial nanomachines .derenyi and collaborators developed one - dimensional models of interacting brownian motors , each of which is subjected to a time - dependent potential of the form shown in fig.[fig - ratgauss ] .they modelled each motor as a _rigid rod _ and formulated the dynamics through langevin equations of the form ( [ eq - lan3 ] ) for each such rod assuming the validity of the overdamped limit ; the mutual interactions of the rods were incorporated through the mutual exclusion .however , in this section we shall focus attention on those models where the dynamics is formulated in terms of `` rules '' for undating in discrete time steps . the model considered by aghababaieet al. 
is not based on tasep , but its dynamics is a combination of brownian ratchet and update rules in discrete time steps .more precisely , this model is a generalization of tasep , rather than tasep , where the hopping probabilities are obtained from the local potential which itself is time - dependent and is assumed to have the form shown in fig.[fig - ratgauss ] . in this model , the filamentary track is discretized in the spirit of the particle - hopping models described above and the motors are represented by _ field - driven _ particles ; no site can accomodate more than one particle at a time .each time step consists of either an attempt of a particle to hop to a neighbouring site or an attempt that can result in switching of the potential from flat to sawtooth form or vice - versa . both forward and backward movement of the particles are possible and the hopping probability of every particleis computed from the instantaneous local potential . however , neither attachment of new particles nor complete detachment of existing particles were allowed .the fundamental diagram of the model , computed imposing periodic boundary conditions , is very similar to those of tasep .this observation indicates that further simplification of the model proposed in ref. is possible to develope a minimal model for interacting molecular motors .indeed , the detailed brownian ratchet mechanism , which leads to a noisy forward - directed movement of the _ field - driven _ particles in the model of aghababaie et al . , is replaced in some of the more recent theoretical models by a tasep - like probabilitic forward hopping of _ self - driven _particles . in these simplied versions, none of the particles is allowed to hop backward and the forward hopping probability is assumed to capture most of the effects of biochemical cycle of the enzymatic activity of the motor .the explicit dynamics of the model is essentially an extension of that of the asymmetric simple exclusion processes ( asep ) ( see section [ sec - asep ] ) that includes , in addition , langmuir - like kinetics of adsorption and desorption of the motors . model proposed by parmeggiani et al. + in the model of parmeggiani et al . , the molecular motors are represented by particles whereas the sites for the binding of the motors with the cytoskeletal tracks ( e.g. , microtubules ) are represented by a one - dimensional discrete lattice .just as in tasep , the motors are allowed to hop forward , with probability , provided the site in front is empty . however , unlike tasep , the particles can also get `` attached '' to an empty lattice site , with probability , and `` detached '' from an occupied site , with probability ( see fig.[fig - frey ] ) from any site except the end points .the state of the system was updated in a random - sequential manner . carrying out monte - carlo simulations of the model , applying open boundary conditions ,parmeggiani et al. demonstrated a novel phase where low and high density regimes , separated from each other by domain walls , coexist . using a mean - field theory ( mft ) , they interpreted this spatial organization as traffic jam of molecular motors .this model has interesting mathematical properties which are of fundamental interest in statistical physics but are beyond the scope of this review . 
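A minimal sketch of a TASEP supplemented by Langmuir-like attachment and detachment in the bulk, in the spirit of the model described above, is given below. The parameter values are illustrative assumptions, chosen only so that the coexistence of low- and high-density regions, separated by a domain wall, becomes visible in the time-averaged density profile.

```python
import numpy as np

def tasep_langmuir(L=200, q=1.0, alpha=0.1, beta=0.1, omega_a=0.002,
                   omega_d=0.002, sweeps=20000, seed=5):
    """Sketch of a TASEP with Langmuir kinetics: besides entry (alpha), exit
    (beta) and forward hopping (q), a particle can attach to any empty bulk
    site with probability omega_a and detach from any occupied bulk site with
    probability omega_d per update.  Returns the time-averaged density
    profile, in which low- and high-density regions can coexist."""
    rng = np.random.default_rng(seed)
    lattice = np.zeros(L, dtype=np.int8)
    profile, samples, warmup = np.zeros(L), 0, sweeps // 2
    for sweep in range(sweeps):
        for _ in range(L):
            i = rng.integers(L)
            if i == 0 and lattice[0] == 0:           # entry at the left end
                if rng.random() < alpha:
                    lattice[0] = 1
                continue
            if i == L - 1 and lattice[i] == 1:       # exit at the right end
                if rng.random() < beta:
                    lattice[i] = 0
                continue
            if lattice[i] == 1:
                if 0 < i < L - 1 and rng.random() < omega_d:
                    lattice[i] = 0                   # detachment from the bulk
                elif i < L - 1 and lattice[i + 1] == 0 and rng.random() < q:
                    lattice[i], lattice[i + 1] = 0, 1
            elif 0 < i < L - 1 and rng.random() < omega_a:
                lattice[i] = 1                       # attachment in the bulk
        if sweep >= warmup:
            profile += lattice
            samples += 1
    return profile / samples

if __name__ == "__main__":
    rho = tasep_langmuir()
    print("mean density in successive quarters of the track:",
          [round(part.mean(), 2) for part in np.array_split(rho, 4)])
```

With the (illustrative) choice of equal and small attachment and detachment probabilities the profile is low-density in the left part of the track and high-density in the right part, a numerical signature of the coexistence phase discussed above.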
model proposed by klumpp et al .+ a cylindrical geometry of the model system ( see fig.[fig - lipowsky ] ) was considered by lipowsky , klumpp and collaborators to mimic the microtubule tracks in typical tubular neurons .the microtubule filament was assumed to form the axis of the cylinder whereas the free space surrounding the axis was assumed to consist of channels each of which was discretized in the spirit of lattice gas models .they studied concentration profiles and the current of free motors as well as those bound to the filament by imposing a few different types of boundary conditions .this model enables one to incorporate the effects of exchange of populations between two groups , namely , motors bound to the axial filament and motors which move diffusively in the cylinder .they have also compared the results of these investigations with the corresponding results obtained in a different geometry where the filaments spread out radially from a central point ( see fig.[fig - lipo2 ] ) .+ model proposed by klein et al .it is well known that , in addition to generating forces and carrying cargoes , cytoskeletal motors can also depolymerize the filamentary track on which they move processively .a model for such filament depolymerization process has been developed by klein et al. by extending the model of intra - cellular traffic proposed earlier by parmeggiani et al . .+ the model of klein et al. is shown schematically in fig .[ fig - klein ] .the novel feature of this model , in contrast to the similar models of intracellular transport , is that the lattice site at the tip of a filament is removed with a probability per unit time provided it is occupied by a motor ; the motor remains attached to the newly exposed tip of the filament with probability ( or remains bound with the removed site with probability ) .thus , may be taken as a measure of the processivity of the motors .this model clearly demonstrated a dynamic accumulation of the motors at the tip of the filament arising from the processivity ; a motor which was bound to the depolymerizing monomer at the tip of the filament is captured by the monomer at the newly exposed tip . model proposed by kruse and sekimoto : kruse and sekimoto proposed a particle - hopping model for motor - induced relative sliding of two filamentary motor tracks .the model is shown schematically in fig.[fig - sekimoto ] .each of the two - headed motors is assumed to consist of two particles connected to a common neck and are capable of binding with two filaments provided the two binding sites are closest neighbours as shown in the figure .each particle can move forward following a tasep - like rule and every movement of this type causes sliding of the two filaments by one single unit .the most important result of this investigation is that the average relative velocity of the filaments is a non - monotonic function of the concentration of the motors .the models of intracellular traffic described so far are essentially extensions of the asymmetric simple exclusion processes ( asep ) that includes langmuir - like kinetics of adsorption and desorption of the motors . in reality ,a motor protein is an enzyme whose mechanical movement is loosely coupled with its biochemical cycle . 
in a recent work , we have considered specifically the _ single - headed _ kinesin motor , kif1a ; the movement of a single kif1a motor was modelled earlier with a brownian ratchet mechanism .in contrast to the earlier models of molecular motor traffic , which take into account only the mutual interactions of the motors , our model explicitly incorporates also the brownian ratchet mechanism of individual kif1a motors , including its biochemical cycle that involves _ adenosine triphosphate(atp ) hydrolysis_. the asep - like models successfully explain the occurrence of shocks .but since most of the bio - chemistry is captured in these models through a single effective hopping rate , it is difficult to make direct quantitative comparison with experimental data which depend on such chemical processes .in contrast , the model we proposed in ref . incorporates the essential steps in the biochemical processes of kif1a as well as their mutual interactions and involves parameters that have one - to - one correspondence with experimentally controllable quantities .the biochemical processes of kinesin - type molecular motors can be described by the four states model shown in fig .[ fig - cycle ] : bare kinesin ( k ) , kinesin bound with atp ( kt ) , kinesin bound with the products of hydrolysis , i.e. , adenosine diphosphate(adp ) and phosphate ( kdp ) , and , finally , kinesin bound with adp ( kd ) after releasing phosphate .recent experiments revealed that both k and kt bind to the mt in a stereotypic manner ( historically called `` strongly bound state '' , and here we refer to this mechanical state as `` state 1 '' ) .kdp has a very short lifetime and the release of phosphate transiently detaches kinesin from mt .then , kd re - binds to the mt and executes brownian motion along the track ( historically called `` weakly bound state '' , and here referred to as `` state 2 '' ) . finally , kd releases adp when it steps forward to the next binding site on the mt utilizing a brownian ratchet mechanism , and thereby returns to the state k. + thus , in contrast to the earlier asep - like models , each of the self - driven particles , which represent the individual motors kif1a , can be in two possible internal states labelled by the indices and . in other words ,each of the lattice sites can be in one of three possible allowed states ( fig .[ fig2 ] ) : empty ( denoted by ) , occupied by a kinesin in state , or occupied by a kinesin in state .+ for the dynamical evolution of the system , one of the sites is picked up randomly and updated according to the rules given below together with the corresponding probabilities ( fig .[ fig2 ] ) : the probabilities of detachment and attachment at the two ends of the mt may be different from those at any bulk site .we chose and , instead of , as the probabilities of attachment at the left and right ends , respectively .similarly , we took and , instead of , as probabilities of detachments at the two ends ( fig .[ fig2 ] ) . 
finally , and , instead of , are the probabilities of exit of the motors through the two ends by random brownian movements .it is possible to relate the rate constants , and with the corresponding physical processes in the brownian ratchet mechanism of a single kif1a motor .suppose , just like models of flashing ratchets , the motor `` sees '' a time - dependent effective potential which , over each biochemical cycle , switches back and forth between ( i ) a periodic but asymmetric sawtooth like form and ( ii ) a constant .the rate constant in our model corresponds to the rate of the transition of the potential from the form ( i ) to the form ( ii ) .the transition from ( i ) to ( ii ) happens soon after atp hydrolysis , while the transition from ( ii ) to ( i ) happens when atp attaches to a bare kinesin .the rate constant of the motor in state captures the brownian motion of the free particle subjected to the flat potential ( ii ) .the rate constants and are proportional to the overlaps of the gaussian probability distribution of the free brownian particle with , respectively , the original well and the well immediately in front of the original well of the sawtooth potential . good estimates for the parameters of the model could be extracted by analyzing the empirical data .for example , ms is independent of the kinesin concentration . on the other hand , , which depends on the kinesin concentration ,could be in the range ms ms .similarly , , ms , ms and ms .let us denote the probabilities of finding a kif1a molecule in the states and at the lattice site at time by the symbols and , respectively . in mean - field approximationthe master equations for the dynamics of motors in the bulk of the system are given by the corresponding equations for the boundaries are also similar . .[tab-1mol]predicted transport properties from our model in the low - density limit for four different atp concentrations . is calculated by averaging the intervals between attachment and detachment of each kif1a . [cols="^,^,^,^,^",options="header " , ] in our model the right - moving ( left - moving ) particles , represented by ( ) , are never allowed to move towards left ( right ) ; these two groups of particles are the analogs of the outbound and nest - bound ants in a _ bi - directional _ traffic on the same trail .thus , no u - turn is allowed .in addition to the tasep - like hopping of the particles onto the neighboring vacant sites in the respective directions of motion , the and particles on nearest - neighbour sites and facing each other are allowed to exchange their positions , i.e. , the transition takes place , with the probability .this might be considered as a minimal model for the motion of ants on a hanging cable as shown in fig.[fig - antphoto ] . when a outbound ant and a nest - bound ant face each other on the upper side of the cable , they slow down and , eventually , pass each other after one of them , at least temporarily , switches over to the lower side of the cable .similar observations have been made for normal ant - trails where ants pass each other after turning by a small angle to avoid head - on collision . 
In our model, as commonly observed in most real ant-trails, none of the ants is allowed to overtake another moving in the same direction. We now introduce a third species of particles, intended to capture the essential features of the pheromone. These pheromone particles are deposited on the lattice by the right-moving and left-moving ants when the latter hop out of a site; an existing pheromone particle at a site disappears when an ant arrives at the same location. The pheromone particles cannot hop but can _evaporate_, with a certain probability per unit time, independently from the lattice. None of the lattice sites can accommodate more than one particle at a time. The state of the system is updated in a _random-sequential_ manner. Because of the periodic boundary conditions, the densities of the right-moving and of the left-moving ants are conserved. In contrast, the density of the pheromone particles is a non-conserved variable. The distinct initial states and the corresponding final states for pairs of nearest-neighbor sites are shown in fig.[fig-updating] together with the respective transition probabilities.

(Figure [fig-prlfd]: fundamental diagrams of the PRL model for two sets of hopping probabilities; the non-monotonic variation of the average speed of the ants with their density on the trail gives rise to the unusual shape of the fundamental diagrams.)

(Figure: the typical cluster size in the PRL model plotted against time for two different values of the exchange probability, both at the same total density.)

Suppose the total numbers of right-moving and left-moving ants are fixed; dividing by the system size gives the corresponding densities, whose sum is the total density, and the composition is specified by the fraction of ants that are right-movers. The corresponding fluxes are defined analogously for the two directions. In the two limits in which all ants move in the same direction, this model reduces to the model reported in ref. and reviewed in section IX.A, which was motivated by uni-directional ant-traffic and is closely related to the bus-route models. One unusual feature of this PRL model is that the flux does _not_ vanish in the _dense-packing_ limit. In fact, in the _full-filling_ limit, the _exact_ non-vanishing flux arises only from the exchange of oppositely moving ants, _irrespective of the magnitudes of_ the two hopping probabilities. In the special case in which the two hopping probabilities are equal, the hopping of the ants becomes independent of the pheromone; this special case of the PRL model is identical to the AHR model. A simple mean-field approximation (MFA) yields estimates for the fluxes, irrespective of the evaporation probability, at any arbitrary value of the exchange probability. We found that the results of the MFA agree reasonably well with the exact values of the flux but deviate more from them at small values of the exchange probability, indicating the presence of stronger correlations in that regime. For the generic case of pheromone-dependent hopping, the flux in the PRL model depends on the evaporation rate of the pheromone particles.
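A minimal sketch of these update rules is given below. The pheromone-dependent hopping probabilities are assumed, as in the unidirectional model, to take a larger value Q when the target site carries pheromone and a smaller value q otherwise; the symbols Q, q, K (position exchange of facing ants) and f (pheromone evaporation), as well as all numerical values, are labels and assumptions of this sketch rather than quantities quoted from the original work.

```python
import numpy as np

R, Lp, EMPTY = 1, 2, 0          # right-mover, left-mover, empty site

def ant_traffic_bidirectional(Lsize=200, n_right=20, n_left=20, Q=0.75, q=0.25,
                              K=0.5, f=0.005, sweeps=10000, seed=7):
    """Sketch of the bidirectional ant-traffic model on a ring.  Right- and
    left-movers hop onto an empty neighbouring site in their own direction
    with probability Q if that site carries pheromone and q < Q otherwise
    (an assumption carried over from the unidirectional model); a facing R-L
    pair exchanges positions with probability K; an ant leaving a site
    deposits pheromone there, an arriving ant erases it, and free pheromone
    evaporates with probability f.  Returns the average flux of right-movers."""
    rng = np.random.default_rng(seed)
    ants = np.zeros(Lsize, dtype=np.int8)
    sites = rng.choice(Lsize, size=n_right + n_left, replace=False)
    ants[sites[:n_right]] = R
    ants[sites[n_right:]] = Lp
    pher = np.zeros(Lsize, dtype=np.int8)
    hops, warmup = 0, sweeps // 2
    for sweep in range(sweeps):
        for _ in range(Lsize):
            i = rng.integers(Lsize)
            if ants[i] == EMPTY:
                if pher[i] == 1 and rng.random() < f:
                    pher[i] = 0                     # pheromone evaporation
                continue
            d = 1 if ants[i] == R else -1
            j = (i + d) % Lsize
            if ants[j] == EMPTY:
                p_hop = Q if pher[j] == 1 else q
                if rng.random() < p_hop:
                    ants[j], ants[i] = ants[i], EMPTY
                    pher[i], pher[j] = 1, 0         # drop pheromone, arrival erases it
                    if d == 1 and sweep >= warmup:
                        hops += 1
            elif d == 1 and ants[j] == Lp and rng.random() < K:
                ants[i], ants[j] = Lp, R            # facing pair exchanges positions
                if sweep >= warmup:
                    hops += 1
    return hops / ((sweeps - warmup) * Lsize)

if __name__ == "__main__":
    print("flux of right-movers:", round(ant_traffic_bidirectional(), 4))
```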
in fig .[ fig - prlfd ] we plot the fundamental diagrams for wide ranges of values of ( in fig .[ fig - prlfd](a ) ) and ( in fig .[ fig - prlfd](b ) ) , corresponding to one set of hopping probabilities .first , note that the data in figs .[ fig - prlfd ] are consistent with the physically expected value of , because in the dense packing limit only the exchange of the oppositely moving particles contributes to the flux .moreover , the sharp rise of the flux over a narrow range of observed in both fig .[ fig - prlfd ] ( a ) and ( b ) arise from the nonmonotonic variation of the average speed with density , an effect which was also observed in our earlier model for uni - directional ant traffic . in the special limits and , this model reduces to our single - lane model of unidirectional ant traffic ; therefore , in these limits , over a certain regime of density ( especially at small ) , the particles are expected to form `` loose '' ( i.e. , non - compact ) clusters . therefore , in the absence of encounter with oppositely moving particles , , the coarsening time for the right - moving and left - moving particles would grow with system size as and . , , , , , and .the black and grey dots represent the right - moving and left - moving ants , respectively . ,scaledwidth=37.5% ] in the prl model _ with periodic boundary conditions _ , the oppositely moving ` loose' clusters `` collide '' against each other periodically where the time gap between the successive collisions increases _ linearly _ with the system size following ; we have verified this scaling relation numerically ( see the typical space - time diagram in fig.[fig - shred ] ) . during a collision each loose cluster `` _shreds _ ''( i.e. , cuts into pieces ) the oppositely moving cluster ; both clusters shred the other equally if .however , for all , the minority cluster suffers more severe shredding than that suffered by the majority cluster because each member of a cluster contributes in the shredding of the oppositely moving cluster . in small systemsthe `` shredded '' clusters get opportunity for significant re - coarsening before getting shredded again in the next encounter with the oppositely moving particles .but , in sufficiently large systems , shredded appearance of the clusters persists .however , we observed practically no difference in the fundamental diagrams for and .following the methods of ref. , we have computed starting from random initial conditions .the data ( fig . [ fig - rvst](a ) ) corresponding to are consistent with the asymptotic growth law . in sharp contrast , for , saturates to a much smaller value ( fig .[ fig - rvst](b ) ) that is consistent with highly shredded appearance of the corresponding clusters .thus , coarsening and shredding phenomena compete against each other and this competition determines the overall spatio - temporal pattern . 
Therefore, in the late stage of evolution, the system settles into a state where, because of the alternating occurrence of shredding and coarsening, the typical size of the clusters varies periodically. Moreover, we find that, for given densities, increasing the exchange probability leads to a sharper _speeding up_ of the clusters during collision, so long as it is not too small. Both the phenomena of shredding and of speeding up during collisions of the oppositely moving loose clusters arise from the fact that, during such collisions, the dominant process is the exchange of positions of oppositely-moving ants that face each other.

It is possible to extend the model of uni-directional ant-traffic to a minimal model of two-lane bi-directional ant-traffic. In such models of bi-directional ant-traffic the trail consists of _two_ lanes of sites. These two lanes need not be physically separate rigid lanes in real space. In the initial configuration, a randomly selected subset of the ants move in the clockwise direction in one lane while the others move counterclockwise in the other lane. The numbers of ants moving in the clockwise direction and counterclockwise in their respective lanes are fixed, i.e., ants are not allowed to take U-turns.

(Figure: head-on encounter of two oppositely moving ants in the bi-directional model; this process does not have any analog in the model of uni-directional ant-traffic.)

(Figure [fig-flux]: fundamental diagrams of the bi-directional ant-traffic model for several different values of the pheromone evaporation probability and for two different choices of the parameters (left and right panels). The densities in both directions are identical and therefore only the graphs for one direction are shown. The inset in the right panel is a magnified re-plot of the same data, over a narrow range of density, to emphasize that the unusual trend of variation of flux with density in this case is similar to that observed in the left panel. The lines are merely guides to the eye; in all cases the curves plotted with filled symbols exhibit non-monotonic behaviour in the speed-density relation.)

The rules governing the dropping and evaporation of pheromone in the model of bi-directional ant-traffic are identical to those in the model of uni-directional traffic. The _common_ pheromone trail is created and reinforced by both the outbound and nestbound ants. The probabilities of forward movement of the ants in the model of bi-directional ant-traffic are also natural extensions of the corresponding ones in the uni-directional traffic. When an ant (in either of the two lanes) does _not_ face any other ant approaching it from the opposite direction, the likelihood of its forward movement onto the ant-free site immediately in front of it depends on whether or not it finds pheromone ahead, exactly as in the uni-directional model. Finally, if an ant finds another oncoming ant just in front of it, as shown in fig.[fig-modeldef2], it moves forward onto the next site with a separate, third probability. Since ants do not segregate in perfectly well defined lanes, head-on encounters of oppositely moving individuals occur quite often, although the frequency of such encounters and the lane discipline vary from one species of ants to another. In reality, two ants approaching each other feel the hindrance, turn by a small angle to avoid a head-on collision and, eventually, pass each other. At first sight it may appear that the ants in our model follow perfect lane discipline and are, hence, unrealistic. However, that is not true. The violation of lane discipline and the head-on encounters of oppositely moving ants are captured, effectively, in an indirect manner by an appropriate choice of this third probability. But a left-moving (right-moving) ant _cannot_ overtake another left-moving (right-moving) ant immediately in front of it in the same lane. It is worth mentioning that even in the limit in which head-on encounters cause no slowing down at all, the traffic dynamics on the two lanes would remain coupled, because the pheromone dropped by the outbound ants also influences the nestbound ants and vice versa.

Fig.[fig-flux] shows fundamental diagrams for the two relevant cases and different values of the evaporation probability, for equal densities on both lanes. In both cases the unusual behaviour related to a non-monotonic variation of the average speed with density, as in the uni-directional model, can be observed. An additional feature of the fundamental diagram of the bi-directional ant-traffic model is the occurrence of a plateau region. This plateau formation is more pronounced in one of the two cases, where it appears for all values of the evaporation probability. Similar plateaus have been observed earlier in models related to vehicular traffic where randomly placed bottlenecks slow down the traffic in certain locations along the route. The experimental data available initially were not accurate enough to test the predictions mentioned above. However, more accurate recent data exhibit a non-monotonic variation of the average speed with density, thereby confirming our theoretical prediction. One of the interesting open questions, which requires careful modelling, is the following: how does a forager ant, which gets displaced from a trail, decide the correct direction on rejoining the trail? More specifically, an ant carrying food should be nest-bound when it rejoins the trail, to save time and to minimize the risk of an encounter with a predator.
in other words , do the pheromone trails have some `` polarity '' ( analogous to the polarity of microtubules and actin , the filamentary tracks on which the cytoskeletal motors move ) ? on the basis of recent experimental observations , it has been claimed that the trail geometry gives rise to an effective polarity of the ant trails .however , other mechanisms for polarity of the trails are also possible .although there are some superficial similarities between the trafic - like collective phenomena in ant - trails and the pedestrian traffic on trails , there are also some crucial differences . at present , there are very few models which can account for all the observed phenomena in completely satisfactory manner .we present only a brief overview of the collective effects and self - organization ; for a more comprehensive discussion , see ref. .* jamming * : at large densities various kinds of jamming phenomena occur , e.g.when the flow is limited by a door or narrowing .therefore , this kind of jamming does not depend strongly on the microscopic dynamics of the particles , but is typical for a bottleneck situation .it is important for practical applications , especially evacuation simulations .furthermore , in addition to the flow reduction , effects like arching , known from granular materials , play an important role .jamming also occurs where two groups of pedestrians mutually block each other .* lane formation * : in counterflow , i.e. two groups of people moving in opposite directions , a kind of spontaneous symmetry breaking occurs ( see fig .[ fig_oszi]a ) .the motion of the pedestrians can self - organize in such a way that ( dynamically varying ) lanes are formed where people move in just one direction . in this way ,strong interactions with oncoming pedestrians are reduced and a higher walking speed is possible .a ) b ) * oscillations * : in counterflow at bottlenecks , e.g. doors , one can observe oscillatory changes of the direction of motion .once a pedestrian is able to pass the bottleneck it becomes easier for others to follow in the same direction until somebody is able to pass ( e.g. through a fluctuation ) the bottleneck in the opposite direction ( see fig . [fig_oszi]b ) .* patterns at intersections * : at intersections various collective patterns of motion can be formed .short - lived roundabouts make the motion more efficient since they allow for a `` smoother '' motion .* panics * : in panic situations , many counter - intuitive phenomena can occur . in the faster -is - slower effect a higher desired velocity leads to a slower movement of a large crowd .understanding such effects is extremely important for evacuations in emergency situations .several different approaches for modelling the dynamics of pedestrians have been proposed , either based on a continuous representation of space or on a grid .the earliest models of pedestrian dynamics belonged to the population - based approaches and took inspiration from hydrodynamics or gas - kinetic theory .however , it turned our that several important differences to normal fluids are important , e.g. the anisotropy of interactions or the fact that pedestrians usually have an individual preferred direction of motion . later several individual - based approaches in continuous space and time have been proposed . in the social force models( see e.g. 
and references therein ) and other similar approaches pedestrians are treated as particles subject to repulsive forces induced by the social behaviour of the individuals .this leads to ( coupled ) equations of motion similar to newtonian mechanics .there are , however , important differences since , e.g. , in general the third law ( `` actio = reactio '' ) is not fulfilled .furthermore a two - dimensional variant of the optimal - velocity model has also been suggested .active walker models have been used to describe the formation of human or animal trails etc . herethe walker leaves a trace by modifying the underground on his path .this modification is real in the sense that it could be measured in principle . for trail formation, vegetation is destroyed by the walker . in a kind of mesoscopic approach inspired by lattice gas models has been suggested .thus the exclusion principle is relaxed and the dynamics is based on a collision - propagation scheme .most cellular automaton models for pedestrian dynamics proposed are rather simple and can be considered as two - dimensional generalizations of the asep ( see sec .[ sec - asep ] ) . however , these models are not able to reproduce all the collective effects described in the preceeding subsection .the same is true for more sophisticated discrete models . in the following we discuss a promising new approach , the _ floor field model _ , for the description of pedestrian dynamics in more detail which is takes inspiration from the ant trail model of sec .[ sec - ants ] .the interaction in this model is implemented as virtual chemotaxis which allows to translate a long - ranged spatial interaction into a local interaction with `` memory '' .guided by the phenomenon of chemotaxis the interactions between pedestrians are thus local and allow for computational efficiency .pedestrians are modelled as particles that move on a two - dimensional lattice of cells .each cell can be occupied by at most one particle which reflects that the interactions between them are repulsive for short distances ( private sphere ) .particles can move to one of the neighbouring cells based on certain transition probabilities that are are determined by three factors : ( 1 ) the desired direction of motion , ( 2 ) interactions with other pedestrians , and ( 3 ) interactions with the infrastructure ( walls , doors , etc . ) .first of all , basic transition probabilities are determined which reflect the preferred walking direction and speed of each individual in the form of a matrix of preferences which can be related to the preferred velocity vector and its fluctuations .next interactions between pedestrians are taken into account .the exclusion principle accounts for the fact that one likes to keep a minimal distance from others .however , for larger distances the interaction is assumed to be attractive to capture the advantage in following the predecessor .this is implemented by virtual chemotaxis . moving particlescreate a `` pheromone '' at the cell which they leave , thus creating a kind of trace , the _ dynamic floor field _it has its own dynamics given by diffusion and decay which leads to a dilution and finally the vanishing of the trace after some time . for a unified description another floor fieldis introduced , the _static floor field _it is constant and takes into account the interactions with the infrastructure , e.g. 
preferred areas , walls and other obstacles . the transition probabilities given by the matrix of preference are now modified by the strengths of the floor fields and in the target cell ; the details are given in ref. . due to the use of parallel dynamics it might happen that two ( or more ) pedestrians choose the same target cell . such a situation is called a _ conflict _ . due to hard - core exclusion , at most one person can move . introducing a friction parameter , with probability _ all _ pedestrians remain at their site , i.e. nobody is allowed to move . with probability one of the individuals is chosen randomly and allowed to move to the target cell . the effects of are similar to those arising from a moment of hesitation when people are about to collide and try to avoid the collision . the details of the update rules can be found in ref. ; this floor field model has been able to reproduce the empirically observed phenomena listed earlier in this section . another interesting phenomenon which has been studied successfully using the floor field model is the evacuation from a large space with only one exit . however , this is beyond the scope of this review as we restrict our attention here mostly to traffic - like flow properties . nevertheless , the application of such models in the planning stages of large buildings , ships or aircraft has become increasingly important over the last years . the simplicity and realism of the models allow one to optimize evacuation and egress processes already at an early stage of the construction , without the necessity of performing potentially dangerous experiments . because of restrictions imposed by the allowed length of the review , we have excluded several biological traffic phenomena where , to our knowledge , very little progress has been made so far in theoretical modelling of the processes . these include , for example , + ( i ) _ bidirectional transport _ along microtubules , where the same cargo moves along the same microtubule track using sets of opposing motors ; + ( ii ) _ self - organized patterns _ like , for example , _ asters _ and _ vortices _ , which have been observed in several in - vitro experiments and model calculations . in this article we have reviewed our current understanding of traffic - like collective phenomena in living systems , starting from the smallest level of intra - cellular transport and ending at the largest level of traffic of pedestrians . so far as the theoretical methods are concerned , we have restricted our attention to those works where the language of cellular automata or extensions of tasep has been used . the success of this modelling strategy has opened up a new horizon and , we hope , we have provided a glimpse of the exciting frontier . * acknowledgements : * it is our great pleasure to thank yasushi okada , alexander john and ambarish kunwar for enjoyable collaborations on the topics discussed in this review . we are indebted to many colleagues for illuminating discussions although we are unable to mention all the names because of space limitations . j. howard , _ mechanics of motor proteins and the cytoskeleton _ , ( sinauer associates , 2001 ) . m. schliwa ( ed . ) , _ molecular motors _ , ( wiley - vch , 2002 ) . hackney and f. tamanoi , _ the enzymes _ , vol.xxiii , _ energy coupling and molecular motors _ ( elsevier , 2004 ) . levin , in ref. . d.j . crampton and c.c . richardson , in ref. . wilson , _ the insect societies _ ( belknap , cambridge , usa , 1971 ) ; b.
hlldobler and e.o .wilson , _ the ants _ ( belknap , cambridge , usa , 1990 ) h. nagase and j. f. woessner , j. biol . chem . * 274 * , 21491 ( 1999 ) .m. whittaker and a. ayscough , celltransmisions * 17 * , 3 ( 2001 )coffey , yu .p. kalmykov and j.t .waldron , _ the langevin equation _( world scientific , 2004 ) .h. risken , _ the fokker - planck equation _ , ( springer 1984 ) .a. mogilner , l. edelstein - keshet , l. bent and a. spiros , j. math . biol . * 47 * , 353 ( 2003 ) .s. gueron , s.a . levin and d.i .rubenstein , j. theor .biol . * 182 * , 85 ( 1996 ) .f. schweitzer : _ brownian agents and active particles _ , springer series in synergetics ( springer 2003 ) .rauch , m. m. millonas and d.r .chialvo , phys .a * 207 * , 185 ( 1995 ) .s. wolfram , _ theory and applications of cellular automata _( world sci . , 1986 ) ; _ a new kind of science _( wolfram research inc . ,2002 ) b. chopard and m. droz , _ cellular automata modeling of physical systems _ ( cambridge university press , 1998 ) .j. marro and r. dickman , _ nonequilibrium phase transitions in lattice models _ ( cambridge university press , 1999 ) .v. grimm , ecological modeling , * 115 * , 129 ( 1999 ) ; s. f. railsback , ecological modeling * 139 * , 47 ( 2001 ) ; j. odell , j. object technol .* 1 * , 35 ( 2002 ) ; see also the special issue of the proc . natl .may 14 , 2002 , supplement 3 .schtz : _ exactly solvable models for many - body systems _ , in c. domb and j.l .lebowitz ( eds . ) , _ phase transitions and critical phenomena _ , vol .19 ( academic press , 2001 ) .d. chowdhury , l. santen , and a. schadschneider , phys .rep . * 329 * , 199 ( 2000 ) ; a. schadschneider , physica a * 313 * , 153 ( 2002 ) .r. mahnke , j. kaupuzs and i. lubashevsky , phys . rep . *408 * , 1 ( 2005 ) .evans and r.a .blythe , physica * a313 * , 110 ( 2002 ) .b. derrida , m.r .evans , v. hakim and v. pasquier , j. phys .a * 26 * , 1493 ( 1993 ) .g. schtz and e. domany , j. stat .phys.*72 * , 277 ( 1993 ) .n. rajewsky , l. santen , a. schadschneider and m. schreckenberg , j. stat .phys . * 92 * , 151 ( 1998 ) .evans , n. rajewsky and e.r .speer , j. stat .phys . * 95 * , 45 ( 1999 ) .j. de gier and b. nienhuis , phys .e * 59 * , 4899 ( 1999 ) .j. krug , phys .lett . * 67 * , 1882 ( 1991 ) c. macdonald , j. gibbs , and a. pipkin , biopolymers * 6 * , 1 ( 1968 ) ; c. macdonald and j. gibbs , biopolymers * 7 * , 707 ( 1969 ) a.b .kolomeisky , g. schtz , e.b .kolomeisky and j.p .straley , j. phys .a * 31 * , 6911 ( 1998 ) v. popkov , l. santen , a. schadschneider and g.m .schtz : j. phys .* a34 * , l45 ( 2001 ) v. popkov and g. schtz , europhys . lett .* 48 * , 257 ( 1999 ) s.a .janowsky and j.l .lebowitz , phys .a * 45 * , 618 ( 1992 ) g. tripathi and m. barma , phys .78 * 3039 ( 1997 ) .s. goldstein and e.r .speer , phys .e * 58 * , 4226 ( 1998 ) .j. krug and p.a .ferrari , j. phys .a * 29 * , l465 ( 1996 ) .evans , europhys .lett . * 36 * , 13 ( 1996 ) ; j. phys .a * 30 * , 5669 ( 1997 ) .d. ktitarev , d. chowdhury and d.e .wolf , j. phys .a * 30 * , l221 ( 1997 ) .r. juhasz , l. santen and f. igloi , phys .lett . * 94 * , 10601 ( 2005 ) , cond - mat/0507197 .a. ebneth , r. godemann , k. stamer , s. illenberger , b. trinczek , e.m .mandelkow and e. mandelkow , j. cell biol . * 143 * , 777 ( 1998 ) .k. stamer , r. vogel , e. thies , e. mandelkow and e.m .mandelkow , j. cell biol . *156 * , 1051 ( 2002 ) .y. kafri , d.k . lubensky and d. r. nelson , biophys .j. * 86 * , 3373 ( 2004 ) . f. jlicher , a. ajdari , and j. prost , rev .phys . 
* 69 * , 1269 ( 1997 ) .p. reimann , phys . rep .* 361 * , 57 - 265 ( 2002 ) .feynman , the feynman lectures on physics , vol.1 ( addison - wesley , 1963 ) .m. smoluchowski , physik .z. * 13 * , 1069 ( 1912 ) .t. harms and r. lipowsky , phys .lett . * 79 * , 2895 ( 1997 ) .f. marchesoni , phys .e * 56 * , 2492 ( 1997 ) .popescu , c.m .arizmendi , a.l .salas - brito and f. family , phys .lett . * 85 * , 3321 ( 2000 ) .shaw , r.k.p .zia and k.h .lee , phys .e * 68 * , 021910 ( 2003 ) .shaw , j. p. sethna and k.h .lee , phys .e * 70 * , 021901 ( 2004 ) .shaw , a.b .kolomeisky and k.h .lee , j. phys .a * 37 * , 2105 ( 2004 ) .g. lakatos and t. chou , j. phys .a * 36 * , 2027 ( 2003 ) .t. chou and g. lakatos , phys .* 93 * , 198101 ( 2004 ) . g. oster and h. wang , in ref. .fisher and a.b .kolomeisky , proc .sci . * 98 * , 7748 ( 2001 ) .astumian , appl .phys . a * 75 * , 193 ( 2002 ) .m. aridor and l.a .hannan , traffic * 1 * , 836 ( 2000 ) ; * 3 * , 781 ( 2002 ) .n. hirokawa and r. takemura , trends in biochem .* 28 * , 558 ( 2003 ) e. mandelkow and e.m .mandelkow , trends in cell biol .* 12 * , 585 ( 2002 ) .goldstein , proc .* 98 * , 6999 ( 2001 ) ; neuron * 40 * , 415 - 425 ( 2003 ) . * 28 * , 558 ( 2003 ) ; curr* 14 * , 564 - 573 ( 2004 ) .i. derenyi and t. vicsek , phys .* 75 * , 374 ( 1995 ) .i. derenyi and a. ajdari , phys .e * 54 * , r5 ( 1996 ) .y. aghababaie , g.i . menon and m. plischke ,e * 59 * , 2578 ( 1999 ) .r. lipowksy , s. klumpp , and th .m. nieuwenhuizen , phys .87 , 108101 ( 2001 ) .m. nieuwenhuizen , s. klumpp , and r. lipowksy , europhys . lett .58 , 468 ( 2002 ) .s. klumpp and r. lipowksy , j. stat .113 , 233 ( 2003 ) .s. klumpp and r. lipowksy , europhys .66 , 90 ( 2004 ) .m. nieuwenhuizen , s. klumpp , and r. lipowksy , phys .e 69 , 061911 ( 2004 ) .s. klumpp , th .m. nieuwenhuizen , and r. lipowksy , biophys .j. 88 , 3118 ( 2005 ) .r. lipowksy and s. klumpp , physica a 352 , 53 ( 2005 ) .a. parmeggiani , t. franosch , and e. frey , phys .* 90 * , 086601 ( 2003 ) ; phys . rev .e * 70 * , 046101 ( 2004 ) .evans , r. juhasz , and l. santen , phys .e * 68 * , 026117 ( 2003 ) .r. juhasz and l. santen , j. phys .a * 37 * , 3933 ( 2004 ) .v. popkov , a. rakos , r.d .williams , a.b .kolomeisky , and g.m .schtz , phys .e * 67 * , 066117 ( 2003 ) .s. mukherji and s.m .bhattacharjee , j. phys .a * 38 * , l285 ( 2005 ) .b. schmittmann and r.p.k .zia , in c. domb and j.l .lebowitz ( eds . ) , _ phase transitions and critical phenomena _ , vol .17 ( academic press , 1995 ) .klein , k. kruse , g. cuniberti and f. jlicher , phys rev . lett . * 94 * , 108102 ( 2005 ) .k. kruse and k. sekimoto , phys .e * 66 * , 031904 ( 2002 ) .k. nishinari , y. okada , a. schadschneider and d. chowdhury , phys .lett . * 95 * , 118101 ( 2005 ) .y. okada and n. hirokawa , science * 283 * , 1152 ( 1999 ) .y. okada and n. hirokawa , proc .usa * 97 * , 640 ( 2000 ) .y. okada , h. higuchi , and n. hirokawa , nature , * 424 * , 574 ( 2003 ) .r. nitta , m. kikkawa , y. okada , and n. hirokawa , science * 305 * , 678 ( 2003 ) .y. okada , k. nishinari , d. chowdhury , a. schadschneider , and n. hirokawa ( to be published ) .s. saffarian , i. e. collier , b.l .marmer , e.l .elson and g. goldberg , science * 306 * , 108 ( 2004 ) .j. mai , i.m . sokolov and a. blumen ,e * 64 * , 011102 ( 2001 ) .t. antal and p.l .krapivsky , cond - mat/0504652 .y. hiratsuka , m. miyata and t. q. p. uyeda , biochem .commun . * 331 * , 318 ( 2005 ) .a. vartak et al . to be published .d. b. weibel , p. garstecki , d. ryan , w. r. 
diluzio , m. mayer , j. e. seto and g. m. whitesides , proc .usa , * 102 * , 11963 ( 2005 ) .e. bonabeau , g. theraulaz , j.l .deneubourg , s. aron and s. camazine , trends in ecol . evol .* 12 * , 188 ( 1997 ) c. anderson , g. theraulaz and j.l .deneubourg , insect .sociaux * 49 * , 99 ( 2002 ) z. huang and j.h .fewell , trends in ecol . evol . * 17 * , 403 ( 2002 ) .e. bonabeau , ecosystems * 1 * , 437 ( 1998 ) .g. theraulaz , j. gautrais , s. camazine and j.l .deneubourg , phil .a * 361 * , 1263 ( 2003 ) .j. gautrais , g. theraulaz , j.l .deneubourg and c. anderson , j. theor . biol . *215 * , 363 ( 2002 ) .l. edelstein - keshet , j. math . biol . * 32 * , 303 ( 1994 ) .g. theraulaz , e. bonabeau , s.c .nicolis , r.v .sole , v. fourcassie , s. blanco , r. fournier , j.l .joly , p. fernandez , a. grimal , p. dalle and j.l .deneubourg , proc .sci . * 99 * , 9645 ( 2002 ) .m. dorigo , g. di caro and l.m .gambardella , artificial life * 5(3 ) * , 137 ( 1999 ) ; special issue of future generation computer systems dedicated to ant - algorithms ( 2000 ) .e. bonabeau , m. dorigo and g. theraulaz , nature * 400 * , 39 ( 2000 ) .e. bonabeau , m. dorigo and g. theraulaz , _ swarm intelligence : from natural to artificial intelligence _( oxford university press , 1999 ) .krieger , j.b .billeter and l. keller , nature * 406 * , 992 ( 2000 ) .ratnieks and c. anderson , insectes sociaux * 46 * , 95 ( 1999 ) . c. anderson and f.l.w .ratnieks , am . nat . * 154 * , 521 ( 1999 ) .f.l.w . ratnieks and c. anderson , am .* 154 * , 536 ( 1999 ) . c. anderson and f.l.w .ratnieks , insectes sociaux * 47 * , 198 ( 2000 ) . c. anderson and d.w .mcshea , biol . rev . * 76 * , 211 ( 2001 ) . c. anderson and f.l.w .ratnieks , in : _ complexity and complex systems in industry _ , eds .i.p . mccarthy and t. rakotobe - joel , ( university of warwick , u.k . ) , 92 ( 2000 ) .e. bonabeau and c. meyer , harvard business review ( may ) , 107 ( 2001 ) .s. camazine , j.l .deneubourg , n. r. franks , j. sneyd , g. theraulaz , e. bonabeau : _ self - organization in biological systems _ ( princeton university press , 2001 ) .mikhailov and v. calenbuhr , _ from cells to societies : models of complex coherent action _ ( springer , 2002 ) .j. watmough and l. edelstein - keshet , j. theor . biol . *176 * , 357 ( 1995 ) .i.d . couzin and n.r .franks , proc .london b * 270 * , 139 ( 2003 ) .d. helbing , f. schweitzer , j. keltsch , p. molnar : phys .* e56 * , 2527 ( 1997 ) b. derrida , phys .* 301 * , 65 ( 1998 ) b. derrida and m.r .evans , in : _ nonequilibrium statistical mechanics in one dimension _ , ed .v. privman ( cambridge university press , 1997 ) o.j .oloan , m.r .evans , m.e .cates , europhys .lett . * 42 * , 137 ( 1998 ) ; phys .e*58 * , 1404 ( 1998 ) .d. chowdhury , r.c .desai , eur .j. b*15 * , 375 ( 2000 ) a. kunwar , d. chowdhury , a. schadschneider and k. nishinari , submitted for publication . m. burd and n. aranwela , insect .sociaux * 50 * , 3 ( 2003 ) .d. chowdhury , v. guttal , k. nishinari , a. schadschneider , j. phys .35 * , l573 ( 2002 ) k. nishinari , d. chowdhury , a. schadschneider , phys .e * 67 * , 036120 ( 2003 ) p. f. arndt , t. heinzel and v. rittenberg , j. phys .a * 31 * , l45 ( 1998 ) ; j. stat . phys . * 97 * , 1 ( 1999 ) .n. rajewsky , t. sasamoto and e.r .speer , physica a * 279 * , 123 ( 2000 ) .a. john , a. schadschneider , d. chowdhury and k. nishinari , j. theor .* 231 * , 279 ( 2004 ). m. burd , d. archer , n. aranwela and d.j .stradling , am .nat . * 159 * , 283 ( 2002 ) .m. 
burd et al .( 2005 ) unpublished .d. e. jackson , m. holcombe and f.l.w .ratnieks , nature , * 432 * , 907 ( 2004 ) .d. helbing : rev .phys . * 73 * , 1067 ( 2001 ) m. schreckenberg , s.d .sharma ( ed . ) : _ pedestrian and evacuation dynamics _ , springer 2001 d. helbing , i. farkas , t. vicsek : nature * 407 * , 487 ( 2000 ) d.e . wolf and p. grassberger ( editors ) : _ friction , arching , contact dynamics _ , ( world scientific , 1997 ) d. helbing , p. molnar : phys. rev .* e51 * , 4282 ( 1995 ) l.f .henderson : transp .res . * 8 * , 509 ( 1974 ) d. helbing : complex systems * 6 * , 391 ( 1992 ) r. hughes : math .* 53 * , 367 ( 2000 ) r. hughes : transp .b * 36 * , 507 ( 2001 ) p. thompson and e. marchant : fire safety journal * 24 * , 131 ( 1995 ) a. nakayama , k. hasebe , y. sugiyama : phys . rev . * e71 * , 036121 ( 2005 ) d. helbing , j. keltsch , p. molnar : nature * 388 * , 47 ( 1997 ) s. marconi , b. chopard : in _ cellular automata _ , lect .notes comp . sc . * 2493 * , 231 ( 2002 ) b. chopard , m. droz : _ cellular automata modeling of physical systems _ , cambridge university press ( 1998 ) m. fukui and y.ishibashi : j. phys .jpn . * 68 * , 2861 ( 1999 ) m. muramatsu and t. nagatani : physica * a275 * , 281 ( 2000 ) h. klpfel , t. meyer - knig , j. wahle and m. schreckenberg : in _ theory and practical issues on cellular automata _ , s. bandini , t. worsch ( eds . ) , springer ( 2000 ) p.g .gipps and b. marksjs : math . and comp . in simulation* 27 * , 95 ( 1985 ) k. bolay : diploma thesis , stuttgart university ( 1998 ) c. burstedde , k. klauck , a. schadschneider , j. zittartz : physica * a295 * , 507 ( 2001 ) a. kirchner , a. schadschneider : physica * a312 * , 260 ( 2002 ) a. kirchner , k. nishinari , a. schadschneider : phys .rev . * e67 * , 056122 ( 2003 ) a. kirchner , h. klpfel , k. nishinari , a. schadschneider , m. schreckenberg : j. stat, p10011 ( 2004 ) m.a .welte , curr . biol . * 14 * , r525 ( 2004 ) .gross , phys .* 1 * , r1 ( 2004 ) .nedlec , t. surrey , a.c .maggs and s. leibler , nature * 389 * , 305 ( 1997 ) ; t. surrey , f. nedlec , s. leibler and e. karsenti , science * 292 * , 1167 ( 2001 ) ; f. nedlec , t. surrey and e. karsenti , curr .cell biol .* 15 * , 118 ( 2003 ) .h. y. lee and m. kardar , phys .e * 64 * , 056113 ( 2001 ) .s. sankararaman , g. i. menon and p.b .sunil kumar , phys .e * 70 * , 031905 ( 2004 ) .f. ziebert and w. zimmermann , cond - mat/0502236 ( 2005 ) .
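as a closing illustration of the floor field dynamics summarized in this review , the following minimal sketch shows how one parallel update step of such a model could be organized : transition weights built from the static and dynamic floor fields , conflict resolution with a friction parameter , and diffusion and decay of the dynamic field . it is only a toy sketch ; all function and parameter names ( floor_field_step , k_s , k_d , mu , alpha , delta ) and the periodic boundary handling are illustrative assumptions and not the notation or prescription of the cited works .

```python
# schematic single update step of a floor-field-type cellular automaton
# (toy sketch; parameters and boundary handling are illustrative choices)
import numpy as np

rng = np.random.default_rng(0)

def floor_field_step(occ, static_f, dyn_f, k_s=2.0, k_d=1.0, mu=0.3, alpha=0.1, delta=0.1):
    """occ: boolean occupancy grid; static_f, dyn_f: floor fields of the same shape."""
    L, W = occ.shape
    moves = {}  # chosen target cell -> list of source cells (to detect conflicts)
    for x in range(L):
        for y in range(W):
            if not occ[x, y]:
                continue
            # allowed targets: stay put, or step to an empty von Neumann neighbour
            targets = [(x, y)] + [(x + dx, y + dy)
                                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                                  if 0 <= x + dx < L and 0 <= y + dy < W
                                  and not occ[x + dx, y + dy]]
            # transition weights grow with both floor fields in the target cell
            w = np.array([np.exp(k_s * static_f[t] + k_d * dyn_f[t]) for t in targets])
            tgt = targets[rng.choice(len(targets), p=w / w.sum())]
            moves.setdefault(tgt, []).append((x, y))
    new_occ = occ.copy()
    for tgt, sources in moves.items():
        if tgt in sources:                      # the walker decided to stay
            continue
        if len(sources) > 1 and rng.random() < mu:
            continue                            # conflict: with probability mu nobody moves
        src = sources[rng.integers(len(sources))]   # otherwise one walker is chosen randomly
        new_occ[src], new_occ[tgt] = False, True
        dyn_f[src] += 1.0                       # "pheromone" trace left on the vacated cell
    # diffusion and decay dilute the dynamic floor field
    dyn_f += alpha * (np.roll(dyn_f, 1, 0) + np.roll(dyn_f, -1, 0)
                      + np.roll(dyn_f, 1, 1) + np.roll(dyn_f, -1, 1) - 4 * dyn_f)
    dyn_f *= (1.0 - delta)
    return new_occ, dyn_f
```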
|
traffic - like collective movements are observed at almost all levels of biological systems . molecular motor proteins like , for example , kinesin and dynein , which are the vehicles of almost all intra - cellular transport in eukaryotic cells , sometimes encounter traffic jams that manifest as a disease of the organism . similarly , traffic jams of collagenase mmp-1 , which moves on the collagen fibrils of the extracellular matrix of vertebrates , have also been observed in recent experiments . novel efforts have been made to utilize some uni - cellular organisms as `` micro - transporters '' . traffic - like movements of social insects like ants and termites on trails are , perhaps , more familiar in our everyday life . experimental , theoretical and computational investigations in the last few years have led to a deeper understanding of the generic or common physical principles involved in these phenomena . in this review we critically examine the current status of our understanding , expose the limitations of the existing methods , mention open challenging questions and speculate on the possible future directions of research in this interdisciplinary area where physics meets not only chemistry and biology but also ( nano-)technology .
|
the method of cosmic crystallography ( cc ) , devised by lehoucq et al . , looks for distance correlations between cosmic sources using pair separations histograms ( psh ) , i.e. plots of the number of pairs of sources versus the distance ( or squared distance ) between them .these correlations arise from the isometries of the covering group of the 3-manifold used to model our universe and so they provide a signature of its global spatial topology . in this waycc is potentially useful to investigate the shape and size of our universe .it has recently been shown by gomero et al . how to calculate the topological signature from these distance correlations in a very general geometrical - topological - observational setting .it turns out from the major result of ref . that correlations due to clifford translations manifest as spikes in psh s , whereas other isometries manifest as small deformations of the _ expected _ pair separations histogram ( epsh ) of the corresponding universal covering manifold .the major result of ref. has a striking consequence for universe models with hyperbolic spatial sections. indeed , since no hyperbolic isometry is a clifford translation , there are no topological spikes in psh s corresponding to low density universe models .thus , at first sight , these histograms seem to give no reliable information of the topology of the spatial sections in these models of the universe .the absence of spikes in psh s from hyperbolic universes is by now well understood and has been confirmed by simulations performed by lehoucq et al . and fagundes and gausmann .it remains , however , to understand the topological signature of hyperbolic isometries in cc .the implications of the results of ref. for psh s from flat universe models seem to be less well understood .it has been stated in and that every euclidean isometry which produces -pairs in a given catalog will give rise to a spike in the corresponding psh .this statement , however , is in clear contradiction with the fact that only translations produce spikes .moreover , in studying the applicability of cc to closed flat models of our universe , fagundes and gausmann reported a psh for a manifold of class , therein called model , which exhibits a significant peak at .that paper suggests that this spike is generated by an isometry of the covering group of the manifold considered , and this interpretation was again suggested in ref. .nevertheless , according to ref. , since there is no translation that would produce the peak at , one immediately concludes that it must be due to statistical fluctuations , and so it is not of topological origin .a definitive elucidation of these unsettled issues would be useful because it would clarify the actual signature of euclidean non - translational isometries in psh s .indeed , by performing simulations we will evince in this letter that , contrary to what is suggested in and , topological spikes are not the only signature of topology in psh s corresponding to euclidean small universes .besides we also show through simulations that non - translational isometries do not manifest as _ less sharp peaks _ as suggested by fagundes and gausmann , but as broad and tiny deformations of the psh corresponding to the simply connected case .our results here are supported by and in agreement with the general theoretical developments of ref. . actually , the major purpose of this letter is to show how to use the mpsh technique described in for studying the topological signature of isometries in psh s . 
after a brief review of the techniques developed in ref. , we first compute mpsh s for a manifold of class and reduce the statistical noise to a level that allows the identification of topological spikes . in this way ( i ) statistical spikes that may be confused with topological spikes are removed , and ( ii ) topological spikes that are masked by statistical fluctuations in individual psh s show up even when there are few -pairs corresponding to them . incidentally , point ( i ) makes clear that actually there is no topological spike at . as an additional application we construct an epsh for the minimal 3-torus that covers and plot the difference between this epsh and an mpsh of . since the covering groups of this 3-torus and that of have the same translations , psh s for these two manifolds would exhibit identical spike spectra . so this difference yields the topological signature of non - translational isometries of the covering group of plus some statistical noise . for comparison , we also plot the difference between an mpsh and an epsh , both for the 3-torus , obtaining as a result essentially statistical noise . this indicates that , within the accuracy of the simulations , topological spikes are the only topological signature in psh s for a 3-torus . here we briefly review some results obtained in ref. , and extend them to the level needed for the development of this work . we begin by describing what a pair separations histogram ( psh ) is , and then show how to construct mean pair separations histograms ( mpsh ) with simulated catalogs . we end with a brief explanation of the expected pair separations histogram ( epsh ) and its use in determining the topological signature for non - translational isometries . to build a psh we simply evaluate a suitable one - to - one function of the separation of every pair of cosmic sources from a given catalog , and then count the number of pairs for which these values lie within certain subintervals . these subintervals are all of equal length and must form a partition of the interval ] in equal subintervals of length . each subinterval has the form , for i = 1 , 2 , \dots , m , and is centered at . given a catalog of cosmic sources and denoting by the number of pairs of sources in with squared separation , a psh is then obtained by plotting the function , where is the number of sources in . note that with the same catalog we may obtain different psh s simply by taking different values for . the sum in ( [ histograma ] ) is just a counting and the coefficient of the sum is a normalization constant , so although it is usual in cc to refer to the plot of the function as a psh , for theoretical purposes it is more useful to define a psh as a function given by ( [ histograma ] ) . from now on we will always refer to simply as a psh . any single psh is plagued with statistical noise that may mask the topological signature . the simplest and most obvious way to reduce this noise is to use the mpsh , which is described below .
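before turning to the mpsh , the following minimal sketch illustrates how a single normalized psh of the type just described could be assembled from a simulated catalog of source positions . the function name build_psh , the use of euclidean squared separations , the normalization by the total number of pairs and the toy catalog are illustrative assumptions made here for concreteness ; they are not the exact prescription of eq . ( [ histograma ] ) .

```python
# minimal sketch of a pair separations histogram (psh): bin the squared
# separations of all pairs of sources and normalize by the number of pairs
import numpy as np

def build_psh(positions, m, smax):
    """positions: (n, 3) array of source positions; m: number of bins;
    smax: upper end of the squared-separation interval being partitioned."""
    n = len(positions)
    diff = positions[:, None, :] - positions[None, :, :]
    s2 = np.sum(diff ** 2, axis=-1)[np.triu_indices(n, k=1)]  # all pair separations (squared)
    counts, edges = np.histogram(s2, bins=m, range=(0.0, smax))
    phi = counts / (0.5 * n * (n - 1))          # one simple normalization choice
    centers = 0.5 * (edges[:-1] + edges[1:])    # bin centers s_i
    return centers, phi

# toy catalog: 240 sources uniformly distributed in a ball of unit radius,
# so squared separations lie in the interval (0, 4]
rng = np.random.default_rng(1)
direc = rng.normal(size=(240, 3))
direc /= np.linalg.norm(direc, axis=1)[:, None]
pts = direc * rng.random((240, 1)) ** (1.0 / 3.0)
s_i, psh = build_psh(pts, m=100, smax=4.0)
```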
consider comparable catalogs ( ) , with approximately the same number of cosmic sources and corresponding to the same manifold .let their psh s , for a fixed value of , be given by where is the number of sources in and is the number of pairs of sources in with squared separation ; then , the mpsh defined by contains much less noise than any single psh , and clearly contains the same topological information .indeed , elementary statistics tells us that the statistical fluctuations in the mpsh are reduced by a factor proportional to , which makes at first sight the mpsh very attractive . in sec.[spikes ] we apply this technique to discriminate between topological and statistical spikes in psh s corresponding to an euclidean compact manifold . as shown in ref . ( see also refs . ) , in the limit the mpsh approximates very well to the epsh which is an `` ideal '' psh , i.e. a psh with the statistical noise completely removed .equation ( 4.15 ) of ref . [ or equivalently eq . ( 2.11 ) rederived in ref . , wherein denotes the total number of pairs of cosmic images ] can be rewritten in the form + \frac{1}{n-1}\,\sum_{g \in \widetilde{\gamma } } \nu_g\ , [ \ , \phi^g_{exp}\,(s_i ) - \phi^{sc}_{exp}\,(s_i)\ , ] \;,\ ] ] where is the covering group of without the identity element , is the mean value of the , is the epsh of the corresponding simply connected case , is the expected number of uncorrelated pairs and . in( [ topsig1 ] ) , where is the probability of an uncorrelated pair to be separated by a squared distance that lies in . for each covering isometryit is also defined a number , where is the expected number of -pairs in a catalog with sources , and a distribution function , where is the probability of an observed -pair to be separated by a squared distance that lies in .now within the approximation the epsh reads \ ; .\ ] ] the general underlying setting for performing the calculations involved in ( [ epsh ] ) is the assumed existence of an ensemble of catalogs comparable to a given catalog ( real or simulated ) , with the same number of sources and corresponding to the same manifold .the construction rules permit the computation of probabilities and expected values involved in ( [ epsh ] ) .let be the subset of all clifford translations of ( i.e. all the isometries such that for all , the distance is independent of ) . when we have where is the kronecker delta , and is the position of the spike due to the translation , i.e. . 
then one can write ( [ epsh ] ) as where \ ] ] is the contribution of clifford translations to the topological signature of the epsh , and \ ] ] is the topological signature associated to the non - translational isometries of .it is clear from ( [ epsh2 ] ) that manifolds with the same translations in their covering groups will exhibit the same spike spectra given by ( [ transepsh ] ) , the only difference between their epsh s being the topological signature associated to non - translational isometries .moreover , in the euclidean case , from ( [ epsh2 ] ) one can always write with being the epsh of the minimal 3-torus that covers .now , using the fact that one gets another approximate expression for the topological signature of non - translational isometries of small flat universes , namely this expression can easily be numerically evaluated since the mpsh can be obtained with computer simulations , and is given explicitly by in sec.[topsig ] we use ( [ ntepsh3 ] ) and ( [ torusepsh2 ] ) to evince the shape of the topological signature of non - translational isometries for an euclidean closed manifold . from now on we will consider only trivial construction rules , i.e. we will assume that cosmic sources are uniformly distributed in space , and all cosmic sources present in universe models , up to a given redshift , are recorded in catalogs .although unrealistic , this assumption makes easy to illustrate the general results developed in ref. and permits a comparison with current literature in cc . besides , in this case we can readily compute and the coefficients . indeed , from ref. ( see also ref . ) one can easily calculate for the case of trivial construction rules and a ball of radius as an observed universe . for flat models one obtains where is the heaviside function . correspondingly , the coefficients can be calculated by simple geometrical arguments .indeed , let be the observed universe , i.e. a ball of radius centered at our position .the isometry transforms isometrically the ball into the ball , so only the sources in have a -partner in , and form -pairs .thus we have a simple calculation yields where is the distance from the center of the observed universe to its image by the isometry .let us now show how to use the mpsh technique to discriminate between topological and statistical spikes by working out two examples of models of the universe reported in ref. . in order to make a comparison with the plots of the upper part of fig.1 in ref. , we took a manifold of type with covering group generated by where . a fundamental polyhedron for and a detailed construction of this manifoldis given in ref. ( see also ) .we have performed simulations for two observed universes with radii and , respectively . as in ref. , for each simulation in the first case ( ) we put 20 objects uniformly distributed in the fp , while in the case we put 101 objects inside it . in both cases we end up with catalogs of approximately 240 sources .a psh for one catalog for each case is shown in fig.1 , where we have subdivided the intervals ] have been subdivided in bins of width 0.01 . 
in ( a ) the psh presents no apparent spike at .
: : mpsh s built with 50 simulated comparable catalogs with approximately 240 sources for the two universes of fig.1 . the statistical noise has been considerably reduced so that it becomes apparent that there is no topological spike at in ( a ) , whereas in ( b ) there is a small spike at that is masked by statistical fluctuations in the psh of fig.1b . the intervals ] have been subdivided in bins of width 0.01 . there is no relevant difference between graphs corresponding to the model and its minimal covering 3-torus , illustrating that topological spikes are not enough to distinguish between these two flat manifolds .
: : an epsh given by ( [ torusepsh2 ] ) for the 3-torus whose psh is shown in fig.3 . the comparison between the epsh of the present figure with the mpsh of fig.3b makes apparent the suitability and strength of the mpsh procedure .
: : mpsh s corresponding to the topological signature of non - translational isometries given by ( [ ntepsh3 ] ) . part ( a ) corresponds to the 3-torus of fig.3 and fig.4 , while ( b ) corresponds to the manifold of figs.1 and 2 . both mpsh s were built with 5000 catalogs of approximately 120 sources , and with bins of width 0.02 . while ( a ) exhibits essentially statistical noise as expected , ( b ) shows the topological signature of non - translational isometries of .
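to complement the constructions described in this letter , the following schematic shows how the mpsh could be obtained by averaging the psh s of many comparable simulated catalogs , and how the non - translational signature could then be estimated by subtracting a reference epsh . it assumes a routine build_psh like the one sketched earlier and a user - supplied catalog generator ; all names are illustrative , and the prefactor of the subtraction is left symbolic since the exact coefficient is the one defined in eq . ( [ ntepsh3 ] ) .

```python
# schematic mpsh: averaging k individual psh's reduces the statistical noise
# roughly by a factor 1/sqrt(k), while leaving the topological information intact
import numpy as np

def build_mpsh(catalog_generator, k, m, smax):
    """catalog_generator(): returns one simulated catalog of source positions."""
    acc = np.zeros(m)
    for _ in range(k):
        _, phi = build_psh(catalog_generator(), m, smax)  # build_psh as sketched above
        acc += phi
    return acc / k

# with a precomputed epsh of the minimal covering 3-torus stored in an array
# epsh_torus, the non-translational signature is proportional to
#     build_mpsh(generator, 5000, m, smax) - epsh_torus
# up to the prefactor appearing in eq. ([ntepsh3])
```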
|
we study the topological signature of euclidean isometries in pair separations histograms ( psh ) and elucidate some unsettled issues regarding distance correlations between cosmic sources in cosmic crystallography . reducing the noise of individual psh s using mean pair separations histograms we show how to distinguish between topological and statistical spikes . we report results of simulations that evince that topological spikes are not enough to distinguish between manifolds with the same set of clifford translations in their covering groups , and that they are not the only signature of topology in psh s corresponding to euclidean small universes . we also show how to evince the topological signature due to non - translational isometries .
|
computer science and computer engineering are disciplines that have transformed every aspect of modern society . in these fields , cutting - edge research is about new models of computation , new materials and techniques for building computer hardware , novel methods for speeding - up algorithms , and building bridges between computer science and several other scientific fields that allow scientists to both think of natural phenomena as computational procedures as well as to employ novel models of computation to simulate natural processes ( e.g. . ) in particular , quantifying the resources required to process information and/or to compute a solution , i.e. to assess the complexity of a computational process , is a prioritized research area as it allows us to estimate implementation costs as well as to compare problems by comparing the complexity of their solutions . among the mathematical tools employed in advanced algorithm development ,classical random walks , a subset of stochastic processes ( that is , processes whose evolution involves chance ) , have proved to be a very powerful technique for the development of stochastic algorithms .in addition to the key role they play in algorithmics , classical random walks are ubiquitous in many areas of knowledge as physics , biology , finance theory , computer vision , and earthquake modelling , to name a few . theoretical computer science , in its canonical form , does not take into account the physical properties of those devices used for performing computational or information processing tasks . as this characteristiccould be perceived as a drawback because the behavior of any _ physical _ device used for computation or information processing must ultimately be predicted by the laws of physics , several research approaches have therefore concentrated on thinking of computation in a physical context ( e.g . ) among those physical theories that could be used for this purpose , quantum mechanics stands in first place .quantum computation can be defined as the interdisciplinary scientific field devoted to build quantum computers and quantum information processing systems , i.e. computers and information processing systems that use the quantum mechanical properties of nature .research on quantum computation heavily focuses on building and running algorithms which exploit the physical properties of quantum computers . among the theoretical discoveries and promising conjectures that have positioned quantum computation as a key element in modern science , we find : 1 .the development of novel and powerful methods of computation that may allow us to significantly increase our processing power for solving certain problems ( e.g. . ) 2 .the increasing number of quantum computing applications in several branches of science and technology ( e.g. image processing and computational geometry , pattern recognition , quantum games , and warfare . ) 3 . the simulation of complex physical systems and mathematical problems for which we know no classical digital computer algorithm that could efficiently simulate them . a detailed summary of scientific and technological applications of quantum computers can be found in .building good quantum algorithms is a difficult task as quantum mechanics is a counterintuitive theory and intuition plays a major role in algorithm design and , for a quantum algorithm to be good , it is not enough to perform the task it is intended to : it must also do better , i.e. 
be more efficient , than any classical algorithm ( at least better than those classical algorithms known at the time of developing corresponding quantum algorithms . )examples of successful results in quantum computation can be found in .good introductions and reviews of quantum algorithms can be found in .quantum walks , the quantum mechanical counterpart of classical random walks , is an advanced tool for building quantum algorithms ( e.g. ) that has been recently shown to constitute a universal model of quantum computation .there are two kinds of quantum walks : discrete and continuous quantum walks .the main difference between these two sets is the timing used to apply corresponding evolution operators . in the case of discrete quantum walks , the corresponding evolution operator of the systemis applied only in discrete time steps , while in the continuous quantum walk case , the evolution operator can be applied at any time .our approach in the development of this work has been to study those concepts of quantum mechanics and quantum computation relevant to the computational aspects of quantum walks .thus , in the history of cross - fertilization between physics and computation , this review is meant to be situated as a contribution within the field of quantum walks from the perspective of a computer scientist .in addition to this paper , the reader may also find the scientific documents written by kempe , kendon , konno , ambainis , santha , and venegas - andraca relevant to deepening into the mathematical , physical and algorithmic properties of quantum walks .the following lines provide a summary of the main ideas and contributions of this review article .section [ quantum_walks_intro ] .* fundamentals of quantum walks*. in this section i offer a comprehensive yet concise introduction to the main concepts and results of discrete and continuous quantum walks on a line and other graphs .this section starts with a short and rigorous introduction to those properties of classical discrete random walks on undirected graphs relevant to algorithm development , including definitions for hitting time , mixing time and mixing rate , as well as mathematical expressions for hitting time on an unrestricted line and on a circle .i then introduce the basic components of a discrete - time quantum walk on a line , followed by a detailed analysis of the hadamard quantum walk on an infinite line , using a method based on the discrete time fourier transform known as the schrdinger approach .this analysis includes the enunciation of relevant theorems , as well as the advantages of the hadamard quantum walk on an infinite line with respect to its closest classical counterpart . 
in particular , i explore the context in which the properties of the hadamard quantum walk on an infinite line are compared with classical random walks on an infinite line and with two reflecting barriers . also , i briefly review another method for studying the hadamard walk on an infinite line : the path counting approach . i then proceed to study a quantum walk on an infinite line with an arbitrary coin operator and explain why the study of the hadamard quantum walk on an infinite line is enough for the analysis of arbitrary quantum walks on an infinite line . then , i present several results of quantum walks on a line with one and two absorbing barriers , followed by an analysis of the behavior of discrete - time coined quantum walks using many coins and a study of the effects of decoherence , a detailed review of limit theorems for discrete - time quantum walks , a subsection devoted to the recently founded subfield of localization on discrete - time quantum walks , and a summary of other relevant results . i then focus on the properties of discrete - time quantum walks on graphs : we study discrete - time quantum walks on a circle , on the hypercube and some general properties of this kind of quantum walks on cayley graphs , including a limit theorem of averaged probability distributions for quantum walks on graphs . i continue this section with a general introduction to continuous quantum walks together with several relevant results published in this field . then , i present an analysis of the role that randomness plays in quantum walks and the connections between the mathematical models of coined discrete quantum walks and continuous quantum walks . the last part of this section focuses on issues about the quantumness of quantum walks , which includes a brief summary of reports on discrete quantum walks and entanglement . finally , i briefly summarize several experimental proposals and realizations of discrete - time quantum walks . section [ qw_based_algorithms ] . * algorithms based on quantum walks and classical simulation of quantum algorithms - quantum walks*. we review several links between computer science and quantum walks . we start by introducing the notions of oracle and hitting time , followed by a detailed analysis of quantum algorithms developed to solve the following problems : searching in an unordered list and in a hypercube , the element distinctness problem , and the triangle problem . i then provide an introduction to a seminal paper written by m. szegedy in which a new definition of quantum walks based on quantizing a stochastic matrix is proposed . the second part of this section is devoted to analyzing continuous quantum walks . we start by reviewing the most successful quantum algorithm based on a continuous quantum walk known so far , which consists of traversing , in polynomial time , a family of graphs of trees with an exponential number of vertices ( the same family of graphs would be traversed only in exponential time by any classical algorithm ) . we then briefly review a generalization of a continuous quantum walk , now allowed to perform non - unitary evolution , in order to simulate photosynthetic processes , and we finish by reviewing the state of the art on classical digital computer simulation of quantum algorithms and , particularly , quantum walks . section [ qw_computational_universality ] . * universality of quantum walks*.
i review in this last section a very recent and most important contribution in the field of quantum walks : computational universality of both continuous- and discrete - time quantum walks .quantum walks are quantum counterparts of classical random walks .since classical random walks have been successfully adopted to develop classical algorithms and one of the main topics in quantum computation is the creation of quantum algorithms which are faster than their classical counterparts , there has been a huge interest in understanding the properties of quantum walks over the last few years .in addition to their usage in computer science , the study of quantum walks is relevant to building methods in order to test the quantumness of emerging technologies for the creation of quantum computers as well as to model natural phenomena .quantum walks is a relatively new research topic .although some authors have selected the name quantum random walk to refer to quantum phenomena and , in fact , in a seminal work by r.p .feynman about quantum mechanical computers we find a proposal that could be interpreted as a continuous quantum walk , it is generally accepted that the first paper with quantum walks as its main topic was published in 1993 by aharonov _et al _ .thus , the links between classical random walks and quantum walks as well as the utility of quantum walks in computer science , are two fresh and open areas of research ( among scientific contributions on the links between classical and quantum walks , konno has proposed in solid mathematical connections between correlated random walks and quantum walks using the matrix method introduced in . )two models of quantum walks have been suggested : + + - the first model , called * discrete quantum walks * , consists of two quantum mechanical systems , named a walker and a coin , as well as an evolution operator which is applied to both systems only in discrete time steps .the mathematical structure of this model is evolution via unitary operator , i.e. .+ - the second model , named * continuous quantum walks * , consists of a walker and an evolution ( hamiltonian ) operator of the system that can be applied with no timing restrictions at all , i.e. the walker walks any time .the mathematical structure of this model is evolution via the schrdinger equation .+ in both discrete and continuous models , the topology on which quantum walks have been performed and their properties computed are discrete graphs .this is mainly because graphs are widely used in computer science and building up quantum algorithms based on quantum walks has been a prioritized activity in this field .the original idea behind the construction of quantum algorithms was to start by initializing a set of qubits and then to apply ( one of more ) evolution operators several times _ without making intermediate measurements _ , as measurements were meant to be performed only at the end of the computational process ( for example , see the quantum algorithms reported in . ) not surprisingly , the first quantum algorithms based on quantum walks were designed using the same strategy : initialize qubits , apply evolution operators and measure only to calculate the final outcome of the algorithm .indeed , this method has proved itself very useful for building several remarkable algorithms ( e.g. . 
)however , as the field has matured , it has been reported that performing ( partial ) measurements on a quantum walk may lead to interesting mathematical properties for algorithm development , like the top hat probability distribution ( e.g. . )moreover and expanding on the idea of using more sophisticated tools from the repertoire of quantum mechanics , recent reports have shown the effect of using weak measurements on the walker probability distribution of discrete quantum walks .the rest of this section is organized as follows .i begin with a short introduction to those properties of classical discrete random walks on undirected graphs relevant to algorithm development , including definitions for hitting time , mixing time and mixing rate , as well as mathematical expressions for hitting time on an unrestricted line and on a circle .i then introduce the basic components of a discrete - time quantum walk on a line , followed by a detailed analysis of the hadamard quantum walk on an infinite line , using a method based on the discrete time fourier transform known as the schrdinger approach .this analysis includes the enunciation of relevant theorems , as well as the advantages of the hadamard quantum walk on an infinite line with respect to its closest classical counterpart .in particular , i explore the context in which the properties of the hadamard quantum walk on an infinite line are compared with classical random walks on an infinite line and with two reflecting barriers .also , i briefly review another method for studying the hadamard walk on an infinite line : path counting approach .i then proceed to study a quantum walk on an infinite line with an arbitrary coin operator and explain why the study of the hadamard quantum walk on an infinite line is enough as for the analysis of arbitrary quantum walks on an infinite line .then , i present several results of quantum walks on a line with one and two absorbing barriers , followed by an analysis on the behavior of discrete - time coined quantum walks using many coins and a study of the effects of decoherence , a detailed review on limit theorems for discrete - time quantum walks , a subsection devoted to the recently founded subfield of localization on discrete - time quantum walks , and a summary of other relevant results .in addition to this review paper , the reader may also find the scientific documents written by kempe , kendon , konno , ambainis , santha , and venegas - andraca relevant to deepening into the mathematical , physical and algorithmic properties of quantum walks .finally , readers who are not yet acquainted with the mathematical and/or physical foundations of quantum computation may find the following references useful : .classical discrete random walks were first thought as stochastic processes with no straightforward relation to algorithm development .thus , in addition to references like in which the mathematical foundations of random walks can be found , references are highly recommendable for a deeper understanding of algorithm development based on classical random walks .a classical discrete random walk on a line is a particular kind of stochastic process .the simplest classical random walk on a line consists of a particle ( the walker ) jumping to either left or right depending on the outcomes of a probability system ( the coin ) with ( at least ) two mutually exclusive results , i.e. the particle moves according to a probability distribution ( fig .( [ ucrw_line ] ) . 
)the generalization to discrete random walks on spaces of higher dimensions ( graphs ) is straightforward .an example of a discrete random walk on a graph is a particle moving on a lattice where each node has vertices , and the particle moves according to the outcomes produced by tossing a dice .classical random walks on graphs can be seen as markov chains ( . ) now , let be a stochastic process which consists of the path of a particle which moves along an axis with steps of one unit at time intervals also of one unit ( fig .( [ ucrw_line ] ) . ) at any step , the particle has a probability of going to the right and of going to the left .each step is modelled by a bernoulli - distributed random variable and the probability of finding the particle in position after steps and having as initial position is given by the binomial distribution fig .( [ binomial_25 ] ) shows a plot of eq .( [ unrestricted_classical_random_walk ] ) with number of steps and .since is bin then the expected value is given by = np ] .thus , = v[2t_n - n ] = 4npq .\text { in other words , } v[z_n ] = o(n ) \label{variance_unrestricted_rw_line}\ ] ] eq .( [ variance_unrestricted_rw_line ] ) will be used in the following sections to show one of the earliest results on comparing classical random walks to quantum walks .graphs that encode the structure of a group are called * cayley graphs*. cayley graphs are a vehicle for translating mathematical structures of scientific and engineering problems into forms amenable to algorithm development for scientific computing .* cayley graph*. let be a finite group , and let be a generating set for g. the cayley graph of with respect to has a vertex for every element of , with an edge from to and .[ cayley_graph ] cayley graphs are -regular , that is , each vertex has degree .cayley graphs have more structure than arbitrary markov graphs and their properties can be used for algorithm development .graphs and markov chains can be put in an elegant framework which turns out to be very useful for the development of algorithmic applications : let be a connected , undirected graph with and . induces a markov chain if the states of are the vertices of , and where is the degree of vertex .since is connected , then is irreducible and aperiodic .moreover , has a unique stationary distribution .let be a connected , undirected graph with nodes and edges , and let be its corresponding markov chain .then , has a unique distribution for all components of .[ theorem_stationary_distribution_undirected_graph ] note that theorem [ theorem_stationary_distribution_undirected_graph ] holds even when the distribution is not uniform .in particular , the stationary distribution of an undirected and connected graph with nodes , edges and constant degree , i.e. a cayley graph , is , the uniform distribution .we have established the relationship between markov chains and graphs .we now proceed to define the concepts that make discrete random walks on graphs useful in computer science .we shall begin by formally describing a random walk on a graph : let be a graph . a random walk , starting from a vertex is the random process defined by + + s = u + * repeat * + choose a neighbor of according to a certain probability distribution + u = v + * until * ( stop condition )so , we start at a node and , if at step we are at a node , we move to a neighbour of with probability given by probability distribution . it is common practice to make , where is the degree of vertex . 
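a minimal sketch of this procedure , assuming the graph is stored as an adjacency list and taking a cycle with ten vertices as the test graph ( both choices are illustrative ) , is the following ; the empirical visit frequencies approach the stationary distribution of theorem [ theorem_stationary_distribution_undirected_graph ] .

```python
# sketch of the classical random walk on an undirected graph described above:
# at each step the walker jumps to a uniformly chosen neighbour (probability 1/d(u))
import random
from collections import Counter

def random_walk(adj, start, steps, seed=7):
    """adj: dict mapping each vertex to the list of its neighbours."""
    rng = random.Random(seed)
    visits = Counter()
    u = start
    for _ in range(steps):
        u = rng.choice(adj[u])
        visits[u] += 1
    return visits

# example: a cycle with 10 vertices; every vertex has degree 2, so the
# stationary distribution is uniform, d(v)/2|E| = 1/10
n, steps = 10, 200_000
cycle = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
freq = random_walk(cycle, start=0, steps=steps)
print({v: round(freq[v] / steps, 3) for v in range(n)})
```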
examples of discrete random walks on graphs are a classical random walk on a circle or on a 3-dimensional mesh .we now introduce several measures to quantify the performance of discrete random walks on graphs .these measures play an important role in the quantitative theory of random walks , as well as in the application of this kind of markov chains in computer science . *hitting time*. the hitting time is the expected number of steps before node is visited , starting from node .[ hitting_time_classical ] * mixing rate*. the mixing rate is a measure of how fast the discrete random walk converges to its limiting distribution .the mixing rate can be defined in many ways , depending on the type of graph we want to work with .we use the definition given in .+ if the graph is non - bipartite then as , and the mixing rate is given by as it is the case with the mixing rate , the * mixing time * can be defined in several ways .basically , the notion of mixing time comprises the number of steps one must perform a classical discrete random walk before its distribution is close to its limiting distribution .+ * mixing time * _ . let be an ergodic markov chain which induces a probability distribution on the states at time .also , let denote the limiting distribution of .the mixing time is then defined as + where is a standard distance measure .for example , we could use the total variation distance , defined as .thus , the mixing time is defined as the first time such that is within distance of at all subsequent time steps , irrespective of the initial state .[ mixing_time ] let us now provide two examples of hitting times on graphs .it has been shown in eq .( [ unrestricted_classical_random_walk ] ) that , for an unrestricted classical discrete random walk on a line with , the probability of finding the walker in position after steps is given by using stirling s approximation and after some algebra , we find we know that eq .( [ unrestricted_classical_random_walk ] ) is a binomial distribution , thus it makes sense to study the mixing time in two different vertex populations : and ( the first population is mainly contained under the bell - shape part of the distribution , while the second can be found along the tails of the distribution . ) in both cases , we shall find the expected hitting time by calculating the inverse of eq .( [ binomial_approx_stirling ] ) ( i.e. , the expected time of the geometric distribution ) : * case * .since * case * .let and , where and are small integer numbers .since thus , the hitting time for a given vertex of an -step unrestricted classical discrete random walk on a line depends on which region vertex is located in . if then it will take steps to reach , in average .however , if then it will take an exponential number of steps to reach , as one would expect from the properties of the binomial distribution. the definitions of discrete random walks on a circle and on a line with two reflecting barriers are very similar .in fact , the only difference is the behavior of the extreme nodes .let be a stochastic process which consists of the path of a particle which moves along a circle with steps of one unit at time intervals also of one unit .the circle has different position sites ( for an example with 10 nodes , see fig .( [ circle ] ) ) . at any step ,the particle has a probability of going to the right and of going to the left .if the particle is on at time then the particle will move to with probability and to with probability . 
similarly , if the particle is on at time then at time the particle will go to with probability and to with probability . according to theorem [ theorem_stationary_distribution_undirected_graph ] ,the markov chain defined by has a stationary distribution given by and a hitting time given by ( ) discrete quantum walks on a line ( dqwl ) is the most studied model of discrete quantum walks . as its name suggests ,this kind of quantum walks are performed on graphs composed of a set of vertices and a set of edges ( i.e. , ) , and having each vertex two edges , i.e. . studying dqwl is important in quantum computation for several reasons , including : + + 1 .dqwl can be used to build quantum walks on more sophisticated structures like circles or general graphs .dqwl is a simple model that can be exploited to explore , find and understand relevant properties of quantum walks for the development of quantum algorithms .dqwl can be employed to test the quantumness of experimental realizations of quantum computers .+ in , meyer made two contributions to the study of dqwl while working on models of quantum cellular automata ( qca ) and quantum lattice gases : + 1 .he proposed a model of quantum dynamics that would be used later on to analytically characterize dqwl .he showed that a quantum process in which , at each time step , a quantum particle ( the walker ) moves in superposition both to left and right with equal amplitudes , is physically impossible in general , the only exception being the trivial motion in a single direction .+ + in order to perform a discrete dqwl with non - trivial evolution , it was proposed in and to use an additional quantum system : a coin .thus , a dqwl comprises two quantum systems , * coin * and * walker * , along with a unitary coin operator ( to toss a coin ) and a conditional shift operator ( to displace the walker to either left or right depending on the accompanying coin state component . ) in a different perspective , patel _ et al _ proposed in to eliminate the use of coins by rearranging the hamiltonian operator associated with the evolution operator of the quantum walk ( however , there is a price to be paid on the translation invariance of the quantum walk . )moreover , hines and stamp have proposed the development of quantum walk hamiltonians in order to reflect the properties of potential experimental realizations of quantum walks in their mathematical structure . 
motivated by , hamada _ et al _ wrote a general setting for qca , developed a correspondence between dqwl and qca , and applied this connection to show that the quantum walk proposed in could be modelled as a qca .the relationship between qca and quantum walks has been indirectly explored by meyer .additionally , konno _ et al _ have studied the relationship between quantum walks and cellular automata , van dam has shown that it is possible to build a quantum cellular automaton capable of universal computation , and gross _et al _ have introduced a comprehensive mathematical setting for developing index theory of one - dimensional automata and cellular automata .we now review the mathematical structure of a basic coined dqwl .the main components of a coined dqwl are a walker , a coin , evolution operators for both walker and coin , and a set of observables : + _ * walker and coin : * _ the walker is a quantum system living in a hilbert space of infinite but countable dimension .it is customary to use vectors from the canonical ( computational ) basis of as position sites " for the walker .so , we denote the walker as and affirm that the canonical basis states that span , as well as any superposition of the form subject to , are valid states for .the walker is usually initialized at the origin , i.e. . the coin is a quantum system living in a 2-dimensional hilbert space .the coin may take the canonical basis states and as well as any superposition of these basis states .therefore and a general normalized state of the coin may be written as , where .the total state of the quantum walk resides in .it is customary to use product states of as initial states , that is , .+ _ * evolution operators : * _ the evolution of a quantum walk is divided into two parts that closely resemble the behavior of a classical random walk . in the classical case, chance plays a key role in the evolution of the system . in the quantum case ,the equivalent of the previous process is to apply an evolution operator to the coin state followed by a conditional shift operator to the total quantum system .the purpose of the coin operator is to render the coin state in a superposition , and the randomness is introduced by performing a measurement on the system after both evolution operators have been applied to the total quantum system several times . among coin operators ,customarily denoted by , the hadamard operator has been extensively employed : for the conditional shift operator use is made of a unitary operator that allows the walker to go one step forward if the accompanying coin state is one of the two basis states ( e.g. ) , or one step backwards if the accompanying coin state is the other basis state ( e.g. ) . a suitable conditional shift operator has the form consequently , the operator on the total hilbert space is and a succinct mathematical representation of a discrete quantum walk after steps is where .+ _ * observables : * _ several advantages of quantum walks over classical random walks are a consequence of interference effects between coin and walker after several applications of ( other advantages come from quantum entanglement between walker(s ) and coin(s ) as well as partial measurement and/or interaction of coins and walkers with the environment . ) however, we must perform a measurement at some point in order to know the outcome of our walk . 
to do so, we define a set of observables according to the basis states that have been used to define coin and walker .there are several ways to extract information from the composite quantum system .for example , we may first perform a measurement on the coin using the observable we show in fig .( [ hadamard_skewed ] ) the probability distributions of two 100-steps dqwl .coin and shift operators for both quantum walks are given by eqs .( [ hadamard_single ] ) and ( [ shift_single ] ) respectively .the dqwls from plots ( a ) and ( b ) have corresponding initial quantum states and . the first evident property of these quantum walks is the skewness of their probability distributions , as well as the dependance of the symmetry of such a skewness from the coin initial quantum state ( for plot ( a ) and for plot ( b ) . ) this skewness comes from constructive and destructive interference due to the minus sign included in eq .( [ hadamard_single ] ) . also , we notice a quasi - uniform behavior in the central area of both probability distributions , approximately in the interval ] is given by and its inverse is given by [ dtft ] ambainis _ et al _ employ the following slight variant of the dtft : where and \rightarrow \mathbb{c} ] .[ amplitudes_dqwl ] the amplitudes for even ( odd ) at odd ( even ) are zero , as it can be inferred from the definition of the quantum walk .now we have an analytical expression for and , and taking into account that , we are interested in studying the asymptotical behavior of and . integrals in theorem [ amplitudes_dqwl ] are of the form the asymptotical properties of this kind of integral can be studied using the method of stationary phase ( and ) , a standard method in complex analysis .using such a method , the authors of and reported the following theorems and conclusions : let be any constant , and be in the interval .then , as , we have ( uniformly in ) where , , , and . [ theorem_probabilities_dqwl ]let with fixed . in case for which and .[ asymptotics_amplitudes ] * conclusions * + 1 . *quasi - uniform behavior and standard deviation*. the wave function and ( theorem [ amplitudes_dqwl ] ) is almost uniformily spread over the region for which is in the interval ] .in fact , the exact probability value in that interval is .furthermore , the position probability distribution spreads as a function of , i.e. ] and ] , and .as in the hadamard walk case , the properties of the quantum walk defined by eqs .( [ general_quantum_walk],[quantum_walk_general ] ) may be studied by inverting the fourier transform and using methods of complex analysis .let us concentrate on the phase factors of the coin initial state ( eq .( [ general_coin_initial_state ] ) ) and of the coin operator ( eq .( [ fourier_general_coin_operator ] ) . )note that we can choose many pairs of values ( ) for any phase factor .so , if we fix a value for ( i.e. if we use only one coin operator ) we can always vary the initial coin state ( eq . ( [ general_coin_initial_state ] ) ) to get a value for so that we can compute a quantum walk with a certain phase factor value .it is in this sense that we say that the study of a hadamard walk suffices to analyze the properties of all unrestricted quantum walks on a line . 
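The qualitative features just discussed (the skewness induced by the initial coin state and the linear-in-time spread) can be reproduced with a short simulation. The sketch below propagates the amplitudes of a coined Hadamard walk on the line and prints the mean and standard deviation of the position distribution; the convention that coin state |0> moves the walker to the right, and the chosen step counts, are assumptions of this example.

```python
import numpy as np

def hadamard_walk(steps, coin0=(1.0, 0.0)):
    """Return position probabilities after `steps` steps of a Hadamard walk
    started at the origin with coin state coin0 = (amplitude of |0>, of |1>)."""
    size = 2 * steps + 1                      # positions -steps .. +steps
    psi = np.zeros((size, 2), dtype=complex)  # psi[x, c]: walker at x, coin c
    psi[steps, 0], psi[steps, 1] = coin0      # the origin sits at index `steps`
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                       # coin toss on every site
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]              # coin |0>: move one site right
        new[:-1, 1] = psi[1:, 1]              # coin |1>: move one site left
        psi = new
    return (np.abs(psi) ** 2).sum(axis=1)     # trace out the coin

for t in (50, 100, 200):
    prob = hadamard_walk(t)
    x = np.arange(-t, t + 1)
    mean = (x * prob).sum()
    std = np.sqrt((x**2 * prob).sum() - mean**2)
    print(f"t={t:4d}  mean={mean:7.2f}  std={std:7.2f}  std/t={std/t:.3f}")
```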
in fig .( [ hadamard_symmetry ] ) we show the probability distributions of three hadamard walks with different initial coin states .\(a ) ( b ) ( c ) on further studies of coined quantum walks on a line , villagra _ et al _ present a closed - form of the probability that a quantum walk arrives at a given vertex after steps , for a general symmetric _su(2 ) _ coin operator .the properties of discrete quantum walks on a line with one and two absorbing barriers were first studied in . for the semi - infinite discrete quantum walk on a line , theorem [ quantum_walk_one_barrier ] was reported let us denote by the probability that the measurement of whether the particle is at the location of the absorbing boundary ( location in ) .[ quantum_walk_one_barrier ] theorem [ quantum_walk_one_barrier ] is in stark contrast with its classical counterpart ( theorem 8 of ) , as the probability of eventually being absorbed ( in the classical case ) is equal to unity .furthermore , yang , liu and zhang have introduced an interesting and relevant result in : the absorbing probability of theorem [ quantum_walk_one_barrier ] decays faster than the classical case and , consequently , the conditional expectation of the quantum - mechanical case is finite ( as opposed to the classical case in which the corresponding conditional expectation is infinite . ) the case of a quantum walk on a line with two absorbing boundaries was also studied in , and their main result is given in theorem [ quantum_walk_two_barriers ] . for each ,let be the probability that the process eventually exits to the left . also define to be the probability that the process exits to the right .then [ quantum_walk_two_barriers ] in , bach _et al _ revisit theorems [ quantum_walk_one_barrier ] and [ quantum_walk_two_barriers ] with detailed corresponding proofs using both fourier transform and path counting approaches as well as prove some conjectures given in .moreover , in , bach and borisov further study the absorption probabilities of the two - barrier quantum walk .finally , konno studied the properties of quantum walks with boundaries using a set of matrices derived from a general unitary matrix together with a path counting method ( . )the effect of different and multiple coins has been studied by several authors . in ,inui and konno have analyzed the localization phenomena due to eigenvalue degeneracies in one - dimensional quantum walks with 4-state coins ( the results shown in have some similarities with the quantum walks with maximally entangled coins reported by venegas - andraca _ et al _ in in the sense that both quantum walks tend to concentrate most of their probability distributions about the origin of the walk , i.e. the localization phenomenon is present . 
) moreover , in , konno , inui and segawa have derived an analytical expression for the stationary distribution of one - dimensional quantum walks with 3-state coins that make the walker go either right or left or , alternatively , rest in the same position .additionally , ribeiro _ et al _ have considered quantum walks with several biased coins applied aperiodically , dalessandro _ et al _ have studied non - stationary quantum walks on a cycle using different coin operators at each computational step , and feinsilver and kocik have proposed the use of krawtchouk matrices ( via tensor powers of the hadamard matrix ) for calculating quantum amplitudes .linden and sharam have formally introduced a family of quantum walks , inhomogeneous quantum walks , being their main characteristic to allow coin operators to depend on both position and coin registers .shikano and katsura have studied the properties of self - duality , localization and fractality on a generalization of the inhomogeneous quantum walk model defined in , konno has presented and proved a theorem on return probability for inhomogeneous walks which are periodic in position , machida has found that combining the action of two unitary operators in an inhomogenenous quantum walk will result in a limit distribution for that can be expressed as a function and a combination of density functions ( for a detailed analisys of weak convergence please go to subsection [ qw_limit_theorems ] ) , and konno has proved that the return probability of a one - dimensional discrete - time quantum walk can be written in terms of elliptic integrals .in , brun _ et al _ analyzed the behavior of a quantum walk on the line using both 2-dimensional coins and single coins of dimension , and sewaga _ et al _ have computed analytical expressions for limit distributions of quantum walks driven by 2-dimensional coins as well as analyzed the conditions upon which applying 2-dimensional coins to a quantum walk leads to classical behavior .furthermore , bauls _ have studied the behavior of quantum walks with a time - dependent coin and machida and konno have produced limit distributions for such quantum walks with , chandrashekar has proposed a generic model of quantum walk whose dynamics is described by means of a hamiltonian with an embedded coin , and romanelli has generalized the standard definition of a discrete quantum walk and shown that appropriate choices of quantum coin lead to obtaining a variety of wave - function spreading .finally , ahlbrecht _ et al _ have produced a comprehensive analysis of asymptotical behavior of ballistic and diffusive spreading , using fourier methods together with perturbed and unperturbed operators .the links between classical and quantum versions of random walks have been studied by several authors under different perspectives : + 1 ) simulating classical random walks using quantum walks .studies on this area ( e.g. ) would provide us not only with interesting computational properties of both types of walks , but also with a deeper insight of the correspondences between the laws that govern computational processes in both classical and quantum physical systems .+ 2 ) transitions from quantum walks into classical random walks .this area of research is interesting not only for exploring computational properties of both kinds of walks , but also because we would provide quantum computer builders ( i.e. 
experimental physicists and engineers ) with some criteria and thresholds for testing the quantumness of a quantum computer .moreover , these studies have allowed the scientific community to reflect on the quantum nature of quantum walks and some of their implications in algorithm development ( in fact , we shall discuss the quantum nature of quantum walks in subsection [ quantumness ] . )decoherence is a physical phenomenon that typically arises from the interaction of quantum systems and their environment .decoherence used to be thought of as an annoyance as it used to be equated with loss of quantum information .however , it has been found that decoherence can indeed play a beneficial role in natural processes ( e.g. ) as well as produce interesting results for quantum information processing ( e.g. . )in addition to these properties , decoherence via measurement or free interaction with a classical environment is a typical framework for studying transitions of quantum walks into classical random walks .thus , for the sake of getting a deeper understanding of the physical and mathematical relations between quantum systems and their environment , together with searching for new paradigms for building quantum algorithms , studying decoherence properties and effects on quantum walks is an important field in quantum computation .tregenna and kendon have studied the impact of decoherence in quantum walks on a line , cycle and the hypercube , and have found that some of those decoherence effects could be useful for building quantum algorithms , strauch has also studied the effects of decoherence on _ continuous - time _ quantum walks on the hypercube , and fan _ et al _ have proposed a convergent rescaled limit distribution for quantum walks subject to decoherence . brun _et al _ have shown that the quantum - classical walk transition could be achieved via two possible methods , in addition to performing measurements : decoherence in the quantum coin and the use of higher - dimensional coins , ampadu has focused on generalizing the method of decoherent quantum walk proposed in for two - dimensional quantum walks , and annabestani _ et al _ have generalized the results of by providing analytical expressions for different kinds of decoherence . moreover , by using a discrete path approach , it was shown by konno that introducing a random selection of coins ( that is , amplitude components for coin operators are chosen randomly , being under the unitarity constraint ) makes quantum walks behave classically . in ,childs _ et al _ make use of a family of graphs ( e.g. 
fig .( [ trees](a ) ) to exemplify the different behavior of ( continuous ) quantum walks and classical random walks .several authors have addressed the physical and computational properties of decoherence in quantum walks : ermann _ et al _ have inspected the decoherence of quantum walks with a complex coin , where the coin is part of a larger quantum system , chandrashekar et al have studied symmetries and noise effects on coined discrete quantum walks , and obuse and kawakami have studied one - dimensional quantum walks with spatial or temporal random defects as a consequence of interactions with randome environments , having found that this kind of quantum walks can avoid complete localization .also , kendon _ et al _ have extensively studied the computational consequences of coin decoherence in quantum walks , alagi and russell have studied the effects of independent measurements on a quantum walker travelling along the hypercube ( please see def .[ hypercube_1 ] and fig .[ hypercube_3d ] ) , kok _ et al _ have studied the quantum to classical transition of a quantum walk by introducing randoms phase shifts in the coin particle , romanelli has studied one - dimensional quantum walks subjected to decoherence induced by measurements perfomed with timing provided by the lvi waiting time distribution , prez and romanelli have analyzed a one - dimensional discrete quantum walk under decoherence , on the coin degree of freedom , with a strong spatial dependence ( decoherence acts only when the walker moves on one half of the line ) , and oliveira _ et al _ have analyzed two - dimensional quantum walks under a decoherence regime due to random broken links on the lattice .furthermore and taking as basis a global chirality probability distribution ( gcd ) independent of the walker s position proposed in , romanelli has studied the behavior of one - dimensional quantum walks under two models of decoherence : periodic measurements of position and chirality as well as randomly broken links on the one - dimensional lattice .additionally , chisaki _ et al _ have studied both quantum to classical and classical to quantum transitions using discrete - time and classical random walks , and have also introduced a new kind of quantum walk entitled final - time - dependent discrete - time quantum walk ( fd - dtqw ) together with a limit theorem for fd - dtqw . 
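One simple way to observe the quantum-to-classical transition surveyed above is to measure the coin with some probability at every step and average over measurement records. The Monte Carlo sketch below implements this particular decoherence model; the measurement probability p_meas, the trajectory-averaging approach and all numerical values are illustrative choices and are not taken from any specific paper cited above. For p_meas = 0 the spread is ballistic, while for p_meas = 1 it becomes essentially diffusive.

```python
# Trajectory-averaged Hadamard walk with probabilistic projective measurement of
# the coin at every step (one common decoherence model; parameters are illustrative).
import numpy as np

rng = np.random.default_rng(1)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def one_trajectory(steps, p_meas):
    size = 2 * steps + 1
    psi = np.zeros((size, 2), dtype=complex)
    psi[steps, 0] = 1.0                        # walker at the origin, coin |0>
    for _ in range(steps):
        psi = psi @ H.T                        # coin toss
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]               # coin |0>: step right
        new[:-1, 1] = psi[1:, 1]               # coin |1>: step left
        psi = new
        if rng.random() < p_meas:              # measure the coin (Born rule)
            probs = (np.abs(psi) ** 2).sum(axis=0)
            c = 0 if rng.random() < probs[0] else 1
            psi[:, 1 - c] = 0.0                # project onto the outcome
            psi /= np.sqrt(probs[c])           # renormalize
    return (np.abs(psi) ** 2).sum(axis=1)

steps, n_traj = 100, 200
x = np.arange(-steps, steps + 1)
for p_meas in (0.0, 0.2, 1.0):
    prob = np.mean([one_trajectory(steps, p_meas) for _ in range(n_traj)], axis=0)
    std = np.sqrt((x**2 * prob).sum() - ((x * prob).sum())**2)
    print(f"p_meas={p_meas:.1f}  std after {steps} steps = {std:6.2f}")
```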
in , zhang studied the effect of increasing decoherence ( caused by measurements probabilistically performed on both walker and coin ) in coined quantum walks and derived analytical expressions for position - related probability distributions , annabestani _et al _ have studied the impact of decoherence on the walker in one - dimensional quantum walks , srikanth _et al _ have quantified the degree of quantumness in decoherent quantum walks using measurement - induced disturbance , gnlol _ et al _ have studied decoherence phenomena in two - dimensional quantum walks with traps , and rao _ et al _ have analyzed noisy quantum walks using measurement - induced disturbance and quantum discord .moreover , liu and petulante have proposed a model for decoherence in an -site cycle together with a definition for decoherence time , as well as derived analytical expressions for i ) the asymptotic dynamics of discrete quantum walks under decoherence on the coin degree of freedom and on both coin and walker degrees of freedom running on n - site cycles , ii ) the order ( _ big o _ ) of the mixing time for the time - averaged probability of a quantum walk subject to decoherence on the coin quantum system , and iii ) the limiting behavior of quantum entanglement between coin and walker under the same decoherence regime .schreiber _ et al _ have analyzed the effect of decoherence and disorder in a photonic implementation of a quantum walk , and have shown how to use dynamic and static disorder to produce diffusive spread and anderson localization , respectively .in addition , ahlbrecht _ et al _ have produced a detailed manuscript in which several topics from the field of discrete quantum walks are analyzed , including ballistic and diffusive behavior , decoherent and invariance on translation , asymptotic behavior with perturbation , together with several examples .the central limit theorem plays a key role in determining many properties of statistical estimates .this key role has been a crucial motivation for members of the quantum computing community to derive limit distributions for quantum walks . among the scientific contributions produced in this field, the seminal papers produced by norio konno and collaborators have been central to the effort of deriving analytical results and establishing solid grounds for quantum walk limit distributions .let us start this summary with a fundamental result for quantum walks on a line : konno s weak limit theorem ( following mathematical statements are taken verbatim from corresponding papers . )+ let be the set of initial qubit states of a one - dimensional quantum walk , and let denote a one - dimensional quantum walk at time starting from initial qubit state with evolution operator given by a unitary matrix \label{u_unitary_konno}\ ] ] using a path integral approach , konno proves the following theorem : we assume . if , then where has the following density , known as * konno s density function * ) = { \sqrt{1 - |a|^2 } \over \pi ( 1 - x^2 ) \sqrt{|a|^2 - x^2 } } \left\ { 1- \left ( |\alpha|^2 - |\beta|^2 + { a \alpha \overline{b \beta } + \overline{a \alpha } b\beta \over |a|^2 } \right ) x \right\}\ ] ] for with and means that converges in distribution to a limit .[ weak_limit_theorem_konno ] that is , the quantity , later on named a _ pseudovelocity _, does converge to the limit distribution . in , hamada _et al _ study the symmetric ] cases of konno s density function . + a plethora of central results are published in . 
among them ,i mention the following : * * symmetry of probability distribution *. + let us define the following sets : + ^t \in \phi : \right\ } , \end{aligned}\ ] ] + [ symmetry_qw ] + where is the set of the positive integers .then , + let and be as in def .( [ symmetry_qw ] ) .suppose .then we have [ theorem_symmetry_qw_konno ] + theorem [ theorem_symmetry_qw_konno ] is a generalization of the result given by for the hadamard walk , i.e. a one - dimensional quantum walk with the hadamard operator ( def .[ hadamard_single ] ) as evolution operator . also , nayak and vishwanath discussed the symmetry of distribution and showed that {}^t \in \phi_s ] , theorem [ weak_limit_theorem_konno ] implies where is the indicator function , that is , if , and if + compare eq .( [ probability_pseudovelocity_konno ] ) with the corresponding result for the classical symmetric random walk starting from the origin , eq .( [ classical_moivre_laplace_probability_konno ] ) : + in addition to the scientific contributions already mentioned in previous sections , we now provide a summary of more results on limit distributions .konno has proved the following weak limit theorem for continuous quantum walks : let us denote a continuous - time quantum walk on by whose probability distribution is defined by for any location and time then , the following result holds for a continuous - time quantum walk on a line : [ teorema_konno_limite_debil ] in , grimmett _ et al _ used fourier transform methods to also rigorously prove weak convergence theorems for and dimensional quantum walks and , using the definition of pseudovelocities introduced by konno , the fourier transform method proposed in and the one - parameter family of quantum walks proposed by inui _et al _ in , watabe _ et al _ have derived analytical expressions for the limit and localization distributions of walker pseudovelocities in two - dimensional quantum walks , while sato _et al _ have derived limit distributions for qudits in one - dimensional quantum walks , liu and petulante have presented limiting distributions for quantum markov chains , and chisaki _ et al _ have also deduced limit theorems for ( localization ) and ( weak convergence ) for quantum walks on cayley trees . 
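Konno's weak limit theorem can be checked numerically. For the Hadamard walk with the symmetric initial coin state (|0> + i|1>)/sqrt(2), the density above reduces to f(x) = 1/(pi (1 - x^2) sqrt(1 - 2 x^2)) on |x| < 1/sqrt(2). The sketch below compares a locally averaged version of the exact distribution of the rescaled position X_t / t with this limit density; the time t, the evaluation points and the smoothing window are choices made for this illustration, and at finite t the walk distribution still carries oscillations around the limit density.

```python
import numpy as np

def hadamard_walk_probs(t, coin0):
    """Exact position probabilities of the coined Hadamard walk at time t."""
    psi = np.zeros((2 * t + 1, 2), dtype=complex)
    psi[t, 0], psi[t, 1] = coin0
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(t):
        psi = psi @ H.T
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]
        new[:-1, 1] = psi[1:, 1]
        psi = new
    return (np.abs(psi) ** 2).sum(axis=1)

def konno_density_hadamard(x):
    """f(x) = 1 / (pi (1 - x^2) sqrt(1 - 2 x^2)) for |x| < 1/sqrt(2), else 0."""
    if abs(x) >= 1 / np.sqrt(2):
        return 0.0
    return 1.0 / (np.pi * (1 - x**2) * np.sqrt(1 - 2 * x**2))

t = 400
prob = hadamard_walk_probs(t, (1 / np.sqrt(2), 1j / np.sqrt(2)))  # symmetric coin
half = 12                                       # average over 2*half+1 allowed sites
for v in (0.0, 0.25, 0.5, 0.65):
    m = int(round(v * t))
    m += (m + t) % 2                            # keep the parity allowed at time t
    sites = m + 2 * np.arange(-half, half + 1)  # allowed sites are spaced 2 apart
    empirical = prob[sites + t].sum() / ((2 * half + 1) * 2 / t)
    print(f"x={m / t:5.3f}  walk density ~ {empirical:.4f}  "
          f"Konno density = {konno_density_hadamard(m / t):.4f}")
```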
furthermore and based on the fouriertransform approach developed by grimmett _et al _ , machida and konno have deduced a limit theorem for discrete quantum walks with 2-dimensional time - dependent coins .in addition , machida has produced analytical expressions for weak convergence as well as limit distributions for a localization model of a 2-state quantum walk , konno has derived limit theorems using path counting methods for discrete - time quantum walks in random ( both quenched and annealed ) environments , and liu has derived a weak limit distribution as well as formulas for stationary probability distribution for quantum walks with two - entangled coins .motivated by the properties of quantum walks with many coins published by brun _et al _ in , segawa and konno have used the wigner formula of rotation matrices for quantum walks published by miyazaki _et al _ in to rigorously derive limit theorems for quantum walks driven by many coins .also , sato and katori have analyzed konno s pseudovelocities within the context of relativistic quantum mechanics , di molfetta and debbasch have proposed a subset of quantum walks , named ( 1-jets ) , to study how continuous limits can be computed for discrete - time quantum walks .in addition , based on definitions and concepts found in , ampadu proposed a mathematical model for the localization and symmetrization phenomena in generalized hadamard quantum walks as well as proposed conditions for the existence of localization . moreover , based on mc gettrick s model of discretequantum walks with memory and using the fourier - based approach proposed by grimmett _et al _ , konno and machida have proved two new weak limit distribution theorems for that kind of quantum walk .finally , in konno _ et al _ have studied three kinds of measures ( time averaged limit measure , weak limit measure and stationary measure ) as well as studied conditions for localization in a family of inhomogeneous quantum walks , while chisaki _et al _ have produced limit theorems for discrete quantum walks running on joined half lines ( i.e. lines with sites defined on and ( semi)homogeneous trees . in condensed - matter physics ,localization is a well - studied physical phenomenon . according to kramer and mackinnon ,it is likely that the first paper in which localization was discussed within the context of quantum mechanical phenomena is by p. w. anderson .since then , localization has been extensively studied ( see the compilation of textbooks and reviews on localization provided in ) and , consequently , different cualitative and mathematical definitions have been provided for this concept .nevertheless , the essential idea behind localization is _ the absence of diffusion of a quantum mechanical state _ , which could be caused by random or disordered environments that break the periodicity in the dynamics of the physical system .moreover , localization could also be produced by evolution operators that mimic the behavior of disordered media , as shown by chandrashekar in .as for quantum walks , localization phenomena has been detected as a result of either eigenvalue degeneracy ( typically caused by using evolution operators that are all identical except for a few sites ) or choosing coin operators that are site dependent . in order to have a precise and inclusive introduction to localization in quantum walks , we direct the reader s attention to by a. joye , by a. joye and m. merkli , and by e. hamza and a. 
joye , and references provided therein .in addition to these references and those presented in previous sections in which we have incidentally addressed the topic of localization , we also mention the numerical simulations of quantum walks on graphs shown by tregenna _et al _ , in which the localization phenomenon , due to the use of grover s operator ( def .( [ grover_coin_operator ] ) ) in a 2-dimensional quantum walk , was detected .inspired by this phenomenon , inui _ et al _ proved in that the key factor behind this localization phenomenon is the degeneration of the eigenvectors of corresponding evolution operator , inui and konno have further studied the relationship between localization and eigenvalue degeneracy in the context of particle trapping in quantum walks on cycles , and ide _ et al _ have computed the return probability of final - time dependent quantum walks .based on the study of aperiodic quantum walks given in , romanelli has proposed the computation of a trace map for fibonacci quantum walks ( this is a discrete quantum walk with two coin operators arranged in quasi - periodic sequences following a fibonacci prescription ) and ampadu has shown that localization does not occur on fibonacci quantum walks . in ,grnbaum _ et al _ have studied recurrence processes on discrete - time quantum walks following a particle absorption monitoring approach ( i.e. a projective measurement strategy ) , tefak _ et al _ have analyzed the plya number ( i.e. recurrence without monitoring particle absorption ) for biased quantum walks on a line as well as for -dimensional quantum walks , and darz and kiss have also proposed a plya number for continuous - time quantum walks . in ,tefak _ et al _ have proposed a criterion for localization and kollr _ et al _ found that , when executing a discrete - time quantum walk on a triangular lattice using a three - state grover operator , there is no localization in the origin . furthermore , chandrashekar has found that one - dimensional discrete coined quantum walks fail to fully satisfy the quantum recurrence theorem but suceed at exhibiting a fractional recurrence that can be characterized using the quantum plya number , ampadu has analyzed the motion of particles on a one - dimensional hadamard walk and has presented a theoretical criterion for observing quantum walkers at an initial location with high probability , has also studied the conditions upon which a biased quantum walk on the plane is recurrent , as well as studied the localization phenomenon in two - dimensional five - state quantum walks .in , cantero _ et al _ present an alternative method to formulate the theory of quantum walks based on matrix - valued szeg orthogonal polynomials , known as the cgmv method , associated with a particular kind of unitary matrices , named cmv matrices , and hamada _et al _ have independently introduce the idea of employing orthogonal polynomials for deriving analytical expressions for limit distributions of one - dimensional quantum walks .based on the mathematical formalism delivered in , konno and segawa have studied quantum walks on a half line , focusing on analyzing the corresponding spectral measure as well as on localization phenomena for this kind of quantum walks .also based on the cmv method presented in , ampadu has studied both limit distributions and localization of quantum walks on the half plane . 
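As a small numerical companion to the recurrence and return-probability results surveyed above, the sketch below computes the return probability P(X_t = 0) of the homogeneous Hadamard walk with a symmetric initial coin state; the initial state and the set of times are illustrative choices. Up to oscillations at finite t, the printed product t P(X_t = 0) is expected to stay of order one, in contrast with the classical symmetric walk for which it is sqrt(t) P(X_t = 0) that stays of order one.

```python
import numpy as np

def return_probability(t):
    """P(X_t = 0) for the Hadamard walk with symmetric initial coin state."""
    psi = np.zeros((2 * t + 1, 2), dtype=complex)
    psi[t, 0], psi[t, 1] = 1 / np.sqrt(2), 1j / np.sqrt(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(t):
        psi = psi @ H.T
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]
        new[:-1, 1] = psi[1:, 1]
        psi = new
    return float((np.abs(psi[t]) ** 2).sum())  # probability at the origin

for t in (50, 100, 200, 400):                  # even times, so the origin is allowed
    p0 = return_probability(t)
    print(f"t={t:4d}  P(X_t=0)={p0:.5f}   t*P={t * p0:.3f}")
```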
moreover , in , cantero _ et al _ have produced an extensive analysis of the asymptotical behavior of quantum walks : starting with a definition for a quantum walk with one defect ( i.e. a one - dimensional quantum walk with constant coins except for the origin ) and using the cgmv method , cantero _ et al _ have classified localization properties as well as derived analytical expressions for return probabilities to the origin . finally , grnbaum and velzquez have studied models of quantum walks on the non - negative integers using riez probability measures . on further studies , konno mathematically proved that inhomogenenous discrete - time quantum walks do exhibit localization , shikano and katsura have proved that , for a class of inhomogenenous quantum walks , there is a limit distribution that is localized at the origin , as well as found , through numerical studies , that the eigenvalue spectrum of such inhomogenenous walks exhibit a fractal structure similar to that of the hofstadter butterfly .also , machida has proposed a localization model of quantum walks on a line as well as computed a limit distribution for 2-state inhomogenenous quantum walks with different unitary operators applied in different times , and chandrashekar has proposed hamiltonians for walking on different lattices as well as found links between localization and spatially static disordered operations , and presented a scheme to induce localization in a bose - einsten condensate .finally , in , ahlbrecht _ et al _ have delivered a review on disordered one - dimensional quantum walks and dynamical localization .a plethora of numerical , analytical and experimental results have made the field of quantum walks rich and solid .in addition to the results already mentioned in this review , i would like to direct the reader s attention to the following results : in , shikano _ et al _ have proposed using discrete - time quantum walks to analyze problems in quantum foundations .specifically , shikano _ et al _ have derived an analytical expression for the limit distribution of a discrete - time quantum walk with periodic position measurements and analyzed the concepts of randomness and arrow of time . also ,gnlol _ et al _ have found that the quantum walker survival probability in discrete - time quantum walks running of cycles with traps exhibits a piecewise stretched exponential character , kurzyski and wjcik and shown that quantum state transfer is achievable in discrete - time quantum walks with position - dependent coins , stang _ et al _ have introduced a history - dependent discrete - time quantum walk ( i.e. 
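A minimal numerical illustration of defect-induced localization is the following: a Hadamard walk in which an extra phase e^{2 pi i phi} is applied only at the origin. The defect strength phi = 1/3 and the initial coin state are illustrative choices; with the defect switched off the probability of remaining near the origin decays, while with the defect switched on part of the wave packet can stay trapped, the trapped fraction depending on phi and on the initial state.

```python
# Sketch of an inhomogeneous walk with a single phase defect at the origin
# (constant Hadamard coin everywhere, an extra phase applied only at x = 0).
import numpy as np

def near_origin_prob(steps, phi):
    size = 2 * steps + 1
    psi = np.zeros((size, 2), dtype=complex)
    psi[steps, 0], psi[steps, 1] = 1 / np.sqrt(2), 1j / np.sqrt(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    defect = np.exp(2j * np.pi * phi)
    for _ in range(steps):
        psi = psi @ H.T
        psi[steps, :] *= defect               # extra phase only at the origin
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]
        new[:-1, 1] = psi[1:, 1]
        psi = new
    prob = (np.abs(psi) ** 2).sum(axis=1)
    return prob[steps - 2: steps + 3].sum()   # probability within |x| <= 2

for steps in (50, 100, 200, 400):
    print(f"t={steps:4d}  no defect: {near_origin_prob(steps, 0.0):.4f}   "
          f"phi=1/3: {near_origin_prob(steps, 1 / 3):.4f}")
```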
a quantum walk with memory ) and proposed a correlation function for measuring memory effects on the evolution of discrete - time quantum walks , navarrete - benlloch _ et al _ have introduced a nonlinear version of the optical galton board , whitfield _ et al _ have introduced an axiomatic approach for a generalization of both continuous and discrete quantum walks that evolve according to a quantum stochastic equation of motion ( helps to realize why the behavior of some decoherent quantum walks is different from both classical and coherent quantum walks ) , xu has derived analytical expressions for position probability distributions on unrestricted quantum walks on the line , together with an introduction to a quantum walk on infinite or even - numbered size of lattices which is equivalent to the traditional quantum walk with symmetrical initial state and coin parameter , chandrashekar has introduced a quantum walk version of parrondo s games and , in , chandrashekar _ et al _ have introduced some mathematical relationships between quantum walks and relativistic quantum mechanics and have proposed hamiltonian operators ( that retain the coin degree of freedom ) to run quantum walks on different lattices ( e.g. cubic , kagome and honeycomb lattices ) as well as to study different kinds of disorder on quantum walks .also , feng _ et al _ have introduced the idea of using quantum walks to study waves , cantero _ et al _ show how to use matrix valued orthogonal polynomials defined in the real line to build a large class of quantum walks , and jacobs has analyzed quantum walks within the mathematical framework of coalgebras , monads and category theory .mc gettrick has proposed a model of discrete quantum walks with up to two memory steps and derived analytical expressions for corresponding quantum amplitudes .based on , konno and machida have proved two new weak limit distribution theorems .moreover , romanelli has developed a thermodynamical approach to entanglement quantification between walker and coin and de valcrcel _ et al _ have assigned extended probability distributions as initial walker position in a discrete quantum walk , and have found a particular initial condition for producing a homogeneous position distribution ( interestingly enough , a similar quasi - homogeneous position probability distribution has been shown in as a result of a measurement - induced decoherent process in a discrete quantum walk . )also , goswani _ et al _ have extended the concept of persistence ( i.e. the time during which a given site remains unvisited by the walker ) , konno and sato have presented a formula for the transition matrix of a discrete - time quantum walk in terms of the second weighted zeta function , and konno _ et al _ have shown several relationships between the heun and gauss differential equations with quantum walks . in ,konno has introduced the notion of sojourn time for hadamard quantum walks and has also derived analytical expressions for corresponding probability distributions , while in ampadu has shown the inexistence of sojourn time for _ grover _ quantum walks .et al _ have presented foundational definitions and statistics of a family of discrete quantum walks with an anyonic walker and lehman _et al _ have modelled the dynamics on a non - abelian anyonic quantum walk and found that , asymptotically , the statistical dynamics of a non - abelian ising anyon reduce to that of a classical random walk ( i.e. linear dispersion ) . 
in addition , ghoshal _ et al _ have recently reported some effects of using weak measurements on the walker probability distribution of discrete quantum walks , konno has proposed an it s formula for discrete - time quantum walks , endo _ et al _ have studied the ballistic behavior of quantum walks having the walker initial state spread over neighboring sites , venegas - andraca and bose have studied the behavior of quantum walks with walkers in superposition as initial condition , xue and sanders have studied the joint position distribution of two independent quantum walks augmented by stepwise partial and full coin swapping , and chiang _et al _ have proposed a general method , based on , for realizing a quantum walk operator corresponding to an arbitrary sparse classical random walk .quantum walks on graphs is now an established active area of research in quantum computation . among several scientific documents providing comprehensive introductions to quantumwalks on graphs , we find a seminal paper by aharonov et al , a rigorous mathematical analysis and description of quantum walks on different topologies and their limit distributions by konno , as well as introductory reviews on discrete and continuous quantum walks on graphs by kendon and venegas - andraca .in , aharonov _ et al _ studied several properties of quantum walks on graphs .their first finding consisted in proving a counterintuitive theorem : if we adopt the classical definition of stationary distribution ( see and references cited therein for a concise introduction on mathematical properties of markov chains ) , then quantum walks do not converge to any stationary state nor to any stationary distribution . in order to review the contributions of and other authors , let us begin by formally introducing the following elements : let be a -regular graph with ( note that graphs studied here are _ finite _ , as opposed to the unrestricted line we used in the beginning of this section ) and be the hilbert space spanned by states where . also , we define , the coin space , as an auxiliary hilbert space of dimension spanned by the basis states , and , the coin operator , as a unitary transformation on . now , we define a shift operator on such that , where is the neighbour of ( since edge labeling is a permutation then is unitary . ) finally , we define one step of the quantum walk on as . as in the study of quantumwalks on a line , if is the quantum walk initial state then a quantum walk on a graph can be defined as now , we discuss the definition and properties of limiting distributions for quantum walks on graphs .suppose we begin a quantum walk with initial state .then , after steps , the probability distribution of the graph nodes induced by eq .( [ quantum_walk_graph ] ) is given by * probability distribution on the nodes of .* let be a node of and be the coin hilbert space .then if probability distributions at time and are different , it can be proved that does not converge . however , if we compute the _ average _ of distributions over time * averaged probability distribution*. [ averaged_prob_dist ] we then obtain the following result .let , denote the eigenvectors and corresponding eigenvalues of .then , for an initial state where the sum is only on pairs such that . if all the eigenvalues of are distinct , the limiting averaged probability distribution takes a simple form .let , i.e. is the probability to measure node in the eigenstate .then it is possible to prove that , for an initial state . 
using this fact it is possible to prove the following theorem . let be a coined quantum walk on the cayley graph of an abelian group , such that all eigenvalues of are distinct . then the limiting distribution ( def .( [ averaged_prob_dist ] ) ) is uniform over the nodes of the graph , independent of the initial state .[ theorem_uniform_averaged_militing_distribution ] using theorem [ theorem_uniform_averaged_militing_distribution ] we compute the limiting distribution of a quantum walk on a cycle : let be a cycle with nodes ( see fig .( [ cycle ] ) . ) a quantum walk on acts on a total hilbert space .the limiting distribution for the coined quantum walk on the -cycle , with odd , and with the hadamard operator as coin , is uniform on the nodes , independent of the initial state .several other important results for quantum walks on a graph are delivered in . among them , we mention some results on mixing times .* average mixing time*. the mixing time of a quantum markov chain with initial state is given by [ average_mixing_time ] for the quantum walk on the -cycle , with n odd , and the hadamard operator as coin , we have so , the mixing time of a quantum walk on a cycle is .the mixing time of corresponding classical random walk on a circle is .now we focus on a general property of mixing times . for a general quantum walk on a bounded degree graph ,the mixing time is at most quadratically faster than the mixing time of the simple classical random walk on that graph .[ boundary_mixing_time ] so , according to theorem [ boundary_mixing_time ] , the speedup that can be provided by a quantum walk on a graph is not enough to exponentially outperform classical walks . consequently , other parameters of quantum walks have been investigated , among them their _ hitting time_. in , kempe offers an analysis of hitting time of discrete quantum walks on the hypercube ( due to the potential service of hitting times in the construction of quantum algorithms , we shall analyze in detail on section [ qw_based_algorithms ] . ) further studies on mixing time for discrete quantum walks on several graphs as well a convergence criterion for stationary distribution in _ non - unitary _ quantum walks are presented in . the properties of the wave function of a quantum particle walking on a circle have been studied by fjelds _et al _ in , some details of limiting distributions of quantum walks on cycles are shown by bednarska _et al _ in , liu and petulante have presented limiting distributions for quantum markov chains , the effect of using different coins on the behavior of quantum walks on an -cycle as well as in graphs of higher degree has been studied by tregenna _et al _ in , a standard deviation measure for quantum walks on circles is introduced by inui _et al _ in , and banerjee _ et al _ have studied some effects of noise in the probability distribution symmetry of quantum walks on a cycle .another graph studied in quantum walks is the hypercube , defined by * the hypercube*. the hypercube is an undirected graph with nodes , each of which is labeled by a binary string of bits .two nodes in the hypercube are connected by an edge if differ only by a single bit flip , i.e. if , where is the hamming distance between and .[ hypercube_1 ] in , moore and russell derived values for _ the two notions _ of mixing times we have studied ( defs . ( [ instantaneous_mixing_time ] ) and ( [ average_mixing_time ] ) ) for continuous and discrete quantum walks on the hypercube . 
as for the discrete quantum walk, begins by defining grover s operator as coin operator . *grover s operator*. let be an -dimensional hilbert space and be the canonical basis for and .then we define grover s operator as .[ grover_coin_operator ] [ [ section ] ] additionally , their shift operator is given by where is the basis vector of the -dimensional hypercube .so , the quantum walk on the hypercube proposed in can be written as ^t |\psi\rangle_0 \label{qw_hypercube}\ ] ] for a given initial state . using a fouriertransform approach as in , it was proved in that for the discrete quantum walk defined in eq .( [ qw_hypercube ] ) , its instantaneous mixing time ( def .( [ instantaneous_mixing_time ] ) ) is given by , i.e. , with for all odd .additionally , provides analytical expressions for eigenvalues and corresponding eigenvectors of the evolution operator defined in eq .( [ qw_hypercube ] ) which were later used in for the design of a search algorithm based on a discrete quantum walk .in addition to the articles i have already mentioned , a substantial number of scientific papers has been published over the last few years .please let me now provide a summary of more results on properties and developments on discrete quantum walks on graphs ( we leave published algorithmic applications of quantum walks for section [ qw_based_algorithms ] . ) in , mackay _ et al _ present numerical simulations of quantum walks in higher dimensions using separable and non - separable coin operators , gottlieb _ et al _ studied the convergence of coined quantum walks in , and dimcovic _ et al _ have put forward a general framework for describing discrete quantum walks in which the coin operator is substituted by an interchange operator .kempf and portugal have introduced a new definition of hitting time for quantum walks that exhibit phase and group velocities , marquezino _ et al _ have studied and computed the mixing time and limiting distribution of a discrete quantum walk on a torus - like lattice , leung _ et al _ have studied the behavior of coined quantum walks on 1- and 2-dimensional percolation graphs ( i.e. graphs in which edges or sites are randomly missing ) under two regimes : quantum tunneling employing general coin operators and the potential path redundancy present in 2-d grids , and lovett _ et al _ have presented a further numerical study on how dimensionality , tunneling and connectivity affect a discrete quantum - walk based search algorithm . in addition , _ et al _ have presented in how eigenvalue independency from momenta imply a cyclic evolution that correspondingly leads to quantum state full revivals in two - dimensional discrete quantum walks .on further studies on classical and quantum hitting times , in magniez _ et al _ :i ) have presented mathematical definitions of hitting time according to las vegas and monte carlo algorithms for finding a marked element , ii ) have introduced quantum analogues of such classical hitting times , and iii ) have proved that , for any reversible ergodic markov chain p , the corresponding quantum hitting time of the quantum analogue of p is of the same order as the square root of the classical hitting time of p. 
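The uniform limiting distribution on odd cycles stated above is easy to check numerically. The sketch below runs a Hadamard-coined walk on a 5-cycle and accumulates the time-averaged node distribution; the cycle size, the number of averaged steps and the initial state are choices made for this example, and the averaged distribution is expected to approach 1/N as the number of averaged steps grows.

```python
import numpy as np

N, T = 5, 2000                                 # odd cycle size, steps to average over
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# One-step evolution U = S (I ⊗ C) on the 2N-dimensional space, index = 2*x + c,
# with coin |0> stepping clockwise and coin |1> stepping counter-clockwise.
S = np.zeros((2 * N, 2 * N))
for x in range(N):
    S[2 * ((x + 1) % N) + 0, 2 * x + 0] = 1.0
    S[2 * ((x - 1) % N) + 1, 2 * x + 1] = 1.0
U = S @ np.kron(np.eye(N), H)

psi = np.zeros(2 * N, dtype=complex)
psi[0] = 1.0                                   # walker at node 0, coin |0>
avg = np.zeros(N)
for _ in range(T):
    psi = U @ psi
    avg += (np.abs(psi) ** 2).reshape(N, 2).sum(axis=1)
avg /= T
print("time-averaged node distribution:", np.round(avg, 3))
print("max deviation from uniform 1/N :", np.abs(avg - 1 / N).max())
```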
moreover , based on space - time generating functions and the mathematical methods introduced in , baryshnikov _ et al _ have presented a mathematically rigorous and highly elegant treatment of quantum walks on two dimensions in , being this work followed by in which bressler _ et al _ have presented examples of results shown in as well as derived asymptotic properties for 1-d quantum walk amplitudes . in addition , gudder and sorkin have presented a study of discrete quantum walks based on measure theory and smith has studied graph invariants closely related to both continuous- and discrete - time quantum walks .feldman and hillery have studied the relationship between quantum walks on graphs and scattering theory in as well as proposed a protocol for detecting graph anomalies using discrete quantum walks .also , berry and wang have analyzed , for a variety of graphs including cayley trees , fractals and husmi cactuses , the relationship betwen search success probability and the position of a marked vertex in such graphs , lpez - acevedo and gobron delivered an algebraic oriented analysis of quantum walks on cayley graphs , montanaro presented in a study on quantum walks on directed graphs , krovi and brun have studied quantum walks ( and their hitting times ) on quotient graphs as well as links between those quantum walks and the group theory properties of cayley graphs ( for an extended work on this last topic , see . )also , hoyer and meyer have presented a discrete quantum walk model for traversing a directed 1-d graph with self - loops and have found that , on this topology , the quantum walker proceeds an expected distance in constant time regardless the number of self - loops , berry and wang have presented a scheme for building discrete quantum walks upon interacting and non - interacting particles and have produced two results : a numerical study of entanglement generation in such quantum walks together with a potential application on those quantum walks for testing graph isomorphism ( in contrast to the results presented by gamble _et al _ in for continuous - time quantum walks also built upon interacting and no - interacting particles , the scheme proposed in can only detect some non - isomorphic strongly regular graphs . )resources for experimental realizations of quantum walks are costly . with this fact in mind , di franco _ et al _ have suggested a novel scheme for implementing a grover discrete quantum walks on two dimensions , consisting of using a single qubit as coin ( instead of using a four - dimensional quantum system ) and alternating the use of such coin for motion on the and axes . as stated in , a step on this walk consists substituting the grover operator for a sequence of two hadamard operators on the qubit acting as coin system ( one for the axis , the other for the axis ) , together with the movement on both and axes .moreover , di franco _et al _ have provided a proof of equivalence between the grover walk and the alternate quantum walk introduced in as well as a limit theorem and a numerical study of entanglement generation for the alternate quantum walk , and rohde _ et al _ have studied the dynamics of entanglement on discrete - time quantum walks running on bounded finite sized graphs . 
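The alternating construction described above can be sketched as follows: a single two-dimensional coin is tossed before each move, and moves along the x and y axes are interleaved. This is only an illustrative implementation (the Hadamard coin, the initial state and the step count are assumptions of this sketch, which is not claimed to reproduce the exact protocol or the Grover-walk equivalence of the cited work); it simply prints the marginal spreads along both axes.

```python
import numpy as np

def alternate_walk(steps):
    size = 2 * steps + 1
    c = steps                                       # array index of the origin
    psi = np.zeros((size, size, 2), dtype=complex)  # psi[x, y, coin]
    psi[c, c, 0] = 1.0
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    def toss(p):
        return p @ H.T                              # Hadamard on the coin index

    def shift(p, axis):
        new = np.zeros_like(p)
        if axis == 0:
            new[1:, :, 0] = p[:-1, :, 0]            # coin |0>: +x
            new[:-1, :, 1] = p[1:, :, 1]            # coin |1>: -x
        else:
            new[:, 1:, 0] = p[:, :-1, 0]            # coin |0>: +y
            new[:, :-1, 1] = p[:, 1:, 1]            # coin |1>: -y
        return new

    for _ in range(steps):
        psi = shift(toss(psi), axis=0)              # toss, then move along x
        psi = shift(toss(psi), axis=1)              # toss, then move along y
    return (np.abs(psi) ** 2).sum(axis=2)

t = 40
prob = alternate_walk(t)
xs = np.arange(-t, t + 1)
px, py = prob.sum(axis=1), prob.sum(axis=0)
print("total probability:", prob.sum())
print("std along x:", np.sqrt((xs**2 * px).sum() - ((xs * px).sum())**2))
print("std along y:", np.sqrt((xs**2 * py).sum() - ((xs * py).sum())**2))
```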
finally , kitagawa _ et al _ have shown that discrete time quantum walks can be useful for studying topological phases , attal _et al _ have proposed a formalism for modeling open quantum walk on graphs , based on completely positive maps and , in a fresh and most interesting potential application of quantum walks to engineering science , albertini and dalessandro have devised the execution of quantum walks with coins allowed to change at every time step as control systems . in particular , albertini and dalessandrohave found in that if the degree of of the graph is greater than then the quantum walk is always completely controllable .we start by defining a continuous quantum walk so that we can use it in subsection ( [ connection_discrete_continuous ] ) where we present recent advances about the mathematical bonds between discrete and continuous quantum walks , as well as in subsection [ qw_based_algorithms ] , where we explore how this kind of quantum processes is utilized in algorithm development .in addition to feynman s celebrated contribution about the simulation of quantum systems , continuous quantum walks were defined by farhi and gutmann , being the latter the basis upon which childs _ et al _ present the following formulation of a continuous classical random walk : let be a graph with then a continuous time random walk on can be described by the order infinitesimal generator matrix m given by following and , the probability of being at vertex at time is given by now , let us define a hamiltonian ( ) that closely follows eq .( [ generator_matrix ] ) let be a hamiltonian with matrix elements given by we can then employ hamiltonian as given in eq .( [ hamiltonian_ctqw_chapter5 ] ) , defined in a hilbert space with basis , for constructing the following schrdinger equation of a quantum state finally , taking eqs .( [ hamiltonian_ctqw_chapter5 ] ) and ( [ schrodinger_equation_ctqw_chapter5 ] ) the unitary operator defines a * continuous quantum walk * on graph .note that the continuous quantum walk given by eq .( [ unitary_operator_ctqw_chapter5 ] ) defines a process on continuous time and discrete space .since the publication of , there has been an increasing number of publications with relevant results of continuous quantum walks .we now provide a summary of more results on this area . in , konno has proved the weak limit theorem for continuous quantum walks presented on theorem [ teorema_konno_limite_debil ] .also , in varbanov _ et al _ present a definition of hitting time for continuous quantum walks , based on performing measurements on the walker at poisson - distributed random times ; moreover , they have proved that , depending on the measurement rate , continuous quantum walks may or may not have infinite hitting times .xu has derived transition probabilities and computed transport velocity in continuous quantum walks on ring lattices , xu and liu have studied quantum and classical transport on both finite and infinite versions of erds - rnyi networks while agliari _ et al _ , motivated by recent advances on quantum transport phenomena on photosynthesis , have studied trapping processes in rings and shown that carrying trap configuration leads to changes in quantal mean survival probability . 
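To make the continuous-time definition above concrete, the sketch below builds the Hamiltonian of an n-cycle using one common convention (the hopping rate gamma times the graph Laplacian, matching the generator-matrix form above), evolves the walker by diagonalizing H, and prints the node probabilities at a few times; the cycle size, the rate gamma and the chosen times are assumptions of this example.

```python
import numpy as np

def ctqw_cycle(n, gamma, times, start=0):
    """Continuous-time quantum walk on an n-cycle: U(t) = exp(-i H t) with
    H = gamma * (degree matrix - adjacency matrix), i.e. gamma times the Laplacian."""
    A = np.zeros((n, n))
    for x in range(n):
        A[x, (x + 1) % n] = A[x, (x - 1) % n] = 1.0
    H = gamma * (np.diag(A.sum(axis=1)) - A)
    evals, evecs = np.linalg.eigh(H)               # H is real symmetric
    psi0 = np.zeros(n, dtype=complex)
    psi0[start] = 1.0
    c = evecs.conj().T @ psi0                      # coefficients in the eigenbasis
    out = []
    for t in times:
        psi_t = evecs @ (np.exp(-1j * evals * t) * c)
        out.append(np.abs(psi_t) ** 2)
    return out

times = (0.0, 2.0, 10.0)
for t, p in zip(times, ctqw_cycle(8, gamma=1.0, times=times)):
    print(f"t={t:5.1f}  node probabilities:", np.round(p, 3), " sum =", round(p.sum(), 6))
```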
also , agliari _et al _ have studied the average displacement of quantum walker on gasket , cayley tree and square torus graphs , agliari has studied coherent transport models with traps on erds - rnyi graphs , tsomokos has investigated the properties of continuous quantum walks on complex networks with community structure , and salimi and jafarizadeh have studied both classical and continuous quantum walks on several cayle graphs and spidernet graphs .a review on models for coherent transport on complex networks has been recently published by o. mken and a. blumen in .furthermore , kargin has calculated the limit of average probability distribution for nearest - neighbor walks on and infinite homogeneous trees , rosmanis has introduced quantum snake walks ( i.e. continuous quantum walks with fixed - length paths ) on graphs , godsil and guo have analyzed the properties of transition matrix of continuous quantum walks on regular graphs , and kieferov and nagaj have analyzed the evolution of continuous quantum walks on necklaces . mixing and hitting times as well as the structure of probability distributions and transitions probabilities have been analyzed in this field .analytical expressions of transition probabilities on star graphs have been presented by xu in and godsil has proposed some properties of average mixing of continuous quantum walks , while salimi has produced a version of the central limit theorem for continuous quantum walks also on star graphs , inui _ et al _ have proposed both instantaneous uniform mixing property and temporal standard deviation for continuous - time quantum random walks on circles , best _ et al _ have studied instantaneous and uniform mixing of continuous quantum walks on generalized hypercubes , drezgich _ et al _ have characterized the mixing time of continuous quantum walks on the hypercube under a markovian decoherence model , salimi and radgohar have also analyzed effects of decoherence on mixing time in cycles , and anishchenko _ et al _ have studied how highly degenerate eigenvalue spectra impact the quantum walk spreading on a star graph . motivated by the power - law ditribution exhibited by real world networks showing scale - free characteristics , ide and konno have studied the evolution of continuous quantum walks on the threshold network model , salimi and sorouri have introduced a model of continuous quantum walks with non - hermitian hamiltonians , and bachman _ et al _ have studied how perfect state transfer can be achieved on quotient graphs . finally , we report the works of konno on continuous time quantum walks on ultrametric spaces and continuous quantum walks on trees in quantum probability theory , de falco _ et al _ on speed and entropy of continuous quantum walks , mlken _ et al _ on quantum transport on small - world networks , and jafarizadeh _ et al _ on studying continuous time quantum walks by using the krylov subspace - lanczos algorithm .randomness is an inherent component of every single step of a classical random walk . in other words, there is no way to predict step of a classical random walk , no matter how much information we have about previous steps .we can only tell the probability associated to each possible step .on the other hand , if we carefully analyze quantum evolution in discrete ( unitary operator ) and continuous ( schrdinger equation ) versions , we shall convince ourselves of the fact that quantum evolution is deterministic , i.e. 
for each computational step denoted by we can always tell the exact description of step , as .so , what is random about a quantum walk ?why are quantum walks candidates for developing quantum counterparts of stochastic algorithms ?the answer is : randomness comes as a result of either decoherence or measurement processes on either quantum walker(s ) and/or quantum coin(s ) .so , decoherence and quantum measurement allow us to introduce randomness into a quantum walk - based algorithm .moreover , we are not restricted to introducing chance only at the end of the quantum algorithm execution as we can also exploit several measurement strategies in order to manipulate quantum systems and produce probability distributions suitable for their use in advantageous algorithms ; for example , see the top hat probability distribution , a quasi - uniform distribution created by running a discrete quantum walk and performing measurements on its constituent elements ( or , alternatively , allowing such constituent particles to have some interaction with the environment . )the mathematical models of discrete and continuous quantum walks studied in the previous sections present a serious problem : it is not clear how to transform discrete quantum walks into continuous quantum walks and vice versa .this is an important issue for two reasons : * 1 ) * in the classical case , discrete and continuous random walks are connected via a limit process , and * 2 ) * it is not natural / elegant to have two different kinds of quantum diffusion , one of them with an extra particle ( the quantum coin ) with no clear connection between them . 1 . *strauch s contribution * + in , f.w .strauch presents a connection between discrete and continuous quantum walks .he starts by using a simplification of the continuous quantum walk defined by eq .( [ schrodinger_equation_ctqw_chapter5 ] ) , namely + + which in is rewritten as + + where is a complex amplitude at the continuous time and the discrete lattice position .+ then , uses results from and to build a discrete quantum walk represented by the following unitary mapping + + + where and are complex amplitudes at the discrete time and discrete lattice position .+ strauch s result focuses on building a unitary transformation that allows us to transform eqs.([discrete_strauch_01 ] ) and ( [ discrete_strauch_02 ] ) into eq .( [ continuous_strauch ] ) .there are several important conclusions from the developments shown in : + 1 .it is indeed possible to transform a discrete quantum walk into a continuous one by means of a limit process ( although this is not a straightforward derivation . )strauch s derivation does not use any coin degree .thus agrees , from an new perspective , with patel _ et al _ with respect to the irrelevance of the coin degree of freedom in order to obtain the statistical enhancements ( that discrete quantum walks show .child s contribution * + in , childs presents the following mathematical framework for simulating a continuous quantum walk as a limit ( ) of discrete quantum walks ( for the sake of clarity and readability of the original paper , we closely follow the notation used in ) : 1 .let be a general hermitian matrix .we now define a set of quantum states as + where denotes the elementwise absolute value of in an orthonormal basis of 2 .define the isometry mapping to 3 .enlarge the hilbert space by building a new set of quantum states from eq .( [ jstates ] ) to for some $ ] and as defined in eq .( 25 ) of 4 . 
from eq .( [ isometry ] ) , build a modified isometry 5 .now , given an initial state apply the modified isometry given in eq .( [ modified_isometry ] ) and the operation , where is the swap operator . 6 .apply steps of the discrete quantum walk and , finally , 7 .project onto the basis of states .+ in addition to this protocol , childs also presents in a notion of query complexity for continuous - time quantum walk algorithms as well as a continuous - time quantum walk algorithm for solving the distinctness problem , a problem that was originally solved using a discrete quantum walk - based algorithm by ambainis .+ 3 . as a third contribution to state and clarify the relationships between different models of quantum walks ,there are two formulations for _ discrete _ quantum walks : coined and scattering . in , andrade and da luz present a general framework for unitary equivalence of both discrete quantum walk models .the results presented so far in this review show that superposition and , consequently , interference play an important role in the structure and properties of discrete quantum walks . however , interference is also a characteristic of classical physical systems , like electromagnetic waves .thus , it makes sense to scrutinize whether the statistical and computational properties of quantum walks are really due to their quantum nature or not .arguments in favor of the plausibility of using classical physics for building experiments which replicate some interference and statistical properties of quantum walks on a line are given in , where it was shown that it is possible to develop implementations of a quantum walk on a line purely described by classical physics ( wave interference of electromagnetic fields ) and still be able to reproduce the variance enhancement that characterizes a discrete quantum walk .for example , the implementation proposed in utilizes the frequency of a light field as walker and the spatial path or the polarization state of the same light field as the coin .arguments in favor of the quantum mechanical nature of quantum walks have been provided by , among others , kendon and sanders who showed it would still be necessary to have a quantum mechanical description of such an implementation in order to account for two properties of a quantum walk with one walker : i ) the indivisibility of the quantum walker , and ii ) complementarity , which in quantum computation jargon may be stated as follows : _ the trade - off between interference and information about the path followed by the walker ( knowing the path followed by a quantum particle decreases the sharpness of the interference pattern . ) _ furthermore , romanelli _ et al _ showed in that the evolution equation of a quantum walk on a line can be separated into two parts : markovian and interference terms , and that the quadratic increase in the variance of the quantum walker is a consequence of quantum evolution .thus it seems that if we are only interested in some statistical properties of one - walker quantum walks on a line , like its variance enhancement with respect to classical random walks , we could do with either classical or quantum experimental setups .however , the quantum mechanical nature of walkers and/or coins play an important role in the following cases : 1 . 
from a purely physical point of view , if one is interested in using quantum walks for testing the quantumness of a quantum computer realization , complementarity would be a very helpful resource as it is a property of quantum mechanical systems that can not be exactly reproduced in a classical experiment . a similar argument would apply in the case of using complementarity as a computational resource . 2 . including more walkers ( e.g. ) and/or coins ( e.g. ) opens up the possibility of detecting , quantifying and harnessing quantum - mechanical properties for information processing purposes . in particular , quantum entanglement has been incorporated into quantum walks research either as a result of performing a quantum walk or as a resource to build new kinds of quantum walks . since entanglement is a key component in quantum computation , it is worth keeping in mind that quantum walks can be used either as entanglement generators or as computational processes taking advantage of this quantum mechanical property . a brief summary of results on quantum walks and entanglement is delivered in subsection [ entanglementdiscretewalks ] . 3 . genuine quantum computers will be an excellent ( and most likely , indispensable ) tool to execute exact and efficient simulations of quantum systems ( e.g. ) . carneiro _ et al _ have numerically investigated the variation in entanglement between coin(s ) and walker on the unrestricted line , trees , and cycles , conjecturing that for all coin initial states of a hadamard walk , the entanglement has 0.872 as its limiting value . in , _ et al _ have analytically proved this last result . in fact , studying the asymptotic behavior of entanglement in various settings is a fruitful research topic : in , abal _ et al _ have studied the long - term behavior of entanglement for two walkers using non - local coin operators , venegas - andraca _ et al _ numerically showed asymptotic properties ( particularly the three peak localization phenomenon ) of quantum walks with entangled coins that were later analytically proved by liu and petulante ( the three peak localization phenomenon reflects the degeneracy of some eigenvalue of the quantum walk evolution operator ) . furthermore , liu has derived analytical expressions for position limit distributions of quantum walks with generalized entangled coins , annabestani _ et al _ gave an exact characterization of asymptotic entanglement in , and ide _ et al _ have produced analytical expressions for limit distributions of shannon and von neumann entropies on a one - dimensional quantum walk . also , omar _ et al _ have produced several position probability distributions of quantum walks with entangled walkers ( fermions and bosons ) , endrejat and büttner have presented a multi - coin scheme in order to analyze the effect of entanglement in the initial coin state , pathak and agarwal have argued that entanglement generation in discrete - time quantum walks is a physical resource that can not be exactly reproduced by classical systems , goyal and chandrashekar have numerically studied spatial entanglement in -particle quantum walks using the meyer - wallach multipartite entanglement measure , _ et al _ have investigated non - classical effects ( directional correlations ) in quantum walks with two interacting walkers , ampadu has studied directional correlations among interacting particles in a quantum walk on a line , and peruzzo _ et al _ have provided experimental demonstrations of quantum correlations that violate a
classical limit by standard deviations . furthermore , chandrashekar has introduced the idea of generating entanglement between two spatially - separated systems using the entanglement generated while performing a discrete quantum walk as a resource , allés _ et al _ have introduced a shift operator for discrete quantum walks with two walkers which provides conditions for ( not highly probable ) maximal entanglement generation , salimi and yosefjani have studied the asymptotic behavior of coin - position entanglement under a time - dependent coin regime , and ampadu has proposed limit theorems for the von neumann and shannon entropies of discrete quantum walks on . finally , maloyer and kendon have numerically calculated the impact of decoherence on the entanglement between walker and coin for quantum walks on a line and on a cycle , chandrashekar has proposed a modified discrete - time quantum walk in which the coin toss is no longer needed , ampadu has analyzed the impact of decoherence on the quantification of mutual information in a square lattice , rohde _ et al _ have studied the dynamical behavior of entanglement in quantum walks running on bounded linear graphs with reflecting boundaries , together with a scheme for realizing their proposal in a linear optics setting , and romanelli has defined a global chirality probability distribution ( gcd ) independent of the walker 's position and has proved that the gcd converges to a stationary solution . in , roldán _ et al _ have proposed an experimental set - up based on classical optical devices to implement a discrete quantum walk . this is a remarkable result that provides grounds , together with , to reflect on what exactly is quantum when working on the physical and computational properties of quantum walks ( more on this in subsection [ quantumness ] ) . moreover , rai _ et al _ study the quantum walk of nonclassical light in an array of coupled waveguides , schreiber _ et al _ present a realization of a 5-step quantum walk on passive optical elements , and zhang _ et al _ have put forward a scheme for implementing quantum walks on the spin - orbital angular momentum space of photons . also , rohde _ et al _ have introduced a formal framework for distinguishable and indistinguishable multi - walker quantum walks on several lattices , together with a proposal for implementing such a framework in quantum optical settings , solntsev _ et al _ have analyzed links between parametric down - conversion and quantum walk implementations , broome _ et al _ have implemented a discrete quantum walk using single photons in space , witthaut has explored how the dynamics of spinor atoms in optical lattices can be used for implementing a quantum walker , van hoogdalem and blaauboer introduced the idea of implementing a quantum walk step operator in a one - dimensional chain of quantum dots , and souto ribeiro _ et al _ have presented an implementation of a quantum walk step at the single - photon level produced by parametric down - conversion . skyrmions are solitons in nonlinear field theory ; as the magnetic field increases , the skyrmion radius decreases and suddenly shrinks to zero by emitting spin waves . this last phenomenon is known as the skyrmion burst . in , ezawa has proposed to use the remnants of a skyrmion burst to implement several continuous - time quantum walkers . in , owens _ et al _ present the architecture of an optical chip with an array of waveguides in which they have implemented a two - photon continuous quantum walk .
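The coin-walker entanglement figures quoted above can be reproduced numerically with a few lines of code. The sketch below is our own illustration (the function names and the particular initial coin state are arbitrary choices, not taken from the cited papers): it evolves a Hadamard walk on the line with the walker initially localized at the origin and computes the von Neumann entropy of the reduced coin state; for such localized initial conditions the entropy should approach the limiting value of roughly 0.872 mentioned above.

```python
import numpy as np

def hadamard_walk(steps):
    """Coined quantum walk on the line; returns the joint coin-position amplitudes."""
    size = 2 * steps + 1
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard coin
    psi = np.zeros((2, size), dtype=complex)
    psi[:, steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]     # walker at origin, symmetric coin
    for _ in range(steps):
        psi = np.tensordot(H, psi, axes=(1, 0))           # coin toss
        psi = np.stack([np.roll(psi[0], 1), np.roll(psi[1], -1)])  # conditional shift
    return psi

def coin_entropy(psi):
    """Von Neumann entropy (in bits) of the reduced coin density matrix."""
    rho_coin = psi @ psi.conj().T                         # trace out the position register
    w = np.linalg.eigvalsh(rho_coin)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

print(coin_entropy(hadamard_walk(200)))   # oscillates around ~0.87 for large step numbers
```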
in , _ et al _ show that the landau - zener transitions induced in electron systems by strong electric fields can be mapped to a quantum walk on a lattice , hamilton _ et al _ have proposed an experimental setup for a four - dimensional quantum walk using the polarization and orbital angular momentum of a photon , and kálmán _ et al _ have presented a scheme for implementing a coined quantum walk using the ballistic transport of an electron through a series of quantum rings . indeed , the abundance of experimental proposals and realizations of quantum walks based on optical devices may be a glimpse of future implementations of universal quantum computers . based on the results presented by xue and sanders in about the behavior of quantum walks on a circle in phase space , xue _ et al _ have suggested an implementation of quantum walks on circles using superconducting circuit quantum electrodynamics , manouchehri and wang proposed implementations of quantum walks on bose - einstein condensates and quantum dots , xue _ et al _ suggest that a multi - step quantum walk using generalized hadamard coins may be realized using an ion trap , while schmitz _ et al _ have indeed implemented a proof of principle of a quantum walk in a linear ion trap and matjeschk _ et al _ have presented an experimental proposal for quantum walks in trapped ions . _ et al _ have implemented a quantum walk on the line with single neutral atoms by delocalizing them over the sites of a one - dimensional spin - dependent optical lattice , lavi _ et al _ have proposed a quantum walk implementation using non - ideal optical multiports , and zähringer _ et al _ have experimentally demonstrated a 23-step quantum walk on a line in phase space using one and two trapped ions . lahini _ et al _ have studied the dynamics of a two - boson quantum walk on a lattice , sansoni _ et al _ have experimentally studied the effect of particle statistics in two - particle coined quantum walks , mayer _ et al _ have studied the correlations that can be found in a quantum walk built upon interacting and non - interacting particles , and peruzzo _ et al _ have observed quantum correlations on photons generated using parametric down - conversion techniques and have experimentally found that such correlations critically depend on the actual quantum walk input state . finally , ahlbrecht _ et al _ have investigated how to use a two - atom system for executing a quantum walk , regensburger _ et al _ have experimentally shown how a coupled fiber system could be used to implement a quantum walk , and matsuoka _ et al _ have proposed a scheme to implement a continuous - time quantum walk on a diatomic molecule . let us start with a catchy sentence : efficient search is a holy grail in computer science . indeed , in addition to search being a core topic in undergraduate and graduate computer science education , many open problems and challenges in both theoretical and applied computer science can be formulated as search problems ( e.g. optimization problems , typically within the sphere of np - hard problems , can be seen as problems of detecting and/or identifying objects whose solutions call for search algorithms .
) thus , a great deal of effort and resources has been devoted to building both classical and quantum algorithms for solving a variety of search problems . in particular , due to the central role played by classical random walks in the development of successful stochastic algorithms , there has been a huge interest in understanding the computational properties of quantum walks over the last few years . moreover , the development of successful quantum - walk based algorithms and the recent proofs of computational universality of quantum walks have boosted this area . a general strategy for building an algorithm based on quantum walks includes choosing : a ) the unitary operators for discrete quantum walks , or the hamiltonians for continuous quantum walks , that will be employed to determine the time evolution of the quantum hardware , b ) the measurement operators that will be employed to find out the position of the walker and , possibly , c ) decoherence effects if required for controlling the quantum walk algorithmic effects ( e.g. manipulating probability distributions ) or mimicking natural phenomena ( e.g. ) . the quantum programmer must bear in mind that the choice of evolution and measurement operators , as well as initial quantum states and ( possibly ) decoherence models , will determine the shape and other properties of the resulting probability distribution for the quantum walker(s ) . moreover , a computer scientist interested in algorithms based on quantum walks must keep in mind that , due to the no - cloning theorem , making copies of arbitrary quantum states is not possible in general , thus copying variable content is not allowed in principle . indeed , it is possible to use cloning machines for imperfect quantum state copying , but this would frequently translate into computational and estimation errors . since any non - reversible gate can be converted into a reversible gate , errors due to imperfect quantum state cloning are unnecessary and consequently must be avoided . employing classical computer simulators of quantum walks can be a fruitful exercise in order to figure out the operators and initial states required for algorithmic applications of quantum walks ( more on classical simulation of quantum algorithms in subsection [ classical_simulation ] ) . quantum algorithms based on either discrete or continuous quantum walks are built upon detailed and complex mathematical structures , and it is not possible to cover all details in a single review paper . therefore , we shall devote this section to reviewing the fundamental links between quantum walks and computer science ( mainly algorithms ) , and we strongly recommend the reader to consult both the references provided in this section and the introductions and reviews of quantum walk - based algorithms that can be found in . let us start by defining an abstract object frequently used in quantum algorithms : an oracle . * oracle * . an oracle is an abstract machine used to study decision problems . it can be thought of as a black box which is able to decide certain decision problems in a single step , i.e.
an oracle has the ability to _ recognize _ solutions to certain problems .[ oracle_computation ] an oracle is a mathematical device built to simplify the actual process of algorithm development .unfortunately , the name oracle does not help much as it seems to invoke metaphysical entities and powers .however , the nature of an oracle is just that of any other function or procedure : it is defined in terms of what mathematical operations are performed both in terms of computability and complexity .oracles are widely used in classical algorithm design . in the context of quantum computation, we also use oracles to _ recognize _ solutions for the search problem .additionally , we assume that if an oracle recognizes a solution then that oracle is also capable of computing a function with as argument .we are interested in searching for elements in a space of elements .to do so , we use an index , where , to enumerate those elements .we also suppose we have a function such that if and only if is one of the elements we are looking for . otherwise , .an oracle can be written as a unitary operator defined by + where is the index register , is addition modulo ( the xor operation in computer science parlance ) and the oracle qubit is a single qubit which is flipped if and is left unchanged otherwise .as shown in , we can check whether is a solution to our search problem by preparing , applying the oracle , and checking whether the oracle qubit has been flipped to .algorithm , as well as several algorithms we shall review in this section , make use of an oracle .a comparison of quantum oracles can be found in .we now proceed to review quantum algorithms based on discrete quantum walks .let us introduce the following problem : * searching in an unordered list*. suppose we have an unordered list of items labeled .we want to find one of those elements , say .[ search_problem ] any classical algorithm would take steps at least to solve the problem given in def .( [ search_problem ] ) .however , one of the jewels of quantum computation , grover s search algorithm , would do much better . by using an oracle and a technique called * amplitude amplification *, the search algorithm proposed in would only take time steps to solve the same search problem .in addition to its intrinsic value for outperforming classical algorithms , grover s algorithm has relevant applications in computer science , including solutions to the 3-sat problem . in , shenvi _et al _ proposed an algorithm based on a discrete quantum walk to solve the search problem given in def .( [ search_problem ] ) .for the sake of completeness and in order to present the results contained in , let us remember the definition of a hypercube ( def .[ hypercube_1 ] ) . *the hypercube*. the hypercube is an undirected graph with nodes , each of which is labeled by a binary string of bits .two nodes in the hypercube are connected by an edge if differ only by a single bit flip , i.e. if , where is the hamming distance between and . as an example , the 3-dimensional hypercube is shown in fig .[ hypercube_definition ] an example of a 3-dimensional hypercube can be seen in fig .( [ hypercube_3d ] ) .since each node of the hypercube has degree and there are distinct nodes then the hilbert space upon which the discrete quantum walk is defined is , and each state is described by a bit string and a direction .we now define the following coin and shift operators where is the equal superposition over all directions , i.e. 
, and where is the basis vector of the hypercube . using the eigenvalues and eigenvectors of the evolution operator of the quantum walk on the hypercube in order to build a slightly modified coin operator ( which works within the algorithm structure as an oracle ( def.([oracle_computation ] ) ) ) and an evolution operator , and by collapsing the hypercube into a line , the quantum walk designed by evolution operator used to search for element .it is claimed in that , after applying a number of times , the outcome of their algorithm is with probability .a summary of similarities and differences between this quantum walk algorithm and grover s algorithm can be found in the last pages of , gbris _ et al _ studied the impact of noise on the algorithmic performance given in using a scattering quantum walk , lovett _et al _ have numerically studied the behavior of the algorithm presented in on different two - dimensional lattices ( e.g. honeycomb lattice ) , and potoek _ et al _ have introduced strategies for improving both success probability and query complexity computed in .now , let us think of the following problem : we have a hypercube as defined in def .( [ hypercube_definition ] ) and we are interested in measuring the time ( or , equivalently , the number of steps ) an algorithm would take to go from node to node , i.e. its _ hitting time _( [ hitting_time_classical ] ) ) .since defining the notion of hitting time for a quantum walk is not straightforward , kempe has proposed the following definitions * one - shot hitting time*. a quantum walk has a one - shot hitting time if the probability to measure state at time starting in is larger than , i.e. . [ one_shot_ht ] * - stopped walk*. a -stopped walk from starting in state is the process defined as the iteration of a measurement with the two projectors and . if is measured , an application of follows . if is measured the process is stopped . *concurrent hitting time*. a quantum walk has a concurrent hitting time if the -stopped walk from and initial state has a probability of stopping at a time .[ concurrent_ht ] in both cases ( defs .( [ one_shot_ht ] ) and ( [ concurrent_ht ] ) ) , it has been shown by kempe that the hitting time from one corner to its opposite is polynomial . however , although it was thought that this polynomial hitting time would imply an exponential speedup over corresponding classical algorithms , that is not the case as it is possible to build a polynomial time classical algorithm to traverse the hypercube from one corner to its opposite , as shown by childs _et al _ in .further studies on hitting times of quantum walks on graphs have been produced by kok and buek as well as krovi and brun .a natural step further along employing discrete quantum walks for solving search problems is to use quantum computation techniques to find items stored in spaces of 2 or more dimensions . in , benioff proposed the use of grover s algorithm for searching items in a grid of elements , and showed that a direct application of such algorithm would take times steps to find one item , i.e. there would be no more quantum speedup . 
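Since the quantum-walk search algorithms above are routinely compared against Grover's algorithm, a minimal state-vector sketch of the Grover iteration may help fix ideas. It is not the quantum-walk search itself; the function name, the register size and the marked index below are illustrative choices of ours.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Toy state-vector simulation of Grover's search over N = 2**n_qubits items."""
    N = 2 ** n_qubits
    psi = np.full(N, 1 / np.sqrt(N))            # uniform superposition
    oracle = np.ones(N)
    oracle[marked] = -1                         # oracle: phase-flip the marked item
    for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):
        psi = oracle * psi                      # oracle call
        psi = 2 * psi.mean() - psi              # inversion about the mean (diffusion)
    return np.abs(psi) ** 2                     # outcome probabilities

probs = grover_search(10, marked=3)
print(np.argmax(probs), probs[3])               # marked index found with probability close to 1
```

After roughly (pi/4) * sqrt(N) iterations the marked item is measured with probability close to one; this quadratic speed-up is what the spatial-search variants discussed above try to retain on structured graphs such as grids and hypercubes.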
later on , in aaronson and ambainis used grover s algorithm and multilevel recursion to build algorithms capable of searching in a 2-dimensional grid in steps and a 3-dimensional grid in steps , and ambainis _ et al _ proposed algorithms based on discrete quantum walks ( evolution operators used in are those perturbed operators defined in ) that would take steps to search in a 2-dimensional grid and would reach an optimal performance of for 3 and higher dimensional grids ( an important contribution of was to show that the performance of search algorithms based on quantum walks is sensitive to the selection of coin operators , i.e. the performance of a search algorithm may be optimal or not depending on the coin operator choice ) , aaronson and ambainis have shown how to build algorithms based on discrete quantum walks to search on a 2-dimensional grid using a total number of steps , and a 3-dimensional grid with number of steps , tulsi has presented a modified version of ambainis _ et al _ s quantum walk search algorithm , and ambainis _ et al _ have proved that executing the algorithm presented in times would leave the walker within a neighbourhood with probability , thus classical algorithm for local search could be used instead of performing the amplitude amplification technique designed in .numerical studies on how dimensionality , tunneling and connectivity affect a discrete quantum - walk based search algorithm are presented by lovett _et al _ in , and more numerical studies on potential improvements on algorithmic complexity on hypercubic lattices using the dirac operator have been presented by patel _et al _ in .finally , childs and goldstone developed a continuous quantum walk algorithm to solve the search problem in a grid and discovered algorithms that would have an optimal performance of in grids of 5 or more dimensions . a variant of def . ( [ search_problem ] ) , the * element distinctness problem * , was analyzed by ambainis in : * element distinctness problem * . given a list of strings over separated by # ,determine if all the strings are different .[ element_distinctness ] a quantum algorithm for solving the element distinctness problem is given in .this algorithm combines the quantum search of spatial regions proposed in with a quantum walk .the first part of transforms the string list from def .( [ element_distinctness ] ) into a graph with marked and non - marked vertices ; in this process , uses an oracle ( def .( [ oracle_computation ] ) . ) the second part of the algorithm employs a discrete quantum walk to search graph . as a result , the algorithm solves the distinctness problem in a total number of steps and steps for identical strings , among items . upon the work presented in , magniez _ et al _proposed in a new quantum algorithm for solving the _ triangle problem _ , which can be stated as let be a graph .any complete subgraph of on three vertices is called a triangle .the triangle problem ( in oracle version ) can be posed as follows : + oracle input : the adjacency matrix of a graph on nodes .+ oracle output : a triangle if there is any , otherwise reject . 
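For comparison with the quantum-walk approach, a classical baseline for the (non-oracle) triangle problem is easy to state: an undirected simple graph contains a triangle exactly when the trace of the cube of its adjacency matrix is positive. The snippet below is such a baseline sketch of ours, not the quantum algorithm discussed above.

```python
import numpy as np

def has_triangle(adj):
    """Classical check: each triangle contributes six closed walks of length 3,
    so an undirected simple graph has a triangle iff trace(A^3) > 0."""
    a = np.asarray(adj)
    return bool(np.trace(a @ a @ a) > 0)

# 4-cycle (triangle-free) versus the same graph with one chord added
c4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
print(has_triangle(c4))                          # False
chord = c4.copy(); chord[0, 2] = chord[2, 0] = 1
print(has_triangle(chord))                       # True
```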
additionally , another quantum algorithm , based on grover 's quantum search algorithm , is presented in for solving the same triangle problem . one more application of has been proposed by childs and eisenberg in , where the quantum algorithm developed for the distinctness problem ( def . ( [ element_distinctness ] ) ) is employed to solve the l - subset finding ( oracle ) problem , which can be stated as follows . * the l - subset finding problem ( oracle version ) * . + oracle input : 1 ) a black box function , where are finite sets and is the problem size . 2 ) property . + oracle output : some subset such that , or reject if none exists . an alternative , refreshing and highly influential approach to discrete quantum walks has been presented by m. szegedy in , where a new definition of a discrete quantum walk is presented via the quantization of a stochastic matrix , together with an alternative definition of hitting time for discrete quantum walks . begins by defining the search problem as follows : * search problem via stochastic processes * . given a markov chain with transition probability matrix on a discrete state space , with , a given probability distribution on , and a subset of marked elements , compute an estimate for the number of iterations required to find an element of , assuming that the markov chain is started from a u - distributed element of . [ search_via_stochastic ] continues by defining the following concepts : is the matrix obtained from by deleting its rows and columns indexed from . since there is no natural ( i.e. straightforward ) method for quantizing a discrete markov chain , proposes a quantization method of which uses bipartite random walks . let and be two finite sets and and be matrices describing probabilistic maps and , respectively . if we have a single probabilistic function from to , i.e. a markov chain , in order to create a bipartite walk we can set for every ( that is , we set ) . the quantization method for proposed by szegedy is as follows . we start by creating two operators on the hilbert space with basis states . let us define the states for every , . finally , let us define as the matrix composed of column vectors ( ) , and as the matrix composed of column vectors ( ) . then , defines the unitary operator , the quantization of the bipartite walk , as . proceeds to build definitions and theorems for a new quantum hitting time and upper bounds for finding a marked element as in def . ( [ search_via_stochastic ] ) . a relevant result presented in this paper is : for every ergodic markov chain whose transition probability matrix is equal to its transpose , the quantum walk hitting time as defined in is at most the square root of the classical one . furthermore , a remarkable feature of is a proposal for a new link between classical and quantum walks , namely the development of a quantum walk evolution operator via a classical stochastic matrix .
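A minimal numerical sketch of the quantization just described may be useful. The construction below is our own illustrative code (the symmetric three-state chain is arbitrary): it builds the two families of states from a row-stochastic matrix, forms the two reflections around their spans, and multiplies them to obtain one step of a Szegedy-type quantized walk, which indeed comes out unitary.

```python
import numpy as np

def szegedy_walk(P):
    """One step of a Szegedy-type quantized walk for a row-stochastic matrix P:
    the product of reflections around span{|j>|p_j>} and span{|p_k>|k>}."""
    n = P.shape[0]
    A = np.zeros((n * n, n))
    B = np.zeros((n * n, n))
    for j in range(n):
        for k in range(n):
            A[j * n + k, j] = np.sqrt(P[j, k])   # |psi_j> = |j> (sum_k sqrt(P_jk) |k>)
            B[j * n + k, k] = np.sqrt(P[k, j])   # |phi_k> = (sum_j sqrt(P_kj) |j>) |k>
    ref_A = 2 * A @ A.T - np.eye(n * n)          # reflection around the column space of A
    ref_B = 2 * B @ B.T - np.eye(n * n)          # reflection around the column space of B
    return ref_B @ ref_A

P = np.full((3, 3), 1 / 3)                       # uniform (symmetric) chain on 3 states
W = szegedy_walk(P)
print(np.allclose(W @ W.T, np.eye(9)))           # True: the walk operator is unitary
```

Evolution operators of exactly this product-of-reflections form underlie the detection and finding algorithms discussed next.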
inspired by the quantum walk model presented in , ide _ et al _ have investigated the time - averaged distribution of discrete quantum walks and segawa has studied the relation between recurrence properties of random walks and localization phenomena in quantum walks . also , chiang , and chiang and gomez , have proposed a model of noise based on system precision limitations and noisy environments in order to introduce a model of evolution perturbation for quantum walks and , based on the results presented in and weyl 's perturbation theorem on classical matrices , chiang and gomez have studied how perturbation affects the quantum hitting time as originally defined in . building upon the quantum walk definition given in , magniez _ et al _ proposed a quantum walk - based algorithm for solving the following problem : let be the eigenvalue gap of a reversible , ergodic markov chain , and let be a lower bound on the probability that an element chosen from the stationary distribution of is marked whenever is non - empty . then , there is a quantum algorithm that with high probability determines if is empty or finds an element of , with cost of order , where is the computational cost of constructing superposition states , and are the costs of constructing unitary transformations as defined on page 2 of . furthermore , in magniez _ et al _ have presented an algorithm for detecting marked elements that improves the complexity of the detection algorithm presented in , and ide _ et al _ have derived a time - averaged distribution for a quantum walk following . in addition , krovi _ et al _ have constructed quantum walk - based algorithms that both detect and find marked vertices on a graph , buhrman and špalek have presented a bounded error quantum algorithm with complexity for verifying whether the product of two matrices of order equals a third ( i.e. the matrix multiplication verification problem ) , and magniez and nayak have presented a quantum algorithm for testing the commutativity of a black - box group , all three algorithms based on the formalisms introduced by szegedy . a novel application of discrete quantum walks is shown by somma _ et al _ in , where a quantum algorithm for combinatorial optimization problems is proposed : this quantum algorithm combines techniques from discrete quantum walks , quantum phase estimation , and the quantum zeno effect , and can be seen as a quantum counterpart of classical simulated annealing based on markov chains ( also , the zeno effect in quantum - walk dynamics under the influence of periodic measurements in position space is studied by chandrashekar in ) , and hillery _ et al _ have presented in a discrete quantum walk algorithm for detecting a marked edge or a marked complete subgraph within a graph . finally , paparo and martin - delgado present a novel and refreshing proposal developed upon the notion of szegedy 's quantum walk : a quantum - mechanical version of google 's pagerank algorithm . the operation and mathematical formulation of discrete quantum walks fit very well into the mindset of a computer scientist , as time evolves in discrete steps ( as in a typical classical algorithm ) and the model employs walkers and coins , usual elements of stochastic processes when employed in algorithm development . however , the most successful applications of quantum walks are found within the realm of continuous quantum walks . given the seminal result derived by f.
strauch in about the connection between discrete and continuous quantum walks , we now know that results from continuous quantum walks should be translatable , at least in principle , to discrete quantum walks and vice versa . nonetheless , the mathematical structure of continuous quantum walks and the physical meaning of the corresponding equations provide an accurate picture of several physical systems upon which we may implement quantum walks and quantum computers . although many physical implementations in this field have been based on the discrete quantum walk model ( please see subsection [ experimental_realizations ] ) , given the additional stimulus provided by , the computational universality of quantum walks , and the recent connections found between quantum walks and adiabatic quantum computation ( another model of continuous quantum computation ) , it is reasonable to expect new implementations based on continuous quantum walks . readers interested in acquiring a deeper understanding of the physics and mathematics of continuous quantum systems ( particularly continuous quantum walks ) may find the following references useful : . in , e. farhi and s. gutmann introduced an algorithm based on a continuous quantum walk that solves the following problem : given a graph consisting of two balanced binary trees of height with the leaves of the left tree identified with the leaves of the right tree according to the way shown in fig . ( [ trees](a ) ) , and with two marked nodes _ entrance _ and _ exit _ , find an algorithm to go from _ entrance _ to _ exit _ . it was shown in that it is possible to build a quantum walk that traverses graph from _ entrance _ to _ exit _ exponentially faster than its corresponding classical random walk . in other words , the _ hitting time _ of the continuous quantum walk proposed in is of polynomial order , while the hitting time of the corresponding classical random walk is of exponential order . however , this advantage does not lead to an exponential speedup , due to the fact that it is possible to build a deterministic algorithm that traverses the same graph in polynomial time . ideas from were taken one step further by a. childs _ et al _ in , where the authors introduced a more general type of graph to be crossed , proved that those graphs could not be traversed efficiently by any classical algorithm , and delivered an algorithm based on a continuous quantum walk that traverses the graph in polynomial time . graphs are built as follows . begin by constructing two balanced binary trees of height ( i.e. with leaves ) , but instead of identifying the leaves , connect them by a random cycle that alternates between the leaves of the two trees ; that is , we choose a leaf on the left at random and connect it to a leaf on the right , also chosen at random . then , we connect the latter to a leaf on the left chosen randomly among the remaining ones . the process is continued , always alternating sides , until every leaf on the left is connected to two leaves on the right , and vice versa . see fig . ( [ trees](b ) ) for an example of graphs . in order to build the quantum walk that will be used to traverse a graph , the authors of defined a hamiltonian based on 's adjacency matrix . has matrix elements given by . in the continuous quantum walk algorithm proposed in , the authors used an oracle to learn about the structure of the graph , i.e. information about the hamiltonian given by eq . ( [ continuous_quantum_walk_childs ] ) is extracted using an oracle .
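For intuition, the simpler of the two graphs just described, two binary trees glued leaf to leaf, is small enough to be built explicitly and walked on numerically, without any oracle. The sketch below is our own illustrative construction (heap-style indexing, the most obvious leaf pairing, and the adjacency matrix as Hamiltonian); for modest heights one can watch the probability at the exit vertex build up on a timescale that grows only linearly with the height, in line with the polynomial hitting time quoted above.

```python
import numpy as np
from scipy.linalg import expm

def glued_trees(h):
    """Two balanced binary trees of height h with their leaf layers identified.
    Vertices 0..2^(h+1)-2 form the left tree (heap indexing, root = entrance);
    internal right-tree vertices are appended afterwards (right root = exit)."""
    tree = 2 ** (h + 1) - 1                       # vertices of one full binary tree
    leaves = 2 ** h
    first_leaf = tree - leaves                    # heap index of the first leaf
    n = 2 * tree - leaves                         # shared leaf layer counted once

    def right(v):                                 # right-tree heap index -> global index
        return v if v >= first_leaf else tree + v

    A = np.zeros((n, n))
    for v in range(1, tree):
        p = (v - 1) // 2
        A[v, p] = A[p, v] = 1                     # edge of the left tree
        A[right(v), right(p)] = A[right(p), right(v)] = 1   # mirrored right-tree edge
    return A, 0, tree                             # adjacency, entrance, exit

A, entrance, exit_ = glued_trees(h=4)
psi0 = np.zeros(A.shape[0], dtype=complex)
psi0[entrance] = 1.0
for t in np.linspace(0.0, 15.0, 7):
    p_exit = abs((expm(-1j * A * t) @ psi0)[exit_]) ** 2
    print(f"t = {t:5.2f}   P(exit) = {p_exit:.3f}")
```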
using such an oracle , it is proved in that it is possible to construct a continuous quantum walk that efficiently traverses any graph . an improved lower bound for any classical algorithm traversing has been proposed in , but the performance difference between quantum and classical algorithms in remains as previously stated . i now provide a succinct review of more continuous - time quantum walk algorithms . focusing on finding hidden nonlinear structures over finite fields , childs _ et al _ have developed efficient quantum algorithms to solve the hidden radius problem and the hidden flat of centers problem . moreover , farhi _ et al _ have produced a quantum algorithm for solving the nand tree problem ( which consists of evaluating the root node of a perfectly bifurcating tree whose leaves are either 0 or 1 and the value of any other node is the nand of the corresponding children ) and cleve _ et al _ have built quantum algorithms for evaluating min - max trees . finally , agliari _ et al _ have proposed a quantum walk - based search algorithm on fractal structures . let us present a final reflection with respect to algorithms purely based on quantum walks . as stated at the beginning of this section and rightly argued by richter , the quantum algorithms reviewed in this section are instances of an abstract search problem : given a state space which can be translated into a graph structure , find a marked state ( or set of states ) by performing a quantum walk on the graph . with this abstraction in mind , as well as with the purpose of combining the power of quantum walks with classical sampling algorithms , richter has proposed a method for almost - uniform sampling based on repeated measurements of a continuous quantum walk . one of the main goals of quantum computation is the simulation of quantum systems , i.e. the realization of programmable quantum systems whose physical properties allow us to model the behavior of other quantum systems . a novel use of continuous quantum walks for the simulation of quantum processes has been presented by mohseni _ et al _ in . in this contribution , the authors have developed a theoretical framework for studying quantum interference effects in energy transfer phenomena , with the purpose of modeling photosynthetic processes . the main contribution of is to analyze the action of the environment on the coherent dynamics of quantum systems related to photosynthesis . the framework developed in includes a generalization to a non - unitary continuous quantum walk on a directed graph ( as opposed to the previous definition of a unitary continuous quantum walk on undirected graphs ) . exact simulation of quantum systems using the mathematical model of the universal turing machine ( or any other universal automaton equally or less powerful than the universal turing machine ) is either an impossible task ( for example , if we try to exactly simulate uniquely quantum mechanical behavior for which no classical counterpart is known ) or a very difficult one ( for example , when trying to replicate physical phenomena in which the number of possible combinations or outcomes increases exponentially or factorially with respect to the number of physical systems involved in the experiment .
) still , as long as quantum computers are not commercially available to run quantum algorithms on , physicists and computer scientists need an alternative tool to explore ideas and emergent properties of quantum systems and sophisticated quantum algorithms . classical computer simulation of quantum algorithms is crucial for understanding and developing intuition about the behavior of quantum systems used for computational purposes , as well as for realizing the approximate behavior of practical implementations of quantum algorithms . moreover , we may use classical simulation of quantum systems in order to learn which properties and operations of quantum systems can not be efficiently simulated by classical systems ( see and for most interesting results ) , as well as to find out how exclusively quantum - mechanical systems and operations can be employed for algorithmic speed - up . given the relevance of quantum walks in quantum computing , both as a universal model of quantum computation and as an advanced tool for building quantum algorithms , as well as the daunting complexity of designing and coding classical algorithms for running on stand - alone , distributed or parallel hardware platforms , simulating quantum algorithms and quantum walks on classical computers has become a field in its own right . in the following lines , we summarize several theoretical developments and practical software implementations of classical simulators of quantum algorithms , all of these developments being suitable for ( approximately ) simulating both discrete and continuous quantum walks . ömer , bettelli _ et al _ , viamontes _ et al _ , selinger , and bañuls _ et al _ , among others , have introduced mathematical frameworks for implementing quantum algorithm simulators using classical computer languages . later , and among many other relevant contributions , nyman proposed using symbolic classical computer languages for simulating quantum algorithms , ömer introduced abstract semantic structures for modelling quantum algorithms in classical environments , and altenkirch _ et al _ proposed a quantum programming language based on classical functional programming . selinger and gay provided an early description of quantum programming languages and miszczak presented a summary of models of quantum computation and current quantum programming languages . among the several software packages and platforms that have been developed for quantum algorithm simulation , i would like to mention the contributions of marquezino and portugal ( a quantum walk simulator for one- and two - dimensional lattices ) , gómez - muñoz ( a mathematica add - on for quantum algorithm simulation ) , de raedt _ et al _ ( quantum algorithm simulation on parallel computers ) , caraiman and manta ( quantum algorithm simulation on grids ) , díaz - pier _ et al _ ( an extension of built for simulating adiabatic quantum algorithms on gpus ) , and machnes _ et al _ ( a matlab toolset for simulating quantum control algorithms ) . the interested reader will find a comprehensive list of currently available classical simulators of quantum algorithms in . an example of the importance of realizing whether truly quantum properties can be used for algorithm speed - up was provided in the field of quantum walks a few years ago .
as already explained in this review paper ( subsection [ quantumness ] ) ,since the publication of it had been believed that the enhanced variance of position distribution in quantum walks was responsible ( partially at least ) for quadratic speed - up of quantum walk - based algorithms .however , it has been shown that it is possible to develop implementations of a quantum walk on a line purely described by classical physics and still be able to reproduce the variance enhancement that characterizes a discrete quantum walk .thus , _ it remains as an open question what exclusive quantum - mechanical properties and operations are relevant for enhancing our computing capabilities_.universality is a highly desirable property for a model of computation because it shows that such a model is capable of simulating any other model of computation .basically , models of computation that are labeled as universal are capable of solving the same problems , although it could happen in different time regimes .the history of quantum computing includes the recollection of significant efforts to prove the universality of several models of quantum computers , i.e. that any algorithm that can be computed by a general - purpose quantum computer can also be executed by quantum gates , and computers based on the quantum adiabatic theorem , for example . in the field of quantum walks ,hines and stamp have shown in how to map quantum walk hamiltonians and hamiltonians for other quantum systems on hypercubes and hyperlattices .later on , formal proofs of computational universality of quantum walks have been presented by childs ( 2009 ) , lovett _et al _ ( 2010 ) , and underwood and feder ( 2010 ) .let us now dwell on the properties and details of and .+ + a ) * universal computation by continuous - time quantum walk * + in his seminal work , childs proved that the model known as continuous - time quantum walk is universal for quantum computation .this means that , for an arbitrary problem that is computable in a general - purpose quantum computer , it is possible to employ the continuous - time quantum walk model to build computational processes that would also solve . since it has already been proved by childs _et al _ and aharonov and ta - shma that it is possible to simulate a continuous quantum walk using poly(log ) gates , we then conclude that quantum walks and quantum circuits have essentially the same computational power .the proof of universal computation delivered in is based on the following ideas : 1 .executing a continuous - time quantum walk - based algorithm is equivalent to propagating a continuous - time quantum walk on a graph .propagation occurs via scattering theory .2 . the particular structure of graph depends on the problem to solve ( i.e. on the algorithm that one would like to implement . ) nevertheless and in all cases , graph consists of sub - graphs ( with maximum degree equal to three ) representing quantum - mechanical operators connected by quantum wires .moreover , graph is finite in terms of both the number of quantum gates as well as the number and length of quantum wires .3 . quantum wires do not represent qubits : they represented quantum states , instead .consequently , the number of quantum wires in a graph will grow exponentially with respect to the number of qubits to be employed . 
indeed ,if we meant to simulate the propagation of a continuous - time quantum walk in on a classical computer we would certainly need an exponential amount of computational resources for representing quantum wires ; however , both and the propagation of a continuous - time quantum walk on it are to be simulated by a general purpose quantum computer which , as previously stated in the beginning of this section , can simulate a continuous - time quantum walk in poly(log ) .a set of gates is labelled as _universal for quantum computation _ if any unitary operation may be approximated to arbitrary accuracy by a quantum circuit involving those gates .the core of is to simulate a universal gate set for quantum computation by propagating a continuous - time quantum walk on different graph shapes .the universal set chosen by childs in is composed by the controlled - not , phase and and basis - changing gates with matrix representations given in eqs . ( [ cnot_childs],[phase_childs],[basis_changing ] ) , which together constitute a dense subset of .graphs employed to represent these three quantum gates are shown in fig .( [ graphs_for_quantum_gates ] ) . + + + 5 .the eigenvalues and eigenvectors of those graphs employed to simulate a universal gate set for quantum computation play a central role in this discussion . 6 . constitutes a theoretical proposal for proving and exhibiting the computational power of continuous quantum walks .in particular , does _ not _ constitute a hardware - oriented proposal for implementing a general - purpose quantum computer based on continuous quantum walk .\(a ) ( b ) + ( c ) to put it in a few words of my own , proposes quantum computation as the flow of quantum information , via the dynamics of a continuous - time quantum walk , on graphs .let us now work out the details of ( for the sake of clarity and readability of the original paper , hereinafter i will closely follow the notation used in . )+ -_scattering on an infinite line_. starts by reviewing some properties of scattering theory on infinite lines .let be an infinite line of vertices .each vertex corresponds to a basis state and is , of course , connected only to vertices .then , the eigenstates of the adjacency matrix of this graph are the momentum states , with corresponding eigenvalues . the eigenstates fulfill the following condition : -_scattering on a semi - infinite line_. the next step toward calculating expressions for scattering on finite graphs is to study semi - infinite lines .let us consider a graph and construct an infinite graph with adjacency matrix by attaching a semi - infinite line to each of of its vertices ( i.e. it is not compulsory to attach infinite lines to all vertices in , just some vertices would suffice . )we shall enumerate the vertices of each infinite line attached to by labelling the vertex in the original graph with and assigning the values to the vertices we find as we move out along the line ( see fig .( [ semi_infinite_line_childs ] ) for an example of a graph with semi - infinite lines . 
)a nice example of this kind of semi - infinite graphs on discrete - time quantum walks is provided by feldman and hillery in which we reproduce here .let be the graph given in fig .( [ semi_infinite_line_feldman ] ) .the graph goes to on the left and to on the right .one set of unnormalized eigenstates of this graph can be described as having an incoming wave from the left , an outgoing transmitted wave going to the right , and a reflected wave going to the left .the eigenstates with a wave incident from the left take the form is the part of the eigenfunction between vertices and , and is the eigenvalue of the operator that advances the walk one step .the first term can be thought of as the incoming wave ( from to zero ) , the term proportional to is the reflected wave ( from zero to ) , and the term proportional to is the transmitted wave ( from 2 to ) .please notice the crucial role that eigenvalue plays in the quantification of phases .let us now go back to . for each ( i.e. for each infinite line attached to ) there is an incoming scattering state of momentum denoted given by the reflection coefficient , the transmission coefficients and the form of are determined by the eigenequation .eigenstates together with the bound states defined in section ii of form a complete a orthogonal set of eigenfunctions of that are employed to calculate the propagator for scattering through ( we do not write the mathematical expressions for bound states as it is proved in that the role of those states on the scattering process through can be neglected ) : where is a parameter of bound states .the whole purpose of this exercise is to have the mathematical tools needed to compute the propagation of the continuous - time quantum walk on the graphs that act as quantum gates ( fig .( [ graphs_for_quantum_gates ] ) . ) finally with respect to this introductory mathematical treatment , it is stated in that finite graphs can be modelled with eqs .( [ uno_childs],[dos_childs],[tres_childs ] ) without significant changes. + -_universal gate set_. as previously stated in this review , the universal gate set chosen by childs is composed of the controlled - not , phase and and basis - changing gates .the implementation of the controlled - not gate is straightforward as it suffices just to exchange the quantum wires corresponding to the basis states and as shown in fig .( [ graphs_for_quantum_gates].a ) .this wire - exchange may sound unfeasible , but it is not : is a theoretical proposal that describes the logical / mathematical processes that must be performed in order to achieve universal quantum computation , not the implementation of quantum walk - based universal computation on actual quantum hardware . as for the phase gate ,the process to be performed is to apply a nontrivial phase to the , leaving the unchanged .to do so , childs has proposed to propagate the quantum walk through the widget shown in fig .( [ graphs_for_quantum_gates].b ) .the process is as follows : attach semi - infinite lines to the ends ( open circles ) of fig .( [ graphs_for_quantum_gates].b ) and compute the transmission coefficient for a wave of momentum incident on the input terminal ( lhs open circle . )the value for reported in is as direct substitution in eq .( [ childs_cuatro ] ) shows , at the widget has perfect transmission ( i.e. . ) furthermore , also at , the widget shown in fig .( [ graphs_for_quantum_gates].b . 
) introduces a phase of to the quantum information that is being propagated through it .this last result is not explicitly derived in but it can be calculated from the eigenvalues of the corresponding adjacency matrix and the mathematical model for propagation for scattering through graphs ( eq .( [ tres_childs ] ) . )the same rationale applies to the construction of the basis - changing single - qubit gate proposed by childs : propagating a continuous - time quantum walk at through the graph shown in fig .( [ graphs_for_quantum_gates].c ) would be equivalent to applying the unitary transformation given in eq .( [ basis_changing ] . ) now , assuming that will only take the value could be very difficult to implement .consequently , introduces two more gates : a momentum filter and a momentum separator , which are to be used for appropriately tuning the algorithm input .finally , it is stated in that for the actual implementation of a general quantum gate as well as a continuous - time quantum - walk algorithm , we would only need to connect appropriate widgets using quantum wires .let us now review the main ideas and properties of universal computation of discrete - time quantum walks .+ + b ) * universal computation by discrete quantum walk * + in , lovett _et al _ have presented a proof of computational universality for discrete - time quantum walks .the arguments delivered in keep a close link with the ideas presented in , in terms of the universal gate set upon which the simulation of an arbitrary quantum gate can be achieved as well as on the nature of quantum wires ( as in , quantum wires represent basis states rather than qubits . ) here a summary of relevant properties : 1 . executing a discrete - time quantum walk - based algorithm is equivalent to propagating a discrete - time quantum walk on a graph via state transfer theory .in contrast to the behavior of continuous - time quantum walks , coined discrete - time quantum walks do exhibit back - propagation , hence the need to look for an efficient way to propagate the discrete - time quantum walk .+ it has been shown that perfect state transfer can be achieved in graphs ( for example , an eight - node cycle gives perfect state transfer from the initial vertex to the opposite vertex in 12 time steps . )thus , lovett _et al _ propose a scheme based on two - edge quantum wires ( i.e. a cycle of two nodes ) for achieving perfect state transfer .the basic wire used to propagate a discrete - time quantum walk is shown in fig .( [ basic_wire_lovett ] ) . in this setup ,the state would be split as initial state .i shall describe the propagation method proposed in in the following lines .2 . as in , the particular structure of graph on the problem to solve ( i.e. on the algorithm that one would like to implement . ) nevertheless and in all cases , graph consists of sub - graphs representing quantum - mechanical operators connected by quantum wires ( in contrast with , in graphs representing quantum gates have maximum degree equal to eight . ) furthermore , graph is finite in terms of both the number of quantum gates as well as the number and length of quantum wires .3 . quantum wires do not represent qubits : they represented quantum states , instead . 
4 . as in , the number of quantum wires in a graph will grow exponentially with respect to the number of qubits to be employed but , as previously stated in the beginning of this section , both and the propagation of a discrete - time quantum walk on it are to be simulated by a general purpose quantum computer which can simulate a discrete - time quantum walk using poly(log ) gates .
it is proposed in to simulate a universal gate set for quantum computation by propagating a discrete - time quantum walk on different graph shapes . the universal set chosen by lovett _ et al _ in is composed of the controlled - not , phase and hadamard gates with matrix representations given in eqs . ( [ cnot_lovett],[phase_lovett],[hadamard_lovett ] ) . graphs employed to represent these three quantum gates are shown in fig . ( [ graphs_for_quantum_gates_lovett ] ) . also , as in , lovett _ et al _ have presented a theoretical proposal for proving and exhibiting the computational power of discrete - time quantum walks ; it does _ not _ constitute a straightforward quantum computer architecture proposal for implementing a general - purpose quantum computer based on discrete - time quantum walks ( pretty much in the same spirit that a classical algorithm is not straightforwardly implemented in classical digital hardware . )
( fig . ( [ graphs_for_quantum_gates_lovett ] ) : panels ( a ) , ( b ) and ( c ) . )
-_state transfer on the basic wire using a four - dimensional grover coin_. let us now describe the propagation method proposed in . suppose that we need to transmit a qubit that has been initialized as then : * the initial state of the basic wire consists of preparing both lhs arms and with the same quantum information , i.e. the actual amplitude assigned to basis state from eq . ( [ lovett_initial_state ] . ) the same rationale applies to lhs arms and : they both are initialized with the same quantum information , i.e. the amplitude assigned to basis state from eq . ( [ lovett_initial_state ] . ) this initialization , visually presented in fig . ( [ propagation_grover_coin_01 ] ) for , may be written as shown in eq . ( [ initial_state_lovett ] . ) note that the rhs of fig . ( [ propagation_grover_coin_01 ] ) is initialized to . + * now , a crucial point comes into the scene : the application of the grover diffusion operator ( eq . ( [ grover_4d ] ) ) to . it is stated in that , for any vertex of even degree , the grover coin will transfer the entire state from all input edges to all output edges , _ provided the inputs are all equal in both amplitude and phase_. + + mathematically speaking , computing is a straightforward procedure . physically speaking , applying to would be equivalent to applying a unitary operator that performs perfect quantum information transfer from the lhs of the graph to the rhs of that same graph , as shown in fig . ( [ propagation_grover_coin_02 ] ) . in principle , and depending on the particular properties of the quantum hardware to which we may try to translate this protocol , we should be able to find such a physical transfer operation , as we are modelling it as a quantum - mechanical unitary operator . + so , yields eq . ( [ first_operation ] ) + * the third and last step of this basic quantum operation consists of shifting quantum information from the zone nearby node 1 to the surrounding area of node 2 . this step is equivalent to preparing the input of the next algorithmic operation . the full three - step basic operation is shown in fig . ( [ propagation_grover_coin_03 ] ) .
( figs . ( [ propagation_grover_coin_01 ] ) to ( [ propagation_grover_coin_03 ] ) : panels ( a ) and ( b ) . )
-_construction of the universal gate set_.
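before working through the individual gates , a minimal numerical check of the grover - coin transfer property quoted above may be useful , since all of the constructions in this section rely on it . the following sketch is not taken from the original papers ; the edge ordering at the degree - four vertex is an assumption made purely for illustration :

```python
import numpy as np

# 4-dimensional grover diffusion coin g = (2/d) j - i, with j the all-ones matrix
d = 4
g = (2.0 / d) * np.ones((d, d)) - np.eye(d)
assert np.allclose(g @ g, np.eye(d))      # g is real, symmetric and unitary

# assumed edge ordering at the vertex: [input 1, input 2, output 1, output 2]
amp = 0.3 - 0.4j                          # an arbitrary complex amplitude
equal_inputs = np.array([amp, amp, 0, 0], dtype=complex)
print(g @ equal_inputs)                   # -> [0, 0, amp, amp]: complete transfer to the outputs

unequal_inputs = np.array([amp, 0.5 * amp, 0, 0], dtype=complex)
print(g @ unequal_inputs)                 # part of the state is reflected back onto the inputs
```

the second call shows why the basic - wire initialization duplicates the qubit amplitude on both lhs arms : with unequal inputs the coin reflects part of the state backwards and the perfect forward transfer is lost .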
let us now describe how to construct , according to , the controlled - not , phase and and hadamard gates ( eqs . ( [ cnot_lovett],[phase_lovett],[hadamard_lovett ] ) . ) as in , the controlled - not gate is trivial to implement : we only need to exchange corresponding basis states wires as shown in fig .( [ graphs_for_quantum_gates_lovett].(a ) . )as previously declared in this review , this wire - exchange describes the logical / mathematical processes that must be performed in order to achieve universal quantum computation , not the implementation of quantum walk - based universal computation on actual quantum hardware . as for the phase gate ,the process to be performed is to apply a nontrivial phase to the , leaving the unchanged .( [ phase_gate ] ) shows the detailed graph structure of this gate . the rationale behind fig .( [ phase_gate ] ) is as follows : - for each four - edge vertex , apply to to , , , and a grover diffusion operator , a relative phase gate and the shift operator , i.e. apply the full operator . is given in matrix representation in eq .( [ grover_4d ] ) and we propose the following definitions for the relative phase gate ( eq .( [ proposed_phase_factor_matrix ] ) ) and the shift operator ( eq . ( [ proposed_shift_matrix ] ) ) : that is , .now , the diamond - shaped graph that is located in the middle of applies a shift operation to the quantum information that is propagated along that wire _ without applying a relative phase gate_. consequently , at step of fig .( [ phase_gate ] ) , the quantum information running on has a different phase from the one found on the quantum information running on .let us now , for each time step , take a look at quantum operations and corresponding calculations .+ + for for .+ + for : , i.e. for the rationale is identical : .+ + for : , i.e. for the rationale is identical : .+ + for : , i.e. for the rationale is identical : .+ + for : , i.e. for the rationale is identical : .+ + here we have a most important result . for : , i.e. however , for , we only apply the coin operator , suitable for propagating quantum information through the two - edge vertices and _ without applying an additional relative phase operator _ : latexmath:[\[\label{phase_gate_t6_one_w1 } latexmath:[\[\label{phase_gate_t6_one_w2 } thus , the state of this computation at time is given by direct calculations would produce the following states : . . . . . , at time , the wire has a phase equal to while the wire has a phase equal to , i.e. the wire has a relative phase of with respect to the wire . finally ,let us find out how to construct the hadamard gate according to .please note that the graph structure proposed in for the hadamard gate ( fig .( [ graphs_for_quantum_gates_lovett].c ) ) is divided into three parts : * as in the previous gates , the hadamard gate ( fig .( [ graphs_for_quantum_gates_lovett].c ) ) has as input states + for + for * part ( a ) of ( fig .( [ graphs_for_quantum_gates_lovett].c ) ) adds a total phase of to the wire and a phase of to the .we can see that from the number of nodes that the quantum walks is propagated through from the beginning to the very entrance of : nine nodes for and seven nodes for .thus , states for part ( a ) of ( fig .( [ graphs_for_quantum_gates_lovett].c ) ) are : + + + the same rationale applies to the phase applied to and wires on part ( c ) of ( fig .( [ graphs_for_quantum_gates_lovett].c ) ) .thus , the total phase added to the wire is and to the wire is , i.e. 
there is a relative phase of on .+ of course , and are also the input states of part b. * according to , part b of ( fig .( [ graphs_for_quantum_gates_lovett].c ) ) is composed of a graph that has two effects on eqs .( [ hadamard_a_psi],[hadamard_a_phi ] ) : to combine the two inputs from and wires as well as to add a global phase of to both wires .applying euler s identity as before we can see that , hence the factor needed for the hadamard operator ( the number is a global phase that would be experimentally irrelevant . ) lovett _ et al _ finish by explaining how to build quantum circuits using the graphs and methods exposed in , which is very similar to the method proposed in : for the actual implementation of a general quantum gate as well as a discrete - time quantum - walk algorithm , we would only need to connect corresponding graphs using basis - state quantum wires .+ + c ) * universal computation by discontinuous quantum walk * + based on an eclectic analysis of and , underwood and feder have proposed a hybrid quantum walk for realizing universal computation , consisting of propagating a quantum walker via perfect state transfer under continuous evolution .the quantum walk propagates on a line ( quantum wire ) which is actually composed of two alternating lines ( fig .( [ underwood_lines ] ) . )the walker begins walking on the solid line of the graph lhs long enough to perfectly transfer to the end of the first solid line segment .then , the solid line is turned off and , simultaneously , the dashed line is turned on , enabling then the walker to transfer to the end of the first dashed line segment . as in ,underwood and feder have proposed a universal gate set ( phase , identity and rotation graphs ) as well as a method for building general unitary quantum gates and quantum circuits as a combination of basis state quantum wires and phase , identity and rotation graphs .[ [ section-2 ] ] , together with the computational equivalence proofs of several other models of quantum computations , provide a rich toolbox for computer scientists interested in quantum computation , for they will be free to choose from several models of quantum computation those that particularly suit their academic background and interests .in this paper we have reviewed theoretical advances on the foundations of both discrete- and continuous - time quantum walks , together with the role that randomness plays in quantum walks , the connections between the mathematical models of coined discrete quantum walks and continuous quantum walks , the quantumness of quantum walks and a brief summary of papers published on discrete quantum walks and entanglement as well as a succinct review of experimental proposals and realizations of discrete - time quantum walks .moreover , we have reviewed several algorithms based on quantum walks as well as a most important result : the computational universality of both continuous- and discrete - time quantum walks .fortunately , quantum walks is now a solid field of research of quantum computation full of exciting open problems for physicists , computer scientists and engineers .this review , which is meant to be situated as a contribution within the field of quantum walks from the perspective of a computer scientist , will best serve the scientific community if it encourages quantum scientists and quantum engineers to further advance on this discipline .i start by gratefully thanking my family for unconditionally supporting me during the holidays i spent working on this 
manuscript .i am also indebted to professor y. shikano for his kind invitation , patience and support .additionally , i acknowledge the financial support of itesm - cem , conacyt ( sni member number 41594 ) , and texia .i thank professor f.a .grnbaum , professor a. joye , professor c. liu , professor m.a .martin - delgado , professor a. prez , professor c. a. rodrguez - rosario , professor e. roldn , professor s. salimi , professor y. shikano , and the anonymous reviewers of this paper for their criticisms and useful comments .finally , i thank dr a. aceves - gaona for his kind help on artwork .a. ambainis .quantum random walks , a new method for designing quantum algorithms . in _sofsem 2008 : theory and practice of computer science _ , lecture notes in computer science , vol .4910 , pp . 14 , springer berlin / heidelberg , 2008 .m. drezgich , a.p .hines , m. sarovar , and s. sastry .complete characterization of mixing time for the continuous quantum walk on the hypercube with markovian decoherence model .9(9 & 10 ) , pp . 856878 , 2009 .d. ghoshal , m. lanzagorta , and s.e .venegas - andraca .a statistical and comparative study of quantum walks under weak measurements and weak values regimes . in_ proceedings ( 8057 ) of the spie conference on defense , security and sensing _ , page 80570i , 2011 .b. jacobs .coalgebraic walks , in quantum and turing computation . in _ proceedings of the international conference on foundations of software science and computation structures ,springer lncs 6604 _ , pp .1226 , 2009 .h. krovi and f. magniez .finding is as easy as detecting for quantum walks .proceedings of the 37th international colloquium conference on automata , languages and programming , pp .540551 , springer - verlag , 2010 .s. machnes , u. sander , s.j .glaser , p. de fouquieres , a. gruslys , s. schirmer , and t. schulte - herbrueggen .comparing , optimising and benchmarking quantum control algorithms in a unifying programming framework ., 84:022305 , 2011 .k. manouchehri and j. b. wang .solid state implementation of quantum random walks on general graphs . in _ proceedings of the 2nd international workshop on solid state quantum computing and mini school on quantum information science _ , pp .5661 , 2008 .p. nyman . a symbolic classical computer language for simulation of quantum algorithms .in p. bruza , d. sofge , w. lawless , k. van rijsbergen , and m. klusch , editors , _ quantum interaction _ , volume 5494 of _ lecture notes in computer science _ , pp . 158173 .springer berlin / heidelberg , 2009 .owens , m.a .broome , d.n .biggerstaff , m.e .goggin , a. fedrizzi , t. linjordet , m. ams , g.d .marshall , j. twamley , m.j .withford , and a.g .two - photon quantum walks in an elliptical direct - write waveguide array ., 13:075003 , 2011 .a. perdomo , c. truncik , i. tubert - brohman , g. rose , and a. aspuru - guzik . on the construction of model hamiltonians for adiabatic quantum computation and its application to finding low energy conformations of lattice protein models ., 78:012320 , 2008 .a. peruzzo , m. lobino , j.c.f .matthews , n. matsuda , a. politi , k. poulios , x.q .zhou , y. lahini , n. ismail , k. wrhoff , y. bromberg , y. silberberg , m.g .thompson , and j.l .quantum walks of correlated photons .329(5998 ) , pp . 15001503 , 2010 .rohde , a. schreiber , m. , i. jex , and c. silberhorn fldi .multi - walker discrete time quantum walks on arbitrary graphs , their properties and their photonic implementation ., 13:013001 , 2011 .a. schreiber , k.n .cassemiro , v. 
potoek , a. gbris , p.j .mosley , e. andersson , i. jex , and ch .photons walking the line : a quantum walk with adjustable coin operations ., 104(5):050502 , 2010 .p. selinger . a brief survey of quantum programming languages . in _ proceedings of the 7th international symposium on functional and logic programming , nara , japan .springer lncs _ , vol .2998 , pp . 16 , 2004 .b. sun , p. q. le , a.m. iliyasu , f. yan , j. adrin garca , f. dong , and k. hirota . a multi - channel representation for images on quantum computers using the rgb color space ., pp . 160165 , 2011 .m. villagra , m. nakanishi , s. yamashita , and y. nakashima .quantum walks on the line with phase parameters . in _ proceedings of the 10th asian conference on quantum information science ( aqis10 ) _ , 2010 .
|
quantum walks , the quantum mechanical counterpart of classical random walks , is an advanced tool for building quantum algorithms that has been recently shown to constitute a universal model of quantum computation . quantum walks is now a solid field of research of quantum computation full of exciting open problems for physicists , computer scientists and engineers . in this paper we review theoretical advances on the foundations of both discrete- and continuous - time quantum walks , together with the role that randomness plays in quantum walks , the connections between the mathematical models of coined discrete quantum walks and continuous quantum walks , the quantumness of quantum walks , a summary of papers published on discrete quantum walks and entanglement as well as a succinct review of experimental proposals and realizations of discrete - time quantum walks . furthermore , we have reviewed several algorithms based on both discrete- and continuous - time quantum walks as well as a most important result : the computational universality of both continuous- and discrete - time quantum walks .
|
what is the virtual observatory ? the virtual observatory is a realisation of the e - science concept in astronomy ; it is a _ powerful virtual environment _ aimed at facilitating astronomical research and increasing the scientific output of astronomical data . it is formed by data archives and software tools interoperating using a set of peer - reviewed standards and technologies developed and maintained by the international virtual observatory alliance ( ivoa ) . what does this really mean ? naively , increasing the scientific output of the data means that each gigabyte of data coming from a given instrument will produce a larger number of scientific results , i.e. papers or conference presentations , exactly like uploading a research paper to the preprint server significantly increases its scientific impact . the virtual observatory is sometimes referred to as a world wide web for astronomers . indeed , there are numerous remarkable similarities between the concepts of www and vo ( fig [ figvowww ] ) : * _ ivoa _ plays a similar role for the vo as _ w3c _ does for www : these are administrative bodies responsible for the definition of interoperability standards . as examples , we can consider the specifications of html / xhtml developed by w3c and votable developed by ivoa * _ resources _ are the inalienable parts of both concepts . in the case of www these comprise : ( 1 ) web - sites , ( 2 ) portals and directories , ( 3 ) web - services . in the vo we have : ( 1 ) data archives , ( 2 ) catalogue access services ( e.g. sdss casjobs : http://cas.sdss.org/ ) , ( 3 ) astronomy - oriented web - services * _ tools _ represent another cornerstone of both vo and www : ( 1 ) in www we deal with web - browsers ( e.g. firefox , internet explorer , safari ) while in the vo we have data browser and discovery tools , such as astrogrid vo desktop , cds aladin , nvo datascope ; ( 2 ) advanced users often deal with command - line tools to access the resources , such as curl or wget for www , and , similarly , vo clients based on access libraries such as astro - runtime ; ( 3 ) finally , there are specialised clients using www / vo protocols as infrastructure and/or data transport , for instance picasa or google earth , with their `` analogues '' in the vo world such as visivo . in this section i would like to briefly mention the existing accomplishments of the virtual observatory . on the side of ivoa we have a comprehensive set of standards including : data formats ( votable ) , vo resource description ( resource metadata ) , a data model for 1d spectra ( spectrum data model ) and the much more complex and general characterisation data model , the astronomical data query language , protocols to access images and spectra , an application messaging protocol allowing different vo tools to talk to each other , authorisation and authentication mechanisms , and others . many more standards are still at different phases of development . it has now become possible to handle even very complex astronomical datasets in the virtual observatory , such as 3d spectroscopy and results of n - body simulations . in the meantime , application developers have created an impressive set of vo - enabled tools , from those of general interest to very specialised applications . many of them were presented in the review by m.
allen ( this conference ) .data and service providers have contributed to the virtual observatory by providing access to numerous data collections and archives at wavelength domains from gamma - ray to radio .first services to access theoretical models ( e.g. theoretical spectra of stellar atmospheres in spanish - vo or pegase.2/ pegase.hr synthetic stellar populations in vo - france , access to the results of cosmological simulations in italian vo ) started to appear recently .we should also mention first prototypes of data analysis services and value - added services associated with data access services , such as modelling of the spectrophotometric properties of merging galaxies in the galmer database .the virtual observatory has been used for astronomical research for almost 5 years .the first vo - science result was the discovery of optically faint obscured quasars by .this was an example of a multi - wavelength study carried out entirely inside the vo infrastructure .three years later , the vo studies of obscured agn were continued .a number of refereed papers were published by the spanish vo project members , presenting discoveries of unique objects done with the vo tools .a paper by presenting vo sed analyzer is of particular interest , as the first refereed paper presenting a `` virtual instrument '' , i.e. a service in the vo aimed at data analysis , as well as its application to a particular research project .many other studies made use of the vo tools and infrastructure combining them with proprietary data access and analysis .for example , in , authors used the vo data discovery and access mechanisms to collect all existing data on a newly discovered object .this example demonstrates how difficult may be to define the concept of a vo study or vo - enabled study .nearly all research projects mentioned above used virtual observatory to do data discovery , data access , and data mining .therefore , the principal question i would like to address is : is it already possible to go beyond data mining ?the answer is : yes , it is .in this section i describe two vo - enabled research projects making heavy use of vo technologies beyond data mining .the vo is used in connection with dedicated observations and numerical simulations demonstrating the proof - of - concept for such complex studies .this study was inspired by the serendipitous discovery of a very rare compact elliptical ( ce ) galaxy in the central part of the nearby galaxy cluster abell 496 which became the 6th known galaxy of this class in the universe .compact elliptical galaxies have similar luminosities ( mag ) and stellar masses to dwarf ellipticals , but 10 smaller effective radii ( pc ) and , therefore , 100 times higher surface brightness and 1000 times larger stellar density per unit of volume .the prototype of the ce type is messier 32 , a satellite of the andromeda galaxy .all known ces reside in the vicinities of larger stellar systems and/or in the innermost regions of galaxy clusters .compact ellipticals are thought to be tidally stripped intermediate - luminosity galaxies , additional arguments for this scenario have been provided by from stellar populations .however , low statistics did not allow us to uniquely argue for this scenario of ce formation .given small spatial sizes of ces they become spatially - unresolved in ground - based observations for distances beyond mpc .their broadband optical colours well resemble k - type galactic stars giving them little chances to be included into the samples of large 
spectroscopic surveys such as sdss .the key advantage here is provided by the hubble space telescope , which can efficiently resolve these little galaxies up - to a distance of 200 mpc , allowing us to study their structural properties .we realised this in the course of our study of the abell 496 cluster of galaxies , and decided to search for ce galaxies using the power of the virtual observatory to study the role of tidal stripping in the galaxy evolution .all details about this project will be soon provided in chilingarian et al .( in prep . ) , here we give a brief overview .we have constructed a workflow including the following steps : 1 .querying vizier catalogue service to retrieve a list of galaxy clusters having ; 2 . querying ned to retrieve their precise coordinates and values of galactic extinction ; 3 .querying fully - reduced direct images obtained with hst wfpc2 and acs from the hubble legacy archive ( hla ) using simple image access protocol ( siap ) ; 4 . running sextractor as a remote tool on these images ( no image download is required ) ; 5 .selecting extended objects having low ellipticity , effective radii below 0.7 kpc and -band mean effective surface brightness higher than 20 mag / arcsec ; 6 . querying ned to check if there are published redshifts for the selected objects and obtaining additional photometric data having applied the workflow to the entire wfpc2 data collection in the hla we ended up with the archival images of 63 clusters with several dozens candidate ce and tidally stripped galaxies in 30 of them .we found a large number of objects in the scarcely populated region of the vs and vs diagrams reflecting structural properties of galaxies .in fig [ figmbmuav ] we present the structural properties of newly discovered ce and tidally stripped galaxy candidates in comparison with dwarf , intermediate - luminosity , giant early - type galaxies , three nearby compact ellipticals and transitional ce / ucd object . ]our workflow may have confused ce galaxies with ( 1 ) foreground or cluster compact star - forming galaxies ; ( 2 ) background giant early - type galaxies hosting bright active nuclei ( agn ) ; ( 3 ) background post - starburst galaxies .the star - forming galaxies can be discriminated automatically by their blue colours if multi - band data are available or manually by clumpy morphology .the two remaining cases may arise when the distance to the background object is 23 times the distance to the cluster , i.e. 
mpc ( ) in our study .then , agns can be ruled out by checking x - ray point source catalogues , and the probability of having a post - starburst galaxy at this redshift is very low .the next stage of the project was to obtain high - quality optical spectroscopic data on some of the candidates .we have observed three galaxy clusters , abell 160 ( fig [ figa160 ] ) , abell 189 , and abell 397 hosting 8 candidate galaxies ( mag ) with the multi - slit unit of the scorpio spectrograph at the russian 6-m `` bolshoi telescop azimutalnyy '' ( bta ) in august 2008 .we have analysed the spectra by fitting them against high - resolution pegase.hr stellar population models using the novel nbursts full spectral fitting technique , and obtained precise radial velocities , internal velocity dispersions , luminosity - weighted stellar ages and metallicities .all 8 candidate objects were confirmed to be cluster members having stellar populations older than 8 gyr with metal abundances typically between and the solar value ( for one object ) and internal velocity dispersions between 50 and 100 km s .we have performed numerical simulations of tidal stripping of intermediate - luminosity early - type disc galaxies by the potential of the galaxy cluster including the central cd galaxy using the gadget-2 code .the simulations suggest that the progenitor galaxies may entirely lose their discy components due to tidal stripping , while keeping the bulges although with significant stellar mass loss as well . the remnants of the tidal stripping for some initial conditions and orbital configurations well resemble the observed properties of ces , although in most cases they are still remain quite extended . this explains why ce galaxies are not very common .we have faced a number of issues while undertaking this project , most of them are infrastructural .* there is no real vo access to ned .we had to develop and use `` home - made '' scripts to execute queries .* siap interface in the hla is not publicly announced , although it is used internally in the project .the service url has been provided privately to us by the hla developers , although it was possible to get it by reverse engineering of the javascript code .* we had to setup a customized sextractor service and spend a lot of time to fine - tune it by using some undocumented features . *it took 3 semesters to convince the tac to approve our telescope proposal .* we were the first to use the scorpio multi - slit mode with the high - resolution grism , therefore it was necessary to design and develop the data reduction pipeline .the main result of our study is that the class of ce galaxies is converted from `` unique '' into `` common under certain environmental conditions '' .we provide evidences for the importance of tidal stripping of stellar discs as a way to create peculiar early - type galaxy population in the cluster centres .now we can explain the existence of very strange objects such as vcc 1199 having supersolar metallicity for a very low luminosity of mag .this was the first study , where the primary step was done in the vo , then the discovered objects were followed - up with a large telescope and successfully reproduced by numerical simulations .while preparing my presentation for this meeting , i decided to make something special .the challenge was to get valuable results related to studies of galaxies _ in one week _ using the vo starting the project from scratch .my main collaborator in this project was i. 
zolotukhin , located geographically in a different place , so we had to work remotely and interact only online .we decided to study optical and near - infrared colours of nearby galaxies and try to connect them to their stellar population properties .nir magnitudes are less sensitive to the stellar population age compared to the optical colours , therefore they can be used as better stellar mass tracers ( although not perfect ) . in addition , the effects of extinction inside the galaxies being studied are less important in the nir spectral bands .spectroscopic information on stellar ages and metallicities should become additional important bricks of information .the catalogue will be presented in chilingarian , zolotukhin & melchior ( in prep . )we have used the following resources : * sdss dr7 photometric catalogues as a source of optical magnitudes * sdss dr7 spectra to get stellar population properties by the full spectral fitting * ukidss dr4 large area survey ( las ) catalogue as a source of nir magnitudes the techniques we have exploited : * position - based cross - match ( possible to do in the vo ) * stellar population modelling using pegase.2/pegase.hr ( possible to do in the vo ) * nbursts full spectral fitting technique ( yet as a stand - alone non - vo service ) from the zoo of vo tools we selected topcat / stilts to join and merge large tables , scripts - based access to sdss , and astrogrid vo desktop to access ukidss catalogues and perform the cross - match .we have used sdss casjobs to select all galaxies from the spectroscopic sample having redshifts in the sdss stripes 9 to 16 which have been partially covered by the ukidss .this query has returned approximately 170 thousand objects .then , we have cross - matched this list against the ukidss dr4 las catalogue with a search radius of 5 arcsec .this step can be done either using wfcam archive or through the astrogrid vo desktop application using the multi - cone search interface .notice , that the access to ukidss dr4 was restricted , therefore to query it in an automatic way it was necessary to use the authorisation / authentication mechanisms provided by the vo .then we have computed and applied -corrections by fitting optical - nir spectral energy distributions ( seds ) against pegase.2 stellar population models to get rest - frame magnitudes . at the final step ,we have processed all selected sdss dr7 spectra using the nbursts full spectral fitting technique in order to estimate velocity dispersions , ages , and metallicities of all galaxies in 3-arcsec wide apertures .the non - trivial problem is the homogenisation of the photometric data .firstly , we tried to use the petrosian magnitudes provided in both catalogues , but we quickly realised that due to very different sensitivity of the two surveys the petrosian radii used to measure the magnitudes may be very different and the difference is correlated with the galaxy colours . 
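as a side note to the homogenisation discussion , the 5 arcsec positional cross - match described above can also be sketched offline with astropy ; the file and column names below are assumptions used only for illustration , and the actual match in this project was performed through the wfcam science archive or the astrogrid vo desktop multi - cone search :

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table, hstack

# hypothetical input tables; ra / dec columns are assumed to be in degrees
sdss = Table.read("sdss_dr7_spectroscopic_sample.fits")    # the ~170k galaxies with redshifts
ukidss = Table.read("ukidss_dr4_las.fits")                 # the near-infrared las catalogue

c_sdss = SkyCoord(ra=sdss["ra"] * u.deg, dec=sdss["dec"] * u.deg)
c_ukidss = SkyCoord(ra=ukidss["ra"] * u.deg, dec=ukidss["dec"] * u.deg)

# nearest ukidss neighbour of every sdss source, kept only if closer than 5 arcsec
idx, sep2d, _ = c_sdss.match_to_catalog_sky(c_ukidss)
good = sep2d < 5.0 * u.arcsec

matched = hstack([sdss[good], ukidss[idx[good]]])
print(len(matched), "optical / nir pairs within 5 arcsec")
```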
finally , we decided to deal with fluxes in the 3 arcsec wide apertures , which are provided directly by sdss and are easy to compute for ukidss using the three provided aperture magnitudes . these magnitudes may not reflect the real total colours of galaxies , since our fixed aperture corresponds to different spatial sizes at different redshifts , but , for explaining the colour properties using the additional spectroscopic information obtained by the sdss in the same apertures , this approach is preferable . another important problem is the -correction , or the dependence of colours on the redshift due to the fact that the filter transmission curves effectively contain different regions of galaxy seds at different redshifts . there are several existing prescriptions for the computation of -corrections ; however , they give contradictory results for the nir spectral bands . therefore , we have decided to deduce -corrections from the multi - wavelength sed fitting of the data against the pegase.2 ssp models , redshifted according to the spectroscopic information and varying the effects of internal dust extinction . the behaviour of -corrections in the optical bandpasses obtained in this fashion closely resembles the results of . at the same time , our nir -corrections turned out to be very different , although in good agreement with those presented in . since we had optical spectra available from sdss for all the galaxies in our sample , we computed the actual values of the flux differences by integrating the spectra in the rest - frame and redshifted filter bandpasses unless they moved out of the spectral coverage . given the wavelength range of sdss spectra we were able to compute the `` true '' -corrections for the band for any redshift , for the band for objects having , and for the band till . these `` true '' values agree very well with the values derived from our sed fitting , with typical errors not exceeding 0.05 mag . therefore , we conclude that our prescriptions for the computation of -corrections have reasonable quality for the studies of optical / nir galaxy colours . the full discussion related to the computation and applications of -corrections for nearby galaxies will be provided in chilingarian , melchior & zolotukhin ( in prep . ) we have also faced a number of technical issues : * sdss dr7 is not accessible via vo protocols , therefore we used its own casjobs portal . * there were numerous problems accessing ukidss dr4 through the vo due to bugs in the implementation of services and access interfaces . however , all these questions have been solved very efficiently by the ukidss and astrogrid teams . * there is a need to download , upload , merge , and convert lengthy tables . * still there is no way to perform the cross - match against a user - uploaded table using adql queries : these mechanisms are still to be implemented .
( fig . [ figrs ] caption : rest - frame , -corrected colour - magnitude diagrams ; the top and bottom panels present vs and vs respectively ; the ssp - equivalent ages are colour coded , with values from violet to red corresponding to 1 to 15 gyr . )
in fig [ figrs ] we present the colour - magnitude plots for the galaxies from our sample . we use the rest - frame magnitude , which is much less sensitive to the effects of stellar age than the optical bands . spectroscopic ages obtained from the full spectral fitting are colour - coded . it is immediately evident that the red sequence is populated by old galaxies , while in the blue cloud there is a significant age gradient . there is a 3-magnitude long high - luminosity tail of the red sequence clearly seen on the plots , providing immediate evidence that such galaxies can not be formed by equal - mass mergers of objects from the blue cloud . another feature seen in the vs plot is a population of young and intermediate - age objects overlapping the red sequence and sometimes being as much as 0.7 mag redder in the colour . these are probably dusty galaxies with active star formation , so the superposition of young and old populations , plus dust attenuation , creates such an appearance . this project is still very far from being finished . we plan to add galex ultraviolet data and fit sdss spectra together with photometric data points using more realistic galaxy models than simple ssps , for example , including two star formation episodes with different dust attenuation for each of them . we also plan to study the distribution of emission line strengths , since we are able to precisely model the stellar population , making feasible studies of even very faint emission lines . the virtual observatory is already at the production level . scientists not directly associated with vo projects are starting to find their way around it . the scientific results already obtained are impressive and very important as a proof - of - concept . the advantages of the vo approach are clear : one can transparently access and process enormous volumes of data from different sources . but , of course , the vo should not be considered as a replacement for scientists ; it is just a tool to help them . in my opinion , the major problem for a scientist in the vo now is the large number of small but annoying infrastructural faults : all the individual bricks exist , but putting them together still requires a lot of effort . the author is grateful to the organizing committee of the workshop for the invitation , to collaborators in the projects presented in this article , especially to ivan zolotukhin , veronique cayatte , and anne - laure melchior , and to eso for financially supporting the attendance to the workshop . the research presented here is partially based on the data obtained with the hubble space telescope and by the sloan digital sky survey project . the sdss web - site is http://www.sdss.org/ .
i. , bonnarel , f. , louys , m. , & mcdowell , j. 2006 , in astronomical society of the pacific conference series , vol . 351 , astronomical data analysis software and systems xv , ed . c. gabriel , c. arviset , d. ponz , & s. enrique , 371 , i. , bonnarel , f. , louys , m. , et al . 2008 , in astronomical spectroscopy and virtual observatory , proceedings of the euro - vo workshop , held at the european space astronomy centre of esa , villafranca del castillo , spain , 21 - 23 march , 2007 , eds . : m.
guainazzi and p. osuna , published by the european space agency . ,p.125 , ed . m. guainazzi & p. osuna , 125 , i. , prugniel , p. , silchenko , o. , & koleva , m. 2007 , in iau symposium , vol .241 , stellar populations as building blocks of galaxies , ed .a. vazdekis & r. r. peletier ( cambridge , uk : cambridge university press ) , 175
|
after several years of intensive technological development virtual observatory resources have reached a level of maturity sufficient for their routine scientific exploitation . the virtual observatory is starting to be used by astronomers in a transparent way . in this article i will review several research projects making use of the vo at different levels of importance . i will present two projects going further than data mining : ( 1 ) studies of environmental effects on galaxy evolution , where vo resources and services are used in connection with dedicated observations using a large telescope and numerical simulations , and ( 2 ) a study of optical and near - infrared colours of nearby galaxies complemented by the spectroscopic data .
|
for classical mechanical systems , the equation of motion can be written as \equiv \mathcal{l}(t ) \mathbf{\rho}(t),\ ] ] where is the set of phase variables , ] and denote the bernoulli numbers . for example , the shadow hamiltonian of the leap - frog method is given by \bigr]+\bigl[{t},[{t},{v}]\bigr]\bigr ) + { \mathcal{o}}(h^{4}),\ ] ] which is of second order accuracy .force - gradient schemes are based on the fact that the total propagator in eqn .can be split in the following way : where ] denotes the commutator of two operators .the coefficients , and in have to be chosen in such way to obtain the highest possible order for a given integer .eqn . represents the general form of the decomposition , while for the decomposition reduces to the standard non - gradient factorization .the force - gradient method is defined by using the value of which reduces the difference between the true hamiltonian and shadow hamiltonian which is conserved by the method .we will show how to determine the shadow hamiltonian in the next section .the third order force - gradient operator can be obtained for classical systems and is given by \bigr ] = \sum \limits^{n}_{i=1 } \frac{\mathbf{g}_{i}}{m_{i } } \cdot \frac{\partial}{\partial \mathbf{v } } \equiv \mathbf{g } \cdot \frac{\partial}{\partial \mathbf{v}},\ ] ] where and denote the cartesian components of the vectors .the force - gradient evaluations can be explicitly represented taking into account that where is the inter - particle part of the acceleration .the result is +\mathbf{h}_i,\ ] ] where basically the evolution operators and displace and forward in time with the decomposition integration of eqn .conserves the symplectic map of flow of the particles in phase space , because the separate shifts of eqn . of positions and velocitiesdo not change the phase volume .time - reversibility can be ensured by imposing two conditions , namely , , , , as well as , , with and .next we deal with numerical integrators of the form given in eqn ., the most efficient version of which is due to omelyan . adding the force - gradient term in the leap - frog schemedoes not increase the order of the method as one can not cancel the commutator \bigr] ] , hence \bigr] ] and \bigr] ] and neglect the last three terms : \bigr ] } \delta\left(\frac{h}{2}\right)_{m } \operatorname{e}^{\frac{1}{6 } h\hat{{v}}_{2 } } \right]^{l},\ ] ] which preserves the fourth - order accurate shadow hamiltonian order to estimate the performance of the integrator of eqn . we compare it with the other algorithms mentioned above .let us consider the three body problem and a particular case of it , the _ sun - earth - moon problem_. the given system has the energy where , , and represent the masses of the sun , the earth and the moon , respectively and is the gravitational constant .the equations of motion are then the force - gradient terms can be obtained from for this case , using the external field potential and the pair - wise potentials respectively for each interaction ..physical parameters of the sun - earth - moon problem . [ cols="<,^,^",options="header " , ] figure [ fig:1 ] presents a comparison between the standard numerical algorithms , nested approaches , the force - gradient and our combined method .the proposed integrator of eqn . 
with , which combines nested and force - gradient ideas , yields a better energy conservation even compared with 9-stage and 11-stage force - gradient numerical schemes .these numerical results correspond to our analytical observations .figure [ fig:2 ] presents the cpu time , required for the three different integrators against the achieved accuracy .here we scale the time needed for the computation of the fast part by a factor of , since we assume that the computation of the fast scale functions is very cheap compared to the slow scale function evaluations .we can see that in general our nested force - gradient method requires less cpu time and performs more accurate than the standard schemes , presented in figure [ fig:2 ] .+ thus we can argue that , if the evaluation of fast function is significantly cheaper than the _ slow _ function , computational costs decrease .this is exactly the case found in our long - term goal applications in lattice quantum chromodynamics ( lqcd ) , where the action can be split into two parts : the gauge action ( whose force evaluations are cheap ) and the fermion action ( expensive ) .we have introduced a new decomposition scheme for hamiltonian systems , which combines the idea of the force - gradient time - reversible and symplectic integrators and the splitting approach of nested algorithms .the new method of eqn .is fourth - order accurate . compared to other fourth - order schemes ,the leading error coefficient is smaller and computational costs are lower. our future work will apply this approach in the hybrid monte carlo ( hmc ) algorithm for numerical integration of the lattice path - integral of quantum chromodynamics ( qcd ) , which describes the strong interactions between quarks and gluons inside the nucleons . in this case , the hamiltonian dynamics are defined on curved manifolds and one has to take into account the non - commutativity of the operators and .this work is supported by the european union within the marie curie initial training network strongnet on _ strong interaction supercomputing training network _ ( grant agreement number 238353 ) .this work is part of project b5 within the sfb / transregio 55 _ hadronenphysik mit gitter - qcd_. i.p .omelyan , i.m .mryglod , r. folk , _ symplectic analytically integrable decomposition algorithms : classification , derivation , and application to molecular dynamics , quantum and celestial mechanics _ , comput .151(2003 ) , pp . 272314 .
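as a point of reference for the numerical experiment described above , the following sketch integrates a sun - earth - moon - like three - body system with the plain second - order leap - frog ( kick - drift - kick ) scheme and monitors the relative energy error ; it does not implement the force - gradient or nested updates of the paper , and the masses , distances and velocities are approximate textbook values rather than the entries of the paper's parameter table :

```python
import numpy as np

G = 6.674e-11                                   # gravitational constant (m^3 kg^-1 s^-2)
m = np.array([1.989e30, 5.972e24, 7.348e22])    # sun, earth, moon masses (kg), approximate

# coplanar, near-circular initial conditions (m and m/s), approximate
r = np.array([[0.0, 0.0], [1.496e11, 0.0], [1.496e11 + 3.844e8, 0.0]])
v = np.array([[0.0, 0.0], [0.0, 2.978e4], [0.0, 2.978e4 + 1.022e3]])

def acceleration(r):
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += G * m[j] * d / np.linalg.norm(d) ** 3
    return a

def energy(r, v):
    kin = 0.5 * np.sum(m * np.sum(v ** 2, axis=1))
    pot = 0.0
    for i in range(3):
        for j in range(i + 1, 3):
            pot -= G * m[i] * m[j] / np.linalg.norm(r[i] - r[j])
    return kin + pot

h = 3600.0                                      # one-hour time step
e0 = energy(r, v)
for step in range(int(365.25 * 24)):            # integrate for roughly one year
    v += 0.5 * h * acceleration(r)              # kick
    r += h * v                                  # drift
    v += 0.5 * h * acceleration(r)              # kick
print("relative energy error:", abs(energy(r, v) - e0) / abs(e0))
```

the bounded energy error of this baseline scheme is what the force - gradient and nested force - gradient integrators discussed above are designed to improve upon at comparable computational cost .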
|
force - gradient decomposition methods are used to improve the energy preservation of symplectic schemes applied to hamiltonian systems . if the potential is composed of different parts with strongly varying dynamics , this multirate potential can be exploited by coupling force - gradient decomposition methods with splitting techniques for multi - time scale problems to further increase the accuracy of the scheme and reduce the computational costs . in this paper , we derive novel force - gradient nested methods and test them numerically . such methods can be used to increase the acceptance rate for the molecular dynamics step of the hybrid monte carlo algorithm ( hmc ) and hence improve its computational efficiency . numerical geometric integration , decomposition methods , energy conservation , force - gradient , nested algorithms , multirate schemes , operator splitting 65p10 , 65l06 , 34c40
|
game theory provides a powerful framework to study interactions between individuals ( players " ) . among the most interesting types of interactions are social dilemmas , which result from conflicts of interest between individuals and groups .perhaps the most well - studied model of a social dilemma is the prisoner s dilemma . a two - player game with actions , ( cooperate " ) and ( defect " ) , and payoff matrix , is said to be a prisoner s dilemma if .in a prisoner s dilemma , defection is the dominant action , yet the players can realize higher payoffs from mutual cooperation ( ) than they can from mutual defection ( ) , resulting in a conflict of interest between the individual and the pair , which characterizes social dilemmas .thus , in a one - shot game ( i.e. a single encounter ) , two opponents have an incentive to defect against one another , but the outcome of mutual defection ( the unique nash equilibrium ) is suboptimal for both players . one proposed mechanism for the emergence of cooperation in games such as the prisoner s dilemma is direct reciprocity , which entails repeated encounters between players and allows for reciprocation of cooperative behaviors . in an iterated game , a player might forgo the temptation to defect in the present due to the threat of future retaliationthe shadow of the future " or the possibility of future rewards for cooperating , phenomena for which there is both theoretical and empirical support .one example of a strategy for the iterated game is to copy the action of the opponent in the previous round ( tit - for - tat " ) . alternatively, a player might choose to retain his or her action from the previous round if and only if the most recent payoff was or ( win - stay , lose - shift " ) .these examples are among the simplest and most successful strategies for the iterated prisoner s dilemma . in a landmark paper , the existence of zero - determinant strategies , which allow a single player to exert much more control over this game than previously thought possible . since their introduction, these strategies have been extended to cover multiplayer social dilemmas and temporally - discounted games .moreover , zero - determinant strategies have been studied in the context of evolutionary game theory , adaptive dynamics , and human behavioral experiments . in each of these studies ,the game is assumed to have only two actions : cooperate and defect .in fact , the qualifier zero - determinant " actually reflects this assumption because these strategies force a matrix determinant to vanish for action spaces with only two options .we show here that this assumption is unnecessary .more specifically , suppose that players and interact repeatedly with no limit on the number of interactions . for games with two actions , and ,a memory - one strategy for player is a vector , , where is the probability that cooperates following an outcome in which plays and plays .let and be the payoff vectors for players and , respectively , and let , , and be fixed constants . show that if there is a constant , , for which then can unilaterally enforce the linear relationship on the average payoffs , and , by playing .a strategy , , that satisfies eq .( [ eq : pressdysonvector ] ) is known as a zero - determinant " strategy due to the fact that causes a particular matrix determinant to vanish . 
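for readers who want to see the enforcement at work in the two - action case , the following sketch builds the markov chain over the joint outcomes ( cc , cd , dc , dd ) for the conventional payoffs ( r , s , t , p ) = ( 3 , 0 , 5 , 1 ) and lets player x use the extortionate memory - one strategy p = ( 8/9 , 1/2 , 1/3 , 0 ) , which follows from the press - dyson condition with extortion factor 2 ; the opponent strategy q below is an arbitrary choice made only for illustration , and the stationary payoffs come out satisfying s_x - p = 2 ( s_y - p ) regardless of q :

```python
import numpy as np

R, S, T, P = 3.0, 0.0, 5.0, 1.0
Sx = np.array([R, S, T, P])               # x's payoffs in the states (cc, cd, dc, dd)
Sy = np.array([R, T, S, P])               # y's payoffs in the same ordering

p = np.array([8 / 9, 1 / 2, 1 / 3, 0.0])  # extortionate strategy with extortion factor 2
q = np.array([0.9, 0.5, 0.3, 0.6])        # an arbitrary memory-one opponent

q_eff = q[[0, 2, 1, 3]]                   # y sees each state with the roles swapped

M = np.zeros((4, 4))                      # transition matrix of the joint chain
for s in range(4):
    a, b = p[s], q_eff[s]
    M[s] = [a * b, a * (1 - b), (1 - a) * b, (1 - a) * (1 - b)]

w, V = np.linalg.eig(M.T)                 # stationary distribution = left perron vector
v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
v /= v.sum()

s_x, s_y = v @ Sx, v @ Sy
print(s_x - P, 2 * (s_y - P))             # the two numbers coincide for any opponent q
```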
however , what is important about these strategies is not that they cause some matrix determinant to vanish , but rather that they unilaterally enforce a linear relationship on expected payoffs . therefore , we refer to these strategies and their generalization to arbitrary action spaces as autocratic strategies . of particular interestare extortionate strategies , which ensure that a player receives an unfair share of the payoffs exceeding the payoff at the nash equilibrium .hence , if is the payoff for mutual defection in the prisoner s dilemma , then is a an extortionate strategy for player if enforces the equation for some extortion factor , .the most common extensions of finite action sets are continuous action spaces .an element ] the probability that uses after plays and plays ( see fig . [fig : density ] ) .the proofs of the existence of zero - determinant strategies ( both in games with and without discounting ) rely heavily on the fact that the action space is finite . in particular , it remains unclear whether zero - determinant strategies are consequences of the finiteness of the action space or instances of a more general concept .here , we introduce autocratic strategies as an extension of zero - determinant strategies to discounted games with arbitrary action spaces . the traditional ,undiscounted case is recovered in the limit .[ thm : maintheorem ] suppose that ] together enforce the linear payoff relationship for _ any _ strategy of player .in other words , the pair \right) ] is a feasible memory - one strategy ; that is , plays the same role as the scalar in eq .( [ eq : pressdysonvector ] ) , which is chosen so that the entries of are all between and .we call the right - hand side of eq .( [ eq : mainequation ] ) a press - dyson function , which extends the press - dyson vector of eq .( [ eq : pressdysonvector ] ) to arbitrary action spaces ( see * supporting information * ) .in contrast to action spaces with two options ( cooperate " and defect " , for instance ) , autocratic strategies are defined only implicitly via eq .( [ eq : mainequation ] ) for general action spaces ( and actually already for games with just three actions ) . for each and , the integral , \left(s\right) ] .since the integral is taken against ] , so it is typically not possible to directly specify all pairs \right) ] , with indicating maximal cooperation .the costs and benefits associated with , denoted by and , respectively , are nondecreasing functions of and , in analogy to the discrete case , satisfy for and . the payoff matrix , eq .( [ eq : classicalpd ] ) is replaced by payoff functions , with the payoffs to players and for playing against being and , respectively ( i.e. the game is symmetric ) . for this natural extension of the classical donation game ,we first show the existence of autocratic and , in particular , extortionate strategies , that play only and and ignore all other cooperation levels . for the continuous donation game , we show , using theorem [ thm : maintheorem ] , that player can unilaterally enforce for fixed and by playing only two actions : ( defect ) and ( fully cooperate ) . conditioned on the fact that plays only and , a memory - one strategy for player is defined by a reaction function , , which denotes the probability that plays following an outcome in which plays and plays ] , that enforces the equation .if there is no discounting ( i.e. 
) , then the initial move is irrelevant and can be anything in the interval ] .that is , a markov strategy depends on only the last pair of actions and not on the entire history of play .note , however , that a markov strategy may still depend on .if is a markov strategy that does not depend on , then we say that is a stationary ( or memory - one ) strategy .suppose that and are behavioral strategies for players and , respectively .consider the map , , defined by the product measure , \times\sigma_{y}\left[h^{t}\right ] .\end{aligned}\ ] ] by the hahn - kolmogorov theorem , for each there exists a unique measure , , on such that for each and , where , for and , denotes the differential of the measure on . in the case , this measure is simply the product of the two initial actions , i.e. \times\sigma_{y}\left[\varnothing\right] ] and , as a first step we show that for a particular choice of .we then deduce theorem [ thm : maintheorem ] by setting for this known function , .[ prop : mainproposition ] if is a bounded , measurable function , then \left(s\right ) \right ] \ , d\nu_{t}\left(x , y\right ) & = \int_{s\in s_{x}}\psi\left(s\right ) \, d\sigma_{x}^{0}\left(s\right ) , \end{aligned}\ ] ] for any memory - one strategy , , where is the initial action of player .since is bounded , there exists a sequence of simple functions , , such that uniformly on . for each , let .using the uniform convergence of this sequence , together with the dominated convergence theorem and lemma [ lem : akinslemma ] , we obtain \left(s\right ) \right ] \ , d\nu_{t}\left(x ,y\right ) \nonumber \\ & = \sum_{t=0}^{\infty}\lambda^{t } \lim_{n\rightarrow\infty } \int_{\left(x , y\right)\in s_{x}\times s_{y } } \left [ \psi_{n}\left(x\right ) -\lambda\int_{s\in s_{x}}\psi_{n}\left(s\right)\,d\sigma_{x}\left[x , y\right]\left(s\right ) \right ] \ , d\nu_{t}\left(x ,y\right ) \nonumber \\ & = \lim_{n\rightarrow\infty } \sum_{t=0}^{\infty}\lambda^{t } \int_{\left(x , y\right)\in s_{x}\times s_{y } } \left [ \psi_{n}\left(x\right ) -\lambda\int_{s\in s_{x}}\psi_{n}\left(s\right)\,d\sigma_{x}\left[x , y\right]\left(s\right ) \right ] \ ,d\nu_{t}\left(x , y\right ) \nonumber \\ & = \lim_{n\rightarrow\infty } \sum_{t=0}^{\infty}\lambda^{t}\sum_{i=1}^{n_{n}}c_{i}^{n}\int_{\left(x , y\right)\in s_{x}\times s_{y } } \left [ \chi_{e_{i}^{n}}\left(x\right ) -\lambda\int_{s\in s_{x}}\chi_{e_{i}^{n}}\left(s\right)\,d\sigma_{x}\left[x , y\right]\left(s\right ) \right ] \ , d\nu_{t}\left(x ,y\right ) \nonumber \\ & = \lim_{n\rightarrow\infty } \sum_{t=0}^{\infty}\lambda^{t}\sum_{i=1}^{n_{n}}c_{i}^{n}\int_{\left(x , y\right)\in s_{x}\times s_{y } } \big [ \chi_{e_{i}^{n}}\left(x\right ) - \lambda\sigma_{x}\left[x , y\right]\left(e_{i}^{n}\right ) \big ] \ , d\nu_{t}\left(x , y\right ) \nonumber \\ & = \lim_{n\rightarrow\infty } \sum_{i=1}^{n_{n}}c_{i}^{n}\sum_{t=0}^{\infty}\lambda^{t}\int_{\left(x , y\right)\in s_{x}\times s_{y } } \big [ \chi_{e_{i}^{n}}\left(x\right ) - \lambda\sigma_{x}\left[x , y\right]\left(e_{i}^{n}\right ) \big ] \ , d\nu_{t}\left(x , y\right ) \nonumber \\ & = \lim_{n\rightarrow\infty } \sum_{i=1}^{n_{n}}c_{i}^{n}\sigma_{x}^{0}\left(e_{i}^{n}\right ) \nonumber \\ & = \lim_{n\rightarrow\infty } \int_{s\in s_{x}}\psi_{n}\left(s\right)\,d\sigma_{x}^{0}\left(s\right ) \nonumber \\ & = \int_{s\in s_{x}}\psi\left(s\right)\,d\sigma_{x}^{0}\left(s\right ) , \end{aligned}\ ] ] which completes the proof . 
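for the two - action case with memory - one strategies the statement of the proposition can be checked by a direct linear - algebra computation , because the joint play is then a markov chain on the four outcomes and the sum over t becomes a resolvent ; the strategies , the test function and the discount factor below are arbitrary choices made only for illustration :

```python
import numpy as np

lam = 0.9                                     # discount factor
psiC, psiD = 0.3, -1.2                        # an arbitrary bounded test function of x's action
p = np.array([0.7, 0.1, 0.8, 0.2])            # x's cooperation probabilities after (cc, cd, dc, dd)
q = np.array([0.5, 0.9, 0.2, 0.6])            # y's cooperation probabilities (y's own move first)

q_eff = q[[0, 2, 1, 3]]                       # re-index y's strategy to the joint-state ordering
M = np.zeros((4, 4))
for s in range(4):
    a, b = p[s], q_eff[s]
    M[s] = [a * b, a * (1 - b), (1 - a) * b, (1 - a) * (1 - b)]

psi_x = np.array([psiC, psiC, psiD, psiD])    # psi evaluated on x's part of each state
exp_next = p * psiC + (1 - p) * psiD          # the inner integral of psi against sigma_x[x, y]
g = psi_x - lam * exp_next

nu0 = np.array([1.0, 0.0, 0.0, 0.0])          # both players open with c, deterministically
lhs = nu0 @ np.linalg.inv(np.eye(4) - lam * M) @ g   # sum over t of lam^t times the expectation under nu_t
print(lhs, psiC)                              # both sides agree: the sum collapses to psi(x_0)
```

the agreement is exact here because , for memory - one strategies , the integrand equals ( i - lam m ) applied to psi_x , so the resolvent cancels it and only the initial term survives , which is precisely the telescoping argument used in the proof above .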
while proposition [ prop : mainproposition ] applies to discounted games with , we can get an analogous statement for undiscounted games by multiplying both sides of eq .( [ eq : mainpropequation ] ) by and taking the limit : if is a bounded , measurable function , then , when the limit exists , \left(s\right ) \right ] \ , d\nu_{t}\left(x , y\right ) & = 0\end{aligned}\ ] ] for any memory - one strategy , , where is the initial action of player .suppose that ] together enforce the linear payoff relationship for _ any _ strategy of player .in other words , the pair \big) ] enforces .by theorem [ thm : maintheorem ] , we need only show that there exists such that \left(s\right ) -\left(1-\lambda\right)\int_{s\in s_{x}}\psi\left(s\right)\,d\sigma_{x}^{0}\left(s\right)\end{aligned}\ ] ] for each and in order to establish the equation . indeed , since we are restricting s actions to two points , we may assume that .fix and let .since \left(s\right ) - \left(1-\lambda\right)\int_{s\in s_{x}}\psi\left(s\right)\,d\sigma_{x}^{0}\left(s\right ) \nonumber \\ & = \psi_{1 } - \lambda\left(\psi_{1}+\left(1-p\left(s_{1},y\right)\right)\frac{1}{\phi}\right ) - \left(1-\lambda\right)\left(\psi_{1}+\left(1-p_{0}\right)\frac{1}{\phi}\right ) \nonumber \\ & = \alpha u_{x}\left(s_{1},y\right ) + \beta u_{y}\left(s_{1},y\right ) + \gamma\end{aligned}\ ] ] and \left(s\right ) - \left(1-\lambda\right)\int_{s\in s_{x}}\psi\left(s\right)\,d\sigma_{x}^{0}\left(s\right ) \nonumber \\ & = \frac{1}{\phi } + \psi_{1 } - \lambda\left(\psi_{1}+\left(1-p\left(s_{2},y\right)\right)\frac{1}{\phi}\right ) - \left(1-\lambda\right)\left(\psi_{1}+\left(1-p_{0}\right)\frac{1}{\phi}\right ) \nonumber \\ & = \alpha u_{x}\left(s_{2},y\right ) + \beta u_{y}\left(s_{2},y\right ) + \gamma , \end{aligned}\ ] ] and since for each and by eq .( [ eq : corollaryequation ] ) , the proof is complete . in the undiscounted case ( ) , eq .( [ eq : corollaryequation ] ) is satisfied for some if and only if there exist such that for every .moreover , if eq .( [ eq : corollaryequation ] ) holds for some , , , and , then it must be true that eq .( [ eq : underoverzero ] ) also holds for every .( [ eq : underoverzero ] ) does not hold for a particular choice of , then and can not form a two - point autocratic strategy for any discounting factor , . therefore , eq .( [ eq : underoverzero ] ) , which is easy to check , offers a straightforward way to show that two actions can not form a two - point autocratic strategy for a particular game . [ rem : twopointexplicit ] for , , and , fixed , one can ask which strategies of the form & = p\left(x , y\right)\delta_{s_{1}}+\big(1-p\left(x , y\right)\big)\delta_{s_{2 } } , \end{aligned}\ ] ] for some and , satisfy the equation \left(s\right ) - \left(1-\lambda\right)\int_{s\in s_{x}}\psi\left(s\right)\,d\sigma_{x}^{0}\left(s\right ) .\end{aligned}\ ] ] indeed , we see from the proof of corollary [ cor : maincorollary ] that , for a strategy of this form , we must have for each and . therefore , this simple case does not capture the generally - implicit nature of autocratic strategies because one can explicitly write down two - point strategies via eq .( [ eq : omegaexpression ] ) , which is typically not possible for strategies concentrated on more than just two actions .here we present some simple examples of theorem [ thm : maintheorem ] and its implications . 
in [ si : sizefinite ] , we demonstrate how theorem [ thm : maintheorem ] reduces to the main result of when the action space has only two options .moreover , we use an action space consisting of three choices to illustrate the implicit nature of autocratic strategies defined via press - dyson functions for more than two actions . in [ si : continuousdonationgame ] , we show that there is no way for a player to unilaterally set her own score using theorem [ thm : maintheorem ] . in particular , despite the implicit nature of autocratic strategies , one can use theorem [ thm : maintheorem ] to deduce the non - existence of certain classes of strategies .\left(s\right ) \nonumber \\ & = \psi\left(x\right ) - \sum_{r=1}^{n}\psi\left(a_{r}\right)\sigma_{x}\left[x , y\right]\left(a_{r}\right ) \nonumber \\ & = \psi\left(x\right ) - \sum_{r=1}^{n-1}\psi\left(a_{r}\right)\sigma_{x}\left[x , y\right]\left(a_{r}\right ) - \psi\left(a_{n}\right)\left(1-\sum_{r=1}^{n-1}\sigma_{x}\left[x , y\right]\left(a_{r}\right)\right ) \nonumber \\ & = \psi\left(x\right ) - \psi\left(a_{n}\right ) - \sum_{r=1}^{n-1}\big ( \psi\left(a_{r}\right ) -\psi\left(a_{n}\right ) \big)\sigma_{x}\left[x , y\right]\left(a_{r}\right ) \nonumber \\ & = \begin{cases}\displaystyle\sum_{r=1}^{n-1}\big ( \psi\left(a_{n}\right ) -\psi\left(a_{r}\right ) \big)\big(\sigma_{x}\left[x , y\right]\left(a_{r}\right ) -\delta_{r , r'}\big ) & x = a_{r'}\neq a_{n } , \\\displaystyle\sum_{r=1}^{n-1}\big ( \psi\left(a_{n}\right ) -\psi\left(a_{r}\right ) \big)\sigma_{x}\left[x , y\right]\left(a_{r}\right ) & x = a_{n}.\end{cases}\end{aligned}\ ] ] \left(s\right ) & = \begin{cases}c_{1}\sigma_{x}\left[x , y\right]\left(a_{1}\right ) + c_{2}\sigma_{x}\left[x , y\right]\left(a_{2}\right ) - c_{1 } & x = a_{1 } , \\c_{1}\sigma_{x}\left[x , y\right]\left(a_{1}\right ) + c_{2}\sigma_{x}\left[x , y\right]\left(a_{2}\right ) - c_{2 } & x = a_{2 } , \\c_{1}\sigma_{x}\left[x , y\right]\left(a_{1}\right ) + c_{2}\sigma_{x}\left[x , y\right]\left(a_{2}\right ) & x = a_{3},\end{cases}\end{aligned}\ ] ] where and . for each and ,the measure ] and \left(a_{2}\right) ] .despite the fact that player has an uncountably infinite number of actions to choose from , player can still ensure that eq .( [ eq : extortionequation ] ) holds by playing only two actions . in the main text, we saw that can unilaterally enforce for provided and is sufficiently close to . if , then is irrelevant and the linear relationship is simply . here , we show that , if , then is necessary for such a payoff relationship to be enforced via theorem [ thm : maintheorem ] . indeed ,if although a player can not set her own score in the continuous donation game , she can set the score of her opponent .we saw in the main text that can set s score to anything between and provided is sufficiently large , and here we show that this interval is the only range of payoffs for player that can unilaterally set via theorem [ thm : maintheorem ] .indeed , if satisfies we saw in the main text that extortionate strategies exist in the continuous donation game , as demonstrated by the two - point strategy defined by eq .( [ eq : twopointdonationexample ] ) .however , it certainly need not be the case that for any , there exists such that which is impossible since and .thus , there is no feasible press - dyson function that allows player to unilaterally enforce the equation .in other words , a player can not unilaterally set her own payoff .
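as a concrete companion to the two - action reduction mentioned above , the following sketch numerically verifies the classical press - dyson special case . it is our own illustration : the prisoner s dilemma payoffs ( r , s , t , p ) = ( 3 , 0 , 5 , 1 ) , the extortion factor chi and the scaling phi are values we chose , not values taken from the paper . a memory - one extortionate strategy for player x is written in the standard press - dyson form , and the long - run ( undiscounted ) mean payoffs against randomly chosen memory - one opponents are computed from the stationary distribution of the induced markov chain ; in every case they satisfy the enforced linear relation s_x - p = chi ( s_y - p ) .

```python
import numpy as np

# standard two-action iterated prisoner's dilemma payoffs (R, S, T, P) for player X,
# with joint-action states ordered (CC, CD, DC, DD) from X's point of view
R, S, T, P = 3.0, 0.0, 5.0, 1.0
SX = np.array([R, S, T, P])          # X's one-shot payoffs per state
SY = np.array([R, T, S, P])          # Y's one-shot payoffs per state

chi, phi = 3.0, 0.05                 # extortion factor and a feasible scaling phi
# memory-one cooperation probabilities for X that enforce  s_X - P = chi * (s_Y - P)
p = np.array([1.0, 1.0, 0.0, 0.0]) + phi * ((SX - P) - chi * (SY - P))
assert np.all((p >= 0) & (p <= 1)), "phi too large for this chi"

rng = np.random.default_rng(0)
for _ in range(5):                   # test against several arbitrary memory-one opponents
    q = rng.random(4)                # Y's cooperation probabilities, indexed from Y's view
    qx = q[[0, 2, 1, 3]]             # re-indexed to X's state ordering (CC, CD, DC, DD)
    # transition matrix of the joint-action Markov chain
    M = np.column_stack([p * qx, p * (1 - qx), (1 - p) * qx, (1 - p) * (1 - qx)])
    # stationary distribution: left eigenvector of M for eigenvalue 1
    w, V = np.linalg.eig(M.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1))])
    v = v / v.sum()
    sX, sY = v @ SX, v @ SY          # long-run average payoffs
    print(f"s_X - P = {sX - P:.4f},  chi * (s_Y - P) = {chi * (sY - P):.4f}")
```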
|
the recent discovery of zero - determinant strategies for the iterated prisoner s dilemma sparked a surge of interest in the surprising fact that a player can exert unilateral control over iterated interactions . these remarkable strategies , however , are known to exist only in games in which players choose between two alternative actions such as cooperate " and defect . " here we introduce a broader class of autocratic strategies by extending zero - determinant strategies to iterated games with more general action spaces . we use the continuous donation game as an example , which represents an instance of the prisoner s dilemma that intuitively extends to a continuous range of cooperation levels . surprisingly , despite the fact that the opponent has infinitely many donation levels from which to choose , a player can devise an autocratic strategy to enforce a linear relationship between his or her payoff and that of the opponent even when restricting his or her actions to merely two discrete levels of cooperation . in particular , a player can use such a strategy to extort an unfair share of the payoffs from the opponent . therefore , although the action space of the continuous donation game dwarfs that of the classical prisoner s dilemma , players can still devise relatively simple autocratic and , in particular , extortionate strategies .
|
data processing and management systems such as databases , datastores or query engines usually have to answer to two kinds of entities : _ humans _ and _ hardware_. towards humans , they provide means to query , manipulate or manage data . towards the hardware , they issue store and retrieve commands . they depend directly or indirectly on the very nature of the hardware . almost all systems ( for example , hadoop s distributed file system ) are designed with strong though not necessarily explicit assumptions about the underlying hardware , such as hard disk drives ( hdd ) , their spindles , heads , etc . conceptually , there are three levels present in data processing and management systems ( fig . [ fig : data - layers ] ) : * the _ user interface _ level . any database or datastore needs to provide a way to interact with the data under management . this can be something as elaborate , standardised and mature as the structured query language ( sql ) found in relational database management systems ( rdbms ) , such as oracle db , postgresql , or mysql . this can be a restful interface , found in many nosql datastores , like , for example , couchdb s api . of course , this can also be a programming - language - level api , as is the case with hadoop . * the _ logical data layout _ level . this level addresses how the user conceptually thinks about and deals with the data . in case of an rdbms the data units might be tables and records , in a key - value store like redis it may be an entry identified via a key , in a wide - column store the data unit might be a row containing different columns , and last but not least in an rdf store a single triple might be the unit one logically manipulates . * the _ physical data layout _ level . on this level , we are concerned with the question of how the data is laid out once serialised . the serialisation takes place from main memory ( ram ) either to send the data in question over the wire or to store it on a durable medium such as a hard disk drive or a solid - state drive ( ssd ) . concrete serialisations may be text - based , such as csv and json , or of binary nature , like the rcfile format . in earlier work we introduced the three fundamental data shapes _ tabular _ , _ tree _ ( henceforth _ nested _ ) , and _ graph _ . it turns out that it is useful to further differentiate the shapes , distinguishing between logical and physical layouts , as hinted above . in the following , i propose a non - exhaustive , lightweight taxonomy for logical and physical data layouts and serialisation formats as depicted in fig . [ fig : taxonomy - dl ] . the main point of this taxonomy is to decouple the logical from the physical level . while for the human user the logical level is of importance , from a software and systems perspective the physical level dominates . there are cases , however , where the abstraction is leaking and the user is forced to accommodate . take , for example , best practices concerning nosql data modeling : with a wide - column store , such as hbase , one can easily get into a situation where one must take into account the physical location of the data in order to avoid performance penalties . also , the choice of the serialisation format ( for example , textual vs. binary ) can have severe implications , both in terms of performance and maintenance . look at a case where one decides to use json as the wire format in contrast to , say , avro .
in the former case , one can debug any document simply by issuing a command on the shell like ` cat datafile.json | more ` , while with avro more specialised tooling is necessary . on the other hand , one can probably expect better i / o performance and disk utilisation with a binary format such as avro , compared to json . now we are already entering the discussion of the impact of the choices we make concerning how the data is laid out , so let us jump right into it . there are two schools of thought concerning the organisation of data units : data _ normalisation _ and data _ denormalisation _ . the former wants to minimise redundancy , the latter aims to minimise assembly . both have their own built - in assumptions , characteristics and use cases . _ normalised data _ : * as data items are not redundant , data consistency is relatively easy to achieve compared to denormalised data . * when updating data in place , one only has to deal with it once and not in multiple locations . * storage is used efficiently , that is , the data takes up less disk space . _ denormalised data _ : * access to data units is fast , as no joins are necessary ; the data can be considered to be pre - joined . * as it provides an entity - centric view , it is in general more straightforward to employ automated sharding of the data . * due to keeping multiple copies of data items or fragments thereof around , it typically requires a multitude more space on disk than normalised data . ( a small code sketch below illustrates this contrast . ) in table [ tab : ndcomparison ] i provide a comparison and summary of the two different ways to handle data , including typical examples of workloads and technologies for the respective use cases . ( table caption : a comparison of normalised vs. denormalised handling of data on the logical and physical level across sql and nosql data management systems . ) allow me a side remark relative to the ongoing and tiring debate of sql vs. nosql : it turns out that the focus on sql as the representative of evil is really a rather backward view . as stated in many places all over the web , many open source projects and commercial entities are introducing sql bindings or interfaces on top of hadoop and nosql datastores . this is quite understandable , given the huge number of deployed ( business intelligence ) tools that natively speak sql and , of course , the many people out there trained in this language . * joining the dots . * we are now in a position to wrap up on the impact of the choices we make concerning how the data is laid out : one dimension of freedom is the choice of how we organise the data , normalised vs. denormalised . the second choice we have concerns the physical data representation . interestingly , some systems are more rigid and upfront with what they support , expect or allow . while , for example , in the hadoop ecosystem it is entirely up to you how you serialise your data ( and depending on your requirements and the workload you might end up with a different result ) , traditional rdbms are much more restrictive : you seldom get to choose the physical data layout , and the logical layout is hard - coded anyway . coming back full circle to the initial fig . [ fig : data - layers ] , one should , however , not underestimate the _ user interface _ level .
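to make the normalised - versus - denormalised contrast above concrete , here is the small , purely illustrative sketch referred to earlier . the records , names and helper functions are made up for this note and do not correspond to any particular datastore : the normalised form stores each author exactly once and joins at read time , whereas the denormalised form materialises the join up front and therefore has to touch every redundant copy whenever the underlying data changes .

```python
import json

# normalised: each author is stored exactly once; papers refer to authors by key,
# so an entity-centric read has to join the two collections
authors = {1: {"name": "alice"}, 2: {"name": "bob"}}
papers = [{"title": "paper a", "author_ids": [1, 2]},
          {"title": "paper b", "author_ids": [2]}]

def paper_with_authors(p):                    # the read-time "join"
    return {"title": p["title"],
            "authors": [authors[i]["name"] for i in p["author_ids"]]}

# denormalised: every paper document embeds the author names; reads are pre-joined,
# but each additional paper stores another redundant copy of the names
papers_denorm = [paper_with_authors(p) for p in papers]

def rename_author_denorm(old, new):
    # an in-place update must now visit every document that carries a copy
    for p in papers_denorm:
        p["authors"] = [new if a == old else a for a in p["authors"]]

print(json.dumps(paper_with_authors(papers[1])))   # normalised: assembled on demand
print(json.dumps(papers_denorm[1]))                # denormalised: ready to serve, no join
rename_author_denorm("bob", "robert")
print(json.dumps(papers_denorm))                   # both embedded copies were touched
```

the same trade - off plays out at scale in the workloads and technologies summarised in table [ tab : ndcomparison ] .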
at the end of the day , the usability , integrability and user familiarity of this level can be the reason why some data management systems have a better chance to survive than others . last but not least , one should take into account the emerging _ polyglot persistence _ meme , which essentially states that one size does not fit all concerning data storage and manipulation . i suggest embracing this meme together with pat helland s advice : `` in today s humongous database systems , clarity may be relaxed , but business needs can still be met . '' i would like to thank eric brewer , whose ricon2012 keynote motivated me to write up this short note . his keynote is available via https://vimeo.com/52446728 and is certainly worth watching in its entirety . yongqiang he , rubao lee , yin huai , zheng shao , namit jain , xiaodong zhang , and zhiwei xu . rcfile : a fast and space - efficient data placement structure in mapreduce - based warehouse systems . in serge abiteboul , klemens böhm , christoph koch , and kian - lee tan , editors , _ icde _ , pages 1199 - 1208 . ieee computer society , 2011 .
|
in this short note i review and discuss fundamental options for physical and logical data layouts as well as the impact of the choices on data processing . i should say in advance that these notes offer no new insights , that is , everything stated here has already been published elsewhere . in fact , it has been published in so many different places , such as blog posts , in the literature , etc . that the main contribution is to bring it all together in one place .
|
many physical , technological , biological , and social systems can be modeled as networks , which in their simplest form are represented by graphs .a ( static and single - layer ) graph consists of a set of entities ( called `` vertices '' or `` nodes '' ) and pairwise interactions ( called `` edges '' or `` links '' ) between those vertices .graphical representations of data have led to numerous insights in the natural , social , and information sciences ; and the study of networks has in turn borrowed ideas from all of these areas . in general , networks can be described using a combination of local , global , and `` meso - scale '' perspectives .to investigate meso - scale structures , intermediate - sized structures that are responsible for `` coupling '' local properties , such as whether triangles close , and global properties such as graph diameter a fundamental primitive in many applications entails partitioning graphs into meaningful and/or useful sets of nodes .the most popular form of such a partitioning procedure , in which one attempts to find relatively dense sets of nodes that are relatively sparsely connected to other sets , is known as `` community detection '' .myriad methods have been developed to algorithmically detect communities ; and these efforts have led to insights in applications such as committee and voting networks in political science , friendship networks at universities and other schools , protein - protein interaction networks , granular materials , amorphous materials , brain and behavioral networks in neuroscience , collaboration patterns , human communication networks , human mobility patterns , and so on .the motivation for the present work is the observation that it can be very challenging to find meaningful medium - sized or large communities in large networks .much of the large body of work on algorithmically identifying communities in networks has been applied successfully either to find communities in small networks or to find small communities in large networks , but it has been much less successful at finding meaningful medium - sized and large communities in large networks .there are many reasons that it is difficult to find `` good '' large communities in large networks .we discuss several such reasons in the following paragraphs .first , although it is typical to think about communities as sets of nodes with `` denser '' interactions among its members than between its members and the rest of a network , the literature contains neither a consensus definition of community nor a consensus on a precise formalization of what constitutes a `` good '' community .second , the popular formalizations of a `` community '' are computationally intractable , and there is little precise understanding or theoretical control on how closely popular heuristics to compute communities approximate the exact answers in those formulations .indeed , community structure itself is typically `` defined '' operationally via the output of a community - detection algorithm , rather than as the solution to a precise optimization problem or via some other mathematically precise notion .third , many large networks are extremely sparse and thus have complicated structures that pose significant challenges for the algorithmic detection of communities via the optimization of objective functions .this is especially true when attempting to develop algorithms that scale well enough to be usable in practice on large networks .fourth , the fact that it is difficult to visualize large 
networks complicates the validation of community - detection methods in such networks .one possible means of validation is to compare algorithmically - obtained communities with known `` ground truth '' communities .however , notions of ground truth can be weak in large networks , and one rarely possesses even a weak notion of ground truth for most networks . indeed , in many cases , one should not expect a real ( or realistic ) large network to possess a single feature that ( to leading order ) dominates large - scale latent structure in a network .thus , comparing the output of community - detection algorithms to `` ground truth '' in practice is most appropriate for obtaining coarse insights into how a network might be organized into social or functional groups of nodes . alternatively ,different notions and/or formalizations of `` community '' concepts might be appropriate in different contexts , so it is desirable to formulate flexible methods that can incorporate different perspectives .fifth , community - detection algorithms often have subtle and counterintuitive properties as a function of sizes of their inputs and/or outputs .for example , the community - size `` resolution limit '' of the popular modularity objective function is a fundamental consequence of the additive form of that objective function , but it only became obvious to people after it was explicitly pointed out .motivated by these observations , we consider the question of community quality as a function of the size ( i.e. , number of nodes ) of a purported community .that is , we are concerned with questions such as the following .( 1 ) what is the relationship between communities of different sizes in a given network ? in particular , for a given network and a given community - quality objective , are larger communities `` better '' or `` worse '' than smaller communities ?( 2 ) what is an appropriate way to think about medium - sized and large communities in large networks ? in particular , how do smaller communities `` fit together '' into medium - sized and larger communities ? ( 3 ) more generally , what effect do the answers to these questions have on downstream tasks that are of primary concern when modeling data using networks ?for example , what effect do they have on processes such as viral propagation or the diffusion of information on networks ? by considering a suite of networks and using several related notions of community quality, we identify several scenarios that can arise in realistic networks . 1 .* small communities are better than large communities . * in this first scenario , for which there is an upward - sloping _ network community profile _( ncp ; see the discussion below ) , a network has small groups of nodes that correspond more closely than any large groups to intuitive ideas of what constitutes a good community .small and large communities are similarly good or bad . * in this second scenario , for which an ncp is roughly flat , the most community - like small groups of nodes in a network have similar community quality scores to the most community - like large groups .* large communities are better than small communities . * in this third scenario , for which an ncp is downward - sloping , a network has large groups of nodes that are more community - like ( i.e. 
, `` better '' in some sense ) than any small groups .although the third scenario is the one that has an intuitive isoperimetric interpretation and thus corresponds most closely with peoples intuition when they develop and validate community - detection algorithms , one of our main conclusions is that most large realistic networks correspond to the first or second scenarios .this is consistent with recent results on network community structure using related approaches as well as somewhat different approaches , and it also helps illustrate the importance of considering community structures with groups that have large overlaps . for more on this ,see our discussions below .one of the main tools that we use to justify the above observations and to interpret the implications of community structure in a network is a _ network community profile ( ncp ) _ , which was originally introduced in ref . . given a community `` quality '' score i.e . , a formalization of the idea of a `` good '' community an ncp plots the score of the best community of a given size as a function of community size .the authors of ref . considered the community quality notion of conductance and employed various algorithms to approximate it . in subsequent work , many other notions of community quality have also been used to compute ncps . in the present paper , we compute ncps using three different procedures to identify communities. 1 . * diffusion - based dynamics .* first , we consider a diffusion - based dynamics ( called the aclcut method ; see the discussion below ) from the original ncp analysis that has an interpretation that good communities correspond to bottlenecks in the associated dynamics .* spectral - based optimization . * second , we consider a spectral - based optimization rule ( called the movcut method ; see below ) that is a locally - biased analog of the usual global spectral graph partitioning problem .* geodesic - based dynamics .* finally , we consider a geodesic - based spreading process ( called the egonet method ; see the discussion below ) that has an interpretation that nodes in a good community are connected by short paths that emanate from a seed node .we describe these three procedures in more detail in appendix[sxn : measures ] . for now , we note that the first and the third procedures have a natural interpretation as defining communities operationally as the output of an underlying dynamics , and the first and second procedures allow us to compare this operational approach with an optimization - based approach .viewed from this perspective , the computation of network community structure depends fundamentally on three things : actual network structure , the dynamics or application of interest , and the initial conditions or network region of interest .although there are differences between the aforementioned three community - identification methods , these methods all take the perspective that a network s community structure depends not only on the connectivity of its nodes but also on ( 1 ) the region of a large network in which one is interested and ( 2 ) the application of interest .the perspective in point ( 1 ) contrasts with the prevalent view of community structure as arising simply from network structure , but it is consistent with the notion of dynamical systems depending fundamentally on their initial conditions , and it is crucial in many applications ( e.g. 
, both social and biological contagions ) .for example , facebook s data team and its collaborators have demonstrated that one can view facebook as a collection of egocentric networks that have been patched together into a network whose global structure is very sparse .the above three community - identification methods have the virtue of combining the prevalent structural perspective with the idea that one is often interested in structure that is located `` near '' ( in terms of both network topology and edge weights ) an exogenously - specified `` seed set '' of nodes .the perspective in point ( 2 ) underscores the fact that one should not expect answers to be `` universal . ''the differences between the aforementioned three methods lie in the specific dynamical processes that underlie them .we also note that , although we focus on the measure of community quality known as `` conductance '' ( which is intimately related to the problem of characterizing the mixing rates of random walks ) , one can view other quality functions ( e.g. , based on non - conservative dynamics or geodesic - based dynamics ) as solving other problems , and they thus can reveal different aspects of community structure in networks .the global ncps that we compute from the three community - identification procedures are rather similar in some respects , suggesting that the characteristic features of ncps are actual features of networks and not just artifacts of a particular way of sampling local communities .however , we observe significant differences in their local behaviors because they are based on different dynamical processes . in concert with other recent work ( e.g. , ) ,our results with these three procedures suggest that `` local '' methods that focus on finding communities centered around an exogenously - specified seed node ( or seed set of nodes ) might have better theoretical grounding and more practical utility than other methods for community detection .our `` local '' ( and `` size - resolved '' ) perspective on community structure also yields several other interesting insights . 
by design, it allows us to discern how community structure depends both on the seed node and on the size scales and time scales of a dynamical process running on a network .similar perspectives were discussed in recent work on detecting communities in networks using markov processes , and our approach is in the spirit of research on dynamical systems more generally , as bottlenecks to diffusion and other dynamics depend fundamentally on initial conditions .local information algorithms are also an important approach for many other optimization problems and for practical purposes such as friend recommendation systems in online social networks .moreover , taking a local perspective on community structure is also consistent with the sociological idea of egocentric networks ( and with real - world experience of individuals , such as users of facebook , who experience their personal neighborhood of a social network ) .the local community experienced by a given node should be similarly locally - biased , and we demonstrate this feature quantitatively for several real networks .using our perspective , we also demonstrate subtle yet fundamental differences between different networks : some networks have high - quality communities of different sizes ( especially small ones ) , whereas others do not possess communities of any size that give bottlenecks to diffusion - based dynamics .this is consistent with , and helps explain , prior direct observations of networks in which algorithmically computed communities seemed to have little or no effect on several dynamical processes .more generally and importantly , whether small or large communities are `` better '' with respect to some measure of community quality has significant consequences not only for algorithms that attempt to identify communities but also for the dynamics of processes such as viral propagation and information diffusion .the rest of this paper is organized as follows . because our approach to examining network communities is uncommon in the physics literature, we start in section[sxn : prelim ] with an informal description of our approach .we then introduce ncps in section[sxn : ncp ] . in section[sxn : main - empirical ] , we present our main empirical results on community quality as a function of size , and we provide a detailed comparison of our three community - identification procedures when applied to real networks .this illustrates the three distinct scenarios of community quality versus community size that we described above . in section[sxn : benchmarks ] , we illustrate the behavior of these methods on the well - known lfr benchmark networks that are commonly used to evaluate the performance of community - detection techniques .we find that their ncps have a characteristic shape for a wide range of parameter values and are unable to reproduce the different scenarios that one observes for real networks .we then conclude in section[sxn : conc ] with a discussion of our results . in appendix[sxn : expanders ] , we provide a brief discussion of expander graphs ( a.k.a .`` expanders '' ) . 
in appendix[sxn : measures ] , we describe the three specific procedures that we use to identify communities in detail .appendices[sxn : ncp - mov ] and[sxn : ncp - ego ] contain empirical results for the two methods that we mentioned but did not discuss in detail in section[sxn : main - empirical ] .in this section , we describe some background and preliminaries that provide the framework that we use to interpret our results on size - resolved community structure in sections[sxn : main - empirical ] and[sxn : benchmarks ] .we start in section[sxn : prelim - notation ] by defining the notation that we use throughout this paper , and we continue in section[sxn : prelim - looklike ] with a brief discussion of possible ways that a network might `` look like '' if one is interested in its meso - scale or large - scale structure . to convey the basic idea of our approach ,much of our discussion in this section is informal . in later sections, we will make these ideas more precise .we represent each of the networks that we study as an undirected graph .we consider both weighted and unweighted graphs .let be a connected and undirected graph with node set , edge set , and set of weights on the edges .let denote the number of nodes , and let denote the number of edges .the edge has weight .let denote the ( weighted ) adjacency matrix of .its components are if and otherwise .the matrix denotes the diagonal degree matrix of .its components are , where is called the `` strength '' or `` weighted degree '' of node .the combinatorial laplacian of is , and the normalized laplacian of is .a path in is a sequence of edges , such that for .the length of path is , where is the length of the edge that connects nodes and . for an unweighted network , for all edges . for weighted networks, is a measure of closeness of the tie between nodes and , a common choice for is .let be the set of all paths between and .the geodesic distance between nodes and is the length of a shortest path between and .the -neighborhood of is the set of all nodes that are at most a distance away from , and the -neighborhood of a set of nodes is . before examining real networks , we start with the following question : what are possible ways that a network can `` look like , '' very roughly if one `` squints '' at it ?this question is admittedly vague , but the answer to it governs how small - scale network structure `` interacts '' with large - scale network structure , and it informs researchers intuitions and the design decisions that they make when analyzing networks ( and when developing methods to analyze networks ) . as an example of this idea , it should be intuitively clear that if one `` squints '' at the nearest - neighbor network ( i.e. , the uniform lattice of pairs of integers on the euclidean plane ) , then they `` look like '' the euclidean plane .distances are approximately preserved , and up to boundary conditions and discretization effects , dynamical processes on one approximate the analogous dynamic processes on the other . in the fields of geometric group theory and coarse geometry ,this intuitive connection between and has been made precise using the notions of coarse embeddings and quasi - isometries . + establishing quasi - isometric relationships on networks that are expander graphs ( a.k.a .`` expanders '' ; see appendix[sxn : expanders ] ) is technically brittle .thus , for the present informal discussion , we rely on a simper notion. 
suppose that we are interested in the `` best fit '' of the adjacency matrix to a block matrix : where , where the `` 1-vector '' is a column vector of the appropriate dimension that contains a in every entry and .thus , each block in has uniform values for all its elements , and larger values of correspond to stronger interactions between nodes .the structure of is then determined based on the relative sizes of , , and .the various relative sizes of these three scalars have a strong bearing on the structure of the network associated with .we illustrate several examples in fig.[fig : stylized ] . for the block models that we use for three of its panels, one block has nodes and the second block has nodes , and a node in block is connected to a node in block with probability . ** low - dimensional structure . * in fig.[fig : stylized - hotdog ] , we illustrate the case in which . in this case , each half of the network interacts with itself more densely than it interacts with the other half of the network .this `` hot dog '' or `` pancake '' structure corresponds to the situation in which there are two ( or any number , in the case of networks more generally ) dense communities of nodes that are reasonably well - balanced in the sense that each community has roughly the same number of nodes . in this case , the network embeds relatively well in a one - dimensional , two - dimensional , or other low - dimensional space .spectral clustering or other clustering methods often find meaningful communities in such networks , and one can often readily construct meaningful and interpretable visualizations of network structure .* * core - periphery structure . * in fig.[fig : stylized - coreper ] , we illustrate the case in which .this is an example of a network with a density - based `` core - periphery '' structure . in these cases, there is a core set of nodes that are relatively well - connected amongst themselves as well as to a peripheral set of nodes that interact very little amongst themselves . ** expander or complete graph .* in fig.[fig : stylized - expander ] , we illustrate the case in which .this corresponds to a network with little or no discernible structure .for example , if , then the graph is a clique ( i.e. , the complete graph ) .alternatively , if the graph is a constant - degree expander , then . as discussed in appendix[sxn : expanders ] , constant - degree expanders are the metric spaces that embed least well in low - dimensional euclidean spaces . in terms of the idealized block model in fig.[fig : stylized ] , they `` look like '' complete graphs , and partitioning them would not yield network structure that one should expect to construe as meaningful .informally , they are largely unstructured when viewed at large size scales . * * bipartite structure . * in fig.[fig : stylized - bipartite ] , we illustrate the case in which .this corresponds to a bipartite or nearly - bipartite graph .such networks arise , e.g. , when there are two different types of nodes , such that one type of node connects only to ( or predominantly to ) nodes of the other type .most methods for algorithmic detection of communities have been developed and validated using the intuition that networks have some sort of low - dimensional structure .as an example , consider the infamous zachary karate club network , which we show in fig.[fig : stylized - hotdog ] . 
this well - known benchmark graph , which seems to be an almost obligatory example to discuss in papers that discuss community structure , clearly `` looks like '' it has a nice low - dimensional structure .for example , there is a clearly identifiable left half and right half , and two - dimensional visualizations of the network ( such as that in fig.[fig : stylized - hotdog ] ) highlight that bipartition .indeed , the zachary karate club network possesses well - balanced and ( quoting herbert simon ) `` nearly decomposable '' communities ; and the nodes in each community are more densely connected to nodes in the same community than they are to nodes in the other community .relatedly , reordering the nodes of the zachary karate club appropriately yields an adjacency - matrix representation with an almost block - diagonal structure with two blocks ( as typified by the cartoon in fig.[fig : stylized - hotdog ] ) ; and any reasonable community - detection algorithm should be able to find ( exactly or approximately ) the two communities . another well - known network that ( slightly less obviously ) `` looks like '' it has a low - dimensional structure is a so - called caveman network , which we illustrate later ( in fig.[fig : ncp - cavemangraph ] ) .arguably , a caveman network has many more communities than the zachary karate club , so details such as whether an algorithm `` should '' split it into two or a somewhat larger number of reasonably well - balanced communities might be different than in the zachary karate club network .however , a caveman network also has a natural well - balanced partition that respects intuitive community structure .reasonable two - dimensional visualizations of this network ( such as the one that we present in fig.[fig : ncp - cavemangraph ] ) shed light on that structure ; and any reasonable community - detection algorithm can be adjusted to find ( exactly or approximately ) the expected communities . in this paper , we will demonstrate that most realistic networks do _ not _ `` look like '' these small examples . instead, realistic networks are often poorly - approximated by low - dimensional structures ( e.g. , with a small number of relatively well - balanced communities , each of which is more densely connected internally than it is with the rest of the network ) .realistic networks often include substructures that more closely resemble core - periphery graphs or expander graphs ( see fig.[fig : stylized - coreper ] and fig.[fig : stylized - expander ] ) ; and networks that partition into nice nearly - decomposable communities tend to be the exception rather than typical .recall from section[sxn : intro ] that an ncp measures the quality of the best possible community of a given size as a function of the size of the purported community . in this section ,we provide a brief description of ncps and how we will use it .we start with the definition of conductance and the original conductance - based definition of an ncp from ref . , and we then discuss our extensions of such ideas . for more details on conductance and ncps ,see refs .if is a graph with weighted adjacency matrix , then the `` volume '' between two sets and of nodes ( i.e. 
, ) equals the total weight of edges with one end in and one end in .that is , in this case , the `` volume '' of a set of nodes is in other words , the set volume equals the total weight of edges that are attached to nodes in the set .the volume between a set and its complement has a natural interpretation as the `` surface area '' of the `` boundary '' between and . in this study ,a set is a hypothesized community .informally , the conductance of a set of nodes is the `` surface area '' of that hypothesized community divided by `` volume '' ( i.e. , size ) of that community . from this perspective , studying community structure amounts to an exploration of the isoperimetric structure of . somewhat more formally , the _ conductance of a set of nodes _ is thus , smaller values of conductance correspond to better communities .the _ conductance of a graph _ is the minimum conductance of any subset of nodes : computing the conductance of an arbitrary graph is an intractable problem ( in the sense that the associated decision problem is np - hard ) , but this quantity can be approximated by the second smallest eigenvalue of the normalized laplacian .if the `` surface area to volume '' ( i.e. , isoperimetric ) interpretation captures the notion of a good community as a set of nodes that is connected more densely internally than with the remainder of a network , then computing the solution to eq .leads to the `` best '' ( in this sense ) community of any size in the network . instead of defining a community quality score in terms of the best community of any size , it is useful to define a community quality score in terms of the best community of a given size as a function of the size . to do this , ref . introduced the idea of a _ network community profile ( ncp ) _ as the lower envelope of the conductance values of communities of a given size : an ncp plots a community quality score ( which , as in ref . , we take to be the set conductance of communities ) of the best possible community of size as a function of . clearly , it is also intractable to compute the quantity in eq . exactly .previous work has used spectral - based and flow - based approximation algorithms to approximate it . to gain insight into how to understand an ncp and what it reveals about network structure ,consider fig.[fig : ncp ] .in fig.[fig : ncp - possiblencps ] , we illustrate three possible ways that an ncp can behave . in each case, we are using conductance as a measure of community quality .* * upward - sloping ncp . * in this case , small communities are `` better '' than large communities . * * flat ncp .* in this case , community quality is independent of size .( as illustrated in this figure , the quality tends to be comparably poor for all sizes . ) * * downward - sloping ncp . * in this case , large communities are `` better '' than small communities . for ease of visualization and computational considerations ,we only show ncps for communities up to half of the size of a network .an ncp for very large communities that we do not show in figures as a result of this choice roughly mirrors that for small communities , as the complement of a good small community is a good large community because of the inherent symmetry in conductance ( see eq . 
) .in fig.[fig : ncp - bigncp ] , we show an ncp of a livejournal network from ref .it demonstrates an empirical fact about a wide range of large social and information networks : there exist good small conductance - based communities , but there do not exist any good large conductance - based communities in many such networks .see refs . ) for more empirical evidence that large social and information networks tend not to have large communities with low conductances . on the contrary, fig.[fig : ncp - cavemangraph ] illustrates a small toy network a so - called `` caveman network''formed from several small cliques connected by rewiring one edge from each clique to create a ring .as illustrated by its downward - sloping ncp in fig.[fig : ncp - cavemanncp ] , this network possesses good conductance - based communities , and large communities are better than small ones .one obtains a similar downward - sloping ncp for the zachary karate club network as well as for many other networks for which there exist meaningful visualizations . the wide use of networks that have interpretable visualizations ( such as the zachary karate club and planted partition models with balanced communities ) to help develop and evaluate methods for community detection and other procedures can lead to a strong selection bias when evaluating the quality of those methods .we now consider the relationship between the phenomena illustrated in fig.[fig : ncp ] and the idealized block models of fig.[fig : stylized ] . as a concrete example , fig.[fig : stylized_ncp ] shows the ncps for the example networks in the right panels of fig.[fig : ncp ] .first , note that the best partitions consist roughly of well - balanced communities in the low - dimensional case of figs.[fig : stylized - hotdog ] and[sfig : ncp_karate ] , and the `` lowest '' point on an ncp tends to be for large community sizes .thus , an ncp tends to be downward - sloping .networks with pronounced core - periphery structure i.e . ,networks that `` look like '' the example network in fig.[fig : stylized - coreper]tend to have many good small communities but no equally good or better large communities .this situation arises in many large , extremely sparse networks .the good small communities in such networks are sets of connected nodes in the extremely sparse periphery , and they do not combine to form good , large communities , as they are only connected via a set of core nodes with denser connections than the periphery .thus , an ncp of a network with core - periphery structure tends to be upward - sloping , as illustrated in figs.[fig : stylized - coreper ] and[sfig : ncp_core_periphery ] .however , this observation does not apply to all networks with well - defined density - based core - periphery structure .if the periphery is sufficiently well - connected ( though still much sparser than the core ) , then one no longer observes good , small communities .such networks act like expanders from the perspective of the behavior of random walkers , so they have a flat ncp .one can generate examples of such networks by modifying the parameters of the block - model that we used to generate the example network in fig.[fig : stylized - coreper ] . for a complete graph or a degree - homogeneous expander ( see figs.[fig : stylized - expander ]and[sfig : ncp_expander ] ) , all communities tend to have poor quality , so an ncp is roughly flat .( see appendix[sxn : expanders ] for a discussion of expander graphs . 
) finally , bipartite structure itself does not have any characteristic influence on an ncp .instead , an ncp of a bipartite network reveals other structure present in a network . for the example network in fig.[fig : stylized - bipartite ] , the two types of nodes are connected uniformly at random , so its ncp ( fig.[sfig : ncp_bipartite ] ) has the characteristic flat shape of an expander .it is important to discuss the robustness properties of ncps .these are not obvious a priori , as the ncp is an extremal diagnostic .importantly , though , the qualitative property of being downward - sloping , upward - sloping , or roughly flat is very robust to the removal of nodes and edges , variations in data generation and preprocessing decisions , and similar sources of perturbation .for example , upward - sloping ncps typically have many small communities of good quality , so losing some communities via noise or some other perturbations has little effect on a realistic ncp . naturally , whether a particular set of nodes achieves a local minimum is not robust to such modifications .in addition , one can easily construct pathological networks whose ncps are not robust .it is also important to consider the robustness of a network ncps with respect to the use of conductance versus other measures of community quality .( recall that many other measures have been proposed to capture the criteria that a good community should be densely - connected internally but sparsely connected to the rest of a network . ) indeed , it has been shown that measures that capture both criteria of community quality ( internal density and external sparsity ) behave in a roughly similar manner to conductance - based ncps , whereas measures that capture only one of the two criteria exhibit qualitatively different behavior , typically for rather trivial reasons .although the basic ncp that we have been discussing yields numerous insights about both small - scale and large - scale network structure , it also has important limitations .for example , an ncp gives no information on the number or density of communities with different community quality scores .( this contributes to the robustness properties of ncp with respect to perturbations of a network . ) accordingly , the communities that are revealed by an ncp need not be representative of the majority of communities in a network .however , the extremal features that are revealed by an ncp have important system - level implications for the behavior of dynamical processes on a network : they are responsible for the most severe bottlenecks for associated dynamical processes on networks .another property that is not revealed by an ncp is the internal structure of communities .recall from eq . that the conductance of a community measures how well ( relative to its size ) that it is separated from the remainder of a network , but it does not consider the internal structure of a community ( except for size and edge density ) . in an extreme case ,a community with good conductance might even consist of several disjoint pieces .recent work has addressed how spectral - based approximations to optimizing conductance also approximately optimize measures of internal connectivity .we augment the information from basic ncps with some additional computations . to obtain an indication of a community s internal structure, we compute the internal conductance of the communities that form an ncp . 
the _ internal conductance _ of a community is where is the subgraph of induced by nodes in the community .the internal conductance is equal to the conductance of the best partition into two communities of the network viewed as a graph in isolation . because a good community should be well - separated from the remainder of a network andalso relatively well - connected internally , we expect good communities to have low conductance but high internal conductance .we thus compute the _ conductance ratio _ to quantify this intuition . a good community should have a small conductance ratio , and thus we also plot so - called _ conductance ratio profiles ( crps ) _ to illustrate how conductance ratio depends on community size in networks . in this paper ,we examine the small - scale , medium - scale , and large - scale community structure using conductance - based ncps and crps .we employ three different methods , which we introduce in detail in appendix[sxn : measures ] , for sampling an ncp : one based on local diffusion dynamics ( the aclcut method ) , one based on a local spectral optimization ( the movcut method ) , and one based on geodesic distance from a seed node ( the egonet method ) . in each case , we find communities of different sizes , and we then plot the conductance of the best community for each size as a function of size .an ncp provides a signature of community structure in a network , and we can thereby compare community structure across different networks .this helps one to discern which properties are attributable predominantly to network structure and which are attributable predominantly to choice of algorithms for community detection .our approach of comparing community structures in networks using ncps and crps is very general : one can of course follow a similar procedure with other community - quality diagnostics on the vertical axis , other procedures for community generation , and so on .in this section , we present the results of our empirical evaluation of the small - scale , medium - scale , and large - scale community structure in our example networks .we will examine six empirical networks in depth .they fall into three classes : coauthorship networks , facebook networks , and voting similarity networks . for each class , we consider two networks of two different sizes . * * collaboration graphs . * the two ( unweighted ) coauthorship networks were constructed from papers submitted to the arxiv preprint server in the areas of general relativity and quantum cosmology ( ca - grqc ) and astrophysics ( ca - astroph ) . in each case , two authors are connected by an edge if they coauthored at least one paper , so a paper with authors appears as a -clique ( i.e. , a complete -node subgraph ) in the network .these network data are available as part of the stanford network analysis package ( snap ) , and they were examined previously in refs . . * * facebook graphs . * the two ( unweighted ) facebook networks are anonymized data sets that consist of a snapshot of `` friendship '' ties on one particular day in september 2005 for two united states ( u.s . )universities : harvard ( fb - harvard1 ) and johns hopkins ( fb - johns55 ) .they form a subset of the facebook100 data set from refs . .in addition to the friendship ties , note that we possess node labels for gender and class year as well as numerical identifiers for student or some other ( e.g. , faculty ) status , major , and high school . ** congressional voting graphs . 
* the two ( weighted ) congressional voting networks represent similarities in voting patterns among members of the u.s. house of representatives ( us - house ) and u.s .senate ( us - senate ) .our construction follows prior work .in particular , we represent these two data sets as `` multilayer '' temporal networks .each layer corresponds to a single two - year congress , and edge weights within a layer represent the voting similarity between two legislators during the corresponding congress . in layer , this yields adjacency elements of , where both legislators voted the same way on the bill , if they voted in different ways on that bill , is the number of bills on which both legislators voted during that congress , and the sum is over bills .a tie between the same legislator in consecutive congresses is represented by an interlayer edge with weight .( we use ; the effect of changing has been investigated previously . )we represent each multilayer voting network using a single `` supra - adjacency matrix '' ( see refs . ) in which the different congresses correspond to diagonal blocks and interlayer edges correspond to off - block - diagonal terms in the matrix .note that throughout this paper we treat the congressional voting graphs at the level of this supra - adjacency matrix , without any additional labeling or distinguished treatment of inter- and intra - layer edges ( cf . ) .we chose these three sets of networks because ( as we will see in later sections ) they have _ very _ different properties with respect to their large - scale versus small - scale community structures .we thus emphasize that , with respect to the topic of this paper , these six networks are representative of several broad classes of previously - studied networks : ca - grqc and ca - astroph are representative of the snap networks that were examined previously in refs . ; both fb - harvard1 and fb - johns55 ( aside from a few very small communities in fb - harvard1 ) are representative of the facebook100 networks that were examined previously in refs . ; and us - house and us - senate give examples of networks ( that are larger than the zachary karate club and caveman networks ) on which conventional notions of and algorithms for community detection have been validated successfully . in table[tab : data_summary ] , we provide summary statistics for each of the six networks . we give the numbers of nodes and edges in the largest connected component , the mean degree / strength ( ) , the second - smallest eigenvalue ( ) of the normalized laplacian matrix , and mean clustering coefficient ( ) .we use the local clustering coefficient , where , which reduces to the usual expression for local clustering coefficients in unweighted networks .the high values for mean clustering coefficient in both the u.s .congress and coauthorship networks are unsurprising , given how those networks have been constructed .however , the latter is noteworthy , as the coauthorship networks are much sparser than the facebook networks . [ cols="<,>,>,>,>,>,^,<",options="header " , ] in tables[spearman_comp_gr][spearman_comp_senate ] , we show the results of our calculations of spearman rank correlations . 
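the quantities compared in these tables are truncated , approximate personalized pagerank ( ppr ) vectors . as a rough illustration of the kind of computation involved ( it is not the implementation used to produce the results reported here ) , the following sketch applies an andersen - chung - lang - style `` push '' approximation of a ppr vector , with teleportation parameter alpha and truncation parameter eps , followed by a conductance sweep over the degree - normalised vector . the toy network ( the zachary karate club , via the networkx library ) , the seed node and the parameter values are our own choices .

```python
import networkx as nx

def approximate_ppr(G, seed, alpha=0.15, eps=1e-4):
    """Andersen-Chung-Lang-style push approximation of a personalized PageRank vector."""
    p = {u: 0.0 for u in G}
    r = {u: 0.0 for u in G}
    r[seed] = 1.0
    queue = [seed]
    while queue:
        u = queue.pop()
        if r[u] < eps * G.degree(u):      # stale entry: residual already below threshold
            continue
        p[u] += alpha * r[u]
        leftover = (1.0 - alpha) * r[u]
        r[u] = leftover / 2.0
        if r[u] >= eps * G.degree(u):
            queue.append(u)
        share = leftover / (2.0 * G.degree(u))
        for v in G.neighbors(u):
            if r[v] < eps * G.degree(v) <= r[v] + share:   # v crosses the threshold now
                queue.append(v)
            r[v] += share
    return p

def conductance(G, S):
    """cut(S, complement) / min(vol(S), vol(complement)); simple, unoptimized version."""
    S = set(S)
    cut = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    vol = sum(G.degree(u) for u in S)
    return cut / min(vol, 2 * G.number_of_edges() - vol)

def sweep_cut(G, p):
    """Best-conductance prefix of nodes ordered by degree-normalised PPR value."""
    order = sorted((u for u in G if p[u] > 0),
                   key=lambda u: p[u] / G.degree(u), reverse=True)
    best, best_set = float("inf"), None
    for k in range(1, len(order)):
        phi = conductance(G, order[:k])
        if phi < best:
            best, best_set = phi, order[:k]
    return best, best_set

G = nx.karate_club_graph()
ppr = approximate_ppr(G, seed=0, alpha=0.15, eps=1e-5)
phi, community = sweep_cut(G, ppr)
print(f"local community around node 0: {sorted(community)}, conductance {phi:.3f}")
```

the best sweep set returned for a seed node is exactly the kind of locally - biased , conductance - scored community that enters the ncps discussed in this paper .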
for each of the three networks , we select 50 seed nodes by sampling uniformly without replacement .we then compute ppr vectors for these seed nodes using the aclcut and movcut method for different values of the truncation parameter and teleportation parameter , and we also compute the egorank vector for each of the seed nodes .recall that _smaller _ values of correspond to _ more local _ versions of the procedures , but that _ larger _ values of correspond to _ more local _ versions of the procedures .the aclcut and movcut methods give very similar results for most of the 50 seed nodes in our sample , although ( as discussed below ) some seed nodes do yield noticeable differences .the two methods give the most similar results for fb - johns55 ( mean : 0.92 , minimum : 0.43 ) , whereas we find larger deviations in both ca - grqc ( mean : 0.85 , minimum : ) and us - senate ( mean : 0.86 , minimum : ) .note that we calculated the mean , maximum , and minimum over all sampled seed nodes and parameter values .interestingly , the larger deviations between the two methods for ca - grqc and us - senate occur at different values of the truncation parameter . for ca - grqc( and , to a lesser extent , for fb - johns55 ) , we obtain the largest deviations for smaller values ( e.g. , ) . for us - senate , however , we obtain the largest deviations for .see the bold values in tables[spearman_comp_gr][spearman_comp_senate ] .this is consistent with the very different isoperimetric properties of these three networks , as revealed by their ncps , as well as with well - known connections between conductance and random walks .there are two potential causes for the differences between the aclcut and movcut method .first , there is a truncation effect , governed by the parameter , in approximating the ppr vector using the aclcut method .as becomes smaller , the approximation in aclcut becomes more accurate and this effect diminishes .second , the two methods differ in the precise way that they use a seed vector to represent a seed node . recall that the aclcut method uses an indicator vector to represent a seed node ; thus , we use whenever is a seed node , and we set all other entries in that vector to .in contrast , the movcut method projects the indicator vector onto the orthogonal complement of the strength vector to ensure that ( see appendix[sxn : measures ] ) .this effect decreases as .the larger deviations between the two methods occur for smaller values of in ca - grqc and fb - johns55 ; for these , the truncation effect is small , suggesting that the different way of representing a seed node is partially responsible for the difference between the results of the two methods for these networks . for larger values of ( in particular , ) , where the support of the approximate ppr vector from the aclcut method is small ,the behavior of the two methods is very similar .consequently , the differences in the choice of seed vector become more important for nodes that are `` far away '' from the seed node , in the sense that they are rarely visited by the personalized pagerank dynamics that underlie these methods . as a result ,the `` local ncps '' for the two methods in figs.[fig : local_ncp_gr ] and[fig : local_ncp_johns ] are largely identical for small community sizes but diverge for large community sizes .( we use the term _ local ncp _ to refer to an ncp that we computed using only a single seed node without optimizing over the results from multiple seed choices ; see ref . 
for details on the construction of local ncps . ) for us - senate , the two methods behave almost identically for small ( see table[spearman_comp_senate ] ) , so we conclude that the different ways of representing a seed node have only a small effect on this network . however , the truncation effect is more pronounced in this network compared with ca - grqc or fb - johns55 .this feature manifests as larger deviations between aclcut and movcut in table[spearman_comp_senate ] for large and small ( i.e. , where the truncation has the strongest impact ) .the discrepancy occurs because the aclcut method initially pushes a large amount of probability to the interlayer neighbors of the seed node ( i.e. , to the same senator in different congresses ) .this probability does not diffuse to other nodes for sufficiently large values of . in figs.[fig: local_gr][fig : local_senate ] , we illustrate the results from tables[spearman_comp_gr][spearman_comp_senate ] . in these figures, we plot the local ncps for ca - grqc , fb - johns55 , and us - senate for the seed nodes ( from the sample of 50 ) that yield the highest and lowest mean spearman rank correlation between the aclcut and movcut methods . in these figures , we also include visualizations of example communities that we obtained from the aclcut and movcut methods using a kamada - kawai - like spring - embedding visualization of the -ego - nets of these seed nodes . from the visualizations of the local communities , it seems for ca - grqc ( see fig.[fig : local_gr ] ) and fb - johns55 ( see fig.[fig : local_johns55 ] ) that nodes included in local communities obtained from aclcut tend to be closer in geodesic distance than those obtained from movcut to the seed node .( to see this , observe that red nodes tend to be larger than light blue nodes in the visualization of the -neighborhoods . )if this observation holds more generally and is not just an artifact of the particular communities that we show in figs.[fig : local_gr ] and[fig : local_johns55 ] , then we should obtain higher spearman rank correlations between aclcut and egonet than between movcut and egonet .indeed , tables[spearman_comp_gr][spearman_comp_senate ] consistently show this effect for all choices of and and for all three networks .note that this effect is also present in us - senate , though it is less prominent in its -neighborhood visualization than is the case for the other two networks .figures[fig : local_gr][fig : local_senate ] also reveal that the three networks look very different from a local perspective . for fb - johns55 ( see fig.[fig : local_johns55 ] ) , both seed nodes that we considered result in reaching a large fraction of all nodes after just 2 steps .this is consistent with known properties of the full facebook graph ( circa 2012 ) of individuals connected by reciprocal `` friendships . ''for example , the mean geodesic distance between pairs of nodes of the facebook graph is very small : it was recently estimated by facebook s data team and their collaborators to be about 4.74 . additionally ,as reported by facebook s data team , one can view facebook as a collection of ego networks that have been patched together into a network whose global structure is sparse ( and such structure is an important motivation for the locally - biased notion of community structure that we advocate in this paper ) . 
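The k-neighborhoods (k-ego-nets) compared in these visualizations can be extracted with a breadth-first search truncated at depth k. Below is a minimal sketch using networkx's built-in helper; the example graph and seed node are only for illustration.

```python
import networkx as nx

def k_ego_net(G, seed, k):
    """Subgraph induced by all nodes within geodesic distance k of the seed node."""
    # ego_graph performs a breadth-first search out to the given radius
    # (edge weights are ignored unless a distance attribute is specified).
    return nx.ego_graph(G, seed, radius=k)

# Example: grow neighborhoods around a seed until they saturate.
G = nx.karate_club_graph()
for k in range(1, 5):
    H = k_ego_net(G, seed=0, k=k)
    print(k, H.number_of_nodes(), H.number_of_edges())
```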
for ca - grqc, we obtain very different neighborhoods starting from our two different seed nodes .the node that exhibits the largest difference in behavior for both the aclcut and movcut methods appears to be better connected in the network in the sense that the -neighborhood ( for any until saturation occurs ) is much larger than that of the node that showed the smallest difference .( that is , it is more in the `` core '' than in the `` periphery '' of the nested core - periphery structure of refs .we observe a similar phenomenon for fb - johns55 and us - senate .furthermore , its 1-ego - net and 2-ego - net are highly clustered , in the sense that they contain many closed triangles . for the seed node that showed the smallest difference between the aclcut and movcut methods ,we need to consider the 6-ego - net ( which has 20 nodes ) to obtain a network of similar size to the 2-ego - net for the seed node with the largest difference ( which has 15 nodes ) . in the case of the seed node in our sample that showed the least difference between the two methods , even the 6-ego - net appears rather tree - like ; it contains few closed triangles and no larger cliques . for us - senate , the 1-neighborhood of any seed node contains only the node itself and those corresponding to the same senator in different congresses for nodes from interlayer edges to appear in the -neighborhood .hence , an increasing number of senators from the same congress can appear in a -neighborhood that does not contain any nodes from interlayer edges . ] .as one begins to consider nodes that are further away , one first reaches corresponding senators in other congresses before reaching other senators with similar voting patterns from the same congress .this behavior of the egonet method contrasts with the ( pagerank - based ) aclcut and movcut methods , which tend to initially select all senators from one congress before reaching senators from other congresses .+ + + + + + from the perspective of the locally - biased community - detection methods that we use in this paper , one can view intermediate - sized ( i.e. , meso - scale ) structures in networks as arising from collections of local features via overlaps of local communities that one obtains algorithmically using locally - biased dynamics such as those that we consider .such local features depend not only on the network adjacency matrix but also on the dynamical process under study , the initial seed(s ) from which one is viewing a network , and the locality parameters of the method ( which corresponds to the dynamical process ) that determine how locally one is viewing the network .although a full discussion of the relationship between local structure and meso - scale structure and global structures is beyond the scope of this paper , here we provide an initial example of such results . to try to visualize meso - scale and global network structures that we obtain from the local communities that we identify, we define an association matrix ( where is again the number of nodes in the network ) , which encodes pairwise relations between nodes based on a sample of local communities . for a given sample of local communities ( obtained , e.g. 
, by running a given method with many seed nodes and values of a locality parameter ) ,the entries of the association matrix are given by the number of times that a pair of nodes appear together in a local community , normalized by the number of times either of them appeared .that is , the elements of the association matrix are our procedure for extracting global network structure from a sampled set of communities is similar in spirit to computing association ( or `` co - classification '' ) matrices that have been constructed from sampling a landscape of the modularity objective function , and one can in principle analyze these matrices further using the same methods .the additional normalization in our definition of association matrices is necessary to correct for the oversampling of large communities relative to small communities ( which results from sampling nodes uniformly at random ) . at first glance ,association matrices computed by sampling a modularity landscape appear to reveal much clearer community structure in these networks than what we obtain by sampling local communities .however , this is largely an artifact of the well - known resolution limit of modularity optimization .one can mitigate this effect by using one of the multi - resolution generalizations of modularity to sample the modularity landscape across different values of the resolution parameter .this yields association matrices that are similar in appearance to the ones that we obtain by sampling local communities . to visualize the association matrices in a way that reveals global network structure ,it is important to find a good node order .we found the sorting method suggested in ref . to be impractically slow for the networks that we study .instead , we sort the nodes based on the optimal leaf ordering for the average - linkage hierarchical clustering tree of the association matrix .( for us - senate , we do this procedure within a given congress , and we then use the natural temporal ordering to define the inter - congressional ordering . ) in addition , to see small - scale structure using samples obtained from movcut , we use a community - size parameter that limits the volume of the resulting community based on the desired correlation with the seed vector . in this paper, we use . see ref . for details .we summarize our results in figs.[fig : assoc][fig : senate_global_vis ] . in fig.[fig :assoc ] , we show the result of applying this procedure with communities that we sampled using the aclcut , movcut , and egonet methods . in each case , we keep only the best conductance community for each sampled ranking vector .the most obvious feature of the visualizations in fig.[fig : assoc ] is that except for us - senate , for which there is a natural large - scale global structure defined by the one - dimensional temporal ordering the visualizations are much more complicated than any of the idealized structures in fig.[fig : stylized ] ( which suggests that the visualizations might be revealing at least as much about the inner workings of the visualization algorithm as about the networks being visualized ) .the structures in fig.[fig : stylized ] are trivially interpretable , whereas those in real networks ( e.g. , as illustrated in fig.[fig : assoc ] ) are extremely messy and very difficult to interpret . in the paragraphs below, we will discuss the structural features in fig.[fig : assoc ] in more detail . 
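A sketch of the association-matrix construction and node ordering described above follows. The normalization written here (the number of sampled communities containing both nodes, divided by the number containing either node) is our reading of the definition in the text, and the ordering uses SciPy's average-linkage hierarchical clustering with optimal leaf ordering.

```python
import numpy as np
from scipy.cluster import hierarchy
from scipy.spatial.distance import squareform

def association_matrix(communities, n):
    """Pairwise association from a sample of local communities.

    communities: list of lists of node indices.  P[i, j] counts how often i and j
    appear together in a sampled community, normalized by how often either of
    them appears (our reading of the definition given in the text).
    """
    together = np.zeros((n, n))
    counts = np.zeros(n)
    for comm in communities:
        idx = np.asarray(sorted(set(comm)))
        counts[idx] += 1
        together[np.ix_(idx, idx)] += 1
    either = counts[:, None] + counts[None, :] - together
    P = np.divide(together, either, out=np.zeros((n, n)), where=either > 0)
    np.fill_diagonal(P, 1.0)
    return P

def association_node_order(P):
    """Node ordering from average-linkage clustering with optimal leaf ordering."""
    D = 1.0 - P                                   # turn associations into distances
    np.fill_diagonal(D, 0.0)
    condensed = squareform(D, checks=False)
    Z = hierarchy.linkage(condensed, method="average")
    Z = hierarchy.optimal_leaf_ordering(Z, condensed)
    return hierarchy.leaves_list(Z)
```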
for ca - grqc( see fig.[fig : gr_global_vis ] as well as fig.[fig : assoc ] ) , we observe many small communities that are composed of about 10100 nodes .these communities , which correspond to the dark red blocks along the diagonal ( see the inset in fig.[fig : gr_assoc_acl ] ) , are responsible for the dips in the ncps ( see figs .[ ncp_acl_small ] , [ ncp_mov_small ] , and [ fig : ncp_ego_small ] ) for this network .however , these small communities do not combine to form large communities , which would result in large diagonal blocks in the association matrices .instead , the small communities appear to amalgamate into a single large block ( or `` core '' ) . in fig.[fig : gr_global_vis ] , we aim to make this observation more intuitive by showing how the local communities for three different seed nodes spread through the network as we change the resolution , i.e. , the locality bias parameter .we construct the weighted network shown in fig.[fig : gr_global_vis ] from the unweighted ca - grqc network using the association matrix for the aclcut method ( fig.[fig : gr_assoc_acl ] ) .we assign each edge a weight based on the corresponding entry of the association matrix , i.e. , if and otherwise . based on our earlier results with the slowly - increasing ncp , as well as previous results in refs . , we interpret these features shown in fig.[fig : gr_global_vis ] in terms of a nested core - periphery structure , in which the network periphery consists of relatively good communities and the core consists of relatively densely connected nodes . for fb - johns55 ( see fig.[fig : johns55_global_vis ] as well as fig.[fig : assoc ] ), we observe two relatively large communities , which correspond to the two large diagonal blocks in figs.[fig : johns55_assoc_acl ] and[fig : johns55_assoc_mov ] and which underlie the dips in the ncps in figs.[ncp_acl_small ] and[ncp_mov_small ] .note , however , from the scale of the vertical axis in figs.[ncp_acl_small ] and[ncp_mov_small ] that the community quality of these communities is very low , so one should actually construe the visualization in figs.[fig : johns55_assoc_acl ] and[fig : johns55_assoc_mov ] as highlighting a low - quality community that is only marginally better than the other low - quality communities that are present in that network . based on this visualization as well as our earlier results, the remainder of fb - johns55 does not appear to have much community structure ( at least based on using the conductance diagnostic to measure internal versus external connectivity ) .however , there do appear to be some remnants of highly overlapping communities that one could potentially identify using other methods ( e.g. , the one in ref . ) . the egonet method( see fig.[fig : johns55_assoc_ego ] ) is unable to resolve not only these small communities but also the larger low - quality communities .figure[fig : johns55_global_vis ] shows how the local communities for two seed nodes that do not belong to one of the two large communities slowly spread and eventually merge ( blue and yellow nodes ) , whereas the red community ( which corresponds to the smaller of the two communities ) is quickly identified and remains separate from the other communities . 
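The weighted networks used in these spreading visualizations assign each edge of the original, unweighted graph the corresponding association-matrix entry (and, implicitly, weight zero to node pairs that are not edges). A minimal sketch, assuming the nodes are integer indices aligned with the rows of the association matrix P:

```python
import networkx as nx

def reweight_by_association(G, P):
    """Copy of G in which each existing edge (i, j) is given weight P[i, j]."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_weighted_edges_from((i, j, float(P[i, j])) for i, j in G.edges())
    return H
```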
for us - senate ( see fig.[fig : senate_global_vis ] as well as fig.[fig : assoc ] ) , we clearly observe the signature of temporally - based community structure at a large size scale . see figs.[fig : senate_assoc_acl ] , [ fig : senate_assoc_mov ] , and [ fig : senate_assoc_ego ] . using aclcut and movcut , we also obtain partitions at the scale of individual congresses ( see the insets in figs.[fig : senate_assoc_acl ] and [ fig : senate_assoc_mov ] ) , which sometimes split into two or occasionally three individual communities . these latter partitions have been discussed previously in terms of polarization between parties . because we fixed the temporal order of congresses for us - senate and only sort senators within the same congress , this visualization reveals communities within each senate as well as more temporally - disparate communities . in particular , for the egonet method , this ordering introduces a checkerboard pattern that corresponds to temporal communities that contain senators from several congresses . figure [ fig : senate_global_vis ] clearly shows that this temporal structure also dominates the behavior of local communities for individual seed nodes . an important point from these visualizations is that , for both ca - grqc and fb - johns55 , the meso - scale and large - scale structures that result from the superposition of local communities do _ not _ correspond particularly well to intuitive good - conductance communities . relatedly , they also do _ not _ correspond particularly well to an intuitive low - dimensional structure or a nearly decomposable block - diagonal matrix of community assignments ( see our illustration in fig.[fig : stylized - hotdog ] ) , one or both of which are often assumed ( typically implicitly ) by many global methods for algorithmically detecting communities in networks . of the networks that we investigate , only the temporal structure in us - senate ( as well as in us - house , which is a related temporally - dominant network ) closely resembles such an idealization . this is reflected clearly in its downward - sloping ncp ( see figs.[ncp_acl_small ] , [ ncp_mov_small ] , and [ fig : ncp_ego_small ] ) and in the visualizations in fig.[fig : assoc ] . instead , in the other ( e.g. , collaboration , facebook , and many other realistic ) networks , community structure as a function of size is much more subtle and complicated . fortunately , our locally - biased perspective provides one means to try to resolve such intricacy . by averaging over results from different seed nodes , a local approach like ours leads naturally to the presence of strongly overlapping communities . overlapping community structure has now been studied for several years , and recent observations continue to shed new light on the ubiquity of community overlap . overlap of communities in networks is a pervasive phenomenon ; and our expectation is that most large realistic networks have communities with significant overlap , rather than merely a small amount of overlap that would amount to a small perturbation of the idealized , nearly decomposable communities in fig.[fig : stylized - hotdog ] . additionally , such overlaps imply that larger communities tend to have lower quality in terms of their internal versus external connectivity ( i.e.

, in terms of how much they resemble the intuitive communities that many researchers know and love ) than smaller communities in agreement with our empirical results on both the collaboration networks and facebook networks , but in strong disagreement with popular intuition . in these latter cases , recent work that fits related networks with upward - sloping ncps to hierarchical kronecker graphs resulted in parameters that are consistent with the core - periphery structure that we illustrated in fig.[fig : stylized - coreper ] .synthetic benchmark networks with a known , planted community structure can be helpful for validating and gaining a better understanding of the behavior of community - detection algorithms .for such an approach to be optimally useful , it is desirable for the synthetic benchmarks to reproduce relevant features of real networks with community structure ; and it is challenging to develop good benchmarks that reproduce community structure and other structural properties of medium - sized and larger realistic networks . an extremely popular and in some ways useful family of benchmark networks that aims to reproduce some features of real networks are the so - called _ lfr ( lancichinetti - fortunato - radicchi ) networks _ . by design ,lfr networks have power - law degree distributions as well as power - law community - size distributions , they are unweighted , and they have non - overlapping planted communities . motivated by our empirical results on networks constructed from real data , we also apply our methods to lfr networks to test the extent to which they are able to reproduce the three classes of ncp behavior ( upward - sloping , flat , and downward - sloping ) that we have observed with real networks . to parametrize the family of lfr networks , we specify its power - law degree distribution using its exponent , mean degree , and maximum degree . similarly , we specify its power - law community size distribution using its exponent , minimum community size , and maximum community size , with the additional constraint that the sum of community sizes should equal the size of the network . furthermore , we specify the strength of community memberships using a mixing parameter , where each node shares a fraction of its edges with nodes in its own community .a simple calculation shows that this definition of the mixing parameter implies that each community in the planted partition has conductance ( up to rounding effects ) . to construct a network with these parameters, we sample degrees from the degree distribution and sample community sizes from the community size distribution .we then assign nodes to communities uniformly at random , with the constraint that a node can not be assigned to a community that is too small for the node to have the correct mixing - parameter value .we then construct inter - community and intra - community edges separately by connecting the corresponding stubs ( i.e. , ends of edges ) uniformly at random .we use the implementation by lancichinetti to generate lfr networks . in fig.[fig :benchmarks ] , we show representative ncps for lfr networks for three choices of parameters for the degree distribution and community - size distribution that have been used previously to benchmark community - detection algorithms .( we generated the results presented in fig.[fig : benchmarks ] using the aclcut method , but we obtain nearly identical ncps using the movcut method . 
) the three subfigures demonstrate that all three parameter choices yield networks with similar ncps . in particular , we observe that above a certain critical size the best communities have comparable quality , as a function of increasing size . depending on the particular parameter values , this can be of similar quality to or somewhat better than that which would be obtained by , e.g. , a vanilla ( not extremely sparse ) erdős - rényi ( er ) random graph , across all larger size scales . that is , above the critical size , the ncp is approximately flat . increasing the topological mixing parameter in the lfr network generative mechanism at first shifts the entire ncp upwards because the number of inter - community edges increases . for sufficiently large values of the mixing parameter , it levels off to the characteristic flat shape for an ncp of a network generated from the configuration model of random graphs . importantly , the behavior for the lfr benchmark networks from ref . that we illustrate in fig.[fig : benchmarks ] does _ not _ resemble the ncps for any of the real - world networks in either the present paper or in ref . in addition , we have been unable to find parameter values for which the qualitative properties of realistic ncps ( in particular , a relatively gradually upward - sloping ncp ) are reproduced , which suggests that the community structure generated by the lfr benchmarks is _ not _ realistic in terms of its size - resolved properties . to verify that this behavior is not an artifact of the particular choices of parameters shown in fig.[fig : benchmarks ] , we sampled sets of parameters uniformly at random over ranges of the degree - distribution parameters , the community - size parameters , and the mixing parameter . the aggregate trends of the ncps for the lfr benchmark networks with the different parameters we sample are similar to and consistent with the results shown in fig.[fig : benchmarks ] . hence , although the lfr benchmark networks are useful as tests for community - detection techniques , our calculations suggest that they are unable to reproduce a fundamental feature of many real networks with respect to variation in community quality ( and , in particular , worsening community quality ) as a function of increasing community size . based on our empirical observations , our locally - biased perspective on community detection suggests a natural approach to determine whether synthetic benchmarks possess small - scale , medium - scale , and large - scale community structure that resembles that of large realistic networks : namely , a family of synthetic benchmark networks ought to include parameter values that generate networks with ( robust ) upward - sloping , flat , and downward - sloping ncps ( as observed in figs.[fig : ncp - possiblencps ] and [ ncp_acl_small ] ) . in this paper , we have conducted a thorough investigation of community quality as a function of community size in a suite of realistic networks , and we have reached several conclusions with important implications for the investigation of realistic medium - sized to large - scale networks . our results build on previous work on using network community profiles ( ncps ) to study large - scale networks .
in this paper , we have employed a wider class of community - identification procedures , and we have discovered a wider class of community - like behaviors ( as a function of community size ) in realistic networks than what had been reported previously in the literature .in addition , using ncps , we have discovered that the popular lfr synthetic benchmark networks , which are often used to validate community - detection algorithms and which are the most realistic synthetic benchmark networks that have been produced to test methods for community detection exhibit behavior that is markedly different from many realistic networks .our result thus underscores the importance of developing realistic benchmark graphs whose ncps are qualitatively similar to those of real networks . taken together ,our empirical results yield a much better understanding of realistic community structure in large realistic networks than was previously available , and they provide promising directions for future work .more generally , because our approach for comparing community structures in networks ( using ncps and conductance ratio profiles ) is very general e.g ., one can follow an analogous procedure with other community - quality diagnostics , other procedures for community generation , etc.our locally - biased and size - resolved methodology is an effective way to investigate size - resolved meso - scale network structures much more generally .the main conclusion of our work is that community structure in real networks is much more intricate than what is suggested by the block - diagonal assumption that is ( either implicitly or explicitly ) made by most community - detection methods ( including ones that allow overlapping communities ) and when using the synthetic benchmark networks that have been developed to test those methods .community structure interplays with other meso - scale features , such as core - periphery structure , and investigating only community structure without consideration of other structures can lead to misleading results . a local perspective on community detection , like the one that we have advocated in the present paper , allows pervasive community overlap in a natural way which is an important feature to capture when considering real social networks .additionally , the large - scale consensus community structure that we obtain subsequently by `` pasting together '' local communities is not constrained to resemble a global block - diagonal structure .this is a key consideration in the study of meso - scale structures in real networks .although most algorithmic methods for community detection take a different approach from ours , the observation that network community structure depends not only on the network structure per se but also on the dynamical processes that take place on a network and the initial conditions ( i.e. , seed node or nodes ) for those processes , is rather traditional in many ways. 
recall , for example , granovetter s observation that a node with many weak ties is ideally suited to initialize a successful social contagion process .our perspective also meshes better than global ones with real - life experience in our own networks .both of these observations underscore our point that whether particular network structures form bottlenecks for a dynamical process depends not only on the process itself but also on the initial conditions of that process .more generally , one might hope that our size - resolved and locally - biased perspective on community detection can be used to help develop new diagnostics that complement widely - used and intuitive concepts such as closeness centrality , betweenness centrality , and the many other existing global notions .these will be of particular interest for investigating large networks or even modestly - sized networks such as those that we have considered where traditional algorithmic and visualization methods have serious difficulties . because the study of meso - scale structure in networks is important for understanding how local and small - scale properties of a network interact with global or large - scale properties, we expect that taking a locally - biased perspective on community detection and related problems will yield interesting and novel insights on these and related questions .in this section , we provide a brief introduction to the concept of an _ expander graph _ ( or , more simply , an _ expander _ ) .essentially , expanders are graphs that are very well - connected and thus do not have any good communities ( when measured with respect to diagnostics such as conductance ) . because our empirical results indicate that many large social and information networks are expanders at least when viewed at large size - scales it is useful to review basic properties about expander graphs . although most of the technical aspects of expander graphs are beyond the scope of this paper , ref . provides an excellent overview of this topic .let be a graph , which we assume for simplicity is undirected and unweighted .for the moment , we assume that all nodes have the same degree ( i.e. , is -regular ) . for , the set of edges from to is then in this case , the number of nodes in is a natural measure of the size of .additionally , the quantity , which indicates the number of edges that cross between and , is a natural measure of the size of the boundary between and .we also define the _ edge expansion of a set of nodes _ as in which case the _ edge expansion of a graph _ is the minimum edge expansion of any subset ( of size no greater than ) of nodes : a sequence of -regular graphs is a _ family of expander graphs _ if there exists an such that for all .informally , a given graph is an expander if its edge expansion is large . as reviewed in ref . , one can view expanders from several complementary viewpoints . from a combinatorial perspective , expanders are graphs that are highly connected in the sense that one has to sever many edges to disconnect a large part of an expander graph . from a geometric perspective , this disconnection difficulty implies that every set of nodes has a relatively very large boundary . from a probabilistic perspective , expanders are graphs for which the natural random - walk process converges to its limiting distribution as rapidly as possible . 
finally , from an algebraic perspective , expanders are graphs in which the first nontrivial eigenvalue of the laplacian operator is bounded away from .( because we are talking here about -regular graphs , note that this statement holds for both the combinatorial laplacian and the normalized laplacian . )in addition , constant - degree ( i.e. , -regular , for some fixed value of ) expanders are the metric spaces that ( in a very precise and strong sense ) embed least well in low - dimensional spaces ( such as those discussed informally in section[sxn : prelim - looklike ] ) .all of these interpretations imply that smaller values of expansion correspond more closely to the intuitive notion of better communities ( whereas larger values of expansion correspond , by definition , to better expanders . ) note the similarities between eq . andeq . , which define expansion , with eq . and eq . , which define conductance .these equations make it clear that the difference between expansion and conductance simply amounts to a different notion of the size ( or volume ) of sets of nodes and the size of the boundary ( or surface area ) between a set of nodes and its complement .this difference is inconsequential for -regular graphs .however , because of the deep connections between expansion and rapidly - mixing random walks , the latter notion ( i.e. , conductance ) is much more natural for graphs with substantial degree heterogeneity .the interpretation of failing to embed well in low - dimensional spaces ( like lines or planes ) is not as extremal in the case of conductance and degree - heterogeneous graphs as it is in the case of expansion and degree - homogeneous graphs ; but the interpretations of being well - connected , failing to provide bottlenecks to random walks , etc .all hold for conductance and degree - heterogeneous graphs such as those that we consider in the main text of the present paper . accordingly, it is insightful to interpret our empirical results on small - scale versus large - scale structures in networks should be in light of known facts about expanders and expander - like graphs .in this section , we describe in more detail how we algorithmically identify possible communities in graphs . because we are interested in local properties and how they relate to meso - scale and global properties , we take an operational approach and view communities as the output of various dynamical processes ( e.g. , diffusions or geodesic hops ) , and we discuss the relationship between the output of those procedures to well - defined optimization problems .the idea of using dynamics on a network has been exploited successfully by many methods for finding `` traditional '' communities ( of densely connected nodes ) as well as for finding sets of nodes that are related to each other in other ways . in this paper, we build on the idea that random walks and related diffusion - based dynamics , as well as other types of local dynamics ( e.g. , ones , like geodesic hops , that depend on ideas based on egocentric networks ) , should get `` trapped '' in good communities .in particular , we consider the following three dynamical methods for community identification . in this procedure , we consider a random walk that starts at a given seed node and runs for some small number of steps .we take advantage of the idea that if a random walk starts inside a good community and takes only a small number of steps , then it should become trapped inside that community . 
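A small illustration of this trapping intuition: start a short lazy random walk at a seed node and measure how much probability mass remains inside a candidate set after a few steps. The example graph, seed, set, and number of steps below are arbitrary and only for illustration (and the sketch assumes a connected graph with no isolated nodes); the PPR-based procedure described next can be viewed as exploiting the same effect, with teleportation playing the role of the walk length.

```python
import numpy as np
import networkx as nx

def retained_mass(G, seed, S, steps):
    """Probability that a lazy random walk started at `seed` is inside S after `steps` steps."""
    nodes = list(G.nodes())
    index = {v: k for k, v in enumerate(nodes)}
    A = nx.to_numpy_array(G, nodelist=nodes)
    deg = A.sum(axis=1)                                  # assumes no isolated nodes
    W = 0.5 * (np.eye(len(nodes)) + A / deg[:, None])    # lazy walk: stay put with prob. 1/2
    p = np.zeros(len(nodes))
    p[index[seed]] = 1.0
    for _ in range(steps):
        p = p @ W
    return p[[index[v] for v in S]].sum()

# Example on a small graph with a known two-faction split.
G = nx.karate_club_graph()
faction = [v for v, data in G.nodes(data=True) if data["club"] == "Mr. Hi"]
print(retained_mass(G, seed=0, S=faction, steps=3))      # most of the mass typically stays in the seed's faction
```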
to do this, we use the locally - biased personalized pagerank ( ppr ) procedure of refs . . recall that a ppr vector is implicitly defined as the solution of the equation where is a `` teleportation '' probability and is a seed vector . from the perspective of random walks , evolution occurs either by the walker moving to a neighbor of the current node or by the walker `` teleporting '' to a random node ( e.g. , determined uniformly at random as in the usual pagerank procedure , or to a random node that is biased towards in the ppr procedure ) . in general ,teleportation results in a bias to the random walk , which one usually tries to minimize when detecting communities .( see ref . for clever ways to choose with this goal in mind . ) the algorithm of refs . deliberately exploits the bias from teleportation to achieve localized results .it computes an approximation to the solution of eq .( i.e. , it computes an _ approximate ppr vector _ ) by strategically `` pushing '' mass between the iteratively - updated approximate solution vector and a residual vector in such a way that most of the nodes in the original network are _ not _ reached .consequently , this algorithm is typically _ much _faster for moderately - large to very large graphs than is the nave algorithm to compute a solution to eq .. the algorithm is parametrized in terms of a `` truncation '' parameter where larger values of correspond to more locally - biased solutions .we refer to this procedure as the aclcut method . in this procedure , we formalize the idea of a locally - biased version of the leading nontrivial eigenvector of the normalized laplacian that can be used in a locally - biased version of traditional spectral graph partitioning . following ref . , consider the following optimization problem : where is a locality parameter and is a vector , which satisfies the constraints and , and which represents a seed set of nodes .that is , in the norm defined by the diagonal matrix , the seed vector is unit length and is exactly orthogonal to the all - ones vector .this _ locally - biased _ version of standard spectral graph partitioning ( which becomes the usual global spectral - partitioning problem if the locality constraint is removed ) was introduced in , where it was shown that the solution vector inherits many of the nice properties of the solution to the usual global spectral - partitioning problem .the solution is of the form where the parameter is related to the teleportation parameter via the relation ( see ) and ] gives good coverage of different size scales in practice . in this paper , we use 20 logarithmically - spaced points in ] ( including the endpoints ) , where is the theoretical maximum for ( see ) . to sample seed nodes, we modified the strategy described in ref . to be applicable to the movcut method as well as the aclcut method . for each choice of parameter values ,we sampled nodes uniformly at random without replacement and stopped the sampling process either when all nodes were sampled or when the sampled local communities sufficiently covered the entire network . 
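For completeness, here is a generic sketch of the "sweep" by which a conductance community is read off from a ranking vector (for example, an approximate PPR vector): nodes are sorted by their degree-normalized entries, and the best-conductance prefix is returned. This is the standard construction used with such ranking vectors; it is written for unweighted graphs and is not the authors' exact implementation.

```python
import networkx as nx

def sweep_cut(G, rank):
    """Best-conductance prefix ('sweep set') of nodes ordered by rank[v] / degree(v).

    rank: dict mapping a node to its score (zero entries may simply be omitted,
    as in a truncated PPR vector).  Returns (best_set, best_conductance).
    """
    vol_G = sum(d for _, d in G.degree())
    order = sorted(rank, key=lambda v: rank[v] / max(G.degree(v), 1), reverse=True)
    in_set = set()
    vol = cut = 0
    best_set, best_phi = None, float("inf")
    for v in order:
        inside = sum(1 for u in G[v] if u in in_set)   # edges from v into the current set
        cut += G.degree(v) - 2 * inside                # new boundary edges minus absorbed ones
        vol += G.degree(v)
        in_set.add(v)
        denom = min(vol, vol_G - vol)
        if denom > 0 and cut / denom < best_phi:
            best_phi, best_set = cut / denom, set(in_set)
    return best_set, best_phi
```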
to determine sufficient coverage , we tracked how many timeseach node was included in the best local community that we obtained from the sweep sets and stopped the procedure once each node was included at least 10 times .this procedure ensures that good communities are sampled consistently .the egonet method does not have any size - scale parameters .for the network sizes that we consider , it is feasible to use all nodes rather than sampling them .we use this approach to generate figs.[fig : ncp - ego - small ] and[fig : ncp - ego - large ] .finally , for readability , we only plotted the ncps for communities that contain at most half of the nodes in a network .the symmetry in the definition of conductance ( see eq . ) implies that the complement of a good small community is necessarily a good large community and vice versa .hence , a sampled ncp is roughly symmetric , though this is hard to see on a logarithmic scale , and an ncp without sampling is necessarily symmetric . +the movcut method provides an alternative way of sampling local community profiles to construct an ncp .unlike aclcut , which uses _ only _ local information to obtain good communities , movcut also incorporates some global information about a network to construct local communities around a seed node .in particular , this implies that there can be sweep sets and thus communities that consist of disconnected components of a network .such communities have infinitely large conductance ratios .we observe this phenomenon often for the coauthorship and facebook networks , but it almost never occurs for the congressional voting networks . upon examination , these sweep sets consist of several small sets of peripheral nodes , each of which has moderate to very low conductance , but which are otherwise unrelated .although one would not usually think of such a set of nodes as a single good community , optimization - based algorithms often clump several unrelated communities into a single community for networks with a global core - periphery structure . for completeness and comparison ,we include our results both when we keep the disconnected sweep sets and when we restrict our attention to connected communities .as we discuss below , the ncp does not change substantially , although there are some small differences .the resulting ncps for the movcut method ( see figs.[ncp_mov_small ] and[ncp_mov_large ] ) are similar to those that we obtained for the aclcut method ( see figs.[ncp_acl_small ] and[ncp_acl_large ] ) , although there are a few differences worth discussing .the crp plots are also very similar ( compare figs.[condratio_mov_small ] and[condratio_mov_large ] to figs.[condratio_acl_small ] and[condratio_acl_large ] ) . for the coauthorship networks ( ca - grqc and ca - astroph ), as well as fb - harvard1 , both movcut and aclcut identify the same good small communities that are responsible for the spikes in the ncp plots .in addition , the communities that yield the dips in the ncps for fb - johns55 near 220 and 1100 nodes , and for fb - harvard1 near 1500 nodes , all share more than 98% of their nodes .this indicates that both methods are able to find roughly the same community - like structures .however , the results from the movcut ncp for ca - grqc is higher and less choppy than the one that we computed using aclcut because the truncation employed by aclcut performs a form of implicit sparsity - based regularization that is absent from movcut .see refs . 
for a discussion and precise characterization of this regularization . for the coauthorship and facebook networks, we also note that there are regions of the computed ncps , when using the movcut method , in which one finds disconnected sweep sets ( see the thin curves ) with lower conductance than that for the best connected sets of the same size . at other sizes ,we see some differences between the ncps from movcut and aclcut .this illustrates that the two methods can have somewhat different local behavior , although both methods produce similar insights regarding the large - scale structure in these networks . in section[sxn : local - comp ] , we discuss some of these differences between our results from the two methods in more detail .the egonet method was not originally developed to optimize conductance , although there is some recent evidence that -neighborhoods can be good conductance communities .the assumption that underlies the egonet method is that nodes in the same community should be connected by short paths .however , unlike the spectral - based methods ( aclcut and movcut ) , the egonet method does not take into account the number of paths between nodes .in contrast to ref . , which considered only -neighborhoods , here we also examine -neighborhoods with . we can then use this method to sample a complete ncp for a network . despite its simplicity , and in agreement with ref . , the egonet method produces ncp s that are qualitatively similar to those from both the aclcut and movcut methods , for all of the networks that we considered ; see figs.[fig : ncp - ego - small ] and[fig : ncp - ego - large ] .the ncps for the egonet method are shifted upwards compared to those for the aclcut and movcut methods ; and this is particularly noticeable at larger community size .this is unsurprising , because the latter two methods more aggressively optimize the conductance objective .however , for all six of our networks , this method preserves an ncp s small - scale structure as well as the global tendency to be upward - sloping , flat , or downward - sloping .this provides further evidence that the qualitative features of an ncp provide a signature of community structure in a network and are not just an artifact of a particular way to sample communities . in section[sxn : local - comp ] , we give a more detailed comparison between the results of these methods .lgsj acknowledges a case studentship award from the epsrc ( bk/10/039 ) . map was supported by a research award ( # 220020177 ) from the james s. mcdonnell foundation , the epsrc ( ep / j001759/1 ) , and the fet - proactive project plexmath ( fp7-ict-2011 - 8 ; grant # 317614 ) funded by the european commission ; map also thanks samsi for supporting several visits and mwm for his hospitality during his sabbatical at stanford .pjm was funded by the nsf ( dms-0645369 ) and by award number r21gm099493 from the national institute of general medical sciences .mwm acknowledges funding from the army research office and from the defense advanced research projects agency .the content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies . in addition , we thank adam dangelo and facebook for providing the facebook data , keith poole for providing the congressional voting data ( which is available from ref . ) , and jure leskovec for making many large network data sets publicly available as part of snap . 
|
it is common in the study of networks to investigate intermediate - sized ( or `` meso - scale '' ) features to try to gain an understanding of network structure and function . for example , numerous algorithms have been developed to try to identify `` communities , '' which are typically construed as sets of nodes with denser connections internally than with the remainder of a network . in this paper , we adopt a complementary perspective that `` communities '' are associated with bottlenecks of locally - biased dynamical processes that begin at seed sets of nodes , and we employ several different community - identification procedures ( using diffusion - based and geodesic - based dynamics ) to investigate community quality as a function of community size . using several empirical and synthetic networks , we identify several distinct scenarios for `` size - resolved community structure '' that can arise in real ( and realistic ) networks : ( i ) the best small groups of nodes can be better than the best large groups ( for a given formulation of the idea of a good community ) ; ( ii ) the best small groups can have a quality that is comparable to the best medium - sized and large groups ; and ( iii ) the best small groups of nodes can be worse than the best large groups . as we discuss in detail , which of these three cases holds for a given network can make an enormous difference when investigating and making claims about network community structure , and it is important to take this into account to obtain reliable downstream conclusions . depending on which scenario holds , one may or may not be able to successfully identify `` good '' communities in a given network ( and good communities might not even exist for a given community quality measure ) , the manner in which different small communities fit together to form meso - scale network structures can be very different , and processes such as viral propagation and information diffusion can exhibit very different dynamics . in addition , our results suggest that , for many large realistic networks , the output of locally - biased methods that focus on communities that are centered around a given seed node might have better conceptual grounding and greater practical utility than the output of global community - detection methods . they also illustrate subtler structural properties that are important to consider in the development of better benchmark networks to test methods for community detection .
|
a - order -dimensional real tensor consists of entries in real numbers : ,\ ] ] where =\{1,2,\ldots , n\}. ] . is called _ symmetric _ if the value of is invariant under any permutation of its indices .denote the set of all real symmetric - order -dimensional tensors by } ] , and } ] , the principal sub - tensor of a tensor } ] , such that for all .here , the symbol denotes the cardinality of .the following proposition shows the relationship between the solution of teicp ( [ teicp.problem ] ) and the generalized eigenvalue problem ( [ generalized eigenpair ] ) . is a solution of teicp ( [ teicp.problem ] ) if and only if there exists a subset ] , we know that there exists the unique semi - symmetric tensor such that .it is clear that . without loss of generality, we always assume that } ] , and is copositive .let and then and . if both and are symmetric and strictly copositive tensors , then we can use logarithmic function as the merit function in ( [ max - optimization - problem ] ) .in such a case , teicp ( [ teicp.problem ] ) could be reformulated to the following nonlinear optimization problem : its gradient and hessian are respectively and the hessian is much simpler than that of rayleigh quotient function in ( [ max - optimization - problem ] ) .if one need to use hessian for computing pareto eigenvalue , the logarithmic merit function may be a favorable choice .in this section , the spectral projected gradient ( spg ) method is applied to the nonlinear programming problem ( [ max - optimization - problem ] ) .one main feature of spg is the spectral choice of step length ( also called bb stepsize ) along the search direction , originally proposed by barzilai and borwein .the barzilai - borwein method performs much better than the steepest descent gradient method or projected gradient method in practice . especially , when the objective function is a convex quadratic function and , a sequence generated by the bb method converges -superlinearly to the global minimizer . for any dimension convex quadratic function , it is still globally convergent but the convergence is -linear .we firstly present the following spectral projected gradient method with monotone line search . ' '' '' + * given tensors * } ] , an initial unit iterate , parameter .let be the tolerance of termination .calculate gradient , .set k=0 . +* step 1 : * compute and the direction + * step 2 : * if then stop : is a pareto eigenvalue , and is a corresponding pareto eigenvector of teicp .otherwise , set . + * step 3 : * if then define , , .otherwise , set and try again .+ * step 4 : * compute .if set ; else , compute and set and go to * step 1*. + ' '' '' here is a close convex set . by the projection operation and the convexity of , we know that for all and , set and in the above inequality , then we have +\|x - p_{\omega}(x+\beta g(x)\|^2\le0.\ ] ] let with , then we have the following lemma .for all , ] . furthermore , using the lemma 1 , we have for all ] . here , we can set .we consider two cases .firstly , assume that . by continuity , for sufficiently large , .from the line search condition ( [ linesearch ] ) , we have clearly , when , , which is a contradiction .in fact , is a continuous function and so .assume that .since , there exists a subsequence such that .in such a case , from the way is chosen in ( [ linesearch ] ) , there exists an index sufficiently large such that for all , , for which fails to satisfy condition ( [ linesearch ] ) , i.e. 
, .hence , by the mean value theorem , we can rewrite this relation as where ] and } ] and } ] and } ] .let be the tolerance on termination .+ * for * * do * + * 1 : * compute , the gradient . + * 2 : * if , stop . otherwise , let , compute , and {\mathcal{b}(u_k)^m} ] and } ] .compute .let be the tolerance on termination .let be the tolerance on being positive definite .+ * for * * do * + * 1 : * compute , the hessian , respectively .let , .+ * 2 : * let , compute , and {\mathcal{b}(u_k)^m} ] be the symmetric tensor defined by [ fig : spg - spa - z - eigen-1 ] {spg - spp - spa - pareto - z - eigenvalue-1.eps } } \end{array}\ ] ] [ fig : spg - spp - sspa - z - eigen-100 - 1 ] {spg - spp - sspa - pareto - z - eigenvalue-100 - 1.eps } } \end{array}\ ] ] to compare the convergence in terms of the number of iterations .figure 1 shows the results for computing pareto z - eigenvalues of from * _ example 1 _ * , and the starting point is $ ] . in this case , all of the spg1 , spg2 , spp , sspa can reach the same pareto z - eigenvalue 0.3633 .spg1 method just need run 9 iterations in 0.1716 seconds while spa method need run 260 iterations in 3.3696 seconds .spp is similar to spg1 method in this case .spg2 need run 13 iterations in 0.4368 seconds . as we can see , comparing with spa method , sspa method get a great improvement .sspa method just need run 19 iterations in 0.2964 seconds . + [cols="^,^,^,^,^,^",options="header " , ]in this paper , two monotone ascent spectral projected gradient algorithms were investigated for the tensor eigenvalue complementarity problem ( teicp ) .we also presented a shifted scaling - and - projection algorithm , which is a great improvement of the original spa method .numerical experiments show that spectral projected gradient methods are efficient and competitive to the shifted projected power method .this work was supported in part by the national natural science foundation of china ( no.61262026 , 11571905 , 11501100 ) , ncet programm of the ministry of education ( ncet 13 - 0738 ) , jgzx programm of jiangxi province ( 20112bcb23027 ) , natural science foundation of jiangxi province ( 20132bab201026 ) , science and technology programm of jiangxi education committee ( ldjh12088 ) , program for innovative research team in university of henan province ( 14irtsthn023 ) .
|
this paper looks at the tensor eigenvalue complementarity problem ( teicp ) which arises from the stability analysis of finite dimensional mechanical systems and is closely related to the optimality conditions for polynomial optimization . we investigate two monotone ascent spectral projected gradient ( spg ) methods for teicp . we also present a shifted scaling - and - projection algorithm ( spa ) , which is a great improvement of the original spa method proposed by ling , he and qi [ comput . optim . appl . , doi 10.1007/s10589 - 015 - 9767-z ] . numerical comparisons with some existed gradient methods in the literature are reported to illustrate the efficiency of the proposed methods . * keywords : * tensor , pareto eigenvalue , pareto eigenvector , projected gradient method , eigenvalue complementarity problem .
|
one of the key attributes fueling the success of deep learning is the ability of deep networks to compactly represent rich classes of functions .this phenomenon has drawn considerable attention from the theoretical machine learning community in recent years .the primary notion for formally reasoning about the representational abilities of different models is _ expressive efficiency_. given two network architectures and , with size parameters ( typically the width of layers across a network ) and , we say that architecture is expressively efficient w.r.t .architecture if the following two conditions hold : _( i ) _ any function realized by with size can be realized ( or approximated ) by with size ; _ ( ii ) _ there exist functions realized by with size that can not be realized ( or approximated ) by unless its size meets for some super - linear function .the nature of the function in condition _ ( ii ) _ determines the type of efficiency taking place if is exponential then architecture is said to be exponentially expressively efficient w.r.t .architecture , and if is polynomial so is the expressive efficiency of over . to date ,works studying expressive efficiency in the context of deep learning ( e.g. ) have focused on the architectural feature of depth , showing instances where deep networks are expressively efficient w.r.t .shallow ones . this theoretical focus is motivated by the vast empirical evidence supporting the importance of depth ( see for a survey of such results ) .however , it largely overlooks an additional architectural feature that in recent years is proving to have great impact on the performance of deep networks _ connectivity_. nearly all state of the art networks these days ( e.g. ) deviate from the simple feed - forward approach , running layers in parallel with various connectivity ( split / merge ) schemes . whether or not this relates to expressive efficiency remains to be an open question .a specific family of deep networks gaining increased attention in the deep learning community is that of _ dilated convolutional networks_. 
these models form the basis of the recent wavenet ( ) and bytenet ( ) architectures , which provide state of the art performance in audio and text processing tasks .dilated convolutional networks are typically applied to sequence data , and consist of multiple succeeding convolutional layers , each comprising non - contiguous filters with a different dilation ( distance between neighboring elements ) .the choice of dilations directly affects the space of functions that may be realized by a network , and while no choice is expressively efficient w.r.t .another , we show in this work that interconnecting networks with different dilations leads to expressive efficiency , and by this demonstrate that connectivity indeed bears the potential to enhance the expressiveness of deep networks .our analysis follows several recent works utilizing tensor decompositions for theoretical studies of deep learning ( see for example ) , and in particular , builds on the equivalence between hierarchical tensor decompositions and convolutional networks established in and .we show that with dilated convolutional networks , the choice of dilations throughout a network corresponds to determination of the mode ( dimension ) tree underlying the respective decomposition .we then define the notion of a _ mixed tensor decomposition _ , which blends together multiple mode trees , effectively creating a large ensemble of hybrid trees formed from all possible combinations .mixed tensor decompositions correspond to _ mixed dilated convolutional networks _ , _i.e. _ mixtures formed by connecting intermediate layers of different dilated convolutional networks .this allows studying the expressive properties of such mixtures using mathematical machinery from the field of tensor analysis .we fully analyze a particular case of dilated convolutional arithmetic circuits , showing that a single connection between intermediate layers already leads to an almost quadratic expressive efficiency , which in large - scale settings typically makes the difference between a model that is practical and one that is not .an experiment on timit speech recognition dataset ( ) demonstrates the gain brought forth by mixing different networks , showing that interconnectivity can indeed boost the performance of dilated convolutional networks .the remainder of the paper is organized as follows .[ sec : prelim ] provides preliminary background in the field of tensor analysis , and establishes notational conventions .[ sec : dcn ] presents dilated convolutional networks , and their correspondence to tensor decompositions . in sec .[ sec : mtd ] we define mixed tensor decompositions , and discuss their equivalence to mixed dilated convolutional networks . our analysis of expressive efficiency is given in sec .[ sec : analysis ] , followed the experiment in sec . [sec : exp ] . finally , sec .[ sec : summary ] concludes .the constructions and analyses delivered in this paper rely on concepts from the field of tensor analysis .below we provide the minimal background required in order to follow our arguments .the core concept in tensor analysis is a _ tensor _ , which for our purposes may simply be thought of as a multi - dimensional array .the _ order _ of a tensor is defined to be the number of indexing entries in the array , which are referred to as _ modes_. the _ dimension _ of a tensor in a particular mode is defined as the number of values that may be taken by the index in that mode . for example , a -by- matrix is a tensor of order , _ i.e. 
_ it has two modes , with dimension in mode and dimension in mode . if is a tensor of order and dimension in each mode , the space of all configurations it can take is denoted , quite naturally , by .a fundamental operator in tensor analysis is the _ tensor product _ ( also known as _ outer product _ ) , which we denote by .it is an operator that intakes two tensors and ( orders and respectively ) , and returns a tensor ( order ) defined by : . in a generalization of the tensor productis defined , by replacing multiplication with a general operator .specifically , for a function that is commutative ( for all ) , the _ generalized tensor product _ , denoted , is defined to be the operator that for input tensors and ( orders and respectively ) , returns the tensor ( order ) given by : .an additional operator we will make use of is _mode permutation_. let be a tensor of order , and let be a permutation over ( bijective mapping from to itself ) .the mode permutation of w.r.t . , which by a slight abuse of notation is denoted , is the order- tensor defined by : . in words , is the tensor obtained by rearranging the modes of in accordance with . when studying tensors , it is oftentimes useful to arrange them as matrices , a procedure referred to as_ matricization_. let be a tensor of order and dimension in each mode , and let be a set of mode indexes , whose complement we denote by .we may write where , and similarly where .the matricization of w.r.t . , denoted , is the -by- matrix holding the entries of such that is placed in row index and column index .if or , then by definition is a row or column ( respectively ) vector of dimension holding in entry . to conclude this section , we hereinafter establish notational conventions that will accompany us throughout the paper .we denote tensors with uppercase calligraphic letters , _ e.g. _ , and in some cases with the greek letters , or .subscripts are used to refer to individual tensor entries , _ e.g. _ , whereas superscripts indicate the location of a tensor in some annotated collection , for example stands for the tensor in the collection .vectors are typically denoted with boldface lowercase letters , _ e.g. _ , where again subscripts refer to an individual entry ( _ e.g. _ ) , and superscripts to the identity of a vector within some annotated collection ( _ e.g. _ is the vector in the set ) .we use non - boldface lowercase or uppercase letters ( _ e.g. _ or respectively ) to denote scalars , and in this case both subscripts and superscripts distinguish between objects in an annotated set ( _ e.g. _ ) . finally ,for a positive integer , we use ] , where is a natural time index . a size- convolutional layer with dilation- , _ i.e. _ with contiguous filters , maps this input into the hidden sequence )_t\subset{{\mathbb r}}^{r_1} ] of ]. for reasons that will shortly become apparent , we use here to denote the binary function combining two size- convolutions into a single size- convolution with non - linearity . different choices of lead to different convolutional operators , for example leads to standard convolution followed by rectified linear activation ( _ relu _ , ) , whereas gives rise to what is known as a _ convolutional arithmetic circuit _ ( ) .following the first hidden layer , size- convolutional layers with increasing dilations are applied .specifically , for , hidden layer maps the sequence )_t\subset{{\mathbb r}}^{r_{l-1}} ] using filters with dilation- , _ i.e. 
_ with an internal temporal gap of points : {\gamma}=g({\left\langle{{{\mathbf a}}^{l,\gamma,{\text{i}}}},{{{\mathbf h}}^{(l\text{-}1)}[t\text{-}2^{l\text{-}1}]}\right\rangle},{\left\langle{{{\mathbf a}}^{l,\gamma,{\text{ii}}}},{{{\mathbf h}}^{(l\text{-}1)}[t]}\right\rangle}) ] into network output sequence )_t\subset{{\mathbb r}}^{r_l} ] .altogether , the architectural parameters of the network are the number of convolutional layers , the convolutional operator , the input dimension , the number of channels for each hidden layer ] of layer ] network output at time , is a function of \ldots{{\mathbf x}}[t] ] for every ] is a full binary tree in which : * every node is labeled by a subset of ] * the label of an interior ( non - leaf ) node is the union of the labels of its children if is a binary mode tree , we identify its nodes with their labels , _ i.e. _ with the corresponding subsets of ] , the children of an interior node ] , and the parent of a non - root node ] . for every node ] , where is some predetermined constant . tensors .] in addition , we also define , for each interior node , two collections of weight vectors }\subset{{\mathbb r}}^r ] .the hierarchical grid tensor decomposition induced by traverses through the tree in a depth - first fashion , assigning the tensors of node ( ) through combinations of the tensors of its children ( and ) .this is laid out formally in eq .[ eq : tree_decomp ] below , which we refer to as the _ tree decomposition_. & & + & & _ = [ v^(1)_, ,v^(m)_]^ + & & + & & _ = ^(;t)((_=1^r a_^,,^c_(;t ) , ) _ g(_=1^r a_^,,^c_(;t ) , ) ) + & & a^y=^[n],y [ eq : tree_decomp ] as in the baseline decomposition ( eq . [ eq : base_decomp ] ) , here stands for coordinate of the discretizer .the permutation , for an interior node , arranges the modes of the tensor such that these comply with a sorted ordering of .specifically , if we denote by the elements of ] , the permutation \to[2^{{\left\lvert\nu \right\rvert}}] ] corresponding to the root of . compare the general tree decomposition in eq .[ eq : tree_decomp ] to the baseline decomposition in eq .[ eq : base_decomp ] .it is not difficult to see that the latter is a special case of the former .namely , it corresponds to a binary mode tree that is perfect ( all leaves have the same depth ) , and whose depth- nodes ( ) are ] . is a scalar and is a set , stands for the set obtained by adding to each element in . ]this implies that such a mode tree , when plugged into the tree decomposition ( eq . [ eq : tree_decomp ] ) , provides a characterization of the baseline dilated convolutional network ( fig .[ fig : base_dcn ] ) , _ i.e. _ a network whose dilation in layer is ( see illustration in fig . [fig : dilations_trees](a ) ) . if we were to choose a different mode tree , the corresponding dilated convolutional network would change .for example , assume that is even , and consider a perfect binary mode tree whose depth- nodes ( ) are as follows :* even : depth- nodes are ] * odd : depth- nodes are generated by splitting nodes of depth , such that the first and third quadrants of a split node belong to one child , while the second and fourth belong to the other in this case , the network characterized by the tree decomposition ( eq . [ eq : tree_decomp ] ) is obtained by swapping dilations of even and odd layers in the baseline architecture , _ i.e. _ it has dilation in layer of if is even , and if is odd ( see illustration in fig . [fig : dilations_trees](b ) ) . 
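As a concrete illustration of the architecture just described, here is a minimal NumPy sketch of the baseline dilated stack (ours, not the authors' code). It simplifies in three ways worth flagging: weights are shared across time, the beginning of the sequence is zero-padded instead of assuming a full 2^L-sample history, and the final 1x1 output convolution is omitted; the sizes L, r, d, T are arbitrary illustrative choices.

```python
import numpy as np

def dilated_network(x, weights, g):
    """Baseline stack: layer l applies a size-2 filter with dilation 2**(l-1).
    x has shape (T, d); weights is a list of L pairs (A_I, A_II); g combines the
    two size-1 convolutions (the 1x1 output layer of the figure is omitted)."""
    h = x
    for l, (A_I, A_II) in enumerate(weights, start=1):
        dilation = 2 ** (l - 1)
        nxt = np.empty((h.shape[0], A_I.shape[0]))
        for t in range(h.shape[0]):
            past = h[t - dilation] if t - dilation >= 0 else np.zeros(h.shape[1])  # zero-pad
            nxt[t] = g(A_I @ past, A_II @ h[t])
        h = nxt
    return h

product_g = lambda a, b: a * b                    # convolutional arithmetic circuit
relu_g = lambda a, b: np.maximum(a + b, 0.0)      # standard convolution followed by ReLU

L, r, d, T = 3, 8, 5, 16
rng = np.random.default_rng(0)
weights = [(0.1 * rng.standard_normal((r, d if l == 0 else r)),
            0.1 * rng.standard_normal((r, d if l == 0 else r))) for l in range(L)]
print(dilated_network(rng.standard_normal((T, d)), weights, relu_g).shape)   # (16, 8)
```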
to conclude this subsection, we defined the notion of a tree over tensor modes ( def .[ def : tree ] ) , and laid out a corresponding hierarchical decomposition of grid tensors ( tree decomposition . [ eq : tree_decomp ] ) .different choices of mode trees lead to decompositions characterizing networks with different dilations throughout their layers .the baseline decomposition ( eq . [ eq : base_decomp ] ) , characterizing the baseline dilated convolutional network ( dilation in layer see fig .[ fig : base_dcn ] ) , is now merely a special case that corresponds to a particular choice of mode tree . in the next section ,we build on the constructions made here , and define mixed tensor decompositions blending together multiple mode trees .these decompositions will be shown to correspond to multiple dilated convolutional networks interconnected to one another .let and be two binary mode trees over ] ) that reside in the interior of both and , defining locations in the tree decompositions at which tensors will be exchanged . if is chosen as the empty set , the mixed decomposition simply sums the output tensors generated by the tree decompositions of and ( ,y}\}_y ] respectively ) . otherwise , the tree decompositions of and progress in parallel , until reaching a mixture node , where they exchange half the tensors corresponding to that node ( half of is exchanged for half of ) .the process continues until all mixture nodes are visited and the root node ( of both trees ) ] and ,y}\}_y ] ) is always reached after all nodes strictly contained in it .lines - ( respectively - ) are the same as in the tree decomposition ( eq . [ eq : tree_decomp ] ) , except that instead of running through the entire interior of ( respectively ) , they cover a segment of it .this segment continues where the previous left off , and comprises only nodes ( subsets of ] and the decompositions of and have concluded , line sums the output tensors of these decompositions ( ,y}\}_y ] respectively ) , to produce the grid tensors . in terms of computation and memory , the requirements posed by the mixed decomposition( eq . [ eq : mix_decomp ] ) are virtually identical to those of running two separate tree decompositions ( eq . [ eq : tree_decomp ] ) with and . specifically ,if the tree decompositions of and correspond to input - output mappings computed by the dilated convolutional networks and ( respectively ) , the mixed decomposition would correspond to the computation of a _mixed dilated convolutional network _, formed by summing the outputs of and , and interconnecting their intermediate layers .the choice of mixture nodes in the mixed decomposition determines the locations at which networks and are interconnected , where an interconnection simply wires into half the outputs of a convolutional layer in , and vice versa .for example , suppose that is the baseline dilated convolutional network ( dilation in layer see sec .[ sec : dcn : base ] ) , whereas is the network obtained by swapping dilations of even and odd layers ( such that layer has dilation if is even , and if is odd ) . the mode trees corresponding to these networks , illustrated in fig . 
[ fig : dilations_trees ] ( for the case ) , share interior nodes ] .we may therefore choose to be all such nodes ( excluding root ) , and get a mixed decomposition that corresponds to a mixed network interconnecting all even layers of and .illustrations of such decomposition and network ( again , for the case ) are given in fig .[ fig : mix_trees_dcn ] .the main advantage of the mixed decomposition ( eq . [ eq : mix_decomp ] ) , and the reason for its definition , is that it leads to expressive efficiency .that is to say , the mixed dilated convolutional network , formed by interconnecting intermediate layers of networks with different dilations , can realize functions that without the interconnections would be expensive , or even impractical to implement .we theoretically support this in the next section , providing a complete proof for a special case of convolutional arithmetic circuits ( ) .as in sec . [sec : mtd ] , let and be two dilated convolutional networks whose input - output mappings are characterized by the tree decomposition ( eq . [ eq : tree_decomp ] ) with mode trees and respectively . consider the mixed decomposition ( eq . [ eq : mix_decomp ] ) resulting from a particular choice of mixture nodes ( subset of the nodes interior to both and ) , and denote its corresponding mixed dilated convolutional network by .we would like to show that is expressively efficient w.r.t . and , meaning : _( i ) _ any function realized by or can also be realized by with no more than linear growth in network size ( number of channels in the convolutional layers ) ; _ ( ii ) _ there exist functions realizable by that can not be realized by or ( or a summation thereof ) unless their size ( number of convolutional channels ) is allowed to grow super - linearly .we study the representational abilities of networks through their corresponding tensor decompositions , which as discussed in sec .[ sec : dcn ] , parameterize discretizations of input - output mappings ( grid tensors ) . before laying out the problem through the lens of tensor decompositions , a few remarks are in order : *the number of channels in each layer of or corresponds to the constant in the respective tree decomposition ( eq . [ eq : tree_decomp ] with underlying mode tree or respectively ) .similarly , the number of channels in each layer of each interconnected network in corresponds to in the respective mixed decomposition ( eq . [ eq : mix_decomp ] ) . in both the tree andmixed decompositions , , referred to hereafter as the _ size constant _ , stands for the number of tensors ( respectively ) held in each node ( respectively ) .we set this number uniformly across nodes , corresponding to uniformly sized layers across networks , merely for simplicity of presentation .our formulations and analysis can easily be adapted to account for varying layer sizes , by allowing different nodes in a decomposition to hold a different number of tensors . *an additional simplification we made relates to weight sharing . in both the tree and mixed decompositions, each interior node ( respectively ) has a separate set of weight vectors ( respectively ) .this implies that in the corresponding networks , convolution filters may vary through time , _i.e. 
_ different weights may be used against different portions of a convolved sequence .the more commonplace setting of stationary filters ( standard convolutions ) is obtained by restricting different nodes in a decomposition to possess the same weights .we do not introduce such restrictions into our formulations , as they make little difference in terms of the analysis , but on the other hand significantly burden presentation .we are now in a position to formulate our expressive efficiency problem in terms of tensor decompositions .our objective is to address the following two propositions ( stated informally ) : [ prop : tree_by_mix ] consider a tree decomposition ( eq . [ eq : tree_decomp ] ) with underlying mode tree or and size constant .this decomposition can be realized by a mixed decomposition of and ( eq . [ eq : mix_decomp ] ) whose size constant is linear in .[ prop : mix_by_tree ] consider a mixed decomposition of and ( eq . [ eq : mix_decomp ] ) with size constant .this decomposition can generate grid tensors that can not be generated by tree decompositions of or ( eq . [ eq : tree_decomp ] ) , or a summation of such , unless their size constant is super - linear in . before heading to a formal treatment of prop .[ prop : tree_by_mix ] and [ prop : mix_by_tree ] above , we briefly convey the intuition behind our analysis .recall from sec .[ sec : mtd ] that the mixed decomposition ( eq . [ eq : mix_decomp ] ) blends together tree decompositions ( eq . [ eq : tree_decomp ] ) of different mode trees and , by traversing upwards through the trees , while exchanging tensors at each of a preselected set of mixture nodes .we may think of each mixture node as a decision point that can propagate upwards one of two computations that carried out by , or that carried out by , where in both cases , the chosen computation is propagated upwards through both and .each combination of decisions across all mixture nodes gives rise to a computational path traversing between and , equivalent to a tree decomposition based on a _ hybrid mode tree _( see illustration in fig .[ fig : hybrid_trees ] ) .the number of possible hybrid trees is exponential in the number of mixture nodes , and thus a mixed decomposition is comparable to an exponential ensemble of tree decompositions .the original tree decompositions , based on and , are included in the ensemble , thus may easily be replicated by the mixed decomposition . on the other hand , many of the hybrid trees in the mixed decomposition are significantly different from and , requiring large size constants from tree decompositions of the latters .as a first step in formalizing the above intuition , we define the notion of a hybrid mode tree : [ def : hybrid_tree ] let and be binary mode trees over ] ) contained in the interior of both and .we say that is a _ hybrid mode tree _ of and w.r.t . if it is a binary mode tree over ] ( def .[ def : tree ] ) , and let be a corresponding collection of mixture nodes ( a set of nodes contained in the interior of both and ) .consider a mixed decomposition of and w.r.t . ( eq . [ eq : mix_decomp ] ) , and denote its size constant by .let be a hybrid mode tree of and w.r.t . ( def .[ def : hybrid_tree ] ) , and consider the respective tree decomposition ( eq . [ eq : tree_decomp ] ) , with a size constant of . for any setting of weights leading to grid tensors in this tree decomposition , there exists a setting of weights and in the mixed decomposition , independent of the discretizers ( see sec . 
[sec : dcn ] ) , that leads to the same grid tensors .see app .[ app : proofs : hybrid_tree_by_mix ] .claim [ claim : hybrid_tree_by_mix ] not only addresses prop .[ prop : tree_by_mix ] , but also paves the way to a treatment of prop . [ prop : mix_by_tree ] .in other words , not only does it imply that the mixed decomposition of and can realize their individual tree decompositions with a linear growth in size , but it also brings forth a strategy for proving that the converse does not hold , _i.e. _ that the tree decompositions of and can not realize their mixed decomposition without a super - linear growth in size .the aforementioned strategy is to find a hybrid mode tree distinct enough from and , such that its tree decomposition , realized by the mixed decomposition according to claim [ claim : hybrid_tree_by_mix ] , poses a significant challenge for the tree decompositions of and .hereinafter we pursue this line of reasoning , focusing on the particular case where the convolutional operator is a simple product . in this casethe tree and mixed decompositions ( eq . [ eq : tree_decomp ] and [ eq : mix_decomp ] respectively ) are standard ( non - generalized ) tensor decompositions ( see sec . [ sec : prelim ] ) , and the corresponding dilated convolutional networks are convolutional arithmetic circuits .we focus on this special case since it allows the use of a plurality of algebraic tools for theoretical analysis , while at the same time corresponding to models showing promising results in practice ( see for example ) .full treatment of additional cases , such as , corresponding to networks with relu activation , is left for future work . for establishing the difficulty experienced by the tree decompositions of and in replicating that of a hybrid tree , we analyze ranks of matricized grid tensors .specifically , we consider the tree decomposition ( eq . [ eq : tree_decomp ] ) of a general mode tree , and derive upper and lower bounds on the ranks of generated grid tensors when these are subject to matricization w.r.t . a general index set ] ( def .[ def : tree ] ) , and let ] ) , characterizes the ranks of grid tensors generated by the tree decomposition of when these are matricized w.r.t . .[ theorem : tree_decomp_ranks ] let be a binary mode tree over ] , ] .then , the ranks of the grid tensor matricizations are : * no greater than * at least almost always , _i.e. _ for all configurations of weights but a set of lebesgue measure zero see app .[ app : proofs : tree_decomp_ranks ] .as stated previously , given two binary mode trees over ] and a hybrid mode tree ( def .[ def : hybrid_tree ] ) , such that the tree decomposition ( eq . [ eq : tree_decomp ] ) of generates grid tensors whose ranks under matricization w.r.t . are much higher than those brought forth by the tree decompositions of and .consider our exemplar mode trees illustrated in fig .[ fig : dilations_trees ] .specifically , let be the mode tree corresponding to the baseline dilated convolutional network ( dilation in layer =[\log_{2}n] ] for ] , and every second pair of indexes in ] . as illustrated in fig .[ fig : trees_tilings ] , the mode tree tiles ( see def .[ def : tiling ] ) the lower half of into singletons , and its upper half into pairs . the same applies to s tiling of s complement \setminus{{\mathcal i}} ] see sec . 
[sec : dcn : base ] ) , with architectural parameters similar to those used in wavenet ( ) , to classify individual phonemes in the timit acoustic speech corpus ( ) .in addition to this baseline model , we also trained the companion network obtained by swapping dilations of even and odd layers ( such that layer has dilation if is even , and if is odd ) .as discussed in sec .[ sec : mtd ] , the mode trees corresponding to these networks ( illustrated in fig . [ fig : dilations_trees ] ) and , share interior nodes of even depth , thus any subset of those nodes may serve as mixture nodes for a mixed decomposition ( eq . [ eq : mix_decomp ] ) .we evaluate mixed dilated convolutional networks corresponding to different choices of mixture nodes ( see fig .[ fig : mix_trees_dcn ] for illustration of a particular case ) . specifically , we consider choices of the following form : varying the threshold yields mixed networks with a varying number of interconnections . inthe extreme case ( high threshold ) , simply sums the outputs of and .as the threshold decreases interconnections between hidden layers are added starting from hidden layer , then including hidden layer , and so on .the intuition from our analysis ( sec .[ sec : analysis ] ) is that additional interconnections result in a larger number of hybrid mode trees , which in turn boosts the expressive power of the mixed dilated convolutional network . as fig .[ fig : exp ] shows , this intuition indeed complies with the results in practice classification accuracy improves as we add interconnections between the networks , without any additional cost in terms of computation or model capacity .timit dataset is an acoustic - phonetic corpus comprising sentences manually labeled at the phoneme level .we split the data into train and validation sets in accordance with , and as advised by , mapped the possible phoneme labels into and an additional `` garbage '' label .the task was then to classify individual phonemes into one of the latter categories .following wavenet , we used a baseline dilated convolutional network with relu activation ( see sec .[ sec : dcn : base ] ) , channels per layer , and input vectors of dimension holding one - hot quantizations of the audio signal .the number of layers was set to , corresponding to an input window of samples , spanning of audio signal standard practice with timit dataset .the framework chosen for running the experiment was caffe toolbox ( ) , and we used adam optimizer ( ) for training ( with default hyper - parameters , , learning rate ) .models were trained for iterations with batch size , and the learning rate was decreased by a factor of after of the iterations took place .weight decay was set to the standard value of .besides the mixed dilated convolutional network , we also evaluated the individual networks and both reached accuracies comparable to in the case of interconnections ( output summation only ) .in this paper we presented a study of the representational capacity of dilated convolutional networks , showing that interconnecting networks with different dilations can lead to expressive efficiency .in particular , we showed that even a single connection between intermediate layers can already lead to an almost quadratic expressive efficiency ( theorem [ theorem : tree_decomp_ranks ] and corollary [ corollary : mix_by_tree ] ) , which in large - scale settings typically makes the difference between a model that is practical and one that is not .we began with the dilated convolutional network 
underlying wavenet model ( fig .[ fig : base_dcn ] ) , referring to it as the `` baseline architecture '' , and couching it in a tensor algebraic setting ( eq .[ eq : base_decomp ] ) .the key for introducing tensors into the framework is a discretization of the network s input - output mapping the input vectors , that propagate through the network to form the output , are sampled from a pool of `` templates '' , thereby creating a tensor with entries , referred to as a `` grid tensor '' .the wavenet model is shown ( app .[ app : base_decomp ] ) to give rise to a hierarchical decomposition of grid tensors . [ eq : base_decomp ] . given that the tensor decomposition associated with the baseline architecture adheres to a specific tree structure , the generalization of the framework to an arbitrary tree follows quite naturally . if represents a general binary mode tree ( as defined in def .[ def : tree ] ) , then eq . [ eq : tree_decomp ] provides a tensor decomposition that captures various dilated convolutional networks , _i.e. _ networks with various dilation schemes . fig .[ fig : dilations_trees](b ) illustrates the type of dilation schemes we chose to focus on , obtained by swapping dilations in the scheme of the baseline architecture ( illustrated in fig .[ fig : dilations_trees](a ) ) .armed with a framework for describing dilated convolutional networks through mode trees and tensor decompositions , we next presented how two networks can be `` mixed '' .this is achieved by choosing a set of `` mixture nodes '' in the trees of both networks , and defining a `` mixed tensor decomposition '' ( eq .[ eq : mix_decomp ] ) that : _ ( i ) _ at each mixture node , exchanges tensors between the decompositions of the two networks ; _ ( ii ) _ at the root node , sums up the tensors from both decompositions . 
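The rewiring operation summarized in points (i)-(ii) above is straightforward to express in code. The sketch below (illustrative, not the implementation used in the experiments) runs two dilated stacks in parallel and, after every layer selected as a mixture point, exchanges the first half of the channels between them before summing the two outputs; which half is exchanged, the zero-padding of the past, and all sizes are our own simplifications.

```python
import numpy as np

def conv_layer(h, A_I, A_II, dilation, g):
    """Size-2 convolution with the given dilation; the past is zero-padded."""
    out = np.empty((h.shape[0], A_I.shape[0]))
    for t in range(h.shape[0]):
        past = h[t - dilation] if t - dilation >= 0 else np.zeros(h.shape[1])
        out[t] = g(A_I @ past, A_II @ h[t])
    return out

def mixed_network(x, weights_a, weights_b, dils_a, dils_b, mix_layers, g):
    """Two dilated stacks run in parallel; at every layer in mix_layers half the
    channels are exchanged between them ('rewiring'); the outputs are summed."""
    ha, hb = x, x
    for l, ((a1, a2), (b1, b2)) in enumerate(zip(weights_a, weights_b), start=1):
        ha = conv_layer(ha, a1, a2, dils_a[l - 1], g)
        hb = conv_layer(hb, b1, b2, dils_b[l - 1], g)
        if l in mix_layers:
            half = ha.shape[1] // 2
            ha[:, :half], hb[:, :half] = hb[:, :half].copy(), ha[:, :half].copy()
    return ha + hb

# Example: baseline dilations 1,2,4,8 mixed with the even/odd-swapped scheme 2,1,8,4
L, r, d, T = 4, 8, 5, 32
rng = np.random.default_rng(0)
make = lambda f: (0.1 * rng.standard_normal((r, f)), 0.1 * rng.standard_normal((r, f)))
wa = [make(d if l == 0 else r) for l in range(L)]
wb = [make(d if l == 0 else r) for l in range(L)]
dils_a = [2 ** l for l in range(L)]                                          # 1, 2, 4, 8
dils_b = [2 ** (l + 1) if l % 2 == 0 else 2 ** (l - 1) for l in range(L)]    # 2, 1, 8, 4
y = mixed_network(rng.standard_normal((T, d)), wa, wb, dils_a, dils_b,
                  mix_layers={2, 4}, g=lambda a, b: a * b)
print(y.shape)   # (32, 8)
```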
from a computational viewpoint , the mixing process amounts to `` rewiring '' intermediate layers between the two networks , and summing their outputs .accordingly , the requirements posed by the mixed network are virtually identical to those of running the two individual networks separately .the heart of our analysis is a theoretical study of the expressive efficiency brought forth by generating a mixed network from two dilated convolutional networks and .establishing expressive efficiency requires proving two propositions : _( i ) _ any function realized by or can also be realized by with no more than linear growth in network size ; _ ( ii ) _ there exist functions realizable by that can not be realized by or ( or a summation thereof ) unless their size is allowed to grow super - linearly .we treat the first proposition in claim [ claim : hybrid_tree_by_mix ] , and the second in theorem [ theorem : tree_decomp_ranks ] .the latter is where the centrality of tensor algebra comes into play , as it is based entirely on ranks of tensor matricizations .the results of our work shed light on one of the most prominent architectural features of modern deep learning connectivity .empirical evidence shows that running layers in parallel with various interconnection schemes yields improved performance .what our study shows , at least in the domain of dilated convolutional networks , is that these ideas are backed by theoretical principles , and in fact , provide a powerful boost to expressiveness .this work is supported by intel grant icri - ci # 9 - 2012 - 6133 , by isf center grant 1790/12 , and by the european research council ( theorydl project ) .nadav cohen is supported by a google doctoral fellowship in machine learning .in this appendix we derive the baseline decomposition ( eq . [ eq : base_decomp ] ) a parameterization of grid tensors ( eq . [ eq : grid_tensor ] ) discretizing input - output mappings of the baseline dilated convolutional network ( fig .[ fig : base_dcn ] ) . as discussed in sec .[ sec : dcn : base ] , ] its input over the last time points .we would like to show that for any ] under the following input assignment : ={{\mathbf v}}^{(d_1)},\ldots,{{\mathbf x}}[t]={{\mathbf v}}^{(d_n)} ] , ] , coordinate of the network s depth- sequence ( input )_t ] for ] for ) at time ,is equal to entry of the tensor in the baseline decomposition ( eq . [ eq : base_decomp ] ) .the desired result then follows from the case . when , the inductive hypothesis is trivial coordinate of the input sequence at time , _ i.e. _ \gamma ] and ] stand for the input sequence )_t ] ( or )_t ] stand for the weights in the tree decomposition of the hybrid mode tree ( eq . [ eq : tree_decomp ] with size constant and underlying mode tree given by def . [def : hybrid_tree ] ) .similarly , we use } ] to denote the weights , corresponding to and ( respectively ) , in the mixed decomposition ( eq . [ eq : mix_decomp ] with size constant ) . recall that by construction ( def .[ def : hybrid_tree ] ) , the interior of , consists of different segments ( collections of nodes ) , each taken from either or .we define to be the function indicating which tree an interior node in came from . specifically , if the node originated from we have , and on the other hand , if its source is then . by convention, feeding with an argument outside yields something that is different from both and .for example , if is the root node , _i.e. 
_ ] fed into the tree decomposition of , leading the latter to produce grid tensors } ] , the first grid tensors it generates are equal to } ] ) , as these relate to tensors that are not exchanged ( see eq .[ eq : mix_decomp ] ) . on the other hand , if , weight vectors with lower indexes ( ] be a node in , whose elements we denote by .the reduction of onto is defined as follows : i|_:=\{j : i_ji } [ eq : reduction ] in words , it is the set of indexes corresponding to the intersection inside .besides index set reduction , an additional tool we will be using is the _ kronecker product _ a matrix operator we denote by . for two matrices and , is the matrix in holding in row index and column index .consider the central relation in the tree decomposition ( eq . [ eq : tree_decomp ] ) , while noticing that in our setting ( is the product operator see sec .[ sec : prelim ] ) : _ = ^()((_=1^ra_^,,^c _ ( ) , ) ( _ = 1^r a_^,,^c _ ( ) , ) ) [ eq : tree_decomp_main ] suppose we would like to matricize the tensor w.r.t .the reduction .if all elements of were smaller than those of , the permutation would be the identity ( see sec .[ sec : dcn : tree ] ) , and the following matrix relation would hold : ^,_i| _ & = & _ = 1^r a_^,,^c_(),_i|_c _ ( ) _ = 1^r a_^,,^c_(),_i|_c _ ( ) + & = & ( _ = 1^r a_^,,^c_(),_i|_c _( ) ) ( _ = 1^r a_^,,^c_(),_i|_c _ ( ) ) in general however , elements in could be greater than ones in , and so eq . [ eq : tree_decomp_main ] includes a tensor mode sorting via . in matricized form , this amounts to rearranging rows and columns through appropriate permutation matrices and respectively : we thus arrive at the following matrix form of eq .[ eq : tree_decomp ] , referred to as the _matricized tree decomposition _ : & & + & & ^\{j},_i|_\{j } = ^_i|_\{j } + & & + & & ^,_i| _ = q^()((_=1^r a_^,,^c_(),_i|_c _ ( ) ) ( _ = 1^r a_^,,^c_(),_i|_c_()))|q^ ( ) + & & ^y_i=^[n],y_i|_[n ] [ eq : mat_tree_decomp ] next , we move on to the second stage of the proof , where we establish the upper bound stated in the theorem : rank^y_i r^\{(i ) , ( i^c ) } [ eq : tree_decomp_ranks_ub ] we begin by `` propagating outwards '' the permutation matrices )} ] corresponding to the root node ] , we replace the matrix ,\gamma}\rrbracket}_{{{{\mathcal i}}|_{[n]}}} ] and )} ] a child of the root node ] and ))} ] : ),\gamma } : = \left(\sum_{\alpha=1}^{r } a_\alpha^{c_{\text{i}}([n]),\gamma,{\text{i}}}{\llbracket\phi^{c_{\text{i}}(c_{\text{i}}([n])),\alpha}\rrbracket}_{{{{\mathcal i}}|_{c_{\text{i}}(c_{\text{i}}([n]))}}}\right ) \odot \left(\sum_{\alpha=1}^{r } a_\alpha^{c_{\text{i}}([n]),\gamma,{\text{ii}}}{\llbracket\phi^{c_{\text{ii}}(c_{\text{i}}([n])),\alpha}\rrbracket}_{{{{\mathcal i}}|_{c_{\text{ii}}(c_{\text{i}}([n]))}}}\right)\ ] ] which in turn implies : b^[n ] , & = & ( _ = 1^r a_^[n],,q^(c_([n]))b^c_([n]),|q^(c_([n ] ) ) ) ( _ = 1^r a_^[n],,^c_([n]),_i|_c_([n ] ) ) + & = & ( q^(c_([n]))(_=1^r a_^[n],,b^c_([n]),)|q^(c_([n ] ) ) ) ( _ = 1^r a_^[n],,^c_([n]),_i|_c_([n ] ) ) now , for any matrices such that and are defined , the following equality holds : ( see for proof ) .we may therefore write : & b^[n ] , = & + & ( q^(c_([n]))i ) ( ( _ = 1^r a_^[n],,b^c_([n ] ) , ) ( _ = 1^r a_^[n],,^c_([n]),_i|_c_([n ] ) ) ) ( |q^(c_([n]))|i ) & where and are identity matrices of appropriate sizes . 
propagating outwards the matrices ))}{\odot}i ] ( while redefining ,\gamma} ] .we may thus define to be the matrix whose column is , and get the following equalities : where again , is an appropriately sized identity matrix .this implies that we can propagate outwards , just as we have done with permutation matrices . applying this procedure to all nodes in the tilings and , we arrive at the decomposition below : & & + & & b^ , = e^ ( ) + & & + & & b^ , = ( e^())^ + & & + & & b^ , = ( _ = 1^r a_^,,b^c _ ( ) , ) ( _ = 1^r a_^,,b^c _ ( ) , ) + & & ^y_i = ab^[n],y|a notice that for compactness in writing we made use of the fact that , where , ] .it is not difficult to see that this size is precisely -by- , meaning that the ranks of ,y}\}_y ] span . without loss of generality, assume that } ] , establishing eq .[ eq : tree_decomp_ranks_lb ] in the case proves that it holds in general ( ) . *bearing in mind that we assume ( and linear independence of } ] . from the tree decomposition ( eq . [ eq : tree_decomp ] ) it is evident that the discretizers affect generated grid tensors only through products of the form or , where is a parent of a leaf node in . since is invertible ( } ] . taking into account the above reductions , our objective is to show that there exists a setting of weights , such that the following special case of the matricized tree decomposition ( eq . [ eq : mat_tree_decomp ] ) generates matricizations meeting the lower bound in eq .[ eq : tree_decomp_ranks_lb ] : & & + & & ^\{j},_i|_\{j } = e^ ( ) + & & + & & ^\{j},_i|_\{j } = ( e^())^ + & & + & & ^,_i| _ = q^()((_=1^r a_^,,^c_(),_i|_c _ ( ) ) ( _ = 1^r a_^,,^c_(),_i|_c_()))|q^ ( ) + & & ^y_i=^[n],y_i|_[n ] similarly to the procedure carried out in the second stage of the proof ( establishing the upper bound in eq .[ eq : tree_decomp_ranks_ub ] ) , we now propagate outwards the permutation matrices and corresponding to all interior nodes .this brings forth the following decomposition : & & + & & b^\{j } , = e^ ( ) + & & + & & b^\{j } , = ( e^())^ + & & + & & b^ , = ( _ = 1^r a_^,,b^c _ ( ) , ) ( _ = 1^r a_^,,b^c _ ( ) , ) + & & ^y_i = ab^[n],y|a [ eq : tree_decomp_ranks_lb_reduce_decomp ] the matrices and in the assignments of essentially collect all permutation matrices and ( respectively ) that have been propagated outwards .specifically , ( respectively ) is a product of factors , each of the form ( respectively ) for a different interior node and appropriately sized identity matrix .since permutation matrices are invertible , and since the kronecker product between two invertible matrices is invertible as well ( see for proof ) , we conclude that the matrices and are invertible .therefore , for every ] .it thus suffices to find a setting of weights for which : rank(b^[n ] , ) r^\{(_1,_2)(i)(i^c ) : } [ eq : tree_decomp_ranks_lb_reduce ] disregard the trivial case where there exist siblings and of depth , and are the children of the root node ] is for every ] : \ ] ] * meets neither of the above ( and here denote the all - zero and all - one vectors in , respectively ) : & a^,1 , = \ { + ll 1 & , + e^(1 ) & , + . &+ & a^,1 , = \ { + ll 1 & , + e^(1 ) & , + . &+ & a^ , , = a^ , , = 0\{1 } & plugging this into the decomposition in eq .[ eq : tree_decomp_ranks_lb_reduce_decomp ] , one readily sees that : * for every , } ] are indicator matrices , where both the row and column indexes of the active entry do not repeat as varies . 
*the matrices ,\gamma}\}_{\gamma\in[r]} ] are equal to one another , given by a joint kronecker product between all of the following : * * for every node in either or which does not have a sibling in the other * * for every node that has one child in and the other in according to the first observation above , has rank for every in or .the second observation implies that has rank for every node that has one child in and the other in . in turn , and while taking into account the rank - multiplicative property of the kronecker product ( see for proof ) , the third observation implies : ,\gamma})=r^{{\left\lvert\{(\nu_1,\nu_2)\in\theta({{\mathcal i}})\times\theta({{\mathcal i}}^c):~\text{ and are siblings in~}\ } \right\rvert } } \quad\forall{\gamma\in[r]}\ ] ] we thus have found weights for which eq .[ eq : tree_decomp_ranks_lb_reduce ] holds .is such that there exist siblings and of depth ( and are the children of the root node ] , the ranks of generated grid tensors when matricized w.r.t . , attain their maximum possible values ( which depend on both the decomposition and ) for all configurations of weights ( for the tree decomposition , and for the mixed decomposition ) but a set of lebesgue measure zero .hereinafter we justify this assertion .when equipped with the product operator ( ) , a tree or mixed decomposition generates grid tensors whose entries are polynomials in the decomposition weights .therefore , for any index set $ ] , the entries of the matricizations are , too , polynomials in the decomposition weights . claim [ claim : max_rank ] below implies thatfor a particular index , the rank of is maximal almost always , _i.e. _ for all weight settings but a set of measure zero .since the union of finitely many zero measure sets is itself a zero measure set ( see for example ) , we conclude that the ranks of are jointly maximal almost always , which is what we set out to prove . [claim : max_rank ] let , and consider a polynomial function mapping weights to matrices ( `` polynomial '' here means that all entries of are polynomials in ) .denote , and consider the set .this set has lebesgue measure zero .we disregard the trivial case where .let be a point at which is attained ( ) , and assume without loss of generality that the top - left minor of , _ i.e. _ the determinant of , is non - zero .the function defined by is a polynomial , which by construction does not vanish everywhere ( ) .the zero set of a polynomial is either the entire space , or a set of lebesgue measure zero ( see for proof ) .therefore , the zero set of has lebesgue measure zero .now , for every : is thus contained in the zero set of , and therefore too , has lebesgue measure zero .
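The appendix manipulations above repeatedly use the matricization operator together with two standard Kronecker-product facts, the mixed-product property (A kron B)(C kron D) = (AC) kron (BD) and the rank identity rank(A kron B) = rank(A) * rank(B). The short NumPy check below (not part of the paper) verifies both on random matrices and also illustrates the "maximal rank almost always" point on a random tensor; the row/column ordering produced by NumPy's reshape may differ from the index convention used in the text, but that only permutes rows and columns and does not affect ranks.

```python
import numpy as np

rng = np.random.default_rng(0)

def matricize(T, index_set):
    """[T]_I: modes in index_set (0-based) index the rows, the rest the columns."""
    modes = sorted(index_set)
    rest = [m for m in range(T.ndim) if m not in modes]
    rows = int(np.prod([T.shape[m] for m in modes])) if modes else 1
    return np.transpose(T, modes + rest).reshape(rows, -1)

# Mixed-product property: (A kron B)(C kron D) = (AC) kron (BD)
A, B = rng.standard_normal((4, 3)), rng.standard_normal((5, 2))
C, D = rng.standard_normal((3, 6)), rng.standard_normal((2, 4))
print(np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D)))   # True

# Rank multiplicativity: rank(A kron B) = rank(A) * rank(B)
print(np.linalg.matrix_rank(np.kron(A, B)),
      np.linalg.matrix_rank(A) * np.linalg.matrix_rank(B))                  # 6 6

# Generic (random) weights give maximal rank of a matricized tensor "almost always"
T = rng.standard_normal((2, 2, 2, 2))
print(np.linalg.matrix_rank(matricize(T, {0, 2})))                          # typically 4
```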
|
Expressive efficiency is a concept that allows formal reasoning about the representational capacity of deep network architectures. A network architecture is expressively efficient with respect to an alternative architecture if the latter must grow super-linearly in order to represent functions realized by the former. A well-known example is the exponential expressive efficiency of depth, namely, that in many cases shallow networks must grow exponentially large in order to represent functions realized by deep networks. In this paper we study the expressive efficiency brought forth by the architectural feature of connectivity, motivated by the observation that nearly all state-of-the-art networks these days employ elaborate connection schemes, running layers in parallel while splitting and merging them in various ways. A formal treatment of this question would shed light on the effectiveness of modern connectivity schemes, and in addition, could provide new tools for network design. We focus on dilated convolutional networks, a family of deep models gaining increased attention, underlying state-of-the-art architectures like Google's WaveNet and ByteNet. By introducing and studying the concept of mixed tensor decompositions, we prove that interconnecting dilated convolutional networks can lead to expressive efficiency. In particular, we show that a single connection between intermediate layers can already lead to an almost quadratic gap, which in large-scale settings typically makes the difference between a model that is practical and one that is not. _Deep learning_, _expressive efficiency_, _dilated convolutions_, _tensor decompositions_
|
study the two - user multiple - input single - output ( miso ) interference channel ( ic ) , consisting of two transmitter ( tx ) - receiver ( rx ) pairs ( or links ) .the transmissions are concurrent and cochannel ; hence , they interfere with each other .the txs employ multiple antennas and the rxs a single antenna .we assume that the channels are flat and slow fading and we say that a link is in _ outage _ if the ic experiences fading states that can not support a desired data rate .the fundamental question raised is how to define the _ outage rate region_. that is , which rate points can be achieved with a certain probability ?for multi - user systems , such as the ic , broadcast channel ( bc ) , and multiple - access channel ( mac ) , one can consider _ common _ or _ individual _ outage .we declare a common outage if the rate of at least one link can not be supported ( see , e.g. , for the bc ) .we declare an individual outage if a specific link is unable to communicate at the desired rate .so far , studies of outage rate regions have been restricted to the single - antenna bc and mac for which the outage capacity regions for instantaneous channel side information ( csi ) were given in and , respectively . for statistical csi ,the mac and bc were studied in and , respectively .the instantaneous rate region for the miso ic is well - understood ( see , e.g. , and ) . in , we defined the regions for individual and common outage for statistical csi and common outage for instantaneous csi . for the gaussian ic in the high signal - to - noise ratio ( snr ) regime ,recent research activities have explored the diversity - multiplexing trade - off ( dmt ) ( see e.g. , for characterization of the two - user ic ) . in ,our results in were used to approximately perform weighted sum - rate maximization under outage constraints for the miso ic with statistical csi . also for statistical csi , outage probabilities in the multiple - input multiple - output ( mimo ) ic were given in closed form and an outage - based robust beamforming algorithm was proposed in . in this paper , we propose and analyze achievable outage rate regions for the miso ic .the results generalize those of both in the sense that the bc and mac are special cases of the ic , and in that we treat the multiple antenna case .since we allow a non - zero outage probability , our results extend those of and where outage was not allowed .in contrast to the dmt analysis , e.g. , , our results are valid for any snr regime . for completeness, we consider common and individual outage for both instantaneous and statistical csi , but we focus on the individual outage rate region for instantaneous csi , which we did not treat in .a challenge is how to handle the scenario where either of the rates can be achieved , but not simultaneously .we solve this by proposing a stochastic mapping of the beamforming vectors that depends on the rates and the channels .we prove that the randomness of the mapping is independent of the channel realization . compared to ,the statistical csi definitions extend the single - stream transmission scheme to multi - stream . the definitions are valid for arbitrary assumptions on the channel distribution .we assume that the rxs treat the interference as additive gaussian noise . 
also , rx experiences additive gaussian thermal noise with variance .tx employs antennas and uses a gaussian vector codebook with covariance .by , we denote the slow - fading conjugated channel vector between tx and rx and we assume that the channels are statistically independent .we let denote a specific realization of the channels , i.e. , ^t ] if we restrict for some function of that does not depend on then likewise , we can force the integral to assume any value in ] which satisfies , then we have . in order to have the following conditions must be satisfied : a ) the lower bound in is less than .b ) the upper bound is non - negative .c ) the lower bound is smaller than the upper bound .hence , using the fact that the probabilities sum up to one , we have the conditions if all conditions in are satisfied , we choose according to and the rate point lies in the individual outage rate region .otherwise , does not belong to the outage rate region . to give some interpretation , we insert into , and get it is apparent that and are decreasing with and respectively .also , increases when one of the rates increases but the other is fixed .therefore , we conclude that points on the outer boundary of the outage rate region must satisfy at least one of the inequalities with equality .another observation is that and are the trivial outage constraints for the su points , i.e. , su miso channel , whereas gives the shrinkage of the outage rate region due to interference . note that, equivalently to def .[ def : indoutageinstcsi ] , we can define as the set of rate points which satisfy .we assume that the txs only have knowledge of the channels statistical distribution .that is , the txs have statistical csi and can only adapt their transmit covariance matrices to the channel statistics .therefore , the txs design the transmit covariance matrices once and use them for all fading states .the definitions are given for completeness ; for details we refer to .we give definitions for the common and individual outage rate regions in secs .[ sec : stat_com ] and [ sec : stat_ind ] , respectively .we denote by the sought common outage rate region for statistical csi and define it as follows .[ def : cdi_com ] let denote the _ common _ outage probability specification . then , if there exists a deterministic mapping with for such that note that the transmit covariance matrices and depend on both the channel statistics and the actual rate point .we denote by the sought individual outage rate region for statistical csi .we allow one link to be in outage while the other is not .since the txs do not know whether the transmission is in outage or not , a tx continues transmitting even when the link is in outage .[ def : cdi_ind ] let denote the _ individual _ outage probability specifications . then , if there exists a deterministic mapping with for such that [ l][l][1]common outage , statistical csi [ l][l][1]individual outage , statistical csi [ l][l][1]common outage , instantaneous csi [ l][l][1]individual outage , instantaneous csi [ l][l][1] for [ l][l][1] for [ c][c][1.2] [ bits / channel use ] [ c][c][1.2] [ bits / channel use ] [ c][c][1]0 [ c][c][1]0.2 [ c][c][1]0.4 [ c][c][1]0.6 [ c][c][1]0.8 [ c][c][1]1 [ c][c][1]1.2 [ c][c][1]1.4 [ c][c][1]2 [ c][c][1]3 [ c][c][1]4 [ fig : regions ]we illustrate the outage rate regions given in defs . [ def : csi_com][def : cdi_ind ] .the txs employ antennas each and we model as a zero - mean complex - symmetric gaussian vector with covariance .we assume that and . 
for a given set of channel covariance matrices , we depict the regions in fig . [fig : regions ] .we use exhaustive - search methods to generate the regions . for instantaneous csi, we make a grid of rate points .then , for each rate point , we estimate the probabilities by running monte - carlo simulations . for determining if we use the fast method given in . for statistical csiwe draw beamforming vectors randomly . using results from ,we compute the probabilities in defs .[ def : cdi_com ] and [ def : cdi_ind ] in closed form . for each pair of beamforming vectors , we determine the rate points that meet the outage specifications .we find the outer boundary via a brute - force comparison among all computed rate points .we observe that the individual outage regions are larger than the corresponding common outage regions and the instantaneous csi regions are larger than the corresponding statistical csi regions .these results are expected since common outage is more restrictive than individual outage and instantaneous csi is always better than statistical csi .these results are true in general , but we omit the proof due to space limitations .we also illustrate the effect of choosing the bias according to .area 1 is the gain , compared to the common outage case , from including the obvious cases and .area 2 ( or 3 ) is the gain from solving the conflict by always choosing in favor of link 1 ( or 2 ) , i.e. , by switching off deterministically .area 4 is the gain from randomly switching off the transmissions using the bias according to .we defined four outage rate regions for the miso ic .the definitions correspond to different scenarios of channel knowledge and outage specification .we observe that neither the definitions depend on the channels distributions nor they are restricted for gaussian coding . on the other hand , for gaussian coding and channels , we have efficient methods for illustrating the regions . whereas the definitions for statistical csi assume that interference is treated as noise , the definitions for instantaneous csi are valid for any achievable rate region andcould potentially be extended to the mimo ic .e. bjrnson , r. zakhour , d. gesbert , and b. ottersten , `` cooperative multicell precoding : rate region characterization and distributed strategies with instantaneous and statistical csi , '' , vol .58 , no . 8 , pp . 42984310 , aug .2010 .j. park , y. sung , d. kim , and h. v. poor , `` outage probability and outage - based robust beamforming for mimo interference channels with imperfect channel state information , '' , vol .35613573 , oct .2012 .j. lindblom , e. karipidis , and e. g. larsson , `` efficient computation of pareto optimal beamforming vectors for the miso interference channel with multiuser decoding , '' , submitted 2012 .available : http://arxiv.org/abs/1210.4459 .
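As a rough illustration of the Monte Carlo procedure used above, the following Python sketch estimates the individual and common outage probabilities of the two links for one fixed pair of beamforming vectors, i.e. in the spirit of the statistical-CSI definitions; reproducing the instantaneous-CSI regions would in addition require optimizing, for every rate point, the mapping from the channel realization to the beamforming vectors. The identity channel covariances, unit noise power, equal-gain beamformers and the conjugation convention in the inner products are illustrative assumptions of ours, not the values used for the figure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, sigma2, n_mc = 3, 1.0, 20000

def crandn(cov):
    """Zero-mean circularly symmetric complex Gaussian vector with covariance cov."""
    w = (rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)) / np.sqrt(2.0)
    return np.linalg.cholesky(cov) @ w

def rate(h_direct, h_cross, w_own, w_other):
    """Achievable rate of one link, treating the cross-link interference as noise."""
    s = np.abs(np.vdot(h_direct, w_own)) ** 2
    i = np.abs(np.vdot(h_cross, w_other)) ** 2
    return np.log2(1.0 + s / (sigma2 + i))

def outage_probs(w1, w2, R1, R2, cov):
    """Monte Carlo estimates of the two individual and the common outage probabilities;
    cov[(i, j)] is the covariance of the channel from tx i to rx j."""
    o1 = o2 = both = 0
    for _ in range(n_mc):
        h11, h21 = crandn(cov[(1, 1)]), crandn(cov[(2, 1)])
        h22, h12 = crandn(cov[(2, 2)]), crandn(cov[(1, 2)])
        out1 = rate(h11, h21, w1, w2) < R1
        out2 = rate(h22, h12, w2, w1) < R2
        o1 += out1; o2 += out2; both += out1 and out2
    return o1 / n_mc, o2 / n_mc, both / n_mc

cov = {key: np.eye(n_tx, dtype=complex) for key in [(1, 1), (1, 2), (2, 1), (2, 2)]}
w1 = w2 = np.ones(n_tx) / np.sqrt(n_tx)        # fixed unit-power beamformers (illustrative)
print(outage_probs(w1, w2, 1.0, 1.0, cov))
```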
|
We consider the slow-fading two-user multiple-input single-output (MISO) interference channel. We want to understand which rate points can be achieved when a non-zero outage probability is allowed. We do so by defining four different outage rate regions. The definitions differ in whether the rates are declared in outage jointly or individually, and in whether the transmitters have instantaneous or statistical channel state information (CSI). The focus is on the instantaneous-CSI case with individual outage, where we propose a stochastic mapping from the rate point and the channel realization to the beamforming vectors. A major contribution is that we prove that the stochastic component of this mapping is independent of the actual channel realization. Achievable rate region, beamforming, interference channel, MISO, outage probability.
|
in recent times there has been a revival in the study of the characterization of non - markovianity for an open quantum system dynamics . while the subject was naturally born together with the introduction of the first milestones in the description of the time evolution of a quantum system interacting with an environment , the difficulty inherent in the treatment led to very few general results , and the very definition of a convenient notion of markovian open quantum dynamics was not agreed upon .the focus initially was on finding the closest quantum counterpart of the classical notion of markovianity for a stochastic process , so that reference was made to correlation functions of all order for the process .recent work was rather focused on proposals of a notion of markovian quantum dynamics based on an analysis of the behaviour of the statistical operator describing the system of interest only , thus concentrating on features of the dynamical evolution map , which only determines mean values .different properties of the time evolution map have been considered in this respect .in particular two viewpoints appear to have captured important aspects in the characterization of a dynamics which can be termed non - markovian in the sense that it relates to memory effects .the aim of our work is to analyse the relationship between these approaches and the validity of the so - called quantum regression theorem , according to which the behaviour in time of higher order correlation functions can be predicted building on the knowledge of the dynamics of the mean values for a generic observable .the analysis can be performed introducing a suitable quantifier for the violation of the quantum regression hypothesis , which in turn requires knowledge of the exact two - time correlation functions .we therefore consider a two - level system coupled to a bosonic bath through a decoherence interaction , exactly estimating for a general class of spectral densities the predictions of different criteria for non - markovianity of a dynamics and the violation of the regression theorem .we further apply this analysis to a dephasing model , whose realization has been recently exploited to experimentally observe quantum non - markovianity . in both caseswe show that the quantum regression theorem can be violated even in the presence of a quantum dynamics which , according to either criteria , is considered markovian .the paper is organized as follows . in sect .[ sec : nmarkov ] we recall two recently introduced notions of markovianity for a quantum dynamics and the associated measures , while in sect .[ sec : qrt ] we address the formulation of the quantum regression theorem and introduce a simple estimator for its violation .we apply this formalism to the pure dephasing spin boson model in sect .[ sec : sbm ] discussing the relationship between the two approaches , and extend the analysis to a photonic dephasing model in sect . 
[sec : modellofotoni ] .we finally comment on our results in sect .[ sec : ceo ] .let us start by briefly recalling the main features of the notion of non - markovian quantum dynamics which will be exploited in the following analysis .in the classical theory of stochastic processes , the definition of markov process involves the entire hierarchy of joint probability distributions associated with the process .since such a definition can not be directly transposed to the quantum realm , different and non - equivalent notions of quantum markovianity have been introduced , along with different measures to quantify the degree of non - markovianity of a given dynamics ( see for a very recent comparison ) .these definitions all convey the idea that the occurrence of memory effects is the proper attribute of non - markovian dynamics , relying on different properties of the dynamical maps which describe the evolution of the open quantum system . in the absence of initial correlations between the open system and its environment ,i.e. , with assumed to be fixed , the evolution of an open quantum system is characterized by a one parameter family of completely positive and trace preserving ( cpt ) maps , such that where is the state of the open system at the initial time . a relevant class of open quantum system s dynamics is provided by the semigroup ones , which are characterized by the composition law the generator of a semigroup of cpt maps is fixed by the gorini - kossakowski - sudarshan - lindblad theorem , which implies that the dynamics of the system is given by the lindblad equation \\ + \sum_{k } \gamma_k\left(l_k \rho_s(t )l^{\dag}_k - \frac{1}{2 } \left\{l^{\dag}_kl_k , \rho_s(t ) \right\ } \right ) \end{gathered}\ ] ] with .the semigroups of cpt maps are identified with the markovian time - homogeneous dynamics according to all the previously mentioned definitions of markovianity , so that the differences between them actually concern the notion of time - inhomogeneous markovian dynamics . in the following, we will take into account two definitions of markovianity and the corresponding measures of non - markovianity .one definition is related with the contractivity of the trace distance under the action of the dynamical maps , while the other relies on a divisibility property of the dynamical maps , which reduces to the semigroup composition law in the time - homogeneous case .the basic idea behind the definition of non - markovianity introduced by breuer , laine and piilo ( blp ) is that a change in the distinguishability between the reduced states can be read in terms of an information flow between the open system and the environment .the distinguishability between quantum states is quantified through the trace distance , which is the metric on the space of states induced by the trace norm : where the are the eigenvalues of the traceless hermitian operator .the trace distance takes values between 0 and 1 and , most importantly , it is a contraction under the action of cpt maps . by investigating the evolution of the trace distance between two states of the open system coupled to the same environment but evolved from different initial conditions , one can thus describe the exchange of information between the open system and the environment .a decrease of the trace distance means a lower ability to discriminate between the two initial conditions and , which can be expressed by saying that some information has flown out of the open system . 
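a minimal numerical illustration of the trace distance and of its contractivity under a completely positive trace-preserving map is sketched below; the depolarizing channel and the pair of initial states are arbitrary choices made only for this example.

import numpy as np

def trace_distance(rho, sigma):
    # D(rho, sigma) = (1/2) * sum of |eigenvalues| of the traceless Hermitian operator rho - sigma
    evals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(evals))

def depolarize(rho, p):
    # a completely positive, trace-preserving (CPT) qubit channel, used only as an illustration
    return (1.0 - p) * rho + p * np.eye(2) / 2.0

rho1 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)           # |0><0|
rho2 = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]], dtype=complex)     # |+><+|

d0 = trace_distance(rho1, rho2)
d1 = trace_distance(depolarize(rho1, 0.3), depolarize(rho2, 0.3))
print(d0, d1, d1 <= d0 + 1e-12)   # contraction: the distance can only decrease under a CPT map

the decrease from d0 to d1 is read, in the language used above, as information on the initial preparation flowing from the open system to the environment.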
on the same ground ,an increase of the trace distance can be ascribed to a back - flow of information to the open system and then represents a memory effect in its evolution .non - markovian quantum dynamics can be thus defined as those dynamics which present a non - monotonic behaviour of the trace distance , i.e. such that there are time intervals in which consequently , the non - markovianity of an open quantum system s dynamics is quantified by the measure the maximization involved in the definition of this measure can be greatly simplified since the optimal states must be orthogonal and , even more , one can determine by means of a local maximization over one state only .this measure of non - markovianity has been also investigated experimentally in all - optical settings . the definition given by rivas ,huelga and plenio ( rhp ) identifies markovian dynamics with those dynamics which are described by a cp - divisible family of quantum dynamical maps ( cp standing for completely positive ) , i.e. such that being itself a completely positive map .indeed , if the composition law in eq.([divisibility ] ) is equivalent to the semigroup composition law .an important property of this definition is that , provided that the evolution of the reduced state can be formulated by a time - local master equation \\ & = & - i[h(t ) , \rho_s(t ) ] \nonumber \\ & & \hspace{-2truecm } + \sum_{k } \gamma_k(t)\left(l_k(t ) \rho_s(t ) l^{\dag}_k(t ) - \frac{1}{2 } \left\{l^{\dag}_k(t)l_k(t ) , \rho_s(t ) \right\ } \right ) , \nonumber\end{aligned}\ ] ] the positivity of the coefficients , for any , is equivalent to the cp - divisibility of the corresponding dynamics .this can be shown by taking into account the family of propagators associated with eq.([lindbladt ] ) , where denotes the time ordering and . by construction , the propagators satisfy eq.([divisibility ] ) , but , in general , they are not cp maps .one can show that the propagators are actually cp if and only if the coefficients are positive functions of time . the corresponding measure of non - markovianity is given by with where is the choi matrix associated with . given a maximally entangled state between the system and an ancilla , , one has the positivity of the choi matrix corresponds to the complete positivity of the map and it is equivalent to the condition , so that the quantity is different from zero if and only if the cp - divisibility of the dynamics is broken . finally , since the trace distance is contractive under cpt maps , if a dynamics is markovian according to the rhp definition , then it is so also according to the blp definition , i.e. , while the opposite implication does not hold .as recalled in the introduction , the quantum regression theorem provides a benchmark structure in order to study the multi - time correlation functions of an open quantum system . for the sake of simplicity ,we focus on the two - time correlation functions only . 
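before moving to the correlation functions, the divisibility criterion recalled above can be made concrete numerically. for the pure-dephasing qubit map considered later in the paper, the propagator between two times leaves populations unchanged and multiplies the coherence by the ratio of decoherence functions; the sketch below builds the choi matrix of such a propagator and tests its positivity (the numerical values of the coherence ratio are illustrative).

import numpy as np

def dephasing_propagator_choi(kappa):
    """Choi matrix of a qubit pure-dephasing propagator: populations unchanged,
    coherence multiplied by kappa (e.g. kappa = Gamma(t2)/Gamma(t1))."""
    def v(x):   # action of the propagator on a 2x2 operator
        out = x.astype(complex)
        out[0, 1] *= kappa
        out[1, 0] *= np.conj(kappa)
        return out
    # Choi matrix: C = (1/2) * sum_{ij} |i><j| (x) V(|i><j|)
    choi = np.zeros((4, 4), dtype=complex)
    for i in range(2):
        for j in range(2):
            e = np.zeros((2, 2)); e[i, j] = 1.0
            choi += np.kron(e, v(e))
    return choi / 2.0

for kappa in (0.8, 1.2):   # |kappa| <= 1 holds exactly when |Gamma| does not increase on [t1, t2]
    lam_min = np.linalg.eigvalsh(dephasing_propagator_choi(kappa)).min()
    print(kappa, lam_min, lam_min >= -1e-12)   # the intermediate map is CP iff the minimum eigenvalue is >= 0

with this numerical handle on divisibility in place, we can turn to the correlation functions themselves.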
given two open system s operators , and , where denotes the identity on the hilbert space associated with the environment , their two - time correlation function is defined as ,\end{gathered}\ ] ] where is the overall unitary evolution operator and we set .in the following , we assume an initial state as in eq.([eq : prod ] ) , as well as a time - independent total hamiltonian , so that .the condition of an initial product state with a fixed environmental state guarantees the existence of a reduced dynamics , see eqs.([eq : prod ] ) and ( [ eq : red ] ) .this means that all the one - time probabilities associated with the observables of the open systems and , as a consequence , their mean values can be evaluated by means of the family of reduced dynamical maps only , without need for any further reference to the overall unitary dynamics .an analogous result holds for the two - time correlation functions , if one can apply the so - called quantum regression theorem .the latter essentially states that under proper conditions the dynamics of the two - time correlation functions can be reconstructed from the dynamics of the mean values , or , equivalently , of the statistical operator .indeed , if the quantum regression theorem can not be applied , one needs to come back to the full unitary dynamics in order to determine the evolution of the two - time correlation functions .we will not repeat here the detailed derivation of the quantum regression theorem , which can be found in .nevertheless , let us recall the basic ideas .first , by introducing the operator the two - time correlation function in eq.([twotimecorrfunc ] ) can be rewritten as now , suppose that we can describe the evolution of with respect to with the same dynamical maps which fix the evolution of the statistical operator , i.e. , ,\ ] ] where is the propagator introduced in eq.([eq : prop ] ) .then , eq.([eq : aux ] ) directly provides }.\ ] ] the two - time correlation functions can be fully determined by the dynamical maps which fix the evolution of the statistical operator : the validity of eq.([eq : preqrt ] ) can be identified with the validity of the quantum regression theorem and we will use the subscript to denote the two - time correlation functions evaluated through eq.([eq : preqrt ] ) .indeed , all the procedure relies on eq.([eq : aux2 ] ) , which requires that the same assumptions made in order to derive the dynamics of can be made also to get the evolution of with respect to . especially , the hypothesis of an initial total product state in eq.([eq : prod ] ) turns into the hypothesis of a product state at any intermediate time , the physical idea is that the quantum regression theorem holds when the system - environment correlations due to the interaction can be neglected .note that this condition will never be strictly satisfied , as long as the system and the environment mutually interact , but it should be understood as a guideline to detect the regimes in which eq.([eq : preqrt ] ) provides a satisfying description of the evolution of the two - time correlation functions .more precisely , dmcke demonstrated that the exact expression of the two - time ( multi - time ) correlation functions , see eq.([twotimecorrfunc ] ) , converges to the expression in eq.([eq : preqrt ] ) in the weak coupling limit and in the singular coupling limit . 
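the regression prescription of eq.([eq : preqrt ]) can be written down in a few lines for a pure-dephasing qubit: one propagates the operator obtained by multiplying the initial state with the second observable using the same map that propagates the state, and then takes the trace against the first observable. in the sketch below the exponential decoherence function and its phase convention are placeholders chosen only for illustration.

import numpy as np

sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_+
sm = sp.conj().T                                  # sigma_-

def dephasing_map(x, gamma_t):
    """pure-dephasing dynamical map applied to an arbitrary 2x2 operator x:
    populations unchanged, coherences multiplied by the decoherence function."""
    out = x.copy()
    out[0, 1] *= gamma_t
    out[1, 0] *= np.conj(gamma_t)
    return out

def corr_qrt(t, rho0, gamma):
    """two-time correlation <sigma_+(t) sigma_-(0)> as predicted by the regression recipe:
    propagate sigma_- rho0 with the same map that propagates the state."""
    return np.trace(sp @ dephasing_map(sm @ rho0, gamma(t)))

rho0 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # |+><+|
gamma = lambda t: np.exp(-0.5 * t)                        # hypothetical decoherence function
print([corr_qrt(t, rho0, gamma) for t in (0.0, 1.0, 2.0)])

the question addressed in the following is how far from the weak-coupling and singular-coupling limits this prescription remains accurate.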
as well - known , in these limits the reduced dynamics converges to a semigroup dynamics , the correctness of a semigroup description of the reduced dynamics is not always enough to guarantee the accuracy of the quantum regression theorem .more in general , the precise link between a sharply defined notion of markovianity of quantum dynamics and the quantum regression theorem has still to be investigated . the quantum regression theorem provided by eq.([eq : preqrt ] )can be equivalently formulated in terms of the differential equations satisfied by mean values and two - time correlation functions , as was originally done in . forthe sake of simplicity , let us restrict to the finite dimensional case , i.e. , the hilbert space associated with the open system is . consider a reduced dynamics fixed by the family of maps and a basis of linear operators on , such that the corresponding mean values fulfill the coupled linear equations of motion with the initial condition . in this case ,the quantum regression theorem is said to hold if the two - time correlation functions satisfy with the initial condition in the following , we will compare the evolution of the exact two - time correlation functions obtained from the full unitary evolution , see eq.([twotimecorrfunc ] ) , with those predicted by the quantum regression theorem . to quantify the error made by using the latter , we exploit the relative error , i.e. , we use the following figure of merit : which depends on the chosen couple of open system s operators .hence , in general , one should consider different estimators , one for each couple of operators in the basis , and a maximization over them could be taken .nevertheless , in the following analysis it will be enough to deal with a single couple of system s operators , which fully encloses the violations of the quantum regression theorem for the models at hand .in this section , we take into account a model whose full unitary evolution can be exactly evaluated , so as to obtain the exact expression of the two - time correlation functions , to be compared with the expression provided by the quantum regression theorem .this model is a pure - decoherence model , in which the decay of the coherences occurs without a decay of the corresponding populations .indeed , this is due to the fact that the free hamiltonian of the open system commutes with the total hamiltonian .let us consider a two - level system linearly interacting with a bath of harmonic oscillators , so that the total hamiltonian is the unitary evolution operator of the overall system in the interaction picture is given by where the first factor is an irrelevant global phase and the second factor is the unitary operator ,\ ] ] with the reduced dynamics is readily calculated to give where the function is given by } \notag\\ & = { \ensuremath{\operatorname{tr}}}_e{\rho_e \prod_k \delta(\alpha_k(t ) ) } , \end{aligned}\ ] ] being the displacement operator of argument .the associated master equation reads + \frac{\mathcal{d}(t)}{2 } \left(\sigma_z \rho_s(t ) \sigma_z - \rho_s(t ) \right),\ ] ] where \ ] ] and the so - called dephasing function is = - \frac{{\mathrm{d}}}{{\mathrm{d}}t } \ln|\gamma(t)|.\ ] ] in the following , we will focus on the case of an initial thermal state of the bath , with and the inverse temperature .we also consider the continuum limit : given a frequency distribution of the bath modes , we introduce the spectral density , so that one has ,\ ] ] and hence and for this specific model , the two definitions 
of markovianity are actually equivalent , i.e. not only eq. holds , but also the opposite does so .this is due to the fact that there is only one operator contribution in the time - local master equation , corresponding to the dephasing interaction .nevertheless , the numerical values of the two measures of non - markovianity are in general different and , more importantly , they depend in a different way on the parameters of the model .let us start by evaluating the blp measure , see sec .[ sec : nmblp ] .the trace distance between two reduced states evolved through eq.([rho_s(t)matrix ] ) is given by where and are the differences between , respectively , the populations and the coherences of the two initial conditions and . the couple of initial states that maximizes the growth of the trace distance is given by the pure orthogonal states , where , and the corresponding trace distance at time is simply .the blp measure therefore reads where is the union of the time intervals in which increases .the blp measure is different from zero if and only if for some interval of time , which is equivalent to the requirement that the dephasing function in eq.([eq : me ] ) is not a positive function of time , i.e. , that the cp - divisibility of the dynamics is broken , sec .[ sec : nmrhp ] . as anticipated , for this model .furthermore , given a pure dephasing master equation as in eq.([eq : me ] ) , one has if and if , so that , see eq.([eq : dt ] ) , where the and are defined as for the blp measure . in order to evaluate explicitly the non - markovianity measures , we need to specify the spectral density . in the following ,we assume a spectral density of the form where is the coupling strength , the parameter fixes the low frequency behaviour and is a cut - off frequency .the non - markovianity for the pure dephasing spin model with a spectral density as in eq.([spectrals ] ) has been considered in for the case .we are now interested in the comparison between non - markovianity and violations of the quantum regression theorem , so that , as will become clear in the next section , the dependence on plays a crucial role . in particular , we consider the case of low temperature , i.e. , , so that .the dephasing function in this case reads , see eq.([eq : dephasing ] ) , with the euler gamma function , which can be expressed in the equivalent , but more compact form , see appendix [ app : a ] , }{\left ( 1+(\omega t)^2 \right)^s}.\ ] ] correspondingly , the decoherence function can be written as }{(1+(\omega t)^2)^{s-1}}\right)\right].\ ] ] as before , let be the union of the time intervals for which , i.e. , equivalently , increases .the number of solutions of the equation grows with the parameter : for the dephasing function is always strictly positive , while for and there is one zero at and respectively .indeed , if the number of zeros is odd , is negative from its last zero to infinity , while if the number of zeros is even , it approaches zero asymptotically from above . as a consequence ,the two measures of non - markovianity are equal to zero for and , to give an example , one has for and , analogously , for in fig . [fig : nm ] * ( a ) * and * ( b ) * , we report , respectively , the blp and the rhp measures of non - markovianity as a function of , for different values of . * ( a ) * + , see eq.([eq : nmblp ] ) , and * ( b ) * rhp measure of non - markovianity , see eq.([eq : nmrhp ] ) , as a function of the coupling strength for increasing values of the parameter . 
in both panelsthe curves are evaluated for ( black thick solid line ) , ( blue solid line ) , ( magenta dashed line ) , ( green dashed thick line ) , ( red dot - dashed line ) and ( orange dotted line).,title="fig : " ] + * ( b ) * + , see eq.([eq : nmblp ] ) , and * ( b ) * rhp measure of non - markovianity , see eq.([eq : nmrhp ] ) , as a function of the coupling strength for increasing values of the parameter . in both panelsthe curves are evaluated for ( black thick solid line ) , ( blue solid line ) , ( magenta dashed line ) , ( green dashed thick line ) , ( red dot - dashed line ) and ( orange dotted line).,title="fig : " ] the behaviour of the two measures is clearly different .the rhp measure is a monotonically increasing function of both and : the increase is linear with respect to the former parameter and exponential with respect to the latter . on the other hand , for every fixed , there is a critical value of the coupling strength , which is smaller for increasing , that separates two different regimes of the blp measure : for , the non - markovianity measure increases with the increase of the system - environment coupling , while for it decreases with the increase of the coupling .analogously , there is a threshold value of the parameter , which is higher for smaller values of , such that the blp measure increases for and decreases for , see also fig .[ fig:2 ] * ( a)*. incidentally , the maximum value as a function of , , is a monotonically increasing function of the parameter .indeed , the different behaviour of the non - markovianity measures traces back to their different functional dependence of the decoherence function , which is plotted in fig .[ fig:2 ] * ( b ) * and * ( c ) * for different values of and .one can see how takes on smaller values within ] in which increases , see eq .( [ eq : nmblp ] ) , the rhp measure is fixed by the ratio between the same values , see eq.([eq : nmrhp ] ) .hence , as the coupling strength grows over the threshold or the parameter overcomes the threshold , the difference between and is increasingly smaller , and therefore is so .however , the ratio between and always increases with and , as witnessed by the corresponding monotonic increase of . *( a ) * + , see eq.([eq : nmblp ] ) , as a function of the parameter , for . * ( b ) * and * ( c ) * decoherence function as a function of time for and different values of * ( b ) * , and for and different values of * ( c)*. , title="fig : " ] + * ( b ) * + , see eq.([eq : nmblp ] ) , as a function of the parameter , for . * ( b ) * and * ( c ) * decoherence function as a function of time for and different values of * ( b ) * , and for and different values of * ( c)*. , title="fig : " ] + * ( c ) * + , see eq.([eq : nmblp ] ) , as a function of the parameter , for . *( b ) * and * ( c ) * decoherence function as a function of time for and different values of * ( b ) * , and for and different values of * ( c)*. , title="fig : " ] the exact unitary evolution , eq.([eq : evolutionoperator ] ) , directly provides us with the average values , as well as the two - time correlation functions of the observables of the system . 
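before turning to the correlation functions, the dependence of the markovianity criteria on the parameter can be made concrete. the zero-temperature dephasing rate quoted above (and rederived in appendix a) can be evaluated directly; the sketch below counts its sign changes and confirms that the rate stays positive for s up to 2, where both measures of non-markovianity vanish, while intervals of negative rate appear for larger s. the coupling strength and the cut-off are set to one, which is irrelevant for the sign analysis.

import numpy as np
from math import gamma as euler_gamma

def dephasing_rate(t, s, lam=1.0, wc=1.0):
    """zero-temperature dephasing rate for the spectral density with low-frequency power s
    and exponential cut-off, in the compact form of appendix a:
    gamma(t) = lam * wc * Gamma(s) * Im[(1 + i*wc*t)**s] / (1 + (wc*t)**2)**s."""
    z = (1.0 + 1j * wc * t) ** s
    return lam * wc * euler_gamma(s) * z.imag / (1.0 + (wc * t) ** 2) ** s

ts = np.linspace(1e-4, 40.0, 20000)
for s in (0.5, 1.0, 2.0, 2.5, 3.0, 4.5):
    g = dephasing_rate(ts, s)
    sign_changes = np.count_nonzero(np.diff(np.sign(g)) != 0)
    # gamma(t) >= 0 for all t <=> CP-divisible (RHP-Markovian); for this model also BLP-Markovian
    print(f"s = {s}: min rate = {g.min():+.3e}, sign changes = {sign_changes}")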
in view of the comparison with the description given by the quantum regression theorem ,see sec .[ sec : qrt ] , let us focus on the basis of linear operators on , orthonormal with respect to the hilbert - schmidt scalar product , given by .indeed , the first and the last element of the basis are constant of motion , see eq.([rho_s(t)matrix ] ) , while the mean values of and evolve according to , respectively , and the complex conjugate relation . in a similar way , all the two - time correlation functions involving or satisfy the condition of the quantum regression theorem in a trivial way , as at most one operator within the two - time correlation function actually evolves .the only non - trivial expressions are thus the following : where and .\ ] ] here , to derive we used the properties of the displacement operator and the equality .we can now obtain the corresponding two - time correlation functions as predicted by the quantum regression theorem . by eq.([eq : meann ] ) , one has and the complex conjugate relation for .the specific choice of the operator basis has lead us to a diagonal matrix in eq.([lindiffeq ] ) .hence , one has immediately the quantum regression theorem will be generally violated within this model , compare eq.([eq : exact ] ) and ( [ eq : qrt ] ) .we quantify such a violation by means of the figure of merit introduced in eq.([eq : figuremerit ] ) , which for the couple of operators and reads the expressions of the previous paragraph hold for generic initial state of the bath and spectral density .now , we come back to the specific choice of an initial thermal bath .the results in eq.([eq : qrt ] ) are in this case in agreement with those found in , where the two - time correlation functions have been evaluated focusing on a spectral density as in eq.([spectrals ] ) with , while keeping a generic temperature of the bath .instead , we will focus on the case and maintain a generic value of in order to compare the behaviour of the two - time correlation functions with the measures of non - markovianity .first , note that by using the definition of the displacement operator as well as eq.([eq : alpha ] ) , one can show the general identity but then , since for a thermal state is a function of only , eq.([eq : rel ] ) implies see eqs.([gammatwopoints ] ) and ( [ decoherentfunction ] ) . in additionwe have in the continuum limit , see eq.([phi ] ) , \ ] ] so that , for as in eq.([spectrals ] ) and using eq.([eq : dephasing ] ) in the zero temperature limit , we get the identities in eqs.([dephasingok ] ) and ( [ gammaok ] ) , along with eqs .( [ eq : tt ] ) and ( [ eq : phiz ] ) , finally provide us with the explicit expression of the estimator for the violations of the quantum regression theorem , see eq.([zmp ] ) , \right|,\end{gathered}\ ] ] whose behaviour as a function of and is shown in fig .[ fig : zs ] * ( a ) * and * ( b)*. * ( a ) * + as a function of the parameter and of the coupling strength , see eq.([eq : zslambda ] ) , for and . * ( b ) * section of * ( a ) * for .,title="fig : " ] + * ( b ) * + as a function of the parameter and of the coupling strength , see eq.([eq : zslambda ] ) , for and . * ( b ) * section of * ( a ) * for .,title="fig : " ] the violation of the quantum regression theorem monotonically increases with increasing values of both the coupling strength and the parameter .this behaviour is clearly in agreement with that of the rhp measure of non - markovianity , see sec .[ sec : zte ] and in particular fig .[ fig : nm ] . 
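the violation of the regression theorem can also be checked by brute force on a finite toy version of the model: a qubit dephasing-coupled to a single truncated bosonic mode, for which both the exact two-time correlation function and the regression prediction are computable numerically. the sketch below reconstructs the reduced dynamical map by propagating a basis of system operators with the environment reset to its initial state, and then evaluates the relative-error figure of merit; the single-mode hamiltonian, the coupling and the times are illustrative stand-ins for the continuum model of the text.

import numpy as np
from scipy.linalg import expm

nf = 15                                           # Fock-space truncation of the single mode
a = np.diag(np.sqrt(np.arange(1, nf)), k=1)       # annihilation operator
sz = np.diag([1.0, -1.0])
sp = np.array([[0, 1], [0, 0]], dtype=complex)    # sigma_+
sm = sp.conj().T
ie = np.eye(nf)

w0, nu, g = 1.0, 1.0, 0.4                         # illustrative parameters
h = 0.5 * w0 * np.kron(sz, ie) + nu * np.kron(np.eye(2), a.T @ a) + g * np.kron(sz, a + a.T)

rho_s0 = 0.5 * np.ones((2, 2), dtype=complex)     # |+><+|
rho_e = np.zeros((nf, nf), dtype=complex); rho_e[0, 0] = 1.0   # vacuum
rho_tot = np.kron(rho_s0, rho_e)

def u(t):
    return expm(-1j * h * t)

def ptrace_env(r):
    return np.einsum('injn->ij', r.reshape(2, nf, 2, nf))

def dyn_map(t):
    """reduced dynamical map as a 4x4 matrix on row-major vectorized system operators."""
    m = np.zeros((4, 4), dtype=complex)
    ut = u(t)
    for k in range(4):
        e = np.zeros(4, dtype=complex); e[k] = 1.0
        x = np.kron(e.reshape(2, 2), rho_e)
        m[:, k] = ptrace_env(ut @ x @ ut.conj().T).reshape(4)
    return m

def corr_exact(t, tau):
    a_h = u(t + tau).conj().T @ np.kron(sp, ie) @ u(t + tau)   # Heisenberg-picture sigma_+
    b_h = u(t).conj().T @ np.kron(sm, ie) @ u(t)               # Heisenberg-picture sigma_-
    return np.trace(a_h @ b_h @ rho_tot)

def corr_qrt(t, tau):
    lam_t, lam_ttau = dyn_map(t), dyn_map(t + tau)
    rho_s_t = ptrace_env(u(t) @ rho_tot @ u(t).conj().T)
    x = (sm @ rho_s_t).reshape(4)
    prop = lam_ttau @ np.linalg.inv(lam_t)        # two-parameter propagator, regression assumption
    return np.trace(sp @ (prop @ x).reshape(2, 2))

t, tau = 1.0, 2.0
c_ex, c_qrt = corr_exact(t, tau), corr_qrt(t, tau)
print(abs(c_ex - c_qrt) / abs(c_ex))              # relative-error estimator for this toy model

a nonvanishing value is expected already for this small environment, and the same qualitative agreement with the divisibility-based measure emerges in the full model, to which we now return.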
from a quantitative point of viewthere is , however , some difference as the estimator , at variance with the rhp measure , grows linearly with only for small values of , while it growths faster for ; compare with fig .[ fig : nm ] * ( b)*. in any case , the rhp measure appears to be more directly related with the strength of the violation to the quantum regression theorem , as compared with the blp measure .this can be traced back to the different influence of the system - environment correlations on the two measures . as we recalled in sec .[ sec : qrt ] , the hypothesis that the state of the total system at any time is well approximated by the product state between the state of the open system and the initial state of the environment , see eq.([eq : prodt ] ) , lies at the basis of the quantum regression theorem .this hypothesis is expected to hold in the weak coupling regime , while for an increasing value of , the interaction will build stronger system - environment correlations , leading to a strong violation of the quantum regression theorem . the establishment of correlations between the system and the environment due to the interaction plays a significant role also in the subsequent presence of memory effects in the dynamics of the open system .indeed , different signatures of the memory effects can be affected by system - environment correlations in different ways .in particular , the cp - divisibility of the dynamical maps appears to be a more fragile property than the contractivity of the trace distance and therefore it is more sensitive to the violations of the quantum regression theorem .furthermore , it is worth noting that the estimator steadily increases with the coupling strength even for values of such that the corresponding reduced dynamics is markovian according to either definitions .the validity of the quantum regression theorem calls therefore for stricter conditions than the markovianity of quantum dynamics .in the pure dephasing spin - boson model , there is no regime in which the quantum regression theorem is strictly satisfied , apart from the trivial case .in addition , we have shown that the strength of the violations of this theorem has the same qualitative behaviour of the rhp non - markovianity measure , as they increase with both and the parameter . in this section ,we take into account a different pure dephasing model , which allows us to deepen our analysis on the relationship between the quantum regression theorem and the markovianity of the reduced - system dynamics . 
in particular , we show that in general these two notions should be considered as different since the quantum regression theorem may be strongly violated , even if the open system s dynamics is markovian , irrespective of the exploited definition .let us deal with the pure - dephasing interaction considered in ref .the open system here is represented by the polarization degrees of freedom of a photon generated by spontaneous parametric down conversion , while the environment consists in the corresponding frequency degrees of freedom .the overall unitary evolution , which is realized via a quartz plate that couples the polarization and frequency degrees of freedom , can be described as where and are the two polarization states ( horizontal and vertical ) , with refractive indexes , respectively , and , while is the environmental state with frequency .if we consider an initial product state , see eq.([eq : prod ] ) , with a pure environmental state , where we readily obtain that the reduced dynamics is given by eq.([rho_s(t)matrix ] ) .again , we are in the presence of a pure dephasing dynamics , the only difference being the decoherence function , which now reads with . for the rest , the results of secs . [ sec : tm ] and [ sec : nmm ] directly apply also to this model : the master equation is given by eq.([eq : me ] ) , with and as in , respectively , eq.([eq : eps ] ) ( for ) and eq.([eq : dt ] ) , while the non - markovianity measures are as in eq.([eq : nmblp ] ) and eq.([eq : nmrhp ] ) .analogously , the two - time correlation functions are given by eq .( [ eq : exact ] ) with while the application of the quantum regression theorem leads to the expressions in eq.([eq : qrt ] ) ( with ) .hence , the violations of the quantum regression theorem can be quantified by despite its great simplicity , this model allows to describe the transition between markovian and non - markovian dynamics in concrete experimental settings .different dynamics are obtained for different choices of the initial environmental state , see eq.([eq : prod ] ) and the related discussion , i.e. , for different initial frequency distributions , see eq.([eq : psie ] ) .the latter can be experimentally set , e.g. , by properly rotating a fabry - prot cavity , through which a beam of photons generated by spontaneous parametric down conversion passes .a natural benchmark is represented by the lorentzian distribution },\ ] ] where is the width of the distribution and its central frequency , as this provides a reduced semigroup dynamics .the decoherence function , which is given by the fourier transform of the frequency distribution , see eq.([decoherencenature ] ) , is in fact thus , replacing this expression in eqs .( [ eq : eps ] ) and ( [ eq : dt ] ) , one obtains a lindblad equation , given by eq .( [ eq : me ] ) with and .in addition , and hence , as one can immediately see by eq.([znature ] ) , . 
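the decoherence function of the photonic model can be obtained numerically as the fourier transform of the chosen frequency distribution. the sketch below does this for a single lorentzian and checks that its modulus decays as a pure exponential, which is the semigroup (lindblad) case for which the estimator of the violation vanishes; central frequency, width and refractive-index mismatch are illustrative numbers.

import numpy as np

delta_n = 1.0                      # refractive-index difference between the two polarizations
omega0, width = 5.0, 0.5           # illustrative central frequency and width

def lorentzian(w):
    return (width / np.pi) / ((w - omega0) ** 2 + width ** 2)

w = np.linspace(omega0 - 200 * width, omega0 + 200 * width, 400_001)
dw = w[1] - w[0]

def decoherence(t):
    # decoherence function as the Fourier transform of the frequency distribution
    return np.sum(lorentzian(w) * np.exp(1j * delta_n * w * t)) * dw

for t in (0.5, 1.0, 2.0):
    g = decoherence(t)
    # single Lorentzian: |Gamma(t)| = exp(-width * delta_n * t), up to tail-truncation error
    print(t, abs(g), np.exp(-width * delta_n * t))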
for this model ,as long as the reduced dynamics is determined by a completely positive semigroup , the quantum regression theorem is strictly valid .let us emphasize , that this is the case even if the total state is not a product state at any time .for example if the initial state of the open system is the pure state , with , the total state at time is this is an entangled state , of course unless or ; nevertheless , the quantum regression theorem does hold .this clearly shows that for the quantum regression theorem , as for the semigroup description of the dynamics , the approximation encoded in eq.([eq : prodt ] ) should be considered as an effective description of the total state , which can be very different from its actual form , even when the theorem is valid .now , we consider a more general class of frequency distributions ; namely , the linear combination of two lorentzian distributions , },\ ] ] with .the decoherence function is in this case with , while the estimator of the violations of the quantum regression theorem , see eq.([znature ] ) , can be written as a function of the difference between the central frequencies , , as well as of the difference between the corresponding widths , .if we assume that the two central frequencies are equal , , the evolution of the two - level statistical operator is fixed by a time - local master equation as in eq.([eq : me ] ) , with and the latter is a positive function of time : the reduced dynamics is cp - divisible , see sec .[ sec : nmrhp ] , and hence it is markovian with respect to both the blp and rhp definitions. indeed , now we are in the presence of a time - inhomogeneous markovian dynamics .nevertheless , as the quantum regression theorem is violated , see eq.([znature ] ) .this is explicitly shown in fig .[ fig:4 ] * ( a ) * , where is plotted as a function of and , with . with growing difference between the two widths , as well as the length of the time interval , the deviations from the quantum regression theorem are increasingly strong , up to a saturation value of the estimator .contrary to the semigroup case , here , even if the dynamics is markovian according to both definitions , the actual behaviour of the two - time correlation functions can not be reconstructed by the evolution of the mean values . * ( a ) * + in eq.([znature ] ) * ( a ) * in the time - inhomogeneous markovian case , , as a function of and , for and ; * ( b ) * in the non - markovian case , , as a function of and , for and ; in all the panels .,title="fig : " ] + * ( b ) * + in eq.([znature ] ) * ( a ) * in the time - inhomogeneous markovian case , , as a function of and , for and ; * ( b ) * in the non - markovian case , , as a function of and , for and ; in all the panels .,title="fig : " ] finally , let us consider a frequency distribution as in eq ., but now with and .this frequency distribution has two peaks and the resulting reduced dynamics is non - markovian . in this casethe blp non - markovianity measure increases with the increasing of the distance between the two peaks , while the estimator grows for small values of the distance and then it exhibits an oscillating behaviour , see fig .[ fig:4 ] * ( b)*. 
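the time-inhomogeneous markovian case with two equal-centre lorentzians of different widths can be probed with a short computation: the dephasing rate obtained from the resulting decoherence function stays positive, so cp-divisibility holds, while the semigroup factorization of the decoherence function fails. the factorization deviation printed below is used only as a simple proxy for the regression-theorem violation discussed above, not as the paper's exact estimator, and all parameter values are illustrative.

import numpy as np

aw, d1, d2, w0, dn = 0.5, 0.2, 1.0, 5.0, 1.0   # mixture weight, two widths, centre, index mismatch

def gamma(t):
    # decoherence function for two Lorentzians with equal centres and widths d1, d2
    return np.exp(1j * w0 * dn * t) * (aw * np.exp(-d1 * dn * t) + (1.0 - aw) * np.exp(-d2 * dn * t))

ts = np.linspace(0.0, 10.0, 5001)
dt = ts[1] - ts[0]
rate = -np.gradient(np.log(np.abs(gamma(ts))), dt)    # dephasing rate = -d/dt ln|Gamma|
print("min dephasing rate:", rate.min())               # >= 0: CP-divisible, Markovian by both criteria

t, tau = 1.0, 2.0
z_proxy = abs(1.0 - gamma(t + tau) / (gamma(t) * gamma(tau)))
print("semigroup-factorization deviation:", z_proxy)   # > 0: regression-type prediction fails

for the two-peak distribution considered last, the same computation with a nonzero separation between the central frequencies reproduces the oscillatory behaviour of the estimator.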
indeed , for one recovers the semigroup dynamics previously described and , accordingly , goes to zero .summarizing , by varying the distance between the two peaks , one obtains a transition from a markovian ( semigroup ) dynamics to a non - markovian one and , correspondingly , the quantum regression theorem ceases to be satisfied and is even strongly violated .nevertheless , the qualitative behaviour of , respectively , the non - markovianity of the reduced dynamics and the violation of the quantum regression theorem appear to be different .we have explored the relationship between two criteria for markovianity of a quantum dynamics , namely the cp - divisibility of the quantum dynamical map and the behaviour in time of the trace distance between two distinct initial states , and the validity of the quantum regression theorem , which is a statement relating the behaviour in time of the mean values and of the two - time correlation functions of system operators .the first open system considered is a two - level system affected by a bosonic environment through a dephasing interaction . for a class of spectral densities with exponential cut - off and power law behaviour at low frequencies we have studied the onset of non - markovianity as a function of the coupling strength and of the power determining the low frequency behaviour , further giving an exact expression for the corresponding non - markovianity measures .the deviation from the quantum regression theorem has been estimated evaluating the relative error made in replacing the exact two - time correlation function for the system operators with the expression reconstructed by the evolution of the corresponding mean values .it appears that the validity of the quantum regression theorem represents a stronger requirement than markovianity , according to either criteria , which in this case coincide but quantify non - markovianity in a different way and exhibit distinct performances in their dependence on strength of the coupling and low frequency behaviour .we have further considered an all - optical realization of a dephasing interaction , as recently exploited for the experimental investigation of non - markovianity , obtaining also in this case , for different choices of the frequency distribution , significant violations to the quantum regression theorem even in the presence of a markovian dynamics .these results suggest that indeed the recently introduced new approaches to quantum non - markovianity provide a weaker requirement with respect to the classical notion of markovian classical process . further andmore stringent notion of markovian quantum dynamics can therefore be introduced , e.g. relying on validity of the quantum regression theorem .however , the usefulness of such criteria will heavily depend on the possibility to verify their satisfaction directly by means of experiments , as it is the case e.g. for the notion of markovianity based on trace distance , without asking for an explicit exact knowledge of the dynamical equations .the authors gratefully acknowledge financial support by the eu projects cost action mp 1006 and nanoquestfit .starting from eq . , namely and exploiting the identities together with we can come to the compact expression \notag\\ & = \frac{\lambda\omega\gamma(s)}{2i\left ( 1+(\omega t)^2 \right)^s } \left[(1+i\omega t)^s - ( 1-i\omega t)^s\right ] \notag\\ & = \lambda\omega\gamma(s)\frac{im\left[(1+i\omega t)^s\right]}{\left ( 1+(\omega t)^2 \right)^s}.\end{aligned}\ ] ] 99 g. 
lindblad , comm .* 48 * , 119 ( 1976 ) g.lindblad , comm .phys . * 65 * , 281 ( 1979 ) m. m. wolf , j. eisert , t. s. cubitt , and j. i. cirac , phys .* 101 * , 150402 ( 2008 ) h .- p .breuer , e .-laine , and j. piilo , phys .lett . * 103 * , 210401 ( 2009 ) h .- p .breuer , j. phys .b * 45 * 154001 ( 2012 ) .rivas , s.f .huelga , and m.b .plenio , phys .* 105 * , 050403 ( 2010 ) x .- m .lu , x. wang and c.p .sun , phys .a * 82 * , 042103 ( 2010 ) s. luo , s. fu , h. song , phys . rev .a * 86 * , 044101(2012 ) s. lorenzo , f. plastina , m. paternostro , phys .a * 88 * , 020102(r ) ( 2013 ) b. bylicka , d. chruciski , s. maniscalco , arxiv:1301.2585 d. chruciski and s. maniscalco , phys .112 * , 120404 ( 2014 ) h .- p . breuer and f. petruccione ,_ the theory of open quantum systems _ ( oxford university press , oxford , 2002 ) c.w .gardiner and p. zoller , _ quantum noise : a handbook of markovian and non - markovian quantum stochastic methods with applications to quantum optics _( springer , berlin , 2004 ) b .- h .liu , l. li , y .- f .huang , c .-f li , g .- c .laine , h .-breuer , j. piilo , nat .* 7 * , 931 ( 2011 ) b. vacchini , a. smirne , e .-laine , j. piilo , h .-breuer , new j. phys .* 13 * 093004 ( 2011 ) n. lo gullo , i. sinayskiy , t. busch , f. petruccione , arxiv:1401.1126 c. addis , b. bylicka , d. chruciski and sabrina maniscalco , arxiv:1402.4975 .rivas , s.f .huelga , and m.b .plenio , arxiv:1405.0303 , to appear in rep prog phys v. gorini , a. kossakowski , and e.c.g .sudarshan , j. math . phys . * 17 * , 821 ( 1976 ) m. nielsen and i. chuang , _ quantum computation and quantum information _( cambridge university press , cambridge , 2000 ) s. wissmann , a. karlsson , e .- m .laine , j.piilo , h .-breuer , phys .a * 86 * , 062108 ( 2012 ) b .- h .liu , s. wissmann , x .-hu , c. zhang , y .- f .huang , c .- f .li , g .- c .guo , a. karlsson , j. piilo , and h .- p .breuer , arxiv:1403.4261 j .- s .tang , c .- f .li , y .- l .li , x .- b .zou , g .- c .breuer , e .-laine , and j. piilo , europhys .97 , 10002 ( 2012 ) b. h. liu , d .- y .cao , y .- f .huang , c .- f .li , g .- c .breuer , e .-laine , and j. piilo , sci .rep . 3 , 1781 ( 2013 ) e .- m .laine , j. piilo , and h .-breuer , phys .a * 81 * , 062115 ( 2010 ) a. rivas and s.f .huelga , _ open quantum systems , an introduction _ , springer briefs in physics 2012 m. d. choi , lin . alg . appl . * 10 * , 285 ( 1975 ) p. haikka , j.d .cresser , and s. maniscalco , phys .a * 83 * , 012112 ( 2011 ) d. chruciski , a. kossakowski , and .rivas , phys .a * 83 * , 052128 ( 2011 ) h. carmichael , _ an open systems approach to quantum optics _( springer - verlag , berlin , 1993 ) s. swain , j. phys . a : math* 14 * , 2577 ( 1981 ) r. dmcke , j. math. phys . * 24 * , 311 ( 1983 ) e. b. davies , commun .phys . * 39 * , 91 ( 1974 ) and math . ann . *219 * , 147 ( 1976 ) v. gorini and a. kossakowski , j. math .* 17 * , 1298 ( 1976 ) ; a. frigerio and v. gorini , j. math . phys .* 17 * , 2123 ( 1976 ) p. talkner , ann. phys . * 167 * , 390 ( 1986 ) g.w .ford and r.f .oconnell , phys .lett . * 77 * , 798 ( 1996 ) m. lax , phys .rev * 172 * , 350 ( 1968 ) w.g .unruh , phys .a * 51 * , 992 ( 1995 ) a. ferraro , s. olivares , and m.g.a .paris , _ gaussian states in quantum information _ ( bibliopolis , naples , 2005 ) h .- s .zeng , n. tang , y .- p .zheng , g .- y .wang , phys .a * 84 * , 032118 ( 2011 ) p. haikka , t. h. johnson , s. 
maniscalco , phys .a * 87 * , 010103(r ) ( 2013 ) h .- s .goan , c .- c .jian , p .- w .chen , phys .a * 82 * , 012111 ( 2010 ) e .- m .laine , j. piilo , and h .-breuer , europhys .lett . * 92 * , 60010 ( 2010 ) l. mazzola , c. a. rodrguez - rosario , k. modi , and m. paternostro , phys .a * 86 * , 010102(r ) ( 2012 ) a. smirne , l. mazzola , m. paternostro , b. vacchini , phys . rev.a * 87 * , 052129 ( 2013 ) .rivas , a.d .plato , s.f .huelga , and m.b .plenio , new j. phys .* 12 * , 113032 ( 2010 )
|
we explore the connection between two recently introduced notions of non-markovian quantum dynamics and the validity of the so-called quantum regression theorem. while non-markovianity of a quantum dynamics has been defined by looking at the behaviour in time of the statistical operator, which determines the evolution of mean values, the quantum regression theorem makes statements about the behaviour of system correlation functions of order two and higher. the comparison relies on an estimate of the validity of the quantum regression hypothesis, which can be obtained by exactly evaluating two-point correlation functions. to this aim we consider a qubit undergoing dephasing due to interaction with a bosonic bath, comparing the exact evaluation of the non-markovianity measures with the violation of the quantum regression theorem for a class of spectral densities. we further study a photonic dephasing model, recently exploited for the experimental measurement of non-markovianity. it appears that while a non-markovian dynamics according to either definition brings with it a violation of the regression hypothesis, even markovian dynamics can lead to a failure of the regression relation.
|
in multi - agent pursuit - evasion problems one or more pursuers try to maneuver and reach a relatively small distance with respect to one or more evaders , which strive to escape the pursuers .this problem is usually posed as a dynamic game , , .thus , a dynamic voronoi diagram has been used in problems with several pursuers in order to capture an evader within a bounded domain , . on the other hand , presented a receding - horizon approach that provides evasive maneuvers for an unmanned autonomous vehicle ( uav ) assuming a known model of the pursuer s input , state , and constraints . in ,a multi - agent scenario is considered where a number of pursuers are assigned to intercept a group of evaders and where the goals of the evaders are assumed to be known .cooperation between two agents with the goal of evading a single pursuer has been addressed in and . in this paperwe consider a zero - sum three - agent pursuit - evasion differential game .a two - agent team is formed which consists of a target ( ) and a defender ( ) who cooperate ; the attacker ( ) is the opposition .the goal of the attacker is to capture the target while the target tries to evade the attacker and avoid capture .the target cooperates with the defender which pursues and tries to intercept the attacker before the latter captures the target .cooperation between the target and the defender is such that the defender will capture the attacker before the latter reaches the target .such a scenario of active target defense has been analyzed in the context of cooperative optimal control in , .indeed , sensing capabilities of missiles and aircraft allow for implementation of complex pursuit and evasion strategies , , and more recent work has investigated different guidance laws for the agents and .thus , in the authors addressed the case where the defender implements command to the line of sight ( clos ) guidance to pursue the attacker which requires the defender to have at least the same speed as the attacker . in the end - game for the tad scenariowas analyzed based on the minimization / maximization of the attacker / target miss distance for a _ non - cooperative _ target / defender pair .the authors develop linearization - based attacker maneuvers in order to evade the defender and continue pursuing the target . a different guidance law for the target - attacker - defender ( tad ) scenario was given by yamasaki _ et.al ._ , .these authors investigated an interception method called triangle guidance ( tg ) , where the objective is to command the defending missile to be on the line - of - sight between the attacking missile and the aircraft for all time , while the target aircraft follows some predetermined trajectory .the authors show , through simulations , that tg provides better performance in terms of defender control effort than a number of variants of proportional navigation ( pn ) guidance laws , that is , when the defender uses pn to pursue the attacker instead of tg .the previous approaches constrain and limit the level of cooperation between the target and the defender by implementing defender guidance laws without regard to the target s trajectory .different types of cooperation have been recently proposed in , , , , , , for the tad scenario . 
in optimal policies ( lateral acceleration for each agent including the attacker ) were provided for the case of an aggressive defender , that is , the defender has a definite maneuverability advantage .a linear quadratic optimization problem was posed where the defender s control effort weight is driven to zero to increase its aggressiveness . the work provided a game theoretical analysis of the tad problem using different guidance laws for both the attacker and the defender .the cooperative strategies in allow for a maneuverability disadvantage for the defender with respect to the attacker and the results show that the optimal target maneuver is either constant or arbitrary .shaferman and shima implemented a multiple model adaptive estimator ( mmae ) to identify the guidance law and parameters of the incoming missile and optimize a defender strategy to minimize its control effort .in the recent paper the authors analyze different types of cooperation assuming the attacker is oblivious of the defender and its guidance law is known .two different one - way cooperation strategies were discussed : when the defender acts independently , the target knows its future behavior and cooperates with the defender , and vice versa .two - way cooperation where both target and defender communicate continuously to exchange their states and controls is also addressed , and it is shown to have a better performance than the other types of cooperation - as expected . our preliminary work , considered the cases when the attacker implements typical guidance laws of pure pursuit ( pp ) and pn , respectively . in these papers , the target - defender team solves an _ optimal control _ problem that returns the optimal strategy for the team so that intercepts the attacker and at the same time the separation between target and attacker at the instant of interception of by is maximized .the cooperative optimal guidance approach was extended ( , , ) to consider a differential game where also the attacker missile solves an optimal control problem in order to minimize the final separation between itself and the target . in this paper , we focus on characterizing the region of the reduced state space formed by the agents initial positions for which survival of the target is guaranteed when both the target and the defender employ their optimal strategies . the optimal strategies for each one of the three agents participating in the active target defense differential game are provided in this paper as well . 
the paper is organized as follows .section [ sec : problem ] describes the engagement scenario .section [ sec : analysis ] presents optimal strategies for each one of the three participants in order to solve the differential game discussed in the paper .the target escape region is characterized in section [ sec : escape ] .examples are given in section [ sec : example ] and concluding remarks are made in section [ sec : concl ] .the active target defense engagement in the realistic plane is illustrated in figure [ fig : problem description ] .the speeds of the target , attacker , and defender are denoted by , , and , respectively , and are assumed to be constant .the simple - motion dynamics of the three vehicles in the realistic plane are given by : where the headings of , , and are , respectively , , , and .in this game the attacker pursues the target and tries to capture it .the target and the defender cooperate in order for the defender interpose himself between the attacker and the target and to intercept the attacker before the latter captures the target .thus , the target - defender team searches for a cooperative optimal strategy , optimal headings and , to maximize the separation between the target and the attacker at the time instant of the defender - attacker collision .the attacker will devise its corresponding optimal strategy , optimal heading , in order to minimize the terminal separation / miss distance . define the speed ratio problem parameter .we assume that the attacker missile is faster than the target aircraft , so that . in this workwe also assume the attacker and defender missiles are somewhat similar , so . in the following sectionsthis problem is transformed to an aimpoint problem where each agent finds is optimal aimpoint .furthermore , it is shown that the solution of the differential game involving three variables ( the aimpoint of each one of the three agents ) is equivalent to the solution of an optimization problem in only one variable .we now undertake the analysis of the active target defense differential game .the target ( ) , the attacker ( ) , and the defender ( ) have simple motion " la isaacs .we also emphasize that , , and have constant speeds of , , and , respectively .we assume that and the speed ratio .we confine our attention to point capture , that is , the separation has to become zero in order for the defender to intercept the attacker . and form a team to defend from .thus , strives to close in on while and maneuver such that intercepts before the latter reaches and the distance at interception time is maximized , while strives to minimize the separation between and at the instant of interception .since the cost is a function only of the final time ( the interception time instant ) and the agents have simple motion dynamics , the optimal trajectories of each agent are straight lines . in figure [fig : one ] the points and represent the initial positions of the attacker and the defender in the reduced state space , respectively .a cartesian frame is attached to the points and in such a way that the extension to infinity of the segment in both directions represents the -axis and the orthogonal bisector of represents the -axis .the state variables are , , and .notice that all points in the left - half - plane ( lhp ) can be reached by the defender before the attacker does ; similarly , all points in the right - half - plane ( rhp ) can be reached by the attacker before the defender does . 
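a simple kinematic sketch of the engagement described above is given below: the three agents move along fixed headings with constant speeds, and the simulation reports the interception time together with the target-attacker separation at the instant the defender reaches the attacker. the headings, speeds and positions are arbitrary illustrative values, not the optimal strategies derived in the following subsections.

import numpy as np

def simulate(p_t, p_a, p_d, psi_t, psi_a, psi_d, v_t, v_a, v_d,
             dt=1e-3, t_max=50.0, capture_radius=1e-2):
    """Straight-line, simple-motion engagement: propagate target, attacker and defender along
    fixed headings; report (interception time, target-attacker separation) when the defender
    first comes within capture_radius of the attacker (point capture idealized by a small radius)."""
    p_t, p_a, p_d = (np.array(p, dtype=float) for p in (p_t, p_a, p_d))
    v_vec = lambda v, psi: v * np.array([np.cos(psi), np.sin(psi)])
    u_t, u_a, u_d = v_vec(v_t, psi_t), v_vec(v_a, psi_a), v_vec(v_d, psi_d)
    t = 0.0
    while t < t_max:
        if np.linalg.norm(p_d - p_a) <= capture_radius:
            return t, np.linalg.norm(p_t - p_a)   # defender intercepts the attacker
        if np.linalg.norm(p_a - p_t) <= capture_radius:
            return t, 0.0                          # attacker reaches the target first
        p_t, p_a, p_d = p_t + u_t * dt, p_a + u_a * dt, p_d + u_d * dt
        t += dt
    return None

# reduced frame of the text: A and D symmetric about the y-axis, equal attacker/defender speeds
print(simulate(p_t=(2.0, 1.0), p_a=(3.0, 0.0), p_d=(-3.0, 0.0),
               psi_t=np.pi, psi_a=np.deg2rad(160.0), psi_d=np.deg2rad(20.0),
               v_t=0.8, v_a=1.0, v_d=1.0))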
in this paperwe focus on the case where the target is initially closer to the attacker than to the defender ; in other words , assume that . with respect to figure [ fig : one ] we note that the defender will intercept the attacker at point on the orthogonal bisector of at which time the target will have reached point .the attacker aims at minimizing the distance between the target at the time instant when the defender intercepts the attacker , that is , the distance between point and point on the orthogonal bisector of where the defender intercepts the attacker ; the points and represent the initial and terminal positions of the target , respectively . when the attacker and the target are faced with a maxmin optimization problem : the target chooses point and the attacker chooses point on the y - axis , see figure [ fig : maxminej ] .additionally , the defender tries to intercept the attacker by choosing his aimpoint at point on the y - axis .thus , the optimization problem is where the function represents the distance between the target terminal position and the point where the attacker is intercepted by the defender .the target tries to cross the orthogonal bisector of into the lhp where the defender will be able to allow it to escape by intercepting the attacker at the point on the orthogonal bisector of .therefore , the defender s optimal policy is in order to guarantee interception of the attacker .the optimality of this choice by the defender will be shown in proposition [ prop : spxtg0 ] .since the defender s optimal policy is , the decision variables and jointly determine the distance between the target terminal position and the point where the attacker is intercepted by the defender .this distance is a function of the decision variables and .thus , the attacker and the target solve the following optimization problem now , let us analyze the possible strategies . if the target chooses , the attacker will respond and choose . if the target would correct his decision and choose some such that , as shown in figure [ fig : maxminej ] for the case where and in figure [ fig : maxminej2 ] for the case where .in general , choosing is detrimental to the attacker since his cost will increase .thus , the attacker should aim at the point which is chosen by the target , that is , .given the cost / payoff function , the solution and of the optimization problem is such that moreover , when , the attacker strategy is so that it suffices to solve the optimization problem where assume that the attacker is faster than the target , for otherwise the target could always escape without the help of the defender .thus , we assume that the speed ratio . also , we assume that .the target needs to be able to break into the lhp before being intercepted by the attacker for the defender to be able to assist the target to escape , by intercepting the attacker who is on route to the target .thus , a solution to the active target defense differential game exists if and only if the apollonius circle , which is based on the segment and the speed ratio , intersects the orthogonal bisector of .this imposes a lower limit on the speed ratio , that is , we need .the critical speed ratio corresponds to the case where the apollonius circle is tangent to the orthogonal bisector of . 
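the escape condition can be checked numerically from the geometry just described: the target can be saved if and only if the apollonius circle built on the target-attacker segment with the given speed ratio reaches the orthogonal bisector of the attacker-defender segment. the sketch below computes the circle and the critical speed ratio in the reduced frame, with the attacker at (x_A, 0) and the defender at (-x_A, 0); the quadratic solved for the critical ratio is reconstructed from the tangency condition stated in the text, and the sample positions are illustrative.

import numpy as np

def apollonius_circle(t_pos, a_pos, alpha):
    """Center and radius of the Apollonius circle (points the target reaches no later than
    the attacker) for speed ratio alpha = V_T / V_A < 1, built on the segment TA."""
    t_pos, a_pos = np.asarray(t_pos, float), np.asarray(a_pos, float)
    center = (t_pos - alpha ** 2 * a_pos) / (1.0 - alpha ** 2)
    radius = alpha * np.linalg.norm(a_pos - t_pos) / (1.0 - alpha ** 2)
    return center, radius

def critical_speed_ratio(t_pos, a_pos):
    """Smallest alpha for which the circle just touches the orthogonal bisector of AD
    (the y-axis of the reduced frame), i.e. x_center - radius = 0; this gives the positive
    root of x_A*alpha**2 + |TA|*alpha - x_T = 0 (reconstructed from the stated geometry)."""
    t_pos, a_pos = np.asarray(t_pos, float), np.asarray(a_pos, float)
    d = np.linalg.norm(a_pos - t_pos)
    return (-d + np.sqrt(d ** 2 + 4.0 * a_pos[0] * t_pos[0])) / (2.0 * a_pos[0])

t_pos, a_pos = (2.0, 1.0), (3.0, 0.0)   # reduced frame: A at (x_A, 0), D at (-x_A, 0)
for alpha in (0.6, 0.9):
    c, r = apollonius_circle(t_pos, a_pos, alpha)
    print(alpha, c, r, "escape possible:", c[0] - r <= 0.0)
print("critical speed ratio:", critical_speed_ratio(t_pos, a_pos))

the tangent configuration mentioned above corresponds exactly to the critical ratio printed in the last line.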
andif the speed ratio the target always escapes and there is no need for a defender missile , that is , no target defense differential game is played out .the optimal strategies for the case can be obtained in a similar way as shown in this paper and the critical speed ratio is . assume that .then , the critical speed ratio is a function of the positions of the target and the attacker and is given by _ proof_. the attacker s initial position , the target s initial position , and the center of the apollonius circle are collinear and lie on the dotted straight line in figure [ fig : one ] whose equation is the geometry of the apollonius circle is as follows : the center of the circle , denoted by , is at a distance of from and its radius is , where is the distance between and and is given by hence , the following holds % \label{eq : circleeq}\end{aligned}\ ] ] and we calculate the coordinates of the center of the apollonius circle consequently , the critical speed ratio is the positive solution of the quadratic equation which is given by . in general , it can be seen from figure [ fig : one ] that if then as well .we will assume , so that a solution to the active target defense differential game exists ; otherwise , if , the defender will not be able to help the target by intercepting the attacker before the latter inevitably captures the target ; and if then the target can always evade the attacker and there is no need for a defender .when the target is on the side of the attacker , the target chooses its aimpoint , denoted by , on the orthogonal bisector of in order to maximize its payoff function , the final separation between target and attacker , and where represents the coordinate of the aimpoint on the orthogonal bisector of .this is so because the attacker will aim at the point . in order to minimizethe optimal strategy of the attacker is to choose the same aimpoint on the orthogonal bisector of , where it will be intercepted by the defender . in order to find the maximum of we differentiate eq . in and set the resulting derivative equal to zero the following quartic equation in is obtained in the sequel we focus on the case .in addition and without loss of generality assume that .let us divide both sides of eq .by and set , , and , whereupon the quartic equation assumes the canonical form we are interested in the real and positive solutions of the canonical quartic equation .has two real solutions , when , has two repeated solutions at and two complex solutions . _remark_. writing the quartic equation as we see that , , and . therefore , equation has two real solutions .equation has a real solution and an additional real solution , provided that .note that the quartic equation is parameterized by , so whether or makes no difference as far as the solutions to the quartic equation are concerned .however , if the applicable real solution is , whereas if the applicable real solution is .when , by choosing his heading , the target ( and the defender ) thus choose the coordinate to maximize ; that is , is the target s ( and defender s ) choice .then the payoff is given by eq . 
andthe expression for was shown in .the second derivative of the payoff function the target is choosing to maximize the cost .now , the attacker reacts by heading towards the point on the orthogonal bisector of where , invariably , he will be intercepted by the defender .both the target and the attacker know that the three points must be collinear .the defender will not allow the attacker to cross the orthogonal bisector because then the attacker will start to close in on the target .the optimal coordinate is the solution of the quartic equation such that the second - order condition for a maximum holds on . in view ofwe know that and inserting into yields we have that if and only if hence , the first real solution of the quartic equation does not fulfill the role of yielding a maximum and the second real solution of is the candidate solution . it is the target who chooses to maximize the payoff .note that and , where and are the real solutions of .inserting eq . into eq .yields the target and defender payoff and using when we have that .hence , the solution of the quartic equation must satisfy this situation is illustrated in figure [ fig : opsol ] where the three points , , and are collinear .concerning expression , we also need the solution of the quartic equation to satisfy the second real solution of the quartic equation must satisfy the points of intersection of the apollonius circle with the -axis ( the orthogonal bisector ) are and , where and are the solutions of the quadratic equation where the distance and the apollonius circle s center coordinates are given by .we have that where because and , from , we have that .hence , which results in the target s choice of the optimal , namely , the solution of the quartic equation must satisfy the inequalities [ prop : spxtg0 ] ( saddle point equilibrium ) . consider the case .the strategy of the target , where is the real solution of the quartic equation which maximizes , and the strategy of the defender of heading to the point , together with the strategy of the attacker of aiming at the point , constitute a strategic saddle point , that is this section we analyze the target s escape region for given target and attacker speeds , and , respectively .in other words , for given speed ratio . consider the active target defense differential game where the attacker and the defender missiles have the same speeds .when , the critical value of the speed ratio parameter can be obtained as a function of the attacker s and the target s coordinates , , and such that the target is guaranteed to escape since the defender will be able to intercept the attacker before the latter reaches the target .now , for a given target s speed ratio , , and for given attacker s initial position , , we wish to characterize the region of the reduced state space for which the target is guaranteed to escape . in other words, we want to separate the reduced state space into two regions : and .the region is defined as the set of all coordinate pairs such that if the target s initial position is inside this region , then , it is guaranteed to escape the attacker if both the target and the defender implement their corresponding optimal strategies .the region , represents all other coordinate pairs in the reduced state space where the target s escape is not guaranteed . 
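the aim-point optimization described above can also be carried out numerically. assuming, as sketched in the text, that the attacker and the target head to the same point (0, y) on the orthogonal bisector where the defender intercepts the attacker, the terminal separation is the distance the target covers during the engagement minus its initial distance to the aim point; this reconstructed payoff is maximized below over y. its stationarity condition is the quartic equation discussed above, and a positive optimal payoff corresponds to the escape region characterized next; the parameter values are illustrative.

import numpy as np
from scipy.optimize import minimize_scalar

def payoff(y, t_pos, a_pos, alpha):
    """terminal target-attacker separation when A (and D, mirrored) aim at (0, y) and the target
    heads through the same point: alpha*dist(A,(0,y)) - dist(T,(0,y)), reconstructed from the
    Apollonius-circle argument in the text (reduced frame, A at (x_A, 0), T at (x_T, y_T))."""
    x_t, y_t = t_pos
    x_a = a_pos[0]
    return alpha * np.hypot(x_a, y) - np.hypot(x_t, y - y_t)

t_pos, a_pos, alpha = (2.0, 1.0), (3.0, 0.0), 0.7
res = minimize_scalar(lambda y: -payoff(y, t_pos, a_pos, alpha), bounds=(-50.0, 50.0), method='bounded')
y_star, j_star = res.x, -res.fun
print(f"optimal aim point y* = {y_star:.4f}, payoff J* = {j_star:.4f}")
print("target escapes" if j_star > 0 else "capture is guaranteed (the attacker would simply aim at the target)")

the boundary between positive and non-positive optimal payoff is precisely the curve characterized next.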
for given speed ratio and for given attacker s initial position in the reduced state space , the curve that divides the reduced state space into the two regions and is characterized by the right branch of the following hyperbola ( that is , ) _ proof_. the requirement for the target to escape being captured by the attackeris that the apollonius circle intersects the y - axis .the radius of the apollonius circle is and the x - coordinate of its center is if , we need for the defender to be of any help to the target .thus , , , and must satisfy the condition equivalently , which is also equivalent to when the ` greater than ' sign in inequality is changed to ` equal ' sign , the resulting equation defines the curve that divides the reduced state space into regions and . additionally , since the symmetric case can be treated in a similar way as the case , we do not need to restrict to be greater than or equal to zero .thus , the coordinate pairs such that can be written in the hyperbola canonical form shown in . _ remark_. note that for a given speed ratio , the family of hyperbolas characterized by different values of shares the same center which is located at , and the same asymptotes which are given by the lines and .one can also see that , for , the slope of the asymptotes increases as decreases and viceversa .this behavior is expected since a relatively faster target will be able to escape the attacker when starting at the same position as a relatively slower target .it is important to emphasize that if then capture of the target by the attacker is guaranteed if the attacker employs its optimal strategy . in this casethe optimal strategies described in section [ subsec : optimal ] will result in . being negative makes sense in terms of the differential game formulated in this paper ( recall that the attacker tries to minimize ) .however , the cost / payoff function represents a distance and it does not make sense for it to be negative in the real scenario where the attacker tries to capture the target , i.e. the terminal separation should be zero instead of negative . based on the solution of the differential game presented in section [ subsec : optimal ] , the attacker is able to redefine its strategy and capture the target , that is , to obtain .the new strategy is as follows .the attacker , by solving the differential game and obtaining the optimal cost / payoff , realizes that , then , it simply redefines its optimal strategy to be .the details when the optimal strategies of section [ subsec : optimal ] result in are as follows . chooses his aimpoint to be that lies on the apollonius circle . that ( equivalently , the apollonius circle does not intersect the y - axis ) and chooses his aimpoint also on the apollonius circle - see figure [ fig : jneg ] . given s choice of , the soonest can make is by capturing on the apollonius circle ( otherwise the target will exit the apollonius circle and the defender may be able to assist the target ) .thus , .similarly , solves the differential game and obtains .this information is useful to and it realizes that is unable to intercept . thus , will be prepared to apply passive countermeasures such as releasing chaff and flares . can also change its objective and find some in order to optimize a different criterion such as to maximize capture time ; however , this topic falls outside the scope of this paper ._ example 1_. 
consider the speed ratio and the attacker s initial position .the right branch hyperbola shown in figure [ fig : reg1 - 1 ] divides the target escape / capture regions .simulation : let the target s initial coordinates be and .note that .the y - coordinate of the optimal interception point is given by .figure [ fig : ex1sim ] shows the results of the simulation .the optimal cost / payoff is and the target escapes being captured by the attacker . _example 2_. for a given speed ratio , we can plot a family of right hand hyperbolas on the same plane for different values of .consider .figure [ fig : regm ] shows several hyperbolas for values of .a cooperative missile problem involving three agents , the target , the attacker , and the defender was studied in this paper .a differential game was analyzed where the target and the defender team up against the attacker .the attacker tries to pursue and capture the target .the target tries to evade the attacker and the defender helps the target to evade by intercepting the attacker before the latter reaches the target .this paper provided optimal strategies for each one of the agents and also provided a further analysis of the target escape regions for a given target / attacker speed ratio .h. huang , w. zhang , j. ding , d. m. stipanovic , and c. j. tomlin , `` guaranteed decentralized pursuit - evasion in the plane with multiple pursuers , '' in _50th ieee conference on decision and control and european control conference _ , 2011 ,. 48354840 .j. sprinkle , j. m. eklund , h. j. kim , and s. sastry , `` encoding aerial pursuit / evasion games with fixed wing aircraft into a nonlinear model predictive tracking controller , '' in _43rd ieee conference on decision and control _, 2004 , pp . 26092614 .z. e. fuchs , p. p. khargonekar , and j. evers , `` cooperative defense within a single - pursuer , two - evader pursuit evasion differential game , '' in _49th ieee conference on decision and control _ , 2010 , pp .30913097 .t. yamasaki and s. n. balakrishnan , `` triangle intercept guidance for aerial defense , '' in _ aiaa guidance , navigation , and control conference_. 1em plus 0.5em minus 0.4emamerican institute of aeronautics and astronautics , 2010 .t. yamasaki , s. n. balakrishnan , and h. takano , `` modified command to line - of - sight intercept guidance for aircraft defense , '' _ journal of guidance , control , and dynamics _ , vol .36 , no . 3 , pp .898902 , 2013 .a. perelman , t. shima , and i. rusnak , `` cooperative differential games strategies for active aircraft protection from a homing missile , '' _ journal of guidance , control , and dynamics _34 , no . 3 , pp .761773 , 2011 .e. garcia , d. w. casbeer , k. pham , and m. pachter , `` cooperative aircraft defense from an attacking missile using proportional navigation , '' in _ 2015 aiaa guidence , navigation , and control conference _
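As a concrete illustration of the escape-region characterization and of Examples 1 and 2 above, here is a minimal sketch under the same assumptions as before (attacker at (x_A, y_A) with x_A > 0, defender mirrored across the y-axis, equal attacker and defender speeds, target speed ratio alpha < 1). The hyperbola semi-axes are reconstructed by squaring the condition that the Apollonius circle reaches the y-axis, so they are an assumption rather than the paper's own expression, and the positions used are hypothetical rather than those of Example 1.
....
import math

def escapes(target, attacker, alpha):
    """Unsquared escape test: the Apollonius circle reaches the y-axis."""
    (xt, yt), (xa, ya) = target, attacker
    d = math.hypot(xt - xa, yt - ya)
    return xt - alpha**2 * xa <= alpha * d

def escape_boundary(attacker, alpha):
    """Center and semi-axes (a, b) of the reconstructed boundary hyperbola
    x^2/a^2 - (y - y_A)^2/b^2 = 1; the escape region lies on and to the left
    of its right branch."""
    xa, ya = attacker
    return (0.0, ya), alpha * xa, math.sqrt(1.0 - alpha**2) * xa

if __name__ == "__main__":
    A, alpha = (4.0, 2.0), 0.5
    center, a, b = escape_boundary(A, alpha)   # center (0, 2), a = 2, b approx. 3.46
    # (2, 2) sits exactly on the right branch; nudging it inward/outward flips the test
    for T in [(2.0, 2.0), (1.9, 2.0), (2.1, 2.0)]:
        on_or_inside = (T[0]**2 / a**2 - (T[1] - center[1])**2 / b**2) <= 1.0
        print(T, escapes(T, A, alpha), on_or_inside)
....
The asymptote slope of the reconstructed hyperbola is sqrt(1 - alpha^2)/alpha, which increases as alpha decreases, consistent with the remark above that a relatively faster target starting from the same position is more easily saved.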
|
The active target defense differential game is addressed in this paper. In this differential game an attacker missile pursues a target aircraft. The aircraft is, however, aided by a defender missile launched by, say, the wingman, to intercept the attacker before it reaches the target aircraft. Thus, a team is formed by the target and the defender, which cooperate to maximize the separation between the target aircraft and the point where the attacker missile is intercepted by the defender missile, while the attacker simultaneously tries to minimize that distance. This paper focuses on characterizing the set of coordinates such that, if the target's initial position belongs to this set, its survival is guaranteed provided both the target and the defender follow their optimal strategies. These optimal strategies are presented in this paper as well.
|
the incredible volume and free availability of the google books corpus make it an intriguing candidate for linguistic research . in a previous work , we broadly explored the characteristics and dynamics of the english and english fiction data sets from both the 2009 and 2012 versions of the corpus .we showed that the 2009 and 2012 english unfiltered data sets and , surprisingly , the 2009 english fiction data set sets all become increasingly influenced by scientific texts throughout the 1900s , with medical research language being especially prevalent .we concluded that , without sophisticated processing , only the 2012 english fiction data set is suitable for any kind of analysis and deduction as it stands .we also described the library - like nature of the google books corpus which reflects word usage by authors with each book , in principle , represented once .word frequency is thus a deceptive aspect of the corpus as it is not reflective of how often these words are read , as might be informed by book sales and library borrowing data , much less spoken by the general public .nevertheless , the corpus provides an imprint of a language s lexicon and remains worthy of study , providing all caveats are clearly understood . in this paper, we therefore focus on the 2012 version of the english fiction data set .[ ficvolume ] shows the total number of 1-grams for this data set between 1800 and 2000 ( 1-grams are contiguous text elements and are more general than words including , for example , punctuation ) .an exponential increase in volume is apparent over time with notable exceptions during major conflicts when the total volume decreases .for ease of comparison with related work , and to avoid high levels of optical character recognition ( ocr ) errors due to the presence of the long s , `` said '' being read as `` faid '' we omit the first two decades and concern ourselves henceforth with 1-grams between the years 1820 and 2000 . in releasing the original data set , michel et al . noted that english fiction contained scholarly articles about fictional works ( but not scholarly works in general ) , and we also explore this balance here .many researchers have carried out broad studies of the google books corpus , examining properties and dynamics of entire languages .these include analyses of zipf s and heaps laws as applied to the corpus , the rates of verb regularization , rates of word `` birth '' and `` death '' and durations of cultural memory , as well as an observed decrease in the need for new words in several languages .however , most of the studies were performed before the release of the second version , and , to our knowledge , none have taken into account the substantial effects of scientific literature on the data sets . here , we are especially interested with revisiting work on word `` birth '' and `` death '' rates as performed in . as we show below in sec . [sec : critique ] ) , the methods employed in suffer from boundary effects , and we suggest an alternative approach insensitive to time range choice . we do not , however , dispute that an asymmetry exists in the changes in word use . in our earlier work , we observed this asymmetry in the contributions to the jensen - shannon divergence ( defined below ) between decades , with most large contributions being accounted for by words whose relative frequencies had increased over time . 
in this paper , we apply a similar information - theoretic approach to examine this effect for words moving across fixed usage frequency thresholds .we structure the paper as follows . in sec . [sec : critique ] , we critique the method from which examines the birth and death rates of words in an evolving , time - coded corpus . in sec .[ sec : methods ] , we recall and confirm a similar apparent bias toward increased usage rates of words from our prevoius paper .we then measure the flux of words across various relative frequency boundaries ( in both directions ) in the second english fiction data set .we describe the use of the largest contributions to the jensen - shannon divergence between successive decades from among the words crossing each boundary as signals to highlight the specific dynamics of word growth and decay over time . in sec .[ sec : discussion ] , we display examples of these word usage changes and explore the factors contributing to the observed disparities between growth and decay .we offer concluding remarks in sec .[ sec : conc ] .in , petersen _ et al._examined the birth and death rates of words over time for various data sets in the first version of the google books corpus .they defined the birth year and death year of an individual word as the first and last year , respectively , that the given word appeared above one twentieth its median relative frequency .excluded from considerations were words appearing in only one year and words appearing for the first time before 1700 .the rates of word birth and death , respectively , were found by normalizing the numbers of births and deaths by the total number of unique words in a given year .results typical to all data sets included decreased birth rates and increased death rates over time .these results are not implausible , and the results were noted to be qualitatively similar when one tenth the median frequency is used as a threshold .but the very specific nature of the analysis particularly the multiple temporal restrictions on the words included in the analysis , the reliance on a particular proportion of each word s median frequency , and the ignoring of all but the first and last crossings over this threshold raise questions as to the robustness of the method .now , the common - sense interpretation of `` word death '' is clearly that a word falls out of usage ( relatively ) at a fixed point in history . ignoring all but the first and last crossings of a threshold tied to both a word s usage frequency and a specific time range appears to cause problems in this regard in , and we find a boundary effect for death rates induced by the choice of the time range s end point . to demonstrate this ,we recreate the described analysis for the second version of english fiction .we note that in our analyses , the relative frequencies are coarse - grained at the level of decades ( see methods below ) .we excluded words appearing in only one decade ( rather than year ) and words appearing before the 1820s ( instead of 1700 ) .again , this more recent initial cut - off date accounts for the high frequency of ocr errors observed before 1820 .these differences with are not substantive , and allow us to re - examine their work and build out our own in meaningful ways .we compare the birth and death rates as observed recently versus historically by performing the analysis with three different endpoints imposed : the 1950s , the 1970s , and the 1990s .we present the results of the recreation in fig .[ petersen ] ( c.f .fig . 
2 in ) .using the 1990s cutoff , the observed birth rates are qualitatively similar to those found for various data sets ( from the 2009 version of the corpus ) in and display spikes in the 1890s and 1920s ( top panel in fig .[ petersen ] , light gray ) .we see that birth rates are not affected by moving the `` end of history '' back to the 1950s or 1970s .the observed death rates with the 1990s boundary ( bottom panel in fig . [ petersen ] , light gray ) are also similar to that found in , despite the lack of deaths detected during much of the 19th century .( recall , we ignored words originating prior to 1820 . ) however , as the terminal boundary is moved back to the 1970s , what was originally a stable region between the 1910s and 1940s turns into a apparent region of gradually increasing word extinction .( bottom panel in fig .[ petersen ] , gray ) . as the boundary is moved further back to the 1950s , the increase in death rateis no longer gradual ( bottom panel in fig .[ petersen ] , dark gray ) .we thus see a clear dependence of the observations of the death rate on when the history of the corpus ends . moving the `` start of history '' forward in timesimilarly affects birth rates .thus , while the method in provides a reasonable approach to analyzing dynamics and asymmetries in the evolutionary dynamics of a language data set , the results for birth and death rates in [ petersen ] depend on when the experiment is performed .so motivated , we proceed to develop an approach that is robust with respect to time boundaries .we coarse - grain the relative frequencies in the second english fiction data set at the level of decades e.g . , between 1820-to-1829 and 1990-to-1999by averaging the relative frequency of each unique word in a given decade over all years in that decade .( we weight each year equally . )this allows us to conveniently calculate and sort contributions to the jensen - shannon divergence ( defined below ) of individual 1-grams between any two time periods as in our previous paper , we examined the dynamics of the 2012 version of english fiction by calculating contributions to the jensen - shannon divergence ( jsd ) between the distributions of 1-grams in two given decades .we then used these contributions to resolve specific and important signals in dynamics of the language .( this material , which is presented in greater detail in our previous work , is outlined in sufficient detail below . )given a language with 1-gram distributions in the first decade and in second , the jsd between and can be expressed as ,\ ] ] where is a mixed distribution of the two years , and is the shannon entropy of the original distribution .the jsd is symmetric and bounded between 0 and 1 bit .these bounds are only observed when the distributions are identical and free of overlap , respectively . the contribution from the word to the divergence between two decades , as derived from eq .[ eq : jsd ] , is given by where , so that contribution from an individual word is proportional to both the average frequency of the word and also depends on the ratio between the smaller and average frequencies . to elucidate the second dependency , we reframe the contribution as words with larger average frequencies yield larger contribution signals as do those with smaller ratios , , between the frequencies .a common 1-gram changing subtly can produce a large signal .so can an uncommon or new word given a sufficient shift from one decade to the next . 
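Although the contribution expression itself is not reproduced above, one standard per-word decomposition consistent with the definitions given is, writing p_i and q_i for a word's decade-averaged relative frequencies and m_i = (p_i + q_i)/2, the quantity (1/2)[ p_i log2(p_i/m_i) + q_i log2(q_i/m_i) ], whose sum over all words equals the JSD. The sketch below uses this presumed form together with the decade averaging described above; the function names, data layout, and toy counts are assumptions for illustration only.
....
import numpy as np

def decade_distribution(counts_by_year, decade, vocab):
    """Decade-averaged relative-frequency distribution, each year weighted
    equally.  counts_by_year: {year: {word: count}} (hypothetical layout)."""
    dist = np.zeros(len(vocab))
    years = [y for y in counts_by_year if decade <= y < decade + 10]
    for y in years:
        total = float(sum(counts_by_year[y].values()))
        dist += np.array([counts_by_year[y].get(w, 0) / total for w in vocab])
    return dist / len(years)

def jsd_contributions(p, q):
    """Per-word contributions (bits) whose sum is JSD(p,q) = H(m) - (H(p)+H(q))/2."""
    m = 0.5 * (p + q)
    def rel(x):
        out = np.zeros_like(x)
        nz = x > 0
        out[nz] = x[nz] * np.log2(x[nz] / m[nz])
        return out
    return 0.5 * (rel(p) + rel(q))

# Hypothetical toy corpus: two decades, three 1-grams.
vocab = ["the", "whispered", "shew"]
counts = {1970: {"the": 900, "whispered": 40, "shew": 60},
          1980: {"the": 900, "whispered": 80, "shew": 20}}
p = decade_distribution(counts, 1970, vocab)
q = decade_distribution(counts, 1980, vocab)
c = jsd_contributions(p, q)
print(dict(zip(vocab, c)), "JSD =", c.sum())
....
In this toy example the unchanged common word contributes nothing, while the rising and falling rare words supply all of the divergence, matching the dependence on both average frequency and frequency ratio described above.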
, the proportion of the average frequency contributed to the signal , is concave ( up ) and symmetric about , where the frequency remains unchanged yielding no contribution .if a word appears or disappears between two decades ( e.g. , in the former case ) , then the contribution is maximized at precisely the average frequency of the word in question .we observed in that most large jsd contribution signals are due to words whose relative frequencies increase over time . in this paper , we confirm and explore this effect .we texture our observations by examining jsd signals due to words crossing various relative frequency thresholds in either direction , as well as the total volume of word flux in either direction across these thresholds .it is both convenient and consistent to record flux over relative frequency thresholds instead of rank thresholds . to demonstrate this consistency, we observe in fig .[ threshold_comp ] that rank threshold boundaries correspond to nearly constant relative frequency thresholds , with the exception of the top 1-gram ( always the comma ) , which decreases gradually in relative frequency . for thresholds of and below ,we omit signals corresponding to references to specific years , since such references would otherwise overwhelm the charts for these thresholds .as seen in fig .[ jsd_rising_ratios ] , more than half of the jsd between a typical given decade and the next is due to contributions from words increasing in relative usage frequency .the jsds between 1820s , 1840s , and 1970s and their successive decades are the only exceptions . moreover ,when the time differential is increased to three decades , no exceptions remain .this confirms asymmetry exists between signals for words increasing and decreasing in relative use .we note relative extrema of the inter - decade jsd in the vicinity of major conflicts . between the 1860s and successive decades ,words on the rise contribute substantially to the jsd .this is consistent with words not relatively popular during wartime ( specifically the american civil war ) being used more frequently in peacetime .a similar tendency holds for the jsd between the 1910s ( world war i ) and the 1920s .this is not as apparent in the jsd between the 1910s and the 1940s , possibly because the 1940s coincide with world war ii .the absolute maximum for the single - decade curve corresponds to the divergence between the 1950s and 1960s .this suggests a strong effect from social movements .( for the 3-decade split , the absolute peak comes from the jsd between the 1940s and 1970s . ) ) crossing relative frequency thresholds of , , , and in both directions between each decade and the next decade .for each threshold , the upward and downward flux roughly cancel . for either direction of flux, there appears to be little qualitative difference between the three smallest thresholds for which the downward flux between the 1950s and the 1960s is a minimum , the downward flux increases over the next two pairs of consecutive decades , then it dips again between the 1980s and 1990s . for the highest threshold ,the increase between the 1960s and 1970s and the next pair of decades is more noticeable for the upward flux , as is the decrease between the last two pairs of decades . 
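A minimal sketch of the flux bookkeeping described above: counting, between two decade-averaged distributions, the words that rose above or fell below a fixed relative frequency threshold, optionally skipping bare year references. The plain dictionary layout and the example frequencies are assumptions for illustration; in practice the crossers would then be ranked by their per-word JSD contributions, as in the previous sketch.
....
import re

YEAR = re.compile(r"1[5-9]\d\d|20\d\d")   # tokens that are bare year references

def threshold_flux(prev, curr, threshold, skip_years=False):
    """Words crossing `threshold` between two decade-averaged distributions
    (dicts mapping word -> relative frequency).  Returns (rose_above, fell_below)."""
    words = set(prev) | set(curr)
    if skip_years:
        words = {w for w in words if not YEAR.fullmatch(w)}
    up = sorted(w for w in words
                if prev.get(w, 0.0) <= threshold < curr.get(w, 0.0))
    down = sorted(w for w in words
                  if curr.get(w, 0.0) <= threshold < prev.get(w, 0.0))
    return up, down

# Hypothetical decade-averaged frequencies.
d1970 = {"made": 1.2e-4, "1969": 2.0e-5, "lesbian": 4.0e-6}
d1980 = {"made": 0.8e-4, "1969": 8.0e-6, "lesbian": 1.5e-5}
print(threshold_flux(d1970, d1980, 1e-4))                    # ([], ['made'])
print(threshold_flux(d1970, d1980, 1e-5, skip_years=True))   # (['lesbian'], [])
....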
[figure captions: words crossing a fixed relative frequency threshold between pairs of decades (the 1970s and 1980s, the 1980s and 1990s, the 1930s and 1940s, the 1930s and 1950s, and the 1960s and 1970s). signals for each pair of decades are sorted and weighted by contribution to the jsd between those decades; bars pointing to the right represent words that rose above the threshold between decades, and bars pointing left represent words that fell. (the first signal is the asterisk "*".)]

we next consider flux between decades across relative frequency thresholds of powers of 10 from down to . in fig. [ threshold_crossings ] , we display the volume of flux of 1-grams in both directions across relative frequency thresholds of powers of 10 from down to . we first describe the very limited flux across the and boundaries ( not shown in fig. [ threshold_crossings ] ) , and then investigate the richer transitions for the lower thresholds for , , , and . flux across the boundary between consecutive decades is almost nonexistent during the observed period . between the 1820s and 1830s , the semicolon falls below the threshold . between the 1840s and 1850s , `` i '' rises above the boundary . between the 1910s and 1920s , `` was '' rises across . this is the entirety of the flux across , which shows the regime of 1-grams above this frequency ( roughly the top 10 1-grams ) is quite stable . the eleven 1-grams above threshold in the 1990s in decreasing order of frequency are : the comma `` , '' , the period `` . '' , `` the '' , quotation marks , `` to '' , `` and '' , `` of '' , `` a '' , `` i '' , `` in '' , and `` was '' . the set of 1-grams with relative frequencies above ( roughly the top 100 1-grams ) is also fairly stable . the flux of 1-grams across this boundary between consecutive decades is entirely captured by fig. [ threshold_flux_3 ] . parentheses drop in ( relative frequency of ) use between the 1840s and 1850s and cross back over the threshold after the american civil war ( between the 1860s and 1870s ) . the same is true for before and after world war ii ( between the 1930s and 1940s and between the 1940s and 1950s , respectively ) . beyond these , the flux is entirely due to proper words ( not punctuation ) . for example , `` made '' fluctuates up and down over this threshold repeatedly over the course of a century . between the 1870s and the 1880s , `` made '' , which sees slightly increased use , is the only word to cross the threshold . the most crossings is 12 , which occurs between the first two decades . also , `` great '' struggled over the first 5 decades and eventually failed to remain great by this measure . `` mr . '' fluctuated across the threshold between the 1830s and 1910s .
more recently ( since the 1930s ) , `` they '' has been making its paces up and down across the threshold . for each threshold between and ,the upward and downward flux roughly cancel , which is consistent with fig .[ threshold_comp ] . for both upward and downward flux, there appears to be little qualitative difference between the three smallest thresholds . for these thresholds ,the downward flux between the 1950s and the 1960s is a minimum , the downward flux increases over the next two pairs of consecutive decades , then it dips again between the 1980s and 1990s . for the highest threshold ,the increase between the 1960s and 1970s and the next pair of decades is more noticeable for the upward flux , as is the decrease between the last two pairs of decades . in the experimentrecreated in fig .[ petersen ] , the word birth rate initially exceeds the death rate by three orders of magnitude , and this gap declines gradually over the next two centuries .however , with respect to words fluctuating across relative frequency thresholds in opposite directions , we see no strong evidence of such marked asymmetry during any long period of time . with respect to total contributions to the jsd between consecutive decades , there is typically some bias toward toward words with increased relative use as seen in fig .[ jsd_rising_ratios ] , but the difference need never be described in orders of magnitude . to address the fluctuations during the last couple of decades , we begin by displaying in fig .[ threshold_flux_4_70 ] the top 60 flux words between the 1970s and the 1980s sorted by contributions to the jsd between those decades .note that this pair of decades corresponds to both a dip ( below 50% ) in the proportion of rising word contributions to the jsd and to an increase in the volume of downward flux ( as well as upward flux for high thresholds ) . in fig .[ threshold_flux_4_80 ] , we show all 55 flux words between the 1980s and the 1990s . between each pair of decades , we see reduced relative use of particularly british words , including `` england '' between the first two decades and `` king '' , `` george '' , and `` sir '' between the latter two .we also see reduced use of more formal - sounding words , such as `` character '' , `` manner '' , and `` general '' between the first two decades and `` suppose '' , `` indeed '' , and `` hardly '' between the latter two . increasing are physical and emotional words .those between the first two decades include `` stared '' , `` breath '' , `` realized '' , `` shoulder '' and `` shoulders '' , `` coffee '' , `` guess '' , `` pain '' , and `` sorry . '' between the latter two , we see `` chest '' , `` skin '' , `` whispered '' , `` hit '' , `` throat '' , `` hurt '' , `` control '' , and `` lives . ''also included are `` phone '' and `` parents . '' in figs .[ threshold_flux_5_70 ] and [ threshold_flux_5_80 ] , we display the top 60 flux words , not counting references to years , across the threshold between the same decades .many of the words declining below the threshold between the 1970s and 1980s are unusual spellings such as `` tho '' , proper names like `` balzac '' , or words from non - english languages like `` une . ''increasing across this threshold between the first two decades are a plethora of mostly female proper names , with `` jessica '' and `` megan '' leading . also seen are `` kgb '' and `` jeans . ''( `` kgb '' decreases in the 1990s , as does `` russians . 
'' ) increasing between the 1980s and 1990s are a few proper names ; however , most of the signals here are social and sexual in nature , and in part point to the inclusion of academic , literary criticism .these include `` lesbian '' and `` lesbians '' , `` aids '' , and `` gender '' in the top positions .also included are both `` homosexuality '' and the more general `` sexuality . ''we also see `` girlfriend '' , `` boyfriend '' , `` feminist '' , and `` sexy . '' for contrast , we show in fig .[ threshold_flux_6_80 ] the flux across a threshold of between the 1980s and 1990s ( again , not counting years ) .in particular , while increases in `` hiv '' and `` bisexual '' make the list ( similarly to many signals in fig .[ threshold_flux_5_80 ] ) , as do `` fax '' , `` laptop '' , and `` internet '' , a great swath of the signals are accounted for by one franchise .we note increases in `` picard '' , `` tng '' , `` sisko '' , and `` ds9 . ''these latter signals should serve as a reminder that the word distributions in library - like google books corpus , even for fiction , do not remotely resemble the contents of normal conversations ( at least not for the general population ) .however , we do observe signals arising at this threshold from factors external to the imaginings of specific authors .it would therefore be premature to dismiss the contributions at this threshold because of an apparent overabundance of `` star trek . ''in fact , since `` the next generation '' and `` deep space 9 '' aired precisely during these two decades , an abundance of `` star trek '' novels in the english fiction data set is actually quite encouraging , because these novels do exist , are available in english , and are ( clearly ) fiction . for consistency, we also include the flux ( omitting years ) across this threshold between the 1970s and 1980s in fig .[ threshold_flux_6_70 ] . while not particularly topical , we do see `` aids '' increase above this threshold a decade prior to its increase over as seen in fig .[ threshold_flux_5_80 ] .the texture of the signals changes as we dial down the frequency threshold .we typically find that thresholds of and above produce signals with little to no noise .this is not surprising since this relative frequency roughly corresponds to rank threshold for the 1000 most common words ( see fig .[ threshold_comp ] ) in the data set . using a threshold of ( fewer than 10,000 words fall above this frequency in any given decade ) ,we see some noise ( mostly in the form of familiar names ) , but still observe many valuable signals .only when the threshold is reduced to does the overall texture of the signals become questionable as a result of a variety of proper nouns far less familiar than those observed with the previous threshold . however , at this threshold , we also observe several early signals of real social importance . curiously , between the 1930s and 1940s the volume of flux across each threshold is not atypical ( see fig . [ threshold_crossings ] ) . moreover, the asymmetry between the jsd contributions between those decades is very low .yet it is obvious that we should expect signals of historical significance between these two decades . in figs .[ threshold_flux_4_30 ] and [ threshold_flux_5_30 ] , we see words crossing the and thresholds , respectively ( with references to years omitted in fig [ threshold_flux_5_30 ] ) . 
for the higher threshold , only 56 words cross .the most noticeable such words that are more commonly used in the 1940s are `` general '' and `` german . ''also , `` killed '' appears in this list .words used less frequently include `` pleasure '' , `` garden '' , and `` spirit . '' for the lower threshold , we see the signals from prolific authors as in our previous paper , particularly upton sinclair s character , lanny budd .we also see more nazis ( `` nazi '' and `` nazis '' ) .last , we include one of the more colorful examples . in fig .[ threshold_flux_5_60 ] , we show signals ( not including years ) for words crossing the threshold between the 1960s and 1970s .profanity dominates .we see more references to _ the world according to garp _( `` garp '' ) and `` star trek '' , again ( `` kirk '' this time ) .we also see more `` computer '' , `` tv '' , and `` plastic . ''signals also appear for `` blacks '' and `` homosexual '' , for drugs ( `` drug '' and `` drugs '' ) , and ( plausibly ) for the war on drugs ( `` enforcement '' and `` cop '' ) .we refer to reader to our paper s online appendices at http://www.compstorylab.org/share/papers/pechenick2015b/ for figures representing flux across relative frequency thresholds of , , and between consecutive decades over the entire period analyzed ( the 1820s to the 1990s ) .we recall from and from our own work ( fig .7d ) that the rate of change of given language tends to slow down over time .this applies to the 2012 english fiction data set and is not contested by us in the present paper . in the critiqued paper , it was suggested that the birth and death rates of words can be calculated in an intuitive , albeit very specific manner .this experiment produces birth rates that begin vastly higher than death rates with both rates converging over time to around 1% .however , we have seen that these rates converge to roughly the same values at the end of the available history , regardless of when that is i.e ., the experiment depends on when you perform it , and recent results always appear qualitatively similar . beyond this boundary issue, we find another cause for concern .when the increased usage bias in the jsd contributions and the overall and directed volumes of flux are taken into account , we do not observe even the initial orders - of - magnitude gap between so - called birth and death rates .rather , the jsd bias toward increased relative use of words is within one order of magnitude , and the flux across thresholds is typically balanced .in fact , this latter point appears to be a fundamental facet of this data set . as we see in fig .[ threshold_comp ] , the number of words above each threshold is roughly constant .this stability of the rank - frequency relation compels the observed balancing act ( and is consistent with a stable zipf law distribution ) .previously in ( fig .5d ) , we have seen the divergence between a given year and a target year tends to increase gradually with the time difference .this is not true when , for example , the target year e.g ., 1940falls during a major war , in which case we see a spike in divergence .however , as the target year exits this period e.g . 
enters the 1950s the spike settles back into the original gradual growth pattern .it is plausible based on these earlier observations and the observations in this paper that the distribution of the language is self - stabilizing : the overall shape of the distribution does not appear to change drastically with time or with the total volume of the data set . as old words fall out of favor, new words inevitably appear to fill in the gaps .furthermore , despite the fact that the divergence between consecutive years has been observed to decay over time , we find no shortage of novel word introductions during the most recent decades ( which have the lowest decade - to - decade jsds ) .this apparent dissonance clearly invites further investigation . finally ,while extremely specific fiction can be of great interest whether it be in the form of war novels or volumes from the `` star trek '' franchise vocabulary from these works is more easily studied when placed in proper context .dialing down the relative frequency threshold across several orders of magnitude helps to capture this distinction .however , further experimentation is called for , since an automatic means of separating specific signals from the more general signals ( e.g. , `` star trek '' from social movements ) could allow both a more intuitive grasp of the linguistic dynamics and might , ideally , allow investigators to hypothesize causal relationships between exogenous and endogenous drivers of the language .
|
The Google Books corpus, derived from millions of books in a range of major languages, would seem to offer many possibilities for research into cultural, social, and linguistic evolution. In a previous work, we found that the 2009 and 2012 versions of the unfiltered English data set, as well as the 2009 version of the English fiction data set, are all heavily saturated with scientific and medical literature, rendering them unsuitable for rigorous analysis [Pechenick, Danforth and Dodds, PLoS ONE, 10, e0137041, 2015]. By contrast, the 2012 version of English fiction appeared to be uncompromised, and we use this data set to explore language dynamics for English from 1820 to 2000. We critique an earlier method for measuring the birth and death rates of words, and provide a robust, principled approach to examining the volume of word flux across various relative frequency usage thresholds. We use the contributions to the Jensen-Shannon divergence of words crossing thresholds between consecutive decades to illuminate the major driving factors behind the flux. We find that while individual word usage may vary greatly, the overall statistical structure of the language appears to remain fairly stable. We also find indications that scholarly works about fiction are strongly represented in the 2012 English fiction corpus, and suggest that a future revision of the corpus should attempt to separate critical works from fiction itself.
|
for helping themselves in writing , debugging and maintaining their software , professional software developers using object - oriented programming languages keep in their minds an image or picture of the subtyping relation between types in their software while they are developing their software . in pre - generics java , the number of possible object types ( also called _ reference types _ ) for a fixed set of classes in a program was a _finite _ number ( however large it was ) , and , more importantly , the structure of the subtyping relation between these types ( and hence of the mental image a developer kept in mind ) was simple : the graph of the subtyping relation between classes and interfaces ( _ i.e. _ , with multiple - inheritance of interfaces ) was a simple directed - acyclic graph ( dag ) , and the graph of the subtyping relation between classes alone ( _ i.e. _ , with single - inheritance only , more accurately called the _ subclassing _ relation ) was simply a tree .this fact about the graph of the subtyping relation applies not only to java but , more generally , also to the non - generic sublanguage of other mainstream nominally - typed oo languages similar to java , such as c # , c++ , and scala . today , generics and wildcards ( or some other form of ` variance annotations ' ) are a standard feature of mainstream nominally - typed oo languages .the inheritance relation , between classes ( and interfaces and traits , in oo languages that support these notions ) is still a finite relation , and its shape is still the same as before : a simple dag .but , given the possibility of arbitrary nesting of generic types , the number of possible object types in a generic java program has become infinite , and the shape of the subtyping relation in nominally - typed oo languages has become more complex than a tree or a simple dag .it is thus natural to wonder , `` _ _ what is the shape of the subtyping relation in java __ , now after the addition of generics and wildcards ? ''this question on subtyping in java is similar to one benoit mandelbrot , in the 1960s , wondered about : `` how long is the coast of britain ? '' . at that time, some mathematicians ( including many computer scientists ) used to believe that mathematics was perfect because it had completely banished pictures , even from elementary textbooks .mandelbrot , using computers , put the pictures back in mathematics , by discovering fractals , and , in the process , finding that britain s coast has infinite length .the goal of this paper is to present and defend , even if incompletely and unconventionally ( using mainly hierarchy diagrams , and only using equations suggestively ) , a fundamental observation about the graph of the subtyping relation in java .we observed that , after the addition of generics and of wildcards , in particular to java , the graph of the subtyping relation is still a dag , but is no longer a simple dag but is rather one whose structure can be better understood , of all possibilities , also as a _ fractal _ and in fact , as we explain below , an intricately constructed fractal ( albeit a different kind of fractal than that of britain s coast ) . to motivate our observation, we use very simple generic class declarations to present in the paper some diagrams for the subtyping relation that represent the iterative construction of the subtyping graph , in the hope of making the construction process very simple to understand and thus make the fractal observation very clear . 
to further argue for and strengthen the observation , we also suggest algebraic equations for mathematically describing the subtyping fractal and its construction process . ( our equations are akin of recursive domain equations of domain theory that are used to construct ` reflexive domains ' .the similarity is suggestive of a strong relationship , possibly even suggesting reflexive domains useful for giving mathematical meaning for programming languages might be fractals too , even though we refrain from arguing for this claim here . )given the popularity fractals enjoy nowadays , we believe the fractal observation about subtyping in nominally - typed oo languages may help oo software developers keep a useful and intuitive mental image of their software s subtyping relation , even if it is a little more frightening , and more amazing one than the one they had before . as an immediate application of the fractal observation , ides ( integrated development environments ) that oo developers use can make developers lives easier , making them develop their software faster and with more confidence , by presenting to them parts of the fractal representing the subtyping relation in their software and allowing developers to `` zoom - in''/``zoom - out '' on sections of the fractal / relation that are of interest to the developers , in order for them to better understand the typing relations in their software and so that they may resolve any type errors in their code more quickly and more confidently .oo language designers may also benefit from the fractal observation , since having a better understanding of the subtyping relation may enable them to have a better understanding of the interactions between different features of oo languages such as the three - tiered interaction , in java , between generics ( including wildcard types ) , ` lambdas ' ( formerly known as ` closures ' ) and type inference leading designers to improve the design of the language , and to better design and implement its compilers . finally , in allusion to joshua bloch s well - known quote when considering adding closures to java , we hope , by making the fractal observation about subtyping , to enable decreasing ( or at least , more accurately estimating ) the `` _ _ complexity budget _ _ '' paid for adding generics and wildcards to java .as any standard definition ( or an image ) of a fractal will reveal , fractals ( sometimes also called _ recursive graphs _ , or _self - referential graphs _ ) are drawings or graphs that are characterized by having `` minicopies '' of themselves inside of them . given their _ self - similar _ nature , when zooming in on a fractal it is not a surprise to find a copy of the original fractal spring up .more generally , the minicopy is not an exact copy , but some _ transformation _ of the original : it may be the original rotated , translated , reflected , and so on . as such , when constructing a fractal iteratively ( as is standard ) it is also not a surprise to add details to the construction of the fractal by using ( transformed ) copies of the fractal as constructed so far ( _ i.e. _ , as it exists in the current iteration of the construction ) to get a better , more accurate approximation of the final fractal ( see figure [ fig : fractals ] , and ) . 
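As a purely illustrative aside, the standard iterative construction of the Koch curve shown in the figure referenced above can be written in a few lines: each segment is replaced by four transformed (scaled and rotated) copies of itself, which is exactly the "add detail using transformed copies of the current approximation" idea described here. The sketch is not taken from the paper; it only makes that idea concrete.
....
import math

def koch_step(points):
    """Replace every segment with four copies of itself scaled by 1/3,
    the middle one rotated by 60 degrees (one Koch-curve iteration)."""
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
        a = (x0 + dx, y0 + dy)
        c = (x0 + 2 * dx, y0 + 2 * dy)
        cos60, sin60 = 0.5, math.sin(math.pi / 3)
        # apex: the middle third rotated by 60 degrees about point a
        b = (a[0] + dx * cos60 - dy * sin60, a[1] + dx * sin60 + dy * cos60)
        out.extend([a, b, c, (x1, y1)])
    return out

curve = [(0.0, 0.0), (1.0, 0.0)]
for _ in range(3):            # each iteration adds detail at a finer scale
    curve = koch_step(curve)
print(len(curve))             # 4^3 segments, hence 65 points
....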
while it may not be immediately obvious to the unsuspecting , but `` having transformed minicopies of itself '' is exactly what we have noticed also happens in ( the graph of ) the subtyping relation of java and of other similar generic nominally - typed oo languages such as c # , c++ , and scala after generics and wildcards were added to the java type system .figure presents a drawing of the first steps in the construction of a subtyping graph , to illustrate and give a `` flavor '' of the observation . in section [ sec : observation - illustrated ] , to motivate presenting the subsequent _ transformations observation _ in section [ sec : transformations - observation ] , we present a more precise and more detailed diagram one that , unlike figure [ fig : first - iterations ] , uses no ` raw types ' , and has an additional class ` d ` .fractals : ( first steps in constructing ) the koch curve and ( a step in constructing ) a fractal tree ]to illustrate our main observation and how the subtyping fractal is constructed , let us assume we have the non - generic class ` object ` ( which extends / subclasses no other classes , _i.e. _ , is at the top of the subclassing / inheritance hierarchy ) , and that we have , as expressed in the two simple lines of code below , two generic classes ` c ` and ` d ` that extend class ` object ` and that take one ( unbounded ) type parameter .similarly , and crucial to seeing the subtyping graph as a fractal , we also assume we have a `` hidden '' ( _ i.e. _ , inexpressible in some oo languages , such as java ) non - generic class ` null ` at the bottom of the class inheritance hierarchy ( whose only instance is the ` null ` object , which in java is an instance of every class and can be assigned to a variable of any object type , `` the non - terminating object '' . ) ] . .... class c < t > extends object { } class d< t > extends object { } .... figure demonstrates the subclassing hierarchy ( _ a.k.a ._ , inheritance hierarchy ) based on assuming these class declarations .the declared inheritance relation between class ( and interface / trait ) names in a program is the _ _ starting point _ _ for constructing the graph of the subtyping relation in nominally - typed oo languages , including java ( note the use of the identification of type inheritance and subtyping in nominally - typed oop to interpret ` class extension ' as ` subtyping between corresponding class types ' .we discuss the role of nominality in more detail in section [ sec : nomvsstruct ] ) .figure shows that the `` default type argument '' , namely ` ? ` ( the unbounded wildcard type ) , is used in this initial step as the type argument for all generic classes to form type names for corresponding class types .figure demonstrates how the ( names of ) types in the next iteration of constructing the subtyping fractal ( _ i.e. _ , of the iteration numbered , which we can `` see '' after looking at iteration if we `` zoom in '' one step ) constructing are constructed by replacing / substituting all the ` ? ` s in level / iteration 0 ( the base step ) with _ three _ different forms of each type ` t ` in the previous level ( level ) , namely `? extends t ` ( covariance ) , ` ?super t ` ( contravariance ) , and ` t ` ( invariance ) .( see below ) . replacing each of the _ innermost _ ( or , all ? ) ` ? 
`s of a type ( `` holes '' in the type ) in level with a ` # ` ( a hash , as a placeholder ) , then replacing these ` # ` s with three different forms of each one of the types in the previous level ( level ) , or in level 0 , to construct names of the types of the new level ( corresponding to equation or ) .see further comments below for a note on the likely equality of the first two equations , and on the likely uselessness of the third equation as defining the fractal . ]covariant , contravariant and invariant subtyping rules are then used to decide the subtyping relation between all the newly constructed types ( note that , due to the inclusion of types ` object ` and ` null ` in level 0 and in all subsequent levels , all level types are _ also _ types of level / iteration .this motivates the notion of the rank of a type .the level / iteration in which a type _ first _ appears is called the _rank _ of the type . as such , types ` object ` and ` null ` are always of rank 0 ) . in figure[ fig : fig3 ] we use ` ?xt ` and ` ?st ` as short - hands for ` ? extends t ` and ` ? super t ` respectively .[ [ the - effect - of - variant - subtyping - rules - on - the - subtyping - graph ] ] the effect of variant subtyping rules on the subtyping graph : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + * covariant subtyping : the level 0 graph is _ copied _ inside ` c < ?> ` and ` d < ?> ` ( illustrated by * * arrows in diagrams ) . for ten types `t ` ( from 2 non - generic classes + 2 generic classes 4 types in level 0 ) , we have paths + ` object - > c < ? extends t > - > null ` , and + ` object - > d <? extends t > - > null ` + ( note : `? extends null ` is the same as ` null ` .inexpressible in java ) .* contravariant subtyping : the level 0 graph is _ flipped _( turned upside - down ) inside ` c < ?> ` and ` d < ?> ` ( illustrated by * * arrows in diagrams ) . for ten types ` t ` , like for covariance , we have paths + ` object - > c < ?super t > - > null ` , and + ` object - > d < ?super t > - > null ` + ( note : ` ?super object ` is the same as ` object ` .see footnote regarding current java behavior ) .* invariant subtyping : the level 0 graph is _ flattened _ inside ` c < ?> ` and ` d < ?> ` ( no corresponding arrows in diagrams ) . for ten types` t ` , like for covariance , we have paths + ` object - > c < t > - > null ` , and + ` object - > d <t > - > null ` .figure illustrates how to use the notion of type intervals to combine all three ( _ i.e. _ , covariant , contravariant and invariant ) subtyping rules ( and to add even more types to the subtyping relation in later iterations / nesting levels ) . in figure[ fig : fig4 ] , we have all three transformations applied to level 0 graph and embedded inside ` c < ?> ` and ` d < ?> ` ( note that bounds of an interval can degenerately be equal types , corresponding to invariance ) . for twenty types` s ` and ` t ` ( where ` s ` is a subtype of ` t ` in the previous iteration / level ) , from 2 non - generic classes + 2 generic classes 9 intervals in level 0 , we have ` object - > c <s - t > - > null ` , ` object - > d <s - t > - > null ` ( the notation ` s - t ` means the interval with lowerbound ` s ` and upperbound ` t ` . for brevity , we use ` o ` for ` object ` and ` n ` for ` null ` ) . if class ` c ` or class ` d ` had subclasses other than ` null ` , this graph diagram would have been even richer_i.e . 
_ , it would have had more types than the graph in figure .( it can be noted that the ` null ` type is useful in expressing intervals . yetthe diagram can be presented without it , using ` extends ` only or ` super ` only , while allowing but not requiring a naked ` ? ` ; or , for brevity , using a symbol like ` < : ` ) .note : the types ` null ` , ` c < null > ` , and ` d < null > ` , inside dotted graph nodes in figure and figure , are currently _ inexpressible _ in java ( _ i.e. _ , as of java 8 , based on the assumption that these types are of little practical use to developers . ) subtyping relations involving these inexpressible types are also currently of little use to java developers ( except in type inference ) .accordingly , they also are drawn in figure [ fig : fig4 ] using dotted graph edges.(*a bug in javac * ) : also , as of java 8 , we have noted that java does not currently identify `? super object ` with ` object ` , and as such a variable ` b ` of type ` c <? super object > ` , for example , _ can not _ be assigned to a variable ` a ` of type ` c < object > ` ( _ i.e. _ , for the statement ` a = b ; ` the java compiler ` javac ` currently emits a type error with an unhelpful semi - cryptic error message that involves ` wildcard capturing ' ) even as java allows the opposite assignment of ` a ` to ` b ` ( _ i.e. _ , the statement ` b = a ; ` ) , implying that , even though java currently correctly sees ` c < object > ` as a subtype of ` c < ?super object > ` , it currently does _ not _ consider ` c < ?super object > ` as a subtype of ` c <object > ` .given that there are no supertypes of type ` object ` ( the class type corresponding to class ` object ` ) , and it is not expected there will ever be any , we believe the java type system should be fixed to identify the two type arguments ` ?super object ` and ` object ` , and thus correctly allow the mentioned currently - disallowed assignments . ]it should now be clear how to constructing the rest of the subtyping fractal .each next nesting level of generics corresponds to `` zooming one level in '' in the subtyping fractal , and the construction of the new `` zoomed - in '' graph is done using the same method above , where wildcards ( or , intervals ) over the previous subtyping graph substitute all the ` ?` in that graph to produce the next level graph of the subtyping relation .and there is nothing in generics that disallows arbitrarily - deep , potentially infinite , nesting .while making the fractal observation , we made yet another observation that helps explain the fractal observation more deeply . in particular , we noted that in constructing the graph of the subtyping relation , when moving from types of a specific level of nesting to types of the next deeper level ( _ i.e. _ , when `` zooming in '' inside the graph of the relation , or when doing the inductive step of the recursive definition of the graph ) , _ three _ kinds of _ transformations _ are applied to the level subtyping graph , in agreement with the general nature of fractals having transformed minicopies of themselves embedded within .we call these three transformations the _ identity _ ( or , _ copying _ ) transformation , the _ upside - down _ reflection ( or , _ flipping _ transformation ) , and the _ flattening _ transformation . 
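To make the iterative construction just described concrete, the following sketch (in Python rather than Java) enumerates the type names of successive iterations for the classes C and D declared above and decides subtyping with the covariant, contravariant and invariant rules. The containment rules are deliberately simplified relative to the full Java rules, so this is an approximation under stated assumptions, intended only to show how each iteration embeds transformed copies of the previous one; the encoding of types as nested tuples is an implementation choice of the sketch, not of the paper.
....
GENERIC = ("C", "D")                      # the two generic classes declared above
OBJ, NULL, WILD = ("Object",), ("Null",), ("wild",)

def level0():
    """Iteration 0: Object, C<?>, D<?>, Null."""
    return [OBJ] + [(c, WILD) for c in GENERIC] + [NULL]

def next_level(types):
    """Iteration k+1: every '?' is replaced by '? extends T', '? super T'
    and 'T' for each type T of iteration k (the bare '?' is kept as well)."""
    args = [WILD] + [(kind, t) for t in types for kind in ("extends", "super", "exact")]
    return [OBJ, NULL] + [(c, a) for c in GENERIC for a in args]

def subtype(s, t):
    """s <: t under nominal subtyping with use-site variance (simplified)."""
    if s == t or s == NULL or t == OBJ:
        return True
    if s == OBJ or t == NULL:
        return False
    (cs, a), (ct, b) = s, t
    return cs == ct and contains(b, a)     # C<..> and D<..> only meet at Object/Null

def contains(outer, inner):
    """Does type argument `outer` contain type argument `inner`? (simplified)"""
    if outer == WILD:
        return True
    if inner == WILD:
        return False
    (ko, to), (ki, ti) = outer, inner
    if ko == "extends" and ki in ("extends", "exact"):
        return subtype(ti, to)             # covariance: the level-k graph is copied
    if ko == "super" and ki in ("super", "exact"):
        return subtype(to, ti)             # contravariance: the level-k graph is flipped
    return ko == ki == "exact" and ti == to    # invariance: the level-k graph is flattened

lvl1 = next_level(level0())
print(len(level0()), len(lvl1))                   # 4 types, then 2 + 2*(1 + 3*4) = 28
print(subtype(("C", ("exact", OBJ)), ("C", ("extends", OBJ))))                # True
print(subtype(("C", ("extends", ("C", WILD))), ("C", ("extends", OBJ))))      # True (copied)
print(subtype(("C", ("super", OBJ)), ("C", ("super", ("C", WILD)))))          # True (flipped)
print(subtype(("C", ("exact", ("C", WILD))), ("C", ("exact", ("D", WILD)))))  # False (flattened)
....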
the first transformation ( identity )makes an exact copy of the input subtyping relation , the second transformation ( upside - down reflection ) flips over the relation ( a subtype in the input relation becomes a supertype , and vice versa ) , while the third transformation `` attempts to do both ( _ i.e. _ , the identity and flipover transformations ) , '' in effect making types that were related in its input subtyping relation be _ unrelated _ in its output subtyping relation ( hence the output of this transformation is a `` flat '' relation , called an _ anti - chain_. ) explaining this observation regarding the subtyping fractal in terms of oo subtyping is done by noting that the three mentioned transformations correspond to ( in fact , result from ) the covariant subtyping rule , contravariant subtyping rule , and invariant subtyping rule , respectively .this is demonstrated , in a very abridged manner , in figure ( with the green arrows corresponding to copying the previous level graph , corresponding to covariant subtyping , the red arrows corresponding to flipping over the previous level graph , corresponding to contravariant subtyping . )it should be noted that _ also _ the level 1 graph as a _ whole _ is the same structure as the level 0 graph when the ` c group ' nodes are lumped into one node and the same for the ` d group ' node .that means that , in agreement with the graph being a fractal ( where self - similarity must exist at all levels of _ scale _ ) , when the graph of subtyping is `` viewed from far '' it _ looks the same _ as the level 0 graph .in fact , when looked at from a far enough distance this similarity to the level 0 graph will be the case for all level , where , graphs .it should be noted that class names information ( _ a.k.a . _ , nominality , and ` nominal type information ' ) of nominally - typed oo languages ( such as java , c # , c++ , and scala ) is used in the base / first construction step in constructing the subtyping relation between generic types as a fractal .in contrast , structurally - typed oo languages ( such as ocaml , moby , polytoil , and strongtalk ) , known mainly among programming languages researchers , do _ not _ have such a simple base step , since a record type corresponding to a class ( with at least one method ) in these languages does _ not _ have a finite number of supertypes to begin with , given that `` superclasses of a class '' in the program , when viewed structurally as supertypes of record types , do _ not _ form a finite set .any record type has an infinite set of record subtypes ( due to their width - subtyping rule ) .accordingly , a record type with a method_i.e ._ , a member having a function type causes the record type to have an infinite set of _ supertypes _ , due to contravariance of the type of the method .adding - in a depth - subtyping rule makes the subtyping relation between record types with functional member types even more complex .this motivates suspecting that subtyping in structurally - typed oo language is a _ dense _relation , in which every pair of non - equal types in the relation has a third type , not equal to either member of the pair , that is `` in the middle '' between the two elements of the pair , _i.e. _ , that is a subtype of the supertype ( the upperbound ) of the pair and a supertype of the subtype ( the lowerbound ) of the pair .in fact this may turn out to be simple to prove . 
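Viewed as operations on a relation given as a set of ordered pairs, the three transformations named above can be stated in a few lines; the level-0 relation used in the demo is that of the running example (reflexive pairs omitted for brevity).
....
def copied(rel):
    """Identity transformation (covariance): the relation is embedded as-is."""
    return set(rel)

def flipped(rel):
    """Upside-down reflection (contravariance): subtypes become supertypes."""
    return {(b, a) for (a, b) in rel}

def flattened(rel):
    """Flattening (invariance): formerly related, distinct types become
    unrelated, leaving an anti-chain (only reflexive pairs would survive)."""
    return {(a, b) for (a, b) in rel if a == b}

# Level-0 subtyping relation of the running example.
level0 = {("Null", "C<?>"), ("Null", "D<?>"), ("Null", "Object"),
          ("C<?>", "Object"), ("D<?>", "Object")}
print(flipped(level0))    # e.g. ("Object", "C<?>"): the order reverses inside '? super ...'
print(flattened(level0))  # empty: no two distinct level-0 types remain comparable
....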
due toa class in generic nominally - typed oo languages having a finite set of superclasses in the subclassing relation , subtyping in generic nominally - typed oo languages languages is not an ( everywhere ) dense relation , and the subclassing relation in these languages forms a simple finite basis ( the `` skeleton '' ) for constructing the subtyping relation . for structurally - typed oo languages ( where record types with functional members are a must , to model structural objects ) , this basis ( the `` skeleton '' ) is infinite and thus the `` fractal '' structural subtyping graph ( if indeed it is a fractal ) is not easy to draw or to even imagine . for more details on the technical and mathematical differences between nominally - typed and structurally - typed oop , the interested reader may consult .in this paper we presented an observation connecting subtyping in generic nominally - typed oo languages to fractals .we presented diagram for graphs of the subtyping relation demonstrating the iterative process of constructing the relation as a fractal .we further made an observation connecting the three variant subtyping rules in generic oop to the three transformations done on the graph of the relation for embedding inside the relation .we further noted some possible differences between generic nominally - typed oop and polymorphic structurally - typed oop as to the fractal nature of their subtyping relations .( see the appendix for some further notes , observations and conclusions that may be built on top of the observations and discussions we made , including a suggestive discussion on the use of algebraic equations to precisely describe the generic oo subtyping relation as a fractal ) . 10 c # language specification , version 3.0 .http://msdn.microsoft.com/vcsharp , 2007 .nova | hunting the hidden dimension - pbs .http://www.pbs.org/wgbh/nova/physics/hunting-hidden-dimension.html , 2011 .http://en.wikipedia.org/wiki/fractal , dec. 2014 . moez a. abdelgawad . .phd thesis , rice university , 2012 .moez a. abdelgawad . .scholar s press , 2013 .moez a. abdelgawad .an overview of nominal - typing versus structural - typing in object - oriented programming ( with code examples ) . technical report , arxiv.org:1309.2348 [ cs.pl ] , 2013 . moez a. abdelgawad . a domain - theoretic model of nominally - typed object - oriented programming .301:319 , 2014 . moez a. abdelgawad .a comparison of noop to structural domain - theoretic models of object - oriented programming . , 2016 .moez a. abdelgawad . towards an accurate mathematical model of generic nominally - typed oop ( extended abstract ) .moez a. abdelgawad . towards understanding generics. technical report , arxiv:1605.01480 [ cs.pl ] , 2016 .moez a. abdelgawad .why nominal - typing matters in oop . , 2016 .moez a. abdelgawad and robert cartwright . in nominally - typed oop , objects are not mere records and inheritance _ is _ subtyping . , 2016 .michael f barnsley . .courier dover publications , 2013 .richard bird et al ., volume 2 .prentice hall europe hemel hempstead , uk , 1998 .g. bracha and d. griswold .strongtalk : typechecking smalltalk in a production environment . in _ oopsla93 _ , pages 215230 , 1993 .k. bruce , a. schuett , r. van gent , and a. fiech .olytoil : a type - safe polymorphic object - oriented language ., 25(2):225290 , 2003 .robert cartwright and moez a. abdelgawad .inheritance _ is _ subtyping ( extended abstract ) . 
in _ the 25^th^ nordic workshop on programming theory ( nwpt ) _ , tallinn , estonia , 2013 .robert bruce findler , matthew flatt , and matthias felleisen .semantic casts : contracts and structural subtyping in a nominal world . in _ecoop 2004object - oriented programming _, pages 365389 .springer , 2004 .k. fisher and j. reppy .the design of a class mechanism for moby . in _ pldi _ , 1999 .james gosling , bill joy , guy steele , and gilad bracha . .addison - wesley , 2005 .douglas r. hofstadter . .basic books , second edition , 1999 .x. leroy , d. doligez , j. garrigue , d. rmy , and j. vouillon .the objective caml system .available at http://caml.inria.fr/. donna malayeri and jonathan aldrich . integrating nominal and structural subtyping . in _ecoop 2008object - oriented programming _ , pages 260284 .springer , 2008 .benoit b mandelbrot . fractals .martin odersky .the scala language specification , v. 2.7 .http://www.scala-lang.org , 2009 .klaus ostermann .nominal and structural subtyping in component - based programming . , 7(1):121145 , 2008 .benjamin c. pierce .. mit press , 2002 .david i spivak . .mit press , 2014 .the following notes and observations can be added to the ones we made in the main paper : 1 .relations on type intervals : for type intervals $ ] , where , with lowerbound ( ) and upperbound ( ) , two relations on intervals can be defined that can help in constructing the subtyping fractal : an interval containing another interval ( the _ contains _ relation : ) , and an interval preceding another interval ( the _ precedes _ relation : ) .the _ pruning transformation _ : bounds , _i.e. _ , lowerbounds or upperbounds , on a type parameter limit ( _ i.e. _ , decrease ) the types of level that can substitute the holes ( the ` ? `s ) when constructing a type in level , so pruning means that a substitution _ respects _ these declared bounds .demonstration software : an interactive mathematica program that demonstrates the iterative construction of the subtyping hierarchy , for multiple simple class hierarchies , up to four nesting levels is available upon request ( the program uses the ` manipulate ` function of mathematica 6 , is formatted as a mathematica 6 demo , and is in the mathematica .nb format , _i.e. _ , the file format mathematica has used as of 2007 . )multi - arity : generic classes with _ multiple _ type parameters simply result in types with multiple `` holes '' at the same nesting level for the same class .graph matrices : representing successive subtyping graphs as adjacency matrices ( 0 - 1 matrices ) is useful in computing ( paths in graph of ) the relation ( and in computing containment of intervals ) .( using , with binary addition and multiplication of matrices , to compute the transitive closure of the relation and thus paths / intervals over it ) .category theory : given the use of the notion of _ operads _ in category theory to model self - similarity , we intend to consider the possibility of using _ _ operads to express and communicate the fractal nature of the generic oo subtyping relation .algebraic equations : according to benoit mandelbrot , hermann well wrote that ` the angel of geometry and the devil of algebra share the stage , illustrating the difficulties of both . 
' turning to some algebra , we expect the graph of the subtyping relation to be described by a recursive equation , as is the case for many fractals .we anticipate this equation to be ( something along the lines of ) where stands for the initial graph ( the ` skeleton ' of the subtyping fractal , resulting from turning the subclassing relation into a subtyping relation by using ` ?` as the default type argument for generic classes ) , and the application of to its argument ( another graph ) means the _ substitution _ ( similar to -reduction in -calculus ) of its `` holes '' ( the ` ? ` s in its types / nodes ) with the argument graph ( _ i.e. _ , the graph which applies the three above - mentioned transformations to , and where means `` subtyping - respecting union '' of component graphs . ) 1 .more on algebraic equations : 1 . the in the equation _ _ above__i.e . __ , the graph of the first iteration of the subtying relation , which is directly based on the subclassing relation is what makes ( all iterations / approximations of ) the graph have the same structure `` when viewed from far '' , _i.e. _ , when zooming out of it , as the subclassing relation ) .2 . to construct approximations of iteratively, the equation can be interpreted to mean which means when constructing approximations to we construct elements of the sequence ... etc .3 . another seemingly - equivalent recursive equation for describing the subtyping graph is which , even though not in the more familiar format , has the advantage of showing that ( the limit , infinite graph ) is equivalent to ( isomorphic to ) substituting its own holes with transformations of , _i.e. _ , that the substitution does _ not _ affect the final infinite graph ( just as adding 1 to , the limit of natural numbers , does not affect its cardinality ; . )it also reflects the zooming - in fact ( opposite to the zooming - out fact above ) that when zooming - in into we find ( transformed copies of ) each time we zoom in , ad infinitum .( see note [ enu : more - levels / iterations :- to ] below for why we believe this third equation may in fact be _incorrect_. ) + 2 . algebraic equations with intervals : with intervals ,the equation above becomes simpler and more general , where , if is the function computing all the intervals over a graph , we then have or , most accurately , note that the three equations agree on defining .the three equations disagree however on later terms of the construction sequence .they , for example , define , , and , respectively .the equivalence of the three equations ( _ i.e. _ , of the resulting graph from each ) is unlikely , but a mathematical proof or a convincing intuitive proof of that is needed ( see note [ enu : more - levels / iterations :- to ] below , however ) .benefits and applications : an obvious benefit of the observation in this paper is to demonstrate one more ( unexpected ? ) place where fractals show up . yet an additional benefit , and practical application , of the observation may be to apply some of the theory developed for fractals to better the understanding of the subtyping relation in oo languages , possibly leading to providing a better understanding of their generic type systems and thus developing better oo language compilers .parameterizing classes ` object ` and ` null ` : at least one needs to be non - parameterized , if not both ? otherwise we may have an unbounded infinite ascending chain of supertypes ( see section [ sec : nomvsstruct ] . )( what will then be the meaning of ` ? 
` , and be the default type argument ? ) 5 .[ enu : more - levels / iterations :- to]more levels / iterations : to further demonstrate the fractal observation , and to help resolve which of the three equations above ( best ) describes the graph of the subtyping relation , we draw the level 2 graph using a simpler initial graph ( _ i.e. _ , the ` skeleton ' ) than we used for the earlier figures . see figure and figure . + + subtyping levels 0 , 1 with one generic class ( ` c ` ) ] + + + some notes on , , : 1 . , constructed as , has 32 nodes , and 66 edges . 2 . the number of levels in graphs , , , ...( _ i.e. _ , the maximum path length ) increases by two each time ( 2 , 4 , 6 , 8 , ... ) .this is clear in the diagrams , particularly ones with colored arrows .3 . the number of nodes and edges in , , , ... : 1 .nodes : **=3**=(2 + 1 ) , **=8**=(2 + 3 + 2 + 1 ) , **=32**=(2 + 8 + 10 + 7 + 4 + 1 ) , ... * * ? ? ?* * = ( 2 + 32 + 66+< ... 4 numbers ... >+1 ) .edges : **=2 * * , * * **=10 * * , * * **=66 * * , ... * ? ? ?algebraic equations : 1 . , as constructed above , is the same graph as ... ! !2 . thus , , meaning that , given , we have 3 .the skeptic reader may trying constructing the graph corresponding to the equation 4 . * proof * : each type / node constructed in is constructed in ( and vice versa , which is easy to see ) .same for proving , which means we have 5 .( an analogy ) something unknown becoming known . knowing it again does not add new .. philosophical observation , using ` old ' = , ` new ' = : 1 .new in new = new in old 2 .old in new != new in old 3 .old in old = new = old in new 5 .in addition to subgraphs highlighted in green and red ( which show an exact copy and a flipped copy , due to covariance and contravariance respectively ) of inside , figure also shows a miniature _ pruned _ flipped copy of inside , highlighted in blue ( due to bounded contravariance ) .
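the level-by-level construction discussed in these notes can also be mimicked mechanically. the toy program below is an assumption-laden sketch only: it handles a single generic class c, represents types as plain strings, and makes no attempt to merge syntactically distinct but equivalent spellings (such as ` ? extends object ` versus ` ? `), so its counts grow faster than the 3, 8, 32, ... node counts quoted above.

import java.util.LinkedHashSet;
import java.util.Set;

public class LevelEnumerator {

    // build the candidate spellings of level i+1 by substituting every level-i
    // type into the hole of C, once per variance rule
    static Set<String> nextLevel(Set<String> current) {
        Set<String> next = new LinkedHashSet<>();
        next.add("Object");
        next.add("Null");
        for (String t : current) {
            next.add("C<" + t + ">");             // invariant substitution
            next.add("C<? extends " + t + ">");   // covariant substitution
            next.add("C<? super " + t + ">");     // contravariant substitution
        }
        return next;
    }

    public static void main(String[] args) {
        // level 0 "skeleton": the subclassing relation with ? as the default argument
        Set<String> level = new LinkedHashSet<>(Set.of("Object", "C<?>", "Null"));
        for (int i = 1; i <= 3; i++) {
            level = nextLevel(level);
            System.out.println("level " + i + ": " + level.size() + " candidate spellings");
        }
    }
}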
|
while developing their software, professional object-oriented (oo) software developers keep in mind an image of the subtyping relation between the types in their software. the goal of this paper is to present an observation about the graph of the subtyping relation in java, namely that, after the addition of generics and of wildcards in particular, the graph of the subtyping relation is no longer a simple directed acyclic graph (dag), as in pre-generics java, but is rather a _fractal_. further, this observation applies equally to other mainstream nominally-typed oo languages (such as c#, c++ and scala) in which generics and wildcards (or some other form of `variance annotations') are standard features. accordingly, the shape of the subtyping relation in these oo languages is more complex than a tree or a simple dag, and is indeed also a fractal. given the popularity of fractals, the fractal observation may help oo software developers keep a useful and intuitive mental image of their software's subtyping relation, even if it is a somewhat more intimidating, and more remarkable, one than before. with proper support from ides, the fractal observation may also help oo developers resolve type errors found in their code in less time and with more confidence.
|
recently , there has been a growing interest in the design and analysis of wireless cooperative transmission protocols ( e.g. , - ) .these works consider several interesting scenarios ( e.g. , fading - vs - awgn channels , ergodic - vs - quasistatic channels , and full - duplex - vs - half - duplex transmission ) and devise appropriate transmission techniques and analysis tools , based on the settings . here, we focus on the delay - limited coherent channel and adopt the same setup as considered by laneman , tse , and wornell in .there , the authors imposed the half - duplex constraint ( either transmit or receive , but not both ) on the cooperating nodes and proposed several cooperative transmission protocols . in this setup , the basic idea is to leverage the antennas available at the other nodes in the network as a source of _ virtual _ spatial diversity .the proposed protocols in were classified as either amplify and forward ( af ) , where the helping node retransmits a scaled version of its soft observation , or decode and forward ( df ) , where the helping node attempts first to decode the information stream and then re - encodes it using ( a possibly different ) code - book .all the proposed schemes in used a time division multiple access ( tdma ) strategy , where the two partners relied on the use of orthogonal subspaces to repeat each other s signals .later , laneman and wornell extended their df strategy to the partners scenario .other follow - up works have focused on developing practical coding schemes that attempt to exploit the promised information theoretic gains ( e.g. , ) .as observed in , previously proposed cooperation protocols suffer from a significant loss of performance in high spectral efficiency scenarios .in fact , the authors of posed the following open problem : _`` a key area of further research is exploring cooperative diversity protocols in the high spectral efficiency regime . ''_ this remark motivates our work here , where we present more efficient ( and in some cases optimal ) af and df protocols for the relay , cooperative broadcast ( cb ) , and cooperative multiple - access ( cma ) channels . to establish the gain offered by the proposed protocols , we adopt the diversity - multiplexing tradeoff as our measure of performance .this powerful tool was introduced by zheng and tse for point - to - point multi - input - multi - output ( mimo ) channels in and later used by tse , viswanath , and zheng to study the ( non - cooperative ) multiple - access channel in . in the following ,we summarize the main results of this paper , some of which were initially reported in . 1 . for the single relay channel ,we establish an upper bound on the achievable diversity - multiplexing tradeoff by the class of af protocols .we then identify a variant within this class , referred to as the nonorthogonal amplify and forward ( naf ) protocol , that achieves this upper bound .we then propose a dynamic decode and forward ( ddf ) protocol and show that it achieves the _ optimal _ tradeoff for multiplexing gains '' will be defined rigorously in the sequel ] .furthermore , the ddf protocol is shown to outperform all af protocols for arbitrary multiplexing gains .finally , the two protocols ( i.e. , naf and ddf ) are extended to the scenario with relays where we characterize their tradeoff curves .notably , the naf protocol is shown to outperform the space - time coded protocol of laneman and wornell ( lw - stc ) without requiring decoding / encoding at the relays .2 . 
for the cooperative broadcast channel, we present a modified version of the ddf protocol to allow for reliable transmission of the common information .we then characterize the tradeoff curve of this protocol and use this characterization to establish its superiority compared to af protocols .in fact , we argue that the gain offered by the ddf is more significant in this scenario ( as compared to the relay channel ) .3 . for the symmetric multiple - access scenario, we propose a novel af cooperative protocol where an _ artificial _ inter - symbol - interference ( isi ) channel is created .we prove the optimality ( in the sense of diversity - multiplexing tradeoff ) of this protocol by showing that , for all multiplexing gains ( i.e. , ) , it achieves the diversity - multiplexing tradeoff of the corresponding point - to - point channel .one can then use this result to argue that the sub - optimality of the schemes proposed in was dictated by the use of orthogonal subspaces rather than the half - duplex constraint .we also utilize this result to shed more light on the fundamental difference between half - duplex relay and cooperative multiple - access channels . before proceeding further , a brief remark regarding two independent parallel works is in order . in , nabar , bolcskei andkneubuhler considered the half - duplex single - relay channel , under _ almost _ the same assumptions as in ( i.e. , the only difference is that , for diversity analysis , the relay - destination channel was assumed to be non - fading ) and proposed a set of af and df protocols . in one of their af protocols ( nbk - af ) , nabar _ et ._ allowed the source to continue transmission over the whole duration of the codeword , while the relay listened to the source for the first half of the codeword and relayed the received signal over the second half .this makes the nbk - af protocol identical to the naf protocol proposed in this paper . here, we characterize the diversity - multiplexing tradeoff achieved by this protocol while relaxing the assumption of non - fading relay - destination channel . using this analysis, we establish the optimality of this scheme within the class of linear af protocols .furthermore , we generalize the naf protocol to the case of arbitrary number of relays and characterize its achieved tradeoff curve . in , prasad andvaranasi derived upper bounds on the diversity - multiplexing tradeoffs achieved by the df protocols proposed in . in the sequel, we establish the gain offered by the proposed ddf protocol by comparing its diversity - multiplexing tradeoff with the upper bounds in .finally , we emphasize that , except for the single - relay naf protocol , all the other protocols proposed in this paper are novel . in this paper, we use to mean , to mean and to mean nearest integer to towards plus infinity . and denote the set of real and complex -tuples , respectively , while denotes the set of non - negative -tuples .we denote the complement of set , in , by , while means . denotes the identity matrix , denotes the autocovariance matrix of vector , and denotes the base- logarithm .the rest of the paper is organized as follows . in section [ back ] , we detail our modeling assumptions and review , briefly , some results that will be extensively used in the sequel .the half - duplex relay channel is investigated in section [ relay ] where we describe the naf and ddf protocols and derive their tradeoff curves . 
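for later reference, the single-relay tradeoff curves alluded to in the summary above can be written compactly. the expressions below are a hedged restatement rather than the formal theorems, and the precise statements (including the exact threshold on the multiplexing gain in the ddf optimality claim) should be taken from the analysis later in the paper:

    d_{\mathrm{naf}}(r) \;=\; (1 - r) + (1 - 2r)^{+} , \qquad 0 \le r \le 1 ,

    d_{\mathrm{ddf}}(r) \;=\; \begin{cases} 2(1 - r) , & 0 \le r \le 1/2 , \\ (1 - r)/r , & 1/2 \le r \le 1 , \end{cases}

where the first branch of the ddf curve coincides with the 2x1 transmit-diversity bound d(r) = 2(1 - r), which is the optimality claim made above.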
in section [ broad ] , we extend the ddf protocol to the cooperative broadcast channel .section [ mac ] is devoted to the cooperative multiple - access channel where we propose a new af protocol and establish its optimality , in the symmetric scenario , with respect to the diversity - multiplexing tradeoff . in section [ num ] , we present numerical results that show the snr gains offered by the proposed schemes in certain representative scenarios .finally , we offer some concluding remarks in section [ conc ] . to enhance the flow of the paper , we collect all the proofs in the appendix .first we state the general assumptions that apply to the three scenarios considered in this paper ( i.e. , relay , broadcast , and multiple - access ) . assumptions pertaining to a specific scenario will be given in the related section . 1 . all channels are assumed to be flat rayleigh - fading and quasi - static , i.e. , the channel gains remain constant during a coherence - interval and change independently from one coherence - interval to another .furthermore , the channel gains are mutually independent with unit variance . the additive noises at different nodes are zero - mean , mutually - independent , circularly - symmetric and white complex - gaussian . furthermore , the variances of these noises are proportional to one another such that there will always be _ fixed _ offsets between the different channels signal to noise ratios ( snrs ) .all nodes have the same power constraint , have a single antenna , and operate synchronously .only the receiving node of any link knows the channel gain ; no feedback to the transmitting node is permitted ( the incremental relaying protocol proposed in can not , therefore , be considered in our framework ) .following in the footsteps of , all cooperating partners operate in the half - duplex mode , i.e. , at any point in time , a node can either transmit or receive , but not both .this constraint is motivated by , e.g. , the typically large difference between the incoming and outgoing signal power levels .though this half - duplex constraint is quite restrictive to protocol development , it is nevertheless assumed throughout the paper .3 . throughout the paper , we assume the use of random gaussian code - books where a codeword spans the entire coherence - interval of the channel . furthermore , we assume asymptotically large code - lengthes .this implies that the diversity - multiplexing tradeoffs derived in this paper , serve as upper - bounds for the performance of the proposed protocols with finite code - lengths .results related to the design of practical coding / decoding schemes that approach the fundamental limits established here will be reported elsewhere .next we summarize several important definitions and results that will be used throughout the paper . 1 .the snr of a link , , is defined as where denotes the average energy available for transmission of a symbol across the link and denotes the variance of the noise observed at the receiving end of the link .we say that is _ exponentially equal to _ , denoted by , when in , is called the _ exponential order _ of . and defined similarly .2 . 
consider a family of codes indexed by operating snr , such that the code has a rate of bits per channel use ( bpcu ) and a maximum likelihood ( ml ) error probability .for this family , the _ multiplexing gain _`` '' and the _ diversity gain _`` '' are defined as 3 .the problem of characterizing the optimal tradeoff between the reliability and throughput of a point - to - point communication system over a coherent quasi - static flat rayleigh - fading channel was posed and solved by zheng and tse in . for a mimo communication system with transmit and antennas , they showed that , for any , the optimal diversity gain is given by the piecewise linear function joining the pairs for , provided that the code - length satisfies .we say that protocol _ uniformly dominates _protocol if , for any multiplexing gain , . 5 .assume that is a gaussian random variable with zero mean and unit variance .if denotes the exponential order of , i.e. , then the probability density function ( pdf ) of can be shown to be : careful examination of the previous expression reveals that thus , for independent random variables distributed identically to , the probability that belongs to set can be characterized by provided that is not empty . in other words ,the exponential order of only depends on .this is due to the fact that the probability of any set , consisting of -tuples with at least one negative element , decreases exponentially with snr and therefore can be neglected compared to which decreases polynomially with snr .consider a coherent linear gaussian channel where a random gaussian code - book is used .the pairwise error probability ( pep ) of the ml decoder , denoted as , averaged over the ensemble of random gaussian codes , is upper bounded by where and denote the signal and noise components of the observed vector , respectively ( i.e. , ) .in this section , we consider the relay scenario in which relays help a single source to better transmit its message to the destination . asthe vague descriptions `` help '' and `` better transmit '' suggest , the general relay problem is rather broad and only certain sub - problems have been studied ( for example see ) . in this work, we focus on two important classes of relay protocols .the first is the class of amplify and forward ( af ) protocols , where a relaying node can only process the observed signal linearly before re - transmitting it .the second is the class of decode and forward ( df ) protocols , where the relays are allowed to decode and re - encode the message using ( a possibly different ) code - book . herewe emphasize that , a priori , it is not clear which class ( i.e. , af or df ) offers a better performance ( e.g. , ) .we first consider the single relay scenario ( i.e. , ) .for this scenario , we derive the optimal diversity - multiplexing tradeoff and identify a specific protocol within this class , i.e. , the naf protocol , that achieves this optimal tradeoff .we then extend the naf protocol to the general case with an arbitrary number of relays . under the half - duplex constraint, it is easy to see that any single - relay af protocol can be mathematically described by some choice of the matrices , , and in the following model in , represents the vector of observations at the destination , the vector of source symbols , the vector of noise samples ( of variance ) observed by the relay , and the vector of noise samples ( of variance ) observed by the destination . 
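the display that should accompany this description is missing from this copy. one plausible way of writing such a linear half-duplex af model, stated purely as an assumption consistent with the prose (with y the destination observations, x the source symbols, v and w the relay and destination noise vectors, and h_{sr}, h_{sd}, h_{rd} the channel gains introduced next), is

    y \;=\; h_{sd}\, A\, x \;+\; h_{rd}\, B \left( h_{sr}\, C\, x + v \right) \;+\; w ,

so that, loosely, C selects what the relay hears, B is the linear processing the relay applies before retransmitting, and A describes the source's own transmission as seen on the direct link.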
the variables , and denote the source - relay channel gain , source - destination channel gain , and relay - destination channel gain , respectively . and are diagonal matrices .in this protocol , the source can potentially transmit a new symbol in every symbol - interval of the codeword , while the relay listens during the first symbols and then , for the remaining symbols , transmits linear combinations of the noisy observations using the coefficients in .in fact , by letting , , and ( with denoting the relay repetition gain ) , we obtain laneman - tse - wornell amplify and forward ( ltw - af ) protocol .finally , we note that when the source symbols are independent , the average energy constraint translates to where ] , ^t ] , and note that , according to the svd theorem , because is diagonal , ( and therefore ) is maximized when are mutually independent , in which case we would have the mutual information between and is given by a lower - bound on is easily obtained by replacing by : since is an increasing function on the cone of positive - definite hermitian matrices and since ( where represents the largest eigenvalue of ) , we get the following upper - bound on : from and , we conclude that now , since is of the same exponential order as , the bounds converge as grows to infinity .that is plugging in for and from and , respectively , we get it is then straightforward to see that where and are the exponential orders of , and , respectively . in deriving this expression ,we have assumed that ; as explained earlier , we do not need to consider realizations in which , or are negative .similarly , which , together with , and , results in : for the quasi - static fading setup , the outage event is defined as the set of channel realizations for which the instantaneous capacity falls below the target data rate. thus , our outage event becomes letting grow with according to and using , we conclude that , for large , and thus as zheng and tse have shown in lemma 5 of , provides an upper - bound on ( i.e. , the optimal diversity gain at multiplexing gain ) : from and , it is easy to see that the right hand side of is maximized when is set to its maximum , which , according to , is .this is the case when is full - rank . on the other hand , itself is maximized when ( assuming an even codeword length ) , which corresponds to being a square matrix . for this , can be shown to take the value of the right hand side of .this completes the proof .the proof closely follows that for the mimo point - to point communication system in .in particular , we assume that the source uses a gaussian random code - book of codeword length , where is taken to be even , and data rate , where increases with according to the error probability of the ml decoder , , can be upper bounded using bayes rule : where denotes the outage event .the outage event is chosen such that dominates , i.e. , in which case in order to characterize , we note that , since the destination observations during different frames are independent , the upper - bound on the ml conditional pep [ recalling ] , assuming to be even , changes to where and denote the covariance matrices of destination observation s signal and noise components during a single frame : latexmath:[\ ] ] examining , we realize that for to be met , should be defined as the set of all real -tuples with nonnegative elements that satisfy the following condition for at least one nonempty : this way , by choosing large enough , can be made arbitrary large and thus is always met . 
from , it follows that substituting in this expression by gives on the other hand , replacing in by results in under the constraints given by and , it is easy to see that now , from and , it follows that again , according to , provides a lower - bound on the diversity gain achieved by the protocol .thus the protocol achieves the diversity gain given by and the proof is complete .a. sendonaris , e. erkip and b. aazhang , `` user cooperation diversity .implementation aspects and performance analysis , '' _ ieee transactions on on communications _ ,page(s ) : 1939- 1948 volume : 51 , issue : 11 , nov .2003 j. n. laneman and g. w. wornell , `` distributed space - time - coded protocols for exploiting cooperative diversity in wireless networks , '' _ ieee transactions on information theory _, volume : 49 , issue : 10 , oct .2003 pages:2415 - 2425 m. janani , a. hedayat , t. hunter , and a. nosratinia , `` coded cooperation in wireless communications : space - time transmission and iterative decoding , '' _ ieee transactions on signal processing _ ,volume : 52 , issue : 2 , feb .2004 pages:362 - 371 r. u. nabar , h. bolcskei , f. w. kneubuhler , `` fading relay channels : performance limits and space - time signal design , '' _ ieee journal on selected areas in communications _ , volume : 22 , issue : 6 , aug .2004 pages:1099 - 1109 k. azarian , h. el gamal , and p. schniter , `` on the achievable diversity - vs - multiplexing tradeoff in cooperative channels , '' _ proc .conference on information sciences and systems , ( princeton , nj ) _ , mar .2004 .k. azarian , h. el gamal , and p. schniter , `` on the achievable diversity - multiplexing tradeoff in half duplex cooperative channels , '' _ proc .allerton conf . on communication , control , and computing , ( monticello , il ) _ , oct .k. azarian , h. el gamal , and p. schniter , `` achievable diversity - vs - multiplexing tradeoffs in half - duplex cooperative channels , '' _ proc .ieee information theory workshop , ( san antonio , tx ) _ , oct .
|
in this paper, we propose novel cooperative transmission protocols for delay-limited coherent fading channels consisting of (half-duplex and single-antenna) partners and one cell site. in our work, we differentiate between the relay, cooperative broadcast (down-link), and cooperative multiple-access (up-link) channels. the proposed protocols are evaluated using the zheng-tse diversity-multiplexing tradeoff. for the relay channel, we investigate two classes of cooperation schemes; namely, amplify-and-forward (af) protocols and decode-and-forward (df) protocols. for the first class, we establish an upper bound on the achievable diversity-multiplexing tradeoff with a single relay. we then construct a new af protocol that achieves this upper bound. the proposed algorithm is then extended to the general case with an arbitrary number of relays, where it is shown to outperform the space-time coded protocol of laneman and wornell without requiring decoding/encoding at the relays. for the class of df protocols, we develop a dynamic decode-and-forward (ddf) protocol that achieves the optimal tradeoff for sufficiently small multiplexing gains. furthermore, with a single relay, the ddf protocol is shown to dominate the class of af protocols for all multiplexing gains. the superiority of the ddf protocol is shown to be more significant in the cooperative broadcast channel. the situation is reversed in the cooperative multiple-access channel, where we propose a new af protocol that achieves the optimal tradeoff for all multiplexing gains. a distinguishing feature of the proposed protocols in the three scenarios is that they do not rely on orthogonal subspaces, allowing for a more efficient use of resources. in fact, using our results one can argue that the sub-optimality of previously proposed protocols stems from their use of orthogonal subspaces rather than from the half-duplex constraint.
|
astronomical research is benefiting from the use of large searchable catalogs made possible by advances in detector and computer technology . with the rapidly growing capacity of ccds ,astronomical objects are being detected at an increasingly fast rate .a major challenge to astronomy is being able to synthesize this large amount of data into an organized set of information that takes the form of a catalog .the catalog describes not only the properties of the images , but more importantly the sources detected within them . in its simplest usage , one can quickly determine whether an object is found in some region of space and examine its listed properties .but there are many other possible uses of a large catalog .for example ,non - positional ( all sky ) searches for objects with particular properties can provide statistical information on large numbers of objects .objects that lie at the extremes of the distributions provide the potential for new discoveries .advances in relational database technology have provided a means for building , storing , and searching large catalogs .this technology alone , however , is not sufficient .one reason is that the rate at which data is becoming available from detectors is on a path to exceed storage and processing capabilities of computers for a large class of problems .new approaches to data organization and new algorithms are needed to deal with the challenges of a telescope such as hubble space telescope ( hst ) . in the case of the hst ,some of these arise from its small field of view and the complex geometry of its overlapping exposures .crossmatching of detected sources from images taken at different wavelengths and at different epochs is a crucial capability for catalog construction . by matching source detections across multiple images to a single astronomical object , onecan then determine spectral and temporal properties of the object .the statistical methodology introduced by provides a clean framework for determining symmetric -way associations , and has been successfully applied in several studies including , , , and .performing crossmatching on hst observations also involves adjusting the positions of the images to place them into better alignment . in practice , these two are related , since the accuracy of the alignment is determined by how well the sources match together .one approach to crossmatching involves registering images against a known catalog , such as guide star catalog ( gsc ii ; ) or the sloan digital sky survey ( sdss ; * ? ? ?* ) skyserver database . in this approach ,the absolute position information in the catalog is used to anchor positions of matching sources detected in the images .sources from different images can then be compared .the drawback to this method is that few or none of the sources in the catalog may be in the image .this situation is particularly true of images taken by the hst which has a small field of view . on the other hand, it often detects many more sources than have been previously detected . a different approach , the one we adopt here , is to crossmatch the many sources in overlapping images taken by hst against each other , rather than against an existing catalog .the process involves adjusting the positions of the overlapping images to improve the residual errors in the crossmatching . 
in this case ,relative , rather than absolute , astrometry is determined involving many detected sources .absolute astrometric positioning can then later be determined by matching the set of overlapping images as a larger unit against the absolute standards .some astronomical projects , such the sdss , were designed with the goal of providing a catalog .the observations are made in a way that uniformly tiles regions of sky in certain filter bands and at certain regular time intervals .on the other hand , the hst as well as other major space observatories , have generally targeted particular sources , although particular programs have undertaken surveys for very limited regions of the sky . for more than two decades , images have been taken for many independent programs , resulting in a sparse , complex geometry of sky coverage and in irregular time intervals between observations of objects . in many cases ,the coverage involves partially overlapping exposures with a variety of outline sizes and shapes , orientations , filters , and exposure times .the hst has provided some of the highest resolution images ever obtained .therefore , in spite of the challenges , there is a potentially important scientific gain by having a catalog for hst . the hubble legacy archive ( hereafter hla ; * ?* ) provides enhanced data products and advanced browsing capabilities online .its products include lists of detected sources and their photometric properties .these source lists are obtained running daophot and source extractor software on combined , drizzled images .each of these images is the result of combining exposures for a single instrument , detector and filter from a single visit ( pointing of hst ) .these source lists contain information about source positions , fluxes , magnitudes , morphology , etc .the source lists and auxiliary hla data provide the needed input information to implement our algorithms for the hubble catalog . by crossmatching the source lists based on these images , we obtain multi - wavelength time - domain information about astronomical objects .in this paper we describe some novel approaches to crossmatching with application to the construction of a catalog for hst . in section[ sec : xmatch ] we describe general algorithms for image clustering , astrometric correction , and source aggregation into matches .section [ sec : catalog ] describes the pipeline that we have constructed to build an hst catalog based on these algorithms .section [ sec : results ] contains an analysis of the properties of this catalog for acs / wfc and wfpc2 .section [ sec : summary ] concludes our study .to create a reliable set of associations , we need high - precision astrometry .solutions exist for the global world - coordinate system ( wcs ; * ? ? ?* ) astrometric determination of large images ( e.g. , * ? ? ?. however , small images are still difficult to work with , especially when the sources are very faint , since they typically do not contain a sufficient number of calibration standard objects .the only possibility is to cross - calibrate the set of small overlapping images to obtain improved relative astrometry .once several of the small images are locked in and tied together , the number of available standards will increase that enables a more accurate absolute positioning .first we discuss how to cross - calibrate two or more images to improve their relative accuracy . 
later in the section ,we describe a bayesian method that determines the matching sources from sets of many nearby detections in overlapping images .crossmatching millions of sources contained in many irregularly placed images on the sky is a computationally challenging task .a substantial reduction in computational overhead is obtained by determining disjoint sets of overlapping images through a friends - of - friends ( fof ) algorithm .we use the term _ mosaic _ to describe each of these disjoint sets .since the mosaics are disjoint , the source matching is carried out independently within each and every one of them .the first step in creating sets of matched sources involves the identification of pairs of sources that reside in different images and are close together on the celestial sphere . a tolerance is then needed to be applied that depends on the accuracy of the relative astrometry .given the set of matched pairs in a mosaic , determined with some tolerance in separation , we next adjust the relative astrometry of the images that make up the mosaic , in order to reduce the separations of sources in the pairs .the traditional approach is to apply corrections to the world - coordinate system , which is often very expensive computationally , and involves many trigonometric function evaluations .here we choose a different approach , which is faster to calculate , and can accomplish multiple objectives in just one step .it is common practice to consider translations and rotations of the images in their x - y plane .our approach , however , is to use three - dimensional rotations on the celestial sphere .such transformations can account for both rotation and translation locally in the tangent plane .when the axis of the 3-d rotation is parallel with the pointing of a given image , the transformation indeed corresponds to a 2-d rotation .but if the axis is perpendicular to the pointing , the 3-d rotation results in a translation in the tangent plane . if we allow for any direction , a single transformation can describe a combined effect . herewe work with 3-d normal vectors , which is often the preferred way in spherical calculations .we first consider an idealized problem where a single image is rotated in 3-d to minimize the separations of its sources from those matched to a fixed reference image .let represent the direction of the detection in the image , and let be a matching reference direction .we can form a set of pairs to be used in the astrometric correction .now we have to solve an optimization problem for the 3-d rotation .the transformed position would be .hence the optimization formally is ^ 2 \right\ } \label{mr}\ ] ] where is the precision parameter that is related to the accuracy via .much like the potential energy stored in springs , our cost is quadratic in the displacements , and can be thought of as a system of springs , where the spring constants are proportional to the weights .the solution is the equilibrium position of the sphere with a fixed center , which is given by the rotation matrix .this general optimization , however , is computationally expensive . 
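written out explicitly, and with the caveat that the exact original notation is inferred here from the prose, the single-image problem of equation (mr) has the form

    \hat{R} \;=\; \arg\min_{R}\, \left\{ \sum_i w_i \left[\, R\, x_i - c_i \,\right]^2 \right\} , \qquad w_i = 1/\sigma_i^2 ,

where x_i are the measured unit vectors, c_i the matched reference directions, and the minimization runs over 3-d rotation matrices R.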
for this reason ,working with orthonormal matrices is not practical .but for small corrections , _ infinitesimal rotations _ are ideally suited .infinitesimal rotations have many advantageous mathematical properties over general 3-d rotations .for example , they are commutative .let represent the axis and the angle of the rotation .the axis is defined by the direction and the angle is the length of the vector .the infinitesimal transformation is then considering that the change is perpendicular to and small in amplitude , the resulting vector stays approximately normal to quadratic order in .the optimization problem with the infinitesimal rotation becomes analytically tractable .the cost function , whose minimization yields the estimate , is now not only quadratic in the displacement but also in its parameter , ^ 2 \right\}. \label{vomhat}\ ] ] by requiring that the gradients equal zero , we obtain a 3-d linear equation which is a result of the summations where is the identity matrix , and is the operator of the dyadic product . a derivation of equation ( [ omeq ] )is given in appendix [ app1 ] .in keeping with the spring analogy described below equation ( [ mr ] ) , equation ( [ omeq ] ) can be interpreted as a torque equation .vector can be considered an angular acceleration vector of a sphere at an early time after its release that is related to its angular displacement at this early time .the sphere that has springs connecting each point of mass located at position to the fixed standard located at .matrix is the moment of inertial tensor for sources that lie on the unit sphere that is subject to a torque given by .notice that in the equation for , quantity can be replaced by , which is the force due to the spring . since is a matrix , equation ( [ omeq ] ) has a simple analytic solution for , e.g. , by applying of cramer s rule for matrix inversion .in practice , the matrix inversion can be performed numerically by one of several possible methods .the cost of the aggregation is linear in the number of pairs , and the matrix inversion involves a small number of operations .therefore , the computational overhead for this approach is likely to be close to optimal .so far we assumed that there is a set of calibrators in a fixed , perfect reference image , which is clearly not true in our case .instead we have imprecise source positions contained in groups of overlapping observations , the mosaics . in each mosaicwe want to correct every one of the images , but the combined minimization problem is not as simple as it is for a single image . since we are only concerned with relative precision , different heuristic approaches come to mind to work around the lack of a true set of reference positions . for example , one of the images could be singled out to act as a reference frame onto which all others are corrected . 
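before turning to how a whole mosaic is handled (taken up next), it is worth recording hedged reconstructions of the displays referred to above as (vomhat) and (omeq); once more, the exact original notation is inferred from the prose. the infinitesimal transformation and the quadratic cost are

    x' \;=\; x + \omega \times x , \qquad
    \hat{\omega} \;=\; \arg\min_{\omega}\, \sum_i w_i \left[\, x_i + \omega \times x_i - c_i \,\right]^2 ,

and setting the gradient to zero yields the 3x3 linear system

    \left[ \sum_i w_i \left( I - x_i x_i^{T} \right) \right] \omega \;=\; \sum_i w_i \left( x_i \times c_i \right) ,

in which the bracketed matrix is the moment-of-inertia tensor mentioned above, the right-hand side the torque, and x_i \times c_i may equivalently be written x_i \times (c_i - x_i).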
in a group of images , however , there is no guarantee that there is a single image with which all others would overlap , because a mosaic consists of friends of friends .another option would be to use the average positions of a preliminary match .we follow a third approach where the correction of a given image is based on all the other images .one - by - one we consider each image and derive their corrections independently , effectively assuming that all the others are perfect .hence we use them as calibrators .the matched pairs being applied to equation ( [ omeq ] ) are then all the pairs for the mosaic involving the image being corrected .having derived the infinitesimal rotation for all images , we update the source positions with the appropriate corrections and repeat until convergence .the vectors are accumulated , i.e. , summed up , for each image ( cf .commutativity of infinitesimal rotations ) and saved for future reference .this also enables us to safely work with clean samples of stars , but apply the correction for all sources afterwards .one can also use this information to correct the alignment of the images in their wcs headers to reflect the changes .now we use the astrometrically improved coordinates to perform a final cross - identification .this time we apply a smaller tolerance than the initial crossmatch , and again produce matched pairs of sources , as discussed in section [ sec : pairs ] .once all close source pairs are identified across images , the next step is to run a friends - of - friends ( fof ) algorithm , also known to statisticians as single - linkage hierarchical clustering , to link all nearby detections into singly connected graphs .such clustering is very efficient and can be easily implemented in any programming environment .the fof groups enumerate all the sources that can be linked together using a specified pairwise distance threshold .there is no guarantee , however , that this algorithm provides statistically meaningful matches .for example , by linking nearby sources along a line , we can potentially construct very long chains whose ends are far away from each other .these fof matches are just considered preliminary and need to be studied further .we note that other hierarchical clustering methods also exist , whose performance depends on the nature of the data and the goals of the project . herewe look for clusters only to provide candidates for the cross - identification .preliminary experiments with several off - the - shelf tools , including average and complete linkage , were performed on select areas without noteworthy accuracy or speed improvements .these other tools also provide clusters with a wide range of sizes and shapes . for cross - matching, we are interested only in identifying isolated compact clusters . 
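a minimal sketch of this friends-of-friends (single-linkage) grouping is given below in java as an illustration only (the production code of the later sections lives inside the database); the pairwise distance test is assumed to have been done already, so the input is simply the list of detection pairs that fall within the linking length. the same routine, applied to overlapping image pairs instead of detection pairs, also yields the image mosaics introduced earlier. the bayesian assessment of the resulting groups is the subject of the next passage.

import java.util.*;

public class FriendsOfFriends {

    private final int[] parent;

    public FriendsOfFriends(int nDetections) {
        parent = new int[nDetections];
        for (int i = 0; i < nDetections; i++) parent[i] = i;
    }

    private int find(int i) {                 // union-find with path compression
        while (parent[i] != i) { parent[i] = parent[parent[i]]; i = parent[i]; }
        return i;
    }

    private void union(int a, int b) { parent[find(a)] = find(b); }

    // link every pair closer than the tolerance and return the resulting groups
    public Collection<List<Integer>> groups(int[][] closePairs) {
        for (int[] p : closePairs) union(p[0], p[1]);
        Map<Integer, List<Integer>> byRoot = new HashMap<>();
        for (int i = 0; i < parent.length; i++)
            byRoot.computeIfAbsent(find(i), k -> new ArrayList<>()).add(i);
        return byRoot.values();
    }

    public static void main(String[] args) {
        // detections 0-1-2 chain together; 3 and 4 form a separate pair; 5 is isolated
        int[][] pairs = { {0, 1}, {1, 2}, {3, 4} };
        System.out.println(new FriendsOfFriends(6).groups(pairs));
    }
}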
to accomplish this, we apply a bayesian method to evaluate the likelihood of various possible clusters based on the separation of the members and the estimated positional errors .the question is whether breaking up the fof groups into smaller ones could yield better models , and if the answer is yes , which configuration would the best .we can address the issue by applying a bayesian model comparison .if represents the entire set of positions in a given fof group , we can consider alternative hypotheses , where is partitioned into disjoint components that correspond to separate objects .the baseline case is when we have a single component .but there could be as many components as the number of detections .in general , let represent a hypothesis with components with model directions for .also let be the list of sources that are assigned to each component and their measured directions such that and similar to the problem of probabilistic cross - identification discussed by , the likelihood of can be derived from the astrometric uncertainties and the prior on the directions of the objects .in general , it is essentially just the product of the individual matches the calculation is analytic if the uncertainty is modeled with the spherical normal distribution and the prior is isotropic where is the dirac -function. we can derive the improvement over the case when all detections are considered separate , cf . . in the limit of high astrometric accuracies ,, the bayes factor becomes where is the angular separation between the and vectors , and the unmarked and summations are over the corresponding sets .note that this equation is also valid for singleton groups with only one detection because the summation includes the = cases , for which the separation is 0 by definition .other subdivisions of an fof group can be evaluated against each other using the corresponding bayes factors , the ratios of their likelihoods , which can be directly evaluated from the previous formula .for example , comparing and yields where the baseline case simply cancels out .when the value is 1 , the case is indecisive but if , the measurements prefer the hypothesis ; otherwise is favored . while the statistical approach for selecting the best components is clear in principle , the problem is still impossible to fully solve in practice due to its combinatorial nature . for fof associations of , say , only 20 sources , the computational cost of exploring all possible combinations is simply prohibitively large .thus we resort to a greedy algorithm that essentially corresponds to a shrinking pairwise distance threshold .each fof group is represented as a connection graph , where the edges link the nearby sources .we build the initial graph of the group , and evaluate its =1 baseline likelihood .next we repeatedly break the longest edge until the graph splits .we then evaluate the new hypothesis of these two components ( =2 ) against the baseline .if the new model is more likely , i.e. , the bayes factor is greater than unity , we save it along with ( the logarithm of ) its likelihood .afterwards we repeat the same steps to further break the subgraphs until we end up with separate objects with no connections at all .the result is a set of possible associations that are better than the original . 
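the greedy search just described can be sketched as follows. this is an illustration only: componentScore below is a placeholder that merely penalizes large weighted separations within a component, and the actual (log) bayes factor derived in this section should be substituted in its place. the choice among the saved configurations is discussed next.

import java.util.*;

public class ChainBreakerSketch {

    // placeholder score per component; substitute the bayes factor of the text
    static double componentScore(List<Integer> m, double[][] psi, double[] w) {
        double wSum = 0, quad = 0;
        for (int i : m) wSum += w[i];
        for (int i : m) for (int j : m) quad += w[i] * w[j] * psi[i][j] * psi[i][j];
        return -quad / (4.0 * wSum);
    }

    // connected components of the detections under the currently active edges
    static List<List<Integer>> components(int n, List<int[]> edges) {
        Map<Integer, List<Integer>> adj = new HashMap<>();
        for (int[] e : edges) {
            adj.computeIfAbsent(e[0], k -> new ArrayList<>()).add(e[1]);
            adj.computeIfAbsent(e[1], k -> new ArrayList<>()).add(e[0]);
        }
        boolean[] seen = new boolean[n];
        List<List<Integer>> comps = new ArrayList<>();
        for (int s = 0; s < n; s++) {
            if (seen[s]) continue;
            List<Integer> comp = new ArrayList<>();
            Deque<Integer> stack = new ArrayDeque<>(List.of(s));
            seen[s] = true;
            while (!stack.isEmpty()) {
                int v = stack.pop();
                comp.add(v);
                for (int u : adj.getOrDefault(v, List.of()))
                    if (!seen[u]) { seen[u] = true; stack.push(u); }
            }
            comps.add(comp);
        }
        return comps;
    }

    // repeatedly drop the longest remaining edge and keep the best-scoring partition
    static List<List<Integer>> split(int n, List<int[]> edges, double[][] psi, double[] w) {
        List<int[]> active = new ArrayList<>(edges);
        active.sort(Comparator.comparingDouble(e -> psi[e[0]][e[1]]));
        List<List<Integer>> best = components(n, active);
        double bestScore = best.stream().mapToDouble(c -> componentScore(c, psi, w)).sum();
        while (!active.isEmpty()) {
            active.remove(active.size() - 1);                  // break the longest edge
            List<List<Integer>> parts = components(n, active);
            double score = parts.stream().mapToDouble(c -> componentScore(c, psi, w)).sum();
            if (score > bestScore) { bestScore = score; best = parts; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] w = { 1e10, 1e10, 1e10 };
        double[][] psi = { { 0, 2e-7, 9e-7 }, { 2e-7, 0, 9e-7 }, { 9e-7, 9e-7, 0 } };
        List<int[]> edges = List.of(new int[] { 0, 1 }, new int[] { 1, 2 });
        System.out.println(split(3, edges, psi, w));
    }
}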
out of thesewe can pick the most likely .the hla contains metadata extracted from hst images .they are stored in a commercial microsoft sql server 2008 database .the overall design builds on the success of the sloan digital sky survey s skyserver archive and applies a number of its extensions developed for spatial searches and spherical geometry . in this section , we describe our implementation of the aforementioned algorithms that runs completely inside the database engine , which in turn provides efficient parallel execution and rapid access to derived data products .the hst pointings generally do not follow any particular tiling pattern on the sky . during its long life ,the hubble telescope has taken over a half - million exposures date , and many of these overlap with others from different programs .we can determine the overlapping exposures based on their intersecting sky coverage .the geometry of all visits is stored inside the database using the spherical library , along with its sql routines and spatial indexing facilities .the hla contains source lists that are based on applying combined drizzled image files to daophot and source extractor software .they provide positional and other information for each detected source .we have carried out the crossmatching on the set of source extractor source lists available from dr6 of the hla for acs / wfc and wfpc2 taken together .furthermore , we only consider sources that are determined to be good quality by source extractor , which means that the flags value is less than 5 .the crossmatching process operates on each set of source lists in a mosaic .the absolute astrometry of the hla images is determined by the pointing of accuracy of hst , which is about 1 to 2 arc - sec in each coordinate. the hla attempts to correct the absolute astrometry further by matching against three catalogs : guide star catalog 2 ( gsc2 ) , 2mass , and the sloan digital sky survey ( sdss ) catalog .the typical absolute astrometric accuracy of the hla - produced images is .3 arcsecs in each coordinate .the majority of images , about 80% of acs / wfc , are corrected in this way .the source detections made in the hla are obtained in two steps .first , all images within a visit ( hst pointing ) for a given detector are combined to produce a `` white - light '' detection image , the deepest image possible for the visit .source extractor is then run on the detection image to identify source positions . in the second step , source extractor is run on each `` color '' ( single filter ) combined image within the visit for each detector to find sources at the positions identified by the detection image .if a source is found at an indicated position , then its properties are determined and stored in a database table .there are currently about 49,000 source lists containing about 46 million source detections .given the above procedure for extracting sources , it makes no sense to crossmatch them in different filters for the same detector and visit .the reason is that , for a given detector and visit , color sources are constructed to have exactly the same position when they exist in different filters .therefore , crossmatching must be carried out across visits and detectors . to do this , for each detector , we need to use visit level white - light source lists and the geometric outlines for the visit level combined images . 
the geometric outlines are used to determine the mosaics described in section [ sec : pairs ] .both the white light source lists and the geometric outlines are determined from available source lists and image metadata that reside in the hla database . fig .[ fig : mossize ] plots the distribution of the number of white - light images within a mosaic . for smaller mosaics ,the results follow a power law decline in frequency with number of images .the fall - off occurs with an index of about -2.5 and approaches unity at around 50 images .there is , however , a long tail that extends beyond that to almost 2000 images .this tail involves many of the large hst programs , such as the ultra deep field ( udf ) .much of the computational challenge lies in the crossmatching within the mosaics of this tail .mosaics are constructed by the application of the algorithms described in section [ sec : pairs ] .the determination of mosaics is efficiently accomplished by using the hierarchical triangular mesh indexing ( htm ; * ? ? ?* ) to determine white - light images that are close to a given image and by using the spherical library to determine whether the images actually overlap .the pipeline completes the cross - matching on mosaics sequentially .that is , the crossmatching of sources is carried out for one mosaic at a time . within each mosaic ,the pipeline determines pairs of white - light sources whose separations are less than 0.3 arc - sec .this threshold is required to accommodate some of the more significant systematic offsets between the images .larger offsets are currently not considered in this study .this matching can be carried out efficiently in an sql database that is indexed on ra and dec .using the white - light source pairs , the pipeline carries out astrometric corrections of the images within a mosaic using the algorithms described in section [ sec : astrometry ] . the astrometric correction algorithm that uses infinitesimal 3-d rotationsis implemented in the c # programming language and is integrated directly into the database .the query that selects sources for astrometric correction is limited to detections that are likely stars based on the morphology . despite the risk presented by the resolved objects, we also experimented with using the both resolved and unresolved sources also , which has the advantage of having more sources .but we obtained similar global results in most cases .the details of the sample selection for the correction might change in the future .as described in section [ sec : astrometry ] , the astrometric correction procedure iterates through all images in a mosaic and corrects them one - by - one assuming the others are correct . the initial large angular separation limit is decreased over time . in this way , any possible wrong associations ( due to the generous threshold ) are later eliminated as the relative calibration of the images tightens up .the convergence of the procedure is fast , but can be affected by erroneous initial matches . to avoid such situations ,we exclude at run time all the pairs whose angular separation grows too large during the iterative optimization , as opposed to shrinking as expected .the transformations are persisted in a database table and are available to future pixel - level corrections that can potentially improve the quality of a combined image . 
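the 0.3 arc-sec pair search described above is an indexed sql query in the pipeline; a self-contained equivalent, useful for checking the logic outside the database, can be written with a k-d tree on 3-d unit vectors, converting the angular threshold into a chord length. this is a sketch of the matching step only, not of the sql implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def radec_to_unit(ra_deg, dec_deg):
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])

def match_pairs(ra1, dec1, ra2, dec2, radius_arcsec=0.3):
    # All (i, j) pairs between two white-light source lists whose angular
    # separation is below the threshold; the chord length 2*sin(theta/2)
    # on 3-D unit vectors stands in for the angle.
    u1, u2 = radec_to_unit(ra1, dec1), radec_to_unit(ra2, dec2)
    chord = 2.0 * np.sin(0.5 * np.radians(radius_arcsec / 3600.0))
    tree = cKDTree(u2)
    pairs = []
    for i, neighbours in enumerate(tree.query_ball_point(u1, r=chord)):
        pairs.extend((i, j) for j in neighbours)
    return pairs
```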
using the refined astrometry of all images , the cross - identification is performed once more with similarly large thresholds , so the list of matching pairs of sources is as inclusive as possible .the pairwise matches are quickly grouped into fof clusters that are obtained by using the sorted tables with appropriate primary keys .these clusters then provide preliminary sets of matched sources , which might contain large chains that may then be split by the bayesian method described in section [ sec : comparison ] .this probabilistic splitting is carried out by an sql stored procedure called the _ chainbreaker _, which is also written in the c # language .it reads all the sources grouped into matches and builds a graph out of them . using the refined astrometric positions , we can safely ignore large separations and start with a 0.1 arc - sec linking length . for each match, the splitting procedure considers a number of cases , using the greedy algorithm of section [ sec : comparison ] , all the way to separate objects .each result that proves to be better than the unsplit case is saved in a database table . for each match, the _ chainbreaker _ selects the most favorable case based on equations ( [ eq : bf1 ] ) and ( [ eq : bf2 ] ) , and stores the results in database tables that are used for catalog construction .the output of the stored procedure includes all the values that are required to link to the hla database and to fetch any relevant details of the observations related to the sources .the astrometric correction , fof and chainbreaker steps are conceptually intertwined .after completion of the entire procedure , one could analyze the astrometric uncertainties and revise the results by running the procedure repeatedly .this is a possible direction of further studies .a completely integrated scheme , where the chainbreaker is applied in each astrometric iteration , is impractical due to its computational cost .after the matching of the white - light sources is completed , we apply the available color ( filter ) level information for each white - light source and store the results in a database table for the catalog . the source catalogwe construct then contains the information about the source detections in each filter of a visit .we also include in the catalog the cases of color non - detections , i.e. , color dropouts , within a visit .such non - detections can be inferred from the two - step source detection process described above , since we know which images were used to build the white - light detection image and which images contained a source detection at a given position .consider the case that a user specifies a region - based ( e.g. , position and search radius ) search of the catalog .another form of non - detection arises in this case when no source is found within a white - light image that overlaps with this user specified region .all such white - light non - detections can not be stored in a database , but instead are inferred at run time within the catalog search function we have developed .we therefore provide information about detections and two types of non - detections ( color and white - light ) .the matches are a collection of color source detections and color non - detections ( as described above ) that lie sufficiently close together in the sky .( white - light nondetections do not belong to a match . 
)each match consists of at least one detected source .each match can be considered to be describing a single physical object whose properties are determined by the properties of sources in the match . for example , each match has a certain position associated with it ( matchra , matchdec ) that is determined by its source positions .this position is considered to be the location of the object described by the matched sources .not all matches involve different visits or detectors .for example , images that are crossmatched generally contain some sources do not match those in an image obtained from a different detector or visit .for other cases , the source could not be crossmatched because the image containing the source did not overlap sufficiently with an image from another visit or detector .about 60% of the images with source lists could not be crossmatched with other images .the sources in these images are also included in the catalog for completeness . in this case , the matches only involve a single detector and visit .the crossmatching of the hla dr6 acs / wfc and wfpc2 source lists was carried out with sql server 2008 running on a single ( aging ) dell poweredge 2950 server with intel xeon e5430 @ 2.66ghz ( 2 processors ) and 24 gb of memory . the entire pipeline for matching the sources and building the catalog runs automatically , and takes a few days to complete .the final catalog is comprised of entries for over 45 million sources detections and 15 million color dropouts for acs / wfc and wfpc2 . the catalog indicates which sources match together . among the matches that involve more than one visit, there are on average 5.4 detected ( color ) sources , which on average involve 3.8 different visits .the left panel of fig .[ fig : matchsize ] plots the cumulative number of detected sources as a function of the number of detected sources in a match .about 40% of the sources are in matches with only one detected source .about 30% of these cases involve non - overlapping or insufficiently overlapping images .about 20% of the sources are in matches with 5 or more detected sources .about 2% of the sources lie in matches with 50 or more sources .the right panel of fig .[ fig : matchsize ] is similar to the left panel , but restricted to crossmatched sources . in this case , there are no matches with less than 2 sources , as required for crossmatching .all these sources are in images that have been astrometrically corrected by our algorithm .about 6 million cross - identified detections lie in matches with more than 10 sources . for each set of matching sources , we determine the standard deviation of the positions among the white light sources involved in the match , where more than one white light source is involved .( the white light sources are used because they are involved in the crossmatching and are statistically independent , unlike the color sources . ) the distribution of the standard deviations for all matches involving more than one white - light source is shown in fig .[ fig : sigmac ] .the lower curve is based on the the current hst astrometry for these matching sources , i.e. , positions before our astrometric correction is applied .the upper curve is based on positions after astrometric correction .the median before astrometric correction is 32.5 mas .the peak ( or mode ) of the astrometrically corrected distribution is 3.0 mas , and the median is 9.1 mas . 
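for a single match, a catalogue position and a positional scatter can be computed from the member white-light positions along the following lines; averaging the unit vectors and renormalising is one natural convention for matchra and matchdec, but the exact recipe used for the catalogue is not spelled out in this section, so the sketch should be read as an assumption.

```python
import numpy as np

def match_position_and_scatter(ra_deg, dec_deg):
    # Mean direction of the white-light sources in one match (a stand-in
    # for matchra/matchdec) and the rms angular deviation about it, in mas.
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    u = np.column_stack([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)])
    m = u.mean(axis=0)
    m /= np.linalg.norm(m)
    match_ra = np.degrees(np.arctan2(m[1], m[0])) % 360.0
    match_dec = np.degrees(np.arcsin(m[2]))
    dev = np.arccos(np.clip(u @ m, -1.0, 1.0))          # radians
    scatter_mas = np.degrees(np.sqrt(np.mean(dev ** 2))) * 3.6e6
    return match_ra, match_dec, scatter_mas
```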
in order to test the validity of these matches, we compare the values of the fluxes determined by source extractor within radii of 3 pixels of the source centers ( fluxaper2 ) for pairs of source detections in the same match that have the same instrument , detector and filter .no constraints are applied on the exposure times of the sources in the pairs .apart from variable objects that are rare , we expect the flux difference to be small if the match corresponds to a single object .flux differences may also be caused by detector degradation over time .we do not attempt to account for this effect in this paper .we define the fractional flux difference between source pairs and having fluxes and to be with this definition , we have that in fig . [fig : dfluxmix ] , we plot in solid lines the distributions of for acs / wfc and wfc2 for pairs in the same match that have the same instrument , detector and filter .we compare these results to corresponding cases where pairs are not matched . to carry out this comparison , we considered the same set of pairs as those used in the respective solid lines and mapped the sources in each pair to a random pair taken from the same pair of images .the figures show that the matched pair distributions have strong peaks near , while the randomly selected pairs have a very broad distribution with a mild peak near .the results provide evidence that these matches contain repeated observations of the same physical object . a more detailed view of the flux differences for matched pairs is shown in fig .[ fig : dfluxacswfpc2 ] .the width of the acs flux distribution , based on where normalized distribution equals 0.5 , is equal to 0.027 and the corresponding width of the wfpc2 flux distribution is equal to 0.047 .the smallness of the widths reflect the accuracies of both the matching and the hst photometry .since no constraints are applied to the exposure times of the matched pairs , some of the larger flux differences likely involve detections of faint sources with short exposure times . in some other cases, the flux differences may reflect physical changes in the source brightness .we determine effectiveness of the chainbreaker in splitting unphysical matches by again considering fractional flux differences between pairs of matching sources that have the same instrument , detector and filter .[ fig : dfluxchain ] shows the comparison of the distributions before and after the application of the chainbreaker . for the case of acs / wfc, we see that the chainbreaker was effective in reducing the incidence of unphysical matches in the tail of the distribution .we note that the plot is semi - logarithmic and the apparent offset between the two curves involves only a small fraction of the matches .a smaller improvement was made for the case of wfpc2 .we consider the time distribution covered by the matches .this time coverage of a match is defined as the difference between the earliest start time and latest stop time for the exposures containing detected sources in the match .the largest match time span is 6419 days or about 17.5 years .the time span distributions are plotted in fig .[ fig : timeg ] .the number of matches greater than some time span falls off roughly logarithmically with the time span . over 1 million matches involving about 10 million source detectionshave a time span greater than a few days . about matches involving about 5 millionsource detections have a time span greater than a year . 
about 10%of the detected sources lie in matches with a duration of more than a year .about 30% of the crossmatched source detections lie in matches that have a duration of more than one year . many of the matches involve more than one filter . fig .[ fig : filterg ] shows that more than one million matches involve multiple filters .matches involve as many as 21 filters of detected sources and as many as 27 filters of detected sources and color dropouts .more than matches involve more than 5 filters for detected sources and more than matches involve more than 5 filters for detected sources and dropouts .[ fig : filtertimedens ] shows how the frequency of matches depends jointly on the number of filters and the time duration of a match .the minimum match duration considered is one day which implies that all matches being considered involve more than hst visit .the frequency of matches extends to long time spans for matches with less than 5 filters .there are bands of many - filter cases at several time spans .we developed a new approach for positional cross - matching astronomical sources that is well suited to the hubble space telescope ( hst ) .we applied the approach to crossmatching hst sources detected in the same or different detectors ( acs / wfc and wfpc2 ) and filters .the hst observations comprise a unique astronomy resource . preserving the measurements and enhancing their value are important but challenging tasks .while the overall volume of data is moderate by today s standards , the dataset presents a number of difficulties .one of them is the small field of view that does not contain enough calibrators to accurately pin down the astrometry of the images .we presented a new algorithm that can cross - calibrate overlapping images to each other .we introduced infinitesimal 3-d rotations for this purpose , which yield an analytically tractable optimization procedure .our implementation of this scheme is sufficient to provide high - precision relative astrometry across overlapping images .the improved astrometric accuracy of source positions is typically about a few milli - arcseconds ( see fig . [fig : sigmac ] ) .another challenge is to deal with the complex placement of the images on the sky and the fact that certain parts of the sky are observed many times ( see fig . [fig : mossize ] ) . instead of the naive combinatorial scaling ,we achieve high efficiency in a greedy `` chainbreaker '' procedure that applies bayesian model selection to find the best matches within a mosaic . 
to check on our matching results , we analyzed the flux differences between sources in the same match with the same instrument , detector and filter , which should optimally be zero ( ignoring variable sources ) .we found that the astrometric correction and the probabilistic object selection provide reliable matches ( see fig .[ fig : dfluxmix ] ) .based on just positional information , the bayesian model selection rejects spurious matches and improves the tail of the distribution with large flux deviations ( see fig .[ fig : dfluxchain ] ) .we demonstrated that many of the matches cover a broad range of time spans and filters ( figs .[ fig : timeg ] , [ fig : filterg ] , and [ fig : filtertimedens ] ) .therefore , the catalog should enable time - domain , multi - wavelength studies of sources detected by hst .the catalog also provides information about nondetections .the presented catalog is publicly available online via web forms as well as an advanced query interface .the catalog provides a basis for further extensions , such as by including other detectors and source lists based on other software .the algorithms and tools developed in this paper are not specific to the hubble space telescope .they are directly applicable to any astronomy observations that exhibit similar challenges .we benefited from inspiring discussions with alex szalay , rick white , and brad whitmore on different aspects of the project .we acknowledge assistance on a previous approach to this project by nathan cole .we are grateful for support from nasa aisrp grant nnx09ak62 g .in this section we provide a short derivation for equation ( [ omeq ] ) using a variational method .this approach to the minimization is equivalent to writing out the components of the vectors and matrices to obtain the vanishing partial differentials , but more concise .the quadratic cost function is ^ 2 \ .\ ] ] its minimization yields the value of defined by equation ( [ vomhat ] ) , that is , to determine the solution , we consider small variations in for and require that they vanish .this procedure is equivalent to requiring that the partial derivatives of with respect to the components of vanish at .the linear variation of the cost function due to a small change is \cdot ( \delta { { { \mbox{\boldmath{}}}}}\times{{{{\mbox{\boldmath{}}}}_{{{i } } } } } ) \ . \label{dc}\ ] ] we use the fact that the terms in a scalar triple product can be cyclically permuted as in \ , \ ] ] and the known equivalence for the vector triple products , in addition to that for the dot products of two cross products , \\ & = & -\delta { { { \mbox{\boldmath{}}}}}\cdot \left [ { { { { \mbox{\boldmath{}}}}_{{{i}}}}}\,({{{{\mbox{\boldmath{}}}}_{{{i}}}}}\cdot { { { \mbox{\boldmath{ } } } } } ) - r_i^2 \ , { { { \mbox{\boldmath{}}}}}\right ] \ .\end{aligned}\ ] ] since all are unit vectors , i.e. , , equation ( [ dc ] ) can now be written as \ .\ ] ] by requiring that for arbitrary , but small , we have that = 0 \ .\ ] ] the cross - product of any vector by itself vanishes , so , and the final result is = 0 \ .\ ] ] symbol represents the identity and is the dyadic vector product , hence the term in parenthesis is a linear operator applied to the vector .this formula is equivalent to equation ( [ omeq ] ) in the main body of the article .
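a direct transcription of the appendix result, under the reading that the final display is (sum_i (I - r_i r_i^T)) omega = sum_i r_i x x_i with unit reference directions r_i and measured directions x_i, is given below. the 3x3 system becomes nearly singular when all matched sources lie along one direction, since a rotation about that common axis is then unconstrained; this is one reason why overlaps with other images are needed for small fields.

```python
import numpy as np

def solve_small_rotation(r, x):
    # Solve (sum_i (I - r_i r_i^T)) omega = sum_i r_i x x_i, the normal
    # equations of the quadratic cost, for the infinitesimal rotation
    # vector omega; r and x are (n, 3) arrays of unit vectors.
    r, x = np.asarray(r, float), np.asarray(x, float)
    A = len(r) * np.eye(3) - r.T @ r        # sum_i (I - r_i r_i^T)
    b = np.cross(r, x).sum(axis=0)          # sum_i r_i x x_i
    return np.linalg.solve(A, b)

def apply_small_rotation(omega, v):
    # first-order rotation v -> v + omega x v, renormalised to the sphere
    w = v + np.cross(omega, v)
    return w / np.linalg.norm(w, axis=-1, keepdims=True)
```

in the iterative correction of section [ sec : astrometry ], solve_small_rotation would be called once per image and per pass, with r taken from the current positions of the image being corrected and x from the matched positions in the other images of the mosaic.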
|
object cross - identification in multiple observations is often complicated by the uncertainties in their astrometric calibration . due to the lack of standard reference objects , an image with a small field of view can have significantly larger errors in its absolute positioning than the relative precision of the detected sources within . we present a new general solution for the relative astrometry that quickly refines the world coordinate system of overlapping fields . the efficiency is obtained through the use of infinitesimal 3-d rotations on the celestial sphere , which do not involve trigonometric functions . they also enable an analytic solution to an important step in making the astrometric corrections . in cases with many overlapping images , the correct identification of detections that match together across different images is difficult to determine . we describe a new greedy bayesian approach for selecting the best object matches across a large number of overlapping images . the methods are developed and demonstrated on the hubble legacy archive , one of the most challenging data sets today . we describe a novel catalog compiled from many hubble space telescope observations , where the detections are combined into a searchable collection of matches that link the individual detections . the matches provide descriptions of astronomical objects involving multiple wavelengths and epochs . high relative positional accuracy of objects is achieved across the hubble images , often sub - pixel precision in the order of just a few milli - arcseconds . the result is a reliable set of high - quality associations that are publicly available online .
|
structured models have long served as a representative deterministic model used to describe the evolution of biological systems , see for instance or and references therein . in their simplest form , structured models describe the temporal evolution of a population structured by a biological parameter such as size , age or any significant _ trait _ , by means of an evolution law , which is a mass balance at the macroscopic scale .a paradigmatic example is given by the transport - fragmentation equation in cell division , that reads the mechanism captured by equation can be described as a mass balance equation ( see ) : the quantity of cells of size at time is fed by a transport term that accounts for growth by nutrient uptake , and each cell can split into two offsprings of the same size according to a division rate . supposing where we suppose a given model for the growth rate known up to a multiplicative constant and experimental data for the problem we consider here is to recover the division rate and the constant . in , perthame and zubelli proposed a deterministic method based on the asymptotic behavior of the cell amount indeed , it is known ( see _ e.g. _ ) that under suitable assumptions on and , by the use of the _ general relative entropy principle _ ( see ) , one has where and is the adjoint eigenvector ( see ) .the density is the first eigenvector , and the unique solution of the following eigenvalue problem moreover , under some supplementary conditions , this convergence occurs exponentially fast ( see ) .hence , in the rest of this article , we work under the following analytical assumptions . [ analytical assumptions][as : an ] 1 . for the considered nonnegative functions and and for , there exists a unique eigenpair solution of problem .[ as : an:1 ] 2 .this solution satisfies , for all and .[ as : an:2 ] 3 .the functions and belong to with , and in particular and .( denotes the sobolev space of regularity measured in -norm . ) [ as : an:3 ] 4 .we have with . [ as : an:4 ] hereafter and denote the usual and norms on .assertions [ as : an:1 ] and [ as : an:2 ] are true under the assumptions on and stated in theorem 1.1 of , under which we also have assertion [ as : an:3 ] is a ( presumably reasonable ) regularity assumption , necessary to obtain rates of convergence together with the convergence of the numerical scheme .assertion [ as : an:4 ] is restrictive , but mandatory in order to apply our statistical approach .thanks to this asymptotic behavior provided by the entropy principle , instead of requiring time - dependent data which is experimentally less precise and more difficult to obtain , the inverse problem becomes : how to recover from observations on ? in , as generally done in deterministic inverse problems ( see ) , it was supposed that experimental data were pre - processed into an approximation of with an _ a priori _ estimate of the form for a suitable norm . then , recovering from becomes an inverse problem with a certain degree of ill - posedness . from a modelling point of view, this approach suffers from the limitation that knowledge on is postulated in an abstract and somewhat arbitrary sense , that is not genuinely related to experimental measurements . 
in this paper, we propose to overcome the limitation of the deterministic inverse problems approach by assuming that we have data , each data being obtained from the measurement of an individual cell picked at random , after the system has evolved for a long time so that the approximation is valid .this is actually what happens if one observes cell cultures in laboratory after a few hours , a typical situation for _ e. coli _cultures for instance , provided , of course , that the underlying aggregation - fragmentation equation is valid .each data is viewed as the outcome of a random variable , each having probability distribution .we thus observe with and where hereafter denotes probability the expectation operator with respect to likewise . ] .we assume for simplicity that the random variables are defined on a common probability space and that they are stochastically independent .our aim is to build an estimator of , that is a function that approximates the true with optimal accuracy and nonasymptotic estimates . to that end , consider the operator from representation , we wish to find , solution to where based on statistical knowledge of only .suppose that we have preliminary estimators and of respectively and and an approximation of .then we can reconstruct in principle by setting formally this leads us to distinguish three steps that we briefly describe here . the whole method is fully detailed in section [ proposed - section ] .the first and principal step is to find an optimal estimator for to do so , the main part consists in applying twice the goldenschluger and lepski s method ( gl for short ) .this method is a new version of the classical lepski method .both methods are adaptive to the regularity of the unknown signal and the gl method furthermore provides with an oracle inequality . for the unfamiliar reader ,we discuss _ adaptive properties _ later on , and explain in details the _ gl method _ and _ the oracle point of view _ in section 2 . 1 .first , we estimate the density by a kernel method , based on a kernel function .we define where is defined by and the bandwidth is selected automatically by from a properly - chosen set ( see section [ sec : n : lepski ] for more details ) .a so - called oracle inequality is obtained in proposition [ estn ] measuring the quality of estimation of by .notice that this result , which is just a simplified version of , is valid for estimating any density , since we have only assumed to observe an of so that this result can be considered _ per se .second , we estimate the density derivative ( up to ) , again by a kernel method with the same kernel as before , and select an optimal bandwidth given by formula similarly .this defines an estimator where is specified by , and yields an oracle inequality for stated in proposition [ estd ] . in the saemway as for this result has an interest _ per se _ and is not a direct consequence of . from there, it only remains to find estimators of and .to that end , we make the following _ a priori _ ( but presumably reasonable ) assumption [ hypla ] on the existence of an estimator of [ hypla ] there exists some such that \big)^{1/q } < \infty , \qquad r_{\lambda , n } = \operatorname{{\mathbb e}}[\hat\lambda_n^{2q}]<\infty.\ ] ] indeed , in practical cell culture experiments , one can track individual cells that have been picked at random through time . 
by looking at their evolution , it is possible to infer in a classical parametric way , via an estimator that we shall assume to possess from now on .based on the following simple equality obtained by multiplying by and integrating by part , we then define an estimator by .finally , defining ends this first step .the second step consists in the formal inversion of and its numerical approximation : for this purpose , we follow the method proposed in and recalled in section [ sec : inversion ] . to estimate , we state where is defined by on a given interval . ] , we will have access to error bounds only on ] .if the fundamental ( yet technical ) statistical result is the oracle inequality for stated in theorem [ oraclesurh ] ( see section [ oracle - section ] ) , the relevant part with respect to existing works in the non - stochastic setting is its consequence in terms of rates of convergence . for presenting them ,we need to assume that the kernel has regularity and vanishing moments properties .[ hypk2 ] the kernel is differentiable with derivative .furthermore , and and are finite .finally , there exists a positive integer such that for and is finite .then our proposed estimators satisfy the following properties . [ rate ] under assumptions [ as : an ] , [ hypla ] and [ hypk2 ] , let us assume that and are bounded uniformly in and specify with f. assume further that the family of bandwidth depends on is such that and for all . then satisfies , for all ] in such that : = [ \inf_{x\in[a , b ] } n(x ) , \sup_{x\in [ a , b ] } n(x ) ] \subset ( 0,\infty ) , \qquad q : = \sup_{x\in [ a , b ] } |h(x)|<\infty,\ ] ] and if and for some , then satisfies , for all ] and is performing as well as the oracle up to some multiplicative constant . in that sense , we are able to select the _best bandwidth _ within our family . as compared to the results of goldenschluger and lepski in , we do not consider the case where is an interval and we do not specify except for assumption [ hypk ] .this simpler method is more reasonable from a numerical point of view , since estimating is only a preliminary step .the probabilistic tool we use here is classical in model selection theory ( see section [ proofs - section ] and ) and actually , we do not use directly .in particular the main difference is that , in our specific case , we are able to get fixed whereas goldenschluger and lepski require to tend to 0 with .the price to pay is that we obtain a uniform bound ( see lemma [ concentration ] in section [ technique ] ) which is less tight , but that will be sufficient for our purpose .the previous method can of course be adapted to estimate we adjust here the work of to the setting of estimating a derivative .we again use kernel estimators with more stringent assumptions and but this choice is not mandatory . ] on .[ hypk ] the function is differentiable , and . 
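before turning to the derivative, the density step described above, kernel estimation together with the goldenshluger-lepski bandwidth choice, can be summarised in a discretised sketch. the gaussian kernel, the grid evaluation, the value of the constant chi and the approximation of the doubly-smoothed estimator by a numerical convolution are illustrative choices, not the exact construction of the paper.

```python
import numpy as np

def gauss_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def kde_on_grid(grid, sample, h):
    # N_hat_h(x) = (1/n) sum_i K_h(x - X_i), with K_h(u) = K(u/h)/h
    u = (grid[:, None] - sample[None, :]) / h
    return gauss_kernel(u).sum(axis=1) / (len(sample) * h)

def gl_bandwidth(sample, grid, bandwidths, chi=1.2):
    # Goldenshluger-Lepski rule on a discretised axis: for each h, take the
    # supremum A(h) of corrected distances between the doubly-smoothed
    # estimator K_{h'} * N_hat_h and N_hat_{h'}, then minimise A(h) plus a
    # penalty of order 1/sqrt(n h).  The extra smoothing is approximated by
    # a numerical convolution, so small discretisation effects are expected.
    n, dx = len(sample), grid[1] - grid[0]
    norm_K2 = (2.0 * np.sqrt(np.pi)) ** -0.5      # L2 norm of the Gaussian kernel
    est = {h: kde_on_grid(grid, sample, h) for h in bandwidths}
    crit = {}
    for h in bandwidths:
        A_h = 0.0
        for hp in bandwidths:
            Khp = gauss_kernel((grid - grid.mean()) / hp) / hp
            smoothed = np.convolve(est[h], Khp, mode="same") * dx
            gap = np.sqrt(np.sum((smoothed - est[hp]) ** 2) * dx)
            A_h = max(A_h, gap - chi * norm_K2 / np.sqrt(n * hp))
        crit[h] = A_h + chi * norm_K2 / np.sqrt(n * h)
    h_star = min(crit, key=crit.get)
    return h_star, est[h_star]
```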
for any bandwidth , we define the kernel estimator of as indeed again we can look at the integrated squared error of .we obtain the following upper bound : \leq \|d - k_h\star d\|_2 + \operatorname{{\mathbb e}}[\|k_h\star d - \hat{d}_h\|_2],\ ] ] with ^ 2 dx \big]\\ & = & \frac{1}{n^2 } \int \sum_{i=1}^n \operatorname{{\mathbb e}}\big[\big(g(x_i)k'_h(x - x_i)-\operatorname{{\mathbb e}}\big(g(x_i)k'_h(x - x_i)\big)\big)^2\big ] dx\\ & \leq & \frac{1}{n } \operatorname{{\mathbb e}}\big[\int g^2(x_1){k'_h}^2(x - x_1 ) dx\big]\\&\leq & \frac { { { \big\lvert g\big\rvert}}_\infty^2 { { \big\lvert k'_h\big\rvert}}_2 ^ 2}{n}=\frac { { { \big\lvert g\big\rvert}}_\infty^2 { { \big\lvert k'\big\rvert}}_2 ^ 2}{nh^3}. \end{aligned}\ ] ] hence , by cauchy - schwarz inequality \leq { { \big\lvert d - k_h\star d\big\rvert}}_2 + \frac{1}{\sqrt{nh^3 } } { { \big\lvert g\big\rvert}}_\infty { { \big\lvert k'\big\rvert}}_2.\ ] ] once again , there is a bias - variance decomposition , but now the variance term is of order .we therefore define the oracle by now let us apply the gl method in this case .let be a family of bandwidths .we set for any , and where , given , we put finally , we estimate by using with as before , we are able to prove an oracle inequality for .[ estd ] assume .work under assumption [ hypk ] .if , with for , then for any , \leq \tilde c(q)\tilde{\chi}^{2q } \inf_{h\in \tilde \operatorname{{\mathcal h}}}\big\{\|k_{h}\star d - d\|_2^{2q}+\big(\frac { { { \big\lvert g\big\rvert}}_\infty\|k'\|_2}{\sqrt{nh^3}}\big)^{2q}\big\}+\tilde c_1n^{-q},\ ] ] where is a constant depending on and is a constant depending on , and . as mentioned in the introduction , we will not consider the problem of estimating and we work under assumption [ hypla ] : an estimator of is furnished by the practitioner prior to the data processing for estimating .it becomes subsequently straightforward to obtain an estimator of by estimating , see the form of .we estimate by where is a ( small ) tuning constant . ] .next we simply put from , the right - hand side of is consequently estimated by it remains to formally apply the inverse operator .however given , the dilation equation admits in general infinitely many solutions , see doumic _ et al . _ , appendix a. nevertheless , if , there is a unique solution to , see proposition a.1 . in , and moreover it defines a continuous operator from to . since and belong to , one can define a unique solution to when .this inverse is not analytically known but we can only approximate it via the fast algorithm described below . given and an integer , we construct a linear operator that maps a function into a function with compact support in ] with mesh defined by we set and define by induction the sequence , we define ] what gives , for and finally , we define as stated in the introduction , we eventually estimate by the stability of the inversion is given by the fact that is continuous , see lemma [ lk-1 ] in section [ technique ] , and by the following approximation result between and [ stabilitel2 ] let and .let denote the unique solution of belonging to .we have for : with hence , behaves nicely over sufficiently smooth functions. 
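the kernel estimator of the derivative introduced at the beginning of this subsection can be coded in the same spirit; the display defining it is lost in this copy, so the sign and scaling conventions below (derivative of the kernel, factor 1/h^2, weight g(X_i)) are the natural ones consistent with the stated variance of order 1/(n h^3), not a verbatim transcription.

```python
import numpy as np

def derivative_estimator(grid, sample, g, h):
    # D_hat_h(x) = (1/n) sum_i g(X_i) K'((x - X_i)/h) / h^2, Gaussian kernel;
    # its variance scales like 1/(n h^3), as in the text.
    u = (grid[:, None] - sample[None, :]) / h
    kprime = -u * np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)   # K'(u)
    return (g(sample)[None, :] * kprime).sum(axis=1) / (len(sample) * h ** 2)
```

the bandwidth is then selected exactly as in the previous sketch, with the penalty of order 1/sqrt(n h) replaced by one of order 1/sqrt(n h^3).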
moreover the estimation of and the estimators and are essentially regular .finally we estimate as stated in .the overall behaviour of the estimator is finally governed by the quality of estimation of the derivative , which determines the accuracy of the whole inverse problem in all these successive steps .we are ready to state our main result , namely the oracle inequality fulfilled by .[ oraclesurh ] work under assumptions [ as : an ] and [ hypla ] and let a kernel satisfying assumptions [ hypk ] and [ hypk ] .define with for and with for for and let us define by on the interval . ] and solve the direct problem by the use of the so - called _ power algorithm _ to find the corresponding density and the principal eigenvalue ( see for instance for more details ) .we check that almost vanishes outside the interval ; ] , we consider the cases where and first second for then linear to for this particular form is interesting because due to this fast increase on the solution is not that regular and exhibits a 2-peaks distribution ( see figure [ fig : b123:1 ] ) .finally , we test reconstruction of ( left ) and of ( right ) obtained with a sample of data , for three different cases of division rates ,title="fig:",width=264,height=264 ] reconstruction of ( left ) and of ( right ) obtained with a sample of data , for three different cases of division rates ,title="fig:",width=264,height=264 ] reconstruction of ( left ) and of ( right ) obtained with a sample of data , for three different cases of division rates , title="fig:",width=264,height=264 ] reconstruction of ( left ) and of ( right ) obtained with a sample of data , for three different cases of division rates , title="fig:",width=264,height=264 ] in figures [ fig : b123:1 ] and [ fig : b123:2 ] , we show the simulation results with ( a realistic value for _ in vitro _ experiments on _ e. 
coli _ for instance ) for the reconstruction of and one notes that the solution can well capture the global behavior of the division rate but , as expected , has more difficulties in recovering fine details ( for instance , the difference between and ) and also gives much more error when is less regular ( case of ) .one also notes that even if the reconstruction of is very satisfactory , the critical point is the reconstruction of its derivative .moreover , for large values of even if and its derivative are correctly reconstructed , the method fails in finding a proper division rate this is due to two facts : first , vanishes , so the division by leads to error amplification .second , the values taken by for large have little influence on the solutions of the direct problem : whatever the values of , the solutions will not vary much , as shown by figure [ fig : b123:1 ] ( left ) .a similar phenomenon occurred indeed when solving the deterministic problem in ( for instance , we refer to fig .10 of this article for a comparison of the results ) .we also test a case closer to biological true data , namely the case and the results are shown on figures [ fig : b4:1 ] and [ fig : b4:2 ] for -samples of size and reconstruction of ( left ) and of ( right ) obtained for and for various sample sizes.,title="fig:",width=264,height=264 ] reconstruction of ( left ) and of ( right ) obtained for and for various sample sizes.,title="fig:",width=264,height=264 ] reconstruction of ( left ) and of ( right ) obtained for and for various sample sizes.,title="fig:",width=264,height=264 ] reconstruction of ( left ) and of ( right ) obtained for and for various sample sizes.,title="fig:",width=264,height=264 ] one notes that reconstruction is already very good for when unlike the reconstruction of that requires much more data . finally , in table [ tab : num ] we give average error results on simulations , for .we display the relative errors in norms , ( defined by ) , and their empirical variances . in table[ tab : num2 ] , for the case and we give some results on standard errors for various values of and compare them to which is the order of magnitude of the expected final error on since with a gaussian kernel we have in proposition [ rate ] .we see that our numerical results are in line with the theoretical estimates : indeed , the error on is roughly twice as large as + [ cols="^,^,^,^",options="header " , ]in section [ proof - main - section ] , we first give the proofs of the main results of section [ proposed - section ] .this allows us , in section [ proof - main - main - section ] , to prove the results of section [ oracle - section ] , which require the collection of all the results of section [ proposed - section ] , _ i.e. _ the oracle - type inequalities on the one hand and a numerical analysis result on the other hand .this illustrates the subject of our paper that lies at the frontier between these fields . finally , we state and prove the technical lemmas used in section [ proof - main - section ] .these technical tools are concerned with probabilistic results , namely concentration and rosenthal - type inequalities that are often the main bricks to establish oracle inequalities , and also the boundedness of . in the sequel, the notation denotes a generic positive constant depending on ( the notation simply denotes a generic positive absolute constant ) .it means that the values of may change from line to line . 
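the direct problem used to produce these synthetic data can be solved with the power algorithm mentioned above; a schematic implementation evolves the transport-fragmentation equation with an explicit upwind scheme, renormalises the mass after every step, and reads the principal eigenvalue off the growth factor of the mass. the grid, the domain, the initial guess and the tolerances below are illustrative choices, not those used for the figures.

```python
import numpy as np

def principal_eigenpair(B, g, L=4.0, J=800, tol=1e-8, max_iter=200000):
    # Power algorithm for  (g N)'(x) + (lam + B(x)) N(x) = 4 B(2x) N(2x),
    # N(0) = 0, int N = 1: evolve the time-dependent equation with an
    # explicit upwind scheme, renormalise the mass after every step, and
    # estimate lam from the instantaneous growth factor of the mass.
    x = np.linspace(0.0, L, J + 1)
    dx = x[1] - x[0]
    gx, Bx = g(x), B(x)
    B2x = np.where(2.0 * x <= L, B(np.minimum(2.0 * x, L)), 0.0)
    dt = 0.5 * min(dx / max(gx.max(), 1e-12), 1.0 / max(Bx.max(), 1e-12))
    n = np.exp(-(x - 1.0) ** 2)              # any positive initial guess
    n[0] = 0.0
    n /= n.sum() * dx
    lam_old = np.inf
    for _ in range(max_iter):
        flux = gx * n
        div = np.zeros_like(n)
        div[1:] = (flux[1:] - flux[:-1]) / dx        # upwind, g >= 0
        n2x = np.zeros_like(n)
        n2x[: J // 2 + 1] = n[::2][: J // 2 + 1]     # N(2 x_j) on the grid
        n_new = n + dt * (-div - Bx * n + 4.0 * B2x * n2x)
        n_new = np.maximum(n_new, 0.0)
        n_new[0] = 0.0
        mass = n_new.sum() * dx
        lam = (mass - 1.0) / dt                      # growth rate estimate
        n = n_new / mass
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x, n
```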
for any , we have : with and we obtain since we have -\big(\hat n_{h'}-\operatorname{{\mathbb e}}[\hat n_{h'}]\big)\|_2-\frac{\chi}{\sqrt{nh'}}\|k\|_2\big\}_+\nonumber\\ & & \hspace{1cm}+\|\operatorname{{\mathbb e}}[\hat n_{h^*,h'}]-\operatorname{{\mathbb e}}[\hat n_{h'}]\|_2\big\}\end{aligned}\ ] ] and for any and any &=&\int ( k_{h^*}\star k_{h'})(x - u)n(u)du-\int k_{h'}(x - v)n(v)dv\\ & = & \int\int k_{h^*}(x - u - t)k_{h'}(t)n(u)dt du-\int k_{h'}(x - v)n(v)dv\\ & = & \int\int k_{h^*}(v - u)k_{h'}(x - v)n(u)dudv-\int k_{h'}(x - v)n(v)dv\\ & = & \int k_{h'}(x - v)\big(\int k_{h^*}(v - u)n(u)du - n(v)\big)dv,\end{aligned}\ ] ] we derive where represents the approximation term . combining ( [ risk ] ) , ( [ ah * ] ) and ( [ biais ] ) entails with -\big(\hat n_{h'}-\operatorname{{\mathbb e}}[\hat n_{h'}]\big)\|_2-\frac{\chi}{\sqrt{nh'}}\|k\|_2\big\}_+\\ & = & \sup_{h'\in\operatorname{{\mathcal h}}}\big\{\|k_{h^*}\star\big(\hat n_{h'}-\operatorname{{\mathbb e}}[\hat n_{h'}]\big)-(\hat n_{h'}-\operatorname{{\mathbb e}}[\hat n_{h'}])\|_2-\frac{(1+\e)(1+\|k\|_1)}{\sqrt{nh'}}\|k\|_2\big\}_+\\ & \leq & ( 1+\|k\|_1)\sup_{h'\in\operatorname{{\mathcal h}}}\big\{\|\hat n_{h'}-\operatorname{{\mathbb e}}[\hat n_{h'}]\|_2-\frac{(1+\e)}{\sqrt{nh'}}\|k\|_2\big\}_+.\end{aligned}\ ] ] hence \leq\square_q\big(\operatorname{{\mathbb e}}\big[\|\hat n_{h^*}-n\|_2^{2q}\big]+ { { \big\lvert k\big\rvert}}_1^{2q}\|e_{h^*}\|_2^{2q}+\chi^{2q}\frac{\|k\|_2^{2q}}{(nh^*)^q}+(1 + { { \big\lvert k\big\rvert}}_1)^{2q}\operatorname{{\mathbb e}}[\xi_n^{2q}]\big),\ ] ] where now , we have : &\leq&2^{2q-1}\big ( \operatorname{{\mathbb e}}\big[\|\hat n_{h^*}-\operatorname{{\mathbb e}}[\hat n_{h^*}]\|_2^{2q}\big]+\|\operatorname{{\mathbb e}}[\hat n_{h^*})-n\|_2^{2q}\big]\\ & \leq&2^{2q-1}\big ( \operatorname{{\mathbb e}}\big[\|\hat n_{h^*}-\operatorname{{\mathbb e}}[\hat n_{h^*}]\|_2^{2q}\big]+\|e_{h^*}\|_2^{2q}\big).\end{aligned}\ ] ] then , by setting we obtain \|_2^{2q}\big]&=&\operatorname{{\mathbb e}}\big[\big(\int \big(\frac{1}{n}\sum_{i=1}^n kc_{h^*}(x_i , x)\big)^2dx\big)^q\big]\\ & \leq&\frac{2^{q-1}}{n^{2q}}\big(\operatorname{{\mathbb e}}\big[\big(\sum_{i=1}^n\int kc_{h^*}^2(x_i , x)dx\big)^q\big]\big.\\ & & \hspace{1cm}+\big.\operatorname{{\mathbb e}}\big[\big|\sum_{1\leq i , j\leq n\ i\not = j}\int kc_{h^*}(x_i , x)kc_{h^*}(x_j , x)dx\big|^q\big]\big).\end{aligned}\ ] ] since \big)^2dx\\ & \leq&2\big(\int k_{h^*}^2(x - x_i)dx+\int\big(\operatorname{{\mathbb e}}\big[k_{h^*}(x - x_1)\big]\big)^2dx\big)\\ & \leq&2\big(\|k_{h^*}\|_2 ^ 2+\int\operatorname{{\mathbb e}}\big[k_{h^*}^2(x - x_1)\big]dx\big)\\ & \leq&4\|k_{h^*}\|_2 ^ 2=\frac{4}{h^*}\|k\|_2 ^ 2,\end{aligned}\ ] ] the first term can be bounded as follows \leq\big(\frac{4n}{h^*}\|k\|_2 ^ 2\big)^q.\ ] ] for the second term , we apply theorem 8.1.6 of de la pea and gin ( 1999 ) ( with ) combined with the cauchy - schwarz inequality : \\ \leq\ ; & \big(\operatorname{{\mathbb e}}\big[\big| \sum_{1\leq i , j\leq n\ i\not = j}\int kc_{h^*}(x_i , x)kc_{h^*}(x_j , x)dx\big|^{2q}\big]\big)^{\frac{1}{2}}\\ \leq\ ; & \square_qn^{q}\big(\operatorname{{\mathbb e}}\big[\big| \int kc_{h^*}(x_1,x ) kc_{h^*}(x_2,x)dx\big|^{2q}\big]\big)^{\frac{1}{2}}\\ \leq\ ; & \square_qn^{q}\big(\operatorname{{\mathbb e}}\big[\big| \int kc_{h^*}^2(x_1,x)dx\big|^{2q}\big]\big)^{\frac{1}{2 } } \leq\ ; \square_q\big(\frac{4n}{h^*}\|k\|_2 ^ 2\big)^q.\end{aligned}\ ] ] it remains to deal with the term .by lemma [ concentration ] below , we obtain \leq \square_{q , \eta , \delta { { \big\lvert 
k\big\rvert}}_2 , { { \big\lvert k\big\rvert}}_1 , { { \big\lvert n\big\rvert}}_\infty } n^{-q}\ ] ] and the conclusion follows .the proof is similar to the previous one and we avoid most of the computations for simplicity . for any , with and then , to study , we first evaluate -\operatorname{{\mathbb e}}[\hat d_{h_2}(x)].&=&(k_{h_1}\star k_{h_2}\star ( gn)')(x)- ( k_{h_2}\star ( gn)')(x)\\ & = & \int d(t)(k_{h_1}\star k_{h_2})(x - t)dt-\int d(t)k_{h_2}(x - t)dt\\ & = & \int d(t)\int k_{h_1}(x - t - u)k_{h_2}(u)dudt-\int d(t)k_{h_2}(x - t)dt\\ & = & \int d(t)\int k_{h_1}(v - t)k_{h_2}(x - v)dvdt-\int d(v)k_{h_2}(x - v)dv\\ & = & \int k_{h_2}(x - v)\big(\int d(t)k_{h_1}(v - t)dt - d(v)\big)dv\\ & = & ( k_{h_2}\star \tilde e_{h_1})(x),\end{aligned}\ ] ] where we set , for any real number it follows that -\big(\hat d_{h}-\operatorname{{\mathbb e}}[\hat d_{h}]\big)\|_2-\frac{\tilde\chi}{\sqrt{nh^3}}\|g\|_\infty\|k'\|_2\big\}_+\big.\nonumber\\ & & \hspace{1cm}+\big.\|\operatorname{{\mathbb e}}[\hat d_{h_0,h}]-\operatorname{{\mathbb e}}[\hat d_{h}]\|_2\big\}\nonumber\\ & \leq & \sup_{h\in\tilde\operatorname{{\mathcal h}}}\big\{\|\hat d_{h_0,h}-\operatorname{{\mathbb e}}[\hat d_{h_0,h}]-(\hatd_{h}-\operatorname{{\mathbb e}}[\hat d_{h}])\|_2-\frac{\tilde\chi}{\sqrt{nh^3}}\|g\|_\infty\|k'\|_2\big\}_++\|k\|_1\|\tilde e_{h_0}\|_2\nonumber\\ & \leq & ( 1+\|k\|_1)\sup_{h\in\tilde\operatorname{{\mathcal h}}}\big\{\|\hat d_{h}-\operatorname{{\mathbb e}}[\hat d_{h}]\|_2-\frac{(1+\tilde\e)}{\sqrt{nh^3}}\|g\|_\infty\|k'\|_2\big\}_++\|k\|_1\|\tilde e_{h_0}\|_2,\end{aligned}\ ] ] in order to obtain the last line , we use the following chain of arguments : and &=&\int k_{h_0}(t ) \big(\int g(u)k_h'(x - u - t)n(u)du \big)dt,\end{aligned}\ ] ] therefore =\int k_{h_0}(t)g(x - t)dt = k_{h_0}\star g(x),\ ] ] with .\end{aligned}\ ] ] therefore \|_2&\leq & \|k_{h_0}\|_1\|g\|_2\\ & \leq&\|k\|_1\|\hat d_h-\operatorname{{\mathbb e}}[\hat d_h]\|_2,\end{aligned}\ ] ] which justifies ( [ tildea ] ) . in the same way as in the proof of proposition [ estn ], we can establish the following : =\operatorname{{\mathbb e}}\big[\|\hat d_{h_0}-d\|_2^{2q}\big]\leq \square_q \big(\|\tilde e_{h_0}\|_2^{2q}+\big(\frac { { { \big\lvert g\big\rvert}}_\infty\|k'\|_2}{\sqrt{nh_0 ^ 3}}\big)^{2q}\big).\ ] ] finally , we successively apply ( [ biaisd ] ) , ( [ tildea ] ) and lemma [ concentration ] in order to conclude the proof .we use the notation and definitions of section [ sec : inversion ] .we have we prove by induction that for all one has .the result follows by summation over we first prove the two following estimates : by definition , is the average of the function on the interval ], we use the cauchy - schwarz inequality : we are ready to prove by induction the two following inequalities : for two constants and specified later on .first , for we have we recall ( see proposition a.1 . 
of ) that and we use the fact that and for , in order to write this proves the first induction assumption for and proves the second one .let us now suppose that the two induction assumptions are true for all and take let us first evaluate we distinguish the case when is even and when is odd .let be even : then , by definition by the induction assumption and assertion [ ass : delta ] on for if is odd , we write by definition hence , re - organizing terms , we can write putting together the four inequalities above ( the estimates for and the induction assumptions ) , we obtain and is proved .it remains to establishe .let us write it for even ( the case of an odd is similar ) : hence , as previously , we obtain to complete the proof , we remark that and are suitable .it is consequently sufficient to take .it is easy to see that thanks to proposition [ stabilitel2 ] .note that so that we can write we obtain , thanks to lemma [ lk-1 ] that gives the boundedness of the operator taking expectation and using cauchy - schwarz inequality , we obtain for any , &\leq & \square_q\big[\big(\operatorname{{\mathbb e}}[\hat\lambda_n^{2q}]\big)^{1/2}\big\{\big(\operatorname{{\mathbb e}}[{\hat\rho_n}^{4q}]\big)^{1/4}\big(\operatorname{{\mathbb e}}[\|\hat{d}-d\|_2^{4q}]\big)^{1/4}+\big(\operatorname{{\mathbb e}}[\|\hat{n}-n\|_2^{2q}]\big)^{1/2 } \\ & & \big.+\|d\|_{2}^{q}\big(\operatorname{{\mathbb e}}[|\hat\rho_n-\rho_g(n)|^{2q}]\big)^{1/2}\big\}\\ & & \big .+ \big(\|n\|_{2}+\rho_g(n ) { { \big\lvert d\big\rvert}}_{2}\big ) ^q\operatorname{{\mathbb e}}[|\hat\lambda_n-\lambda|^q]+ \big((\|n\|_{{\cal w}^1 } + \|g n\|_{{\cal w}^2}){\frac}{t}{\sqrt{k}}\big)^q\big].\end{aligned}\ ] ] now , lemma [ hatrho ] gives the behaviour of ] .so , for all , now we consider a family of possible functions and .let us introduce some strictly positive weights and let us apply the previous inequality to for . hence with probability larger than , for all , one has let it is also easy to obtain an upper bound of for any with =\int_0^{+\infty}\operatorname{{\mathbb p}}\big(\sup_{(\varphi , \psi)\in \mathcal{m } } \big(\|\xi_{\varphi,\psi}\|_2-m_{\varphi,\psi}\big)_+^{2q}\geq x\big ) dx.\ ] ] indeed then , let us take such that so hence .\\\end{aligned}\ ] ] finally , we have proved that .\ ] ] now let us evaluate what this inequality means for each set - up .* first , when and , the family corresponds to the family . in that case and will respectively be denoted by and . the upper bound given in becomes .\ ] ] now it remains to choose .but let and let obviously the series in is finite and for any , since , we have : since , one obtains that it remains to choose and small enough such that to obtain the desired inequality . * secondly , if and family corresponds to the family .so , and will be denoted by and respectively .the upper bound given by now becomes .\ ] ] but let and let obviously the series in is finite and we have : but .hence as previously , it remains to choose and accordingly to conclude .[ bar ] under assumptions and notations of proposition [ rate ] , if there exists an interval ] and for , first let us look at .\ ] ] one can apply bernstein inequality to ( see _ e.g. 
_ ( 2.10 ) and ( 2.21 ) of ) to obtain : \leq \exp\big[\frac{\lambda^2 v^2(x , y)}{2(1-\lambda c(x , y))}\big ] , \quad \forall \lambda \in ( 0 , 1/c(x , y)),\ ] ] with and but , with the lipschitz constant of , and let be a fixed point of ] and all consequently , for large enough , , 2\hat{n}_h(x ) < m)\leq \square_\eta n^{-q } .\ ] ] we have : \leq \square_{p}(a_n+b_n),\ ] ] with \ ] ] and .\ ] ] we use the rosenthal inequality ( see _ e.g. _ the textbook ) : if are independent centered variables such that <\infty, ] , for any .hence , for the first term , using ( [ rosenthal ] ) , we have : \\ & \leq&\square_{g , n , c}\operatorname{{\mathbb e}}\big[\big|\frac{1}{n}\sum_{i=1}^nx_i\int_{\operatorname{{\mathbb r}}_+}g(x)n(x)dx-\int_{\operatorname{{\mathbb r}}_+}xn(x)dx\big(\int g(x)n(x)dx+\frac{c}{n}\big)\big|^p\big]\\ & \leq&\square_{p , g , n}\operatorname{{\mathbb e}}\big[\big|\frac{1}{n}\sum_{i=1}^nx_i-\int_{\operatorname{{\mathbb r}}_+}xn(x)dx\big|^p\big]+\square_{g ,n , c}n^{-p}\\ & \leq&\square_{p , g , n , c}n^{-p/2}. \end{aligned}\ ] ] let us turn to the term : \\ & \leq&\big(\operatorname{{\mathbb e}}\big[\big|\frac{1}{n}\sum_{i=1}^nx_i\big|^{2p}\big]\big)^{1/2}\times\big(\operatorname{{\mathbb e}}\big[\big|\frac{\frac{1}{n}\sum_{i=1}^ng(x_i)-\int g(x)n(x)dx}{\big(\frac{1}{n}\sum_{i=1}^ng(x_i)+\frac{c}{n}\big)\big(\int g(x)n(x)dx+\frac{c}{n}\big)}\big|^{2p}\big]\big)^{1/2}\\ & \leq & \|gn\|_1^{-p}\big(\operatorname{{\mathbb e}}\big[\big|\frac{1}{n}\sum_{i=1}^nx_i\big|^{2p}\big]\big)^{1/2 } \big(\operatorname{{\mathbb e}}\big[\big|\frac{\frac{1}{n}\sum_{i=1}^ng(x_i)-\int g(x)n(x)dx}{\frac{1}{n}\sum_{i=1}^ng(x_i)+\frac{c}{n}}\big|^{2p}\big]\big)^{1/2}\\ & \leq&\square_{p , g ,n}\big(\operatorname{{\mathbb e}}\big[\big|\frac{1}{n}\sum_{i=1}^nx_i-\operatorname{{\mathbb e}}[x_1]\big|^{2p}+\big|\operatorname{{\mathbb e } } [ x_1]\big|^{2p}\big)^{1/2 } \times \\ & & \big(\operatorname{{\mathbb e}}\big[\big|\frac{\frac{1}{n}\sum_{i=1}^ng(x_i)-\int g(x)n(x)dx}{\frac{1}{n}\sum_{i=1}^ng(x_i)+\frac{c}{n}}\big|^{2p}\big]\big)^{1/2}\\ & \leq&\square_{p , g , n}\big(\operatorname{{\mathbb e}}\big[\big|\frac{\frac{1}{n}\sum_{i=1}^ng(x_i)-\int g(x)n(x)dx}{\frac{1}{n}\sum_{i=1}^ng(x_i)+\frac{c}{n}}\big|^{2p}\big]\big)^{1/2}.\end{aligned}\ ] ] now , we set for ( recall that assumption [ as : an ] states that , which also implies <\infty$ ] ) . 
since is positive , the bernstein inequality ( see section 2.2.3 of ) gives : therefore , we bound from above the term by a constant times \big)^{1/2}\\ & \leq&\square_{p , g , n , c}n^{-p/2}+ \square_{p , g , n } \sqrt{2}(nc^{-1}\|g\|_\infty)^pn^{-\gamma/2}\\ & \leq & \square_{p , g , n , c } n^{-p/2 } , \end{aligned}\ ] ] where we have used ( [ rosenthal ] ) for the first term and ( [ bernstein ] ) for the second one .this concludes the proof of the lemma .we have : }{\frac}{1}{16}\bigl(h_{j , k } + \varphi_{j , k}\bigr)^2 + \sum\limits_{j=1}^{[{\frac}{k}{2 } ] } { \frac}{1}{64}(h_{j , k } + \varphi_{j , k } + h_{j-1,k } + \varphi_{j-1,k})^2 \biggr)\\ \\ & \leq&{\frac}{t}{k } \biggl(\sum\limits_{j=0}^{[{\frac}{k-1}{2 } ] } { \frac}{1}{8}\bigl(h_{j , k}^2 + \varphi_{j , k}^2\bigr ) + \sum\limits_{j=1}^{[{\frac}{k}{2 } ] } { \frac}{1}{16 } ( h_{j , k}^2 + \varphi_{j , k}^2 + h_{j-1,k}^2 + \varphi_{j-1,k}^2\bigr ) \biggr)\\ \\ & \leq & { \frac}{1}{4 } { \frac}{t}{k } \sum\limits_{i=0}^{k-1 } \biggl(h_{j , k}^2 + \varphi_{j , k}^2\biggr)={\frac}{1}{4 } \biggl ( \int |{\mathcal l}^{-1}_k(\varphi)(x)|^2 dx + { \frac}{t}{k}\sum\limits_{i=0}^{k-1}\varphi_{j , k}^2 \biggr ) .\end{array}\ ] ] at the second line , we have distinguished the s that are even and the s that are odd . at the third line, we have used the inequalities and by substraction , we obtain the cauchy - schwarz inequality gives : so that and finally we obtain the desired result : * acknowledgment : * we warmly thank oleg lepski for fruitful discussions that eventually led to the choice of his method for the purpose of this article .the research of m. hoffmann is supported by the agence nationale de la recherche , grant no .anr-08-blan-0220 - 01 .the research of p. reynaud - bouret and v. rivoirard is partly supported by the agence nationale de la recherche , grant no .anr-09-blan-0128 parcimonie .the research of m. doumic is partly supported by the agence nationale de la recherche , grant no .anr-09-blan-0218 toppaz .99 h. t. banks , karyn l. sutton , w. clayton thompson , gennady bocharov , dirk roose , tim schenkel and andreas meyerhans , _ estimation of cell proliferation dynamics using cfse data _ , bull . of math ., doi : 10.1007/s11538 - 010 - 9524 - 5 .baraud , y. _ a bernstein - type inequality for suprema of random processes with applications to model selection in non - gaussian regression ._ to appear in bernoulli .doumic , m. and gabriel , p. ( 2010 ) _ eigenelements of a general aggregation - fragmentation model_. math .models methods appl .20 ( 2010 ) , no . 5 , 757783 .doumic , m. , perthame , b. and zubelli , j. ( 2009 ) _ numerical solution of an inverse problem in size - structured population dynamics_. inverse problems , * 25 * , 25pp .engl , m. hanke , a. neubauer , _ regularization of inverse problems _ , springer verlag , 1996 .gasser , t. and mller , h.g .( 1979 ) _ optimal convergence properties of kernel estimates of derivatives of a density function_. in lecture notes in mathematics * 757*. springer , berlin , 144154 .hall , p. , heyde , c.c . ,_ martingale limit theory and its applications_. academic press , new york .hrdle , w. , kerkyacharian , g. , picard , d. and tsybakov a. ( 1998 ) _ wavelets , approximation and statistical applications_. springer - verlag , berlin .lepski , o. v. ( 1990 ) ._ one problem of adaptive estimation in gaussian white noise_. theory probab. appl . * 35 * 459470 .lepski , o. v. ( 1991 ) ._ asymptotic minimax adaptive estimation . 1 .upper bounds_. theory probab. appl . 
* 36 * 645659 .lepski , o. v. ( 1992 ) ._ asymptotic minimax adaptive estimation .2 . statistical models without optimal adaptation . adaptive estimators ._ theory probab. appl . * 37 * 468481 .lepski , o. v. ( 1992 ) ._ on problems of adaptive estimation in white gaussian noise_. in advances in soviet mathematics ( r. z. khasminskii , ed . )* 12 * 87106 .soc . , providence .goldenshluger , a. , lepski , o. ( 2009 ) _ uniform bounds for norms of sums of independent random functions_ arxiv:0904.1950 .goldenshluger , a. , lepski , o. ( 2010 ) _ bandwidth selection in kernel density estimation : oracle inequalities and adaptive minimax optimality _ arxiv:1009.1016 .mair , b. a. and ruymgaart , f.h .( 1996 ) _ statistical inverse estimation in hilbert scales_. siam j. appl . math . * 56 * , 14241444 .massart , p. ( 2007 ) _ concentration inequalities and model selection ._ lectures from the 33rd summer school on probability theory held in saint - flour , july 623 , 2003 .springer , berlin .nussbaum , m. ( 1996 ) ._ asymptotic equivalence of density estimation and white noise_. ann .* 24 * 2399 - 2430 .nussbaum , m. and pereverzev , s. ( 1999 ) _ the degrees of ill - posedness in stochastic and deterministic noise models_. preprint wias 509 .perthame , b. transport equations arising in biology ( 2007 ) . in _frontiers in mathematics _ , frontiers in mathematics .metz , j.a.j . and dieckmann o. ( 1986 ) ._ formulating models for structured populations_. in _ the dynamics of physiologically structured populations ( amsterdam , 1983 ) _ , lecture notes in biomath . , vol .68 , 78135 .p. michel , s. mischler , b. perthame , _ general relative entropy inequality : an illustration on growth models , _ j. math .pures appl .( 9 ) 84 , 12351260 ( 2005 ) .b. perthame and l. ryzhik , _ exponential decay for the fragmentation or cell - division equation , _ j. of diff .eqns , * 210 * , 155177 ( 2005 ) .b. perthame and j. p. zubelli . on the inverse problem for a size - structured population model ., 23(3):10371052 , 2007 .tsybakov , a. ( 2004 ) ._ introduction lestimation non - paramtrique_. springer , berlin .wahba , g. ( 1977 ) _ practical approximate solutions to linear operator equations when the data are noisy_. siam j. numer .* 14 * , 651667 .
|
we consider the problem of estimating the division rate of a size - structured population in a nonparametric setting . the size of the system evolves according to a transport - fragmentation equation : each individual grows with a given transport rate , and splits into two offspring of the same size , following a binary fragmentation process with unknown division rate that depends on its size . in contrast to a deterministic inverse problem approach , as in , we take in this paper the perspective of statistical inference : our data consist of a large sample of the sizes of individuals , observed when the evolution of the system is close to its time - asymptotic behavior , so that it can be related to the eigenproblem of the considered transport - fragmentation equation ( see for instance ) . by estimating statistically each term of the eigenvalue problem and by suitably inverting a certain linear operator ( see ) , we are able to construct a more realistic estimator of the division rate that achieves the same optimal error bound as in related deterministic inverse problems . our procedure relies on kernel methods with automatic bandwidth selection . it is inspired by model selection and recent results of goldenshluger and lepski . * keywords : * lepski method , oracle inequalities , adaptation , aggregation - fragmentation equations , statistical inverse problems , nonparametric density estimation , cell - division equation . * mathematical subject classification : * 35a05 , 35b40 , 45c05 , 45k05 , 82d60 , 92d25 , 62g05 , 62g20
|
the problem of studying a charged particle beam focused by external electric or magnetic fields is important in many applications .the motion of such particle beams is governed by the interactions between the electric field generated by the particles themselves and by the external focusing electromagnetic field .the modelling framework consists in coupling a kinetic equation with maxwell equations . disregarding the collisions between particles ,the kinetic modelling is performed by means of the vlasov equation . in this paperwe will consider only non - relativistic long and thin beams .therefore , instead of studying the phenomenon by means of the full vlasov - maxwell system , we can use its paraxial approximation ( see for mathematical modelling and numerical simulation of focused particle beams dynamics ) . in this framework , the effects of the self - consistent magnetic and electric fields can be both taken into account by solving a single poisson equation . finally , we are led to solve a two dimensional phase space vlasov - poisson system with a parameter .this small parameter acts on the time variable ( in fact the longitudinal variable of a thin beam , see for details and physical meaning of the parameter ) , producing the highly oscillatory behaviour in phase space . in this framework ,the aim of this paper is to propose a numerical scheme able to study efficiently the evolution of a beam over a large number of fast periods .although more precise alternatives exist , we have chosen in this work to perform the numerical solution of vlasov equation by particle methods , which consist in approximating the distribution function by a finite number of macroparticles .the trajectories of these particles are computed from the characteristic curves of the vlasov equation , whereas the self - consistent electric field is computed on a mesh in the physical space .this method allows to obtain satisfying results with a small number of particles ( see ) .the contribution of this paper is to propose a new numerical method for solving the characteristic curves , or equivalently , for computing the macroparticles trajectories .namely , we are faced with solving the following stiff differential equation for several small values of the parameter and where represents a nonlinear term which plays the role of the self - consistent electric field .the difficulty arising in the numerical solution for this equation relies on the ability of the scheme to be uniformly stable and accurate when goes to zero .following the survey article , we are encouraged to use efficient numerical methods like exponential integrators in order to describe the dynamics of equation .the basic idea behind these methods is to make use of the solution s exact representation given by the variation - of - constants formula applying this formula from one time step to another has the merit to solve exactly the linear ( stiff ) part .classical numerical schemes fail to capture the stiff behaviour regardless of the size of the time step with respect to the small parameter .one may consult for construction , mathematical analysis and implementation of exponential integrators in the two classical types of stiff problems encapsulated in equation . 
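as a rough illustration of the idea ( and not of the scheme developed later in this paper ) , the following sketch integrates a toy stiff system whose linear part generates a fast rotation , using the simplest first - order exponential time differencing ( etd1 ) step built from the variation - of - constants formula ; the 2x2 matrix , the nonlinear term and all numerical values below are invented for illustration .

```python
import numpy as np
from scipy.linalg import expm

# toy stiff system  y' = (1/eps) * A y + F(y):  A generates the fast oscillation,
# F is a weak (made-up) nonlinearity; none of this is the paper's Vlasov-Poisson setup.
eps = 1e-3
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda y: np.array([0.0, -0.1 * y[0] ** 3])

def etd1_step(y, dt):
    """One first-order exponential-time-differencing step.

    The linear part is propagated exactly by the matrix exponential; the
    nonlinear term is frozen over the step, which is the crudest possible
    approximation of the integral in the variation-of-constants formula.
    """
    L = A / eps
    E = expm(dt * L)                            # exact propagator of the stiff linear part
    phi1 = np.linalg.solve(L, E - np.eye(2))    # = L^{-1}(e^{dt L} - I) = dt * phi_1(dt L)
    return E @ y + phi1 @ F(y)

y = np.array([1.0, 0.0])
dt = 0.1                                        # many fast periods (2*pi*eps) per step
for _ in range(100):
    y = etd1_step(y, dt)
```

the step is stable however large the time step is compared to the fast period ; how accurately the integral term is approximated when one step covers many fast periods is precisely the point addressed by the scheme proposed in the sequel .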
as a specific implementation, one may cite , where , in the context of laser - plasma interactions , an exponential integrator is used for a particle - in - cell ( pic ) method in order to model the high - frequency plasma response .once the stiff part is exactly solved , one may use an exponential time differencing ( etd ) method ( see ) for the specific numerical treatement of the nonlinear term .the etd schemes turn out to outperform many other schemes when treating problems like ; see for comparisons of etd against the implicit - explicit method , the integrating factor method , etc . in the present paperwe construct and implement an exponential integrator in order to solve the characteristics of the highly oscillatory vlasov - poisson problem .the aim is to use a scheme with large time steps compared to the fast period that arises from the linear term without loosing the accuracy when the small parameter vanishes .the novelty of this method is in the numerical approximation of the integral term in .more precisely , when the time step is much larger than the rapid period , the idea of the algorithm is the following : we first finely solve the odes over one fast period by means of a high - order solver ( we have used explicit 4th order runge - kutta ) .then , thanks to formula , we may compute an approximation of the solution over a large integral number of periods .we have also found that using a more accurate period , instead of the period of the solution to the system without the nonlinear term , leads to more accurate simulations .in addition , we have checked if the scheme gives accurate solutions starting with an initial condition which lies on the slow manifold or not .we cite for a `` definition '' of the slow manifold : `` the slow manifold is that particular solution which varies only on the slow time scale ; the general solution to the ode contains fast oscillations also . ''the remainder of the paper is organized as follows .following , we briefly recall in section [ sec : axivp ] the paraxial approximation together with the axisymmetric beam assumption . in section [ sec: pic - vp ] we describe the pic method for the vlasov - poisson system in which we are interested .then , section [ sec : eipic ] is devoted to the construction of the new numerical scheme as an exponential integrator for solving the time - stepping part of the pic algorithm .eventually , in section [ sec : val - numerics ] , we implement and test our method on several test cases related to the vlasov - poisson system .the paraxial approximation relies on a scaled vlasov - poisson system in a phase space of dimension four , for space variable and for velocity variable .this simplified model of the full vlasov - maxwell system is particularly adapted to the study of long and thin beams of charged particles and it describes their evolution in the plane transverse to their direction of propagation .subject of many research investigations , the paraxial model was derived in a number of papers , see e.g. . in this workwe are interested in solving numerically a paraxial model with some additional hypotheses , see below .the solution of system is represented by a beam of particles in phase space .the beam evolves by rotating around the origin in the phase space , and in long times a bunch forms around the center of the beam from which filaments of particles are going out .these filaments are difficult to capture with classical numerical methods .we now introduce the paraxial model that we aim to solve . 
in the additional axisymmetric beam assumption ( i.e. invariant beam under rotation in for ) , we are led to change the coordinate in the polar frame .we thus write the model in polar coordinates , where and is such that and . then we use new velocity variables and , where . assuming in addition , as in , that is concentrated in angular momentum , i.e. , the paraxial model becomes where the external force writes with and some -periodic function with zero mean value . in this paperwe assume that there is no time oscillation in the external field but only the strong uniform focusing .we thus take and and the vlasov - poisson system in which we are interested writes in order to test the numerical method that we propose , we first consider two numerically simpler test cases where the electric field is not issued from poisson equation , but it has analytical forms : in the first case and in the second one . unlike the second case , for the first one , we can analytically compute the solution to ( a ) .we solve the vlasov - poisson system by using a particle - in - cell ( pic ) method .we thus introduce the following dirac mass sum approximation of where is the number of macroparticles and is the position in phase space of macroparticle moving along a characteristic curve of equation ( a ) .therefore , the problem is to find the positions and velocities at time from their values at time , by solving in this case , the standard pic algorithm writes as follows : ( 1 ) deposit particles on a spatial grid , leading to the grid density ; ( 2 ) solve poisson equation on the grid , leading to the grid electric field ; ( 3 ) interpolate the grid electric field in each particle ; ( 4 ) push particles with the previously obtained electric field .the first three steps are classically treated .the first one deals with the computing of the grid density by convoluting defined by with a first order spline ( this corresponds to the cloud - in - cell method in ) . then , we solve poisson equation ( b ) on a uniform one - dimensional grid by using finite differences .in our case this amounts to only discretize some space integral .we have done this by using the trapezoidal rule . as for the third step, we used the same convolution function as for the deposition step in order to get the particle electric field ( this corresponds to a linear interpolation of the grid electric field on each cell ) .eventually , the major issue is the fourth step of the pic algorithm , consisting in the numerical integration of system .here is the main focus of this paper , taking into account that we want to propose a stable and accurate scheme using large time steps with respect to the fast oscillation . to this end, we introduce in the next section a method based on exponential time differencing .we now describe the exponential numerical integrator that we have implemented to solve the fourth step of the pic algorithm .we first write down the exponential time differencing ( etd ) method in the case of the stiff ode system we are interested in . 
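before writing down that time - stepping method , the deposition and interpolation steps ( 1 ) and ( 3 ) of the pic loop described above can be sketched in a few lines ; the code below is a toy one - dimensional periodic version with invented grid and particle data — it only illustrates cloud - in - cell deposition and the matching linear gather , and is not the radial setting of the paper nor the code used in section [ sec : val - numerics ] .

```python
import numpy as np

def deposit_cic(xp, wp, ngrid, length):
    """Cloud-in-cell (first-order spline) deposition of particle weights on a periodic 1-d grid."""
    dx = length / ngrid
    rho = np.zeros(ngrid)
    left = np.floor(xp / dx).astype(int)
    frac = xp / dx - left                                 # fractional position inside the cell
    np.add.at(rho, left % ngrid, wp * (1.0 - frac) / dx)
    np.add.at(rho, (left + 1) % ngrid, wp * frac / dx)
    return rho

def gather_linear(xp, field, length):
    """Linear interpolation of a grid field back to the particle positions (same spline)."""
    ngrid = field.size
    dx = length / ngrid
    left = np.floor(xp / dx).astype(int)
    frac = xp / dx - left
    return (1.0 - frac) * field[left % ngrid] + frac * field[(left + 1) % ngrid]

# invented data: 10^4 macroparticles of equal weight on a domain of unit length
rng = np.random.default_rng(0)
xp = rng.uniform(0.0, 1.0, 10_000)
wp = np.full(xp.size, 1.0 / xp.size)
rho = deposit_cic(xp, wp, ngrid=64, length=1.0)
e_at_particles = gather_linear(xp, rho, length=1.0)       # stand-in for step (3) of the pic loop
```

steps ( 2 ) and ( 4 ) — the poisson solve on the grid and the particle push — are omitted here ; the push is exactly the time - stepping problem discussed next .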
then , in section [ sec : etd - pic ] , we develop an algorithm based on the exponential time differencing in the framework of the pic method .the so - called exponential time differencing scheme arose originally in the field of computational electrodynamics but has been reinvented many times over the years ( see and the references therein ) .we take details of the ideas behind the various etd schemes from the comprehensive paper by cox and matthews .recall the stiff system of odes that we have to solve : with some initial condition . in this sectionwe assume that the electric field is given .as exposed in , the stiffness comes from the two scales on which the solution evolves : the rapid oscillations due to the linear term and a slower evolution due to the nonlinear ( electric ) term .thus , while any explicit scheme is limited to a small time step , of order , a fully implicit one requires nonlinear problems to be solved and is therefore slow . a suitable time - stepping scheme for should be able to avoid the small time steps when treating the stiffness .the essence of the etd methods is to solve the stiff linear part exactly and to derive appropriate approximations when integrating numerically the slower nonlinear term . to derive the exponential time differencing ( etd ) method for this systemwe first apply to and then integrate the obtained equation from to to deduce that where this formula is exact .in addition , it is also useful to write by replacing by any couple such that .more precisely , we have now the main question is how to derive approximations to the integral term in .all the etd schemes are results of this process . in this spirit, it was shown in that etd methods can be extended to any order by using multistep or runge - kutta methods and explicit formulae for such arbitrary order etd methods were derived .in particular , explicit coefficients for etd runge - kutta methods of order up to four have been computed .the authors also illustrated on several odes and pdes that etd is superior over the integrating factor and implicit - explicit methods , two other classical schemes able to avoid the small time step .nevertheless , in the form written in , a high - order etd scheme ( e.g. etdrk4 ) suffers from numerical instability as explained in .the problem has been solved in by using a contour integral method for evaluating the coefficients .then , this modified etdrk4 scheme has been tested in against five other 4th order schemes on several pdes , and etdrk4 has been found the best in terms of errors .these results encouraged us to use an etd scheme in order to solve stiff odes . in this paperwe do not use a high - order etd method as described in the references above but merely the formula for justifying the derivation of algorithm [ algo - time ] .more precisely , we adopt a different approach for approximating the integral term in , which is justified by the remark in the next paragraph and by our aim to use very large time steps compared to the fast oscillations , say when .this is the core of the method described in the following section . 
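the fine sub - cycling over one fast period that the first step of algorithm [ algo - time ] relies on only requires a standard explicit solver ; a generic fourth - order runge - kutta step , as in the sketch below ( textbook rk4 , with the right - hand side and the number of sub - steps left as arbitrary choices ) , is sufficient for that purpose .

```python
def rk4_step(f, t, y, h):
    """One classical explicit fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def solve_one_fast_period(f, t0, y0, period, nsub=64):
    """Resolve one fast period with nsub RK4 sub-steps (nsub = 64 is an arbitrary choice)."""
    h = period / nsub
    t, y = t0, y0
    for _ in range(nsub):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

in the method itself , the solution computed over one such period is then reused , through the variation - of - constants formula , to advance over a large integer number of fast periods without resolving them .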
in this paper we have introduced a new numerical scheme for solving a stiff ( highly oscillatory ) differential equation . this scheme is based on exponential time differencing and can accurately handle large time steps with respect to the fast oscillation of the solution . it is applied in the framework of a particle - in - cell method for solving a vlasov - poisson equation . since the numerical results are encouraging , several directions may be explored in the future to improve some points of the scheme . we have seen that the use of as fast time within the first step of algorithm [ algo - time ] may lead to an unstable simulation . in addition , even in a stable simulation , the use of the particles' mean period gives smaller errors than those obtained with . therefore , we think it is important to find theoretically a more accurate approximation of the fast time . it will be interesting to see if our numerical scheme preserves the two - scale asymptotic limit , meaning that in the numerical scheme leads to a consistent discretization of the two - scale limit model . this remark is based on the fact that the etd discretization we have used is very close to an explicit discretization of the limit model in theorem 1.1 of .
|
in the framework of a particle - in - cell scheme for some 1d vlasov - poisson system depending on a small parameter , we propose a time - stepping method which is numerically uniformly accurate when the parameter goes to zero . based on an exponential time differencing approach , the scheme is able to use large time steps with respect to the typical size of the fast oscillations of the solution .
|
consider estimation of a parameter in the linear regression model where is a given , deterministic matrix , and is an -variate standard normal vector .the model is standard , but we are interested in the _ sparse _ setup , where , and possibly , and `` many '' or `` most '' of the coefficients of the parameter vector are zero , or close to zero .we study a bayesian approach based on priors that set a selection of coefficients a priori to zero ; equivalently , priors that distribute their mass over models that use only a ( small ) selection of the columns of .bayes s formula gives a posterior distribution as usual .we study this under the `` frequentist '' assumption that the data has in reality been generated according to a given ( sparse ) parameter .the expectation under the previous distribution is denoted .specifically , we consider a prior on that first selects a _ dimension _ from a prior on the set , next a random subset of cardinality and finally a set of nonzero values from a prior density on .formally , the prior on can be expressed as where the term refers to the coordinates being zero .we focus on the situation where is a product of densities over the coordinates in , for a fixed continuous density on , with the laplace density as an important special case .the prior is crucial for expressing the `` sparsity '' of the parameter .one of the main findings of this paper is that weights that decrease slightly faster than exponential in the dimension give good performance .priors of the type of ( [ defprior ] ) were considered by many authors , including .other related contributions include .the paper contains a theoretical analysis similar to the present paper , but restricted to the special case that the regression matrix is the identity and ; see example [ examplesequencemodel ] .the general model ( [ model ] ) shares some features with this special case , but is different in that it must take account of the noninvertibility of and its interplay with the sparsity assumption , especially for the case of recovering the parameter , as opposed to estimating the mean . while the proofs in use a factorization of the model along the coordinate axes , exponential tests and entropy bounds , in the present paper we employ a direct and refined analysis of the posterior ratio ( [ bayes ] ) , exploiting the specific form of the prior laplace density . furthermore , even for the case that is the identity matrix , the present paper provides several new results of interest : distributional approximations to the posterior distribution , insight in the scaling of the prior on the nonzero coordinates and oracle formulations of the contraction rates .algorithms for the computation of the posterior distribution corresponding to ( [ defprior ] ) , especially for the `` spike and slab '' prior described in example [ example.slabspike ] below , are routine for small dimensions and ( e.g. , ) . for large dimensions the resulting computations are intensive , due to the large number of possible submodels .many authors are currently developing algorithms that can cope with larger numbers of covariates , in the sparse setup considered in the present paper . 
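for concreteness , drawing a parameter vector from a prior of the form ( [ defprior ] ) is itself straightforward ; the sketch below uses a geometric - type dimension prior , a uniform choice of the support and i.i.d. laplace coordinates , with the ambient dimension , the constant of the dimension prior and the laplace scale all chosen arbitrarily for illustration .

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_from_prior(p, lam=1.0, c=4.0):
    """One draw from the hierarchical prior: a dimension s from a complexity prior,
    a support uniform among the size-s subsets, and i.i.d. Laplace(lam) values on it.
    The prior on s used here, proportional to c**(-s), is only a crude stand-in for
    the complexity priors discussed in the paper."""
    s_grid = np.arange(p + 1)
    pi_n = c ** (-s_grid.astype(float))
    pi_n /= pi_n.sum()
    s = rng.choice(s_grid, p=pi_n)                  # model dimension
    support = rng.choice(p, size=s, replace=False)  # support, uniform over subsets of size s
    beta = np.zeros(p)
    beta[support] = rng.laplace(scale=1.0 / lam, size=s)
    return beta

beta = draw_from_prior(p=20)
print(np.flatnonzero(beta))                         # the coordinates selected to be nonzero
```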
in section [ sectioncomputationalaspects ]we review recent progress on various methods , of which some are feasible for values of up to hundreds or thousands .although this upper bound will increase in the coming years , clearly it falls far short of the dimensions attainable by ( point ) estimation methods based on convex programming , such as the lasso . other bayesian approaches to sparse regression that do not explicitly include model selection ( e.g. , )can cope with somewhat higher dimensions , but truly high - dimensional models are out of reach of fully bayesian methods at the present time . not surprisingly to overcome the nonidentifiability of the full parameter vector in the overspecified model ( [ model ] ) , we borrow from the work on sparse regression within the non - bayesian framework ; see .good performance of the posterior distribution is shown under _ compatibility _ and _ smallest sparse eigenvalue _ conditions ; see section [ sec.recovery ] . although the constants in these results are not as sharp as results for the lasso , the posterior contraction rates obtained are broadly comparable to convergence rates of the lasso .the lasso and its variants are important frequentist methods for sparse signal recovery . as the lasso is a posterior mode ( for an i.i.d .laplace prior on the ) , it may seem to give an immediate link between bayesian and non - bayesian methods .however , we show in section [ sec.spars_and_lasso ] that the lasso is essentially non - bayesian , in the sense that the corresponding _ full _ posterior distribution is a useless object .in contrast , the posterior distribution resulting from the prior ( [ defprior ] ) gives both reasonable reconstruction of the parameter and a quantification of uncertainty through the spread in the posterior distribution .we infer this from combining results on the contraction rate of the full posterior distribution with distributional approximations .the latter show that the posterior distribution behaves asymptotically as a mixture of bernstein von mises type approximations to submodels , where the location of the mixture components depends on the setting .the latter approximations are new , also for the special case that is the identity matrix .it is crucial for these results that the prior ( [ defprior ] ) models sparsity through the _ model selection _prior , and separates this from modeling the nonzero coordinates through the prior densities .for instance , in the case that is a product of laplace densities , this allows the scale parameter to be constant or even to tend to zero , thus making this prior uninformative .this is in stark contrast to the choice of the smoothing parameter in the ( bayesian ) lasso , which must tend to infinity in order to shrink parameters to zero , where it can not differentiate between truly small and nonzero parameters .technically this has the consequence that the essential part of the proofs is to show that the posterior distribution concentrates on sets of small dimension .this sets it apart from the frequentist literature on sparse regression , although , as mentioned , many essential ideas reappear here in a bayesian framework .the paper is organized as follows . 
in section [ sec.recovery ]we present the main results of the paper .we specialize to laplace priors on the nonzero coefficients and investigate the ability of the posterior distribution to recover the parameter vector , the predictive vector and the set of nonzero coordinates .furthermore , we derive a distributional approximation to the posterior distribution , and apply this to construct and study credible sets . in section [ sec.spars_and_lasso ]we present the negative result on the bayesian interpretation of the lasso .next in section [ sectionarbitrarydesign ] we show that for recovery of only the predictive vector , significantly milder conditions than in section [ sec.recovery ] suffice .proofs are deferred to section [ sectionproofs ] and the supplementary material . for a vector and a set of indices , is the vector , and is the cardinality of . the _ support _ of the parameter is the set .the support of the true parameter is denoted , with cardinality .similarly , for a generic vector , we write and .we write if there is no ambiguity to which set is referred to . for and , let .we let be the column of , and for the prior defined above , bayes s formula gives the following expression for the posterior distribution n\times the columns with , and let be a least square estimator in the restricted model , that is , in case the restricted model would be correctly specified , the least squares estimator would possess a -distribution , and the posterior distribution ( in a setting where the data washes out the prior ) would be asymptotically equivalent to a -distribution , by the bernstein von mises theorem . in our present situation ,the posterior distribution is approximated by a random mixture of these normal distributions , of the form where denotes the dirac measure at , the weights satisfy and , for a sufficiently large the weights are a data - dependent probability distribution on the collection of models .the latter collection can be considered a `` neighborhood '' of the support of the true parameter , both in terms of dimensionality and the ( lack of ) extension of the true parameter outside these models .a different way of writing the approximation is where is the intersection ( and not projection ) of with the subspace . to see this , decompose , and observe that the two summands are orthogonal .the lebesgue integral can be interpreted as an improper prior on the parameter of model , and the expression as a mixture of the corresponding posterior distributions , with model weights proportional to the prior weights times .it follows that the laplace priors on the nonzero coordinates wash out from the components of the posterior . on the other hand , they are still visible in the weights through the factors . 
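before discussing how mild that influence is , the mixture - over - models structure just described can be made concrete by brute force when the number of covariates is tiny : the sketch below enumerates all supports and computes the posterior model weights in closed form , using a gaussian slab in place of the laplace prior purely so that the marginal likelihood is explicit ; the design , the data and all constants are invented , and this illustrates the structure rather than the procedure analysed in the paper .

```python
import numpy as np
from itertools import combinations
from math import comb, log
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
n, p, tau, c = 30, 6, 2.0, 4.0                        # all values invented
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[[0, 3]] = [2.0, -1.5]
Y = X @ beta_true + rng.standard_normal(n)

def log_model_prior(S):
    # dimension prior proportional to c**(-|S|), then uniform over the C(p,|S|) subsets
    return -len(S) * log(c) - log(comb(p, len(S)))

models, log_w = [], []
for s in range(p + 1):
    for S in combinations(range(p), s):
        XS = X[:, list(S)] if s else np.zeros((n, 0))
        cov = np.eye(n) + tau ** 2 * XS @ XS.T        # marginal covariance under the gaussian slab
        log_w.append(multivariate_normal.logpdf(Y, mean=np.zeros(n), cov=cov) + log_model_prior(S))
        models.append(S)

log_w = np.array(log_w)
w = np.exp(log_w - log_w.max())
w /= w.sum()
for i in np.argsort(w)[::-1][:3]:
    print(models[i], round(float(w[i]), 3))           # posterior mass of the three largest models
```

for such data one typically sees the posterior mass concentrate on supports close to the true one , which is the behaviour quantified by the results below .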
in general , this influence is mild in the sense that these factors will not change the relative weights of the models much .[ thmm.bvm_type ] if satisfies ( [ eq.lambda_cond ] ) , and satisfies ( [ assump.on_the_dim_prior ] ) , then for every and any with , [ cor.strong_mod_selection ] under the combined assumptions of corollary [ cor.consistent_mod_selection ] and theorem [ thmm.bvm_type ] , the distributional results imply that the spread in the posterior distribution gives a correct ( conservative ) quantification of remaining uncertainty on the parameter .one way of making this precise is in terms of _ credible sets _ for the individual parameters .the marginal posterior distribution of is a mixture of a point mass at zero and a continuous component .thus a reasonable _ upper 0.975 credible limit _ for is equal to it is not difficult to see that under the conditions of corollary [ cor.strong_mod_selection ] , if and if .the lasso ( cf . )\ ] ] is the posterior mode for the prior that models the coordinates as an i.i.d . sample from a laplace distribution with scale parameter , and thus also possesses a bayesian flavor . it is well known to have many desirable properties : it is computationally tractable ; with appropriately tuned smoothing parameter it attains good reconstruction rates ; it automatically leads to sparse solutions ; by small adaptations it can be made consistent for model selection under standard conditions .however , as a bayesian object it has a deficit : in the sparse setup the full posterior distribution corresponding to the lasso prior does not contract at the same speed as its mode .therefore the full posterior distribution is useless for uncertainty quantification , the central idea of bayesian inference .we prove this in the following theorem , which we restrict to the sequence model of example [ examplesequencemodel ] , that is , model ( [ model ] ) with the identity matrix . in this setting the lasso estimator is known to attain the ( near ) minimax rate for the square euclidean loss over the `` nearly black bodies '' , and a near minimax rate over many other sparsity classes as well , if the regularity parameter is chosen of the order .the next theorem shows that for this choice the lasso posterior distribution puts no mass on balls of radius of the order , which is substantially bigger than the minimax rate ( except for extremely dense signals ) . intuitively ,this is explained by the fact that the parameter in the laplace prior must be large in order to shrink coefficients to zero , but at the same time reasonable so that the laplace prior can model the nonzero coordinates . that these conflicting demands do not affect the good behavior of the lasso estimatorsmust be due to the special geometric , sparsity - inducing form of the posterior mode , not to the bayesian connection .[ lemlb ] assume that we are in the setting of example [ examplesequencemodel ] . for any such that , there exists such that , as , vector is the mean vector of the observation in ( [ model ] ) , and one might guess that this is estimable without identifiability conditions on the regression matrix . 
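as an aside , the lasso point estimate discussed above — the posterior mode under the i.i.d. laplace prior — is routinely computed with standard convex solvers , which makes the contrast with a full posterior concrete ; in the sketch below the data are invented , and scikit - learn's `alpha` corresponds , up to its 1/(2n) convention for the least - squares term , to a penalty of the usual order on the l1 norm .

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p = 100, 200                                   # invented sizes, p > n
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 3.0                               # a sparse truth
Y = X @ beta_true + rng.standard_normal(n)

lam = np.sqrt(2.0 * n * np.log(p))                # the usual order of the tuning parameter
# scikit-learn minimises ||Y - X b||^2 / (2 n) + alpha * ||b||_1,
# so alpha = lam / n puts a penalty of order lam on the l1 norm.
fit = Lasso(alpha=lam / n, fit_intercept=False).fit(X, Y)
print(np.flatnonzero(fit.coef_))                  # a single point estimate of the support
```

the output is a single sparse vector with no spread attached to it , and the result [ lemlb ] above makes precise that the full posterior distribution behind this mode is not a useful substitute for that missing uncertainty quantification .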
in this sectionwe show that the posterior distribution based on the prior ( [ defprior ] ) can indeed solve this _ prediction problem _ at ( nearly ) optimal rates under no condition on the design matrix .these results are inspired by and theorem [ thmm - orapred ] below can be seen as a full bayesian version of the results on the pac - bayesian point estimators in the latter paper ; see also for prediction results for mixtures of least - squares estimators .we are still interested in the sparse setting , and hence the regression matrix still intervenes by modeling the unknown mean vector as a linear combination of a small set of its columns .first , we consider the case of priors ( [ defprior ] ) that model the mean vector indirectly by modeling the set of columns and the coefficients of the linear combination .the prior comes in through the constant for the choice of prior on coordinates , the best results are obtained with heavy - tailed densities . in generalthe rate depends on the kullback leibler divergence between the measure with distribution function ( corresponding to the prior density ) and the same measure shifted by .let be the kullback leibler divergence , and set [ thmm - orapred ] for any prior and as in ( [ predcpi ] ) , any density that is symmetric about , any and , if the prior on the dimension satisfies ( [ assump.on_the_dim_prior ] ) with , then is bounded in , and the rate for squared error loss is determined by this rate might be dominated by the kullback leibler divergence for large signal .however , for heavy tailed priors the induced constraints on the signal to achieve the good rate are quite mild .consider the prior distribution ( [ defprior ] ) with a product of univariate densities of the form [ corpredht ] if satisfies ( [ eq.classofdimpriors ] ) with , and is of the form ( [ def.ht ] ) with and , then for sufficiently large , for . the constant in theorem [ thmm - orapred ] can be improved to , for an arbitrary , by a slight adaptation of the argument .using pac - bayesian techniques dalalyan and tsybakov obtain an oracle inequality with leading constant for a so - called pseudo - posterior mean : the likelihood in ( [ bayes ] ) is raised to some power , which amounts to replacing the factor by .the `` inverse temperature '' must be taken large enough ; the case corresponding to the bayes posterior as considered here is not included ; see also .theorem [ thmm - orapred ] and its corollary address the question of achieving prediction with no condition on , and the same rate is achieved as in section [ sec.recovery ] with the same type of priors , up to some slight loss incurred only for true vectors with very large entries .as shown in the corollary , this slight dependence on can be made milder with flatter priors .we now consider a different approach specifically targeted at the prediction problem and which enables to remove dependency on the size of the coordinates of completely .because the prediction problem is concerned only with the mean vector , and the columns of will typically be linearly dependent , it is natural to define the prior distribution directly on the corresponding subspaces . 
for any ,let be the subspace of generated by the columns of .let denote the collection of all _ distinct _ subspaces .define a ( improper ) prior on by first selecting an integer in according to a prior , next given selecting a subspace of dimension uniformly at random among subspaces in of dimension ; finally , let given be defined as lebesgue measure on if , and let be the dirac mass at for .note that the posterior distribution ] , which is the sum in the display . by ( [ defprior ] ), the left - hand side of the lemma is bounded below by by ( [ eq.lr_representation ] ) , the change of variables and the inequality .the finite measure defined by the identity is symmetric about zero , and hence the mean of relative to is zero .let denote the normalized probability measure corresponding to , that is , .let denote the expectation operator with respect to .define . by jensen s inequality .however , , by the just mentioned symmetry of .so the last display is bounded below by almost surely . using that , and then ( [ eq.int_explicit ] ), we find that the integral in the last display is bounded below by with ( [ eq.lambda_cond ] ) , is bounded from below by , if and by , if .since and decays to zero slower than any polynomial power of , we find in both cases , provided that is sufficiently large .the lemma follows upon substituting these bounds and the bound in the display .[ lem.com ] for any and random variable , write the left - hand side as ] under of ( [ eq.post_exp_bd_gen ] ) over is bounded above by by the triangle inequality , for , as is seen by splitting the norms on the right - hand side over and .if , then we write and use the definition of the compatibility number to find that we combine the last three displays to see that ( [ eqexpectedoverto ] ) is bounded above by for the set and , the integral in this expression is bounded above by by assumption ( [ assump.on_the_dim_prior ] ) . combining the preceding with ( [ eq.main_decomp_mod_sel_gen ] ) , we see that using that , we can infer the theorem by choosing for fixed .proof of theorem [ thmm.pred_and_l1 ] by theorem [ thmm.mod_sel_gen ] the posterior distribution is asymptotically supported on the event , for and the same expression with replaced by .thus it suffices to prove that the intersections of the events in the theorem with the event tends to zero . by combining ( [ eq.post_exp_bd_gen ] ) ,( [ eqestimateinnerproduct ] ) and the inequality , we see that on the event , the variable is bounded above by by definition [ assump.uniform_comp ] of the uniform compatibility number , since , on the event and by assumption , it follows from ( [ defpsis ] ) that for a set , \\[-8pt ] \nonumber & & { } \times\int_b e^{-({1}/8 ) \|x({\beta}-{\beta}^*)\|_2 ^ 2 -{\overline{\lambda}}\|{\beta}-{\beta}^*\|_1+{\lambda}\|{\beta}\|_1 } \,d\pi({\beta}).\end{aligned}\ ] ] since it suffices to show that the right - hand side tends to zero for the relevant event ._ proof of first assertion_. on the set , we have , by the triangle inequality .note that .it follows that for the set , the preceding display is bounded above by by ( [ assump.on_the_dim_prior ] ) and a calculation similar to the proof of theorem [ thmm.mod_sel_gen ] .for this tends to zero .thus we have proved that for some sufficiently large constant , _ proof of second assertion_. similar to ( [ eqinvokecompcond ] ) , the claim follows now from the first assertion ._ proof of third assertion_. 
note that .now , the proof follows from the first assertion .proof of theorem [ thmm.bvm_type ] the total variation distance between a probability measure and its renormalized restriction to a set is bounded above by .we apply this to both the posterior measure and the approximation , with the set where is a sufficiently large constant . by theorem [ theoremrecovery ]the probability tends to one under , and at the end of this proof we show that tends to one as well .hence it suffices to prove theorem [ thmm.bvm_type ] with and replaced by their renormalized restrictions to .the measure is by its definition a mixture over measures corresponding to models . by theorems [ thmm.mod_sel ] and [ theoremrecovery ] the measure is asymptotically concentrated on these models .if is the renormalized restriction of a probability vector to a set , then , for any probability measures , by the preceding paragraph .we infer that we can make a further reduction by restricting and renormalizing the mixing weights of to .more precisely , define probability measures by then it suffices to show that .( the factor in the second formula cancels in the normalization , but is inserted to connect to the remainder of the proof . ) for any sequences of measures and , we have if is absolutely continuous with respect to with density , for every .it follows that this tends to zero by the definition of and the assumptions on .finally we show that . for , the likelihood ratio given in ( [ eq.lr_representation ] ), we have by ( [ eq.lr_representation ] ) the denominator in , and for the second inequality we use jensen s inequality similarly as in the proof of lemma [ lem.expl_bd_for_denom ] . using hlder s inequality , we see that on the event , since for every , it follows that on the numerator in is bounded above by it follows that is bounded above by by jensen s inequality applied to the logarithm , and hence , by ( [ eq.lambda_cond ] ) .the prior mass can be bounded below by powers of by ( [ assump.on_the_dim_prior ] ) .this shows that the display tends to zero for sufficiently large .proof of theorem [ theoremselectionnosupersets ]let be the collection of all sets such that and . in view of theorem [ thmm.bvm_type ]it suffices to show that .note that due to , any set in has cardinality smaller . by ( [ eqdefweightsw ] ) , with , we shall show below that the factors on the right - hand side can be bounded as follows : for any fixed , combining these estimates with assumption ( [ assump.on_the_dim_prior ] ) shows that for , the event in the second relation , for we have .thus the expression tends to zero if . since can be chosen arbitrarily close to , this translates into .to prove bound ( [ eqseparateinequalities ] ) , we apply the interlacing theorem to the principal submatrix of to see that , for , where denote the eigenvalues in decreasing order , whence assertion ( [ eqseparateinequalities ] ) follows upon combining this with ( [ eq.lambda_cond ] ) . to bound the probability of the event in ( [ eqseparateinequalitiestwo ] ) , we note that by the projection property of the least squares estimator , for the difference is the square length of the projection of onto the orthocomplement of the range of within the range of , a subspace of dimension .because the mean of is inside the smaller of these ranges , it cancels under the projection , and we may use the projection of the standard normal vector instead . 
thus the square length possesses a chi - square distribution with degrees of freedom .there are models that give rise to such a chi - square distribution .since , we can apply lemma [ lem - chisq ] with to give that is bounded above by .this tends to zero as , due to , where the last inequality follows from .[ lem - chisq ] for every , there exists a constant independent of and such that for any variables that are marginally distributed , by markov s inequality , for any , the results follows upon choosing , giving and .proof of theorem [ theoremselection ] _ proof of first two assertions_. because , the posterior probability of the set tends to zero by theorem [ thmm.pred_and_l1 ] .this implies the first assertion .the second assertion follows similarly from the second assertion of theorem [ thmm.pred_and_l1 ] ._ proof of third assertion_. first we prove that the largest coefficient in absolute value , say , is selected by the posterior if this is above the threshold . by theorem [ thmm.bvm_type ]it is enough to show that .for any given set with , let and .then we shall bound this further by showing that , for every in the sum .the quotient of these weights is equal to in view of ( [ assump.on_the_dim_prior ] ) . by the interlacing theorem ,the eigenvalues in increasing order of the matrices and satisfy , for any .this implies that . since , for any ,the largest eigenvalue is at most . combining this with ( [ eq.lambda_cond ] ), we conclude that the preceding display is bounded below by by definition of the least squares estimator , the difference of the square norms in the exponent is the square length of the projection of onto the orthocomplement of the range of in the range of , the one - dimensional space spanned by the vector , where denotes the projection onto the range of .if , with an abuse of notation , is the projection onto , then \\[-8pt ] \nonumber & = & \frac{\langle x{\beta}^0,x_m - p_sx_m\rangle^2}{2\|x_m - p_sx_m\|_2 ^ 2 } -\frac{\langle{\varepsilon},x_m - p_sx_m\rangle^2}{\|x_m - p_sx_m\|_2 ^ 2}.\end{aligned}\ ] ] we shall show that the first term on the right is large if is large , and the second is small with large probability .we start by noting that for and any , \\[-8pt ] \nonumber & = & \frac{1}{\widetilde\phi(s)^2\|x\|^2}\sum_{i\in s}\bigl(x^tx \bigr)_{i , j}^2 \le\frac{s{\mathop{\mathrm { mc } } } ( x)^2\|x\|^2}{\widetilde\phi(s)^2}.\end{aligned}\ ] ] it follows from the definitions that , for every .combined , this shows that if .we write , for the matrix obtained by removing the column from , and split the first inner product in ( [ eqsizeprojection ] ) in the two parts using that if , the definition of to bound , the cauchy schwarz inequality on and ( [ eqprojnormestimate ] ) . putting the estimates togetherwe find that for , we can split the random inner product in ( [ eqsizeprojection ] ) in the two parts and . for , each variable is normally distributed with mean zero and variance , for any .when varies over and over all subsets of size that do not contain , there are possible variables in the first term and possible variables in the second .for the variances of the variables in the two terms are of the orders and , respectively .therefore the means of the two suprema are of the orders and , respectively , if . 
with probability variables do not exceed a multiple of their means .we conclude that for and , the left - hand side of ( [ eqsizeprojection ] ) is , with probability tending to one , bounded below by , whence for for large , uniformly in , for as large as desired ( depending on ) and a suitable positive constant .so , with overwhelming probability , thus at the order . next ,for the second largest coefficient , we consider . by reasoning similar to the preceding , we show that the index is included asymptotically , etc .we thank an associate editor and four referees for valuable comments .we are also grateful to amandine schreck for helpful discussions .
|
we study full bayesian procedures for high - dimensional linear regression under sparsity constraints . the prior is a mixture of point masses at zero and continuous distributions . under compatibility conditions on the design matrix , the posterior distribution is shown to contract at the optimal rate for recovery of the unknown sparse vector , and to give optimal prediction of the response vector . it is also shown to select the correct sparse model , or at least the coefficients that are significantly different from zero . the asymptotic shape of the posterior distribution is characterized and employed in the construction and study of credible sets for uncertainty quantification .
|
assume that we are given a population of elements , ( at least at first , the problem of infinite populations will be elaborated on later ) , along with a sequence of probabilities of each element of , denoted , such that , and a single number , the sample size , which might be greater or lower than .the task is to compute a random sample of size from the population , such that each element from the sample is one of the elements of , each with its corresponding probability .note that without loss of generality we can ( and will ) assume that the algorithm assumes a non - naive ( constant - time ) implementation of procedures for sampling single random numbers from the beta ( in the easy case , where and parameters are integer and ) , and binomial distributions , as well as lack of numerical errors .some consideration to mitigating the effects of numerical inaccuracies will be given in later sections .the algorithm is best presented ( as the author feels ) by starting from the naive algorithm , and iteratively refining it , until the desired time and memory complexity are reached .the naive algorithm ( which , despite its non - optimal costs , in practice is reasonably efficient , and is used , in its second variant , for example by the _ numpy _ numeric library for python ) is based on a cruicial idea , which will be used also in the novel version presented here .the idea is based on a geomertical intuition : if an interval ] into which it falls ( and the corresponding element of ) results in a choice of a single element of with the desired probability distribution .efficient finding of the selected subinterval is faciliated by precomputing an array of cumulative sums of probabilities , then performing a binary search on it .compute array new empty multiset ( times ) randomize find greatest s.t . < x ] , representing population with mostly equal probabilities , one drawn from a geometric sequence starting with and ending at , representing a population with skewed probabilities .the third type of population is generated by applying a gaussian pdf function to points evenly spaced between and 10 times the stdev of the gaussian function .this is meant to simulate the usual application of sampling function in modelling in population genetics ( which in fact was the inspiration for this research ) : in population genetics models , selection and reproduction of modelled organisms is often done precisely by randomly sampling with replacement of organisms ( that reproduce and pass their offspring to a next generation ) from a population of .the probability of a given organism being chosen to reproduce is proportional to its _ fitness function _ - which is often gaussian .each population type ( uniform , geometric and gaussian ) is rescaled so that it sums to , and randomly shuffled .the results of the tests are presented on figure [ fig - res ] .it is evident from the results of the tests that not only is the proposed algorithm asymptotically optimal , but , unlike algorithm 4 it is also efficient in practice , outperforming the competing methods in most scenarios , by as much as several orders of magnitude in some cases . in the single pessimistic case ,where the distribution of probabilities in the population is close to uniform and , although it runs slightly slower , it still remains competetive , moreover , the difference in runtime grows smaller as , and it overtakes the walker s method at ( data not shown ) . 
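for reference , the naive baseline used in these comparisons — cumulative sums followed by a binary search per draw , as described at the beginning of this section — fits in a few lines ; the toy population and probabilities below are illustrative .

```python
import numpy as np

def naive_sample(population, probs, n, rng=None):
    """Sample n elements with replacement, drawing element i with probability probs[i].

    This is the O(N + n log N) baseline: build the cumulative-sum array once,
    then locate each uniform variate by binary search.
    """
    if rng is None:
        rng = np.random.default_rng()
    cum = np.cumsum(probs)
    cum[-1] = 1.0                               # guard against rounding drift
    idx = np.searchsorted(cum, rng.random(n), side="right")
    return [population[i] for i in idx]

probs = np.array([0.5, 0.25, 0.125, 0.125])
print(naive_sample(["a", "b", "c", "d"], probs, n=10))
```

roughly speaking , the algorithm proposed here removes both the cumulative - sum array and the per - draw search by walking through the population once and drawing , for each element in turn , how many slots of the sample it receives ( using the binomial and beta variates mentioned earlier ) , which is what makes constant additional memory and online operation possible .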
the algorithm is able to adapt to the input data and use any skew from uniform distribution to its advantage , to increase its runtime , as evidenced by the tests on gaussian and especially geometric populations . unlike the popular algorithms it works in constant additional memory , and is capable of online operation .an implementation of this algorithm in a few programming languages may be downloaded from http://bioputer.mimuw.edu.pl/~mist/statsas the proposed algorithm is online , it may accept an infinite sequence of states as its population , and can still be expected to produce a sample in finite time , without exhausting the whole sequence . as such , one application is immediately obvoius : mass sampling of iid variates from any discrete distribution .all one needs ito do is to exhaustively walk through the configuration space of the distribution , preferably ( though not necessarily ) in order of decreasing probability mass function ( pmf ) , and feed the resulting sequence into the proposed algorithm .the result is a sample from the input distribution of any desired size .the advantage of the proposed solution is that the input distribution does not need to have an easily invertible cdf , only a computable pmf .the runtime is usually sublinear , wrt . to the sample size , but that depends on the exact properties of the distribution being sampled , distributions with light tails being faster to sample from than heavy - tailed ones . as an example : generating a sample of size from poisson distribution with using r programming language s _ rpois _ function takes about 90 seconds , while using the scheme proposed above elapses 0.7 seconds .such an algorithm itself consumes constant memory plus any memory needed for datastructures needed to walk through the configuration space ( trivially constant in case of distributions with integer support , at worst a linear `` visited '' hashtable plus a linear priority queue when the configuration space is complicated , and needs to be traversed in a dijkstra - like fashion ) .the algorithm works online , in the meaning that the generated part of the sample is immediately available for consumption , before computations proceed to generate the rest of it .this could be used to provide an alternative implementation of sampling functions in many programming languages , most of which accept an argument denoting sample size , but then proceed to generate even a large sample in naive , iterative fashion .one point worth noting , however , is that the algorithm , as presented , returns the sample sorted in the order in which the configuration space was traversed .if this is undesirable , a fisher - yates shuffle may be performed on the resulting stream , at the cost of loss of online property .i would like to thank prof .anna gambin , baej miasojedow phd , and mateusz cki msc for their helpful comments .this research was funded by grant no .2012/06/m / st6/00438 by polish national science centre , and grant polonium , , matematyczne i obliczeniowe modelowanie ewolucji ruchomych elementw genetycznych .
|
this paper presents a novel algorithm solving the classic problem of generating a random sample of size from a population of size with non - uniform probabilities . the sampling is done with replacement . the algorithm requires constant additional memory , and works in time ( even when , in which case the algorithm produces a list containing , for every population member , the number of times it has been selected for the sample ) . the algorithm works online , and as such is well - suited to processing streams . in addition , a novel method of mass - sampling from any discrete distribution using the algorithm is presented .
|
decoding techniques , especially when applied to low - density parity - check ( ldpc ) codes , have attracted a great attention recently . in these techniques , decoding is based on a tanner graph determined by a parity - check matrix of the code , which does not necessarily , and typically does not , have full rank .it is well known that the performance of iterative decoding algorithms in case of binary erasure channels depends on the sizes of the stopping sets associated with the tanner graph representing the code .several interesting results on stopping sets associated with tanner graphs of given girths are given in .there are more specific results for classes of codes represented by particular tanner graphs , see , e.g. , , as well as more general results pertaining to ensembles of ldpc codes , see e.g. , . in this paper, we define the notion of dead - end sets to explicitly show the dependency of the performance on the stopping sets .we then present several results that show how the choice of the parity - check matrix of the code , which determines decoding complexity , affects the stopping and the dead - end sets , which determine decoding performance .our study differs from the aforementioned studies , but agrees with the studies by schwartz and vardy , hollmann and tolhuizen , han and siegel , and weber and abdel - ghaffar , in its focus on the relationship between the stopping sets on one hand and the underlying code representation , rather than the code itself , on the other hand .since linear algebra is used to study this relationship , for our purpose , parity - check matrices are more convenient than the equivalent tanner graphs for code representation .let be a binary linear ] dual code of .the support of a binary word is the set and the weight of is the size of its support .for the zero word , the support is the empty set , , and the weight is zero . since a binary word is a codeword of if and only if , the parity - check matrix gives rise to parity - check equations , denoted by an equation is said to check in position if and only if . on the binary erasure channel , each bit of the transmitted codewordis erased with probability , while it is received correctly with probability , where . for a received word ,the erasure set is a received word can be decoded unambiguously if and only if it matches exactly one codeword of on all its non - erased positions .since is a linear code , this is equivalent to the condition that the erasure set does not contain the support of a non - zero codeword .if does contain the support of a non - zero codeword , then it is said to be _incorrigible_. a decoder for which achieves unambiguous decoding whenever the erased set is not incorrigibleis said to be optimal for the binary erasure channel .an exhaustive decoder searching the complete set of codewords is optimal .however , such a decoder usually has a prohibitively high complexity .iterative decoding procedures may form a good alternative , achieving close to optimal performance at much lower complexity , in particular for ldpc codes . here , we consider a well - known algorithm , often expressed in terms of a tanner graph , which exploits the parity - check equations in order to determine the transmitted codeword .initially , we set and .if checks in exactly one erased position , then we use ( [ pce ] ) to set and we remove from the erasure set . 
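this peeling step is easy to state in code ; the sketch below ( a toy parity - check matrix and codeword invented for illustration ) repeatedly looks for a check involving exactly one erased position and solves ( [ pce ] ) for it , returning the final erasure set , which is empty exactly when iterative decoding succeeds .

```python
import numpy as np

def iterative_erasure_decode(H, y, erased):
    """Iterative decoding on the binary erasure channel.

    H is a binary parity-check matrix, y the received word (values at the
    erased positions are irrelevant), erased the set of erased indices.
    Returns the updated word and the final erasure set (a stopping set).
    """
    y = y.copy()
    erased = set(erased)
    progress = True
    while progress and erased:
        progress = False
        for row in H:
            hit = [j for j in np.flatnonzero(row) if j in erased]
            if len(hit) == 1:                       # this check involves exactly one erasure
                j = hit[0]
                known = [k for k in np.flatnonzero(row) if k != j]
                y[j] = int(np.sum(y[known])) % 2    # solve the parity-check equation for it
                erased.discard(j)
                progress = True
    return y, erased

# invented toy example
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 0, 0]])
c = np.array([1, 0, 1, 1, 1])                       # H @ c is zero mod 2
r = c.copy()
r[[0, 3]] = 0                                       # erase positions 0 and 3
word, leftover = iterative_erasure_decode(H, r, erased={0, 3})
print(word, leftover)                               # word equals c, leftover is empty
```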
applying this procedure iteratively , the algorithm terminates if there is no parity - check equation left which checks exactly one erased symbol .erasure sets for which this is the case have been named _ stopping sets _ . in casethe final erasure set is empty , the iterative algorithm retrieves all erased symbols , and thus the final word is the transmitted codeword . in casethe final erasure set is a non - empty stopping set , the iterative decoding process is unsuccessful .the final erasure set is the union of the stopping sets contained in , and thus is empty if and only if contains no non - empty stopping set .therefore , we introduce the notion of a _dead - end set _ for an erasure set which contains at least one non - empty stopping set . in summary , on the binary erasure channel , an optimal decoder is unsuccessful if and only if is an incorrigible set , and an iterative decoder is unsuccessful if and only if is a dead - end set .this paper is organized as follows . in section [ definition ]we characterize codeword supports , incorrigible sets , stopping sets , and dead - end sets in terms of a parity - check matrix and derive basic results from this characterization .we also review results from and which are most relevant to this work .dead - end sets and stopping sets are studied in sections [ sec ds ] and [ sec ss ] , respectively .conclusions are presented in section [ conc ] .again , let be a linear binary ] block code , it holds that the stopping set enumerator satisfies where the first property follows from ( [ 012 ] ) and ( [ basica ] ) , the second property follows from the definition of , and the third property follows from the fact that the weight of any row in is either or at least equal to for any with .further , again for any parity - check matrix , it follows from the definitions of the various enumerators , ( [ basici ] ) , and ( [ basics ] ) , that the dead - end set enumerator satisfies for code on the binary erasure channel , the probability of unsuccessful decoding ( ud ) for an optimal ( opt ) decoder is similarly , the probability of unsuccessful decoding for an iterative ( it ) decoder based on parity - check matrix is hence , these two probabilities are completely determined by the incorrigible and dead - end set enumerators .notice from ( [ popt ] ) and ( [ pit ] ) that iterative decoding is optimal if and only if . at small erasure probabilities , and are dominated by the terms and , respectively . actually , for sufficiently small values of , the parameters and are the most important parameters characterizing the performance of optimal decoding and iterative decoding , respectively . in ( [ 012 ] ) it is stated that if , then .therefore , for any parity - check matrix of a code with , which is derived as theorem 3 in . here , we show that this can not be extended further .[ badh ] for any code with hamming distance , there exists a parity - check matrix for which .we may order the positions so that has a codeword composed of ones followed by zeros .in particular , the first columns in any given parity - check matrix of are linearly dependent , but no columns are such .the row space of the submatrix composed of these first columns has dimension and a sequence of length belongs to this row space if and only if its weight is even . 
by elementary row operations , we can obtain a parity - check matrix of the form for some matrices and of appropriate sizes , where is the matrix given by clearly , is a stopping set for as no row of has weight one .contrary to the weight enumerator and the incorrigible set enumerator , which are fixed for a code , the stopping and dead - end set enumerators depend on the choice of the parity - check matrix .theorem [ badh ] shows that no matter how large the hamming distance of the code is , a bad choice of the parity - check matrix may lead to very poor performance .therefore , it is important to properly select the parity - check matrix of a code when applying iterative decoding .clearly , adding rows to a parity - check matrix does not increase any coefficient of the stopping set enumerator or the dead - end set enumerator . on the contrary, these coefficients may actually decrease at the expense of higher decoding complexity .the rows to be added should be in the dual code of . by having all codewords in as rows ,we obtain a parity - check matrix that gives the best possible performance , but also the highest complexity , when applying iterative decoding .since the order of the rows does not affect the decoding result , we refer to such matrix , with some ordering imposed on its rows which is irrelevant to our work , as the complete parity - check matrix of the code , and denote it by .its stopping set enumerator is denoted by , its dead - end set enumerator by , and its stopping distance by .since the support of any codeword is a stopping set for any parity - check matrix , we have consequently , , and and are called the code s optimal stopping set enumerator and optimal dead - end set enumerator , respectively . schwartz andvardy have shown that and the results derived recently by hollmann and tolhuizen imply , in addition , that and actually , schwartz and vardy have shown that , for , it is possible to construct a parity - check matrix with at most rows for which .they also obtain interesting results on the minimum number of rows in a parity - check matrix for which .they obtain general bounds on this minimum number , which they call the stopping redundancy , as well as bounds for specific codes such as the golay code and reed - muller codes .han and siegel derived another general upper bound on the stopping redundancy for given by .hollmann and tolhuizen specified rows that can be formed from any parity - check matrix of rank to yield a parity - check matrix for which for , where is any given integer such that .they have shown that the number of rows in the smallest parity - check matrix achieving this is at most [ ex1 ] let be the ] binary linear code with .then , there exists a parity - check matrix with at most rows for which .hollmann and tolhuizen also show that for some codes , and in particular for hamming codes , for any parity - check matrix with less than rows .however , depending on the code , it may be possible to reduce the number of rows in a parity - check matrix for which below as we show next .[ di1 ] let be the matrix whose rows are the non - zero codewords in of weight at most .then , is a parity - check matrix for and for this matrix .let be an parity - check matrix for the code .then , there is a subset of of size such that is an matrix of rank .the row space of this matrix contains every unit weight vector of length .therefore , the row space of contains vectors such that each vector has exactly a single one in a unique position indexed by an element in 
.since these vectors have weight at most and are linearly independent , it follows that , which contains all of them as rows , has rank and is indeed a parity - check matrix for .next , we prove that for this matrix , i.e. , for . from ( [ 012 ] ) , ( [ basici ] ) , and ( [ basicd ] ) , it suffices to show that for .for such an , assume that is a subset of of size which does not contain the support of a non - zero codeword .then , the columns of the parity - check matrix indexed by the elements in are linearly independent .as has rank , there is a set such that and is an matrix of rank . from the argument given in the first part of this proof, contains vectors such that each vector has exactly a single one in a unique position indexed by an element in , and in particular each vector has weight at most .the existence of any one of the vectors with a single one in a position indexed by an element in proves that is not a stopping set for .we conclude that every stopping set of size for contains the support of a non - zero codeword .hence , for all .let denote the well - known binary entropy function for .[ dopt ] let be an ] reed - muller code .actually , it can be checked that this is the smallest parity - check matrix for this code satisfying .[ ex2 ] let be the ] repetition code consisting of the all - zero and all - one vectors of length . from theorem [ shs ], it follows that for . hence , , , and for .\(ii ) is the ] zero - code consisting of one codeword only , which is the all - zero vector of length .since all vectors of length , including those of weight one , belong to the complete parity - check matrix of the code , it follows that for , and .next , we introduce a useful notation . for ,let be a binary linear ] binary linear block code . finally , recall that two codes are equivalent if there is a fixed permutation of indices that maps one to the other , and that a code is said to have no zero - coordinates if and only if there is no index such that for every codeword .[ lem ccct ] the code is minimum stopping if and only if is minimum stopping for all .first , notice that the code has a block diagonal parity check matrix defined by where is a parity check matrix for .the complete parity - check matrix of has all elements of the row space of the matrix from ( [ hoh ] ) as its rows .next , let be a subset of . for ,define then , it follows from ( [ cct ] ) that is the support of a codeword in if and only if is the support of a codeword in for all .further , is a stopping set for if and only if is a stopping set for for all .this follows from the fact that a sequence is a row in if and only if it can be written as where is a row in for all .hence , a sequence is a row in if and only if it can be written as where is a row in for all .if for some , is not a stopping set for , then has a row of weight one .juxtaposing this row with the all - zero rows in , for all , gives a row in of weight one .this implies that is not a stopping set for . on the other hand , if for all , is a stopping set for , then for all , has no row of weight one . 
juxtaposing rows of weights other than one yields a row of weight other than onehence , is a stopping set for .we conclude that is a stopping set for which is the support of a codeword in if and only if , for all , is a stopping set for which is the support of a codeword in .hence , is minimum stopping if and only if , for all , is minimum stopping .[ lem_4 ] let be a minimum stopping binary linear ] block code with and no zero - coordinates .up to equivalence , we may assume that has a codeword composed of ones followed by zeros .in particular , the first columns in any given parity check matrix of are linearly dependent , but no columns are such .the row space of the submatrix composed of these first columns has dimension and a sequence of length belongs to this row space if and only if its weight is even .therefore , in case , has a full - rank parity check matrix of the form which shows that is the repetition code of length .further , in case , has a full - rank parity check matrix of the form where and are and matrices , respectively .notice that has at least one row since by the singleton bound and if equality holds with , then is the even weight code of length which has as a stopping set for which is not the support of a codeword .this contradicts the assumption that is minimum stopping . clearly , is a matrix of rank since in ( [ eq_lem31 ] ) is a full - rank matrix . if , then has rank and has zero - coordinates .therefore , . to complete the proof , it suffices to show that the row space of is a subspace of the row space of since , in this case , by elementary row operations , has a parity - check matrix and thus where is the code with parity check matrix .this code has length , dimension , and hamming distance with no zero - coordinates . now ,suppose , to get a contradiction , that the above is not true , i.e. , the row space of is not a subspace of the row space of .then , the null space of is not a subspace of the null space of .let be a vector of length which belongs to the null space of but not to the null space of .up to equivalence , we may assume that is composed of ones followed by zeros , where , , is the weight of .we claim that is a stopping set for which is not the support of a codeword in . from ( [ eq_lem31 ] ), we have notice that any nontrivial linear combination of the rows of yields a non - zero vector of even weight .furthermore , since , which starts with ones followed by zeros , is in the null space of , it follows that any linear combination of the rows of yields an even weight vector .we conclude that no linear combination of the rows of yields a vector of weight one .hence , is a stopping set for .next , notice that if is the support of a codeword in , then the columns in in ( [ eq_lem32 ] ) should add up to zero . from ( [ eq_pcmr ] ) , we know that the first columns add up to zero. therefore , the columns of should add up to the zero .however , this can not be the case as , which starts with ones followed by zeros , is not in the null space of .in conclusion , we have shown that is a stopping set for which is not the support of a codeword in .this contradicts the fact that is minimum stopping .[ lem_5 ] if is a minimum stopping binary linear ] block code with and no zero - coordinates . 
from lemma [ lem ccct ] , we know that is a minimum stopping code .since has length , it follows from the induction hypothesis that is equivalent to , for some integers and such that .then , has the same form as given in the lemma .[ thmsstara ] a binary linear $ ] block code is minimum stopping , i.e. , satisfies , if and only if it is equivalent to for some nonnegative integers and , where and . in the theorem, we allow in which case is equivalent to .we also allow and/or , in which case the corresponding code with length zero disappears from .the `` if''-part of the theorem follows from lemma [ lem ccct ] and the observations that the property holds for any repetition code , the full - code , and the zero - code .next , we proof the `` only if''-part of the theorem . up to equivalence , we may assume that , where is the number of zero - coordinates of , is the number of codewords of weight one in , i.e. , the number of all - zero columns in any parity check matrix of , and is a binary linear code of length with and no zero - coordinates .here we assume that if , , or equal zero , then the corresponding code disappears from .if does not disappear , then it can be written as stated in lemma [ lem_5 ] .in this paper , we examined how the performance of iterative decoding when applied to a binary linear block code over an erasure channel depends on the parity - check matrix representing the code .this code representation determines the complexity of the decoder .we have shown that there is a trade - off between performance and complexity .in particular , we have shown that , regardless of the choice of the parity - check matrix , the stopping set enumerator differs from the weight enumerator except for a degenerate class of codes . in spite of that , it is always possible to choose parity - check matrices for which the dead - end set enumerator equals the incorrigible set enumerator .iterative decoding based on such matrices is optimal , in the sense that it gives the same probability of unsuccessful decoding on the binary erasure channel as an exhaustive decoder .we presented bounds on the number of rows in parity - check matrices with optimal dead - end set enumerators , thus bounding the complexity of iterative decoding achieving optimal performance .d. burshtein and g. miller , `` asymptotic enumeration methods for analyzing ldpc codes , '' _ ieee trans .inform . theory _11151131 , june 2004 .c. di , d. proietti , i.e. telatar , t.j .richardson , and r.l .urbanke , `` finite - length analysis of low - density parity - check codes on the binary erasure channel , '' _ ieee trans .inform . theory _48 , no . 6 , pp .1570 - 1579 , june 2002 .h. d. l. hollmann and l. m. g. m. tolhuizen .( 2005 , july ) . on parity check collections for iterative erasure decoding that correct all correctable erasure patterns of a given size .arxiv : cs.it/0507068 .[ online ] .r. ikegaya , k. kasai , t. shibuya , and k. sakaniwa , `` asymptotic weight and stopping set distributions for detailedly represented irregular ldpc code ensembles , '' proceedings of the ieee international symposium on information theory , chicago , usa , p. 208, june 27 - july 2 , 2004 .n. kashyap and a. vardy , `` stopping sets in codes from designs , '' proceedings of the ieee international symposium on information theory , yokohama , japan , p. 122, june 29 - july 4 , 2003 .c. kelley , d. sridhara , j. xu , and j. 
rosenthal , `` pseudocodeword weights and stopping sets , '' proceedings of the ieee international symposium on information theory , chicago , usa , p. 67, june 27 - july 2 , 2004 .a. orlitsky , r. urbanke , k. viswanathan , and j. zhang , `` stopping sets and the girth of tanner graphs , '' proceedings of the ieee international symposium on information theory , lausanne , switzerland , p. 2, june 30 - july 5 , 2002 .j. h. weber and k. a. s. abdel - ghaffar , `` stopping set analysis for hamming codes , '' proceedings of the information theory workshop on coding and complexity , rotorua , new zealand , pp .244247 , august 28september 1 , 2005 .
|
the performance of iterative decoding techniques for linear block codes correcting erasures depends very much on the sizes of the stopping sets associated with the underlying tanner graph , or , equivalently , the parity - check matrix representing the code . in this paper , we introduce the notion of dead - end sets to explicitly demonstrate this dependency . the choice of the parity - check matrix entails a trade - off between performance and complexity . we give bounds on the complexity of iterative decoders achieving optimal performance in terms of the sizes of the underlying parity - check matrices . further , we fully characterize codes for which the optimal stopping set enumerator equals the weight enumerator .

dead - end set , iterative decoding , linear code , parity - check matrix , stopping set .
|
most real systems are driven by nonlinear dynamics in which a decay term prevents the system s variables from increasing without bounds .the state of the system s nodes at time , characterized by source node variables ( for nodes with no incoming edges ) and internal node variables , obeys the equations where is the number of source nodes .the dynamics of each source node is determined by an environmental signal , while the dynamics of each internal node is governed by , which captures the nonlinear response of node to its predecessor nodes , and which includes decay in the dependence of on ( ) . functions of the form , which satisfy these conditions , are used to describe the dynamics of birth - death processes , epidemic processes , biochemical dynamics , and gene regulation . in many systemsthere is adequate knowledge of the underlying wiring diagram but not of the specific parameter values required to fully specify and .analyzing such systems requires the use of structure - based control methods such as structural controllability and feedback vertex set control .[ fig : controlexample ] in structural controllability ( sc ) the objective is to drive the system from any initial state to any final state in finite time ( i.e. full control , fig .[ fig : controlfigure]a ) by manipulating the state of the system using a certain number of external driver node signals .the dynamics of the system are considered to be well approximated by linear dynamics ( e.g. , by linearizing eq .[ eq:1 ] around a state of interest , see ) where is a vector composed of all the s and s , is a matrix that encodes the wiring diagram of the network and is such that is nonzero only if node is a successor of node ( i.e. , there is a directed edge from to ) , and is a matrix that describes which nodes and are driven by the external signals .this system is such that if it can be controlled in the specified way by a given pair , this will also be true for almost all pairs ( except for a set of measure zero ) . in other words , sc is necessary and sufficient for full control of a system governed by eq .[ eq:3 ] for almost all s consistent with the network wiring diagram .sc is a mathematical formalization of the idea that a node can fully manipulate only one of its successor elements at a time and that a directed cycle is inherently self - regulatory .a consequence of this is that the driver nodes are such that every network node is either part of a set of non - intersecting linear chains of nodes that begin at the driver nodes or is part of a set of directed cycle that do not intersect each other or the set of linear chains ( fig .[ fig : controlfigure ] ) . as ruths &ruths showed , this implies that there are three types of network nodes that must be directly manipulated by a unique driver node , and which we call sc nodes : ( i ) every source node , and every successor node of a dilation ( when a node has more than one successor node ) that is not part of the set of linear chains or of the cycles , namely ( ii ) the surplus of sink nodes with respect to source nodes or ( iii ) internal dilation nodes .an alternative structure - based control method for networks that lack source nodes was developed by fiedler , mochizuki et al . . 
this method is a mathematical formalization of the following idea : in order to drive the state of a source - less network to any one of its dynamical attractors one needs to manipulate a set of nodes that intersects every feedback loop in the network - the feedback vertex set ( fvs ) .this requirement encodes the importance of feedback loops in determining the dynamical attractors of the network , a fact that was recognized early on in the study of the dynamics of biological networks .fiedler , mochizuki et al .mathematically proved that for a network governed by the nonlinear dynamics in eq .[ eq:1 ] , locking the feedback vertex set of the network in the trajectory specified by a given dynamical attractor ensures that the network will asymptotically approach the desired dynamical attractor , regardless of the specific form of the functions .controlling the fvs is both necessary and sufficient to drive the system to the desired attractor for every form of ( and ) .[ fig : fvsrandom ] here we extend this structural theory to networks in which source nodes are governed by eq .[ eq:2 ] ( fig .[ fig : controlfigure]b and ) .since the source nodes are unaffected by other nodes , one additionally needs to lock the source nodes of the network in the trajectory specified by the attractor . in summary , control of the source nodes and of the feedback vertex set of a networkguarantees that we can guide it from any initial state to any of its dynamical attractors ( i.e. , its natural long term dynamic behavior ) regardless of the specific form of the functions and . in the following we refer to this attractor - based control method as feedback vertex set control ( fc ) ( fig .[ fig : controlfigure]b ) , and to the group of nodes that need be manipulated in feedback vertex set control as a fc node set .to illustrate structural controllability and feedback vertex set control , consider the example networks in fig .[ fig : controlexample ] . in a linear chain of nodes ( fig .[ fig : controlexample]c , left ) the only node that needs to be controlled in both frameworks is the source node . for fig .[ fig : controlexample]d , which consists of a source node connected to a cycle , sc requires controlling only the source node since the cycle is considered self - regulating ( fig [ fig : controlexample]d , middle ) , while fc additionally requires controlling any node in the cycle , the feedback vertex set in this network ( fig .[ fig : controlexample]d , right ) .fig [ fig : controlexample]e consists of a source node with three successor nodes ; sc requires controlling two of the three successor nodes because of the dilation at the source node , while for fc controlling is sufficient . in fig[ fig : controlexample]f we show a more complicated network with a cycle and several source and sink nodes , and two minimal node sets for sc and fc .these examples illustrate that the control of the source nodes is shared by full control in sc and attractor control in fc , and that their main difference is in the treatment of cycles , which require to be controlled in fc and do not require independent control in sc .sc was applied to diverse types of real networks and the ratio of the minimal number of sc nodes needed , , and the total number of nodes , was used to gauge how difficult it is to control these networks . 
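before comparing the two measures on real networks , the following toy computation ( ours ) sketches how each node set can be obtained in practice : the sc driver nodes come from a hopcroft - karp maximum matching ( via the networkx package , which is also used for the matching step in the supplementary methods ) , while the feedback vertex set here is produced by a naive cycle - breaking heuristic that only upper - bounds the minimal fvs ( the results reported in this work use a grasp procedure instead ) . the example graph and node names are purely illustrative .

import networkx as nx
from networkx.algorithms import bipartite

def sc_driver_nodes(G):
    # bipartite graph: out-copy "u+" of each node linked to the in-copy "v-" of
    # each of its successors; nodes whose in-copy is unmatched are the drivers.
    B = nx.Graph()
    B.add_nodes_from((f"{u}+" for u in G), bipartite=0)
    B.add_nodes_from((f"{v}-" for v in G), bipartite=1)
    B.add_edges_from((f"{u}+", f"{v}-") for u, v in G.edges())
    matching = bipartite.maximum_matching(B, top_nodes={f"{u}+" for u in G})
    drivers = [v for v in G if f"{v}-" not in matching]
    return drivers if drivers else [next(iter(G))]  # at least one driver is always needed

def fc_nodes(G):
    # fc node set = source nodes plus a feedback vertex set; the greedy cycle
    # breaking below only upper-bounds the minimal fvs.
    sources = [v for v in G if G.in_degree(v) == 0]
    fvs, H = [], G.copy()
    while True:
        try:
            cycle = nx.find_cycle(H)
        except nx.NetworkXNoCycle:
            break
        node = max({u for u, _ in cycle}, key=H.degree)
        fvs.append(node)
        H.remove_node(node)
    return sources + fvs

G = nx.DiGraph([("s", "a"), ("a", "b"), ("b", "c"), ("c", "a"), ("b", "d")])
print("sc drivers :", sc_driver_nodes(G))  # the source s plus one successor of the dilation at b
print("fc nodes   :", fc_nodes(G))         # the source s plus one node of the cycle a-b-c

on this toy graph both node sets happen to contain two nodes ( the source plus one more ) ; the comparison below shows that on real networks the two counts can differ widely .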
both sc and fc can be used to answer the question of how difficult a network is to control ( albeit each focuses on a different aspect of control , full control or attractor control ) , so a natural question is how the fraction of control nodes in real networks compares between sc and fc ( , where is the size of the minimal fc control set ) . to answer this question , we apply sc and fc to the real networks in , and compare the fraction of control nodes and ( fig . [ fig : fvsvssc]a and [ tab : stable2 ] ) . a surprising result is that the fractions of control nodes and appear to be inversely related across several types of networks . for example , gene regulatory networks require between 75% and 96% of nodes in sc yet only between 1% and 18% of nodes in fc . a similar relationship is also seen in food web networks and internet networks , while the opposite relationship ( ) is seen in the social trust networks with low and in intra - organizational networks . on further reflection , fc s prediction that gene regulatory networks are easier to control than social trust / communication networks is supported by recent experimental results in cellular reprogramming and large - scale social network experiments . to explain the topological properties underlying the difference between and , we note that the fractions of nodes and obey the relations where is the fraction of source nodes , is the fraction of external dilation nodes in sc , is the fraction of internal dilation nodes in sc , and is the fraction of nodes in the fvs of the network . empirical directed networks tend to have a bow - tie structure , in which most of the network belongs to the largest strongly connected component ( which contains most cycles in the network , and thus determines ) , its in - component ( the nodes that can reach the strongly connected component , which thus determine ) , or its out - component ( the nodes that can be reached from the strongly connected component , which thus determine ) . we define the fractions , where . these fractions reflect the potential domination of one network component over the others . [ eq:4]-[eq:5 ] and the bow - tie structure of real networks offer a topological explanation for the observed relationships between and . applying this reasoning to the studied real networks ( [ tab : stable2 ] ) , we find that all networks with have a topology dominated by their scc ( , fig . [ fig : fvsvssc ] , brown shading ; e.g. intra - organizational networks , the college students and prison inmates trust networks , and the _ c. elegans _ neural network ) . most networks with are dominated by their out - component ( , fig . [ fig : fvsvssc ] , yellow shading ; e.g. gene regulatory networks , most food webs , and internet networks ) or by internal dilations ( , fig . [ fig : fvsvssc ] , pink shading ; e.g. metabolic networks and circuits ) . the rest of the networks have a mixed profile ( , fig . [ fig : fvsvssc ] , no shading ) , and include networks with ( citation networks and the texas power grid ) and networks in which ( a political blog network and two online social communication networks ) . motivated by the observed remarkable agreement between the number of sc nodes of real networks and of their degree - preserving randomized versions , we study fc control in similarly randomized real networks ( [ tab : stable2 ] and ) . we find much weaker agreement : for most networks the number of fc control nodes is higher than the number of control nodes in the randomized versions ( ) ( fig .
[fig : fvsrandom]c ) .a closer look reveals that the cycle structure of the real networks is responsible for the underestimation of .although the size of the largest scc is similar or smaller compared to their degree - preserving randomized counterparts , real networks tend to have a more complicated cycle structure , evidenced by the over - representation of cycles compared to the randomized networks ( fig .[ fig : fvsrandom]d ) , and reflected by the larger size of their fvs ( [ tab : stable2 ] ) .a subset of networks , which includes food webs and citation networks , features fewer cycles than randomized networks and a smaller scc than randomized networks , leading to ( [ tab : stable2 ] ) . similarly to the case of sc , full randomization , which turns the network into an erds - rnyi directed network , shows little correlation between in randomized networks and real networks ( [ fig : fvser ] , [ tab : stable2 ] , and ) .validated dynamic models can be an excellent testing ground to assess control methods . herewe use two models for the gene regulatory network underlying the segmentation of the fruit fly ( _ drosophila melanogaster _ ) during embryonic development : a differential equation model by von dassow et al . ( fig .[ fig : dynmod]a ) and a discrete ( boolean ) model by albert and othmer ( fig .[ fig : dynmod]b ) .both models consider a group of four subsequent cells as a repeating unit , include intracellular and intercellular interactions among proteins and mrnas , and both recapitulate the observed ( wild type ) pattern of gene expression ( fig .[ fig : dynmod]a - c and ) .using sc and fc on these network models , we find ( 4 ) and ( 14 ) for the differential equation ( discrete ) model ( fig .[ fig : dynmod]a - c , [ fig : sfig2 ] , and ) . both model networks are dominated by ( 0.66 and 0.71 , respectively ) , similarly to the brown - shaded networks in fig .[ fig : fvsvssc ] . for sc , the appropriate driver signal needs to be determined for each initial condition using , for example , minimum - energy control or optimal control . for fc , locking the fc nodes in their trajectory in the wild type attractor successfully steers the system to the wild type attractor ( fig .[ fig : dynmod]d - e , [ fig : sfig3]-[fig : sfig5 ] , and ) .thus , fc gives a control intervention which is directly applicable to dynamic models and which is directly linked to their natural long - term behavior .we emphasize that a control intervention for a real biological system would involve combining fc or sc with a closed - loop control approach because of the inherently approximate nature of any model .a subset of the fc node set is often sufficient for a given model and an attractor of interest . for the fruit fly gene regulatory models we show that 16 ( 12 ) nodes are sufficient for the continuous ( discrete ) model , respectively , which is a 66% ( 14% ) reduction ( fig .[ fig : dynmod]a - c , [ fig : sfig3]-[fig : sfig5 ] , and ) .this shows that fc provides a benchmark for attractor control node sets that are model independent , as well as an upper limit to model dependent control sets .thus fc can be used as a gauge for the large body of recent control methods that require a dynamic model to be used . to our knowledge, sc provides no analogue to this .network control methods have the general objective of identifying network elements that can manipulate a system given a specified goal and a set of constraints . yet , as we demonstrate using sc and fc , the definition of control ( e.g. 
full control or attractor control ) and the dynamics of the system ( linear or non - linear ) can have a significant impact on what these network elements are and how many of them are needed . sc and fc answer complementary aspects of control in a complex network ; which one to use depends on the specific question being asked and on what the natural definition of control is in the system or discipline of interest . we argue that attractor - based control ( and , thus , fc ) is often the natural choice of control for systems in which the use of dynamic models is well established , particularly in biological networks , in which dynamic models have a long history and an ever - increasing predictive power . as we showed in this work ,fc is directly applicable to systems in which only structural information is known , and also to systems in which a parameterized dynamic model is available , for which it provides realizable control strategies that are robust to changes in the parameters and functions .fc also provides a benchmark and a point of contact with the large body of work in control methods that require the network structure and a dynamic model . to our knowledge , something similar is not the case for sc , which instead has the advantage of being a well - developed concept in control and systems theory with connections to other notions of control in linear and nonlinear systems .further work is needed to extend fc and address questions such as the level of control provided by a subset of nodes and the difficulty of steering the system towards a desired state ( control energy ) , concepts which are well - developed in control theory approaches . taken together, our work opens up a new research direction in the control of complex networks with nonlinear dynamics , connects the field of dynamic modeling with classical structural control theory , and has promising theoretical and practical applications .we would like to thank a. mochizuki and m.t .angulo for helpful discussions , and y.y .liu for his generous assistance and for providing us some of the networks in this study .we would also like to thank the mathematical biosciences institute for the workshop `` control and observability of network dynamics '' , which greatly enriched this paper .this work was supported by nsf grants phy 1205840 and iis 1160995 .jgtz is a recipient of a stand up to cancer - the v foundation convergence scholar award .part of this research was conducted with computational resources provided by the institute for cyberscience at the pennsylvania state university .10 liu , y. y. , slotine , j. j. & barabsi , a. l. ( 2011 ) .controllability of complex networks. nature , 473(7346 ) , 167 - 173 .nepusz , t. & vicsek , t. ( 2012 ) .controlling edge dynamics in complex networks .nature physics , 8(7 ) , 568 - 573 .mochizuki , a. , fiedler , b. , kurosawa , g. & saito , d. ( 2013 ) .dynamics and control at feedback vertex sets .ii : a faithful monitor to determine the diversity of molecular activities in regulatory networks. journal of theoretical biology , 335 , 130 - 146 .sun , j. & motter , a. e. ( 2013 ) . controllability transition and nonlocality in network control .physical review letters , 110(20 ) , 208701 .cornelius , s. p. , kath , w. l. & motter , a. e. ( 2013 ) .realistic control of network dynamics .nature communications , 4 , 1942 .ruths , j. & ruths , d. ( 2014 ) .control profiles of complex networks .science , 343(6177 ) , 1373 - 1376 .zaudo , j. g. t. & albert , r. 
( 2015 ) .cell fate reprogramming by control of intracellular network dynamics .plos comput biol , 11(4 ) , e1004193 .slotine , j. j. e. & li , w. ( 1991 ) . applied nonlinear control ( vol .englewood cliffs , nj : prentice - hall .kirk , d. e. ( 2012 ) .optimal control theory : an introduction .dover publications .lin , c. t. ( 1974 ) .structural controllability .automatic control , ieee transactions on , 19(3 ) , 201 - 208 .shields , r. w. & pearson , j. b. ( 1975 ) .structural controllability of multi - input linear systems .automatic control , ieee transactions on , 21 , 203 - 212 .vinayagam , a. , gibson , t. e. , lee , h. j. , yilmazel , b. , roesel , c. , hu , y. , ... and barabsi , a. l. ( 2015 ) . controllability analysis of the directed human protein interaction network identifies disease genes and drug targets .proceedings of the national academy of sciences , 113 ( 18 ) 4976 - 4981 .kawakami , e. , singh , v. k. , matsubara , k. , ishii , t. , matsuoka , y. , hase , t. , ... and subramanian , i. ( 2016 ) .network analyses based on comprehensive molecular interaction maps reveal robust control structures in yeast stress response pathways .npj systems biology and applications , 2 , 15018 .gu , s. , pasqualetti , f. , cieslak , m. , telesford , q. k. , alfred , b. y. , kahn , a. e. , ... and bassett , d. s. ( 2015 ) .controllability of structural brain networks .nature communications , 6 , 8414 .liu , y. y. , slotine , j. j. & barabsi , a. l. ( 2013 ) .observability of complex systems .proceedings of the national academy of sciences , 110(7 ) , 2460 - 2465 .menichetti , g. , dallasta , l. & bianconi , g. ( 2014 ) .network controllability is determined by the density of low in - degree and out - degree nodes .physical review letters , 113(7 ) , 078701 .nacher , j. c. & akutsu , t. ( 2013 ) .structural controllability of unidirectional bipartite networks .scientific reports , 3 , 1647 .liu , y. y. & barabsi , a. l. ( 2015 ) .control principles of complex networks .arxiv preprint arxiv:1508.05384 .mller f.j . &schuppert a. ( 2011 ) few inputs can reprogram biological networks .nature 478 , e4 .doi : 10.1038/nature10543 .wells , d. k. , kath , w. l. & motter , a. e. ( 2015 ) . control of stochastic and induced switching in biophysical networks .physical review x , 5(3 ) , 031036 .murrugarra , d. & dimitrova , e. s. ( 2015 ) . molecular network control through boolean canalization .eurasip journal on bioinformatics and systems biology , 2015(1 ) , 1 - 8 .fiedler , b. , mochizuki , a. , kurosawa , g. & saito , d. ( 2013 ) .dynamics and control at feedback vertex sets .: informative and determining nodes in regulatory networks . journal of dynamics and differential equations , 25(3 ) , 563 - 604 .allen , l. j. ( 2010 ) .an introduction to stochastic processes with applications to biology .crc press .novozhilov , a. s. , karev , g. p. , & koonin , e. v. ( 2006 ) .biological applications of the theory of birth - and - death processes .briefings in bioinformatics , 7(1 ) , 70 - 85 .daley , d. j. , gani , j. & gani , j. m. ( 2001 ) .epidemic modelling : an introduction ( vol .15 ) . cambridge university press .tyson , j. j. , chen , k. c. , & novak , b. ( 2003 ) .sniffers , buzzers , toggles and blinkers : dynamics of regulatory and signaling pathways in the cell .current opinion in cell biology , 15(2 ) , 221 - 231 .alon , u. ( 2006 ) .an introduction to systems biology : design principles of biological circuits .crc press .thomas , r. ( 1978 ) . 
logical analysis of systems comprising feedback loops . journal of theoretical biology , 73(4 ) , 631 - 656 .glass , l. , & kauffman , s. a. ( 1973 ) .the logical analysis of continuous , non - linear biochemical control networks .journal of theoretical biology , 39(1 ) , 103 - 129 .kramer , a. d. , guillory , j. e. , & hancock , j. t. ( 2014 ) . experimental evidence of massive - scale emotional contagion through social networks .proceedings of the national academy of sciences , 111(24 ) , 8788 - 8790 .newman , m. ( 2010 ) .networks : an introduction .oup oxford .maslov , s. , & sneppen , k. ( 2002 ) .specificity and stability in topology of protein networks .science , 296(5569 ) , 910 - 913 .gates , a. j. , & rocha , l. m. ( 2015 ) . control of complex networks requires both structure and dynamics . scientific reports , 6 ( 24456 ) .von dassow , g. , meir , e. , munro , e. m. , & odell , g. m. ( 2000 ) .the segment polarity network is a robust developmental module .406(6792 ) , 188 - 192 .albert , r. , & othmer , h. g. ( 2003 ) .the topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in drosophila melanogaster .journal of theoretical biology , 223(1 ) , 1 - 18 .phillips , r. , kondev , j. , theriot , j. , & garcia , h. ( 2012 ) .physical biology of the cell .garland science .festa , p. , pardalos , p. m. , & resende , m. g. ( 1999 ) .feedback set problems . in handbook of combinatorial optimization ( pp .209 - 258 ) .springer us . even , g. , naor , j. s. , schieber , b. , & sudan , m. ( 1998 ) .approximating minimum feedback sets and multicuts in directed graphs .algorithmica , 20(2 ) , 151 - 174 .karp , r. m. ( 1972 ) .reducibility among combinatorial problems . re miller ,jw thatcher ( eds . ) , complexity of computer computations , plenum press , new york , 85 - 103 .resende , m. g. ( 2009 ) .greedy randomized adaptive search procedures greedy randomized adaptive search procedures .encyclopedia of optimization , 1460 - 1469 .pardalos , p. m. , qian , t. , & resende , m. g. ( 1998 ) . a greedy randomized adaptive search procedure for the feedback vertex set problem .journal of combinatorial optimization , 2(4 ) , 399 - 412 .festa , p. , pardalos , p. m. , & resende , m. g. ( 2001 ) .algorithm 815 : fortran subroutines for computing approximate solutions of feedback set problems using grasp .acm transactions on mathematical software ( toms ) , 27(4 ) , 456 - 464 .kalman , r. e. ( 1963 ) . mathematical description of linear dynamical systems .journal of the society for industrial and applied mathematics , series a : control , 1(2 ) , 152 - 192 .isidori , a. ( 2013 ) .nonlinear control systems .springer science & business media .cowan , n. j. , chastain , e. j. , vilhena , d. a. , freudenberg , j. s. , & bergstrom , c. t. ( 2012 ) .nodal dynamics , not degree distributions , determine the structural controllability of complex networks .plos one , 7(6 ) , e38398 .zhao , c. , wang , w. x. , liu , y. y. , & slotine , j. j. ( 2015 ) .intrinsic dynamics induce global symmetry in network controllability .scientific reports , 5 .gama - castro , s. , jimnez - jacinto , v. , peralta - gil , m. , santos - zavaleta , a. , pealoza - spinola , m. i. , contreras - moreira , b. , ... & bonavides - martnez , c. ( 2008 ) .regulondb ( version 6.0 ) : gene regulation model of escherichia coli k-12 beyond transcription , active ( experimental ) annotated promoters and textpresso navigation .nucleic acids research , 36(suppl 1 ) , d120-d124 .shen - orr , s. 
s. , milo , r. , mangan , s. , & alon , u. ( 2002 ) .network motifs in the transcriptional regulation network of escherichia coli .nature genetics , 31(1 ) , 64 - 68 .balaji , s. , babu , m. m. , iyer , l. m. , luscombe , n. m. , & aravind , l. ( 2006 ) . comprehensive analysis of combinatorial regulation using the transcriptional regulatory network of yeast . journal of molecular biology , 360(1 ) , 213 - 227 . milo , r. , shen - orr , s. , itzkovitz , s. , kashtan , n. , chklovskii , d. , & alon , u. ( 2002 ) .network motifs : simple building blocks of complex networks .science , 298(5594 ) , 824 - 827 .norlen , k. , lucas , g. , gebbie , m. , & chuang , j. ( 2002 , august ) .eva : extraction , visualization and analysis of the telecommunications and media ownership network . in proceedings of international telecommunications society14th biennial conference ( its2002 ) , seoul korea .jeong , h. , tombor , b. , albert , r. , oltvai , z. n. , & barabsi , a. l. ( 2000 ) .the large - scale organization of metabolic networks .nature , 407(6804 ) , 651 - 654 .watts , d. j. , & strogatz , s. h. ( 1998 ) .collective dynamics of small - world networks .nature , 393(6684 ) , 440 - 442 .white , j. g. , southgate , e. , thomson , j. n. , & brenner , s. ( 1986 ) .the structure of the nervous system of the nematode caenorhabditis elegans : the mind of a worm .lond , 314 , 1 - 340 .huxham , m. , beaney , s. , & raffaelli , d. ( 1996 ) .do parasites reduce the chances of triangulation in a real food web ?oikos , 284 - 300 .dunne , j. a. , williams , r. j. , & martinez , n. d. ( 2002 ) .food - web structure and network theory : the role of connectance and size .proceedings of the national academy of sciences , 99(20 ) , 12917 - 12922 .christian , r. r. , & luczkovich , j. j. ( 1999 ) .organizing and understanding a winter s seagrass foodweb network through effective trophic levels .ecological modelling , 117(1 ) , 99 - 124 .martinez , n. d. , hawkins , b. a. , dawah , h. a. , & feifarek , b. p. ( 1999 ) .effects of sampling effort on characterization of food - web structure .ecology , 80(3 ) , 1044 - 1055 .martinez , n. d. ( 1991 ) .artifacts or attributes ? effects of resolution on the little rock lake food web .ecological monographs , 61(4 ) , 367 - 392 .adamic , l. a. , & glance , n. ( 2005 , august ) .the political blogosphere and the 2004 us election : divided they blog . in proceedings of the 3rd international workshop on link discovery ( pp .36 - 43 ) .leskovec , j. , lang , k. j. , dasgupta , a. , & mahoney , m. w. ( 2009 ) . community structure in large networks : natural cluster sizes and the absence of large well - defined clusters .internet mathematics , 6(1 ) , 29 - 123 .albert , r. , jeong , h. , & barabsi , a. l. ( 1999 ) .internet : diameter of the world - wide web .401(6749 ) , 130 - 131 .matei , r. , iamnitchi , a. , & foster , i. ( 2002 ) . mapping the gnutella network .internet computing , ieee , 6(1 ) , 50 - 57 .leskovec , j. , kleinberg , j. , & faloutsos , c. ( 2007 ) .graph evolution : densification and shrinking diameters .acm transactions on knowledge discovery from data ( tkdd ) , 1(1 ) , 2 .brglez , f. , bryan , d. , & kozminski , k. ( 1989 , may ) .combinational profiles of sequential benchmark circuits . in circuits and systems , 1989 . , ieee international symposium on ( pp .1929 - 1934 ) .i cancho , r. f. , janssen , c. , & sol , r. v. ( 2001 ) .topology of technology graphs : small world patterns in electronic circuits .physical review e , 64(4 ) , 046119 .bianconi , g. 
, gulbahce , n. , & motter , a. e. ( 2008 ) . local structure of directed networks .physical review letters , 100(11 ) , 118701 .leskovec , j. , huttenlocher , d. , & kleinberg , j. ( 2010 , april ) . signed networks in social media . in proceedings of the sigchi conference on human factors in computing systems ( pp .1361 - 1370 ) .leskovec , j. , huttenlocher , d. , & kleinberg , j. ( 2010 , april ) .predicting positive and negative links in online social networks . in proceedings of the 19th international conference on world wide web ( pp .641 - 650 ) .milo , r. , itzkovitz , s. , kashtan , n. , levitt , r. , shen - orr , s. , ayzenshtat , i. , ... & alon , u. ( 2004 ) .superfamilies of evolved and designed networks .science , 303(5663 ) , 1538 - 1542 .zeleny , l. d. ( 1950 ) .adaptation of research findings in social leadership to college classroom procedures .sociometry , 13(4 ) , 314 - 328 .macrae , d. ( 1960 ) .direct factor analysis of sociometric data .sociometry , 23(4 ) , 360 - 371 .richardson , m. , agrawal , r. , & domingos , p. ( 2003 ) .trust management for the semantic web .in the semantic web - iswc 2003 ( pp .351 - 368 ) .springer berlin heidelberg .leskovec , j. , kleinberg , j. , & faloutsos , c. ( 2005 , august ) .graphs over time : densification laws , shrinking diameters and possible explanations . in proceedings of the eleventh acm sigkdd international conference on knowledge discovery in data mining ( pp .177 - 187 ) .gehrke , j. , ginsparg , p. , & kleinberg , j. ( 2003 ) .overview of the 2003 kdd cup .acm sigkdd explorations newsletter , 5(2 ) , 149 - 151 .opsahl , t. , & panzarasa , p. ( 2009 ) .clustering in weighted networks .social networks , 31(2 ) , 155 - 163 .chicago song , c. , qu , z. , blumm , n. , & barabsi , a. l. ( 2010 ) . limits of predictability in human mobility .science , 327(5968 ) , 1018 - 1021 .eckmann , j. p. , moses , e. , & sergi , d. ( 2004 ) .entropy of dialogues creates coherent structures in e - mail traffic .proceedings of the national academy of sciences of the united states of america , 101(40 ) , 14333 - 14337 .freeman , s. c. , & freeman , l. c. ( 1979 ) .the networkers network : a study of the impact of a new communications medium on sociometric structure .school of social sciences university of calif .. cross , r. l. , & parker , a. ( 2004 ) .the hidden power of social networks : understanding how work really gets done in organizations .harvard business review press .chaves , m. , sontag , e. d. , & albert , r. ( 2006 ) . methods of robustness analysis for boolean models of gene control networks .iee proc .- syst .biol , 153(4 ) , 154 .von dassow , g. , & odell , g. m. ( 2002 ) .design and constraints of the drosophila segment polarity module : robust spatial patterning emerges from intertwined cell state switches .journal of experimental zoology , 294(3 ) , 179 - 215 .daniels , b. c. , chen , y. j. , sethna , j. p. , gutenkunst , r. n. , & myers , c. r. ( 2008 ) . sloppiness , robustness , and evolvability in systems biology .current opinion in biotechnology , 19(4 ) , 389 - 395 .meir , e. , munro , e. m. , odell , g. m. , & von dassow , g. ( 2002 ) .ingeneue : a versatile tool for reconstituting genetic networks , with examples from the segment polarity network .journal of experimental zoology , 294(3 ) , 216 - 251 .gates , a. j. , & rocha , l. m. ( 2015 ) . control of complex networks requires both structure and dynamics . scientific reports , 6 ( 24456 ) .akutsu , t. , hayashida , m. , ching , w. k. , & ng , m. k. 
( 2007 ) .control of boolean networks : hardness results and algorithms for tree structured networks .journal of theoretical biology , 244(4 ) , 670 - 679 .cheng , d. , & qi , h. ( 2009 ) .controllability and observability of boolean control networks .automatica , 45(7 ) , 1659 - 1667 . [ tab : stable ]in , mochizuki , fiedler et al . introduced the mathematical framework underlying feedback vertex set control ( fc ) . herewe give a brief overview of the main concepts and results of and its relation the work presented here . in the following , ,denotes the state of the variable associated to node at time , and is a vector composed of the state of the variables of the network . in addition , we use to denote where .let each of the system s node states evolve in time according to the differential equations where encodes the network structure ; defines the predecessor ( regulator ) nodes of node in the network and is such that contains node only if . in other words , negative self - regulation ( )is not included in , only positive self - regulation is .is equivalent to adding a new auxiliary variable to encode for positive self - regulation ( if any ) and not including as part of .in other words , if with , then we introduce and set as the new equation for node .this would make for the expanded system and would make the feedback vertex set of the expanded system always include or .this approach of adding an auxiliary variable is used in . ] additionally , and its first derivatives are assumed to be continuous functions and are assumed to be such that is bounded ( for some constant ) for any finite initial condition and for all , including the limit . note that eq .[ eq : s1 ] determines the dynamics of all node variables , including source nodes , which stands in contrast to eqs .[ eq:1]-[eq:2 ] in the main text ( eqs .[ eq : sdyn1]-[eq : sdyn2 ] ) .we consider the more general case of eqs .[ eq:1]-[eq:2 ] in .the boundedness conditions listed in the previous paragraph makes this system a so - called dissipative dynamical system , and guarantee that any initial state will converge to a global attractor as , the global attractor is bounded and invariant under eq .[ eq : s1 ] , and contains all bounded dynamical attractors : steady states , limit cycles , quasi - periodic orbits , and bounded chaotic trajectories ._ theorem .consider a differential equation system governed by eq .[ eq : s1 ] with dissipative functions , and the associated directed graph obtained from the .we also assume and its derivatives to be continuous . moreover, can contain a self - loop only if does not satisfy the decay condition .then a possibly empty subset of vertices of , and any two solutions and of eq .[ eq : s1 ] satisfy_ _ for all choices of nonlinearities if and only if is a feedback vertex set ( fvs ) of the graph .a consequence of this theorem is that a system governed by eq .[ eq : s1 ] with an empty fvs must have any pair of solutions approach each other as , i.e. , there is single dynamical attractor .now , if we take a system with a non - empty fvs and override the dynamics of its fvs with the trajectory in one of its dynamical attractors , then the overriden system is equivalent to a system with an empty fvs . be the node indices of a fvs , andlet be the node indices of nodes not in the fvs .the dynamics of nodes in the overriden system are given by , , where is the trajectory of the overriden node states . 
since , then does not contain any node in and the graph defined by the will have no cycles ( removing , by definition , makes the graph acyclic ) . ] since the dynamical attractor is still a dynamical attractor of the overriden system , which has an empty fvs , it must be the only dynamical attractor the overriden system . hence ,if we override the dynamics of the fvs of system eq .[ eq : s1 ] with the trajectory in one of its dynamical attractors , this theorem guarantees that the overriden system will converge to this attractor .furthermore , overriding the full fvs is necessary and sufficient if one wants this control strategy to hold for all choices of s .consider the general system used in the main text .the state of the system s nodes at time , characterized by source node variables ( for nodes with no incoming edges ) and internal node variables , obeys the equations the dynamics of each source node is determined by an environmental signal , while the dynamics of each internal node is governed by , where the determines the predecessor nodes of and satisfies the same conditions as in .the dynamics are assumed to be bounded , and the s and s and their first derivatives are taken to be continuous . for this system , the theorem in and its consequences ( i.e. , the results of refs . ) can not be applied directly since the source node variables do not obey eq .[ eq : s1 ] .note that the addition of the environmental signals is not merely cosmetic ; the s denote stimuli the system obtains from its environment through the source nodes ; these stimuli can affect the dynamical attractors available to the system ( e.g. steady states can merge or disappear if is fixed at different values ) . herewe extend the previous results of feedback vertex set control to the more general system dynamics .let be the desired dynamical attractor and let be the external signals in which this attractor is obtained .now , assume that the system s source nodes are driven by an arbitrary . if starting at time , we override the state of the source nodes with , then for we will have be in their state in .additionally , the dynamics the for can be described by , where the no longer depend on ( i.e. , is with all the removed ) . since the dynamics of the modified system now obey eq .[ eq : s1 ] ( with instead of ) , then we can guarantee that the can be used to steer the system to any dynamical attractor of interest . finally , since , then is one of the attractors of the modified system ( and with both have the same governing equations ) .the result is that the overriding the state of the source nodes and of the into the state in a dynamical attractor is guaranteed to steer the system to as .the fc node set of a network of nodes is composed of the source nodes of the network ( of them ) and of the fvs of the network .the minimal fc node set of a network is obtained by finding a minimal fvs , since the number of source nodes is fixed for any given network .the minimal fvs of a network is not guaranteed to be unique , and is often found to have a large degeneracy ( see the examples in fig .[ fig : controlfigure ] of the main text ) . 
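the control prescription described above ( clamping the source nodes and a feedback vertex set to their values in the target attractor ) can be illustrated with a toy simulation ( ours , unrelated to the models analyzed in the main text ) : a single source signal drives a two - node positive feedback loop with hill - type activation and linear decay , so the fc node set consists of the source node and any one node of the cycle . clamping both at their values in the desired ( high ) attractor steers the remaining node there from an initial condition that would otherwise relax to the low attractor . all parameter values below are arbitrary illustrative choices .

import numpy as np

def h(x):
    # saturating (hill-type) activation
    return x ** 2 / (1.0 + x ** 2)

def step(state, u, dt=0.01, clamp_x1=None):
    x1, x2 = state
    if clamp_x1 is not None:
        x1 = clamp_x1            # override the feedback vertex set node
    dx1 = u + 2.0 * h(x2) - x1   # linear decay keeps the dynamics bounded
    dx2 = 2.0 * h(x1) - x2
    if clamp_x1 is not None:
        dx1 = 0.0                # the clamped node does not evolve
    return np.array([x1 + dt * dx1, x2 + dt * dx2])

def simulate(state, u, steps=20000, clamp_x1=None):
    state = np.array(state, dtype=float)
    for _ in range(steps):
        state = step(state, u, clamp_x1=clamp_x1)
    return state

u_star = 0.1                                  # clamped source node signal
target = simulate([2.0, 2.0], u_star)         # the "high" attractor, roughly (1.35, 1.29)
free_run = simulate([0.0, 0.0], u_star)       # uncontrolled run ends near the "low" attractor
controlled = simulate([0.0, 0.0], u_star, clamp_x1=target[0])

print("target attractor :", target)
print("no control       :", free_run)         # approx (0.10, 0.02)
print("u and x1 clamped :", controlled)       # x2 approaches its value in the target attractor

either node of the cycle is a valid feedback vertex set here ; what matters is that every cycle is intersected and that the clamped values are taken from a genuine attractor of the full system ( obtained above by simulating the uncontrolled system from a high initial state ) .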
in order to find the minimal fvs control set of a network ,we must find which of the possible node sets is a minimal fvs .the problem of identifying the minimal fvs has a long history in the area of circuit design and a variety of fast algorithms exist to find close - to - minimal solutions even though solving the minimal fvs problem exactly is np - hard .here we use the fvs adaptation of a heuristic algorithm known as the greedy randomized adaptive search procedure ( grasp ) , which is commonly used for combinatorial optimization problems .grasp is an iterative procedure in which each iteration consists of two phases : a construction phase in which a feasible solution to the problem is produced based on a greedy measure and a randomized selection process ( given a cutoff for the greedy measure , a feasible solution below the cutoff is chosen randomly and uniformly ) , and a local search phase in which the local neighborhood in the space of solutions is explored to find a local minimum of the problem .the fvs adaptation of grasp incorporates the wiring diagram of the network into the procedure by using the in - degree and out - degree of each node as the greedy measure in the construction phase and by utilizing a graph reduction technique that preserves the fvs during the local search phase .in addition , we preprocess all networks by iteratively removing source and sink nodes ( this is done iteratively because new source / sink nodes may appear after a source / sink node is removed ) , since a minimal of a network is invariant under removing nodes that do no participate in directed cycles .for this work , we use a custom code in python to iteratively remove source and sink nodes in each network analyzed .the resulting network is then used as an input to the fortran implementation of the fvs adaptation of grasp using the default settings ( 2048 iterations and a random uniformly chosen cutoff for the randomized selection process in each iteration ) , unless otherwise noted . in structural controllability ( sc )we consider a system with an underlying network structure whose autonomous dynamics are governed by linear time - invariant ordinary differential equations where denotes the state of the system , and is a matrix that encodes the network structure and is such that is nonzero only if there is a directed edge from to .given this system , sc s aim is to identify external driver node signals that can steer the system from any initial state to any final state in finite time ( i.e. , full control ) , and that are coupled to eq .[ eq : linaut ] in the following way ( eq . [ eq:3 ] in the main text ) where is a matrix that describes which nodes are driven by the external signals .the work of lin , shields , pearson , and others showed that if such a system can be controlled in the specified way by a given pair , which can be verified using kalman s controllability rank condition , matrix has full rank , i.e. , . 
]this will also be true for almost all pairs ( except for a set of measure zero ) .in other words , sc is necessary and sufficient for control of almost all linear time - invariant systems consistent with the network structure in .the applicability of sc also extends to nonlinear systems ; sc of the linearized nonlinear system around a state of interest is a sufficient condition for ( local ) controllability of the system from to any sufficiently close state in a sufficiently small time ( the same is also true if we are interested in a trajectory instead of single state ) .furthermore , sc of the linearized nonlinear system is also a sufficient condition for some nonlinear notions of controllability such as accessibility . in a system governed by eq .[ eq : lincont ] , self - dynamics is captured by having the matrix elements in the diagonal of be nonzero ( i.e. , a self - loop in the network structure ) .if each node variable in the system has self - dynamics , then every node in the associated graph structure of will have a self - loop . directly applying sc to such a graphwill yield the surprising result that a single driver signal is necessary and sufficient for full control , regardless of any other aspect of the graph structure .this result , although mathematically correct , gives little insight into the impact of the underlying network structure of ( other than self loops ) on control - related questions .furthermore , as sun et al. showed using minimal - energy control driver signals , , where is the desired final time . ] the required driver signal might be numerically impossible to implement unless the number of control nodes is significantly increased . we should emphasize that controllability of a system with self - dynamics by a single driver signal is a consequence of sc s assumption that each nonzero entry in and is independent of each other . thus , if one considers sc for the set of s in which the diagonal elements of are fixed ( i.e. , the self - dynamics are fixed but the every other nonzero entry is still arbitrary ) then the number of driver nodes can be obtained from the eigenvalues of and their geometric multiplicities , of an eigenvalue of is given by , where is the identity matrix . ) ] as shown in a recent study by zhao et al . .for most cases , obtaining the eigenvalues of and their geometric multiplicities is computationally demanding and requires specifying a value for the weight of each self - loop .for the special case of a single fixed weight for the self - dynamics of every node ( , ) , the number of driver nodes is equivalent to the one specified by sc using but setting all diagonal elements to zero .these considerations about self - dynamics are crucial when using sc on the nonlinear systems we consider , eqs .[ eq : sdyn1]-[eq : sdyn2 ] . since the nonlinear functions have a decay term that prevents the system from increasing without bounds, then a linearization of the s will give nonzero diagonal entries for .thus , sc would predict that a single driver signal is sufficient for controllability regardless of the topology of the real network considered , a result which tells us little about structure - based control in these networks .instead , we follow the approach of liu et al . and do not include the decay self - dynamics as a self - loop in the graph structure. two equivalent interpretations of this approach under sc are that ( i ) we consider the decay terms to not dominate the linearized dynamics ( i.e. 
, we set them to zero ) , or ( ii ) every element has the same ( or very similar ) fixed weight for its self - dynamics ( i.e. , the self - dynamics are fixed and every other nonzero entry in is arbitrary ) .here we use the maximum matching approach of liu et al . to identify the minimum number of driver nodes in . given a directed network ,an undirected bipartite graph is created in the following way : for every node in the original network , a node of type and a node of type are created in the bipartite graph .the connectivity in the bipartite graph is such that if node has a directed edge to node in the original network , then the bipartite graph will have an undirected edge from node to node . as liu et al. showed , a maximum matching of the bipartite graph ( maximum number of edges with no common nodes ) gives the minimum number of driver nodes in sc ; each node in the original network corresponding to a node of type that is not in the maximum matching must be directly regulated by a driver node .a maximum matching of a graph is not unique , which implies that the set of nodes that must be directly regulated by a driver node is not unique either .the maximum matching of a bipartite graph can be efficiently found in time using the hopcroft - karp algorithm .for this work , we use a custom code in python to implement the maximum matching approach of liu et al . , and use the implementation of the hopcroft - karp algorithm in the python package networkx ( https://networkx.github.io/ , version 1.10 ) to find the maximum matching . herewe describe each network in [ tab : stable2 ] , provide the reference where each network was first reported , and give the link to where the network was obtained ( if publicly available ) . for many of these networks, the orientation of the directed edges does not match the expected direction of influence in a dynamic model ; if there is an edge from node to node , we expect the state of node to influence the state of node ( e.g. , in an epidemic model , if individual is infected and can spread the disease to , then we expect node to get infected ) . for these networks , we follow and , and reverse the orientation of the directed edges in order for it to match the expected directionality of influence . 1 .transcription regulatory network 1 *. graph of the transcriptional regulation network in the bacterium _escherichia coli_. vertices denote genes ; a gene that codes for a transcription factor that regulates the transcription of a target gene is denoted by a directed edge between them .the version of the network used was obtained directly from yang - yu liu .2 . * _ e . coli _transcription regulatory network 2 *. graph of the transcriptional regulation network in the bacterium _escherichia coli_. operons ( a gene or group of genes transcribed together ) are denoted by vertices ; an operon that codes for a transcription factor that directly regulates a target operon is denoted by a directed edge .this network was obtained from hawoong jeong s website http://stat.kaist.ac.kr/index.php .3 . * _ s . cerevisae _transcription regulatory network 1 , 2 *. graph of the transcriptional regulation network in the yeast _ saccharomyces cerevisiae_. genes are denoted by vertices ; a gene that codes for a transcription factor that regulates a target gene is denoted by a directed edge between them .network 1 was obtained from the supplemental information in ref . 
, and network 2 was obtained from uri alon s website https://www.weizmann.ac.il/mcb/urialon/ download / collection - complex - networks . 4 . * us corporate ownership *. graph of the ownership relations among companies in the telecommunications and media industries in the united states .companies are denoted by vertices and ownership of a company by another is denoted by an edge originating from the owner company .this network was obtained from the pajek network dataset http://vlado.fmf.uni-lj.si/pub/networks/data/econ/eva/eva.htm 5 .s. cerevisae _ , _ c. elegans _ metabolic networks *. graph of the metabolic network of the bacterium _ escherichia coli _ , the yeast _ saccharomyces cerevisiae _ , and the worm _ caenorhabtitis elegans_. substrates ( molecules ) and temporary complexesare denoted by vertices ; substrates that participate as a reactant in the reaction associated to a complex have an edge to it , and substrates that are products of the reaction associated to a complex have an edge from it .these network were obtained from hawoong jeong s website http://stat.kaist.ac.kr/index.php .elegans _ neural network *. graph of the _ caenorhabtitis elegans _ worm s neural network .neurons are denoted by vertices and synapse / gap junctions between neurons are denoted by edges .this network was obtained from the uc irvine network data repository http://networkdata.ics.uci.edu / data / celegansneural/. 7 . * ythan , seagrass , grassland , and little rock food web networks*. graph of the predatory interactions among species in the ythan estuary , the st .marks seagrass , the england / wales grassland , and the little rock lake .every species is denoted by a vertex , and if a species preys on another species an edge is drawn from the prey to the predator .this network was obtained from the cosin project network data http://www.cosinproject.eu/extra/data/foodwebs/ web.html .* political blogs *. graph of the hyperlinks between blogs on us politics in 2005 .every blog is denoted by a vertex and hyperlinks are denoted by edges that point towards the linked blog . in this workwe reverse the edges of this network so that they match the direction of influence in a dynamic model ( i.e. , if a blog has a hyperlink to another blog , then the latter influenced the former ) .this network was obtained from mark newman swebsite http://www - personal.umich.edu/ / netdata/. 9 . *www network of stanford.edu and nd.edu *. graph of the web networks of stanford university ( domain stanford.edu ) and the university of notre dame ( domain nd.edu ) .every webpage is denoted by a vertex and hyperlinks are denoted by edges that point towards the linked webpage .this network was obtained from the stanford large network dataset collection https://snap.stanford.edu / data/. 10 .* internet networks *. graphs of the gnutella peer - to - peer file sharing network from august 2002 ; each graph represents a different snapshot of the gnutella network .every host is denoted by a vertex and a connection from one host to another is denoted by an edge that points towards the latter .these networks were obtained from the stanford large network dataset collection https://snap.stanford.edu / data/. 11 . * electronic circuits *. 
network representations of electronic circuits from the iscas89 benchmark collection .logic gates and flip - flops are represented by vertices , and the directed connections between them are denoted edges .these networks were obtained from uri alon s website https://www.weizmann.ac.il/mcb/urialon/ download / collection - complex - networks . 12 . * texas power grid *. network representation of the texas power grid .substations , generators , and transformers are represented by vertices , and transmission lines between them are denoted by edges , with the edge directionality corresponding to the electric power flow .this network was obtained directly from yang - yu liu* slashdot *. friend / foe network of the technology - related news website slashdot obtained in 2009 .users are denoted by vertices , and a user tagging another user as a friend / foe is denoted by an edge pointing towards the latter user . in this workwe reverse the edges in this network so that they match the direction of influence in a dynamic model ( i.e. , if a user tags another user , the latter has an influence on the former ) .this network was obtained from the stanford large network dataset collection https://snap.stanford.edu / data/. 14 .* wikivote *. who - votes - for - whom network of wikipedia users for administrator elections .users are denoted by vertices , and a user voting for another user is denoted by an edge pointing towards the latter user . in this workwe reverse the edges of this network so that they match the direction of influence in a dynamic model ( i.e. , if a user votes for another user , the latter has an influence on the former ) .this network was obtained from the stanford large network dataset collection https://snap.stanford.edu / data/. 15 .* college student and prison inmate trust networks *. social networks of positive sentiment of college students in a course about leadership and of inmates in prison .each person is denoted by a vertex , and the expression of a positive sentiment of a person towards another person ( based on a questionnaire ) is denoted by an edge pointing towards the latter .in this work we reverse the edges of this network so that they match the direction of influence in a dynamic model ( i.e. , if a person has a positive sentiment towards another , the latter has an influence on the former ) .these networks were obtained from uri alon s website https://www.weizmann.ac.il/mcb/urialon/ download / collection - complex - networks . 16 .* epinions *. who - trusts - whom online social network of epinions.com , a general consumer review site .users are denoted by vertices , and a user trusting another user is denoted by an edge pointing towards the latter . in this workwe reverse the edges of this network so that they match the direction of influence in a dynamic model ( i.e. , if a user trusts another user , the latter has an influence on the opinion of the former ) .this network was obtained from the stanford large network dataset collection https://snap.stanford.edu / data/. 17 . *arxiv s high energy physics - theory and high energy physics - phenomenology citation networks *. 
citations between preprints in the e - print repository arxiv for the high energy physics - theory ( hep - th ) and high energy physics - phenomenology ( hep - ph ) sections .the citations cover the period from january 1993 to april 2003 .each preprint in the network is denoted by a vertex ; a preprint citing another preprint is denoted by a directed edge from the citing preprint to the cited preprint . in this workwe reverse the edges of this network so that they match the direction of influence in a dynamic model ( i.e. , if a preprint is cited by another preprint , the latter had an influence on the former ) .this network was obtained from the stanford large network dataset collection https://snap.stanford.edu / data/. 18 . *uc irvine online social network *. network of messages among users in an online community for students at university of california , irvine .users are denoted by vertices , and a user messaging another user is denoted by an edge pointing towards the latter .this network was obtained from tore opsahl s website https://toreopsahl.com / datasets/. 19 .* cellphone communication network *. call network of a subset of anonymized cellphone users .each user is denoted by a vertex , and a call or text message from one user to another is denoted by a directed edge from the sender to the receiver .this network was obtained directly from yang - yu liu . 20 . *e - mail communication network *. network of e - mails sent among users in a university during a period of 83 days .each user is denoted by a vertex , and an e - mail sent from one user to another during this period of time is denoted by an edge from the sender to the receiver .this network was obtained directly from yang - yu liu* intra - organizational freeman networks *. network of personal relationships among researchers working on social network analysis at the beginning and at the end of the study .each researcher is denoted by a vertex , and a personal relationship from a researcher to another is denoted by a directed edge from the former to the latter . in this workwe reverse the edges of this network so that they match the direction of influence in a dynamic model ( i.e. , if a researcher has a personal relationship with another , the latter has an influence on the former ) .this network was obtained from tore opsahl s website https://toreopsahl.com / datasets/. 22 .* intra - organizational consulting and manufacturing networks *. network describing the relationships between employees in a consulting company and in a research team from a manufacturing company .each employee involved is denoted by a vertex , and the frequency / extent of information or advice an employee obtains from another ( as measured by a questionnaire ) is denoted by a weighted , directed edge among them that points from the questioned employee .we follow and , and use all edges with a nonzero weight to define a unweighted network , which we use for our analysis .we also reverse the edges of this network so that they match the direction of influence in a dynamic model ( i.e. , if an employee receives advice or information from another , the latter has an influence on the former ) .this network was obtained from tore opsahl s website https://toreopsahl.com / datasets/. 
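as a concrete illustration of the processing pipeline applied to these datasets (reversing edge orientation so that an edge points in the direction of dynamical influence, followed by the maximum-matching computation of the minimum driver-node set described earlier), a minimal sketch is given below; the file name is hypothetical and a recent networkx release is assumed.

```python
import networkx as nx
from networkx.algorithms import bipartite

# hypothetical edge list, one "source target" pair per line
G = nx.read_edgelist("network_edges.txt", create_using=nx.DiGraph())

# reverse edges for datasets whose arrows point opposite to the flow of influence
G = G.reverse(copy=True)

# bipartite graph of liu et al.: an out-copy ('+', u) and an in-copy ('-', v) per
# node, with an undirected edge ('+', u) -- ('-', v) for every directed edge u -> v
Bg = nx.Graph()
Bg.add_nodes_from((('+', u) for u in G), bipartite=0)
Bg.add_nodes_from((('-', v) for v in G), bipartite=1)
Bg.add_edges_from((('+', u), ('-', v)) for u, v in G.edges())

matching = bipartite.hopcroft_karp_matching(Bg, top_nodes=[('+', u) for u in G])

# nodes whose in-copy is unmatched must be driven directly; at least one driver is needed
unmatched = [v for v in G if ('-', v) not in matching]
n_drivers = max(len(unmatched), 1)
print(n_drivers)
```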
we follow and , and study the control properties of ensembles of randomized real network using two randomization procedures : full randomization , which turns the network into a directed erds - rnyi network with nodes and edges , and degree - preserving randomization , which keeps the in - degree and out - degree of every node but shuffles its successor and predecessor nodes .erds - rnyi randomization is implemented by creating a graph of nodes , randomly ( uniformly ) choosing a source and a target of an edge from the set of nodes , and repeating this for each of the edges . for the degree - preserving randomization ,we start from the original network and choose two edges randomly ( uniformly ) , for which we switch their target nodes if the target and source nodes of both edges are each different ( if they are the same , we choose another edge pair ) .we repeat this step for a transient of times , after which we save the obtained network as the first element of the ensemble .we then repeat the target - node - switching step times , save the resulting network as the second element of the ensemble , and repeat the target - node - switching step times for each consequent ensemble element .a ) for 100 randomly chosen initial conditions .( b ) the thin light blue lines are the evolution of the norm of the difference between the wild type steady state and the controlled state trajectory using reduced fc ( dark blue symbols on [ fig : sfig2]a ) for 100 randomly chosen initial conditions .the thin red lines indicate the norm of the difference between the uncontrolled trajectory and the wild type steady state for 100 randomly chosen initial conditions . in all initial conditionsthe concentration of each quantity is chosen uniformly from the interval $ ] .the thick blue ( red ) lines indicate the average of the relevant 100 realizations.,scaledwidth=48.0% ] for each real network we used networks as the ensemble size . for most ensemble properties we used the 100 ensemble networks to estimate the average value and standard deviation of the property , but for some properties this was too computationally expensive for very large networks ( e.g. of networks with nodes ) or for very dense networks ( e.g. cycle numbers of intra - organizational networks ) . for these properties and networks, we used a smaller ensemble size , as specified below .+ + - political blogs . for cycle numbers of length , .+ - nd.edu .for cycle numbers of length , . for , and iteration for grasp .+ - stanford.edu . for cycle numbers of length , . for , and iteration for grasp .+ - slashdot . for cycle numbers of length , . for , and iterations for grasp .+ - epinions . for , and iterations for grasp .+ - arxiv hepth , hepph . for cycle numbers of length , . for , and iterations for grasp .+ - ucionline . for cycle numbers of length , .+ - cellphone . for , and iterations for grasp .+ - emails . for cycle numbers of length , . + - manufacturing . 
for cycle numbers of length , .[ [ iv .- structure - based - control - of - the - drosophila - melanogaster - segment - polarity - gene - regulatory - network ] ] iv .structure - based control of the _ drosophila melanogaster _segment polarity gene regulatory network ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ we compare the results of the two control methods for the gene regulatory network of the drosophila segment polarity genes , for which several dynamic models exist .the segment polarity genes , especially wingless ( _ wg _ ) and engrailed ( _ en _ ) , are important determinants of embryonic pattern formation and contributors to embryonic development . the wingless mrna and protein are expressed in the cell that is anterior to the cell that expresses the engrailed and hedgehog ( _ hh _ ) mrna and protein .all models consider a group of four subsequent cells as a repeating unit , and include intra - cellular and inter - cellular interactions . the continuous model of von dassow et al .represents each cell as a hexagon with six relevant cell - to - cell boundaries .it includes 136 nodes that represent mrnas and proteins , among them 4 source nodes and 24 sink nodes , and 488 edges that represent transcriptional regulation , translation , and protein - protein interactions .[ fig : dynmod]a in the main text , reproduced here as [ fig : sfig2]a , shows the network corresponding to the _ wg_-expressing cell ( cell 1 ) and three of its boundaries with the _ en_-expressing cell 2 .additional nodes in the network include , _ ptc _ ( patched ) , _ ci _ ( cubitus interruptus ) , its proteins _ cid _ and _ cn _ ( repressor fragment of _ cid _ ) , _ iwg _ ( intracellular _ wg _ protein ) , _ ewg _ ( extracellular _ wg _ protein ) , _ ph _ ( complex of patched and hedgehog proteins ) , and _b _ , a constitutive activator of _ ci_. for each gene ,the mrna is written in lower case and the protein(s ) are written in upper case .the nodes are characterized by continuous concentrations , whose rate of change is described by ordinary differential equations ( ode ) involving hill functions for gene regulation and mass action kinetics for protein - level processes , and using 48 kinetic parameters .von dassow et al .have shown that the model can reproduce the essential feature of the wild type steady state : _wg_/_wg _ are expressed anterior to the parasegment boundary ( cell 1 ) and _ en_/_en_/_hh_/_hh _ are expressed posterior to the parasegment boundary ( cell 2 ) as shown in fig .[ fig : dynmod ] . 
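as noted above, the nodes' rates of change combine hill-type gene regulation with mass-action protein kinetics; a minimal sketch of one such transcription term is given below (the parameter names and the particular choice of activator and repressor are illustrative assumptions, not the actual von dassow et al. rate equations).

```python
import numpy as np

def hill(x, k, nu):
    """hill activation term x^nu / (k^nu + x^nu), with values in [0, 1)."""
    return x**nu / (k**nu + x**nu)

def d_en_dt(en, EWG, CN, max_rate, k_wg, nu_wg, k_cn, nu_cn, half_life):
    """illustrative rate for an en-like transcript: activation by extracellular WG,
    repression by CN, and first-order decay set by a half-life."""
    synthesis = max_rate * hill(EWG, k_wg, nu_wg) * (1.0 - hill(CN, k_cn, nu_cn))
    decay = (np.log(2.0) / half_life) * en
    return synthesis - decay
```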
the initial condition that yields this steady state for the most parameter sets , the so - called `` crisp '' initial condition , _wg_/_iwg _ in the first cell is at maximal concentration ( 1 ) , _en_/_en _ in the second cell has concentration 1 , the source nodes b are fixed at 0.4 in each cell and all the other nodes have zero concentration .wild type steady state of the von dassow et al .model for the second parameter set provided by the _program , using normalized concentration variables where represents all sides of the cell .the concentration of the other nodes is smaller than .another initial condition considered here is a nearly - null initial condition , wherein intra - cellular nodes have a concentration of 0.05 in the first and third cell and 0.15 in the second and fourth ( zeroth ) cell ; membrane - localized nodes have concentration of 0.15 for even - numbered sides and 0.05 for odd - numbered sides in every cell .this initial condition yields an unpatterned steady state for the majority of parameter sets .unpatterned steady state of the von dassow et al .model , for the second parameter set provided by the _program , using normalized concentrations : where represents for all cells , and represents for all sides in all cells .the concentration of the other nodes is smaller than .the differential equation system is solved using a custom code in python and the odeint function with default parameter setting .we used the differential equations given in the appendix of ._ ingeneue _ can be found at http://rusty.fhl.washington.edu/ingeneue/papers/ papers.html . .( a ) the thin light blue lines show the evolution of the norm of the difference between the wild type attractor and the controlled state trajectory using fc for 100 randomly chosen initial conditions .( b ) the thin light blues lines are the evolution of the norm of the difference between the wild type attractor and the controlled state trajectory using reduced feedback fc for 100 randomly chosen initial conditions .the thin red lines are the evolution of the norm of the difference between the wild type attractor and uncontrolled trajectory using reduced fc for 100 randomly chosen initial conditions . 
in all initial conditions the concentration of each quantity is chosen uniformly from the interval [ 0,1 ] .the thick blue(red ) line is the average of the 100 realizations .( c)the concentration of _ ptc _ in the first cell ( solid lines ) and en in the second cell ( dashed lines ) with respect to time .pink lines and green lines represent autonomous trajectories that start from different initial conditions ( a wild type initial condition and a nearly null , respectively ) and converge to different attractors ( the wild type limit cycle and an unpatterned limit cycle , respectively ) .blue lines represent the case when the system starts from the nearly null initial condition , and after applying fc , evolves into the wild type limit cycle .inset : evolution of the norm of the difference between the desired attractor and the controlled state trajectory using fc.,scaledwidth=48.0% ] the boolean model implements a few modifications in the network topology compared with the ode network model , and considers only two cell - to - cell boundaries instead of six .there are 56 nodes and 144 edges in the network as shown in fig .[ fig : dynmod]b .one difference compared with the von dassow et al .model is the existence of three cubitus interruptus proteins : the main protein _ ci _ , and two derivatives with opposite function : _ cia _ , which is a transcriptional activator , and _ cir _ , a transcriptional repressor .there are four source nodes , representing the sloppy paired protein ( _ slp _ ) , which is known to have a sustained expression in two adjacent cells ( cells 0 and 1 if the _wg_-expressing cell is considered cell 1 ) and is absent from the other two .there are ten steady states for this boolean network model when considering the biologically relevant pattern of the source node states .starting from the biologically known wild type initial condition , which consists of the expression ( on state ) of , , , , , , , , , , , the model converges into the biologically known wild type steady state illustrated on fig .[ fig : dynmod]c . 
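the convergence just described can be reproduced with a few lines of synchronous-update code; the sketch below uses a hypothetical three-node toy motif rather than the actual albert & othmer rules, but the update-until-fixed-point logic is the same.

```python
def step(state, rules):
    """one synchronous update; rules maps each node to a function of the full state."""
    return {node: f(state) for node, f in rules.items()}

def run_to_fixed_point(state, rules, max_steps=1000):
    for _ in range(max_steps):
        nxt = step(state, rules)
        if nxt == state:        # steady state reached
            return state
        state = nxt
    return None                 # no fixed point found (e.g., a complex attractor)

# hypothetical toy rules (not the segment polarity model)
rules = {
    "slp": lambda s: s["slp"],                          # source node keeps its state
    "wg":  lambda s: s["wg"] or (s["slp"] and not s["cir"]),
    "cir": lambda s: not s["wg"],
}
print(run_to_fixed_point({"slp": True, "wg": False, "cir": False}, rules))
```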
analytical solution reported in indicated that the states of the _ wg _ and _ ptc _ nodes , each of which has a positive auto - regulatory loop , determine the steady state for the given source node ( _ slp _ ) configuration .for example , any initial condition with no _ wg _ expression leads to an unpatterned steady state wherein _ ptc _ , _ ci _ , _ ci _ and _ cir _ are expressed in each cell , and the rest of the nodes are not expressed in any cell .[ [ iv.a .- structure - based - control - of - the - von - dassow - et - al .- differential - equation - model ] ] iv.a .structure - based control of the von dassow et al .differential equation model ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the fc method predicts that one needs to control nodes ( 4 source nodes and 48 additional nodes ) to lead any initial condition to converge to any original attractor of the model .there are multiple control sets with ; one of them consists of _ b _ ( source node ) , _ ci _ , _ cn _ , _ iwg _ , _ ewg _ on every other side , _ hh _ on every other side , _ ptc _ on every other side in all four cells ( shown in [ fig : sfig2]a ) .we perform simulations using two benchmark parameter sets to test this prediction .we use the second parameter set provided by the ingeneue program to test the system s convergence to a steady state .the ode system has at least two steady states with this parameter set .a nearly null initial condition leads to the unpatterned state ( illustrated by the green lines in fig .[ fig : dynmod]d in the main text ) .the crisp initial condition leads to the wild type pattern ( see pink lines in fig . [fig : dynmod]d ) , which we choose as the desired steady state .if we start from the nearly null initial condition and maintain the concentrations of the nodes in the fc node set in the values they would have in the desired steady state , the system evolves into the desired steady state ( see blue lines and inset of fig .[ fig : dynmod]d ) .we obtained the same success of fc control when starting from 100 different random initial conditions ( shown in [ fig : sfig3]a ) .we also obtained the same success using a reduced fc set ( blue lines in [ fig : sfig3]b ) , which consists of _ b _ , _ cid _ , _ cn _ , _ iwg _ in every cell .in contrast , in the absence of control none of the trajectories converge to the wild type steady state ( red lines in [ fig : sfig3]b ) .we also numerically verified , using a different benchmark parameter set , namely the first parameter set provided by the ingenue program , that fc control can also successfully drive any state to a limit cycle attractor ( see [ fig : sfig4]a ) .this limit cycle attractor has the same expression pattern of _ en _ , _ wg _ and _ hh _ as the wild type steady state , thus we refer to it as the wild type limit cycle ( illustrated in [ fig : sfig4]c ) .we also obtained the same success of driving any state to a limit cycle attractor using the same reduced feedback vertex control shown in [ fig : sfig4]b .sc control indicates multiple control sets with nodes .one possible combination is , , , , , , where represents all cells ( shown in [ fig : sfig2]b . 
though sc predicts that fewer nodes need to be controlled, applying it requires a potentially complicated time-varying driver signal, which would need to be determined for each initial condition using, for example, minimum-energy control or optimal control.

iv.b. structure-based control of the albert & othmer boolean model

the fc method predicts that 14 nodes need to be controlled: the 4 source nodes (_slp_), the 8 self-sustaining nodes (all _wg_ and _ptc_), and 2 additional nodes. since the fc set contains all _wg_ and _ptc_ nodes, which were shown to determine the steady states under the indicated source node states, we can conclude that controlling the nodes in the fc set is enough to drive any initial condition to the desired steady state in the albert & othmer model. the simulation result is consistent with the theoretical result, as shown in fig. [fig:dynmod]e. the wild type initial condition leads to the wild type steady state (pink lines). the null initial condition used in the boolean model is the one in which all nodes are in the off state; the resulting steady state is the unpatterned steady state (green lines). the controlled trajectory with fc is shown in blue lines. we obtained the same success of fc control when starting from 100 different random initial conditions, as shown in [fig:sfig5]a. moreover, the 12 nodes consisting of _slp_, _wg_ and _ptc_ in each cell (which we refer to as the reduced fc set) are enough to drive all the random initial conditions to the desired steady state in this particular model, as shown in [fig:sfig5]b. sc control predicts that we only need to control the four source nodes, as the network can be covered by four branches and one loop. for a simplified, single-cell version of the albert & othmer model, gates and rocha showed that the sc node set is sufficient for attractor control but does not fully control this system. thus, a control method beyond sc seems to be required for correctly predicting full control node sets in boolean models.
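in both model types, the override protocol amounts to pinning the fc-set variables at their values in the desired attractor while the remaining variables evolve freely. a minimal continuous-time sketch of this clamping idea is given below; the two-variable system and its parameters are hypothetical, chosen only so that the uncontrolled dynamics are bistable.

```python
import numpy as np
from scipy.integrate import odeint

def clamped_rhs(x, t, rhs, clamp_idx, clamp_vals):
    """evaluate the vector field with the fc-set components pinned at target values."""
    x = np.array(x, copy=True)
    x[clamp_idx] = clamp_vals       # enforce the override before evaluating the rhs
    dx = rhs(x, t)
    dx[clamp_idx] = 0.0             # clamped nodes do not evolve
    return dx

def toy_rhs(x, t):
    # bistable node 0 (stable states near 0 and 1) driving a follower node 1
    return np.array([x[0] * (1.0 - x[0]) * (x[0] - 0.3),
                     x[0] - x[1]])

clamp_idx, clamp_vals = np.array([0]), np.array([1.0])   # pin node 0 at its target value
x0 = np.array([0.05, 0.9])          # starts in the "wrong" basin of attraction
x0[clamp_idx] = clamp_vals          # the override sets the clamped node immediately
traj = odeint(clamped_rhs, x0, np.linspace(0.0, 50.0, 501),
              args=(toy_rhs, clamp_idx, clamp_vals))
print(traj[-1])                     # approaches the desired state [1.0, 1.0]
```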
|
what can we learn about controlling a system solely from its underlying network structure? here we use a framework for control of networks governed by a broad class of nonlinear dynamics that includes the major dynamic models of biological, technological, and social processes. this feedback-based framework provides realizable node overrides that steer a system towards any of its natural long-term dynamic behaviors, regardless of the dynamic details and system parameters. we use this framework on several real networks, compare its predictions to those of classical structural control theory, and identify the topological characteristics that underlie the observed differences. finally, we demonstrate the framework's applicability in dynamic models of gene regulatory networks and identify nodes whose override is necessary for control in the general case, but not in specific model instances. controlling the internal state of complex systems is of fundamental interest and enables applications in biological, technological, and social contexts. an informative abstraction of these systems is to represent the system's elements as nodes and their interactions as edges of a network. two frequently asked questions about the control of a networked system are how difficult it is to control and which network elements play an important role in controlling it. control theory provides well-developed mathematical frameworks that allow a variety of control-related questions to be addressed. structural controllability (sc), introduced by lin, distinguishes itself among these methods through its ability to draw strong dynamical conclusions based solely on network structure and unspecified linear time-invariant dynamics. despite its success and widespread application, sc may give only an approximate answer to the question of how difficult a system is to control: it can only provide sufficient conditions for controlling systems with nonlinear dynamics, and its definition of control (full control, from any initial to any final state) does not always match the meaning of control in biological, technological, and social systems, in which control tends to involve only naturally occurring system states. several new methods of control have been proposed to incorporate the inherent nonlinear dynamics of real systems and to relax the definition of full control. only one of these methods, namely feedback vertex set control (fc), can be reliably applied to large complex networks in which only the structure is well known. this method, based on a previously developed mathematical framework, incorporates the nonlinearity of the dynamics and considers only the naturally occurring end states of the system as desirable final states. in this work, we study how difficult complex networks are to control, using sc and fc as benchmark methods, and identify the topological characteristics that underlie their commonalities and differences.
|
population dynamics are a fundamental aspect of many biological processes . in this paper , we introduce and investigate a mathematical model for the population dynamics of an invasive species in a three - species food chain model .exotic species are defined as any species , capable of propagating themselves into a nonnative environment .if the propagating species is able to establish a self sustained population in this new environment , it is formally defined as invasive .the survival and competitiveness of a species depends intrinsically on an individual s fitness and ability to assimilate limited resources .often invasive species possess the ability to dominate a particular resource .this allows them to expand their range via out - competing other native species . in the united states damages caused by invasive species to agriculture , forests , fisheries and businesses ,have been estimated to be billion a year . in the words of daniel simberloff : `` _ _ invasive species are a greater threat to native biodiversity than pollution , harvest , and disease combined . _ _ '' therefore understanding and subsequently attenuating the spread of invasive species is an important and practical problem in spatial ecology and much work has been devoted to this issue .more recently , the spread of natural and invasive species by nonrandom dispersal , say due to competitive pressures , is of great interest .however , there has been less focus , on the actual eradication of an invasive species , once it has already invaded .this is perhaps a harder problem . in the words of marklewis : _ `` once the invasive species are well established , there is not a lot you can do to control them''_. it is needless to say however , that in many ecosystems around the world , invasion has already taken place !some prominent examples in the us , are the invasion by the burmese python in southern regions of the united states , with climatic factors supporting their spread to a third of the united states .the sea lamprey and round goby have invaded the great lakes region in the northern united states and canada .these species have caused a severe decline in lake trout and other indigenous fish populations .lastly the zebra mussel has invaded many us and canadian waterways causing large scale losses to the hydropower industry . another factor attributed to the increase of an invasive population ,is that the environment may turn favorable for the invasive species in question , while becoming unfavorable for its competitors or natural enemies . in such situations ,the population of the invasive species may rapidly increase .this is defined as an _ outbreak _ in population dynamics .these rapid changes tend to destabilize an ecosystem and pose a threat to the natural environment . as an illustration , inthe european alps certain environmental conditions have enabled the population of the larch budmoth to become large enough that entire forests have become defoliated . 
in most ecological landscapes , due to exogenous factors, one always encounters an invasive species and an invaded species .if the density of the former , be it invasive , disease causing , an agricultural pest , defoliator or other , undergoes a rapid transition to a high level in population , the results can be catastrophic both for local and nonlocal populations .biological and chemical controls are an adopted strategy to limit invasive populations .chemical controls are most often based on direct methods , via the use of pesticides .biological control comprises of essentially releasing natural enemies of the invasive species / pest against it. these can be in the form of predators , parasitoids , pathogens or combinations thereof .there are many problems with these approaches .for example , a local eradication effort was made by usgs through a mass scale poisoning of fish in order to prevent the asian carp ( an invasive fish species ) from entering the chicago sanitary and ship canal .the hope was to protect the fishing interests of the region .however , among the tens of thousands of dead fish , _ biologists found only one asian carp_. thus chemical control is not an exhaustive strategy .however , biological controls are also not without its share of problems .in fact , sometimes the introduced species might attack a variety of species , other than those it was released to control .this phenomena is referred to as a _ non - target effect _ , and is common in natural enemies with a broad host range .for example , the cane toad was introduced in australia in 1935 to control the cane beetle . however , the toad seemingly attacked everything else but its primary target !in addition , the toad is highly poisonous and therefore predators shy away from eating it .this has enabled the toad population to grow virtually unchecked and is today considered one of australia s worst invasive species . in studies of biological control in the united statesestimate that when parasitoids are released as biological controls that of the introduced species will attack non - targets , in canada these numbers are estimated as high as . in practiseit is quite difficult to accurately predict these numbers .the current drawbacks make it clear that alternative controls are necessary .furthermore , that modeling of alternative controls is important to validate the effectiveness of a management strategy that hopes to avoid non - target effects .such modeling is essential to access and predict the best controls to employ , so that the harmful population will decrease to low and manageable levels .this then gives us confidence to devise actual field trials .we should note that in practice actual eradication is rarely achieved .thus there are clear questions that motivate this research : 1 .how does one define a `` high '' level for a population , and further , how well does an introduced control actually work , at various high levels ? 2 .is it possible to design controls that avoid chemicals / pesticides / natural enemy introduction , and are still successful ?this paper addresses these questions through the investigation of a mathematical model that : 1 .blows - up in finite time . given a mathematical model for a nonlinear process , say through a partial differential equation ( pde ) , one says finite time blow up occurs if where is a certain function space with a norm , is the solution to the pde in question , and is the blow up time . 
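in standard notation, the blow-up criterion referred to above reads
\[
\lim_{t \to T^{*}} \left\lVert u(\cdot , t) \right\rVert_{X} = \infty ,
\]
where $X$ is a function space with norm $\lVert \cdot \rVert_{X}$, $u$ is the solution of the pde in question, and $T^{*} < \infty$ is the blow-up time.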
therefore `` highest '' level is equated with blow up , and the population passes through every conceivable high level of population as it approaches infinity .2 . incorporates certain controls that avoid chemicals / pesticides / natural enemy introduction .the controls we examine are : 1 .the primary food source of the invasive species is protected through spatial refuges .the regions that offer protection are called _ prey refuges _ and may be the result of human intervention or natural byproducts , such as improved camouflage . 2 .an overcrowding term is introduced to model the movement or dispersion from high concentrations of the invasive species .densely populated regions have increased intraspecific competition and interference which cause an increase in the dispersal of the invasive species .this is an improvement to current mathematical models and will be seen to be beneficial if used a control .3 . we introduce role reversing mechanisms , where the role of the primary food source of the invasive species , and the prey of this food source , is switched in the open area ( the area without refuge ) .this models situations where the topography provides competitive advantages to certain species .it will be seen that this also is beneficial if used a control . in effect, this uses the current ecosystem and by modifying the landscape a natural predator in the environment has an advantage in key areas .hence , the invasive species population will adversely be effected .it can also be thought of as introducing a competitor of the invasive species , to compete with it for its prey .note , none of the above rely on enemy release to predate on the invasive species , or a parasite or pathogen release to infect the invasive species .thus potential non - target effects due to such release can be avoided . in the literaturefinite time blow up is also referred to as an explosive instability .there is a rich history of blow up problems in pde theory and its interpretations in physical phenomenon .for example , this feature is seen in models of thermal runaway , fracture and shock formation , and combustion processes .thus blow up may be interpreted as the failure of certain constitutive materials leading to gradient catastrophe or fracture , it may be interpreted as an uncontrolled feedback loop such as in thermal runaway , leading to explosion . it might also be interpreted as a sudden change in physical quantities such as pressure or temperature such as during a shock or in the ignition process .the interested reader is referred to .blow up in population dynamics is usually interpreted as excessively high concentrations in small regions of space , such as seen in chemotaxis problems .our goal in the current manuscript is to bring yet another interpretation of blow up to population dynamics , that is one where we equate such an excessive concentration or blow up " of an invasive population with disaster for the ecosystem .furthermore , it is to devise controls that avoid non - target effects and yet reduce the invasive population , before the critical blow up time . in the following , the norms in the spaces , and respectively denoted by in addition , the constants , and may change between subsequent lines , as the analysis permits , and even in the same line if so required .the current manuscript is organised as follows . in section[ 2 ] we formulate the spatially explicit model that we consider . 
in section[ 3 ] we describe in detail the modeling of the control mechanisms that we propose , and term ecological damping " . herewe make three key conjectures [ c1 ] , [ c2 ] and [ c3 ] , concerning our control mechanisms . section [ 4 ] is devoted to some analytical results given via lemma [ lem : wsol ] , theorem [ thm : wsolimproved ] and [ thm : gattr ] .section [ 5 ] is where we explain our numerical approximations and test conjectures [ c1 ] , [ c2 ] and [ c3 ] numerically . in section [ 6 ]we investigate spatio - temporal dynamics in the model .we investigate the effect of overcrowding on turing patterns , and we also confirm spatio - temporal chaos in the model .lastly we offer some concluding remarks and discuss future directions in section [ 7 ] .a three species food chain model is developed , where the top predator , denoted as , is invasive . in our model, may blow up in finite time .although populations can not reach infinite values in finite time , they can grow rapidly .the blow up event indicates that the invasive population has reached `` an extremely high '' and uncontrollable level .naturally , that level occurs prior to the blow up time .therefore , as approaches infinity in finite time , it passes through every conceivable `` high '' level .the blow up time , , is viewed as the `` disaster '' time .we investigate mechanisms that attempt to lower and control the targeted population _ before time . this modeling approach has distinct advantages : 1 .there is no ambiguity as to what is a disastrous high level of population .2 . there is a clear demarcation between when or if the disaster occurs .3 . the controls that are proposed do not rely on a direct attack on the invasive species , as is the traditional approach , rather we attempt to control the food source of .this will avoid possible nontarget effects .the model provides a useful predictive tool that can be tuned and established through data , in various ecological settings .mathematical models are advantageous in many situations due to their cost - effectiveness and versatility .of course obtaining an analytical solution for a nonlinear model is virtually impossible , outside of special cases .however , a numerical approximation can be developed to accurately investigate the role and effect our controls have on the blow up behavior .suppose an invasive species has invaded a certain habitat and it has become the top predator in a three species food chain .hence , predates on a middle predator , which in turn predates on a prey .a temporal model is given for the species interaction , namely the spatial dependence is included via diffusion , defined on .here , and is the one or two dimensional laplacian operator .we define to be the spatial coordinate vector in one or two dimensions .the parameters , and are positive diffusion coefficients .neumann boundary conditions are specified on the boundary .the initial populations are given as are assumed to be nonnegative and uniformly bounded on .there are various parameters in the model : , and are all positive constants .their definitions are as follows : is the growth rate of prey ; measures the rate at which dies out when there is no to prey on and no ; is the maximum value that the per - capita rate can attain ; and measure the level of protection provided by the environment to the prey ; is a measure of the competition among prey , ; is the value of at which its per capita removal rate becomes ; represents the loss in due to the lack of its favorite food , ; 
describes the growth rate of via sexual reproduction .these models offer rich dynamics and were originally proposed in in order to explain why chaos has rarely been observed in natural populations of three interacting species .the model stems from the leslie - gower formulation and considers interactions between a generalist top predator , specialist middle predator and prey .the study of these models have generated much research .an interesting fact is if the spatially independent and spatially dependent models are easily seen to blow up in finite time .the spatially dependent model offers further rich dynamics , in particular the possibility of turing instabilities and non turing patterns . nevertheless , to avoid blow upit appears that one must restrict , where .this was established for the spatially independent model in and is offered here for convenience : [ thm : aziz ] consider the model - . under the assumption that all non - negative solutions ( i.e. solutions initiating in ) of - are uniformly bounded forward in time and they eventually enter an attracting set .we have recently shown the above result to be _ false _ in the ode and pde cases .that is , - may blow up in finite time , even under provided the initial data is large enough .it is clear that even if is maintained can blow up .this becomes more evident if we consider the coefficient on and the fact that if the fecundity of is large compared to then blow up will occur .this may happen in situations where : 1 .there is an abundance of , 2 . possesses certain abilities to out compete other species , and harvest enough , 3 .the environment has turned favorable for and unfavorable for its natural enemies or competitors .thus it can harvest unchecked .if no intervention is made we can envision growing to disastrous levels with adequate initial resources .the blow up time is viewed as the point of disaster in an ecosystem .thus one is interested in controlling via the use of biological controls , before the critical time .these observations motivate an interesting question .assume that both - and its spatially explicit form - blow up in finite time for for a given initial condition .can we modify - , via introducing certain controls , so that now _ there is no blow up _, for the same initial condition ?it is clear that controlling the population of an invasive specie is advantageous , and most often necessary .however the avenues for which this is possible , whilst avoiding non - target effects , is not clear . here, we propose new controls that may delay or even remove blow - up in the invasive population .we refer to these controls as ecological damping " , akin to damping forces such as friction in physical systems , that add stability to a system .the crux of our idea is to use prey refuges , in conjunction with role reversal and overcrowding effects .there is a vast literature on prey refuge , spatial refuges as well as role reversal in the literature .the interested reader is referred to . however , to the best of our knowledge these have not been proposed as control mechanisms for invasive species . in the one dimensional case .this function decreases monotonically throughout its domain .the range is . if then the prey is protected while if then is unprotected ., title="fig : " ] we consider modeling a prey refuge .consider the continuous function .the region where or sufficiently close to one is defined as a _ prey refuge domain _ or _patch_. 
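writing the refuge profile as $p(x)$, one convenient monotone choice consistent with fig. [fig:refugeplot] is the smoothed step
\[
p(x) \;=\; \tfrac{1}{2}\Big[ 1 - \tanh\big( k \, ( x - x_{r} ) \big) \Big] ,
\]
where $x_{r}$ marks the refuge boundary and $k > 0$ sets the sharpness of the transition: $p(x) \approx 1$ (protected) well inside the refuge and $p(x) \approx 0$ (open area) outside it. this particular functional form is an illustrative assumption, not necessarily the one used in the simulations below.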
we call the region where or sufficiently close to zero an open area , this is the region where can predate on . in figure[ fig : refugeplot ] we see a sharp gradient between the prey refuge domain and the open area .the inclusion of a prey refuge influences the equations for and , namely , posed on a bounded domain in one or two dimensions .neumann boundary conditions are specified . the equation for be found in .the introduction of the prey refuge creates regions where it is impossible for to predate on .notice , that if the entire spatial domain is considered a prey refuge , that is , , then the coefficient of will depend on the sign of .if this is negative , then the invasive species dies off .likewise , in the absence of a prey refuge , that is , the equations for and collapse to our previous ones , that is , and , respectively .in such a case , it is know that blow up may still occur for sufficiently large enough data even if .this is because the coefficient of may still be positive , as one always has . the introduction of the refuge _ forces _ the coefficient of to change sign between the refuge and the open area . in the literaturesuch problems are referred to as indefinite parabolic problems .the word _ indefinite_ refers to the sign of the coefficient being indefinite . although there is a vast literature on such problems for single species models , there is far less work for systems .there is also a large amount of literature for such switching mechanisms incorporated to understand competitive systems , particularly in the vein of human economic progress .this indefinite parabolic problem motivates a collection of questions : if blow up occurs for particular parameters in the case , will a prey refuge prevent blow up ?how does this depend on the geometry of the refuge ?what is the critical size or shape of a refuge that prevents blow up from occurring ? does this depend on the size of the initial condition ?in the situation that blow up still persists , how is the blow up time affected ?are these results influenced by multiple refuges ? in either case, we make the conjecture : [ c1 ] consider the three species food chain model - , a set of parameters with , and an initial condition such that which is the solution to , blows up in finite time , that is there exists a patch , s.t for any single patch of measure greater than or equal to , the modified model - , with the same parameter set and initial condition , has globally existing solutions . in particular the solution to , does not blow up in finite time .a proof of this conjecture can be established for certain special cases in one dimesion .assume - blows up at time , for a certain parameter set and an initial condition , such that . we consider now introducing a refuge at the right end , starting at some positive .now for ] .thus the modified model is equivalent to , \\mbox{where } \ a < b < \pi,\ ] ] and and . herewe assume the initial condition satisfies .let us now compare to , \\mbox{where } \ a < b < \pi.\ ] ] the solution of the above is a supersolution to .however we know that there exists small data solutions to .this is easily seen via following the methods in . basically without loss of generalitywe may assume .we then multiply by and integrate by parts in to obtain thus we now define the functional since we obtain now we multiply by and integrate integrate by parts in to obtain this yields so if then and so this essentially yields setting .we derive the following differential inequality . 
andso which means the solutions exist for , where and then the solution blows up at time .this means that if , then the initial data is actually small enough to ensure globally existing solutions .what is required is this criteria can always be obtained for large enough refuge , that is , for small enough .since the boundary terms cancel , the norm here is equivalent to the norm , hence by sobolev embedding we have now since , for chosen small enough we obtain thus combining and we obtain this proves a particular case of conjecture [ c1 ] . in high population density areasa species should have greater dispersal in order to better assimilate available resources and avoid crowding effects such as increased intraspecific competition .these effects can be modeled via an overcrowding term , that has not been included in mathematical models of biological control .consider the improved mathematical model that includes an overcrowding effect of the invasive species , namely , with initial and boundary conditions as before .again , the equation for remains unchanged .the addition of represents a severe penalty on local overcrowding .this is interpreted as movement from high towards low concentrations of , directly proportional to .hence , attempts to avoid overcrowding and disperses toward lower concentrations .such models have been under intense investigation recently and are referred to as cross - diffusion and self - diffusion systems .the mathematical analysis of such models is notoriously difficult .we limit ourselves to the one - dimensional case in the forthcoming analysis and its numerical approximations .therefore the one dimensional laplacian is considered in .the presence of blow up is not affected if we maintain neumann boundary conditions at the boundaries .this can be seen in a straightforward manner .let us assume the classical model , that is - , blows up .therefore , without loss of generality , there exists a such that .this implies that if we set , then , leading to the blow up of .now , consider the integration over of . since the overcrowding term integrates to zero then blow up still persists .however , a combination of a prey refuge with the overcrowding effect may prevent blow up .further , we expect it takes a smaller refuge to accomplish the removal of blow up .this is precisely the following conjecture : [ c2 ] consider the three species food chain model - , a set of parameters , such that , and an initial condition such that which is the solution to , blows up in finite time , that is there exists a patch , and an overcrowding coefficient , s.t for any single patch of measure greater than or equal to , the modified model - , with the same parameter set and initial condition , has globally existing solutions . 
in particular the solution to , does not blow up in finite time .furthermore , , where is the patch found in conjecture .this is not difficult to see , as multiplying through by and integrating by parts yields so if then and so thus even if one has negative energy , so that blows - up , due to the presence of the positive term in , will not blow up for small data , thus yielding a global solution .thus there are initial data for which blow ups but does not , and so a smaller refuge would work in this case .the method follows by mimicing the steps in - , with the additional term .the prey refuge and overcrowding are included in the one - dimensional mathematical model .we also include a role - reversal of within the protection zone of the refuge .this models the scenario in which two species may prey on each other in various regions where it is advantageous .hence , the role - reversal of will compete with the invasive species .figure [ fig : refugeplotrolereversal ] depicts the scenario of a one dimensional prey refuge for which outside the protection zone both and may predate on .hence , the model we propose of this scenario is given below , in light of this model we propose conjecture [ c3 ] : [ c3 ] consider the three species food chain model - , a set of parameters such that , and an initial condition such that which is the solution to , blows up in finite time , that is there exists a patch such that the modified model - , with the same parameter set and initial condition , has globally existing solutions . in particular , the solution to does not blow up in finite time .furthermore , , where is the patch found in conjecture .all three conjectures are tested numerically in section [ sec : numresults ] .the nonnegativity of the solutions of - is preserved by application of standard results on invariant regions .this is due to to the reaction terms being quasi - positive , that is , since the reaction terms are continuously differentiable on , then for any initial data in or , it is easy to directly check their lipschitz continuity on bounded subsets of the domain of a fractional power of the operator , where the three dimensional identity matrix , is the laplacian operator and denotes the transposition . under these assumptions ,the following local existence result is well known ( see ) .[ prop : ls ] the system - admits a unique , classical solution on . if then [ lem : gronwall ] let , and be nonnegative functions in .assume that is absolutely continuous on and the following differential inequality is satisfied : if there exists a finite time and some such that for any , where , and are some positive constants , then in this section we improve the global existence conditions that has been derived recently in . here, we provide the previous result , theorem [ thm : wsol ] .[ thm : wsol ] consider the three - species food - chain model described by - . 
for any initial data , such that , and parameters such that , there exists a global classical solution to the system .we now give an improved result for the ode case , which is easily modified for pde case .in essence , given any initial data ( however large ) , there is global solution , for appropriately large .this is summarised in following lemma , [ lem : wsol ] consider the three - species food - chain model described by - .as long as , given any initial data , however large , there exists a global solution to the system as long as the parameter is s.t consider the following subsystem the here grows faster than the determined by .similarly , the here blows up faster than in . in order to drive down below , we use the exact solution to , that is , assuming that , then for to be below implies that thus when , we shall have now blows up at time .so if we choose appropriately , we can make .this is done by choosing thus if is s.t , then , before blows up .therefore , if we consider the determined by then this will certainly not blown up by time . at this point however , is negative , so in , can not blow up after , and will decay . in this latter casethe global attractor is a very simple , .hence , we have an extinction state for . in short, we see that for any initial condition that is specified , there exists an ( depending on the initial condition ) , s.t - has a global solution .we recap the following estimate from therefore , there exists time given explicitly by such that , for all , the following estimate holds uniformly : here is the compactification time of .we also recall the following local in time integral estimate for , from now we move to the estimate for the component .this can not be obtained directly .thus we use the grouping method again by multiplying by and by , adding the two together , and setting , we obtain : we add a convenient zero , , to to obtain by multiplying by , integrating by parts over , and keeping in mind the positivity of the solutions , we find that this yields such that now via estimates in we obtain for where is explicitly derived in , which implies that notice by multiplying by and integrating by parts one obtains , integrating the above on ] s.t in addition , we recap the uniform estimates found in , latexmath:[\[\label{eq : x1-h11 } compactification time of the solution .we next need to derive estimates for the component .this is tricky , since when we multiply by , integrate by parts , and then apply young s inequality with epsilon we obtain however , there is no global estimate on , as it may possibly blow up , but we know that there is always a local solution on ] yields , for .note , lemma [ lem : wsol ] requires us to manipulate , in order to prove large data global existence . from a practical point of viewthis is not always easy as manipulating the death rate directly may not be condusive to realistic control strategies .we next provide the following improved result that is valid even if , without manipulating .[ thm : wsolimproved ] consider the three - species food - chain model described by - . 
for certain initial data ,there exists a global classical solution to the system , even if , as long as the parameters are such that , by the absorption time of ( denoted ) , if satisfies , where is the blow up time of the following pde with the same initial and boundary conditions as in order to show that blow up can be avoided as stated above , even in our estimates via must be improved .this is because the embedding of is lost in .thus , if we are to control the norm in we need control .this is achieved in the next subsection .we will estimate the norms via the following procedure , we rewrite as we square both sides of the equation and integrate by parts over to obtain this result follows by the embedding of .therefore , we obtain due to classical local in time regularity results , see [ prop : ls ] , the solutions are ) , thus can not escape to , and is bounded below , upto the existence time .thus we obtain these constants may depend on since even though can not escape to in finite time the case of it decaying like is not precluded .we now make uniform in time estimates of the higher order terms .integrating the estimates of in the time interval ] .next , consider the gradient of .following the same technique as in deriving we obtain for the left hand side this follows via the boundary condition .thus we have this follows via young s inequality with epsilon , as well as the embedding of .this implies that we now use the uniform gronwall lemma with to obtain the estimate for is derived similarly , but again we assume we are on ] where , the absorbing time , to obtain similarly , an estimate for can be established , namely we now develop higher order estimates for the time derivatives .first , consider the partial derivative w.r.t of equations , multipling by , and integrate over we obtain using holder s inequality , young s inequality with epsilon , and our earlier estimates yield thus we obtain the application of the uniform gronwall lemma with gives us the following uniform bound latexmath:[\[\label{eq : uh1-t } methods applied to the equation for and yield we now show that the asymptotic state under certain parameter restrictions is an attractor with more regularity than was derived previously .essentially , we consider for which we have globally existing solutions , that is the maximal existence time for the solutions .then we consider the omega limit set for such solutions . we state the following result .[ thm : gattr ] consider the three - species food - chain model described by - . for initial conditions , such that there is a globally existing solution , there exists a parameter , ( depending on the initial condition ) , for which the omega limit set .furthermore this is an attractor with regularity .thus for suitably chosen , the attractor for such solutions is the extinction state of .we have shown that the system is well posed , under certain parameter , and data restrictions , via theorem [ thm : wsolimproved ] .thus , there exists a well - defined semigroup .also , via lemma [ lem : wsol ] we can find an such that decays to zero .the estimates via establish the existence of bounded absorbing sets in .now we rewrite the equation for a sequence as follows we aim to show the convergence of both the right hand side strongly in .due to the uniform bounds via , , and the embedding of and we obtain a subsequence stil labeled s.t as , as .similarly , as . 
thus , given a sequence that is bounded in , we know that , for , this yields the precompactness in , hence the closure in is in .a similar analysis can be done for the and components .hence the theorem is established .in this section a numerical approximation for the one and two dimensional models are developed in order to numerically demonstrate our earlier conjectures , , and in addition to exploring the rich dynamics the mathematical models exhibit .a one - dimensional spectral - collocation method is developed to approximate the one dimensional equations - which include overcrowding , refuges , and role - reversal .we then offer a second order finite difference approximation of the two - dimensional equations , , and that investigate the effect of a refuge .overcrowding and role - reversal effects are not investigated in the latter numerical approximation .the two dimensional approximation employs a peaceman - rachford operator splitting and techniques developed by one of the authors in .this approach takes advantage of the sparsity and structure of the underlying matrices and the known computational efficiency of operator splitting methods . without loss of generality the domainis scaled and translated to and in one and two - dimensions , respectively .we develop a spectral - collocation approximation of - .these equations can be written compactly as where and is the differential operator that includes the laplacian operator and reactive terms .the spatial approximation is constructed from a chebychev collocation scheme .the spatial approximation is constructed as a linear combination of the interpolating splines on the gauss - lobatto quadrature .the resulting system is then integrated in time using an implicit scheme , in particular a second order adams - moulton method .the chebychev collocation approximation allows for a high order spatial approximation .this offers the ability to capture fine resolution details with a relatively smaller number of degrees of freedom .however , there is a downside .the resulting matrices are full and ill conditioned .this is problematic for the inversion in the resulting linear solve .nevertheless , the spectrum associated with the second derivative operator is real , negative , and grows in magnitude like .the collocation scheme is constructed on the gauss - lobatto abscissa with respect to the chebychev weight , where .an approximation is then constructed as a linear combination of the lagrange interpolates on the abscissa , where is an approximation to the unknown populations and is a vector of fourier coefficients .the basis functions are defined by and each is a polynomial of degree .the basis functions also satisfy .this is used to develop an approximation to our differential operator and a system of equations is constructed via a discrete inner product , where is the projection operator onto the space of polynomials of degree .the discrete inner product is given below , and it is based on an inner product defined by the integral can be approximated as a sum over the gauss - lobatto quadrature for the chebychev weight and is exact for polynomials up to degree , the resulting norm results in an equivalent norm for polynomials up to degree .the abscissa , , are the same as those defined in equation , and the are the quadrature weights .the resulting numerical approximation is then constructed using this finite inner product the resulting approximation is an extension of that given in and is constructed using a galerkin approach , the basis 
functions , , are lagrange interpolants , and after substitution of the definition in equation ( [ eqn : numerics : approximationsum ] ) the result can be greatly simplified , where .the adams - moulton method used to advance the initial value problem in time necessitates solving a nonlinear system of equations at each time step .this is achieved through a newton - raphson method .the ensuing linearized problem in the newton - raphson method is solved through a restarted gmres . a second order approximation of , , and is developed . while the spectral - galerkin appoximation of the previous section may be extended to two dimensions , the computational cost of the nonlinear solve is troublesome due to the sensitivity of the matrices and the lack of their sparsity . here, the approximation is still of high order , while the underlying matrices are block tridiagonal or tridiagonal with diagonal blocks .this enables direct inversion techniques , in particular the thomas algorithm .the governing equations are written compactly as where , are the reactive terms that depend on space , time , and the species and , , is taken component - wise , and is a diagonal matrix with nonnegative entries and from semigroup theory the formal solution is where is the evolution operator associated with .a suitable quadrature is used to approximate the integral . here, a second order trapezoidal rule is used , that is , this motivates the implicit method , where . the exponential is approximated through a peaceman - rachford operator .this creates the second order implicit method , where is the variable time step , , and .the last reactive term does require the solution at the next time step . to avoid a required nonlinear solve, we use a first order approximation to avoid this .this simplification maintains the second order accuracy of the approximation .we write the solution in an alternative form , this can be conveniently solved through adi procedures , that is , by splitting the problem into , we see that the first equation keeps implicit while is explicit .we then take our intermediate solution , , and solve the second equation keeping implicit and explicit .now , the spatial operators may be approximated through the two - dimensional chebyshev approximation similar to that of the previous section , however we choose second order central differences .let and , where , , for , and let and be the second order approximations to the operators and . the approximation is utilized throughout the entire two - dimensional domain . at the boundary , we require an equation for at _ ghost _ points , that is locations of and . these are established using central difference approximations to the neumann boundary conditions .we then substitute these back into the system of equations .this maintains the second order accuracy and the tridiagonal structure of the equations . at each step , the tridiagonal equations in the adi method are directly solved using the thomas algorithm which comes at a expense of .we numerically demonstrate that there is good evidence for conjectures , , and . in the one dimensionalsetting , we translate and scale the domain to and use a refuge function of the refuge size is delineated by the value since that is the location where the gradient is steepest . 
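a minimal sketch of the kind of tridiagonal (thomas) solve performed along each grid line in the adi sweep is given below. the grid size, time step and diffusion coefficient are illustrative placeholders rather than the values used in our simulations, and the neumann boundaries are folded in through ghost points as described above.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b, super-diagonal c,
    right-hand side d) in O(n) operations; a[0] and c[-1] are unused."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# One implicit half-step of a 1-D operator (I - 0.5*dt*delta*D_xx) u = rhs,
# with homogeneous Neumann conditions folded in via ghost points, as a
# stand-in for the line-by-line tridiagonal solves in the ADI sweep.
# All numerical values below are placeholders for illustration only.
n, dt, delta = 64, 1e-3, 1.0
h = 1.0 / (n - 1)
r = 0.5 * dt * delta / h**2
a = np.full(n, -r); b = np.full(n, 1 + 2 * r); c = np.full(n, -r)
c[0], a[-1] = -2 * r, -2 * r      # reflecting (Neumann) boundaries via ghost points
rhs = np.random.rand(n)
u_half = thomas_solve(a, b, c, rhs)
```

the forward elimination and back substitution make each one-dimensional solve linear in the number of unknowns, which is what keeps the operator-splitting step cheap compared with a full two-dimensional solve.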
as expected, the blow up time is affected by the presence of a refuge. for instance, for a fixed parameter set we look at the effect on the blow up time with a refuge, a refuge with role reversal, and role reversal and overcrowding, as compared to the classical model. the results are shown in figure [numblowupvsrefuge]. interestingly, in the modified model with role reversal, it is found that blow up does not occur when or in the spatial domain. we also see that the blow up times are influenced by the size of the refuge. in fact, there is a critical refuge size for which, given any refuge size greater than , the solutions will not blow up. this is evidenced in figure [numblowupvsrefuge] by the steep gradient of the curves around . therefore, the population of the invasive species can be controlled. in situations where there is only a spatial refuge the blow up time curve is always increasing. this is consistent with our two dimensional results. it is clear that there exists a critical refuge size for which refuges larger than this will prevent blow up. in the one-dimensional model, without role reversal or overcrowding effects, we investigate the effect of the critical refuge size versus the size of the initial condition of while maintaining the other initial conditions and parameter values. we use the same parameters as given in figure [numblowupvsrefuge] and vary the uniform initial condition on . interestingly, figure [num:refugevsic] shows a logarithmic dependency of the critical refuge size to prevent blow-up of on the initial condition size of . [figure caption fragment: for which blow up in the invasive species population will occur; identical parameters and resolution are used as in figure [numblowupvsrefuge]; the domain was translated and scaled to .] the size of the spatial refuge has an influence on the blow up time. however, the location and number of spatial refuges also influence the blow up time. in fact, blow up is sometimes eliminated depending on the initial condition, the refuges, and their locations. for instance, if we consider a uniform initial condition in and compare the blow up time for a single refuge of width located near the boundary versus the case of evenly splitting this refuge, then we find the blow up time is increased. this delay is amplified if the two refuges are farther apart. of course, if we consider a different initial condition then blow up may not just be delayed but removed altogether. for instance, if we consider a concentrated initial population of for which the highest concentration of contains the spatial refuge, then blow up can be eliminated. hence, the concentration of inside the refuge may protect the species enough so that the population of decays sufficiently to avoid blow up in its population. [figure caption fragment: , that is , is shown versus time; with a circular refuge, we see that blow up is avoided; this provides experimental evidence that the location and size of the refuge is important to avoid blow up in the invasive population; the dashed line represents the population in the case where there is no spatial refuge; the simulations were done on a uniform grid with a temporal step of .] to illustrate this, consider the two-dimensional model , , and with an initial condition of with parameters , , , , , , , , , , , , . clearly, the coefficient . if is zero throughout the entire spatial domain there exists blow up in the population .
however , if we have a circular refuge such that , for then blow up is avoided .figure [ rpopulationavoided ] shows the total population versus time .we can see the population starts to increase rapidly , but the increase is attenuated as a result of the spatial refuge .hence , the population growth is not sustained and begins to decrease .hence , the size * and * location of the refuge and the initial condition play a delicate balance in preventing blow up . in two dimensionsif we choose a particular shape of the spatial refuge it was conjectured that there exists a critical size for which blow up is avoided . here ,we look at two situations : a square and circular refuges with increasing size centered in the middle of the spatial domain . we let inside the refuge while outside .this clearly delineates the protection zones . for a fixed parameter regime and initial conditions of determine the critical refuge size .each calculation is carried out with a grid .the temporal step was fixed at .the parameters used : , , , , , , , , , , , .it is found that for a square the critical refuge area is roughly of the spatial domain .the critical refuge area for the circular refuge is approximately , roughly of the spatial domain .in this section we shall investigate the effects of overcrowding in the absence of refuges and role reversal , in the classical model .therefore , we focus on whether an appropriate choice of can induce turing instabilities . it is shown in that diffusion processes can destabilize the homogenous steady state solution when . for convenience ,we restate the one - dimensional model given in equations , , and , consider the linearization of - about the positive interior equilibrium point where , is the the spatially homogenous steady state solution , and ( see ) . for instance , for the parameters given at the beginning of this section and . consider a small space time perturbation , that is , with as the positive interior equilibrium point given as and . substituting and collecting linear terms of order , we obtain where is the diffusion matrix and is the jacobian matrix associated with the ordinary differential equation part of model - .let where is the spatial coordinate in , is the amplitude , is the eigenvalues associated with the interior equilibrium point , and is the wave number of the solution . upon substituting ,we obtain the characteristic equation is a identity matrix .the sign of indicates the stability , or lack thereof , of the equilibrium point .the dispersion relation is the coefficients of are determined by expanding , namely , to check for stability of the equilibrium solution , we use the routh hurwitz criterion .this state that for to be stable we need contradicting either of these statements ensures instability for . finally , for diffusion to cause a turing instability it is sufficient to require that around the equilibrium point we have we refer the reader to a well detailed analysis of this in .hence , for a turing instability to occur , we require that is satisfied when ( without diffusion ) and at least one of the equations in changes sign when ( with diffusion ) . 
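the sketch below illustrates how the routh-hurwitz and turing conditions just stated can be checked numerically; the jacobian and diffusion entries are generic placeholders and not the values of our model.

```python
import numpy as np

# J is a placeholder 3x3 Jacobian of the reaction terms at the interior
# equilibrium (NOT the actual entries of the model); D holds illustrative
# diffusion coefficients, the last one standing in for the invasive species.
J = np.array([[-0.5, -0.3,  0.0],
              [ 0.4, -0.1, -0.2],
              [ 0.0,  0.3, -0.05]])
D = np.diag([1.0, 1.0, 5.0])

def routh_hurwitz(M):
    """Coefficients (a1, a2, a3) of det(l*I - M) = l^3 + a1 l^2 + a2 l + a3
    and the Routh-Hurwitz tests a1 > 0, a3 > 0, a1*a2 - a3 > 0."""
    a1 = -np.trace(M)
    a2 = 0.5 * (np.trace(M) ** 2 - np.trace(M @ M))
    a3 = -np.linalg.det(M)
    return (a1, a2, a3), (a1 > 0, a3 > 0, a1 * a2 - a3 > 0)

# Without diffusion (k = 0) the equilibrium should pass all three tests ...
print(routh_hurwitz(J))

# ... and a diffusion-driven (Turing) instability requires some wavenumber k
# at which an eigenvalue of J - k^2 D crosses into the right half plane.
ks = np.linspace(0.0, 3.0, 61)
growth = [np.max(np.linalg.eigvals(J - k**2 * D).real) for k in ks]
k_star = ks[int(np.argmax(growth))]
print(f"max Re(lambda) over k: {max(growth):.3f} at k = {k_star:.2f}")
```

a positive maximal real part at some nonzero wavenumber, together with stability of the diffusion-free system, would signal a diffusion-driven instability of the homogeneous steady state.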
by thiswe consider letting become negative when and satisfy when .this suggests that spatial patterns should be observed .our parameter search has not yielded a parameter set for which holds while when , at least one of these inequalities changes sign .thus we can not conclusively say that will induce or inhibit turing pattern .however , it is conclusive that certainly has an effect on the type of patterns that do form . in particular, the patterns fall into two types : spatial patterns and spatio - temporal patterns . the conditions for which type forms is succinctly described via table [ table : pa ] ..conditions on the signs of coefficients for different types of patterns .[ cols="^,^,^,^,^",options="header " , ] now when this clearly changes the spatio - temporal pattern , as evidenced by the sign changes from case 1 to 2 in table [ table : pa ] . in this situation the dispersion relation is shown in figure [ dispersionplot2 ] .the resulting patterns are shown figure [ spatiotemporalpatternsuvr](a)-(f ) .resembling the two cases in table [ table : pa ] .notice that when a band of wavenumbers are unstable . ]the long - time simulation yields ( a)-(c ) turing patterns ( ) , that are spatio - temporal , and ( d)-(f ) stripe turing patterns ( ) , which are purely spatial .the other parameters are : , grid points are used with a temporal step size of . ] if then - reduces to a model similar to the classical predator - prey model with a holling type ii functional response , for which we know there can not occur turing instability .there is one caveat , the death rate in now becomes nonlinear .so essentially the equations are such systems have been investigated with and without cross and self diffusion . in the regular diffusion case, it is known that a turing instability can exist , where the nonlinearity in death rate is due to cannibalism . however , to our knowledge , for the specific form of the death rate as above this is not yet known .thus we see that the classical model - can exhibit spatio - temporal patterns , apart from just the spatial patterns that were uncovered in .furthermore addition of the overcrowding term in - , can cause the spatio - temporal patterns to change into a purely spatial patterns .it is noted that the inclusion of overcrowding with a nonlinear death rate , such as in , can lead to turing instability in the classical predator - prey model with holling type ii response .the goal of this section is to investigate spatio - temporal chaos in the classical model - .spatio - temporal chaos is usually defined as deterministic dynamics in spatially extended systems that are characterized by an apparent randomness in space and time .there is a large literature on spatio - temporal chaos in pde , in particular there has been a recent interest on spatially extended systems in ecology exhibiting spatio - temporal chaos .however , most of these works are on two species models , and there is not much literature in the three - species case . in showed diffusion induced temporal chaos in - , as well as spatial chaos when the domain is enlarged . 
in , various non-turing patterns were uncovered in - , in the case of equal diffusion, but spatio-temporal chaos was not confirmed. note that the appearance of a jagged structure in the species density, as seen in , which seems to change in time in an irregular way, does not necessarily mean that the dynamics is chaotic. one rigorous definition of chaos means sensitivity to initial conditions. thus two initial distributions, close together, should yield an exponentially growing difference in the species distribution at a later time. in order to confirm this in - , we perform a number of tests as in . we run - from a number of different initial conditions that are the same modulo a small perturbation. the parameter set is chosen as in . we then look at the difference of the two densities, at each time step, in both the and norms. thus we solve - with the following parameter set: , thus the steady state solution for the ode system is . the simulations use two different (but close together in norms) initial conditions. the first simulation (which we call ) is a perturbation of by . the second simulation (which we call ) is a perturbation of by . the densities of the species are calculated up to the time . at each time step in the simulation we compute , where are used. then is plotted on a log scale. in doing so, we observe the exponential growth of the error. this grows at an approximate rate of . since this is positive, it is an indicator of spatio-temporal chaos. these numerical tests provide experimental evidence of the presence of spatio-temporal chaos in the classical model - . figure [contourchaos] shows the densities of the populations in the -plane, while figure [contourchaoserror] gives the error and its logarithm till . [figure caption fragment: densities in the -plane for , , and from left to right; the long-time simulation yields spatio-temporal chaotic patterns; points are used with a temporal step size of .] [figure caption fragment: the error and its logarithm in the norm for the species , under slightly different initial conditions, are shown; the difference is seen to grow at an exponential rate of approximately ; comparable results were found for the and norms; these tests provide experimental evidence of the presence of spatio-temporal chaos in the classical model - .] in this work we have proposed and investigated a new model for the control of an invasive population. an invasive species population is said to reach catastrophic population levels when its population reaches a particular threshold. the mathematical model uses the mathematical construct of finite time blow up, which enables the model to examine the effect of controls for any particular threshold, especially since this level depends on the application.
in effect, this construct demonstrates that if the invasive population has large enough numbers initially , it can grow to explosive levels in finite time : thus wreaking havoc on the ecosystem .hence , we are interested in the influence certain controls will have on the invasive population and if its population may be reduced below disastrous levels .this formulation yields a clear mathematical problem : assume that - blows - up in finite time for for a given initial condition .are there controls and features of the model that we can include to modify - so that now _ there is no blow up _ in the invasive population given the same initial condition ?this paper addresses this question , suggesting clear controls and improvements to the mathematical model .we then investigate these improvements numerically and theoretically . in traditional practice ,biological control works on the enemy release hypothesis . that is releasing an enemy of the invasive species into the ecosystem will lead to a decrease in the invasive population .however , non - target effects are prevalent , hence this approach is problematic and may create even more devastating impacts on the ecosystem . here, we propose controls that do not use the release of biological agents .in particular , we introduce spatial refuges or safe zones for the primary food source of the invasive species .mathematically , this transforms - into an indefinite problem , that is , where the sign of the coefficient of switches between inside and outside of the refuge .we demonstrated numerically and in analytically , with some assumptions , that this control can prevent blow up and drive the invasive population down .our numerical experiments suggests that there is a delicate balance between the size and location of the refuge and the initial condition .in particular , we revealed a logarithmic dependence on the size of the initial condition for versus the critical refuge size to prevent blow - up of , see figure [ num : refugevsic ] .clearly , the balance is even more pronounced for multiple refuges and in higher dimensions .we also improved the mathematical model by incorporating overcrowding effects , which also may be used a control .we also examined the situation where a species may switch its primary food source based in regions of the domain , that is , a prey may switch to a predator , hence their roles are reversed .this models scenarios where influences on the landscape provide a competitive advantages to certain species . both , role - reversal and overcrowding act as damping mechanisms and also may prevent blow - up in the invasive population . in particular , smaller refuge sizes in conjunction with role - reversal and overcrowding are required to prevent blow up . of course , how does one enforce overcrowding in an ecosystem such that we obtain this desired effect ?can one devise mechanisms to facilitate this dispersal of population ?we suggest one approach .suppose we create a lure , such as a pheromone trap , that is placed in the refuge .this would lure the invasive species into the patch , where its growth would be controlled .of course , the species would eventually exit the refuge in search for a higher concentration of food . hence , in the future we plan to include spatially dependent diffusion constants to model this situation . 
in our mathematical models we confirmed spatio - temporal chaos .we also see that overcrowding can effect the sorts of turing patterns that might form .since , environmental effects are inherently stochastic , part of our future investigations introduces stochasticity into the model .it is not known what effect this will have on the spatio - temporal chaos or turing patterns that may emerge .it is clear that our new mathematical modeling constructs and results are useful analytical and numerical tool for scientists interested in control of invasive species .moreover , the results motivate and encourage numerous avenues of future exploration , many of which are currently under study and will be presented in future papers .we would like to acknowledge very helpful conversations with professor pavol quittner , professor philippe souplet and professor joseph shomberg , as pertains to the analysis of indefinite parabolic problems , as well as finite time blow - up in the superlinear parabolic problem , under various boundary conditions , and initial data restrictions .bampfylde , c.j . and lewis , m.a . , _ biological control through intraguild predation : case studies in pest control , invasive species and range expansion _ , bulletin of mathematical biology , 69 , 1031 - 1066 , 2007 .bryan , m.b . ; zalinski , d. ; filcek , k.b . ; libants , s. ; li , w. ; scribner , k.t . , _ patterns of invasion and colonization of the sea lamprey in north america as revealed by microsatellite genotypes _ , molecular ecology , 14(12 ) , 3757 - 3773 , 2005 .white , k. a. j. and gilligan , c.a . , _ spatial heterogeneity in three species , plant parasite hyperparasite , systems _ , philosophical transactions of the royal society of london .series b : biological sciences 353.1368 ( 1998 ) : 543 - 557 .loda , s.m . ,pemberton , r.w . , johnson , m.t . and follet , p.a ._ nontarget effects - the achilles heel of biological control ? retrospective analyses to reduce risk associated with biocontrol introductions ._ , annual reveiw of entomology , 48 , 365 - 396 , 2003 .parshad , r. d. , kumari , n. and kouachi , s. , _ a remark on study of a leslie - gower - type tritrophic population model [ chaos , solitons and fractals 14 ( 2002 ) 1275 - 1293 ] _ , chaos , solitons fractals , 71(2 ) , 22 - 28 , 2015 . philips , b. ; shine , r. , _ adapting to an invasive species : toxic cane toads induce morphological change in australian snakes _ , proceedings of national academy of sciencesusa , 101(49 ) , 17150 - 17155 , 2004 .ackermann , n. , bartsch , t. , kaplicky , .p and quittner , p. _ a priori bounds , nodal equilibria and connecting orbits in indefinite superlinear parabolic problems _ , transactions of the american mathematical society , 360(7 ) , 3493 - 3539 , 2008 .rodda , g.h . ; jarnevich , c.s . ; reed , r.n . , _what parts of the us mainland are climatically suitable for invasive alien pythons spreading from everglades national park ? _ , molecular ecology , 14(12 ) , 3757 - 3773 , 2005 .upadhyay , r. k. , naji , r. k. , kumari , n. , _ dynamical complexity in some ecological models : effects of toxin production by phytoplanktons _ , nonlinear analysis : modeling and control , 12(1 ) , 123 - 138 , 2007 .
|
in this work we develop and analyze a mathematical model of biological control to prevent or attenuate the explosive increase of an invasive species population in a three-species food chain. we allow for finite time blow-up in the model as a mathematical construct to mimic the explosive increase in population, enabling the species to reach "disastrous" levels in a finite time. we next propose various controls to drive down the invasive population growth and, in certain cases, eliminate blow-up. the controls avoid chemical treatments and/or natural enemy introduction, thus eliminating various non-target effects associated with such classical methods. we refer to these new controls as "ecological damping", as their inclusion dampens the invasive species population growth. further, we improve prior results on the regularity and turing instability of the three-species model that were derived in . lastly, we confirm the existence of spatio-temporal chaos. rana d. parshad, kelly black, emmanuel quansah, and matthew beauregard
|
it has often been suggested that scaling and renormalization group ideas should be helpful in the analysis of the small scales of turbulence at high reynolds numbers , and there is a vast literature in the subject but few concrete results ( see e.g. and references therein ) .one of the outstanding difficulties is that scaling ideas are usually implemented within a perturbative framework , and in the absence of a small parameter the validity of the framework remains doubtful . in an earlier paper , one of us has attempted to marry scaling with a particular numerical method ; the results could hardly be called definitive . in the current paperwe try again in the context of a spectral method . for reviews of related methods in turbulence and statistical physics ,see .related ideas have also been presented in .suppose you can represent on the computer fourier modes up to the some wave number .all the equations we work with in the present paper contain an energy cascade ; when the energy reaches wavenumber aliasing begins , energy is reflected into the longer wavelengths in ways that are not justified by the equations , and the approximation becomes invalid . on the other hand ,the characteristic time of the modes becomes shorter and one could view the amplitudes of larger modes as nearly stationary on the time scales of the smaller modes .when the energy reaches mode one could think of rescaling the computation so that small scale modes are added , large scale modes are removed from the computation because they are nearly constant , until a new cutoff is reached , and so on , thus computing in a moving window of modes and probing the large wavenumber coefficients of the flow .the problem is how to do this scaling and how to justify the results .we first explore the rescaling method in the case of burgers equation where the structure of the flow is well understood , and we show how to pick the rescaling time and how to rescale the flow ; we then verify that well - known results are reproduced .the key difficulty , as in some other numerical renormalization methods , has to do with the rescaled boundary conditions .we then apply the idea to the euler equations in 3d ; well - known results are rediscovered at low cost , attempts at settling current controversies are made , and ideas for future improvement are suggested .the paper represents work in progress ; we felt that the methodology is promising and worthy of presentation .what should be done next is discussed in the final section .we focus in particular on structure functions , which are averages of the velocity field of the form , where are points in the fluid apart , is the velocity at this points , is a power , and the brackets denote a ( spatial , temporal , or ensemble ) average .some of the key results in turbulence theory relate to these functions .the kolmogorov k41 " theory deduced that , where is a constant that depends on only .this would be an exact result if the velocity field were gaussian ; however , the velocity field in turbulence is not gaussian ( see e.g. ) .it is well - known that experiment gives exponents different than the kolmogorov values ( ) .the exponent for in particular has given rise to much controversy .the case is special because the conclusion is almost " a theorem ( ) . in recent yearsbarenblatt et al . 
have conjectured that the structure funnction exponents may be reynolds - number dependent .note that in some sense the scaling transformations here accomplish the opposite of what is usually attempted with real - space renormalization methods ( as in ) : the focus here narrows to ever smaller scales rather than expand to ever larger scales .the point is of course that in either case one uses scaling trasnformations to explore those properties of a system that are scale invariant .the paper is organized as follows .section [ algo ] contains an explanation of the construction ; section [ burg ] describes a validation in the well - understood case of the one - dimensional inviscid burgers equation . section [ eulns ] contains results about the euler equations .a discussion and ideas about futher work follow in section [ conc ] .we present the algorithm in the case of the navier - stokes equations in three space dimensions .the modifications needed for the euler equations , fewer dimensions , and in the case of the burgers equation are straightforward .consider therefore the 3d navier - stokes equations with periodic boundary conditions in the box ^ 3 ] and the initial condition , which gives rise at time to a shock wave located at .we use this fact to pick a scaling criterion and calibrate the algorithm .we use fourier modes ( positive and negative ) to resolve the solution ; we rescale and restart when we want to find the value of for which the total time approximates the known value of .we approximate the sum by ; after 45 scalings , the time spent in a cycle has shrunk down to about so that this is acceptable .we present results from a calculation with n=1024 ; the equations were solved by a runge - kutta - fehlberg method ( ) with the tolerance per unit step set to and the rule was used for dealiasing .the energy ratio restarting criterion was , which yields a total time for the 45 cycles of 1.014 , a error .the structure functions are given by \label{struct1d}\ ] ] for if the structure functions are invariant under scaling transformations , then the structure functions for the different cycles should have the same form .plotted in log - log coordinates in the original scale , the structure functions for all cycles should be translates of each other .equivalently , though each cycle operates on a scale which is half that of the previous cycle , common features in the structure functions for different cycles should appear when the structure functions for all cycles are plotted for arguments in . ] while fig.[fig_eul41 ] ) shows the thrid and fourth order structure functions with the same detail , but the results also corroborate our modest claims .we do not include the 0th cycle in the averaging of the structure functions , which can be thought of again as an equilibration step , helping to forget specifics of the initial condition .this omission is consistent with our algorithm , which aims to reveal the generic structure of regions with highest vorticity and not problem dependent parameters . as in the case of burgers, the averaged structure functions can only reveal the common features of the different cycles small scales , since the large scales features differ from cycle to cycle .we have included in the figures power laws of the form where is the order of the structure function .the figures include the power law predicted by kolmogorov s theory , i.e. 
as well as the power laws with exponents we present this envelope of power laws to show that a calculation with modes does not allow an accurate determination of the exponents . however , the range of values around the kolmogorov values that are compatible with the numerical results is not broad .this observation leads us to hope that the inertial range exponents for the euler equation can be calculated accurately when , in the future , we use our method with a larger number of fourier modes ( see also discussion in section [ conc ] ) . for the sake of completeness ,we include in figure [ fig_eul7 ] log - log plots and the corresponding least squares fits for the averaged structure functions of order 2 - 4 .we see that the slopes of the fits are within the envelope of power laws shown in figures [ fig_eul4],[fig_eul41 ] .in particular , we obtain the slope for the second order , for the third and for the fourth . the result for the fourth order structure functionis marginally inside the envelope presented in figure [ fig_eul41]b , but this is to be expected .as one goes to higher order moments , the inadequacy of the resolution results in the envelope of power exponents to broaden .this fact prohibits us to make any stronger claims at this moment .figure [ fig_eul42 ] shows the longitudinal and transverse second order structure functions .if the flow is isotropic and a power law behavior holds for the second order structure function , then the longitudinal and transverse second structure functions should exhibit the same exponent , but with a different prefactor .experiments for high reynolds numbers ( e.g. ) show that there is some discrepancy between the exponents of the longitudinal and transverse structure functions .this discrepancy is usually attributed to the the fact that perfect isotropy can not be obtained in an experiment . in our numerical experimentswe see that the longitudinal and transverse second structure functions do exhibit the same scaling behavior for small ( to within the accuracy afforded by the numerics ) .finally , figure [ fig_eul6 ] is a log - log plot of the second order structure function for different cycles ; as before , the structure functions for all the cycles was converted to the original spatial scale .suppose that the structure functions for the different cycles ( translated to the original scale ) are if the structure function exhibits power law behavior , then the structure functions for the different cycles should be given by where is the power law that holds across all scales . to check that , plot in log - log coordinates for , ] in log - log coordinates for $ ] etc .if the same scaling law holds across the cycles , the plots for the different cycles are parallel to one another , at a distance apart . for the numerical experiments with the euler equations , this is not exactly sothe reason is , that in three dimensions there is no longer a localized singular structure ( like the shocks in one dimension ) .thus , tracking of only the point of highest vorticity ( and considering the inevitable inaccuracy of the numerical calculations ) can result in jumping around different points of the singular structure .these different points can exhibit the same scaling behavior but with different numerical prefactors .this leads to the log - log plot of the second structure function being divided into clusters of cycles with different heights. 
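as an illustration of how the exponents quoted above are extracted, the following sketch computes the structure functions of order p = 2, 3, 4 from a sampled one-dimensional velocity field and fits their log-log slopes by least squares; the signal used here is synthetic placeholder data, not the output of the rescaled spectral computation.

```python
import numpy as np

# Placeholder velocity sample on a uniform grid (stands in for the field
# reconstructed from the Fourier coefficients of a given cycle).
rng = np.random.default_rng(0)
n = 1024
u = np.cumsum(rng.standard_normal(n))
u -= u.mean()

def structure_function(u, p, seps):
    """S_p(r) = < |u(x + r) - u(x)|^p >, averaged over x with periodic wrap."""
    return np.array([np.mean(np.abs(np.roll(u, -r) - u) ** p) for r in seps])

seps = np.arange(1, 65)                      # separations in grid units
for p in (2, 3, 4):
    Sp = structure_function(u, p, seps)
    # least-squares slope of log S_p versus log r over the small-separation range
    slope = np.polyfit(np.log(seps[:16]), np.log(Sp[:16]), 1)[0]
    print(f"p = {p}: fitted exponent zeta_{p} ~ {slope:.2f}")
```

in the actual computation the average is taken over the cycles as described above, and the fitting window is restricted to the small separations where the cycles share a common structure.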
however , the ranges of heights of the various clusters are in the same order of magnitude .we have presented an algorithm that combines successive scaling with a spectral method in an attempt to probe the small scale structure of turbulence .we addressed the controversial issue of finite time blow - up for the solution of the euler equations starting from the taylor - green vortex initial condition .we find that the behavior is consistent with a finite time blow - up of the vorticity while not being in any way desicive .we used the algorithm to compute low order structure functions . for small distances , the structure functions exhibit power - law behavior . for the euler equations , the kolmogorov estimates of the power - law exponents are compatible with our results but we can not conclude whether there exist corrections to the kolmogorov estimates . even though our calculations allow us to probe very fine scales , we need higher resolution , e.g. or fourier modes .such resolutions should be feasible through parallelization and we expect to report on such calculations in the future .the merit of the algorithm is that it reduces the resolution needed to decide the values of the exponents from something astronomical to something merely very difficult .before we attempt such calculations we expect to improve our constructions , in particular in their treatment of boundary conditions , as we have explained .we are grateful to prof .barenblatt , g.i . , prof .v.m . prostokishin and dr .yelena shvets for many helpful discussions and comments .this work was supported in part by the national science foundation under grant dms 04 - 32710 , and by the director , office of science , computational and technology research , u.s .department of energy under contract no .de - ac03 - 76sf000098 .
|
we show how to use numerical methods within the framework of successive scaling to analyse the microstructure of turbulence, in particular to find inertial range exponents and structure functions. the methods are first calibrated on the burgers problem and are then applied to the 3d euler equations. known properties of low order structure functions appear with a relatively small computational outlay; however, more sensitive properties cannot yet be resolved with this approach well enough to settle ongoing controversies.
|
in industry , drilling operations on pieces of material often increase the temperature of the component . in order to avoid overheating the material at any one point , the processing operation is split into multiple parts .this implies that nodes may need to be visited more than once .additionally , there may be a waiting time at some nodes to be able to continue processing .similar to , we translate the problem to a traveling salesman problem ( tsp ) variant , with multiple visits per node .the resulting problem is called the intermittent traveling salesman problem ( itsp ) , due to the required time between different visits of a node to avoid overheating .the tsp has been studied in depth in the field of combinatorial optimization , but several variants exist . to clearly distinguish the itsp from existing extensions, we briefly discuss the most similar problems . * the tsp with multiple visits ( tspm ) : similar to the itsp each node has to be visited several times , butunlike in the itsp no time constraints exist between multiple visits . * the tsp with time windows ( tsptw ) : time windows within which the node has to be visited are associated to each node , but multiple visits are not required . * the inventory routing problem ( irp ) : in the irp , the goal is to find the best shipping policy to supply several retailers ( nodes ) with a common product subject to vehicle capacities , minimum required demand and limited storage capacity at the retailers .the major difference with the itsp lies in the effect of traveling time on inventory levels .whereas in the itsp the temperature at a node decreases during travel time , in the irp traveling routes compose one discrete period . in the remainder of this paper , we first discuss the problem definition with different temperature profiles in section [ prob ] , before going into detail about the proposed metaheuristic approach in section [ meth ] .preliminary results for the temperature profiles and metaheuristic variations are the focus of section [ res ] , whereas we finish with a conclusion and future work in section [ concl ] .a network of nodes can be represented by an undirected graph , with the nodes or points and the arcs or routes between different points .each node ( ) has a processing time , and the distance between each pair of nodes is .it is explicitly assumed that and that the triangle inequality holds . due to the temperature constraint ( maximum temperature of at each node ), nodes may have to be visited more than once .the goal is to minimize the total completion time , i.e. the time at which all nodes have been fully processed and the `` salesman '' has returned to its starting node .the objective function value consists of the total processing time of each node , the distances of the selected routes between the nodes , and any waiting time required in case the temperature at a node is too high but we want to process anyway ( section [ meth ] ) . as a result , the corresponding tsp without temperature constraints serves as a lower bound for the itsp . to model the temperature of a node at a time , we use equations ( [ cons ] ) and ( [ temp ] ) .the first equation determines the number of consecutive time units that have been processed for node at time ( ) , based on which is a binary variable equal to one if node is processed at time and zero otherwise .if job is processed at time equation ( [ cons ] ) increases by one , whereas otherwise it decreases by one with a minimum value of zero for . 
is equal to zero for all nodes. equation ([temp]) sets the temperature of a node at time based on the corresponding value of . an increase function and a decrease function , which determine the rate at which the temperature changes, are also defined. we employ three variants for the temperature profile changes, namely a linear ( ), a quadratic ( ) and an exponential ( ) function. in the linear variant the temperature increase or decrease is the same as the change in , namely it increases or decreases by 1. for the quadratic case the temperature is the square of the number of consecutively processed time units, and in case of the exponential function the base number is used with the time as exponent. allow us to illustrate the application of these functions with a simple example. assume that we have a node with a total processing time of 6, a quadratic increase and decrease temperature function, and that the maximum temperature equals 16. we start processing the job at time 3 and process for 3 time units until time 6 (node temperature equal to 9 or ), after which no processing occurs for 2 time units. we then process the remaining 3 time units until time 11 (node temperature of 16 or ). the resulting consecutive time units and temperature profiles are shown in figure [fig1]. we can observe that the temperature is indeed a function of the number of consecutively processed time units, and that the temperature decreases when no processing occurs. based on the three possible temperature increase and decrease profiles, nine combinations exist. in the remainder of this manuscript, we assume that the same function is used for both the increase and decrease of the temperature, but in our presentation we will discuss combinations as well. in this section, we discuss our solution approach for the itsp. we propose to use three different applications of a genetic algorithm (ga) based on three different solution representations. we first go into detail about the representations, before giving an overview of the employed metaheuristic. the choice of solution representation is important since we not only have to determine the order in which the different nodes are visited, but also the amount of time processed for each visit. since, to the best of our knowledge, no previous research exists on the impact of solution representations for the itsp, we employ three different approaches in order to determine their impact on the solution quality. 1 . a single list (1l) representation consisting of a node list (nl). the nl contains the node numbers in the order in which they are processed, but due to the nature of the itsp each node occurs times. this function determines the maximum required number of splits for each node, depending on , and the temperature increase function. the maximum required number of splits corresponds with a greedy approach, since in this case we choose to always process as much as possible during a visit given the current node temperature. only for the final visit is waiting included, if required. 2 . a two list (2l) representation with a nl and a processing time list (ptl). the nl again contains the order in which the nodes are processed, but each node occurs times instead of just a single time. the ptl then holds the actual processing times at each occurrence of a node, with a total value over all occurrences equal to . ptl values may be equal to zero to signify that fewer than splits are used for node .
as a result ,these visits without processing time may increase the total duration because of distances traveled .3 . a three list ( 3l )representation with a nl , a ptl and a split list ( sl ) .the third list is the number of visits or splits for each node , and its value lies within the interval for each node .a value of 1 means that this node is only visited once for a duration of , whereas a value of implies as many visits each with a duration of 1 .the nl and ptl are similar as for the second representation , with the major difference that both lists are only as long as the sum of the job splits in the sl . as a result, the ptl does not contain any zero values . recall that in the second representation the ptl could have zero values and that both the nl and ptl sizes equal the sum of the job durations .it can be stated that the 3l representation constitutes a middle ground between the other two .it is less naive than the second one , since no unused visits with a processing time of zero are included .it does , however , allow for more and different splits than the greedy approach , by incorporating more information . to illustrate the three representations , consider the example network in figure [ fig2 ] , with three jobs and a maximum temperature of 3 .we furthermore assume a linear temperature increase and decrease function for each job , which implies that for each job 3 time units can be processed consecutively without exceeding the maximum temperature . 1 . with the single list we first calculate the number of splits based on the processing times and the maximum temperature .this results in 2 splits for job 1 , 2 for job 2 and 1 for job 3 .assume that we have nl equal to ( 2 , 1 , 3 , 1 , 2 ) .we process job 2 for a duration of 3 ( the maximum amount possible ) , then process job 1 for 3 time units as well , move on to job 3 for a duration of 2 , return to 1 for another 2 time units and finish with 3 time units for job 2 .the total duration is equal to the sum of the job durations , the distances traveled and any waiting time .the first equals 13 , the second 14 and the third 0 since no waiting times are needed . as a result ,the total duration is equal to 27 . 2 .for the two list representation , assume a nl of ( 1 , 1 , 2 , 2 , 2 , 1 , 3 , 3 , 2 , 2 , 1 , 1 , 2 ) and a ptl of ( 2 , 0 , 1 , 0 , 1 , 2 , 2 , 0 , 1 , 3 , 0 , 1 , 0 ) .consider that the length of both lists is equal to 13 , or to the sum of the individual job durations , that several zeros are included in the ptl , and that the sum of the corresponding visit durations equals for each job .the total duration is equal to 13 ( the total job duration , the same as before ) + 25 ( the total distance traveled , including returning to node 1 where the tour started ) + 0 ( the total waiting time ) = 38 . due to the increase in total distance the objective function for this nl and ptl combination is worse than for the greedy nl .3 . in case of the three lists , we use the nl ( 1 , 2 , 1 , 2 , 3 , 2 ) , ptl ( 3 , 1 , 2 , 2 , 2 , 3 ) and sl ( 2 , 3 , 1 ) . 
each job is included as many times in the nl and ptl as the number of splits in the sl .the sum of the ptl values is again equal to for each of the three jobs .the total completion time equals 13 + 16 + 0 = 29 .in the example we have explicitly assumed linear temperature profiles , which in this case resulted in no waiting time for the given lists .however , if the profiles would increase and decrease in a quadratic manner , then only 1 time unit can be processed before a node has to cool down . in the 3l examplethis results in repeatedly processing 1 time unit and waiting for 1 time unit .for instance , the first visit to node 1 involves 3 time units of processing time , which results in a total time of 5 (= 3 x 1 processing , 2 x 1 waiting ) .the same logic applies for the other visits in the tour .hence , in the example the total duration increases from 29 to 36 due to 7 time units of waiting . an overview of our ga can be found in figure [ figga ] .the population is initialized by generating random 1l , 2l or 3l respectively .this includes any relevant repair method and the evaluation of the solution value ( cf .the selection operator is the elite selection of , which selects one parent based on a four - tournament selection and the other one at random from the subset of best solutions in . in the populationupdate the best elements of the previous generation are retained , and the rest is replaced by the best children .afterwards , the set is updated based on the new .it has been shown that due to the elite selection and population update , the ga contains elements of both scatter search and evolutionary path relinking . however , for the sake of simplicity we refer to the algorithm as a ga , although the more general term evolutionary algorithm ( ea ) would also be correct. _ initialize _ population * repeat * _ select _ and from _ crossover * _ and and _ repair * _ and * for * each offspring * do * _ mutate * _ _ repair * _ _ evaluate * _ * end for * _ update _ : remove worst , add best offspring , update * until * stopping criterion met the asterisks ( * ) indicate that the corresponding steps differ based on the solution representation used ( 1l , 2l or 3l ) , since these parts require different applications of some operators .we also distinguish between the three types of lists used ( nl , ptl and sl ) when discussing the operators with an * in more detail , since it are these lists which result in differences between the operators used .recall that 1l consists only of a nl , 2l of a nl and ptl , and 3l of a nl , ptl and sl . in table[ tabop ] an overview of the differences in terms of operators between the nl , ptl and sl is displayed , whereas table [ tabop2 ] shows the repair and solution evaluation employed for the three solution representations .* nl : we apply a one - point crossover and a two - activity swap as crossover and mutation operator with a mutation rate of respectively .no repair method is required , and the evaluation consists of processing as much as possible of a node during each visit . only the final visit to a node may include waiting time to ensure that the temperature constraint is not violated . *ptl : we again use a one - point crossover . consider that the nl and ptl always have the same length , so the same crossover point can be used for both lists .however , the ptl requires a repair method to ensure that for each job the sum of the corresponding ptl values equals . 
in casethe sum is too large ( small ) , this repair method randomly decreases ( increases ) ptl values of the job until the condition is met . in terms of mutation, we have chosen a random change operator which randomly selects a different ptl value .the mutation rate is used for each job in the list , rather than for the ptl as a whole . also after the mutation, we have to apply the same repair method to ensure that the ptl is feasible .finally , in terms of solution evaluation we process the corresponding values in the ptl during each visit to a node .hence , waiting time is included if this is required based on the ptl value .* sl : the crossover operator is an adjusted version of the one - point crossover due to the different length of the sl on the one hand and the nl and ptl on the other hand , and uses a different crossover point .additionally , the crossover of nl and ptl takes the different lengths of the parents lists into account .the mutation of the sl is similar to that of the ptl ; a random change is applied to each job with a rate of .the use of a sl requires a second repair method to ensure that the number of splits of a job in the sl corresponds with the number of job occurrences in the other two lists .this method adjusts the nl and ptl by removing ( adding ) job occurrences in the nl and increasing ( decreasing ) values in the ptl at random . this way the total length of both lists corresponds with the sum of the sl values .the repair method is applied after both the crossover and the mutation .finally , the usage of a sl ( 3l representation ) has no effect on the evaluation of the solution , so the same ptl - based technique can be used as without a sl ( 2l representation ) .a second interesting research avenue concerns the shape of the temperature functions . in this manuscriptwe explicitly assumed that these functions were straightforward ( i.e. , and ) , which resulted in a clear distinction between them , since e.g. the linear function was never larger than the quadratic one for the same value of .however , employing functions such as ( increase ) and ( decrease ) would make it harder to determine how much should be best processed during a visit given the current node temperature .finally , we currently assume a uniform surface and density of the material ( section [ intro ] ) , where all the points or nodes are similar .it might prove interesting to investigate the impact of the shape and density of the material surface , and what the link might be with the temperature profiles .leyman and m. vanhoucke , a new scheduling technique for the resource - constrained project scheduling problem with discounted cash flows , _ international journal of production research _53(9 ) : 2771 - 2786 , 2015 .
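to make the list - based representations and the duration evaluation described above concrete , the following sketch evaluates a visit sequence under a linear temperature profile . it is only an illustrative reconstruction , not the authors' implementation : the distance matrix , the depot node and the heating / cooling rate of one temperature unit per time unit are placeholder assumptions .

```python
def evaluate_tour(node_list, proc_list, dist, t_max, start=0):
    """total duration (processing + travel + waiting) of a visit sequence.

    node_list[i] is the node of visit i and proc_list[i] its number of
    processing units.  a linear temperature profile is assumed: a node
    heats by one unit per processed time unit and cools by one unit per
    elapsed time unit in which it is not processed (never below zero).
    processing is only allowed while the node temperature is below t_max;
    otherwise the machine waits at the node for one time unit.
    dist[a][b] is the (symmetric) travel time between nodes a and b.
    """
    temp = {n: 0 for n in set(node_list) | {start}}   # current node temperatures
    clock, position = 0, start

    def advance(dt, busy_node=None):
        nonlocal clock
        clock += dt
        for n in temp:                                # every idle node cools down
            if n != busy_node:
                temp[n] = max(0, temp[n] - dt)

    for node, units in zip(node_list, proc_list):
        advance(dist[position][node])                 # travel to the next visit
        position = node
        remaining = units
        while remaining > 0:
            if temp[node] < t_max:                    # safe to process one more unit
                temp[node] += 1
                advance(1, busy_node=node)
                remaining -= 1
            else:                                     # too hot: wait one time unit
                advance(1)
    advance(dist[position][start])                    # close the tour at the depot
    return clock

# made-up symmetric distances between a depot (0) and jobs 1-3
dist = {0: {0: 0, 1: 2, 2: 3, 3: 4},
        1: {0: 2, 1: 0, 2: 2, 3: 3},
        2: {0: 3, 1: 2, 2: 0, 3: 5},
        3: {0: 4, 1: 3, 2: 5, 3: 0}}
print(evaluate_tour([2, 1, 3, 1, 2], [3, 3, 2, 2, 3], dist, t_max=3))
# -> 29 with these distances: 13 processing + 16 travel + 0 waiting
```

the same routine also evaluates the two - list and three - list encodings once the per - visit durations are taken from the ptl , which mirrors the ptl - based evaluation described above .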
|
in this research , we discuss the intermittent traveling salesman problem ( itsp ) , which extends the traditional traveling salesman problem ( tsp ) by imposing temperature restrictions on each node . these additional constraints limit the maximum allowable visit time per node , and result in multiple visits for each node which can not be serviced in a single visit . we discuss three different temperature increase and decrease functions , namely a linear , a quadratic and an exponential function . to solve the problem , we consider three different solution representations as part of a metaheuristic approach . we argue that in case of similar temperature increase and decrease profiles , it is always beneficial to apply a greedy approach , i.e. to process as much as possible given the current node temperature .
|
the oxidation and corrosion of metals affects many areas of industry , technology , and everyday life .indeed , it plays a critical role in the failure of metal parts in engineering , electrochemical and catalytic devices , as well as of metal construction parts such as piping and roofing , to name but a few .the atomistic details of the processes of metal corrosion are still unclear , however an effective solution to the problem of corrosion has been found in the use of organic molecules as corrosion inhibitors , which work by interacting with metal / oxide surfaces and forming a protective film .widely - used organic corrosion inhibition systems , whose effectiveness has been empirically verified , are amines or zinc dithiophosphates on steel and benzotriazole ( btah ) on copper . however , the chemical details of the inhibiting action and the structure of the protective layer these molecules form against metal surfaces are still unresolved . in this work ,state - of - the - art computational methods are applied to the btah / cu copper system , which is the most studied both experimentally and computationally .the aim is to identify the structures btah forms on the copper surface and obtain insights into how they affect its function as a corrosion inhibitor .most experimental studies on organic corrosion inhibitors are macroscopic studies involving the immersion of a metal specimen in a solution containing corrosive agents and organic inhibitors .information about inhibition can be gathered during sample immersion , monitoring the evolution of the material using electrochemical or spectroscopic techniques ( see _ e.g. _ refs . ) or after immersion , by measuring the mass change of the specimen or the amount of metal solvated in the solution ( see _ e.g. _ ref . ) .these studies reveal insight into the effectiveness of different chemicals .an alternative approach is to examine well - defined copper single - crystal surfaces under ultra - high vacuum ( uhv ) using x - ray absorption spectroscopy ( xas ) and scanning tunnelling microscopy ( stm ) .these techniques , used mainly by the surface science community , reveal structural details of the adsorbed system .the inhibitor molecules can be deposited onto the surface either from gas phase or via an aqueous solution ( which is subsequently evaporated ) .discrepancies are seen for different experimental conditions , with zigzag structures of flatly - adsorbed molecules having been postulated ( for molecules deposited from aqueous solutions ) , as well as a structure of vertically - adsorbed btas ( for evaporated molecules ) . in this workwe only compare our results with experiments performed on evaporated btah molecules on copper substrates in uhv , since the computational work presented here consists of molecules adsorbed on a cu(111 ) surface in vacuum conditions .previous studies have shown that btah deprotonates when adsorbed on copper surfaces , revealing that it can deprotonate not only by interacting with high - ph environments ( its pk is at c ) but also through its interaction with the surface .bta was seen to form nearly - upright organometallic surface complexes involving bonds between the azole nitrogen atoms and copper adatoms . 
in particular stm studiesshowed the formation of strings of stacked dimers at low coverage and of more complex structures , also composed of bta - cu-bta dimers , at high coverage .a number of computational studies , using density functional theory ( dft ) , have looked at the stability of btah and bta structures on copper .benzotriazole is a challenging molecule to study with dft as it combines a strongly electronegative azole moiety , which preferably interacts with the surface through chemisorption , with a benzene - like ring which can interact with the surface via van der waals ( vdw ) forces .traditionally , vdw dispersion forces have been a challenge for dft methods and are not accounted for in the most widely used exchange - correlation functionals , such as perdew - burke - ernzerhof ( pbe ) , that have been used in most previous studies of btah .thus the role of vdw forces in this type of system is still largely unexplored .isolated btah was found , in calculations using pbe , to chemisorb weakly to a cu(111 ) surface in an upright geometry , forming two n - cu bonds via the triazole moiety .physisorption , with the molecule lying flat onto the surface , was found to yield a very small binding energy ( ev ) with pbe .the addition of an empirical van der waals correction lead instead to a much stronger bond ( ev ) . at higher coverages , btah was found to form hydrogen - bonded ( hb ) chains with the molecules lying parallel to the surface .the dehydrogenated bta molecule was found to bind strongly to the surface in an upright configuration when isolated , and to form cu - bonded organometallic complexes of vertical or tilted molecules at higher coverages .however , the structure of the organometallic bta - cu complexes is still debated .chen and hakkinen found that deprotonated bta - cu-bta dimers are more stable than [ bta - cu] chains thus agreeing with the experimental results of grillo _ et al . _ , and in disagreement with the dft results of kokalj _ et al ._ .here we report the most extensive dft study performed to date and we explore in depth the importance of using vdw approaches in the study of adsorbate - surface interactions .intramolecular interactions and coverage effects are also examined in detail ; they are found to be particularly relevant for the systems formed by fully protonated btahs , with hydrogen bonding dominating in the low - coverage structures and vdw and electrostatic forces at high coverage .moreover , the energetics of the dissociation process and of the formation of complex structures with cu adatoms were investigated and linked to experimental conditions .the obtained structures and adsorption energy trends are comparable to the experimental results in uhv and a link with the effectiveness of the molecule as a corrosion inhibitor is discussed .the remainder of the paper is organised as follows .the computational methodology and set - up is presented in the next section ( sec .[ method ] ) , followed by the results for the protonated ( sec .[ sub : btah_cu111 ] ) and deprotonated ( sec .[ sub : bta_cu111 ] ) molecules adsorbed on cu(111 ) .finally , a discussion and conclusions are presented in sec .[ conclusions ] .calculations of inhibitor molecules adsorbed on copper surfaces were performed by means of dft using the vasp code . 
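the central quantity extracted from such calculations is the adsorption energy , obtained with the usual bookkeeping e_ads = e(slab + molecule) - e(slab) - e(molecule) . as an illustration of this workflow , and not the set - up actually used in the work discussed here , the following ase - based sketch uses the effective - medium - theory calculator and an nh3 test adsorbate as stand - ins for the plane - wave vasp machinery , the optb86b - vdw functional and benzotriazole itself ; cutoffs , k - point meshes and tight convergence criteria are deliberately omitted .

```python
from ase.build import fcc111, add_adsorbate, molecule
from ase.calculators.emt import EMT            # placeholder for the VASP/optB86b-vdW set-up
from ase.optimize import BFGS

def relaxed_energy(atoms):
    """relax the structure and return its total energy (loose settings, illustration only)."""
    atoms.calc = EMT()
    BFGS(atoms, logfile=None).run(fmax=0.05)
    return atoms.get_potential_energy()

# clean Cu(111) slab and an isolated (placeholder) adsorbate
slab = fcc111('Cu', size=(3, 3, 4), vacuum=10.0)
ads = molecule('NH3')                          # stand-in: benzotriazole is not in ASE's g2 set

e_slab = relaxed_energy(slab.copy())
e_mol = relaxed_energy(ads.copy())

# adsorbed system: molecule placed above an fcc hollow site of the slab
combined = slab.copy()
add_adsorbate(combined, ads, height=2.0, position='fcc')
e_total = relaxed_energy(combined)

e_ads = e_total - e_slab - e_mol               # negative value = bound adsorbate
print(f'adsorption energy: {e_ads:.2f} eV')
```

in a production set - up the calculator , the adsorbate and the convergence criteria above would simply be replaced by the actual dft parameters ; the bookkeeping itself stays the same .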
in the present work most resultshave been obtained with the optb86b - vdw functional , a modified version of the non - local vdw density functional which explicitly accounts for dispersion based on the electron density .indeed , optb86b - vdw has been shown to perform best in comparison to experiment for several adsorption problems , including the ( relevant for this work ) adsorption of benzene on cu(111 ) .a number of other functionals were also considered for testing and comparison with previous results : pbe and a number of vdw - inclusive functionals ( vdw - df , optb86b - vdw , pbe - d2 and pbe - ts ) . for all functionalsthe calculated value for the lattice constant and of the bulk modulus are within of the experimental value ( see table [ table : tests ] ) , and in good agreement with previous theoretical results ..lattice constants and bulk moduli for all the exchange - correlation functionals used in this work .all the values are in good agreement with the experimental values corrected for the zero - point anharmonic expansion and with previous computational results . [ cols="^,^,^",options="header " , ] these are compared to the formation energy of the most stable btah / cu(111 ) structure , the hb chains , and the blue highlighting indicates the deprotonated structures which are more stable than the btah / cu(111 ) system ( e ev ) .when the formation energy of the copper adatom is not taken into account ( top half of table [ table : btah_form ] ) most deprotonated structures , except for the isolated bta molecule , are found to be more stable than the protonated hb chains . if the formation energy of the copper adatom is considered ( bottom half of table [ table : btah_form ] ) , only the organometallic chains and the stacked dimers are found to be more stable that the btah hb chains , and only if the hydrogen atom is assumed to adsorb on the surface . if the h atoms are assumed to associate into h molecules the only competitive structure to the btah hb chains are the stacked dimers ( e ev / mol ) .this shows that benzotriazole has a higher drive to deprotonate on defective surfaces with cu adatoms than on atomically flat surfaces .the results obtained from these four extreme cases can be used to give an approximate description of the behaviour of a real experimental system . in the work of grillo _et al . _ the copper surface is reconstructed and therefore the mobility of the surface copper atoms is low thus making this system close to the case in fig .[ fig : dep_adat]b , _ i.e. _ the copper adatoms need to be extracted from the bulk . 
in this case (table [ table : btah_form ] ) the stacked bta - cu-bta dimers , with the dissociated h atoms adsorbed on the surface , are the lowest energy configuration with e .this is in excellent agreement with the stm experimental results , where stacked dimers are observed at low coverage and where the reconstruction of the cu(111 ) surface , which is generally known not to reconstruct , indicates the presence of impurities ( such as h atoms ) on the surface .results for the structures and energies of btah and bta on cu(111 ) have been presented here .the adsorption of fully protonated btah is relevant to applications where the environment is acidic , whereas the adsorption of bta on copper is relevant to alkaline environments and to compare with experiments of btah deposition on copper in vacuum conditions , where the molecule is found to deprotonate .benzotriazole is a complex molecule which requires treatment with a theoretical methodology capable of simultaneously describing chemisorption and physisorption . in the present work dft with a vdw - inclusivefunctional has been employed to optimise a large number of protonated and deprotonated structures on cu(111 ) .we find that dispersion forces significantly alter the relative stabilities of adsorbed btah structures .in addition to this , in the lowest energy tilted structure an interesting interplay with chemical bonding is found wherein dispersion forces bring the molecule close to the surface thus enhancing the chemical bonding of the molecule with the surface via the triazole group . whilst this is interesting and a potentially general effect ,we have also shown that a single absorbed molecule provides limited insight into the behaviour of the overall system . indeed , while isolated btah preferentially adsorbs on copper via the azole nitrogen atoms and -bonding of the carbon ring with the surface , the lowest energy structure at low coverage are hb chains of flat - lying molecules .overall we have found an incredibly rich coverage - dependent phase diagram for btah on cu(111 ) .three regimes were identified for the adsorption of btah on cu(111 ) , as a function of coverage : a low coverage hydrogen - bonded regime , where the molecules preferentially adsorb flat on the surface , an intermediate regime , where mixed flat and upright structures are observed , and a high coverage regime , where the molecules adsorb upright .steric interaction drive the change in configuration from the flat - lying physisorbed to the upright chemisorbed configuration of the btah .the lowest energy configurations seen for bta are either stacked bta - cu-bta dimers or organometallic chains , according to whether the formation energy of the copper adatom is taken into account when calculating the stability of the complexes .good agreement is seen with experimental results in uhv , where the molecule was found to adsorb via the azole moiety in a vertical or near - vertical manner . in particular , the stacked dimer configuration was observed using stm to form on a reconstructed cu(111 ) surface , where the mobility of the copper atoms is likely to be low and therefore copper adatoms require a large amount of energy to form . in this case , the conditions of fig .[ fig : dep_adat]b apply , and indeed the expected configuration from calculations are stacked bta - cu-bta dimers .the use of a suitable exchange - correlation functional was found to be important for all the systems considered here . 
indeed , for an isolated btah on cu(111 ) the ` flat ' configuration was found to be favourable only when vdw dispersion forces were accounted for , and not in the case of pbe .moreover , the ` tilted ' low coverage structure , which is the most stable with the optb86b - vdw functional and is favourable with all vdw functionals tested , is instead unstable when optimized with pbe ( the ` flat ' configuration is retrieved instead ) . at high coverage, the lack of any description of - bonding in pbe leads to a larger equilibrium distance between two btah molecules and thus to ( comparatively ) weaker adsorption energies for the high - coverage structures . in the bta / cu systemsit has been seen that pbe favours upright adsorption , because of the lack of dispersion interactions between the benzene - like ring and the surface . when vdw interactions are accounted for a more complex behaviour is uncovered , especially for the bta - cu-bta dimers where many degenerate low - energy structures are seen .both btah and bta offer the possibility of forming fairly close packed layers on cu(111 ) with little cost to the adsorption energy , and , for the case of bta , with a gain in energy when the dimers are stacked , thus in principle offering a physical barrier to incoming corrosive molecules or atoms .there is however a large difference ( ev / mol ) in the adsorption energy of the molecule with the surface between btah and bta .since benzotriazole performs better as an inhibitor in alkaline conditions , where the likelihood of deprotonation is higher , there might be a link between the strongest interaction of the molecule with the surface and inhibition .indeed , the adsorption energy of bta ( ev / atom ) is fairly close to the adsorption energy of _ e.g. _ two well known corrosive agents for copper .it is likely that the actual corrosive agents in a corrosive solution are sulphur- or chlorine - containing molecules , rather than isolated atoms .however , our calculations on chlorine and sulphur atoms give a ballpark estimate which suggests that competitive adsorption could be the key here for the success of benzotriazole as a corrosion inhibitor .a.m.s work is partly supported by the european research council under the european union s seventh framework programme ( fp/2007 - 2013)/erc grant agreement no .616121 ( heteroice project ) and the royal society through a wolfson research merit award .the authors are grateful for computational resources to the london centre for nanotechnology and to the u.k .car - parrinello consortium ukcp ( ep / f036884/1 ) , for access to hector .c.g . would like to thank dr . federico grillo for useful discussions .
|
the corrosion of materials is an undesirable and costly process affecting many areas of technology and everyday life . as such , considerable effort has gone into understanding and preventing it . organic molecule based coatings can in certain circumstances act as effective corrosion inhibitors . although they have been used to great effect for more than sixty years , how they function at the atomic - level is still a matter of debate . in this work , computer simulation approaches based on density functional theory are used to investigate benzotriazole ( btah ) , one of the most widely used and studied corrosion inhibitors for copper . in particular , the structures formed by protonated and deprotonated btah molecules on cu(111 ) have been determined and linked to their inhibiting properties . it is found that hydrogen bonding , van der waals interactions and steric repulsions all contribute in shaping how btah molecules adsorb , with flat - lying structures preferred at low coverage and upright configurations preferred at high coverage . the interaction of the dehydrogenated benzotriazole molecule ( bta ) with the copper surface is instead dominated by strong chemisorption via the azole moiety with the aid of copper adatoms . structures of dimers or chains are found to be the most stable structures at all coverages , in good agreement with scanning tunnelling microscopy results . benzotriazole thus shows a complex phase behaviour in which van der waals forces play an important role and which depends on coverage and on its protonation state and all of these factors feasibly contribute to its effectiveness as a corrosion inhibitor .
|
nowadays , peer - to - peer ( p2p ) overlay live streaming systems are of significant interest , thanks to their low implementation complexity , scalability and reliability properties , and ease of deployment .leveraging on the well understood p2p communication paradigm , the viability to deliver live streaming content on top of a self - organizing p2p architecture has been widely assessed both in terms of research contributions , as well as in terms of real - life applications . in principle , the most natural and earlier solution for deploying a p2p streaming system was to organize peer nodes in one or more overlay multicast trees , and hence continuously deliver the streamed information across the formed paths .this is the case in .however , in practice , this approach may not be viable in large - scale systems and with nodes characterized by intermittent connectivity ( churn ) .in fact , whenever a node in the middle of a path abruptly disconnects , complex procedures would be necessary to i ) allow the reconstruction of the distribution path , and ii ) allow the nodes affected by such event to recover the amount of information lost during the path reconfiguration phases .to overcome such limitations , a completely different approach , called _ data - driven _, delivers content on the basis of content availability information , locally exchanged among connected peers , without any a priori pre - established path .this approach essentially creates a mesh topology among overlay nodes .several proposed solutions , such as , adopt the data - driven approach . in this paper we focus on _ chunk - based_ systems , where , similarly to most file - sharing p2p applications , the streaming content is segmented into smaller pieces of information called chunks .chunks are elementary data units handled by the nodes composing the network in a store - and - forward fashion. a relaying node can start distributing a chunk only when it has completed its reception from another node .while the solutions based on multicast overlay trees usually organize the information in form of small ip packets to be sequentially delivered across the trees and can not be regarded as chunk - based , some data - driven solutions , like the ones proposed in , may be regarded as chunk - based .a characterizing feature of the chunk - based approach is that , in order to reduce the per - chunk signalling burden , the chunk size is typically kept to a fairly large value , greater than the typical packet size . in this paperwe raise some very basic and foundational questions on chunk - based systems : what are the theoretical performance limits , with specific attention to delay , that _ any _ chunk - based peer - to - peer streaming system is bounded to ? which fundamental laws describe how performances depend on network parameters such as the available bandwidth or system parameters such as the number of nodesa peer may at most connect to ? 
and which are the system topologies and operations which would allow to approach such bounds ?the aim of this paper is to answer these questions .the answer is completely different from the case of systems where the streaming information , optionally organized in sub - streams , is continuously delivered across overlay paths ( for a theoretical investigation of such class of approaches refer to and references therein contained ) .as we will show , in our scenario the time needed for a chunk to be forwarded across a node significantly affects delay performance .in more detail , we focus on the ability to reach the greatest possible number of nodes in a given time interval ( this will be later on formally defined as `` stream diffusion metric '' ) or equivalently the ability to reach a given number of nodes in the smallest possible time interval ( i.e. absolute delay ) .we derive analytic expressions for the maximum asymptotic stream diffusion metric in an homogeneous network composed of stable nodes whose upload bandwidth is the same ( for simplicity , multiple of the streaming rate ) . with reference to such homogeneous and ideal scenario, we show how this bound relates to two fundamental parameters : the upload bandwidth available at each node , and the number of neighbors a node may deliver chunks to . in addition , we show that the serialization of chunk transmissions and the organization of peer nodes into multiple overlay unbalanced trees allow to achieve the proposed bound .this suggests that the design of real - world applications could be driven by two simple basic principles : i ) the serialization of chunk transmissions , and ii ) the organization of chunks in different groups so that chunks in different groups are spread according to different paths . as a matter of fact , in a companion paper , we have indeed presented a simple data - driven heuristic , called _ o - streamline _ , which exploits the idea of using serial transmissions over multiple paths and relies on a pure data - oriented operation ( i.e. chunk paths are not pre - established ) .such heuristic successfully achieves performances close to the ones of the theoretical bound .this paper is organized as follows .section [ s : moti ] explains the rational behind this work .section [ s : bound ] introduces the stream diffusion metric and derives the relative bound . 
the overlay topology that allows to achieve the presented boundis described in section [ s : algo ] .sections [ s : perfo ] presents some performance evaluation results .section [ s : related ] reviews the related work .finally , section [ s : conclu ] concludes the paper .goal of this section is to clarify why p2p _ chunk - based _ streaming systems have significantly different performance issues with respect to streaming systems , where the information content continuously flows across one or more overlay paths or trees .unless ambiguity occurs , such systems will be referred to as , with slight abuse of name , _ flow - based _ systems .more precisely , we will show that i ) theoretical bounds derived for the flow - based case may not be representative for chunk - based systems , and new , _ fundamentally different _ , bounds are needed , ii ) the methodological approaches which are applicable in the two cases are completely diverse , and fluidic approaches may be replaced with inherently discrete - time approaches where , as shown later on , -step fibonacci series and sums enter into play .we recall that `` flow - based '' system denotes a stream distribution approach where the streaming information , possibly organized in multiple sub - streams , is delivered with continuity across one or more overlay network paths .clearly , in the real ip world , continuous delivery is an abstraction , as the streaming information will be delivered in the form of ip packets .however , the small size of ip packets yields marginal transmission times at each node . as such , the remaining components that cause delay over an overlay link ( propagation and path delay because of queueing in the underlying network path ) may be considered predominant .we can conclude that the delay performances of flow - based systems ultimately depend on the delay characterizing a path between the source node and a generic end - peer . more specifically ,if we associate a delay figure to each overlay link , then the source to destination delay depends on the sum of the link delays : the transmission times needed by the flow to `` cross '' a node may be neglected , or , more precisely , they play a role only because the ` crossed' nodes compose the vertices of the overlay links , whose delays dominate the overall delay performance . as a consequence ,the delay performance optimization becomes a minimum path cost problem , as such addressed with relevant analytical techniques .if we further assume that the network links are homogeneous ( i.e. characterized by the same delay ) , then the problem of finding a delay performance bound is equivalent to finding what is the minimum depth of the tree ( or multiple trees ) across which the stream is distributed . 
this problem has been thoroughly addressed in , under the general assumption that a stream may be subdivided into sub - streams ( delivered across different paths ) , and that each node may upload information to a given maximum number of children .for instance , if we assume no restriction on the number of children a node may upload to , then it is proven in that a tree depth equal to two is always sufficient .this is indeed immediate to understand and visualize in the special case of all links with a `` sufficient '' amount of available upload bandwidth - see figure [ fig:2a ] for a constructive example , while each peer node has a bandwidth at least equal to , being the number of peer nodes composing the overlay .as shown in the same result holds under significantly less restrictive assumptions on the available bandwidth . ] . ,ii ) delivering each sub - stream to a different node , and iii ) letting each node replicate and deliver the -th sub - stream to the remaining nodes . ] at this stage , it should be clear that , in the context of flow - based systems , as long as some feasibility conditions are met ( see e.g. ) , the bandwidth available on each link plays a limited role with respect to the delay performance achievable .this is clearly seen by looking again at figure [ fig:2a ] : if for instance we double the bandwidth available on each link , the delay performances do not change ( at least until the source is provided with a large enough amount of bandwidth to serve all peers in a single hop ) . chunk - based systemshave a key difference with respect to flow - based systems : the streaming information is organized into chunks whose size is significantly greater than ip packets .since a peer must complete the reception of a chunk before forwarding it to other nodes ( i.e. chunks are delivered in a store - and - forward fashion ) , the obvious consequence is that delay performance are mostly affected by the chunk transmission time .thus , in terms of delay performance , the behavior of chunk - based systems is opposite to the one of flow - based systems . not only chunk transmission times can not be neglected anymore with respect to link - level delays ( propagation and underlying network queueing ) , but also we can safely assume that in most scenarios any other delay component at the link - level has negligible impact when compared with the chunk transmission timesthis consideration can be restated as : the delay performances of chunk - based systems do not depend on the sum of the delays experienced while traveling over an overlay link , but depend on the sum of the delays experienced while _ crossing a node_. from a superficial analysis , one might argue that the overall delay optimization problem does not change . in fact, the transmission delay of a chunk at a given node could be attributed to the overlay link over which the chunk is being transmitted , and , also in this case , the optimization could be stated as a minimum path cost problem .however , a closer look reveals that this is not at all the case .the reasons are manifold and can be illustrated with the help of figure [ fig:2b ] . in this figure , and consistently throughout the paper , we rely on the following notation . is the chunk size ( in bit ) ; is the streaming constant bit rate ( in bps ) . 
is the chunk `` inter - arrival '' time at the source , being such arrival process a direct consequence of the segmentation into chunks done at the source : a new chunk will be available for delivery only when information bits , generated at rate , are accumulated ( see top of figure [ fig:2b ] ) . is the available upload bandwidth , assumed to be the same for all network nodes , including the source ( homogeneous bandwidth conditions ) . is the normalized upload bandwidth of each node with respect to the streaming bit rate . in this paper , for simplicity , we consider the case of integer greater or equal than 1 , i.e. being either equal or a multiple of .the _ minimum _transmission time for a chunk is equal to ; this is true only if the whole upload bandwidth is used to transmit _ a single chunk to a single node_. moreover , we rely on the common simplifying assumption , in overlay p2p systems , that the only bandwidth bottleneck is the uplink bandwidth of the access link that connects the peer to the underlying network ( the downlink bandwidth is considered sufficiently large not to be a bottleneck - this is common in practice , due to the large deployment of asymmetric access links - e.g. , adsl ) . the first reason why the overall delay optimization problem can not be stated as a minimum path cost problem in the case of chunk - based systems is the sharing of the available upload bandwidth across multiple overlay links .as a consequence , i ) it is not possible to _ a priori _ associate a constant delay cost to overlay links originating from a given node , ii ) the delay experienced while transmitting a chunk depends on the fraction of the bandwidth that the node is dedicating to such transmission .for instance , figure [ fig:2b ] shows that the source node is transmitting a given chunk in parallel to two nodes ; as such , the transmission delay is . if the source were transmitting the chunk only to node 1 , this delay would be halved .the second reason is that the transmission time may not be the _ only _ component of the overall chunk delivery delay .this is highlighted for the case of node n1 .after receiving chunk 1 , node n1 adopts the strategy of _ serializing _ the delivery of chunk 1 to nodes n4 and n5 . on the one side , in both casesthe chunk will be transmitted in the same time , namely ; this is the minimum transmission time for a chunk , as all the available bandwidth is always dedicated to a single transmission .on the other side , the time elapsing between the instant at which the chunk is available at node n1 and the instant at which the chunk is received by node n5 is greater than the transmission time , as it includes also the time spent by node n1 while transmitting the chunk to node n4 . the third and final aspect which characterizes chunk - based systems in a _ streaming _ context is that there is a tight constraint which relates the number of peer nodes that can be _ simultaneously _ served and the available upload bandwidth .if we look back flow - based systems in figure [ fig:2a ] , we see that only practical implementation issues may impede the source node to arbitrarily subdivide the stream into sub - streams , and the tree depth may be indeed trivially optimized by using as many sub - streams as the number of nodes in the network . 
on the contrary , in chunk - based systems ,the number of nodes that can be served is no more a `` free '' parameter , but it is tightly constrained by the stream rate and the available upload bandwidth .this fact can be readily understood by looking at the source node in the example illustrated in figure [ fig:2b ] . due to their granularity ,new chunks are available for delivery at the source node every seconds .hence , in order to keep the distribution of chunks balanced ( i.e. , to avoid introducing delays with respect to the time instant at which chunks are available at source and to privilege specific chunks by giving them extra distribution time ) , the source node must complete the delivery of every chunk before the next new chunk is available for the delivery ( i.e. within seconds ) .this implies that the source node can not deliver a single chunk to more than nodes , being the ratio between the upload bandwidth and the streaming rate .let be the set of all peers which compose a p2p streaming network , and let be the cardinality of such network .let be a generic peer in the network .since the streamed information is organized into subsequently generated chunks , is expected to receive all these chunks with some delay after their generation at the source .let us define with the specific interval of time elapsing between the generation of chunk ( ) at the source , and its completed reception at peer . in most generality, different chunks belonging to the stream may be delivered through different paths .this implies that may vary with the chunk index .let be the maximum delay experienced by peer among all possible chunks . to characterize the delay performance of a whole p2p streaming network , we are interested in finding the maximum of the delay experienced across all peers composing the network , i.e. : we refer to this network - wide performance metric as _ absolute network delay_. however, for reasons that will be clear later on , this performance metric does not yield to a convenient analytical framework .thus , we introduce an alternative delay - related performance metric , which we call _ stream diffusion metric_. this is formally defined as follows : in plain words , is the number of peers that may receive each chunk in at most a time interval after its generation at the source .the most interesting aspect of the stream diffusion metric is that it can be conveniently applied also to networks composed of an infinite number of nodes ( for such networks , obviously , the absolute network delay would be infinite ) .moreover , for finite - size networks , it is straightforward to derive the absolute network delay from the stream diffusion metric . since is a non - decreasing monotone function of the continuous time variable and it describes the number of peers that may receive the whole stream within a maximum delay , for a finite size network composed of peers the value of at which reaches is also the maximum delay experienced across all peers .the formal relation between the absolute network delay and the stream diffusion metric is hence before stating the bound , we need to provide some preliminary notation .let be the -step fibonacci sequence defined as follows : let be a new sequence defined as the sum of the first non - null terms of the -step fibonacci sequence , i.e. 
, let us assume that propagation delays and queueing delays experienced in the underlying physical network because of congestion are negligible with respect to the minimum chunk transmission time , namely the time needed to transmit a chunk by dedicating , to such transmission , _ all _ the upload capacity of a node .in what follows , we measure the time using , as time unit , the value above defined .we can now state the following theorem on the upper bound of .[ th:1]in a p2p chunk - based streaming system where all peer nodes have the same normalized upload capacity ( assumed integer greater or equal than 1 ) and overlay neighbors to delivery chunks to , the stream diffusion metric is upper bounded by for integer values of ( i.e. multiple of ) while , for non integer values of , must be considered .the proof of theorem [ th:1 ] is omitted for reasons of space .we refer the reader to for the full details .we only observe that the proof is based on the following property : the minimum amount of time elapsing between the time instant at which a peer receives a chunk and the time instant at which it has transmitted the received chunk to _i _ , , of its neighbors is lower bounded by , and this is achieved if and only if the chunk transmission is serialized .in other words , the bound in ( [ e : n - upper - bound ] ) may be achieved only by serializing chunk transmissions .thanks to the asymptotic expression of -step fibonacci sums , which has been derived in , equation ( [ e : n - upper - bound ] ) can be more conveniently expressed in the following asymptotic closed form : where i ) represents the so said -step fibonacci constant and it is the only real root with modulo greater than of the characteristic polynomial of the -step fibonacci sequence , and ii ) is a suitable polynomial about which more details can be found in .for the convenience of the reader , the first few values of the fibonacci constants are , while the first few values of the terms are .the derived bound explicitly accounts for the fact that each node at most can feed neighbors .if this restriction is removed , we obtain a more simple and immediate expression ( see for more details ) provided bound offers only limited insights on how chunks should be forwarded across the overlay topology .specifically , the bound clearly suggests that delay performances are optimized only if chunks are serially delivered towards the neighbor nodes , but does not make any assumption on which specific paths the chunks should follow , or in other words , which overlay topologies should be used .we now show that , to attain the performance bound , peer nodes have to be organized according to i ) an overlay unbalanced tree if , ii ) multiple overlay unbalanced trees if and multiple of ( generalization to arbitrary integer values of being straightforward ) .when the number of neighbor nodes is equal to the normalized upload capacity , the source node can deliver each chunk to _ all _ its neighbors before a new chunk arrives .as such , the source node can repeatedly apply a round - robin scheduling policy during the time interval , which elapses between the arrivals of consecutive chunks . 
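the quantities entering the bound , namely the k - step fibonacci numbers , their partial sums and the dominant root of the characteristic polynomial ( the k - step fibonacci constant ) , are straightforward to generate numerically . the sketch below assumes the usual convention that the first non - null term equals one ; it only illustrates these ingredients and the asymptotic growth rate , and does not reproduce the exact closed form of the bound .

```python
import numpy as np

def k_fib(k, n_terms):
    """first n_terms of the k-step fibonacci sequence (1, 1, 2, 3, 5, ... for k = 2)."""
    f = [1]
    while len(f) < n_terms:
        f.append(sum(f[-k:]))           # each term is the sum of the k preceding terms
    return f

def k_fib_sums(k, n_terms):
    """partial sums of the k-step fibonacci sequence."""
    return list(np.cumsum(k_fib(k, n_terms)))

def k_fib_constant(k):
    """dominant real root of x**k - x**(k-1) - ... - 1, the k-step fibonacci constant."""
    roots = np.roots([1.0] + [-1.0] * k)
    return max(r.real for r in roots if abs(r.imag) < 1e-9)

for k in (2, 3, 4):
    print(k, k_fib(k, 7), k_fib_sums(k, 7), round(k_fib_constant(k), 4))
# k = 2: 1 1 2 3 5 8 13, sums 1 2 4 7 12 20 33, constant ~ 1.618 (golden ratio)
# k = 3: 1 1 2 4 7 13 24, sums 1 2 4 8 15 28 52, constant ~ 1.8393
```

with these quantities in hand , we return to the round - robin schedule just introduced .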
specifically , in the first seconds it can send a given chunk to a given node , say peer , then send the chunk to peer , and so on until peer .if this policy is repeated for every chunk , the result is that any neighbor of the source also receives a new chunk every seconds .hence , each neighbor of the source may apply the same scheduling policy with respect to its neighbors , and so on . as a consequence , every node in the network receives chunks from the same parent , and in the original order of generation : in other words ,chunks are delivered over a tree topology .the operation of the above described chunk distribution mechanism is depicted in figure [ f : serial - tree ] , which refers to the case and a network composed of nodes . in this figurethe source is denoted with an `` s '' .the nodes and the chunks are progressively indexed starting from . going from the upper part of the figure to its lower part ,we see how the first two chunks are progressively distributed starting from the source ; the time since the start of the transmission , measured in time units , until time instant is reported on the left side of the figure .the tree on the left hand side of the figure distributes the first chunk , while the tree on the right hand side of the figure distributes the second chunk . in more detail , since the first chunk is assumed to be available for transmission at the source at time instant , the source starts transmitting the first chunk to node 1 at and after finishing this transmission , i.e at , it sends the first chunk to node 2 , in series . in its turn, node 1 sends the first chunk first to node 3 and then to node 4 , in series , and so on .likewise , node 2 sends the first chunk first to node 5 and then to node 7 , in series , and so on .as regards the second chunk , the source starts transmitting it to node 1 at time , exactly when that chunk is available for the transmission . after finishing transmitting the first chunk to node 1, the source sends the same chunk to node 2 , in series . in their turn ,node 1 and 2 distribute the second chunk in same manner as the first chunk , i.e. sending the second chunk in series first to nodes 3 and 5 respectively , and then to nodes 4 and 7 respectively .it is to be noted that , even if two distribution trees are depicted in figure [ f : serial - tree ] , actually there is only one distribution , which repeats itself for each chunk with period .in other words , a given node receives all chunks through the same path .it is also interesting to note that the tree formed in figure [ f : serial - tree ] is unbalanced in terms of number of hops .for instance , the first chunk reaches node 19 at time after crossing nodes 1,3,6 and 11 .conversely , the same chunk reaches node 15 , again at time , after crossing nodes 2 and 7 .the unbalancing in terms of number of hops is a consequence of the fact that the proposed approach achieves equal - delay source - to - leaves paths , and that the time in which a chunk waits for its transmission turn at a node ( because of serialization ) contributes to such path delay .we are now in condition to evaluate the stream diffusion metric . to this end , let us introduce as number of new nodes that complete the download of a chunk exactly time units after the generation of that chunk at the source node , in such a way that can be assessed according to the equation . 
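these per - slot counts can also be obtained by directly simulating the serialized forwarding just described . the sketch below assumes a number of neighbors per node equal to the normalized upload capacity k , an unbounded pool of fresh peers , and time measured in units of the minimum chunk transmission time ; it illustrates the mechanism only , not the general bound .

```python
def serialized_reach(k, t):
    """peers (excluding the source) that can receive a chunk within t slots.

    every node forwards a chunk to its k children one at a time, each
    transmission occupying one slot (the minimum chunk transmission time);
    the source moves on to the next chunk after k slots, which is exactly
    why every other node also has k slots available per chunk.  an
    unbounded pool of fresh peers is assumed.
    """
    new = [0] * (t + 1)                 # new[i]: peers completing reception in slot i
    for i in range(1, min(k, t) + 1):   # the source serves its k children serially
        new[i] = 1
    for r in range(1, t + 1):           # a peer served in slot r serves its own
        for d in range(1, k + 1):       # children in slots r+1 .. r+k
            if r + d <= t:
                new[r + d] += new[r]
    return new[1:], sum(new)

per_slot, total = serialized_reach(k=2, t=5)
print(per_slot, total)                  # [1, 2, 3, 5, 8] -> 19 peers within 5 slots
```

for k = 2 the per - slot counts grow as a shifted fibonacci sequence and 19 peers are reached within five slots , consistent with the example of figure [ f : serial - tree ] .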
with reference to figure [f : serial - tree ] , ( node 1 ) , ( nodes 2 and 3 ) , ( nodes 4 , 5 and 6 ) , ( nodes 7 , 8 , 9 , 10 and 11 ) , ( nodes 12 , 13 , 14 , 15 , 16 and 17 ) .thus , , which is equal to the performance bound evaluated at . to generalize the evaluation of , we observe that only the nodes which have completed the download of a chunk exactly after since the generation of that chunk have still children to be served , whereas nodes that have completed the download of that chunk with a delay less than have already served all their children . as a consequence , if we set to take the children served by the source into account , it results for and for .it is then easy to evaluate the sequence for a given and to verify that and consequently .easy algebraic manipulations allow to turn the last equality into , which guarantees the matching between the stream diffusion metric of the described chunk distribution mechanism and the performance bound for each value of .when , the source can not deliver a chunk to all its neighbors , but only to a subset of peers .hence , in principle , it might distribute chunks through the same tree as discussed before , and hence every peer in the network would use only neighbors out of the available .however , the provided bound assures that performance in the case are better than in the case .for instance , if , the case outperforms the case as follows : [ cols="<,<,<,<,<,<,<,<,<",options="header " , ] a thorough general explanation of how to design a mechanism which attains the bound in the case and multiple of is complex ( for reasons that will emerge later on ) .hence , in this paper we limit ourselves to show how the bound may be achieved through the simple example depicted in figure [ f : serial - forest ] , which refers to the case and and a network composed of nodes .the notation in this figure is the same as in figure [ f : serial - tree ] . as in the case , at time the source node receives chunk # 1 and serially delivers it to nodes 1 and 2 .however , with respect to the case , at time , when the source node receives chunk # 2 , instead of sending it again to nodes 1 and 2 , it delivers that chunk to the remaining two neighbors ( nodes 13 and 14 ) .this process is repeated for the subsequent chunks , and specifically the odd - numbered chunks are serially delivered to nodes 1 and 2 , while the even - numbered ones are serially delivered to nodes 13 and 14 . as a consequence of this operation of the source , each neighbor of the source i ) receives directly from the source only half chunks , ii ) receives a new chunk from the source every 4 time units . as such, neighbors of the source have the necessary extra time to deliver each chunk they receive from the source to all their neighbors .the same holds for the remaining peer nodes .for instance , with regard to chunk # 1 , node 1 serves that chunk to all its four neighbors ( nodes 3 , 4 , 7 and 13 ) in series .node 2 serves instead chunk # 1 only to three neighbors ( nodes 5 , 8 and 14 ) out of four available , since all nodes in the network have already received chunk # 1 at and there are no nodes to be served . 
in their turn , all nodes that have been served by nodes 1 and 2 , transmit chunk # 1 to their neighbors ( unless their neighbors have already received that chunk ) in series , and so on , until all nodes in the network receive chunk # 1 .this allows delivering chunk # 1 to 24 nodes in 5 time units , instead of the previous 19 nodes .it is to be noted that chunks are now distributed by means of two distinct unbalanced trees , the left one for odd - numbered chunks and the right one for even - numbered chunks , which repeat themselves with period . in general , the number of distribution trees is , where we use the assumption that is integer multiple of .we are now in condition to evaluate the stream diffusion metric . as in the case ,let us introduce as number of new nodes that complete the download of a chunk exactly time units after the generation of that chunk at the source node , in such a way that can be assessed according to the equation .with reference to figure [ f : serial - tree ] and to the left hand side tree , ( node 1 ) , ( nodes 2 and 3 ) , ( nodes 4 , 5 and 6 ) , ( nodes 7 , 8 , 9 , 10 , 11 and 12 ) , ( nodes 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 and 24 ) . the amounts take on the same values even in the right hand side tree .thus , , which is equal to the performance bound evaluated at . to generalize the evaluation of , we observe that , if , the source is still serving a given chunk ; otherwise , the source is already serving the next chunk .in addition , only the nodes which have completed the download of a chunk exactly after since the generation of that chunk have still children to be served , whereas nodes that have completed the download of that chunk with a delay less than have already served all their children . as a consequence , if we set to take the children served by the source into account , it results for , for and for .it is then easy to evaluate the sequence for a given pair of and values and to verify that and consequently .easy algebraic manipulations allow to turn the last equality into , which guarantees the matching between the stream diffusion metric of the described chunk distribution mechanism and the performance bound for each value of . before concluding the description of the case and multiple of , we finally observe that a peer node needs to be part of all the trees in order to properly receive the full stream .this leads to a complex issue which we call the `` tree intertwining problem '' , that is : how nodes should be placed in every tree so that the different role of a node in every considered tree does not lead to sharing the node s upload capacity among the different trees ( and hence to performance impairments with respect to the bound s prediction , or even congestion ) .this can be more easily illustrated through the following example .let us first consider node 5 . in the left ( odd - numbered ) tree ,node 5 is in charge of serving two neighbors , namely 11 and 17 .if node 5 were used by the right ( even - numbered ) tree in place of node 15 , it would also have to forward even - numbered chunks to three additional neighbors , thus breaking the assumption that a node has at most neighbors .the problem is actually more complex , as we can understand by considering the following second case . in the odd - numbered tree ,node 2 has to serve three nodes , namely nodes 5 , 8 , and 14 . 
at a first glance, we might conclude that node 2 can be also used by the even - numbered tree provided that it is placed in a position of the tree that requires the node to serve only a single node . however , this is not the case .in fact , let us assume to replace node 7 in the even - numbered tree with node 2 .this implies that node 2 would be required to deliver an even - numbered chunk to node 24 at every time instant .however , node 2 is required by the left tree to deliver an odd - numbered chunk at instants of time , and .thus , since , node 2 should simultaneously deliver an odd - numbered chunk to node 5 , and an even - numbered chunk to node 24 , which would not allow reaching the bound .unfortunately , the `` intertwining problem '' for unbalanced trees can not be solved by letting interior nodes of a given tree play the role of leaves in the remaining trees is greater than times the number of non - leaf nodes . ] .however , we proved in that i ) the tree - intertwining problem can be solved via exhaustive search for arbitrary and and for any network size for which the bound is attainable , and that ii ) there exists a constructive approach which allows finding one of the many possible solutions without relying on exhaustive search . sincethis proof is complex and it requires significant extra space and technical elaboration , we refer the interested readers to for the details .figure [ f : change_groups ] plots the stream diffusion metric as a function of in a bandwidth scenario , for a single unbalanced tree ( ) , two unbalanced trees ( ) , infinite unbalanced tree ( ) and a single _ balanced _ tree ( and parallel transmissions ) .the first important observation about figure [ f : change_groups ] regards the impact of the number of neighbor nodes on the stream diffusion metric bound .the figure shows that there is a significant improvement when moving from the case of single tree to that of multiple trees . interestingly ( but expected , as the fibonacci constants increase only marginally when becomes large ) ,the advantage in using more than a few trees is limited : this is especially important if an algorithm is designed to mimic the unbalanced multiple tree operation , as complexity ( i.e. signalling burden ) increases with .the second important observation regards the improvement brought about by serializing the transmissions ( and hence unbalanced trees ) with respect to parallel chunk transmissions ( and hence balanced trees ) .the figure shows that the performance improvement is significant : in the case the stream diffusion metric for serial chunk transmissions ( i.e. , the bound ) is one order of magnitude greater than for parallel chunk transmissions at , and three orders of magnitude at . as a function of in a bandwidth scenario , for ( balanced and unbalanced tree ) , ( two unbalanced trees ) , and ( infinite unbalanced trees ) . ]the literature abounds of papers proposing practical and working distribution algorithms for p2p streaming systems ; however very few theoretical works on their performance evaluation have been published up to now . as a matter of fact , due to the lack of basic theoretical results and bounds , common sense and intuitions and heuristicshave driven the design of p2p algorithms so far .the few available theoretical works mostly focus on the flow - based systems , as they have been defined in subsection [ ss : flow ] . 
in such case ,a fluidic approach is typically used to evaluate performance and the bandwidth available on each link plays a limited role with respect to the delay performance , which ultimately depend on the delay characterizing a path between the source node and a generic end - peer .this is the case in and .moreover , there are also other studies that address the issue of how to maximize throughput by using various techniques , such as network coding or pull - based streaming protocol .this work differs from the previously cited ones mainly because it focuses on chunk - based systems , for which discrete - time approaches are most suitable than fluidic approaches .surprisingly enough , according to the best of our knowledge and our literature survey , there is only one work where chunk - based systems are theoretically analyzed . in more detail ,the author of derives a minimum delay bound for p2p video streaming systems , and proposes the so called _ snow - ball _ streaming algorithm to achieve such bound . like the theoretical bound presented in this paper , the bound in , that is expressed in terms of delay in place of stream diffusion metric , can be achieved only in case of serial chunk transmissions and it is equivalent to the one that we found as a particular case when . however , the assumptions under which such bound has been derived in are completely different .in fact , with reference to a network composed of nodes excluding the source node , the proposed _ snow - ball _ algorithm for chunk dissemination requires that i ) the source node serves each one of the network nodes with different chunks , ii ) nodes other than the source serve different neighbors . in other words ,the resulting overlay topology is such that i ) the source node is connected to all the network nodes , ii ) nodes other than the source have overlay neighbors . due to this , our approach may be definitely regarded as significantly different from the one in .differently from , we indeed consider the case of limited overlay connectivity among nodes and we show that organizing nodes in a forest - based topology allows to achieve performance very close to the ones of the snow - ball case .in this paper we derived a theoretical performance bound for chunk - based p2p streaming systems .such bound has been derived in terms of the stream diffusion metric , a performance metric which is directly related to the end - to - end minimum delay achievable in a p2p streaming system . the presented bound for the stream diffusion metric depends on i ) the upload bandwidth available at each node , assumed homogeneous for all nodes , and ii ) the number of neighbors to transmit chunks to .k - step fibonacci sequences play a fundamental role in such a bound .the importance of the presented theoretical bound is twofold : on the one hand , it provides an analytical reference for performance evaluation of chunk - based p2p streaming systems ; on the other hand , it suggests some basic principles , which can be exploited to design real - world applications .in particular , it suggests i ) the serialization of chunk transmissions , and ii ) the organization of chunks in different groups so that chunks in different groups are spread according to different paths .m.castro , p.druschel , a.kermarrec , a.nandi , a.rowston , a.singh , _ splitstream : high - bandwidth multicast in cooperative environments _ , in proc . of the nineteenth acm symposium on operating systems principles , 298 - 313 , 2003 .f. pianese , d. perino , j. 
keller , e. biersack , _ pulse : an adaptive , incentive - based , unstructured p2p live streaming system _ , ieee transactions on multimedia , special issue on content storage and delivery in peer - to - peer networks , volume 9 , n. 6 , 2007 .g. bianchi , n. blefari melazzi , l. bracciale , f. lo piccolo , s. salsano , _ fundamental delay bounds in peer - to - peer chunk - based real - time streaming systems _ , technical report , 2008 ( on line available at netgroup.uniroma2.it/p2p/streaming-tech-rep.pdf ) .
|
this paper addresses the following foundational question : what is the maximum theoretical delay performance achievable by an overlay peer - to - peer streaming system where the streamed content is subdivided into chunks ? as shown in this paper , when posed for chunk - based systems , and as a consequence of the store - and - forward way in which chunks are delivered across the network , this question has a fundamentally different answer with respect to the case of systems where the streamed content is distributed through one or more flows ( sub - streams ) . to circumvent the complexity emerging when directly dealing with delay , we express performance in term of a convenient metric , called `` stream diffusion metric '' . we show that it is directly related to the end - to - end minimum delay achievable in a p2p streaming network . in a homogeneous scenario , we derive a performance bound for such metric , and we show how this bound relates to two fundamental parameters : the upload bandwidth available at each node , and the number of neighbors a node may deliver chunks to . in this bound , k - step fibonacci sequences do emerge , and appear to set the fundamental laws that characterize the optimal operation of chunk - based systems .
|
compressive sampling and multi - coset sampling have drawn a lot of interest from the signal processing community due to the possibility to reconstruct a signal sampled at sub - nyquist rate with no or little information loss under the constraint that the signal is sparse in a particular basis .all these works on sub - nyquist sampling are important especially when it is needed to relax the requirements on the analog - to - digital converters ( adcs ) . for a wide - sense stationary ( wss ) signal, it has also been shown that perfect reconstruction of its second - order statistics from sub - nyquist rate samples is theoretically possible even without sparsity constraint .this invention is important for some applications , such as wideband spectrum sensing for cognitive radio , where only perfect reconstruction of the temporal auto - correlation function is required instead of the signal itself .the principle of reconstructing the temporal auto - correlation function of a signal from the time - domain compressive measurements has in a dual form also been proposed in the spatial domain . given a linear antenna array , and that if the locations of the antennas are arranged according to a nested or coprime array , the spatial correlation values between the outputs of the antennas in the array can be used to generate the spatial correlation values between the outputs of the antennas in the virtual array or difference co - array ( which is uniform in this case ) which generally has more antennas and a larger aperture than the actual array .this enhances the degrees of freedom and allows and to estimate the direction of arrival ( doa ) of more uncorrelated sources than sensors . the minimum redundancy array ( mra ) of also be used to produce this feature but in a more optimal way .this has been exploited by to perform compressive angular power spectrum reconstruction .the advantage offered by the nested and coprime arrays over the mra however , is the possibility to derive a closed - form expression for the array geometry and the achievable number of correlation values in the resulting uniform difference co - array . in the aforementioned concept ,the spatial compression is performed in the sense that we select a subset of antennas from a uniform linear array ( ula ) . 
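the virtual-array idea described above can be made concrete with a short sketch: for a two-level nested array and a coprime array, the set of pairwise position differences (the difference co-array) contains many more distinct lags than there are physical sensors. the geometries and parameter values below are generic textbook-style choices, not values taken from the text.

```python
import numpy as np

def coarray_lags(positions):
    """distinct non-negative differences (in grid units) between sensor positions."""
    positions = np.asarray(positions)
    diffs = np.unique((positions[:, None] - positions[None, :]).ravel())
    return diffs[diffs >= 0]

def contiguous_extent(lags):
    """largest L such that every lag 0..L appears in the co-array."""
    lag_set = set(int(v) for v in lags)
    L = 0
    while L + 1 in lag_set:
        L += 1
    return L

# two-level nested array (n1 inner, n2 outer elements) and a coprime array
# built from the coprime pair (m, n); the parameter values are examples only
n1, n2 = 3, 3
nested = np.concatenate([np.arange(1, n1 + 1), (n1 + 1) * np.arange(1, n2 + 1)])
m, n = 3, 4
coprime = np.unique(np.concatenate([m * np.arange(n), n * np.arange(m)]))

for name, pos in [("nested", nested), ("coprime", coprime)]:
    lags = coarray_lags(pos)
    print(f"{name}: {pos.tolist()} -> {len(pos)} sensors, "
          f"{len(lags)} non-negative lags, contiguous up to lag {contiguous_extent(lags)}")
```

with these example parameters the nested array of 6 sensors yields a hole-free set of 12 non-negative lags, which is the enhancement of degrees of freedom that the text refers to.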
in this paper , we jointly reconstruct both the frequency - domain and angular - domain power spectrum using compressive samples .we use a ula as the underlying array and activate only some of its antennas leading to a spatial - domain compression .the received signal at each active antenna is then sampled at sub - nyquist - rate using multi - coset sampling .next , we compute all the correlation values between the resulting sub - nyquist rate samples at all active antennas both in the time domain and the spatial domain and use them to reconstruct the two - dimensional ( 2d ) power spectrum matrix where each row gives the power spectrum in the frequency domain for a given angle and where each column contains the power spectrum in the angular domain for a given frequency .further , we can estimate the doa of the sources active at each frequency by locating the peaks in the angular power spectrum .this 2d power spectrum reconstruction can be done for more uncorrelated sources than active sensors without any sparsity constraint on the true power spectrum .first , consider a ula having antennas receiving signals from uncorrelated wss sources .we assume that the distance between the sources and the ula is large enough compared to the length of the ula and thus the wave incident on the ula is assumed to be planar and the sources can be assumed as point sources .we also assume that the inverse of the bandwidth of the aggregated incoming signals is larger than the propagation delay across the ula , which allows us to represent the delay between the antennas as a phase shift .based on these assumptions , we can write the ula output as where is the output vector containing the received signal at the antennas of the ula , is the additive white gaussian noise vector , ^t ] is the extended array manifold matrix with the array response vector containing the phase shifts experienced by at each element of the ula .note that is known and might only approximately contain the actual doas of the sources .we generally assume that and are uncorrelated , that the impact of the wireless channel has been taken into account in , and that the noises at different antennas are uncorrelated with variance , i.e. , =\sigma_n^2 { \bf i}_{n_s} ] , where and is the distance between two consecutive antennas in wavelengths , which is set to in order to prevent spatial aliasing . 
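a minimal numerical sketch of the signal model just described (ula output as a superposition of far-field point sources plus spatially white noise) may help fix notation; the phase-sign convention of the steering vector, the example angles and the noise level are assumptions of the sketch, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def array_response(angles_deg, n_antennas, d=0.5):
    """ula steering vectors for the given angles (degrees), spacing d in
    wavelengths; d = 0.5 is the anti-aliasing choice mentioned in the text."""
    theta = np.deg2rad(np.asarray(angles_deg))
    n = np.arange(n_antennas)[:, None]
    return np.exp(-2j * np.pi * d * n * np.sin(theta))

n_antennas, n_snapshots, noise_var = 8, 2000, 0.1
doas = [-20.0, 10.0, 45.0]                       # example source directions
a_mat = array_response(doas, n_antennas)         # (n_antennas, n_sources)

# uncorrelated circular gaussian sources and spatially white noise
s = (rng.standard_normal((len(doas), n_snapshots))
     + 1j * rng.standard_normal((len(doas), n_snapshots))) / np.sqrt(2.0)
noise = np.sqrt(noise_var / 2.0) * (
    rng.standard_normal((n_antennas, n_snapshots))
    + 1j * rng.standard_normal((n_antennas, n_snapshots)))

x = a_mat @ s + noise                            # ula output snapshots
r_x = x @ x.conj().T / n_snapshots               # sample spatial covariance
print(np.allclose(r_x, r_x.conj().T))            # hermitian, as expected
```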
in order to simplify the further analysis, we introduce ={\bf x}(mt) ] , and ={\bf s}(mt) ] at consecutive sample indices into the matrix ] and write ] is similarly defined as ] is given by =[{\bf s}[nn_t],{\bf s}[nn_t+1],\dots,{\bf s}[(n+1)n_t-1]] ] as =[{s}_1[nn_t+i],{s}_2[nn_t+i],\dots,{s}_q[nn_t+i]]^t ] a digital representation of .in this section , we introduce the compression operations on the output matrix ]is then compressed in the spatial - domain by leading to the matrix ={\bf c}_s{\bf x}[n]%={\bf c}_s{\bf a}{\bf s}[n]+{\bf c}_s{\bf n}[n ] \stackrel{\delta}{=}{\bf b}{\bf s}[n]+{\bf m}[n ] \vspace{-1 mm }\label{eq : y_as_x}\ ] ] where =[{\bf y}[nn_t],{\bf y}[nn_t+1],\dots,{\bf y}[(n+1)n_t-1]] ] , ] is given by =\left[{\bf m}[nn_t],{\bf m}[nn_t+1],\dots,{\bf m}[(n+1)n_t-1]\right] ] is the discrete noise vector given by ={\bf c}_s{\bf n}[m] ] generally has correlation matrix {\bf m}^h[m']\right]=\sigma_n^2{\bf i}_{m_s}\delta[m - m'] ] in in the time domain , leading to the matrix ={\bf y}[n]{\bf c}_t^t .\label{eq : yprime_as_y}\ ] ]denote the -th row of ] in as ] , respectively , and write the vector ] and the vector ] .this allows us to rewrite the time - domain compression in in terms of the row vectors of ] , i.e. , ={\bf c}_t{\bf y}_j[n],\quad j=1,2,\dots , m_s .\vspace{-1 mm } \label{eq : y_jprime_as_y_j}\ ] ] using , our next step is to calculate the correlation matrix between ] for all as {\bf z}_j[n]^h\right]= e\left[{\bf c}_t{\bf y}_i[n]{\bf y}_j[n]^h{\bf c}_t^h\right]={\bf c}_t{\bf r}_{y_i , y_j}{\bf c}_t^h .\vspace{-1 mm } \label{eq : r_yi_yj_prime}\ ] ] in practice , the expectation operator in can be estimated by taking an average over available matrices ] in also form a wss sequence .this means that the matrix in has a toeplitz structure allowing us to condense into the vector ,r_{y_i , y_j}[1],\dots , r_{y_i , y_j}[n_t-1],r_{y_i , y_j}[1-n_t],\dots , r_{y_i , y_j}[-1]]^t ] in . by taking into account the fact that every row of ] is a wss sequence and the assumption that the extended source vector ] are uncorrelated , it is straightforward to find that the correlation matrix between ] is given by %&=&e\left[{\bf y}[nn_t+l]{\bf y}^h[nn_t+l']\right]\nonumber \\ % = e\left[({\bf b}{\bf s}[nn_t+l]+{\bf m}[nn_t+l])({\bf b}{\bf s}[nn_t+l']+{\bf m}[nn_t+l'])^h\right]\nonumber \\ % & = & e\left[{\bf b}{\bf s}[nn_t+l]{\bf s}^h[nn_t+l']{\bf b}^h\right]+e\left[{\bf m}[nn_t+l]{\bf m}^h[nn_t+l']\right ] \nonumber \\ = { \bf b}{\bf r}_{{s}}[l - l']{\bf b}^h+\sigma_n^2{\bf i}_{m_s}\delta[l - l']% , % \quad l , l'=0,1,\dots , n_t-1 , \vspace{-1 mm } \label{eq : ry_only}\ ] ] for .since the point sources are assumed to be uncorrelated , the elements of ] is a diagonal matrix . 
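the double compression described above (row selection across antennas, followed by row selection across the time samples of each block) and the block-averaged correlation estimate can be sketched as follows. white data is used in place of the actual ula output purely to exercise the data flow, and the selected row indices and dimensions are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# example sizes only; the paper's dimensions are not fixed here
n_s, m_s = 8, 4            # ula antennas, active antennas
n_t, m_t = 10, 5           # nyquist-rate samples per block, samples kept
n_blocks = 500

# selection-type compression matrices built from rows of identity matrices;
# which rows to keep is arbitrary in this sketch (see the sparse-ruler
# discussion later in the text for how they are actually chosen)
c_s = np.eye(n_s)[[0, 1, 4, 6], :]
c_t = np.eye(n_t)[[0, 1, 2, 5, 8], :]

# white complex data standing in for the ula output, only to exercise the flow
x = (rng.standard_normal((n_s, n_t * n_blocks))
     + 1j * rng.standard_normal((n_s, n_t * n_blocks)))

x_blocks = x.reshape(n_s, n_blocks, n_t).transpose(1, 0, 2)     # (n, n_s, n_t)
y_blocks = np.einsum('si,nit->nst', c_s, x_blocks)              # spatial compression
z_blocks = np.einsum('nst,ut->nsu', y_blocks, c_t)              # temporal compression

# sample estimate of E[z_i[n] z_j[n]^H] for one antenna pair, obtained by
# averaging over the available blocks as described in the text
i, j = 0, 2
r_z_ij = np.einsum('na,nb->ab', z_blocks[:, i, :],
                   z_blocks[:, j, :].conj()) / n_blocks
print(r_z_ij.shape)    # (m_t, m_t); in expectation this equals c_t (.) c_t^H
```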
by exploiting this fact and stacking all columns of the matrix ] , we obtain )%&=&({\bf b}^*\otimes{\bf b})\text{vec}({\bf r}_{s}[l - l'])+\sigma_n^2\text{vec}({\bf i}_{m_s})\delta[l - l ' ] \nonumber \\ = ( { \bf b}^*\odot{\bf b})\text{diag}({\bf r}_{s}[l - l ' ] ) \nonumber \\ & + \sigma_n^2\text{vec}({\bf i}_{m_s})\delta[l - l ' ] , \quad l , l'=0,1,\dots , n_t-1 , \vspace{-1 mm } \label{eq : vec_ry_only}\end{aligned}\ ] ] where represents the khatri - rao product operation .let us now investigate the relationship between the elements of in and ) ] is actually related to as )=[r_{y_1,y_1}[l - l'],r_{y_2,y_1}[l - l'],\dots , r_{y_{m_s},y_{m_s}}[l - l']]^t ] in and then use them to reconstruct )\}_{l , l'=0}^{n_t-1} ] as ),\text{diag}({\bf r}_{s}[1]),\dots,\text{diag}({\bf r}_{s}[n_t-1]), ] , we can observe that the -th row of actually corresponds to the temporal auto - correlation of the incoming signal from the investigated angle , which can be written as ,r_{s_q}[1],\dots , r_{s_q}[n_t-1],r_{s_q}[1-n_t],\dots , r_{s_q}[-1]] ] as , where is the power spectrum vector of the incoming signal from the investigated angle . by combining into the matrix ^t ] in , by solving and using ls and then applying the dft on the rows of the resulting matrix .we now discuss the choice of the selection matrix and the extended array response matrix that ensure the uniqueness of the ls solution of and , respectively .we first investigate the choice of that results in a full column rank matrix .since the rows of and in are formed by selecting the rows of the identity matrix , it is clear that every row of both and only contains a single one and zeros elsewhere .this fact guarantees that each row of has only a single one and thus , in order to ensure the full column rank condition of , we need to ensure that each column of it has at least a single one .this problem actually has been encountered and solved in where the solution is to construct by selecting the rows of based on the so - called minimal length- sparse ruler problem . in practice , this results in a multi - coset sampling procedure called the minimal sparse ruler sampling .next , we examine the choice of , which boils down to the selection of the activated antennas in the ula and the investigated angles .let us write in terms of as \vspace{-1 mm } \label{eq: btilde_as_btilde}\ ] ] and in terms of as ^t \vspace{-1 mm } \label{eq : btilde}\ ] ] where is the distance in wavelengths between the -th _ active _ antenna and the reference antenna of the ula defined in section [ preliminary ] .it is clear from and that the -th column of contains the elements , for .while our task to find general design conditions to guarantee the full column rank of is not trivial , the following theorem suggests one possible way to achieve a full column rank .* theorem 1 * : the matrix has full column rank if : 1 ) there exist distinct values of satisfying , and 2 ) there exists an integer such that contains an arithmetic sequence of terms having a difference of between each two consecutive terms . 
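the least-squares reconstruction via the khatri-rao structure can be illustrated on a toy zero-lag problem: given a compressed manifold and a covariance built from a few on-grid sources, the vectorized covariance is written as the khatri-rao matrix times the vector of angular powers (plus the noise term) and solved by least squares. the antenna selection, the angle grid and the assumption of a known noise power are simplifications of this sketch, not the paper's exact setup.

```python
import numpy as np

def khatri_rao(a, b):
    """column-wise kronecker product: column k of the result is a[:,k] (x) b[:,k]."""
    return np.einsum('ik,jk->ijk', a, b).reshape(a.shape[0] * b.shape[0], -1)

def steering(angles_deg, pos_wavelengths):
    theta = np.deg2rad(np.asarray(angles_deg))
    pos = np.asarray(pos_wavelengths)[:, None]
    return np.exp(-2j * np.pi * pos * np.sin(theta))

# active antennas at indices 0, 1, 4, 6 of a half-wavelength ula (example choice)
pos = 0.5 * np.array([0, 1, 4, 6])
grid = np.linspace(-60.0, 60.0, 9)          # investigated angle grid (example)
b = steering(grid, pos)                     # compressed manifold

# synthetic zero-lag covariance: two sources on the grid plus white noise of
# known power (the noise handling here is a simplification)
p_true = np.zeros(grid.size)
p_true[2], p_true[6] = 1.0, 0.5
sigma2 = 0.1
r_y = (b * p_true) @ b.conj().T + sigma2 * np.eye(pos.size)

# vec(r_y) = (b^* khatri-rao b) p + sigma2 vec(i): solve for the powers p by ls
phi = khatri_rao(b.conj(), b)
rhs = (r_y - sigma2 * np.eye(pos.size)).reshape(-1, order='F')
p_hat, *_ = np.linalg.lstsq(phi, rhs, rcond=None)
print(np.round(p_hat.real, 3))              # recovers p_true under full column rank
```

with this particular antenna selection the difference co-array is hole-free, so the khatri-rao matrix contains a vandermonde submatrix with more distinct rows than grid points and the least-squares solution is unique, in the spirit of the full-rank conditions discussed next.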
the proof of theorem 1 can be found in appendix [ appendix_fullrank_b_kr_b ] .the second condition indicates that there exist distinct rows from that form the array response matrix of a virtual ula with antennas , which can only be achieved for .this second condition also implies that we have more antennas in this virtual ula than investigated angles .some possible ways to satisfy theorem 1 is to select the active antennas from the antennas in the ula based on the mra discussed in ( which also obeys the minimal sparse ruler problem ) , the two - level nested array , or the coprime array . for the mra and the two - level nested array , theorem 1can be satisfied even for .note that although the different values of can be chosen in an arbitrary fashion , they should not be too close to each other , since otherwise the resulting might be ill - conditioned .theorem also implies that the maximum number of detectable sources is upper bounded by since we can not detect more than sources .apart from satisfying theorem 1 , another way to achieve a full column rank is suggested by theorem 2 .* theorem 2 * : the matrix has full column rank if:1 ) has at least different values and 2 ) the grid of investigated angles is designed based on the inverse sinusoidal angular grid where the proof for this theorem can be found in appendix [ appendix_fullrank_b_kr_b_two ] .note that the first condition from theorem 2 is less strict than the second condition from theorem 1 .a good option is to use a configuration satisfying theorem 1 with and , and to use with .this will not only ensure that the resulting matrix has full column rank but also that there exists a submatrix from that forms a row - permuted version of the inverse dft matrix , meaning that is well - conditioned .in this section , we examine the proposed approach with some numerical study . we consider a ula having antennas as the underlying array and construct an mra of active antennas by selecting the antenna indices based on the minimal length- sparse ruler problem discussed in .this leads to activated antennas with where is set to .the set of investigated angles is set according to with . in the receiver branch corresponding to each active antenna ,the time - domain compression rate of is obtained by setting and .we construct the selection matrix by first solving the minimal length- sparse ruler problem which gives the indices of the rows of that have to be selected .the selection of these rows will ensure that the resulting matrix in has at least a single one in each column .the additional rows of are then randomly selected from the remaining rows of that have not been selected .we simulate the case when we have more sources than active antennas by generating uncorrelated sources having doas with 9 degrees of separation , i.e. , the set of doas is given by .the sources produce complex baseband signals whose frequency bands are given in table [ tab : mytabel ] and which are generated by passing circular complex zero - mean gaussian i.i.d .noise with variance into a digital filter of length with the unit - gain passband of the filter for each source set according to table [ tab : mytabel ] .this will ensure that the true auto - correlation sequence for each source is limited to .we assume a spatially and temporally white noise with variance and set the number of measurement matrices ] , which is a submatrix of in , that forms the array response matrix of a virtual ula of antennas with given by ^t ] , with given by ^t$ ] . 
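since the row selections above are driven by minimal sparse rulers, a small brute-force search makes the idea tangible: find the fewest marks on {0..L} whose pairwise differences cover every integer distance 1..L. this is a generic illustration; the lengths searched below are examples, not the paper's dimensions.

```python
from itertools import combinations

def is_sparse_ruler(marks, length):
    """true if every integer distance 1..length is measurable between two marks."""
    dists = {abs(a - b) for a in marks for b in marks}
    return all(d in dists for d in range(1, length + 1))

def minimal_sparse_ruler(length):
    """smallest set of marks in {0..length}, containing 0 and length, whose
    pairwise differences cover all distances 1..length (brute force; fine for
    the small lengths used here)."""
    for n_marks in range(2, length + 2):
        for inner in combinations(range(1, length), n_marks - 2):
            marks = (0,) + inner + (length,)
            if is_sparse_ruler(marks, length):
                return marks
    return None

# rulers of this kind could drive the choice of active antennas (spatial
# selection) or of kept time samples (multi-coset selection)
for L in (6, 10, 13):
    print(L, minimal_sparse_ruler(L))
```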
observe that is a row - wise vandermonde matrix since the elements of are ordered according to geometric progression . in order to ensure that has full column rank , we need distinct values of modulo which is guaranteed by the first requirement of theorem 2 .candes , j. romberg , and t. tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' _ ieee trans .inf . theory _489 - 509 , february 2006 .m. mishali and y. eldar , `` blind multiband signal reconstruction : compressed sensing for analog signals , '' _ ieee trans . signal process .3 , pp . 993 - 1009 , march 2009 .ariananda and g. leus , `` compressive wideband power spectrum estimation , '' _ ieee trans. signal process .9 , pp . 4775 - 4789 , september 2012 .p. pal and p.p .vaidyanathan , `` nested arrays : a novel approach to array processing with enhanced degrees of freedom , '' _ ieee trans . signal process .4167 - 4181 , august 2010 .p. pal and p.p .vaidyanathan , `` coprime sampling and the music algorithm , '' _ proc .ieee digital signal process . and signal process .workshop _ , sedona ,arizona , pp .289 - 294 , january 2011 .a. moffet , `` minimum - redundancy linear arrays , '' _ ieee trans .antennas propag .172 - 175 , march 1968 .s. shakeri , d.d .ariananda and g. leus , `` direction of arrival estimation using sparse ruler array design , '' _ proc .ieee workshop signal process . adv .wireless commun ._ , cesme , turkey , june 2012 .
|
we introduce a new compressive power spectrum estimation approach in both frequency and direction of arrival ( doa ) . wide - sense stationary signals produced by multiple uncorrelated sources are compressed in both the time and spatial domain where the latter compression is implemented by activating only some of the antennas in the underlying uniform linear array ( ula ) . we sample the received signal at every active antenna at sub - nyquist rate , compute both the temporal and spatial correlation functions between the sub - nyquist rate samples , and apply least squares to reconstruct the full - blown two - dimensional power spectrum matrix where the rows and columns correspond to the frequencies and the angles , respectively . this is possible under the full column rank condition of the system matrices and without applying any sparsity constraint on the signal statistics . further , we can estimate the doas of the sources by locating the peaks of the angular power spectrum . we can theoretically estimate the frequency bands and the doas of more uncorrelated sources than active sensors using sub - nyquist sampling .
|
as first noted by , asteroids can form prominent groupings in the space of orbital elements .these groups , nowadays well - known as _ asteroid families _ , are believed to have resulted from catastrophic collisions among asteroids , which lead to the ejection of fragments into nearby heliocentric orbits , with relative velocities much lower than their orbital speeds . to date, several tens of families have been discovered across the whole asteroid main belt ( e.g. * ? ? ?* ; * ? ? ?also , families have been identified among the trojans , and most recently , proposed to exist in the transneptunian region .studies of asteroid families are very important for planetary science .families can be used , e.g. to understand the collisional history of the asteroid main belt , the outcomes of disruption events over a size range inaccessible to laboratory experiments ( e.g. * ? ? ?* ; * ? ? ?* ) , to understand the mineralogical structure of their parent bodies ( e.g. * ? ? ?* ) and the effects of related dust `` showers '' on the earth .obtaining the relevant information is , however , not easy .one of the main complications arises from the fact that the age of a family is , in general , unknown .thus , accurate dating of asteroid families is an important issue in the asteroid science .a number of age - determination methods have been proposed so far .probably the most accurate procedure , particularly suited for young families ( i.e. age ) , is to integrate the orbits of the family members backwards in time , until the orbital orientation angles cluster around some value .as such a conjunction of the orbital elements can occur only immediately after the disruption of the parent body , the time of conjunction indicates the formation time .the method was successfully applied by to estimate the ages of the karin cluster ( 5.8.2 myr ) and of the veritas family ( 8.3.5 myr ) .this method is however limited to groups of objects residing on regular orbits . for older families ( i.e. age ) , one can make use of the fact that asteroids slowly spread in semi - major axis due to the action of yarkovsky thermal forces .as small bodies drift faster than large bodies , the distribution of family members in the plane where is the proper semi - major axis and is the absolute magnitude can be used as a clock .that method was used by to estimate ages of many asteroid families . in these estimationsthe initial sizes of the families were neglected , so that this methodology can overestimate the real age by a factor of as much as 1.5 -2 .an improved version of this method , which accounts for the initial ejection velocity field and the action of yorp thermal torques , has been successfully applied to several families by and .again , it is not straightforward to apply this method to families located in the chaotic regions of the asteroid belt . suggested that asteroid families , which reside in chaotic zones , can be approximately dated by _chaotic chronology_. this method is based on the fact that the age of the family can not be greater than the time needed for its most chaotic members to escape from the family region . in its original form , this method provides only an upper bound for the age .recently , introduced an improved version of this method , based on a statistical description of transport in the phase space . applied it to the family of ( 490 ) veritas , finding an age of 8.7.7 myr , which is statistically the same as that of 8.3.5 myr , obtained by . 
despite these improvements , the chaotic chronology still suffered from two important limitationsit did not account for the variations in diffusion in different parts of a chaotic zone , which can significantly alter the distribution of family members ( i.e.the shape of the family ) .moreover , it did not account for yarkovsky / yorp effects , thus being inadequate for the study of older families .in this paper we extend the chaotic chronology method , by constructing a more advanced transport model , which alleviates the above limitations .we first use the veritas family as benchmark , since its age can be considered well - defined .local diffusion coefficients are numerically computed , throughout the region of proper elements occupied by the family .these local coefficients characterize the efficiency of chaotic transport at different locations within the considered zone .a monte - carlo - type model is then constructed , in analogy to the one used by .the novelty of the present model is that it assumes variable transport coefficients , as well as a drift in semi - major axis due to yarkovsky / yorp effects , although the latter is ignored when studying the veritas family .applying our model to veritas , we find that both ( a ) the shape of its chaotic component and ( b ) its age are correctly recovered .we then apply our model to the family of ( 3556 ) lixiaohua , another outer - belt family but much older than veritas and hence much more affected by the yarkovsky / yorp thermal effects .we find the age of the lixiaohua family to be myr .we note that , depending on the variability of diffusion coefficients in the considered region of proper elements , this new transport model can be computationally much more expensive than the one applied in .this is because , if the values of the diffusion coefficients vary a lot across the considered region , one would have to calculate them in many different points .however , even so , this computation needs to be performed only once .then , the random - walk model can be used to perform multiple runs at very low cost , e.g. to test different hypotheses about the original ejection velocities field or about the physical properties of the asteroids .on the other hand , for smooth " diffusion regions in which the coefficients only change by a factor of 2 - 3 across the considered domain , the model can be simplified . in such regions , the age of a familycan be accurately determined even by assuming an average ( i.e. 
constant over the entire region ) diffusion coefficient , as we show in section 3 .our study begins by selecting the target phase - space region .this is done by identifying the members of an asteroid family crossed by resonances , from a catalog of numbered asteroids .apart from the largest hirayama families , for the other smaller and more compact ones , in the current catalog one typically finds up to several hundred members .thus the chaotic component of the family consists of a few tens to a few hundreds of asteroids .although this may be adequate to compute the average values of the diffusion coefficients ( as in tsiganis et al .2007 ) , a detailed investigation of the local diffusion characteristics requires a much larger sample of bodies .the latter can be obtained by adding in the fictitious bodies , selected in such a way that they occupy the same region of the proper elements space as the real family members .since we wish to study just these local diffusion properties and the effect of the use of variable coefficients in our chronology method , we are going to follow here this strategy .the 3-d space of proper elements , occupied by the selected family , is divided into a number of cells .then , in each cell , the diffusion coefficients are calculated for both relevant action variables , namely and ( denotes jupiter s semi - major axis , the proper eccentricity and the proper inclination of the asteroid ) .this is done by calculating the time evolution of the mean squared displacement ( i=1,2 ) in each action , the average taken over the set of bodies ( real or fictitious ) that reside in this cell .the diffusion coefficient is then defined as the least - squares - fit slope of the curve , while the formal error is computed as in .the simulation of the spreading of family members in the space of proper actions and the determination of the age of the family is done using a markov chain monte carlo ( mcmc ) technique ( e.g. * ? ? ?* ; * ? ? ?at each step in the simulation the _ random walkers _ can change their position in all three directions , i.e. the proper semi - major axis and the two actions and .although no macroscopic diffusion occurs in proper semi - major axis , the random walker can change its value due to the yarkovsky effect , while the changes in and are controlled by the local values of the diffusion coefficients . in the case of normal diffusionthe transport properties in action space are determined by the solution of the fokker - planck equation ( see ) .the mcmc method is in fact equivalent to solving a discretized 2-d fokker - planck equation with variable coefficients , combined here with a 1-d equation for the yarkovsky - induced displacement in .the latter acts as a _ drift _ term , contributing to the variability of diffusion in and .the rate of change of due to the yarkovsky thermal force , is given by the following equation ( e.g. * ? ? ?* ; * ? ? ?* ) : where the coefficients and depend on parameters that describe physical and thermal characteristics of the asteroid and denotes the obliquity of the body s spin axis . 
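a schematic python version of one random-walk step may clarify how the drift in proper semi-major axis couples to the diffusive jumps in the other proper elements. the drift law below is only a stand-in with a cos(obliquity)-dominated shape and an inverse-radius scaling (the scaling is quoted in the next paragraph), with placeholder amplitudes rather than the paper's coefficients; the yorp evolution of the spin state is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical |da/dt| scale for a 1 km body, in au/yr; placeholder value only
K_DRIFT_1KM = 3.0e-10

def yarkovsky_drift(radius_km, obliquity_rad):
    """schematic da/dt: a diurnal-like term ~cos(eps) plus a weaker
    seasonal-like term ~sin(eps)^2, both scaling as 1/radius; this is not the
    paper's exact thermal model."""
    return (K_DRIFT_1KM / radius_km) * (np.cos(obliquity_rad)
                                        - 0.1 * np.sin(obliquity_rad) ** 2)

def random_walk_step(state, dt_yr, d_e, d_i):
    """one step of the transport model: deterministic yarkovsky drift in proper
    a and gaussian jumps in proper e and sin(i).  d_e and d_i are read here as
    the slope of the mean squared displacement, so the rms jump per step is
    sqrt(d * dt); whether a factor 2 enters depends on the convention adopted."""
    a, e, sin_i, radius_km, eps = state
    a += yarkovsky_drift(radius_km, eps) * dt_yr
    e += np.sqrt(d_e * dt_yr) * rng.standard_normal()
    sin_i += np.sqrt(d_i * dt_yr) * rng.standard_normal()
    return a, e, sin_i, radius_km, eps

state = (3.174, 0.065, 0.16, 2.0, np.deg2rad(45.0))   # illustrative walker
for _ in range(1000):
    state = random_walk_step(state, dt_yr=3.0e3, d_e=1.3e-10, d_i=1.2e-10)
print(state)
```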
for km - sized asteroids ,the drift rate is inversely proportional to their radius .this simplified yarkovsky model assumes that the asteroid follows a circular orbit , and thus linear analysis can be used to describe heat diffusion across the asteroid s surface .the obliquity of the spin axis and the angular velocity of rotation ( ) of the asteroid are subject to thermal torques ( yorp ) that change their values with time , according to the following equations : ( e.g. * ? ? ?* ; * ? ? ?* ) , where the functions and describe the mean strength of the yorp torque and depend on the asteroid s surface thermal conductivity .the length of the jump in that a random - walker undertakes at each time - step in the mcmc simulation , is determined by equations ( 1)-(2 ) , in their discretized form .of course , a set of values of the physical parameters must be assigned to each body . as the majority of the veritas family members are of c - type , while the lixiaohua family members seem to be c / x - type , the following values for these parameters adopted : thermal conductivity [ w(m k) , specific heat capacity [j(kkg) , and the same value for surface and bulk density [kgm . in ,the geometric albedos ( ) of several veritas and lixiaohua family members are listed , yielding a mean for veritas and for lixiaohua .the rotation period , , is chosen randomly from a gaussian distribution peaked at , while the distribution of initial obliquities , , is assumed to be uniform . to assign the appropriate values of absolute magnitude to each body, we need to have an estimate of the cumulative distribution of family members .a power - law approximation is used ( e.g. * ? ? ?* ) where depends on the considered interval for ; e.g. for the veritas family , we find for [11.5,13.5 ] and for [13.5,15.5 ] . having the values of and ,the radius of a body can be estimated , using the relation ( e.g. * ? ? ?* ) at each time - step in the mcmc simulation , a random - walker suffers a jump in and , whose length is given by = ( ) , where is a random number from a gaussian distribution .since the values of the diffusion coefficients vary in space , the maximum allowable jump , for a given , changes from cell to cell . in our simulations , the values of and used for each body , are given by : where and denote the distances of the random - walker from the two nearest nodes ( left and right ) and and denote the corresponding values of the diffusion coefficients at these nodes.= could be used instead of an arithmetic one ; we actually found negligible differences . ] for a correct determination of the age of the family , the random walkers have to be placed initially in a region , whose size is as close as possible to the size that the real family members occupied , immediately after the family - forming event .this is in fact a source of uncertainty for our model . in our calculations we assumed the initial spread of the family in ( ) and ( ) to be accurately represented by a gaussian equivelocity ellipse ( see morbidelli et al .1995 ) , computed such that ( i ) the spread in of the whole family and ( ii ) the spread in and of family members that follow regular orbits is well reproduced .in this section we use our model to study the evolution of the chaotic component of two outer - belt asteroid families : ( 490 ) veritas and ( 3556 ) lixiaohua . in both cases , a number of mean motion resonances ( mmr ) cut - through the family , such that a significant fraction of members follow chaotic trajectories . 
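two of the auxiliary ingredients mentioned above can be written compactly: converting absolute magnitude and geometric albedo to a radius, and interpolating the local diffusion coefficient between the two nearest grid nodes. the text's own conversion relation is not reproduced there, so the commonly used form d[km] = 1329 / sqrt(p_v) * 10^(-h/5) is assumed here; the interpolation shown is a plain distance-weighted (linear) mean, with the geometric mean as the alternative the text mentions.

```python
import numpy as np

def radius_km(abs_mag_h, geometric_albedo):
    """radius from absolute magnitude and albedo via the standard conversion
    d[km] = 1329 / sqrt(p_v) * 10**(-h/5); assumed, not quoted from the text."""
    return 0.5 * 1329.0 / np.sqrt(geometric_albedo) * 10.0 ** (-abs_mag_h / 5.0)

def local_coefficient(a, grid_nodes, d_at_nodes):
    """distance-weighted (linear) interpolation of a diffusion coefficient
    between the two nearest nodes in proper semi-major axis."""
    return np.interp(a, np.asarray(grid_nodes), np.asarray(d_at_nodes))

print(radius_km(13.5, 0.07))                 # ~5 km for a dark body with h = 13.5

nodes = [3.172, 3.173, 3.174, 3.175, 3.176]  # illustrative grid in au
d_e = [0.7e-10, 1.0e-10, 1.3e-10, 1.0e-10, 0.8e-10]
print(local_coefficient(3.1735, nodes, d_e))
```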
on the other hand ,their ages differ significantly , according to previous estimates . in this respect ,the yarkovsky effect can be neglected in the study of veritas , but not in the study of lixiaohua .we begin by performing an extensive study of the local diffusion properties in the chaotic region of the veritas family .then , the mcmc model is used to simulate the evolution of the chaotic members and to derive an estimate of the age of the family .the results are compared to the ones given by the model of .finally , we apply the mcmc model to the lixiaohua family and derive estimates of its age , for different values of the yarkovsky - related physical parameters .the veritas family is a comparatively small and compact outer - belt family , spectroscopically different from the background population of asteroids . in terms of dynamics, it occupies a very interesting and complex region , crossed by several mean motion resonances .application of the hierarchical clustering method ( hcm ) to the astdys catalog of synthetic proper elements ( numbered asteroids http://hamilton.unipi.it/astdys as of december 2007 ) , yields 409 family members , for a velocity cut - off of m s as in .although the family appears now to extend beyond ( see fig .[ fig01 ] ) , the main dynamical groups remain practically the same ( see tsiganis et al .2007 , for a detailed description of the groups ) .since the scope of this paper is to present a refined transport model , we will briefly describe here only the main relevant features , referring to a forthcoming paper for a renewed analysis of the veritas family itself .the main chaotic zone , where appreciable diffusion in proper elements is observed , is located around ( fig . 1 ) andis associated with the action of the ( 5,-2,-2 ) three - body mean motion resonance ( mmr ) ; see fig .[ fig02 ] for the typical short - term evolution of such a resonant asteroid .the family members that reside in this resonance can disperse over the observed range in and on a myr time - scale .this is exactly the group of bodies ( group a ) that was used by tsiganis et al .( 2007 ) , to compute the age of veritas . as the number of bodies in group a is small, we need to generate a uniform distribution of fictitious bodies , in order to compute local diffusion coefficients across the observed range in ( ) . for this reasonwe start by selecting initial conditions ( fictitious bodies ) , covering the same region as the real veritas family members , in the space of osculating elements .we note that the actual number of bodies used in the calculations of the coefficients is much smaller than that ( see below ) .the orbits of the fictitious bodies are integrated for a time - span of 10 myr , using the orbit9 integrator ( version 9e ) , in a model that includes the four major planets ( jupiter to neptune ) as perturbing bodies .the indirect effect of the inner planets is accounted for by applying a barycentric correction to the initial conditions .this model is adequate for studying outer - belt asteroids .note that the integration time used here is in fact longer than the known age of the veritas family .this is done in order to study the convergence of the computation of the diffusion coefficients , with respect to the integration time - span . 
for each bodymean elements are computed on - line , by applying digital filtering , and proper elements are subsequently computed according to the analytical theory of .synthetic proper elements are also calculated , for comparison and control .since the mapping from osculating to proper elements is not linear , the distribution of the fictitious bodies in the space of proper elements is not uniform , which can complicate the statistics . a smaller sample of bodies , with practically uniform distribution in proper elements ,is therefore chosen .thus , our statistical sample , on which all computations are based , is in fact times larger than the actual population of the family . as explained in the previous section , we computed local diffusion coefficients , by dividing the space occupied by the veritas family in a number of cells .our preliminary experiments suggested that , while a large number of cells is needed to accurately represent the dependence of the coefficients on , the same is not true for and , except for the wide chaotic zone of the ( ) mmr .thus , we decided to follow the strategy of using a large number of cells in and a small number of cells in and , except in the ( 5,-2,-2 ) region .the efficiency of the computation is improved if we use a moving - average technique ( i.e. overlapping cells ) , instead of a large number of static cells , because in the latter case we would need a significantly larger number of fictitious bodies .we selected the size of a cell in each dimension as well as an appropriate step - size , by which we shift the cell through the family , as follows : for , the cell - size was au and the step - size ; for the cell - size was and the step - size ; finally , for , the cell - size was and the step - size .thus , the total number of ( overlapping ) cells used in our computations was .the time evolution of the mean squared displacement in and is shown in fig .[ fig03 ] , for a representative cell in the ( 5,-2,-2 ) resonance .the evolution is basically linear in time , as it should be for normal diffusion .the slope of the fitted line defines the value of the local diffusion coefficient .when performing such computations , one needs to know ( i ) what is the shortest possible integration time - span , and ( ii ) what is the smallest possible number of fictitious bodies per cell , for which reliable values of the coefficients can be obtained . for several different groups of fictitious bodies ( i.e. different cells ) , we calculated the diffusion coefficients using different values of the integration time - span , between 1 and 10 myr .our results suggest , that an integration time of is sufficient to obtain reliable values , as shown in fig .[ fig04 ] .saturation time _ is about half the known age of the veritas family .hence , for this case , computing diffusion coefficients is practically as expensive as studying the evolution of the family by long - term integrations .however , the saturation time is related to the resonance in question and not to the age of the family , which could be much longer .thus , as a matter of principle , the computational gain can become important when dealing with much older families .resonances of similar order are characterized by similar lyapunov and diffusion times ( see ) , and thus similar computation time - spans ( i.e. a few myr ) should be used , for various resonances throughout the belt .the dependence of the diffusion coefficients on the number of bodies considered in each cell was also tested . 
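the coefficient extraction itself is a straightforward least-squares fit of the ensemble mean squared displacement against time. the sketch below uses synthetic gaussian increments, so the true slope is known by construction, in place of the displacements obtained from the orbit integrations.

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic ensemble standing in for the fictitious bodies of one cell:
# independent gaussian increments, so the true msd slope is known exactly
n_bodies, n_steps, dt_myr = 200, 500, 0.01
d_true = 2.0e-4                                  # <(delta J)^2> = d_true * t
steps = np.sqrt(d_true * dt_myr) * rng.standard_normal((n_bodies, n_steps))
displacement = np.cumsum(steps, axis=1)          # J(t) - J(0) for each body

t = dt_myr * np.arange(1, n_steps + 1)
msd = np.mean(displacement ** 2, axis=0)         # ensemble mean squared displacement

# the diffusion coefficient is the least-squares slope of msd versus time
slope, intercept = np.polyfit(t, msd, 1)
print(f"recovered d = {slope:.3e}  (input {d_true:.3e})")
```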
for a number of different cells , we calculated the coefficients , using from 10 to 100 bodies for the computation of the corresponding averages .our results suggest that at least bodies per cell are needed , for an accurate computation .the values of the diffusion coefficients in and , along with their formal errors , are given as functions of in fig.[fig05 ] .the largest diffusion rate is measured in the ( 5,-2,-2 ) mmr , which cuts through the family at . both coefficients increase as the center of the resonanceis approached , but show local minima at .174 au , which is approximately the location of the center .the maximum values are = ( 1.28) yr , and = ( 1.36) yr , while the values at the local minima are = ( 0.73) yr , and = ( 1.16) yr .this form of dependence in is in agreement with the results of , where it was shown that the dynamics in this resonance are similar to those of a modulated pendulum ( see also ) , with an island of regular motion persisting at the center of the resonance .however , the size of the island decreases with decreasing , so that the diffusion coefficients may decrease as the resonance center is approached , but do not go to zero . as also seen in fig .[ fig05 ] , the ( 5,-2,-2 ) is by far the most important resonance in the veritas region , associated with the widest chaotic zone .bodies inside this resonance exhibit a complex behaviour , as already noted by .most resonant bodies show oscillations in proper semi - major axis around = 3.174 au , but some are temporarily trapped near the resonance s borders ( see also fig .[ fig02 ] ) , at = 3.172 au or = 3.1755 au .this `` stickiness '' can be important , as it can affect the diffusion rate .in fact , we find that the values of these bodies shift towards the resonance borders , where slower diffusion rates are also measured .the diffusion properties are quite different at the ( 3,3,-2 ) ( located at ) and ( 7,-7,-2 ) ( at ) mmrs , which are of higher order in eccentricity with respect to the ( 5,-2,-2 ) mmr ( see nesvorn & morbidelli 1999 ) .the values of for the ( 3,3,-2 ) resonance ( see fig . [fig05]a ) are also increasing as the center of the resonance is approached , but no local minimum is seen near the center of the resonance , at least in this resolution .the maximum values are only = ( 8.30) yr and = ( 0.22) yr , clearly much smaller than in the ( 5,-2,-2 ) mmr .note that has practically zero value .the region of the ( 7,-7,-2 ) resonance is even less exciting . is almost constant across the resonance , with a very small value ( 4.1 yr ) , and is practically zero .the above results suggest that the ( 5,-2,-2 ) mmr is essentially the only resonance in the veritas region characterized by appreciable macroscopic diffusion .we now focus on the variation of the diffusion coefficients with respect to along this resonance . as shown in fig .[ fig06 ] , the dependence of the diffusion rate on the initial values of the actions and and then translated into proper elements space . 
]is very complex .the values of vary from ( 0.60) yr to ( 1.66) yr while , for , they vary from ( 0.63) yr to ( 2.31) yr .consequently , chaotic diffusion along this resonance can in principle produce asymmetric `` tails '' in the distribution of group - a members .note , however , that the coefficients only vary by a factor of and that their average values are essentially the same as in .we can now use the mcmc method to simulate the evolution and determine the age of the veritas family , assuming that all the dynamically distinct groups originated from a single brake - up event .a set of six values ( ,,,,, ) is assigned to each random walker in the simulation .all bodies are initially distributed uniformly inside a region of predefined size in and semi - major axes in the range [ 3.172 , 3.176 ] .the age of the family , ( ) , is defined as the time needed for of the random walkers to leave an ellipse in the ( ) plane , corresponding to a 3- confidence interval of a 2-d gaussian distribution .we note that , for veritas , the mobility in semi - major axis due to yarkovsky is very small and practically insignificant for what concerns the estimation of its age , since the family is young and distant from the sun . for older familiesone should also define appropriate borders in . using the values of the coefficients obtained above , and the values of =() , =() , calculated from the distribution of the real group a members , we simulate the spreading of group a and estimate its age .of course , the model depends on some free parameters : the initial spread of the group in ( , ) , the time - step , , and the number of random - walkers , .therefore , the dependence of the age , , on these parameters was checked .uncertainties in the values of the current borders of the group ( i.e. the confidence ellipse ) and the values of the diffusion coefficients were taken into account , when calculating the formal error in .different sets of simulations were performed , the results of which are given in fig .[ fig07](a)-(d ) .each `` simulation '' ( i.e. each point in a plot ) actually consists of 100 different realizations ( runs ) of the mcmc code . in each run , the values of and ( ) were varied , according to the previously computed distributions of their values .the values of the free parameters were the same for all runs in a given simulation .the first set of simulations was performed in order to check how the results depend on the time step , .five simulations were made , with ranging from 1000 to 5000 yr ( fig .[ fig07]a ) .the standard deviation of is relatively small , suggesting that is roughly independent of .according to this set of simulations , the age of the family is , where is the mean value and the standard error of the mean .this estimate is in excellent agreement with those of and .the second set of simulations was performed in order to check how the results depend on the number of random walkers , . 
as shown in fig .[ fig07]b ) , is weakly dependent of , the variation of the mean value of is slightly larger than in the previous case .the age of the family , according to this set , is = 8.6.3 myr .the third group of simulations was performed in order to check how the results depend on the assumed initial spread of group a in , a parameter which is poorly constrained from the respective equivelocity ellipse in fig .[ fig01 ] .we fixed the value of to 2.3 , which can be considered an upper limit , according to fig .( 1), ) , large enough to encompass both the regular part of the family and the ( 3,3,-2 ) bodies , is a better constraint . ] .six simulations were performed , with ranging from 3.5 to 11.0 , and the results are shown in fig .[ fig07]c .the values of tend to decrease as increases .this is to be expected , since increasing the initial spread of the family , while targeting for the same final spread , should take a shorter time for a given diffusion rate .the results yield = 8.8.1 myr . as a final check, we performed a set of simulations with = 2.3 and = 11.0 .these values correspond to equivelocity ellipses that contain almost all regular and ( 3,3,-2)-resonant family members , except for the very low inclination bodies ( ) .five sets of runs , for five different values of , were performed ( see fig .[ fig07]d ) . as in our first set of simulations , is practically independent of . on the other hand , turns out to be smaller than in the previous simulations , since the assumed values for and are quite large .even so , we find = 7.6.1 myr , which is still an acceptable value .combining the results of the first three sets of simulations and taking into account all uncertainties , we find an age estimate of = 8.7.2 myr for the veritas family .this result is very close to the one found by , the error though being smaller by .in addition to the determination of the family s age , we would like to know how well the mcmc model reproduces the evolution of the spread of group - a bodies , in the ( , ) space .for this purpose we compared the evolution of group a for in the future , as given ( i ) by direct numerical integration of the orbits , and ( ii ) by an mcmc simulation with variable diffusion coefficients .figures [ fig08](a)-(b ) show the outcome of this comparison . as shown in fig .[ fig08](a ) , the random - walkers of the mcmc simulation ( triangles ) practically cover the same region in ( , ) as the real group - a members ( circles ) .moreover , the time evolution of the ratio of the standard deviations , which characterizes the shape of the distribution , is reproduced quite well , as shown in fig .[ fig08](b ) .an additional mcmc simulation with constant ( i.e. average with respect to and ) coefficients of diffusion was also performed .as shown in fig .[ fig08](b ) , the value of in this simulation appears to slowly deviate from the one measured in the previous mcmc simulation , as time progresses .however , this deviation is not very large , also compared to the result of the numerical integration .thus , we conclude that an mcmc model with constant coefficients is adequate for deriving a reasonably accurate estimate of the age of a family , provided that the variations of the local diffusion coefficients are not very large . given this result ,we decided to use average coefficients for the lixiaohua case .note also that , in the veritas case , the observed deviation in between the two mcmc models is reflected in the error of ( i.e. 
1.7 myr vs.1.2 myr ) .the lixiaohua family is another typical outer - belt family , crossed by several mmrs .this results into a significant component of family members that follow chaotic trajectories .at the same time , a clear ` v'-shaped distribution is observed in the plane ( see fig .[ fig09 ] ) , suggesting that the family is old - enough for yarkovsky to have significantly altered its size in . in this way estimated the age of this family to .thus , we choose to study the lixiaohua family because , on one hand , it is relatively old , so yarkovsky / yorp effects are important , but , on the other hand , it has the feature we need ( i.e. a significant chaotic zone ) to test the behaviour of our model on longer time scales . here, we use our mcmc method to derive a more accurate estimate of its age , taking into account also the yarkovsky / yorp effects . the distribution of the family members in and is shown in fig .[ fig09 ] . using velocity cut - off m s we find 263 bodies ( database as of february 2009 ) linked to the family .the shape of this family is , as in the veritas case , intriguing . for , the family appears to better fit inside the equivelocity ellipse shown in the figure , with only a few bodies showing a significant excursion in and . on the other hand , for ,the family members occupy a wider area in and . throughout the family regionwe find thin , `` vertical '' , strips of chaotic bodies , with lyapunov times .these strips are associated to different mean motion resonances .the most important chaotic domain ( hereafter main chaotic zone , mcz ) is the one centered around ; indicated by the grey - shaded area in both plots of fig .[ fig09 ] .a number of two- and three - body mmrs can be associated to the formation of the mcz , such as the 17:8 mmr with jupiter and the ( 7 , 9 , -5 ) three - body mmr .note that the two largest members of this family , ( 3330 ) gantrisch and ( 5900 ) jensen , are located just outside the mcz , as indicated by their larger values of ( ) .in fact , a significant group of bodies just outside the mcz ( see fig . [fig09]a ) has higher values of but similar spread in as the mcz bodies .this suggests that bodies around the chaotic zone could have once resided therein , evolving towards high / low values of by chaotic diffusion .numerical integrations of the orbits of selected lixiaohua members for 100 myr indeed confirmed that bodies could enter ( or leave ) the mcz .we believe that the distribution of family members on either side of the mcz is strongly indicative of an interplay between yarkovsky drift in semi - major axis and chaotic diffusion in and , induced by the overlapping resonances ; bodies can be forced to cross the mcz , thus receiving a `` kick '' in and , before exiting on the other side of the zone .a population of fictitious bodies was selected and used for calculation of the local diffusion coefficients . here , we restrict ourselves in calculating coefficients as functions of only ( i.e. averaged in and ) . as shown in fig.[fig10 ] , there are several diffusive zones , corresponding to the low- strips of fig .[ fig09 ] .however , as in the veritas case , only one zone appears diffusive in both actions ; is practically zero everywhere outside the mcz ( ) , and significant dispersion in proper elements is observed only in this zone . 
given the above results , we conclude that the mcz family members can be used to estimate the age of the family , much like the veritas ( 5,-2,-2 ) resonant bodies .given the fact that random - walkers can drift in , the way of computing the age is accordingly modified .a large number of random walkers , uniformly distributed across the whole family region ( i.e. the equivelocity ellipses ) is used .the simulation again stops when of mcz - bodies are found to be outside the observed borders of the family .however , the number of mcz bodies is not constant during the simulation , because bodies initially outside ( resp .inside ) the mcz can enter ( resp .leave ) that region .thus , the aforementioned percentage is calculated with respect to the corresponding number of mcz bodies at each time - step .the size of the mcz in the space of proper actions is given by and . in order to compute the age of the family we performed 2400 mcmc runs .this was repeated three times , for three different values of thermal conductivity ( see above ) and once more , neglecting the yarkovsky effect .given the uncertainty in determining the initial size of the family , we repeated the computations for two more sets of .the results of these computations are presented in fig .[ fig11 ] .we find an upper limit of myr for the age of the family and a lower limit of .taking into account only the runs performed for our `` nominal '' initial size and including the yarkovsky effect , we find the age of lixiaohua family to be myr .this value lies towards the lower end but comfortably within the range ( ) given by .we note that , when the yarkovsky effect is taken into account , the age of the family turns out to be longer by ( see also fig.[fig11 ] ) .this is a purely dynamical effect , related to the fact that bodies can drift towards the mcz from the adjacent non - diffusive regions .however , as more bodies enter the mcz near its center in and , it takes longer for 0.3% of random walkers to diffuse outside the confidence ellipse in . 
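the stopping rule behind these age estimates (the time at which a chosen fraction of walkers lies outside the 3-sigma ellipse of the observed family) can be prototyped as below. the numbers are illustrative only, the walkers start from a point rather than from an equivelocity ellipse, and the drift in semi-major axis is left out, so the printed value is a demonstration of the rule and not a calibrated age.

```python
import numpy as np

rng = np.random.default_rng(5)

def escape_time(d_e, d_i, sigma_e, sigma_i, frac_out=0.003,
                n_walkers=5000, dt_yr=1.0e4, t_max_yr=5.0e7):
    """evolve random walkers with gaussian jumps in (e, sin i) and return the
    time at which the chosen fraction of them lies outside the 3-sigma ellipse
    of the observed family.  the escape fraction, time step and walker number
    are free parameters of the method, as discussed in the text."""
    e = np.zeros(n_walkers)
    sin_i = np.zeros(n_walkers)
    t = 0.0
    while t < t_max_yr:
        t += dt_yr
        e += np.sqrt(d_e * dt_yr) * rng.standard_normal(n_walkers)
        sin_i += np.sqrt(d_i * dt_yr) * rng.standard_normal(n_walkers)
        outside = (e / (3.0 * sigma_e)) ** 2 + (sin_i / (3.0 * sigma_i)) ** 2 > 1.0
        if outside.mean() >= frac_out:
            return t
    return np.inf

# purely illustrative numbers (coefficients per year, dispersions dimensionless)
t_esc = escape_time(d_e=1.0e-10, d_i=1.0e-10, sigma_e=6.5e-3, sigma_i=4.0e-3)
print(f"escape time ~ {t_esc / 1.0e6:.2f} myr")
```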
at the same time, bodies that are initially inside the mcz can also drift outside , to lower / higher values of , thus slowing down or even stop diffusing in and .this also explains the large spread in observed for family members located just outside the mcz .we have presented here a refined statistical model for asteroid transport , which accounts for the local structure of the phase - space , by using variable diffusion coefficients .also , the model takes into account the long - term drift in semi - major axis of asteroids , induced by the yarkovsky / yorp effects .this model can be applied to simulate the evolution of asteroid families , also giving rise to an advanced version of the `` chaotic chronology '' method for the determination of the age of asteroid families .we applied our model to the veritas family , whose age is well constrained from previous works .this allowed us to assess the quality and to calibrate our model .we first analyzed the local diffusion characteristics in the region of veritas .our results showed that local diffusion coefficients vary by about a factor of across the region covered by the ( 5,-2,-2 ) mmr .thus , although local coefficients are needed to accurately model ( by the mcmc method ) the evolution of the distribution of group - a members , average coefficients are enough for a reasonably accurate estimation of the family s age .we note though that the variable coefficients model reduces the error in by , but requires a computationally expensive procedure . using the variable coefficients mcmc model, we found the age of the veritas family to be = ( 8.7.2 ) myr ; a result in very good agreement with that of and .we used our model to estimate also the age of the lixiaohua family .this family is similar to the veritas family in many respects ; it is a typical outer - belt family of c - type asteroids , crossed by several mmrs . like the veritas family ,only the main chaotic zone ( mcz ) shows appreciable diffusion in both eccentricity and inclination . on the other hand , this is a much older family and the yarkovsky effect can no longer be ignored .this is evident from the distribution of family members , adjacent to the mcz .our model suggests that the age of this family is between 100 and 230 myr , the best estimate being .note that the relative error is , i.e. close to the that found for the veritas case , using a constant coefficients mcmc model .our model shares some similarities with the yarkovsky / yorp chronology .both methods are basically statistical and make use of the quasi - linear time evolution of certain statistical quantities ( either the spread in or the dispersion in and ) , describing a family .there are , however , important differences .the yarkovsky / yorp chronology method works better for older families and the age estimates are more accurate for this class of asteroid families ( provided there are no other important effects on that time scale ) . on the other hand , for our method to be efficient , we need that diffusion is fast enough to cause measurable effects , but slow enough so that most of the family members are still forming a robust family structure ( i.e. there is no dynamical `` sink '' that would lead to a severe depletion of the chaotic zone ) .thus , our model can be applied to a limited number of families that reside in complex phase - space regions , but , in the same time , this is the only model that takes into account the chaotic dispersion of these families. 
there are at least a few families for which both chronology methods can be applied , thus leading to more reliable age estimates , as well as to a direct comparison of the two different chronologies .for example , the families of ( 20 ) massalia and ( 778 ) theobalda would be good test cases .we , however , reserve this for future work .an important advantage of the model is that it can be used to estimate the physical properties of a dynamically complex asteroid family , provided that its age is known by independent means ( e.g. by applying the method of to the regular members of the family ) .a large number of mcmc runs can be performed at low computational cost , thus allowing a thorough analysis of the physical parameters of family members or the properties of the original ejection velocities field that better reproduce the currently observed shape of the family .the work of b.n . and z.k .has been supported by the ministry of science and technological development of the republic of serbia ( project no 146004 `` dynamics of celestial bodies , systems and populations '' ) .nesvorn ' y , d. , bottke , w.f . ,vokrouhlick ' y , d. , morbidelli , a. , jedicke , r. , 2006 , in : daniela , l. , sylvio ferraz , m. , and angel , f. julio ( eds . ) _ proceedings of the 229th symposium of the iau , asteroids , comets , meteors _ , cambridge university press , cambridge , 289
|
we present a transport model that describes the orbital diffusion of asteroids in chaotic regions of the 3-d space of proper elements. our goal is to use a simple random-walk model to study the evolution and derive accurate age estimates for dynamically complex asteroid families. to this purpose, we first compute local diffusion coefficients, which characterize chaotic diffusion in proper eccentricity ( ) and inclination ( ), in a selected phase-space region. then, a monte-carlo-type code is constructed and used to track the evolution of random walkers (i.e. asteroids), by coupling diffusion in ( , ) with a drift in proper semi-major axis ( ) induced by the yarkovsky/yorp thermal effects. we validate our model by applying it to the family of (490) veritas, for which we recover previous estimates of its age ( myr). moreover, we show that the spreading of chaotic family members in proper elements space is well reproduced in our random-walk simulations. finally, we apply our model to the family of (3556) lixiaohua, which is much older than veritas and thus much more affected by thermal forces. we find the age of the lixiaohua family to be myr. celestial mechanics, minor planets, asteroids, methods: numerical
|
3d printing technology is evolving continuously in various directions .the development of scientific tools is one of the areas that is more rapidly growing since it is opening the possibility of using state - of - the - art scientific equipment at a fraction of the cost with respect to commercial alternatives .the implementation of opto - mechanical components using 3d printing has a direct impact on the photonics community .researchers are no longer constrained to work with available commercial elements and therefore their experimental setups can be more versatile . thanks to the fact that the fabrication time is minimal , the components can evolve very fast from the researchers experience and that evolution process can be a creative way to engage young researchers in photonics .moreover , the components 3d printed can be considered as prototypes and thus large manufacturers can take this ideas and build better equipment .additionally , since the 3d printers based on fused filament fabrication ( fff ) are becoming more affordable the opto - mechanical components of our set can be fabricated practically in any location .it is important to highlight that since the components fabricated have a similar performance than low - end commercial alternatives in terms of stability and robustness , the approach presented here makes photonics more accessible to industry and academia both in developed and developing countries , enlarging significantly the size of the community interested in performing experiments in photonics and related fields . in economically developed countries, the ideas presented here constitutes a way to reduce increasingly rising costs , and in developing countries , allows to overcome funding restrictions and large lead times due to customs and administrative procedures . the toolbox presented in this letteris composed of components highly customizable , low - cost , that require a short time to be fabricated , offer a performance that compares favourably with respect to low - end commercial alternatives , that are also aimed at complementing other component libraries already available on internet .we have designed and fabricated a set of opto - mechanical components that are essential in any optics laboratory either for research or teaching . for the sake of illustrationwe have fabricated all the opto - mechanical components to construct a michelson - morley interferometer , since it is an experimental setup that requires a very diverse set of opto - mechanical components for its implementation . in figure[ fig : figure1]a we present some of the components fabricated , such as * kinematic mounts * , used to set the tip and tilt of mirrors or lenses , * translation stages * , used to set the position of a component along a single axis with high precision , * kinematic platforms * , used to support and set the position of components such as prisms or beam splitters .in addition we have also built an * integrating sphere * , used to measure the power of a light source and other basic components such as * post holders * and * post clamps*. all the plastic parts were printed using a prusa - tairona 3d printer that cost and is manufactured in colombia .+ the complete scheme of fabrication used to build the opto - mechanical components consists of two parts . 
in the first ,the main elements are built in plastic using a standard 3d fabrication process that can be described as follows : to start , a 3d model of the component is designed using a cad software for instance openscad , blender , solid works , or can be downloaded from a digital design repository like thingiverse if the model has been already designed . for our purposes ,all the plastic components were designed using openscad , an open - source , script - based software that generates 3d models by combining ( adding or subtracting ) primitive shapes such as cylinders , spheres and cubes , that is very easy to use .once the model is available , it is further converted using another software , like slicer or cura , into printing instructions for the 3d printer . in our case, the prusa tayrona printer uses the programs repetier host and slicer to generate the instructions ( gcode ) and print the piece , respectively .afterwards , the component is fabricated using an additive manufacturing technology known as fused filament fabrication ( fff ) in which a filament of pla ( polylactic acid , a non - toxic , biodegradable thermoplastic polymer made from plant - based resources such as corn starch or sugar cane ) is heated and then extruded through a hot nozzle .the hot plastic is deposited layer by layer following a given pattern so that each layer binds with the layer below to build a solid object . a moving platform or moving nozzledetermines the position of the hot filament and thus the shape of the solid object printed ( figures [ fig : figure1]b and [ fig : figure1]c ) .once the main components are printed , the second part of the construction process starts .the plastic elements printed are combined with components like nuts , screws , bolts , washers , springs and rods , easily found on a hardware store , to create a fully functional opto - mechanical component .figures [ fig : figure2]a and [ fig : figure2]b depict a drive screw mechanism implemented by embedding a nut in the plastic .figure [ fig : figure2]c shows the implementation of a linear bearing using a rod .it is interesting to note that even though the individual components added in the second part are not designed for high precision applications , we have found that the opto - mechanical components fabricated with them , provide very similar performance with respect to its commercial counterpart , as is shown in the results section .+ a kinematic mount ( km ) is an opto - mechanical component used to adjust precisely the tip and tilt of a mirror ( or lens ) , while it holds the component securely in place , as shown in figure [ fig : figure3]a .similarly , a kinematic platform ( kp ) may be seen as a rotated kinematic mirror mount that is mainly used to control the tip and tilt of a flat surface where other components such as prisms , beam splitters or non - standard optics are secured .+ our proposed implementation of the kinematic mirror mount , follows the widely used cone , groove , and flat constraint scheme , and is based on the original design of doug marett that can be found on thingiverse .the mount is implemented by printing two pieces of plastic ( figure [ fig : figure3]b ) that are joint together using a sphere and two springs secured with two rods on each side .the drive screw mechanism is built using two nuts that are embedded into the plastic .two m4 screws with rounded nuts in one end are used to adjust the tip and tilt respectively ( figure [ fig : figure3]c ) .the rounded nuts are used to keep the two 
plastic pieces into position and to reduce any unwanted motion .a complete list of the required materials is presented in table [ table1 ] . .* bill of materials for kinematic mount * [ cols="<,<,<,<",options="header " , ] printing cost of 0.5 eur / is assumed .to determine the performance of the opto - mechanical components fabricated , a direct comparison with respect to its commercial counterpart was carried out by using the experimental setups presented in fig .[ fig : figure6 ] . in fig .[ fig : figure6]a it is shown the scheme used to compare the kinematic mounts .the input beam is generated using a he - ne laser with gaussian spatial profile and a beam waist of .after two reflections , the beam is reflected in a mirror mounted on the kinematic mount to be tested ( indicated in blue ) and its centroid position is monitored using a webcam located at with respect to the mirror vertical axis .a routine written in python , that uses the opencv library , records the beam centroid as a function of different angles .the beam reference position is determined by aligning initially the beam with respect to the two irises 1 and 2 located before the camera . + to test the performance of the translation stage , the setup shown in fig .[ fig : figure6]b is used .the beam reflected by the beam splitter ( bs ) is reflected by a mirror located in the translation stage to be characterized .the ts is positioned in such a way that the beam reflected passes through the two irises for different positions . in order to evaluate the performance of each opto - mechanical component , two sets of measurementswere carried out .the first is taken using the 3d printed component and the second the commercial device . for the kinematic mount , the interval in the horizontal and vertical directionsis divided equally in seven points .each direction is scanned ten times by rotating either the x and y knob , in order to evaluate the hysteresis and repetability of the component .figures [ fig : figure7]a and [ fig : figure7]b show the centroid position as a function of the x or y knob rotation , respectively . in both casesis clearly seen that the 3d printed kinematic mount , exhibits the same behaviour as its commercial counterpart , in this case a thorlabs km100 mount .in fact , when the beam is shifted in one direction , the other is confined within the same small interval as the commercial component . from the characterization ,the only difference observed between both components appears in the sensitivity experienced in the knob rotation , determined by the screw thread .+ for the translation stage , two set of measurements where performed over the interval , where the translation step was set to .the results are shown in fig .[ fig : figure8 ] . from the resultsis observed for the commercial ts , the beam drift lies within the interval . 
on the other hand ,the 3d printed ts provides a worse performance , particularly for displacements beyond , where the rounded screw exerts a significant pressure on the moving platform .this pressure gives rise to the unwanted displacement observed in the y - direction .fortunately , for small translations , below , the device presents an acceptable performance where the beam experiences a drift that lies within an interval of tenths of degrees .notice that this unwanted beam displacement is imperceptible to the eye .regarding the integrating sphere , it is found that it exhibits a non linear response as a function of the input beam intensity .figure [ fig : figure9]a displays an example of the device response as a function of the input intensity . from the fitted data , a calibration curve is obtained that is further used to determine the response of the power meter . for the sake of example ,[ fig : figure9]b presents a single wavelength ( ) comparison between a commercial power meter and a 3d printed integrating sphere . from the datais observed a maximum relative error of less than . in the experiment the light intensity is controlled by changing the angle between two crossed polarizers .we have developed and characterized a set of opto - mechanical components that can be easily implemented using a 3d printer based on fused filament fabrication ( fff ) and parts that can be found on any hardware store .in particular we have compared three of the main components required to implement a michelson - morley interferometer , namely a kinematic mount , a translation stage and an integrating sphere with respect to commercial alternatives . from our results , we have found that 3d printing provides a suitable alternative to implement experimental equipment in scenarios where is not required a high precision .surprisingly , the 3d printed kinematic mount provides a very similar performance with respect to its commercial counterpart .even though the results obtained for the translation stage are not so optimistic , since the beam drift is imperceptible to the eye , the device can be suitable for undergraduate laboratories .regarding the integrating sphere , we have developed and demonstrated a simple accessory that can be printed in order to convert a webcam into a power detector .importantly , in all cases we have found that a 3d printer is an extremely useful resource in any laboratory since it opens the possibility to fabricate experimental equipment that is highly customizable , at a low cost with respect to commercial alternatives , and more importantly in a very small period of time .ljss and av acknowledge support from facultad de ciencias , u. de los andes .w. gao , y. zhang , d. ramanujan , k. ramani , y. chen , c. b. williams , c. c. l. wang , y. c. shin , s. zhang , p. d. zavattieri , `` the status , challenges , and future of additive manufacturing in engineering '' , computer - aided design , 69 , 65 ( 2015 )
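as a complement to the characterization procedure described above, the beam-centroid measurement with a webcam can be reproduced with a short opencv routine; the sketch below is a generic illustration (threshold the frame and take the intensity-weighted centroid of the spot), not the exact script used in this work, and the camera index, threshold value and number of frames are placeholders that depend on the setup.

```python
import cv2
import numpy as np

def beam_centroid(frame, thresh=200):
    """Return the intensity-weighted centroid (x, y) of the laser spot."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(gray * (mask // 255))     # weight by intensity inside the spot
    if m["m00"] == 0:
        return None                           # no spot found above the threshold
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

cap = cv2.VideoCapture(0)                     # webcam index is setup-dependent
positions = []
for _ in range(100):                          # record 100 frames for one knob setting
    ok, frame = cap.read()
    if not ok:
        break
    c = beam_centroid(frame)
    if c is not None:
        positions.append(c)
cap.release()
print("mean centroid:", np.mean(positions, axis=0),
      "std:", np.std(positions, axis=0))
```

repeating this measurement for each knob rotation, and converting pixel displacement at the known camera distance into an angle, gives the hysteresis and repeatability curves reported above.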
|
nowadays it is very common to find headlines in the media stating that 3d printing is a technology set to change our lives in the near future. for many authors, we are living in times of a third industrial revolution. however, we are currently at a stage of development where the use of 3d printing is advantageous over other manufacturing technologies only in rare scenarios. fortunately, scientific research is one of them. here we present the development of a set of opto-mechanical components that can be built easily using a 3d printer based on fused filament fabrication (fff) and parts that can be found in any hardware store. the components of the set presented here are highly customizable, low-cost, require a short time to be fabricated, and offer a performance that compares favorably with low-end commercial alternatives.
|
wireless sensor networks are used for detection or classification , whether for surveillance , environmental monitoring , or any of the myriad other application domains that are emerging in the age of big data . in many such applications , the likelihood functions of the hypotheses , e.g. , the presence or absence of a particular physical phenomenon ,are not known before the sensor network is deployed ; in these applications , the sensor network requires training prior to operation via supervised learning .the resulting classification accuracy improves with the number of measurements taken during training , but increasing length of the training stage further reduces the limited battery capacity for the operational stage .therefore , the amount of resources expended during training mediates operational lifetime and accuracy of the sensor network . the energy consumption of sensor nodes , and thus the lifetime of the network , is dominated by energy expended on communication .node transmissions in wireless sensor networks are commonly regulated by the carrier - sense multiple - access ( csma ) algorithm .this algorithm is implemented in tinyos , a popular open source operating system for wireless sensor networks , and is part of the ieee 802.15.4 standard for wireless sensor network communication .nodes using csma access the medium in a distributed manner , and wait some random back - off time between successive transmissions . in this paperwe consider a scenario where a set of measurements and classification is required every time unit .only nodes that are active at that time perform a measurement and transmit the result , so the number of measurements collected varies over time . we develop and analyze a model of sensor networks that perform supervised classification _ in situ _ , using the fisher discriminant analysis ( fda ) learning algorithm , with a training stage and an operational stage enabled by csma .the specific analysis of focus is the relationship between operational accuracy and lifetime , which we show to be of a fundamentally different character than for the case of detection with known likelihood functions , due to overfitting . in characterizing operational classification accuracy ( in contrast to classification accuracy on training samples ) , we make use of generalization approximations for fda developed by raudys et al . .battery capacity is characterized by the number of transmissions ( and thus measurements ) that can be performed , whether they be during training or operation . as every measurement corresponds to one transmission, the expected network lifetime is inversely proportional to the node throughput in our model .the performance measures of interest are the classification accuracy and operational lifetime , which is the lifetime spent in the operational stage , not in the training stage .the two main parameters available for configuring the sensor network are the csma back - off rates ( the reciprocal of the mean back - off time ) , and the fraction of the lifetime spent in the training stage . 
as the back - off rates of the nodes increase , states with many actively transmitting nodes are more likely .this requires more energy consumption , and also affect classification accuracy .classification accuracy is not monotonically increasing in the number of active nodes due to the phenomenon of overfitting , as we discuss in .we also show that operational accuracy as a function of back - off rate exhibits the hallmarks of overfitting in one regime , but in another regime , has a behavior quite different than any behavior usually encountered in statistical learning .the analysis of supervised classification for sensor networks in the researcg literature is limited : investigations have been predominantly concerned with the detection case where the likelihood functions are known. moreover , sensor network research tends to separate learning issues from the communication aspect .there are several works that model csma communication in sensor networks generally , e.g. and references therein , but not with the supervised classification application as part of the formulation .cross - layer work that does consider the networking issues together with a detection or estimation application , e.g. , the correlation - based collaborative mac protocol , is again focused on the case with known likelihoods .so although fda and the performance of csma - like algorithms has been widely studied in the research literature , we are the first to jointly consider classification accuracy and communication aspects of wireless sensor networks .we consider both the case of statistically independent and identically distributed ( i.i.d . )measurements from different nodes , and the case of measurements exhibiting correlation that depends on the spatial distance between the nodes .having i.i.d .measurements is a common simplifying assumption in wireless sensor network detection .a model with spatially - correlated measurements is much closer to reality in most applications .we assume that the learning algorithm has no prior information on the distribution and correlation of the measurements ; the fda has to estimate means and covariances as part of the training stage .the spatial correlation is encoded via a gauss markov random field ( gmrf ) model .the csma model under consideration was first introduced in the 1980s in the context of packet radio networks and was later applied to networks based on the ieee 802.11 standard .more recently , it has been used to study so - called adaptive csma algorithms , where the back - off rate of the nodes changes with their congestion level .although the representation of binary exponential back - off mechanism in the above - mentioned models is far less detailed than in the landmark work of bianchi and similar results focusing on sensor networks , e.g. , , the general interference graph offers greater versatility and covers a broad range of topologies .the remainder of the paper is organized as follows . in section [ sec : fda ] , we describe the setup of the sensor network system from the fda supervised classification perspective and in section [ sec : csma ] , we describe the setup of the sensor network system from the csma communication perspective . 
in section [ sec : relation ] we derive the relationship between operational lifetime and accuracy, and section [ sec : example ] presents numerical results of lifetime and accuracy for two special cases, illustrating the complicated balancing act that is involved. section [ sec : discuss ] provides a discussion and several ideas for future directions of research. consider a sensor network consisting of sensor nodes, each taking a scalar measurement, combined into a joint measurement vector. in the general supervised classification problem, we are given sample pairs known as the training set, with measurement and the class label or hypothesis. the training samples are acquired by the network after deployment and before the operational stage. the availability of labels for the training measurements is an assumption made in as well. once the training set is acquired, the samples are used to learn a classification function or decision rule that will accurately classify new unseen and unlabeled samples from the same distribution from which the training set was drawn. in this paper, we focus on a simple, classical decision rule, the fisher discriminant analysis classifier: where and , , , and are the conditional sample means and covariances of the training samples. the fisher discriminant analysis rule is a plug-in classifier that follows from the likelihood ratio test for optimal signal detection between gaussian signals with the same covariance and different means. the rule is applied in the operational stage of the sensor network to classify new observations. given the fda decision rule, we would like to characterize its performance, specifically its classification accuracy as it generalizes to new unseen samples in the operational stage. generalization accuracy, however, is a functional of the underlying data distribution, and we must first specify a probability distribution of the sensor measurements. we employ the same gmrf statistical model for sensor measurements as . that is, the sensor nodes are deployed on the plane with spatial locations , . the likelihoods of the two hypotheses are gaussian: and . the prior probabilities of the hypotheses are equal: $\Pr[\mathsf{y}=0] = \Pr[\mathsf{y}=1] = 1/2$. an approximation to the generalization accuracy for the fda decision rule as described above is found in . based on , this approximation is given as

$$\Pr[\text{correct}] \approx \Phi\!\left(\frac{\Delta}{2}\left[\left(1 + \frac{4n}{m\Delta^{2}}\right)\frac{m}{m-n}\right]^{-\tfrac{1}{2}}\right), \quad m > n,$$

where $\Phi(\cdot)$ is the gaussian cumulative distribution function, $m$ is the number of training samples, $n$ is the number of measurement dimensions, and $\Delta = \sqrt{(\boldsymbol{\mu}_1-\boldsymbol{\mu}_0)^{T}\boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_0)}$ is known as the mahalanobis distance. the approximation does not apply in case there are insufficient training samples for accurate classification and we have $m \le n$.
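as a concrete illustration of the plug-in rule just described, the following numpy sketch estimates the two class means and a pooled sample covariance from labeled training pairs and classifies a new measurement vector under the equal-prior, equal-covariance assumptions; it is a generic fda implementation, not the network code itself, and the function names are illustrative.

```python
import numpy as np

def fit_fda(X, y):
    """X: (m, n) training measurements, y: (m,) labels in {0, 1}.
    Returns the weight vector w and threshold b of the FDA rule."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class sample covariance; invertible only when m > n
    Z = np.vstack([X0 - mu0, X1 - mu1])
    S = Z.T @ Z / (len(X) - 2)
    w = np.linalg.solve(S, mu1 - mu0)
    b = 0.5 * w @ (mu0 + mu1)          # equal priors -> threshold at the midpoint
    return w, b

def predict_fda(x, w, b):
    """Return the decided hypothesis (0 or 1) for a new measurement vector x."""
    return int(w @ x > b)
```

the rule reduces to comparing the projection of the new measurement onto the estimated discriminant direction against the midpoint of the projected class means, which is exactly the likelihood-ratio test with the unknown means and covariance replaced by their sample estimates.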
in the i.i.d .case , simplifies to .the csma algorithm is an example of a random - access algorithm , where nodes decide for themselves when to transmit , based on local information only .we assume that the nodes share the wireless medium according to a csma - type protocol .the network is described by an undirected conflict graph , where the set of vertices represents the nodes of the network and the set of edges indicates which pairs of nodes can not activate simultaneously .for ease of presentation we assume that the conflict graph is the same as the nearest neighbor graph introduced in section [ sec : fda ] .nodes that are neighbors in the conflict graph are prevented from simultaneous activity by the carrier - sensing mechanism .an inactive node is said to be blocked whenever any of its neighbors is active , and unblocked otherwise .the transmission times of node are independent and exponentially distributed with unit mean .when node is blocked it remains silent until all its neighbors are inactive , at which point it tries to activate after an exponentially distributed back - off time with mean .the set of all feasible joint activity states of the network in this case corresponds to the incidence vectors of all independent sets of the conflict graph .let the network state at time be denoted by , with indicating whether node is active at time ( ) or not ( ) .then is a markov process which is fully specified by the state space and the transition rates here denotes the vector of length with all zeroes except for a 1 at position .since is reversible ( see ) , the following product - form stationary distribution exists : where is the normalization constant that makes a probability measure . the rate at which sensor node makes observations ( or , alternatively , the rate at which it does transmissions ) is referred to as the throughput of this node , and may be written as sensor nodes rely on batteries for energy , and we assume that all nodes have a battery that allows them to make transmissions each before their battery is drained .consequently , the expected lifetime of a node can be written as the activity process in the training stage is the same as in the operational stage .we denote by the fraction of the battery capacity that is dedicated to training the sensor network . so the testing lifetime of node is , and the operational lifetime ( the quantity we would like to be large ) is : the model we have specified is fully general for any -node conflict graph .we work with this general model throughout the remainder of the paper , but also focus on two illustrative special cases .the two special cases of the csma network we consider are an -node network where all networks are disjoint and a three - node linear network .first , consider an -node network where all nodes can be active simultaneously .this corresponds to an interference graph with an empty edge set .we have and set so the stationary distribution simplifies to the stationary probability of any particular state only depends on the number of active nodes in that state and on the back - off rate .thus , for notational convenience , we introduce as the stationary probability of being in any state with active nodes , and we write which follows since there are different activity states with nodes transmitting . 
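before specializing further, the product-form stationary distribution and per-node throughput defined above can be computed by brute force for any small conflict graph by enumerating the feasible (independent-set) activity states; the sketch below is only meant to illustrate the definitions, with the conflict graph given as a list of edges and the back-off rates as an array.

```python
import itertools
import numpy as np

def csma_stationary(nu, edges):
    """nu: back-off rates per node; edges: pairs of conflicting nodes.
    Returns the feasible states, their stationary probabilities and throughputs."""
    n = len(nu)
    conflict = set(map(frozenset, edges))
    states, weights = [], []
    for x in itertools.product([0, 1], repeat=n):
        active = [i for i in range(n) if x[i]]
        # keep only independent sets of the conflict graph
        if any(frozenset(p) in conflict for p in itertools.combinations(active, 2)):
            continue
        states.append(x)
        weights.append(np.prod([nu[i] for i in active]) if active else 1.0)
    pi = np.array(weights) / np.sum(weights)   # product form; Z is the sum of weights
    theta = np.array([sum(p for x, p in zip(states, pi) if x[i]) for i in range(n)])
    return states, pi, theta

# illustrative use: a 4-node path graph with equal back-off rates
states, pi, theta = csma_stationary(nu=[0.5, 0.5, 0.5, 0.5],
                                    edges=[(0, 1), (1, 2), (2, 3)])
for x, p in zip(states, pi):
    print(x, round(p, 4))
print("per-node throughput:", theta)
```

the expected lifetime of each node then follows directly by dividing its battery capacity (in transmissions) by its throughput, as in the expressions above.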
with equal back - off rates and disjoint nodes ,the stationary throughput is the same for all nodes moreover , all nodes have the same lifetime , and the operational lifetime of the network may be written as the three - node network where the nodes are positioned such that the carrier - sensing mechanism prevents node 2 from activating while either node 1 or node 3 is active . nodes 1 and 3 can be active simultaneously , but their observations are correlated .the network can take five possible states using we compute the following stationary probabilities : in order to make sure that all nodes have the same throughput and lifetime , we fix some parameter and choose and .so node 2 has a shorter mean back - off time in order to compensate for its disadvantageous position in the network , and all nodes have throughput ( see ) and operational lifetime the normalization constant with these back - off rates is given by are now in position to combine the fda model from section [ sec : fda ] and the csma model presented in section [ sec : csma ] to derive the relationship between generalization accuracy and operational lifetime .this is mediated by two parameters : the back - off rate or and the fraction of the lifetime spent in the training stage .due to the interference constraints and the intermittent nature of csma communications , not all nodes produce and validly communicate measurements at all times .so the training samples are acquired under different activity states . thus studying the relationship between accuracy and lifetimeis not simply a matter of joining the corresponding expressions and .this issue of incomplete data due to the activity process can be addressed in several ways , including data imputation .although various elaborate schemes are available , they come at the cost of additional computation , communication , and coordination that are at a premium in the sensor network setting . instead , we choose to model the classification by having separately learned classifiers for different activity states . in the operational stagethe appropriate classifier is used for prediction based on the activity state of the measurements . in this setup ,we associate with each state a number of training samples then we compute the overall generalization accuracy as the weighted sum of the individual generalization accuracies for each pattern according to their stationary probabilities : ^{-\tfrac{1}{2}}\right),\ ] ] with the stationary distribution and as in .we now compute the generalization accuracies for the two special cases introduced in section [ sec : csma ] with the gmrf of the measurements having the same graph structure as the csma network .as discussed in section [ sec : csma ] for a set of disjoint nodes , all patterns with active nodes have the same stationary probability given in , and all nodes have equal throughput and lifetime .we denote by the number of training samples for patterns with active nodes , and by summing over all states with active nodes , we write as discussed in section [ sec : fda ] , with i.i.d .measurements from sensors , the squared mahalanobis distance is .thus , with active sensors , the squared mahalanobis distance is . 
substituting the expression for the stationary distribution and the number of training samples into the expression for the generalization accuracy, we obtain

$$\sum_{k=1}^{N} \binom{N}{k}\,\pi_{k}\,\Phi\!\left(\frac{\sqrt{k}}{2\sigma}\left[\left(1 + \frac{4\sigma^{2}}{m_{k}}\right)\frac{m_{k}}{m_{k}-k}\right]^{-\tfrac{1}{2}}\right).$$

recall from section [ sec : three - node ] that the three-node network has 5 feasible states. the four non-empty states have squared mahalanobis distance . note that since , the mahalanobis distance of the larger state is larger than that of the states with only one node active, and is more valuable. evaluating , we obtain an expression for the number of training samples for each state. by weighting the individual generalization accuracies, we obtain

$$\frac{1}{Z}\bigg[\eta\,\Phi\!\left(\frac{1}{2\sigma}\left[\left(1 + \frac{4\sigma^{2}}{m_{{\boldsymbol e}_1}}\right)\frac{m_{{\boldsymbol e}_1}}{m_{{\boldsymbol e}_1}-1}\right]^{-\tfrac{1}{2}}\right) + (\eta^{2}+\eta)\,\Phi\!\left(\frac{1}{2\sigma}\left[\left(1 + \frac{4\sigma^{2}}{m_{{\boldsymbol e}_2}}\right)\frac{m_{{\boldsymbol e}_2}}{m_{{\boldsymbol e}_2}-1}\right]^{-\tfrac{1}{2}}\right) + \eta\,\Phi\!\left(\frac{1}{2\sigma}\left[\left(1 + \frac{4\sigma^{2}}{m_{{\boldsymbol e}_3}}\right)\frac{m_{{\boldsymbol e}_3}}{m_{{\boldsymbol e}_3}-1}\right]^{-\tfrac{1}{2}}\right) + \eta^{2}\,\Phi\!\left(\frac{\Delta_{{\boldsymbol e}_1 + {\boldsymbol e}_3}}{2}\left[\left(1 + \frac{8}{m_{{\boldsymbol e}_1 + {\boldsymbol e}_3}\Delta^{2}_{{\boldsymbol e}_1 + {\boldsymbol e}_3}}\right)\frac{m_{{\boldsymbol e}_1 + {\boldsymbol e}_3}}{m_{{\boldsymbol e}_1 + {\boldsymbol e}_3}-2}\right]^{-\tfrac{1}{2}}\right)\bigg].$$

in section [ sec : relation ] we derived the operational lifetime, the number of training samples and the operational classification accuracy for a wireless sensor network with random-access communication as a function of the back-off rate and the fraction of the lifetime spent in training. here we numerically evaluate these quantities for the special cases of independent nodes and the three-node linear network. we include a comparison to the bayes optimal detector with known likelihood functions and see that the accuracy behavior is markedly different. additionally, we see that there are two different regimes in the accuracy behavior as a function of the back-off rate, the second regime different than that usually seen in statistical learning. the overall behavior is unique due to the combination of csma and fda. we consider a network of independent nodes with transmissions allowed by the battery per node. the sensor measurement noise variance is set to . other parameter settings produce qualitatively similar results. first, in fig. [ fig : indep_unu ], we plot the operational lifetime as a function of the back-off rate for a fixed lifetime fraction devoted to training : . the non-monotonicity of the classification accuracy in the back-off rates makes an analytic approach to optimization difficult, and an alternative solution would be to approximate the expression for the detection accuracy with some convex function. this would reduce the complexity of numerical optimization, and may even allow for analytical results. the effect of overfitting for medium back-off rates can be mitigated by choosing different back-off rates for the training stage and the operational stage. for example, choosing larger back-off rates during training should increase the number of samples for states with many active nodes, thus reducing the risk of overfitting. although this would simultaneously reduce the number of samples for smaller states, the risk of overfitting is not as high there due to the smaller number of active nodes.
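putting the pieces together for the disjoint-node case, the following sketch evaluates the operational lifetime and the weighted generalization accuracy as functions of the back-off rate and training fraction; it uses the binomial state weights and the raudys-style approximation quoted above, but the way training samples are apportioned across activity states, the treatment of the empty state, and all parameter values are assumptions made only to reproduce the qualitative shape of the trade-off.

```python
import numpy as np
from math import comb
from scipy.stats import norm

def raudys(delta, m, n):
    """Approximate FDA accuracy with m training samples, n dimensions."""
    if m <= n:
        return 0.5                             # chance level when overfitting (assumption)
    return norm.cdf(0.5 * delta / np.sqrt((1 + 4 * n / (m * delta**2)) * m / (m - n)))

def disjoint_accuracy_lifetime(N, nu, phi, B, sigma):
    """N disjoint nodes, back-off rate nu, training fraction phi,
    battery of B transmissions per node, per-sensor noise std sigma."""
    p = nu / (1.0 + nu)                        # P(node active) = per-node throughput
    lifetime_op = (1.0 - phi) * B / p          # expected operational lifetime
    epochs_train = phi * B / p                 # time units spent in the training stage
    acc = 0.0
    for k in range(1, N + 1):                  # empty state ignored here (assumption)
        pi_k = p**k * (1 - p)**(N - k)         # stationary prob. of one k-active state
        m_k = epochs_train * pi_k              # expected training samples for that state
        acc += comb(N, k) * pi_k * raudys(np.sqrt(k) / sigma, m_k, k)
    return lifetime_op, acc

# sweep the back-off rate for a fixed training fraction (illustrative values)
for nu in [0.05, 0.2, 1.0, 5.0]:
    L, A = disjoint_accuracy_lifetime(N=10, nu=nu, phi=0.3, B=2000, sigma=2.0)
    print(f"nu={nu:5.2f}  lifetime={L:9.1f}  accuracy={A:.3f}")
```

sweeping nu in this way already exhibits the two regimes discussed above: for small rates few training samples per state are collected and larger states overfit, while for large rates the lifetime shrinks even though more informative multi-node states become frequent.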
another direction for futureresearch is to model temporal correlation in the sensor measurements in addition to spatial correlation . in the present work ,successive measurements in time are assumed independent , but including temporal correlation is more realistic . if temporal correlation is part of the sensing and classification model , its interaction with the temporal back - off mechanism may produce quite interesting phenomena . a markov model for temporal correlation could be analyzed together with the markov activity process of the csma model .finally , we also mention that asymptotic analysis is of interest in the future study of this cross - layer supervised learning and random - access communication setup .developing expressions for the three - node dependent network , e.g. , and , requires us to keep track of many details ; larger networks will require us to keep track of many more . by performing an asymptotic analysis of an increasing number of randomly placed sensor nodes with constant density in ,we are able to eliminate many such details in the sensor network generalization error using geometric probability . having now set forth this extended model with csma communication , similar asymptotic analysis using geometric probability is certainly warranted .k. r. varshney and a. s. willsky , `` linear dimensionality reduction for margin - based classification : high - dimensional data and sensor networks , '' _ ieee trans . signal process ._ , vol .59 , no . 6 , pp . 24962512 , jun. 2011 .hong and p. k. varshney , `` data - centric and cooperative mac protocols for sensor networks , '' in _ wireless sensor networks : signal processing and communications perspectives _, a. swami , q. zhao , y .- w .hong , and l. tong , eds.1em plus 0.5em minus 0.4emchichester , uk : john wiley & sons , 2007 , pp . 311348 .e. feo and g. a. di caro , `` an analytical model for ieee 802.15.4 non - beacon enabled csma / ca in multihop wireless sensor networks , '' istituto dalle molle di studi sullintelligenza artificiale , lugano , switzerland , tech . rep . 05 - 11 , may 2011 .s. liew , c. kai , j. leung , and b. wong , `` back - of - the - envelope computation of throughput distributions in csma wireless networks , '' _ ieee trans .mobile comput ._ , vol . 9 , no . 9 , pp .13191331 , sep .2010 .p. m. vande ven , j. s. h. van leeuwaarden , d. denteneer , and a. j. e. m. janssen , `` spatial fairness in linear random - access networks , '' _ perform .121134 , mar.apr . 2012 .s. rajagopalan , d. shah , and j. shin , `` network adiabatic theorem : an efficient randomized protocol for content resolution , '' in _ proc .acm sigmetrics / performance _ , seattle , wa , jun .2009 , pp .
|
wireless sensor networks are composed of distributed sensors that can be used for signal detection or classification . the likelihood functions of the hypotheses are often not known in advance , and decision rules have to be learned via supervised learning . a specific such algorithm is fisher discriminant analysis ( fda ) , the classification accuracy of which has been previously studied in the context of wireless sensor networks . previous work , however , does not take into account the communication protocol or battery lifetime of the sensor networks ; in this paper we extend the existing studies by proposing a model that captures the relationship between battery lifetime and classification accuracy . in order to do so we combine the fda with a model that captures the dynamics of the carrier - sense multiple - access ( csma ) algorithm , the random - access algorithm used to regulate communications in sensor networks . this allows us to study the interaction between the classification accuracy , battery lifetime and effort put towards learning , as well as the impact of the back - off rates of csma on the accuracy . we characterize the tradeoff between the length of the training stage and accuracy , and show that accuracy is non - monotone in the back - off rate due to changes in the training sample size and overfitting .
|
detection aims at highlighting salient foreground objects automatically from the background , and has received increasing attentions for many computer vision and graphics applications such as object recognition , content - aware image retargeting , video compression and image classification . driven by these recent applications ,saliency detection has also evolved to aim at assigning pixel - accurate saliency values , going far beyond its early goal of mimicing human eye fixation . due tolacking of a rigorous definition of saliency itself , inferring the ( pixel - accurate ) saliency assignment for diversified natural images without any user intervention is a highly ill - posed problem . to tackle this problem ,a myriad of computational models have been proposed using various principles or priors ranging from high - level biological vision to low - level image properties .focusing on bottom - up , low - level saliency computation models in this paper , we identify several remaining issues to be addressed though existing models have demonstrated impressive results . how to uniformly highlight the salient objects .natural images usually contain diverse patterns ( i.e. rich appearances ) so that the saliency computed through the bottom - up feature extraction could be discrete or incomplete without regard to salient objects .like other low - level vision tasks ( e.g. , image segmentation ) , most existing saliency models were built upon color information only , and they may degenerate when similar colors distribute on both foreground and background objects , e.g. , fig . [fig : fig1](fourth row : f - h ) .moreover , these approaches may render some elements inside a salient object as non - salient or some elements of the background as salient , due to their shortcoming on handling inhomogeneous structures in foreground ( e.g. , fig .[ fig : fig1](third row : e - h ) ) and background ( e.g. , fig . [fig : fig1](second row : e - h ) ) . how to make the saliency values coherent with image content .several saliency detection approaches demonstrated impressive results on generating pixelwise saliency maps .they usually assign the saliency values based on the over - segmentation of images ( i.e. small regions or superpixels ) , and further exploit the post - relaxation ( e.g. local filtering ) to smooth the saliency values over pixels .however , the image segmentation may introduce errors in processing complex image content ( e.g. , local cluttered textures ) , upon which the incompatibility with saliency values and object details could be caused by the post - relaxation step .these phenomenons are exhibited with the examples in fig .[ fig : fig1](first row : g - h ) .[ fig : original1 ] [ fig : color1 ] [ fig : gradient1 ] [ fig : pisa1 ] [ fig : ca1 ] [ fig : hc1 ] [ fig : rc1 ] [ fig : sf1 ] inspired by the insights and lessons from a significant amount of previous work as well as several priors supported by psychological evidences and observations of natural images , we address these above mentioned challenges in a more holistic manner .in particular , we propose a unified framework called pisa , which stands for pixelwise image saliency aggregating complementary saliency cues .it enables to generate spatially coherent yet detail - preserving , pixel - accurate and fine - grained image saliency . in the following ,we briefly discuss the motivations and main components of pisa . _i ) complementary appearance features for measuring saliency . 
_though color information is a popular saliency cue used dominantly in many methods , other influential factors do exist , which can also be used to make salient pixels or regions outstanding , even these pixels or regions are not unique or rare by color information .for instance , they can have unique appearance features in edge / texture patterns , demonstrating distinct contrast expressed by structure information . in fact , color and structure can be complementary to each other to provide more informative evidences for extracting complete salient objects .in addition , it is known from the perceptual research that different local receptive fields are associated with different kinds of visual stimuli , so local analysis regions where saliency cues are extracted should be adapted to match specific image attributes . instead of using color only treatment , pisa directly performs saliency modeling for each individual pixel on two complementary cues ( i.e. color and structure features ) and makes use of densely overlapping , feature - adaptive observations for saliency confidence computation .[ fig : fig1 ] shows a few motivating examples that highlight the advantage of our pisa method , compared with some leading methods . _ii ) non - parametric feature modeling in a global context ._ existing saliency detection approaches usually group image pixels based on local small regions or superpixels , which could give rise to less informative saliency measures . in contrast , using non - local approaches to summarize the extracted features tends to be more robust and reasonable than those of local homogeneous superpixel - based methods , and its advantage has been demonstrated in recent works . rather than using superpixel - based representations , we propose to compute the saliency confidence by considering both the global appearance contrast in the feature space as well as the image domain smoothness . specifically , we first group all image pixels by summarizing their extracted features ( i.e. either the color or structure histograms ) , and model the saliency confidence according to the global rarity ( i.e. uniqueness ) of the pixel group in the color / structure feature space .meanwhile , we further impose the spatial priors , including the center preference and boundary exclusion in the image domain to complete the saliency modeling for each pixel ._ iii ) fine - grained saliency assignment ._ many high level tasks prefer generating more abundant and fine - grained saliency maps ( i.e. each pixel can be assigned with several saliency levels ) .pixel - accurate saliency maps are often required to be spatially coherent with discontinuities well aligned to image edges , according to existing studies .in particular , the spatial connectivity and correlation involved in neighborhood pixels should be preserved in saliency computing . in this work , we pose the fine - grained saliency assignment as a multiple labeling problem , in which the appearance contrast based saliency measure is jointly modeled with the neighborhood coherence constraint .the resulting target function can be minimized by using global discrete labeling optimizers such as graph cuts or belief propagation .these methods , however , are often relatively time - consuming and do not scale well to fine - grained labeling ( i.e. a large space of labels ) .some other continuous approaches are efficient but usually require a restricted form of the energy function . 
in this paper, we employ a recently proposed filter - based method , namely cost - volume filtering , to smoothly assign the saliency levels while preserving structural coherence ( i.e. keeping the edges and boundaries of salient objects ) .to balance the accuracy - efficiency trade - off , we also propose a faster version called f - pisa .it first performs saliency computation for a feature - driven , subsampled image grid , and then uses an adaptive upsampling scheme with the color image as the guidance signal to recover a full - resolution saliency map .compared to segmentation - based saliency methods , our f - pisa method reduces the computational complexity similarly by considering a coarse image grid , while having the advantage of utilizing image structural information for saliency reasoning over .our extensive experiments on six public benchmarks demonstrate the superior detection accuracy and competitive runtime speed of our approaches over the state - of - the - arts .moreover , we construct a new and meaningful database of image saliency including real commodity images from online shops .the remainder of the paper is organized as follows : sect .[ sec : related ] reviews related works of saliency detection . sect .[ sec : formul ] introduces the proposed framework and its main components .more details for inference and implementation are discussed in sect .[ sec : alg ] .extensive experimental evaluations and comparisons are presented in sect .[ sec : exper ] .the paper concludes in sect .[ sec : conclusion ] .recently , numerous bottom - up saliency detection models have been proposed for explaining visual attention based on different mathematical principles or priors .we classify most of the previous methods into two basic classes depending on the way that saliency cues are defined : _ contrast priors _ and _ background priors _ . assuming that saliency is unique and rare in appearance , contrast priors have been widely adopted in many previous methods to model the appearance contrast between foreground salient objects and the background .et al . _ presented a bottom - up method in which an input image is represented with three features including color , intensity and orientation in different scales .et al . _ proposed a frequency - tuned method that defines the saliency likelihood of each pixel based on its difference from the average image color by exploiting the center - prior principle .et al . _ used a patch based approach to incorporate global properties to highlight salient objects along with their contexts . however , due to using the local contrast only , it tends to produce higher salient values near edges . to highlight the entire object , cheng _et al . _ presented color histogram contrast ( hc ) in the color space and region contrast ( rc ) in a global scope .et al . _ formulated saliency estimation using two gaussian filters by which color and position are respectively exploited to measure region uniqueness and variance of the spatial distribution ._ et al . _ proposed a hierarchical framework that infers important values from three image layers in different scales . also using a hierarchical indexing mechanism , cheng _et al . 
_ proposed a gaussian mixture model based abstract representation which decomposes an image into large scale perceptually homogeneous elements .but their saliency cues integration based on the compactness measure may not always be effective .typical limitations of the existing methods based on contrast priors include attenuated object interior and ambiguous saliency detection for images with rich structures in foreground or / and background . complementing the prime role of contrast priors in this research topic , background priors been proposed recently to exploit two interesting priors about backgrounds connectivity and boundary priors .the background prior is based on an observation that the distance of a pair of background regions is shorter than that of a region from the salient object and a region from the background .et al . _ exploited background priors and the geodesic distance for the saliency detection .et al_. proposed a graph - based manifold ranking approach to characterize the overall differences between salient objects and background .et al_. integrated the background cues into the designed absorbing markov chain .regarding image boundaries as likely cues for background templates , li _et al_. proposed a saliency detection algorithm from the perspective of dense and sparse appearance model reconstructions .however , these methods fail when objects touch the image boundary to quite some extent , or when connectivity assumptions are invalid in the presence of complex backgrounds or textured scenes . for instance , the maple leave case in fig .[ fig : fig1 ] poses a challenge for the method .energy minimization based methods have also been introduced for saliency detection . liu _et al_. proposed a nonparametric saliency model based on kernel density estimation ( kde ) .et al_. proposed an iterative energy minimization framework to integrate both bottom - up salient stimuli and an object - level shape prior . treating saliency computation as a regression problem , jiang _et al_. integrated regional contrast , regional property and regional backgroundness .et al_. proposed to account for the relationships of objectness and saliency by iteratively optimizing an energy function .this paper provides a more complete understanding of the pisa algorithm first presented in the conference version , giving further background , insights , analysis , and evaluation .furthermore , we improve the previous framework in two aspects . first , the improved pisa is cast as the energy minimization problem , which efficiently solved by the edge - aware cost - volume filter to generate the spatially coherent and fine - grained saliency maps in one shot .second , for suppressing the effect of background , a more general spatial prior is integrated in our framework to obtain more compact saliency maps .in this section , we introduce the formulation of pisa , and briefly overview the main components . given an input image ,the objective of pisa is to extract salient objects automatically and assign consistently high saliency levels to them . without loss of generality, we achieve this goal by minimizing the following energy function where represents the cost of labeling pixel with the saliency level , which composes the data term according to the contrast based measures . 
defines the neighborhood coherence to preserve the local structures and edges centered at .we further specify as where denotes the normalized feature measure of , aggregating two complementary contrast measures defined in a global context .[ fig : framework ] illustrates the main flowchart of pisa .we introduce two types of features to capture contrast information of salient objects with respect to the scene background .they are a color - based contrast feature and a structure - based contrast feature , each of which is further integrated with the spatial priors holistically .these two features complement each other in detecting saliency cues from different perspectives , and are combined together in a pixelwise adaptive manner to measure the saliency .more formally , given an image , we compute the feature - based saliency confidence for each pixel by aggregating the two contrast measures ( i.e. the uniqueness in the feature spaces ) with the spatial priors , as * appearance contrast term * . the contrast measure is proposed based on the observation or principle that rare or infrequent visual features in a global context give rise to high salient values . herewe exploit the structure - based contrast measure in addition to the well exploited color - based contrast measure , and we fuse the two measures to achieve better performance . denotes the uniqueness of pixel with respect to the entire image in the color feature space , and denotes the uniqueness of pixel in the orientation - magnitude ( om ) feature space .their detailed implementations will be discussed in sect .[ sec : cc ] and sect .[ sec : sc ] , respectively . instead of describing the features for pixel via its assigned superpixel , we use the non - parametric histogram distribution to capture and represent both the color and structure features with an appropriate observation region around .it is worth mentioning that our framework is very general to incorporate more saliency cues in the similar way .[ fig : framework ] * spatial priors term *. they are evaluated based on the generally valid spatial prior that salient pixels tend to distribute near the image center and away from the image boundary , i.e. people tend to frame an image by placing salient objects of interest in the center with background borders .thus , we integrate the image center preference and boundary exclusion in the saliency reweighting process .we use and to denote the integration of image center spatial distance and image boundary exclusion of visually similar peers on the color and structure contrast measurement , respectively ( sect .[ sec : spatial ] ) .after reweighting the above saliency measurement based on appearance contrast , we keep the salient pixels compact and centered with the exclusion to the image boundary in the image domain .we normalize the feature - based saliency confidence to the discrete saliency level set for further calculating the label cost .this normalization is given by the following sigmoid - like function : where denotes a rounding function , which rounds a float - point number to the nearest integer , and is the user defined maximum saliency level .we fix to 24 in our all experiments . 
to suppress spurious noises and non - uniform saliency assignment , we further incorporate the spatial connectivity and correlation constraint among neighborhood pixels together with the feature - based measures .the saliency level for pixel should be consistent with its neighborhood pixels which have similar appearance with within its local observation region in the image domain .the coherence constraint can be thus defined as where the observation window for the anchor pixel delineates an arbitrarily - shaped and connected local support region ( see fig .[ fig : feature ] ) , represents a neighboring pixel to in , and is the saliency level assigned to . encodes the similarities between and within , which will be explained in the next section .in this section , we unfold the framework of pisa and discuss the implementation details .in addition , a faster version of pisa , namely f - pisa , is also developed to greatly improve the runtime efficiency and keep comparable performance . [ !t ] unlike the traditional methods that usually process fixed - size windows or over - segmented superpixels , pisa computes saliency by generating an arbitrarily - shaped observation region for each pixel in the image .this pixelwise observation plays a key role in feature extraction and fine - grained saliency assignment . for a pixel centered at a square window , we first define a color similarity criterion for a test pixel as follows , where is the intensity of the color band of the median smoothed input image . set empirically, denotes the preset maximum arm length of the observation window centered at pixel ( the size of is ) , and controls the confidence level of the color similarity .the method of generating follows our previous study in image filtering ( i.e. cross - based local multipoint filtering ) .we first decide a pixelwise adaptive cross with four arms ( left , right , up , bottom ) for every pixel . by changing four arms of every pixel adaptively ,the local image structure is captured reliably .these arms record the largest left / right horizontal and up / bottom vertical span of the anchor pixel , where all the pixels covered by the arms are similar to pixel in color ( i.e. they satisfy eqn .( [ equ : adaptive ] ) ) .let and denote all the pixels covered by the horizontal and vertical arms of the pixel , respectively .let denote any pixel covered by the vertical arms of the pixel ( i.e. ) , as shown in fig .[ fig : clmf ] .then we can further construct the arbitrarily - shaped , connected local observation window by integrating multiple sliding along [ !t ] directly computing pixelwise color contrast in a global image context is computationally expensive , as its complexity is ( ) with being the number of pixels in the image .recently , cheng _ et al._ proposed an effective and efficient color - based contrast measure , i.e. , histogram - based contrast ( hc ) .they assume that if neglecting spatial correlations , pixels with the similar color value should have the same saliency value . however , without taking the neighborhood of pixels into consideration , their strategy of defining contrast on color information of individual pixels is sensitive to noise , and it is not extensible for measuring additional attributes . in this work ,we compute the color contrast based on a non - parametric color distribution extracted from a local homogeneous region . 
as pixels within the homogeneous region share similar appearance with the central pixel , it is more robust to define a contrast measure on color information of homogeneous regions rather than individual pixels . for each pixel , we first construct a local observation region efficiently as described in sect .[ sec : observationwindow ] .a color histogram for pixel is then built from the pixels covered in the localized homogeneous region .using rather than is more consistent with psychological evidences on human eyes receptive field on homogeneous regions . using the color space ,we quantize each color channel uniformly into 12 bins , so the color histogram is a 36-d descriptor ( see fig .[ fig : feature ] ) .next , we cluster pixels that share similar color histograms together using _ kmeans_. the whole color feature space for the input image is then quantized into clusters , indexed by . as a result, we use the rarity of color clusters as the proxy to evaluate the rarity or contrast measure of pixels .let denote the cluster that pixel , or more precisely , is assigned to .we estimate the color - based contrast measure for pixel as where uses the number of pixels belonging to the cluster as a weight to emphasize the color contrast to bigger clusters , and is the average color histogram of cluster . fig . [fig : spatial ] ( a ) illustrates an example image with eight color clusters and their contrast measure . [ !t ] feature space quantization may cause undesirable artifacts .when directly calculating the distance of histograms or giving an inappropriate cluster number , similar color histograms can sometimes be quantized into different clusters .we tackle this problem in three aspects : i ) improve clustering with color dissimilarity .we sightly modify _ kmeans _ in its distance when clustering .in addition to the distance between the two histograms , we add the color dissimilarity between the center pixels into the distance measurement .ii ) decide adaptively according to the histogram distribution .the cluster number of the color feature space is adaptively decided with regard to the image content .similar to that used in , we choose the most frequently occurring color features by ensuring they cover 95% of the histogram distributions of all pixels in the input image .iii ) reweight the salient values of clusters with respect to their visual similarities .we adopt a linearly - varying smoothing scheme to refine the quantization - based saliency measurement .the saliency value of each cluster is replaced by the weighted average of the saliency values of visually similar clusters .larger weights are assigned to those clusters which share similar color features .such a refinement smooths the saliency assignments to each pixel .our proposed method that computes the color contrast based on non - parametric color distribution reduces the computational complexity from to , where the second term corresponds to the complexity of _ kmeans _ and usually is very small . as we observed , typically takes values in the range of 6 to 403 in the asd dataset which contains 1000 images . 
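the cluster-based color contrast described above can be sketched as follows. this is a simplified reading of the measure: it uses a plain euclidean distance between cluster-average histograms and omits the modified kmeans distance and the smoothing refinement from the three remedies, so the exact weights and distances are assumptions rather than the paper's formulation.

```python
import numpy as np
from sklearn.cluster import KMeans

def color_contrast(histograms, n_clusters=8):
    """histograms: (N, 36) array of per-pixel lab color histograms, each built
    over that pixel's observation region and l1-normalized.

    Returns a per-pixel color-contrast score: the rarity of the cluster the
    pixel's histogram falls into, measured against all other clusters and
    weighted by their sizes."""
    km = KMeans(n_clusters=n_clusters, n_init=5, random_state=0).fit(histograms)
    labels, centers = km.labels_, km.cluster_centers_
    sizes = np.bincount(labels, minlength=n_clusters)

    # contrast of cluster i: sum over clusters j of |cluster j| * dist(center_i, center_j)
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    cluster_contrast = (dists * sizes[None, :]).sum(axis=1)
    cluster_contrast /= cluster_contrast.max() + 1e-12   # normalize to [0, 1]
    return cluster_contrast[labels]
```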
as discussed in sect .[ sec : intro ] , using only color information is not adequate to completely depict salient objects or parts of them against the non - salient background .even though in the cases that the color - based measure produces good results , other complementary measures can still contribute to reinforce the saliency assignment .therefore , we propose a structure - based measure to complement the color - based contrast measure here .the proposed structure - based measure models the image gradient distribution for every pixel by a histogram in a rectangular region . measures the occurrence frequency of a concatenated vector consisting of a gradient orientation component and a gradient magnitude component .similarly , we quantize both components into eight bins , and call the resulting feature space the om space .it is clear that a point in such a om space is 16-d ( see fig .[ fig : feature ] ) . in this paper, we fix the local window to the same size as the maximum observation window of the color histogram extraction for the comparability .as will be shown later , we find that our om structure descriptor , though simple , is more effective and reliable than other gradient features such as gabor and lbp in the image saliency detection task .similar to the color contrast measure , _ kmeans _ is utilized to partition the om feature space into clusters , indexed by .the structure contrast measure for pixel is equivalent to measuring that of the cluster which is grouped to as where is the weight stressing the contrast against bigger clusters , and is the average om histogram of the cluster . may suffer from the influence of side effects caused by the brute - force feature space quantization process . again , we alleviate these artifacts by adopting the same strategy illustrated in sect . [sec : cc ] .i.e. , using slightly modified _ kmeans _ , determining the cluster number adaptively by representing the most frequent om vectors and accounting for at least 95% pixels , and applying local smoothing scheme .we observe typically varies from 11 to 43 in the asd dataset .motivated by recent works , we impose a spatial prior term on each of the two contrast measures , constraining pixels rendered salient to be centered and excluded to the image boundary in the image domain based on the image center preference and the image boundary exclusion . for each pixel , we evaluate the initial spatial prior term based on the cluster that contains from two aspects : i ) preference to the image center , and ii ) exclusion to the image boundary .combining these two criteria , we compute as follows : where is the number of pixels which are contained in the same color ( or om ) cluster ( or ) with . is the image center position . indicates the image border region , which is formed by the pixels close to the image borders . as a matter of fact, this region typically belongs to non - salient background .thus , we incorporate the as the probability of / belonging to the image border region .we use a user - specified parameter to control the relative weight of the image boundary exclusion .[ fig : spatial ] ( d ) and ( e ) illustrate the spatial prior together with using the color - based measure and the effectiveness for saliency assignment .since clusters closer to the image border or farther from the image center are often unlikely to be salient , we compute the final spatial prior term for pixel using a threshold as where controls the fall - off rate of the exponential function . 
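a minimal sketch of the 16-d orientation-magnitude (om) descriptor used by the structure contrast is given below; the binning ranges and the l1 normalization are illustration-time assumptions, and the window size would correspond to the maximum observation window mentioned above. the resulting descriptors would then be clustered and scored exactly as in the color case.

```python
import numpy as np

def om_histogram(gray, y, x, win=30, n_bins=8):
    """16-d orientation-magnitude (om) descriptor for pixel (y, x): an 8-bin
    gradient-orientation histogram concatenated with an 8-bin gradient-magnitude
    histogram over a square window around the pixel."""
    h, w = gray.shape
    y0, y1 = max(0, y - win), min(h, y + win + 1)
    x0, x1 = max(0, x - win), min(w, x + win + 1)
    gy, gx = np.gradient(gray[y0:y1, x0:x1].astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)                       # orientation in (-pi, pi]
    ori_hist, _ = np.histogram(ori, bins=n_bins, range=(-np.pi, np.pi))
    mag_hist, _ = np.histogram(mag, bins=n_bins, range=(0, mag.max() + 1e-12))
    desc = np.concatenate([ori_hist, mag_hist]).astype(float)
    return desc / (desc.sum() + 1e-12)             # l1-normalized 16-d descriptor
```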
by now we have defined all four terms necessary for computing eqn. ([equ:contrast]). our goal is to assign each pixel in the image a saliency level from the discrete saliency level set, with the formulation in eqn. ([equ:energyequ]). this is a multi-labeling minimization task integrating a data term and a smoothness term. instead of using global discrete optimization methods, we employ the cost-volume filtering technique to achieve this goal, which computes the discrete assignment efficiently while keeping the labeling locally coherent. specifically, this method aggregates the label costs within a support window by applying a local edge-preserving smoothing filter, and then selects the label in a winner-takes-all fashion. the fine-grained saliency is computed for each pixel with the following steps. * (i) constructing the cost-volume * : following, the cost-volume is a three-dimensional array, and each element in the array represents the cost of choosing a saliency level at a pixel. we compute it as the squared difference between the candidate level and the normalized feature-based saliency measure. * (ii) filtering the cost-volume * : to smooth the label costs in the image domain, the cost-volume is further filtered with an edge-preserving filter. the original cost-volume filtering method uses the guided filter, which employs fixed-size square observation windows and derives the output of the filtering simply as an average of multiple linear regression results from shifted windows of neighboring pixels. in this work, to incorporate the local edge-aware coherence (eqn. ([straint])) and also to achieve a more efficient runtime, we extend the guided filter into a new form based on the pixelwise adaptive observation. specifically, for a pixel we estimate the correlation between it and a neighbor from the neighbor's observation region and the number of pixels it contains; intuitively, the correlation of a pixel and its neighbor is proportional to this overlap. we refer to the cited work for the technical background. the cost can then be updated by the weighted average of the initial costs of all pixels in the region. this step encourages the saliency values to be smooth in homogeneous regions while preserving the object details (e.g., edges and structures) in the fine-grained saliency assignment. * (iii) winner-takes-all label selection * : after the cost-volume is updated, the final saliency level at each pixel is selected as the label with the minimum filtered cost. salient object detection is typically cast as a preprocessing step for subsequent applications, which demands a fast and accurate solution. to optimize the accuracy-complexity trade-off, we present a faster version, f-pisa, which contains well-designed algorithmic choices. instead of processing the full image grid, we perform a gradient-driven subsampling of the input image, so the saliency computation in eqn. ([equ:energyequ]) is only applied to this set of selected pixels. more specifically, for a given image, we pick the pixel with the largest gradient magnitude from each 3 rectangular patch on the regular image grid to form a sparse image. the two proposed contrast saliency measures with edge-preserving coherence are then computed for this sparse image, giving a sparse saliency map.
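before describing how this sparse map is propagated back to full resolution, the label-assignment steps (i)-(iii) above can be sketched as follows on the full grid. the neighbor weighting below is a simplification, since the extracted text does not preserve the exact correlation formula, and the brute-force loops are written for clarity rather than efficiency; the `regions` structure is the per-pixel adaptive observation region assumed from the earlier sketch.

```python
import numpy as np

def fine_grained_saliency(norm_saliency, regions, max_level=24):
    """norm_saliency: (H, W) array of normalized feature-based saliency in [0, max_level].
    regions: dict mapping each pixel (y, x) to the set of pixels in its
             adaptive observation region.

    Builds a cost volume over the discrete levels, smooths it with a simple
    region-overlap weighted average, and picks the level with minimum cost."""
    levels = np.arange(max_level + 1)

    # (i) cost volume: squared difference between each candidate level and the measure
    cost = (levels[None, None, :] - norm_saliency[..., None]) ** 2

    # (ii) edge-aware smoothing: average the costs of neighbors whose observation
    # regions contain the anchor pixel, weighted inversely by region size
    smoothed = np.copy(cost)
    for (y, x), _ in np.ndenumerate(norm_saliency):
        neighbors = [q for q in regions[(y, x)] if (y, x) in regions[q]]
        if neighbors:
            weights = np.array([1.0 / len(regions[q]) for q in neighbors])
            costs = np.stack([cost[q] for q in neighbors])
            smoothed[y, x] = (weights[:, None] * costs).sum(0) / weights.sum()

    # (iii) winner-takes-all: the level with minimum smoothed cost
    return smoothed.argmin(axis=2)
```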
to obtain a full - resolution saliency map , we propagate the saliency values among pixels in the same pixel - adaptive observation region , as they share the similar appearance .this propagation scheme resembles the principle of joint bilateral upsampling , using a high - resolution color image as a guidance to upsample a sparsely - valued solution map .it can produce a smoothly varying dense saliency map without blurring the edges of salient objects .thus given a pixel , its saliency value is obtained as where belongs to and its pixel - adaptive support region contains , is the total number of such pixels , and . in sect .[ sec : exper ] , we evaluate the performance of this fast version quantitatively and qualitatively on six public benchmark datasets .we present empirical evaluation and analysis of the proposed pisa against several state - of - the - art methods ( including the conference version ) on six public available datasets .we further analyze the effectiveness of the two complementary components , i.e. , color - based contrast measure and structure - based contrast measure , as well as their corresponding spatial priors ( image center preference and boundary exclusion ) .we justify the importance of the proposed energy minimization framework and the sigmoid - like function for the feature - based saliency confidence normalization . at last ,we discuss our limitations through failure cases .we evaluate the proposed methods on six public available datasets .they are asd , sod , sed1 , ecssd , pascal-1500 and the taobao commodity dataset ( tcd ) newly created by us .the asd is also called msra-1000 which contains 1000 images with accurate human - labeled masks for salient objects and has been widely used by recent methods .the sod dataset is more challenging with complex objects and scenes included in its 300 images , and we obtain the ground - truth for this dataset from the authors of the work . the sed1 dataset is exploited recently which contains 100 images of single objects , and we consider a pixel salient if it is annotated as salient by all subjects .the ecssd contains 1000 diversified patterns in both background and foreground images , which includes many semantically meaningful but structurally complex images for evaluation .the pascal-1500 , created from pascal voc 2012 , is also a challenge dataset , in which the images contain multiple objects appearing at a variety of locations and scales with cluttered background .the tcd dataset that we make available with this paper contains 800 commodity images from the shops on the taobao website .the ground truth masks of the tcd dataset are obtained by inviting common sellers of taobao website to annotate their commodities , i.e. , masking salient objects that they want to show from their exhibition .these images include all kinds of commodity with and without human models , thus having complex backgrounds and scenes with highly complex foregrounds .we choose the total saliency level . for the step of generating pixelwise adaptive observation , we set \{ , } = \{60 , 10 } to extract color features and build saliency coherence support regions .we set \{ } = \{ , 0.006 , 0.001 , 30}. while for f - pisa , we set \{ , } = \{50 , 5 } and \{ } = \{ , 0.035 , 0.001 , 30}. these parameters are fixed in all experiments for the six datasets .we use ( p)recision-(r)ecall curves ( pr curves ) , metric and mae to evaluate all the algorithms . 
given the binarized saliency map via the threshold value from 0 to 255 , precision means the ratio of the correctly assigned salient pixel number in relation to all the detected salient pixel number , and recall means the ratio of the correct salient pixel number in relation to the ground truth number .different from ( p)recision-(r)ecall curves using a fixed threshold for every image , the metric exploits an adaptive threshold of each image to perform the evaluation .the adaptive threshold is defined as where and denote the width and height of an image , respectively .the f - measure is defined as follows with the precision and recall of the above adaptive threshold : where we set the = 0.3 to emphasize the precision as suggested in . as pointed out in ,pr curves and metric are aimed at quantitative comparison , while mean absolute error ( mae ) are better than them for taking visual comparison into consideration to estimate dissimilarity between a saliency map and the ground truth , which is defined as where is the number of image pixels .we compare our methods with thirteen recent state - of - the - art works : dense and sparse reconstruction ( dsr ) , global cues ( gc ) , histogram - based contrast ( hc ) , context - aware saliency ( ca ) , frequency - tuned saliency ( ft ) , spectral residual saliency ( sr ) , spatial - temporal cues ( lc ) , context - based saliency ( cb ) , markov chain saliency ( mc ) , hierarchical saliency ( hs ) , graph - based manifold ranking ( gm ) , saliency filter ( sf ) , and region - based contrast ( rc ) .whenever they are available , we use the author - provided results . results of hc , ft , sr , lc , rc are generated by using the codes provided by , and we adopt the public implementations from the original authors for dsr , gc , ca , cb , hs , gm , mc and sf . note that the saliency maps of all methods are mapped to the range [ 0 , 255 ] by the same max - min normalization method for the further evaluation .the evaluation results are shown in fig .[ fig : prf ] and [ fig : prf2 ] , respectively .[ htbp ] [ htbp ] in fig .[ fig : prf ] , based on the pr curves of asd , sod and sed1 , our proposed method pisa performs nearly the same as compared methods . to evaluate the overall performance of the pr curve, we calculate the average precision , which is the integral area under the pr curve . for the asd dataset ,our pisa , dsr , hs , gm and mc all achieve more than 93.0% accuracy , while the average precision of pisa is 1.5% , 0.6% , 2.1% , 1.7% less than dsr , hs , gm , mc , respectively . for the sod dataset , pisa , dsr and mc all achieve more than 80% accuracy , while the average precision of pisa is 0.5% better than both of them .for the sed1 dataset , pisa , dsr , hs , gm and mc all achieve more than 90.0% accuracy , while the average precision of pisa is only 2.8% less than gm .based on the metric in fig .[ fig : prf ] , pisa obtains 2% less than gm / mc on asd , 0.5% less than dsr / mc on sod , 4% less than gm/ mc on sed1 . based on the mae in fig .[ fig : prf ] , pisa obtains the best results on the sod datasets and advances together with the best method gm on asd / sed1 .hence , compared with all the compared methods , pisa is only slightly better on sod , and is only a little worse on asd and sed1 .since asd and sed1 datasets are simple and not challenging , it is not suitable for showing the advantage of pisa . the superior performance of pisa is demonstrated in fig .[ fig : prf2 ] . 
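before examining those curves, the evaluation measures defined above can be made concrete with a short sketch. the beta-squared value of 0.3 comes from the text; the adaptive threshold is taken here as twice the mean saliency value, which is the common choice in this line of work but is an assumption, since the extracted text does not preserve the exact formula.

```python
import numpy as np

def adaptive_fmeasure(sal, gt, beta2=0.3):
    """sal: saliency map scaled to [0, 1]; gt: binary ground-truth mask.
    Binarize with the adaptive threshold (assumed to be twice the mean saliency),
    then compute the weighted f-measure with beta^2 = 0.3."""
    thresh = 2.0 * sal.mean()
    pred = sal >= thresh
    tp = np.logical_and(pred, gt).sum()
    precision = tp / (pred.sum() + 1e-12)
    recall = tp / (gt.sum() + 1e-12)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-12)

def mae(sal, gt):
    """Mean absolute error between a saliency map and the ground truth, both in [0, 1]."""
    return np.abs(sal.astype(float) - gt.astype(float)).mean()
```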
based on the pr curves , and mae in fig .[ fig : prf2 ] , one can clearly see that our pisa consistently outperforms all the compared methods on ecssd , pascal-1500 , tcd , respectively .in particular , tcd is different in focusing on commodity images , whose salient objects contain diverse patterns and rich structure information .this is consistent with our motivations i ) and ii ) in sect [ sec : intro ] . designed to meet these objectives, our pisa achieves clearly higher performance than the compared methods .in addition , pisa in this paper performs 2% better than the conference version ( pisa - prev ) on average , and readers are encouraged to see the supplementary file for more details .[ htbp ] [ htbp ] we further analyze the effectiveness of the two complementary measures , i.e. color - based contrast ( cc ) and structure - based contrast ( sc ) . the quantitative results on the six datasets in fig .[ fig : cuepr ] demonstrate the requisite of aggregating the two measures : pisa ( sc + cc ) performs consistently better than sc or cc alone .we can observe that the aggregated saliency detection achieves superior performance , as cc and sc capture saliency from different aspects , verified by the visual results in fig .[ fig : fig1 ] .it is worth noting that we obtain favorable results on the images in the second and third rows in fig .[ fig : fig1 ] , which are exhibited in and as failure cases .they serve as good evidences to advocate our choice in fusing complementary saliency cues .we also analyze the contribution of the introduced spatial priors , i.e. image center preference and boundary exclusion .the quantitative results on the six datasets in fig .[ fig : comppr ] illustrate the advantage of introducing these spatial priors .`` without be '' represents the pisa framework without boundary exclusion ( be ) only , while `` without cp '' represents without image center preference ( cp ) only. justified by the experiments on the six datasets , the introduced spatial priors contribute to achieve superior performance , as cp and be represent the typical choices when people take pictures .we also justify the significance of the proposed energy - minimization framework ( eqn .( [ equ : energyequ ] ) ) by comparing with our conference framework ( pisa - prev framework ) . for fair comparison ,we conduct the experiment with all other parameters fixed , i.e. 
they share the same normalized feature - based saliency measure , and the only difference is the framework .[ fig : compframework ] demonstrates that our energy - minimization framework obtains higher precision when the recall belongs to [ 0 , 0.2 ] on pr curves and achieves better mae results .thus , by modeling the appearance contrast based saliency measure and the neighborhood coherence constraint jointly , the proposed energy - minimization framework can highlight saliency objects more uniformly .we have also explored other commonly used features gabor and lbp to substitute om for capturing structure information .for all the features , we choose their best results for comparison by tuning their quantizations .the dimensions for gabor and lbp features are and , respectively .the pr - curves of the experiments evaluated on the asd dataset are shown in fig .[ fig : grad ] .the om descriptor outperforms the others .meanwhile , under the proposed framework , our om descriptor also shows higher computational efficiency than gabor and lbp due to its low dimension .[ htbp ] in our proposed framework , the normalization step , which maps the feature - based saliency measure into discrete saliency level set \{0 , ... , } , has an impact on the final saliency maps .[ fig : norm_visual ] illustrates this impact of different normalizations , such as commonly used max - min ( linear ) , log - like ( nonlinear ) , and exp - like ( nonlinear ) .compared with the linear normalization , log - like increases the saliency levels of the whole pixels ( fig .[ fig : norm_visual](c ) ) , while exp - like decreases all pixels saliency levels ( fig .[ fig : norm_visual](d ) ) .sigmoid - like increases the number of high salient value pixels and reduces those of low salient value for its s shape ( fig .[ fig : norm_visual](e ) ) . for exploring these normalization functions ,we conduct the experiment on the pascal-1500 dataset as it is the most challenging and the largest dataset with metric and mae evaluation .note that we discard the pr curves for that the change of normalization methods will not affect pr results as long as the mapping is one - to - one .[ fig : normalization ] demonstrates that a sigmoid - like function performs a little better in metric and much better in mae evaluation than others .thus we adopt a sigmoid - like normalization ( eqn .( [ equ : sigmoid ] ) ) to produce better visual saliency maps .the experiments are carried out on a desktop with an intel i7 3.4ghz cpu and 8 gb ram .the average runtime with ranking of our approaches ( pisa and f - pisa ) and competing methods on the asd dataset , whose most images have a resolution of , are reported in table [ tab : time ] . though pisa is a little slow ( rank 13 , slightly faster than our conference version pisa - prev ), our fast implementation f - pisa , significantly improves the efficiency ( rank 6 , times faster than pisa ) , while keeping comparable accuracy ( better than the top five methods in the rank list , see fig .[ fig : prf ] and [ fig : prf2 ] ) . specifically , for pisa : calculating the normalized feature - based saliency measure costs 310ms ( about 50% ) , minimizing the energy function costs 280ms ( about 45% ) , and others cost 30ms ( about 5% ) . for f - pisa : computing saliency costs 30ms ( about 68% ) , while subsampling and joint bilateral upsampling costs 12ms ( about 27% ) , and others cost 2ms ( about 5% ) .[ tab : time ] .comparison of the average running time ( seconds per image ) on the asd dataset . 
[ cols="^,^,^,^",options="header " , ] in fig .[ fig : limit ] , we present unsatisfying results generated by pisa . as our approach uses the spatial priors , it has problems when such priors are invalid .for example , if the saliency object occurs near the image boundary to quite extent , some regions of it can be suppressed ( see fig . [fig : limit](first row ) ) due to the image boundary exclusion prior .if the center prior does not hold , the background regions located near the image center can not be effectively suppressed in saliency evaluation ( see fig . [fig : limit](second row ) ) . by adjusting the relative contribution of these priors through tuning , we can alleviate their influences .thus , the weakness of the proposed methods is : for any background regions that have been assigned high saliency values from either of the contrast cues after the modulation of the spatial priors , they remain salient in the final saliency map .this problem could be tackled by incorporating high - level knowledge to adjust the confidence of two measures in the formulation .we have presented a generic and unified framework for pixelwise saliency detection by aggregating multiple image cues and priors , where the feature - based saliency confidence are jointly modeled with the neighborhood coherence constraint . based on the saliency model , we employed the shape - adaptive cost - volume filtering technique to achieve fine - grained saliency value assignment while preserving edge - aware image details .we extensively evaluated our pisa on six public datasets by comparing with previous works .experimental results demonstrated the advantages of our pisa in detection accuracy consistency and runtime efficiency . for future work, we plan to incorporate high - level knowledge and multilayer information , which could be beneficial to handle more challenging cases , and also investigate other kinds of saliency cues or priors to be embedded into the pisa framework .k. shi , k. wang , j. lu , and l. lin .pisa : pixelwise image saliency by aggregating complementary appearance contrast measure with spatial priors . in _ proc .ieee conf .pattern recognit ._ , pp . 2115 - 2122 , 2013 . keze wang received the bs degree in software engineering from sun yat - sen university , guangzhou , china , in 2012 .he is currently pursuing the ph.d .degree in computer science and technology at sun yat - sen university , advised by professor liang lin .his current research interests include computer vision and machine learning .liang lin is a professor with the school of advanced computing , sun yat - sen university ( sysu ) , china .he received the b.s . and ph.d .degrees from the beijing institute of technology ( bit ) , beijing , china , in 1999 and 2008 , respectively . from 2006to 2007 , he was a joint ph.d .student with the department of statistics , university of california , los angeles ( ucla ) . 
his ph.d .dissertation was achieved the china national excellent ph.d .thesis award nomination in 2010 .he was a post - doctoral research fellow with the center for vision , cognition , learning , and art of ucla .his research focuses on new models , algorithms and systems for intelligent processing and understanding of visual data such as images and videos .he has published more than 70 papers in top tier academic journals and conferences .he was supported by several promotive programs or funds for his works , such as `` program for new century excellent talents '' of ministry of education ( china ) in 2012 , and guangdong nsfs for distinguished young scholars in 2013 .he received the best paper runners - up award in acm npar 2010 , google faculty award in 2012 , and best student paper award in ieee icme 2014 .he has served as an associate editor for neurocomputing and the visual computer .jiangbo lu(m09 ) received the b.s . anddegrees in electrical engineering from zhejiang university , hangzhou , china , in 2000 and 2003 , respectively , and the ph.d .degree in electrical engineering from katholieke universiteit leuven , leuven , belgium , in 2009 . from april 2003 to august 2004 , he was with via - s3 graphics , shanghai , china , as a graphics processing unit ( gpu ) architecture design engineer . in 2002 and 2005, he conducted visiting research at microsoft research asia , beijing , china . since october 2004, he has been with the multimedia group , interuniversity microelectronics center , leuven , belgium , as a ph.d .researcher . since september 2009, he has been with the advanced digital sciences center , singapore , which is a joint research center between the university of illinois at urbana - champaign , urbana , and the agency for science , technology and research ( a*star ) , singapore , where he is leading a few research projects currently as a senior research scientist .his research interests include computer vision , visual computing , image processing , video communication , interactive multimedia applications and systems , and efficient algorithms for various architectures .chenglong li received the bs degree in applied mathematics in 2010 , and the m.s .degree in computer science in 2013 from anhui university , hefei , china .he is currently pursuing the ph.d .degree in computer science at anhui university .his current research interests include computer vision , machine learning , and intelligent media technology . keyang shi received the bs and ms degrees in software engineering and computer science respectively , from sun yat - sen university , guangzhou , china , in 2011 and 2014 .his research interests include computer vision , machine learning and cloud computing .
|
driven by recent vision and graphics applications such as image segmentation and object recognition , computing pixel - accurate saliency values to uniformly highlight foreground objects becomes increasingly important . in this paper , we propose a unified framework called pisa , which stands for pixelwise image saliency aggregating various bottom - up cues and priors . it generates spatially coherent yet detail - preserving , pixel - accurate and fine - grained saliency , and overcomes the limitations of previous methods which use homogeneous superpixel - based and color only treatment . pisa aggregates multiple saliency cues in a global context such as complementary color and structure contrast measures with their spatial priors in the image domain . the saliency confidence is further jointly modeled with a neighborhood consistence constraint into an energy minimization formulation , in which each pixel will be evaluated with multiple hypothetical saliency levels . instead of using global discrete optimization methods , we employ the cost - volume filtering technique to solve our formulation , assigning the saliency levels smoothly while preserving the edge - aware structure details . in addition , a faster version of pisa is developed using a gradient - driven image sub - sampling strategy to greatly improve the runtime efficiency while keeping comparable detection accuracy . extensive experiments on a number of public datasets suggest that pisa convincingly outperforms other state - of - the - art approaches . in addition , with this work we also create a new dataset containing commodity images for evaluating saliency detection . k. wang : pisa : pixelwise image saliency by aggregating complementary appearance contrast measures with edge - preserving coherence visual saliency , object detection , feature engineering , image filtering
|
what are the laws that regulate learning on a neuronal level in animals or humans? so far this important question is open; however, the common picture of a biological learning rule is that the synaptic weights are changed according to a local rule. in the context of neural networks this means that only the neurons adjacent to a synapse contribute to changes of its synaptic weight. such a mechanism with respect to synaptic strengthening was proposed by donald hebb in 1949 and experimentally found by t. bliss and t. lomo. in biological terminology, hebbian learning is called _ long-term potentiation _ (ltp). experimentally as well as theoretically there is a great body of investigations aiming to formulate precise conditions under which learning in neural networks takes place. for example, the influence of the precise timing of pre- and postsynaptic neuron firing, or the duration of a synaptic change (for a review see), termed _ short _ or _ long-term plasticity _, has been studied extensively. all of these contributions share the locality condition proposed by hebb. in this article we present a novel stochastic hebb-like learning rule inspired by experimental findings about heterosynaptic plasticity. this form of neural plasticity affects not only the synapse between the pre- and postsynaptic neuron in which a synaptic modification was induced, but also further remote synapses of the pre- and postsynaptic neuron. additionally, we demonstrate that this learning rule can be successfully applied to train multilayer neural networks. this paper is organized as follows. in section [intro_nn] we motivate our learning rule by a summary of experimental observations concerning synaptic plasticity and properties of biological and artificial neural networks, as far as they are useful for a better understanding of our learning rule. in section [def_lr] we propose our learning rule and give a mathematical definition. we investigate our learning rule in section [results] by numerical simulations. in section [discussion] we discuss and compare our stochastic learning rule with other learning rules. this article ends in section [end] with a conclusion and an outlook on further investigations. one property that all neural networks have in common, biological as well as artificial, is that two different processes take place simultaneously. the first process concerns signal processing and the second learning. signal processing is reflected by the time-dependent activity of a neuron, whereas learning concerns the dynamical behavior of the synaptic weights between two neurons in the network. one major difference between both dynamics is that they occur on different timescales; normally, learning is much slower than the neural activity. despite our focus in this article on the learning dynamics, we cannot neglect a treatment of the neural activity, because both processes are coupled and influence each other. figure [fig1] shows a schematic neural network consisting of neurons. the synapses are not drawn directly from neuron to neuron but in two pieces; this shall depict the synaptic cleft of chemical synapses. the reason for this becomes clearer when we describe our learning rule below. the left figure depicts a signal path within a feed-forward network involving several neurons and the synapses between them.
in this and all following figures we suppose that the signal flow, and hence the orientation of the path, is from the top to the bottom. the neurons (synapses) that were actively involved in this signal processing are drawn as black circles (full lines). concerning this information flow, frey et al. found in the hippocampus of rats in vivo that there is a _ synaptic tagging _ mechanism. this mechanism tags synapses that were repeatedly involved in information processing within a certain time window of up to 1.5 hours. if one of these synapses is restimulated within this time interval, then a synaptic modification is induced. one can interpret this as a kind of echo or memory of past activity within the neural network. hence, the left fig. [fig1] can be interpreted in the sense that the depicted path is not the actual information flow, but the reflection of recent past activity, which the neurons and synapses can remember via an additional degree of freedom. suppose now that this signal flow caused a synaptic modification, as depicted in the right fig. this situation corresponds to the so-called hebbian learning. necessary conditions for this kind of learning are that the neurons surrounding the synapse were both active within a certain time window and that the presynaptic neuron fires before the postsynaptic neuron. in biological terms, hebbian learning is also called _ long-term potentiation _ (ltp), because it strengthens the synaptic weight, in contrast to _ long-term depression _ (ltd), which weakens the synaptic weight if the spiking time points of pre- and postsynaptic neuron are reversed. however, both kinds of learning, ltp as well as ltd, have one thing in common: they are homosynaptic with respect to the number of synapses that are changed. recently, there has been an increasing number of experimental results investigating a new form of synaptic modification, the so-called heterosynaptic plasticity. in contrast to homosynaptic plasticity, where only the synapse between the active pre- and postsynaptic neuron is changed, heterosynaptic plasticity also concerns further remote synapses of the pre- and postsynaptic neuron. this scenario is depicted in the left fig. we suppose again that the synapse was changed either by ltp or ltd. fitzsimonds et al. found in cultured hippocampal neurons that the induction of ltd is also accompanied by back-propagation of depression in the dendritic tree of the presynaptic neuron. furthermore, depression also propagates laterally in the pre- and postsynaptic neuron. similar results hold for the propagation of ltp; see for a review. these experimental findings are depicted in the left fig. we emphasize all synapses whose weights are changed, and all neurons that enclose these synapses, by drawing full lines and black circles, respectively. a direct comparison of the left fig. [fig2], which depicts heterosynaptic plasticity, with the right fig. [fig1], which depicts homosynaptic plasticity, reveals the tremendous difference in the number of affected synapses and the star-like spread of plasticity in some of the synapses connected with the two neurons that enclosed the synapse in which plasticity was induced. we want to emphasize explicitly that fitzsimonds et al.
have, up to now, found no forward-propagated postsynaptic plasticity. this would correspond to the synapses of the postsynaptic neuron that are drawn as dotted lines in the left fig. [fig2]. a biological explanation for the cellular mechanisms of these findings is currently under investigation. fitzsimonds et al. suggest the existence of retrograde signaling from the post- to the presynaptic neuron, which could produce a secondary cytoplasmic factor for back-propagation and presynaptic lateral spread of ltd. on the postsynaptic side, lateral spread of ltd could be explained similarly under the assumption that there is a blocking mechanism for the cytoplasmic factor which prevents forward-propagated ltd. they are of the opinion that extracellular diffusible factors are of minor importance. (fig. [fig2] caption: the affected neurons and synapses are drawn as black circles and full lines. right: otmakova et al. found that neurons in the ca1 region of the hippocampus receive a global reinforcement signal in the form of dopamine.) the experiments of fitzsimonds et al. are certainly an extension of homosynaptic learning, which we denote briefly as hebbian learning, but nevertheless both principles can be characterized as unsupervised learning, because both use exclusively local information available in the neural system. this is in contrast to the famous back-propagation learning rule for artificial neural networks. the back-propagation algorithm is famous because until the 1980s there was no systematic method known to adjust the synaptic weights of an artificial multilayer (feed-forward) network to learn a mapping. still, the problem with the back-propagation algorithm is that it is not biologically plausible, because it requires a back-propagation of an error through the network. we emphasize that the problem is not the back-propagation process itself, because, e.g., heterosynaptic plasticity could provide such a mechanism as depicted in the left fig. [fig2], but the knowledge of the error, which cannot be known explicitly to the neural network. for this reason, learning by back-propagation is classified as supervised learning or learning by a teacher. however, there is a modified form of supervised learning, namely reinforcement learning, that is biologically plausible. reinforcement learning reduces the information provided by a teacher to a binary reinforcement signal that reflects the quality of the network's performance. interestingly, experimental observations from the hippocampal ca1 region have shown that there is a global signal in the form of dopamine which is fed back to the neurons and thereby causes a modulation of ltd. schematically, this is depicted in the right fig. [fig2]. in this figure, each neuron is connected with an additional edge which represents the feedback of dopamine in the form of a reinforcement signal. based on the experimental findings by frey et al. and otmakova et al., bak and chialvo as well as klemm et al.
suggested biologically inspired learning rules for neural networks that combine unsupervised hebbian (homosynaptic) learning with reinforcement learning. we call this kind of combination of hebbian and reinforcement learning hebb-like learning, to indicate that the learning rule differs from hebb's but nevertheless contains characteristics that are biologically plausible. this includes the extension from purely unsupervised learning to a combination of unsupervised and reinforcement learning. the question which arises now is: how can one construct a hebb-like learning rule which additionally mimics the learning behavior of heterosynaptic plasticity found by fitzsimonds et al.? this question will be addressed in the next section. the working mechanism of the learning rule we suggest is based on the explanation of fitzsimonds et al. for heterosynaptic plasticity given above. to understand what kind of mathematical formulation is capable of describing a secondary cytoplasmic factor in a qualitative way, we start our explanation by emphasizing that a neuron is, from a biological point of view, first of all a cell. the subdivision of a neuron into synapses, soma (cell body) and axon is a model and already reflects the direction of the information flow within the neuron, namely from the synapses (input) to the soma (information processing) to the axon (output). here, we do not question this model view with respect to the direction of signal processing, but with respect to learning. we see no biological reason why the model of a neuron for signal processing should be the same as the model of a neuron for learning. in fig. [fig3] we emphasize the cell character of a neuron by shading the contour of the whole neuron in gray. now, our reason for drawing the synapses in an unusual way becomes clear, because it automatically emphasizes the cell character of a neuron. suppose now that we assign to each neuron in the network one additional parameter, as shown in the figure. we call these parameters neuron counters. the neuron counters shall modulate the synaptic modification in a certain way, defined in detail below. according to our cell view of the neuron, we further assume that the neuron counters of adjacent neurons, which are connected by synapses, can communicate with each other in an additive way. e.g., in fig. [fig3] two adjacent neuron counters form a new value in the synapse between them, which we call the approximated synapse counter. by this mechanism we obtain a star-like influence of, e.g., the two neuron counters on all synapses connected with either of the two neurons, because one of the two counters enters the approximated synapse counter that regulates the synaptic update of each of these synapses. this situation corresponds in a qualitative way to the learning behavior of heterosynaptic plasticity, however with the difference that we have a fully symmetrical learning rule. an interpretation of the communication between adjacent neuron counters can be given if one views the neuron counters as cytoplasmic factors which are allowed to move freely within the cytoplasm of the corresponding neuron (cell). because we introduced no blocking mechanism for the forward propagation of the postsynaptic neuron counter, we obtain a fully symmetric communication between adjacent neuron counters.
in the next section, we define mathematically the qualitative principle for heterosynaptic learning presented above. unfortunately, there are no experimental data available that would allow us to specify quantitatively the influence of the neuron counters on the corresponding synapse. for this reason, we use an ansatz to close this gap and make it plausible. if one assumes that the neuron counters shall modulate learning, then it is plausible to determine their values as a function of a reinforcement signal reflecting the performance of the network qualitatively. in the simplest case, the dynamics of the neuron counters depends linearly on the reinforcement signal, and a threshold restricts the neuron counters to a finite set of possible values. the value of a neuron counter reflects the network's performance, but it has only a relative and no absolute meaning with respect to the mean network error. this can be seen by the following example. suppose, for instance, that the counter value indicates that at least the last output of the network was right; however, we know nothing about the outputs which occurred before the last one. e.g., two different sequences of reinforcement signals can lead to the same value of the neuron counter, provided the start values differ appropriately. obviously, the estimated mean error is different in both cases if averaged over the last seven time steps. the crucial point is that the start value of the neuron counter is not available to the neuron and, hence, the neuron cannot directly calculate the mean error of the network. however, we can introduce a simple assumption which allows an estimate of the mean network error. we claim that if, for one neuron, the neuron counter is lower in network one than in network two, where the networks are trained by two different learning rules, then the mean error of network one is lower than that of network two. this may not hold in all cases, but it is certainly true on average. by this we couple the value of the neuron counter to the mean error of the network. due to the fact that this holds only statistically, we will introduce a stochastic rather than a deterministic update rule for the synapses that depends on the neuron counters.
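the exact form of the neuron-counter dynamics is not preserved in the extracted text, so the following is only a plausible sketch of the linear, thresholded accumulation just described; the sign convention (counters grow with failures and shrink with successes) and the threshold value are assumptions made for illustration.

```python
import numpy as np

def update_neuron_counter(c, r, theta=10):
    """One plausible realization of the linear neuron-counter dynamics:
    the counter of an active neuron accumulates the negated reinforcement
    signal r (+1 for a correct output, -1 for a wrong one, assumed here),
    and the threshold theta restricts it to the finite range [-theta, theta].
    A large counter therefore indicates many recent failures."""
    return int(np.clip(c - r, -theta, theta))

# toy usage: the same sequence of outcomes drives two counters with different start values
for c0 in (0, -4):
    c = c0
    for r in [-1, -1, +1, -1, -1, -1, +1]:
        c = update_neuron_counter(c, r)
    print(c0, "->", c)
```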
in the previous section we said that adjacent neuron counters can communicate if both neurons are connected by a synapse. this gives a new variable we call the approximated synapse counter. we will use the approximated synapse counter as the driving parameter of our stochastic update rule, because its value reflects the performance of the synapse which shall be updated, and because the synapses are the adaptive part of a neural network. hence, evaluating the value of the approximated synapse counter of a synapse will indirectly give us a decision about the update of this synapse. it is clear that, roughly speaking, the higher the approximated synapse counter of a synapse is, the higher should be the probability that the synapse is updated. this intuitively plausible assumption will now be quantified. similar to previous approaches, only active synapses which were involved in the last signal-processing step can be updated, and only if the output of the network was wrong. this is plausible, because it prevents already learned mappings in the neural network from possibly being destroyed. if the output was wrong, the probability that a synapse is updated is given by eq. [supdate]; this probability has to be calculated for each synapse in the network. we want to emphasize that this needs only local information besides the reinforcement signal; hence, it is a biologically possible mechanism. if the synapse is actually chosen for update, the synaptic weight is depressed, where a positive constant determines the amount of the synaptic depression. to evaluate the stochastic update condition eq. [supdate], two auxiliary variables have to be identified. this is done in the following way: 1. calculate the approximated synapse counter as the sum of the two adjacent neuron counters. 2. map the value of the approximated synapse counter to the unit interval by what we call the rank ordering probability distribution. 3. draw the random variable from the continuous coin distribution (eq. [p_coin]). we had three reasons to choose a power law in eq. [p_coin] for the coin distribution instead of an equal distribution, which would be the simplest choice. first, we see no evidence that a random number generator occurring in a neural system should favor an equal distribution. second, it is highly probable that two different random number generators of the same biological system are not identical; instead, they could have different parameters, in our case different exponents. in this paper we will content ourselves with investigating the case of identical random number generators, but our framework can be directly applied to the described scenario. third, for a particular choice of the exponent, the coin distribution in eq.
[p_coin] becomes the equal distribution. this allows us to investigate the influence of the distance of the coin distribution from an equal distribution on the learning behavior of a neural network by studying different values of its exponent. we want to remark that in this case the update probability eq. [supdate] simplifies accordingly. before we present our results in the next section, we want to visualize the stochastic update probability. figure [fig3_new] shows the update probability as a function of the rank-ordered synapse counter value. the different curves correspond to different values of the exponent of the coin distribution. one can see that the update probability follows these values; this holds for each curve in fig. [fig3_new]. that means the higher these values are, the higher is the update probability. this is the behavior one would intuitively expect, because high values correspond to high values of the approximated synapse counters, indicating high values of the neuron counters, which correspond to a bad network performance. moreover, one can see in fig. [fig3_new] that the larger the exponent of the coin distribution, the higher is the update probability for a fixed argument; in the limit, the update probability equals one for all values. hence, higher values of the exponent of the coin distribution result in a higher update probability; that means this exponent controls the sensitivity by which the update probability depends on its argument. another parameter our stochastic update rule depends on is the exponent of the rank ordering distribution. we display the update probability in fig. [fig_update2] as a function of the approximated synapse counter and this exponent to visualize its influence. the values of the update probability are color-coded, from blue for the lowest to red for the highest values. for the left and the right fig. [fig_update2] we used two different exponents for the coin distribution. if the approximated synapse counter takes its lowest value, no update takes place. for increasing values of the approximated synapse counter and a fixed exponent, one obtains increasing values of the update probability. moreover, higher values of the coin exponent lead to higher update probabilities, as can be seen by comparing the left and right fig. [fig_update2]. increasing values of the rank-ordering exponent result in decreasing update probabilities for fixed counters. to summarize, the stochastic update condition we introduced for a synaptic update depends on six parameters. from the visualizations given in figs. [fig3_new] and [fig_update2] we saw that increasing values of the coin exponent and of the approximated synapse counter, as well as decreasing values of the rank-ordering exponent, lead to an increase in the update probability. for the following simulations we use a three-layer feed-forward network. the neural network consists of input, hidden and output neurons. the neurons of adjacent layers are all-to-all connected by synapses. as neuron model we use binary neurons. the network dynamics is regulated by a winner-take-all mechanism, and the inner field of a neuron is calculated as a weighted sum over _ all _ neurons of the preceding layer. as the active neuron in each layer we choose the neuron with the highest activity, which is set to one; all other neurons are set to zero. by this we enforce a sparse coding. bak and chialvo have called this _ extremal dynamics _. the training of the neural network works as follows: we choose randomly one of the possible input patterns and initialize the neurons in the input layer. then we calculate, according to the network dynamics eqs. [innerfield]-[argmax], the activity of the neurons in the subsequent layers. if the output of the network is correct, we set the reinforcement signal to its rewarding value; otherwise it is set to its punishing value.
according to eq. [synmem2] we calculate the new values of the neuron counters for the neurons which were active during the signal processing of the input pattern. if the output was wrong, we apply our stochastic learning rule; otherwise we proceed with the next input pattern, until the network has converged. the mapping which shall be learned by the network is the exclusive-or (xor) function and higher-dimensional extensions thereof, called the parity problem. one can describe the mappings from the parity problem class as indicator functions for an odd or even number of ones in the binary input vector of the network: if the number of ones in the input vector is odd, the output of the network shall take one value, and if it is even, the other. in this sense, the exclusive-or (xor) function is the two-dimensional representative of this class. to avoid the case of a zero input vector, which would result in zero activity of the subsequent layers, we introduce a bias neuron. here, the problem index is given by the exponent of the maximal number of patterns which can be realized by a random binary vector of the corresponding length. for the following simulations the initial weights of the network were chosen randomly from a fixed interval, with additional noise added each time a synaptic modification was induced. we start our investigations by studying the influence of the memory length of the neuron counters and of the two exponents on the mean ensemble error of the network's performance during learning of the xor function. the contour plot in fig. [fig_res1] shows the simulation results for a fixed parameter setting and three neurons in the hidden layer. the mean ensemble error was obtained by averaging over independent runs of an ensemble and is displayed at two time steps (left and right figure) during the learning process. to find the optimal parameter configuration which minimizes the mean ensemble error, we keep one parameter fixed and vary the other two within an interval. from both figures one can see that the mean learning time decreases with an increasing number of neurons in the hidden layer, as expected, whereby the first increase in the number of hidden neurons has the biggest effect. this is due to the fact that destructive path interference, meaning that already correctly learned paths in the network are destroyed by a new synaptic modification, is strongly reduced by increasing the number of possible paths through additional neurons in the hidden layer. increasing the number of neurons beyond this point has only a marginal influence, because an additional increase of redundant paths has no effect. even in the presence of noise our learning rule is capable of learning the xor function, and one can nicely see how an increasing number of neurons in the hidden layer can efficiently reduce the amount of noise in the system. in this subsection we study the influence of the number of patterns to be learned on the mean learning time. we use input patterns of increasing dimension and, correspondingly, more neurons in the input layer and in the hidden layer.
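before turning to the scaling results, the simulation just described can be put together in a compact end-to-end sketch for the xor case. the pieces whose exact values were lost in extraction (the reinforcement-signal convention, the form of the rank-ordering map, the exponents, the counter signs and the depression step) are replaced here by explicit assumptions; the sketch is meant to show how the parts fit together, not to reproduce the paper's numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# network: 3 input neurons (2 pattern bits + bias), n_h hidden, 2 output neurons
n_in, n_h, n_out = 3, 3, 2
w1 = rng.uniform(0.0, 1.0, (n_in, n_h))      # input -> hidden weights
w2 = rng.uniform(0.0, 1.0, (n_h, n_out))     # hidden -> output weights
counters = np.zeros(n_in + n_h + n_out)      # one neuron counter per neuron

theta, delta = 10, 0.1                        # counter threshold, depression step (assumed)
gamma_coin, gamma_rank = 2.0, 2.0             # assumed exponents of the two power laws

patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 0]                        # xor

def forward(bits):
    """Winner-take-all forward pass; returns the active neurons of each layer."""
    x = np.array([bits[0], bits[1], 1.0])     # bias neuron is always active
    h = int(np.argmax(w1.T @ x))              # single winner in the hidden layer
    o = int(np.argmax(w2[h]))                 # single winner in the output layer
    active_in = [i for i in range(n_in) if x[i] > 0]
    return active_in, h, o

def rank_order(c):
    """Map an approximated synapse counter in [-2*theta, 2*theta] to [0, 1]
    with a monotone power law (an assumption; the exact map is not preserved)."""
    return ((c + 2 * theta) / (4 * theta)) ** gamma_rank

def coin():
    """Draw from the power-law 'coin' density p(x) ~ x**(gamma_coin - 1) on [0, 1]
    via inverse-transform sampling."""
    return rng.random() ** (1.0 / gamma_coin)

for step in range(50000):
    k = rng.integers(len(patterns))
    active_in, h, o = forward(patterns[k])
    correct = (o == targets[k])
    r = +1 if correct else -1                 # reinforcement signal (sign convention assumed)

    # neuron counters of the active neurons accumulate failures, bounded by theta
    for idx in active_in + [n_in + h, n_in + n_h + o]:
        counters[idx] = np.clip(counters[idx] - r, -theta, theta)

    if correct:
        continue

    # stochastic depression of the active synapses on the wrong path
    for i in active_in:                       # input -> winning hidden neuron
        c_approx = counters[i] + counters[n_in + h]
        if coin() < rank_order(c_approx):
            w1[i, h] -= delta
    c_approx = counters[n_in + h] + counters[n_in + n_h + o]
    if coin() < rank_order(c_approx):         # winning hidden -> winning output neuron
        w2[h, o] -= delta

# after training, the output winners should reproduce xor for most runs
print([forward(p)[2] for p in patterns])
```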
(fig. [fig_res3] caption: the mean learning time as a function of the problem size, averaged over an ensemble; symbols correspond to simulation results and lines to least-mean-square fits, with the power-law exponents given in ascending order of the curves.) the network dynamics was again regulated by a winner-take-all mechanism. our results shown in fig. [fig_res3] for the mean learning times are comparable to the results obtained by bak and chialvo, with the difference that they used a different number of neurons in the hidden layer. moreover, the mean learning time scales with the problem size according to a power law (the numerical values of the exponent for the three different curves are given with fig. [fig_res3]). this demonstrates not only that our stochastic learning rule is able to learn the problem, but also that learning is efficient, because otherwise the mean learning times would follow an exponential function. finally, we investigated the influence of the type of probability distribution used for the coin and rank ordering distributions. here, we use an exponential distribution for the coin and the rank ordering distribution and study the learning behavior. we found significantly worse results compared to the results for the power law (not shown) presented in the last section. to understand this, we display in fig. [fig_exp] the update probability as a function of the counter value and the distribution parameter. one can see that the update probability essentially takes only two values, zero and one (upper right). that means the exponential distribution produces a rather deterministic update behavior, which is inappropriate because the information provided by the approximated synapse counters is uncertain. other parameter values show qualitatively the same results. this demonstrates that the larger variability provided by a power-law distribution is important for a good learning behavior. mathematical investigations of biological as well as artificial learning rules for neural networks have been attractive to scientists for decades, because of the importance of the underlying problem and the implications arising from an understanding thereof. we want to finish this article by discussing and comparing our novel stochastic hebb-like learning rule with other models introduced so far which are constrained in a way that makes them biologically plausible. bak and chialvo introduced a learning rule which combines anti-hebb or long-term depression (ltd) and reinforcement learning. klemm et al.
extended the learning rule from bak and chialvo by introducing one additional degree of freedom for each synapse in the network . they called this degree of freedom a synapse counter . moreover , bosman et al . proposed a learning rule which incorporates hebb ( ltp ) , anti - hebb ( ltd ) and reinforcement learning . all these approaches have in common with our learning rule that they utilize a reinforcement signal as feedback reflecting the current performance of the network . the usage of a reinforcement signal seems not only plausible but indispensable for learning mappings , because the neural network has to adapt to its environment by interacting with it ; otherwise the animal will die fast . similar to physical energy , it is also impossible to generate information out of nothing in a meaningful way . the reinforcement signal makes a neural network , and hence a brain , an open system with respect to the flow of information . this illustrates intuitively the difficulty of the system under investigation , because open or dissipative systems are by far less understood than closed , e.g. , hamiltonian systems . in contrast , all models proposed before are purely deterministic with respect to the decision whether an update for a synapse shall take place or not . additionally , all of these learning rules can only explain homosynaptic plasticity . we think that , because the neural network is an open system , it can not make deterministic decisions which are objective , due to the lack of complete information . of course , one can always search for the best decision based on the amount of information available in the system . however , this internal ( in the neural network ) optimality does not guarantee external ( the overall network performance ) optimality . in this article we took the point of view that we have incomplete information and , hence , are only able to provide an update probability indicating a kind of confidence level for this update based on our incomplete information . explicitly , this enters our model in the form of the approximated synapse counters . for every network topology one can calculate the synapse counter as a function of the neuron counters introduced by klemm et al . however , this normally results in relations which involve not only the neuron counters enclosing the synapse , but also further remote neuron counters . this can be seen with the help of fig . for example , the neuron counter of neuron five can be written as a linear sum of the synapse counters : these equations represent a failure conservation for the incoming and outgoing connections , respectively . if the neuron counter of neuron five is , then the sum of all synapse counters leading to neuron five has to be equal to this number , because there is no other way information can involve neuron five in the signal processing . the same holds for the outgoing information , represented by eq . [ coding ] .
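to make the failure - conservation idea concrete , the sketch below sets up a small hypothetical topology ( the connectivity matrix and counter values are purely illustrative and not taken from the paper ) and verifies that the counter of the output - side neuron equals the sum of the synapse counters on its incoming connections :

```python
import numpy as np

# Hypothetical topology: neurons 0-3 each project one synapse to neuron 4.
# Rows 0-3 express the outgoing-connection conservation of the input-side
# neurons, row 4 the incoming-connection conservation of neuron 4.
# A has one row per neuron counter and one column per synapse counter.
A = np.array([
    [1, 0, 0, 0],   # neuron 0: only synapse 0->4 leaves it
    [0, 1, 0, 0],   # neuron 1
    [0, 0, 1, 0],   # neuron 2
    [0, 0, 0, 1],   # neuron 3
    [1, 1, 1, 1],   # neuron 4: all four synapses arrive here
])

s = np.array([2, 0, 1, 3])   # hypothetical synapse counters
c = A @ s                    # neuron counters implied by failure conservation

print("neuron counters:", c)
assert c[4] == s.sum()       # incoming failures of neuron 4 are conserved
```

recovering the synapse counters from the neuron counters would require inverting this non - square system , e.g. via `numpy.linalg.pinv(A)` , which is exactly the nonlocal computation discussed next .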
in general , such linear failure conservation relations between the neuron and synapse counters lead to a linear system . here , represents the -dimensional vector of neuron counters and the -dimensional vector of synapse counters . the integer - valued times matrix depends on the network topology . the problem becomes nonlinear if one wants to obtain the synapse counters as a function of the neuron counters , because the non - square matrix in eq . [ coding2 ] can only be inverted by calculating a pseudoinverse to obtain . this is the situation we are facing . explicit calculation using the moore - penrose pseudoinverse leads to the statement given above . hence , a biologically plausible learning rule can not use these relations , because this would violate the local information condition in neural networks . one possibility around this obstacle is to approximate the synapse counter by the sum of the neuron counters enclosing this synapse , with the additional assumption that the resulting value is viewed in a probabilistic rather than deterministic way . our simulations showed that a mere addition ( or multiplication ) of the neuron counters does not lead to meaningful results at all . moreover , the probability distributions used also have a significant influence on the learning dynamics , as demonstrated in the results section [ results ] . the fact that power law distributions give significantly better results than exponential distributions for the coin and rank ordering distribution corresponds to the results of recent investigations of heuristic optimization strategies . boettcher et al . demonstrated that the usage of power law distributions in optimization problems , e.g. , finding the energy ground states for spin glasses and graph bi - partitioning , which are both np - hard optimization problems , can give better results than simulated annealing or genetic algorithms . they explained this effect by the positive influence of the inherently large fluctuations within the system , which prevent the system from getting trapped for a long time in local minima of the error function . from a biological point of view the most significant difference between our stochastic hebb - like learning rule and the other learning rules is certainly that our model aims to explain , in a qualitative way , heterosynaptic plasticity , which has been found experimentally , instead of homosynaptic plasticity . this is also the major objective of this paper . hence , a direct comparison between our model and the other learning rules can not be given fairly without neglecting or underestimating significant components of our model . for example , we introduced one new degree of freedom for each neuron in the form of neuron counters . bosman et al . do not rely on this or similar parameters , whereas klemm et al . introduced one additional degree of freedom for each synapse . that means , in this context our model has parameters , the model of bosman et al . none , and klemm et al . parameters . here , let be the average number of synapses a neuron has in a network . this makes the learning rule of bosman et al . , in a mathematical sense , minimal compared to ours . however , biologically it can not describe heterosynaptic plasticity and , hence , lacks this ability , which makes a comparison in the number of parameters meaningless . interestingly , despite the fact that heterosynaptic plasticity is more complex than homosynaptic plasticity , the learning rule of klemm et al .
uses times more parameters than our model . in general , we think that due to the almost overwhelming complexity of biological phenomena , mathematical modeling should always stay in tight contact with experimental findings to constrain the model by regularities found in nature . these constraints can only lead to minimal mathematical models in the context under consideration , but not beyond . we presented a novel stochastic hebb - like learning rule for neural networks and demonstrated its working mechanism by way of example in learning the exclusive - or ( xor ) problem in a three - layer network . we investigated the convergence behavior by extensive numerical simulations depending on three different network dynamics which all correspond to biological forms of lateral inhibition . we found in all cases parameter configurations for , the length of the neuron memory , , the exponent of the coin distribution , and , the exponent of the rank ordering distribution , which constitute the hebb - like learning rule , that yield not only a solution to the exclusive - or ( xor ) problem but also results comparable to those of a learning rule recently proposed by klemm , bornholdt and schuster . this is remarkable if one keeps in mind that our learning rule uses fewer parameters than the model proposed by . because the number of neurons is always ( much ) smaller than the number of synapses , the same holds for the respective numbers of synaptic and neuron counters which were used in the learning rules . an interesting implication of our learning rule and its inherent stochastic character is that it offers a quantitative , biologically plausible explanation of heterosynaptic plasticity , which is observed experimentally . in addition to the experimentally observed back - propagation and pre- and postsynaptic lateral spread of _ long - term depression _ ( ltd ) , our learning rule predicts forward propagated postsynaptic ltd for reasons of a symmetric communication between adjacent neurons . as far as we know there is no theoretical explanation of that phenomenon so far , and we are looking forward to new experiments helping to clarify this important question . we would like to thank tom bielefeld , rolf d. henkel , jens otterpohl , klaus pawelzik , roland rothenstein , peter ryder , heinz georg schuster and helmut schwegler for fruitful discussions .
|
in this article we introduce a novel stochastic hebb - like learning rule for neural networks that is neurobiologically motivated . this learning rule combines features of unsupervised ( hebbian ) and supervised ( reinforcement ) learning and is stochastic with respect to the selection of the time points when a synapse is modified . moreover , the learning rule not only affects the synapse between the pre- and postsynaptic neuron , which is called homosynaptic plasticity , but also affects further remote synapses of the pre- and postsynaptic neuron . this more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity . we demonstrate that this learning rule is useful in training neural networks by learning parity functions , including the exclusive - or ( xor ) mapping , in a multilayer feed - forward network . we find that our stochastic learning rule works well , even in the presence of noise . importantly , the mean learning time increases only polynomially with the number of patterns to be learned , indicating efficient learning .
|
genetical genomics experiments have now been routinely conducted to measure both the genetic variants and the gene expression data on the same subjects .such data have provided important insights into gene expression regulations in both model organisms and humans [ , , ] .gene expression levels are treated as quantitative traits and are subject to standard genetic analysis in order to identify the gene expression quantitative loci ( eqtl ) . however , the genetic architecture for many gene expressions may be complex due to possible multiple genetic effects and gene gene interactions , and poorly estimated genetic architecture may compromise the inferences of the dependency structures of genes at the transcriptional level [ ] . for a given gene , typical analysis of such eqtl datais to identify the genetic loci or single nucleotide polymorphisms ( snps ) that are linked or associated with the expression level of this gene .depending on the locations of the eqtls or the snps , they are often classified as distal _trans_-linked loci or proximal _ cis_-linked loci [ , ] .although such a single gene analysis can be effective in identifying the associated genetic variants , gene expressions of many genes are in fact highly correlated due to either shared genetic variants or other unmeasured common regulators .one important biological problem is to study the conditional independence among these genes at the expression level .eqtl data provide important information about gene regulation and have been employed to infer regulatory relationships among genes [ , , ] .gene expression data have been used for inferring the genetic regulatory networks , for example , in the framework of gaussian graphical models ( _ ggm _ ) [ , , , ] .graphical models use graphs to represent dependencies among stochastic variables .in particular , the _ ggm _ assumes that the multivariate vector follows a multivariate normal distribution with a particular structure of the inverse of the covariance matrix , called the concentration matrix . for such gaussian graphical models, it is usually assumed that the patterns of variation in expression for a given gene can be predicted by those of a small subset of other genes .this assumption leads to sparsity ( i.e. , many zeros ) in the concentration matrix and reduces the problem to well - known neighborhood selection or covariance selection problems [ , ] .in such a concentration graph modeling framework , the key idea is to use partial correlation as a measure of the independence of any two genes , rendering it straightforward to distinguish direct from indirect interactions . due to high - dimensionality of the problem, regularization methods have been developed to estimate the sparse concentration matrix where a sparsity penalty function such as the penalty or scad penalty is often used on the concentration matrix [ , , ] . 
among these methods ,the coordinate descent algorithm of , named _ glasso _ , provides a computationally efficient method for performing the lasso - regularized estimation of the sparse concentration matrix .although the standard _ _ ggm__s can be used to infer the conditional dependency structures using gene expression data alone from eqtl experiments , such models ignore the effects of genetic variants on the means of the expressions , which can compromise the estimate of the concentration matrix , leading to both false positive and false negative identifications of the edges of the gaussian graphs .for example , if two genes are both regulated by the same genetic variants , at the gene expression level , there should not be any dependency of these two genes .however , without adjusting for the genetic effects on gene expressions , a link between these two genes is likely to be inferred . for eqtl data, we are interested in identifying the conditional dependency among a set of genes after removing the effects from shared regulations by the markers . such a graph can truly reflect gene regulation at the expression level . in this paperwe introduce a sparse conditional gaussian graphical model ( _ cggm _ ) that simultaneously identifies the genetic variants associated with gene expressions and constructs a sparse gaussian graphical model based on eqtl data .different from the standard __ ggm__s that assume constant means , the _ cggm _ allows the means to depend on covariates or genetic markers .we consider a set of regressions of gene expression in which both regression coefficients and the error concentration matrix have many zeros .zeros in regression coefficients arise when each gene expression only depends on a very small set of genetic markers ; zeros in the concentration matrix arise since the gene regulatory network and therefore the corresponding concentration matrix is sparse .this approach is similar in spirit to the seemingly unrelated regression ( sur ) model of in order to improve the estimation efficiency of the effects of genetic variants on gene expression by considering the residual correlations of the gene expression of many genes . in the analysis of eqtl data, we expect sparseness in both the regression coefficients and also the concentration matrix .we propose to develop a regularized estimation procedure to simultaneously select the snps associated with gene expression levels and to estimate the sparse concentration matrix .different from the original sur model of that focuses on improving the estimation efficiency of the regression coefficients , we focus more on estimating the sparse concentration matrix adjusting for the effects of the snps on mean expression levels .we develop an efficient coordinate descent algorithm to obtain the penalized estimates and present the asymptotic results to justify our estimates . in the next sections we first present the formulation of the _ cggm _ for both the mean gene expression levels and the concentration matrix .we then present an efficient coordinate descent algorithm to perform the regularized estimation of the regression coefficients and concentration matrix .simulation experiments and asymptotic theory are used to justify our proposed methods .we apply the methods to an analysis of a yeast eqtl data set .we conclude the paper with a brief discussion .all the proofs are given in the supplementary material [ ] . 
we have independent observations from a population of a vector , where is a random vector of gene expression levels of genes and is a vector of the numerically - coded snp genotype data for snps . furthermore ,suppose that conditioning on , follows a multivariate normal distribution , where is a coefficient matrix for the means and the covariance matrix does not depend on .we are interested in both the effects of the snps on gene expressions and the conditional independence structure of adjusting for the effects of , that is , the gaussian graphical model for conditional on . in applications of gene expression data analysis , we are more interested in the concentration matrix after their shared genetic regulators are accounted for .it has a nice interpretation in the gaussian graphical model , as the -element is directly related to the partial correlation between the and components of after their potential joint genetic regulators are adjusted . in the gaussian graphical model with undirected graph ,vertices correspond to components of the vector and edges indicate the conditional dependence among different components of .the edge between and exists if and only if , where is the -element of .we emphasize that in the graph representation of the random variable , the nodes include only the genes and the markers are not part of the graph .we call this the sparse conditional gaussian graph model ( _ cggm _ ) of the genes . hence , of particular interest is to identify zero entries in the concentration matrix .note that instead of assuming a constant mean as in the standard _ ggm _ , model ( [ ssur ] ) allows heterogeneous means . in eqtl experiments , each row of and the concentration matrix expected to be sparse and our goal is to simultaneously learn the gaussian graphical model as defined by the matrix and to identify the genetic variants associated with gene expressions based on independent observations of . from now on, we use to denote the vector of gene expression levels of the genes and to denote the vector of the genotype codes of the snps for the observation unless otherwise specified .finally , let be the genotype matrix and .suppose that we have independent observation from the _ cggm _ ( [ ssur ] ) .let , and .then the negative of the logarithm of the likelihood function corresponding to the _ cggm _ model can be written as where represents the associated parameters in the _cggm_. the hessian matrix of the negative log - likelihood function is ( see proposition 1 in the supplementary material [ ] , section 3 ) . in addition , is a bi - convex function of and . in words, this means that for any fixed , is a convex function of , and for any , is a convex function of .when , the global minimizer of is given by under the penalized likelihood framework , the estimate of the and in model ( [ ssur ] ) is the solution to the following optimization problem : where and denote the generic penalty functions , is the element of the matrix and is the element of the matrix , and here and are the two tuning parameters that control the sparsity of the sparse _cggm_. we consider in this paper both the lasso or penalty function [ ] and the adaptive lasso penalty function for some and any consistent estimate of , denoted by [ ] . in this paperwe use .we present an algorithm for the optimization problem ( [ objadap ] ) with lasso penalty function for and .a similar algorithm can be developed for the adaptive lasso penalty with simple modifications . 
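since the displayed formulas are not reproduced in this copy , the following is an assumed reconstruction ( standard for a conditional gaussian model , not a verbatim quote of the original ) of the negative log - likelihood and the lasso - penalized objective :

```latex
% conditional model: y_i | x_i ~ N(B' x_i, Sigma), with Omega = Sigma^{-1}
\[
  \ell(B,\Omega)
  = -\frac{n}{2}\log\det\Omega
    + \frac{1}{2}\sum_{i=1}^{n}(y_i - B^{\top}x_i)^{\top}\,\Omega\,(y_i - B^{\top}x_i)
  = -\frac{n}{2}\log\det\Omega
    + \frac{1}{2}\operatorname{tr}\!\bigl[(Y - XB)\,\Omega\,(Y - XB)^{\top}\bigr]
\]
% lasso-penalized estimator (the diagonal of Omega is often left unpenalized):
\[
  (\hat{B},\hat{\Omega})
  = \arg\min_{B,\;\Omega\succ 0}\;
    \ell(B,\Omega)
    + \lambda_{1}\sum_{j,k}\lvert b_{jk}\rvert
    + \lambda_{2}\sum_{j\neq k}\lvert \omega_{jk}\rvert
\]
```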
under this penalty function , the objective function is then . the subgradient equation for maximization of the log - likelihood ( [ loglik ] ) with respect to is , where . if is known , and have cast the optimization problem ( [ loglik ] ) as a block - wise coordinate descent algorithm , which can be formulated as iterative lasso problems . before we proceed , we first introduce some notation to better represent the algorithm . let be the estimate of . we partition and as . it has been shown that the solution for satisfies , which by convex duality is equivalent to solving the dual problem , where . then the solution for can be obtained via the solution of the lasso problem and through the relation . the estimate for can also be updated in this block - wise manner very efficiently through the relationship [ ] . after we finish an updating cycle for , we can proceed to update the estimate of . since the objective function of our penalized log - likelihood is quadratic in given , we can use a direct coordinate descent algorithm to get the penalized estimate of . for the ( , )th entry of , , note that for an arbitrary matrix , , where and are the corresponding basis vectors with and dimensions . so the derivative of the penalized log - likelihood function ( [ loglik ] ) with respect to is , where the function is defined as . setting equation ( [ eq_f ] ) to zero , we get the updating formula for : , where and , are the estimates in the last step of the iteration . taking these two updating steps together , we have the following coordinate descent - based regularization algorithm to fit the sparse _ cggm _ . _ the coordinate descent algorithm for the sparse cggm . _ ( 1 ) start with and ; if is not invertible , use and instead . ( 2 ) for each , solve the lasso problem ( [ glasso_step ] ) under the current estimate of , fill in the corresponding row and column of using , and update . ( 3 ) for each ( , ) , update each entry in using the formula ( [ update_f ] ) , under the current estimate for . ( 4 ) repeat step ( 2 ) and step ( 3 ) until convergence . ( 5 ) output the estimates , and . the adaptive version of the algorithm can be derived in the same steps with adaptive penalty parameters and is omitted here . note that when , this algorithm simply reduces to the _ glasso _ or the adaptive _ glasso _ ( _ aglasso _ ) algorithm of . a similar algorithm was used in for sparse multivariate regressions . proposition 2 in the supplementary material [ ] proves that the above iterative algorithm for minimizing with respect to and converges to a stationary point of . while the iterative algorithm reaches a stationary point of , it is not guaranteed to reach the global minimum . the objective function of the optimization problem ( [ objadap ] ) is not jointly convex , but it is convex in either or with the other fixed . there are potentially many stationary points due to the high - dimensional nature of the parameter space . we also note a few straightforward properties of the iterative procedure , namely , that each iteration monotonically decreases the penalized negative log - likelihood and that the order of minimization is unimportant . finally , the computational complexity of this algorithm is plus the complexity of the _ glasso _ .
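a rough way to experiment with this alternating scheme is to approximate the two blocks with off - the - shelf solvers : an l1 - penalized regression of each gene on the markers for the mean coefficients , and a graphical lasso on the residuals for the concentration matrix . the sketch below is a simplified approximation rather than the exact coordinate descent above ( in particular , its mean update ignores the coupling between columns induced by the concentration matrix ) , and all function and parameter names are ours :

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.covariance import graphical_lasso

def fit_cggm_sketch(X, Y, lam1=0.1, lam2=0.1, n_iter=5):
    """Rough two-block approximation of the sparse conditional GGM fit.

    Alternates (i) a sparse regression of each gene on the markers to update B
    and (ii) a graphical lasso on the residuals Y - XB to update Omega.
    """
    n, p = Y.shape
    B = np.zeros((X.shape[1], p))
    Omega = np.eye(p)
    for _ in range(n_iter):
        # (i) update B column by column with an l1-penalized regression
        for j in range(p):
            reg = Lasso(alpha=lam1, fit_intercept=False, max_iter=5000)
            reg.fit(X, Y[:, j])
            B[:, j] = reg.coef_
        # (ii) update Omega from the empirical covariance of the residuals
        R = Y - X @ B
        S = np.cov(R, rowvar=False)
        _, Omega = graphical_lasso(S, alpha=lam2)
    return B, Omega
```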
the tuning parameters and in the penalized likelihood formulation ( [ objadap ] ) determine the sparsity of the _ cggm _ and have to be tuned .since we focus on estimating the sparse precision matrix and the sparse regression coefficients , we use the bayesian information criterion ( bic ) to choose these two parameters .the bic is defined as where is the dimension of , is the number of nonzero off - diagonal elements of and is the number of nonzero elements of .the bic has been shown to perform well for selecting the tuning parameter of the penalized likelihood estimator [ ] and has been applied for tuning parameter selection for _ ggms _ [ ] .sections 4 and 5 in the supplementary material [ ] state and prove theoretical properties of the proposed penalized estimates of the sparse _ cggm _ : its asymptotic distribution , the oracle properties when and are fixed as and the convergence rates and sparsistency of the estimators when and diverge as . by sparsistency , we mean the property that all parameters that are zero are actually estimated as zero with probability tending to one [ ] .we observe that the asymptotic bias for is at the same rate as for sparse __ ggm__s , which is multiplied by a logarithm factor , and goes to zero as long as is at a rate of with some .the total square errors for are at least of rate since each of the nonzero elements can be estimated with rate .the price we pay for high - dimensionality is a logarithmic factor .the estimate is consistent as long as is at a rate of with some .in this section we present results from monte carlo simulations to examine the performance of the proposed estimates and to compare it with the _ glasso _ procedure for estimating the gaussian graphical models using only the gene expression data .we also compare the _ cggm _ with a modified version of the neighborhood selection procedure of , where each gene is regressed on other genes and also the genetic markers using the lasso regression , and a link is defined between gene and if gene is selected for gene and gene is also selected by gene .we call this procedure the multiple lasso ( _ mlasso _ ) .note that the _ mlasso _ does not provide an estimate of the concentration matrix . for adaptive procedures , the mles of both the regression coefficients andthe concentration matrix were used for the weights when and . for each simulated data set ,we chose the tuning parameters and based on the bic . to compare the performance of different estimators for the concentration matrix , we used the quadratic loss function where is an estimate of the true concentration matrix .we also compared , , and , where is the difference between the true concentration matrix and its estimate , is the operator or spectral norm of a matrix , is the element - wise norm of a matrix , for is the matrix norm of a matrix , and is the frobenius norm , which is the square - root of the sum of the squares of the entries of . in order to compare how different methods recover the true graphical structures , we considered the hamming distance between the estimated and the true concentration matrix , defined as , where is the entry of and is the indicator function . finally , we considered the specificity ( spe ) , sensitivity(sen ) and matthews correlation coefficient ( mcc ) scores , which are defined as follows : where tp , tn , fp and fn are the numbers of true positives , true negatives , false positives and false negatives in identifying the nonzero elements in the concentration matrix . 
here we consider a nonzero entry in a sparse concentration matrix as `` positive . '' in the following simulations , we considered a general sparse concentration matrix , where we randomly generated a link ( i.e. , nonzero elements in the concentration matrix , indicated by ) between variables and with a success probability proportional to . similar to the simulation setup of , and , for each link , the corresponding entry in the concentration matrix is generated uniformly over \cup [ 0.5 , 1 ] , where is the minimum absolute nonzero value of generated . after and were generated , we generated the marker genotypes by assuming . finally , given , we generated the multivariate normal distribution . for a given model and a given simulation , we generated a data set of _ i.i.d . _ random vectors . the simulations were repeated 50 times .

table [ simu.tb2 ] . performance over 50 replications when the sample size exceeds the number of genes and markers . the first five columns give the quadratic loss and the four norm - based measures of closeness between the estimated and true concentration matrices ( as defined above ) ; the last four columns give the hamming distance , spe , sen and mcc . the _ mlasso _ does not estimate the concentration matrix , so its first five entries are empty .

model 1 :
  _ cggm _     10.73   0.33   1.17   0.67   3.18    279.56   0.99   0.48   0.56
  _ acggm _    10.29   0.31   1.17   0.66   3.01    313.48   0.99   0.42   0.50
  _ glasso _   19.17   0.69   1.89   1.12   5.19    596.12   0.97   0.24   0.21
  _ aglasso _  17.93   0.69   1.89   1.11   4.98    541.32   0.97   0.32   0.28
  _ mlasso _   --      --     --     --     --      309.50   0.99   0.38   0.48

model 2 :
  _ cggm _      5.15   0.37   1.30   0.72   2.36    106.88   0.98   0.69   0.66
  _ acggm _     4.62   0.29   1.14   0.63   1.97     83.20   0.99   0.66   0.71
  _ glasso _   13.95   0.75   2.12   1.20   4.57    391.84   0.87   0.37   0.18
  _ aglasso _  13.15   0.74   2.11   1.19   4.4     389.00   0.87   0.49   0.25
  _ mlasso _   --      --     --     --     --      185.68   0.95   0.60   0.48

model 3 :
  _ cggm _      1.70   0.24   0.90   0.52   1.21     67.08   0.91   0.76   0.62
  _ acggm _     1.58   0.22   0.87   0.49   1.12     56.36   0.94   0.72   0.65
  _ glasso _    5.97   0.65   1.99   1.12   2.77    315.84   0.43   0.73   0.12
  _ aglasso _   6.05   0.65   1.98   1.12   2.78    264.30   0.54   0.65   0.14
  _ mlasso _   --      --     --     --     --      111.28   0.84   0.68   0.44

we first consider the setting when the sample size is larger than the number of genes and the number of genetic markers . in particular , the following three models were considered : , where , ; , where , ; , where , . we present the simulation results in table [ simu.tb2 ] . clearly , _ cggm _ provided better estimates ( in terms of the defined loss function and the four metrics of `` closeness '' of the estimated and true matrices ) of the concentration matrix over _ glasso _ for all three models considered in all measurements . this is expected since _ glasso _ assumes a constant mean of the multivariate vector , which is a misspecified model here . we also observed that the adaptive _ cggm _ and adaptive _ glasso _ both resulted in better estimates of the concentration matrix , although the improvements were minimal . this may be due to the fact that the mles of the concentration matrix when is relatively large do not provide very informative weights in the penalty functions .
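the graph - recovery summaries reported in these comparisons ( hamming distance , spe , sen and mcc ) can be computed directly from the supports of the true and estimated concentration matrices ; the sketch below ( the function name and thresholding tolerance are ours ) follows the definitions given above :

```python
import numpy as np

def support_metrics(Omega_true, Omega_hat, tol=1e-8):
    """Compare the off-diagonal supports of a true and an estimated concentration matrix."""
    p = Omega_true.shape[0]
    off = ~np.eye(p, dtype=bool)
    truth = np.abs(Omega_true[off]) > tol   # nonzero entries are "positive"
    est = np.abs(Omega_hat[off]) > tol
    tp = np.sum(truth & est)
    tn = np.sum(~truth & ~est)
    fp = np.sum(~truth & est)
    fn = np.sum(truth & ~est)
    hamming = fp + fn                        # disagreeing off-diagonal entries
    spe = tn / (tn + fp)
    sen = tp / (tp + fn)
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return {"hamming": hamming, "SPE": spe, "SEN": sen, "MCC": mcc}
```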
in terms of graph structure selection , we first observed that different values of the tuning parameter for the penalty on the mean parameters resulted in different identifications of the nonzero elements in the concentration matrix , indicating that the regression parameters in the means indeed had effects on estimating the concentration matrix . table [ simu.tb2 ] shows that for all three models , the _ cggm _ or the adaptive _ cggm _ resulted in higher sensitivities , specificities and mccs than the _ glasso _ or the adaptive _ glasso _ . we observed that _ glasso _ often resulted in much denser graphs than the real graphs . this is partially due to the fact that some of the links identified by _ glasso _ can be explained by shared common genetic variants . by assuming constant means , in order to compensate for the model misspecification , _ glasso _ tends to identify many nonzero elements in the concentration matrix and results in a larger hamming distance between the estimate and the true concentration matrix . the results indicate that by simultaneously considering the effects of the covariates on the means , we can reduce both false positives and false negatives in identifying the nonzero elements of the concentration matrix . the modified neighborhood selection procedure using multiple lasso accounts for the genetic effects in modeling the relationship among the genes . it performed better than _ glasso _ or adaptive _ glasso _ in graph structure selection , but worse than the _ cggm _ or the adaptive _ cggm _ . this procedure , however , did not provide an estimate of the concentration matrix . in this section we consider the setting when and simulate data from the following three models with values of , and specified as follows : , , ; , , ; , , . note that for all three models , the graph structure is very sparse due to the large number of genes considered . since in this setting we did not have consistent estimates of or , we did not consider the adaptive _ cggm _ or adaptive _ glasso _ in our comparisons . instead , we compared the performance of _ cggm _ , _ glasso _ and the modified neighborhood selection procedure using multiple lasso in terms of estimation of the concentration matrix and graph structure selection . the performances over 50 replications are reported in table [ simu.tbl ] for the optimal tuning parameters chosen by the bics .

table [ simu.tbl ] . performance over 50 replications in the high - dimensional setting ; columns are as in table [ simu.tb2 ] .

model 4 :
  _ cggm _    164.22   0.59   1.81   0.97   13.48    2414.28   1.00   0.31   0.47
  _ glasso _  257.12   0.71   2.86   1.31   19.82   23746.98   0.98   0.08   0.02
  _ mlasso _  --       --     --     --     --       3886.96   1.00   0.12   0.16

model 5 :
  _ cggm _    142.30   0.75   2.30   1.20   12.82    2341.28   1.00   0.21   0.34
  _ glasso _  219.33   0.76   2.97   1.40   18.39   20871.44   0.97   0.07   0.02
  _ mlasso _  --       --     --     --     --      23750.04   0.96   0.61   0.19

model 6 :
  _ cggm _     48.73   0.44   1.55   0.77    6.86    2044.52   1.00   0.05   0.21
  _ glasso _   87.32   0.69   2.72   1.22   11.01    9258.92   0.95   0.03   -0.01
  _ mlasso _  --       --     --     --     --       2967.30   0.99   0.08   0.10

for all three models , we observed much improved estimates of the concentration matrix from the proposed _ cggm _ , as reflected by both the smaller loss function and the smaller values of the different norms of the difference between the true and estimated concentration matrices . the _ mlasso _ procedure did not provide estimates of the concentration matrix .
in terms of graph structure selection , since _glasso _ does not adjust for potential effects of genetic markers on gene expressions , it resulted in many wrong identifications and much lower sensitivities and smaller mccs than the _cggm_. compared to the modified neighborhood selection using multiple lasso , estimates from the _ cggm _ have smaller hamming distance and larger mcc than _ mlasso_. in general , we observed that when is larger than the sample size , the sensitivities from all three procedures are much lower than the settings when the sample size is larger . for models 5 and 6 , _mlasso _ gave higher sensitivities but lower specificities than _ cggm _ or _glasso_. this indicates that recovering the graph structure in a high - dimensional setting is statistically difficult .however , the specificities are in general very high , agreeing with our theoretical sparsistency result of the estimates .to demonstrate the proposed methods , we present results from the analysis of a data set generated by . in this experiment , 112 yeast segregants , one from each tetrad ,were grown from a cross involving parental strains by4716 and wild isolate rm11 - 1a .rna was isolated and cdna was hybridized to microarrays in the presence of the same by reference material .each array assayed 6,216 yeast genes .genotyping was performed using genechip yeast genome s98 microarrays on all 112 segregants .these 112 segregants were individually genotyped at 2,956 marker positions .since many of these markers are in high linkage disequilibrium , we combined the markers into 585 blocks where the markers within a block differed by at most 1 sample .for each block , we chose the marker that had the least number of missing values as the representative marker .due to small sample size and limited perturbation to the biological system , it is not possible to construct a gene network for all 6,216 genes .we instead focused our analysis on two sets of genes that are biologically relevant : the first set of 54 genes that belong to the yeast mapk signaling pathway provided by the kegg database [ ] , another set of 1,207 genes of the protein protein interaction ( ppi ) network obtained from a previously compiled set by combined with protein physical interactions deposited in the munich information center for protein sequences ( mips ) .since the available eqtl data are based on observational data , given limited sample size and limited perturbation to the cells from the genotypes , it is statistically not feasible to learn directed graph structures among these genes . instead , for each of these two data sets , our goal is to construct a conditional independent network among these genes at the expression levels based on the sparse conditional gaussian graphical model in order to remove the false links by conditioning on the genetic marker information .such graphs can be interpreted as a projection of true signaling or a protein interaction network into the gene space [ , ] .the yeast genome encodes multiple map kinase orthologs , where fus3 mediates cellular response to peptide pheromones , kss1 permits adjustment to nutrient - limiting conditions and hog1 is necessary for survival under hyperosmotic conditions .last , slt2/mpk1 is required for repair of injuries to the cell wall . a schematic plot of this pathway is presented in figure [ kegg ] . 
note that this graph only presents our current knowledge about the mapk signaling pathway . since several genes such as ste20 , ste12 and ste7 appear at multiple nodes , this graph can not be treated as the `` true graph '' for evaluating or comparing different methods . in addition , although some of the links are directed , this graph does not meet the statistical definition of either a directed or undirected graph . rather than trying to recover the mapk pathway structure , we chose this set of 54 genes on the mapk pathway to make sure that these genes are potentially dependent at the expression level ( see http://www.genome.jp/kegg/pathway/sce/sce04011.html ) . for each of the 54 genes , we first performed a linear regression analysis for the gene expression level using each of the 585 markers and selected those markers with a p - value of 0.01 or smaller . we observed a total of 839 such associations between the 585 markers and 54 genes , indicating strong effects of genetic variants on expression levels . we further selected 188 markers associated with the gene expression levels of at least two out of the 54 genes , resulting in a total of 702 such associations . in addition , many genes are associated with multiple markers [ see figure [ mapk ] ( a ) ] . this indicates that many pairs of genes are regulated by some common genetic variants , which , when not taken into account , can lead to false links of genes at the expression level . [ figure [ mapk ] : ( a ) the marker gene expression associations ; ( b ) the undirected graph of 43 genes constructed based on the _ cggm _ . ] we applied our proposed _ cggm _ on this set of 54 genes and 188 markers and used the bic to choose the tuning parameters . the bic selected and . with these tuning parameters , the _ cggm _ procedure selected 188 nonzero elements of the concentration matrix and therefore 94 links among these 54 genes . in addition , under the _ cggm _ model , 677 elements of the regression coefficients are not zero , indicating that the snps have important effects on the gene expression levels of these genes . the numbers of snps associated with the gene expressions range from 0 to 17 with a mean number of 4 . figure [ mapk ] ( b ) shows the undirected graph for 43 linked genes on the mapk pathway based on the estimated sparse concentration matrix from the _ cggm _ . this undirected graph constructed based on the _ cggm _ can indeed recover many of the links among the 54 genes on this pathway . for example , the kinase fus3 is linked to its downstream genes dig1 , ste12 and fus1 . the _ cggm _ model also recovered most of the links to ste20 , including bni1 , ste11 , ste12 , ste5 and ste7 . ste20 is also linked to cdc42 through bni1 . clearly , most of the links in the upper part of the mapk signaling pathway were recovered by _ cggm _ . this part of the pathway mediates cellular response to peptide pheromones . similarly , the kinase slt2/mpk1 is linked to its downstream genes swi4 and rlm1 . three other genes on this second layer of the pathway , fks1 , rho1 and bck1 , are also closely linked . these linked genes are related to the cell response to hypotonic shock . as a comparison , we applied the _ glasso _ to the gene expression of these 54 genes without adjusting for the effects of genetic markers on gene expressions and summarized the results in table [ compare ] . the optimal tuning parameter was selected based on the bic , which resulted in selection of 341 edges among the 54 genes ( i.e.
, 682 nonzero elements of the concentration matrix ) , including all 94 links selected by the _ cggm _ . the difference of the estimated graph structures between the _ cggm _ and _ glasso _ can be at least partially explained by the genetic variants associated with the expression levels of multiple genes . among these 247 edges that were identified by only the _ glasso _ , 41 pairs of genes were associated with at least one genetic variant . the _ cggm _ adjusted for the genetic effects on gene expression and therefore did not identify these edges at the expression level . another reason is that the _ glasso _ assumes a constant mean vector for gene expression , which clearly misspecifies the model and leads to the selection of more links .

table [ compare ] . for each pair of methods , the number of links identified by the row method but not by the column method for which the two linked genes share at least one associated genetic marker ; entries are for the mapk gene set , with the ppi gene set in parentheses .

              _ cggm _       _ mlasso _
_ cggm _      --             0 ( 0 )
_ mlasso _    10 ( 218 )     --
_ glasso _    41 ( 1,569 )   2 ( 66 )

we also compared the graph identified by the modified neighborhood selection procedure of using multiple lasso . specifically , each gene was regressed on all other genes and the 188 markers using the lasso . again , the bic was used for selecting the tuning parameter . this procedure identified a total of 45 links among the 54 genes . in addition , a total of 33 associations between the snps and gene expressions were identified . among these 45 links , 36 were identified by the _ cggm _ and 45 were identified by _ glasso _ .

table [ degree ] . summary of the node degrees ( minimum , maximum , mean and median ) of the graphs estimated by the three procedures for the 54 mapk genes and the 1,207 ppi genes .

              mapk : min   max    mean    median     ppi : min   max   mean    median
_ cggm _             0      11     3.48     3               0     57   19.94    21
glasso               5      19    12.63    13               5     60   31.46    32
_ mlasso _           0       6     1.67     1               0     12    3.18     3

table [ degree ] shows a summary of the degrees of the graphs estimated by these three procedures . it is clear that _ glasso _ resulted in a much denser graph than the neighborhood selection and _ cggm _ , and the _ mlasso _ tends to select few links . we next applied the _ cggm _ to the yeast protein protein interaction network data obtained from a previously compiled set by combined with protein physical interactions deposited in mips . we further selected 1,207 genes with variance greater than 0.05 . based on the most recent yeast protein protein interaction database biogrid [ ] , there are a total of 7,619 links among these 1,207 genes . the bic chose and , which resulted in selection of 12,036 links out of a total of 727,821 possible links , which gives a sparsity of 1.65% . results from comparisons with the two other procedures are shown in table [ compare ] . _ glasso _ without adjusting for the effects of genetic markers resulted in a total of 18,987 edges with an optimal tuning parameter . there were 9,854 links that were selected by both procedures . again _ glasso _ selected a lot more links than the _ cggm _ ; among the links that were identified by the _ glasso _ only , 1,569 pairs are associated with at least one common genetic marker ( see table [ compare ] ) , further indicating that some of the links identified by gene expression data alone can be due to shared common genetic variants . the modified neighborhood selection procedure _ mlasso _ identified only 1,917 edges with , out of which 1,750 were identified by the _ cggm _ and 1,916 were identified by the _ glasso _ .
there was a common set of 1,749 links that were identified by all three procedures . a summary of the degrees of the graphs estimated by these three procedures is given in table [ degree ] . we observe that the _ glasso _ gave a much denser graph than the other two procedures , agreeing with what we observed in the simulation studies . if we treat the ppi of the biogrid database as the true network among these genes , the true positive rates from _ cggm _ , _ glasso _ and the modified neighborhood selection procedure were 0.067 , 0.071 and 0.019 , respectively , and the false positive rates were 0.016 , 0.026 and 0.0025 , respectively . the mcc scores from _ cggm _ , _ glasso _ and the modified neighborhood selection procedure were 0.041 , 0.030 and 0.033 , respectively . one reason for having low true positive rates is that many of the protein protein interactions can not be reflected at the gene expression level . figure [ hist ] ( a ) shows the histogram of the correlations of genes that are linked on the biogrid ppi network , indicating that many linked gene pairs have very small marginal correlations . the gaussian graphical models are not able to recover these links . figure [ hist ] , panels ( b ) to ( d ) , shows the marginal correlations of the gene pairs that were identified by _ cggm _ , _ glasso _ and _ mlasso _ , clearly indicating that the linked genes identified by the _ cggm _ have higher marginal correlations . in contrast , some linked genes identified by _ glasso _ have quite small marginal correlations . another reason is that the ppi represents the marginal pair - wise interactions among the proteins rather than the conditional interactions . we have presented a sparse conditional gaussian graphical model for estimating the sparse gene expression network based on eqtl data in order to account for genetic effects on gene expressions . since genetic variants are associated with expression levels of many genes , it is important to consider such heterogeneity in estimating the gene expression networks using the gaussian graphical models . we have demonstrated by simulation studies that the proposed sparse _ cggm _ can estimate the underlying gene expression networks more accurately than the standard _ ggm _ . for the yeast eqtl data set we analyzed , the standard gaussian graphical model without adjusting for possible genetic effects on gene expressions identified many possibly false links , resulting in very dense graphs that make the interpretation of the resulting networks difficult . on the other hand , our proposed _ cggm _ resulted in a much sparser and biologically more interpretable network . we expect similarly good performance on data from other published sources , such as from and . due to the limits of the gene expression data , one should not expect to recover completely the true signaling networks since many dependencies among these genes can be observed only at the protein or metabolite level . in any global biochemical network such as a signaling network or protein interaction network , genes do not interact directly with other genes ; instead , gene induction or repression occurs through the activation of certain proteins , which are products of certain genes [ , ] . similarly , gene transcription can also be affected by protein - metabolite complexes .
despite these limitations of the gene expression , it is still useful to abstract the actions of proteins and metabolites and represent genes acting on other genes in a gene network [ ] .this gene network is what we aim to learn based on the proposed _cggm_. as we observed from our analysis of the yeast eqtl data , such graphs or gene networks constructed from the _ cggm _ can indeed explain the data and provide certain biological insights into gene interactions .such graphs can be interpreted as a projection of true signaling or protein interaction network into the gene space [ , ] .we have focused in this paper on estimating the sparse conditional gaussian graphical model for gene expression data by adjusting for the genetic effects on gene expressions .however , we expect that by explicitly modeling the covariance structure among the gene expressions , we should also improve the identification of the genetic variants associated with the gene expressions [ ] .this is in fact the original motivation of the sur models proposed by . it would be interesting to investigate theoretically as to how modeling the concentration matrix can lead to improvement in estimation and identification of the genetic variants associated with the gene expression traits .we used the gaussian graphical models for studying the conditional independence among genes at the transcriptional level .such undirected graphs do not provide information on causal dependency .data from genetic genomics experiments have been proposed to construct the gene networks represented by directed causal graphs . for example , and used structural equation modeling and a genetic algorithm to construct causal genetic networks among genetic loci and gene expressions . developed an efficient markov chain monte carlo algorithm for joint inference of causal network and genetic architecture for correlated phenotypes .although genetical genomics data can indeed provide opportunity for inferring the causal networks at the transcriptional level , these causal graphical model - based approaches can often only handle a small number of transcripts because the number of possible directed graphs is super - exponential in the number of genes considered [ ] .regularization methods may provide alternative approaches to joint modeling of genetic effects on gene expressions and causal graphs among genes at the expression level .we thank the three reviewers and the editor for many insightful comments that have greatly improved the presentation of this paper .
|
genetical genomics experiments have now been routinely conducted to measure both the genetic markers and gene expression data on the same subjects . the gene expression levels are often treated as quantitative traits and are subject to standard genetic analysis in order to identify the gene expression quantitative loci ( eqtl ) . however , the genetic architecture for many gene expressions may be complex , and poorly estimated genetic architecture may compromise the inferences of the dependency structures of the genes at the transcriptional level . in this paper we introduce a sparse conditional gaussian graphical model for studying the conditional independent relationships among a set of gene expressions adjusting for possible genetic effects where the gene expressions are modeled with seemingly unrelated regressions . we present an efficient coordinate descent algorithm to obtain the penalized estimation of both the regression coefficients and the sparse concentration matrix . the corresponding graph can be used to determine the conditional independence among a group of genes while adjusting for shared genetic effects . simulation experiments and asymptotic convergence rates and sparsistency are used to justify our proposed methods . by sparsistency , we mean the property that all parameters that are zero are actually estimated as zero with probability tending to one . we apply our methods to the analysis of a yeast eqtl data set and demonstrate that the conditional gaussian graphical model leads to a more interpretable gene network than a standard gaussian graphical model based on gene expression data alone . .
|
one challenge in constraint programming is to develop effective search methods to deal with common modelling patterns . one such pattern is row and column symmetry : many problems can be modelled by a matrix of decision variables where the rows and columns of the matrix are fully or partially interchangeable . such symmetry is a source of combinatorial complexity . it is therefore important to develop techniques to deal with this type of symmetry . we study here simple constraints that can be posted to break row and column symmetries , and analyse their effectiveness both theoretically and experimentally . we prove that we can compute in polynomial time the lexicographically smallest representative of an equivalence class in a matrix model with row and column symmetry if the number of rows ( or of columns ) is bounded , and thus remove all symmetric solutions . we are therefore able for the first time to see how much symmetry is left by these commonly used symmetry breaking constraints . a constraint satisfaction problem ( csp ) consists of a set of variables , each with a domain of values , and a set of constraints specifying allowed values for subsets of variables . when solving a csp , we often use propagation algorithms to prune the search space by enforcing properties like domain consistency . a constraint is _ domain consistent _ ( _ dc _ ) iff when a variable in the scope of a constraint is assigned any value in its domain , there exist compatible values in the domains of all the other variables in the scope of the constraint . a csp is domain consistent iff every constraint is domain consistent . an important feature of many csps is symmetry . symmetries can act on variables or values ( or both ) . a _ variable symmetry _ is a bijection on the variable indices that preserves solutions ; that is , if an assignment of values to the variables is a solution , then the assignment obtained by permuting the variable indices according to the bijection is also a solution . a _ value symmetry _ is a bijection on the values that preserves solutions ; that is , if an assignment is a solution , then the assignment obtained by applying the bijection to the values is also a solution . a simple but effective method to deal with symmetry is to add _ symmetry breaking constraints _ which eliminate symmetric solutions . for example , crawford _ et al . _ proposed the general lex - leader method that posts lexicographical ordering constraints to eliminate all but the lexicographically least solution in each symmetry class . many problems are naturally modelled by a matrix of decision variables with variable symmetry in which the rows and/or columns are interchangeable . we say that a csp containing a matrix of decision variables has row symmetry iff given a solution , any permutation of the rows is also a solution . similarly , it has column symmetry iff given a solution , any permutation of the columns is also a solution . the equidistant frequency permutation array ( efpa ) problem is a challenging problem in coding theory . the goal is to find a set of code words , each of length , such that each word contains copies of the symbols 1 to , and each pair of code words is hamming distance apart . for example , for , , , , one solution is : . this problem has applications in communication theory , and is related to other combinatorial problems like finding orthogonal latin squares . consider a model for this problem with a by array of variables with domains to . this model has row and column symmetry since we can permute the rows and columns and still have a solution . to break all row symmetry we can post lexicographical ordering constraints on the rows . similarly , to break all column symmetry we can post lexicographical ordering constraints on the columns .
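operationally , these row and column lex - ordering constraints are easy to state ; the following minimal sketch ( ours , not from the paper ) checks whether a complete assignment , given as a matrix of values , has lexicographically non - decreasing rows and columns :

```python
def lex_ordered(seqs):
    """True iff the given sequence of tuples is lexicographically non-decreasing."""
    return all(a <= b for a, b in zip(seqs, seqs[1:]))

def satisfies_double_lex(matrix):
    """Check the combined constraint: rows and columns both lex ordered."""
    rows = [tuple(r) for r in matrix]
    cols = [tuple(c) for c in zip(*matrix)]
    return lex_ordered(rows) and lex_ordered(cols)

print(satisfies_double_lex([[0, 1, 0],
                            [1, 0, 1]]))   # False: rows ordered, columns not
print(satisfies_double_lex([[0, 0, 1],
                            [0, 1, 1]]))   # True: rows and columns ordered
```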
when we have both row and column symmetry , we can post a constraint that lexicographically orders both the rows and columns . this does not eliminate all symmetry since it may not break symmetries which permute both rows and columns . nevertheless , it is often effective in practice . consider again solution . if we order the rows of lexicographically , we get a solution with lexicographically ordered rows and columns : similarly , if we order the columns of lexicographically , we get a different solution in which both rows and columns are again ordered lexicographically : all three solutions are thus in the same row and column symmetry class . however , both and satisfy the constraint . therefore can leave multiple solutions in each symmetry class . the lex - leader method breaks all symmetry by ensuring that any solution is the lexicographically smallest in its symmetry class . this requires linearly ordering the matrix . lexicographically ordering the rows and columns is consistent with a linearization that takes the matrix in row - wise order ( i.e. , appending rows in order ) . we therefore consider a complete symmetry breaking constraint which ensures that the row - wise linearization of the matrix is lexicographically smaller than all its row or column permutations , or compositions of row and column permutations . consider the symmetric solutions to . if we linearize these solutions row - wise , the first two are lexicographically larger than the third . hence , the first two solutions are eliminated by the constraint . breaks all row and column symmetries . unfortunately , posting such a constraint is problematic since it is np - hard to check if a complete assignment satisfies . we now give our first major result . we prove that if we can bound the number of rows ( or columns ) , then there is a polynomial time method to break all row and column symmetry . for example , in the efpa problem , the number of columns might equal the fixed word size of our computer . [ tm : fpt ] for a by matrix , we can check if a complete assignment satisfies a constraint in time . * proof : * consider the matrix model . we exploit the fact that with no row symmetry and just column symmetry , lexicographically ordering the columns gives the lex - leader assignment . let be a row permutation of . to obtain the smallest column permutation of , we lexicographically sort the columns of in time . finally , we check that the row - wise linearization of the original matrix is lexicographically no larger than $ [ z_{1,1 } , \ldots , z_{1,m } , \ldots , z_{n,1 } , \ldots , z_{n , m } ] $ , where is a delimiter between and . each constraint ensures that exactly one position in the row is set to and the variable stores this position . the automaton's states are represented by the 3-tuple where is the row sum , is the current position and records the position of the 1 on this row . this automaton has states and a constant number of transitions from each state , so the total number of transitions is . the complexity of propagating this constraint is . we also post a constraint over to ensure that they form a decreasing sequence of numbers and that the number of occurrences of each value is decreasing . the first condition ensures that rows and columns are lexicographically ordered and the second condition ensures that the sums of the columns are decreasing . the states of this automaton are 3-tuples where is the last value , is the number of occurrences of this value , and is the number of occurrences of the previous value . this automaton has states , while the number of transitions from each state is bounded . therefore propagating
this constraint requires time .this decomposition is logically equivalent to the constraint , therefore it is sound .completeness follows from the fact that the decomposition has a berge acyclic constraint graph .therefore , enforcing dc on each constraint enforces dc on in time .problems with row and column symmetry also often contain value symmetries .for example , the efpa problem has row , column and value symmetry .we therefore turn to the problem of breaking row , column and value symmetry .consider again the solution .if we interchange the values 1 and 2 , we get a symmetric solution : in fact , all values in this csp are interchangeable .how do we break value symmetry in addition to breaking row and column symmetry ?for example , huczynska _ et al ._ write about their first model of the efpa problem : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` to break some of the symmetry , we apply lexicographic ordering ( lex - ordering ) constraints to the rows and columns these two constraint sets do not explicitly order the symbols. it would be possible to order the symbols by using value symmetry breaking constraints .however we leave this for future work . '' _( page 53 of ) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we turn to this future work of breaking row , column and value symmetry .we first note that the interaction of the problem and constraints can in some circumstances break all value symmetry .for instance , in our ( and huczynska _ et al . _s ) model of the efpa problem , _ all _ value symmetry is already eliminated .this appears to have been missed by .consider any solution of the efpa problem which satisfies ( e.g. or ) . by ordering columns lexicographically ,ensures that the first row is ordered .in addition , the problem constraints ensure copies of the symbols 1 to to appear in the first row .hence , the first row is forced to be : all value symmetry is broken as we can not permute the occurrences of any of the values . 
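before turning to the general treatment of value symmetry, the earlier claim about complete symmetry breaking with a bounded number of rows can be made concrete . the check described in the proof above — enumerate the row permutations, lexicographically sort the columns of each, and compare row-wise linearisations — can be sketched as follows ; the function names are ours, and this is an illustrative checker under the stated assumptions, not the implementation used in the experiments reported later .

\begin{verbatim}
# Sketch of the bounded-rows lex-leader check: a complete n x m assignment M
# is the row-wise lex-leader of its row/column symmetry class iff its row-wise
# linearisation is <= that of every row permutation of M whose columns have
# been sorted lexicographically (the smallest column permutation, as used in
# the proof above).
from itertools import permutations

def vec(M):
    # Row-wise linearisation of a matrix.
    return [x for row in M for x in row]

def smallest_col_perm(M):
    # Sort the columns of M lexicographically (each column read top to bottom).
    columns = sorted(tuple(row[j] for row in M) for j in range(len(M[0])))
    return [list(r) for r in zip(*columns)]

def is_lex_leader(M):
    v, n = vec(M), len(M)
    for sigma in permutations(range(n)):      # n! row permutations
        if v > vec(smallest_col_perm([M[i] for i in sigma])):
            return False                      # a smaller symmetric image exists
    return True
\end{verbatim}

for a fixed number of rows this runs in time polynomial in the number of columns, in line with the complexity claimed in the theorem .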
in general , value symmetries may remain after we have broken row and column symmetry .how can we eliminate these value symmetries ?puget has given a general method for breaking any number of value symmetries in polynomial time .given a surjection problem in which all values occur at least once , he introduces variables to represent the index of the first occurrence of each value : value symmetry on the is transformed into variable symmetry on the .this variable symmetry is especially easy to break as the take all different values .we simply need to post appropriate ordering constraints on the .consider , for example , the inversion symmetry which maps onto , onto , etc .puget s method breaks this symmetry with the single ordering constraint : .unfortunately puget s method for breaking value symmetry is not compatible in general with breaking row and column symmetry using .this corrects theorem 6 and corollary 7 in which claim that , provided we use the same ordering of variables in each method , it is compatible to post lex - leader constraints to break variable symmetry and puget s constraints to break value symmetry .there is no ordering of variables in puget s method which is compatible with breaking row and column symmetry using the lex - leader method ( or any method like based on it ) .there exist problems on which posting and applying puget s method for breaking value symmetry remove all solutions in a symmetry class irrespective of the ordering on variables used by puget s method .* proof : * consider a 3 by 3 matrix model with constraints that all values between 0 and 8 occur , and that the average of the non - zero values along every row and column are all different from each other .this problem has row and column symmetry since we can permute any pair of rows or columns without changing the average of the non - zero values .in addition , it has a value symmetry that maps onto for .this maps an average of onto . if the averages were all - different before they remain so after .consider the following two solutions : both matrices satisfy as the smallest entry occurs in the top left corner and both the first row and column are ordered .they are therefore both the lex leader members of their symmetry class .puget s method for breaking value symmetry will simply ensure that the first occurrence of 1 in some ordering of the matrix is before that of 8 in the same ordering .however , comparing the two solutions , it can not be the case that the middle square is both before _ and _ after the bottom right square in the given ordering used by puget s method . hence, whichever ordering of variables is used by puget s method , one of these solutions will be eliminated . all solutions in this symmetry classare thus eliminated .we can pinpoint the mistake in puget s proof which allows him to conclude incorrectly that his method for value symmetry can be safely combined with variable symmetry breaking methods like .puget introduces a matrix of 0/1 variables and observes that variable symmetries on variables correspond to row symmetries on the matrix , while value symmetries of the variables correspond to column symmetries of the matrix . 
using the lex - leader method on a column - wise linearisation of the matrix, he derives the value symmetry breaking constraints on the variables .finally , he claims that we can derive the variable symmetry breaking constraints on the variables with the same method ( equation ( 13 ) of ) .however , this requires a row - wise linearisation of the matrix .unfortunately , combining symmetry breaking constraints based on row and column - wise linearisations can , as in our example , eliminate all solutions in a symmetry class .in fact , we can give an even stronger counter - example to theorem 6 in which shows that it is incompatible to post together variable and value symmetry breaking constraints _ irrespective _ of the orderings of variables used by _ both _ the variable and the value symmetry breaking method .there exist problems on which posting lex - leader constraints to break variable symmetries and applying puget s method to break value symmetries remove all solutions in a symmetry class irrespective of the orderings on variables used by both methods .* proof : * consider variables to taking values 1 to 4 , an all - different constraint over to and a constraint that the neighbouring differences are either all equal or are not an arithmetic sequence .these constraints permit solutions like ( neighbouring differences are all equal ) and ( neighbouring differences are not an arithmetic sequence ) .they rule out assignments like ( neighbouring differences form the arithmetic sequence ) .this problem has a variable symmetry which reflects a solution , swapping with , and with , and a value symmetry that inverts a solution , swapping with , and with .consider and .these two assignments form a symmetry class of solutions .suppose we break variable symmetry with a lex - leader constraint on to .this will permit the solution and eliminate the solution .suppose we break the value symmetry using puget s method on the same ordering of variables .this will ensure that first occurs before . butthis will eliminate the solution .hence , all solutions in this symmetry class are eliminated . in this case , both variable and value symmetry breaking use the same order on variables .however , we can show that all solutions in at least one symmetry class are eliminated whatever the orders used by both the variable and value symmetry breaking .the proof is by case analysis . in each case, we consider a set of symmetry classes of solutions , and show that the combination of the lex - leader constraints to break variable symmetries and puget s method to break value symmetries eliminates all solutions from one symmetry class . in the first case , suppose the variable and value symmetry breaking constraints eliminate and permit . in the second case , suppose they eliminate and permit .this case is symmetric to the first except we need to reverse the names of the variables throughout the proof .we therefore consider just the first case . in this case, the lex - leader constraint breaks the variable symmetry by putting either first in its ordering variables or first .suppose goes first in the ordering used by the lex - leader constraint .puget s method ensures that the first occurrence of 1 is before that of 4 .puget s method therefore uses an ordering on variables which puts before .consider now the symmetry class of solutions : and .puget s method eliminates the first solution as 4 occurs before 1 in any ordering that put before . 
andthe lex - leader constraint eliminates the second solution as is larger than its symmetry .therefore all solutions in this symmetry class are eliminated .suppose , on the other hand , goes first in the lex - leader constraint .consider now the symmetry class of solutions : and .the lex - leader constraint eliminates the first solution as is greater than its symmetry .suppose now that the second solution is not eliminated .puget s method ensures the first occurrence of 1 is before that of 4 .puget s method therefore uses an ordering on variables which puts before .consider now the symmetry class of solutions : and .puget s method eliminates the first solution as 4 occurs before 1 in any ordering that put before . andthe lex - leader constraint eliminates the second solution as is larger than its symmetry .therefore all solutions in this symmetry class are eliminated .we end with a special but common case where variable and value symmetry breaking do not conflict . when values partition into interchangeable sets , puget s method is equivalent to breaking symmetry by enforcing value precedence .given any two interchangeable values and with , a value constraint ensures that if occurs then the first occurrence of is before that of .it is safe to break row and column symmetry with and value symmetry with when value precedence considers variables either in a row - wise or in a column - wise order .this is a simple consequence of theorem 1 in .it follows that it is also safe to use to break value symmetry when using constraints like derivable from the lex - leader method .a promising alternative to for breaking row and column symmetries is .this is also derived from the lex leader method , but now applied to a snake - wise unfolding of the matrix . to break column symmetry , ensures that the first column is lexicographically smaller than or equal to both the second and third columns , the reverse of the second column is lexicographically smaller than or equal to the reverse of both the third and fourth columns , and so on up till the penultimate column is compared to the final column . to break row symmetry , ensures that each neighbouring pair of rows , and satisfy the entwined lexicographical ordering : like , is an incomplete symmetry breaking method .in fact , like , it may leave a large number of symmetric solutions .[ tm : snake - expsol ] there exists a class of by 0/1 matrix models on which leaves symmetric solutions , for all .* proof : * consider the following 4 by 4 matrix : this is a permutation matrix as there is a single 1 on each row and column .it satisfies the constraints .in fact , we can add any 5th column which reading top to bottom is lexicographically larger than or equal to and reading bottom to top is lexicographically larger than or equal to .we shall add a 4 bit column with 2 bits set .that is , reading top to bottom : , , or .note that all 4 of these 4 by 5 matrices are row and column symmetries of each other .for instance , consider the row and column symmetry that reflects the matrix in the horizontal axis , and swaps the 1st column with the 2nd , and the 3rd with the 4th : in general , we consider the by permutation matrix : this satisfies the constraints . 
we can add any column which reading top to bottom is lexicographically larger than or equal to the column and reading bottomto top is lexicographically larger than or equal to the column .in fact , we can add any column with eactly of the bits set .this gives us a set of by matrices that are row and column symmetries of each other .there are bit vectors with exactly of bits set .hence , we have matrices which satisfy that are in the same row and column symmetry class . using stirling s formula , this grows as .the proof of theorem [ tm : fpt ] gives a polynomial method to break all row and column symmetry .this allows us to compare symmetry breaking methods for matrix models like and , not only with respect to each other but for the first time in absolute terms .our aim is to evaluate : first , whether the worst - case scenarios identified in theorems [ tm : expsol ] and [ tm : snake - expsol ] are indicative of what can be expected in practice ; second , how effective these methods are with respect to each other ; third , in cases where they differ significantly , how much closer the best of them is to the optimal . to answer these questions , we experimented with different symmetry breaking constraints : , the column - wise ( ) or the row - wise ( ) .we use to denote no symmetry breaking constraints . for each probleminstance we found the total number of solutions left by symmetry breaking constraints ( ) and computed how many of them were symmetric based on the method outlined in the proof of theorem [ tm : fpt ] .the number of _ non symmetric _ solutions is equal to the number of symmetry classes ( ) if the search space is exhausted . in all instancesat least one model exhausted the search space to compute the of symmetry classes , shown in the column .we use ` ' to indicate that the search is not completed within the time limit . as the model typically could not exhaust the search space within the time limit , we use ` ' to indicate a lower bound on the number of solutions . finally , we used a variable ordering heuristic that follows the corresponding lex - leader variable ordering in each set of symmetry breaking constraints ( i.e. row - wise snake ordering with ) .we ran experiments in gecode 3.3.0 on an intel xeon x5550 , 2.66 ghz , 32 gb ram with sec timeout . _ unconstrained problems ._ we first evaluated the effectiveness of symmetry breaking constraints in the absence of problem constraints .this gives the `` pure '' effect of these constraints at eliminating row and column symmetry .we considered a problem with a matrix , ] , $ ] whose rows and columns are interchangeable .table [ t : t2 ] summarizes the results .the first part presents typical results for 0/1 matrices whilst the second part presents results for larger domains .the results support the exponential worst case in theorems [ tm : expsol ] and [ tm : snake - expsol ] , as the ratio of solutions found to symmetry classes increases from 1.25 ( 3,3,2 ) to over 6 ( 6,6,2 ) , approximately doubling with each increase of the matrix size .as we increase the problem size , the number of symmetric solutions left by and grows rapidly .interestingly , achieves better pruning on 0/1 matrices , while performs better with larger domains . .[t: t4 ] covering arrays .number of solutions found by posting different sets of symmetry breaking constraints . is the number of vectors , is the length of a vector , is the size of the domains , is the covering strength . [ cols=">,^,^,^,^,^,^",options="header " , ] _ constrained problems . 
_ our second set of experiments was on three benchmark domains : equidistant frequency permutation array ( efpa ) , balanced incomplete block designs and covering array ( ca ) problems .we used the non - boolean model of efpa ( table [ t : t1 ] ) , the boolean matrix model of bibd ( table [ t : t3 ] ) and a simple model of ca ( table [ t : t4 ] ) .we consider the satisfaction version of the ca problem with a given number of vectors . in all problemsinstances the , and constraints show their effectiveness , leaving only a small fraction of symmetric solutions .note that often leaves fewer symmetric solutions .however , it is significantly slower compared to and because it tends to prune later ( thereby exploring larger search trees ) .for example , the number of failures for the efpa problem is , and for , and respectively .on efpa problems , is about twice as fast as and leaves less solutions .on the ca problems and show similar results , while performs better on bibd problems in terms of the number of solution left .overall , our results show that and prune most of the symmetric solutions . outperforms and in terms of the number of solutions left , but it explores larger search trees and is about two orders of magnitude slower. however , there is little difference overall in the amount of symmetry eliminated by the three methods .lubiw proved that any matrix has a row and column permutation in which rows and columns are lexicographically ordered and gave a nearly linear time algorithm to compute such a matrix .shlyakhter and flener _independently proposed eliminating row and column symmetry using . to break some of the remaining symmetry ,frisch , jefferson and miguel suggested ensuring that the first row is less than or equal to all permutations of all other rows . as an alternative to ordering both rows and columns lexicographically , frisch _ et al . _ proposed ordering the rows lexicographically but the columns with a multiset ordering .more recently , grayland _ et al ._ have proposed , an alternative to based on linearizing the matrix in a snake - like way .an alternative way to break the symmetry of interchangeable values is to convert it into a variable symmetry by channelling into a dual 0/1 viewpoint in which iff , and using lexicographical ordering constraints on the columns of the 0/1 matrix .however , this hinders propagation .finally , dynamic methods like sbds have been proposed to remove symmetry from the search tree .unfortunately , dynamic techniques tend not to work well with row and columns symmetries as the number of symmetries is usually too large .we have provided a number of positive and negative results on dealing with row and column symmetry . to eliminate some ( but not all ) symmetry we can post static constraints like and . on the positive side , we proposed the first polynomial time method to eliminate _ all _ row and column symmetry when the number of rows ( or columns ) is bounded . on the negative side, we argued that and can leave a large number of symmetric solutions .in addition , we proved that propagating completely is np - hard .finally , we showed that it is not always safe to combine puget s value symmetry breaking constraints with row and column symmetry breaking constraints , correcting a claim made in the literature .flener , p. , frisch , a. , hnich , b. , kiziltan , z. , miguel , i. , pearson , j. , walsh , t. : breaking row and column symmetry in matrix models . 
in : 8th international conference on principles and practices of constraint programming ( cp-2002 ) , springer ( 2002 ) flener , p. , frisch , a. ,hnich , b. , kiziltan , z. , miguel , i. , walsh , t. : matrix modelling .technical report apes-36 - 2001 , apes group ( 2001 ) presented at formul01 ( workshop on modelling and problem formulation ) , cp2001 post - conference workshop .crawford , j. , ginsberg , m. , luks , g. , roy , a. : symmetry breaking predicates for search problems . in : proceedings of 5th international conference on knowledge representation and reasoning , ( kr 96 ) .( 1996 ) 148159 huczynska , s. , mckay , p. , miguel , i. , nightingale , p. : modelling equidistant frequency permutation arrays : an application of constraints to mathematics . in gent , i. ,ed . : principles and practice of constraint programming - cp 2009 , 15th international conference , cp 2009 ,lisbon , portugal , september 20 - 24 , ( 2009 ) 5064 carlsson , m. , beldiceanu , n. : arc - consistency for a chain of lexicographic ordering constraints .technical report t2002 - 18 , swedish institute of computer science ( 2002 ) .katsirelos , g. , narodytska , n. , walsh , t. : breaking generator symmetry in : proceedings of symcon09 - 9th international workshop on symmetry and constraint satisfaction problems , colocated with cp2009 .flener , p. , frisch , a. , hnich , b. , kiziltan , z. , miguel , i. , pearson , j. , walsh , t. : symmetry in matrix models .technical report apes-30 - 2001 , apes group ( 2001 ) presented at symcon01 ( symmetry in constraints ) , cp2001 post - conference workshop .puget , j.f . : breaking all value symmetries in surjection problems . in van beek ,: proceedings of 11th international conference on principles and practice of constraint programming ( cp2005 ) , springer ( 2005 ) law , y. , lee , j. : global constraints for integer and set value precedence . in : proceedings of 10th international conference on principles and practice of constraint programming ( cp2004 ) , springer ( 2004 ) 362376 grayland , a. , miguel , i. , roney - dougal , c. : snake lex : an alternative to double lex . in gent , i.p . ,ed . : proceedings of 15th international conference on principles and practice of constraint programming .springer ( 2009 ) 391399 frisch , a. , jefferson , c. , miguel , i. : constraints for breaking more row and column symmetries . in rossi , f. ,ed . : proceedings of 9th international conference on principles and practice of constraint programming ( cp2003 ) , springer ( 2003 )
|
we consider a common type of symmetry where we have a matrix of decision variables with interchangeable rows and columns . a simple and efficient method to deal with such row and column symmetry is to post symmetry breaking constraints like and . we provide a number of positive and negative results on posting such symmetry breaking constraints . on the positive side , we prove that we can compute in polynomial time a unique representative of an equivalence class in a matrix model with row and column symmetry if the number of rows ( or of columns ) is bounded and in a number of other special cases . on the negative side , we show that whilst and are often effective in practice , they can leave a large number of symmetric solutions in the worst case . in addition , we prove that propagating completely is np - hard . finally we consider how to break row , column and value symmetry , correcting a result in the literature about the safeness of combining different symmetry breaking constraints . we end with the first experimental study on how much symmetry is left by and on some benchmark problems .
|
the conformal einstein field equations ( cefe ) constitute a powerful tool for the global analysis of spacetimes see e.g. .the cefe provide a system of field equations for geometric objects defined on a lorentzian manifold ( the so - called _ unphysical spacetime _ ) which is conformally related to a to a spacetime ( the so - called _ physical spacetime _ ) satisfying the ( vacuum ) einstein field equations .the metrics and are related to each other via a rescaling of the form where is the so - called _ conformal factor_. the cefe have the property of being regular at the points where ( the so - called _ conformal boundary _ ) and a solution thereof implies , wherever , a solution to the einstein field equations .the great advantage of the conformal point of view provided by the cefe is that it allows to recast global problems in the physical spacetime as local problems in the unphysical one .the cefe have been extended to include matter sources consisting of suitable trace - free matter seee.g. .the cefe can be expressed in terms of a _ weyl connection _( i.e. a connection which is not metric but nevertheless preserves the conformal structure ) to obtain a more general system of field equations the so - called _ extended conformal einstein field equations _ , see . inwhat follows , the conformal field equations expressed in terms of the levi - civita connection of the metric will be known as the _standard cefe_. the analysis of the present article is restricted to this version of the cefe .the standard cefe can be read as differential conditions on the conformal factor and some concomitants thereof : the schouten tensor , the rescaled weyl tensor and the components of the unphysical metric this version of the equations is known as the _metric cefe_. alternatively , by supplementing the field equations with the cartan structure equations , one can replace the metric components by the coefficients of a frame and the associated connection coefficients as unknowns . this _ frame version _ of the equations allows a direct translation of the cefe into a spinorial formalism the so - called _spinorial cefe_. in view of the tensorial nature of the cefe , in order to make assertions about the existence and properties of their solutions , it is necessary to derive from them a suitable evolution system to which the theory of hyperbolic partial differential equations can be applied .this procedure is known as a _ hyperbolic reduction_. part of the hyperbolic reduction procedure consists of a specification of the gauge inherent to the equations .a systematic way of proceeding to the specification of the gauge is through so - called _ gauge source functions_. these functions are associated to derivatives of the field unknowns which are not determined by the field equations .this idea can be used to extract a first order symmetric hyperbolic system of equations for the field unknowns for the metric , frame and spinorial versions of the standard cefe .more recently , it has been shown that gauge source functions can be used to obtain , out of the metric conformal field equations , a system of quasilinear wave equations see .this particular construction requires the specification of a _ coordinate gauge source function _ and a _ conformal gauge source function _ and is close , in spirit , to the classical treatment of the cauchy problem in general relativity in see also . 
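for orientation, it may help to recall the schematic shape of the standard metric vacuum cefe in one commonly used convention ; signs, index orderings and normalisations differ between references, so the following is a sketch rather than the precise conventions adopted below . writing the conformal rescaling as
\[
 g_{ab} = \Xi^{2}\,\tilde{g}_{ab},
\]
the unknowns are the conformal factor \(\Xi\), the Friedrich scalar \(s\), the Schouten tensor \(L_{ab}\) of \(g_{ab}\), the rescaled Weyl tensor \(d^{a}{}_{bcd}=\Xi^{-1}C^{a}{}_{bcd}\) and the metric (or an orthonormal frame with its connection coefficients), subject, schematically, to
\begin{align*}
 &\nabla_{a}\nabla_{b}\Xi = -\Xi L_{ab} + s\, g_{ab}, \qquad
 \nabla_{a}s = -L_{ab}\nabla^{b}\Xi, \\
 &\nabla_{a}L_{bc}-\nabla_{b}L_{ac} = \nabla_{e}\Xi\, d^{e}{}_{cab}, \qquad
 \nabla_{e}d^{e}{}_{abc} = 0, \qquad
 \lambda = 6\,\Xi s - 3\,\nabla_{a}\Xi\,\nabla^{a}\Xi ,
\end{align*}
together with the condition that the Riemann tensor of \(g_{ab}\) is built from \(L_{ab}\) and \(\Xi\, d^{a}{}_{bcd}\) (the algebraic curvature). these equations are regular at \(\Xi=0\) and imply the vacuum Einstein field equations for \(\tilde{g}_{ab}\) wherever \(\Xi\neq 0\). the quasilinear wave equations mentioned above then take, in local coordinates and for the collected unknowns \(u\), the schematic form \(g^{\mu\nu}(u)\,\partial_{\mu}\partial_{\nu}u = F(u,\partial u)\). the spinorial equations considered below are an equivalent reformulation of this system .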
in the present articlewe show how to deduce a system of quasilinear wave equations for the unknowns of the spinorial cefe and analyse its relation to the original set of field equations .the use of the spinorial cefe ( or , in fact , the frame cefe ) gives access to a wider set of gauge source functions consisting of _ coordinate , frame and conformal gauge source functions_. another advantage of the spinorial version of the cefe is that they have a much simpler algebraic structure than the metric equations .in fact , one of the features of the spinorial formalism simplifying our analysis is the use of the symmetric operator instead of the usual commutator of covariant derivatives ] . for conciseness, we have introduced the notation which is to be interpreted as a shorthand for the longer expression using the irreducible decomposition of a spinor representing an antisymmetric tensor we obtain that where is a reduced zero - quantity which can be written in terms of the frame coefficients using equation as using the decomposition of a valence-2 spinor in the first term of the right - hand side we get introducing the _ coordinate gauge source function _ , a wave equation can then be deduced from the condition observe that this equation is satisfied if that is , if the corresponding cefe is satisfied .adapting the general procedure described in section [ section : wavegeneralprocedure ] as required , we get finally , using the spinorial ricci identities and rearranging the last term we get the following wave equation the spinorial counterpart of the riemann tensor can be decomposed as where the _ reduced curvature spinor _ is expressed in terms of the spin connection coefficients as in the last equation , has been introduced for convenience as a shorthand for the longer expression now , observe that the zero quantity defined in equation has the symmetry .exploiting this fact , the reduced spinors associated to the geometric and algebraic curvatures and can be split , respectively , as where are the _ reduced geometric curvature spinors_. analogous definitions are introduced for the algebraic curvature and are not complex conjugate of each other . ] .the adjetive _ geometric _ is used here to emphasise the fact that and are expressed in terms of the reduced connection coefficients while and , the reduced algebraic curvature spinors , are written in terms of the weyl spinor and the spinorial counterpart of the schouten tensor .together , these two reduced geometric and algebraic curvature spinors give the reduced zero quantities * remark . * observe that although and are independent , their derivatives are related through the _ second bianchi identity _ , which implies that this observation is also true for the algebraic curvature as a consequence of the conformal field equations and since they encode the second bianchi identity written as differential conditions on the spinorial counterpart of the schouten tensor and the weyl spinor . to verify the last statement ,recall that the equation for the schouten tensor encoded in comes from the frame equation which can be regarded as the second bianchi identity written in terms of the schouten and weyl tensors .this is can be easily checked , since the last equation is obtained from the substitution of the expression for the riemann tensor in terms of the weyl and schouten tensors ( i.e. 
the algebraic curvature ) in the second bianchi identity .this means that , as long as the conformal field equations and are satisfied we can write therefore , the reduced quantities and are related via now , we compute explicitly the reduced geometric and algebraic curvature .recalling the definition of in terms of the weyl spinor and the spinorial counterpart of the schouten tensor as given in equation it follows that or , equivalently similarly , computing the reduced version of the geometric curvature from expression we get if the no - torsion condition is satisfied , then the first term in each of the last expressions vanishes . in this manner oneobtains an expression for the reduced geometric curvature purely in terms of the reduced connection coefficients and , in turn , a wave equation from either or . in what follows , for concreteness we will consider adapting the procedure described in section [ section : wavegeneralprocedure ] and taking into account equations and one obtains the gauge source function that appears in the last expression is the _ frame gauge source function _ defined by using the spinorial ricci identities to replace in equation and exploiting the symmetry we get & _ ^_ = -3_ + ^__ + & + 2_(^_||^ _ ) -2_(||)-2^_(_|| ) . [ boxsymconnection2 ] substituting the last expression into we get the wave equation & _ -2 ( ^ _ _ -3_ + 2 _ ( ^_||^ _ ) + & -2_(||)-2^_(_|| ) ) + 2^__(^_||^_ ) + & -2 ^__ - _ f_(x ) = 0 .[ waveeqgamma2 ] the zero - quantity defined by equation is expressed in terms of the spinorial counterpart of the schouten tensor .the spinor can be decomposed in terms of the ricci spinor and as see appendix [ appendix : spinorialrelations ] for more details . in the context of the cefethe field can be regarded as a gauge source function .thus , in what follows we regard the equation as differential conditions on . in order to derive a wave equation for the ricci spinor we consider proceeding , again , as described in section [ section : wavegeneralprocedure ] and using that that is , assuming that the equation encoded in the the zero - quantity is satisfied we get using the decomposition and symmetrising in we further obtain that to find a satisfactory wave equation for the ricci tensor we need to rewrite the last three terms of equation . to compute the third termobserve that the second contracted bianchi identity as in equation and the decomposition of the schouten spinor given by equation render thus , one finds that this last expression is satisfactory since , as already mentioned , the ricci scalar ( or equivalently ) can be regarded as a gauge source function the so - called _ conformal gauge source function _in order to replace the last term of equation we use field equation encoded in and the decomposition , to obtain finally , computing and substituting equations and we conclude that proceeding as in the previous subsections , consider the equation observe that in this case we do not need a gauge source function since we already have a unsymmetrised derivative in the definition of .following the procedure described in section [ section : wavegeneralprocedure ] we get thus , to complete the discussion we need to calculate . 
using the spinorial ricci identities we obtain the symmetries of the equation since taking into account the last expression we obtain the following wave equation for the rescaled weyl spinor observe that the wave equation for the rescaled weyl spinor is remarkably simple .since is a scalar field , the general procedure described in section [ section : wavegeneralprocedure ] does not provide any computational advantage .the required wave equation is derived from considering explicitly , the last equation can be written as using the the contracted second bianchi identity to replace the second term and the field equation along with the decomposition to replace the third term we get a wave equation for the conformal factor follows directly from the contraction and the decomposition : we summarise the results of this section in the following proposition : [ cewe ] let denote smooth functions on such that if the cefe are satisfied on , then one has that [ waveequations ] on . * remark . *the unphysical metric is not part of the unknowns of the system of equations of the spinorial version of the cefe .this observation is of relevance in the present context because when the operator is applied to a spinor of non - zero range one obtains first derivatives of the connection if the metric is part of the unknowns then these first derivatives of the connection representing second derivatives of would enter into the principal part of the operator .therefore , since in this setting the metric is not part of the unknowns , the principal part of the operator is given by . * remark .* in the sequel let denote vector - valued unknowns encoding the independent components of and let . additionally , let denote collectively the derivatives of . with this notationthe wave equations of proposition [ cewe ] can be recast as a quasilinear wave equation for having , in local coordinates , the form where is a vector - valued function of its arguments and denotes the components , in local coordinates , of contravariant version of a lorentzian metric . in accordance with our notation , in local coordinates , one writes .the starting point of the derivation of the wave equations discussed in the previous section was the cefe . 
therefore , any solution to the cefe is a solution to the wave equations .it is now natural to ask : under which conditions a solution to the wave equations will imply a solution to the cefe ?the general strategy to answer this question is to use the spinorial wave equations of proposition [ cewe ] to construct a subsidiary system of homogeneous wave equations for the zero - quantities and impose vanishing initial conditions .then , using a standard existence and uniqueness result , the unique solution satisfying the data will be given by the vanishing of each zero - quantity .this means that under certain conditions ( encoded in the initial data for the subsidiary system ) a solution to the spinorial wave equations will imply a solution to the original cefe .the procedure to construct the subsidiary equations for the zero quantities is similar to the construction of the wave equations of proposition [ cewe ] .there is , however , a key difference : the covariant derivative is , a priori , not assumed to be a levi - civita connection .instead we assume that the connection is metric but not necessarily torsion - free .we will denote this derivative by .therefore , whenever a commutator of covariant derivatives appears , or in spinorial terms the operator , it is necessary to use the -spinorial ricci identities involving a non - vanishing torsion spinor this generalisation is given in the appendix a and is required in the discussion of the subsidiary equations where the torsion is , in itself , a variable for which a subsidiary equation needs to be constructed . as in the previous section ,the procedure for obtaining the subsidiary system is similar for each zero - quantity .therefore , we first give a general outline of the procedure . in the general procedure described in section [ section : wavegeneralprocedure ] , the spinor played the role of a zero - quantity , while the spinor played the role of the variable for which the wave equation was to be derived . in the construction of the subsidiary systemwe are not interested in finding an equation for but in deriving an equation for under the hypothesis that the wave equation for is satisfied . as already discussed , since we can not assume that the connection is torsion - free the equation for has to be written in terms of the metric connection .before deriving the subsidiary equation let us emphasise an important point . in section [ section : wavegeneralprocedure ]we defined .then , decomposing this quantity as usual we obtained at this point in the discussion of section [ section : wavegeneralprocedure ] we introduced a gauge source function .now , instead of directly deriving an equation for we have derived an equation using the modified quantity accordingly , the wave equations of proposition [ cewe ] can be succinctly written as .later on , we will have to show that , in fact , if the appropriate initial conditions are satisfied . in addition , observe that can be written in terms of the connection by means of a _ transition spinor _ see appendix [ appendix : torsion ] for the definition . using equation of appendix [appendix : torsion ] we get where is the last index of the string . 
for a connection which is metric , the transition spinor can be written entirely in terms of the torsion as if the wave equation is satisfied , the first term of equation vanishes .therefore , the wave equations of proposition [ cewe ] can be written in terms of the connection as in what follows , the right hand side of the last equation will be denoted by now , we want to show that by setting the appropriated initial conditions , if the wave equation holds then . the strategy will be to obtain an homogeneous wave equation for written in terms of the connection .first , observe that can be decomposed as replacing the second term using i.e .using that the wave equation holds we get that applying to the previous equation and expanding the symmetrised term in the right - hand side one obtains from this expression , after some rearrangements we obtain it only remains to reexpress the right - hand side of the above equation using the - spinorial ricci identities .this can be computed for each zero - quantity using the expressions given in appendix [ appendix : spinorialrelations ] .observe that the result is always an homogeneous expression in the zero - quantities and its first derivatives .the last term also shares this property since the transition spinor can be completely written in terms of the torsion , as shown in equation , which is one of the zero - quantities .finally , once the homogeneous wave equation is obtained we set the initial conditions on a initial hypersurface , and using a standard result of existence and uniqueness for wave equations we conclude that the unique solution satisfying this data is . ** the crucial step in the last derivation was the assumption that the equation is satisfied i.e .the wave equation for .now , we take a closer look at the initial conditions as will be shown in the sequel , these conditions will be used to construct initial data for the wave equations of proposition [ cewe ] .the important observation is that only is essential , while holds by virtue of the condition . in order to show this ,first observe that as the spatial derivatives of can be determined from , it follows that is equivalent to only specify the derivative along the normal to .let be an hermitian spinor corresponding to a timelike vector such that is the normal to .the spinor can be used to perform a _space spinor split _ of the derivative : where denote , respectively , the derivative along the direction given by and is the _ sen connection _ relative to .. ] we have chosen the normalisation , in accordance with the conventions of . using this split and it follows that therefore , requiring is equivalent to having as previously stated .now , observe that the wave equation or , equivalently , implies .is given entirely in terms of zero - quantities since the transition spinor can be written in terms of the torsion . ] therefore , if we require that all the zero - quantities vanish on the initial hypersurface then . 
using , again , the space spinor decomposition of and considering we get which also implies that .summarising , the only the condition that is needed is that all the zero - quantities vanish on the initial hypersurface since the condition is always satisfied by virtue of the wave equation .we still need to show that .one can write where encodes the difference between and .computing the trace of the last equation and taking into account the definition of one finds that .now , invoking the results derived in the last subsection it follows that if the wave equation is satisfied and all the zero - quantities vanish on the initial hypersurface then .this observation also implies that if then . the later result , expressed in terms of means that if then .therefore , requiring that all the zero - quantities vanish on and that the wave equation holds everywhere , is enough to ensure that everywhere .moreover , implies that and the gauge conditions hold .namely , one has that the essential ideas of the section [ section : genericsubsidiarysystem ] can be applied to every single zero - quantity .one only needs to take into account the particular index structure of each zero - quantity encoded in the string of spinor indices .the problem then reduces to the computation of the result of which is to be substituted into the latter can be succinctly computed using the equations in appendix [ appendix : spinorialrelations ] .the explicit form can be easily obtained and renders long expressions for each zero - quantity .the key observation from these computations is that leads to an homogeneous wave equation .the explicit form is given in appendix [ appendix : subsidiarysystem ] .these results can be summarised in the following proposition : [ proposition : subsidiaryequations ] assume that the wave equations are satisfied everywhere . then the zero - quantities satisfy the homogeneous wave equations {}^{\bmq'}{}_{\bmb}{}{}^{\bmc } = 0 ,\nonumber \\ & & \widetriangle{\square}\widehat{\xi}_{\bma \bmb \bmc ' \bmd ' } -2\widetriangle{\square}_{\bmp ' \bmc'}\widehat{\xi}_{\bma \bmb}{}^{\bmp'}{}_{\bmd ' } + 2\widetriangle{\nabla}_{\bmc ' \bmq}w[{\xi}]{}^{\bmq}{}_{\bma \bmb \bmd ' } = 0 , \\ & & \widetriangle{\square}\widehat{\delta}^{\bmp}{}_{\bmd \bmb \bmb ' } -2\widetriangle{\square}_{\bmp \bmc}\widehat{\delta}^{\bmp}{}_{\bmd \bmb \bmb ' } + 2 \widetriangle{\nabla}_{\bmc \bmq'}w[{\delta}]{}^{\bmq'}{}_{\bmd \bmb \bmb ' } = 0 , \nonumber \\ & & \widetriangle{\square}{\lambda}_{\bmb \bmb ' \bma \bmc } -2\widetriangle{\square}_{\bmp ' \bmb'}{\lambda}_{\bmb}{}^{\bmp'}{}_{\bma \bmc } + 2\widetriangle{\nabla}_{\bmb ' \bmq } w[\lambda]{}^{\bmq}{}_{\bmb \bma \bmc}=0 , \nonumber \\ & & \widetriangle{\nabla}_{\bma \bma'}z^{\bma \bma ' } - w[z]{}^{\bma \bma'}{}_{\bma \bma'}=0,\end{aligned}\ ] ] where {}^{\bmq'}{}_{\bmb}{}{}^{\bmc } \equiv \widetriangle{\nabla}^{\bmq'}{}_{\bme}\widehat{\sigma}^{\bme}{}_{\bmb}{}^{\bmc } , \quad w[{\xi}]{}^{\bmq}{}_{\bma \bmb \bmd ' } \equiv \widetriangle{\nabla}^{\bmq}{}_{\bme'}\widehat{\xi}_{\bma \bmb}{}^ { \bme'}{}_{\bmd ' } , \quadw[{\delta}]{}^{\bmq'}{}_{\bmd \bmb \bmb ' } \equiv \widetriangle{\nabla}^{\bmq'}{}_{\bmf}\widehat{\delta}^{\bmf}{}_{\bmd \bmb \bmb ' } , & \\ &w[\lambda]{}^{\bmq}{}_{\bmb \bma \bmc } \equiv \widetriangle{\nabla}_{\bme'}{}^{\bmq}{\lambda}_{\bmb}{}^{\bme'}{}_{\bma \bmc } , \quad w[z]{}^{\bma \bma'}{}_{\bma \bma ' } \equiv \widetriangle{\nabla}^{\bma \bma'}z_{\bma \bma'}. 
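the structural point behind these equations, spelled out in the proof of the reduction lemma below, can be summarised schematically as follows ; the symbol \(\hat{Z}\) for a generic zero-quantity and \(\mathcal{S}_{\star}\) for the initial hypersurface are our shorthand . each subsidiary equation has the homogeneous form
\[
 \widetriangle{\square}\,\hat{Z} = A\,\hat{Z} + B\cdot\widetriangle{\nabla}\hat{Z},
 \qquad
 \hat{Z}\big|_{\mathcal{S}_{\star}}=0, \quad
 \widetriangle{\nabla}\hat{Z}\big|_{\mathcal{S}_{\star}}=0
 \;\Longrightarrow\;
 \hat{Z}=0 \ \text{on the development of } \mathcal{S}_{\star},
\]
with coefficients \(A\), \(B\) depending on the solution of the wave equations, so that uniqueness for homogeneous wave equations with vanishing Cauchy data forces all the zero-quantities to vanish .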
& \end{aligned}\ ] ] we will refer to the set of equations given in the last proposition as the _ subsidiary system_. it should be noticed that the terms of the form and can be computed using the -ricci identities and the transition spinor respectively . using the subsidiary equations from the previous proposition onereadily obtains the following _ reduction lemma _ : [ reductionlemma ] if the initial data for the subsidiary system of proposition [ proposition : subsidiaryequations ] is given by where in an spacelike hypersurface , and the wave equations of proposition [ cewe ] are satisfied everywhere , then one has a solution to the vacuum cefe in other words in . moreover , whenever , the solution to the cefe implies a solution to the vacuum einstein field equations .it can be verified , using the -ricci identities given in the appendix [ appendix : spinorialrelations ] , that the equations of proposition [ proposition : subsidiaryequations ] are homogeneous wave equations for the zero - quantities .notice , however , that the equation for is not a wave equation but of first order and homogeneous .therefore , if we impose that the zero - quantities vanish on an initial spacelike hypersurface then by the homogeneity of the equations we have that everywhere on .moreover , since initially , and , we have that , , on . in addition , using that a solution to the cefe implies a solution to the einstein field equations whenever , it follows that a solution to the wave equations of proposition [ cewe ] with initial data consistent with the initial conditions given in proposition [ reductionlemma ] will imply a solution to the vacuum einstein field equations whenever . * remark .* it is noticed that the initial data for the subsidiary equations gives a way to specify the data for the wave equations of proposition [ cewe ] .this observation is readily implemented through a space spinor formalism which mimics the hyperbolic reduction process to extract a first order hyperbolic system out of the cefe see e.g. . in oder to illustrate this procedure let us consider the data for the rescaled weyl spinor encoded in .we need to provide the initial data a convenient way to specify the initial data is to use the space spinor formalism to split the equations encoded in . from this split, a system of evolution and contraint equations can be obtained .recall that . making use of the the decomposition of in terms of the operators and we get evolution and constraint equations are obtained , respectively , from considering restricting the last equations to the initial hypersurface it follows that the initial data must satisfy and the initial data for can be read form . 
the procedure for the other equations is analogous and can be succinctly obtained by revisiting the derivation of the first order hyperbolic equations derived from the cefe using the space spinor formalism see for instance .corresponds to the limit of the region where the coordinates are well defined.,scaledwidth=25.0% ] as an application of the hyperbolic reduction procedure described in the previous sections we analyse the stability of the _ milne universe _ , .this spacetime is a friedman - lematre - robinson - walker vacuum solution with vanishing cosmological constant , energy density and pressure .in fact , it represents flat space written in comoving coordinates of the world - lines starting at see .this means that the milne universe can be seen as a portion of the minkowski spacetime , which we know is conformally related to the _ einstein cosmos _ , ( sometimes also called the _ einstein cylinder _ ) see figure [ fig : milne ] .the metric of the milne universe is given in comoving coordinates by where , \hspace{0.5 cm } \phi\in[0,2\pi).\ ] ] in fact , introducing the coordinates the metric reads therefore , and the milne universe corresponds to the non - spatial region of minkowski spacetime as shown in the penrose diagram of figure [ fig : milne ] .as already discussed , this metric is conformally related to the metric of the einstein cosmos .more precisely , one has that where the metric of the einstein cylinder , , is given by with denoting the standard metric of the conformal factor relating the metric of the milne universe to metric of the einstein universe is given by and the coordinates are related to via equivalently , in terms of the original coordinates and we have therefore , the milne universe is conformal to the domain since the milne universe is a solution to the the einstein field equations , it follows that the pair implies a solution to the cefe which , in turn , constitutes a solution to the wave equations of proposition [ cewe ] . following the discussion of section [ section : cefe ] , this solution consists of the frame fields or , equivalently , the spinorial fields where we have written and as a shorthand for the derivative of the conformal factor . for later use, we notice that in the einstein cosmos we have =0 , \qquad \textbf{r}[\mathring{\bmg}]=-6 , \qquad \textbf{schouten}[\mathring{\bmg}]=\tfrac{1}{2}\left ( \mathbf{d } t \otimes \mathbf{d } t + \bmhbar \right).\end{aligned}\ ] ] the spinorial version of the above tensors can be more easily expressed in terms of a frame . to this end , now consider a geodesic on the einstein cosmos given by where is fixed. using the congruence of geodesics generated varying over we obtain a gaussian system of coordinates on the einstein cylinder where are some local coordinates on .in addition , in a slight abuse of notation _ we identify the standard time coordinate on the einstein cylinder with the parameter of the geodesic_. a globally defined orthonormal frame on the einstein cosmos can be constructed by first considering the linearly independent vector fields in where are cartesian coordinates in .the vectors are tangent to and form a global frame for see e.g. .this spatial frame can be extended to a spacetime frame by setting and . using this notationwe observe that the components of the basis respect to this frame are given by . 
with respect to this orthogonal basis the components of the schouten tensorare given by so that the components of the traceless ricci tensor are given by where the curly bracket around the indices denote the symmetric trace - free part of the tensor .in addition , since the weyl tensor vanishes . now , let denote the connection coefficients of the levi - civita connection of with respect to the spatial frame .observe that the structure coefficients defined by =c_{\bmi}{}^{\bmk}{}_{\bmj}\bmc_{\bmk} making small that is , the size of the extended data is controlled by the data in the initial hypersurface .therefore , the extended data will be given by which are well defined on .using equation we observe that * remark .* the fact that the extension of the data obtained in the previous paragraph is not unique and it does not necessarily satisfy the constraints of proposition [ reductionlemma ] is not a problem in our analysis since the proof of the last statement follows by contradiction .let .then , in the one hand we have that , so that it follows that there exists a future timelike curve from to . on the other hand which means that every past in extendible causal curve through intersects , therefore .this is a contradiction since .we are now in position to make use of a local existence and cauchy stability result adapted from see appendix [ appendix : pdetheory ] , to establish the following theorem : [ theorem : existencecauchystability ] let be hyperboloidal initial data for the conformal wave equations on an 3-dimensional manifold where denotes initial data for the milne universe .let denote the extension of these data to .then , for and there exist an such that : * for , there exist a unique solution to the wave equations of proposition [ cewe ] with a minimal existence interval ] . * given a sequence such that then for the solutions with and , it holds that uniformly in as .* the solution is unique in and implies , wherever , a solution to the einstein vacuum equations with vanishing cosmological constant. points _ ( i ) _ and _ ( ii ) _ are a direct application of theorem [ theorem : hugkatmar ] given in appendix [ appendix : pdetheory ] .the condition ensuring that is lorentzian is encoded in the requirement of the perturbation for the initial data being small as discussed in section [ perturbedsolution ] .the statement of point _ ( iii ) _ follows from the discussion of section [ propagationconstraintssubsidiarysystem ] for the propagation of the constraints and the subsidiary system as summarised in propositions [ cewe ] and [ reductionlemma ] . in particular , in this section it was shownthat a solution to the spinorial wave equations is a solution to the conformal einstein field equations if initial data satisfies the appropriate conditions . as exemplified in section [ sec : initialdatawave - weyl ] for the rescaled weyl spinor , requiring the zero - quantities to vanish in the initial hypersurface renders conditions on the initial data . 
finally , recall that a solution to the cefe implies a solution to the einstein field equations wherever see .now , we will complement theorem [ theorem : existencecauchystability ] by showing that the conformal boundary coincides with the cauchy horizon of .the argument of this section is based on analogous discussion in .since the cauchy horizon is generated by null geodesics with endpoints on the null generators of i.e the null vectors tangent to are given at by as it follows by the initial hyperboloidal data .we then define two null vectors on by setting we complement this pair of null vectors , where is tangent to on and is normal to , with a pair of complex conjugate vectors and tangent to such that , so as to obtain the tetrad . in order to obtain a newman - penrose frame off along the null generators of we propagate them by parallel transport in the direction of by requiring now , suppose that we already have a solution to the conformal wave equations . using the result of proposition [ reductionlemma ] , we know that the solution will also satisfy the cefe . in this section we will make use of the cefe equations to study the conformal boundary . from the tensorial ( frame ) version of the cefeas given in appendix [ appendix : cfe ] , one notices the following subset consisting of equations , and the definition of as the gradient of the conformal factor : transvecting the first two equations , respectively , with , and we get where we have used and the fact that is null and orthogonal to .the latter equations can be read as a system of homogeneous transport equations along the integral curves of for a vector - valued variable containing as components , and . written in matricial formone has observe that the column vector shown in the last equation is zero on , since = 0 , and which follows from and .since equation is homogeneous and it has vanishing initial data on we have that , and will be zero along until one reaches a caustic point .consequently , we conclude that the conformal factor vanishes in the portion of which is free of caustics .thus , this portion of can be interpreted as the conformal boundary of the physical spacetime .in addition , notice that from the vanishing of the column vector of equation it follows that on .therefore , the only component of that can be different from zero is .accordingly , is parallel to and .moreover , since it follows that .. ] now , in order to extract the information contained in one transvects with , to obtain using that and that vanishes on one concludes that we can obtain a further equation transvecting with it follows then that one has the system since ( i.e. non - vanishing ) , the solution for the column vector formed by and can not be zero .accordingly , and can not vanish simultaneously .finally , transvecting equation with we get using that and restricting to where we obtain using it follows that the left hand side of the last equation is equivalent to finally , recalling the definition of the expansion ( in the newman - penrose notation ) we finally obtain we already know that the only possible non - zero component of the gradient of is and that it can not vanish simultaneously with .this means that implies on . to be able to identify the point where with timelike infinity we need to calculate the hessian of the conformal factor .observe that this information is contained in the conformal field equation . 
considering this equation at , where we have already shown that the conformal factor vanishes , we get now , as we have shown that and ( or , equivalently , ) do not vanish simultaneously we conclude that and that is non - degenerate .thus , we can consider the point on where both and vanish as representing future timelike infinity for the physical spacetime . * remark . *observe that the construction discussed in the previous paragraphs crucially assumes that is zero on the boundary of the initial hypersurface .this construction can not be repeated if we were to take another hypersurface with boundary where the conformal factor does not vanish .this is the case of an initial hypersurface that intersects the cosmological horizon , where for the reference solution the conformal factor does not vanish see figure [ fig : hypdata ] . where the the hyperboloidal data is prescribed .at the conformal factor vanishes and the argument of section [ sec : conformalboundary ] can be applied .the dark gray area represents the development of the data on .compare with the case of the hypersurface which intersects the horizon at where the argument can not be applied .analogous hypersurfaces can be depicted for the lower diamond of the complete diagram of figure [ fig : milne].,scaledwidth=40.0% ] the results of the analysis of this section are summarised in the following : * ( structure of the conformal boundary ) * let denote a solution to the conformal wave equations equations constructed as described in theorem [ theorem : existencecauchystability ] , then , there exists a point where and but the hessian is non - degenerate .in addition , on . moreover . from the conclusions of theorem [ theorem : existencecauchystability ] and the discussion of section [ sec : conformalboundary ] it follows that if we have a solution to the conformal wave equations which , in turn implies a solution to the conformal field equations , then there exists a point in where both the conformal factor and its gradient vanish but is non - degenerate .this means that can be regarded as future timelike infinity for the physical spacetime .in addition , null infinity will be located at where the conformal factor vanishes but its gradient does not .in this article we have shown that the spinorial frame version of the cefe implies a system of quasilinear wave equations for the various conformal fields .the use of spinors allows a systematic and clear deduction of the equations and the not less important issue of the propagation of the constraints .the fact that the metric is not part of the unknowns in the spinorial formulation of the cefe simplifies the considerations of hyperbolicity of the operator .the application of these equations to study the semiglobal stability of the milne universe exemplifies how the extraction of a system of quasilinear wave equations out of the cefe allows to readily make use of the general theory of partial differential equations to obtain non - trivial statements about the global existence of solutions to the einstein field equations .the analysis of the present article has been restricted to the vacuum case .however , a similar procedure can be carried out , in the non - vacuum case , for some suitable matter models with trace - free energy - momentum tensor see e.g. 
.in addition , the present analysis has been restricted to the case of the so - called standard cefe .there exists another more general version of the cefe , the so - called , extended conformal einstein field equations ( xcefe ) in which the various equations are expressed in terms of a weyl connection i.e .a torsion free connection which is not necessarily metric but , however , respects the structure of the conformal class , . the hyperbolic reduction procedures for the xcefe available in the literature do not make use of gauge source functions .instead , one makes use of conformal gaussian systems based on the congruence of privileged curves known as conformal geodesics to extract a first order symmetric hyperbolic system .it is an interesting open question to see whether it is possible to use conformal gaussian systems to deduce wave equations for the conformal fields in the xcefe .e. gaspern gratefully acknowledges the support from consejo nacional de ciencia y tecnologa ( conacyt scholarphip 494039/218141 ) .in this appendix we recall several relations and identities that are used repeatedly throughout this article see . in addition , using the remarks made in we give a generalisation for the spinorial ricci identities for a connection which is metric but not necessarily torsion free .in this subsection we recall some well known relations satisfied the curvature spinors of a levi - civita connection .the discussion of this subsection follows .first recall the decomposition of a general curvature spinor we can further decompose the reduced spinor as where in the above expressions the symbol over the kernel letter indicates that this relation is general i.e .the connection is not necessarily neither metric nor torsion - free .the spinors and are not necessarily symmetric in .it is well known that if that the connection is metric , then the spinors and have the further symmetries we add the symbol over the kernel letter to denote that only the metricity of the connection is being assumed . now , if the connection is not only metric but , in addition , is torsion free ( i.e. it is a levi - civita connection ) then the _ first bianchi identity _ }=0 ] valid for a levi - civita connection extends to a connection with torsion as ^{\bmd}=\widetriangle{r}^{\bmd}{}_{\bmc \bma \bmb}u^{\bmc } + \sigma_{\bma}{}^{\bmc}{}_{\bmb}\widetriangle{\nabla}_{\bmc}u^{\bmd } .\ ] ] another way to think the last equation is to define a modified commutator of covariant derivatives through - \sigma_{\bma}{}^{\bmc}{}_{\bmb}\widetriangle{\nabla}_{\bmc } \right ) u^{\bmd}.\ ] ] in this way we can recast the ricci identities as this observation leads us to an expression for the generalised operator the relation between this operator and the commutator of covariant derivatives is = \epsilon_{\bma ' \bmb ' } \widetriangle{\square}_{\bma \bmb } + \epsilon_{\bma \bmb}\widetriangle{\square}_{\bma ' \bmb'}.\ ] ] we can not directly write down the equivalent spinorial ricci identities simply by replacing and by and because of appearance of the term in the definition of the curvature tensor . a way to get aroundthis difficulty is to define a modified operator formed using the modified commutator of covariant derivatives instead of the usual commutator . 
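Collecting the two displayed relations just discussed in plain index notation (the decorations marking the torsion connection are dropped and the double-bracket symbol for the modified commutator is ours; signs follow the fragments above):

```latex
% Ricci identity for a metric connection with torsion \sigma_{a}{}^{c}{}_{b}:
\nabla_{a}\nabla_{b} u^{d} - \nabla_{b}\nabla_{a} u^{d}
  = R^{d}{}_{cab}\, u^{c} + \sigma_{a}{}^{c}{}_{b}\, \nabla_{c} u^{d} .
% Modified commutator absorbing the torsion term:
[\![ \nabla_{a}, \nabla_{b} ]\!]\, u^{d}
  \equiv \left( [\nabla_{a}, \nabla_{b}] - \sigma_{a}{}^{c}{}_{b} \nabla_{c} \right) u^{d}
  = R^{d}{}_{cab}\, u^{c} .
```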
in this waywe can directly translate the previous formulae simply by replacing and by and .now , the relation between and can be obtained by observing that -\sigma_{\bmc \bmc'}{}^{\bme \bme'}{}_{\bmd \bmd'}\widetriangle{\nabla}_{\bme \bme ' } \right ) \nonumber \\ & & \phantom{\widetriangle{\boxminus}_{\bmc \bmd } } = \tfrac{1}{2 } \left ( \widetriangle{\nabla}_{\bmd ' \bmc}\widetriangle{\nabla}_{\bmd}{}^{\bmd ' } + \widetriangle{\nabla}_{\bmd'\bmd } \widetriangle{\nabla}_{\bmc}{}^{\bmd ' } - \sigma_{\bmc \bmd'}{}^ { \bme \bme'}{}_{\bmd}{}^{\bmd'}\widetriangle{\nabla}_{\bme \bme ' } \right ) .\label{boxdes1}\end{aligned}\ ] ] using the antisymmetry of the torsion spinor we have the decomposition where the reduced spinor is given by . using this decomposition and symmetrising expression in the indices in one obtains in order to compute explicitly how acts on spinors we only need to compute the generalised the spinors and .as discussed in previous paragraphs , the fact that the connection is not torsion free is reflected in the symmetries of the curvature spinors .we still have that the symmetries in hold due to the metricity of . however , the interchange of pairs symmetry of the riemann tensor , the reality condition on and the hermiticity of do not longer hold as these properties rely on the the cyclic identity }=0 ] if is sufficiently small .* if and are chosen as in and one has a sequence such that then for the solutions with and , it holds that uniformly in as .
|
The spinorial version of the conformal vacuum Einstein field equations is used to construct a system of quasilinear wave equations for the various conformal fields. As part of the analysis we also show how to construct a subsidiary system of wave equations for the zero-quantities associated with the various conformal field equations. This subsidiary system is used, in turn, to show that under suitable assumptions on the initial data a solution to the wave equations for the conformal fields implies a solution to the actual conformal Einstein field equations. The use of spinors allows for a more unified deduction of the required wave equations and analysis of the subsidiary equations than similar approaches based on the metric conformal field equations. As an application of our construction we study the non-linear stability of the Milne universe. It is shown that sufficiently small perturbations of initial hyperboloidal data for the Milne universe give rise to a solution to the Einstein field equations which exists towards the future and has an asymptotic structure similar to that of the Milne universe. *Keywords:* conformal methods, spinors, wave equations, Milne universe, global existence. *PACS:* 04.20.Ex, 04.20.Ha, 04.20.Gz
|
evolutionary algorithms ( eas ) are probabilistic search algorithms based on evolution .they operate by exploiting the information contained in a population of possible solutions ( via similarities between individuals ) .the aim is to find an individual that maximises ( or minimises ) an objective function , which maps from individuals to the real line .the population is transformed by first selecting individuals .mutation and/or recombination is then used to either replace a few individuals from the population or create an entirely new population .the most prevalent methods for selecting individuals are proportionate , linear rank , tournament , and truncation . in proportionate selection individuals are chosen with a probability proportional to their fitness ( the value of the objective function evaluated at the individual ) .a common method to gain more control over selection pressure , is to scale the fitness values before the selection is made .linear ranking proceeds by ordering the population according to their fitness .the chance that an individual is selected is then a linear function of its ( unique ) rank .tournament selection creates a tournament by randomly choosing individuals , the best individual in the tournament is then selected . for truncation selection the fittest individuals have uniform probability of selection , while the remainder have zero chance of being selected .the choice of selection scheme is crucial to algorithm performance .if the selection pressure is too high then diversity of the population decreases rapidly and the algorithm converges prematurely to local optima or worse . with too little pressurethere is not enough push toward better individuals and the population takes too long to converge .many methods to choose or adapt the selection pressure or avoid the problem otherwise have been invented ( see for some references ) .a particularly simple one is fitness uniform selection , which uniformly selects a fitness value , and then the individual with fitness closest to this value .it is quite profitable to study selection schemes due to their generality .they depend only on the set of fitness values and not on the rest of the algorithm .hence their behaviour can be studied in isolation and the results applied to any evolutionary algorithm . in this paperwe introduce and study generalizations of rank and tournament selection ( both actually only depend on the rank and not the absolute fitness value itself ) .linear ranking has a small range of selection pressures ( from , for a population of individuals the probability that the fittest individual is selected must be between and ) , but it has the flexibility of a real - valued parameter that can vary continuously ( the slope of the linear function ) .ranking schemes with high selection pressures , such as when the probability of selection is an exponential function of the rank , have occasionally been used .it is natural then to generalise from linear to polynomial functions to cover the instances where medium pressure is required .hence the probability of an individual with rank being selected with a polynomial rank scheme of degree is : [ eqpoly1 ] p(i = k)=_l=1^d+1 a_l k^l-1 where are parameters defined by the algorithm designer . for simplicitywe assume that selection is performed with replacement and each individual has unique rank , however our results still hold when there are ties in the rank .the only restriction on the is that they must produce a proper probability distribution , i.e. 
for a population of individuals : for all and .hence , while the population is ordered , the schemes may favour low ranks , high ranks or neither , depending on the choice of the .this selection method encompasses the low pressures of linear schemes ( ) and can give good approximations of the high pressure exponential cases ( via taylor polynomials ) .furthermore the wealth of general knowledge about polynomials means that while it has numerous parameters ( coefficients of the monomials ) , it is also easy to predict their impact .tournament selection has a large range , but a discrete parameter , leaving the possible selection pressures somewhat restricted .this can be overcome by selecting probabilistically from the tournament , rather than always choosing the best in the tournament. however the extra parameters required are not easy to understand .their precise effect on the behaviour is not at all obvious .probabilistic tournament selection still only sorts individuals , making it much faster than any ranking scheme .let be the ( rank of the ) individual in position of the rank - ordered tournament .we call the _ seed _ of . let be the probability that seed has rank . in any given tournament , the probability that the seed individual is chosen will be a user defined constant .then the probability of an individual being selected through a size probabilistic tournament is : [ eqtourn1 ] p(i = k)=_s=1^t _ s p(i_s = k ) standard ( deterministic ) tournament always selects the individual of highest rank in the tournament , i.e. and . to ensure that choosing a winner from the tournament makes sense, the must satisfy the probability constraints and .we assume that the tournament is created by random selection _ with _ replacement and for now that each individual in the population has a unique fitness .this defines ( section [ secpst ] ) .note that even if every individual in the population is unique , it is possible for it to be repeated in the tournament . 
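Both schemes depend only on ranks and can be prototyped directly from the two expressions above. The following minimal Python sketch is illustrative only: the function names are ours, the population is identified with its ranks 1, ..., n, the coefficients and the bias are assumed to already satisfy the probability constraints, and the seed indexing follows the rank-ordering of the sampled tournament.

```python
import random

def polynomial_rank_select(n, a):
    """Sample a rank k in {1, ..., n} with P(i = k) = sum_l a[l-1] * k**(l-1).
    The coefficients are assumed to already define a proper distribution."""
    weights = [sum(a_l * k**l for l, a_l in enumerate(a)) for k in range(1, n + 1)]
    return random.choices(range(1, n + 1), weights=weights, k=1)[0]

def probabilistic_tournament_select(n, beta):
    """Draw a tournament of t = len(beta) ranks uniformly with replacement,
    rank-order it, then select seed s with probability beta[s-1]."""
    tournament = sorted(random.choices(range(1, n + 1), k=len(beta)))
    seed = random.choices(range(len(beta)), weights=beta, k=1)[0]
    return tournament[seed]
```

Putting all of beta's mass on one end of the rank-ordered tournament recovers standard (deterministic) tournament selection (which end depends on whether rank 1 or rank n denotes the fittest individual); t = 2 with a general beta gives linear ranking, as derived below.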
in this paperwe investigate the equivalence between the generalised schemes and with the aim of providing a scheme that combines the superior understanding of polynomial rank with the speed of probabilistic tournament .bck found that an individual s chance of selection in deterministic tournament selection is a polynomial , hence each is equivalent to a polynomial rank selection method .wieczorek and czech , and blickle arrived at the same conclusion using a different method .so while the name ` polynomial rank selection ' is new , its concept is fairly old .the study of probabilistic tournaments is nt new either : hutter proved that every size probabilistic tournament is a linear rank scheme , and goldberg did the same but only for a continuous population .fogel applied to the traveling salesman problem , a variation wherein each individual underwent numerous t=2 tournaments .the probability of winning each tournament was dependent on the fitness of the individuals involved and the individuals selected were those with the highest number of wins .we extend these results by finding that every sized probabilistic tournament is equivalent to a polynomial rank scheme with a polynomial degree of or less ( section [ secpst ] ) .we continue on to show that the equivalence is unique ( section [ secunique ] ) , and give an explicit expression for the inverse map ( section [ secptot ] ) .this allows the establishment of simple criteria for polynomial rank schemes that are probabilistic tournaments ( section [ secpist ] ) .unfortunately not every possible polynomial rank scheme satisfies the criteria , but most ( and in the limit of an infinite population , all ) linear and most `` interesting '' quadratic ones are equivalent to probabilistic tournaments .this is good enough for all practical purposes , if it generalises to higher order polynomials . throughout the paperwe use the following notation . if not otherwise indicated , an index has the full range as defined in this table .= = + kronecker symbol + ( for and for ) + number of individuals in the population + rank ( unique label ) of individuals + rank indices that only run from + seed index + rank of the individual with seed + rank of the individual selected + probability that is selected + polynomial coefficients index + coefficients of for the polynomial + tournament selection coefficients + vector + dimensional probability simplex +in this section we find the probability of an individual being successful ( the winner ) via tournament selection .this will provide a formula for an equivalent ranking selection scheme .it is sufficient to consider just one selection event in isolation , since we consider selection with replacement .we assume a population consisting of individuals with fitness . without loss of generality we assume that they are ordered , i.e. 
for all .for now we also assume that all fitness values are different , hence individual has rank .the rank is all we need in the following , and we will say `` individual '' , meaning `` individual '' ._ polynomial -ranking selects individual from population with probability _ _ a probabilistic -tournament selects individuals from population uniformly at random with replacement .let be the individual of rank in the tournament , called seed ( while it has rank in the population ) .finally the seed individual , , is chosen with probability as the winner ._ [ thmpst]_probabilistic -tournament selection coincides with polynomial -ranking ( for and suitable ) ._ we derive an explicit expression for the probability that the tournament winner has rank .any seed may have rank ( ) and may be the winner ( ) , hence _k p(i = k ) + = _s=1^t p(i = i_s)p(i_s = k ) = where we have exploited that by definition the probability that is independent of the rank . is the probability that seed has rank .it is difficult to formally derive an expression for , but we can easily get it by considering distribution functions . the probability of an individual selected into the tournament having a particular rank is , hence having rank equal to or less than is and larger than is .further , if and only if seeds have rank and seeds have rank , hence p(i_r k i_r+1>k ) = ( ) ^r(1-)^t - r since there are ways of choosing individuals with rank from individuals .the above expression is a polynomial in of degree .together with p(i_sk ) = _ r = s^t p(i_r k i_r+1>k ) , we get the explicit expression [ eqpis ] & & p(i_s = k ) = p(i_sk)-p(i_sk-1 ) + & & = _ r = s^t using the binomial theorem to find the and coefficients in the square brackets above reveals that the former coefficients cancel out while the latter do not .this implies that is a polynomial in of degree ( at most ) , and thus the weighted average is as well . summing over the population yields , as it should , since the tournament coefficients are such that some individual is always chosen .consequently , every tournament is a polynomial rank scheme of degree at most ( one can choose such that it is of lower degree ) .expression can be rewritten as p(i_s = k ) = _ r=0^s-1 which will be convenient in the following examples .standard tournament always selects ( ) , hence p(i = k ) = p(i_1=k ) = ( 1-)^t - ( 1-)^t see figure [ figtourndet ] . for is no selection pressure , .for we get p(i_1=k ) = p(i_2=k ) = hence probabilistic tournaments of size 2 lead to linear ranking & & p(i = k ) = _ 1 p(i_1=k)+_2 p(i_2=k ) = a_1+a_2 k , + [ pikt2 ] & & a_1=1n^2[(2n+1)_1-_2 ] , a_2 = 2n^2(_2-_1 ) ( 89,78)(-4,-1 ) ( 0,0)[tournament probabilities for large ] _ probability density that the tournament winner has rank , for tournament size ._,title="fig:",width=321 ] ( -4,15.5) ( 6.5,20) ( -4,31.5) ( -4,48) ( 8,65) ( 40,-1) more interesting is actually the converse , replacing rank selections by equivalent efficient tournaments . before we can answer this , we need to break down into a product of simple regular matrices .the next natural question is whether different tournament bias implies different selection probability .it seems plausible that the maps from tournaments to rank probabilities and to polynomial coefficients are injective , but the proof is fairly involved .the good news is that construction in the proof allows us to find a closed form expression for the desired inverse .let be the dimensional probability simplex , i.e. 
and .[ thmunique]_the function in , mapping tournament probabilities to rank probabilities , is total , linear , and injective : _ k= p(i = k)=_s=1^t r_k^s_s , v = rv , where is defined in .matrix can also be written as a product with matrices , , , , , , and defined in , , , , , , and .similarly , the function , mapping to polynomial coefficients , is unique , linear and injective : a_l=_s=1^t t_l^s_s , = tv , where matrix ._ tournament always selects one individual from as the winner , hence for every .see the proof of theorem [ thmpst ] for how to prove this formally .we now prove injectivity .with h_k^r : = ( ) ^r(1-)^t - r g_k^r : = ( h_k^r - h_k-1^r ) we can write as [ eqrsumg ] r_k^s p(i_s = k ) = _ r = s^t g_k^r einstein s sum convention will be convenient in the following argument : when an index occurs repeatedly in the multiplication of two objects , a sum over the index over its full range is implicitly understood , e.g. means . the lower - triangular matrix [ defv ] d_r^s:=\ { rcl 1 & & sr + 0 & & s > r . has the property that .using einstein s sum convention this allows us to rewrite as r_k^s = g_k^r d_r^s i.e. as a product of an matrix with a matrix .the `` inverse '' of is : [ defvinv ] d_k^i : = \ { rl 1 & k = i + -1 & i = k-1 + 0 & } = _ k , i-_k-1,i this is a matrix with on the primary diagonal ; on the diagonal that is below the primary diagonal ; and otherwise . itself can actually be decomposed into and and a pure diagonal matrix [ defc ] c_q^r = _ q , r comprised of the binomial coefficients : g_k^r = ( h_k^q - h_k-1^q)c_q^r = d_k^i h_i^q c_q^r ( note that is the inverse of an sized d matrix here ) .we can decompose further be using the binomial identity : h_i^q & = & ( ) ^q(1-)^t - q + & = & ( ) ^q_s=1^t - q()^t - q - s + & = & _ s=1^t - q(-)^t - q - s()^t - s + & = & _ p = q^t(-)^p - q()^p so , where is a matrix of monomials : [ defp ] p_i^p:=()^p , and is a lower - triangular matrix composed of various binomials : [ deff ] f_p^q:=\ { cl ( -)^p - q & qp + 0 & .putting everything together we have r_k^s = d_k^i p_i^p f_p^q c_q^r d_r^s the ( linear ) map is a polynomial in of degree ( at most ) .we can find its coefficients by rewriting [ vpeqqn ] d_k^i p_i^p & = & p_k^p - p_k-1^p = ( ) ^p-()^p + & = & _ l=1^p k^l-1(-)^p - l(1n)^p = v_k^l n_l^p where [ defq ] v_k^l & : = & k^l-1 , + [ defn ] n_l^p & : = & \ { cr ( -)^p - l(1n)^p & lp + 0 & .hence we get the alternative representation [ eqralt ] _k = r_k^s_s = v_k^l n_l^p f_p^q c_q^r d_r^s_s matrices , , and are lower - triangular matrices with 1 in the diagonal , and hence are invertible ( thus injective ) . is diagonal and upper triangular , both nowhere zero on the diagonal , hence invertible too .the first rows of map from a set of coefficients to the polynomial evaluated at . a degree polynomial like is uniquely determined by image points ( see appendix ) , hence is injective . similarly for p or exploit ( no summation ) .this proves that is injective . combining the map from to _k p(i = k ) = _ l=1^ta_l k^l-1 = v_k^l a_l v = v , with we get _ k = v_k^l t_l^s_s comparing this with and using injectivity of we see that [ deft ] t_l^s = n_l^p f_p^q c_q^r d_r^s which is injective , since , , , and are invertible . 
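The quantities entering the theorem are easy to tabulate numerically, which makes the polynomial structure visible without the closed-form matrices. A sketch (our names; numpy assumed) that builds the n-by-t matrix of seed-rank probabilities from the binomial expression quoted above and fits the induced winner distribution with a degree (t-1) polynomial:

```python
import numpy as np
from math import comb

def seed_rank_matrix(n, t):
    """R[k-1, s-1] = P(i_s = k), computed from
    P(i_s <= k) = sum_{r=s}^{t} C(t, r) (k/n)**r (1 - k/n)**(t - r)."""
    def cdf(s, k):
        p = k / n
        return sum(comb(t, r) * p**r * (1 - p)**(t - r) for r in range(s, t + 1))
    R = np.empty((n, t))
    for s in range(1, t + 1):
        for k in range(1, n + 1):
            R[k - 1, s - 1] = cdf(s, k) - cdf(s, k - 1)
    return R

n, t = 20, 3
R = seed_rank_matrix(n, t)
beta = np.array([0.1, 0.3, 0.6])
pk = R @ beta                         # P(i = k) for this tournament; sums to 1
coeffs, residuals = np.polyfit(np.arange(1, n + 1), pk, t - 1, full=True)[:2]
# residuals is numerically zero: P(i = k) is a polynomial of degree at most t-1
```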
given a polynomial rank scheme it is possible and easy ( using computer software ) to find if it is equivalent to a probabilistic tournament ( and get the corresponding parameters ) by applying the inverse of to .if the output satisfies the probability requirements , then it is indeed a probabilistic tournament .we now derive explicit expressions for the really interesting converse of map , which allows replacement of inefficient rank selections by equivalent efficient tournaments . from the last section we know that the inverse exists .[ thmttop]_the function , mapping polynomial coefficients to tournament parameters is linear _ s=_l=1^tt_s^l a_l , v = t where matrix , with , , , defined in , , , .-polynomial ranking can be implemented as an -tournament if and only if , ._ in the following and respectively denote the upper submatrix of and .the inverse matrices are as follows [ defcinv ] c_r^q & : = & _ r , q/ + [ deffinv ] f_q^r & : = & rq + [ defninv ] n_p^l & = & p_p^d_^v_^l + [ defpinv ] p_l^&:= & n^lv_l^1 the inverse of the diagonal matrix is obvious .the expression for immediately follows from ( no summation ) . for ( since then either or ) and for we have f_p^qf_q^r & = & _ q = r^p ( -)^p - q + & = & _ q = r^p ( -)^p - q = _p , r the first equality is by definition , the second equality is a simple reshuffling of factorials , and the last equality follows from the well - known binomial identity for .this proves that is the inverse of .unfortunately we were not able to invert directly , although seems similar to ( the transpose of ) .so we used relation to invert in .but now we need the inverse of , which can be reduced by to the inverse of . the most difficult matrix to invert is .this special vandermonde matrix can be written as a product of a lower and upper - triangular matrix , whose inverses are : [ defqinv ] v_l^&:= & u_l^s l_s^ + l_s^&:= & ( -)^s-(s-)!(-1 ) !s + u_l^s & : = & s_s^(l ) the stirling numbers numbers are defined as the coefficients of the polynomial , i.e. by _l=0^s s_s^(l ) x^l = x!(x - s ) ! s_s^(l)=0 l > s there are many ways to compute , e.g. recursively by or directly . for get v_r^lv_l^ & = & _ l=1^t r^l-1_s=^l s_s^(l)(-)^s-(s-)!(-1 ) ! + & = & _ s=^t [ _ l=1^s r^l-1s_s^(l ) ] ( -)^s-(s-)!(-1 ) ! + & = & _ s=^t [ ( r-1) ...(r - s+1 ) ] ( -)^s-(s-)!(-1 ) ! + & = & _ s=^r(-)^s- = _ , r the case is similar .this shows that is the inverse of ( the first rows of ) . for we can compute the matrices by handthis list of ( reduced ) matrices is a useful sanity check for the reader s own implementation : ll f = , f = , & h = 1n^2^ + c = , c = 12 , & g = 1n^2^ + n = 1n^2 , & n = 2 + p = 1n^2^ , & p = 2 + v = ^ , & v = + t= 1n^2 , & t = 4 + u = , l = , & r = 1n^2^ we see that and coincide with , as they should .together this allows us to compute from and vice versa in time and from in time .once is known , tournament selection needs only time per winner selection .theorem [ thmttop ] does not give us conditions under which the resulting tournament parameters are valid .we look for such conditions so that we can reliably change / create tournament schemes in the more understandable set of polynomial rank schemes . 
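The "using computer software" route mentioned at the start of this passage can also be taken entirely numerically, sidestepping the closed-form inverse matrices: fit the polynomial coefficients of each seed distribution, assemble the coefficient map, and solve for the tournament bias. A sketch reusing `seed_rank_matrix` from the previous snippet (the tolerances are arbitrary choices of ours):

```python
def tournament_from_polynomial(a, n):
    """Given coefficients a = (a_1, ..., a_t) of a polynomial rank scheme
    P(i = k) = sum_l a_l k**(l-1), return the bias beta of an equivalent
    size-t probabilistic tournament, or None if the solution falls outside
    the probability simplex (i.e. no such tournament exists)."""
    t = len(a)
    ks = np.arange(1, n + 1)
    R = seed_rank_matrix(n, t)
    # column s of M: coefficients (a_1, ..., a_t) of the polynomial P(i_s = .)
    M = np.column_stack([np.polyfit(ks, R[:, s], t - 1)[::-1] for s in range(t)])
    beta = np.linalg.solve(M, np.asarray(a, dtype=float))
    if np.all(beta >= -1e-9) and abs(beta.sum() - 1.0) < 1e-6:
        return beta
    return None
```

For t = 2 this reproduces the linear-ranking range derived next; for t = 3 it can drive the Monte-Carlo coverage estimate reported in the table.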
without these conditions there can be no guarantee that whatever created would be a probabilistic tournament .let us first consider the case of linear ranking ( ) , p(i = k)=a_1+a_2 k we want to find the range of and for which this is a proper probability distribution in .the sum - constraint leads to 1 = _k=1^n p(i = k ) = a_1 n+a_2 12 n(n+1 ) [ a1froma2 ] a_1=1n[1 - 12 a_2(n^2+n ) ] next are the positivity constraints .a linear function is if and only if it is at its ends , i.e. and .inserting into these constraints yields : p(i=1 ) & & a_1+a_2 0 a_2 + p(i = n ) & & a_1+a_2n 0 a_2 - so the possible linear rank schemes are those with |a_2| a_1 [ linrang ] the example shows that size probabilistic -tournaments have . since , has range .as it should be , this is a subset of the possible linear rank schemes .hence the linear rankings that are probabilistic tournaments are those with |a_2| 2n^2 a_1 [ binrang ] this is slightly narrower than , i.e. there are some rankings that are not probabilistic tournaments . on the other hand , tends to 1 as grows , hence for large ( e.g. about 100 ) nearly all linear rankings can be translated into probabilistic tournaments .the coverage is good enough for all practical purposes .a probabilistic selection scheme is completely determined by , different correspond to different selection schemes , and every is a valid selection scheme .hence , is the set of all possible probabilistic selection schemes .the set of ( valid ) size tournament schemes is r_t : = \{v = rv : v_t } _n since is injective , this is a dimensional irregular simplex embedded in the dimensional simplex . the set of ( incl .invalid ) degree ( up to ) polynomial ranking schemes is v^t : = \{v = v : ^t } _n this is a -dimensional hyperplane .only in are valid , hence is the set of ( valid ) polynomial ranking schemes .the intersection of a simplex with a plane gives a closed , bounded , convex polytope , in our case of dimension .the krein - milman theorem says that for a closed , bounded , convex subset of with a finite number of extreme points ( = corners ) , is the convex hull of the extreme points of .hence the extreme points of completely characterize / define the set .if / since we are not concerned with the covering of in itself , we can study the covering in the lower - dimensional polynomial coefficient space .the set = polytope of all polynomial coefficients that lead to valid selection probabilities is v_n : = \{^t : v_n } while the set = simplex of coefficients reachable by tournaments is t_t : = \ { = tv : v_t } v_n these sets are the images of and the simplex under and respectively. these maps are injective ( section [ secptot ] ) so and are completely determined by their extreme points .the extreme points of are just the conventional basis vectors , so is the convex hull of .the polytope can be quite complex , and finding the extreme points daunting .this is essentially what we did for the case in the above paragraphs .we estimated the proportion of degree polynomials covered by for various using a monte - carlo algorithm case was calculated directly from and ] ( table [ areacov ] ) .it shows that for , practically all linear rank schemes are probabilistic tournaments .nothing concrete can be concluded about the coverage for .table [ areacov ] only suggests that the number of degree polynomials equivalent to -sized tournaments decreases as increases . 
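The coverage figures in the table can be re-estimated with a crude Monte-Carlo sweep. The sketch below treats only the quadratic case (t = 3), fixes a_1 by normalisation as in the plane equation used for the figures, and samples (a_2, a_3) from an arbitrary box of our choosing, so it will not reproduce the table's numbers exactly (the sampling measure matters).

```python
def quadratic_coverage(n, trials=5000, seed=0):
    """Estimate the fraction of valid quadratic rank schemes that are
    size-3 probabilistic tournaments.  Validity means P(i = k) >= 0 for
    every rank k; the box for (a_2, a_3) is an assumption of this sketch."""
    rng = np.random.default_rng(seed)
    ks = np.arange(1, n + 1)
    valid = covered = 0
    for _ in range(trials):
        a3 = rng.uniform(-2 / n**3, 2 / n**3)
        a2 = rng.uniform(-2 / n**2, 2 / n**2)
        a1 = (1 - a2 * ks.sum() - a3 * (ks**2).sum()) / n
        pk = a1 + a2 * ks + a3 * ks**2
        if np.all(pk >= 0):
            valid += 1
            if tournament_from_polynomial([a1, a2, a3], n) is not None:
                covered += 1
    return covered / valid if valid else float("nan")
```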
in the case we can extend our knowledge by finding graphically .the restriction means that the coefficient , is completely determined by and .-12 n(n+1)a_2 ) [ t3plane ] hence is a 2 dimensional hyperplane . for each defines a set of halfspaces : ; is their intersection ( over ) restricted to the plane given by . is simply a filled triangle with corners .comparison with ( figures [ pic3 ] , [ pic20 ] and [ pic300 ] ) suggests that the coverage of is stable for .hence for large populations about a third of the quadratic polynomials can be written as size- probabilistic tournaments . in practice ,selection schemes with probability monotonically increasing with fitness are used .so not the whole of is interesting , but only the subset of monotonically increasing or possibly decreasing probabilities on ( light grey in figures [ pic3 ] , [ pic20 ] and [ pic300 ] ) .the remainder of is composed of schemes that favour the middle ranks or both high and low ranked individuals ( dark grey ) .any polynomial scheme is a parabola , so it is symmetric about it s stationary point , .hence is monotonic on if and only if lies outside the interval .i.e. x_st.pt.=(_i=1^n i^2 ) 1 + 12 _ or _ x_st.pt.n-12 figure [ pic300 ] suggests that these regions of usefulness effectively lie entirely in for .hence for sufficiently large the most useful degree 2 polynomial schemes are perfectly reproduced by some probabilistic tournament .an example of a less applicable selection scheme is the polynomial given by and ( which lies in the dark grey region ) .it favours both high ranks and low ranks ( figure [ notint ] ) and any algorithm using this scheme will spend half of the time searching in the wrong place .however it is still usable ( like in fitness uniform selection ) .the points , ... are extreme points of .they indicate that the range of values is significantly smaller than the range of ( which in turn has a smaller range than ) . being the intersection of a finite number of halfspaces and planes means its boundary is actually a series of straight lines . appears curved in figures [ pic20 ] and [ pic300 ] simply due to the many halfspaces that are involved .& 0.7500 & 0.9000 & 0.9500 & 0.9900 & 0.9967 + & 0.270 & & 0.348 & 0.342&0.332 + & & 0.12 & 0.15 & 0.16 & + & & 0.02 & & & + ( 80,74)(-4,0 ) ( 0,0)[ _ the shaded region is the set of possible polynomials , whilst the light grey area is the set of the most useful polynomials .the triangle is the boundary of the set that can be written as tournaments . at : . at : . at : . at : ._ , title="fig:",width=340 ] ( 12.8,61) ( 13.8,63)(1,1)5 ( 71,22) ( 71,23)(-4,-1)5 ( 71,13) ( 71,14)(-4,0)5 ( 35,26) ( 37,28)(1,1)5 ( 38,0) ( -4,40) ( 87,79)(-5,-3 ) ( 0,0)[, ] _ the shaded region is the set of possible polynomials , whilst the light grey area is the set of the most useful polynomials .the triangle is the boundary of the set that can be written as tournaments . at : . at : . at : ._,title="fig:",width=309 ] ( 12,64) ( 13,66)(1,1)5 ( 73.5,12.5) ( 73.5,13.5)(-4,1)5 ( 14.5,48) ( 16.5,50)(4,1)5 ( 41,-3) ( -5,40) ( 85,81)(-3,-2 ) ( 0,0)[, ] _ the shaded region is the set of possible polynomials , whilst the light grey area is the set of the most useful polynomials .the triangle is the boundary of the set that can be written as tournaments . at : . at : . 
at : ._,title="fig:",width=325 ] ( 6,75) ( 18,71) ( 17.5,71)(-4,-1)5 ( 73,10) ( 73,11)(-4,0)6 ( 9,44.2) ( 10,46.2)(1,2)2 ( 38,-2) ( -3,40) [ n=300 ] the polynomial .this is an example of a usable quadratic polynomial that is not equivalent to a probabilistic tournament . ]individuals with the same fitness lead to ties in the ranking . if we break ties ( arbitrarily but consistently ) , our theorems still apply .the disadvantage is that the selection probability for two individuals with the same fitness may not be the same .we can fix this problem by breaking ties ( uniformly ) at random .for instance , given a population of 3 individuals with two of them having the same fitness , this results in effective selection probabilities and .investigation of the set of possible polynomials with degree will be helpful for those applications requiring higher selective pressures .furthermore , finding the proportion that are equivalent to probabilistic tournaments may provide a reliable method for making high - degree polynomial rank schemes more efficient .tournaments of size are significantly faster than ranking schemes , so it would be beneficial to obtain a thorough understanding of how many polynomial rank schemes are equivalent to sized probabilistic tournaments .we have found a strong connection between polynomial ranking and probabilistic tournament selection .we derived an explicit operator ( [ deft ] ) that maps any probabilistic tournament to its equivalent polynomial ranking scheme , which is unique and always exists .polynomial rank schemes thus encompass linear ranking and deterministic ( normal ) tournament selection , leaving designers with one less selection method ( but more parameters ) to worry about .unfortunately , turning polynomial rank schemes into equivalent probabilistic tournaments is not so straightforward .only about a third of the possible quadratic polynomials can be written as size- probabilistic tournaments .however , nearly all linear rank schemes have an equivalent size- probabilistic tournament . hence nearly allcan be made faster by simply rewriting the scheme as a probabilistic tournament .furthermore , almost all the practical quadratic polynomials are equivalent to some tournament .this is a good indication for the investigation of .let be the vector of image points for some of a polynomial with coefficient vector .in particular we have _ = p(x _ ) = _ l=1^ta_l v_^l , v_^l = x_^l-1 if matrix is invertible , the polynomial ( coefficients ) would be uniquely defined by , which is what we set out to prove .we now show that is invertible .define the polynomials of degree p_s(x ) = _ r=1rs^tx - x_rx_s - x_r = _l=1^t a_l^s x^l-1 expanding the product in the numerator defines the coefficients . on get _ s = p_s(x _ ) = _ l=1^n v_^l a_l^s hence is the inverse of . by explicitly expanding can get an explicit expression for , which is unfortunately pretty useless .t. bck .selective pressure in evolutionary algorithms : a characterization of selection mechanisms . in _ proceedings of the first ieee conference on evolutionary computation _ , volume 1 , pages 5762 , orlando , fl , usa , 1994 .ieee world congress on computational intelligence .t. blickle and l. thiele . a comparison of selection schemes used in genetic algorithms .tik - report 11 , tik institut fur technische informatik und kommunikationsnetze , computer engineering and networks laboratory , eth , swiss federal institute of technology , gloriastrasse 35 , 8092 zurich , switzerland , 1995 .d. e. 
goldberg and k. deb . a comparative analysis of selection schemes used in genetic algorithms . in g.j. e. rawlings , editor , _ foundations of genetic algorithms _ , pages 6993 .morgan kaufmann , san mateo , 1991 .m. hutter .mplementierung eines klassifizierungs - systems .master s thesis , theoretische informatik , tu mnchen , 1991 .72 pages with c listing , in german , http://www.idsia.ch//ai/pcfs.htm .w. wieczorek and z.j .selection schemes in evolutionary algorithms . in _ intelligent information systems 2002 , proceedings of the iis2002 symposium , sopot , poland , june 3 - 6 , 2002 _ ,advances in soft computing , pages 185194 .physica - verlag , 2002 .
|
* Crucial to an evolutionary algorithm's performance is its selection scheme. We mathematically investigate the relation between polynomial rank and probabilistic tournament methods, which are (respectively) generalisations of the popular linear ranking and tournament selection schemes. We show that every probabilistic tournament is equivalent to a unique polynomial rank scheme. In fact, we derive explicit operators for translating between these two types of selection. Of particular importance is that most linear and most practical quadratic rank schemes are probabilistic tournaments. *
|
sponsored search advertising is a significant growth market and is witnessing rapid growth and evolution .the analysis of the underlying models has so far primarily focused on the scenario , where advertisers / bidders interact directly with the auctioneers , i.e. , the search engines and publishers .however , the market is already witnessing the spontaneous emergence of several categories of companies who are trying to mediate or facilitate the auction process .for example , a number of different adnetworks have started proliferating , and so have companies who specialize in reselling ad inventories .hence , there is a need for analyzing the impact of such incentive driven and for - profit agents , especially as they become more sophisticated in playing the game . in the present work ,our focus is on the emergence of market mechanisms and for - profit agents motivated by capacity constraint inherent to the present models .for instance , one natural constraint comes from the fact that there is a limit on the number of slots available for putting ads , especially for the popular keywords , and a significant pool of advertisers are left out due to this capacity constraint .we ask whether there are sustainable market constructs and mechanisms , where new players interact with the existing auction mechanisms to increase the overall capacity .in particular , lead - generation companies who bid for keywords , draw traffic from search pages and then redirect such traffic to service / product providers , have spontaneously emerged .however , the incentive and equilibria properties of paid - search auctions in the presence of such profit - driven players have not been explored .we investigate key questions , including what happens to the overall revenue of the auctioneers when such mediators participate , what is the payoff of a mediator and how does it dependent on her quality , how are the payoffs of the bidders affected , and is there an overall value that is generated by such mechanisms . formally , in the current models , there are slots to be allocated among ( ) bidders ( i.e. the advertisers ) .a bidder has a true valuation ( known only to the bidder ) for the specific keyword and she bids . the expected _ click through rate _ ( ctr ) of an ad put by bidder when allocated slot has the form i.e. separable in to a position effect and an advertiser effect . s can be interpreted as the probability that an ad will be noticed when put in slot and it is assumed that . can be interpreted as the probability that an ad put by bidder will be clicked on if noticed and is refered as the _ relevance _ of bidder .the payoff / utility of bidder when given slot at a price of per click is given by and they are assumed to be rational agents trying to maximize their payoffs . as of now , google as well as yahoo ! uses schemes closely modeled as rbr(rank by revenue ) with gsp(generalized second pricing ) .the bidders are ranked according to and the slots are allocated as per this ranks . for simplicity of notation , assume that the bidder is the one allocated slot according to this ranking rule , then is charged an amount equal to .formal analysis of such sponsored search advertising model has been done extensively in recent years , from algorithmic as well as from game theoretic perspective . 
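A minimal sketch of the rank-by-revenue allocation and generalized second pricing just described, assuming the standard rule in which the k-th ranked winner pays b_{k+1} q_{k+1} / q_k per click; names are ours, ties are ignored, and a winner with nobody ranked below it is assumed to pay nothing.

```python
def rbr_gsp(bids, relevances, thetas):
    """Rank-by-revenue allocation with generalized second pricing.
    bids[i] and relevances[i] are the bid and relevance (quality) of
    advertiser i; thetas[j] is the position effect of slot j.
    Returns one (advertiser, price_per_click) pair per allocated slot."""
    order = sorted(range(len(bids)),
                   key=lambda i: bids[i] * relevances[i], reverse=True)
    allocation = []
    for pos in range(min(len(thetas), len(order))):
        i = order[pos]
        if pos + 1 < len(order):
            nxt = order[pos + 1]
            price = bids[nxt] * relevances[nxt] / relevances[i]
        else:
            price = 0.0
        allocation.append((i, price))
    return allocation
```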
in the following section , we propose and study a model whereinthe additional capacity is provided by a for - profit agent who competes for a slot in the original auction , draws traffic and runs its own sub - auction for the added slots .we discuss the cost or the value of capacity by analyzing the change in the revenues due to added capacity as compared to the ones without added capacity .in this section , we discuss our model motivated by the capacity constraint , which can be formally described as follows : * * primary auction ( -auction ) :* mediators participate in the original auction run by the search engine ( called _ -auction _ ) and compete with advertisers for slots ( called _ primary slots _ ) . for the agent ( an advertiser or a mediator ) , let and denote her true valuation and the bid for the -auction respectively .further , let us denote by where is the relevance score of agent for -auction .let there are mediators and there indices are respectively . +* * secondary auctions ( -auctions ) : * * * * secondary slots : * suppose that in the primary auction , the slots assigned to the mediators are respectively , then effectively , the additional slots are obtained by forking these _ primary slots _ in to additional slots respectively , where for all . by forking we mean the following : on the associated landing page the mediator puts some information relevant to the specific keyword associated with the -auction along with the space for additional slots .let us call these additional slots as _ secondary slots_. + * * * properties of secondary slots and _ fitness _ of the mediators : * for the mediator , there will be a probability associated with her ad to be clicked if noticed , which is actually her relevence score and the position based ctrs might actually improve say by a factor of .this means that the position based ctr for the secondary slot of mediator in modeled as for and otherwise .therefore , we can define a _ for the mediator , which is equal to . thus corresponding to the primary slot ( the one being forked by the mediator ) ,the _ effective _ position based ctr for the secondary slot obtained is where note that , however could be greater than . + * * * -auctions : * mediators run their individual sub - auctions ( called _ -auctions _ ) for the secondary slots provided by them . for an advertiserthere is another type of valuations and bids , the ones associated with -auctions . for the agent ,let and denote her true valuation and the bid for the -auction of mediator respectively .in general , the two types of valuations or bids corresponding to -auction and the -auctions might differ a lot .we also assume that and whenever is a mediator .further , for the advertisers who do not participate in one auction ( -auction or -auction ) , the corresponding true valuation and the bid are assumed to be zero . also , for notational convenience let us denote by where is the relevance score of agent for the -auction of mediator . + * * * payment models for -auctions : * mediators could sell their secondary slots by impression ( ppm ) , by pay - per - click ( ppc ) or pay - per - conversion(ppa ) . in the following analysis , we consider ppc . +* * freedom of participation : * advertisers are free to bid for primary as well as secondary slots . 
+ * * true valuations of the mediators : * the true valuation of the mediators are derived from the expected revenue ( total payments from advertisers ) they obtain from the corresponding -auctions .for simplicity , let us assume participation of a single mediator and the analysis involving several mediators can be done in a similar fashion. for notational convenience let the -auction as well as the -auction is done via _ rbr _ with _ gsp _ , i.e. the mechanism currently being used by google and yahoo ! , and the solution concept we use is _ symmetric nash equilibria(sne)_ .suppose the allocations for the -auction and -auction are and respectively .then the payoff of the agent from the combined auction ( -auction and -auction together ) is where from the mathematical structure of payoffs and strategies available to the bidders wherein two different uncorrelated values can be reported as bids in the two types of auctions independently of each other , it is clear that the equilibrium of the combined auction game is the one obtained from the equilibria of the -auction game and the -auction game each played in isolation . in particular at _ sne_ , and which implies that ( see eq .( [ eq : effec - ctr ] ) ) where is the true valuation of the mediator multiplied by her relevance score as per our definition , which is the expected revenue she derives from her -auction _ ex ante _ given a slot in the -auction and therefore the mediator s payoff at sne is this section , we discuss the change in the revenue of the auctioneer due to the involvement of the mediator .the revenue of the auctioneer with the participation of the mediator is and similarly , the revenue of the auctioneer without the participation of the mediator is therefore , thus revenue of the auctioneer always increases by the involvement of the mediator . as we can note from the above expression , smaller the better the improvement in the revenue of the auctioneer . to ensure a smaller value of , the mediator s valuation which is the expected payments that she obtains from the -auction should be better , therefore fitness factor should be very good .there is another way to improve her true valuation .the mediator could actually run many subauctions related to the specific keyword in question .this can be done as follows : besides providing the additional slots on the landing page , the information section of the page could contain links to other pages wherein further additional slots associated with a related keyword could be provided . with this variation of the model , a better value of could possibly be ensured leading to a win - win situation for everyone . increasing the capacity viamediator improves the revenue of auctioneer .now let us turn our attention to the change in the efficiency and as we will prove below , the efficiency always improves by the participation of the mediator . 
increasing the capacity via mediator improves the efficiency .clearly , for the newly accommodated advertisers , that is the ones who lost in the -auction but win a slot in -auction , the payoffs increase from zero to a postitive number .now let us see where do these improvements in the revenue of the auctioneer , in payoffs of newly accommodated advertisers , and in the efficiency come from ?only thing left to look at is the change in the payoffs for the advertisers who originally won in the -auction , that is the winners when there was no mediator .the new payoff for ranked advertiser in -auction is where is her payoff from the -auction .also , for , her payoff when there was no mediator is similarly , for , her payoff when there was no mediator is therefore , in general we have , thus , for the ranked winning advertiser from the auction without mediation , the revenue from the -auction decreases by and she faces a loss unless compensated for by her payoffs in -auction .further , this payoff loss will be visible only to the advertisers who joined the auction game before the mediator and they are likely to participate in the -auction so as to make up for this loss .thus , via the mediator , a part of the payoffs of the originally winning advertisers essentially gets distributed among the newly accommodated advertisers . however , when the mediator s fitness factor is very good , it might be a win - win situation for everyone .depending on how good the fitness factor is , sometimes the payoff from the -auction might be enough to compensate for any loss by accommodating new advertisers .let us consider an extreme situation when and .gain _ in payoff for the advertiser is therefore as long as the advertiser faces no net loss in payoff and might actually gain .in the present work , we have studied the emergence of diversification in the adword market triggered by the inherent capacity constraint . we proposed and analyzed a model where additional capacity is created by a for - profit agent who compete for a slot in the original auction ,draws traffic and runs its own sub - auction .our study potentially indicate a -fold diversification in the adword market in terms of ( i ) the emergence of new market mechanisms , ( ii ) emergence of new for - profit agents , and ( iii ) involvement of a wider pool of advertisers .therefore , we should expect the internet economy to continue to develop richer structure , with room for different types of agents and mechanisms to coexist .in particular , capacity constraints motivates the study of yet another model where the additional capacity is created by the search engine itself , essentially acting as a mediator itself and running a single combined auction .this study will be presented in an extended version of the present work .100 g. aggarwal , a. goel , r. motwani , truthful auctions for pricing search keywords , ec 2006 . b. edelman , m. ostrovsky , m. schwarz , internet advertising and the generalized second price auction : selling billions of dollars worth of keywords , american economic review 2007 .s. lahaie , an analysis of alternative slot auction designs for sponsored search , ec 2006 .s. lahaie , d. pennock , revenue analysis of a family of ranking rules for keyword auctions , ec 2007 .m. mahdian , h. nazerzadeh , a. saberi , allocating online advertisement space with unreliable estimates , ec 2007 a. mehta , a. saberi , u. vazirani , v. vazirani , adwords and generalized on - line matching , focs 2005 .h. 
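The comparisons in this section are made at SNE bids; the toy computation below only illustrates the bookkeeping with fixed, invented numbers, reusing the `rbr_gsp` sketch from earlier and treating the mediator as one extra bidder whose stand-in bid proxies the revenue it expects from its sub-auction.

```python
def auctioneer_revenue(bids, relevances, thetas):
    """Expected payment per impression under the rbr_gsp sketch above:
    sum over slots of theta_j * q_i * price_i (expected clicks times price)."""
    return sum(thetas[pos] * relevances[i] * price
               for pos, (i, price) in enumerate(rbr_gsp(bids, relevances, thetas)))

thetas = [1.0, 0.6, 0.3]                       # three primary slots
bids, qs = [5.0, 4.0, 3.0, 2.0], [0.9, 0.8, 0.7, 0.6]
base = auctioneer_revenue(bids, qs, thetas)
# add the mediator as a fifth bidder; its bid stands in for the sub-auction revenue
with_mediator = auctioneer_revenue(bids + [3.5], qs + [0.85], thetas)
# with fixed bids, adding a bidder can only raise the score of the bidder ranked
# just below each slot, so the auctioneer's revenue weakly increases
```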
varian , position auctions , to appear in international journal of industrial organization .
|
One natural constraint in the sponsored search advertising framework arises from the fact that there is a limit on the number of available slots, especially for popular keywords, and as a result a significant pool of advertisers is left out. We study the emergence of diversification in the adword market triggered by such capacity constraints, in the sense that new market mechanisms, as well as new for-profit agents, are likely to emerge to combat or to profit from the opportunities created by shortages in ad-space inventory. We propose a model where the additional capacity is provided by for-profit agents (or mediators), who compete for slots in the original auction, draw traffic, and run their own sub-auctions. The quality of the additional capacity provided by a mediator is measured by its _fitness_ factor. We compute revenues and payoffs for all the different parties at a _symmetric Nash equilibrium_ (SNE) when the mediator-based model is operated by a mechanism currently being used by Google and Yahoo!, and then compare these numbers with those obtained at a corresponding SNE for the same mechanism but without any mediators involved in the auctions. Such calculations allow us to determine the value of the additional capacity. Our results show that the revenue of the auctioneer, as well as the social value (i.e. efficiency), always increases when mediators are involved; moreover, even the payoffs of _all_ the bidders will increase if the mediator has a high enough fitness. Thus our analysis indicates that there are significant opportunities for diversification in the internet economy, and we should expect it to continue to develop richer structure, with room for different types of agents and mechanisms to coexist.
|
hpc system administration has to satisfy two seemingly contradictory demands58 on one hand administrators seek stability , which leads to a conservative approach to software management , and on the other hand users demand recent tool chains and huge scientific software stacks .in addition , users often need different versions and different variants of a given software package . to satisfy both , support teams end up playing the role of `` distributionmaintainers''58 they build and install tool chains , libraries , and scientific software packages manually multiple variants thereof and make them available _ via _ `` environment modules'' , which allows users to pick the specific packages they want .unfortunately , software is often built and installed in an _ ad hoc _ fashion , leaving users little hope of redeploying the same software environment on another system .worse , support teams occasionally have to remove installed software or to upgrade it in place , which means that users may eventually find themselves unable to reproduce their software environment , _ even on the same system_. recently - developed tools such as easybuild and spack address part of the problem by automating package builds , supporting non - root users , and adding facilities to create package variants . however , these tools fall short when it comes to build reproducibility. first , build processes can trivially refer to tools or libraries already installed on the system .second , the _ ad hoc _ naming conventions they rely on to identify builds fail to capture the directed acyclic graph ( dag ) of dependencies that led to this particular build . gnu guix is a general - purpose package manager that implements the functional package management paradigm pioneered by nix .many of its properties and features make it attractive in a multi - user hpc context58 per - user profiles , transactional upgrades and roll - backs , and , more importantly , a controlled build environment to maximize reproducibility .details our motivations .describes the functional package management paradigm , its implementation in guix , its impact on reproducibility , and how it can be applied to hpc systems . gives concrete use cases where guix empowers users while guaranteeing reproducibility and sharing , while discusses limitations and remaining challenges .finally , compares to related work , and concludes .recent work on reproducible research insufficiently takes software environment reproducibility into account . for example , the approach for verifiable computational results described in focuses on workflows and conventions but does not mention the difficulty of reproducing full software environments .likewise , the new replicated computational results ( rcr ) initiative of the acm transactions on mathematical software acknowledges the importance of reproducible results , but does not adequately address the issue of software environments , which is a prerequisite . the authors of propose a methodology for reproducible research experiments in hpc . to address the software - environment reproducibility problem they propose two unsatisfying approaches58 one is to write downthe version numbers of the dependencies being used , which is insufficient , and the other is to save and reuse full system images , which makes verifiability impractical peers would have to download large images and would be unable to combine them with their own software environment . yet , common practices on hpc systems hinder reproducibility . 
for understandable stability reasons , hpc systems often run old gnu / linux distributions that are rarely updated .thus , packages provided by the distribution are largely dismissed .instead support teams install packages from third - party repositories but then they clobber the global ` /usr ` prefix , which sysadmins may want to keep under control , or install them from source by themselves and make them available through environment modules .modules allow users to choose different versions or variants of the packages they use without interfering with each other .however , when installed software is updated in place or removed , users suddenly find themselves unable to reproduce the software environment they were using .given these practices , reproducing the exact same software environment on a _ different _ hpc system seems out of reach .it is nonetheless a very important property58 it would allow users to assess the impact of the hardware on the software s performance something that is very valuable in particular for developers of run - time systems and it would allow other researchers to reproduce experiments on their system .essentially , by deploying software and environment modules , hpc support teams find themselves duplicating the work of gnu / linux distributions , but why is that ?historical package managers such as apt and rpm suffer from several limitations .first , package binaries that every user installs , such as ` .deb ` files , are actually built on the package maintainer s machine , and details about the host may leak into the binary that is uploaded a shortcoming that is now being addressed ( see . ) second , while it is in theory possible for a user to define their own variant of a package , as is often needed in hpc , this is often difficult in practice .users of rpm - based systems , for example , may be able to customize a ` .spec ` file to build a custom , relocatable rpm package , but only the administrator can install the package alongside its dependencies and register it in the central ` yumdb ` package database .the lower - level ` rpm ` tool can use a separate package registry , which could be useful for unprivileged users ; however rpm package databases can not be composed , so users would need to manually track down and register the complete graph of dependencies , which is impractical at best .third , these tools implement an _ imperative _ and _ stateful _ package management model .it is imperative in the sense that it modifies the set of available packages in place .for example , switching to an alternative mpi implementation , or upgrading the openmp run - time library means that suddenly all the installed applications and libraries start using them .it is stateful in the sense that the system state after a package management operation depends on its previous state .namely , the system state at a given point in time is the result of the series of installation and upgrade operations that have been made over time , and there may be no way to reproduce the exact same state elsewhere .these properties are a serious hindrance to reproducibility ._ functional paradigm ._ functional package management is a discipline that transcribes the functional programming paradigm to software deployment58 build and installation processes are viewed as pure functions in the mathematical sense whose result depends exclusively on the inputs , and their result is a value that is , an immutable directory .since build and installation processes are pure functions , their 
results can effectively be `` cached '' on disk .likewise , two independent runs of a given build process for a given set of inputs should return the same value_i.e ._ , bit - identical files .this approach was first described and implemented in the nix package manager .guix reuses low - level mechanisms from nix to implement the same paradigm , but offers a unified interface for package definitions and their implementations , all embedded in a single programming language .an obvious challenge is the implementation of this paradigm58 how can build and install processes be viewed as pure ? to obtain that property , nix and guix ensure tight control over the build environment . in both cases ,build processes are started by a privileged daemon , which always runs them in `` containers '' as implemented by the kernel linux ; that is , they run in a chroot environment , under a dedicated user i d , with a well - defined set of environment variables , with separate name spaces for pids , inter - process communication ( ipc ) , networking , and so on .the chroot environment contains only the directories corresponding to the explicitly declared inputs .this ensures that the build process can not inadvertently end up using tools or libraries that it is not supposed to use .the separate name spaces ensure that the build process can not communicate with the outside world .although it is not perfect as we will see in , this technique gives us confidence that build processes can indeed be viewed as pure functions , with reproducible results .each build process produces one or more files in directories stored in a common place called _ the store _ , typically the ` /gnu / store ` directory .each entry in ` /gnu / store ` has a name that includes a hash of _ all the inputs _ of the build process that led to it . by `` all the inputs '' , we really mean all of them58 this includes of course compilers and libraries , including the c library , but also build scripts and environment variable values .this is recursive58 the compiler s own directory name is a hash of the tools and libraries used to build , and so on , up to a set of pre - built binaries used for bootstrapping purposes which can in turn be rebuilt using guix .thus , for each package that is built , the system has access to the _ complete dag _ of dependencies used to build it .package recipes in guix are written in a domain - specific language ( dsl ) embedded in the scheme programming language .shows , as an example , the recipe to build the open mpi library . the ` package ` form evaluates to a _ package object _ , which is just a `` regular '' scheme value ; the ` define ` form defines the ` openmpi ` variable to hold that value . ...yields58 line 14 specifies that the package is to be built according to the gnu standards_i.e .the well - known ` ./configure & & make & & make install ` sequence ( similarly , guix defines ` cmake - build - system ` , and so on . )the ` inputs ` field on line 15 specifies the direct dependencies of the package .the field refers to the ` hwloc ` , ` gfortran-4.8 ` , and ` pkg - config ` variables , which are also bound to package objects ( their definition is not shown here . 
) it would be inconvenient to specify all the standard inputs , such as make , gcc , binutils so these are implicit here ; as it compiles package objects to a lower - level intermediate representation , ` gnu - build - system ` automatically inserts references to specific package objects for gcc , binutils , etc .since we are manipulating `` normal '' scheme objects , we can use the api of guix to query those package objects , as illustrated with the code in , which queries the name and version of the direct and indirect dependencies of our package . with that definition in place , running ` guix build openmpi `returns the directory name ` /gnu/ store / rmnib3ggm0dq32ls160ja882vanb69fi - openmpi-1.8.1 ` .if that directory did not already exist , the daemon spawns the build process in its isolated environment with write access to this directory .of course users never have to type these long ` /gnu / store ` file names .they can install packages in their _ profile _ using the ` guix package ` command , which essentially creates symbolic links to the selected ` /gnu / store ` items . by default ,the tree of symbolic links is rooted at ` _ { \mbox{\char126}}/.guix - profile ` , but users can also create independent profiles in arbitrary places of the file system .for instance , a user may choose to have gcc and open mpi in the default profile , and to populate another profile with clang and mpich2 .it is then a matter of defining the search paths for the compiler , linker , and other tools _ via _ environment variables .fortunately , guix keeps track of that and the ` guix package search - paths ` command returns all the necessary environment variable definitions in bourne shell syntax .for example , when both the gcc tool chain and open mpi are installed , the command returns definitions for the ` path ` , ` cpath ` , and ` library_path ` environment variables , and these definitions can be passed to the ` eval ` shell built - in command .we explore practical use cases where guix improves experimentation reproducibility for a user of a given system , supports the deployment of complex software stacks , allows a software environment to be replicated on another system , and finally allows fine customization of the software environment .one of the key features of guix and nix is that they securely permit unprivileged users to install packages in the store . to build a package , the ` guix ` commands connect to the build daemon , which then performs the build ( if needed ) on their behalf , in the isolated environment .when two users build the exact same package , both end up using the exact same `/gnu / store ` file name , and storage is shared .if a user tries to build , say , a malicious version of the c library , then the other users on the system will not use it , simply because they can not guess its ` /gnu/ store ` file name unless they themselves explicitly build the very same modified c library .guix is deployed at the max delbrck center for molecular medicine ( mdc ) , berlin , where the store is shared among 250 cluster nodes and an increasing number of user workstations .it is now gradually replacing other methods of software distribution , such as statically linked binaries on group network shares , relocatable rpms installed into group prefixes , one - off builds on the cluster , and user - built software installed in home directories .the researchers use tens of bioinformatics tools as well as frameworks such as biopython , numpy , scipy , and sympy . 
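referring back to the ` /gnu / store ` naming scheme described above , the following python sketch shows how a store name could be derived from a recursive hash of all declared inputs ; the actual derivation hashing used by nix and guix is more involved , and the package data below is made up for illustration .

```python
# simplified illustration of hash-based store names (not guix's real algorithm).
import hashlib

def store_name(pkg, inputs):
    # the hash covers the build system, the build environment, and the *store
    # names* of all inputs, so it is recursive: changing any dependency
    # changes the resulting name as well.
    h = hashlib.sha256()
    h.update(pkg["builder"].encode())
    for var, value in sorted(pkg["env"].items()):
        h.update(f"{var}={value}".encode())
    for dep in sorted(inputs):
        h.update(dep.encode())
    return f"/gnu/store/{h.hexdigest()[:32]}-{pkg['name']}-{pkg['version']}"

hwloc = store_name({"name": "hwloc", "version": "1.10.1",
                    "builder": "gnu-build-system", "env": {}}, inputs=[])
openmpi = store_name({"name": "openmpi", "version": "1.8.1",
                      "builder": "gnu-build-system", "env": {}}, inputs=[hwloc])
print(openmpi)   # e.g. /gnu/store/<hash>-openmpi-1.8.1 ; rebuilding hwloc with
                 # different inputs would give openmpi a different name too
```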
the functional packaging approachproved particularly useful in the ongoing efforts to move dozens of users and their custom software environments from an older cluster running ubuntu to a new cluster running a version of centos , because software packaged with guix does not depend on any of the host system s libraries and thus can be used on very different systems without any changes to the packages .research groups now have a shared profile for common applications , whereas individual users can manage their own profiles for custom software , legacy versions of bioinformatics tools to reproduce published results , bleeding - edge tool chains , or even for complete workflows .guix supports two ways to manage a profile .the first one is to make transactions that add , upgrade , or remove packages in the profile58 ` guix package install openmpi remove mpich2 ` installs open mpi and removes mpich2 in a single transaction that can be rolled back .the second approach is to _ declare _ the desired contents of the profile and make that effective58 the user writes in a file a code snippet that lists the requested packages ( see ) and then runs ` guix package manifest = my - packages.scm ` .this declarative profile management makes it easy to replicate a profile , but it is symbolic58 it uses whatever package objects the variables are bound to ( ` gnu - make ` , ` gcc - toolchain ` , etc . ) , but these variables are typically defined in the ` ( gnu packages ) ` modules that guix comes with .thus the precise packages being installed depend on the version of guix that is available .specifying the git commit of guix in addition to the declaration in is all it takes to reproduce the exact same ` /gnu / store ` items .another approach to achieve bit - identical reproduction of a user s profile is by saving the contents of its transitive closure using ` guix archive export ` .the resulting archive can be transferred to another system and restored at any point in time using ` guix archive import ` .this should significantly facilitate experimentation and sharing among peers .our colleagues at inria in the hiepacs and runtime teams develop a complete linear algebra software stack going from sparse solvers such as pastix and dense solvers such as chameleon , to run - time support libraries and compiler extensions such as starpu ] and hwloc .while developers of simulations want to be able to deploy the whole stack , developers of solvers only need their project s dependencies , possibly several variants thereof . 
for instance , developers of chameleon may want to test their software against several versions of starpu , or against variants of starpu built with different compile - time options .finally , developers of the lower - level layers , such as starpu , may want to test the effect of changes they make on higher - level layers .this use case leads to two requirements58 that users be able to customize and non - ambiguously specify a package dag , and that they be able to reproduce any variant of their package dag .guix allows them to define variants ; the code for these variants can be stored in a repository of their own and made visible to the ` guix ` commands by defining the ` guix_package_path ` environment variable .shows an example of such package variants58 based on the pre - existing ` starpu ` variable , the first variant defines a package for a new starpu release candidate , simply by changing its ` source ` field , while the second variant adds the optional dependency on the simgrid simulator a variant useful to scheduling practitioners , but not necessarily to solver developers . these starpu package definitions are obviously useful to users of starpu58 they can install them with ` guix package -i starpu ` and similar commands .but they are also useful to starpu developers58 they can enter a `` pristine '' development environment corresponding to the dependencies given in the recipe by running ` guix environment starpu pure ` .this command spawns a shell where the usual ` path ` , ` cpath ` etc .environment variables are redefined to refer precisely to the inputs specified in the recipe .this amounts to creating a profile on the fly , containing only the tools and libraries necessary when developing starpu .this is notably useful when dealing with build systems that support optional dependencies . now that we have several starpu variants , we want to allow direct and indirect users to select the variant that they want .a simple way to do that is to write , say , a function that takes a ` starpu ` parameter and returns a package that uses it as its input as show in . to allow users to refer to one or the other variant at the command line , we use different values for the ` name ` field .this approach is reasonable when there is a small number of variants , but it does not scale to more complex dags . as an example , starpu can be built with mpi support , in which case chameleon also needs to be explicitly linked against the same mpi implementation .one way to do that is by writing a function that recursively adjusts the package labeled ` mpi ` in the ` inputs ` field of packages in the dag .no matter how complex the transformations are , a package object unambiguously represents a reproducible build process . in that sense, guix allows environments to be reproduced at different sites , or by different users , while still supporting users needing complex customization ._ privileged daemon . _ nix and guix address many of the reproducibility issues encountered in package deployment , and guix provides apis that can facilitate the development of package variants as is useful in hpc . 
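the recursive adjustment of the ` mpi ` input mentioned above can be sketched as follows , using plain python dictionaries rather than real guix package objects ; all names and fields here are ours and only illustrate the idea .

```python
# rewrite a package dag so that every package listing an "mpi" input uses the
# chosen mpi implementation, as discussed above for starpu and chameleon.
def with_mpi(pkg, mpi):
    inputs = {label: (mpi if label == "mpi" else with_mpi(dep, mpi))
              for label, dep in pkg["inputs"].items()}
    return {**pkg, "inputs": inputs}

mpich2    = {"name": "mpich2", "inputs": {}}
openmpi   = {"name": "openmpi", "inputs": {}}
starpu    = {"name": "starpu", "inputs": {"mpi": openmpi}}
chameleon = {"name": "chameleon", "inputs": {"mpi": openmpi, "starpu": starpu}}

chameleon_mpich = with_mpi(chameleon, mpich2)
# both chameleon itself and its starpu input now refer to mpich2, so the two
# libraries are guaranteed to be linked against the same mpi implementation.
```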
yet , to our knowledge , neither guix nor nix are widely deployed on hpc systems .an obvious reason that limits adoption is the requirement to have the build daemon run with root privileges without which it would be unable to use the linux kernel container facilities that allow it to isolate build processes and maximize build reproducibility .system administrators are wary of installing privileged daemons , and so hpc system users trade reproducibility for practical approaches . _cluster setup ._ all the ` guix ` commands are actually clients of the daemon .in a typical cluster setup , system administrators may want to run a single daemon on one specific node and to share ` /gnu / store ` among all the nodes . at the time of writing, guix does not yet allow communication with a remote daemon .for this reason , guix users at the mdc are required to manage their profiles from a specific node ; other nodes can use the profiles , but not modify them . allowing the ` guix ` commands to communicate with a remote daemon will address this issue . additionally , compute nodestypically lack access to the internet .however , the daemon needs to be able to download source code tarballs or pre - built binaries from external servers .thus , the daemon must run on a node with internet access , which could be contrary to the policy on some clusters ._ os kernel ._ by choosing not to use a full - blown vm and thus relying on the host os kernel , our system assumes that the kernel interface is stable and that the kernel has little or no impact on program behavior .while this may sound like a broad assumption , our experience has shown that it holds for almost all the software packages provided by guix . indeed , while applications may be sensitive to changes in the c library , only low - level kernel - specific user - land software is really sensitive to changes in the kernel .the build daemon itself relies on features that have been available in the kernel for several years ._ non - determinism ._ despite the use of isolated containers to run build processes , there are still a few sources of non - determinism that build systems of packages might use and that can impede reproducibility . in particular , details about the operating system kernel andthe hardware being used can `` leak '' to build processes .for example , the kernel linux provides system calls such as ` uname ` and interfaces such as ` /proc/ cpuinfo ` that leak information about the host ; independent builds on different hosts could lead to different results if this information is used .likewise , the ` cpuid ` instruction leaks hardware details . fortunately , few software packages depend on this information . 
yet ,the proportion of packages depending on it is higher in the hpc world .a notable example is the atlas linear algebra system , which fine - tunes itself based on details about the cpu micro - architecture .similarly , profile - guided optimization ( pgo ) , where the compiler optimizes code based on a profile gathered in a previous run , undermines reproducibility .running build processes in full - blown vms would address some of these issues , but with a potentially significant impact on build performance , and possibly preventing important optimization techniques in the hpc context ._ proprietary software ._ gnu guix does not provide proprietary software packages .unfortunately , proprietary software is still relatively common in hpc , be it linear algebra libraries or gpu support .yet , we see it as a strength more than a limitation . often , these `` black boxes '' inherently limit reproducibility how is one going to reproduce a software environment without permission to run the software in the first place ?what if the software depends on the ability to `` call home '' to function at all ?more importantly , we view reproducible software environments and reproducible science as a tool towards improved and shared knowledge ; developers who deny the freedom to study and modify their code work against this goal ._ reproducible builds ._ reproducible software environments have only recently become an active research area .one of the earliest pieces of work in this area is the vesta software configuration system .vesta provides a dsl that allows users to describe build operations , similar to nix .more recently , projects such as debian s reproducible , fedora s mock , or gitian have intended to improve reproducibility and verifiability of mainstream package distributions .google s recent bazel build tool relies on container facilities provided by the kernel linux and provides another dsl to describe build operations .reproducibility can be achieved with heavyweight approaches such as full operating system deployments , be it on hardware or in vms or containers .in addition to being resource - hungry , these approaches are coarse - grain and do not compose58 if two different vm / container images or `` software appliances '' provide useful features or packages , the user has to make a binary choice and can not combine the features or packages they offer . furthermore ,`` docker files '' , `` vagrant files '' , and kameleon `` recipes '' suffer from being too broad for the purposes of reproducing a software environment they are about configuring complete operating systems and from offering an inappropriate level of abstraction these recipes list commands to _ modify _ the state of the system image to obtain the desired state , whereas guix allows users to _ declare _ the desired environment in terms of software packages .lastly , the tendency to rely on complete third - party system images is a security concern . 
]building upon third - party binary images also puts a barrier on reproducibility58 users may have recipes to rebuild their own software from source , but the rest of the system is essentially considered as a `` black box '' , which , if it can be rebuilt from source at all , can only be rebuilt using a completely different tool set ._ hpc package management ._ in the hpc community , efforts have focused primarily on the automation of software deployment and the ability for users to customize their build environment independently of each other .the latter has been achieved by `` environment modules '' , a simple but efficient tool set that is still widely used today . build and deployment automation is more recent with the development of specialized package management tools such as easybuild and spack . both easybuild and spack have the advantage of being installable by unprivileged users since they do not rely on privileged components , unlike guix and nix .the downside is that they can not use the kernel s container facilities , which seriously hinders build reproducibility .when used in the user s home directories , each user may end up rebuilding the same compiler , libraries , etc ., which can be costly in terms of cpu , bandwidth , and disk usage .conversely , nix and guix support safe sharing of builds .easybuild aims to support multiple package variants , such as packages built with different compilers , or linked against different mpi implementations . to achieve that, it relies on directory naming conventions ; for instance , ` openmpi/1.7.3-gcc-4.8.2 ` contains packages built with the specified mpi implementation and compiler .such conventions fail to capture the full complexity of the dag and configuration space . for instance, the convention arbitrarily omits the c library , linker , or configuration flags being used .easybuild is tightly integrated with environment modules , which are familiar to most users of hpc systems .while modules provide users with flexible environments , they implement an imperative , stateful paradigm58 users run a sequence of ` module load ` and ` module unload ` commands that _ alter _ the current environment .this can make it much harder to reason about and reproduce an environment , as opposed to the declarative approaches implemented by ` guix package manifest ` and ` guix environment ` .like easybuild and similarly to guix , spack implements build recipes as first - class objects in a general - purpose language , python , which facilitates customization and the creation of package variants .in addition , spack provides a rich command - line interface that allows users to express variants similar to those discussed in .this appears to be very convenient for common cases , although there are limits to the expressivity and readability of such a compact syntax .functional package managers provide the foundations for reproducible software environments , while still allowing fine - grain software composition and not imposing high disk and ram costs . 
today, gnu guix comes with 2,060 packages , including many of the common hpc tools and libraries as well as around 50 bioinformatics packages .it is deployed on the clusters of the mdc berlin , and being discussed as one of the packaging options by the open bioinformatics foundation , a non - profit for the biological research community .we hope to see more hpc deployments of guix in the foreseeable future .gnu guix benefits from contributions by about 20 people each month .it is the foundation of the guix system distribution , a standalone , reproducible gnu / linux distribution .we would like to thank florent pruvost , emmanuel agullo , and andreas enge at inria and eric bavier at cray inc .for insightful discussions and comments on an earlier draft .we are grateful to the guix contributors who keep improving the system .
|
support teams of high - performance computing ( hpc ) systems often find themselves between a rock and a hard place : on one hand , they understandably administrate these large systems in a conservative way , but on the other hand , they try to satisfy their users by deploying up - to - date tool chains as well as libraries and scientific software . hpc system users often have no guarantee that they will be able to reproduce results at a later point in time , even on the same system , because software may have been upgraded , removed , or recompiled under their feet , and they have little hope of being able to reproduce the same software environment elsewhere . we present gnu guix and the functional package management paradigm and show how it can improve reproducibility and sharing among researchers with representative use cases .
|
there has been extensive research on learning probabilistic networks from data by maximizing some suitable scoring function . ( ) gave an efficient algorithm for the class of _ branchings _ , that is , directed forests with in - degree at most one ; the algorithm was discovered independently by ( ) , and it has been later simplified and expedited by others . ( ) showed that for general directed acyclic graphs , dags , the problem is np - hard even if the in - degree is at most two .motivated by this gap , ( ) asked for a network class that is more general than branchings yet admitting provably good structure - learning algorithms ; his findings concerning _ polytrees _ , that is , dags without undirected cycles , were however rather negative , showing that the optimization problem is np - hard even if the in - degree is at most two .given the recent advances in exact exponential algorithms in general ( see , e.g. , the book by ( ) ) , and in finding optimal dags in particular , it is natural to ask , whether `` fast '' exponential - time algorithms exist for finding optimal polytrees .for general dags the fastest known algorithms run in time within a polynomial factor of , where is the number of nodes .however , it is not clear , whether even these bounds can be achieved for polytrees ; a brute - force algorithm would visit each polytree one by one , whose number scales as the number of directed labelled trees .do significantly faster algorithms exist ? does the problem become easier if only a small number of nodes are allowed an in - degree larger than one ? in this work, we take a first step towards answering these questions by considering polytrees that differ from branchings by only a few arcs .more precisely , we study the problem of finding an optimal _ -branching _ , defined as a polytree that can be turned into a branching by deleting arcs .we make the standard assumption that the scoring function decomposes into a sum of local scores ; see the next section for precise definitions .we note that -branchings generalize branchings in a different direction than the tree - augmented naive bayes classifier ( tan ) due to ( ) .namely , in a tan the in - degree of each node is at most two , and there is a designated class node of in - degree zero , removing of which leaves a spanning tree ; the tree is undirected in the sense that the symmetric conditional mutual information is employed to score arcs .[ [ polynomial - time - result - for - k - branchings ] ] polynomial - time result for -branchings + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + our main result is an algorithm that finds an optimal in polynomial time for every constant .( see the next section for a formal definition of the problem . )our overall approach is straightforward : we search exhaustively over all possible sets of at most `` extra arcs '' , fix the guessed arcs , and solve the induced optimization problem for branchings . implementing this seemingly innocent algorithm, however , requires successful treatment of certain complications that arise when applying the existing matroid machinery for finding optimal branchings .in particular , one needs to control the interaction of the extra arcs with the solution from the induced subproblem . 
[ [ fixed - parameter - tractability ] ] fixed - parameter tractability + + + + + + + + + + + + + + + + + + + + + + + + + + + + our algorithm for the -branching is polynomial for fixed , but the degree of the polynomial depends on , hence the algorithm does not scale well in .we therefore investigate variants of the -branching problem that admit _ fixed - parameter tractability _ in the sense of ( ) : the running time bound is given by a polynomial whose degree is independent of the parameter , the parameter contributing a constant factor to the bound . in particular , we show that the -branching problem is fixed - parameter tractable if the set of arcs incident to nodes with more than one parent form a connected polytree with exactly one sink , and each node has a bounded number of potential parent sets .this result is interesting as we show that the -branching problem remains np - hard under these restrictions .we complement the fixed - parameter tractability result by showing that more general variants of the -branching problem are not fixed - parameter tractable , subject to complexity theoretic assumptions .in particular , we show that the -branching problem is not fixed - parameter tractable when parameterized by the _ number of nodes _ whose deletion produces a branching .a probabilistic network is a multivariate probability distribution that obeys a structural representation in terms of a directed graph and a corresponding collection of univariate conditional probability distributions . for our purposes , it is crucial to treat the directed graph explicitly , whereas the conditional probabilities will enter our formalism only implicitly .such a graph is formalized as a pair , where is the _ node set _ and is the _ arc set _ ; we identify the graph with the arc set when there is no ambiguity about the node set .a node is said to be a _parent _ of in the graph if the arc is in ; we denote by the set of parents of .when our interest is in the undirected structure of the graph , we may denote by the _ skeleton _ of , that is , the set of _ edges _ .for instance , we call a _ polytree _ if is acyclic , and a _ branching _ if additionally each node has at most one parent. 
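the mathematical symbols in the definitions above , and in the problem statement that follows , did not survive conversion ; the latex snippet below gives one plausible reconstruction of the notation , with symbols ( including the parameter k ) chosen by us rather than taken from the original .

```latex
% plausible reconstruction of the stripped notation (symbols are ours):
% a graph is a pair (N, A) with node set N and arc set A; the parent set of a
% node v is A_v = \{ u : uv \in A \}, and S(A) denotes the skeleton of A.
f(A) \;=\; \sum_{v \in N} f_v(A_v) \qquad \text{(decomposable score)}
\\[6pt]
\text{$k$-branching problem:}\quad
\max \Bigl\{\, f(A) \;:\; A \text{ is a polytree and } \exists\, D \subseteq A,\
|D| \le k,\ \text{every node has at most one parent in } A \setminus D \,\Bigr\}.
```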
when learning a probabilistic network from data it is customary to introduce a scoring function that assigns each graph a real - valued score that measures how well the graph fits the data . while there are plenty of alternative scoring functions , derived under different statistical paradigms and assumptions , the most popular ones share one important property : they are _ decomposable _ , that is , the score of a graph is a sum of `` local '' scores , one for each node and its parent set . the generic computational problem is to maximize the scoring function over some appropriate class of graphs given the local scoring functions as input . note that the score need not be a sum of any individual arc weights , and that the parent set may be empty . figure [ fig : score+branching ] shows a table representing a local scoring function , together with an optimal polytree . [ figure fig : score+branching : a table of local scores together with an optimal polytree on nodes n1 , ... , n7 with arcs n1->n3 , n1->n5 , n2->n5 , n3->n6 , n4->n6 , and n5->n7 . ] we study this problem by restricting ourselves to a graph class that is a subclass of polytrees but a superclass of branchings . we call a polytree a _ -branching _ if there exists a set of at most arcs such that in every node has at most one parent . note that any branching is a -branching . the _ -branching problem _ is to find a -branching that maximizes , given the values for each node and some collection of possible parent sets . throughout this section we consider a fixed instance of the -branching problem , that is , a node set and scoring functions for each . thus all arcs will refer to elements of . we will use the following additional notation . if is an arc set , then denotes the _ heads _ of the arcs in , that is , the set . if is a set of edges , then denotes the induced node set . we present an algorithm that finds an optimal -branching by implementing the following approach . first , we guess an arc set of size at most . then we search for an optimal polytree that contains such that in every node has at most one parent ; in other words , is an optimal branching with respect to an induced scoring function . clearly , the set must be acyclic . the challenge is in devising an algorithm that finds an optimal branching that is disjoint from while guaranteeing that the arcs in will not create undirected cycles in the union .
to this end , we will employ an appropriate weighted matroid intersection formulation that extends the standard formulation for branchings . we will need some basic facts about matroids . a _ matroid _ is a pair , where is a set of _ elements _ , called the _ ground set _ , and is a collection of subsets of , called the _ independent sets _ , such that ( m1 ) ; ( m2 ) if and then ; and ( m3 ) if and then there exists an such that . the _ rank _ of a matroid is the cardinality of its maximal independent sets . any subset of that is not independent is called _ dependent _ . any minimal dependent set is called a _ circuit _ . the power of matroid formulations is much due to the availability of efficient algorithms for the _ weighted matroid intersection problem _ , defined as follows . given two matroids and , and a weight function , find an that is independent in both matroids and maximizes the total weight of , that is , . the complexity of the fastest algorithm we are aware of ( for the general problem ) is summarized as follows . [ the : brezovec ] the weighted matroid intersection problem can be solved in time , where , is the minimum of the ranks of and , and is the time needed for finding the circuit of in both and where and is independent in both and . we now proceed to the specification of two matroids , and , parametrized by an arbitrary arc set such that is acyclic . the _ in - degree matroid _ : let consist of all arc sets such that no arc in has a head in and every node outside is the head of at most one arc in . the _ acyclicity matroid _ : let consist of all arc sets such that is acyclic . we observe that the standard matroid intersection formulation of branchings is obtained as the special case of : then an arc set is seen to be a branching if and only if it is independent in both the in - degree matroid and the acyclicity matroid . the next two lemmas show that and are indeed matroids whenever is acyclic . [ lem : matroid - one ] is a matroid . fix the arc set and denote by for short . clearly , and if and then also . consequently , satisfies ( m1 ) and ( m2 ) . to see that satisfies ( m3 ) let with . because of the definition of the sets and contain at most one arc with head , for every . because there is a node such that is the head of an arc in but is not the head of an arc in . let be the arc with head . then and . hence , satisfies ( m3 ) . [ lem : matroid - two ] is a matroid . fix the arc set and denote by for short . because the skeleton is acyclic and acyclicity is a hereditary property ( a graph property is called hereditary if it is closed under taking induced subgraphs ) it follows that and if and then also . consequently , satisfies ( m1 ) and ( m2 ) .
to see that satisfies ( m3 ) let with . consider the sets and . let be a connected subset of . because both and are acyclic , it follows that the number of edges of with both endpoints in is at most the number of edges of with both endpoints in . because every edge in corresponds to an arc in and similarly every edge in corresponds to an arc in and , it follows that there is an arc whose endpoints are contained in two distinct components of . consequently , the set is acyclic and hence . we now relate the common independent sets of these two matroids to -branchings . if is a -branching , we call an arc set a _ deletion set _ of if is a subset of , contains at most arcs , and in every node has at most one parent . [ lem : matroid - three ] let be an arc set and a subset of of size at most such that no two arcs from have the same head and such that is acyclic , where . we have that is a -branching with deletion set if and only if is independent in both and . suppose is a -branching with deletion set . then is a branching , which shows that every node outside has in - degree at most one in . since by definition all arcs with a head in are contained in , no arc in has a head in . therefore , is independent in . since every -branching is a polytree , is acyclic , and therefore is independent in . since is independent in , we have that is acyclic . thus , is a polytree . as is independent in , every node outside has in - degree at most one in and every node from has in - degree zero in . since the head of every arc from is in and no two arcs from have a common head , has maximum in - degree at most one . because , we have that is a -branching with deletion set . the characterization of lemma [ lem : matroid - three ] enables the following algorithm for the -branching problem . define the weight function by letting for all arcs . guess the arc sets and , put , check that is acyclic , find a maximum - weight set that is independent in both and ; output a -branching that yields the maximum weight over all guesses and , where the weight of is obtained by combining the maximum weight found in the matroid intersection step with the local scores fixed by the guessed arcs . it is easy to verify that maximizing this weight is equivalent to maximizing the score . figure [ fig : algo ] illustrates the algorithm for the scoring function of figure [ fig : score+branching ] . [ figure fig : algo : the example instance of figure [ fig : score+branching ] with the guessed arcs n3->n6 , n4->n6 , n1->n5 , n2->n5 drawn dotted / dashed ( left ) , and additionally the arcs n1->n3 and n5->n7 ( right ) . ] it remains to analyze the complexity of the algorithm . denote by the number of nodes .
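before the complexity analysis , the following python sketch gives a naive , exhaustive reference implementation of the k - branching objective ( using the notation reconstructed earlier , where k is our symbol for the stripped parameter ) ; it is not the matroid - based algorithm described above , only a specification against which such an algorithm could be checked on tiny instances , and all names are ours .

```python
# exhaustive reference implementation of the k-branching objective.
from itertools import product

def skeleton_is_acyclic(arcs):
    """check that the undirected skeleton of an arc set is a forest (union-find)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in arcs:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False          # this arc closes an undirected cycle
        parent[ru] = rv
    return True

def is_k_branching(parent_map, k):
    """a polytree is a k-branching iff deleting, for every node with several
    parents, all but one incoming arc needs at most k deletions in total."""
    arcs = [(u, v) for v, P in parent_map.items() for u in P]
    if not skeleton_is_acyclic(arcs):
        return False
    excess = sum(len(P) - 1 for P in parent_map.values() if len(P) > 1)
    return excess <= k

def best_k_branching(nodes, parent_sets, local_score, k):
    """try every combination of one candidate parent set per node; feasible on
    tiny instances only.  parent_sets[v] must include frozenset() if v may be
    parentless, and local_score[v] maps each candidate parent set to a score."""
    best, best_score = None, float("-inf")
    for choice in product(*(parent_sets[v] for v in nodes)):
        parent_map = dict(zip(nodes, choice))
        if not is_k_branching(parent_map, k):
            continue
        score = sum(local_score[v][P] for v, P in parent_map.items())
        if score > best_score:
            best, best_score = parent_map, score
    return best, best_score

# tiny usage example with invented scores:
nodes = ["a", "b", "c"]
ps = {v: [frozenset()] + [frozenset({u}) for u in nodes if u != v] for v in nodes}
ps["c"].append(frozenset({"a", "b"}))
sc = {v: {P: float(len(P)) for P in ps[v]} for v in nodes}
sc["c"][frozenset({"a", "b"})] = 5.0
print(best_k_branching(nodes, ps, sc, k=1)[1])   # 5.0: c may keep both parents
print(best_k_branching(nodes, ps, sc, k=0)[1])   # 2.0: plain branchings only
```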
for a moment , consider the arc set fixed . to apply theorem [ the : brezovec ] , we bound the associated key quantities : the size of the ground set is ; the rank of both matroids is clearly ; circuit detection can be performed in time , by a depth - first search for and by finding a node that has higher in - degree than it is allowed to have in .thus , by theorem [ the : brezovec ] , a maximum - weight set that is independent in both matroids can be found in time .then consider the number of possible choices for the set .there are possibilities for choosing a set of at most arcs such that is acyclic . for a fixed ,there are possibilities for choosing a subset such that is acyclic and no two arcs from have the same head .thus there are relevant choices for the set .we have shown the following .[ the : xp ] the -branching problem can be solved in time .theorem [ the : xp ] shows that the problem can be solved in `` non - uniform polynomial time '' as the order of the polynomial time bound depends on . in this sectionwe study the question of whether one can get `` out of the exponent '' and obtain a uniform polynomial - time algorithm . the framework of _ parameterized complexity _ offers the suitable tools and methods for such an investigation , as it allows us to distinguish between uniform and non - uniform polynomial - time tractability with respect to a parameter .an instance of a parameterized problem is a pair where is the _main part _ and is the _parameter _ ; the latter is usually a non - negative integer .a parameterized problem is _ fixed - parameter tractable _ if there exist a computable function and a constant such that instances of size can be solved in time . is the class of all fixed - parameter tractable decision problems .fixed - parameter tractable problems are also called _ uniform polynomial - time tractable _ because if is considered constant , then instances with parameter can be solved in polynomial time where the order of the polynomial is independent of ( in contrast to non - uniform polynomial - time running times such as ) .parameterized complexity offers a completeness theory similar to the theory of np - completeness .parameterized reductions _ which are many - one reductions where the parameter for one problem maps into the parameter for the other .more specifically , problem reduces to problem if there is a mapping from instances of to instances of such that ( i ) is a yes - instance of if and only if is a yes - instance of , ( ii ) for a computable function , and ( iii ) can be computed in time where is a computable function , is a constant , and denotes the size of .the parameterized complexity class }}]-complete under parameterized reductions .note that there exists a trivial non - uniform polynomial - time algorithm for the maximum clique problems that checks all sets of vertices .}} ] implies the ( unlikely ) existence of a algorithm for 3sat .a first parameterized analysis of probabilistic network structure learning using structural parameters such as treewidth has recently been carried out by ( ) .the algorithm from theorem [ the : xp ] considers relevant choices for the set , and for each fixed choice of the running time is polynomial .thus , for restrictions of the problem for which the enumeration of all relevant sets is fixed parameter tractable , one obtains an fpt algorithm .one such restriction requires that is an in - tree , i.e. 
, a directed tree where every arc is directed towards a designated root , and each node has a bounded number of potential parent sets .[ thm : fptrestriction ] the -branching problem is fixed - parameter tractable if we require that ( i ) the set of arcs is an in - tree and ( ii ) each node has a bounded number of potential parent sets . to compute a -branching , the algorithm guesses its deletion set and the set . as is a -branching , and for every is at most one arc in with head .the algorithm first guesses the root for the in - tree .then it goes over all possible choices for and as follows , until has at least arcs .guess a leaf of ( initially , is the unique leaf of ) , and guess a non - empty parent set for in .if , then backtrack .otherwise , choose at most one arc to add to , where , and add all other arcs from a node from to to ( if , no arc is added to ) .now , check whether the current choice for leads to a -branching by checking whether is acyclic and using the matroids and as in theorem [ the : xp ] .there are at most choices for .the in - tree is expanded in at most steps , as each step adds at least one arc to . in each step , is chosen among at most leaves , there is a constant number of choices for its parent set and at most choices for adding ( or not ) an arc , with , to ( as ) .the acyclicity check for and the weighted matroid intersection can be computed in time , leading to a total running time of , where is such that every node has at most potential parent sets. condition ( i ) in theorem [ thm : fptrestriction ] may be replaced by other conditions requiring the connectivity of or a small distance between arcs from , giving other fixed - parameter tractable restrictions of the -branching problem .the following theorem shows that an exponential dependency on or some other parameter is necessary since the -branching problem remains np - hard under the restrictions given above .[ thm : fptrestrictionnp ] the -branching problem is np - hard even if we require that ( i ) the set of arcs is an in - tree and ( ii ) each node has at most potential parent sets .we devise a polynomial reduction from -sat- a version of 3-satisfiability where every literal occurs at most in two clauses .-sat- is well known to be np - hard .our reduction uses the same ideas as the proof of theorem 6 in .let be an instance of -sat- with clauses and variables .we define the set of nodes as follows . 
for every variable in set contains the nodes and . furthermore , for every clause the set contains the nodes and . let , , and . we set if the clause is the -th clause that contains the literal . similarly , we set if the clause is the -th clause that contains the literal . we set , for every , and for every . furthermore , we set for all the remaining combinations of and . this completes the construction of and . observe that every node of has at most potential parent sets . this completes our construction . we will have shown the theorem after showing the following claim . _ claim : is satisfiable if and only if there is a -branching such that , the set of arcs is an in - tree , and each node of has at most potential parent sets . _ [ figure fig : np - hard - branching : an optimal -branching constructed from an example -sat- formula , with path nodes p1 , ... , p6 , variable nodes x1 , x2 , x3 , their literal - occurrence nodes , and clause nodes c1 , c2 , c3 . ] suppose that the formula is satisfiable and let be a satisfying assignment for . furthermore , for every let be a literal of that is set to true by . we construct a -branching as follows . for every digraph contains an arc if and is the -th clause that contains and an arc if and is the -th clause that contains for some and . furthermore , for every the digraph contains the arcs and if and the arcs and if . last but not least contains the arcs , and for every , , and . figure [ fig : np - hard - branching ] shows an optimal -branching for some -sat- formula . it is easy to see that is a -branching such that and the set of arcs is an in - tree . suppose there is a -branching such that .
because it follows that every node of achieves its maximum score in .hence , has to contain the arcs , , , for every , , and .for the same reasons has to contain either the arcs and or the arcs and for every .furthermore , for every the -branching has to contain one arc of the form or where is the -th clause that contains or , respectively , for some and .let , , and .we first show that whenever contains an arc then contains no arc of the form and similarly if contains an arc then contains no arc of the form .suppose for a contradiction that contains an arc together with an arc or an arc together with an arc .in the first case contains the undirected cycle and in the second case contains the cycle contradicting our assumption that is a -branching .it now follows that the assignment with if does not contain the arcs and and if does not contain the arcs and is a satisfying assignment for .so far we have measured the difference of a polytree to branchings in terms of the number of arcs to be deleted .next we investigate the consequences of measuring the difference by the number of nodes to be deleted .we call a polytree a _-node branching _ if there exists a set of at most nodes such that is a branching .-node branching problem _ is to find a -node branching that maximizes .clearly every -branching is a -node branching , but the reverse does not hold . in other words ,the -node branching problem generalizes the -branching problem . in the followingwe show that the -node branching problem is hard for the parameterized complexity class }}]-hard .we devise a parameterized reduction from the following problem , called partitioned clique , which is well - known to be }}$]-complete for parameter .the instance is a graph with partition such that for every .the question is whether there are nodes such that for and for ?( the graph is a _-clique _ of . 
) let be an instance of this problem with partition , , and parameter . let , , and . let and or for every . then is defined as . let . we define the score function as follows . we set for every and , and for every , , , and . furthermore , we set for all the remaining combinations of and . this completes our construction . we will have the theorem after showing the following claim . _ claim : has a -clique if and only if there is a -node branching such that . _ [ figure fig : hardnessknodeb : an example graph with its partition into three classes and a highlighted 3 - clique ( left ) , and an optimal -node branching constructed from it ( right ) . ] suppose that has a -clique . then it is easy to see that the dag on defined by the arc set is a -node branching and . figure [ fig : hardnessknodeb ] shows an optimal -node branching constructed from an example graph . suppose there is a -node branching with . it follows that every node of achieves its maximum score . in particular , for every the nodes must have score in and hence there is a node such that is adjacent to all nodes in . furthermore , for every the node is adjacent to exactly one node in and to exactly one node in . let be the unique node in adjacent to and similarly let be the unique node in that is adjacent to for every . then and because otherwise the skeleton of would contain the cycle or the cycle . consequently , the edges represented by the parents of in for all form a -clique in . we have studied a natural approach to extend the known efficient algorithms for branchings to polytrees that differ from branchings in only a few extra arcs . at first glance , one might expect this to be achievable by simply guessing the extra arcs and solving the remaining problem for branchings . however , we do not know whether such a reduction is possible in the strict sense . indeed , we had to take a slight detour and modify the two matroids in a way that guarantees a control for the interactions caused by the presence of high - in - degree nodes . as a result , we got an algorithm that runs in time polynomial in the input size : namely , there can be more than relevant input values for each of the nodes ; so , the runtime of our algorithm is less than cubic in the size of the input , supposing the local scores are given explicitly . while this answers one question in the affirmative , it also raises several further questions , some of which we give in the next paragraphs . our complexity analysis relied on a result concerning the general weighted matroid intersection problem . do significantly faster algorithms exist when restricted to our two specific matroids ? one might expect such algorithms exist , since the related problem for branchings can be solved in time by the algorithm of . even if we could solve the matroid intersection problem faster , our algorithm would remain practical only for very small values of . can one find an optimal -branching significantly faster , especially if allowing every node to have at most two parents ? as the current algorithm makes around mutually overlapping guesses , there might be a way to considerably reduce the time complexity . specifically , we ask whether the restricted problem is fixed - parameter tractable with respect to the parameter , that is , solvable in time for some computable function and polynomial . the fixed - parameter algorithm given in section [ sec : fpt ] can be seen as a first step towards an answer to this question . can we find other restrictions under which the -branching problem becomes fixed - parameter tractable ? can we use a similar approach for the more general -node branching problem , i.e.
, is there a polynomial time algorithm for the -node branching problem for every fixed ?likewise , we do not know whether the problem is easier or harder for polytrees than for general dags : do similar techniques apply to finding maximum - score dags that can be turned into branchings by deleting some arcs ?serge gaspers , sebastian ordyniak , and stefan szeider acknowledge support from the european research council ( complex reason , 239962 ) .serge gaspers acknowledges support from the australian research council ( de120101761 ) .mikko koivisto acknowledges the support from the academy of finland ( grant 125637 ) .mathieu liedloff acknowledges the support from the french agence nationale de la recherche ( anr agape anr-09-blan-0159 - 03 ) .
|
inferring probabilistic networks from data is a notoriously difficult task . under various goodness - of - fit measures , finding an optimal network is np - hard , even if restricted to polytrees of bounded in - degree . polynomial - time algorithms are known only for rare special cases , perhaps most notably for branchings , that is , polytrees in which the in - degree of every node is at most one . here , we study the complexity of finding an optimal polytree that can be turned into a branching by deleting some number of arcs or nodes , treated as a parameter . we show that the problem can be solved via a matroid intersection formulation in polynomial time if the number of deleted arcs is bounded by a constant . the order of the polynomial time bound depends on this constant , hence the algorithm does not establish fixed - parameter tractability when parameterized by the number of deleted arcs . we show that a restricted version of the problem allows fixed - parameter tractability and hence scales well with the parameter . we contrast this positive result by showing that if we parameterize by the number of deleted nodes , a somewhat more powerful parameter , the problem is not fixed - parameter tractable , subject to a complexity - theoretic assumption .
|
with the foreseen exponentially increasing number of users and traffic in 4 g and lte / lte - advanced ( lte - a ) systems , existing deployment and practice of cellular radio networks that strongly rely on highly hierarchical architectures with centralized control and resource management becomes economically unsustainable .network self - organization and self - optimization are among the key targets of future cellular networks so as to relax the heavy demand of human efforts in the network planning and optimization tasks and to reduce the system s capital and operational expenditure ( capex / opex ) .the next - generation mobile networks are expected to provide a full coverage of broadband wireless service and support fair and efficient resource utilization with a high degree of operation autonomy and system intelligence .in addition , energy efficiency has emerged as an important concern for future mobile networks .it is expected that energy consumption by the information and communications technology ( ict ) industry will be rising at per year , and hence energy bills will become an important portion of operational expenditure . to reduce the impacts on both revenue and environment caused by energy consumption , while providing satisfactory services to customers , a mechanism that jointly improves spectrum and energy efficiency for self - organizing networks is needed . in this paper, we study the problem of self - organizing heterogeneous lte systems and aim to achieve both spectrum and energy efficiency . we propose a generic model that jointly takes into account the key characteristics of today s lte networks , including the usage of orthogonal frequency division multiple access ( ofdma ) in the air interface , the nature of frequency - selective fading for each link , multi - cell multi - link interference occured , and the different transmission ( power ) capabilities of different types of base stations , which could be macro and small cells .we also consider the cost of energy by taking into account the power consumption , including that for wireless transmission and that for the operation of base stations .based on this unified model , we propose a distributed protocol that improves the spectrum efficiency of the system , which one can apply weighted proportional fairness among the throughputs of clients , and reduces the cost of energy .our protocol consists of four components .first , each base station needs to make scheduling decisions for its clients .second , each base station needs to allocate transmission powers on different frequencies by considering the influence on the throughputs of its clients , the interference caused on others , and the cost of energy .third , each client needs to choose a suitable base station to be associated with .finally , each base station needs to determine whether to be in active mode and serve clients , or to be in sleep mode to improve energy efficiency .we propose an online scheduling policy for the first component and shows that it achieves globally optimum performance when the solutions to the other three components are fixed .we also propose distributed strategies for the other three components and show that each of them achieves locally optimal performance under some mild approximation of the system .we show that these strategies only require small computational and communicational overheads , and hence are easily implementable . 
moreover , these strategies take the interactions of different components into account .thus , an integrated solution that applies all these strategies jointly consider all factors of heterogeneous lte systems .we also conduct extensive simulations .simulation results verify that each of the proposed strategies improves system performance .they also show that the integrated solution achieve much better performance than existing policies in a large scale heterogeneous network .the rest of the paper is organized as follows .section [ section : related ] summarizes existing work .section [ section : system_model ] describes the system model and problem setup .section [ section : scheduling ] presents the online scheduling policy for the first component .section [ section : power ] introduces a distributed heuristic for the second component . section [section : client ] discusses both the third and the fourth components , as they are tightly related .section [ section : simulation ] shows the simulation results .finally , section [ section : conclusion ] concludes the paper .there has been some work on self - organized wireless systems . chen and baccelli proposed a distributed algorithm for the self optimization of radio resources that aims to achieve potential delay fairness .hu et al has proposed a distributed protocol for load balancing among base stations .borst , markakis , and saniee studies the problem of utility maximization for self - organizing networks for arbitrary utility functions .lopez - perez et al , hou and gupta , and hou and chen have considered the problems of jointly optimizing different components in self - organizing networks under various system models .these works do not take energy efficient into considerations . on the other hand , techniques for improving cellular radio energy efficiencyhave recently attracted much attention .auer et al has investigated the amount of power consumptions for various types of base stations .mclaughlin et al has discussed various techniques for improving energy efficiency .conte et al has proposed to turn base stations to sleep mode when the network traffic is small to save energy .son et al , zhou et al , and gong , zhou , and niu have proposed various policies of allocating clients so that clients are mostly allocated to a few base stations . as a result , many base stations that do not have any clients can be turned to sleep mode to save energy .however , these studies require the knowledge of traffic of each client , and can not be applied to scenarios where clients traffic is elastic .chen et al has studied the trade - off between spectrum efficiency and energy efficiency .miao et al and li et al have provided extensive surveys on energy - efficient wireless communications .however , they do not consider the interference and interactions between base stations , and are hence not applicable to self - organizing networks .consider a reuse-1 radio system with several base stations and clients that operate and use lte ofdma .the base stations can be of different types , including macro , micro , pico , and femto base stations .lte divides frequency bandwidth into subcarriers , and time into frames , which are further divided into 20 time slots .the bandwidth of a subcarrier is 15 khz while the duration of a time frame is 10 ms . in this paper, we consider lte frequency division duplex ( fdd ) , the downlink transmission , and resource scheduling . 
in lte ,a resource block consists of 12 consecutive subcarriers and one time slot of duration 0.5 ms . under the ofdma, each user can be allocated any number of resource blocks . however , for each base station , a resource block can not be allocated to more than one user .lte can thus achieve both time - division multiplexing and frequency - division multiplexing .we hereby define to be the set of base stations , to be the set of clients , and to be set of resource blocks , where each represents a collection of 12 consecutive subcarriers and each represents a time slot . in the sequel , we use both and to denote a resource block for the notational convenience . note that here we consider reuse-1 systems .however , the result could be extended to other systems .we consider the energy consumptions of base stations by breaking them into two categories : _ operation power _ and _ transmission power_. when a base station has no clients to serve , the base station can be turned into sleep mode to save power . on the other hand , when the base station has some clients associated to it , it needs to remain in active mode .in addition to transmission power , a base station in active mode also consumes more power for computation , cooling , etc , than one in sleep mode .we call the sum of energy consumption other than transmission power as the _ operation power_. we denote by as the difference of operation powers consumed when base station is in active mode and when it is in sleep mode .we denote by the amount of transmission power that a base station assigns on resource block .if base station does not operate in resource block , we have .the time - average transmission power consumed base station can then be expressed as .further , we assume that each base station has a fixed power budget for every time slot , and it is required that , for all and , which is also known as the per base station transmit power constraint .we note that the values of and can be different from base station to base station , as different types of base stations may consume different amounts of operation powers and have different power budgets .for example , a macro base station has a much larger and than a femto base station . propagation loss and path conditionare captured by the channel gain .note that in each resource block , one can consider that the channel gain is usually flat over the subcarriers given that the channel coherence bandwidth is greater than 180 khz ( * ? ? ?* ch.12 ) .it is also time invariant in each time slot given that the channel coherence time is greater than 0.5 ms ( * ? ? ?* ch.23 ) .however , the channel gain of a user may change from one resource block to another in the frequency and time domain .let be the channel gain between base station and client on resource block . to be more specific , when the base station transmits with power , the received power at client on resource block is .the received power , , of client is considered to be its received signal strength if base station is transmitting data to client , and is considered to be interference , otherwise .therefore , when base station is transmitting data to client on resource block , the signal - to - interference - plus - noise ratio ( sinr ) of client on is expressible as where is the thermal noise experienced by client on resource block .the throughput of this transmission can then be described by the shannon capacity as , where is the bandwidth of a resource block .each client is associated with one base station . 
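As a concrete illustration of the per-resource-block SINR and Shannon-capacity throughput just described, the short sketch below computes both for one client. This is not code from the paper; the example numbers, the variable names and the base-2 logarithm are assumptions made only to show the structure of the computation (the 180 kHz bandwidth is the 12 subcarriers × 15 kHz of a resource block).

```python
import numpy as np

# hypothetical example: three base stations transmit on the same resource block
p = np.array([0.5, 0.2, 0.1])      # transmission power (W) of each base station on this block
g = np.array([1e-7, 3e-8, 5e-9])   # channel gain from each base station to the client
noise = 1e-13                      # thermal noise (W) experienced on this resource block
serving = 0                        # index of the base station serving the client
bandwidth = 12 * 15e3              # 12 subcarriers x 15 kHz = bandwidth of a resource block

signal = p[serving] * g[serving]              # received signal strength
interference = np.sum(p * g) - signal         # power received from all other base stations
sinr = signal / (interference + noise)
throughput = bandwidth * np.log2(1.0 + sinr)  # Shannon capacity of the block, bit/s
print(f"SINR = {sinr:.1f}, throughput = {throughput / 1e3:.1f} kbit/s")
```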
in each frame ,base station schedules one client that is associated with in each of the resource blocks in the frame . the base station may change the client scheduled in a particular resource block from frame to frame .let be the proportion of frames that client is scheduled in resource block by base station .to simplify problem formulation , we assume that does not vary over time. we will then discuss in the following sections how to take channel time variation into account .the influence of channel fading is also demonstrated by simulations in section [ section : simulation ] .consider that does not vary over the time , the overall throughput of client , which is the sum of its throughput over all the resource blocks , can hence be written as : in this work , we aim to jointly achieve both spectrum efficiency and energy efficiency by considering the tradeoff between them . for spectrum efficiency , we aim to achieve weighted proportional fairness among all the clients when the cost of total power consumption is fixed .let be the priority weight of client or user - dependent priority indicator .the weighted proportional fairness can be achieved by maximizing .on the other hand , we also aim to minimize the cost of total power consumption .we denote by as the price of energy for base station .we then formulate the problem of joint spectrum and energy efficiency as the following optimization problem : there are three terms involved in the objective function ( [ equation : introduction : c0 ] ) . the first term, can be called as the _ weighted proportional fairness index _ , as the system achieves weighted proportional fairness by maximizing it .the second term , , is the cost of power consumption on transmission powers of all base stations . in the last term, we note that a base station is only active when it has at least one client , hence , , is the cost of power consumption on operation powers of all base stations . in sum, we aim to maximize proportional fairness index cost of power consumption .in particular , we note that if any of the clients are not covered , i.e. 
, , for some , then the value of ( [ equation : introduction : c0 ] ) is .therefore , by aiming at maximizing ( [ equation : introduction : c0 ] ) , we also guarantee that all clients are covered .there are two constraints in the formulation .( [ equation : introduction : c1 ] ) states that , for each base station , it can only allocate a resource block to one client in each frame .however , for a fixed resource block , the base station may change the client that it is allocated to from frame to frame .the second constraint , ( [ equation : introduction : c2 ] ) , states that the total amount of power that a base station allocates on all subcarriers can not exceed its power budget .the variables that we are able to control are listed in ( [ equation : introduction : c3 ] ) and ( [ equation : introduction : c4 ] ) , which include the base station that each client is associated to , , the transmission power that each base station allocates on each resource block , , and the scheduling decision of each base station on each resource block , .finally , we note that a base station only needs to be in active mode when at least one client is associated with it , and can be in sleep mode when no clients are associated with it .therefore , the decision on whether a base station is in sleep mode or in active mode is implicitly determined by the choices of , for all .this formulation shows that there are several important components involved . in each frame, a base station needs to decide which client should be scheduled in each resource block .this essentially determines the values of so as to maximize .we call this component the _ scheduling problem_. in each frame , a base station also needs to decide how much power it should allocate in each resource block , subject to the constraint on its power budget . this component is referred as the _ power control problem_. the power control problem influences both the spectrum efficiency , , and the cost of transmission power , . besides , every client needs to choose an active base station to be associated with .base stations also need to decide whether to be in active mode to serve clients , or to be in sleep mode to save energy .we denote both the clients decisions on associated base stations and base stations decisions on whether to be in active or sleep mode as the _ client association problem _ , as these two components are tightly related .hence , the client association problem influences both spectrum efficiency and the total cost of operation power , .further , we notice that there is a natural timescale separation between the three components : the scheduling problem is updated on a per time slot basis . on the other hand ,the power control problem is updated in a slower timescale .finally , the client association problem must only be updated infrequently , as the overheads for clients to change the associated base stations , and for the base stations to switch between sleep / active mode are large . 
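The trade-off encoded in the objective — the weighted proportional-fairness index minus the priced transmission and operation power — can be sketched numerically as below. The function is a hedged reading of the formulation in this section with placeholder names, not the paper's own code; an uncovered client (zero throughput) drives the objective to minus infinity, as noted above.

```python
import math

def objective(throughput, weight, tx_power, active, op_power, price):
    """throughput[i], weight[i]: long-term rate and priority weight of client i.
    tx_power[m]: average transmission power of base station m; active[m]: True if m serves clients.
    op_power[m]: extra operation power of m in active mode; price[m]: its price of energy."""
    fairness = sum(w * math.log(r) if r > 0 else float("-inf")
                   for w, r in zip(weight, throughput))    # weighted proportional fairness index
    tx_cost = sum(z * p for z, p in zip(price, tx_power))  # cost of transmission power
    op_cost = sum(z * c for z, c, on in zip(price, op_power, active) if on)  # cost of operation power
    return fairness - tx_cost - op_cost
```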
in the following , we first propose an online algorithm for the scheduling problem , given solutions to the power control problem and the client association problem . we then propose a heuristic for the power control problem by considering solutions to the scheduling problem . finally , we develop a protocol for the client association problem . the protocol uses the knowledge of the power control problem as well as the influences on the scheduling problem . figure [ fig : overview ] illustrates an overview of our approach and the timescales of different components . in this section , we study the scheduling problem , given solutions to the power control problem and the client association problem , i.e. , values of and . we thus define which is the throughput of client on resource block when it is scheduled by base station , to simplify the notations . with and , solving ( [ equation : introduction : c0])-([equation : introduction : c4 ] ) is equivalent to solving the following : one can see that the above optimization problem is in fact convex and hence can be solved by standard techniques of convex optimization . to further simplify the computation overhead , we propose an online scheduling policy for the scheduling problem . let $\phi_{i , m(i),z}[k]$ be the average throughput of client $i$ on resource block $z$ in the first $k$ frames . we then have $\phi_{i , m(i),z}[k]=\frac{k-1}{k}\phi_{i , m(i),z}[k-1]+\frac{1}{k}\,h_{i , m(i),z}$ if client $i$ is scheduled on $z$ in frame $k$ , where $h_{i , m(i),z}$ denotes the per - block throughput defined above , and $\phi_{i , m(i),z}[k]=\frac{k-1}{k}\phi_{i , m(i),z}[k-1]$ otherwise . in our online scheduling policy , the base station schedules the client that maximizes , and the resulting achieves the maximum of the optimization problem ( [ equation : scheduling : c0])([equation : scheduling : c2 ] ) . note that in the previous discussions , we have assumed that the channel gain , , does not vary over time . in practice , however , channel gains fluctuate due to fading . to take fading into account , we let be the instantaneous time - varying channel gain , and be the instantaneous throughput that client can get from resource block if it is scheduled by base station . our scheduling policy can then be easily modified such that the base station schedules the client with the largest . for the partial derivative of the objective with respect to the transmission power $p_{m,(f , q)}$ we have an expression whose last term , coming from the cost of energy , is $-\zeta_m/|\mathbb{q}|$ . each base station updates its power periodically . when base station updates its power , it sets its power on resource block to be $$p_{m,(f , q)} \leftarrow \begin{cases} \big[p_{m,(f , q)}+\alpha \tfrac{\partial u(p)}{\partial p_{m,(f , q)}}\big]^+ , & \text{if } \sum_e\big[p_{m,(e , q)}+\alpha \tfrac{\partial u(p)}{\partial p_{m,(e , q)}}\big]^+ \leq w_m , \\ w_m \, \dfrac{\big[p_{m,(f , q)}+\alpha\tfrac{\partial u(p)}{\partial p_{m,(f , q)}}\big]^+}{\sum_e\big[p_{m,(e , q)}+\alpha \tfrac{\partial u(p)}{\partial p_{m,(e , q)}}\big]^+ } , & \text{otherwise , } \end{cases}$$ where $[\cdot]^+$ denotes $\max\{\cdot , 0\}$ and $\alpha$ is a small constant . base station needs to compute to update its power on each resource block . the computation of can be further simplified by setting for all such that is small and has little influence on the value of . thus , to compute , base station exchanges information with base station that is physically close to it so as to know : * the sum weight of clients associated with , i.e. , , * the channel gain from to , * the sum of interference and noise at , * the received signal strength at , and * the average total throughput in the downlink of base station , for all that is large .
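A minimal sketch of the online scheduling step described above, under the simplifying assumption that one running-average throughput is kept per client (the paper tracks the averages per resource block): in each frame the base station gives each resource block to the associated client with the largest weighted rate-to-average ratio, then updates the averages. Names and the exact form of the metric are assumptions, not a verbatim transcription.

```python
def schedule_frame(clients, rate, avg, weight, k):
    """clients: ids associated with this base station; rate[(i, z)]: estimated throughput of
    client i on resource block z in this frame; avg[i]: running-average throughput of client i;
    weight[i]: priority weight; k: index of the current frame (1-based)."""
    blocks = {z for (_, z) in rate}
    assignment = {}
    got = {i: 0.0 for i in clients}        # throughput collected by each client in this frame
    for z in blocks:
        # proportional-fair style metric: weighted instantaneous rate over average throughput
        best = max(clients, key=lambda i: weight[i] * rate[(i, z)] / max(avg[i], 1e-9))
        assignment[z] = best
        got[best] += rate[(best, z)]
    for i in clients:                      # running-average update over the first k frames
        avg[i] = (k - 1) / k * avg[i] + got[i] / k
    return assignment
```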
in lte , the above information can be obtained through periodic channel quality indicator and reference signal reports .this method is easy to implement and only requires limited information exchange between neighbor cells .we assume that the neighbor cell communication takes place between base stations and is supported by the wired backhaul network .in the following , we discuss how to solve the client association problem , i.e. , how each client should choose a base station .our solution consists of two parts : in the first part , each client estimates its throughput when it associates with each base station .the client then selfishly chooses the base station that maximizes its throughput . in the second part, each base station decides whether to be in active mode or in sleep mode by jointly considering the effects on spectrum efficiency and energy consumption .we assume that each client is selfish and would like to choose a base station that maximizes its own throughput when it is associated with .we make this assumption under two main reasons .first , this conforms to the selfish behaviors of clients .second , in a dense network , the decision of by one client only has a limited and indirect impact on the overall performance of other clients .we define as in ( [ equation : scheduling : h ] ) , and be the proportion of frames that base station would schedule if is associated with .the client then selects the base station that maximizes in practice , client can only be associated with base stations that are in active mode and whose is above some threshold for some . to compute for all such base stations , client needs to know the values of and .client assumes that the transmission powers used by base stations are not influenced much by its choice , which is true in a dense network .thus , client only needs to know its perceived sinr with each base station on each resource block to compute .it remains for the client to compute the value of .we propose two different approaches to compute this value . in the first approach , which we call the _ exact simulator _ ( es ) , client first obtains the values of and for all clients that are associated with .client can then simulates the scheduling decisions of base station by running the online scheduling policy introduced in section [ section : scheduling ] , and obtains the value of on each resource block .while this approach offers an accurate estimation on and , it requires high computation and communication overhead . in the second approach , which we callthe _ approximate estimator _ ( ae ) ,client only obtains the values of , and which is the average throughput of base station on resource block .client assumes that , when another client is scheduled by base station on resource block , its throughput on equals the average throughput .client can then estimate by _ algorithm [ algorithm : client : ae]_. 
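The client-side association step can be illustrated in the same spirit. The sketch below is a rough stand-in for the approximate estimator: for every candidate base station the client guesses the share of frames it would be scheduled from the priority weights, scales its achievable rate by that share, associates with the best station and remembers the best and second-best estimates to report. The share model and all names are illustrative assumptions, not the algorithm from the paper.

```python
def choose_base_station(candidates, my_rate, weight_me, weight_sum):
    """candidates: active base stations whose SINR is above the threshold; my_rate[m]: rate the
    client would see from m if it were scheduled in every frame; weight_sum[m]: total priority
    weight of the clients already associated with m; weight_me: this client's weight."""
    if not candidates:
        return None, 0.0, 0.0
    estimate = {}
    for m in candidates:
        share = weight_me / (weight_sum[m] + weight_me)   # crude guess of the scheduled share
        estimate[m] = share * my_rate[m]
    ranked = sorted(estimate, key=estimate.get, reverse=True)
    best = ranked[0]
    second = estimate[ranked[1]] if len(ranked) > 1 else 0.0
    return best, estimate[best], second   # association choice plus the two values reported back
```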
the complexity of _ algorithm [ algorithm : client : ae ] _ is , and therefore this approach is much more efficient than the exact simulator .moreover , the following theorem suggests that the approximate simulator provides reasonably good estimates on the throughput of client if it is associated with base station .sort all resource blocks such that + + + + break + + break [ theorem : client : ae ] if for each client other than that is associated , , then , under the online scheduling policy introduced in section [ section : scheduling ] , the throughput of client equals the value of obtained by algorithm [ algorithm : client : ae ] when it is also associated with . please refer to appendix b for the proof .after obtaining the value of , client selects and associates with base station . moreover , client reports the estimated rate with , , and the second largest estimated rate among all base stations , , to .we define , and hence .these values are used for base stations to decide whether to switch to sleep mode , which is discussed in the following section . in summary ,when the approximate estimator is applied , each base station only needs to periodically broadcasts a total number of values , that is , the values of , from which each client can compute , and for all . on the other hand ,when a client decides to be associated with a base station , it only needs to report two values : and .thus , the communication overhead for the approximate estimator is small .we now discuss how base stations decide whether to be in active mode or in sleep mode .the protocol consists of two parts : one for a base station in active mode to decide whether to switch to sleep mode , and one for a base station in sleep mode to decide whether to switch to active mode .first , consider a base station that is in active mode .given the solution to the power control problem , aims to maximize , where is the indicator function .when is in active mode , it estimates that , which is the estimated throughput that client reports to as discussed in the previous section .therefore , we have : on the other hand , when is in sleep mode , it assumes that its clients will be associated with the base station other than that provides the largest throughput , and the resulting throughput of client will be .we then have : the base station simply compares the values of ( [ equation : client : active to sleep 1 ] ) and of ( [ equation : client : active to sleep 2 ] ) .if the latter is larger than the former , switches to sleep mode .next , consider a base station that is in sleep mode .base station periodically wakes up and broadcasts beacon messages on all resource blocks , where it uses equal amounts of power on all resource blocks .each client then measures the sinr from on each resource block and obtains the values of for all .each client computes , which is the estimated throughput of if is the only client associated with .if is larger than , the current throughput of client , client reports the values of , and to base station .base station needs to estimate the throughput of clients that will be associated with it when it switches to active mode .we note that the value of is not a good estimate of the throughput of when is in active mode and associates with .recall that is the estimated throughput of client when is the only client associated with .when is in active mode , it is possible that will have more than one clients , and hence the throughput of may be much less than when is associated with .to better estimate clients 
throughputs when is in active mode , assumes that frequency - selective fading is not significant , and the values of is the same for all when is fixed . under this assumption, our online scheduling policy will result in for all clients that are associated with and , where .base station then runs algorithm [ algorithm : client : basestation ] and estimates that the set of clients that will be associated with when switches to active mode .hence is estimated to be , and each client in is estimated to have a throughput of .sort all clients that report to such that null we now discuss the intuitions of algorithm [ algorithm : client : basestation ] .let be the largest value in , and thus the value of is last updated in line 7 of the -th iteration in algorithm [ algorithm : client : basestation ] .we then have . for each in , we have , and hence , that is , client is estimated to have higher throughput when it is associated with .this justifies the estimation that client will leave its current base station to be associated with . on the other hand , for each client that is not in , we have and , justifying the estimation that will stay with its current base station and not be associated with .therefore , algorithm [ algorithm : client : basestation ] provides a reasonable estimation on the set of clients that will be associated with when switches to active mode .moreover , the complexity of algorithm [ algorithm : client : basestation ] is only , where is the number of clients that report to . finally , base station compares the value of against the value of . if the former is larger , that is , -\zeta_{m_1}c_{m_1}>0 , ] w. auer et al . has investigated the amount of power needed to operate a base station .it has shown that a macro base station consumes 75w when it is in sleep mode , and consumes 130w when it is in active mode .therefore , we set the operation power of a macro base station to be 55w .it has also shown that the power budget of a macro base station is 20w .similarly , we set the operation power and transmission power budget of a micro base station to be 17w and 6.3w , respectively .we have implemented the proposed online scheduling policy , power control , and approximate estimator .we compare our mechanisms against other mechanisms .we consider two policies for the scheduling problem , round - robin ( _ rr _ ) and the scheduling policy proposed in section [ section : scheduling ] ( _ pf _ ) .for the power control problem , we assume that other mechanisms use the same amount of power on each resource block .we consider two policies for the client association problem .one of the policies associate each client to the closest base station , and is called _default_. the other policy adapts the ones proposed in son et al and zhou et al . in this policy , which we call _ son - zhou _ , each client chooses to be associated with the base station that maximizes \{data rate of when served by }\{number of clients associated }/\{operation power of }. the intuition is that clients prefer to be associated with base stations with many clients . 
as a result, some of the base stations will have very few clients and can be turned into sleep mode .we assume that a base station is turned into sleep mode if the total weight of its clients is below a certain threshold .we have exhaustively evaluated the performance of son - zhou using different thresholds and found that setting the threshold to be 2 achieves the best performance under all evaluated price of energy .hence , we set the threshold to be 2 .we compare the performance of different mechanisms by their resulting values of ( [ equation : introduction : c0 ] ) , where the throughput of a client , , is measured in kbits / sec .we also evaluate and present the achieved total weighted throughput , defined as , total power consumption , and/or energy efficiency of each mechanism under various scenarios .we first consider a system with one macro base station and 25 clients to demonstrate the performance of our solution to the scheduling problem .clients are uniformly placed as a grid , where the distance between adjacent clients is 100 m .we compare the rr policy against our pf policy , where we consider both cases when the base station has instant knowledge of channel gains and where base stations only have knowledge of long - term average channel gains , denoted by _fast feedback _ and _ slow feedback _ , respectively .we set the price of energy , , to be zero for this system , as we are only interested in the performance of scheduling policies .figure [ fig : scheduling_performance ] and figure [ fig : scheduling_throughput ] show the simulation results on both the values of ( [ equation : introduction : c0 ] ) and the total throughput of the system .it is observed that both fast feedback and slow feedback achieve more than 50% higher throughput than the rr policy , as they both use knowledge on channel gains for scheduling decisions .fast feedback has better performance than slow feedback since it takes effects of fast fading into account .next , we demonstrate the performance of our solution to the power control problem .we consider a system with two macro base stations .each of these base stations has two clients associated with it , and the distance between a client and its associated base station is 50 m .we compare a policy that uses both our scheduling policy and power control algorithm against one that only uses our scheduling policy and allocates equal power on all resource blocks .we consider the performance of the two policies by varying the distance between the two base stations .we also set the price of energy to be zero for this system .simulation results are shown in figure [ fig : power_performance ] and figure [ fig : power_throughput ] .it is observed that when the two base stations are far apart , the two policies achieve similar performance .however , as the distance between the two base stations decreases , the performance of the policy without power control degrades greatly , as it suffers much from the interference between the two base stations . 
on the other hand , by using our power control algorithm , the two base stations start to operate in disjoint resource blocks as the distance between them decreases .hence , the performance of the policy using power control does not suffer too much from interference .we now consider the client association problem .we consider a system with two macro base stations that are separated by 500 m .there are four clients uniformly distributed between them .we consider the performance of our proposed mechanism under various price of energy .simulation results are shown in figure [ fig : association_performance ] .we can see that when the price is small , the performance of the system degrades quickly with price. however , at a price of 0.06 , the performance increases , and then degrades with price , but with a smaller slope .this is because , at a price of 0.06 , our mechanism determines that it is better to shut down one of the base stations in order to save power and increase energy efficiency .figure [ fig : association_consumption ] also shows that the total power consumption of this system decreases by about half at a price of 0.06 . finally , we present our simulation results for a large scale system .the topology of this system is illustrated in figure [ fig : grid ] .we consider a 3000 m by 3000 m area with 9 macro base stations forming a 3 by 3 grid .in addition , there are 16 micro base stations uniformly distributed in the area \times[1000,3000] ] .clients within the area of \times[0,1000]$ ] have weights , while all other clients have weights .[ fig : full ] shows the performance comparison between our proposed and other mechanisms .[ fig : spectrum_efficiency ] compares the weighted total throughput , , and fig .[ fig : energy_efficiency ] compares the energy efficiency , defined as (total power consumption ) , for the various mechanisms .our proposed protocol achieves better performance than all other mechanisms , especially when the price of energy is high .further , as the price of energy increases , our proposed protocol turns some base stations into sleep mode , which results in smaller weighted total throughput but improves energy efficiency .thus , our proposed protocol can achieve tradeoff between energy efficiency and spectrum efficiency by choosing suitable price of energy .in this paper , we propose a distributed protocol for self - organizing lte systems that considers both spectrum efficiency and energy efficiency .this protocol jointly optimizes several important components , including resource block scheduling , power allocation , client association , and the decisions of being in active or sleep mode .the protocol requires small computational and communicational overheads .further , simulation results show that our proposed protocol achieves much better performance than the existing policy .we would like to thank franois baccelli ( inria - ens ) and alberto conte ( alcatel - lucent ) for their valuable discussion and support .part of this work has been presented at ieee icc12 .10 [ 1]#1 url [ 2]#2 [ 2]l@#1=l@#1#2 3rd generation partnership project .3gpp lte - advanced .http://www.3gpp.org/lte-advanced .j. m. graybeal and k. sridhar , `` the evolution of son to extended son , '' _ bell labs technical journal _ , vol . 15 , no . 3 , pp .518 , 2010 .[ online ] .available : http://dx.doi.org/10.1002/bltj.20454 . http://www.ngmn.org .g. fettweis and e. zimmermann , `` ict energy consumption - trends and challenges , '' in _ wpmc _ , 2008 , pp . 20062009 .s. sesia , i. 
toufik , and m. baker , _ lte - the umts long term evolution : from theory to practice _ , 2nd ed.1em plus 0.5em minus 0.4emjohn wiley & son , 2011 . .http://www.smallcellforum.org .c. s. chen and f. baccelli , `` self - optimization in mobile cellular networks : power control and user association , '' in _ proc .ieee icc _ , may 2010 .h. hu , j. zhang , x. zheng , y. yang , and p. wu , `` self - configuration and self - optimization for lte networks , '' _ ieee communications magazine _ , no . 2 , pp . 94100 , 2010 .s. borst , m. markakis , and i. saniee , `` distributed power allocation and user assignment in ofdma cellular networks , '' in _ proc .allerton conference on communication , control , and computing _ , sep .d. lopez - perez , a. ladanyi , a. juttner , h. rivano , and j. zhang , `` optimization method for the joint allocation of modulation schemes , coding rates , resource blocks and power in self - organizing lte networks , '' in _ proc .ieee infocom _, 2011 , pp . 111115 .i .- h . hou and p. gupta , `` distributed resource allocation for proportional fairness in multi - band wireless systems , '' in _ proc .ieee isit _ , jul .i .- h . hou and c. s. chen , `` self - organized resource allocation in lte systems with weighted proportional fairness , '' in _ proc .ieee icc _ , jun .2012 , pp . 53485353 .g. auer , v. giannini , c. desset , i. godor , p. skillermark , m. olsson , m. imran , d. sabella , m. gonzalez , o. blume , and a. fehske , `` how much energy is needed to run a wireless network ? '' _ ieee wireless communications _ , vol .18 , no . 5 , pp . 4049 , oct .s. mclaughlin , p. grant , j. thompson , h. haas , d. laurenson , c. khirallah , y. hou , and r. wang , `` techniques for improving cellular radio base station energy efficiency , '' _ ieee wireless communications _18 , no . 5 , pp . 1017 , oct .a. conte , a. feki , l. chiaraviglio , d. ciullo , m. meo , and m. marsan , `` cell wilting and blossoming for energy efficiency , '' _ ieee wireless communications _18 , no . 5 , pp . 5057 , oct .2011 . k. son , h. kim , y. yi , and b. krishnamachari , `` base station operation and user association mechanisms for energy - delay tradeoffs in green cellular networks , '' _ ieee jsac _ , no . 8 ,pp . 15251536 , 2011 .s. zhou , j. gong , z. yang , z. niu , and p. yang , `` green mobile access network with dynamic base station energy saving , '' in _ proc . of acm mobicom _ , 2009 .j. gong , s. zhou , and z. niu , `` a dynamic programming approach for base station sleeping in cellular networks , '' _ ieice transactions on communications _ ,e95-b , no . 2 ,551562 , feb .y. chen , s. zhang , s. xu , and g. li , `` fundamental trade - offs on green wireless networks , '' _ ieee communications magazine _ , no .6 , pp . 3037 , 2011 .g. miao , n. himayat , y. g. li , and a. swami , `` cross - layer optimization for energy - efficient wireless communications : a survey , '' _ wirel .commun . mob ._ , vol . 9 , no . 4 , pp . 529542 , aprg. li , z. xu , c. xiong , c. yang , s. zhang , y. chen , and s. xu , `` energy - efficient wireless communications : tutorial , survey , and open issues , '' _ ieee wireless communications _ , vol . 18 , no . 6 , pp .2835 , dec .h. kim , k. kim , y. han , and s. yun , `` a proportional fair scheduling for multicarrier transmission systems , '' in _ proc .ieee vtc - fall _ , sep .2004 , pp . 409413 .m. s. bazaraa , h. d. sherali , and c. m. 
shetty , _ nonlinear programming theory and algorithms _ , 3rd ed.1em plus 0.5em minus 0.4emjohn wiley & son , 2006 . , `` evolved universal terrestrial radio access : user equipment radio transmission and reception , '' tech . spec .v10.3.0 , jun .theorem [ theorem : scheduling : optimal ] has shown that the online scheduling policy in section [ section : scheduling ] achieves the optimum solution to the scheduling problem .we claim that , by setting as that derived in algorithm [ algorithm : client : ae ] and , for all , , the resulting and also achieve the optimum solution to the scheduling problem . in the proof of theorem [theorem : scheduling : optimal ] , it has been shown that maximizes if and only if , for all , , for all , and , for all such that . by our settings of ,the first two conditions hold , and we only need to verify the last condition .sort all resource blocks such that we consider two possible cases : there exists some such that , and such does not exist , i.e. , for all . in the first case, we have that , for all , and , for all . by setting , for all , , we have .let and be the values of and in the -th iteration of the * for * loop in algorithm [ algorithm : client : ae ] .as , lines 1317 are executed in this iteration , and we have and . moreover , in line 13 , the value of is chosen so that .therefore , , for all such that and .thus , the last condition holds for resource block . for any resource block , .we then have , and hence , for all such that and .as we set for all , the last condition holds for all .similarly , for any resource block , , for all such that and .as we set , the last condition also holds for all . in sum, the last condition holds for the case that there exists some such that .next consider the case that for all .let be the smallest integer so that . in the -th iteration of the for loop in algorithm [ algorithm: client : ae ] , steps 1011 are executed , and we have , for all such that and .the last condition then holds for resource block . a similar argument as in the previous paragraph shows that the last condition holds for all .
|
this paper studies the problem of self - organizing heterogeneous lte systems . we propose a model that jointly considers several important characteristics of heterogeneous lte system , including the usage of orthogonal frequency division multiple access ( ofdma ) , the frequency - selective fading for each link , the interference among different links , and the different transmission capabilities of different types of base stations . we also consider the cost of energy by taking into account the power consumption , including that for wireless transmission and that for operation , of base stations and the price of energy . based on this model , we aim to propose a distributed protocol that improves the spectrum efficiency of the system , which is measured in terms of the weighted proportional fairness among the throughputs of clients , and reduces the cost of energy . we identify that there are several important components involved in this problem . we propose distributed strategies for each of these components . each of the proposed strategies requires small computational and communicational overheads . moreover , the interactions between components are also considered in the proposed strategies . hence , these strategies result in a solution that jointly considers all factors of heterogeneous lte systems . simulation results also show that our proposed strategies achieve much better performance than existing ones . self - organizing networks , lte , ofdma , proportional fairness , energy effieincy .
|
pollen allergy is a common disease causing hay fever in 5 - 10% of the population . although not a life threatening disease , the symptoms can be very troublesome , furthemore , the costs to the social sector due to pollen related diseases are high .self protection of hay fever patients is possible through the information of future pollen contents in the air .models to forecast pollen concentration in the air are principally based on pollen and atmospheric weather interactions .several statistical techniques , have been used to predict future atmospheric pollen concentrations from weather conditions of the day and of recent previous days . in spite of these attempts ,it has not been possible to predict the pollen concentrations with great accuracy , and about 25% of the daily pollen forecasts have resulted in failures .+ a reason of these failures could be that the methods used in airborne pollen forecasting are based in standard linear statistical techniques which do nt suit when the phenomenon to forecast is esentially non - linear .a previous analysis of the dynamic characteristics of time series of atmospheric pollen was developed by bianchi et al . , through the study of the correlation dimension .the dimension found was a low and non integer value , which indicates that the system may be described by a nonlinear function of just a few variables relating nearest pollen concentrations of the time series .the fact that the correlation dimension found was fractal predicts that this function , also called map in nonlinear dynamics can display chaotic behavior under certain circumstances .the existence of a low dimensional map suggests possibilities for short - term prediction through the use of some nonlinear model .artificial neural networks have been widely used to predict future values of chaotic time series identifying the nonlinear model by extracting knowledge from the past .very good pollen concentrations forecasts were obtained using neural nets and , in a previous work , the hypothesis that random fluctuations appearing in the pollen time series are produced by gaussian noise was rejected . to continue with the characterization of airborne pollen concentrations the next step would be to studywhat kind of correlation is associated with its fluctuations .the hurst exponent is broadly used as a measure of the persistence of a statistical phenomenon . points an antipersistent time series , commonly driven by a phenomenon called noah effect " ( if you see the bible , the storm changed everything in a moment ) .it characterizes a system that reverses itself more frequently and covers less distance than a random walk . implies that we are analyzing a persistent time series which obeys to the joseph effect " ( in the bible refers to 7 years of loom , happiness and health and 7 years of hungry and illness ) .this system has long memory effects : what happens now will influence the future , so there is a very deep dependence with the initial conditions .persistent processes are common in nature .if the distribution is homogeneous there is an unique , but if it is not there are several exponents . the most frequent will characterize the series and will play as hurst exponent . a very efficient new method to obtainthe singularity spectrum of a pollen time series relies on the use of a mathematical tool introduced in signal analysis in the early eighties : the _ wavelet transform_. 
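To make the persistence measure discussed here concrete, a classical rescaled-range (R/S) estimate of the Hurst exponent can be computed as sketched below. This is only an illustration of what values of the exponent below, at and above 0.5 mean for a series; it is not the wavelet-based method the paper introduces next.

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Classical rescaled-range estimate of the Hurst exponent of a 1-d series x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs = [], []
    size = min_window
    while size <= n // 2:
        ratios = []
        for start in range(0, n - size + 1, size):
            w = x[start:start + size]
            dev = np.cumsum(w - w.mean())    # cumulative deviation from the window mean
            r = dev.max() - dev.min()        # range of the cumulative deviation
            s = w.std()
            if s > 0:
                ratios.append(r / s)
        if ratios:
            sizes.append(size)
            rs.append(np.mean(ratios))
        size *= 2
    # R/S grows like size**H, so H is the slope of the log-log fit
    h, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return h
```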
the wavelet transform has been proved very efficient to detect singularities and fractals are singular functions indeed .et al_ developed the _ wavelet transform modulus maxima _( wtmm ) method as a technique to study fractal objects . in this methodthe wavelet is used as an oscillating variant of the function of a box .wtmm was succesfully applied to study fractal properties of diverse systems such as dna nucleotide sequences , modane turbulent velocity signal and a cool flame experiment .we apply wtmm to obtain the hurst exponents associated with the pollen time series as a whole as well as the persistence of the important rare peaks of highest concentrations .another important tool in describing multifractals that are obtained through wtmm are the generalized fractal dimensions .the material used in this work was from our chaos study of pollen series .data of airborne pollen concentration were obtained with an automatic and volumetric burkard pollen and spore trap , situated at the roof of the facultad de ciencias exactas y naturales of our university , 12 meters above ground level .the area surrounding the sample is typical of mar del plata .the great distance from the sampling site to the emission sources makes the particular emission spectra not important .ten liters of air per minute were sucked through a 14 x 2 orifice , always orientated against the wind flow .the sucking rate is checked weekly . behind the slit, a drum rotates at a speed of 2 per hour .the particles are collected on a cellophane tape ( melinex ) , 19 wide , just below the orifice .the sticky collecting surface comprises nine parts vaseline : one part paraffin in toluene .the exposed tape is removed from the drum , cut into pieces of 48 mm , corresponding to 24-h intervals , then embedded into a solution of polivinylalcohol ( gelvatol ) , water and glycerol and covered with a cover glass .slides were studied as 12 transects per day .the pollen was counted at a magnification of x400 for the first year cycle ( august 1987 - 8 ) and at x200 for the second ( august 1988 - 9 ) , and corresponding to 13.5 and 27 min of sampling every 2 h respectively .the method of counting pollen follows that of kapyla and penttinen .hourly counts were stored in a database file for further analysis .statistics of hourly counts may be seen in table 1 and 2 of .the concentration values correspond to total pollen grains .the main species found were : cupressus , gramineae , eucalyptus , pinace , chenopodiineae , plantago , cyperaceae , betula , cruciferae , compositae tueulflorae , ambrosia , ulmus , umbelliferae , platanus and fraxinus .the aim of this formalism is to determinate the f( ) singularity spectrum of a measure .it associates the haussdorff dimension of each point with the singularity exponent , which gives us an idea of the strength of the singularity . where is the number of boxes needed to cover the measure and is the size of each box .a partition function can be defined from this spectrum ( it is the same model as the thermodinamic one ) . where is a spectrum which arouses by legendre transforming the singularity spectrum .the spectrum of is obtained from the spectrum the capacity or box dimension of the support of the distribution is given by . corresponds to the scaling behavior of the information and is called . for , andthe are related . 
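The relations used in this passage are those of the standard multifractal formalism; since the displayed equations are not reproduced in the text above, the usual form is recalled here, with μ_i the measure of the i-th box of size ε. The notation (τ, f, D_q) is the conventional one and is assumed rather than taken from the source.

```latex
N_\varepsilon(\alpha)\sim\varepsilon^{-f(\alpha)},\qquad
Z(q,\varepsilon)=\sum_i \mu_i^{\,q}\sim\varepsilon^{\tau(q)},\qquad
f(\alpha)=q\,\alpha-\tau(q),\quad \alpha=\frac{d\tau}{dq},\qquad
D_q=\frac{\tau(q)}{q-1}
```

Here D_0 is the capacity (box) dimension of the support, D_1 the information dimension (obtained as the q → 1 limit) and D_2 the correlation dimension.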
as we will show in the following section the wavelet transformis specially suited to analyze a time series as a multifractal .the wavelet transform ( wt ) of a signal consists in decomposing it into frequency and time coefficients , asociated to the wavelets .the analyzing wavelet , by means of translations and dilations , generates the so called family of wavelets .the wavelet transform turns the signal into a function (a , b) ] .this means that around where is an order polynomial , and provided the first moments are zero . if we have , the first moments are vanishing .the wavelet modulus function (a , t)|$ ] will have a local maximum around the points where the signal is singular .these local maxima points make a geometric place called modulus maxima line .(a , b_l(a))| \sima^\alpha(b_l(a))\:\:\:\:\ : for \:\ : a \to 0,\ ] ] where is the position at the scale of the maximum belonging to the the line . the wavelet transform modulus maxima method ( wtmm ) consists in the analysis of the scaling behavior of some partition functions that can be defined as : (a , b_l(a))|^q,\ ] ] and will scale like .this partition function works like the previously defined partition function for singular measures . for will prevail the most pronounced modulus maxima and , on the other hand , for will survive the lower ones .the most pronounced modulus take place when very deep singularities are detected , while the others correspond to smoother singularities .we can get ( eq .2 ) and obtain and spectra , as explained previously .the shape of is a hump that has a maximum value , corresponding to this maximum may be associated with the general behavior of the series .so , this particular singularity exponent can be thought like the hurst exponent for the series as a whole .the airborne pollen concentration time series may be seen in fig .the third derivative of gaussian function was chosen as analyzing wavelet : twelve wavelet transform data files were obtained applying the wavelet transform with , ranging the scaling factor from to in steps of . to give an idea of the effect of the change of scale on wavelet transform of the pollen time series , three of them are shown in fig .2 . then we computed the partition function for and , getting , as shown in fig .3 . is a nonlinear convex increasing function with and two asymptotic slopes which are for and for .this lays the corresponding singularity spectrum obtained by legendre transforming that is displayed in fig .4 . the single humped shape with a nonunique hlder exponent obtained characterizes a multifractal .as expected from , the support of extends over a finite interval which bounds are and .the minimum value , , corresponds to the strongest singularity which characterizes the most rarified zone , whereas higher values exhibit weaker singularities until or weakest singularity which corresponds to the densiest zone . corresponds to an antipersistent process and , to a regular process .the spectrum obtained from can be seen in fig .5 . the support dimension ; which implies that the capacity of the support is approximately 1; i.e. the support is not a fractal . converges asymptotically to for and to for .the hlder exponent for the dimension support , , is 0.90 .this particular corresponds to or which implies that the sucesses with are the most frequent ones . 
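Before the scaling law along these maxima lines is stated, here is a minimal numerical sketch of the step just described, assuming the PyWavelets package and its 'gaus3' wavelet (a third derivative of a Gaussian) as the analyzing wavelet. For simplicity it keeps the local maxima of the modulus at each scale separately, without the chaining of maxima into lines that the full WTMM method requires.

```python
import numpy as np
import pywt

def wtmm_partition_function(signal, scales, q_values):
    """Continuous wavelet transform of `signal` with the 'gaus3' wavelet, then a partition
    function built, at every scale a, from the local maxima of the transform modulus."""
    coeffs, _ = pywt.cwt(signal, scales, "gaus3")   # coeffs[j, b] is the transform at scale a_j
    modulus = np.abs(coeffs)
    z = {}
    for j, a in enumerate(scales):
        row = modulus[j]
        # indices where the modulus is a local maximum along the time axis
        idx = np.where((row[1:-1] >= row[:-2]) & (row[1:-1] >= row[2:]))[0] + 1
        vals = row[idx]
        vals = vals[vals > 0]
        z[a] = {q: float(np.sum(vals ** q)) for q in q_values}
    return z
# the exponents tau(q) are then read off from the slope of log Z(q, a) versus log a,
# and f(alpha) follows by the Legendre transform recalled earlier.
```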
implies we are analyzing a persistent time series which obeys to the `` joseph effect '' ( in the bible refers to 7 years of loom , happiness and health and 7 years of hungry and illness ) .this system has long memory effects : what happens now will influence the future , so there is a very deep dependence with the initial conditions .it may be thought like a fractional brownian motion of .a hurst exponent of 0.90 describes a very persistent time series , what is expected in a natural process involved in an inertial system . can be known as hlder exponent or singularity exponent , too .if the distribution is homogeneous there is an unique ( for example fractional brownian motion ) , but if it is not there are several exponents . the most frequent will characterize the series and will play as hurst exponent .this means that the curve is equally humped in both sides with the consequence of having the same inhomogeneity in the less frequent events associated with the branch and in the more frequent ones associated with the branch .the information dimension is which features the scaling behavior of the information .it plays an important role in the analysis of nonlinear dynamic systems , specially in describing the loss of information as chaotic system evolves in time . implies that we are in the presence of a chaotic system .the correlation dimension is which characterizes a chaotic attractor and is very close to the value previously obtained with the grassberger - procaccia method .the wavelet transform modulus maxima method was applied to study the multifractal characteristics of an airborne pollen time series .we have found that pollen time series behave as a whole like long term memory persistent phenomena , as most ones in nature .the most common events associated with which correspond to low pollen concentration values behave in a persistent way as the whole series . on the other hand ,the most rare events associated in the multifractal formalism to which correspond to highest pollen concentration values behave in an antipersistent way characterized by the `` noah effect '' , changing suddenly and catastrophically the air conditions .both the information and the correlation dimensions correspond to a chaotic system showing loss of information with time evolution .c.m.a . would like to thank alain arneodo for introducing him to wavelet transform multifractal analysis .this work was partially supported by a grant from the universidad nacional de mar del plata .99 r. leuschner , j. palyno * 27 * , 305 ( 1991 ) p. comtois , g. batchelder and d. sherknies , _ aerobiology health environment : a symposium _ , ed .p. comtois , university of montreal , montreal , canada ( 1989 ) m. orourke , _ aerobiology health environment : a symposium _ , ed .p. comtois , university of montreal , montreal , canada ( 1989 ) ; l. moseholm , e. weeke and b. petersen , pollen et spores * 29 * , 305 ( 1987 ) . c. goldberg , h. buch , l. moseholm , e. weeke , grana * 27 * , 209 ( 1988 ) .bianchi , c.m .arizmendi and j.r .sanchez , int .* 36 * , 172 ( 1992 ) .p. grassberger and i. proccacia , phys .lett . * 50 * , 346 ( 1983 ) .abraham , a.m. albano , b. das , g. de guzman , s. young , r.s .giorggia , g.p .puccioni , j.r .tredicce , phys .114a * , 217 ( 1986 ) .j.d . farmer and j.j sidorowich , phys .lett . * 59 * , 845 ( 1987 ) .a. lapedes and r. farber , tech .la - ur-87 , los alamos national lab .arizmendi , j.r . sanchez , n.e . ramos and g.i ramos , int j. 
biom. 37, 139 (1993).
c.m. arizmendi, j.r. sanchez, m.a. foti, fractals 3, 155-160 (1995).
heinz-otto peitgen, h. jurgens, d. saupe, _chaos and fractals, new frontiers of science_ (springer verlag, new york, 1992).
edgar e. peters, _chaos and order in the capital markets_ (john wiley and sons, 1991).
j.f. muzy, e. bacry, a. arneodo, international journal of bifurcation and chaos 4, 245-302 (1994).
j.f. muzy, e. bacry, a. arneodo, phys. rev. e 47, 875 (1993).
a. arneodo, y. daubenton-carafa, e. bacry, p.v. graves, j.f. muzy, c. thermes, _wavelet based fractal analysis of dna sequences_, physica d (to be published).
a. arneodo, e. bacry, p.v. graves, j.f. muzy, phys. rev. lett. 74, 3293 (1995).
a. arneodo, e. bacry, j.f. muzy, physica a 213, 232-275 (1995).
m. nicollet, a. lemarchand, g.m.l. dumas, fractals 5, 35 (1997).
m. kapyla and a. penttinen, grana 20, 131 (1981).
p. goupillaud, a. grossman, j. morlet, geoexploration 23, 85 (1984).
a. grossman, j. morlet, s.i.a.m. j. math. anal. 15, 723 (1984); in: _mathematics and physics, lectures on recent results_, ed. l. streit (world scientific, singapore, 1985).
m. schroeder, _fractals, chaos, power laws_ (freeman, 1991).
fig. 4. singularity spectrum $d(h)$ of the pollen time series. the bounds of the support of $d(h)$ are $h_{min}$, which corresponds to an antipersistent process, and $h_{max}$, which corresponds to a regular process. the hölder exponent for the support dimension, $h(q=0)$, is 0.90, which characterizes a persistent process.
|
the most abundant biological particles in the atmosphere are pollen grains and spores. self-protection from pollen allergy is possible when information about the future pollen content of the air is available. in spite of the importance of airborne pollen concentration forecasting, it has not been possible to predict pollen concentrations with great accuracy, and about 25% of the daily pollen forecasts have resulted in failures. previous analysis of the dynamic characteristics of atmospheric pollen time series indicates that the system can be described by a low-dimensional chaotic map. we apply the wavelet transform to study the multifractal characteristics of an airborne pollen time series. we find the persistence behaviour associated with low pollen concentration values and with the most rare events of highest pollen concentration values. the information and the correlation dimensions correspond to a chaotic system showing loss of information with time evolution.
|
the shape of interstellar and circumstellar grains is still an outstanding issue .the complexity of the electromagnetic scattering problem limits the theoretical modeling of the shapes which can be studied to spheres , infinite cylinders and spheroids .however , the shape of many interstellar grains are expected to be non - spherical and maybe even highly irregular .one way to deal theoretically with irregular particles and clusters of dust grains is to assume that they consist of touching spheres .with such an assumption it is possible to construct many distinctly different morphologies which can then be compared with observations .the problem of evaluating the extinction efficiency ( ) is that of solving maxwell s equations with appropriate boundary conditions at the cluster surface . for a homogeneous single spherea solution was formulated by lorenz ( 1890 ) and mie ( 1908 ) and the complete formalism is therefore often referred to as the lorenz - mie theory . a complementary solution based on the expansion of scalar potentialswas given by debye ( 1909 ) .a detailed description of this exact electromagnetic solution can be found in the book by bohren & huffman ( 1983 ) . for a review on exact theories and numerical techniques for computing the scattered electromagnetic field by clusters of particleswe refer the reader to the textbook by mishchenko et al .( 2000 ) . for a comprehensive review on the optics of cosmic dust see voshchinnikov ( 2002 ) and videen & kocifaj ( 2002 ) . to investigate clustering effects we have computed and analyzed the extinction of different polycrystalline graphitic and silicate clusters .we have chosen clusters ranging from small to large , and which are either sparse or compact , to evaluate how the extinction is influenced by the structure .we focus on clusters consisting of 4 , 7 , 8 , 27 , 32 and 49 touching polycrystalline spheres with a radius of 10 nm .the extinction of the clusters is calculated using two rigorous methods ga ( grardy & ausloos 1982 ) , and the generalized multi - particle mie ( gmm ) solution ( xu 1995 ; 1997 ) and two discrete dipole approximation ( dda ) methods marcodes ( markel 1998 ) and ddscat ( draine & flatau 2000 ) to test how well these latter approximations perform when applied to clusters with different morphology .ddscat is as such an exact solution if enough dipoles are used in the approximation of the target .it has been used in a wide range of scattering problems concerning clusters of particles including the extinction of interstellar dust grains ( e.g. bazell & dwek 1990 ; wolff et al .1994 ; stognienko et al . 1995 ; fogel & leung 1998 ; vaidya et al .the rigorous solutions are only exact if a high enough number of multi - poles is treated .we consider three - dimensional clusters of identical touching spherical particles , of radius r , arranged in three different geometric configurations : prefractal ( frac ) , simple cubic ( sc ) , and face - centered cubic ( fcc ) .these structures do not have shapes expected to be found in space , but will provide us with boundary conditions for the problem of calculating the extinction of clusters of grains of different morphologies . the snowflake prefractal of order is recursively constructed starting with the initiator which is just a single sphere .next , the 1st order prefractal , or generator , is built up by pasting together seven copies of the initiator as shown in the top left hand corner frame of fig.1 . 
for higher orders $n$, pasting together seven copies of the order-$(n-1)$ prefractal according to the generator's pattern yields the order-$n$ prefractal. for example, the bottom left hand corner frame of fig. 1 displays the snowflake prefractal of order 2. as its order increases and goes to infinity, the snowflake prefractal will become the snowflake fractal. vicsek (1983) has shown that, regardless of their order, all the snowflake prefractals have the same fractal dimension, which is exactly the dimension of the snowflake fractal. this value is close to that obtained for random cluster-cluster aggregation models for grain growth (meakin 1988; botet and jullien 1988; meakin & jullien 1988; wurm & blum 1998). as a contrast to the prefractal structure, we also consider some compact crystalline structures, namely face-centered cubic and simple cubic; see e.g. kittel (1986) for a discussion of crystal structures. all of the clusters we consider are symmetric, see fig. 1, and consist of spheres with radii of 10 nm. for our clusters we consider two different materials: graphite and silicates. graphite can be characterized by two different dielectric functions, $\epsilon_{\perp}$ and $\epsilon_{\parallel}$, corresponding to the electric-field vector being perpendicular and parallel to the symmetry axis of the crystal (the $c$-axis), which is perpendicular to the basal plane. it is far easier to experimentally determine $\epsilon_{\perp}$ than $\epsilon_{\parallel}$, because graphite cleaves readily along the basal plane and hence reflectivity measurements can be made with normally incident light; in contrast, it is very difficult to prepare suitable optical surfaces parallel to the $c$-axis. we use the dielectric functions $\epsilon_{\perp}$ and $\epsilon_{\parallel}$ of graphite derived by draine & lee (1984), covering the region from the far-ir to the far-uv. for the silicates we use the dielectric function of astronomical silicates in the form given by weingartner & draine (2001). the dielectric functions of the two materials are shown in fig. 2. as discussed by draine & lee (1984), the dielectric constants for graphite are both temperature and size dependent. the graphite data obtained from bruce draine (http://www.astro.princeton.edu//dust/dust.diel.html) are given for one specific particle radius. when using the data for other grain sizes it is necessary to correct the data according to eq. 2 in draine & lee (1984); the corrected form of this equation should be taken from the errata (draine & lee 1987). the effect of the size corrections on the (average) optical constants is shown in fig. 3 for different grain sizes. the correction is most significant in the long wavelength range. in this work we deal with the anisotropy of graphite by assuming that, in all our clusters, each individual particle is polycrystalline, having a dielectric function given by the arithmetic average of $\epsilon_{\perp}$ and $\epsilon_{\parallel}$. for a polycrystal the arithmetic average is a realistic model for the effective dielectric function, since avellaneda et al. (1988) have shown that it is an attainable upper bound for its dielectric function. in stellar environments grains are most likely to grow as polycrystalline or even amorphous particles rather than as mono-crystalline ones (gail & sedlmayr 1984; sedlmayr 1994). in the usual ``1/3 - 2/3'' approximation each individual particle is treated as mono-crystalline, where 1/3 of the cluster particles are assumed to have dielectric function $\epsilon_{\parallel}$ and the remaining 2/3 to have dielectric function $\epsilon_{\perp}$. this approximation has been shown by draine (1988) and draine & malhotra (1993) to have a surprisingly good accuracy for graphite grains with small radii.
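as an illustration of the cluster geometries, the following python sketch generates sphere-centre coordinates for the snowflake prefractals and for a simple-cubic cluster of touching 10 nm spheres. it is not the generation code used by the authors, and it assumes that the generator consists of a central sphere with six copies placed symmetrically along the coordinate axes; the function names are purely illustrative.

import numpy as np

def snowflake_prefractal(order, r=10.0):
    """centres of touching spheres (radius r, in nm) for the order-n snowflake prefractal.
    assumes the generator is a central sphere plus six face-touching neighbours."""
    centers = np.zeros((1, 3))                      # initiator: one sphere at the origin
    for n in range(1, order + 1):
        s = 2.0 * r * 3 ** (n - 1)                  # offset between copies at this level
        offsets = np.array([[0, 0, 0], [s, 0, 0], [-s, 0, 0],
                            [0, s, 0], [0, -s, 0], [0, 0, s], [0, 0, -s]])
        centers = np.concatenate([centers + o for o in offsets])
    return centers

def simple_cubic(nx, r=10.0):
    """centres of an nx**3 simple-cubic cluster of touching spheres."""
    g = 2.0 * r * np.arange(nx)
    return np.array(np.meshgrid(g, g, g)).reshape(3, -1).T

frac7  = snowflake_prefractal(1)    # 7 spheres
frac49 = snowflake_prefractal(2)    # 49 spheres
sc27   = simple_cubic(3)            # 27 spheres

coordinate lists of this kind are the type of geometric input required by multi-sphere codes such as gmm01f.f, or by the sphere-union target option of ddscat described below.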
however , for larger particles assuming polycrystalline particles seems much more viable , see e.g. rouleau et al .( 1997 ) for a discussion about different ways of obtaining an average dielectric function for graphite .multi - particle scattering shows effects both of interaction and interference of scattered waves from the particles which can give rise to distinct features not seen in single - particle scattering .the two rigorous solutions presented here are generalized mie theories giving each a complete solution to the multi - sphere light scattering problem .they are both based on the exact solution of maxwell s equations for arbitrary cluster geometries , polarization and incidence direction of the exciting light .a rigorous and complete solution to the multi - sphere light scattering problem has been given by grardy & ausloos ( ga ) ( 1980 ; 1982 ; 1983 ; 1984 ) as an extension of the mie - ruppin theory ( mie 1908 ; ruppin 1975 ) .the solution is obtained by expanding the various fields involved in terms of vector spherical harmonics ( vsh ) .boundary conditions are extended to account for the possible existence of longitudinal plasmons in the spheres .high - order multi - polar electric and magnetic interaction effects are included .we consider a cluster of homogeneous spheres of radius and dielectric function , embedded in a matrix of dielectric constant and submitted to a plane polarized time harmonic electromagnetic field .the total scattered field from the cluster is represented as a superposition of individual fields scattered from each sphere . the electromagnetic field impinging on each sphere consist of the external incident wave and the waves scattered by the other spheres .for any sphere , the incident , internal and scattered fields are expressed in vsh centered at the sphere origin .the boundary conditions on its surface are solved by transforming all relevant field expansions into the sphere coordinate system , yielding a system of equations whose solution is the -polar approximation to the electromagnetic response of the cluster . for a short description of this method see andersen et al .( 2002 ) . a neat analytical far - field solution to the electromagnetic scattering by an aggregate of spheres in a fixed orientation is provided by xu ( 1995 ; 1997 ; 1998a ; 1998b and xu et al . 1999 ) , and is implemented in the fortran code gmm01f.f ( gmm ) available at http://www.astro.ufl.edu/ . as any other rigorous solution to the multi - particle scattering , his approach considers two cooperative scattering effects : interaction and interference of scattered waves from individual particles ( xu & khlebtsov 2003 ) .nevertheless , his treatment of the second effect is novel . when a plane wave is incident upon the particles ( scatterers ) of a cluster , it has a phase difference determined by the geometrical configuration and spatial orientation of the clusterlikewise , far away from the scatterers , the waves scattered from them also have well defined phase relations that depend on the scattering direction . 
these incident andscattered phase differences give rise to far - field interference effects .xu ( 1997 ) includes the incident - wave phase terms in the incident - field expansions centered on each scatterer , and the scattered - wave phase terms in the single - field representation of the total scattered far - field from the whole cluster .this way of treating the interference effects is in practical calculations quite efficient because then the required multipole order of the field expansions will depend only on the size of the individual particles and not on the distance between them , that is , it will not depend on the size of the cluster ( xu & khlebtsov 2003 ) .this allows in principle the treatment of clusters of arbitrary size ; the only limiting factor being the availability of computer memory . in general , an adequate estimate for the field - expansion truncation of all the scatterers in a cluster is given by the wiscombe s criterion ( wiscombe 1980 ) for the field - expansion truncation of a single sphere with size parameter , .there are a number of cases , however , where this criterion grossly underestimates the number of multipoles needed in the scattering calculations .this is the case for our clusters of graphitic spheres as it is for the gold nano - bispheres discussed by xu & khlebtsov ( 2003 ) , and the soot bispheres discussed by mackowski ( 1994 ) .for example , to get a converged solution to the multi - particle scattering problem at wavelength 1.047 microns for a cluster of 8 graphitic spheres of radii 10 nm , arranged in a simple cubic structure , the actual single - sphere expansion truncation is 44 whereas the wiscombe s criterion estimate is just 3 . finally , gmm implements two methods for solving the system of linear equations arising in multi - particle scattering , namely the order of scattering method of fuller and kattawar ( 1988a ; 1988b ) and the biconjugate gradient method ( gutknecht 1993 ) .the discrete dipole approximation ( dda ) - also known as the coupled dipole approximation - method is one of several discretization methods ( e.g. draine 1988 ; hage & greenberg 1990 ) for solving scattering problems in the presence of a target with arbitrary geometry .the discretization of the integral form of maxwell s equations is usually done by the method of moments ( harrington 1968 ) .purcell & pennypacker ( 1973 ) were the first to apply this method to astrophysical problems ; since then , the dda method has been improved greatly by draine ( 1988 ) , goodman et al .( 1991 ) , draine & goodman ( 1993 ) , draine & flatau ( 2000 ) , markel ( 1998 ) , and draine ( 2000 ) .the dda method has gained popularity among scientists due to its clarity in physical principle and the fortran implementation which have been made publicly available by e.g.draine & flatau ( 2000 ; ddscat ) and by markel ( markel 1998 ; marcodes ) . within the framework of the dda method , when considering the problem of scattering and absorption of linearly polarized light of wavelength by an isotropic grain , the grain is replaced by a set of discrete elements of volume with relative dielectric constant and dipole moments , whose coordinates are specified by vectors .the equations for the dipole moments can be written using simple considerations based on the concept of the exciting field , which is equal to the sum of the incident wave and the fields of the rest of the dipoles in a given point . 
in this workwe use the discrete dipole approximation code version 5a10 ( ddscat ; draine & flatau 1994 ; draine & flatau 2000 ) , available at + http://www.astro.princeton.edu//ddscat.html .this version contains a new shape option where a target can be defined as the union of the volumes of an arbitrary number of spheres . in ddscatthe considered grain / cluster is replaced by a cubic array of point dipoles .the cubic array has numerical advantages because the conjugate gradient method can be efficiently applied to solve the matrix equation describing the dipole interactions ( goodman et al .1991 ) .there are three criteria for the validity of ddscat : + ( 1 ) the wave phase shift over the distance between neighboring dipoles should be less than 1 for calculations of total cross sections and less than 0.5 for phase function calculations .here , is the complex refractive index of the target material + ( 2 ) must be small enough to describe the object shape satisfactorily .+ ( 3 ) the refractive index must fulfill . for materials with large refractive indexes ( ) , draine & goodman ( 1993 )have shown that especially the absorption is overestimated by dda .as illustrated in fig.2 , graphite has a high refractive index throughout most of the range m , showing that for graphite the region of applicability of ddscat is rather small , in fact , the criterion is only fulfilled for wavelengths shorter than m ( ) ; see the inset in the left frame of fig.2 .however , relaxing the criterion a little , to account for the variability of in the region below m , the upper limit can be pushed up to m ( ) .another efficient code based on dda is the markel coupled dipole equation solver ( marcodes ; markel 1998 ) available at + http://atol.ucsd.edu/ / scatlib/. this code is designed to approximate the spherical particles in an arbitrary cluster with dipoles ( this corresponds to in ddscat ) .the program is in principle applicable to clusters of arbitrary geometry consisting of small spherical particles , but it is most efficient computationally for sparse clusters ( i.e. when the volume fraction is very low ) with significant number of particles . unlike ddscat , the program does not use the fast fourier transformation ( fft ) because this might significantly decrease its computational performance on clusters with a low volume filling fraction . when the volume filling fraction is close to unity , algorithms utilizing fft will be much faster .the dimensionality of the coordinates of particles in marcodes require a special consideration . by replacing real particles by point dipoles located at their centers the strength of their interactionis significantly underestimated . 
in order to correct the interaction strength, the author of marcodes introduces geometrical intersection of particles. all coordinates are defined in terms of the distance between neighboring dipoles, which is given by $d=(4\pi/3)^{1/3}\,r$, i.e. the spacing for which a cubic cell of side $d$ has the same volume as a sphere of radius $r$. so, for example, if two particles have radii 10 nm, then the distance between the dipoles is 16.12 nm. this suggested phenomenological procedure allows marcodes to be more accurate than the usual single-dipole approximation, since the intersection produces some analogy of including higher multi-pole interactions between particles. the fact that the program only uses a single dipole for each particle in the cluster has significant benefits in computation efficiency when compared to other multi-polar approaches such as ga, gmm or ddscat. the version of marcodes tested here can not calculate a face-centered cubic structure of touching particles because in this case the lattice cells representing neighboring particles will touch only at the corners, giving as a consequence the spectrum corresponding to non-touching particles. within the ga method the extinction of a cluster is calculated in the $l$-polar approximation. in general, the smallest l needed for the convergence of the extinction differs for different regions of the optical spectrum; for graphite in particular, for a chosen fixed accuracy, the longer the wavelength, the higher the polar orders that are required in the calculations. this happens because the magnitude of the refractive index of graphite increases with wavelength up to a certain wavelength, where it reaches a plateau. generally, the extinction of graphitic clusters needs to be calculated to a higher polar order than that of the silicate clusters to ensure convergence. in the uv-visible range, by accepting an accuracy of 5% in the computation of the extinction, we can use l = 5 for open graphitic clusters and l = 7 for compact graphitic ones; we expect this to hold for clusters of up to a few tens of particles. in table 1, the cut-off polar order l used in the calculation of the extinction of all the clusters can be found. the clusters presented in this paper have three different geometries (see fig. 1): prefractal (frac), face-centered cubic (fcc) and simple cubic (sc); l designates the polar order achieved with the ga method. the theory behind gmm is in many ways similar to that behind ga, differing from it in its use of an asymptotic form of the vector translational addition theorem in the calculation of the total scattered wave in the far field, which avoids the severe numerical problems encountered by ga when computing the latter for clusters of a large number of particles.
as said earlier, gmm uses either the order of scattering method or the biconjugate gradient method in solving the multi-particle scattering problem. furthermore, when the former method fails in finding a solution, gmm switches to the latter method, thus providing an answer in the majority of cases, although it may have to use very high polar orders to achieve a desired accuracy. for example, to achieve an accuracy of four significant figures in the extinction of the two small clusters frac7 and sc8, gmm needs to use polar orders as high as l=44 for wavelengths around 1 micron. ga, on the other hand, proceeds one polar order at a time, and when it does not converge, it is not possible to establish the accuracy of the solution. xing & hanner (1997) find that the typical number of dipoles needed with ddscat to obtain a reliable computational result can be determined by calculating the minimum number of dipoles needed per particle. when a particle of radius $a$ is represented by a 3-dimensional array of $n$ dipoles with spacing $d$, its volume is $\frac{4}{3}\pi a^{3}$, which must be equal to $n d^{3}$, hence $n=\frac{4\pi}{3}\,(a/d)^{3}$, since $d$ is related to the wave phase shift $|m|kd$ (draine & flatau 2000). for instance, in the uv around 30 dipoles are needed for each of our graphitic spheres, and at long wavelengths just one dipole seems to be enough, indicating that marcodes should be comparable with ddscat at the longer wavelengths. as illustrated in fig. 4, for graphitic clusters increasing the number of dipoles used in the ddscat calculation does lead to a solution which is slightly closer to the exact result. however, for the frac7 cluster, doubling the number of dipoles (from a lattice of 46656 dipoles, of which 6265 represented the cluster, i.e. 895 dipoles per particle, to a lattice of 110592 dipoles, of which 14721 represented the cluster, i.e. 2103 per particle) only leads to a very slight improvement of the solution. for example, the solution using the lower number of dipoles is about 5% off the exact solution in this wavelength range. doubling the number of dipoles doubles the computation time while the solution is only improved by less than 1%. according to eq. (2) we are using more than an adequate number of dipoles for both solutions, indicating that a reliable result in the xing & hanner (1997) terminology is less accurate than 5%. as seen from fig. 5, increasing the number of dipoles used in ddscat is not always a guarantee of getting a result which is closer to the exact solution. in the figure, two different calculations of the frac49 cluster composed of graphitic spheres are shown; the number of dipoles (lattices of 64000 and 46656 dipoles, resulting in 2223 and 1323 dipoles in the target, i.e. about 45 and 27 dipoles per particle, respectively) is almost doubled between the two calculations. a comparison with the ga calculations taken to polar order l=6 shows that almost doubling the number of dipoles improves the result of the ddscat calculations by 10%. the ga solution is exact in the short wavelength range and coincides completely with a calculation of gmm taken to l = 19. at longer wavelengths the ga solution is not fully converged, but we expect it to be in the vicinity of the exact solution. the ddscat calculations show peculiar behavior at longer wavelengths, since the solution using the lower number of dipoles comes much closer to the ga solution than the solution using 30% more dipoles. this indicates that the lower number of dipoles gives a better accuracy for longer wavelengths while the higher number of dipoles gives a better accuracy for shorter wavelengths.
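the dipole-number bookkeeping above can be summarized in a few lines of python. the sketch below is only indicative: it assumes the volume-matching relation and the phase-shift condition quoted above, it treats |m| as a single representative value (|m| = 3 is an assumed number, not one quoted in the text), and it turns the phase-shift bound into a strict equality.

import numpy as np

def dipoles_per_sphere(radius_nm, wavelength_nm, m_abs, phase_limit=1.0):
    """rough lower bound on the number of dda dipoles per sphere:
    n * d**3 = (4/3) pi a**3, with the phase-shift condition |m| k d <= phase_limit."""
    k = 2.0 * np.pi / wavelength_nm           # wavenumber in 1/nm
    d_max = phase_limit / (m_abs * k)         # largest allowed dipole spacing
    return (4.0 * np.pi / 3.0) * (radius_nm / d_max) ** 3

def marcodes_spacing(radius_nm):
    """marcodes-style inter-dipole distance: one volume-conserving dipole per sphere."""
    return (4.0 * np.pi / 3.0) ** (1.0 / 3.0) * radius_nm   # about 16.12 nm for r = 10 nm

print(dipoles_per_sphere(10.0, 100.0, 3.0))   # a few tens of dipoles per 10 nm sphere in the uv
print(marcodes_spacing(10.0))                 # 16.12 nm

with these assumptions one indeed obtains a few tens of dipoles per 10 nm graphitic sphere in the uv and of order one dipole (or fewer) at long wavelengths, in line with the estimates quoted above.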
in the case of a discrete dipole array ,the dipoles in the interior will be effectively shielded , while the dipoles located on the target surface are not fully shielded and , as a result , absorb energy from the external field at an excessive rate . in principlethe excess absorption which is introduced by having a large fraction of the dipoles at the surface of the particle should be minimized when introducing more dipoles , but fig.5 indicates that it is not necessarily a linear effect .it should be emphasized , however , that the refractive index of the graphitic material have at m ( see fig.2 ) which means that the ddscat calculations are outside the range recommended by draine & goodman ( 1993 ) for its use , so an overestimate of the absorption should be expected . despite this , increasing the number of dipoles should still lead to an improved solution which is in contrast to what we find .this leaves open the question of `` how many dipoles are enough to assure a certain accuracy in a ddscat computation '' .we therefore strongly recommend that this question gets investigated in much more detail in the future .to set up a comparison baseline , we computed the extinction of single graphitic and silicate spheres , both of radius 10 nm , using the two rigorous solutions ( ga and gmm ) and the two dda codes ( ddscat and marcodes ) . since for a single sphere both the ga and gmm theories reduce to the mie theory , the ga results are exactly the same as those of gmm and equal to the mie solution , regardless of the sphere s material .the two dda codes , however , give results that differ markedly for both graphite and silicate . for graphite , ddscat and the mie solution coincide up to m , where from ddscat starts to diverge slowly from the mie solution .in contrast , marcodes differs from the mie solution in the region m .this indicates that for graphite ddscat might be the better choice of code for wavelengths m while marcodes is the better choice of code for longer wavelengths . for the silicate , ddscat coincides with the mie solution in the whole wavelength range that we studied ( m ) while marcodes again differs from the mie solution in the region m .next we study the effect of clustering in the computation of the extinction of graphitic and silicate clusters . as for single spheres ,graphitic clusters allow us to better understand the nuances of the different computational methods .for the frac7 cluster ( fig.6 ) and the sc8 cluster ( fig.7 ) the gmm and ga solution completely agree up to m showing that the ga solution is indeed converged within the uv - vis region of the spectrum for these clusters . at wavelengths m the ga is not fully converged as can be seen from the fact that it underestimates the extinction for both clusters at longer wavelengths .ddscat agrees within 5% with the exact solution up to m at longer wavelengths it underestimates the extinction but only slightly more than the not fully converged ga solution .marcodes is 5% off compared to gmm at m for the graphitic frac7 cluster and 10% off for the graphitic sc8 cluster at the same wavelength . at wavelengths shortward and longward of m marcodes significantly underestimates the extinction . for the graphitic frac7 cluster ( fig.6 )marcodes performs better than for the sc8 cluster ( fig.7 ) . 
according to markel et al. (2000), the performance of marcodes can be improved by altering the intersection parameter which determines whether the particles touch or overlap, but we have not investigated that here. nevertheless, the fact that marcodes gives higher extinction than gmm in some cases and lower in others suggests that the determination of the rather arbitrary optimal intersection parameter is a very complex problem indeed. for the silicate frac7 cluster (fig. 7) the ga, ddscat and marcodes solutions all coincide completely with the exact solution at the shorter wavelengths. at longer wavelengths the ga and ddscat solutions coincide but overestimate the extinction by about 5%, while marcodes underestimates the extinction at long wavelengths for the frac7 cluster. for the sc8 silicate cluster (fig. 8) ddscat and ga coincide with the exact solution, while marcodes underestimates the extinction in parts of the wavelength range. this suggests that the ga solution is fully converged for the silicate frac7 cluster within the whole wavelength region considered, and for the silicate sc8 cluster over most of it. for the silicate clusters the same number of dipoles was used as for the graphitic clusters in the ddscat calculations. ddscat deviates less than 5% from the exact solution for both of the silicate clusters, showing that it performs much better for materials with smaller refractive indices than graphite. all of them, ga, gmm and ddscat, required fairly large amounts of computer time; we point out, however, that all calculations were done on single processor machines (typically with 800 mhz and 256 mb memory) and took at most a few days. the computation time is determined by the accuracy required, and even for a reasonable accuracy it is necessary to use a very fine discretization, i.e. a lot of dipoles for the dda and high multi-pole orders for the ga and gmm. this leads to a large number of linear equations which need to be solved for the determination of the scattered electromagnetic field, since the scattering matrix is obtained by averaging the scattering matrices over a large number of individual particles. generally the computation time for a graphitic cluster was 3 times higher than that for the equivalent silicate cluster because of the much higher refractive index of graphite. marcodes is by far the fastest of all the methods, but its accuracy is sometimes low, especially for compact clusters. regarding documentation, ddscat, marcodes and gmm have well documented user guides which make these programs fairly user friendly. for the gmm code, however, we needed some clarifying correspondence with its author. a new version of ddscat is now released (ddscat 6.0) which among other things has mpi capability for parallel computations of different target orientations (at a single wavelength); this should be very useful for calculations of averages over orientations (b.t. draine, pers. comm.). gmm and marcodes are also continuously being improved by their authors. we now compare the extinction calculated with the different methods for prefractals and compact clusters. in fig. 8, the ga calculation for frac49 is compared to that of fcc32 and sc27 around the 2200 angstrom absorption feature. here the conclusion would be (1) the prefractal cluster has a shift in peak position and (2) the extinction of the prefractal clusters is of the same order of magnitude as that of the compact clusters.
at long wavelengthsall the considered cluster display an extinction of the same order of magnitude .fig.9 shows the dda calculations for the compact sc27 cluster and the sparse frac49 cluster .a shift in peak position between the prefractal and the compact cluster is observed around the 2200 peak .ddscat tends to indicate that the prefractal clusters have a somewhat enhanced extinction around the 2200 peak . at long wavelengthsboth codes show slightly higher extinction for the compact sc27 cluster than for the frac49 cluster . for the silicate clusters marcodes would lead to the conclusion that the prefractal clusters have lower extinction at shorter wavelengths ( m ) than the considered compact clusters while with ddscat one would conclude that the extinction was of the same order of magnitude for the whole wavelength range in accordance with the ga result .the ga calculations therefore suggest that the extinction of prefractal and small compact clusters are on the same order of magnitude making it difficult to distinguish the different cluster morphology by observations . with the two dda codesone might just as well reach the opposite ( erroneous ) conclusion .we have performed extinction calculations for clusters consisting of polycrystalline graphitic and silicate spheres in the wavelength range to m . for the computationswe have used the rigorous multi - polar theory of grardy & ausloos ( 1982 ; ga ) , the rigorous generalized multi - particle mie - solution by xu ( 1995 ; gmm ) ; the discrete dipole approximation using one dipole per particle by markel ( 1998 ; marcodes ) and the discrete dipole approximation using multi dipoles by draine & flatau ( 2000 ; ddscat ) .we have compared the extinction of open prefractal clusters and compact clusters .the prefractal and small compact clusters display an extinction of the same order of magnitude as when computed with the exact methods ( ga and gmm ) . at shorter wavelengths around the 2200 featurethe graphitic prefractal clusters seem to have a stable peak position . overall , ddscat performs better than marcodes for all of the clusters . with ddscat, however , there is the unresolved question of how - many - dipoles are needed to ensure a fairly accurate result , this number seems to follow a non - linear pattern so a more accurate result can not always be expected by doubling the number of dipoles ( see fig.5 ) .marcodes is computationally much faster than the ddscat , gmm or ga method .the gmm computations were sufficiently fast so that convergence was reached over the whole studied wavelength range . on the other hand , our available ga program was slower and we could obtain converged results only in the uv - visible wavelength range . which of the four approaches is best to use for calculating the extinction of cluster particles will depend on the type of problem one wants to address and the accuracy needed .
|
dust particles in space may appear as clusters of individual grains . the morphology of these clusters could be of a fractal or more compact nature . to investigate how the cluster morphology influences the calculated extinction of different clusters in the wavelength range considered , we have performed extinction calculations of three - dimensional clusters consisting of identical touching spherical particles arranged in three different geometries : prefractal , simple cubic and face - centered cubic . in our calculations we find that the extinction coefficients of prefractal and compact clusters are of the same order of magnitude . for the calculations , we have performed an in - depth comparison of the theoretical predictions of extinction coefficients of multi - sphere clusters derived by rigorous solutions , on the one hand , and popular discrete - dipole approximations , on the other . this comparison is essential if one is to assess the degree of reliability of model calculations made with the discrete - dipole approximations , which appear in the literature quite frequently without an adequate accounting of their validity .
|
complex systems together with their dynamical behavior known as complexity are thought to pervade much of the natural , informational , sociological , and economic world .a unique , all - encompassing definition of a complex system is lacking - worse still , such a definition would probably end up being too vague . instead , such complex systems are better thought of in terms of a list of common features which distinguish them from ` simple ' systems , and from systems which are just ` complicated ' as opposed to being complex .although a unique list of complex system properties does not exist , most people would agree that the following would typically appear : feedback and adaptation at the macroscopic and/or microscopic level , many ( but not too many ) interacting parts , non - stationarity , evolution , coupling with the environment , and observed dynamics which depend upon the particular realization of the system .in addition , complex systems have the ability to produce large macroscopic changes which appear spontaneously but have long - lasting consequences .such large changes might also be referred to as ` innovations ' , ` breakthroughs ' , ` gateway events ' or ` punctuated equilibria ' depending on the context .alternatively , the particular trajectory taken by a complex system can be thought of as exhibiting ` frozen accidents ' .understanding the functionality of complex systems is of paramount importance , from both practical and theoretical viewpoints .such functionality is currently being addressed through the study of ` collectives ' .the down - side of labelling such a wide range of systems as belonging to the same ` complex ' family , is that instead of saying something about everything , one may end up saying nothing very much about anything .indeed , one may end up with little more than the vague notion that ` ant colonies are like vehicular traffic , which is like financial markets , which are like fungal colonies etc . ' . on the other hand, it would be a mistake to focus too narrowly on a specific example of a complex system since the lessons learned may not be transferable - worse still , they may be misleading or plain wrong in the context of other complex systems .such is the daunting task facing researchers in complex systems : based on studies of a few very specific complex system models , how can one extract general theoretical principles which have wide applicability across a range of disciplines ? this explains why , to date , there are very few truly universal theoretical principles or ` laws ' to describe complex systems . as pointed out by john casti on p. 213 of ref . , ` .... 
a decent mathematical formalism to describe and analyze the [ so - called ] el farol problem would go a long way toward the creation of a viable theory of complex , adaptive systems ' .the rationale behind this statement is that the el farol problem , which was originally proposed by brian arthur to demonstrate the essence of complexity in financial markets , incorporates the key features of a complex system in an everyday setting .very briefly , the el farol problem concerns the collective decision - making of a group of potential bar - goers , who repeatedly try to predict whether they should attend a potentially overcrowded bar on a given night each week .they have no information about the others predictions .indeed the only information available to each agent is global , comprising a string of outcomes ( ` overcrowded ' or ` undercrowded ' ) for a limited number of previous occasions .hence they end up having to predict the predictions of others .no ` typical ' agent exists , since all such typical agents would then make the same decision , hence rendering their common prediction scheme useless .this simple yet intriguing problem has inspired a huge amount of interest in the physics community over the past few years .reference , which was the first work on the full el farol problem in the physics community , identified a minimum in the volatility of attendance at the bar with increasing adaptivity of the agents . with the exception of ref . , the physics literature has instead focussed on a simplified binary form of the el farol problem as introduced by challet and zhang .this so - called minority game ( mg ) is discussed in detail in refs . ) and ref . .the minority game concerns a population of heterogeneous agents with limited capabilities and information , who repeatedly compete to be in the minority group .the agents ( e.g. people , cells , data - packets ) are adaptive , but only have access to global information about the game s progress . in both the el farol problem and the minority game , the time - averaged fluctuations in the system s global outputare of particular importance for example , the time - averaged fluctuations in attendance can be used to assess the wastage of the underlying global resource ( i.e. bar seating ) . despite its appeal ,the el farol problem ( and the minority game in particular ) is somewhat limited in its applicability to general complex systems , and hence arguably falls short of representing a true generic paradigm for complex systems .first , the reward structure is simultaneously too restrictive and specific .agents in the minority game , for example , can only win if they instantaneously choose the minority group at a particular timestep : yet this is not what investors in a financial market , for example , would call ` winning ' .second , agents only have access to global information . 
in the natural world , most if not all biological and social systems have at least some degree of underlying connectivity between the agents , allowing for local exchange of information or physical goods .in fact it is this interplay of network structure , agent heterogeneity , and the resulting functionality of the overall system , which is likely to be of prime interest across disciplines in the complex systems field .three enormous challenges therefore face any candidate complex systems theory : it must be able to explain the specific numerical results observed for the el farol problem and its variants ; it should also be able to account for the presence of an arbitrary underlying network ; yet it should be directly applicable to a much wider class of ` game ' with a variety of reward structures .only then will one have a hope of claiming some form of generic theory of complex systems . and only then will the design of multi - agent collectives to address both forward and inverse problems , become relatively straightforward .it might then be possible to achieve the holy grail of predicting _ a priori _ how to engineer agents reward structures , or which specific game rules to invoke , or what network communication scheme to introduce , in order to achieve some global objective . for a detailed discussion of these issues , we refer to the paper of kagan tumer and david wolpert at this workshop .in this paper , we attempt to take a step in this more general direction , by building on the success of the crowd - anticrowd theory in describing both the original el farol problem , and the minority game and its variants .in particular , we present a formal treatment of the collective behavior of a generic multi - agent population which is competing for a limited resource .the applicability of the crowd - anticrowd analysis is _ not _ limited to mg - like games , even though we focus on mg - like games in order to demonstrate the accuracy of the crowd - anticrowd approach in explaining the numerical results .we also show how the crowd - anticrowd theory can be extended to incorporate the presence of networks .the theory is built around the crowding ( i.e. correlations ) in strategy - space , rather than the precise rules of the game itself , and only makes fairly modest assumptions about a game s dynamical behavior . 
to the extent that a given complex system mimics a general competitive multi - agent game , it is likely that the crowd - anticrowd approach will therefore be applicable .this would be a welcome development , given the lack of general theoretical concepts in the field of complex systems as a whole .the challenge then moves to understanding how best to embed the present theory within the powerful coin framework developed by wolpert and tumer for collectives .we have tried to aim this paper at a multi - disciplinary audience within the general complex systems community .the layout of the paper is as follows .section ii briefly discusses the background to the crowd - anticrowd framework .section iii provides a description of a wide class of complex systems which will be our focus : we call these b - a - r ( binary agent resource ) problems in recognition of the stimulus provided by arthur s el farol problem .the minority game is a special limiting case of such b - a - r problems .section iv develops the crowd - anticrowd formalism which describes a general b - a - r system s dynamics .section v considers the implementation of the crowd - anticrowd formalism , deriving analytic expressions for the global fluctuations in the system in various limiting cases .section vi applies these results to both the basic minority game and several generalizations , in the absence of a network .section vii then considers the intriguing case of b - a - r systems subject to an underlying network .in particular , it is shown analytically that such network structure may have important benefits at very low values of network connectivity , in terms of reduced wastage of global resources .the conclusion is given in section viii .the fundamental idea behind our crowd - based approach describing the dynamics of a complex multi - agent system , is to incorporate accurately the correlations in strategies followed by the agents .this methodology is well - known in the field of many - body theory in physics , particularly in condensed matter physics where composite ` super - particles ' are typically considered which incorporate all the strong correlations in the system .a well - known example is an exciton gas : each exciton contains one negatively - charged electron and one positively - charged hole . because of their opposite charges , these two particles move in opposite directions in a given electric field and attract each other strongly .however since this exciton super - particle is neutral overall , any two excitons will have a negligible interaction and hence the excitons move independently of each other .more generally , a given exciton ( or so - called ` excitonic complex ' ) may contain a ` crowd ' of electrons , together with a ` crowd ' of holes having opposite behavior . 
in an analogous way, the crowd-anticrowd theory in multi-agent games forms groups containing like-minded agents (`crowd') and opposite-minded agents (`anticrowd'). this is done in such a way that the strong strategy correlations are confined within each group, leaving weakly interacting crowd-anticrowd groups which then behave in an uncorrelated way with respect to each other. the first application of our crowd-based approach was to the full el farol problem. it yielded good quantitative agreement with the numerical simulations. however the analysis was complicated by the fact that the space of strategies in the el farol problem is difficult to quantify. this analysis becomes far easier if the system has an underlying binary structure, as we will see in sections iii and iv. however we note that the crowd-anticrowd analysis is not in principle limited to such binary systems. as indicated above, the crowd-anticrowd analysis breaks the agent population down into groups of agents, according to the correlations between these agents' strategies. each group contains a crowd of agents using strategies which are positively correlated, and a complementary anticrowd using strategies which are strongly negatively correlated to the crowd. hence a given group might contain a crowd of agents who all take the same action (e.g. by attending the bar en masse) together with agents who are all using the opposite strategy and hence act as an anticrowd (e.g. by staying away from the bar en masse). most importantly, the anticrowd cancels the net effect of the crowd _regardless_ of the current circumstances in the game, since the two strategies imply the opposite action in all situations. note that this collective action may be entirely involuntary in the sense that typical games will be competitive, and there may be very limited (if any) communication between agents. since all the strong correlations have been accounted for within each group, these individual groups will then act in an uncorrelated way with respect to each other, and hence can be treated as uncorrelated stochastic processes. the global dynamics of the system is then given by the sum of the uncorrelated stochastic processes generated by the groups. regarding the special limiting case of the minority game (mg), we note that there have been various alternative theories proposed to describe the mg's dynamics. although elegant and sophisticated, such theories have however not been able to reproduce the original numerical results over the full range of parameter space. what is missing from such theories is an accurate description of the correlations between agents' strategies: in essence, these correlations produce a highly correlated form of decision noise which can not easily be averaged over or added in. by contrast, these strong correlations take center-stage in the crowd-anticrowd theory, in a similar way to particle-particle correlations taking center-stage in many-body physics. figure 1 summarizes the generic form of the b-a-r (binary agent resource) system under consideration. at each timestep $t$, each agent (e.g. a bar customer, a commuter, or a market agent) decides whether to enter a game where the choices are action $+1$ (e.g. attend the bar, take route a, or buy) and action $-1$ (e.g. go home, take route b, or sell). we will denote the number of agents choosing $+1$ as $n_{+1}[t]$, and the number choosing $-1$ as $n_{-1}[t]$. for simplicity, we will assume here that all the agents participate in the game at each timestep, although this is not a necessary restriction. we can define an `excess demand' as $n_{+1}[t]-n_{-1}[t]$.
as suggested in figure 1, the agents may have connections between them: these connections may be temporary or permanent, and their functionality may be conditional on some other features in the problem. the global information available to the agents is a common memory of the recent history, i.e. the $m$ most recent global outcomes. for example for $m=2$, the possible forms of the recent history are the four bit-strings which we denote simply as 00, 01, 10 or 11. hence at each timestep, the recent history constitutes a particular bit-string of length $m$. for general $m$, there will be $2^{m}$ possible history bit-strings. these history bit-strings can alternatively be represented in decimal form: 0 corresponds to 00, 1 corresponds to 01 etc. a strategy consists of a predicted action, $+1$ or $-1$, for each possible history bit-string. hence there are $2^{2^{m}}$ possible strategies. for $m=2$, for example, there are therefore 16 possible strategies. in order to mimic the heterogeneity in the system, a typical game setup would have each agent randomly picking $s$ strategies at the outset of the game. in the minority game, these strategies are then fixed for all time; however, a more general setup would allow the strategies held, and hence the heterogeneity in the population, to change with time. the agents then update the scores of their strategies after each timestep, with $+1$ (or $-1$) as the pay-off for predicting the action which won (or lost). this binary representation of histories and strategies is due to challet and zhang. the rules of the game determine the subsequent game dynamics. the particular rules chosen will depend on the practical system under consideration. it is an open question as to how best to abstract an appropriate `game' from a real-world complex system. elsewhere, we discuss multi-agent games which are relevant to financial markets, and we plan to discuss possible choices of game for the specific case of foraging fungal colonies. the foraging mechanism adopted within such networks might well be active in other biological systems, and may also prove relevant to social/economic networks. insight into network functionality may even prove useful in designing and controlling arrays of imperfect nanostructures, microchips, nano-bio components, and other `systems on a chip'. the following rules need to be specified in order to fully define the dynamics of a multi-agent game (a minimal simulation sketch implementing one concrete choice of these rules is given after this discussion): * how is the global outcome, and hence the winning action, to be decided at each timestep? in a general game, the global outcome may be an arbitrary function of the values of $n_{+1}[t]$, $n_{-1}[t]$ and a resource level $l[t]$, with $l[t]$ representing the bar capacity in the el farol example. if the attendance $n_{+1}[t]$ exceeds $l[t]$, the bar will be overcrowded and can be assigned the corresponding global outcome; the winning action is then $-1$ (i.e. stay away). however more generally, the global outcome and hence winning action may be any function of present or past system data. furthermore, the resource level $l[t]$ need not be constant in time; whatever rule is chosen, the values of $n_{+1}[t]$, $n_{-1}[t]$ and $l[t]$ determine the global outcome which is then announced to the agents. we note that in principle, the agents themselves do not actually need to know what game they are playing. instead they are just fed with the global outcome: each of their strategies is then rewarded/penalized according to whether the strategy predicted the winning/losing action. one typical setup has agents adding/deducting one virtual point from strategies that would have suggested the winning/losing action. * how do agents decide which strategy to use?
typically one might expect agents to play their highest scoring strategy , as in the original el farol problem and minority game .however agents may instead adopt a stochastic method for choosing which strategy to use at each timestep .there may even be ` dumb ' agents , who use their worst strategy at each timestep .of course , it is not obvious that such agents would then necessarily lose , i.e. whether such a method is actually dumb will depend on the decisions of the other agents . * what happens in a strategy tie - breaksuppose agents are programmed to always use their highest - scoring strategy .if an agent were then to hold two or more strategies that are tied for the position of highest - scoring strategy , then a rule must be invoked for breaking this tie .one example would be for the agent to toss a fair coin in order to decide which of his tied strategies to use at that turn of the game .alternatively , an agent may prefer to stick with the strategy which was higher on the previous timestep .* what are the rules governing the connections between agents ? in terms of structure , the connections may be hard - wired or temporary , random or ordered or partially ordered ( e.g. scale - free network or small - world network ) . in terms of functionality, there is also an endless set of possible choices of rules .for example , any two connected agents might compare the scores of their highest scoring strategies , and hence adopt the predicted action of whichever strategy is overall the highest - scoring .just as above , a tie - break rule will then have to be imposed for the case where two connected agents have equal - scoring highest strategies for example a coin - toss might be invoked , or a ` selfish ' rule may be used whereby an agent sticks with his own strategy in the event of such a tie .the connections themselves may have directionality , e.g. perhaps agent can influence agent but not vice - versa . or maybe the connection ( e.g. between and ) is inactive unless a certain criterion is met .hence it may turn out that agent follows even though agent is actually following agent . * do agents have to play at each timestep ?for simplicity in the present paper , we will consider that this is the case .this rule is easily generalized by introducing a confidence level : if the agent does nt hold any strategies with sufficiently high success rate , then the agent does not participate at that timestep .this in turn implies that the number of agents participating at each timestep fluctuates .we do not pursue this generalization here , but note that its effect is to systematically prevent the playing of low - scoring strategies which are anticorrelated to the high - scoring strategies . for financial markets , for example ,this extra property of confidence - level is a crucial ingredient for building a realistic market model , since it leads to fluctuations in the ` volume ' of active agents .we emphasize that the set - up in figure 1 does not make any assumptions about the actual game being played , nor how the winning decision is to be decided .instead , one hopes to obtain a fairly general description of the type of dynamics which are possible _ without _ having to specify the exact details of the game . 
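as a concrete illustration of one specific choice of the above rules — minority-game rules with a fixed resource level at half the population, agents playing their highest-scoring strategy with a random tie-break, and one virtual point added or deducted per timestep — the following python sketch simulates a small b-a-r population. it is a minimal sketch rather than the simulation code used for any results quoted here, and the parameter values are arbitrary.

import numpy as np

def minority_game(num_agents=101, m=3, s=2, steps=5000, seed=0):
    """minimal b-a-r sketch with minority-game rules: the minority action wins."""
    rng = np.random.default_rng(seed)
    p = 2 ** m                                                      # number of possible histories
    strategies = rng.integers(0, 2, size=(num_agents, s, p)) * 2 - 1  # entries are actions +1/-1
    scores = np.zeros((num_agents, s))
    history = rng.integers(0, p)                                    # history bit-string in decimal form
    demand = np.zeros(steps)
    for t in range(steps):
        # each agent plays its currently highest-scoring strategy;
        # the tiny random term implements a random tie-break between equally scored strategies
        best = np.argmax(scores + 1e-9 * rng.random((num_agents, s)), axis=1)
        actions = strategies[np.arange(num_agents), best, history]
        demand[t] = actions.sum()                                   # excess demand n_{+1}[t] - n_{-1}[t]
        if demand[t] == 0:
            winner = rng.choice([-1, 1])
        else:
            winner = -int(np.sign(demand[t]))                       # minority action wins
        scores += np.where(strategies[:, :, history] == winner, 1.0, -1.0)   # reward/penalize strategies
        history = ((history << 1) | (1 if winner == 1 else 0)) % p          # append latest outcome to the m-bit history
    return demand

d = minority_game()
print(d.std())    # time-averaged fluctuation of the excess demand

the standard deviation of the excess demand computed at the end is the kind of time-averaged fluctuation referred to above as a measure of the wastage of the underlying global resource.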
on the other hand, situations can arise where the precise details of the strategy tie - break mechanism , for example , have a fundamental impact on the type of dynamics exhibited by the system and hence its overall performance in terms of achieving some particular goal .we refer to ref . for an explicit example of this , for the case of network - based multi - agent systems .figure 2 shows in more detail the example strategy space from figure 1 .a strategy is a set of instructions to describe what an agent should do in any given situation , i.e. given any particular history , the strategy then decides what action the agent should take .the strategy space is the set of strategies from which agents are allocated their strategies .if this strategy allocation is fixed randomly at the outset , then this acts as a source of quenched disorder .alternatively , the strategy allocation may be allowed to evolve in response to the system s dynamics . in the case that the initial strategy allocation is fixed , it is clear that the agents playing the game are limited , and hence may become ` frustrated ' , by this quenched disorder .the strategy space shown is known as the full strategy space fss , and contains all possible permutations of the actions and for each history . as such there are strategies in this space .the dimensional hypercube shows all strategies from the fss at its vertices .of course , there are many additional strategies that could be thought of , but which are nt present within the fss . for example , the simple strategies of persistence and anti - persistence are not present in the fss .the advantage however of using the fss is that the strategies form a complete set , and as such the fss displays no bias towards any particular action for a given history . to include any additional strategies like persistence and anti- persistence would mean opening up the strategy space , hence losing the simplicity of the b - a - r structure and returning to the complexity of arthur s original el farol problem .it can be observed from the fss , that one can choose a subset of strategies such that any pair within this subset has one of the following characteristics : * anti - correlated , e.g. and , or and .for example , any two agents using the ( ) strategies and respectively , would take the opposite action irrespective of the sequence of previous outcomes and hence the history .hence one agent will always do the opposite of the other agent . for example , if one agent chooses at a given timestep , the other agent will choose .their net effect on the demand ] . * uncorrelated , e.g. and .for example , any two agents using the strategies and respectively , would take the opposite action for two of the four histories , while they would take the same action for the remaining two histories .if the histories occur equally often , the actions of the two agents will be uncorrelated on average .a convenient measure of the distance ( i.e. 
closeness ) of any two strategies is the hamming distance which is defined as the number of bits that need to be changed in going from one strategy to another .for example , the hamming distance between and is , while the hamming distance between and is just .although there are strategies in the strategy space , it can be seen that one can choose subsets such that any strategy - pair within this subset is either anti - correlated or uncorrelated .consider , for example , the two groups and any two strategies within are uncorrelated since they have a hamming distance of . likewiseany two strategies within are uncorrelated since they have a relative hamming distance of .however , each strategy in has an anti - correlated strategy in : for example , is anti - correlated to , is anti - correlated to etc .this subset of strategies comprising and , forms a reduced strategy space ( rss ) . since it contains the essential correlations of the full strategy space ( fss ) , running a given game simulation within the rss is likely to reproduce the main features obtained using the fss .the rss has a smaller number of strategies than the fss which has . for , there are strategies in the rss compared to in the fss , whilst for there are strategies in the fss but only strategies in the rss .we note that the choice of the rss is not unique , i.e. within a given fss there are many possible choices for a rss .in particular , it is possible to create distinct reduced strategy spaces from the fss . in short, the rss provides a minimal set of strategies which ` span ' the fss and are hence representative of its full structure .the history of recent outcomes changes in time , i.e. it is a dynamical variable . the history dynamics can be represented on a directed graph ( a so - called digraph ) .the particular form of directed graph is called a de bruijn graph .figure 3 shows some examples of the de bruijn graph for and .the probability that the outcome at time will be a ( or ) depends on the state at time .hence it will depend on the previous outcomes , i.e. it depends on the particular state of the history bit - string .the dependence on earlier timesteps means that the game is not markovian .however , modifying the game such that there is a finite time - horizon for scoring strategies , may then allow the resulting game to be viewed as a high - dimensional markov process ( see refs . for the case of the minority game ) . the dynamics for a particular run of the b - a - r system will depend upon the strategies that the agents hold , and the random process used to decide tie - breaksthe particular dynamics which emerge also depend upon the initial score - vector of the strategies and the initial history used to seed the start of the game .if the initial strategy score - vector is not ` typical ' , then a bias can be introduced into the game which never disappears . in short ,the system never recovers from this bias .it will be assumed that no such initial bias exists . in practicethis is achieved , for example , by setting all the initial scores to zero .the initial choice of history is not considered to be an important effect .it is assumed that any transient effects resulting from the particular history seed will have disappeared , i.e. 
the initial history seed does not introduce any long - term bias .the strategy allocation among agents can be described in terms of a tensor .this tensor describes the distribution of strategies among the individual agents .if this strategy allocation is fixed from the beginning of the game , then it acts as a quenched disorder in the system .the rank of the tensor is given by the number of strategies that each agent holds .for example , for the element gives the number of agents assigned strategy , then strategy , and then strategy , in that order . hence where the value of represents the number of distinct strategies that exist within the strategy space chosen : in the fss , and in the rss .figure 4 shows an example distribution for agents in the case of and , in the reduced strategy space rss .we note that a single ` macrostate ' corresponds to many possible ` microstates ' describing the specific partitions of strategies among the agents .for example , consider an agent system with : the microstate in which agent has strategy while agent has strategy , belongs to the same macrostate as in which agent has strategy while agent has strategy .hence the present crowd - anticrowd theory retained at the level of a given , describes the set of all games which belong to that same macrostate .we also note that although is not symmetric , it can be made so since the agents will typically not distinguish between the order in which the two strategies are picked .given this , we will henceforth focus on and consider the symmetrized version of the strategy allocation matrix given by .in general , would be allowed to change in time , possibly evolving under some pre - defined selection criteria . in davidsmith s paper elsewhere in this workshop , changes in are invoked in order to control the future evolution of the multi - agent game : this corresponds to a change in heterogeneity in the agent population , and could represent the physical situation where individual agents ( e.g. robots ) could be re - wired , re - programmed , or replaced in order to invoke a change in the system s future evolution , or to avoid certain future scenarios or trajectories for the system as a whole .in addition to the excess demand ] ( or ` volatility ' in a financial context ) .this gives a measure of the fluctuations in the system , and hence can be used as a measure of ` risk ' in the system . in particularthe standard deviation gives an idea of the size of typical fluctuations in the system .however in the case that ] itself , any risk assessment based on probability distribution functions over a fixed time - scale ( e.g. 
single timestep ) may be misleading .instead , it may be the _ cumulative _ effects of a string of large negative values of ] and its standard deviation , noting that the same analytic approach would work equally well for other statistical functions of ] and its standard deviation .similar analysis can be carried out for any function of ] with , places more weighting on large deviations of ] , such as the cumulative value =\sum_{i } d[t_i < t] ] and ] .there is a current score - vector ] which define the state of the game .the excess demand =d\left [ \underline{s}[t],\mu \lbrack t]\right] ] for this given run , corresponds to a time - average for a given realization of and a given set of initial conditions .it may turn out that we are only interested in ensemble - averaged quantities : consequently the standard deviation will then need to be averaged over many runs , and hence averaged over all realizations of _ and _ all sets of initial conditions .equation 1 can be rewritten by summing over the rss as follows :,\mu \lbrack t]\right ] = n_{+1}[t]-n_{-1}[t]\equiv \sum_{r=1}^{2p}a_{r}^{\mu \lbrack t]}n_{r}^{\underline{s}[t]},\ ] ] where .the quantity }=\pm 1 ] is the number of agents using strategy at time .the superscript ] will now be shown , where the average is over time for a given realization of the strategy allocation .we use the notation \right\rangle _ { t} ] for a given . hence ,\mu\lbrack t]\right ] \right\rangle _ { t } & = & \sum_{r=1}^{2p}\left\langlea_{r}^{\mu \lbrack t]}n_{r}^{\underline{s}[t]}\right\rangle _ { t } \\ & = & \sum_{r=1}^{2p}\left\langlea_{r}^{\mu \lbrack t]}\right\rangle _ { t}\left\langle n_{r}^{\underline{s}[t]}\right\rangle _ { t } \nonumber\end{aligned}\ ] ] where we have used the property that } ] are uncorrelated .we now consider the special case in which all histories are visited equally on average : this may arise as the result of a periodic cycling through the history space ( e.g. a eulerian trail around the de bruijn graph ) or if the histories are visited randomly . even if this situation does not hold for a specific , it may indeed hold once the averaging over has also been taken .for example , in the minority game all histories are visited equally at small and a given : however the same is only true for large if we take the additional average over all . under the property of equal histories ,we can write ,\mu \lbrack t]\right ] \right\rangle _ { t } & = & \sum_{r=1}^{2p}\left ( \frac{1}{p}\sum_{\mu = 0}^{p-1}a_{r}^{\mu \lbrack t]}\right ) \left\langle n_{r}^{\underline{s}[t]}\right\rangle _ { t } \label{hist } \\ & = & \sum_{r=1}^{p}\left ( \frac{1}{p}\sum_{\mu = 0}^{p-1}a_{r}^{\mu \lbrack t]}+a_{\overline r}^{\mu \lbrack t]}\right ) \left\langle n_{r}^{\underline{s}[t]}\right\rangle _ { t } \nonumber \\ & = & \sum_{r=1}^{p}0.\left\langle n_{r}^{\underline{s}[t]}\right\rangle _ { t } \nonumber \\ & = & 0 \nonumber \end{aligned}\ ] ] where we have used the exact result that }=-a_{\overline r}^{\mu \lbrack t]} ] , and the approximation }\right\rangle _ { t}=\left\langle n_{\overline r}^{\underline{s}[t]}\right\rangle _ { t} ] . in the event that all histories are not equally visited over time , even after averaging over all , it may still happen that the system s dynamics is restricted to equal visits to some _ subset _ of histories .an example of this would arise for , for example , for a repetitive sequence of outcomes in which case the system repeatedly performs the 4-cycle . 
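the strategy-space structure and quenched strategy allocation described above can be sketched as follows. the construction of the reduced strategy space is not unique; as an assumption for this sketch, the rows of a sylvester-hadamard matrix (mutually uncorrelated) plus their negations (anti-correlated partners) are used, and the allocation matrix is symmetrized by averaging with its transpose.

```python
import numpy as np
from itertools import combinations

def hamming(a, b):
    """Number of histories on which two +/-1 response vectors differ."""
    return int(np.sum(np.asarray(a) != np.asarray(b)))

def reduced_strategy_space(m):
    """One possible RSS: P mutually uncorrelated Hadamard rows plus their negations."""
    P = 2 ** m
    H = np.array([[1]])
    while H.shape[0] < P:
        H = np.block([[H, H], [H, -H]])
    return np.vstack([H, -H])                       # 2P strategies of length P

rss = reduced_strategy_space(m=2)
P = rss.shape[1]
# within each half: Hamming distance P/2 (uncorrelated);
# a strategy and its partner in the other half: distance P (anti-correlated)
assert all(hamming(a, b) == P // 2 for a, b in combinations(rss[:P], 2))
assert all(hamming(rss[i], rss[i + P]) == P for i in range(P))

# quenched disorder: s = 2 strategies per agent picked at random,
# summarized by the symmetrized strategy allocation matrix Psi
rng = np.random.default_rng(1)
N, s = 101, 2
picks = rng.integers(0, 2 * P, size=(N, s))
Omega = np.zeros((2 * P, 2 * P))
for i, j in picks:
    Omega[i, j] += 1
Psi = (Omega + Omega.T) / 2                         # order of the two picks ignored
assert Psi.sum() == N
```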
in this case one can then carry out the averaging in equation [ hist ] over this subspace of four histories , implying that there are now strategies that are effectively identical ( i.e. they have the same response for these four histories , even though they differ in their response for one or more of the remaining four histories , , , which are not visited ) .more generally , such sub - cycles within the de bruijn graph may lead to a bias towards s or s in the global outcome sequence .we note that such a biased series of outcomes can also be generated by biasing the initial strategy pool .we will focus now on the fluctuations of ] , which is the square of the standard deviation , is given by ,\mu \lbrack t ] \right ] ^{2}\right\rangle _ { t}-\left\langle d\left [ \underline{s}[t],\mu \lbrack t]\right ] \right\rangle _ { t}^{2 } \ .\ ] ] for simplicity , we will assume the game output is unbiased and hence we can set ,\mu \lbrack t]\right ] \right\rangle _ { t}=0 ] from the right hand side of the expression for . hence ,\mu\lbrack t]\right ] ^{2}\right\rangle _ { t } \\ & = & \sum_{r , r^{\prime } = 1}^{2p}\left\langle a_{r}^{\mu \lbrack t]}n_{r}^{\underline{s}[t]}a_{r^{\prime } } ^{\mu \lbrack t]}n_{r^{\prime } } ^{\underline{s}[t]}\right\rangle _ { t}. \nonumber\end{aligned}\ ] ] in the case that the system visits all possible histories equally , the double sum can usefully be broken down into three parts , based on the correlations between the strategies : ( fully correlated ) , ( fully anti - correlated ) , and ( fully uncorrelated ) where is a vector of dimension with component } ] and } ] .equation [ modelc1 ] is an important intermediary result for the crowd - anticrowd theory .it is straightforward to obtain analogous expressions for the variances in ]. equation [ modelc1 ] provides us with an expression for the time - averaged fluctuations .some form of approximation must be introduced in order to reduce equation [ modelc1 ] to explicit analytic expressions .it turns out that equation [ modelc1 ] can be manipulated in a variety of ways , depending on the level of approximation that one is prepared to make .the precise form of any resulting analytic expression will depend on the details of the approximations made .we now turn to the problem of evaluating equation [ modelc1 ] analytically .a key first step is to relabel the strategies .specifically , the sum in equation [ modelc1 ] is re - written to be over a _virtual - point ranking _ and not the decimal form .consider the variation in points for a given strategy , as a function of time for a given realization of .the ranking ( i.e. label ) of a given strategy in terms of virtual - points score will typically change in time since the individual strategies have a variation in virtual - points which also varies in time . for the minority game , this variation is quite rapid in the low regime since there are many more agents than available strategies hence any strategy emerging as the instantaneously highest - scoring , will immediately get played by many agents and therefore be likely to lose on the next time - step .more general games involving competition within a multi - agent population , will typically generate a similar ecology of strategy - scores with no all - time winner . 
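a quick numerical check of the decomposition above: with a reduced strategy space in which strategy r and strategy r + P are anti-correlated partners and all other pairs are uncorrelated, the history-averaged squared demand reduces exactly to the sum of squared crowd-anticrowd differences. the usage counts n_r below are made-up numbers, used only to verify the identity.

```python
import numpy as np

def rss(m):
    """Hadamard-based reduced strategy space (one possible, assumed choice)."""
    H = np.array([[1]])
    while H.shape[0] < 2 ** m:
        H = np.block([[H, H], [H, -H]])
    return np.vstack([H, -H])

R = rss(m=2)                     # shape (2P, P): response of strategy r to history mu
two_P, P = R.shape
n = np.array([22, 17, 13, 9, 6, 4, 2, 1], dtype=float)   # illustrative usage counts

# demand for each history, assuming all P histories are visited equally often
D = R.T @ n                      # D[mu] = sum_r a_r^mu * n_r
mean_sq_demand = np.mean(D ** 2)

# crowd-anticrowd pairing: row r and its negation, row r + P
pair_sum = np.sum((n[:P] - n[P:]) ** 2)

print(mean_sq_demand, pair_sum)  # equal: only anti-correlated pairs survive the average
```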
[ n.b .if this were nt the case , then there would by definition be an absolute best , second - best etc .hence any agent observing the game from the outside would be able to choose such a strategy and win consistently .such a system is not sufficiently competitive , and is hence not the type of system we have in mind ] .this implies that the specific identity of the ` highest - scoring strategy ' changes frequently in time .it also implies that } ] ) to the time - evolution of the virtual points of the highest scoring strategy ( i.e. to ] .the label is used to denote the rank in terms of strategy score , i.e. is the highest scoring strategy position , is the second highest - scoring strategy position etc . with assuming no strategy - ties .( whenever strategy ties occur , this ranking gains a ` degeneracy ' in that for a given ) . a given strategy , e.g. , may at a given timestep have label , while a few timesteps later have label .given that ( i.e. all strategy scores start off at zero ) , then we know that .equation [ modelc1 ] can hence be rewritten exactly as }-n_{\overline{k}}^{\underline{s}[t]}\right ) ^{2}\right\rangle _ { t}\right\rangle _ { \psi } .\label{modelc1k}\ ] ] now we make another important observation .since in the systems of interest the agents are typically playing their highest - scoring strategies , then the relevant quantity in determining how many agents will instantanously play a given strategy , is a knowledge of its relative ranking not the actual value of its virtual points score .this suggests that the quantities } ] will fluctuate relatively little in time , and that we should now develop the problem in terms of time - averaged values .the actual number of agents } ] , in order to calculate how many agents hold the ranked strategy but _ do not _ hold another strategy with higher - ranking .the heterogeneity in the population represented by , combined with the strategy scores ] for each and hence the standard deviation in ] and =0 ] for small ) but will increase the number of agents playing lower - scoring strategies ( i.e. >0 ] represents a small noise term .hence , -n_{\overline{k}}-\varepsilon _ { \overline{k}}[t ] \right ] ^{2}\right\rangle _ { t}\right\rangle _ { \psi } \label{average}\\ & = & \left\langle \sum_{k=1}^{p}\left\langle \left [ ( n_{k}-n_{\overline{k}})+(\varepsilon _ { k}[t]-\varepsilon _ { \overline{k}}[t])\right ] ^{2}\right\rangle _ { t}\right\rangle _ { \psi } \nonumber \\ & = & \left\langle \sum_{k=1}^{p}\left\langle \left [ n_{k}-n_{\overline{k}}\right ] ^{2}+\left [ \varepsilon _ { k}[t]-\varepsilon _ { \overline{k}}[t]\right ] ^{2}+\left [ 2(n_{k}-n_{\overline{k}})(\varepsilon _ { k}[t]-\varepsilon _ { \overline{k}}[t])\right ] \right\rangle _ { t}\right\rangle _ { \psi } \nonumber \\ & \approx&\left\langle \sum_{k=1}^{p}\left\langle \left [ n_{k}-n_{\overline{k}}\right ] ^{2}\right\rangle _ { t}\right\rangle _ { \psi } = \left\langle \sum_{k=1}^{p}\left [ n_{k}-n_{\overline{k}}\right ] ^{2}\right\rangle _ { \psi } , \nonumber\end{aligned}\ ] ] since the latter two terms involving noise will average out to be small .the resulting expression in equation [ average ] involves no time dependence .the averaging over can then be taken inside the sum .the individual terms in the sum , i.e. 
^{2}\right\rangle _ { \psi} ] , obtained for the particular case of the minority game .the analytic results were taken from sections va , vb and vc .the results are shown as a function of agent memory size .the spread in numerical values from individual runs , for a given , indicates the extent to which the choice of alters the dynamics of the mg .the upper line for each value at low , is equation [ flatom ] showing .the lower line for each value at low , is equation [ nonflatom ] showing .the line at high is equation [ highm ] showing , and is independent of . comparing the analytic curves with the numerical results, it can be seen that the analytic expressions capture the essential physics ( i.e. the strong correlations ) driving the fluctuations in the system .here we consider the crowd - anticrowd theory applied to a mixed population containing agents of different memory - sizes ( or equivalently , agents with differing opinions as to the relative importance of previous outcomes ) .since we are interested in the effects of crowding within this mixed - ability population , we will focus on small .consider a population containing some agents with memory , and some agents with memory where .for a pure population of agents with the same memory , there is information left in the history time - series . in the small limit , however , this information is hidden in bit - strings of length greater than and hence is not accessible to these agents .however it would in principle be accessible to agents with a larger memory . in the mixed population or ` alloy ' , there are two sub - populations comprising agents with memory and strategies per agent , and agents with memory and strategies per agent .let us focus on the variance of demand ] from both sub - populations , can be obtained by adding separately the contributions to the variance from the agents and the agents .hence , where ( ) is the variance due to the ( ) agents .defining the concentration of agents as , gives and where the expressions for and follow from equations [ final ] and [ yofr ] : ^{s_i}-\bigg[1- \frac{k}{2^{m_i+1}}\bigg]^{s_i } \\ & \ & \ \ -\\bigg[1-\frac{(2^{m_i+1}-k)}{2^{m_i+1}}\bigg]^{s_i}+\bigg[1- \frac{2^{m_i+1}+1-k}{2^{m_i+1}}\bigg]^{s_i}\bigg)^2 \nonumber\end{aligned}\]]hence ^{1/2}\ .\label{alloy}\]]it can be seen that equation [ alloy ] will generally exhibit a _ minimum _ in at finite , hence the mixed population uses the limited global resource more efficiently than a pure population of either or agents .this analytic result has been confirmed in numerical simulations of a mixed population . in the ` thermal minority game ' ( tmg ) , agents choose between their strategies using an exponential probability weighting . as pointed out by marsili _et al _ , such a probabilistic strategy weighting has a long tradition in economics and encodes a particular behavioral model .the numerical simulations of cavagna _ et al _ demonstrated that at small , where the mg is larger - than - random , the tmg could be pushed below the random coin - toss limit just by altering this relative probability weighting , or equivalently the ` temperature ' .this reduction in for stochastic strategies seems fairly general : for example , in ref . 
we had presented a modified mg in which agents with stochastic strategies also generate a smaller - than - random .the common underlying phenomenon in both cases is that stochastic strategy rules tend to reduce ( increase ) the typical size of crowds ( anticrowds ) , which in turn implies an _ increase _ in the cancellation between the actions of the crowds and anticrowds .hence gets reduced , and can even fall below the random coin - toss limit .we will now show that the crowd - anticrowd theory provides a quantitative explanation of the main result of ref . , whereby a smaller - than - random is generated with increasing ` temperature ' at small .we therefore focus on small , and assume a nearly flat strategy allocation matrix . at any moment in the game , strategies can be ranked according to their virtual points , where is the best strategy , is second best , etc .consider any two strategies ranked and within the list of strategies in the rss . as mentioned earlier , in the small regime of interest the virtual - point strategy ranking and popularity ranking for strategiesare essentially the same .consider as an example .let be the probability that a given agent possesses and , where ( i.e. is the best , or equal best , among his strategies ) .in contrast , let be the probability that a given agent possesses and , where ( i.e. is the worst , or equal worst , among his strategies ) .let be the probability that the agent uses the worst of his strategies , while is the probability that he uses the best .the probability that the agent plays is then given by \label{psubk } \\ & = & \ \theta \p_{-}(k)+2^{-2(m+1)}\ \theta + \( 1-\theta ) \p_{+}(k ) \nonumber\end{aligned}\]]where is the probability that the agent has picked _ and _ that is the agent s best ( or equal best ) strategy ; is the probability that the agent has picked _ and _ that is the agent s worst strategy .using equation yofr it is straightforward to show that ^{2}-\bigg[1-\frac{k}{% 2^{m+1}}\bigg]^{2}\bigg)\ \ .\label{pplus}\]]note that where the probability that the agent holds strategy after his picks , with no condition on whether it is best or worst .an expression for follows from equations [ pplus ] and [ pk ] .the basic mg corresponds to the case . in the tmg , each agentis equipped at each timestep with his own ( biased ) coin characterised by exponential probability weightings .an agent then flips this coin at each timestep to decide which strategy to use . to relate the present analysis to the tmg in ref . , is considered : corresponds to ` temperature ' while corresponds to with ] obtained for the basic mg . 
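the closed-form ingredients used above can be sketched directly: the mean number of agents whose best strategy has popularity rank k under a flat random allocation, the resulting crowd-anticrowd sigma, and the mixed-memory "alloy" combination in which the two sub-populations' variances are added. the pairing of rank k with rank 2P + 1 - k and the choice m1 = 2, m2 = 4 are working assumptions for illustration only.

```python
import numpy as np

def mean_usage(N, m, s):
    """Mean number of agents whose best strategy has rank k = 1..2P,
    for N agents each holding s randomly drawn strategies (flat allocation)."""
    two_P = 2 ** (m + 1)
    k = np.arange(1, two_P + 1)
    return N * ((1 - (k - 1) / two_P) ** s - (1 - k / two_P) ** s)

def caa_sigma(N, m, s):
    """Crowd-anticrowd sigma, pairing rank k with its anticrowd at rank 2P + 1 - k."""
    n = mean_usage(N, m, s)
    half = len(n) // 2
    return np.sqrt(np.sum((n[:half] - n[::-1][:half]) ** 2))

def alloy_sigma(N, x, m1, m2, s=2):
    """Mixed population: a fraction x with memory m1, the rest with memory m2;
    following the text, the two sub-populations' variances are simply added."""
    return np.sqrt(caa_sigma(x * N, m1, s) ** 2 + caa_sigma((1 - x) * N, m2, s) ** 2)

N = 101
print([round(alloy_sigma(N, x, m1=2, m2=4), 1) for x in np.linspace(0, 1, 6)])
# a shallow interior minimum appears: the mixture can beat either pure population
```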
equation [ sigtheta ] explicitly shows that the standard deviation _ decreases _ as increases ( recall ) : in other words , the standard deviation decreases as agents use their worst strategy with increasing probability .an increase in leads to a reduction in the size of the larger crowds using high - scoring strategies , as well as an increase in the size of the smaller anticrowds using lower - scoring strategies , hence resulting in a more substantial cancellation effect between the crowd and the anticrowd .as increases , will eventually drop _ below _ the random coin - toss result at where now switch to the popularity labels in order to examine in the two limits of ` delta ' , and ` flat ' .the delta - function distribution is which is peaked at , while the flat distribution is given by ] although we again emphasize that the crowd - anticrowd theory is not limited to the case of ` thermal ' strategy weightings .figure 8 shows a comparison between the theory of equations [ thetad ] , [ thetaf ] and numerical simulation for various runs , and .the theory agrees well in the range and , most importantly , provides a quantitative explanation for the transition in from larger - than - random to smaller - than - random as ( and hence ) is increased .the numerical data for different runs has a significant natural spread .most of these data points do lie in the region in between the two analytic curves , which act as approximate upper and lower - bounds . above ,the numerical data tend to flatten off while the present theory predicts a decrease in as .this is because the present theory averages out the fluctuations in strategy - use at each time - step ( equation [ nk ] only considers the mean number of agents using a strategy of given rank ) .consider .for a particular configuration of strategies picked at the start of the game , and at a particular moment in time , the number of agents using each strategy is typically distributed _ around _ the mean value given by equation [ nk ] for .the resulting distribution describing the strategy - use is therefore non - flat .it is these fluctuations about the mean values and which give rise to a non - zero .the crowd - anticrowd theory can be extended to account for the effect of these fluctuations in strategy - use for in the following way : all agents are randomly assigned strategies . to represent a turn in the game, each agent flips a ( fair ) coin to decide which of the two strategies is the preferred one .having generated a list of the number of agents using each strategy , is then found in the usual way by cancelling off crowds and anticrowds . a time - averaged value for is then obtained by averaging over independent coin - flip outcomes for the given initial distribution of strategies among agents .this procedure provides a semi - analytic calculation for the value of at .it is also possible to perform a fully analytic calculation of the average in the limit : the initial random assignment of strategies can be modelled using a random - walk .this yields an average value of ^{2} ] .provided that the basic mg is in the crowded regime as discussed earlier , equation sigqtheta holds for all and and hence any value of .hence a critical value can be predicted for fixed , or for fixed , at which crosses from worse - than - random to better - than - random . 
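two hedged sketches of the thermal calculation above: the mean-value sigma(theta) built from the probabilities that a rank-k strategy is an agent's best or worst (the small correction from drawing the same strategy twice is ignored in this sketch), and the semi-analytic coin-flip estimate at theta = 1/2 described in the text, which restores the strategy-use fluctuations that the mean-value theory averages away. the anticrowd labelling convention (r paired with 2P - 1 - r) is an assumption of the sketch.

```python
import numpy as np

def sigma_thermal(N, m, theta):
    """Mean-value crowd-anticrowd sigma when each agent (s = 2) plays his worse
    strategy with probability theta and his better one with probability 1 - theta."""
    Q = 2 ** (m + 1)
    k = np.arange(1, Q + 1)
    p_best  = (1 - (k - 1) / Q) ** 2 - (1 - k / Q) ** 2      # rank k is the agent's best
    p_worst = (k / Q) ** 2 - ((k - 1) / Q) ** 2              # rank k is the agent's worst
    n_k = N * ((1 - theta) * p_best + theta * p_worst)
    return np.sqrt(np.sum((n_k[: Q // 2] - n_k[::-1][: Q // 2]) ** 2))

def sigma_coinflip(N, m, n_samples=2000, seed=0):
    """theta = 1/2: each turn, agents flip a fair coin between their two allocated
    strategies; crowds and anticrowds are then cancelled rank pair by rank pair."""
    rng = np.random.default_rng(seed)
    Q = 2 ** (m + 1)
    held = rng.integers(0, Q, size=(N, 2))                   # one quenched allocation
    acc = 0.0
    for _ in range(n_samples):
        played = held[np.arange(N), rng.integers(0, 2, size=N)]
        counts = np.bincount(played, minlength=Q)
        acc += np.sum((counts[: Q // 2] - counts[::-1][: Q // 2]) ** 2)
    return np.sqrt(acc / n_samples)

N, m = 101, 2
# the mean-value estimate falls with theta and vanishes at theta = 1/2,
# which is where the coin-flip correction below becomes necessary
print([round(sigma_thermal(N, m, th), 1) for th in (0.0, 0.2, 0.4, 0.5)])
print(round(sigma_coinflip(N, m), 1), "vs coin-toss", round(np.sqrt(N), 1))
```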
for a given value of , it follows from equation [ sigqtheta ] that similar expression follows for .given that , equation [ qcrit ] implies that the run - averaged numerical volatility should lie above the random coin - toss value if where of ` temperature ' .since the and values considered are such that the basic mg is in the worse - than - random regime , and therefore as required .similarly will remain above the random coin - toss value for all if where now extend the above analysis to the case of a network connection between agents . in the first subsection, we consider the appropriate modification of the values . in the second sub - section ,we follow the spirit of the alloy game mentioned above , whereby we consider the case of very few connections within the population in order to show that in principle the standard deviation can actually decrease as network connections are added .the presence of a network allows for a sharing of information across that network . depending on the rules of the game regarding information exchange, the connected agents may decide to adopt the strategy prediction of agents to whom they are connected .the crowd - anticrowd calculation can be generalized to incorporate such situations by forming new expressions for .in particular , the bin - counting method to give must now be generalized to account for ( i ) any agent whose own strategies are lower ranking than , but who is connected to another agent holding strategy , and ( ii ) any agent whose highest - scoring strategy is , but who has a connection to another agent with an even higher - scoring strategy .the contribution ( i ) will increase above the bin - counting value , however contribution ( ii ) then reduces it .the competition between these two effects will determine what then happens to the standard deviation in the presence of connections .we will consider the following rule governing functionality of connections .consider agent connected to agent in a game where each agent holds strategies ( e.g. ) .we also suppose for the moment , that the ranking of strategies is unique ( i.e. there are no strategies which are tied in virtual - points ) .suppose that the highest - ranking strategy of agent is , but that the highest - ranking strategy of agent is where and hence has higher virtual - point score than .for the particular connection rule we have in mind , we will let agent therefore have access to the highest - scoring strategy of agent .in other words , agent now uses strategy since it is the highest - scoring of the strategies that the two agents hold between them .of course , agent may also be connected to other agents he will therefore use the highest - scoring strategy among all the agents to whom he is connected . in the case that agent holds the highest - scoring strategy of them all ,then agent uses the strategy ranked .the same is true for all other agents .hence we need to modify the calculation of in order to incorporate this effect .in particular , the number of agents using the ranked strategy at a particular timestep in the game - with - network will be where is a sum over all agents who are connected to an agent whose highest - ranking strategy is , and who themselves have a highest - ranked strategy which is worse than ( i.e. 
is lower - ranked than and hence ) .hence these agents use , whereas in the absence of the network they would have used their own strategy which is lower - ranked than .by contrast , is a sum over all agents whose highest - ranking strategy is , but who are connected to an agent whose highest - ranked strategy is better than ( i.e. is higher - ranked than and hence ) .hence these agents use , whereas in the absence of the network they would have used . notice that this is irrespective of the actual structure of the network , for example the network could be random , small - world , scale - free , or regular . in order to implement the renormalization of in a particular example, we consider the case of a random network in which agent has a probability of of forming a connection with agent .the term is given by summing over all agents whose own highest - scoring strategy is lower - ranked than ( i.e. ) _ given _ that they hold at least one connection to an agent whose highest - scoring strategy is _ but _ they do nt have any connections to any agents with a higher - ranked strategy ( i.e. ) .the resulting expression is \bigg[(1-p)^{\sum_{g < k } { \overline { n_g}}}\bigg]\ \bigg[1-(1-p)^{{\overline { n_k}}}\bigg]\ ] ] where the third factor represents the probability that a given agent holds at least one connection to an agent whose highest - scoring strategy is , the second factor accounts for the probability that a given agent is not connected to any agent whose highest - scoring strategy is higher - ranked than , and the first factor sums over all agents whose highest - scoring strategy is lower - ranked than . continuing this analysis , we have \ ] ] where the second factor accounts for the probability that a given agent has at least one connection to an agent whose highest - scoring strategy is higher - ranked than , and the first factor is just the number of agents whose own highest - scoring strategy is . finally , we need a suitable expression , which is the number of agents using the highest - ranking strategy in the absence of the network . as before, we will consider the case of the small- limit with a flat strategy allocation matrix which , from equation 23 , gives ^{s}-\left [ 1- \frac{k}{2^{m+1}}\right ] ^{s}\right ) \ .\ ] ] these expressions can then be used to evaluate .elsewhere we will discuss the numerical results for a network b - a - r system . in the limit of high connectivity ,there is substantial crowding in the network b - a - r system and hence degeneracy of strategy scores .accounting for the correct frequency of degenerate / non - degenerate timesteps yields excellent quantitative agreement with the numerical simulations . herewe will present a proof - in - principle that the addition of connections can lead to a reduction in the standard deviation of demand ] in the system .suppose there is a given probability that any given agent is connected to agent .the population of agents therefore will contain , in general , a certain number of agents who are unconnected ( i.e. cluster - size ) , a number of pairs of connected agents ( i.e. cluster - size ) , a number of triples of connected agents ( i.e. 
cluster - size ) , etc .hence we can effectively think of the population of agents as comprising a ` gas ' containing monomers , dimers , trimers , and hence -mers where .hence we can write for very low network connection probability , any particular realization of the connections will result in most agents remaining unconnected while a few are connected in pairs .figure 9 shows a schematic diagram of this situation , whereby the population just comprises monomers ( i.e. isolated agents ) and dimers ( i.e. connected pairs of agents ) .this implies that the mean number of dimers is simply given by the total number of possible pairs , i.e. , multiplied by the probability that a given pair is connected , hence yielding .hence .we note that we are implicitly working below the percolation threshold : when we have that every agent has on average one connection , in which case the assumption of truncating at dimers breaks down .we now turn to the calculation of the variance of ] by the monomer agents forms a stochastic process which is uncorrelated to the contribution from the gas of dimer agents .note that we are _ not _ assuming that the individual monomers are not correlated with each other , or that the individual dimers are not correlated with each other on the contrary , there will be significant crowding ( i.e. correlations ) within each sub - population .the variance of ] . hence we have proved that , to the extent to which a particular numerical implementation reflects the connection rules assumed in the present analysis , adding in a small number of connections can actually reduce the fluctuations in ] are reduced .interestingly , such a minimum in has already been reported based on numerical simulations for a somewhat similar game .our expression for is likely to _ underestimate _ the actual value at which a minimum occurs , since we have overestimated the coordination within a given dimer , in addition to overestimating the number of dimers ( and hence underestimating the effects of trimers etc . ) .however the fact that the crowd - anticrowd analysis predicts that a minimum can in principle exist , and ref . had earlier found a minimum numerically in a similar game , is very encouraging .clearly there is a vast amount of further theoretical analysis that can be done within the present framework however we leave this to future presentations .we have given an in - depth presentation of the crowd - anticrowd theory in order to understand the fluctuations in competitive multi - agent systems , in particular those based on an underlying binary structure .since the theory incorporates details concerning the structure of the strategy space , and its possible coupling to history space , we believe that the crowd - anticrowd theory will have applicability for more general multi - agent systems .hence we believe that the crowd - anticrowd concept might serve as a fundamental theoretical concept for more general complex systems which mimic competitive multi - agent games . 
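before closing, a compact numerical sketch of the two network calculations developed above: the renormalized mean usage of the rank-k strategy for a random network with connection probability p (assembled from the three probability factors described in the text), and the low-connectivity monomer-dimer estimate of sigma(p), in which each dimer is modelled as two agents sharing the best of their combined 2s strategies and the monomer and dimer contributions are added as independent variances. both are mean-value sketches under these stated assumptions, not the paper's exact expressions.

```python
import numpy as np

def mean_usage(N, m, s):
    """Flat-allocation mean number of agents whose best strategy has rank k."""
    Q = 2 ** (m + 1)
    k = np.arange(1, Q + 1)
    return N * ((1 - (k - 1) / Q) ** s - (1 - k / Q) ** s)

def mean_usage_networked(N, m, p, s=2):
    """Renormalized usage when pairs of agents are connected with probability p
    and each agent adopts the best strategy available in his neighbourhood."""
    n_bar = mean_usage(N, m, s)
    n_net = np.empty_like(n_bar)
    for i in range(len(n_bar)):
        better, worse = n_bar[:i].sum(), n_bar[i + 1:].sum()
        gain = worse * (1 - p) ** better * (1 - (1 - p) ** n_bar[i])
        loss = n_bar[i] * (1 - (1 - p) ** better)
        n_net[i] = n_bar[i] + gain - loss
    return n_net

def sigma_low_p(N, m, s, p):
    """Monomer-dimer estimate for small p: N2 ~ p N (N - 1)/2 dimers, each treated
    as 2 agents playing the best of 2s strategies; the two variances add."""
    def var(units, strat_per_unit, weight):
        n = weight * mean_usage(units, m, strat_per_unit)
        half = len(n) // 2
        return np.sum((n[:half] - n[::-1][:half]) ** 2)
    n2 = p * N * (N - 1) / 2
    n1 = max(N - 2 * n2, 0.0)
    return np.sqrt(var(n1, s, 1) + var(n2, 2 * s, 2))

N, m, s = 101, 2, 2
print(np.round(mean_usage_networked(N, m, p=0.01), 1))
print([round(sigma_low_p(N, m, s, p), 1) for p in (0.0, 0.001, 0.002, 0.003, 0.004)])
# sigma first falls, then rises: a few connections reduce the fluctuations
```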
this would be a welcome development , given the lack of general theoretical concepts in the field of complex systems as a whole .it is also pleasing from the point of view of physics methodology , since the basic underlying philosophy of accounting correctly for ` inter - particle ' correlations is already known to be successful in more conventional areas of many - body physics .this success in turn raises the intriguing possibility that conventional many - body physics might be open to re - interpretation in terms of an appropriate multi - particle ` game ' : we leave this for future work . some properties of multi - agent games can not be described using time- and configuration - averaged theories . in particular , an observation of a real - world complex system which is thought to resemble a multi - agent game ,may correspond to a _run which evolves from a specific initial configuration of agents strategies .this implies a particular , and hence the time - averagings within the crowd - anticrowd theory must be carried out for that particular choice of .however this problem can still be cast in terms of the crowd - anticrowd approach , since the averagings are then just carried out over some sub - set of paths in history space , which is conditional on the path along which the complex system is already heading .we also emphasize that a single ` macrostate ' corresponds to many possible ` microstates ' , where each microstate corresponds to one particular partition of strategy allocation among the agents .hence the crowd - anticrowd theory retained at the level of a given specified , is equally valid for the entire _ set _ of games which share this same ` macrostate ' .we refer to david smith s presentation at this workshop for a detailed discussion of -specific dynamics .see also refs . for the simpler case of the minority game .we have been discussing a complex system based on multi - agent dynamics , in which both deterministic and stochastic processes co - exist , and are indeed intertwined .depending on the particular rules of the game , the stochastic element may be associated with any of five areas : ( i ) disorder associated with the strategy allocation and hence with the heterogeneity in the population , ( ii ) disorder in the underlying network . both ( i ) and ( ii )might typically be fixed from the outset ( i.e. , quenched disorder ) hence it is interesting to see the interplay of ( i ) and ( ii ) in terms of the overall performance of the system . the extentto which these two ` hard - wired ' disorders might then compensate each other , as for example in the parrondo effect or stochastic resonance , is an interesting question .such a compensation effect might be engineered , for example , by altering the rules - of - the - game concerning inter - agent communication on the existing network .three further possible sources of stochasticity are ( iii ) tie - breaks in the scores of strategies , ( iv ) a stochastic rule in order for each agent to pick which strategy to use from the available strategies , as in the thermal minority game , ( v ) stochasticity in the global resource level ] . corresponds to while corresponds to , hence we will only consider . 
* figure 1 * schematic representation of the b - a - r ( binary agent resource ) system . at each timestep , every agent decides between the two possible actions based on the predictions of the strategies that he possesses ; a total of n_{-1}[t] agents choose one action and n_{+1}[t] the other . in the simplified case that each agent 's confidence threshold for entry into the game is very small , every agent plays and hence n_{-1}[t]+n_{+1}[t]=n . the analytic curves in the comparison figures are shown as an upper solid line , a lower dashed line and a monotonically - increasing solid line which is independent of the number of strategies held ; the numerical values were obtained from different simulation runs ( triangles , crosses and circles ) . * figure 8 * crowd - anticrowd theory vs. numerical simulation results for the thermal minority game as a function of the stochastic probability , or ` temperature ' . the analytic results ( lines ) correspond to the two limiting - case approximations ( solid upper and lower lines ) .
|
we discuss a crowd - based theory for describing the collective behavior in a generic multi - agent population which is competing for a limited resource . these systems whose binary versions we refer to as b - a - r ( binary agent resource ) collectives have a dynamical evolution which is determined by the aggregate action of the heterogeneous , adaptive agent population . accounting for the strong correlations between agents strategies , yields an accurate description of the system s dynamics in terms of a ` crowd - anticrowd ' theory . this theory can incorporate the effects of an underlying network within the population . most importantly , its applicability is _ not _ just limited to the el farol problem and the minority game . indeed , the crowd - anticrowd theory offers a powerful approach to tackling the dynamical behavior of a wide class of agent - based complex systems , across a range of disciplines . with this in mind , the present working paper is written for a general multi - disciplinary audience within the complex systems community . 0.5 in * working paper for the workshop on collectives and the design of complex systems , stanford university , august 2003 . * research performed in collaboration with former graduate students michael hart and paul jefferies , and present graduate students sehyo charley choe , sean gourley and david smith .
|
recently , there has been increasing interest in analyzing on - line auction data and in inferring the underlying dynamics that drive the bidding process . each series of price bids for a given auction corresponds to pairs of random bidding times and corresponding bid prices , generated whenever a bidder places a bid [ jank and shmueli ] . related longitudinal data , where similarly sparse , irregularly sampled and noisy measurements are obtained , are abundant in the social and life sciences ; for example , they arise in longitudinal growth studies . while more traditional approaches of functional data analysis require fully or at least densely observed trajectories , more recent extensions cover the case of sparsely observed and noise - contaminated longitudinal data . a common assumption of approaches for longitudinal data grounded in functional data analysis is that such data are generated by an underlying smooth and square integrable stochastic process . the derivatives of the trajectories of such processes are central for assessing the dynamics of the underlying processes . although this is difficult for sparsely recorded data , various approaches have been developed for estimating derivatives of individual trajectories nonparametrically by pooling data from samples of curves , and for using these derivatives to quantify the underlying dynamics . related work exists on nonparametric methods for derivative estimation and on the role of derivatives in the functional linear model . we expand here on some of these approaches and investigate an empirical dynamic equation . this equation is distinguished from previous models that involve differential equations in that it is empirically determined from a sample of trajectories and does not presuppose knowledge of a specific parametric form of the differential equation generating the data , except that we choose it to be of first order . this stands in contrast to current approaches to modeling dynamic systems , which are `` parametric '' in the sense that a prespecified differential equation is assumed . a typical example of such an approach is one in which a prior specification of a differential equation is used to guide the modeling of the data , usually for just one observed trajectory . a problem with parametric approaches is that diagnostic tools to determine whether these equations fit the data either do not exist or , where they do , are not widely used , especially as nonparametric alternatives for deriving differential equations have not been available . this applies especially to the case where data on many time courses are available , providing strong motivation to explore nonparametric approaches for quantifying dynamics . our starting point is a nonparametric approach to derivative estimation that uses local polynomial fitting of the derivative of the mean function and of partial derivatives of the covariance function of the process , pooling data across all subjects . we show that each trajectory satisfies a first order stochastic differential equation in which the random part resides in an additive smooth drift process that drives the equation ; the size of the variance of this process determines to what extent the time evolution of a specific trajectory is governed by the nonrandom part of the equation over various time subdomains , and is therefore of central interest . we quantify the size of the drift process by its variance
as a function of time .whenever the variance of the drift process is small relative to the variance of the process , a deterministic version of the differential equation is particularly useful as it then explains a large fraction of the variance of the process .the empirical stochastic differential equation can be easily obtained for various types of longitudinal data .this approach thus provides a novel perspective to assess the dynamics of longitudinal data and permits insights about the underlying forces that shape the processes generating the observations , which would be hard to obtain with other methods .we illustrate these empirical dynamics by constructing the stochastic differential equations that govern online auctions with sporadic bidding patterns .we now describe a data model for longitudinally collected observations , which reflects that the data consist of sparse , irregular and noise - corrupted measurements of an underlying smooth random trajectory for each subject or experimental unit [ ] , the dynamics of which is of interest .given realizations of the underlying process on a domain and of an integer - valued bounded random variable , we assume that measurements , , are obtained at random times , according to where are zero mean i.i.d .measurement errors , with , independent of all other random components .the paper is organized as follows . in section [ sec2 ] ,we review expansions in eigenfunctions and functional principal components , which we use directly as the basic tool for dimension reduction alternative implementations with b - splines or p - splines could also be considered [ , , ] .we also introduce the empirical stochastic differential equation and discuss the decomposition of variance it entails .asymptotic properties of estimates for the components of the differential equation , including variance function of the drift process , coefficient of determination associated with the dynamic system and auxiliary results on improved rates of convergence for eigenfunction derivatives are the theme of section [ sec3 ] .background on related perturbation results can be found in , , , .section [ sec4 ] contains the illustration of the differential equation with auction data , followed by a brief discussion of some salient features of the proposed approach in section [ sec5 ] .additional discussion of some preliminary formulas is provided in appendix [ seca1 ] , estimation procedures are described in appendix [ seca2 ] , assumptions and auxiliary results are in appendix [ seca3 ] and proofs in appendix [ seca4 ] . key methodology for dimension reduction and modeling of the underlying stochastic processes that generate the longitudinal data , which usually are sparse , irregular and noisy as in ( [ kl2 ] ) , is functional principal component analysis ( fpca ) .processes are assumed to be square integrable with mean function and auto - covariance function , , which is smooth , symmetric and nonnegative definite . using as kernel in a linear operator leads to the hilbert schmidt operator .we denote the ordered eigenvalues ( in declining order ) of this operator by and the corresponding orthonormal eigenfunctions by .we assume that all eigenvalues are of multiplicity in the sequel .it is well known that the kernel has the representation and the trajectories generated by the process satisfy the karhunen love representation [ ] . herethe , , are the functional principal components ( fpcs ) of the random trajectories .the are uncorrelated random variables with and , with . 
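as a concrete illustration of the data model and the karhunen - loeve representation just reviewed, the following python sketch simulates trajectories from an assumed two-component expansion (mean function, eigenfunctions and eigenvalues are made-up choices, not estimates from any real data) and then thins each trajectory down to a few noisy, irregularly timed measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
tgrid = np.linspace(0, 1, 101)

# assumed illustrative components
mu = lambda t: 1 + 2 * t
phi = [lambda t: np.sqrt(2) * np.sin(np.pi * t),
       lambda t: np.sqrt(2) * np.sin(2 * np.pi * t)]      # orthonormal on [0, 1]
lam = np.array([1.0, 0.25])                               # eigenvalues, declining

def simulate_trajectory():
    """Karhunen-Loeve expansion truncated at K = 2 components."""
    xi = rng.normal(0.0, np.sqrt(lam))                    # uncorrelated FPC scores
    return mu(tgrid) + sum(x * f(tgrid) for x, f in zip(xi, phi))

def sparse_noisy_observations(x_path, n_obs=6, noise_sd=0.1):
    """Data model: a few irregular time points with additive measurement error."""
    idx = np.sort(rng.choice(len(tgrid), size=n_obs, replace=False))
    return tgrid[idx], x_path[idx] + rng.normal(0, noise_sd, size=n_obs)

X = simulate_trajectory()
T_ij, Y_ij = sparse_noisy_observations(X)
```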
upon differentiating both sides ,one obtains where and are the derivatives of mean and eigenfunctions .the eigenfunctions are the solutions of the eigen - equations , under the constraint of orthonormality . under suitable regularity conditions, one observes \\[-8pt ] \phi_k^{(1)}(t ) & = & \frac{1}{\lambda_k}\int_{{\mathcal{t } } } \frac{\partial}{\partial t}\,g(t , s)\phi_k(s)\,ds,\nonumber\end{aligned}\ ] ] which motivates corresponding eigenfunction derivative estimates. a useful representation is \\[-8pt ] \eqntext{\nu_1 , \nu_2 \in\{0,1\ } , s , t \in{\mathcal{t}},}\end{aligned}\ ] ] which is an immediate consequence of the basic properties of the functional principal components . for more details and discussion, we refer to appendix [ seca1 ] .it is worthwhile to note that the representation ( [ kl3 ] ) does not correspond to the karhunen love representation of the derivatives , which would be based on orthonormal eigenfunctions of a linear hilbert schmidt operator defined by the covariance kernel .a method to obtain this representation might proceed by first estimating using ( [ rep ] ) for and suitable estimates for eigenfunction derivatives , then directly decomposing into eigenfunctions and eigenvalues .this leads to and the karhunen love representation , with orthonormal eigenfunctions [ ] . in the following we consider differentiable gaussian processes , for which the differential equation introduced below automatically applies . in the absence of the gaussian assumption, one may invoke an alternative least squares - type interpretation .gaussianity of the processes implies the joint normality of centered processes at all points , so that this joint normality immediately implies a `` population '' differential equation of the form , as has been observed in ; for additional details see appendix [ seca1 ] . however , it is considerably more interesting to find a dynamic equation which applies to the individual trajectories of processes .this goal necessitates inclusion of a stochastic term which leads to an empirical stochastic differential equation that governs the dynamics of individual trajectories .[ thm1 ] for a differentiable gaussian process , it holds that where \\[-8pt ] & = & \frac{1}{2}\frac{d}{dt}\log[{\operatorname{var}}\{x(t)\}],\qquad t\in{\mathcal{t}},\nonumber\end{aligned}\ ] ] and is a gaussian process such that are independent at each and where is characterized by and , with \\[-8pt ] & & { } - \beta(s)\sum_{k=1}^\infty\lambda_k\phi^{(1)}_k(t ) \phi_k(s ) + \beta(t)\beta(s)\sum_{k=1}^\infty\lambda_k\phi_k(t ) \phi_k(s).\nonumber\end{aligned}\ ] ] equation ( [ de ] ) provides a first order linear differential equation which includes a time - varying linear coefficient function and a random drift process .the process `` drives '' the equation at each time .it is square integrable and possesses a smooth covariance function and smooth trajectories .it also provides an alternative characterization of the individual trajectories of the process .the size of its variance function determines the importance of the role of the stochastic drift component .we note that the assumption of differentiability of the process in theorem [ thm1 ] can be relaxed .it is sufficient to require weak differentiability , assuming that , where denotes the sobolev space of square integrable functions with square integrable weak derivative [ ] . 
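a small monte carlo check of theorem 1 for a gaussian process with an assumed trigonometric eigenbasis: beta(t) is computed as cov{X(t), X'(t)}/var{X(t)} (equivalently one half the derivative of log var{X(t)}), the drift Z(t) = X'(t) - mu'(t) - beta(t){X(t) - mu(t)} is verified to be uncorrelated with X(t), and the fraction of variance of X'(t) explained by the deterministic term is tabulated. all model components here are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_paths = 3, 20000
t = np.linspace(0.05, 0.95, 19)
lam = np.array([1.0, 0.3, 0.1])                            # assumed eigenvalues
k = np.arange(1, K + 1)[:, None]
phi  = np.sqrt(2) * np.sin(k * np.pi * t)                  # orthonormal eigenfunctions
dphi = np.sqrt(2) * k * np.pi * np.cos(k * np.pi * t)      # their derivatives

xi = rng.normal(0, np.sqrt(lam), size=(n_paths, K))        # Gaussian FPC scores
Xc, dXc = xi @ phi, xi @ dphi                              # centered X and X'

var_x    = (lam[:, None] * phi ** 2).sum(axis=0)           # var X(t)
var_dx   = (lam[:, None] * dphi ** 2).sum(axis=0)          # var X'(t)
cov_x_dx = (lam[:, None] * phi * dphi).sum(axis=0)         # cov{X(t), X'(t)}

beta = cov_x_dx / var_x                                    # varying coefficient function
r_sq = cov_x_dx ** 2 / (var_x * var_dx)                    # variance fraction explained

Z = dXc - beta * Xc                                        # drift process paths
corr = [np.corrcoef(Z[:, j], Xc[:, j])[0, 1] for j in range(len(t))]
print(np.max(np.abs(corr)))                                # ~0: Z(t) uncorrelated with X(t)
print(r_sq.round(2))
```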
along these lines ,equation ( [ de ] ) may be interpreted as a stochastic sobolev embedding .observe also that the drift term can be represented as an integrated diffusion process . upon combining ( [ kl3 ] ) and( [ de ] ) , and observing that functional principal components can be represented as , where is the eigenfunction of the wiener process on domain ], we find from ( [ r2 ] ) ^ 2\\ & & { } \big/ \bigl[\sum\lambda_k ( \cos( 2k\pi t))^2 \sum\lambda_k k ( \sin(2k\pi t))^2 \bigr],\qquad t \in [ 0,1].\end{aligned}\ ] ] choosing and numerically approximating these sums , one obtains the functions as depicted in figure [ rfig ] .this illustration shows ( [ r1 ] ) , ( [ r2 ] ) , quantifying the fraction of variance explained by the deterministic part of the dynamic equation ( [ de ] ) , illustrated for the trigonometric basis on n\rightarrow \infty ] . adopting the customary approach ,the bid prices are log - transformed prior to the analysis .the values of the live bids are sampled at bid arrival times , where refers to the auction index and to the total number of bids submitted during the auction ; the number of bids per auction is found to be between 6 and 49 for these data .we adopt the point of view that the observed bid prices result from an underlying price process which is smooth , where the bids themselves are subject to small random aberrations around underlying continuous trajectories .since there is substantial variability of little interest in both bids and price curves during the first three days of an auction , when bid prices start to increase rapidly from a very low starting point to more realistic levels , we restrict our analysis to the interval [ ( in hours ) , thus omitting the first three days of bidding .this allows us to focus on the more interesting dynamics in the price curves taking place during the last four days of these auctions .our aim is to explore the price dynamics through the empirical stochastic differential equation ( [ de ] ) .our study emphasizes description of the dynamics over prediction of future auction prices and consists of two parts : a description of the dynamics of the price process at the `` population level '' which focuses on patterns and trends in the population average and is reflected by dynamic equations for conditional expectations .the second and major results concern the quantification of the dynamics of auctions at the individual or `` auction - specific level '' where one studies the dynamic behavior for each auction separately , but uses the information gained across the entire sample of auctions .only the latter analysis involves the stochastic drift term in the stochastic differential equation ( [ de ] ) .we begin by reviewing the population level analysis , which is characterized by the deterministic part of ( [ de ] ) , corresponding to the equation .this equation describes a relationship that holds for conditional means but not necessarily for individual trajectories . 
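before turning to the smoothing steps, a minimal python sketch of the data preparation described above: log-transforming the bids and restricting attention to the last four days, i.e. the subinterval [72, 168] hours of the 7-day auctions. the bid records shown are placeholders, not actual ebay data.

```python
import numpy as np

# illustrative bid records: (auction_id, bid_time_hours, bid_price_dollars)
records = np.array([
    (0,  5.2,   0.99), (0, 80.4,  61.00), (0, 166.9, 127.50),
    (1, 30.1,  10.50), (1, 95.7,  88.00), (1, 167.8, 132.49),
])

auction_id, t_bid, price = records[:, 0], records[:, 1], records[:, 2]

# log-transform the bids and drop the first three days of each auction
keep = t_bid >= 72.0
auction_id, t_bid, y = auction_id[keep], t_bid[keep], np.log(price[keep])
print(len(y), "bids retained on [72, 168] hours")
```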
for the population level analysis, we require estimates of the mean price curve and its first derivative , and these are obtained by applying linear smoothers to ( [ smooth1 ] ) to the pooled scatterplots that are displayed in figure [ auc - mu ] ( for more details , see appendix [ seca2 ] ) .one finds that both log prices and log price derivatives are increasing throughout , so that at the log - scale the price increases are accelerating in the mean as the auctions proceed .( left panel ) and of their derivatives ( right panel ) , ( solid ) , ( dashed ) and ( dash - dotted ) . ] a second ingredient for our analysis are estimates for the eigenfunctions and eigenvalues ( details in appendix [ seca2 ] ) .since the first three eigenfunctions were found to explain 84.3% , 14.6% and 1.1% of the total variance , three components were selected .the eigenfunction estimates are shown in the left panel of figure [ auc - xeig ] , along with the estimates of the corresponding eigenfunction derivatives in the right panel . for the interpretation of the eigenfunctionsit is helpful to note that the sign of the eigenfunctions is arbitrary .we also note that variation in the direction of the first eigenfunction corresponds to the major part of the variance .the variances that are attributable to this eigenfunction are seen to steadily decrease as is increasing , so that this eigenfunction represents a strong trend of higher earlier and smaller later variance in the log price trajectories .the contrast between large variance of the trajectories at earlier times and smaller variances later reflects the fact that auction price trajectories are less determined early on when both relatively high as well as low prices are observed , while at later stages prices differ less as the end of the auction is approached and prices are constrained into a narrower range .correspondingly , the first eigenfunction derivative is steadily increasing ( decreasing if the sign is switched ) , with notably larger increases ( decreases ) both at the beginning and at the end and a relatively flat positive plateau in the middle part .the second eigenfunction corresponds to a contrast between trajectory levels during the earlier and the later part of the domain , as is indicated by its steady increase and the sign change , followed by a slight decrease at the very end .this component thus reflects a negative correlation between early and late log price levels .the corresponding derivative is positive and flat , with a decline and negativity toward the right endpoint .the third eigenfunction , explaining only a small fraction of the overall variance , reflects a more complex contrast between early and late phases on one hand and a middle period on the other , with equally more complex behavior reflected in the first derivative .the eigenfunctions and their derivatives in conjunction with the eigenvalues determine the varying coefficient function , according to ( [ beta ] ) .the estimate of this function is obtained by plugging in the estimates for these quantities and is visualized in the left panel of figure [ auc - beta - zeig ] , demonstrating small negative values for the function throughout most of the domain , with a sharp dip of the function into the negative realm near the right end of the auctions . 
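the mean log-price curve and its derivative shown in the figure are obtained by local polynomial smoothing of the pooled scatterplots; a minimal sketch of such a smoother is given below. the gaussian kernel, the bandwidth values and the synthetic pooled data are arbitrary illustrations, not the estimates used in the paper.

```python
import numpy as np
from math import factorial

def local_poly_fit(t0, t_pool, y_pool, bandwidth, degree=1, deriv=0):
    """Local polynomial estimator at t0 from the pooled scatterplot (all bids from
    all auctions). deriv=0 with degree=1 estimates the mean curve; deriv=1 with
    degree=2 is one common choice for estimating its first derivative."""
    u = (t_pool - t0) / bandwidth
    sw = np.exp(-0.25 * u ** 2)                       # sqrt of a Gaussian kernel weight
    X = np.vander(t_pool - t0, degree + 1, increasing=True)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y_pool * sw, rcond=None)
    return factorial(deriv) * coef[deriv]

# usage on synthetic pooled data (placeholders for the real bid records)
rng = np.random.default_rng(0)
t_pool = rng.uniform(72, 168, 500)                    # bid times in hours
y_pool = np.log(4 + 0.05 * (t_pool - 72)) + rng.normal(0, 0.05, 500)
mu_hat  = local_poly_fit(120.0, t_pool, y_pool, bandwidth=8.0)
dmu_hat = local_poly_fit(120.0, t_pool, y_pool, bandwidth=12.0, degree=2, deriv=1)
print(mu_hat, dmu_hat)
```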
for subdomains of functional data , where the varying coefficient or `` dynamic transfer '' function is negative , as is the case for the auction data throughout the entire time domain, one may interpret the population equation as indicating `` dynamic regression to the mean . '' by this we mean the following : when a trajectory value at a current time falls above ( resp ., below ) the population mean trajectory value at , then the conditional mean derivative of the trajectory at falls below ( resp . ,above ) the mean .the overall effect of this negative association is that the direction of the derivative is such that trajectories tend to move toward the overall population mean trajectory as time progresses . .] thus , our findings for the auction data indicate that `` dynamic regression to the mean '' takes place to a small extent throughout the auction period and to a larger extent near the right tail , at the time when the final auction price is determined [ see also ] .one interpretation is that at the population level , prices are self - stabilizing , which tends to prevent price trajectories running away toward levels way above or below the mean trajectory .this self - stabilization feature gets stronger toward the end of the auction , where the actual `` value '' of the item that is being auctioned serves as a strong homogenizing influence .this means that in a situation where the current price level appears particularly attractive , the expectation is that the current price derivative is much higher than for an auction with an unattractive ( from the perspective of a buyer ) current price , for which then the corresponding current price derivative is likely lower .the net effect is a trend for log price trajectories to regress to the mean trajectory as time progresses .we illustrate here the proposed stochastic differential equation ( [ de ] ) . first estimating the function , we obtain the trajectories of the drift process .these trajectories are presented in figure [ auc - z ] for the entire sample of auctions .they quantify the component of the derivative process that is left unexplained by the varying coefficient function and linear part of the dynamic model ( [ de ] ) .the trajectories exhibit fluctuating variances across various subdomains .the subdomains for which these variances are small are those where the deterministic approximation ( [ approx ] ) to the stochastic differential equation works best .it is noteworthy that the variance is particularly small on the subdomain starting at around 158 hours toward the endpoint of the auction at 168 hours , since auction dynamics are of most interest during these last hours .it is well known that toward the end of the auctions , intensive bidding takes place , in some cases referred to as `` bid sniping , '' where bidders work each other into a frenzy to outbid each other in order to secure the item that is auctioned . .right : smooth estimates of the first ( solid ) , second ( dashed ) and third ( dash - dotted ) eigenfunction of based on ( [ gz ] ) . ]the right panel of figure [ auc - beta - zeig ] shows the first three eigenfunctions of , which are derived from the eigenequations derived from estimates of covariance kernels ( [ gz ] ) that are obtained as described after ( [ smooth2 ] ) . 
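For completeness, the drift trajectories of figure [auc-z] referred to above are obtained by removing the linear part of ([de]) from the fitted derivative trajectories, i.e., Z_i(t) = X_i'(t) - mu'(t) - beta(t)(X_i(t) - mu(t)). A minimal sketch, with array names chosen for illustration:

import numpy as np

def drift_trajectories(x_fit, dx_fit, mu, dmu, beta):
    # x_fit, dx_fit : (n, m) fitted log-price trajectories and their derivatives on a grid
    # mu, dmu       : (m,) estimated mean function and its derivative
    # beta          : (m,) estimated varying coefficient function
    x_fit, dx_fit = np.asarray(x_fit), np.asarray(dx_fit)
    return dx_fit - dmu - np.asarray(beta) * (x_fit - np.asarray(mu))

Their pointwise sample variance, compared with that of the derivative trajectories, is the empirical counterpart of the complement of the fraction of variance ([r1]) explained by the deterministic part.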
in accordance with the visual impression of the trajectories of in figure [ auc - z ] ,the first eigenfunction reflects maximum variance in the middle portion of the domain and very low variance at both ends .interestingly , the second eigenfunction reflects high variance at the left end of the domain where prices are still moving upward quite rapidly , and very low variance near the end of the auction .this confirms that overall variation is large in the middle portion of the auctions , so that the drift process in ( [ de ] ) plays an important role in that period .further explorations of the modes of variation of the drift process can be based on the functional principal component scores of .following , we identify the three auctions with the largest absolute values of the scores .a scatterplot of second and first principal component scores with these auctions highlighted can be seen in the left upper panel of figure [ auc - zscore - ext ] .the corresponding individual ( centered ) trajectories of the drift process are in the right upper panel , and the corresponding trajectories of centered processes and in the left and right lower panels .the highlighted trajectories of are indeed similar to the corresponding eigenfunctions ( up to sign changes ) , and we find that they all exhibit the typical features of small variance near the end of the auction for and and of large variance for . , where the point marked by a circle corresponds to the auction with the largest ( in absolute value ) first score , the point marked with a square to the auction with the largest second score and the point marked with a `` triangle '' to the auction with the largest third score , respectively .top right : the trajectories of the drift process for these three auctions , where the solid curve corresponds to the trajectory of the `` circle '' auction , the dashed curve to the `` square '' auction and the dash - dotted curve to the `` triangle '' auction .bottom left : corresponding centered trajectories .bottom right : corresponding centered trajectory derivatives . ] for the two trajectories corresponding to maximal scores for first and second eigenfunction of we find that near the end of the auctions their centered derivatives turn negative .this is in line with dynamic regression to the mean , or equivalently , negative varying coefficient function , as described in section [ sec41 ] . herethe trajectories for at a current time are above the mean trajectory , which means the item is pricier than the average price at . as predicted by dynamic regression to the mean , log price derivative trajectories at are indeed seen to be below the mean derivative trajectories at .the trajectory corresponding to maximal score for the third eigenfunction also follows dynamic regression to the mean : here the trajectory for is below the overall mean trajectory , so that the negative varying coefficient function predicts that the derivative trajectory should be above the mean , which indeed is the case .( dashed ) and ( solid ) .right : smooth estimate of ( [ r1 ] ) , the variance explained by the deterministic part of the dynamic equation at time . 
] that the variance of the drift process is small near the endpoint of the auction is also evident from the estimated variance function in the left panel of figure [ auc - var - r2 ] , overlaid with the estimated variance function of .the latter is rapidly increasing toward the end of the auction , indicating that the variance of the derivative process is very large near the auction s end .this means that price increases vary substantially near the end across auctions .the large variances of derivatives coupled with the fact that is small near the end of the auction implies that the deterministic part ( [ approx ] ) of the empirical differential equation ( [ de ] ) explains a very high fraction of the variance in the data .this corresponds to a very high , indeed close to the upper bound 1 , value of the coefficient of determination ( [ r1 ] ) , ( [ r2 ] ) in an interval of about 10 hours before the endpoint of an auction , as seen in the right panel of figure [ auc - var - r2 ] .we therefore find that the dynamics during the endrun of an auction can be adequately modeled by the simple deterministic approximation ( [ approx ] ) to the stochastic dynamic equation ( [ de ] ) , which always applies .this finding is corroborated by visualizing the regressions of versus at various fixed times .these regressions are linear in the gaussian case and may be approximated by a linear regression in the least squares sense in the non - gaussian case .the scatterplots of versus for times hours and hours ( where the time domain of the auctions is between 0 and 168 hours ) are displayed in figure [ auc - reg ] .this reveals the relationships to be indeed very close to linear .these are regressions through the origin .the regression slope parameters are not estimated from these scatterplot data which are contaminated by noise , but rather are obtained directly from ( [ de ] ) , as they correspond to .thus one simply may use the already available slope estimates , and .the associated coefficients of determination , also directly estimated via ( [ r2 ] ) and the corresponding estimation procedure , are found to be and .on ( both centered ) at hours ( left panel ) and hours ( right panel ) , respectively , with regression slopes and coefficient of determination , respectively , and , demonstrating that the deterministic part ( [ approx ] ) of the empirical differential equation ( [ de ] ) explains almost the entire variance of at hours but only a fraction of variance at hours . ] as the regression line fitted near the end of the auction at hours explains almost all the variance , the approximating deterministic differential equation ( [ approx ] ) can be assumed to hold at that time ( and at later times as well , all the way to the end of the auction ) . 
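The linearity of these regressions, with lines through the origin, is what joint Gaussianity of level and derivative predicts: the slope equals Cov(X(t), X'(t)) / Var X(t) = beta(t), and the coefficient of determination equals Cov(X(t), X'(t))^2 / (Var X(t) Var X'(t)), as in ([r2]). The following sketch checks this numerically on simulated pairs; all second moments are illustrative numbers, not estimates from the auction data.

import numpy as np

rng = np.random.default_rng(0)
var_x, var_dx, cov = 1.0, 4.0, -1.5            # illustrative second moments at a fixed t
Sigma = np.array([[var_x, cov], [cov, var_dx]])
X, dX = rng.multivariate_normal([0.0, 0.0], Sigma, size=100_000).T

slope_theory = cov / var_x                      # beta(t)
r2_theory = cov**2 / (var_x * var_dx)           # coefficient of determination as in (r2)

slope_fit = (X * dX).mean() / (X ** 2).mean()   # least-squares line through the origin
resid = dX - slope_fit * X
r2_fit = 1.0 - resid.var() / dX.var()

print(slope_theory, slope_fit)                  # both close to -1.5
print(r2_theory, r2_fit)                        # both close to 0.5625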
at regression line explains only a fraction of the variance , while a sizable portion of variance resides in the drift process , so that the stochastic part in the dynamic system ( [ de ] ) can not be comfortably ignored in this time range .these relationships can be used to predict derivatives of trajectories and thus price changes at time for individual auctions , given their log price trajectory values at .we note that such predictions apply to fitted trajectories , not for the actually observed prices which contain an additional random component that is unpredictable , according to model ( [ kl2 ] ) .we find that at time , regression to the mean is observed at the level of individual auctions : an above ( below ) average log price level is closely associated with a below ( above ) average log price derivative .this implies that a seemingly very good ( or bad ) deal tends to be not quite so good ( or bad ) when the auction ends .the main motivation of using the dynamic system approach based on ( [ de ] ) is that it provides a better description of the mechanisms that drive longitudinal data but are not directly observable .the empirical dynamic equation may also suggest constraints on the form of parametric differential equations that are compatible with the data . in the auction example , the dynamic equation quantifies both the nature and extent of how expected price increases depend on auction stage and current price level .this approach is primarily phenomenological and does not directly lend itself to the task of predicting future values of individual trajectories .that expected conditional trajectory derivatives satisfy a first - order differential equation model ( which we refer to as the `` population level '' since this statement is about conditional expectations ) simply follows from gaussianity and in particular does not require additional assumptions .this suffices to infer the stochastic differential equation described in ( [ jt ] ) which we term `` empirical differential equation '' as it is determined by the data .then the function , quantifying the relative contribution of the drift process to the variance of , determines how closely individual trajectories follow the deterministic part of the equation. 
we could equally consider stochastic differential equations of other orders , but practical considerations favor the modeling with first - order equations .we find in the application example that online auctions follow a dynamic regression to the mean regime for the entire time domain , which becomes more acute near the end of the auction .this allows us to construct predictions of log price trajectory derivatives from trajectory levels at the same .these predictions get better toward the right endpoint of the auctions .this provides a cautionary message to bidders , since an auction that looks particularly promising since it has a current low log price trajectory is likely not to stay that way and larger than average price increases are expected down the line .conversely , an auction with a seemingly above average log price trajectory is likely found to have smaller than average price increases down the line .this suggests that bidders take a somewhat detached stance , watching auctions patiently as they evolve .in particular , discarding auctions that appear overpriced is likely not a good strategy as further price increases are going to be smaller than the average for such auctions .it also implies that bid snipers are ill advised : a seemingly good deal is not likely to stay that way , suggesting a more relaxed stance .conversely , a seller who anxiously follows the price development of an item , need not despair if the price seems too low at a time before closing , as it is likely to increase rapidly toward the end of the auction . for prediction purposes , drift processes for individual auctionsare of great interest . in time domains where their variance is large ,any log price development is possible .interestingly , the variance of drift processes is very small toward the right tail of the auctions , which means that the deterministic part of the differential equation ( [ de ] ) is relatively more important , and log price derivatives during the final period of an auction become nearly deterministic and thus predictable .other current approaches of statistical modeling of differential equations for time course data [ e.g. , ] share the idea of modeling with a first order equation . in allother regards these approaches are quite different , as they are based on the prior notion that a differential equation of a particular and known form pertains to the observed time courses and moreover usually have been developed for the modeling of single time courses .this established methodology does not take into account the covariance structure of the underlying stochastic process .in contrast , this covariance structure is a central object in our approach and is estimated nonparametrically from the entire ensemble of available data , across all subjects or experiments .formula ( [ rep ] ) is an extension of the covariance kernel representation in terms of eigenfunctions , given by [ ] , which itself is a stochastic process version of the classical multivariate representation of a covariance matrix in terms of its eigenvectors and eigenvalues , .specifically , using representation ( [ kl3 ] ) , one finds , and ( [ rep ] ) follows upon observing that for and for . regarding the `` population differential equation '' ,observe that for any jointly normal random vectors with mean and covariance matrix with elements , it holds that . 
applying this to the jointly normal random vectors in ( [ jt ] ) then implies this population equation .the specific form for the function in ( [ beta ] ) is obtained by plugging in the specific terms of the covariance matrix given on the right - hand side of ( [ jt ] ) . applying ( [ rep ] ) , observing , and then taking the log - derivative leads to /[\sum_k \lambda_k \phi_k^2(t)] ] with 0 \leq\ell_1+\ell_2 < \ell , \ell_1\neq\nu _ 1 , \ell_2 \neq\nu_2 \ell_1=\nu_1 , \ell_2=\nu _2 \ell_1+\ell_2=\ell ] .then the properties of the functional principal component scores lead directly to whence .this implies .according to ( [ de ] ) , , for all , using ( [ cov ] ) and ( [ beta ] ) .this implies the independence of , due to the gaussianity .next observe , from which one obtains the result by straightforward calculation .proof of lemma [ lem1 ] since and is a bounded and integer - valued random variable .denote the upper bound by . to handle the one - dimensional case in ( [ kwa ] ) ,we observe where is the indication function . note that for each , is obtained from an i.i.d . sample . slightly modifying the proof of theorem 2 in for a kernel of order the weak convergence rate .it is easy to check that , as is a positive integer - valued random variable .therefore , analogously , for the two - dimensional case in ( [ kwa1 ] ) , let and then ^{-1 } \hat{\theta}_{\mathbf{j}}^\ast$ ] . similarly to the above , one has .again it is easy to verify that .the triangle inequality for the distance entails .proof of theorem [ thm2 ] note that the estimators , , and all can be written as functions of the general averages defined in ( [ kwa ] ) , ( [ kwa1 ] ) .slightly modifying the proof of theorem 1 in , with rates replaced by the rates given in lemma [ lem1 ] , then leads to the optimal weak convergence rates for and in ( [ thm2-eq1 ] ) .for the convergence rate of , lemma 4.3 in implies that where is defined in ( [ spacing ] ) and is an arbitrary estimate ( or perturbation ) of .denote the linear operators generated from the kernels and by , respectively , .noting that , one finds which implies ( [ thm2-eq2 ] ) .proof of theorem [ thm3 ] from ( [ bosq ] ) it is easy to see that , and from both ( [ bosq ] ) and ( [ pf1 ] ) that uniformly in .one then finds that is bounded in probability by which implies that , where is defined in ( [ rate ] ) and the remainder terms in ( [ rem ] ) .similar arguments lead to , noting due to the cauchy schwarz inequality .regarding , one has applying the cauchy schwarz inequality to .observing yields .to study , we investigate the convergence rates of where ( resp . , ) and ( resp ., ) share the same argument , and we define . in analogy to the above arguments , , .this leads to .the same argument also applies to .next we study and find that .analogous arguments apply to , completing the proof .we are grateful to two referees for helpful comments that led to an improved version of the paper .dauxois , j. , pousse , a. and romain , y. ( 1982 ) .asymptotic theory for the principal component analysis of a vector random function : some applications to statistical inference ._ j. multivariate anal . _* 12 * 136154 .mas , a. and menneteau , l. ( 2003 ) .perturbation approach applied to the asymptotic study of random operators . in _ high dimensional probability , iii ( sandjberg , 2002)_. _ progress in probability _ * 55 * 127134 .birkhuser , basel .ramsay , j. o. , hooker , g. , campbell , d. and cao , j. 
(2007). Parameter estimation for differential equations: a generalized smoothing approach (with discussion). J. R. Stat. Soc. Ser. B Stat. Methodol. 69 741–796.
Reithinger, F., Jank, W., Tutz, G. and Shmueli, G. (2008). Modelling price paths in on-line auctions: smoothing sparse and unevenly sampled curves by using semiparametric mixed models. J. R. Stat. Soc. Ser. C 57 127–148.
Shi, M., Weiss, R. E. and Taylor, J. M. G. (1996). An analysis of paediatric CD4 counts for acquired immune deficiency syndrome using flexible random curves. J. Roy. Statist. Soc. Ser. C 45 151–163.
|
We demonstrate that the processes underlying on-line auction price bids and many other longitudinal data can be represented by an empirical first order stochastic ordinary differential equation with time-varying coefficients and a smooth drift process. This equation may be empirically obtained from longitudinal observations for a sample of subjects and does not presuppose specific knowledge of the underlying processes. For the nonparametric estimation of the components of the differential equation, it suffices to have available sparsely observed longitudinal measurements which may be noisy and are generated by underlying smooth random trajectories for each subject or experimental unit in the sample. The drift process that drives the equation determines how closely individual process trajectories follow a deterministic approximation of the differential equation. We provide estimates for trajectories and especially the variance function of the drift process. At each fixed time point, the proposed empirical dynamic model implies a decomposition of the derivative of the process underlying the longitudinal data into a component explained by a linear dynamic equation with a varying coefficient function and an orthogonal complement that corresponds to the drift process. An enhanced perturbation result enables us to obtain improved asymptotic convergence rates for eigenfunction derivative estimation and consistency for the varying coefficient function and the components of the drift process. We illustrate the differential equation with an application to the dynamics of on-line auction data.
|
the east process is a one - dimensional spin system that was introduced in the physics literature by jckle and eisinger in 1991 to model the behavior of cooled liquids near the glass transition point , specializing a class of models that goes back to .each site in has a -value ( vacant / occupied ) , and , denoting this configuration by , the process attempts to update to at rate ( a parameter ) and to at rate , only accepting the proposed update if ( a `` kinetic constraint '' ) .it is the properties of the east process before and towards reaching equilibrium it is reversible w.r.t . , the product of bernoulli( ) variables which are of interest , with the standard gauges for the speed of convergence to stationarity being the inverse spectral - gap and the total - variation mixing time ( and ) on a finite interval , where we fix for ergodicity ( postponing formal definitions to [ sec : prelims ] ) . that the spectral - gap is uniformly bounded away from 0 for any first proved in a beautiful work of aldous and diaconis in 2002 .this implies that is of order for any fixed threshold for the total - variation distance from . for a configuration with ,call this rightmost 0 its _ front _ ; key questions on the east process revolve the law of the sites behind the front at time , basic properties of which remain unknown .one can imagine that the front advances to the right as a biased walk , behind which ( its trail is mixed ) . indeed , if one ( incorrectly ! ) ignores dependencies between sites as well as the randomness in the position of the front , it is tempting to conclude that converges to , since upon updating a site its marginal is forever set to bernoulli( ) .whence , the positive vs. negative increments to would have rates ( a 0-update at ) vs. ( a 1-update at with a 0 at its left ) , giving the front an asymptotic speed .of course , ignoring the irregularity near the front is problematic , since it is precisely the distribution of those spins that governs the speed of the front ( hence mixing ) .still , just as a biased random walk , one expects the front to move at a positive speed with normal fluctuations , whence its concentrated passage time through an interval would imply total - variation _ cutoff _ a sharp transition in mixing within an -window . to discuss the behavior behind the front ,let denote the set of configurations on the negative half - line with a fixed 0 at the origin , and let evolve via the east process constantly re - centered ( shifted by at most 1 ) to keep its front at the origin .blondel showed ( see theorem [ blondel1 ] ) that the process converges to an invariant measure , on which very little is known , and that converges in probability to a positive limiting value as ( an asymptotic velocity ) given by the formula ( we note that by the invariance of the measure and the fact that . ) the east process of course entails the joint distribution of and ; thus , it is crucial to understand the dependencies between these as well as the rate at which converges to as a prerequisite for results on the fluctuations of .our first result confirms the biased random walk intuition for the front of the east process , establishing a clt for its fluctuations around ( illustrated in fig .[ fig : front ] ) . along a time interval of , vs. its mean and standard deviation window . 
][ th : main1 ] there exists a non - negative constant such that for all , &=vt+o(1),\\ \label{th1.3 } \lim_{t\to \infty } \tfrac 1 t { \operatorname{var}}_{\omega}\left(x({\omega}(t))\right)&=\s_*^2.\end{aligned}\ ] ] moreover , obeys a central limit theorem : a key ingredient for the proof is a quantitative bound on the rate of convergence to , showing that it is exponentially fast ( theorem [ coupling ] ) .we then show that the increments behave ( after an initial burn - in time ) as a stationary sequence of weakly dependent random variables ( corollary [ cor : wf ] ) , whence one can apply an ingenious steins - method based argument of bolthausen from 1982 to derive the clt . moving our attention to finite volume , recall that the _ cutoff phenomenon _ ( coined by aldous and diaconis ; see as well as and the references therein ) describes a sharp transition in the convergence of a finite markov chain to stationarity : over a negligible period of time ( the cutoff window ) the distance from equilibrium drops from near 1 to near .formally , a sequence of chains indexed by has cutoff around with window if for any fixed .it is well - known ( see , e.g. , *example 4.46 ) that a biased random walk with speed on an interval of length has cutoff at with an -window due to normal fluctuations .recalling the heuristics that depicts the front of the east process as a biased walk flushing a law in its trail , one expects precisely the same cutoff behavior .indeed , the clt in theorem [ th : main1 ] supports a result exactly of this form .[ th : main2 ] the east process on with parameter exhibits cutoff at with an -window : for any fixed and large enough , where is the c.d.f . of andthe implicit constant in the depends only on . behind the front of the east process ( showing simulated via monte - carlo for . ) ] while these new results relied on a refined understanding of the convergence of the process behind the front to its invariant law ( shown in fig .[ fig : nu ] ) , various basic questions on remain unanswered . for instance , are the single - site marginals of monotone in the distance from the front ?what are the correlations between adjacent spins ?can one explicitly obtain , thus yielding an expression for the velocity ?for the latter , we remark that the well - known upper bound on in terms of the spectral - gap ( eq . ) , together with theorem [ th : main2 ] , gives the lower bound ( cf .also ) })}{\log\left(1/(p\wedge q)\right)}=\frac{{{\rm gap}}({\ensuremath{\mathcal l}})}{\log\left(1/(p\wedge q)\right)}.\ ] ] finally , we accompany the concentration for and cutoff for the east process by analogous results including cutoff with an -window on the corresponding kinetically constrained models on trees , where a site is allowed to update ( i.e. , to be reset into a bernoulli( ) variable ) given a certain configuration of its children ( e.g. , all - zeros / at least one zero / etc . ) .these results are detailed in [ sec : trees ] ( theorems [ th : main3][th : main4 ] ) .the concentration and cutoff results for the kinetically constrained models on trees ( theorems [ th : main3][th : main4 ] ) do not apply to every scale but rather to infinitely many scales , as is sometimes the case in the context of tightness for maxima of branching random walks or discrete gaussian free fields ; see , e.g. 
, as well as the beautiful method in to overcome this hurdle for certain branching random walks .indeed , similarly to the latter , one of the models here gives rise to a distributional recursion involving the maximum of i.i.d .copies of the random variable of interest , plus a non - negative increment . unfortunately ,unlike branching random walks , here this increment is not independent of those two copies , and extending our analysis to every scale appears to be quite challenging .let and let consist of those configurations such that the variable is finite . in the sequel , for any we will often refer to as the _ front _ of .given and will write for the restriction of to . a. _ the east process ._ for any and let denote the indicator of the event . we will consider the markov process on with generator acting on local functions ( depending on finitely many coordinates ) given by ,\ ] ] where and are the configurations in obtained from by fixing equal to or to respectively the coordinate at . in the sequelthe above process will be referred to as the _ east process on _ and we will write for its law when the starting configuration is .average and variance w.r.t . to be denoted by ] for the law and average at a fixed time .if the starting configuration is distributed according to an initial distribution we will simply write for and similarly for ] , the projection on of the half - line east process on is a continuous time markov chain because each vertex only queries the state of the spin to its left . in the sequel the above chain will be referred to as the _ east process _ in .let denote the corresponding generator .the main properties of the above processes can be summarized as follows ( cf . for a survey ) .they are all ergodic and reversible w.r.t . to the product bernoulli( )measure ( on the corresponding state space ) .their generators are self - adjoint operators on satisfying the following natural ordering : by translation invariance the value of does not depend on and , similarly , depends only on the cardinality of .as mentioned before , the fact that ( but only for ) was first proved by aldous and diaconis , where it was further shown that the order of the exponent in the lower bound matching non - rigorous predictions in the physics literature .the positivity of was rederived and extended to all in by different methods , and the correct asymptotics of the exponent as matching the _ upper bound _ in was very recently established in .it is easy to check ( e.g. , from ) that , a fact that will be used later on .for the east process in it is natural to consider its mixing times , , defined by where denotes total - variation distance .it is a standard result for reversible markov chains ( see e.g. ) that where . in particular . a lower boundwhich also grows linearly in the length of the interval follows easily from the _ finite speed of information propagation _ : if we run the east model in starting from the configuration of except for a zero at the origin , then , in order to create zeros near the right boundary of a sequence of order of successive rings of the poisson clocks at consecutive sites must have occurred .that happens with probability iff we allow a time which is linear in ( see [ sec : finite - speed ] and in particular lemma [ finitespeed ] ). given two probability measures on and we will write to denote the total variation distance between the marginals of and on . 
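Before turning to the process seen from the front, we note that the graphical construction translates directly into a simple Monte Carlo scheme, which is how pictures such as figures [fig:front] and [fig:nu] can be produced. The sketch below runs the East process on a finite window {1, ..., L} with a frozen zero at the origin and all ones elsewhere, records the position of the front (the rightmost zero), and reports its empirical slope as a crude estimate of the velocity v of Theorem [th:main1]. The window size, time horizon, and the value of p are illustrative choices, not taken from the paper; the convention (a ring at x is legal iff the site to its left is zero, in which case the spin at x is resampled to 1 with probability p and to 0 otherwise) follows the description of the model given above.

import numpy as np

def east_front(p=0.3, L=1000, T=200.0, seed=1):
    # East process on sites 1..L with a frozen facilitating zero at site 0,
    # started from all ones; returns the trajectory of the front (rightmost zero).
    rng = np.random.default_rng(seed)
    omega = np.ones(L + 1, dtype=np.int8)
    omega[0] = 0                                # frozen zero ensuring ergodicity
    t, front = 0.0, 0
    times, fronts = [0.0], [0]
    while t < T:
        t += rng.exponential(1.0 / L)           # next ring among L rate-1 clocks
        x = int(rng.integers(1, L + 1))         # the clock that rang
        if omega[x - 1] == 0:                   # kinetic constraint: left neighbour is 0
            new = 1 if rng.random() < p else 0
            if new == 0 and x == front + 1:
                front = x                       # a zero appears just beyond the front
            elif new == 1 and x == front:
                front = x - 1                   # front site flips to 1; its left
                                                # neighbour was 0 by legality
            omega[x] = new
        times.append(t)
        fronts.append(front)
    return np.asarray(times), np.asarray(fronts)

times, fronts = east_front()
print("empirical front velocity:", fronts[-1] / times[-1])

Note that in this setup only the site immediately to the right of the front can ever create a new zero beyond it, which is what the bookkeeping above exploits.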
when the process starts from a initial configuration with a front , it is convenient to define a new process on as _ the process as seen from the front _ .such a process is obtained from the original one by a random shift which forces the front to be always at the origin .more precisely we define on the markov process with generator given by , \\ { \ensuremath{\mathcal l}}^{{\rm s}}f({\omega})&= ( 1-p)\left[f(\vartheta^-{\omega})-f({\omega})\right]+ p \,c_0({\omega})\left[f(\vartheta^+{\omega})-f({\omega})\right],\end{aligned}\ ] ] where that is , the generator incorporates the moves of the east process behind the front plus shifts corresponding to whenever the front itself jumps forward / backward .the same graphical construction that was given for the east process applies to the process : this is clear for the east part of the generator ; for the shift part , simply apply a positive shift when there is a ring at the origin and the corresponding bernoulli variable is one .if the bernoulli variable is zero , operate a negative shift . with this notation ,the main result of blondel can be summarized as follows .[ blondel1 ] the front of the east process , , and the process as seen from the front , , satisfy the following : a. there exists a unique invariant measure for the process .moreover , } ] and let be such that . further let be largest between the maximal spacing between two consecutive zeros of in and the distance of the last zero of from the vertex . then to prove this proposition , we need the following lemma . [ infinitesupport ] there exist universal positive constants independent of such that the following holds .fix with ,let and let }\mapsto { { \ensuremath{\mathbb r } } } ] be the set of all configurations } ] . the special configuration in } ] will be denoted by .observe that , using reversibility together with the fact that the updates in ] , it is easy to verify that , at the hitting time of the set }:\ { \omega}'_\ell=0\} ] have coupled .let } ] then using the grand coupling , | & = |{\mathbb{e}}_{\omega,\pi_{[1,\ell]}}\left[f(\omega(t))-f(\omega'(t))+\pi_{\ell}(f)(\omega'(t))-\pi_{\ell}(f)(\omega(t))\right]|\\ & { { \;\leqslant\;}}4 \sup_{{\omega}'\in { \omega}^{\omega}_{(-\infty,\ell\,]}}{{\ensuremath{\mathbb p } } } ( \exists \ , x\in [ 1,\ell]:\ { \omega}_x(t)\neq { \omega}_x'(t))\\ & { { \;\leqslant\;}}4 { { \ensuremath{\mathbb p } } } _ { { \omega}^*}(\tau_\ell > t)\\ & { { \;\leqslant\;}}4 { { \ensuremath{\mathbb p } } } _ { { \omega}^*}(x({\omega}^*(t))<\ell).\end{aligned}\ ] ] the first equality follows by adding and subtracting }(f)(\omega(t)\right] ]. then by construction , for any .thus the first statement follows at once from proposition [ prop : key1 ] .the other two statements follow from the fact that \}}\right ) { { \;\leqslant\;}}\ell p^{{\delta}\ell^{{\varepsilon}/2}}. \qedhere\ ] ] as the east process is an interacting particle system whose rates are bounded by one , it is well known that in this case information can only travel through the system at finite speed . 
a quantitative statement ofthe above general fact goes as follows .[ finitespeed ] for and , define the `` linking event '' as the event that there exists a ordered sequence or of rings of the poisson clocks associated to the corresponding sites in \cap { { \ensuremath{\mathbb z } } } ] and let .assume .then for any and any the following holds : \in b\}}{\mathds 1}_{\{x({\omega}(t))=a\}}\mid { \ensuremath{\mathcal f}}_s\right]-{{\ensuremath{\mathbb e } } } _ { \omega}\left[{\mathds 1}_{\{\vartheta_{x({\omega}(s ) ) } [ { \omega}(t)]\in b\}}\mid { \ensuremath{\mathcal f}}_s\right]{{\ensuremath{\mathbbe } } } _ { \omega}\left[{\mathds 1}_{\{x({\omega}(t))=a\}}\mid { \ensuremath{\mathcal f}}_s\right]\,\big|\\= o(e^{-\ell}).\end{gathered}\ ] ] to see what the proposition roughly tells we first assume that the front at time is at . then the above result says that at a later time any event supported on ] . using lemma [ linearspeed ] , with probability greater than we can assume that ] , the events and imply that there exists with the following properties : * ; * the hitting time is smaller than ; * is identically equal to one in the interval ] and once for the choice of otherwise .the statement now follows by taking small enough . we now prove .as before we give the result in the east process setting ( for the law and replaced by its random shifted version ) .we decompose the interval \cap { { \ensuremath{\mathbb z } } } ] and . ] , corollary [ cor : spacing ] together with the markov property at time show that \mid { \ensuremath{\mathcal f}}_s\right)\nonumber\\ & { { \;\leqslant\;}}\left\|{{\ensuremath{\mathbb p } } } _ { \omega}(\cdot\mid { \ensuremath{\mathcal f}}_s)-\pi\right\|_{[x({\omega}(s)),x({\omega}(s))+{\delta}]}+ \pi\left({\omega}_x=1\ \forall x\in [ x({\omega}(s)),x({\omega}(s))+{\delta}]\right ) \nonumber\\ \label{eq:4 } & { { \;\leqslant\;}}{\delta}\left(\frac{c^*}{q}\right)^{\delta}e^{-(t - s)({{\rm gap}}({\ensuremath{\mathcal l}})\wedge m ) } + p^{|{\delta}|}=o(t^{-10{\epsilon}}).\end{aligned}\ ] ] above we used the fact that .hence , can be chosen depending only on such that holds and stays bounded as + we now take the union of the random intervals ] , with the additional property that it does not contain a sub - interval of length where is constantly equal to one ( which will then imply , with room to spare ) .+ we now upper bound the probability that the set ] is uniformly in the configuration at time . in conclusion we proved that ssc holds with probability in an interval containing ] of length .hence we can apply proposition [ prop : key1 ] to the interval ] with probability .finite speed of propagation in the form of lemma [ linearspeed ] guarantees that , with probability , .the proof of is complete .it remains to prove .let \cap { { \ensuremath{\mathbb z } } } ] .this property is assumed henceforth .let us decompose according to the value of the front : .\end{gathered}\ ] ] using lemma [ linearspeed ] , occurs with probability greater than . thus \\= \sum_{\substack { a\in { { \ensuremath{\mathbb z } } } \\ 0<a - x({\omega}(t_\ell)){{\;\leqslant\;}}v_{\rm max}(t - t_\ell)}}{{\ensuremath{\mathbb e } } } _ { \omega}\left[{\mathds 1}_{\{\vartheta_{a } { \omega}(t)_{\lambda}\in a\}}\,{\mathds 1}_{\{x({\omega}(t))=a\}}\mid { \ensuremath{\mathcal f}}_{t_\ell}\right ] + e^{-{\gamma}(t - t_\ell)}.\end{gathered}\ ] ] by definition , the event is the same as the event . 
using the restriction that , the choice of and the fact that , we get ] ) to get that where is the length of , since by assumption satisfies wsc in .because of our choice of the parameters the r.h.s .of is if are chosen small enough and large enough respectively depending on . since by remark [ speedbnd ] as can be chosen to be bounded as + the claim now follows because , with \\ & \subset [ x({\omega}(t_\ell))-v_{\rm min}t_\ell,\ , x({\omega}(t_\ell))- ( v_{\rm max}/v_{\rm min})\kappa\ell\ , ] \subset i,\end{aligned}\ ] ] together with the translation invariance of expressed by .this establishes and concludes the proof of theorem [ th : key3 ] .notice that at all points in the proof , was chosen to be bounded as the proof is based on a coupling argument .there exists such that , for any large enough and for any pair of starting configurations , } { { \;\leqslant\;}}c ' e^{-t^{\alpha}},\ ] ] with independent of .also can be chosen uniformly as once this step is established and using the invariance of the measure under the action of the semigroup , }&= \|\,\mu_{\omega}^t -\int d\nu({\omega}')\mu_{{\omega}'}^t\,\|_{[-v^*t,\,0]}\\ & { { \;\leqslant\;}}\int d\nu({\omega}')\|\,\mu_{\omega}^t -\mu_{{\omega}'}^t\,\|_{[-v^*t,\ , 0]}{{\;\leqslant\;}}c ' e^{-t^{\alpha}}.\end{aligned}\ ] ] we now prove .we first fix a bit of notation . given and a large , let where is the constant appearing in theorem [ th : key3 ] , let and define .we then set it will be convenient to refer to the time lag as the -round . in turnwe split each round into two parts : from to and from to .we will refer to the first part of the round as the _ burn - in part _ and to the second part as the _ mixing part_. we also set ] .+ a. if are not equal in the interval , then let them evolve for the mixing part of the round ( i.e. , from time to time ) via the basic coupling .+ b. if instead they agree on , then search for the rightmost common zero of in and call its position .if there is no such a zero , define to be the right boundary of .next sample a bernoulli random variable with .the value has to be interpreted as corresponding to the event that the two poisson clocks associated to and to the origin in the graphical construction did not ring during the mixing part of the round .0.1 cm 1 .if , set and similarly for .the remaining part of the configurations at time is sampled using the basic coupling to the left of and the maximal coupling for the east process in the interval ] .the bound of theorem [ th : key3 ] shows that the contribution of such a case is .having discarded the occurrence of the above `` extremal '' situations , we now assume that are such that : ( i ) they are different in the interval ; ( ii ) they satisfy the -weak spacing condition in ] .theorem [ th : key3 ] proves that , uniformly in , the first error term takes into account the variation distance from of the marginals in of and , the second error term bounds the probability that either or do not satisfy the ssc condition in the interval ] and in a time lag , we see a discrepancy . in conclusion , the probability that in is larger than thus proving the claim .we are now in a position to finish the proof of theorem [ coupling ] .let and let be the left boundary of the interval ] imply the linking event from lemma [ finitespeed ] . by construction for large enough . 
therefore ,:\ { \omega}_x\neq { \omega}'_x)&{{\;\leqslant\;}}p_n + { { \ensuremath{\mathbb p } } } \left(f(a_n ,- v^*t;t_n , t)\right)\\ & { { \;\leqslant\;}}o(e^{-t^{\alpha}})+ e^{-{\epsilon}v_{\rm max}t},\end{aligned}\ ] ] as required .moreover , by the proof of claim [ cm1 ] , can be chosen uniformly as .thus we are done . to prove we observe that , for any , the event implies the occurrence of the linking event .lemma [ finitespeed ] now gives that { { \;\leqslant\;}}\max_{|x|{{\;\leqslant\;}}v_{\rm max}{\delta}}f(x)^2 + \sum_{n\ge v_{\rm max}{\delta } } f(n+1)^2 e^{-n }< \infty.\ ] ] in order to prove we apply the markov property at time and write = \int d\mu^{t_{n-1}}_{\omega}({\omega}')\ , { { \ensuremath{\mathbb e } } } _ { { \omega}'}\left[f(\xi_1)\right].\end{gathered}\ ] ] at this stage we would like to appeal to theorem [ coupling ] to get the sought statement. however theorem [ coupling ] only says that , for any large enough , is very close to the invariant measure in the interval ] and identically equal to elsewhere . then , under the basic coupling , the front at time starting from is different from the front starting from iff the linking event occurred . in conclusion ,if , - \int d\mu^{t_{n-1}}_{\omega}({\omega}')\ , { { \ensuremath{\mathbb e } } } _ { \phi_{t_{n-1}}({\omega}')}\left[f(\xi_1)\right]\bigg|\\ & { { \;\leqslant\;}}{{\ensuremath{\mathbb p } } } ( f(-v^*t_{n-1},0;0,{\delta}))^{1/2}\sup_{{\omega}\in { \omega}_{{\rm f}}}{{\ensuremath{\mathbb e } } } _ { \omega}\left[f(\xi_1)^2\right]^{1/2}\\ & { { \;\leqslant\;}}e^{-v^*t_{n-1}/2 } \sup_{{\omega}\in { \omega}_{{\rm f}}}{{\ensuremath{\mathbb e } } } _ { \omega}\left[f(\xi_1)^2\right]^{1/2}.\end{aligned}\ ] ] we can now apply theorem [ coupling ] to get that -{{\ensuremath{\mathbb e } } } _ { \nu}\left[f(\xi_1)\right]\bigg| \\ { { \;\leqslant\;}}\left [ \sup_{{\omega}\in { \omega}_{{\rm f}}}\|\mu_{\omega}^{t_{n-1}}-\nu\|^{1/2}_{[-v^*t_{n-1},0 ] } + e^{-v^*t_{n-1}/2}\right]\sup_{{\omega}\in { \omega}_{{\rm f}}}{{\ensuremath{\mathbb e } } } _ { \omega}\left[f(\xi_1)^2\right]^{1/2 } = o(e^{-t_{n-1}^{\alpha}/2}).\end{gathered}\ ] ] to prove suppose first that where is the constant appearing in theorem [ coupling ] . then we can use the markov property at time and repeat the previous steps to get the result .if instead it suffices to write \right)\ ] ] and apply to ] for all , then implies that otherwise there exists such that ^{1/2}\ge \frac{q+pq^*}{2p } ] and let .clearly as uniformly in then corollary [ cor : wf ] implies that }{n^{1/2}}\right]^2=\frac 1n \sum_{j , k=1}^n { { \operatorname{cov}}}_{\omega}\left(\tilde f_n(\bar \xi_j),\tilde f_n(\bar \xi_k)\right)\ ] ] converges to as uniformly in .hence it is enough to prove the result for the truncated variables . 
for lightness of notationwe assume henceforth that the s are bounded .let now and let ,\quad j\in \{1,\dots , n\}.\end{aligned}\ ] ] the decay of covariances implies that .hence it is enough to show that is asymptotically normal .the main observation of , in turn inspired by the stein method , is that the latter property of follows if =0 , \quad \forall { \lambda}\in { { \ensuremath{\mathbb r } } } .\ ] ] in turn follows if ( see *eqs .( 4)(5 ) ) &=0\\ \label{eq:19tris } \lim_{n\to \infty } \frac{1}{\sqrt{{\alpha}_n } } { { \ensuremath{\mathbb e } } } _ { \omega}\bigl[\big|\ \sum_{j=1}^n \bar\xi_j\bigl(1-e^{-i{\lambda}\frac{s_n}{\sqrt{{\alpha}_n}}}-i{\lambda}s_{j , n}\bigr)\big|\bigr]&=0\\ \label{eq:19quatris}\lim_{n\to \infty } \frac{1}{\sqrt{{\alpha}_n}}\sum_{j=1}^n{{\ensuremath{\mathbb e } } } _ { \omega}\bigl[\bar \xi_j\ , e^{i{\lambda}\frac{(s_n - s_{j , n})}{\sqrt{{\alpha}_n}}}\bigr ] & = 0.\end{aligned}\ ] ] as in , the mixing properties and easily prove that and hold . as far as is concerned the formulation of theorem [ coupling ] forces us to argue a bit differently than .we first observe that , using the boundedness of the variables s , is equivalent to =0 , \quad \forall { \lambda}\in { { \ensuremath{\mathbb r } } } .\ ] ] fix two numbers and with ( eventually they will be chosen logarithmically increasing in ) and write {\mathds 1}_{\{| \frac{(s_n - s_{j , n})}{\sqrt{{\alpha}_n}}| > l\}}\\ & = : \y^{(j)}_1+y^{(j)}_2+y^{(j)}_3.\end{aligned}\ ] ] let us first examine the contribution of and to the covariance term .using the boundedness of the variables there exists a positive constant such that : | & { { \;\leqslant\;}}c \,\sqrt{n } \\frac{l^{m+1}}{m!},\\ \frac{1}{\sqrt{{\alpha}_n}}\sum_{j=\ell_n}^n|{{\ensuremath{\mathbb e } } } _ { \omega}\left[\bar \xi_j\,y^{(j)}_3\right]|&{{\;\leqslant\;}}c\,\sqrt{n}\max_j { { \ensuremath{\mathbb e } } } _ { \omega}\left[e^{2|{\lambda}| \frac{|s_n - s_{j , n}|}{\sqrt{{\alpha}_n}}}\right]^{1/2 } { { \ensuremath{\mathbb p } } } _ { \omega}\left(| \frac{(s_n - s_{j , n})}{\sqrt{{\alpha}_n}}| > l\right).\end{aligned}\ ] ] [ large - dev ] there exists such that , for all large enough and any , { { \;\leqslant\;}}2 e^{c { \beta}^2}.\ ] ] moreover , there exists such that , for all large enough and all , assume for the moment the lemma and choose and . we can conclude that | { { \;\leqslant\;}}c\sqrt{n}\left[e^{-c'l^2}+ \frac{l^{m+1}}{m!}\right],\end{gathered}\ ] ] so that |=0.\ ] ] we now examine the contribution of to .recall thus clearly , =\frac{1}{\sqrt{{\alpha}_n}}\sum_{j=\ell_n}^n\sum_{m=1}^m \left ( \frac{i{\lambda}}{\sqrt{n}}\right)^m{\sum_{\substack{i_1,\dots , i_m \\\min_k |i_k - j|\ge \ell_n}}}{{\ensuremath{\mathbb e } } } _ { \omega}\left[\bar \xi_j \prod_{i = k}^m \bar \xi_{i_k}\right],\ ] ] where the labels run in .[ covar2 ] let .then , for any , any and any satisfying , it holds that |=o(e^{-n^{{\alpha}/6}}).\ ] ] here is the mixing exponent appearing in theorem [ coupling ] . assuming the lemma we get immediately that also =0\ ] ] and is established . 
in conclusion , would follow from lemmas [ large - dev][covar2 ] .let us begin with .for simplicity we prove that , for any constant , {{\;\leqslant\;}}e^{c{\beta}^2} ] and get that {{\;\leqslant\;}}{{\ensuremath{\mathbb e } } } _ { \omega}\left[\exp({\beta}s_n/\sqrt{n})\right ] + { { \ensuremath{\mathbb e } } } _ { \omega}\left[\exp(-{\beta}s_n/\sqrt{n})\right]{{\;\leqslant\;}}2 e^{c{\beta}^2}.\ ] ] we partition the discrete interval into disjoints blocks of cardinality .given a integer , by applying the cauchy - schwarz inequality a finite number of times depending on , it is sufficient to prove the result for replaced by the sum of the s restricted to an arbitrary collection of blocks with the property that any two blocks in are separated by at least blocks .fix one such collection and let be the rightmost block in .let be the largest label in which is not in the block and let be the corresponding time .further let . if where is the constant appearing in theorem [ coupling ] , we can appeal to to obtain = { { \ensuremath{\mathbb e } } } _ \nu\left[\exp({\beta}z_b/\sqrt{n})\right]+ o(e^{-n^{{\alpha}/3}}e^{{\beta}n^{-1/6}}).\ ] ] using the trivial bound we have = 1 + \frac{{\beta}^2}{2n}{\operatorname{var}}_\nu(z_b ) + o({\beta}^3 n^{-7/6}){\operatorname{var}}_\nu(z_b),\ ] ] where thanks to . abovewe used the trivial bound {{\;\leqslant\;}}c\ , n^{1/3}{\operatorname{var}}_\nu(z_b).\ ] ] in conclusion , using the apriori bound , we get that { { \;\leqslant\;}}1+c \frac{{\beta}^2}{n^{2/3}}.\ ] ] the markov property and a simple iteration imply that , {{\;\leqslant\;}}\left[1+c \frac{{\beta}^2}{n^{2/3}}\right]^{|{\ensuremath{\mathcal b}}|}{{\;\leqslant\;}}\exp(c ' { \beta}^2),\ ] ] uniformly in the cardinality of the collection .the bound is proved .the bound follows at once from and the exponential chebyshev inequality ,\ ] ] with , being a sufficiently small constant .fix ] and , let be the law of the process started from .recall that and introduce the hitting time where the initial configuration is identically equal to one ( in the sequel ) .it is easy to check ( see , e.g. , ) that at time the basic coupling ( cf . [ setting - notation ] ) has coupled all initial configurations .thus using the graphical construction , up to time the east process in started from the configuration coincides with the infinite east process started from the configuration with a single zero at the origin .therefore thus establishing a bridge with theorem [ th : main1 ] .recall now the definition of from theorem [ th : main1 ] and distinguish between the two cases and . the case . herewe will show that for , let .then implies that as hence , to prove a lower bound on the total variation norm , set ( any diverging sequence which is would do here ) and define the event }\bigr).\ ] ] then and so any lower bound on would translate to a lower bound on up to an additive -term . again by , as thus we conclude that eq . now follows from and by choosing the case here a similar argument shows that using the fact ( following the results in [ sec : front ] ) that if .this concludes the proof of theorem [ th : main2 ] .in this section we consider constrained oriented models on regular trees and prove strong concentration results for hitting times which are the direct analog of the hitting time define in [ east - cutoff ] for the east process . 
as a consequencewe derive a strong cutoff result for the `` maximally constrained model '' ( see below ) .let be the -ary rooted tree , , in which each vertex has children .we will denote by the root and by the subtree of consisting of the first -levels starting from the root . in analogy to the east process , for a given integer consider the constrained oriented process ofa - jf on ( cf . ) in which each vertex waits an independent mean one exponential time and then , provided that among its children are in state , updates its spin variable to with probability and to with probability .it is known that this process exhibits an ergodicity breakdown above a certain critical probability ( defined more precisely later ) . in this paperwe will only examine the two extreme cases and which will be referred to in the sequel as the _ minimally _ and _ maximally _ constrained models .the finite volume version of the ofa - jf process is a continuous time markov chain on . in this case , in order to guarantee irreducibility , the variables at leaves of are assumed to be unconstrained . as in the case of the east process, the product bernoulli measure is the unique reversible measure and the same graphical construction described in [ setting - notation ] holds in this new context .we are now in a position to state our results for the minimally and maximally constrained finite volume ofa - jf models .recall that and define ] such that b. consider the maximally constrained model and choose . for any fixed ,if is large enough then there exists ] such that b. if then for any and any large enough there exists ] , in particular on the other hand , using the results in , there exists a constant such that in conclusion , using theorem [ io e c ] , and we reach a contradiction by choosing .similarly , in the minimally constrained case , assume ,\ ] ] so that using again theorem [ io e c ] together with we get and again we reach a contradiction by choosing .the key observation here is that , for any , the hitting time is stochastically larger than the maximum between independent copies of the hitting time . that follows immediately by noting that : * starting from the configuration identically equal to , a vertex can be updated only after the first time at which all its -children have been updated ; * the projection of the ofa - jf process on the sub - trees rooted at each one of the children of the root of are independent ofa - jf processes on .henceforth , the proof follows from a beautiful argument of dekking and host that was used in to derive tightness for the minima of certain branching random walks .\\ & \ge \frac 12 { { \ensuremath{\mathbb e } } } \bigl[{\tau}^{(1)}(l)+{\tau}^{(2)}(l ) + |{\tau}^{(1)}(l)-{\tau}^{(2)}(l)|\bigr]\\ & = t_{\rm hit}(l ) + \frac 12 { { \ensuremath{\mathbb e } } } \bigl[|{\tau}^{(1)}(l)-{\tau}^{(2)}(l)|\bigr]\\ & \ge t_{\rm hit}(l ) + \frac 12 { { \ensuremath{\mathbb e } } } \bigl[|\bar { \tau}^{(1)}(l)|\bigr],\end{aligned}\ ] ] since whenever are i.i.d .copies of a variable one has by conditioning on and then applying cauchy - schwarz .altogether , { { \;\leqslant\;}}2 \left(t_{\rm hit}(l+1)-t_{\rm hit}(l)\right).\ ] ] the conclusion of the theorem now follows from lemma [ treelem:1 ] and theorem [ io e c ] . 
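The elementary identity behind the last display is max(a, b) = (a + b + |a - b|)/2, so that for two i.i.d. copies of the hitting time the expected maximum equals the mean plus one half of the expected absolute difference of the copies; this is how the increment t_hit(l+1) - t_hit(l) controls the mean absolute deviation of the hitting time, as in the display above. A quick numerical check of the identity, with an arbitrary illustrative law standing in for the hitting time:

import numpy as np

rng = np.random.default_rng(0)
tau1, tau2 = rng.exponential(1.0, size=(2, 1_000_000))   # i.i.d. illustrative copies

lhs = np.maximum(tau1, tau2).mean()
rhs = tau1.mean() + 0.5 * np.abs(tau1 - tau2).mean()
print(lhs, rhs)   # agree up to Monte Carlo error (both are about 1.5 for the Exp(1) law)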
in this casewe define where is the first time that the -child of the root of is updated and we write + \sup_l\sup_{{\omega}\in { \ensuremath{\mathcal g}}_l } { { \ensuremath{\mathbb e } } } _ { \omega}\bigl[{\tau}(l)\bigr],\ ] ] with the set of configurations in with and at least one zero among the children of the children of the root . assuming the lemma we write + c\\ & { { \;\leqslant\;}}\frac 12 { { \ensuremath{\mathbb e } } } \bigl[{\tau}^{(1)}(l)+{\tau}^{(2)}(l ) - |{\tau}^{(1)}(l)-{\tau}^{(2)}(l)|\bigr ] + c\\ & = t_{\rm hit}(l ) -\frac 12 { { \ensuremath{\mathbb e } } } \bigl[|{\tau}^{(1)}(l)-{\tau}^{(2)}(l)|\bigr]+c.\end{aligned}\ ] ] thus {{\;\leqslant\;}}{{\ensuremath{\mathbb e } } } \bigl[|{\tau}^{(1)}(l)-{\tau}^{(2)}(l)|\bigr ] { { \;\leqslant\;}}2\bigl(t_{\rm hit}(l)-t_{\rm hit}(l+1)\bigr ) + 2c.\ ] ] hence ,if $ ] satisfies property ( b ) of lemma [ treelem:1 ] , we get {{\;\leqslant\;}}2 \frac{c_1}{{\delta } } { t_{\rm rel}}\left((1+{\delta})n\right ) + 2c.\ ] ] the conclusion of the theorem now follows from theorem [ io e c ] .fix and and observe that that is because at time while it is a bernoulli(p ) random variable given that the root has been updated at least once .thus {{\;\leqslant\;}}\frac{1}{1-p}\int_0^\infty dt \ , |{{\ensuremath{\mathbb p } } } _ { \omega}({\omega}_r(t)=1 ) -p |.\ ] ] in order to bound from above the above integral we closely follow the strategy of *4 . in what follows , for any finite subtree of , we will refer to the _ children _ of as the vertices of with their parent in . using the graphical construction , for all times we define a ( random ) _ distinguished _ tree according to the following algorithm : a. coincides with the root together with those among its children which have at least one zero among their children ( they are unconstrained ) .b. until the first `` legal '' ring at time at one of the children of , call it . c. .d. iterate .a. for all each leaf of is unconstrained there is a zero among its children ; b. if at time the variables are not fixed by instead are i.i.d with law , then , conditionally on , the same is true for the variables . c. for all , given and , the law of the random time does not depend on the variables ( clock rings and coin tosses ) of the graphical construction in . as in *eqs .( 4.8 ) and ( 4.10 ) , the above properties imply that {{\;\leqslant\;}}e^{-2t/{t_{\rm rel}}(l)}. 
\ ] ] therefore , \big|&{{\;\leqslant\;}}\sup_{{\omega}\in { \ensuremath{\mathcal g}}_l}{{\ensuremath{\mathbb e } } } _ { \omega}\,\big|\ , { { \ensuremath{\mathbb e } } } _ { \omega}\left[{\omega}_r(t)-p\mid \{{\ensuremath{\mathcal t}}_s\}_{s{{\;\leqslant\;}}t}\right]\big| \\ & { { \;\leqslant\;}}\left(\frac{1}{p\wedge q}\right)^{|{\ensuremath{\mathcal t}}_0| } \sup_{{\omega}\in { \ensuremath{\mathcal g}}_l}{{\ensuremath{\mathbb e } } } _ { \omega}\bigl[\sum_{{\omega}\in { \omega}_{{\ensuremath{\mathcal t}}_0}}\pi({\omega})\big|\,{{\ensuremath{\mathbb e } } } _ { \omega}\left({\omega}_r(t)-p\mid \{{\ensuremath{\mathcal t}}_s\}_{s{{\;\leqslant\;}}t}\right)\big|\bigr]\\ & { { \;\leqslant\;}}\left(\frac{1}{p\wedge q}\right)^{|{\ensuremath{\mathcal t}}_0|}\sup_{{\omega}\in { \ensuremath{\mathcal g}}_l}{{\ensuremath{\mathbb e } } } _ { \omega}\left[{\operatorname{var}}_{\pi}\left({{\ensuremath{\mathbb e } } } _ { \omega}\left({\omega}_r(t){\thinspace |\thinspace}\{\xi_s\}_{s{{\;\leqslant\;}}t}\right)\right)^{1/2}\right ] \\ & { { \;\leqslant\;}}\left(\frac{1}{p\wedge q}\right)^{|{\ensuremath{\mathcal t}}_0|}e^{-t/{t_{\rm rel}}(l)}\,.\end{aligned}\ ] ] by theorem [ io e c ] we have that , and the proof is complete .consider the maximally constrained process on and let be the first time at which all the children of the root have been updated at least once starting from the configuration identically equal to one .for a given and , further let be the maximal subtree rooted at where is equal to one . finally , recall that denotes the basic coupling given by the graphical construction and that denotes the process at time started from the initial configuration .recall that under the basic coupling all the starting configurations have coupled by time .hence , where is the first time that the first ( in some chosen order ) child of the root has been updated starting from all ones . by construction , at time first child has all its children equal to zero .therefore the event implies that there exists some other child of the root such that has cardinality at least . using reversibility and the independence between and the process in the subtree of depth rooted at together with a union bound over the choice of ,we conclude that the statement of the lemma follows at once by summing over .[ l.2 ] fix any positive integer . for all thereexists such that + c\,{t_{\rm rel}}(l)\qquad & \text{if } p < p_c,\\ & ( ii)&t_{\rm hit}(l+\ell)&{{\;\leqslant\;}}{{\ensuremath{\mathbb e } } } \left[{\tau}^{\rm max}(l)\right]+ c l { t_{\rm rel}}(l ) \qquad & \text{if } p = p_c.\end{aligned}\ ] ] moreover , for any , for simplicity we give a proof for the case .the general proof is similar and we omit the details . we first claim that , starting from , one has {{\;\leqslant\;}}c\ , |{\ensuremath{\mathcal c}}_{\omega}|{t_{\rm rel}}(l)\ ] ] for some constant , where denotes the cardinality of .if we assume the claim , the strong markov property implies that +c\ , { { \ensuremath{\mathbb e } } } \left[|{\ensuremath{\mathcal c}}_{{\omega}({\tau}^{\rm max}(l))}|\right]\,{t_{\rm rel}}(l)\ ] ] where all expectations are computed starting from all ones .using lemma [ l.1bis ] , {{\;\leqslant\;}}c ' \sum_{\omega}\pi({\omega})|{\ensuremath{\mathcal c}}_{\omega}(r)|\ ] ] for some constant and parts ( i ) and ( ii ) of the lemma follow by standard results on percolation on regular trees ( see , e.g. 
, ) .to prove we proceed exactly as in lemma [ l.1 ] .we first write {{\;\leqslant\;}}\frac{1}{1-p}\int_0^\infty dt \ , |{{\ensuremath{\mathbb p } } } _ { \omega}({\omega}_r(t)=1 ) -p |\ ] ] and then we apply the results of *4 to get that .\ ] ] thus , latexmath:[\[\frac{1}{1-p}\int_0^\infty dt \ , |{{\ensuremath{\mathbb p } } } _ { \omega}({\omega}_r(t)=1 ) -plastly we prove .the subcritical case follows easily from and markov s inequality , while the critical case follows from . to see this ,write using markov s inequality and , \right]\\ & { { \;\leqslant\;}}\frac{c}d \ , { { \ensuremath{\mathbb e } } } \left[{\mathds 1}_{\{|{\ensuremath{\mathcal c}}_{{\omega}({\tau}^{\rm max}(l))}| { { \;\leqslant\;}}d^{2/3}\}}|{\ensuremath{\mathcal c}}_{{\omega}({\tau}^{\rm max}(l))}|\right ] { \;\leqslant\;}c { d}^{-1/3}.\end{aligned}\ ] ] the second term is also using lemma [ l.1bis ] and the fact that , for , fix .let be a sequence such that , for all large enough , for some constant independent of .the existence of such a sequence is guaranteed by lemma [ treelem:1 ] .we begin by proving that exactly as for the east process , one readily infers from the graphical construction that at time all initial configurations have coupled . therefore ( cf . [ east - cutoff ] ) , if , markov s inequality together with imply that \\ & { { \;\leqslant\;}}\frac{2}{{\delta } } c\,{t_{\rm rel}}(l_n).\end{aligned}\ ] ] inequality now follows by choosing .next we prove the lower bound start the process from the configuration identically equal to one and let be the time when all the vertices at distance from the root have been updated at least once .conditionally on , the root is connected by a path of to some vertex at distance at time . on the other hand , standard percolation results for that the -probability of the above event is smaller than provided that is chosen large enough .therefore , for such value of , it remains to show that for .we prove this by contradiction .let , where is a constant to be specified later , and suppose that .using lemma [ l.2 ] we can choose a large constant independent of such that and hence , by a union bound , however , for large enough , this contradicts theorem [ th : main3 ] .theorem [ th : main4 ] now follows from , , theorems [ io e c ] and [ noi ] , and lemma [ treelem:1 ] .we are grateful to y. peres for pointing out the relevant literature on branching random walks , which led to improved estimates in theorems [ th : main3][th : main4 ] .we also thank o. zeitouni for an interesting conversation about the concentration results on trees and o. blondel for several useful comments .this work was carried out while f.m .was a visiting researcher at the theory group of microsoft research and s.g .was an intern there ; they thank the group for its hospitality .
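the hitting - time quantities appearing above can also be explored by direct simulation . the following is a minimal monte carlo sketch ( not the authors' code ) of a kinetically constrained spin process on a rooted binary tree , written under the convention suggested by the construction above : each vertex carries a 0/1 spin and rings at rate 1 , a ringing vertex refreshes to 1 with probability p ( to 0 with probability q = 1 - p ) only if at least one of its children is 0 , and the vertices of the last generation are treated as unconstrained ( a boundary convention assumed here ) . the function estimates the mean first time the root receives a legal update starting from the all - ones configuration , a crude proxy for the hitting times discussed above .

import random

def mean_root_hitting_time(depth=5, p=0.5, n_runs=100, t_max=1.0e6):
    # binary tree in heap order: vertices 1 .. 2**(depth+1)-1, children of v are 2v and 2v+1;
    # vertices with index >= 2**depth form the last generation and are treated as unconstrained.
    n = 2 ** (depth + 1) - 1
    first_leaf = 2 ** depth
    total = 0.0
    for _ in range(n_runs):
        spin = [1] * (n + 1)                    # all-ones initial configuration; spin[0] unused
        t = 0.0
        while t < t_max:
            t += random.expovariate(n)          # time to the next ring among n rate-1 clocks
            v = random.randint(1, n)            # the ringing vertex, uniform over the tree
            if v < first_leaf and spin[2 * v] == 1 and spin[2 * v + 1] == 1:
                continue                        # constrained: no zero among the children, the ring is not legal
            if v == 1:
                break                           # first legal update of the root: record the hitting time
            spin[v] = 1 if random.random() < p else 0   # legal ring: refresh with a bernoulli(p) coin
        total += t
    return total / n_runs

print(mean_root_hitting_time(depth=5, p=0.5, n_runs=100))

increasing p ( fewer zeros ) slows the dynamics markedly , as expected for kinetically constrained models ; the sketch is only meant as a sanity check on the qualitative behaviour , not as a substitute for the bounds proved above .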
|
the east process is a 1d kinetically constrained interacting particle system , introduced in the physics literature in the early 90 s to model liquid - glass transitions . spectral gap estimates of aldous and diaconis in 2002 imply that its mixing time on sites has order . we complement that result and show cutoff with an -window . the main ingredient is an analysis of the _ front _ of the process ( its rightmost zero in the setup where zeros facilitate updates to their right ) . one expects the front to advance as a biased random walk , whose normal fluctuations would imply cutoff with an -window . the law of the process behind the front plays a crucial role : blondel showed that it converges to an invariant measure , on which very little is known . here we obtain quantitative bounds on the speed of convergence to , finding that it is exponentially fast . we then derive that the increments of the front behave as a stationary mixing sequence of random variables , and a stein - method based argument of bolthausen ( 82 ) implies a clt for the location of the front , yielding the cutoff result . finally , we supplement these results by a study of analogous kinetically constrained models on trees , again establishing cutoff , yet this time with an -window .
|
forest stand development has been studied for many decades , and a practical understanding of the general patterns and forms observed in the population dynamics is well established . however , despite the development of a great body of simulation models for multi - species communities ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , the elucidation of general rules for the structural development of monocultures is not clear .this is due in part to the huge variation in physiological and morphological traits of tree species , but also because of the importance of space and size dependent interactions .great progress has been made in the analysis of both size - structured ( see e.g. * ? ? ? * ) and , more recently , spatially - structured population models ( see e.g. * ? ? ?* ; * ? ? ?however , an understanding of the dynamics of real communities , structured in both size and space , has been limited by a lack of application of simple models , amenable to analysis and approximation , to the communities in question . an important concept in forest conservation and uneven - aged stand managementis that of `` old - growth '' .this is an autogenic state which is obtained through an extended period of growth , mortality and regeneration , in the absence of external disturbances .it is often seen as an `` equilibrium '' state , and is characterised by a fully represented ( high variance ) age and size structure , and non - regular spatial pattern . depending on the species involved , it may take several centuries to attain .the habitat created in this state is generally considered a paradigm of what conservation oriented forest management might hope to achieve . whilst marked point process simulations have recently been used to analyse the effects of plantation stand management , we seek to develop and directly apply a generic process - based model , which is closely related to those of and ) , to understanding the key elements of observed stand behaviour , from planting through to old - growth , which can also be applied to guide silviculture .our approach is illustrated via application to data on scots pine ( _ l .pinus syslvestris _ ) .transformation management aims to speed the transition to the old - growth state , from the starting point of a plantation stand . suggested methods for the attainment of this `` sustainable irregular condition '' ; some transformation experiments have taken place or are in progress , whilst other work has made more in - depth analysis of the structural characteristics of natural forest stands .an example of a `` semi - natural '' stand , of the type studied by is shown in figure 1 .however , the management history of such stands is generally not known sufficiently ( if at all ) before around 100 years ago , complicating parameter estimation and model validation . a generic spatial , size - structured , individual based model of interacting sessile individualsis presented in section [ sec_model ] .parameters are estimated and the model assessed using data obtained from scots pine ( _ l .pinus sylvestris _ ) communities .section [ sec_results ] studies model dynamics : an initial growth dominated period gives way to a reduction in density and a meta - stable state governed by reproduction and mortality , all of which correspond with field observations of the growth of stands of a range of species . 
keeping in mind this long - term behaviour , section[ sec_manage ] considers examples of the application of management practices which may accelerate transformation .the model is a markovian stochastic birth - death - growth process in continuous ( two - dimensional ) space .individuals have fixed location , and a size which increases monotonically ; these jointly define the state space of the process .the model operates in continuous time by means of the gillespie algorithm ; this generates a series of events ( i.e. growths , births , deaths ) and inter - event times . after any given event ,the rate ( ) of every possible event that could occur next is computed .the time to the next event is drawn from an exponential distribution with rate ; the probability of a particular event occurring is .interaction between individuals plays a key role , operating on all population dynamic processes in the model .individuals interact with their neighbours by means of a predefined `` kernel '' which takes a value dependent upon their separation and size difference . assuming that interactions act additively , and that the effects of size difference and separation are independent, we define a measure of the competition felt by tree where is the set of all individuals excluding . is the size of tree and its position .we here consider a generic form for the interaction kernel ; a flexible framework implemented by .competitive inhibition is a gaussian function of distance to neighbours .this is then multiplied by the size of the competitor , and a function , which represents size asymmetry in the effects of competition .that is where .the function allows anything from symmetric ( ) to completely asymmetric competition ( ) .multiplying interaction by the size of the neighbour considered reflects the increased competition from larger individuals , independent of the size difference ( consider two tiny individuals with given separation / size - difference , compared to two large ones with the same separation / difference ) .we consider trees with a single size measure , `` dbh '' ( diameter at breast height ( 1.3 m ) ) , a widely used metric in forestry , due to its ease of measurement in the field .dbh has been shown to map linearly to exposed crown foliage diameter ( which governs light acquisition and seed production ) with minimal parameter variation across many species ( purves , unpublished data , and see ) .we use the gompertz model for individual growth , reduced by neighbourhood interactions .this leads to an asymptotic maximum size , and was found to be the best fitting , biologically accurate , descriptor of growth in statistical analysis of tree growth increment data ( results not shown ) .trees grow by fixed increments m at a rate in the absence of competition ( ) , the asymptotic size of an individual is thus . under intense competition , the right hand side of equation [ eqn_gomp ] may be negative . in this case, we fix ( similarly to e.g. ) . 
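to make the interaction and growth rules above concrete , here is a small illustrative sketch ( hypothetical parameter names and functional details ; in particular the asymmetry function is replaced by a logistic function of the size difference , since its exact form is not reproduced above ) . it computes the competition index felt by a focal tree as a gaussian function of distance to each neighbour , weighted by the neighbour's size and by the asymmetry weight , and then the gompertz - type rate of dbh - increment events , truncated at zero under intense competition as stated above .

import numpy as np

def competition_index(i, dbh, pos, sigma=5.0, steepness=10.0):
    # gaussian-in-distance competition felt by tree i, each neighbour weighted by its size
    # and by an asymmetry weight in (0, 1); a logistic function of the size difference is
    # used here as a stand-in for the asymmetry function of the text.
    d2 = ((pos - pos[i]) ** 2).sum(axis=1)                     # squared distances to all trees
    asym = 1.0 / (1.0 + np.exp(-steepness * (dbh - dbh[i])))   # larger neighbours weigh more
    w = dbh * asym * np.exp(-d2 / (2.0 * sigma ** 2))
    w[i] = 0.0                                                 # no self-competition
    return float(w.sum())

def growth_event_rate(xi, wi, a=0.07, b=0.02, c=0.1, delta=0.005):
    # rate of dbh-increment events (fixed increments delta, in metres) for a tree of dbh xi
    # under competition wi: gompertz-type growth reduced by competition, truncated at zero.
    rate = (xi * (a - b * np.log(xi)) - c * wi) / delta
    return max(rate, 0.0)

pos = np.array([[10.0, 10.0], [12.0, 10.0], [60.0, 80.0]])     # three trees on a 1 ha plot
dbh = np.array([0.20, 0.35, 0.10])                             # dbh in metres
for i in range(len(dbh)):
    wi = competition_index(i, dbh, pos)
    print(i, round(wi, 3), round(growth_event_rate(dbh[i], wi), 2))

in this toy configuration the smaller of the two close neighbours has its growth rate driven to zero by its larger neighbour , which illustrates the asymmetric suppression the kernel is designed to capture .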
variation in minimal effect on dynamics provided it is sufficiently small that growth events happen frequently compared to mortality and birth .mortality of an established individual occurs at a rate is a fixed baseline , and causes individuals under intense competition to have an elevated mortality rate .exisiting individuals produce offspring of size m at a rate determined by their seed production .this is proportional to crown foliage area , and hence also to basal area .the individual rate of reproduction is thus .offspring are placed at a randomly selected location within 10 m of the parent tree with probability of establishment / survival .this approximation assumes years taken to reach initial size ( 0.01 m dbh ) and avoids introduction of time - lagged calculations , which would impair computational and mathematical tractability .the fecundity of trees and accurate quantification of seed establishment success is a long standing problem in forest ecology , due the combination of seed production , dispersal , neighbourhood and environmental effects involved .submodels for regeneration are often used , but due to data collection issues , precise definition of their structure and parameterisation is more difficult ( e.g. * ? ? ?the approximation described above effectively removes this stage of the life cycle from the model , allowing a focus on structure in mature individuals only .our presented simulations use an establishment time ( ) of 20 years , which is supported by field studies of scots pine regeneration ( sarah turner , unpublished data ) .community structure is tracked via various metrics : density ( number of individuals per m ) , total basal area ( ) , size and age density distributions , and pair correlation and mark correlation functions ( relative density and size multiple of pairs at given separation , ) .all presented model results ; means and standard deviations ( in figures , lines within grey envelopes ) are computed from 10 repeat simulation runs .the simulation arena represents a 1 ha plot ( m ) .periodic boundary conditions are used .results are not significantly altered by increasing arena size , but a smaller arena reduces the number of individuals to a level at which some statistics can not be computed accurately .we use data from two broad stand types ( collected in scotland by forest research , uk forestry commission ) : plantation and `` semi - natural '' ( see * ? ? ?* ; * ? ? ?plantation datasets ( ha stands ) from glenmore ( highland , scotland ) incorporate location and size , allowing comparison of basic statistics at a single point in time ( stand age 80 years ) .semi - natural data is available from several sources .spatial point pattern and increment core data ( measurements of annual diameter growth over the lifespan of each tree , at 1.0 m height ) for four stands in the black wood of rannoch ( perth and kinross , scotland ) allows estimation of growth ( and growth interaction ) parameters .location and size measurements ( at one point in time ) from a semi - natural stand in glen affric ( highland , scotland ) provide another basis for later comparison . 
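putting the three demographic processes together , one event of the gillespie scheme described above can be sketched as follows ( a simplified illustration , not the authors' implementation ; the three rate functions are assumed to return the per - individual growth , mortality and reproduction rates described in the text ) : all rates are collected , the waiting time is drawn from an exponential with the total rate , and one event is chosen with probability proportional to its rate .

import random

def gillespie_step(trees, growth_rate, death_rate, birth_rate):
    # one event of the birth-death-growth process; `trees` is any indexable collection of
    # tree records, and the three rate functions (assumed helpers) return the per-individual
    # event rates.
    events = []
    for i in range(len(trees)):
        events.append((growth_rate(i, trees), "grow", i))
        events.append((death_rate(i, trees), "die", i))
        events.append((birth_rate(i, trees), "reproduce", i))
    total = sum(rate for rate, _, _ in events)
    if total <= 0.0:
        return None, float("inf")                 # nothing can happen any more
    dt = random.expovariate(total)                # exponential waiting time with the total rate
    u = random.uniform(0.0, total)
    acc = 0.0
    for rate, kind, i in events:
        acc += rate
        if u <= acc:
            return (kind, i), dt                  # event chosen with probability rate / total
    return (events[-1][1], events[-1][2]), dt     # guard against floating-point round-off

# toy usage with constant dummy rates
trees = [(0.01, (2.0 * k, 2.0 * m)) for k in range(5) for m in range(5)]
rates = lambda r: (lambda i, ts: r)
print(gillespie_step(trees, rates(0.5), rates(0.01), rates(0.05)))

the selected event is then applied ( increment the tree's dbh by the fixed increment , remove the tree , or attempt to place an offspring within 10 m of the parent with the establishment probability ) , and summary statistics such as density , basal area and the pair correlation function are recorded at the desired output times .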
in none of the standsis there adequate information to reliably estimate mortality ( , ) or fecundity ( ) .these are thus tuned to satisfactorily meet plantation and steady state ( semi - natural stand ) density .the baseline mortality rate used gives an expected lifespan of 250 years .a nonlinear mixed effects ( nlme ) approach was used to estimate growth parameters , and .best - fitting growth curves were computed for each of a subset of individuals from two of the rannoch plots , and the mean , standard deviation and correlation between each parameter within the population was estimated .details are given in appendix 1 , electronic supplementary materials ( esm ) .mean values for and are used for simulation , though large variation between individuals was observed . was difficult to estimate from the semi - natural data , its standard deviation being larger than its mean .however , it has a large effect on the simulated `` plantation '' size distribution , whilst semi - natural stand characteristics are relatively insensitive to its precise value ( appendix 2 , esm ) . therefore a value slightly lower than the estimated mean was used in order to better match the size distribution in both plantation and semi - natural stages . was selected to provide an interaction neighbourhood similar to previous authors ( e.g. * ? ? ? determines early ( plantation ) size distribution , and was selected accordingly ( it has minimal effect on long - run behaviour ) .all parameter values used for simulation are shown in table [ tab_parametertable ] . sensitivity to parameter variation over broad intervalswas also tested , a brief summary of which is provided in appendix 2 , esm .a standard planting regime implemented in scots pine plantations is a 2 m square lattice , typically on previously planted ground .old stumps and furrows prevent a perfectly regular structure being created , so our initial condition has 0.01 m dbh trees with small random deviations from exact lattice sites ..model parameters , description and values .[ cols="<,<,<",options="header " , ] [ tab_parameffect2 ]this section simply presents the same results relating to management as those in section 4 of the main text ( `` acceleration of transition to old - growth state '' ) .the statistics computed using the model in which individual variation ( `` model 2 '' ) is allowed show a similar but slightly less clear pattern .temporal evolution of basal area and density show precisely the same pattern as those under the homogeneous growth model ( `` model 1 '' ) the longer the duration of management , the closer they remain to the steady state after thinning . under model 1 ,the size distribution demonstrated a shift in canopy peak as the total duration of management increased , with a larger size and lower density ( figure 5c in main text ) . under model 2 ,the size distribution shows no increase in the size of trees in the canopy , only a reduction in density towards that of the steady - state distribution ( figure [ fig_manageapp]c here ) .this is due to the much lower mean asymptotic size under model 2 .with regards the pair correlation function ( pcf ) , the shift towards the steady state appears to be present but is also slightly less clear the shift towards a clutered pattern being slower to occur under model 2 ( figure [ fig_manageapp]d here ) . 
figure : effect of the management interval , 2 years ( solid ) , 5 years ( dash ) and 10 years ( fine dash ) , on the dynamics , demonstrated by ( a ) basal area , ( b ) density , ( c ) size distribution at 200 years , ( d ) pcf at 200 years . the dotted lines show the dynamics of an unmanaged forest , whilst the thick solid lines in ( c ) and ( d ) show the long - run steady state .

gratzer , g. , canham , c. , dieckmann , u. , fischer , a. , iwasa , y. , law , r. , lexer , m. j. , sandmann , h. , spies , t. a. , splectna , b. e. , and swagrzyk , j. ( 2004 ) . spatio - temporal development of forests : current trends in field methods and models . 107:315 .

larocque , g. r. ( 2002 ) . examining different concepts for the development of a distance - dependent competition model for red pine diameter growth using long - term stand data differing in initial stand density . 48(1):2434 .

law , r. , illian , j. , burslem , d. f. r. p. , gratzer , g. , gunatilleke , c. v. s. , and gunatilleke , i. a. u. n. ( 2009 ) . ecological information from spatial patterns of plants : insights from point process theory . 97:616628 .

weiner , j. , stoll , p. , muller - landau , h. , and jasentuliyana , a. ( 2001 ) . the effects of density , spatial pattern , and competitive symmetry of size variation in simulated plant populations . 158(4):438450 .
|
\1 . concerns about biodiversity and the long - term sustainability of forest ecosystems have led to changing attitudes with respect to plantations . these artificial communities are ubiquitous , yet provide reduced habitat value in comparison with their naturally established counterparts , key factors being high density , homogeneous spatial structure , and their even - sized / aged nature . however , _ transformation _ management ( manipulation of plantations to produce stands with a structure more reminiscent of natural ones ) produces a much more complicated ( and less well understood ) inhomogeneous structure , and as such represents a major challenge for forest managers . \2 . we use a stochastic model which simulates birth , growth and death processes for spatially distributed trees . each tree s growth and mortality is determined by a competition measure which captures the effects of neighbours . the model is designed to be generic , but for experimental comparison here we parameterise it using data from caledonian scots pine stands , before moving on to simulate silvicultural ( forest management ) strategies aimed at speeding transformation . \3 . the dynamics of simulated populations , starting from a plantation lattice configuration , mirror those of the well - established qualitative description of natural stand behaviour conceived by , an analogy which assists understanding the transition from artificial to old - growth structure . \4 . data analysis and model comparison demonstrates the existence of local scale heterogeneity of growth characteristics between the trees composing the considered forest stands . \5 . the model is applied in order to understand how management strategies can be adjusted to speed the process of transformation . these results are robust to observed growth heterogeneity . \6 . we take a novel approach in applying a simple and generic simulation of a spatial birth - death - growth process to understanding the long run dynamics of a forest community as it moves from a plantation to a naturally regenerating steady state . we then consider specific silviculture targeting acceleration of this transition to `` old - growth '' . however , the model also provides a simple and robust framework for the comparison of more general sivicultural procedures and goals . \(1 ) school of physics and astronomy , the university of edinburgh , eh9 3jz , scotland , ( 2 ) biomathematics and statistics scotland , eh9 3jz , ( 3 ) forest research , northern research station , midlothian , eh25 9sy + * e - mail : t.p.adams.ed.ac.uk
|
coherence is not only the essence of the interference phenomena but also the foundation of quantum theory [ 1 ] .it is almost directly or indirectly related to all the intriguing quantum phenomena .the most remarkable phenomena are quantum correlation including quantum entanglement .both can be understood as the combination of the coherence and the tensor product structure of state space and play important roles in quantum information processing tasks ( qipts ) [ 2 - 5 ] .in addition , it is also shown that quantum coherence has been widely applied in quantum thermal engine [ 6,7 ] , biological system [ 8 ] and quantum parallelism [ 9 ] . in quantum information , quantum feature such as entanglement [ 10 ] and quantum correlation [ 11 ] , due to the potential application in qipts , can be well quantified from the resource theory point of view . recently , in the same manner a rigorous framework has been developed for the quantification of quantum coherence [ 12 ] .it points out that a good coherence measure should satisfy three conditions : 1 ) the incoherent states have no coherence ; 2 ) ( monotonicity ) incoherent completely positive and trace preserving maps can not increase the coherence , or the average coherence is not increased under selective measurements ; 3 ) ( convexity ) it is not increased under the mixing of quantum states .meanwhile it also presented several good coherence measures .however , such coherence measures strongly depend on the choice of the basis .this means that a quantum state can have certain coherence in one basis , but it could possess more , less , or none coherence in the other basis . even though such a basis - dependent quantification of quantum coherence is consistent with our intuitive understanding ( such as the contribution of off diagonal entries of density matrix ) , this could only consider the partial contribution of coherence of a state , once one is allowed to freely select the basis . in particular, the change of basis is a quite easy thing or at a small price at practical scenarios . taking the linear optics for an example, one can only rotate the wave plate to get to another framework [ 13 ] . since quantum coherencecan be understood as the useful resource , why not try one s best to extract it as many as possible ?so it is natural to consider , with all potential basis taken into account , how much coherence a state possesses or what is the maximal coherence in a state . in this paper , we present the total coherence measure to quantify all the contributions of the quantum coherence in a state .the most distinct feature of this measure is that it only covers the property of a state instead of the external observable ( the choice of basis ) .we give several analytically calculable coherence measures in two different frameworks : optimization among all potential bases or quantifying the distance between the state and the incoherent state set .we find that all measures satisfy the mentioned three properties . in particular , one can find that the measure based on norm is also a valid candidate , even though the norm is not contractive .in addition , we find that the coherence measures based on relative entropy and the norm have the same result in the different frameworks . from the angle of the experimental detection , we give an explicit scheme to physically detect these measures .it is shown that such detections do not require reconstructing the full density matrix . 
as an application , we study the total coherence in the dqc1-like quantum probing schemes [ 14,15 ] including the qom [ 16 ] .as we know , dqc1-like quantum schemes show quantum speedup , but what the source of the speedup is remains open . hereinstead of finding the exact source , we study what cost is needed to pay for such schemes .it is found that both the normalized trace in dqc1 and the overlap of two states in qom can be well described by the change of the total coherence of the probing qubit . in other words, the nontrivial quantum probing always gives rise to the change of the total coherence .the paper is organized as follows .we first propose various total coherence measures ; then we present the properties of these measures ; and then we study the total coherence in the dqc1-like quantum probing schemes ; finally , we give a summary and discussions .the classical coherence is usually characterized by the frequencies and the phases of different waves , but a good definition of quantum coherence stemming from the superposition of state ( a single wave ) depends not only on the state itself but also on the associated observable .the physical root of such a definition is that the measurement on the observable can reveal the interference pattern provided that the observable does not commute with the considered density matrix . in this sense , it is obvious that the coherence measure will have to depend on the framework ( or basis ) that the density matrix is given in [ 17,18 ] .therefore , there are naturally two ways to quantifying quantum coherence : one is based on the commutation , the other is based on the distance . based on the former , ref .[ 19 ] used the skew information to measure the coherence , and based on the latter , ref .[ 12 ] proposed several measures .we also used norm to study the source of quantum entanglement [ 20 ] . considering the potential classification of coherence of composite quantum system , we have provided a new angle to understand the geometric quantum discord , quantum non - locality and the monogamy of coherence [ 21 ] . herewe shall consider the maximal coherence with different bases taken into account , or the total coherence which a state could have .therefore , a natural method to doing so is to maximize the basis - dependent coherence by taking into account all the potential bases .in addition , as mentioned before , the coherence can be embodied by the commutation between the state and some particular observable . since we consider all potential bases or ( observables ) , it is implied that the incoherent state requires that the density matrix should commute with all observables .the direct conclusion is that the incoherent state is the maximally mixed state with denoting the dimension of the state and denoting the -dimensional unity .so one can easily construct the coherence measure based on the distance [ 9 ] . in the following, we will consider the coherence measures both by optimizing the basis and by the distance .* coherence based on basis optimization.*-with the different bases considered , we can define the total coherence based on the optimization of basis as follows. 
denotes the diagonal matrix and denotes some good norms or distance functions .for example , we can employ the norm , norm , relative entropy and so on .one can also use the trace norm and fidelity , but the incoherent state is usually not given by , but some particular states in the incoherent set [ 12 ] .the skew information can also be employed , but no explicit incoherent state is required . in order to provide an explicit expression of the total coherence , next we will list some coherence measures by the particular choice of the `` norms '' .\(1 ) norm could be the most easily calculable norm .but it is not contractive , so in many cases an unphysical result could appear [ 12 ] . in the current case , one will find in the paper that norm can be safely used to quantify the total coherence measure .based on norm , we have minimum can be reached because there always exists the basis such that the diagonal entries of are uniform .\(2 ) if we employ the relative entropy [ 9 ] , the total coherence can be given by denoting the relative entropy of and .the minimum is also achieved by the basis subject to the uniform distribution of the diagonal entries of .\(3 ) based on skew information , we will have a different definition . the skew information [ 22 - 24 ] for a density matrix and an observable is given by ^{2} ] for all . so the skew information in the basis can quantify the coherence of in this basis .considering all the potential basis , the maximal naturally quantifies the total coherence as mentioned above .this definition should be distinguished from that in ref . [19 ] where the coherence could depend on the eigenvalues of the observable . norm of a matrix is defined by the sum of the absolute values of all the entries of the matrix .it is also a good norm even though it is not a unitary - invariant norm . with this norm, the total coherence can be given by , with the maximum reached when all the diagonal entries of equal .however , because norm is not unitary - invariant , the optimal result of for a general ( especially in high dimensional hilbert space ) can not be easily given .but one can find that is unitary- invariant , because the optimization compensates for it .in addition , the explicit expressions of the total coherence based on trace norm and the fidelity can not be easily given because the nearest incoherent state can not be determined in general cases .* coherence based on distance.*-since the completely incoherent state is , one can always define the total coherence based on the distance between a given state and . using some ( unitary - invariant ) norms or distance functions , we have since no optimization is included , all the coherence measures can be easily calculated so long as one selects a proper function .for example , one can easily find the explicit form of the total coherence measure based on trace norm , fidelity and so on . here, we would like to emphasize the following several candidates .( ) if the relative entropy is used , one can find ( ) if l norm is selected , we have it is obvious that the total coherence measures based on the relative entropy and the norm have the same final expressions in the different frameworks . because norm could be changed by a unitary operation , the coherence based on it has to take some optimization on the unitary transformations , that is , . 
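for readers who wish to experiment with these quantities , the following numerical sketch ( an illustration consistent with the closed forms above , not code from the paper ; natural logarithms are used ) evaluates the l2 - norm based total coherence , i.e. the purity displaced by 1/d , and the relative - entropy based total coherence log d - s(rho) for an arbitrary density matrix , and checks that both are unchanged under a random unitary change of basis .

import numpy as np

def l2_total_coherence(rho):
    # tr(rho^2) - 1/d : squared hilbert-schmidt distance from the maximally mixed state,
    # i.e. the purity displaced by 1/d.
    d = rho.shape[0]
    return float(np.real(np.trace(rho @ rho))) - 1.0 / d

def relative_entropy_total_coherence(rho):
    # log d - s(rho), the relative entropy between rho and i/d (natural logarithm).
    d = rho.shape[0]
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]                  # convention 0 log 0 = 0
    return float(np.log(d) + np.sum(evals * np.log(evals)))

def random_density_matrix(d, rng):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

rng = np.random.default_rng(0)
rho = random_density_matrix(3, rng)
u, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
rotated = u @ rho @ u.conj().T                    # a change of basis
print(l2_total_coherence(rho), l2_total_coherence(rotated))                              # equal
print(relative_entropy_total_coherence(rho), relative_entropy_total_coherence(rotated))  # equal

both functions return zero for the maximally mixed state and their maximum for pure states , in line with property (i) discussed below .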
herewe would like to emphasize that our coherence measures actually are closely related to the purity .one knows that the purity of a state is defined by .it is obvious that for pure states and for mixed state , based on which one can design many other similar quantities for purity such as and with and so on .these purities reach maximum value for pure state and nonzero minimum value for maximally mixed states .thus the presented total coherence can be regarded as a displacement on the purity . in this sense , we give the purity a new understanding by the coherence and _vice versa_. one should note that our coherence measures are defined different from the basis - dependent coherence , so the criteria for a good measure should be different either .next , we will list the useful properties that these measures satisfy , meanwhile they could form new criteria for a basis - independent coherence measure .in what follows , we will list several good properties that our above total coherence measures satisfy . in particular , we will show that the coherence measure based on norm is still a monotone , even though it is not a contractive norm .this could provide great convenience for the future applications .\(i ) _ maximal for pure states and vanishing for incoherent states.- _ it is easy to find that all the coherence measures vanish for maximally mixed state and arrive at its maximal value for pure states .this can be well understood , since any pure state can be converted to a maximally coherent state by changing basis .( ii)__invariant under unitary operations.-__the most obvious feature , based on the definitions , is that all these measures are invariant under unitary transformations .( iii)_convexity.- _ all the measures are convex .that is , the total coherence will not increase under mixing .this can be found from the fact that all the norms satisfy the triangle inequality .for the squared norm , one also needs to consider the convexity of the quadratic function .in addition , we know that the fidelity is strongly concave , the von neumann entropy is concave and the skew information is convex , so this property can be easily proved .( iv)_monotonicity.- _ this property will have different contents from that for the basis - dependent coherence measure .just as in entanglement theory and coherence measure [ 12 ] , the definitions of entanglement monotone and coherence monotone require the non - entangling operations and the basis - dependent incoherent operations , respectively .now let the incoherent operation be given in kraus representation as .if we follow the rules of non - entangling operations and the basis - dependent incoherent operations the elements of which can not generate entanglement or coherence , one can easily find that should be a unitary transformation neglecting a constant ( corresponding to probability ) .therefore , here the incoherent operations can be rewritten as .a simple algebra can show that the average total coherence equals the total coherence of the original state before the operation , but the total coherence of the final state after the operation will not be increased due to the convexity .we would like to emphasize that the coherence measure based on norm also satisfies this property , so it can be used safely .\(v ) _ coherence does nt increase under the special povm.- _ this is another interesting property for the total coherence .let s consider such an operation that is given in kraus representation as .one can find that this operation can not create 
any coherence from the incoherent state , even though the single element such as may produce coherence .therefore , the average total coherence could be increased by .however , it is interesting that the total coherence of the final state ( the final ensemble generated by on the original state ) after this operation can not be increased .this conclusion may be drawn from the fact that all the above employed quantifications but norm are contractive .however , one can also prove that this property is satisfied for the case of norm . the proof is given as follows .let denote the density matrix that we want to consider .the final state after the operation can be given by .thus , where we use the eigenvalue decomposition of with denoting the diagonal matrix of eigenvalues and with . since , it is obvious that defines a positive operator - valued measurement ( povm ) and which implies {ij}\right\vert ^{2}=\sum_{im}\left\vert [ a_{m}]_{ij}\right\vert ^{2}=1 $ ] .expand , we will have {ij}\right\vert ^{2}\lambda _ { i}\lambda _ { j } \nonumber \\ & \leq & \left [ \sum\limits_{i}\lambda _ { i}^{2}\right ] ^{1/2}\left [ \sum\limits_{i}\left ( \sum_{jm}\left\vert [ a_{m}]_{ij}\right\vert ^{2}\lambda _ { j}\right ) ^{2}\right ] ^{1/2 } \nonumber \\ & \leq & \left [ \sum\limits_{i}\lambda _ { i}^{2}\right ] ^{1/2}\left [ \sum\limits_{ij}\sum_{m}\left\vert [ a_{m}]_{ij}\right\vert ^{2}\lambda _ { j}^{2}\right ] ^{1/2 } \nonumber \\ & = & \sum\limits_{i}\lambda _ { i}^{2}=tr\rho^{2}.\end{aligned}\ ] ] this shows that the total coherence based on norm is not increased under the operation either .in the above sections , we mainly consider the mathematical approaches to measuring the total coherence .how can we directly measure the coherence experimentally ?in fact , one can note that the presented measures can be expressed by the function of the eigenvalues of the density matrix , for example , eqs .( 2,3,4,6,7 ) .since the eigenvalues of a density can be directly measured ( for example , the schemes for the measurable entanglement and discord [ 25 - 27 ] ) , our presented coherence can be naturally determined .however , for integrity and the latter use , we would like to briefly describe the concrete implementation .since for any -dimensional density matrix , one can only set , respectively , and experimentally measure . in this way, we can get equations depending on the eigenvalues . in principle , all the eigenvalues can be determined by solving these equations . in order to do so , we can define the generalized swapping operator as . with the swapping operator one can find .thus one can first prepare a probing qubit and copies of measured state . then let the particles undergo a controlled gate , i.e. , with the probing qubit as the control qubit .finally , the measurement is performed on the probing qubit and the probability of obtaining will be which is as expected . the quantum circuit is shown in fig .hence , generally speaking , all the coherence measures can always be obtained by measuring with at most copies of the state . however , for the coherence measure based norm , one can find that the measurement scheme becomes quite simple , because it can be directly obtained by only measuring with only 2 copies of , which does not depend on the dimension of the measured density matrix .this is akin to the overlap measurement scheme [ 16 ] . .for the quantum probing scheme , the initial state of the probing qubit is given by and the initial state is usually replaced by some density matrix . 
in dqc1 scheme , with . , width=188 ]here we consider the dqc1-like quantum probing schemes which include the above mentioned qom [ 16 ] and the remarkable dqc1 schemes [ 14,15 ] .the features of this kind of schemes are ( i ) a probing qubit is used to extract the information from the quantum system ; ( ii ) the cost of the probing does not depend on the probed quantum systems ( or the dimension of the input state space ) , once the system has been designed .the quantum circuit can be sketched as fig .1 , where a probing qubit is sent to the probed quantum system , and the interaction between the probing qubit and the system is usually provided by one or several controlled- operations . it is shown that there exists quantum speedup in the schemes [ 14,15 ] , but the essence of this speedup is neither entanglement nor discord between the probing qubit and the probed qubits [ 14,28,29 ] .most people could think that the coherence as the candidate should be an intuitive physics , but no quantitive description has been presented up to now . herewe will do such a job by proving a weak result that nontrivial probing needs the existence of coherence . for generality , we set the probing qubit to be given by is a real 3-dimensional vector with and denotes the vector made up of the 3 pauli matrices .suppose that the probed _n_-dimensional density matrix is denoted by .so the controlled- operation will lead to the final state as the hadamard gate .thus the final density matrix of the probing qubit becomes in order to guarantee that this probing scheme works , it is required that and do not vanish simultaneously .similarly , if we want to use operation to probe information of , should not be the identity .based on eq .( 2 ) and eq . (10 ) , one can easily obtain the total coherence based on l norm for and as .\end{aligned}\]]so the change of the total coherence can be given by the subscript denoting the change of the total coherence induced by the controlled- operation .thus is closely related to the evaluation of this quantum probing scheme . from the point of probing qubit of view, the cost of the probing qubit is that the total coherence changes for such a task .generally , if is an identity which means we do nothing in the scheme , one will see that .for the general dqc1 where and with , we have which is consistent with ref .for the qom where , and , one can immediately find that which is directly given by the overlap of and .in fact , the above probing schemes maybe include more unitary operations denoted by ( here we mainly consider the controlled - u operations , and the unitary operations separately performed on the probing qubit and the probed quantum state . in particular , it is more reasonable to consider all the operations given by the basic quantum logic gates . ) .we would like to define the cost of such a scheme as the sum of the changes of the total coherence of the probing qubit .that is , taking all the unitary operations .in particular , one should note that for any based on eq .therefore , generally speaking , the more operations are used , the more cost is paid . in particular, will not vanish if the probing scheme only includes two unitary operations such as and .this should be distinguished from the scheme which only includes a single identity operation .this difference can be understood in the frame of basic logic gates . 
that is ,suppose and are given by a series of logic gates , the qubits through them will have to undergo the corresponding ` dynamical evolution ' , even though at the final moment the original state is recovered .on the contrary , a direct identity operation ( we mean no logic gates ) , no such an ` evolution ' is needed . in this sense , the subscript in eq .( 15 ) taking all the covered logic gates could be more reasonable , but it will lead to more complicated calculations because how to construct a given operation by logic gates has to be considered . finally , one can find that the coherence measures given by eqs .( 3,4 ) are described by the eigenvalues of the density matrix . in the above probing scheme, one can easily calculate that the eigenvalues of the density matrix are the functions of .hence we can always find what the cost is for the different measures which we can choose .in particular , it can be shown that is directly related to . to sum up, vanishes if and only if the probing scheme is trivial . in this sense, we think that can be understood as the cost of such a probing scheme .we have studied the total coherence of a quantum state and presented several coherence measures which are independent of the basis .it is shown that all the presented measures especially including the measure based on norm satisfy all properties such as the monotonicity .this actually provides a very convenient tool for the relevant researches due to the simple form of norm . in particular , we have shown that the total coherence measures based on the relative entropy and the norm have the same expression by optimizing the basis or by quantifying the distance .in addition , for integrity , the experimental schemes for the detection of coherence are also briefly introduced .finally , we study the total coherence in the dqc1-like quantum probing schemes .it is shown that both the normalized trace in dqc1 and the overlap of the two states in qom can be well described by the change of the total coherence of the probing qubit .in other words , all the nontrivial probing schemes have to lead to the change of the total coherence .therefore , this change can be understood as the cost of implementing such a probing scheme .this could motivate a new platform to study the essence of the speedup of mixed - state quantum computing .
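as a concrete check of the dqc1 discussion above , the following sketch ( an independent numerical illustration using the l2 - norm based total coherence and the standard dqc1 circuit , not code from the paper ) prepares the probing qubit in (|0> + |1>)/sqrt(2) , applies a controlled - u with the register in the maximally mixed state , traces out the register , and compares the probe's loss of total coherence with the quantity ( 1 - |tr u / 2^n|^2 ) / 2 , a function of the normalized trace that dqc1 estimates ; the final hadamard on the probe is omitted since the total coherence is invariant under it .

import numpy as np

def l2_total_coherence(rho):
    d = rho.shape[0]
    return float(np.real(np.trace(rho @ rho))) - 1.0 / d

def dqc1_probe_state(u):
    # reduced state of the probing qubit after: probe prepared in (|0> + |1>)/sqrt(2),
    # register in the maximally mixed state i/2^n, controlled-u applied, register traced out.
    tau = np.trace(u) / u.shape[0]                 # the normalized trace estimated by dqc1
    return 0.5 * np.array([[1.0, np.conj(tau)], [tau, 1.0]])

rng = np.random.default_rng(1)
dim = 8                                            # a three-qubit register, for instance
g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
u, _ = np.linalg.qr(g)                             # a random unitary, for illustration only

probe_before = 0.5 * np.ones((2, 2), dtype=complex)          # the pure state |+><+|
probe_after = dqc1_probe_state(u)
delta_c = l2_total_coherence(probe_before) - l2_total_coherence(probe_after)
tau = np.trace(u) / dim
print(delta_c, 0.5 * (1.0 - abs(tau) ** 2))        # the two numbers coincide

repeating the computation with u taken as the swap operator and the register prepared in a product of two states ( so that tr u / 2^n is replaced by the overlap of the two states ) reproduces in the same way the dependence on the overlap that underlies the qom scheme .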
|
quantum coherence is the most fundamental feature of quantum mechanics . the usual understanding of it depends on the choice of the basis , that is , the coherence of the same quantum state is different within different reference framework . to reveal all the potential coherence , we present the total quantum coherence measures in terms of two different methods . one is optimizing maximal basis - dependent coherence with all potential bases considered and the other is quantifying the distance between the state and the incoherent state set . interestingly , the coherence measures based on relative entropy and norm have the same form in the two different methods . in particular , we show that the measures based on the non - contractive norm is also a good measure different from the basis - dependent coherence . in addition , we show that all the measures are analytically calculable and have all the good properties . the experimental schemes for the detection of these coherence measures are also proposed by multiple copies of quantum states instead of reconstructing the full density matrix . by studying one type of quantum probing schemes , we find that both the normalized trace in the scheme of deterministic quantum computation with one qubit ( dqc1 ) and the overlap of two states in quantum overlap measurement schemes ( qom ) can be well described by the change of total coherence of the probing qubit . hence the nontrivial probing always leads to the change of the total coherence .
|
fair allocation problems arise naturally in various real - world contexts and are the object of study in several research areas such as mathematics , game theory and operations research .these problems consist in sharing resources among several self - interested parties ( players or agents ) so that each party receives his / her due share . at the same timethe resources should be utilized in an efficient way from a central point of view .a wide variety of fair allocation problems have been addressed in the literature depending on the resources to be shared , the fairness criteria , the preferences of the agents , and other aspects for evaluating the quality of the allocation . in this paperwe focus on a specific discrete allocation problem , introduced briefly in , that can be seen as a multi - agent _ subset sum problem _ : a common and bounded resource ( representing e.g. , bandwidth , budget , space , etc . )is to be shared among a set of agents each owning a number of indivisible items .the items require a certain amount of the resource , called item weight and the problem consists in selecting , for each agent , a subset of items so that the sum of all selected items weights is not larger than a given upper bound expressing the resource capacity .we assume that the utility function of each agent consists of the sum of weights over all selected items of that agent . in this context , maximizing the resource utilization is equivalent to determining the solution of a classical , i.e. single agent , subset sum problem . since we are interested in solutions implementing some fairness criteria , we call the addressed problem the _ fair subset sum problem _ ( fssp ) . throughout the paper , as usual with allocation problems , we consider for each agent a _ utility function _ which assigns for any feasible solution a certain utility value to that agent .we assume that the system utility ( e.g. the overall resource utilization in an allocation problem ) is given by the sum of utilities over all agents .this assumption of additivity appears frequently in quantitative decision analysis ( cf .e.g. ) .the solution is chosen by a central decision maker while the agents play no active role in the process .the decision maker is confronted with two objectives : on one hand , there is the maximization of the sum of utilities over all agents . on the other hand , such a _ system optimum _ may well be highly unbalanced .for instance , it could assign all resources to one agent only and this may have severe negative effects in many application scenarios .thus , it would be beneficial to reach a certain degree of agents satisfaction by implementing some criterion of fairness .clearly , the maximum utility taken only over all fair solutions will in general deviate from the system optimum and thus incurs a loss of utility for the overall system . in this paperwe want to analyze this loss of utility implied by a fair solution from a worst - case point of view . this should give the decision maker a guideline or quantified argument about the _ cost of fairness_. a standard indicator for measuring this system efficiency loss is given by the relative loss of utility of a fair solution compared to the system optimum in a worst - case sense , which is called _ price of fairness _( ) .the concept of fairness is not uniquely defined in the scientific literature since it strongly depends on the specific problem setting and also on the agents perception of what a fair solution is . 
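returning to the problem statement at the beginning of this section , maximizing the overall resource utilization reduces to a classical single - agent subset sum problem ; as a reminder , the standard pseudo - polynomial dynamic program can be sketched as follows ( an illustration , not part of the paper ; integer item weights are assumed ) .

def subset_sum_max(weights, c):
    # largest achievable sum of a subset of `weights` not exceeding the capacity c
    # (classical pseudo-polynomial dynamic program; integer weights assumed).
    reachable = [True] + [False] * c
    for w in weights:
        for s in range(c, w - 1, -1):      # downwards, so each item is used at most once
            if reachable[s - w]:
                reachable[s] = True
    return max(s for s in range(c + 1) if reachable[s])

print(subset_sum_max([3, 5, 7, 11], 17))   # prints 16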
in this paperwe consider three types of fair solutions , namely proportional fair , maximin and kalai - smorodinski solutions ( definitions are given in section [ sec : notation ] ) .moreover , we formalize several properties of fair solutions some of which have been already investigated in some specific contexts holding for any general multi - agent problem without any specific assumption on the utility sets , contrarily from most of the scientific literature on allocation or multi - agent problems .the most significant part of this work is devoted to completely characterizing for the fair subset sum problem with two agents for the three above mentioned fairness concepts .caragiannis et al . were the first to introduce the concept of in the context of fair allocation problems : in particular , they compare the value of total agents utility in a global optimal solution with the maximum total utility obtained over all fair solutions ( they make use of several notions of fairness namely , proportionality , envy - freeness and equitability ) . in , bertsimasfocus on proportional fairness and maximin fairness and provide a tight characterization of the price of fairness for a broad family of allocation problems with compact and convex agents utility sets .the price of fairness measures the inefficiency implied by fairness constraints , similarly to the utility loss implied by selfish behavior of agents and quantified by the price of anarchy ( see , e.g. ) . from a wider perspective , many authors have dealt with the problem of balancing global efficiency and fairness in terms of defining appropriate models or designing suitable objective functions or determining tradeoff solutions ( see for instance ) . a recent survey on the operations research literature that considersthe tradeoff between efficiency and equity is .the subset sum problem considered in this paper is related to the so - called _ knapsack sharing problem _ in which different agents try to fit their own items in a common knapsack ( see for instance ) .the problem consists in determining the solution that tries to balance the profits among the agents by maximizing the objective of the agent with minimum profit .as we will see , this problem is equivalent to determining a specific type of fair solution , known as maximin solution in the literature .another special knapsack problem has been addressed in , where a bi - objective extension of the linear multiple choice knapsack ( lmck ) problem is considered .the author wants to maximize the profit while minimizing the maximum difference between the resource amounts allocated to any two agents .fairness concepts have been widely studied in the context of _ fair division _problems , see e.g. for a general overview , and in many other application scenarios ( mostly in telecommunications systems and , more recently , in cloud computing ) . in particular , in the authors point out that resource allocation in computing systems is one of the hottest topics of interest for both computer scientists and economists .fair division includes a great variety of different problems in which a set of goods has to be divided among several agents each having its own preferences .the goods to be divided can be ( ) a single heterogeneous good as in the classical cake - cutting problem ( see e.g. and , which considers price of fairness in the line of ) , ( ) several divisible goods as in resource allocation problems ( see e.g. ) , or ( ) several indivisible goods ( see e.g. 
) .the fair subset sum problem we address is strongly related to fair division .it can be seen either as a single resource allocation problem in which the resource can be only allocated in predetermined blocks / portions ( the item weights ) or as a special case of the indivisible goods problem in which , due to an additional capacity constraint , only a selection of the goods can be allocated .a different but related scenario is presented in , where a game is considered in which several agents own different tasks each requiring certain resources .the agents compete for the usage of the scarce resources and have to select the tasks to be allocated .the paper is organized as follows .the next section provides the basic definitions , the formal statements for the problems studied ( section [ sec : def_ss ] ) and a summary of our results ( section [ sec : summary ] ) .some properties which hold for any general -agent problem are given in section [ sec : general ] , where the special case of problems with a symmetric structure is also addressed . in section [ sec : subsetsum ] we consider the fair subset sum problem with two agents in two different scenarios . in particular , in section [ sec : separate ] we present the results concerning the case in which the two agents have two disjoint sets of items , while in section [ sec : shared ] the case in which the agents share a common set of items is considered .finally , in section [ sec : conc ] some conclusions are drawn .consider a general multi - agent problem , e.g. some type of resource allocation problem , in which we are given a set of agents and let be the set of all feasible solutions , e.g. allocations .each agent has a utility function .if two solutions and yield the same utility for all agents , i.e. for all , then we are not interested in distinguishing between them and we consider and as equivalent .note that we do not make any assumption on the set nor on the functions .we define the above problem to be _ symmetric _ and denote it by , if for any solution and for any permutation of the agents there always exists a solution such that for all . in other words , permuting among the agents the utilities gained from a feasible solution in a symmetric problem always results again in a feasible solution .the global or social utility of a solution is the sum of the agents utilities given by .the globally optimal solution is called _ system optimum _ , its value is given by .in addition to the system optimum solution we consider _ fair solutions _ , which focus on the individual utilities obtained by each agent . in this paper , we use three different notions of fairness formally defined below .other notions of fairness , such as envy - freeness or equitability , are not considered here . *_ maximin fairness _ : based on the principle of _ rawlsian justice _ , a solution is sought such that even the least happy agent gains as much as possible , i.e. the agent obtaining the lowest utility , still receives the highest possible utility .+ formally , we are looking for a solution maximizing , such that for all .equivalently , we are looking for a solution such that + we only consider pareto efficient solutions to avoid dominated solutions with the same objective function value .clearly , this does not guarantee the uniqueness of solutions . 
*_ kalai - smorodinski fairness _ : a drawback of maximin fairness is the fact that an agent is guaranteed a certain level of utility , thus possibly incurring a significant loss to the other agents , even though the agent would not be able to gain a substantial utility when acting on its own . in the kalai - smorodinski fairness conceptwe modify the notion of maximin fairness by maximizing the minimum relative to the best solution that an agent could obtain .+ formally , let be the maximum utility value each agent can get over all feasible solutions .a kalai - smorodinski fair solution minimizes , such that for all .equivalently , we are looking for a solution such that as before , we only consider pareto efficient solutions . clearly , if all agents can reach the same utility , i.e. for all , then .* _ proportional fairness _ : a solution is _ proportional fair _ , if any other solution does not give a total relative improvement for a subset of agents which is larger than the total relative loss inflicted on the other agents . note that a pareto - dominated solution can never be proportional fair .+ formally , we are looking for a solution with for all , such that for all feasible solutions while for any instance of the problems considered in this paper maximin and kalai - smorodinski fair solutions always exist , a proportional fair solution might not ( see e.g. example [ ex : ssppof ] ) . on the other hand , as we show in the sequel , proportional fair solutions are always unique , if they exist .in contrast , it should be noted that for maximin fairness and also for kalai - smorodinski fairness schemes , there may exist several different fair solutions . in the literature, these two maximin concepts are sometimes extended to a lexicographic maximin principle ( i.e. among all maximin solutions , maximize the second lowest utility value , and so on ) which still does not guarantee uniqueness of solutions .however , this will not be a relevant issue for this paper .in fact , our restriction to pareto efficient solutions implies the lexicographic principle for agents .it is well known that in case of convex utility sets , the proportional fair solution is a nash solution , i.e. the solution maximizing the product of agents utilities ( cf . ) . even for the general utility sets treated in this paper it is shown in theorem[ th : nash ] that if a proportional fair solution exists then it is the one that maximizes the product of utilities . observe however that the opposite is , in general , not true , since a proportional fair solution does not always exist . in order to measure the loss of total utility or overall welfare of a fair solution compared to the system optimum, we study the price of fairness as defined in : given an instance of our general problem , let be the value of a fair solution and be the system optimum value .the price of fairness , , is defined as follows : obviously , ] . from theorem[ th : shared_pf_sameamount ] we know that if a proportional fair solution exists , then . plugging in this identity into the definition of proportional fairness we get : which proves the thesis .so far , we presented some general results holding for any general multi - agent problem . in the next sectionwe address a specific allocation problem with agents .in this section we focus on the fair subset sum problem ( fssp ) for two agents and we provide several bounds on the price of fairness . 
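before turning to the bounds , the three fairness concepts and the resulting price of fairness can be made concrete on small instances by brute force . the sketch below ( illustrative only : the enumeration is exponential , ties among fair solutions are broken in favour of total utility , and a small numerical tolerance is used ) enumerates all feasible utility pairs for two agents with separate item sets , computes the system optimum , a maximin fair solution , a kalai - smorodinski fair solution and , if it exists , the proportional fair solution .

from itertools import combinations

def achievable(items):
    # all subset sums of one agent's item set
    return {sum(c) for r in range(len(items) + 1) for c in combinations(items, r)}

def fair_values(items1, items2, cap):
    feas = {(a, b) for a in achievable(items1) for b in achievable(items2) if a + b <= cap}
    eff = {p for p in feas
           if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in feas)}
    opt = max(a + b for a, b in feas)                               # system optimum
    # maximin: maximise the utility of the worse-off agent (ties broken by total utility)
    mm = max(eff, key=lambda p: (min(p), sum(p)))
    # kalai-smorodinski: maximise the worst utility relative to each agent's best alone
    best1 = max(a for a, _ in feas)
    best2 = max(b for _, b in feas)
    ks = max(eff, key=lambda p: (min(p[0] / best1, p[1] / best2), sum(p)))
    # proportional fair solution, if it exists (pareto efficient with strictly positive utilities)
    pf = next((p for p in eff if p[0] > 0 and p[1] > 0 and all(
        (q[0] - p[0]) / p[0] + (q[1] - p[1]) / p[1] <= 1e-12 for q in feas)), None)
    return opt, mm, ks, pf

items1, items2, cap = [8, 3], [3], 8               # a toy instance with separate item sets
opt, mm, ks, pf = fair_values(items1, items2, cap)
print("opt", opt, "| maximin", mm, sum(mm) / opt,
      "| kalai-smorodinski", ks, sum(ks) / opt, "| proportional fair", pf)

on this toy instance the maximin and kalai - smorodinski solutions coincide at utilities (3,3) , giving a price of fairness of 3/4 , and no proportional fair solution exists , in line with the remark above that existence is not guaranteed .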
as we discussed in section [ sec : def_ss ] , to give a more comprehensive analysis , we introduce an upper bound on the largest item weight , i.e. for all items and analyze as a function of . formally , we extend the definition of from by taking the upper bound into account : let denote the set of all instances of our fssp where all items weights are not larger than . given let for a solution and be the system optimum value for instance .then we can define the price of fairness depending on as follows : obviously , .it is also clear from the above definition that is monotonically increasing in , i.e. if , then .moreover , note that the value may be actually attained for an instance with .figure [ fig : poftrends ] illustrates the functions and for the separate items sets and shared items set cases .the first bound on the price of fairness for fssp with agents and an upper bound on the maximum item weight , is given in the following lemma .we show in the next sections that for certain values this bound can be improved .[ lem : lessthanalpha ] the price of fairness for any pareto efficient solution of the fssp with agents and an upper bound ] .[ ex : pof_small_pf ] let agent own items of weight and own items of weight . in this casethere are only two pareto efficient solutions , namely which is the system optimum with values and , and with values and .it is easy to show that is a fair solution in all three settings , i.e. . for we get : hence , we can state that for every with , and integer , from the bound of example [ ex : pof_small_pf ] when , we get for .note that for this matches the lower bound of example [ ex : ssppoflarge ] . in the following theoremwe show that the bounds of examples [ ex : ssppoflarge ] and [ ex : pof_small_pf ] for the maximin fairness concept are worst possible when , i.e. can not be larger than the lower bounds provided by those examples . in figure [fig : poftrendsseparate ] the function , or the corresponding upper and lower bounds when , are plotted for the separate items sets case . [th : ssppof ] fssp with separate item sets and an upper bound on the maximum item weight has the following price of fairness for maximin fair solutions : the case follows from lemma [ lem : lessthanalpha ] , thus proving with the lower bound given by example [ ex : pof_small_pf ] .we now consider the case and prove upper bounds ( [ eq : poflarge ] ) and ( [ eq : pofmedium ] ) .the corresponding matching lower bounds were given in example [ ex : ssppoflarge ] and [ ex : pof_small_pf ] ( take ) .we assume without loss of generality that .if the fair solution includes an item with weight , we have and thus .hence , we assume that includes since otherwise neither nor would include the largest item and we could remove it from consideration where the largest item might contribute to . ] .now we consider two cases : * _ case _ in this case , it is feasible to include in and thus and . *_ case _ let for some residual weight .we can assume that , since otherwise we would have again thus implying the thesis .this means that there is enough capacity for to pack at least also in the fair solution , i.e. .now we can distinguish two bounds on the fair solution .+ assume first that since , it must be and thus , but also .+ secondly , assume that if we combine ( [ eq : bnd1 ] ) and ( [ eq : bnd2 ] ) and define and , we have the following : by elementary algebra it is easy to observe that showing that is equivalent to showing this last expression is true for by the definition of . 
finally , for the case it can be easily shown that the desired upper bound of is obtained from ( [ eq : eq ] ) . since when a proportional fair solution exists ( theorem [ th : prop_fair_k ] and corollary [ th : pofpflessthanpofmm ] ) , we get the following result ( see example [ ex : pof_small_pf ] for the tightness of ) . [th : cor_pofpflessmm ] fssp with separate item sets and an upper bound on the maximum item weight has the following price of fairness for proportional fair solutions : we conclude this section by providing upper bounds on the price of fairness for kalai - smorodinski fair solutions .note that these worst case bounds have the same values as those for maximin fair solutions , even though the proof is quite different .as for theorem [ th : ssppof ] , figure [ fig : poftrendsseparate ] illustrates the function for the separate items sets case .recall that it was established by examples [ ex : ks_not_pf ] and [ ex : ks_betterthan_pf ] that in general the utilities reached for the two fairness concepts have no dominance relations .[ th : ssppofks ] fssp with separate item sets and an upper bound on the maximum item weight has the following price of fairness for kalai - smorodinski fair solutions : the lower bounds of and were given in example [ ex : ssppoflarge ] and [ ex : pof_small_pf ] ( take ) . the case follows from lemma [ lem : lessthanalpha ] , thus proving with the lower bound again given by example [ ex : pof_small_pf ] . when it is useful to partition the items into _ small items _ with weight at most and _ large items _ with weight greater than .let us now consider the case and prove the upper bound ( [ eq : poflargeks ] ) . by contradiction , assume that , i.e. it follows that any remaining unpacked small item could be added to .thus , we conclude that _ all _ small items are included in .if neither nor own a large item , the bound of would follow from lemma [ lem : lessthanalpha ] . furthermore ,if does not contain a large item , then , since contains all small items .hence , we can assume w.l.o.g . that owns a large item , say , which is contained in , and write for some weight sum comprising small items .due to does not contain , hence because otherwise could replace .therefore , by the definition of kalai - smorodinski fair solutions , it must be : we can observe that therefore , in the right - hand side of ( [ eq : proofksl ] ) it must be .this means that to fulfill ( [ eq : proofksl ] ) we also must have if also owns a large item , say , then could replace because with assumption ( [ eq : pofksass ] ) and ( [ eq : ksbboundl ] ) we have : the last inequality holds exactly for ] we proceed as follows .let for a fair solution such that .by definition of and pareto efficiency we must have now we consider two cases depending on the weight of the largest item contained in a system optimal solution . * _ case 1 _ : . among the different systemoptima consider the one where , while corresponds to the weight of some other subset of items .clearly , and neither nor contains .we have that since otherwise , i.e. if , we could add to which then exceeds . since also would constitute a solution with better total value than . by a similar argumentalso .+ hence , and we get the upper bound which is increasing in for all . thus , by plugging in the largest possible value , that is , we obtain for * _ case 2 _ : . 
among the different systemoptima consider the one built with an lpt like procedure for ( see for instance ) : the items in are sorted in decreasing order and assigned iteratively to the agent with current lower total weight .let and indicate the values for the two agents in this solution .clearly , in general , it is not known which of the two values is larger .+ if then following ( [ eq : mmopt ] ) any solution with could be used to replace and improve in .hence , it must be .this means that according to the lpt logic , at least one additional item was added to the agent receiving , which can happen only after the other agent weight has exceeded . therefore , .+ by lpt we also have .it follows with ( [ eq : mmopt ] ) that thus , we have + it follows immediately that , for , independently from , while it is clear that for only case 2 is feasible , for an instance with either of the two cases may occur .hence , we can only state an upper bound as a maximum of the two : which easily yields relations and . for , where the bound of theorem [ th : ssppofshared ] is not tight , we can bound ( as in the case of separate item sets ) the ratio between upper and lower bound in on as follows : again , this shows that for smaller values of an almost tight description of is derived .the largest gap arises for where .in this paper we introduced a general allocation function to assign utilities to a set of agents .the focus of our attention is directed on _ fair allocations _ which give a reasonable amount of utility to each agent .a number of fairly general results holding for any multi - agent problem were derived for three different notions of fairness , namely maximin , kalai - smorodinsky and proportional fairness . in particular, we showed that for a large and meaningful class of problems proportional fair solutions are system optimal and equitable , that is each agent receives the same utility as every other agent .in the main part of the paper we considered a bounded resource allocation problem which can be seen as a two - agent version of the subset sum problem and thus is referred to as _fair subset sum problem _ ( fssp ) .we are interested in evaluating the loss of efficiency incurred by a fair solution compared to a system optimal solution which maximizes the sum of agents utilities .in particular , we presented several lower and upper bounds on the price of fairness for different versions of the problem .as discussed for the three notions of fairness considered in this paper , it is in general hard to compute a fair solution , so it would be desirable to introduce a solution concept permitting a polynomial time algorithm , or even a simple heuristic allocation rule , fulfilling some fairness criterion and still guaranteeing an adequate level of efficiency ( i.e. a certain upper bound on the price of fairness ) .concerning fssp , it is easy to show that it is binary np - hard to recognize fair solutions ( for all three fairness concepts ) .in fact , if all item weights and the capacity are integers , it is possible to design dynamic programming algorithms running in pseudopolynomial time to find all po solutions in the separate and shared items cases .the algorithms are briefly sketched hereafter . for separate items ,we may define two dynamic arrays ] , , with binary entries , where e.g. 
=1 ] if =1 ] and =1 ] if a solution with weight for and for exists .it is updated for each item by observing that each entry with =1 ] and =1.$ ] thus , all reachable solutions can be determined in time .more details , e.g. about storing the set of items for each entry , can be found in ( * ? ? ?finally , a natural generalization of the fssp , with significant applications in several real - world scenarios such as project management and portfolio optimization , would consider a different utility function associated to profits , thus defining a multi - agent ( fair ) knapsack problem .gaia nicosia and andrea pacifici have been partially supported by italian miur projects prin - cofin n. 2012jxb3yf 004 and n. 2012c4e3kt 001 .+ ulrich pferschy was supported by the austrian science fund ( fwf ) : p 23829-n13 .99 aumann y. , y. dombb ( 2010 ) . the efficiency of fair division with connected pieces , proceedings of wine 2010 ,_ springer lecture notes in computer science _ , 6484 , 26 - 37 .bertsimas d. , v. farias , n. trichakis ( 2011 ) .the price of fairness , _ operations research _ , 59 ( 1 ) , 1731 . bertsimas d. , v. farias , n. trichakis ( 2012 ) .on the efficiency - fairness trade - off , _ management science _ , 58(12 ) , 22342250 .brams s.j . , a.d .taylor ( 1996 ) ._ fair division : from cake - cutting to dispute resolution _ , cambridge university press .butler , m. , h.p .williams ( 2002 ) .fairness versus efficiency in charging for the use of common facilities , _ journal of operational research society _ , 53(12 ) , 13241329 .caragiannis i. , c. kaklamanis , p. kanellopoulos , m. kyropoulou ( 2012 ) .the efficiency of fair division , _ theory of computing systems _ , 50(4 ) , 589610 , 2012 .see also : proceedings of wine 2009 , _springer lecture notes in computer science _ , 5929 , 475482 .coffman jr . , e.g. , m.r .garey , d.s .johnson ( 1997 ) . approximation algorithms for bin packing : a survey , in : d. hochbaum ( ed . ) , _ approximation algorithms for np - hard problems _ , pws publishing co. darmann a. , g. nicosia , u. pferschy , j. schauer ( 2014 ) . the subset sum game , _european journal of operational research _ , 233(3 ) , 539549 .drees m. , s. riechers , a. skopalik ( 2014 ) .budget - restricted utility games with ordered strategic decisions , proceedings of sagt 2014 , _ springer lecture notes in computer science _ , 8768 , 110121 .fritzsche r. , p. rost , g.p .fettweis ( 2015 ) .robust rate adaptation and proportional fair scheduling with imperfect csi , _ ieee transactions on wireless communications _, 14(8 ) , 4417 - 4427 .fujimoto m. , t. yamada ( 2006 ) .an exact algorithm for the knapsack sharing problem with common items , _european journal of operational research _ , 171(2 ) , 693707 .ghodsi a. , m. zaharia , b. hindman , a. konwinski , s. shenker , and i. stoica ( 2011 ) .dominant resource fairness : fair allocation of multiple resource types , proceedings of the 8th usenix conference on networked systems design and implementation ( nsdi ) , 2437 .goel g. , c. karande , l. wang ( 2010 ) .single - parameter combinatorial auctions with partially public valuations , proceeding of sagt 2010 , _ springer lecture notes in computer science _, 6386 , 234245 .graham r.l .lawler , j.k .lenstra , a.h.g .rinnooy kan ( 1979 ) , optimization and approximation in deterministic sequencing and scheduling : a survey , in : p.l .hammer et al .( eds . ) , _ annals of discrete mathematics _ , 5 , 287326 , elsevier .hifi m. , h. mhallab , s. 
sadfi ( 2005 ) .an exact algorithm for the knapsack sharing problem , _ computers and operations research _ , 32(5 ) , 13111324 .kalai e. , m. smorodinsky ( 1975 ) .other solutions to nash bargaining problem , _ econometrica _ , 43 , 513518 . karsu . , a. morton ( 2015 ) , inequity averse optimization in operational research , _european journal of operational research _, 245(2 ) , 343359 .kellerer h. , u. pferschy , d. pisinger ( 2004 ) . _knapsack problems _ , springer .kelly f.p .maulloo and d.k.h . tan ( 1998 ) .rate control in communication networks : shadow prices , proportional fairness and stability , _ journal of the operational research society _ , 49 , 237252 .klamler , c. ( 2010 ) .fair division , in : kilgour d.m . andc. eden ( eds . ) , _ handbook of group decision and negotiation _ , springer , 183202 .kppen m. , k. yoshida , k. ohnishi , m. tsuru ( 2012 ) .meta - heuristic approach to proportional fairness , _ evolutionary intelligence _ , 5(4 ) , 231244 .kozanidis g. ( 2009 ) .solving the linear multiple choice knapsack problem with two objectives : profit and equity , _ computational optimization and applications _ , 43(2 ) , 261294 .nicosia g. , a. pacifici , u. pferschy ( 2015 ) .brief announcement : on the fair subset sum problem , proceedings of sagt 2015 , _springer lecture notes in computer science _ , 9347 , 309311 .nisan n. , t. roughgarden , e. tardos , v.v .vazirani ( 2007 ) ._ algorithmic game theory _ , cambridge university press .parkes d.c .procaccia , n. shah ( 2015 ) . beyond dominant resource fairness : extensions , limitations , and indivisibilities ._ acm transactions on economics and computation _ , 3(1 ) ,article no .rawls j. ( 1971 ) ._ a theory of justice _ , harvard university press .zhang c. , j.a .shah ( 2015 ) . on fairness in decision - making under uncertainty : definitions , computation , and comparison ._ proceedings of the 29th aaai conference on artificial intelligence _ , 36423648 .* theorem [ thm : pfequal ] * _ if two proportional fair solutions and exist , then for all ._ let and be two proportional fair solutions . by definition of proportional fairness and using equation for both and , we obtain and .let for , clearly .then the two above inequalities can be rewritten as : and . by summing up these last two inequalitieswe get that .moreover , for any .hence , the only possible way to satisfy is , which implies , for all .let be the proportional fair solution and any feasible solution .let .by , recalling that the geometric mean is not larger than the arithmetic mean , we have as a consequence and the thesis follows .
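the pseudopolynomial dynamic programs outlined in the conclusions can be complemented by a short sketch. the fragment below is illustrative only (integer weights are assumed, the instance is invented, and set-based bookkeeping replaces the binary arrays of the description): it computes, for the shared items case, all reachable pairs of agent weights and then filters the pareto efficient ones; the separate items case is analogous, with one reachability table per agent.

```python
def reachable_pairs_shared(weights, capacity):
    """all pairs (q1, q2) with q1 + q2 <= capacity that are reachable by giving
    each shared item to agent 1, to agent 2, or to neither; a sketch of the
    pseudopolynomial dynamic program outlined in the conclusions (integer
    weights assumed)."""
    reached = {(0, 0)}
    for w in weights:
        new = set(reached)                 # 'neither' keeps every old entry
        for q1, q2 in reached:
            if q1 + q2 + w <= capacity:
                new.add((q1 + w, q2))      # item assigned to agent 1
                new.add((q1, q2 + w))      # item assigned to agent 2
        reached = new
    return reached

def pareto_front(pairs):
    """keep only the pairs that are not dominated in both coordinates."""
    return sorted(p for p in pairs
                  if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in pairs))

if __name__ == "__main__":
    weights, capacity = [5, 4, 3, 2], 9    # made-up shared instance
    pairs = reachable_pairs_shared(weights, capacity)
    print("pareto efficient utility pairs:", pareto_front(pairs))
```

from the pareto efficient pairs, a maximin, kalai-smorodinski or proportional fair solution can then be read off exactly as in the definitions of section [ sec : def_ss ].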
|
in this paper we study the problem of allocating a scarce resource among several players ( or agents ) . a central decision maker wants to maximize the total utility of all agents . however , such a solution may be unfair for one or more agents in the sense that it can be achieved through a very unbalanced allocation of the resource . on the other hand fair / balanced allocations may be far from optimal from a central point of view . so , in this paper we are interested in assessing the quality of fair solutions , i.e. in measuring the system efficiency loss under a fair allocation compared to the one that maximizes the sum of agents utilities . this indicator is usually called the _ price of fairness _ and we study it under three different definitions of fairness , namely maximin , kalai - smorodinski and proportional fairness . our results are of two different types . we first formalize a number of properties holding for any general multi - agent problem without any special assumption on the agents utilities . then we introduce an allocation problem , where each agent can consume the resource in given discrete quantities ( items ) . in this case the maximization of the total utility is given by a subset sum problem . for the resulting _ fair subset sum problem _ , in the case of two agents , we provide upper and lower bounds on the price of fairness as functions of an upper bound on the items size . subset sum problem , fairness , multi - agent systems , bicriteria optimization .
|
extracting the full physical content from einstein s equations has proven to be a difficult task .the complexity of these equations has allowed researchers only a peek into the rich phenomenology of the theory by assuming special symmetries and reductions .computational methods , however , are opening a new window into the theory . to realize the full utility of computational solutions in exploring einstein s equations , several questions must first be addressed .namely , a deeper understanding of the system of equations and its boundary conditions , the development and use of more refined numerical techniques and an efficient use of the available computational resources . in recent years, considerable advances have been made in some of these issues , allowing for the analysis of complex physical systems which arguably must be tackled numerically . in the present articlewe highlight some recent analytical and numerical techniques and apply them to two practical applications .the first application is the _ numerical evolution of bubble spacetimes in five - dimensional kaluza - klein theory_. we study their dynamical behavior , the validity of cosmic censorship in a set - up which a - priori would appear promising to give rise to violations of the conjecture and reveal the existence of critical phenomena . as a second application, we discuss the _ numerical evolution of single black hole spacetimes_. here we consider some analytical and numerical difficulties in modeling these systems accurately .we discuss a method to alleviate some of these problems , and present tests to demonstrate the promise of this method .in the cauchy formulation of general relativity , einstein s field equations are split into evolution and constraint equations .numerical solutions are found by specifying data on an initial spacelike slice , subject to the constraints , and by integrating the evolution equations to obtain the future development of the data . owing to finite computer resources ,one is forced to use finite , and , in practice , rather small computational domains to discretize the problem .this raises several important issues .the fundamental property for any useful numerical solution is that the solution must convergence to the continuum solution in the limit of infinite resolution .a prerequisite for a well - behaved numerical solution is a well - posed continuum formulation of the initial - boundary value problem . in certain cases ,the well - posed continuum problem can then be used to construct stable numerical discretizations for which one can _ a priori _ guarantee convergence .in particular , this can be achieved for linear , first - order , symmetric hyperbolic systems with maximally dissipative boundary conditions .this is briefly discussed in sec .[ sect : sbp ] , for a detailed description and an extension to numerical relativity see refs . 
.the application of these ideas in general relativity is , naturally , more complicated .first , einstein s equations are nonlinear and so it is much harder to _ a priori _ prove convergence .however , a discretization that guarantees stability for the linearized equations should already be useful for the nonlinear equations , especially for those systems with smooth solutions as expected for the einstein equations when written appropriately .this is because in a small enough neighborhood of any given spacelike slice , the numerical solution can be modeled as a small amplitude perturbation of the continuum solution .the constraint equations in general relativity bring additional complications and greatly restrict the freedom in specifying boundary and initial data .this is illustrated and further discussed in section [ sect : cpbc ] .section [ sect : dyncontrol ] discusses issues regarding the stability of the constraint manifold .the manifold is invariant with respect to the flow defined by the evolution system in the continuum problem .numerically , however , small errors in the solution arising from truncation or roundoff error may lead to large constraint violations if the constraint manifold is unstable .section [ sect : dyncontrol ] discusses a method for suppressing such rapid constraint violations .a simple numerical algorithm , or `` recipe , '' can be followed to solve first order , linear symmetric hyperbolic equations with variable coefficients and maximally dissipative boundary conditions , for which stability can be guaranteed .it is based on finite difference approximations with spatial difference operators that satisfy the _ summation by parts _ ( sbp ) property .this property is a discrete analogous of _ integration by parts _ ,which is used in the derivation of energy estimates , a key ingredient for obtaining a well posed formulation of the continuum problem .sbp allows to obtain similar energy estimates for the discrete problem .[ [ employ - spatial - difference - operators - that - satisfy - sbp - on - the - computational - domain . ] ] employ spatial difference operators that satisfy sbp on the computational domain . for the sake of simplicity, consider a set of linear , first order symmetric hyperbolic equations in the one - dimensional domain which is discretized with points , , where . nowlet us introduce the discrete scalar product , for some positive definite matrix with elements which in the continuum limit approaches the norm .at the continuum level , the derivative operator and scalar product satisfy integration by parts , i.e. , which in the discrete case is translated into a finite difference operator which satisfies and approaches in the continuum limit .the simplest difference operator and scalar product satisfying sbp are where the scalar product is diagonal : for .higher order operators satisfying sbp have been constructed by strand .additionally , when dealing with non - trivial domains containing inner boundaries , additional complexities must be addressed to attain sbp , see ref . . the finite operator is then used for the discretization of the spatial derivatives in the evolution equations , thus obtaining a semi - discrete system .[ [ impose - boundary - conditions - via - orthogonal - projections . 
] ] impose boundary conditions via orthogonal projections .this ensures the consistent treatment of the boundaries , guaranteeing the correct handling of modes propagating towards , from and tangential to the boundaries .an energy estimate can be obtained for the semi - discrete system .[ [ implement - an - appropriate - time - integration - algorithm . ] ] implement an appropriate time integration algorithm .the resulting semi - discrete system constitutes a large system of ode s which can be numerically solved by using a time integrator that satisfies an energy estimate .[ [ consider - adding - explicit - dissipation ] ] consider adding explicit dissipation it is well known that finite difference approximations do not adequately represent the highest frequency modes on a given grid , corresponding to the shortest possible wavelengths that can be represented on the grid . if the smallest spacing between points is , the shortest wavelength is with the corresponding frequency .these modes can , and often do , travel in the wrong direction .for this reason , it is sometimes useful to add explicit numerical dissipation to rid the simulation of these modes in a way that is consistent with the continuum equation at hand .as finer grids are used , the effect of this dissipation becomes smaller and acts only on increasingly higher frequencies .the dissipation operators are constructed such that discrete energy estimates , obtained using sbp , are not spoiled .explicit expressions for such dissipation operators are presented in ref . .to summarize , beginning with a well - posed initial - boundary value problem , we mimic the derivation of continuum energy estimates for the discrete problem using ( 1 ) spatial derivative operators satisfying summation by parts , ( 2 ) orthogonal projections to represent boundary conditions and ( 3 ) choosing an appropriate time integrator . as discussed above , a numerical implementation of any system of partial differential equationsnecessarily involves boundaries .unless periodic boundary conditions can be imposed , as is often the case for the evolution on compact domains without boundaries , one deals with an initial - boundary value problem , and thus has to face the question of how to specify boundary conditions . in theories that give rise to constraints , like general relativity , such conditions must be chosen carefully to ensure that the constraints propagate . 
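before turning to the question of boundary conditions for constrained systems, the recipe above can be made concrete on a toy problem. the fragment below is only an illustrative sketch (the advection equation, the grid and all names are chosen for the example, and the explicit dissipation operators mentioned above are omitted): it builds the simplest sbp pair quoted earlier, checks the summation by parts identity numerically, imposes the inflow condition by projecting the computed time derivative, and advances with a third order runge-kutta step, so that the discrete energy defined by the scalar product can be monitored.

```python
import numpy as np

def sbp_operator(n, h):
    """simplest second order sbp pair: centered differences in the interior,
    one-sided at the two ends, diagonal norm with weights 1/2 at the ends."""
    D = np.zeros((n, n))
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
    D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h
    D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h
    sigma = np.ones(n)
    sigma[0] = sigma[-1] = 0.5
    return D, sigma

def rhs(u, D, a):
    """advection u_t = -a u_x with a > 0, inflow at the left boundary."""
    du = -a * (D @ u)
    du[0] = 0.0     # projection: the inflow value stays at its boundary datum
    return du

def rk3_step(u, dt, D, a):
    # third order runge-kutta (shu-osher form)
    k1 = u + dt * rhs(u, D, a)
    k2 = 0.75 * u + 0.25 * (k1 + dt * rhs(k1, D, a))
    return u / 3.0 + 2.0 / 3.0 * (k2 + dt * rhs(k2, D, a))

if __name__ == "__main__":
    n, a = 201, 1.0
    x = np.linspace(0.0, 10.0, n)
    h = x[1] - x[0]
    D, sigma = sbp_operator(n, h)

    # summation by parts: <u, Dv> + <Du, v> equals the boundary term u*v|_0^L
    u, v = np.sin(x), np.cos(x)
    lhs = h * np.sum(sigma * (u * (D @ v) + (D @ u) * v))
    print("sbp identity error  :", abs(lhs - (u[-1] * v[-1] - u[0] * v[0])))

    # evolve a pulse and monitor the discrete energy h * sum(sigma * u**2)
    u = np.exp(-((x - 3.0) ** 2))
    dt = 0.5 * h
    for step in range(int(8.0 / dt)):
        u = rk3_step(u, dt, D, a)
    print("final discrete energy:", h * np.sum(sigma * u * u))
```

the discrete energy decreases as the pulse leaves through the outflow boundary, mirroring the continuum estimate up to discretization effects. with these ingredients in place we can return to the question of constraints.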
as a very simple illustration, consider the 1d wave equation on the half line. let us reduce it to first order form by introducing two new variables built with a negative constant: at the boundary, the system has two ingoing fields and one outgoing field. however, the ingoing fields cannot be given independently, as we see next. the constraint itself propagates and is an ingoing field with respect to the boundary. therefore, we have to impose a boundary condition on the constraint, which in turn implies a condition on the main variables. we can replace this with a condition that is intrinsic to the boundary by using the evolution equation ( [ eq : ut2 ] ) to eliminate the normal derivative; the result is an evolution equation that determines the relevant variable at the boundary and guarantees that the constraint is preserved throughout evolution. it can be complemented by the sommerfeld condition. this simple example gives just a glimpse of the different issues involved in prescribing constraint-preserving boundary conditions. the case of einstein's field equations is more complicated; we refer the interested reader to the references. a major difficulty is the fact that, in general, constraint-preserving boundary conditions do not have the form of maximally dissipative boundary conditions, and for this reason it has proven difficult to find well posed initial-boundary value formulations of einstein's equations that preserve the constraints. formulations of the einstein equations are often cast in symmetric hyperbolic form by adding constraints to the evolution equations multiplied by parameters or spacetime functions. the symmetric hyperbolicity condition partially restricts these parameters; however, considerable freedom in the formulation exists in choosing these free parameters (see, for instance, the references). analytically, when data are on the constraint surface, all allowed values for these parameters are equally valid. off the constraint surface, however, different values of these parameters may be regarded as representing `` different '' theories. it is no surprise then that numerical simulations are sensitive to the values chosen for these parameters, as numerical data rarely lie on the constraint surface. unfortunately, current simulations are proving to be extremely sensitive to this choice. relatively small variations in these parameters (within the allowed range for a symmetric hyperbolic formulation) produce run times in simulations that vary over several orders of magnitude, as measured by an asymptotic observer. furthermore, the parameters are not unique: values convenient for one physical problem might be inappropriate in another. recently, a method was introduced to dynamically choose these parameters, promoted to functions of time, in a way that naturally adapts to the physical problem under study. basically, one exploits the freedom in choosing these functions to control the growth rate of an energy norm for constraint violations. since this norm is exactly zero analytically, this provides a guide for choosing parameters that drive the solution towards one satisfying the constraints. this method provides a _ practical _ solution to the problem of choosing parameters, although it may not be the most elegant one. ideally, one would like to understand how the growth rate of the solution depends on these parameter values in order to choose them appropriately. this would require sharp growth estimates, however, which are still unavailable.
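since the inline formulas of the half-line example above are not reproduced in this extraction, the following sketch uses one possible first order reduction with the same structure: two ingoing fields and one outgoing field at the boundary, and a constraint that propagates into the domain with a speed set by the negative constant. it is an illustration only (the variable names, the particular reduction and the outer-boundary treatment are choices made for this sketch): the boundary update for the scalar field is obtained by eliminating the normal derivative with its own evolution equation, in the spirit described above, while the remaining ingoing characteristic is simply frozen.

```python
import numpy as np

# stand-in first order reduction of the half-line wave equation (x >= 0):
#   phi_t = F + s*phi_x ,  F_t = -s*F_x + (1 - s**2)*d_x ,  d_t = F_x + s*d_x ,
# with s a negative constant and the constraint C = phi_x - d, which obeys
# C_t = s*C_x and is therefore ingoing at x = 0.  this is an illustrative
# choice, not necessarily the reduction referred to in the text.
s = -0.5

def ddx(f, h):
    """second order centered derivative, one-sided at the two edges."""
    out = np.empty_like(f)
    out[1:-1] = (f[2:] - f[:-2]) / (2 * h)
    out[0] = (f[1] - f[0]) / h
    out[-1] = (f[-1] - f[-2]) / h
    return out

def rhs(state, h):
    phi, F, d = state
    phi_x, F_x, d_x = ddx(phi, h), ddx(F, h), ddx(d, h)
    phi_t = F + s * phi_x
    F_t = -s * F_x + (1 - s * s) * d_x
    d_t = F_x + s * d_x
    # x = 0: constraint-preserving condition for phi, obtained by trading the
    # normal derivative for d through the evolution equation (so phi_x = d)
    phi_t[0] = F[0] + s * d[0]
    # x = 0: freeze the remaining ingoing characteristic F + (s-1)*d and keep
    # the outgoing one, F + (1+s)*d, as computed from the interior
    a_out = F_t[0] + (1 + s) * d_t[0]
    d_t[0] = 0.5 * a_out
    F_t[0] = 0.5 * (1 - s) * a_out
    # x = L: only F + (1+s)*d is ingoing there; freeze it, keep the other one
    b_in = F_t[-1] + (s - 1) * d_t[-1]
    d_t[-1] = -0.5 * b_in
    F_t[-1] = 0.5 * (1 + s) * b_in
    return np.array([phi_t, F_t, d_t])

def rk3(state, dt, h):
    k1 = state + dt * rhs(state, h)
    k2 = 0.75 * state + 0.25 * (k1 + dt * rhs(k1, h))
    return state / 3.0 + 2.0 / 3.0 * (k2 + dt * rhs(k2, h))

if __name__ == "__main__":
    n = 401
    x = np.linspace(0.0, 20.0, n)
    h = x[1] - x[0]
    phi = np.exp(-((x - 4.0) ** 2))
    state = np.array([phi, np.zeros(n), ddx(phi, h)])   # constraint zero at t=0
    dt = 0.25 * h
    for step in range(int(6.0 / dt)):
        state = rk3(state, dt, h)
    C = ddx(state[0], h) - state[2]
    print("max constraint violation:", np.max(np.abs(C)))
```

the printed violation should stay at the level of the discretization error and converge away with resolution. for einstein's equations the analogous constraint-preserving conditions are considerably more involved, which is part of the motivation for the practical strategy of controlling constraint growth through the free parameters, summarized next.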
while further understanding is gained in this front, this practical remedy can be of much help in present simulations .we summarize here the essential ideas of this method .consider a system of hyperbolic equations with constraint terms , , written schematically as where , and are vector valued functions , and is a matrix ( generally not square ) that is a function of the spacetime ( represents a vector function of general constraint variables ) .the indices range over each element of the vector or matrix functions , while the indices label points on a discrete grid .we define an _ energy _ or _ norm _ of the discrete constraint variables as where , , are the number of points in each direction .the grid indices are suppressed to simplify the notation .the time derivative of the norm can be calculated using eq .( [ linearc ] ) and therefore can be known in closed form provided the matrix valued sums \times \nonumber \\ & & \left[\sum_c(a^cd_cu_b ) + b_b\right ] \label{split1 } \\ { \cal i}^{\mu}_{bc } & = & \sum_{ijk } \sum_{a } \frac{c_a}{n_xn_yn_z } \times \nonumber \\ & & \left[\frac{\partial c_a } { \partial u_b}+ \sum_k\frac{\partial c_a}{\partial d_k u_b}d_k \right]c_c \label{split2}\end{aligned}\ ] ] are computed during evolution . here is the discrete derivative approximation to .we then use the dependence of the energy growth on the free constraint - functions to achieve some desired behavior for the constraints , i.e. , solving eq .( [ split ] ) for . for example , if we choose does not denote an index , as before .similarly , the subscript in indicates that the quantity is related to through eq .( [ a ] ) . ] any violation of the constraints will decay exponentially as discussed in ref . , one good option among many others seems to be choosing a tolerance value , , for the norm of the constraints that is close to the initial discrete value , and solving for such that the constraints decay to this tolerance value after a given relaxation time .this can be done by adopting an such that after some time the constraints have the value .replacing by in equation ( [ decay ] ) and solving for gives if one then solves for , with given by eq .( [ a ] ) , the value of the norm should be , independent of its initial value . therefore , eq . ( [ eq_for_mu ] ) serves as a guide to formulate a practical method to choose free parameters in the equations with which the numerical solution behaves well with respect to the satisfaction of the constraints . naturally , if one deals , as it is often the case , with more than one free parameter , eq .( [ eq_for_mu ] ) must be augmented with other conditions to yield a unique solution .this extra freedom is actually very useful in preventing large time - variations in the parameters that are sometimes needed in order to keep the constraints under control .these large variations do not represent a fundamental problem but a practical one , due to the small time stepping that they require in order to keep errors due to time integration reasonably small .one way to prevent this is by using this extra freedom to pick up the point in parameter space that not only gives the desired constraint growth , but also minimizes the change of parameters between two consecutive timesteps . 
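the bookkeeping involved can be summarized in a short sketch. the fragment below is a schematic reconstruction rather than the authors' implementation; the decay-rate formula, the function names and the sample numbers are assumptions of the sketch. given the homogeneous part of the computed energy growth and its sensitivities to the free constraint-functions, it selects the functions so that the constraint energy is driven towards the chosen tolerance over the relaxation time, while changing them as little as possible between two consecutive evaluations.

```python
import numpy as np

def update_constraint_functions(mu_prev, I_hom, I_mu, norm, tol, tau):
    """one parameter update in the spirit of the method described above (a
    schematic sketch, not the authors' code). the energy growth is assumed to
    split as dN/dt = I_hom + I_mu . mu; mu is chosen so that N is driven to
    'tol' over the time 'tau', with the smallest possible change from mu_prev.
    assumes norm > 0 and tol > 0."""
    a = np.log(norm / tol) / tau              # decay rate so that N(t + tau) ~ tol
    target = -a * norm                        # desired value of dN/dt
    mu_prev = np.asarray(mu_prev, dtype=float)
    I_mu = np.asarray(I_mu, dtype=float)
    residual = target - I_hom - I_mu @ mu_prev
    if I_mu @ I_mu < 1e-14:                   # no handle on the growth rate
        return mu_prev
    # least-change solution of the single linear condition on mu
    return mu_prev + I_mu * residual / (I_mu @ I_mu)

if __name__ == "__main__":
    # made-up numbers, only to exercise the routine (two free functions,
    # as in the example discussed next)
    mu = np.array([1.0, 0.5])
    mu = update_constraint_functions(mu, I_hom=2.0e-6, I_mu=[4.0e-5, -1.5e-5],
                                     norm=1.0e-6, tol=1.0e-7, tau=5.0)
    print("updated constraint functions:", mu)
```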
rather than including the full details of the particular way we have implemented the method, we describe here a simple example to illustrate its application. assume, for instance, that within a particular formulation only two free functions are employed; eq. ( [ eq_for_mu ] ) then formally evaluates to a single condition relating the two functions. now, we exploit the freedom in the free functions to adjust the rate of change of the energy once the relevant coefficients are known. in practice, these are easily obtained during evolution. once they are known, eq. ( [ eq_for_mu_ex ] ), coupled to the requirement that the functions vary as little as possible from one evaluation to another, results in a straightforward strategy to evaluate preferred values of the free parameters. this is done in a single resolution `` test '' run and, through interpolation in time, yields continuum, _ a priori _ defined parameters which keep the constraints under control for the given problem. depending on the formulation of the equations, the free parameters might have to satisfy some conditions in order for symmetric hyperbolicity to hold, which can restrict the range of values these parameters can take. nevertheless, even within a restricted window, the technique allows one to adopt the most convenient values of these parameters for the problem at hand. we now present applications of the techniques previously discussed. the goal is to illustrate how well-resolved simulations can indeed serve as a powerful tool to understand particular problems. to this end we have chosen a problem found in higher dimensional general relativity. a second application is the simulation of single black hole spacetimes, where the issue of the _ a priori _ lack of a preferred formulation is illustrated. as a first application we concentrate on the study of _ bubble spacetimes _ and elucidate the dynamical behavior of configurations with both positive and negative masses and their possible connection to naked singularities. bubble spacetimes have been studied extensively within five-dimensional kaluza-klein theory. these are five-dimensional spacetimes in which the circumference of the `` extra '' dimension shrinks to zero on some compact surface referred to as the `` bubble ''. these bubbles were initially studied for their relevance to the quantum instability of flat spacetime, as bubbles can be obtained from it via semi-classical tunneling. they were later extended to include data corresponding to negative energy configurations (at a moment of time symmetry). as mentioned, among the reasons for considering negative energy solutions is that naked singularities are associated with them; therefore, these solutions are attractive tests of the cosmic censorship conjecture. additionally, bubble spacetimes can also be obtained by a double wick rotation of black strings, whose stability properties (or lack thereof) have been the subject of intense scrutiny in recent years. these features make bubble spacetimes both interesting and relevant for gravity beyond four dimensions, and thus attention has been devoted to fully understanding their behavior. as we will see, even though the `` analytical '' study of the problem is greatly simplified by symmetry assumptions, many lingering questions remain, and numerical simulations provide a viable way to shed light on them. furthermore, these simulations were also key to ` digging out ' a few unexpected features of the solution.
in order to obtain a complete description of the dynamical behavior of these spacetimes , a numerical code , implementing einstein equations in 5d settings , and capable of handling the possibly strong curvature associatedneed be constructed .fortunately , the assumption of a symmetry simplifies the treatment of the problem , which can be reduced to a manifold .this , in turn , renders the problem quite tractable by the currently available computational resources , though as we will see , considerable care must be placed at both analytical and numerical levels for an accurate treatment of the problem .we consider a generalization of the time symmetric family of initial data presented in .we start with a spacetime endowed with the metric where is the standard metric on the unit two - sphere and is a smooth function that has a regular root at some , is everywhere positive for and converges to one as . the coordinate parameterizes the extra dimension which has the period .the resulting spacetime constitutes a regular manifold with the topology .the bubble is located where the circumference of the extra dimension shrinks to zero , that is , at . additionally , we consider the presence of an electromagnetic field of the form where is a smooth function of that converges to zero as .the symmetries of the problem would also allow for a non - trivial electric component of the field .however , it is not difficult to show that maxwell s equations imply that such a field necessarily diverges at the location of the bubble . for this reason , in the following , we only consider the case of vanishing electric field . in this article , we consider initial data with where is an arbitrary constant and an integer greater than one .this field generalizes the ansatz considered in , where only the case was discussed , and allows for different interesting initial configurations . in the time - symmetric case ,initial data satisfying the hamiltonian constraint obeys with and free integration constants and . here , the parameter is related to the adm mass via .the fact that the bubble be located at requires that , where , , .we also require and avoid the conical singularity at by fixing the period of to .it can be shown that the initial acceleration of the bubble area with respect to proper time is given by .\label{eq : ddota}\ ] ] for , as discussed in ref . , this implies that negative mass bubbles start out expanding ( the initial velocity of the area is zero since we only consider time - symmetric initial data ) , while for large enough positive mass the bubble starts out collapsing . in the vacuum case, our numerical simulations suggest that initially collapsing bubbles undergo complete collapse and form a black string . in the non - vacuum casehowever , the strength of the electromagnetic field can modify this behavior completely .we will see that for small enough the bubble continues to collapse whereas when is large the bubble area bounces back and expands .interesting behavior is obtained at the critical value for which divides the phase space between collapsing and expanding solutions . for is possible to obtain initial configurations with negative mass and negative initial acceleration . this can potentially give rise to a collapsing bubble of negative energy , and thus to a naked singularity .however , our numerical results suggest that cosmic censorship is valid : the bubble bounces back and starts out expanding . 
in order to study the time evolution of the initial data sets given on a slices of the metric ( [ eq : bubblemetric ] ) and the electromagnetic field ( [ eq : bubbleemfield ] ), it is convenient to introduce a new radial coordinate which facilitates the specification of regularity conditions at the bubble location .this new coordinate is defined by the metric ( [ eq : bubblemetric ] ) now reads with , , .since near , and converges to one in the asymptotic region , and are regular functions .an explicit example is the initial data corresponding to the zero mass witten bubble where and thus . when studying the time evolution of the initial data sets discussed above , we consider the metric ( [ eq : metricgeneral ] ) where , , and are functions of and .as we will see , the coordinate is well suited for imposing regularity conditions at the bubble location since represent polar coordinates near the bubble , being the center , and assuming the role of the angular coordinate . in order to avoid a conical singularity, must have the period . for this to be constant we need to impose the boundary condition at .similarly , the electromagnetic field ( [ eq : bubbleemfield ] ) is written in the form where the functions and depend on and and satisfy and at the initial time .we choose the following gauge condition for the lapse with a parameter which , in our simulations , is either zero or one .for the resulting gauge condition is strongly related to the densitized lapse condition often encountered in hyperbolic formulations of einstein s equations : indeed , the square root of the determinant of the four metric belonging to eq .( [ eq : metricgeneral ] ) is given by , so eq . ( [ eq : gaugecond ] ) sets equal to the square root of the determinant of the four metric but divides the result by the factor which is singular at the bubble , at the poles and in the asymptotic region . for , the condition ( [ eq : gaugecond ] )implies that the two - metric is in the conformal flat gauge .as we will see , the principal part of the evolution equations is governed by the dalembertian with respect to this metric . 
since the two - dimensional dalembertian operator is conformally covariant , the resulting equations are semi - linear in that case .in particular , this implies that the characteristic speeds do not depend on the solution that is being evolved .the field equations resulting from the five - dimensional einstein - maxwell equations split into a set of evolution equations and a set of constraints .the evolution equations can be written as ' - 3(\lambda-1)(c'+g')^2 e^{2\lambda b } - ( \lambda+1 ) v \nonumber\\ & + & 2\lambda \dot{a}\dot{b } -\lambda(\lambda+1)\dot{b}^2 - 3(\lambda+1)\dot{c}^2 + g\left [ ( 1-\lambda ) \pi_\gamma^2 - ( 1+\lambda ) e^{2\lambda b } d_\gamma^2 \right ] , \label{eq : amax}\\ % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % \ddot{b } & = & e^{(\lambda-1)b - ( \lambda+2 ) f } \left [ b ' e^{(\lambda+1)b + ( \lambda+2 ) f } \right ] ' + \frac{3r_+^2 + 2r^2}{(r_+^2 + r^2)^2}\ , e^{2\lambda b } \nonumber\\ & + & ( \lambda-1)\dot{b}^2 - 2 v , \label{eq : bmax}\\ % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % \ddot{c } & = & e^{(\lambda-1)b - f } \left [ ( c'+g ' ) e^{(\lambda+1)b + f } \right ] ' - v + ( \lambda-1)\dot{b}\dot{c } , \nonumber\\ & + & \frac{2g}{3}\left [ \pi_\gamma^2 - e^{2\lambda b } d_\gamma^2 \right ] , \label{eq : cmax}\\ % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % \dot{\pi}_\gamma & = & e^{\lambda b - 2(c+g)}\left [ d_\gamma e^{\lambda b + 2(c+g ) } \right ] ' + ( \lambda\dot{b } - 2\dot{c } ) \pi_\gamma\ ; ,\\ % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % \dot{d}_\gamma & = & \frac{\sqrt{r_+^2+r^2}}{r } e^{-(b - 2c)}\left [ \pi_\gamma \frac{r}{\sqrt{r_+^2 + r^2 } } e^{b-2c } \right ] ' - ( \dot{b } - 2\dot{c } ) d_\gamma\ ; , \end{aligned}\ ] ] where we have set , , and , and . 
here , and in the following , a dot and a prime denote differentiation with respect to and , respectively .the evolution equations constitute a hyperbolic system on the domain .the constraints are the hamiltonian and the component of the momentum constraint , given by , , where '\nonumber\\ & + & \left [ \frac{3r_+^2 + 2r^2}{(r_+^2 + r^2)^2 } - ( b ' + f')(a ' + 2 g ' ) + 3(c'+g')^2 \right ] e^{2\lambda b } \nonumber\\ & - & v - ( \dot{a } - \lambda\dot{b})\dot{b } + 3\dot{c}^2 + g\left [ \pi_\gamma^2 + e^{2\lambda b } d_\gamma^2 \right ] , \label{eq : defc}\\ % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % { \cal c}_r & = & e^{a - 2c}\left [ e^{-(a- 2c ) } \dot{b } \right ] ' - ( b ' + f')\left [ \dot{a } - ( \lambda+1)\dot{b } \right ] + 2(c'+g')(3\dot{c } - \dot{b } ) \nonumber\\ & + & 2g\pi_\gamma d_\gamma\ , .\label{eq : defcr}\end{aligned}\ ] ] [ [ regularity - conditions ] ] regularity conditions + + + + + + + + + + + + + + + + + + + + + the evolution equations contain terms proportional to which diverge like near , and therefore , regularity conditions have to be imposed at .this is achieved by demanding the boundary conditions assuming that the fields are smooth enough near , it then follows that the right - hand side of the evolution equations is bounded for .next , as discussed above , the avoidance of a conical singularity at requires that must be constant at .we show that this condition is a consequence of the evolution and constraint equations , and of the regularity conditions ( [ eq : regcond ] ) . using the evolution equations in the limit and taking into account the conditions ( [ eq : regcond ] ) , we find \right\ } \right|_{r=0 } = \left .-(\lambda+1 ) e^{(1-\lambda)b } { \cal c } \right|_{r=0 } .\ ] ] this means that if the hamiltonian constraint is satisfied at ( or in the case that even if the constraints are violated ) , the condition will hold provided that the initial data satisfies .next , we analyze the propagation of the constraint variables and and show that the regularity conditions ( [ eq : regcond ] ) and the evolution equations imply that the constraints are satisfied at each time provided they are satisfied initially . [ [ propagation - of - the - constraints ] ] propagation of the constraints + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + first , we notice that the vanishing of the momentum constraint requires that because of the factor which diverges like near in the definition of .this is precisely the condition discussed above .however , for this condition to hold , we first have to show that the momentum constraint actually vanishes . in order to see this, we regularize the constraint variables and define , . now the regularity conditions ( [ eq : regcond ] ) imply that is regular and that vanishes at . as a consequence of the evolution equations and bianchi s identities , the constraint variables obey the following evolution system + ( 3\lambda-1)\dot{b}\tilde{\cal c } , \label{eq : evconstr1}\\ \partial_t\tilde{\cal c}_r & = & e^{-(\lambda+1)b - \lambda f}\partial_r\left [ e^{(\lambda+1)b + \lambda f}\tilde{\cal c } \right ] + ( \lambda-1)\dot{b}\tilde{\cal c}_r \label{eq : evconstr2}\end{aligned}\ ] ] which is regular at . 
defining the energy norm taking a time derivative and using the equations ( [ eq : evconstr1]),([eq : evconstr2 ] ) we obtain the boundary term vanishes because of the regularity conditions at and under the assumptions that all fields fall off sufficiently fast as . if is smooth and bounded , we can estimate the integral on the right - hand side by a constant times , and it follows that .this shows that if the constraints are satisfied initially , they are also satisfied for all for which a smooth solution exists . in the gauge where we even obtain the result that the norm of the constraints can not grow in time . to summarize ,the boundary conditions ( [ eq : regcond ] ) imply that the constraints , and are preserved throughout evolution .[ [ outer - boundary - conditions ] ] outer boundary conditions + + + + + + + + + + + + + + + + + + + + + + + + + for numerical computations , our domain extends from to for some .now we have to replace the estimate ( [ eq : energyestconstr ] ) by the estimate and it only follows that the constraints are zero if we control the boundary term at . for this reason , we impose the momentum constraint , , at .this condition results in an evolution equation for at the outer boundary .we combine this condition with the sommerfeld - like conditions at , next , we discuss the numerical implementation of the above constrained evolution system . in order to apply the discretization techniques discussed in sect .[ sect : annumtools ] we first recast the evolution equations into first order symmetric hyperbolic form by introducing the new variables , , and , , . the resulting first order system is then discretized by the method of lines .let us first discuss the spatial discretization which requires special care at because of the coefficients proportional to that appear in the evolution equations . to this end , consider the following family of toy models where is the radial coordinate , and .we impose the regularity condition at , which , for sufficiently smooth fields , implies that at , and assume that the fields vanish for sufficiently large . the toy model ( [ eq : toy1][eq : toy2 ] )corresponds to the -dimensional wave equation for spherically symmetric solutions .the principal part of our evolution system has precisely this form near , where is given by , , , for the evolution equations for , , and , respectively .the system ( [ eq : toy1][eq : toy2 ] ) admits the conserved energy a second order accurate and stable numerical discretization of the system ( [ eq : toy1][eq : toy2 ] ) can be obtained as follows : we assume a uniform grid , , approximate the fields and by grid functions , , and consider the semi - discrete system where for a grid function , is the second order accurate centered differencing operator .it is not difficult to check that this scheme preserves the discrete energy which proves the numerical stability of the semi - discrete system .finally , we use a third order runge - kutta algorithm in order to perform the integration in time . 
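since the explicit formulas of the toy model and of its discretization are not available in this extraction, the fragment below gives an illustrative method-of-lines version written in one common first order form, u_t = r^(-p) d_r(r^p v), v_t = d_r u, with a regular origin. the value of p, the grid, the pulse and the simple centered scheme are choices made for the sketch and are not necessarily the energy-preserving discretization of the text, and no artificial dissipation is added; the drift of a discrete version of the conserved energy can nevertheless be monitored and should converge away with resolution.

```python
import numpy as np

p = 2   # illustrative choice for the exponent of the toy model family

def dr(f, h):
    """second order centered derivative, one-sided at the two edges."""
    out = np.empty_like(f)
    out[1:-1] = (f[2:] - f[:-2]) / (2 * h)
    out[0] = (f[1] - f[0]) / h
    out[-1] = (f[-1] - f[-2]) / h
    return out

def rhs(state, r, h):
    u, v = state
    u_t = np.empty_like(u)
    v_t = dr(u, h)
    # interior: u_t = r^(-p) d_r(r^p v) = d_r v + p v / r
    u_t[1:] = dr(v, h)[1:] + p * v[1:] / r[1:]
    # regular origin: v(0) = 0 (v is odd in r) and u_t(0) = (p + 1) d_r v(0),
    # while v_t(0) = d_r u(0) = 0 because u is even in r
    u_t[0] = (p + 1) * (v[1] - v[0]) / h
    v_t[0] = 0.0
    u_t[-1] = v_t[-1] = 0.0   # crude outer boundary; the pulse never reaches it
    return np.array([u_t, v_t])

def rk3(state, dt, r, h):
    # third order runge-kutta (shu-osher form)
    k1 = state + dt * rhs(state, r, h)
    k2 = 0.75 * state + 0.25 * (k1 + dt * rhs(k1, r, h))
    return state / 3.0 + 2.0 / 3.0 * (k2 + dt * rhs(k2, r, h))

def energy(state, r, h):
    """trapezoid approximation of the conserved integral of (u^2 + v^2) r^p."""
    u, v = state
    w = np.ones_like(r)
    w[0] = w[-1] = 0.5
    return h * np.sum(w * r**p * (u**2 + v**2))

if __name__ == "__main__":
    n = 801
    r = np.linspace(0.0, 40.0, n)
    h = r[1] - r[0]
    state = np.array([np.exp(-((r - 10.0) ** 2)), np.zeros(n)])
    e0 = energy(state, r, h)
    dt = 0.25 * h
    for step in range(int(20.0 / dt)):
        state = rk3(state, dt, r, h)
    print("relative energy drift:", abs(energy(state, r, h) - e0) / e0)
```

in the actual evolutions the same third order runge-kutta integrator is used.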
by a theorem of levermore , this guarantees the numerical stability of the fully discrete system for small enough courant factor .we apply these techniques for the discretization of our coupled system .the outer boundary conditions are implemented by a projection method .of course , the resulting system is much more complicated than the simple toy model problem presented above , and we have no _ a priori _ proof of numerical stability .nevertheless , we find the above analysis useful as a guide for constructing the discretization .our resulting code is tested by running several convergence tests , and its accuracy is tested by monitoring the constraint variables and and the quantity . herewe discuss the results for the numerical evolution of the initial data defined by eqs .( [ eq : bubblemetric][eq : idu ] ) .we start by reviewing the evolution of the initially expanding bubbles and the initially collapsing negative mass bubbles and then focus on the initially collapsing positive mass bubbles .[ [ brill - horowitz - initially - expanding - case ] ] brill - horowitz initially expanding case + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the brill - horowitz initial data ( ) in the case of vanishing electromagnetic field is evolved . the bubble area as a function of the proper time at the bubble is shown in fig .[ fig : area ] for different values of the mass parameter .as expected , the lower the mass of the initial configuration , the faster the expansion . empirically , and for the parameter ranges used in our runs , we found that at late times the expansion rate obeys where a dot denotes the derivative with respect to proper time .in particular this approximation is valid for the bubble solution exhibited by witten which describes the time evolution in the case . + .the figure shows four illustrative examples of bubbles whose initial acceleration is positive .as it is evident , the expansion of the bubble continues and the difference is the rate of the exponential expansion .the relative error in these curves , estimated from the appropriate richardson extrapolated solution in the limit , is well below 0.001%.,width=302 ] [ [ collapsing - negative - mass - case ] ] collapsing negative mass case + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we here restrict to cases with negative masses that start out collapsing .interestingly enough we find that even when starting with large initial negative accelerations , which in turn make the bubble shrink in size to very tiny values , it bounces back without ever collapsing into a naked singularity . as an example , fig .[ fig : notnakediii ] shows the bubble s area versus time for different values of and .the initially collapsing bubbles decrease in size in a noticeable way but this trend is halted and the bubbles bounce back and expand . although we have not found a simple law as that in eq .( [ simplerelation ] ) , clearly the bubbles expand exponentially fast. therefore , it seems not to be possible to `` destroy '' the bubble and create a naked singularity .this situation is somewhat similar to the scenarios where one tries to `` destroy '' an extremal reissner - nordstrm black hole by attempting to drop into it a test particle with high charge to mass ratio .there , the electrostatic repulsion prevents the particle from entering the hole .each ) whose initial acceleration is negative . 
as it is evident , the collapse of the bubble is halted and the trend is completely reversed .the error in these curves is estimated to be well below 0.001% ., width=302 ] [ [ brill - horowitz - initially - collapsing - case ] ] brill - horowitz initially collapsing case + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + next , we analyze the brill - horowitz initial data for the case in which the bubble is initially collapsing ( notice that for this implies that the adm mass is positive ) .while our numerical simulations reveal that in the absence of the gauge field such a bubble continues to collapse , we also show that when the gauge field is strong enough , the bubble shrinks at a rate which decreases with time and then bounces back .obviously , if the collapse trend were not halted , a singularity should form at the origin .since the adm mass is positive , one expects this singularity to be hidden behind an event horizon , and one should obtain a black string .in fact , for the solutions which are initially collapsing and which have vanishing gauge field , we observe the formation of an apparent horizon .furthermore , we compute the curvature invariant quantity at the apparent horizon ( as discussed in ) , where is the kretschmann invariant and the areal radius of the horizon . for a neutral black string ,this invariant is .figure [ fig : blackstring ] shows how this value is attained after the apparent horizon forms for representative vacuum cases ( with and ) this , together with the formation of apparent horizons , provides strong evidence for the formation of a black string .+ vs. asymptotic time for ( solid line ) and ( dashed line ) .the first non - zero values of the lines mark the formation of the apparent horizon . after some transient period ,both lines approach the value of suggesting a black string has formed.,width=302 ] as mentioned , for strong enough gauge fields , the previously described dynamics is severely affected .figure [ fig : criticalonset ] ( left panel ) shows the bubble area vs. proper time for different values of . for large values he bubble`` bounces '' back and expands while for small ones the bubble collapses .there is a natural transition region separating these two possibilities .tuning the value of one can reveal an associated critical phenomena , the ` critical solution ' being a member of the family of static solutions given by where and .the parameters and are related to the period of the coordinate and to the adm mass via and . since the quantities and are conserved , the member of the family of static solutions the dynamical solution approaches to can be determined _ a priori _ from the initial data .figure [ fig : criticalonset ] ( right panel ) displays the time defined as the length of asymptotic time during which the bubble s area stays within of the minimum value attained when the bubbles bounces back .this is a measure of how long the solution stays close to the static solution as a function of the parameter .empirically , we find the law with a parameter that does not seem to depend on the family of initial data chosen .this universality property is reinforced by the linear stability analysis of the critical solutions ( [ eq : critsol1],[eq : critsol2 ] ) performed in ref . where we prove that each solution has precisely one unstable linear mode growing like with a universal lyapunov exponent of .this explains the law ( [ eq : critlaw ] ) with . and . 
as one of the applications that we have chosen to illustrate the use of the techniques previously discussed we consider here the evolution of single non - spinning black holes . even though the data provided correspond to spherically symmetric , vacuum scenarios , as we will see , obtaining a long term stable implementation is not a trivial task . for additional information , and a more general treatment , we refer the reader to ref . . we adopt the symmetric hyperbolic family of formulations introduced in . this is a first order formulation whose evolved variables are given by , with the induced metric on the constant - time surfaces , the extrinsic curvature , the first derivatives of the metric , the lapse , and the normalized first derivatives of the lapse . the einstein equations written in this formulation are subject to the physical constraints , the hamiltonian and momentum constraints , as well as non - physical constraints , which arise from the variable definitions . the constraints are added to the field equations and the spacetime _ constraint - functions _ are introduced as multiplicative factors to the constraints . while these quantities are sometimes introduced as parameters , we extend them to time - dependent functions . for simplicity in this work , we set . requiring that the evolution system is symmetric hyperbolic imposes algebraic conditions on these factors , and they are not all independent . if we require that all the characteristic speeds are `` physical '' ( i.e. either normal to the spatial hypersurfaces or along the light cone ) , then we obtain two symmetric hyperbolic families : one family has a single free parameter , and another symmetric system has two varying constraint - functions . initial data for a schwarzschild black hole are given in in - going eddington - finkelstein coordinates . the shift will be considered an _ a priori _ given vector field while the lapse is evolved to correspond to the time harmonic gauge with a given source function . this gauge source function is taken from the exact solution , such that in the high - resolution limit . black hole excision is usually based on the assumption that an inner boundary ( ib ) can be placed on the domain such that information from this boundary does not enter the computational domain . this requirement places strenuous demands on cubical excision for a schwarzschild black hole in kerr - schild , painlevé - gullstrand or the martel - poisson coordinates : the cube must be inside in each direction . this forces one to excise very close to the singularity , where gradients in the solution can become very large , requiring very high resolution near the excision boundary to adequately resolve the solution . this requirement follows directly from the physical properties of the schwarzschild solution in these coordinates , and is independent of the particular formulation of the einstein equations .
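the geometric origin of this restriction can be checked directly . the sketch below ( python / numpy ) uses the coordinate light cone of the schwarzschild solution in kerr - schild / in - going eddington - finkelstein form as a proxy for the characteristic cone ( for the `` physical '' speed choices mentioned above the non - trivial speeds lie on the light cone , and the speed normal to the slices is never outgoing at these faces ) ; it scans a face of a cubical excision region of half - width L and reports the largest coordinate speed with which anything can leave the cube through that face . the specific half - width it singles out is what this particular check gives , not a number quoted from the original text .

import numpy as np

M = 1.0  # black hole mass

def max_outgoing_speed(x, e):
    # largest coordinate speed, along the unit direction e, allowed by the coordinate
    # light cone of schwarzschild in kerr-schild (in-going eddington-finkelstein) form
    r = np.linalg.norm(x)
    H = M / r
    c = np.dot(e, x) / r
    return (-2.0 * H * c + np.sqrt(1.0 + 2.0 * H * (1.0 - c * c))) / (1.0 + 2.0 * H)

def face_is_pure_outflow(L, samples=81):
    # check the face x = +L of the excised cube; the other faces follow by symmetry
    s = np.linspace(-L, L, samples)
    e = np.array([1.0, 0.0, 0.0])
    worst = max(max_outgoing_speed(np.array([L, u, v]), e) for u in s for v in s)
    return worst <= 0.0, worst

for L in [0.30, 0.38, 2.0 / (3.0 * np.sqrt(3.0)), 0.40, 0.60, 1.00]:
    ok, worst = face_is_pure_outflow(L)
    print(f"half-width L = {L:.4f} M : pure outflow = {ok} (max speed on face = {worst:+.4f})")

with this check , the whole boundary of the cube stops being purely outflow once the half - width exceeds 2m/(3*sqrt(3)) , roughly 0.385 m , i.e. once the cube corners leave the region in which the coordinate light cone points entirely towards the excised side ; this is the sense in which one is forced to excise very close to the singularity .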
with our current uniform cartesian code , however , we do not have enough resolution to adequately resolve the schwarzschild solution near the singularity . thus , we place the inner boundary inside the event horizon , but outside the region where all characteristics are out - going . the difference stencils are one - sided at the inner boundary , and no boundary conditions are explicitly applied . testing various locations , we find that placing the inner boundary at gives reasonable results for the resolutions we are able to use . we are working to resolve this inconsistency in our code by using coordinate systems that conform to the horizon's geometry . we performed numerical experiments with the outer boundary at three different locations , , and . boundary conditions for the outer boundary are applied using the orthogonal projection technique referenced above , by `` freezing '' the incoming characteristic modes . that is , their time derivative is set to zero through an orthogonal projection . this makes use of the fact that one knows that the exact continuum solution is actually stationary . while this would not be useful in the general case , as we shall see , even in such a simplified case the constraint manifold seems to be unstable . we are currently working on extending the boundary treatment to allow for constraint - preserving boundary conditions and studying the well posedness of the associated initial - boundary value problem . having set up consistent initial and boundary data , in a second order accurate implementation using the techniques mentioned in section ii , we now concentrate on simulating a stationary black hole spacetime . as we will see below , even in this simple system , one encounters difficulties in evolving the system for long times . in particular , as has been illustrated on several occasions , the length of time during which a reliable numerical solution is obtained varies considerably depending on the values of the free parameters in the formulation . these parameters play no role on the constraint surface ; however , off this constraint surface , they have a significant impact . hence , at the numerical level , where generic data is only approximately on this surface , it is necessary to adopt preferred values of these parameters . these , in turn , will depend not only on the physical situation under study but also on the details of the particular implementation ( order of convergence , etc . ) . as we argued in section ii , the constraint minimization method provides a practical way to adopt these parameters . we next illustrate this in numerical simulations of schwarzschild spacetime . we concentrate here on black hole simulations performed using the symmetric hyperbolic formulation with two constraint functions . the single function family and its disadvantages for constraint minimization are discussed in ref . . as a first attempt to numerically integrate the einstein equations , one could simply fix the parameters and to constant values . lacking knowledge of preferred values for these parameters , we might simply set and . evolutions of the schwarzschild spacetime for these parameter choices , however , show that the solution is quickly corrupted and diverges . figure [ convergence1 ] shows the error in the numerical solution with respect to the exact solution for three resolutions . while the code converges , the error at a single resolution grows without bound as a function of time . [ figure [ convergence1 ] caption : error with respect to the exact solution for three resolutions ; inner and outer boundaries at and , respectively . ]
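the convergence statement behind figure [ convergence1 ] can be quantified by the usual convergence order computed from error norms at successive resolutions ; a minimal sketch ( python / numpy , with placeholder error histories rather than data from the runs ) :

import numpy as np

def convergence_order(err_coarse, err_fine):
    # pointwise-in-time order from errors at resolutions h and h/2
    return np.log2(np.asarray(err_coarse) / np.asarray(err_fine))

t = np.linspace(0.0, 10.0, 101)
err_h  = 1.0e-2 * (1.0 + 0.5 * t)      # placeholder ||u_h   - u_exact||
err_h2 = 2.6e-3 * (1.0 + 0.5 * t)      # placeholder ||u_h/2 - u_exact||
err_h4 = 6.8e-4 * (1.0 + 0.5 * t)      # placeholder ||u_h/4 - u_exact||

print("order (h   -> h/2):", convergence_order(err_h,  err_h2).mean())
print("order (h/2 -> h/4):", convergence_order(err_h2, err_h4).mean())
# values hovering around 2 would be consistent with the second order accurate implementation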
we now apply the constraint minimization technique to evolutions of a schwarzschild black hole . the constraint functions and will now vary in time , and both will be used to control the constraint growth . with two functions we can attempt to minimize changes in the functions themselves . this is advantageous because smoothly varying functions seem to yield better numerical results . thus , and are chosen at time step to minimize the quantity $\left[ \eta(n+1) - \eta(n) \right]^2 + \left[ \gamma(n+1) - \gamma(n) \right]^2$ ( eq . ( [ triangle ] ) ) . is nonlinear in but linear in , allowing one to solve for such that , where , as in section iii , is given by eq . ( [ a ] ) . is chosen from some arbitrary , large interval . the corresponding given by eq . ( [ eta_tent ] ) is computed , and the pair that minimizes the quantity defined in eq . ( [ triangle ] ) is chosen . and may be freely chosen , except that , giving two `` branches '' : always larger than -1/2 , and always smaller than -1/2 . we have only explored the branch using the seed values , . in order to keep the variation of the parameters between two consecutive timesteps reasonably small , we have needed to set the tolerance value for the constraints energy roughly one order of magnitude larger than the initial discretization error , and to either or . this means that the constraints energy , though on a longer timescale , will still grow . [ figure [ bi_dyn_bound5ma ] caption : the figure compares the resulting energy for the constraints with that of the previous figure ( shown at late times only , since because of the setup the runs are identical up to ) . ] the outer boundary is first placed at . figure [ bi_dyn_bound5 m ] shows the energy of the constraints and the error with respect to the exact solution . the corresponding constraint functions are shown in figure [ bi_dyn_bound5mcf ] . the large variation in the functions near the end of the run appears to be a consequence of other growing errors . in figure [ bi_dyn_bound5ma ] the minimization is stopped at , and the functions are fixed to , for the remainder of the run . the solution diverges at approximately the same time . another measure of the error in the solution is the mass of the apparent horizon , as shown in figure [ ah ] . after some time , the mass approximately settles down to a value that is around , which corresponds to an error of the order of one part in one thousand . for the higher resolution , the apparent horizon mass at late times becomes indistinguishable from , given the expected level of discretization errors .
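the selection logic just described is simple to implement once the formulation - specific pieces are available . in the sketch below ( python ) , eta_from_gamma stands in for the step the text describes as a linear solve : given a trial value of one constraint - function it returns the value of the other that brings the predicted growth of the constraints energy to the chosen tolerance , and it is assumed to be supplied by the evolution code ; everything else simply scans an interval and keeps the pair closest to the previous one , as in eq . ( [ triangle ] ) .

import numpy as np

def next_constraint_functions(gamma_prev, eta_prev, eta_from_gamma,
                              gamma_interval=(-10.0, 10.0), n_scan=2001):
    # pick (gamma, eta) for the next time step by minimizing
    # (eta - eta_prev)**2 + (gamma - gamma_prev)**2 over the scanned candidates
    best, best_cost = None, np.inf
    for gamma in np.linspace(*gamma_interval, n_scan):
        eta = eta_from_gamma(gamma)
        if eta is None:                      # no admissible eta for this gamma
            continue
        cost = (eta - eta_prev) ** 2 + (gamma - gamma_prev) ** 2
        if cost < best_cost:
            best, best_cost = (gamma, eta), cost
    return best

# purely illustrative stand-in for the formulation-specific linear solve
demo_solve = lambda gamma: -0.45 + 0.01 * np.tanh(gamma)
print(next_constraint_functions(0.0, -0.44, demo_solve))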
the outer boundary is now placed at . figure [ bi_dyn_bound15 m ] shows results for data equivalent to those discussed for figure [ bi_dyn_bound5 m ] . the initial discretization value for the energy is , and , was used . the minimization of the constraint - functions is stopped at , at which point the constraint - functions are approximately constant and equal to . figure [ bi_dyn_bound15 m ] shows that the dependence of the lifetime on the location of the outer boundaries is not monotonic , as for this case the code runs for , roughly , , while with boundaries at and it ran for around , and , respectively . a detailed analysis of such dependence would be computationally expensive and beyond the scope of this work , and may even depend on the details of the constraint minimization , such as the values for and . however , comparing figure 5 with figures 6 - 9 , we see that the constraint minimization considerably improves the lifetime of the simulation , as expected . [ figure [ bi_dyn_bound15 m ] caption ( two panels ) : the constraint - functions are held constant after the minimization is stopped ; thus , the constraint functions do not respond when the code is about to crash . ] we have chosen two problems to illustrate both the power of numerical simulations of einstein's equations and some of the difficulties encountered in obtaining accurate numerical solutions . this is especially relevant for black hole systems , where different poorly understood issues , coupled with a lack of sufficient computational power , make it much more difficult to advance at a sustained pace towards the final goal of producing a reliable description of a binary black hole system . however , it is clear that this goal outweighs these difficulties . as the bubble problem illustrates , a robust implementation was not only key to responding to open questions but also proved to be the way to observe other phenomena not previously considered . not only did it show that an _ a priori _ possible way to violate cosmic censorship is invalid , but it also revealed the existence of critical phenomena , which , in turn , can be used to shed further light on the stability of black string systems . fortunately , a substantial body of work in recent years has begun to address a number of these questions . a better understanding of the initial boundary - value problem in general relativity , and advances in the definition of initial data and gauge choices , coupled with several modern numerical techniques , are having a direct impact on current numerical efforts .
it seems reasonable to speculate that if this trend continues , the ultimate goal will be within reach in a not - too - distant future . this research was supported in part by the nsf under grants no . phy0244335 , phy0326311 , int0204937 to louisiana state university , the research corporation , the horace hearne jr. institute for theoretical physics , nsf grant no . phy-0099568 to caltech , and nsf grants no . phy0354631 and phy0312072 to cornell university . this research used the resources of the center for computation and technology at louisiana state university , which is supported by funding from the louisiana legislature's information technology initiative . we thank gioel calabrese , rob myers , jorge pullin and oscar reula for several discussions related to the applications presented in this work .
p. olsson , `` summation by parts , projections and stability . i , '' mathematics of computation * 64 * , 1035 ( 1995 ) ; `` supplement to summation by parts , projections and stability . i , '' mathematics of computation * 64 * , s23 ( 1995 ) ; `` summation by parts , projections and stability . ii , '' mathematics of computation * 64 * , 1473 ( 1995 ) .
|
combining deeper insight into einstein's equations with sophisticated numerical techniques promises the ability to construct accurate numerical implementations of these equations . we illustrate this in two examples , the numerical evolution of `` bubble '' and single black hole spacetimes . the former is chosen to demonstrate how accurate numerical solutions can answer open questions and even reveal unexpected phenomena . the latter illustrates some of the difficulties encountered in three - dimensional black hole simulations , and presents some possible remedies .
|
the paper aims at the development of an apparatus for analysis and construction of near optimal solutions of singularly perturbed ( sp ) optimal controls problems ( that is , problems of optimal control of sp systems ) considered on the infinite time horizon .we mostly focus on problems with time discounting criteria but a possibility of the extension of results to periodic optimization problems is discussed as well .our consideration is based on earlier results on averaging of sp control systems and on linear programming formulations of optimal control problems .the idea that we exploit is to first asymptotically approximate a given problem of optimal control of the sp system by a certain averaged optimal control problem , then reformulate this averaged problem as an infinite - dimensional ( i d ) linear programming ( lp ) problem , and then approximate the latter by semi - infinite lp problems .we show that the optimal solution of these semi - infinite lp problems and their duals ( that can be found with the help of a modification of an available lp software ) allow one to construct near optimal controls of the sp system .we will be considering the sp system written in the form where is a small parameter ; are continuous vector functions satisfying lipschitz conditions in and ; and where controls are measurable functions of time satisfying the inclusion being a given compact metric space . the system ( [ e : intro-0 - 1])-([e : intro-0 - 2 ] )will be considered with the initial condition we are assuming that all solutions of the system obtained with this initial condition satisfy the inclusion where is a compact subset of and is a compact subset of ( the consideration is readily extendable to the case when only optimal and near optimal solutions satisfy ( [ equ - y ] ) ) .we will be mostly dealing with the problem of optimal control where is a continuous function , is a discount rate , and is sought over all controls and the corresponding solutions of ( [ e : intro-0 - 1])-([e : intro-0 - 2 ] ) that satisfy the initial condition([e - initial - sp ] ) .however , the approach that we are developing is applicable to other classes of sp optimal control problems as well . to demonstrate this point , we will indicate a way how results obtained for the problem with time discounting criterion ( [ vy - perturbed ] ) can be extended to the periodic optimization setting , and we will consider an example of a sp periodic optimization problem which is numerically solved with the help of the proposed technique . the presence of in the system ( [ e : intro-0 - 1])-([e : intro-0 - 2 ] ) ) implies that the rate with which the -components of the state variables change their values is of the order and is , therefore , much higher than the rate of changes of the -components ( since is assumed to be small ) .accordingly , the -components and -components of the state variables are referred to as _ fast _ and _ slow _ , respectively . problems of optimal controls of singularly perturbed systems appear in a variety of applications and have received a great deal of attention in the literature ( see , e.g. 
, , , , , , , , , , , , , , , , , , , , , , and references therein ) .a most common approach to such problems is based on the idea of approximating the slow -components of the solutions of the sp system ( [ e : intro-0 - 1])-([e : intro-0 - 2 ] ) by the solutions of the so - called reduced system which is obtained from ( [ e : intro-0 - 1 ] ) via the replacement of by , with being the root of the equation note that the equation ( [ e : red-2 ] ) can be obtained by formally equating to zero in ( [ e : intro-0 - 1 ] ) .being very efficient in dealing with many important classes of optimal control problems ( see , e.g. , , , , , , , , , ) , this approach may not be applicable in the general case ( see examples in , , , ) .in fact , the validity of the assertion that the system ( [ e : red-1 ] ) can be used for finding a near optimal control of the sp system ( [ e : intro-0 - 1])-([e : intro-0 - 2 ] ) is related to the validity of the hypothesis that the optimal control of the latter is in some sense slow and that ( in the optimal or near optimal regime ) the fast state variables converge rapidly to their quasi steady states defined by the root of ( [ e : red-2 ] ) and remain in a neighborhood of this root , while the slow variables are changing in accordance with ( [ e : red-1 ] ) . while the validity of such a hypothesis has been established under natural stability conditions by famous tichonov s theorem in the case of uncontrolled dynamics ( see and ) , this hypothesis may not be valid in the control setting if the dynamics is nonlinear and/or the objective function is non - convex , the reason for this being the fact that the use of rapidly oscillating controls may lead to significant ( not tending to zero with ) improvements of the performance indexes .various averaging type approaches allowing one to deal with the fact that the optimal or near optimal controls can take the form of rapidly oscillating functions have been proposed and studied by a number of researchers ( see , , , , , , , , , , , , , , , , , , , , , , and references therein ) .this collective effort lead to a good understanding of what the true limit " problems , optimal solutions of which approximate optimal solutions of the sp problems with small , are .however , to the best of our knowledge , no algorithms for finding such approximating solutions ( in case fast oscillations may lead to a significant improvement of the performance ) have been discussed in the literature . in this paper , we fill this gap by developing an apparatus for construction of such algorithms , our development being based on results of , , and establishing the equivalence of optimal control problems to certain idlp problems ( related results on linear programming formulations of optimal control problems in both deterministic and stochastic settings can be found in , , , , , , , , , , , , and ) .the paper is organized as follows .it consists of five parts .part i ( sections [ sec - contents ] - [ sec - preliminaries ] ) is introductory .section [ sec - contents ] is this description of the contents of the paper . in section [ sec - two - examples ] , we consider two examples of sp optimal control problems , in which fast oscillations lead to improvements of the performance indexes . near optimal solutions of these problems obtained with the proposed techniqueare exhibited later in the text ( section [ sec - construction - sp - examples ] ) . 
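the situation in which the reduced system does work is easy to reproduce numerically : when the fast subsystem is uncontrolled ( or slowly controlled ) and asymptotically stable , the slow components of the sp trajectory track the reduced trajectory , as in tikhonov's theorem . the sketch below ( python / scipy ) illustrates this on a toy system chosen purely for this purpose ; it is not one of the systems studied in the paper .

import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-2

def sp_rhs(t, x):
    z, y = x                                   # z slow, y fast (toy example)
    return [-z + y, (-y + np.sin(z)) / eps]    # eps * dy/dt = -y + sin(z)

def reduced_rhs(t, z):
    return [-z[0] + np.sin(z[0])]              # fast variable replaced by the root y = sin(z)

sp = solve_ivp(sp_rhs, (0.0, 5.0), [1.0, 0.0], max_step=eps / 5.0)
red = solve_ivp(reduced_rhs, (0.0, 5.0), [1.0], dense_output=True)

print("max |z_sp - z_reduced| =", np.max(np.abs(sp.y[0] - red.sol(sp.t)[0])))
# this gap shrinks with eps here; the examples below show how badly this picture can fail
# once rapidly oscillating controls become profitable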
in section [ sec - preliminaries ]some notations and definitions used in the paper are introduced .in part ii ( sections [ sec - augm - reduced ] and [ sec - ave - aug ] ) , we build a foundation for the subsequent developments by considering two problems that describe an asymptotic behavior of the idlp problem related to the sp optimal control problem ( [ vy - perturbed ] ) .one is the augmented reduced idlp problem obtained via adding some extra constraints to the problem resulted from equating of the small parameter to zero ( section [ sec - augm - reduced ] ) and the other is the averaged " idlp problem , which is related to the averaged problem of optimal control ( section [ sec - ave - aug ] ) .we show that these two problems are equivalent and that both of them characterize the limit behavior of the sp problem when provided that the slow dynamics of the sp system is approximated by the averaged system on finite time intervals ( see definition [ def - average - approximation ] and propositions [ prop - ave - disc ] , [ prop - ave - disc-1 ] , [ prop - sp-2 ] , [ prop - present-3 ] ) .in part iii ( sections [ sec - acg - nec - opt ] - [ sec - acg - construction ] ) , we introduce the concept of an average control generating ( acg ) family ( the key building block of the paper ) , and we use duality results for idlp problems involved and their semi infinite approximations to characterize and construct optimal and near optimal acg families .more specifically , in section [ sec - acg - nec - opt ] , the definitions of an acg family and of optimal/ near optimal acg families are given ( definitions [ def - acg ] and [ def - acg - opt ] ) . also in this section , averaged and associated dual problems are introduced and a necessary optimality condition for an acg family to be optimal is established under the assumption that solutions of these duals exist ( proposition [ prop - necessary - opt - cond ] ) . in section [ n - approx - dual - opt ] , approximating averaged semi infinite lp problem and the corresponding approximating averaged and associated dual problems are introduced . in section [ sec - existence - controllability ]it is proved that solutions of these approximating dual problems exist under natural controllability conditions ( propositions [ prop - existence - disc ] and [ dual - existence - average ] ) . in section [ sec - acg - construction ] , it is established that solutions of the approximating averaged and associated dual problems can be used for construction of near optimal acg families ( theorem [ main - sp - nemeric ] ) .in part iv ( sections [ sec - sp - acg - theorem ] - [ sec - lp - based - algorithm ] ) , we indicate a way how asymptotically optimal ( near optimal ) controls of sp problems can be constructed on the basis of optimal ( near optimal ) acg families . in section[ sec - sp - acg - theorem ] , we describe the construction of a control of the sp system based on an acg family and establish its asymptotic optimality / near optimality if the acg family is optimal / near optimal ( theorem [ prop - convergence - measures - discounted ] and corollary [ cor - asym - near - opt ] ) . 
in section [ sec - construction - sp - examples ], we discuss the process of construction of asymptotically near optimal controls using solutions of the approximating averaged and associated dual problems , and we illustrate the construction with two numerical examples .a linear programming based algorithm allowing one to find solutions of approximating averaged problem and solutions of the corresponding approximating ( averaged and associated ) dual problems numerically is outlined in section [ sec - lp - based - algorithm ] .part v ( sections [ sec - phi - map ] - [ sec - main - ave ] ) contains some technical proofs .namely , the proofs of propositions [ prop - ave - disc ] and [ prop - sp-2 ] are given in section [ sec - phi - map ] , and the proofs of theorems [ main - sp - nemeric ] and [ prop - convergence - measures - discounted ] are given in sections [ sec - main - main ] and [ sec - main - ave ] , respectively .example 1 . consider the optimal control problem with minimization being over the controls , and the corresponding solutions of the sp system where and by taking in ( [ e : ex-4 - 2-repeat ] ) , one obtains that and , thus , arrives at the equality which makes the slow dynamics uncontrolled and leads to the equality .the latter , in turn , implies that to see that this value is not even approximately optimal for small but non - zero , let us consider the controls .the solution of the sp system ( [ e : ex-4 - 2-repeat ] ) , ( [ e : ex-4 - 1-rep ] ) obtained with this control can be verified to be of the form with the slow state variable decreasing in time and reaching zero at the moment . the value of the objective function obtained with using these controls until the moment when the slow component reaches zero and with applying zero controls " after that moment is equal to .thus , in the given example , controls and the corresponding state components and ( that are verified to be near optimal " in the given example ) have been numerically constructed with the help of proposed technique for ( see figures 1,2,3 and 7 in section [ sec - construction - sp - examples ] ) and for ( see figures 4,5,6 and 8 in section [ sec - construction - sp - examples ] ) .the corresponding values of the objective function obtained with these two values of are approximately equal to and to ( respectively ) .example 2 .assume that the fast dynamics and the controls are as in example 1 ( that is , they are described by ( [ e : ex-4 - 1-rep-1 - 0 ] ) and ( [ e : ex-4 - 2-repeat ] ) ) .assume that the slow dynamics is two - dimensional and is described by the equations with consider the periodic optimization problem where minimization is over the length of the time interval and over the controls defined on this interval subject to the periodicity conditions : and .as in example 1 , equating to zero leads to ( [ e : ex-4 - 1-rep-1 ] ) , which makes the slow dynamics uncontrollable .the optimal periodic solution of ( [ e : ex-4 - 1-rep-101 - 1 ] ) in this case is the trivial " steady state one : , which leads one to the conclusion that .note that the slow subsystem " ( [ e : ex-4 - 1-rep-101 - 1 ] ) is equivalent to the second order differential equation that describes a linear oscillator influenced by the controls and the fast state variables .one can expect , therefore , that , if , due to a combined influence of the latter , some sort of resonance oscillations of the slow state variables are achievable , then the value of the objective can be negative ( see example 1 in ) . 
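returning to example 1 , a rough numerical illustration can be pieced together from the expressions that survive in the worked continuation of this example in section [ sec - construction - sp - examples ] : the fast subsystem there reads eps*dy1/dt = -y1 + u1 , eps*dy2/dt = -y2 + u2 , the slow equation is dz/dt = -y1*u2 + y2*u1 , and the integrand of the objective is u1^2 + u2^2 + y1^2 + y2^2 + z^2 . the particular rapidly rotating control used below , together with the discount rate , the horizon and the initial data , are illustrative assumptions rather than the exact choices of the example ; the point is only that such oscillations drive the slow variable down and beat the value obtained from the reduced problem , for which z stays at its initial value .

import numpy as np
from scipy.integrate import solve_ivp

eps, C, T = 0.1, 0.1, 40.0                  # discount rate C and horizon T are assumptions
z0 = 1.0

def control(t, z):
    # a rapidly rotating control of the kind described in example 1,
    # switched off once the slow variable has essentially reached zero
    if z <= 1e-3:
        return 0.0, 0.0
    return np.cos(t / eps), np.sin(t / eps)

def rhs(t, x):
    z, y1, y2, J = x                        # J accumulates the discounted running cost
    u1, u2 = control(t, z)
    dz = -y1 * u2 + y2 * u1
    dy1 = (-y1 + u1) / eps
    dy2 = (-y2 + u2) / eps
    dJ = np.exp(-C * t) * (u1**2 + u2**2 + y1**2 + y2**2 + z**2)
    return [dz, dy1, dy2, dJ]

sol = solve_ivp(rhs, (0.0, T), [z0, 0.0, 0.0, 0.0], max_step=eps / 10.0, rtol=1e-8)
print("objective with the oscillating control ~", sol.y[3, -1])
print("objective of the reduced problem (u = 0):", z0**2 / C)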
this type of a near optimal oscillatory regime ( with rapid oscillations of and and with slow oscillations of ) was obtained with the use of the proposed technique .the images of state trajectories constructed numerically for and are depicted in figures 9 and 10 in section [ sec - construction - sp - examples ] .the values of the objective function for these two cases are approximately equal .given a compact metric space , will stand for the -algebra of its borel subsets and will denote the set of probability measures defined on .the set will always be treated as a compact metric space with a metric , which is consistent with its weak topology .that is , a sequence converges to in this metric if and only if for any continuous .there are many ways of how such a metric can be defined . in this paper, we will use the following definition : , where is a sequence of lipschitz continuous functions which is dense in the unit ball of ( the space of continuous functions on ) . using this metric , one can define the hausdorff metric on the set of subsets of as follows : where note that , although , by some abuse of terminology , we refer to as to a metric on the set of subsets of , it is , in fact , a semi metric on this set ( since is equivalent to if and only if and are closed ) .it can be verified ( see e.g. lemma .4 in , p.205 ) that , with the definition of the metric as in ( [ e : intro-2 ] ) , where stands for the closed convex hull of the corresponding set . given a measurable function , the _ occupational measure _ generated by this function on the interval ] by the admissible pairs of the associated system that satisfy the initial conditions and denote by the union of over all . in it has been established that , under mild conditions , where is defined in ( [ e:2.4 ] ) ( see theorem 2.1(i ) in ) , and also that , under some additional conditions ( see theorem 2.1(ii),(iii ) and proposition 4.1 in ) ) , with the convergence being uniform with respect to .define the function by the equation and consider the system in which the role of controls is played by measure valued functions that satisfy the inclusion the system ( [ e : intro-0 - 4 ] ) will be referred to as _ the averaged system_. in what follows , it is assumed that the averaged system is viable in .[ def - adm - averaged ] a pair will be referred to as _admissible for the averaged system _ if ( [ e : intro-0 - 4 ] ) and ( [ e : intro-0 - 5 ] ) are satisfied for almost all ( being measurable and being absolutely continuous functions ) and if from theorem 2.6 of ( see also corollary 3.1 in ) it follows that , under the assumption that ( [ e : intro-0 - 3 - 3 ] ) is satisfied ( and under other technical assumptions including the lipschitz continuity of the multi - valued map ) , the averaged system approximates the sp dynamics in the sense that the following two statements are valid on any finite time interval ] .consider the problem where and is sought over all admissible pairs of the averaged system that satisfy the initial condition ( [ e : intro-0 - 3 - 6 - 1 ] ) .this will be referred to as _ the averaged problem_. 
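the metric rho introduced above is straightforward to evaluate approximately once the dense sequence q_j is truncated to a finite family and the measures are represented by finitely many atoms ( as the occupational measures produced by a numerical method always are ) . in the sketch below ( python / numpy ) the truncated family consists of products of cosines rescaled to the unit ball of c(k) on a box ; it is only a stand - in for a genuinely dense sequence .

import numpy as np

def make_test_functions(n_funcs, dim, box=1.0, seed=0):
    # a finite stand-in for the dense sequence q_j in the unit ball of C(K),
    # K = [-box, box]^dim : products of cosines, sup-norm <= 1
    rng = np.random.default_rng(seed)
    freqs = rng.integers(0, 3, size=(n_funcs, dim))
    return [lambda x, k=k: float(np.cos(np.pi * np.dot(x, k) / box)) for k in freqs]

def integral(q, atoms, weights):
    # integral of q against the discrete measure sum_i weights[i] * delta_{atoms[i]}
    return float(np.dot(weights, [q(a) for a in atoms]))

def rho(meas1, meas2, qs):
    # truncated version of rho = sum_j 2^{-j} | int q_j d gamma' - int q_j d gamma'' |
    (a1, w1), (a2, w2) = meas1, meas2
    return sum(2.0 ** -(j + 1) * abs(integral(q, a1, w1) - integral(q, a2, w2))
               for j, q in enumerate(qs))

qs = make_test_functions(12, dim=2)
gamma1 = ([np.array([0.0, 0.0]), np.array([0.5, 0.5])], np.array([0.5, 0.5]))
gamma2 = ([np.array([0.1, 0.0]), np.array([0.5, 0.4])], np.array([0.6, 0.4]))
print("rho(gamma1, gamma2) ~", rho(gamma1, gamma2, qs))

the hausdorff semi - metric between two finite collections of such measures is then obtained by the usual max - min construction built on rho .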
denote by the set of discounted occupational measures generated by the admissible pairs of the averaged system satisfying the initial condition ( [ e : intro-0 - 3 - 6 - 1 ] ) .that is , where is the discounted occupational measure generated by and the union is over all admissible pairs of the averaged system , being the graph of ( see ( [ e:2.4 ] ) ) : using ( [ e : oms-0 - 2 ] ) , one can rewrite the averaged problem ( [ vy - ave - opt ] ) in terms of minimization over measures from the set as follows to establish the relationships between the sp and the averaged optimal control problems , let us introduce the map defined as follows . for any ,let be such that where is as in ( [ e : intro-0 - 3 - 9 ] ) ( this definition is legitimate since the right - hand side of the above expression defines a linear continuous functional on , the latter being associated with an element of that makes the equality ( [ e : h&th-1 ] ) valid ) .note that the map is linear and it is continuous in the sense that with converging to in the weak topology of and converging to in the weak topology of ( see lemma 4.3 in ) .[ prop - ave - disc ] if the averaged system approximates the sp system on finite time intervals , then where stands for the closure of the corresponding set and .also , the proof is given in section [ sec - phi - map ] .define the set by the equation and consider the idlp problem this problems plays an important role in our consideration and , for convenience , we will be referring to it as to _ the averaged _ idlp problem . from lemma 2.1 of follows that also from theorem 2.2 of it follows that , under certain conditions , the equality is valid the latter implying the equality [ prop - ave - disc-1 ] let the averaged system approximate the sp system on finite time intervals and let ( [ e - ave - lp - sets - di ] ) be valid .then and from ( [ e : intro-3 - 1 ] ) and ( [ e - occupset - convergence - dis ] ) it follows that due to continuity and linearity of , hence , ( [ e - occupset - convergence - dis - lp ] ) is impled by ( [ e - ave - lp - sets - di ] ) and ( [ e - occupset - convergence - dis - lp-1 ] ) .also , ( [ e - objective - convergence - dis - lp ] ) is implied by ( [ e - objective - convergence - dis ] ) and ( [ c - result - di - ave ] ) .the following result establishes that the averaged idlp problem ( [ e - ave - lp - opt - di ] ) is equivalent to the augmented reduced idlp problem ( [ sp - idlp-0 ] ) .[ prop - sp-2 ] _ the averaged and the augmented reduced idlp problems are equivalent in the sense that _ also , is an optimal solution of the augmented reduced idlp problem ( [ sp - idlp-0 ] ) if and only if is an optimal solution of the averaged idlp problem ( [ e - ave - lp - opt - di ] ) . 
the proof is given in section [ sec - phi - map ] .[ cor - important - inequality ] the inequality is valid .the proof follows from ( [ e : oms-6 - 0 ] ) , ( [ sp - idlp - convergence-1 - 1 ] ) and ( [ e - ave - lp - opt-1 ] ) .note that proposition [ prop - sp-2 ] does not assume that the averaged system approximates the sp system on finite time intervals .if this assumption is made , then proposition [ prop - ave - disc-1 ] in combination with proposition [ prop - sp-2 ] imply that the augmented problem ( [ sp - idlp-0 ] ) defines the true limit " for the perturbed idlp problem ( [ sp - idlp - di ] ) in the sense that the following statement strengthening proposition [ prop - sp-1 ] is valid [ prop - present-3 ] let the averaged system approximate the sp system on finite time intervals and let the equalities ( [ e : oms-6 ] ) , ( [ e - ave - lp - sets - di ] ) be valid .then the proof follows from propositions [ prop - ave - disc-1 ] and [ prop - sp-2 ] .remark ii.2 .results of this and the previous sections have their counterparts in dealing with the periodic optimization problem where is sought over the length of the time interval and over the ( defined on this interval ) controls such that the corresponding solutions of the sp system ( [ e : intro-0 - 1])-([e : intro-0 - 2 ] ) satisfy the periodicity condition .it is known ( see corollaries 3 , 4 in and lemma 3.5 in ) that in the general case and , under certain additional assumptions , where is the optimal value of the following idlp problem in which \displaystyle \int_{u\times y\times z } \nabla ( \phi(y)\psi(z ) ) ^t \chi_{\epsilon}(u , y , z ) \gamma(du , dy , dz ) = 0 \ \ \ \forall \phi(\cdot)\in c^1 ( \r^m ) , \ \ \forall \psi(\cdot)\in c^1 ( \r^n ) \ } , \end{array}\ ] ] with being as in ( [ e : sp - w-1 ] ) ( that is , ) .note that can be formally obtained from by taking in the expression for the latter ( see ( [ e : sp - w-1 ] ) ; note the disappearance of the dependence on and in ( [ vy - perturbed - per - new-5 ] ) ) . similarly to proposition [ prop - sp-1 ] , one can come to the conclusion that where is the optimal value of the augmented reduced problem the set being defined by the right - hand side of ( [ e : sp - m ] ) taken with ( note that the dependence on disappears here as well ) .also , similarly to proposition [ prop - sp-2 ] , it can be established that the problem ( [ vy - perturbed - per - new-7 ] ) is equivalent to the idlp problem where is defined by the right - hand side of ( [ d - sp - new ] ) taken with .the equivalence between these two problems includes , in particular , the equality of the optimal values ( see proposition [ prop - sp-2 ] ) , implying ( by ( [ vy - perturbed - per - new-2 ] ) and ( [ vy - perturbed - per - new-6 ] ) ) that note that the problem ( [ vy - perturbed - per - new-8 ] ) is the idlp problem related to the periodic optimization problem where is sought over the length of the time interval and over the admissible pairs of the averaged system ( [ e : intro-0 - 4 ] ) that satisfy the periodicity condition .in particular , under certain conditions , the latter , under the assumption that the averaged problem ( [ vy - perturbed - per - new-10 ] ) approximates the sp problem ( [ vy - perturbed - per - new-1 ] ) in the sense that ( sufficient conditions for this can be found in ) leads to the equality * iii .average control generating ( acg ) families . 
*the validity of the representation ( [ tilde - w-1 ] ) for the set motivates the definition of _ average control generating family _ given below . for any ,let be an admissible pair of the associated system ( [ e : intro-0 - 3 ] ) and be the occupational measure generated by this pair on ( see ( [ e : oms-0 - 1-infy ] ) ) , with the integral being a measurable function of and for any continuous .note that the estimate ( [ e - opt - om-1 - 0 ] ) is valid if is -periodic , with being uniformly bounded on .[ def - acg ] the family will be called _ average control generating _ ( acg ) if the system where has a unique solution . notethat , according to this definition , if is an acg family , with being the family of occupational measures generated by this family , and if is the corresponding solution of ( [ e - opt - om-1 ] ) , then the pair ( with ) is an admissible pair of the averaged system . for convenience, this admissible pair will also be referred to as one generated by the acg family .[ prop - clarification-1 ] let be an acg family and let and be , respectively , the family of occupational measures and the admissible pair of the averaged system generated by this family .let be the discounted occupational measure generated by , and let then where is the discounted occupational measure generated by . for an arbitrary continuous function and defined as in ( [ e : intro-0 - 3 - 9 ] ), one can write down by the definition of ( see ( [ e : h&th-1 ] ) ) , the latter implies ( [ e - opt - om-2 - 1 ] ) .[ def - acg - opt ] an acg family will be called optimal if the admissible pair generated by this family is optimal in the averaged problem ( [ vy - ave - opt ] ) .that is , an acg family will be called _-near optimal _ ( ) if note that , provided that the equality ( [ c - result - di - ave ] ) is valid , an acg family will be optimal ( near optimal ) if and only if the discounted occupational measure generated by is an optimal ( near optimal ) solution of the averaged idlp problem ( [ e - ave - lp - opt - di ] ) .also , from ( [ e - opt - om-2 - 1 ] ) it follows that if the acg family is optimal and that if the acg family is -near optimal . thus , under the assumption that the equality ( [ c - result - di - ave ] ) is valid , an acg family will be optimal ( near optimal ) if and only if is an optimal ( near optimal ) solution of the reduced augmented problem ( [ sp - idlp-0 ] ) .let be the hamiltonian of the averaged system where and are defined by ( [ e : g - tilde ] ) and ( [ e : g - tilde ] ) .consider the problem where is sought over all continuously differentiable functions .note that the optimal value of the problem ( [ e : dual - ave ] ) is equal to the optimal value of the averaged idlp problem ( [ e - ave - lp - opt - di ] ) .the former is in fact dual with respect to the later , the equality of the optimal values being one of the duality relationships between the two ( see theorem 3.1 in ) . for brevity , ( [ e : dual - ave ] ) will be referred to as just _ averaged dual problem_. note that the averaged dual problem can be equivalently rewritten in the form where is the graph of ( see ( [ e : graph - w ] ) ) .a function will be called a solution of the averaged dual problem if or , equivalently , if note that , if satisfies ( [ e : dual - ave - sol-1 ] ) , then satisfies ( [ e : dual - ave - sol-1 ] ) as well . 
assume that a solution of the averaged dual problem ( that is , a functions satisfying ( [ e : dual - ave - sol-1 ] ) ) exists and consider the problem in the right hand side of ( [ e : h - tilde ] ) with , rewriting it in the form \mu(du , dz)\ } = \tilde h ( \nabla \zeta^ * ( z),z).\ ] ] the latter is an idlp problem , with the dual of it having the form where is sought over all continuously differentiable functions .the optimal values of the problems ( [ e : h - tilde-10 - 1 ] ) and ( [ e : dec - fast-4 ] ) are equal , this being one of the duality relationships between these two problems ( see theorem 4.1 in ) .the problem ( [ e : dec - fast-4 ] ) will be referred to as _ associated dual problem_. a function will be called a solution of the problem ( [ e : dec - fast-4 ] ) if the following result gives a necessary condition for an acg family to be optimal provided that the latter is periodic , that is , for some ( in fact , for the result to be valid , the periodicity is required only for , where is the solution of ( [ e - opt - om-1 ] ) ) .[ prop - necessary - opt - cond ] assume that the equality ( [ c - result - di - ave ] ) is valid . assume also that a solution of the averaged dual problem exists and a solution of the associated dual problem exists for any .then , for an acg family satisfying ( [ e : h - tilde-10 - 3 - 1001 ] ) to be optimal , it is necessary that for almost all and for almost all ] and over the admissible pairs of the associated system ( [ e : intro-0 - 3 ] ) ( considered with ) that satisfy the periodicity conditions ( with the optimal value in ( [ e : dec - fast-6 ] ) being equal to the right hand side in ( [ e : opt - cond - ave-1-proof-2 ] ) ) . by corollary 4.5 in , this implies the statement of the proposition .remark iii.1 .it can be readily verified that the concept of a solution of the averaged dual problem ( see ( [ e : dual - ave - sol-1 ] ) ) is equivalent to that of a smooth viscosity subsolution of the hamilton - jacobi - bellman ( * hjb * ) equation related to the averaged optimal control problem ( [ vy - ave - opt ] ) ( provided that ( [ c - result - di - ave ] ) is valid ) .it can be also understood that the concept of a solution of the associated dual problem ( see ( [ e : h - tilde-10 - 3 ] ) ) is essentially equivalent to that of a smooth viscosity subsolution of the hjb equation related to the periodic optimization problem ( [ e : dec - fast-6 ] ) .note that the convergence of the optimal value function of a sp optimal control problem to the viscosity solution ( not necessarily smooth ) of the corresponding averaged hjb equation have been studied in , , and ( see also references therein ) . in the considerationabove , we are using solutions of the averaged and associated dual problems ( which can be interpreted as the inequality forms of the corresponding hjb equations ) to state a necessary condition for an acg family to be optimal . the price for a possibility of doing itis , however , the assumption that solutions of these problems , that is functions satisfying ( [ e : dual - ave - sol-1 ] ) and ( [ e : h - tilde-10 - 3 ] ) , exist .this is a restrictive assumption , and we are not going to use it in the sequel . 
instead , we will be considering simplified ( approximating " ) versions of the averaged and associate dual problems , solutions of which exist under natural controllability type conditions .we will use those solutions instead of and in ( [ e : h - tilde-10 - 3 - 1002 ] ) for the construction of near optimal acg families , which , in turn , will be used for the construction of asymptotically near optimal controls of the sp problem .note , in conclusion , that we also will not be assuming the periodicity of optimal or near optimal acg families although in the numerical examples that we are considering in sections [ sec - two - examples ] and [ sec - construction - sp - examples ] , the constructed near optimal acg families appear to be periodic ( since the system describing the fast dynamics in the examples is two - dimensional , it is consistent with the recent result of establishing the sufficiency of periodic regimes in dealing with log run average optimal control problems with two - dimensional dynamics ; see also earlier developments in and ) .let be a sequence of functions such that any and its gradient are simultaneously approximated by a linear combination of and their gradients .also , let be a sequence of functions such that any and its gradient are simultaneously approximated by a linear combination of and their gradients .examples of such sequences are monomials , and , respectively , , , with , and standing for the components of and ( see , e.g. , ) .let us introduce the following notations : and ( compare with ( [ e:2.4 ] ) , ( [ e : graph - w ] ) and ( [ d - sp - new ] ) , respectively ) and let us consider the following semi - infinite lp problem ( compare with ( [ e - ave - lp - opt - di ] ) ) this problem will be referred to as _ -approximating averaged problem_. it is obvious that defining the set by the equation one can also see that ( with , and being considered as subsets of ) , the latter implying , in particular , that it can be readily verified that ( see , e.g. , the proof of proposition 7 in ) that where , in the first case , the convergence is in the housdorff metric generated by the weak convergence in and , in the second , it is in the housdorff metric generated by the weak convergence in and the convergence in .[ prop - lm - convergence ] the following relationships are valid : where the convergence in both cases is in housdorff metric generated by the weak convergence in .also , if the optimal solution of the averaged idlp problem ( [ e - ave - lp - opt - di ] ) is unique , then , for an an arbitrary optimal solution of the -approximating problem ( [ e - ave - lp - opt - di - mn ] ) , the proofs of ( [ e : graph - w - m-101 ] ) and ( [ e : graph - w - m-101 - 101 ] ) follow a standard argument and are omitted . from ( [ e : graph - w - m-101 ] ) it follows that and from ( [ e : graph - w - m-101 - 101 ] ) it follows that the above two relationships imply ( [ e : graph - w - m-102 ] ) . 
if the optimal solution of the averaged idlp problem ( [ e - ave - lp - opt - di ] ) is unique , then , by ( [ e : graph - w - m-102 - 101 - 2 ] ) , for any solution of the problem in the right - hand side of ( [ e : graph - w - m-102 - 101 - 1 ] ) there exists the limit also , if for an arbitrary optimal solution of the approximating problem ( [ e - ave - lp - opt - di - mn ] ) and for some , there exists , then this limit is an optimal solution of the problem in the right - hand side of ( [ e : graph - w - m-102 - 101 - 1 ] ) .this proves ( [ e : graph - w - m-102 - 101 ] ) .define the finite dimensional space by the equation and consider the following problem this problem is dual with respect to the problem ( [ e - ave - lp - opt - di - mn ] ) , the equality of the optimal values of these two problems being a part of the duality relationships . note that the problem ( [ e : dual - ave-0-approx - mn ] ) looks similar to the averaged dual problem ( [ e : dual - ave-0 ] ) .however , in contrast to the latter , the sup is sought over the finite dimensional subspace of and is used instead of . the problem ( [ e : dual - ave-0-approx - mn ] )will be referred to as _ -approximating averaged dual problem_. a function , will be called a solution of the -approximating averaged dual problem if define the finite dimensional space by the equation and , assuming that a solution of the -approximating averaged dual problem exists , consider the following problem while the problem ( [ e : dec - fast-4-associated ] ) looks similar to the associated dual problem ( [ e : dec - fast-4 ] ) , it differs from the latter , firstly , by that is sought over the finite dimensional subspace of and , secondly , by that a solution of ( [ e : dual - ave-0-approx - mn ] ) is used instead of a solution of ( [ e : dual - ave ] ) ( the later may not exist ) .the problem ( [ e : dec - fast-4-associated ] ) will be referred to as _ -approximating associated dual problem_. it can be shown that it is , indeed , dual with respect to the semi - infinite lp problem \mu(du , dy)= \sigma_{n , m}^*(z),\ ] ] the duality relationships including the equality of the optimal values ( see theorem 5.2(ii ) in ) .a function , will be called a solution of the -approximating associated dual problem if in the next section , we show that solutions of the -approximating averaged and associated dual problems exist under natural controllability conditions .in what follows it is assumed that , for any and the gradients and are linearly independent on any open subset of and , respectively , . that is , if is an open subset of , then the equality is valid only if . similarly ,if is an open subset of , then the equality is valid only if .let stand for the set of points that are reachable by the state components of the admissible pairs of the averaged system ( [ e : intro-0 - 4 ] ) .that is , the existence of a solution of the approximating averaged dual problem can be guaranteed under the following controllability type assumption about the averaged system .[ ass - ave - disc - controllability ] the closure of the set has a nonempty interior .that is , [ prop - existence - disc ] if assumption [ ass - ave - disc - controllability ] is satisfied , then a solution of the -approximating averaged dual problem exists for any and . 
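numerically , problems of this type reduce to ordinary finite - dimensional lps once the measures are restricted to convex combinations of dirac measures sitting on a grid and only the finitely many test - function constraints are kept ; the multipliers of the equality constraints are then natural candidates for the coefficients of the expansions defining the approximating dual solutions . the sketch below ( python / scipy ) shows this mechanism on a deliberately simple toy , a one - dimensional discounted problem dz/dt = u , u in [ -1 , 1 ] , running cost z^2 , with monomial test functions ; it is not the lp of the examples in this paper , and the constraint functionals of the actual ( n , m)-approximating problem ( whose formulas are not reproduced here ) would simply replace the toy ones .

import numpy as np
from scipy.optimize import linprog

def solve_measure_lp(atoms, cost, constraints, rhs):
    # minimize sum_i p_i * cost(atom_i) over weights p >= 0 with sum_i p_i = 1 and
    # sum_i p_i * theta_k(atom_i) = rhs[k]; returns weights, value and dual multipliers
    c = np.array([cost(a) for a in atoms])
    A_eq = np.vstack([[theta(a) for a in atoms] for theta in constraints]
                     + [np.ones(len(atoms))])
    b_eq = np.concatenate([np.asarray(rhs, dtype=float), [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0.0, None), method="highs")
    return res.x, res.fun, res.eqlin.marginals   # marginals: available with the highs solvers

# toy discounted problem: dz/dt = u, z(0) = 1, discount C, running cost z^2;
# constraints on the (normalized) discounted occupational measure gamma(du, dz):
#   C * int phi(z) d gamma - int phi'(z) * u d gamma = C * phi(z0),  phi(z) = z^i, i = 1..N
C, z0, N = 1.0, 1.0, 6
atoms = [(u, z) for u in np.linspace(-1.0, 1.0, 21) for z in np.linspace(-2.0, 2.0, 81)]
constraints = [lambda a, i=i: C * a[1]**i - i * a[1]**(i - 1) * a[0] for i in range(1, N + 1)]
rhs = [C * z0**i for i in range(1, N + 1)]

weights, value, duals = solve_measure_lp(atoms, lambda a: a[1]**2, constraints, rhs)
print("lp lower bound for the toy problem :", value)
print("multipliers, i.e. candidate coefficients of the approximating dual :", duals[:N])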
the proof is given at the end of this section ( its idea being similar to that of the proof of proposition 3.2 in ) .the existence of a solution of the approximating associated dual problem is guaranteed by the following assumption about controllability properties of the associated system .[ ass - associated - local - controllability ] there exists such that the closure of has a nonempty interior and such that any two points in can be connected by an admissible trajectory of the associated system ( that is , for any , there exists an admissible pair of the associated system defined on some interval ] . to state our next assumption ,let us re - denote the occupational measure ( introduced in assumption [ set-1](iv ) above ) as ( that is , ) .[ set-2 ] for almost all , there exists an open ball centered at such that : \(i ) the occupational measure is continuous on .namely , for any , where is a function tending to zero when tends to zero ( ) . also , for any , is a constant .\(ii ) the system has a unique solution and , for any , where \ : \ z^{n , m}(t')\notin q_{t'}\}.\ ] ] and stands for the lebesgue measure of the corresponding set .in addition to assumptions [ set-1 ] and [ set-2 ] , let us also introduce [ set-3 ] for each such that , the following conditions are satisfied : \(i ) for almost all , there exists an open ball centered at such is uniquely defined ( the problem ( [ e - nm - minimizer-0 ] ) has a unique solution ) for .\(ii ) the function satisfies lipschitz conditions on .that is , is a constant .\(iii ) let be the solution of the system ( [ e : opt - cond - ave-1-fast-100 - 2-nm ] ) considered with and with the initial condition .we assume that , for any , where \ : \ y^{n , m}_{t}(\tau')\notin b_{t , \tau'}\}.\ ] ] [ main - sp - nemeric ] let assumptions [ set-1 ] , [ set-2 ] and [ set-3 ] be valid . then the family being the solution of ( [ e : opt - cond - ave-1-fast-100 - 2-nm ] ) and is a - near optimal acg family , where the proof is given in section [ sec - main - main ] .it is based on lemma [ fast - convergence ] stated at the end of this section .note that in the process of the proof of the theorem it is established that } ||z^{n , m}(t')- z^*(t')|| = 0 \ \ \ \forall t\in [ 0,\infty),\ ] ] where is the solution of ( [ e - opt - om-1-mn ] ) .also , it is shown that for almost all , and where the relationship ( [ e : hjb-19-nm ] ) implies the statement of the theorem with ( see definition [ def - acg - opt ] ) .[ fast - convergence ] let the assumptions of theorem [ main - sp - nemeric ] be satisfied and let be such that . then }| y^{n , m}_{t } ( \tau ' ) - y_t^{*}(\tau ' ) ||=0 \ \ \ \forall \tau \in [ 0,\infty)\ ] ] also , for almost all . 
the proof is given in section [ sec - main - main ] .remark iii.2 .results of sections [ sec - acg - nec - opt ] - [ sec - acg - construction ] can be extended to the case when the periodic optimization problem ( [ vy - perturbed - per - new-1 ] ) is under consideration .in particular , one can introduce the -approximating averaged problem where is defined by the right - hand side of ( [ d - sp - new - mn ] ) taken with .the optimal value of this problem is related to the optimal value of the problem ( [ vy - perturbed - per - new-8 ] ) by the inequalities and , in addition , ( compare with ( [ e : graph - w - m-103 - 101 ] ) and ( [ e : graph - w - m-102 ] ) ) .one can also introduce the -approximating averaged dual problem as in ( [ e : dual - ave-0-approx - mn ] ) ( with ) and introduce the -approximating associated dual problem as in ( [ e : dec - fast-4-associated ] ) . assuming that solutions of these problems exist ,one can define a control as a minimizer ( [ e - nm - minimizer-1 ] ) .it can be shown that , under certain assumptions ( some of which are similar to those used in theorem [ main - sp - nemeric ] and some are specific for the periodic optimization case ) , the control allows one to construct an acg family that generates a near optimal solution of the averaged periodic optimization problem ( [ vy - perturbed - per - new-10 ] ) .while we do not give a precise result justifying such a procedure in the present paper , we demonstrate that it can be efficient in dealing with sp periodic optimization problems by considering a numerical example ( see example 2 in sections [ sec - two - examples ] and [ sec - construction - sp - examples ] ) . *asymptotically near optimal controls of sp problems . *in this section we describe a way how an asymptotically optimal ( near optimal ) control of the sp optimal control problem ( [ vy - perturbed ] ) can be constructed given that an asymptotically optimal ( near optimal ) acg family is known ( a way of construction of the latter has been discussed in section [ sec - acg - construction ] ) . [ def - asympt - opt ] a control will be called asymptotically optimal in the sp problem ( [ vy - perturbed ] ) if where is the solution of the system ( [ e : intro-0 - 1])-([e : intro-0 - 2 ] ) obtained with the control and with the initial condition ( [ e - initial - sp ] ) .a control will be called asymptotically -near optimal ( ) in the sp problem ( [ vy - perturbed ] ) if we will need a couple of more definitions and assumptions . [ def - steering-1 ] let and .we shall say that is _ attainable _ by the associated system from the initial conditions if there exists an admissible pair of the associated system ( see definition [ def - adm - associate ] ) satisfying the initial condition such that the occupational measure generated by the pair on the interval ] by taking where is the control steering the associated system ( [ e : intro-0 - 3 ] ) to from the initial condition .note that , in this instance , the associated system is considered with . 
below we establish that , under an additional technical assumption, the control constructed above is asymptotically optimal ( near optimal ) if the acg family is optimal ( near optimal ) .the needed assumption is introduced with the help of the following definition .[ ass - locally - lipschitz ] we will say that an acg family is weakly piecewise lipschitz continuous in a neighborhood of if , for any lipschitz continuous function , the function is piecewise lipschitz continuous in a neighborhood of , where is the family of measures generated by and is the solution of ( [ e - opt - om-1 ] ) . the piecewise lipschitz continuity of in a neighborhood of is understood in the following sense . for any , there exists no more than a finite number of points , \i=1, ... ,k ] and any control \rightarrow u ] by taking where is the solution of ( [ e : opt - cond - ave-1-fast-100 - 2-nm ] ) obtained with and with the initial condition .the control will be asymptotically -near optimal in the sp problem ( [ vy - perturbed ] ) , with satisfying ( [ e : hjb-1 - 17 - 1-n - z - const - def - near - opt ] ) , if all assumptions of theorems [ main - sp - nemeric ] and [ prop - convergence - measures - discounted ] are satisfied . note that the assumptions of theorems [ main - sp - nemeric ] and [ prop - convergence - measures - discounted ] do not need to be verified for one to be able to construct the control defined by ( [ e : near - opt - sp-10 - 1])-([e : near - opt - sp-10 - 2 ] ) .the latter can be constructed as soon as an optimal solution of the -approximating averaged problem ( [ e - ave - lp - opt - di - mn ] ) , its optimal value , and solutions , of the -approximating averaged and associated dual problems are found for some and ( a lp based algorithm for finding the latter is described in section [ sec - lp - based - algorithm ] ) .once the control is constructed , one can integrate the system ( [ e : intro-0 - 1])-([e : intro-0 - 2 ] ) and find the value of the objective function obtained with this control , since ( by ( [ e - imp - inequality-1 ] ) and ( [ e : graph - w - m-103 - 101 ] ) ) , \ \leq \\limsup_{\epsilon\rightarrow 0}v_{di}^{n , m}(\epsilon , y_0 , z_0 ) - \liminf_{\epsilon\rightarrow 0}v_{di}^{*}(\epsilon , y_0 , z_0)\ ] ] the difference can serve as a measure asymptotic near optimality " of the control .let us resume the above in the form of steps that one may follow to find an asymptotically near optimal control . _( 1 ) choose test functions , , and construct the -approximating averaged problem ( [ e - ave - lp - opt - di - mn ] ) for some and ._ \(2 ) use the lp based algorithm of section [ sec - lp - based - algorithm ] to find an optimal solution and the optimal value of the problem ( [ e - ave - lp - opt - di - mn ] ) , as well as a solution of the -approximating averaged dual problem ( [ e : dual - ave-0-approx - mn ] ) and a solution of the -approximating associated dual problem ( [ e : dec - fast-4-associated ] ) ; \(3 ) define according to ( [ e - nm - minimizer-1 ] ) and construct the control according to ( [ e : near - opt - sp-10 - 1])-([e : near - opt - sp-10 - 2 ] ) ; \(4 ) substitute the control into the system ( [ e : intro-0 - 1])-([e : intro-0 - 2 ] ) and integrate the obtained ode with matlab . 
also , use matlab to evaluate the objective function ; \(5 ) assess the proximity of the found solution to the optimal one by evaluating the difference + .example 1 ( continued ) .consider the sp optimal control problem defined by the equations ( [ e : exampl2 - 4])-([e : ex-4 - 1-rep ] ) .the -approximating averaged problem ( [ e - ave - lp - opt - di - mn ] ) for this example was constructed with the use of powers as and monomials as and with , ( note the change in the indexation of the test functions and recall that stands for the number of constraints in ( [ d - sp - new - mn ] ) and stands for the number of constraints in ( [ e:2.4-m ] ) ) , this problem was solved with the algorithm of section [ sec - lp - based - algorithm ] .the optimal value of the problem was obtained to be approximately equal to : the expansions ( [ e : dual - ave-0-approx-2 ] ) and ( [ e : dual - ave-0-approx-1-associate-1 ] ) defining solutions of the -approximating averaged and dual problems take the form where the coefficients and are obtained as a part of the solution of the problem ( [ e - ave - lp - opt - di - mn ] ) with the algorithm of section [ sec - lp - based - algorithm ] . using and , one can compose the problem ( [ e - nm - minimizer-0 ] ) , which in this case takes the form }\{u_1 ^ 2+u_2 ^ 2+y_1 ^ 2+y_2 ^ 2+z^2 + \frac{d\zeta^{15,35}(z)}{dz}(-y_1u_2 + y_2u_1 ) + \frac{\partial\eta^{15,35}_z(y)}{\partial y_1}(-y_1 + u_1 ) + \frac{\partial\eta^{15,35}_z(y)}{\partial y_2}(-y_2 + u_2)\}. \ \ \ \ ] ]the solution of the problem ( [ e - nm - minimizer-0-example ] ) is as follows where and .construct the control according to ( [ e : near - opt - sp-10 - 1])-([e : near - opt - sp-10 - 2 ] ) .note that , since , in the given example , the equations describing the fast dynamics do not depend on the slow component , the control can be presented in a more explicit feedback form where is the solution of the system ( [ e : ex-4 - 2-repeat])-([e : ex-4 - 1-rep ] ) obtained with the control ( for convenience , we omit the superscripts below and write , and instead of , and ) .the graphs of , and obtained via the integration with matlab of the system ( [ e : ex-4 - 2-repeat])-([e : ex-4 - 1-rep ] ) considered with and are depicted in figures 1,2,3 and 4,5,6 ( respectively ) .note that in the process of integration the lengths of the intervals were chosen experimentally to optimize the results ( that is , they were not automatically taken to be equal to as in ( [ e : contr - rev-100 - 1 ] ) ) .the values of the objective function were obtained to be approximately equal to and ( respectively ) , both values being close to ( see ( [ e : near - opt - sp-10 - 6 ] ) ; recall that in this case ) .hence , the constructed control can be considered to be a good candidate " for being asymptotically near optimal ._ controls and state components as functions of time for _ fig . 1 : fig . 2 : fig . 3 : _ controls and state components as functions of time for _ fig . 4 : fig . 5 : fig . 6: the state trajectories corresponding to the two cases ( and ) are depicted in figures 7 and 8 .as can be seen , the fast state variables move along a square like figure that gradually changes its shape while the slow variable is decreasing from the initial level to the level , which is reached at the moment .after this moment , the zero controls are applied and the fast variables are rapidly converging to zero , with the slow variable stabilizing and remaining approximately equal to . _ state trajectories for and _ fig . 
7 : for fig.8 : for remark iv.2 .as has been mentioned earlier ( see remarks ii.2 and iii.2 ) , many results obtained for the sp optimal control problem with time discounting have their counterparts in the periodic optimization setting .this remains valid also for the consideration of this section ( as well as that of section [ sec - sp - acg - theorem ] ) .in particular , the process of construction of a control that is asymptotically near optimal in the sp periodic optimization problem ( [ vy - perturbed - per - new-1 ] ) on the basis of an acg family that generates a near optimal solution the averaged problem ( [ vy - perturbed - per - new-10 ] ) is similar to that outlined in steps to above , with being a minimizer in ( [ e - nm - minimizer-1 ] ) ( and with and being solutions of the corresponding -approximating averaged and associated dual problems ; see remark iii.2 ) .denote thus obtained control as and the corresponding periodic solution of the system ( [ e : intro-0 - 1])-([e : intro-0 - 2 ] ) ( assume that it exists ) as , the period of the latter being denoted as .denote also by the corresponding value of the objective function : by ( [ vy - perturbed - per - new-9 ] ) and by ( [ e : graph - w - m-103 - 101-ave ] ) , \ \leq \\limsup_{\epsilon\rightarrow 0}v_{per}^{n , m}(\epsilon ) - \liminf_{\epsilon\rightarrow 0}v_{per}^{*}(\epsilon)\ ] ] where is the optimal of the idlp problem ( [ vy - perturbed - per - new-8 ] ) and is the optimal value of the -approximating problem ( [ e - ave - lp - opt - di - mn - ave ] ) ( compare with ( [ e : near - opt - sp-10 - 4 ] ) ) .that is , the difference can serve as a measure of the asymptotic near optimality of the control in the periodic optimization case .thus , one may conclude that , to find an asymptotically near optimal control for the sp periodic optimization problem ( [ vy - perturbed - per - new-1 ] ) , one may follow the steps similar to , with the differences being as follows : _( i ) a solutions of the approximating problem ( [ e - ave - lp - opt - di - mn - ave ] ) and the corresponding averaged and associated dual problems should be used instead of solutions of the problem ( [ e - ave - lp - opt - di - mn ] ) and its corresponding duals ; ( ii ) a periodic solution of the sp system ( [ e : intro-0 - 1])-([e : intro-0 - 2 ] ) should be sought instead of one satisfying the initial condition ( [ e - initial - sp ] ) ; ( iii ) the difference should be used as a measure of asymptotic near optimality of the control . _example 2 ( continued ) .consider the periodic optimization problem ( [ e : ex-4 - 1-rep-101 - 2 ] ) .the -approximating averaged problem ( [ e - ave - lp - opt - di - mn - ave ] ) was constructed with the use of monomials as and monomials as and with , solving this problem with the algorithm of section [ sec - lp - based - algorithm ] , one finds its optimal value as well as the coefficients of the expansions that define solutions of the -approximating averaged and dual problems ( see ( [ e : dual - ave-0-approx-2 ] ) and ( [ e : dual - ave-0-approx-1-associate-1 ] ) ) . 
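the following python sketch indicates how such monomial expansions and the pointwise minimization composed in the next paragraph might be evaluated numerically ; the coefficient values , the polynomial degrees and the discretized control set are placeholders ( in an actual computation the coefficients come from the lp solution ) , and the second eta term is an assumed symmetric completion of the truncated expression .

```python
import numpy as np
from itertools import product

# --- placeholder data: monomial exponents and LP-derived coefficients ---
# zeta(z)  ~ sum_k c_zeta[k] * z1**a * z2**b   (dual of the averaged problem)
# eta_z(y) ~ sum_k c_eta[k]  * y1**p * y2**q   (dual of the associated problem)
exps_z = list(product(range(4), repeat=2))      # degrees up to 3 (illustrative)
exps_y = list(product(range(4), repeat=2))
rng = np.random.default_rng(0)
c_zeta = rng.normal(size=len(exps_z))           # stand-ins for computed coefficients
c_eta = rng.normal(size=len(exps_y))

def grad_poly(x, exps, coeffs):
    """Gradient of sum_k coeffs[k] * x[0]**e1 * x[1]**e2 at the point x."""
    grad = np.zeros(2)
    for (e1, e2), c in zip(exps, coeffs):
        if e1 > 0:
            grad[0] += c * e1 * x[0] ** (e1 - 1) * x[1] ** e2
        if e2 > 0:
            grad[1] += c * e2 * x[0] ** e1 * x[1] ** (e2 - 1)
    return grad

def feedback_control(y, z, u_grid):
    """Pointwise minimizer, over a discretized control set, of the integrand
    plus the gradient terms of zeta and eta; the integrand and dynamics
    mirror the expression composed in the next paragraph (example 2)."""
    gz = grad_poly(z, exps_z, c_zeta)
    gy = grad_poly(y, exps_y, c_eta)
    best_u, best_val = None, np.inf
    for u1, u2 in u_grid:
        val = (0.1 * u1 ** 2 + 0.1 * u2 ** 2 - z[0] ** 2
               + gz[0] * z[1]
               + gz[1] * (-4.0 * z[0] - 0.3 * z[1] - y[0] * u2 + y[1] * u1)
               + gy[0] * (-y[0] + u1)
               + gy[1] * (-y[1] + u2))           # assumed symmetric second term
        if val < best_val:
            best_u, best_val = (u1, u2), val
    return np.array(best_u)

# assumed control set [-1, 1]^2, discretized
u_grid = list(product(np.linspace(-1.0, 1.0, 21), repeat=2))
u_star = feedback_control(np.array([0.2, -0.1]), np.array([0.5, 0.0]), u_grid)
```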
using and , one can compose the problem ( [ e - nm - minimizer-0 ] ) : }\{0.1u_1 ^ 2 + 0.1u_2 ^ 2-z_1 ^ 2 + \frac{\partial \zeta^{35,35}(z)}{\partial z_1}z_2 + \frac{\partial \zeta^{35,35}(z)}{\partial z_2}(-4z_1 - 0.3 z_2 -y_1u_2 + y_2u_1 ) + \frac{\partial\eta^{35,35}_z(y)}{\partial y_1}(-y_1 + u_1)\ ] ] the solution of the problem ( [ e - nm - minimizer-0-example - per ] ) is similar to that of ( [ feedbackfinalvel - sp ] ) and is written in the form where and . as in example 1, the equations describing the fast dynamics do not depend on the slow component .hence , the the control defined by ( [ e : near - opt - sp-10 - 1])-([e : near - opt - sp-10 - 2 ] ) can be written in the feedback form : where is the solution of the system ( [ e : ex-4 - 2-repeat]),([e : ex-4 - 1-rep-101 - 1 ] ) obtained with the control . the periodic solution of the system ( [ e : ex-4 - 2-repeat]),([e : ex-4 - 1-rep-101 - 1 ] ) was found with matlab for and .the images of the state trajectories obtained as the result of the integration are depicted in figures 9 and 10 ( where , again , the superscripts are omitted from the notations ) .the slow -components appear to be moving periodically along a closed , ellipse like , figure on the plane , with the period being approximately equal to .note that this figure and the period appear to be the same for and .the fast -components are moving along square like figures centered around the points on the ellipse " , with about rounds for the case and about rounds for the case .the values of the objective functions obtained for these two cases are approximately the same and , the latter being close to the value of ( see ( [ e : near - opt - sp-10 - 6-per ] ) ) ._ images of state trajectories for and _ fig . 9 : for fig.10 : for will start with a consideration of an algorithm for finding an optimal solution of the following generic " semi - infinite lp problem where with being a non - empty compact metric space and with being continuous functional on .note that the problem dual with respect to ( [ e : lp - decomp - alg-1 ] ) is the problem ( [ e : important - simple - duality - lemma-1 ] ) , and we assume that the inequality ( [ e : important - simple - duality - lemma-3 ] ) is valid only with ( which , by lemma [ lemma - important - simple - duality ] , ensures the existence of a solution of the problem ( [ e : important - simple - duality - lemma-1 ] ) ). it is known ( see , e.g. , theorems a.4 and a.5 in ) that among the optimal solutions of the problem ( [ e : lp - decomp - alg-1 ] ) there exists one that is presented in the form where are dirac measures concentrated at , .having in mind this presentation , let us consider the following algorithm for finding optimal concentration points and optimal weights ( see and ) .let points ( ) be chosen to define an initial grid on at every iteration a new point is defined and added to this set .assume that after iterations the points have been defined and the set has been constructed : the iteration ( ) is described as follows : \(i ) find a basic optimal solution of the lp problem where note that no more than components of are positive , these being called basic components .also , find an optimal solution of the problem dual with respect to ( [ e : ns-3 ] ) , the latter being of the form \(ii ) find an optimal solution of the problem \(iii ) define the set by the equation using an argument similar to one commonly used in a standard ( finite dimensional ) linear programming , one can show ( see , e.g. 
, or ) that if , then and the measure ( where stands for the index set of basic components of ) is an optimal solution of the problem ( [ e : lp - decomp - alg-1 ] ) , with being an optimal solution of the problem ( [ e : important - simple - duality - lemma-1 ] ) .if , for then , under some non - degeneracy assumptions , it can be shown that , and that any cluster ( limit ) point of the set of measures is an optimal solution of the problem ( [ e : lp - decomp - alg-1 ] ) , while any cluster ( limit ) point of the set is an optimal solution of the problem ( [ e : important - simple - duality - lemma-1 ] ) ( main features of the proof of these convergence results can be found in , ) .the approximating problem ( [ e - ave - lp - opt - di - mn ] ) is a special case of the problem ( [ e : lp - decomp - alg-1 ] ) with an obvious correspondence between the notations : assume that the set has been constructed .the lp problem ( [ e : ns-3 ] ) takes in this case the form where = 0 \ , \ \i = 1, ... ,n \},\ ] ] with the corresponding dual being of the form \ \ \forall \ l=1, ... ,k+j\}.\ ] ] denote by an optimal basic solution of the problem ( [ e : ns-3-ave ] ) and by an optimal solution of the dual problem ( [ e : ns-3-ave - dual ] ) . the problem ( [ e : ns-5 ] ) identifying the point to be added to the set takes the following form \ } = \min_{z\in z}\{\tilde{\mathcal{g}}^{n , m , j}(z ) + c ( \psi_i ( z_0 ) - \psi_i ( z ) ) \},\ ] ] where \mu(du , dy)\ } .\ ] ] note that the problem ( [ e : ns-5-sp-1 ] ) is also a special case of the problem ( [ e : lp - decomp - alg-1 ] ) with its optimal solution as well as an optimal solution of the corresponding dual problem can be found with the help of the same approach .denote the latter as and , respectively . by adding the point to the set ( being an optimal solution of the problem in the right - hand side of ( [ e : ns-5-sp ] ) ) , one can define the set and then proceed to the next iteration . under the controllability conditions introduced in section [ sec - existence - controllability ] ( see assumptions [ ass - ave - disc - controllability ] and [ ass - associated - local - controllability ] ) and under additional ( simplex method related ) non - degeneracy conditions , it can be proved ( although we do not do it in the present paper ) that the optimal value of the problem ( [ e : ns-3-ave ] ) converges to the optimal value of the -approximating averaged problem and that , if is a cluster ( limit ) point of the set of optimal solutions of the problem ( [ e : ns-3-ave - dual ] ) considered with , then is an optimal solution of the -approximating averaged dual problem ( [ e : dual - ave-0-approx - mn ] ) .in addition to this , it can be shown that , if is a cluster ( limit ) point of the set of optimal solutions of the problem dual to ( [ e : ns-5-sp-1 ] ) considered with , then is an optimal solution of the -approximating associated dual problem ( [ e : dec - fast-4-associated ] ) . a software that implements this algorithm on the basis of the ibm ilog cplex lp solver and global nonlinear optimization routines designed by a. bagirov and m. 
mammadov has been developed ( with the cplex solver being used for finding optimal solutions of the lp problems involved and bagirov s and mammadov s routines being used for finding optimizers in ( [ e : ns-5-sp ] ) and in problems similar to ( [ e : ns-5 ] ) that arise when solving ( [ e : ns-5-sp-1 ] ) ) .the numerical solutions of examples 1 and 2 in section [ sec - construction - sp - examples ] were obtained with the help of this software ( was taken to be equal to zero in dealing with the periodic optimization problem of example 2 ) remark iv.3 .the decomposition of the problem ( [ e : ns-5 ] ) , an optimal solution of which identifies the point to be added to the set , into problems ( [ e : ns-5-sp ] ) and ( [ e : ns-5-sp-1 ] ) resembles the column generating technique of generalized linear programming ( see ) .note that a similar decomposition was observed in dealing with lp problems related to singular perturbed markov chains ( see , e.g. , , and ) .finally , let us also note that , while in this paper we are using the -approximating problems and their lp based solutions for finding near optimal acg families , other methods for finding the latter can be applicable as well .for example , due to the fact that the averaged and associated dual problems ( [ e : dual - ave-0 ] ) and ( [ e : dec - fast-4 ] ) are inequality forms of certain hjb equations ( see remark iii.1 ) , it is plausible to expect that an adaptation of methods of solution of hjb equations developed in , , can be of a particular use . *v. selected proofs.*[proofs for sp - lp ]let , be a sequence of lipschitz continuous functions that is dense in the unit ball of and let where with . from the fact that the averaged system approximates the sp system on finite time intervals it follows ( see ( [ e : intro-0 - 3 - 8 ] ) ) that is the housdorff metric generated by a norm in , and the sets , are defined by the equations with the first union being over all controls of the sp system and the second being over all admissible pairs of the averaged system .define the sets and by the equations where , again , the first union is over all controls of the sp system and the second is over all admissible pairs of the averaged system .it is easy to see that where .let us use ( [ e : sp - aug-5 - 1 ] ) and ( [ e : sp - aug-5 - 1 - 0 - 5 ] ) to show that for some such that .let us separately deal with two different cases .first is the case when the estimate ( [ e : sp - aug-5 - 1 ] ) is uniform , that is , the second is the case when there exists a number and sequences , such that in case ( [ e : sp - aug-5 - 1 - 1 ] ) is valid , from ( [ e : sp - aug-5 - 1 ] ) and ( [ e : sp - aug-5 - 1 - 0 - 5 ] ) it follows that hence , passing to the limit when , one obtains ( [ e : sp - aug-5 - 1 - 0 - 6 ] ) with . to deal with the case when ( [ e : sp - aug-5 - 1 - 2 ] ) is true ,choose in such a way that see lemma [ auxiliary - lemma ] below . using ( [ e : sp - aug-5 - 1 ] ) and( [ e : sp - aug-5 - 1 - 0 - 5 ] ) , one can obtain that denoting , one has ( due to ( [ e : sp - aug-5 - 1 - 5 - 1 - 1 ] ) ) .this proves the validity of ( [ e : sp - aug-5 - 1 - 0 - 6 ] ) . 
from ( [ e : oms-0 - 2 ] ) it follows that the set can be rewritten in the form it follows that also , from ( [ e : oms-0 - 2 ] ) and from the definition of the map ( see ( [ e : h&th-1 ] ) ) it follows that having in mind the representations ( [ e : theta - comparison-1 ] ) and ( [ e : theta - comparison-2 ] ) and using corollary 3.6 of , one can come to the conclusion that the validity of ( [ e : sp - aug-5 - 1 - 0 - 6 ] ) for any , implies ( [ e - occupset - convergence - dis ] ) .the validity of ( [ e - objective - convergence - dis ] ) follows from ( [ e - occupset - convergence - dis ] ) ( due to the presentations ( [ e : oms-4 ] ) , ( [ e : oms-4-di - ave ] ) and the definition of the map ) .[ auxiliary - lemma ] if ( [ e : sp - aug-5 - 1 - 2 ] ) is valid , then there exists a monotone decreasing function defined on an interval ( c is some positive number ) such that ( [ e : sp - aug-5 - 1 - 5 - 1 - 1 ] ) is valid ._ proof of lemma [ auxiliary - lemma ] _ .let us assume ( without loss of generality ) that is decreasing if is decreasing ( with fixed ) and is increasing if is increasing ( with fixed ) .let us define the sequence by the equations }\{\epsilon \ : \\beta_n(\epsilon , k)\leq \frac{1}{2^{k}}\ \ } \ , \ \k=1,2 , ... \ .\ ] ] note that , due to monotonicity of in , it is easy to verify ( using the fact that is increasing in ) that , and , hence , there exists a limit let us show that .assume it is not true and .then , for any and for any fixed , by letting go to infinity in the last inequality , one comes to the conclusion that , and , consequently , to the conclusion that for any ( due to monotonicity in ) .the latter contradicts ( [ e : sp - aug-5 - 1 - 2 ] ) .thus , let be a sequence of natural numbers such that and such that . define the function on the interval by the equation it is easy to see that the function is increasing when is decreasing , and also , according to the construction above , _ proof of proposition [ prop - sp-2 ] ._ to prove ( [ equality-1-sp - new ] ) , let us first prove that the inclusion is valid . take an arbitrary .that is , for some . by ( [ e : h&th-1 ] ) , \phi(p)(du , dy , dz)\ ] ] p(d\mu , dz).\ ] ] by definition of ( see ( [ e : graph - w ] ) ) , consequently ( see ( [ e : sp - w-3 ] ) ) , \phi(p)(du , dy , dz)=0\ \\ \rightarrow \ \ \\phi(p)\in \mathcal{d}.\ ] ] also from ( [ e : h&th-1 ] ) and from the fact that it follows that \phi(p)(du , dy , dz)\ ] ] p(d\mu , dz ) \= \ 0 \ \ \ \rightarrow \ \ \\phi(p)\in \mathcal{a}_{di}(z_0).\ ] ] thus , .this proves ( [ incl - dang-1 ] ) .let us now show that the converse inclusion is valid . to this end , take and show that . due to ( [ tilde - w-1 ] ) , can be presented in the form ( [ e : sp - w-4-extra ] ) with for almost all . changing values of on a subset of having the measure , one can come to the conclusion that can be presented in the form ( [ e : sp - w-4-extra ] ) with let be a subspace of ] , with due to the fact that is positive , one obtains that where is the closed unit ball in ] , ( see , e.g. , theorem 5.8 on page 38 in ) . using this relationship for ,one obtains ( see ( [ e : sp - l - subspace-2 ] ) and ( [ e : sp - l - subspace-3 ] ) ) that since the latter is valid for any ] if is continuous from the left at and if is continuous from the right at . 
by the definition of the discounted occupational measure generated by ( see ( [ e : occup - c ] ) ) , this implies ( [ e : convergence - important-3 ] ) .assume now that ( [ e : convergence - important-1 ] ) is not true .then there exists a number and sequences , , ( and ) such that and such that where is the set of the concentration points of the dirac measures in ( [ e - nm - minimizer - proof-2 ] ) , that is , taken with and , and where from ( [ e : convergence - important-4 ] ) it follows that for ( large enough ) .hence , the latter implies that where is defined by ( [ e - nm - minimizer - proof-2 ] ) .due to the fact that the optimal solution of the idlp problem ( [ e - ave - lp - opt - di ] ) is unique ( assumption [ set-1](i ) ) , the relationship ( [ e : graph - w - m-102 - 101 ] ) is valid . consequently , from ( [ e : convergence - important-5 ] ) and ( [ e : convergence - important-6 ] ) it follows that the latter contradicts to ( [ e : convergence - important-3 ] ) .thus , ( [ e : convergence - important-1 ] ) is proved .let us now prove the validity of ( [ e : convergence - important-2 ] ) .assume it is is not valid .then there exists and sequences , with and with such that where is the set of the concentration points of the dirac measures in ( [ e - nm - minimizer - proof-3 ] ) , taken with and with , , and where from ( [ e : convergence - important-4-(u , y ) ] ) it follows that the latter implies that where is defined by ( [ e - nm - minimizer - proof-3 ] ) ( taken with and ) . from ( [ e : convergence - important-1 ] ) it follows , in particular , that the later and ( [ e : convergence - important-5-(u , y ) ] ) lead to which contradicts to ( [ e : convergence - important-7 ] ) . thus ( [ e : convergence - important-2 ] ) is proved ._ proof of lemma [ fast - convergence ] .let be such that the ball is not empty .let also be as in ( [ e : convergence - important-1 ] ) and be as in ( [ e : convergence - important-2 ] ) .note that , due to ( [ e - nm - minimizer - proof-5 ] ) , where is as in ( [ e - nm - minimizer-1 ] ) . from ( [ e : convergence - important-1 ] ) and ( [ e : convergence - important-2 ] ) it follows that for and large enough .hence , one can use ( [ e : hjb-1 - 17 - 0-per - lip - u - nm ] ) to obtain by ( [ e : convergence - important-1 ] ) and ( [ e : convergence - important-2 ] ) , the latter implies since is not empty for almost all ( assumption [ set-3 ] ( i ) ) , the convergence ( [ e : convergence - important-12 ] ) takes place for almost all .let us take an arbitrary and subtract the equation from the equation we will obtain using assumption [ set-3 ] ( ii),(iii ) , one can derive that d\tau'\ ] ] where is a constant defined ( in an obvious way ) by lipschitz constants of and , and . also , due to ( [ e : convergence - important-12 ] ) and the dominated convergence theorem ( see , e.g. , p. 49 in ) let us introduce the notation and rewrite the inequality ( [ e : l-1 - 6 ] ) in the form }|| y_{t}^{n , m}(\tau')-y^*_t(\tau')|| \leq \kappa_{t,\tau}(n , m)e^{l_1\tau}.\ ] ] since , by ( [ e : hjb-1 - 17 - 1-n - z - const ] ) and ( [ e : l-1 - 8 ] ) , the inequality ( [ e : l-1 - 10 ] ) implies ( [ e : hjb-1 - 17 - 1-n - z - const - fast-1 ] ) . by ( [ e : hjb-1 - 17 - 1-n - z - const - fast-1 ] ) , for and large enough ( for such that the ball is not empty ) . 
hence , the latter implies ( [ e : hjb-1 - 17 - 1-n - z - const - fast-2 ] ) ( by ( [ e : hjb-1 - 17 - 1-n - z - const - fast-1 ] ) and ( [ e : convergence - important-12 ] ) ) ._ proof of theorem [ main - sp - nemeric ] ._ let be continuous . by( [ e - opt - om-1 - 0-nm - star ] ) and ( [ e - opt - om-1 - 0-nm ] ) , for an arbitrary small there exists such that and using ( [ e : la - airport-1 ] ) and ( [ e : la - airport-2 ] ) , one can obtain due to lemma [ fast - convergence ] , the latter implies the following inequality which , in turn , implies ( due to the fact that can be arbitrary small ) . since is an arbitrary continuous function , from ( [ e : la - airport-3 ] )it follows that the latter being valid for almost all .taking an arbitrary and subtracting the equation from the equation one obtains from ( [ e : hjb-1 - 17 - 1-per - lip - g ] ) and from the definition of the set ( see ( [ e : intro-0 - 4-n - a - t ] ) ) , it follows that '\ ] ] where .this and ( [ e : hjb-16-nm - proof-5 ] ) allows one to obtain the inequality note that , by ( [ e : hjb-16-nm - proof-2 ] ) , which , along with ( [ e : hjb-1 - 17 - 1-n ] ) , imply that by gronwall - bellman lemma , from ( [ e : hjb-16-nm - proof-6 - 11 ] ) it follows that }||z^{n , m}(t')-z^*(t')||\leq \kappa_t(n , m ) e^{lt}.\ ] ] the latter along with ( [ e : hjb-16-nm - proof-9 ] ) imply ( [ e : hjb-1 - 19 - 2-nm ] ) .let us now establish the validity of ( [ e : hjb-1 - 19 - 1-nm ] ) .let be such that the ball introduced in assumption [ set-2 ] is not empty . by triangle inequality , due to ( [ e : hjb-1 - 19 - 2-nm ] ), for and large enough .hence , by ( [ e : hjb-1 - 17 - 1-per - cont - mu - nm - g-1 ] ) , which implies that the latter , along with ( [ e : hjb-16-nm - proof-2 ] ) and ( [ e : summarizing - n - m - convergence ] ) , imply ( [ e : hjb-1 - 19 - 1-nm ] ) .to prove ( [ e : hjb-19-nm ] ) , let us recall that for an arbitrary , choose in such a way that then by ( [ e : hjb-1 - 19 - 2-nm ] ) and ( [ e : hjb-1 - 19 - 1-nm ] ) , hence , since can be arbitrary small , the latter implies ( [ e : hjb-19-nm ] ) . .without loss of generality , one may assume that is decreasing with and that ( the later can be achieved by replacing with if necessary ) . 
having this in mind , define as the solution of the problem the solution of this problem exists since is right - continuous and , by definition , fix an arbitrary and introduce the notation where is an the statement of the theorem .note that , by construction , as can be readily seen , }||z(t)-z(t_l)||\leq m\delta(\epsilon),\ ] ] where is the solution of ( [ e - opt - om-1 ] ) and .hence , \ ] ] for all small enough .consequently , due to the assumed weak piecewise lipschitz continuity of the acg family under consideration ( see definition [ ass - locally - lipschitz ] ) , where is a lischitz constant of .being the solution of ( [ e - opt - om-1 ] ) , satisfies the equality which along with ( [ e - implicit-5 - 1 ] ) allow one to obtain in addition to the above , one can obtain ( by ( [ e - implicit-4 ] ) ) to continue the proof , let us rewrite the sp system ( [ e : intro-0 - 1])-([e : intro-0 - 2 ] ) in the stretched " time scale let us also introduce the following notations in these notations , the control defined by ( [ e : contr - rev-100 - 3 ] ) and ( [ e : contr - rev-100 - 4 ] ) is rewritten in the form and the solution of the system ( [ e : intro-0 - 1-str])-([e : intro-0 - 2-str ] ) obtained with this control satisfies the equations ,\ ] ] .\ ] ] note that the estimate ( [ e : convergence - to - gamma - z - estimate ] ) , which we are going to prove , is rewritten in the stretched time scale as follows }||z_{\epsilon}(\tau)-z(\tau \epsilon)||\leq \beta_t(\epsilon ) , \ \ \ \ \ \\lim_{\epsilon\rightarrow 0}\beta_t(\epsilon ) = 0.\ ] ] let us consider ( [ e - implicit-8 ] ) with and subtract it from the expression . using ( [ e - implicit-6 ] ) , one can obtain - [ z(t_{l})-z_{\epsilon}(\tau_{l})]\ ] ] where is the solution of the associated system ( [ e : intro-0 - 3 ] ) considered with and with the initial condition .the control steers the associated system to , with the estimate ( [ e - opt - om-2 - 105 ] ) being uniform with respect to the initial condition and the values of .this implies that there exists a function , , such that consequently , for and , where is a lipschitz constant of .note that the last inequality is valid since satisfies lipschitz condition on and since ( as \setminus \cup_{i=1}^k ( \bar t_i - \delta(\epsilon ) , \bar t_i+ \delta(\epsilon ) ) $ ] ; see ( [ e : convergence - to - gamma - extra - condition ] ) and ( [ e - implicit-1 - 0 - 100 ] ) ) . also , }||y_{y_{\epsilon}(\tau_l),z_{\epsilon}(\tau_l)}(\tau ) - y_{\epsilon}(\tau + \tau_l)|| + m\delta(\epsilon)\ ) , \ ] ] where it has been taken into account that , by ( [ e - implicit-8 ] ) , } ||z_{\epsilon } ( \tau_l)-z_{\epsilon}(\tau + \tau_l)||\leq m(\epsilon s(\epsilon))= m\delta(\epsilon).\ ] ] by definition , satisfies the equation .\ ] ] rewriting ( [ e - implicit-8 - 1 ] ) in the form \ ] ] and subtracting it from ( [ e - opt - om-2 - 105-extra-14 ] ) , one can obtain ( by ( [ e - opt - om-2 - 105-extra-13 ] ) ) ,\ ] ] where is a lipschitz constant of . by gronwall - bellman lemma , the latter implies( see also ( [ e : contr - rev-100 - 0 - 1 ] ) and ( [ e - implicit-7 - 1 ] ) ) }||y_{y_{\epsilon}(\tau_l),z_{\epsilon}(\tau_l)}(\tau ) - y_{\epsilon}(\tau + \tau_l)||\leq m \epsilon s^2(\epsilon ) e ^{l_f s(\epsilon ) } = m \epsilon ( \frac{1}{2l_f}\ln \frac{1}{\epsilon})^2 \epsilon^{-\frac{1}{2}}\leq \epsilon^{\frac{1}{4}}\ ] ] for small enough. 
taking ( [ e - opt - om-2 - 105-extra-11 ] ) , ( [ e - opt - om-2 - 105-extra-12 ] ) and ( [ e - opt - om-2 - 105-extra-15 ] ) into account , one can rewrite ( [ e - implicit-9 ] ) in the form where are appropriately chosen constants and note that . by subtracting ( [ e - implicit-8 ] ) ( taken with ) from the expression and taking into account ( [ e - implicit-7 ] ) , one obtains - [ z(t_{l})-z_{\epsilon}(\tau_{l})]\ ] ] which leads to the estimate where . denoting as , one can come to the conclusion ( based on ( [ e - implicit-12 ] ) and ( [ e - implicit-14 ] ) and on lemma [ lemma - estimates - sigmas ] stated below ) that , for any , there exists , , such that stands for the floor function ( is the maximal integer number that is less or equal than ) . using ( [ e - opt - om-2 - 105-extra-13 ] ) , ( [ e - implicit-15 ] ) and having in mind the fact that and that the inequality ( [ e - implicit-4 ] ) can be rewritten as }||z(\tau \epsilon)-z(\tau_l \epsilon)||\leq m\delta(\epsilon),\ ] ] one can establish the validity of ( [ e : convergence - to - gamma - z - estimate - tau ] ) with . to show that the discounted occupational measure generated by converges to the measure defined in ( [ e - opt - om-2 ] )it is sufficient to show that , for any lipschitz continuous function , where is the discounted occupational measure generated by the solution of ( [ e - opt - om-1 ] ) and is defined by ( [ e - opt - om-1 - 101 ] ) . by the definition of ( see ( [ e - opt - om-1-extra-101 ] ) ) , also , by the definition of the discounted occupational measure ( see ( [ e : oms-0 - 2 ] ) ) and due to the fact that the triplet is considered in the stretched time scale , as can be readily seen , with the convergence being uniform in ( in the second case ) .thus , to prove ( [ e - implicit-16 ] ) , it is sufficient to prove that the main steps in proving ( [ e - implicit-20 ] ) are as follows .let . as can be readily seen , the following estimates are valid : where is a constant .similarly to ( [ e - opt - om-2 - 105-extra-11 ] ) and ( [ e - opt - om-2 - 105-extra-12 ] ) , one can obtain ( using the estimates ( [ e - opt - om-2 - 105-extra-15 ] ) and ( [ e - implicit-15 ] ) ) for , where .in addition to that , one has the following estimate for , where is a constant .from ( [ e - implicit-6 - 105 - 7 - 1 ] ) and ( [ e - implicit-6 - 105 - 8 ] ) it follows that where .the latter along with ( [ e - implicit-6 - 103 - 1 ] ) and ( [ e - implicit-6 - 104 ] ) prove ( [ e - implicit-20 ] ) .[ lemma - estimates - sigmas ] let be as in ( [ e - implicit-1 - 0 - 100 ] ) and be as in ( [ e - implicit-13 ] ) .assume that and that the numbers , satisfy the inequalities and then where , may depend on .k. avrachenkov , j. filar and m. haviv , singular perturbations of markov chains and decision processes .a survey `` , a chapter in handbook of markov decision processes : methods and applications '' , e.a .feinberg , a. shwartz ( eds ) , _ international series in operations research and management science _ , 40 ( 2002 ) , pp.113 - 153 , kluwer academic publishers .borkar and v. gaitsgory , on existence of limit occupational measures set of a controlled stochastic differential equation , _siam j. on control and optimization _ , 44 ( 2005/2006 ) : 4 , pp .1436 - 1473 .l. finlay , v.gaitsgory and i. lebedev , duality in linear programming problems related to long run average problems of optimal control " , _siam j. on control and optimization _ , 47 ( 2008 ) : 4 , pp .1667 - 1700 .v. 
gaitsgory , on representation of the limit occupational measures set of a control system with applications to singularly perturbed control systems " , _ siam j. control and optimization _ , 43 ( 2004 ) : 1 , pp . 325 - 340 . v. gaitsgory and m. quincampoix , linear programming approach to deterministic infinite horizon optimal control problems with discounting " , _ siam j. on control and optimization _ , 48 ( 2009 ) : 4 , pp . 2480 - 2512 . v. gaitsgory and m. quincampoix , on sets of occupational measures generated by a deterministic control system on an infinite time horizon " , _ nonlinear analysis series a : theory , methods & applications _ , 88 ( 2013 ) , pp . 27 - 41 . v. gaitsgory , s. rossomakhine and n. thatcher , approximate solutions of the hjb inequality related to the infinite horizon optimal control problem with discounting " , _ dynamics of continuous and impulsive systems series b : applications and algorithms _ , 19 ( 2012 ) , pp . 65 - 92 . d. goreac and o.s . serea , linearization techniques for - control problems and dynamic programming principles in classical and control problems " , _ esaim : control , optimization and calculus of variations _ , doi:10.1051/cocv/201183 , 2011 . g. grammel , averaging of singularly perturbed systems " , _ nonlinear analysis _ , 28 ( 1997 ) , 1851 - 1865 . naidu , singular perturbations and time scales in control theory and applications : an overview " , _ dynamics of continuous discrete and impulsive systems , series b : applications and algorithms _ , 9 ( 2002 ) , 233 - 278 .
|
the paper aims at the development of an apparatus for the analysis and construction of near optimal solutions of singularly perturbed ( sp ) optimal control problems ( that is , problems of optimal control of sp systems ) considered on the infinite time horizon . we mostly focus on problems with time discounting criteria , but the possibility of extending the results to periodic optimization problems is discussed as well . our consideration is based on earlier results on averaging of sp control systems and on linear programming formulations of optimal control problems . the idea that we exploit is to first asymptotically approximate a given problem of optimal control of the sp system by a certain averaged optimal control problem , then reformulate this averaged problem as an infinite - dimensional ( id ) linear programming ( lp ) problem , and then approximate the latter by semi - infinite lp problems . we show that the optimal solutions of these semi - infinite lp problems and their duals ( which can be found with the help of a modification of available lp software ) allow one to construct near optimal controls of the sp system . we demonstrate the construction with two numerical examples . * key words . * singularly perturbed optimal control problems , averaging and linear programming , occupational measures , numerical solution * ams subject classifications . * 34e15 , 34c29 , 34a60 , 93c70 * i. introduction and preliminaries . *
|
recent advances in instrumental sensitivity have challenged the very definition of elliptical galaxies in hubble s galaxy classification scheme .it is now clear that ellipticals contain a complex , diverse ism , primarily in the form of hot ( ) , x - ray - emitting gas , with masses m .small amounts of h , h , ionized gas , and dust have been detected as well in many ellipticals ( _ e.g. _ , bregman _et al . _ ) . unlike the situation in spiral galaxies ,physical and evolutionary relationships between the various components of the ism in ellipticals are not yet understood .a number of theoretical concepts have been developed for the secular evolution of the different components of the ism of ellipticals .the two currently most popular concepts are _( i ) _ the `` cooling flow '' picture in which mass loss from stars within the galaxy , heated to k by supernova explosions and collisions between expanding stellar envelopes during the violent galaxy formation stage , quiescently cools and condenses ( cf . the review of fabian _ et al . ) and _ ( ii ) _ the `` evaporation flow '' picture in which clouds of dust and gas have been accreted during post - collapse galaxy interactions . subsequent heating ( and evaporation ) of the accreted gasis provided by thermal conduction in the hot , x - ray - emitting gas and/or star formation ( cf .de jong _ et al . _ ; sparks _ et al . _ ) .the first direct evidence for the common presence of cool ism in ellipticals was presented by jura _et al . _ ( ) who used _ iras _ addscans and found that % of nearby , bright ellipticals were detected at 60 and 100 .implied dust masses were of order m ( using h = 50 mpc ) .interestingly , there are several x - ray - emitting ellipticals with suspected cooling flows ( cf .forman _ et al . _ ) among the _ iras _ detections .the presence of dust in such objects is surprising , since the lifetime of a dust grain against collisions with hot ions ( `` sputtering '' ) in hot gas with typical pressures is only yr ( draine & salpeter ) .what is the origin of this dust , and how is it distributed ? in order to systematically study the origin and fate of the ism of elliptical galaxies ,we have recently conducted a deep , systematic optical survey of a complete , blue magnitude - limited sample of 56 elliptical galaxies drawn exclusively from the rsa catalog ( sandage & tammann ) .deep ccd imaging has been performed through both broad - band filters and narrow - band filters isolating the nebular h+[n ] emission lines . in this paperi combine results from this survey with the _ iras _ data to discuss the distribution and origin of dust and gas in ellipticals .part of this paper is based on goudfrooij & de jong ( , hereafter paper iv ) .optical observations are essential for establishing the presence and distribution of dust and gas in ellipticals , thanks to their high spatial resolution .a commonly used optical technique to detect dust is by inspecting color - index ( _ e.g. _ , ) images in which dust shows up as distinct , reddened structures with a morphology different from the smooth underlying distribution of stellar light ( _ e.g. _ , goudfrooij _ et al . _ , hereafter paper ii ) . however , a strong limitation of optical detection methods ( compared to the use of _ iras _ data ) is that only dust distributions that are sufficiently different from that of the stellar light ( _ i.e. _ , dust lanes , rings , or patches ) can be detected .moreover , detections are limited to nearly edge - on dust distributions ( _ e.g. 
_ , no dust lanes with inclinations have been detected , cf .sadler & gerhard ; paper ii ) .thus , the optical detection rate of dust ( currently 41% , cf .paper ii ) represents a firm lower limit .since an inclination of 35 is equivalent to about half the total solid angle on the sky , one can expect the _ true _ detection rate to be about twice the measured one ( at a given detection limit for dust absorption ) , which means that _ the vast majority of ellipticals could harbor dust lanes and/or patches_.optical emission - line surveys of luminous , x - ray - emitting ellipticals have revealed that these galaxies often contain extended regions of ionized gas ( _ e.g. _ , trinchieri & di serego alighieri ) , which have been argued to arise as thermally instable regions in a `` cooling flow '' . as mentioned before , the emission - line regions in these galaxies are suspected to be dust - free in view of the very short lifetime of dust grains .however , an important result of our optical survey of ellipticals is the finding that emission - line regions are essentially _ always _ associated with substantial dust absorption ( paper ii ; see also macchetto & sparks ) , which is difficult to account for in the `` cooling flow '' scenario . this dilemma can however be resolved in the `` evaporation flow '' scenario ( de jong _ et al . _ ) in which the ism has been accreted from a companion galaxy .closely related to the origin of the dust and gas is their dynamical state , _i.e. _ , whether or not their motions are already settled in the galaxy potential . this question is , in turn , linked to the intrinsic shape of ellipticals , since in case of a settled dust lane , its morphology indicates a plane in the galaxy in which stable closed orbits are allowed ( _ e.g. _ , merritt & de zeeuw ) .these issues can be studied best in the inner regions of ellipticals , in view of the short relaxation time scales involved , allowing a direct relation to the intrinsic shape of the parent galaxy .a recent analysis of properties of _ nuclear _ dust in 64 ellipticals imaged with hst has shown that dust lanes are randomly oriented with respect to the apparent major axis of the galaxy ( van dokkum & franx ) . moreover ,the dust lane is significantly misaligned with the _ kinematic _ axis of the stars for almost all galaxies in their sample for which stellar kinematics are available .this means that _ even at these small scales _ , the dust and stars are generally dynamically decoupled , which argues for an external origin of the dust .this conclusion is strengthened by the decoupled kinematics of stars and gas in ellipticals with _large - scale _ dust lanes ( _ e.g. _ , bertola _ et al . _as mentioned in the introduction , dust in ellipticals has been detected by optical as well as far - ir surveys .since the optical and far - ir surveys yielded quite similar detection rates , one is tempted to conclude that both methods trace the same component of dust . in this section , this point will be addressed by discussing the distribution of dust in ellipticals .the methods used for deriving dust masses from optical extinction values and from the iras flux densities at 60 and 100 m , and the limitations involved in these methods , are detailed upon in goudfrooij _et al . 
_( , hereafter paper iii ) and paper iv .it is found that the dust masses estimated from the optical extinction are significantly _ lower _ than those estimated from the far - ir emission ( see paper iv ) .quantitatively , the average ratio = for the galaxies in our `` rsa sample '' for which the presence of dust is revealed by both far - ir emission and optical dust lanes or patches .i should like to emphasize that this `` dust mass discrepancy '' among ellipticals is quite remarkable , since the situation is _ significantly different _ in the case of spiral galaxies : careful analyses of deep multi - color imagery of dust extinction in spiral galaxies ( _ e.g. _ , block _ et al . _ ; emsellem ) also reveal a discrepancy between dust masses derived from optical and _ iras _ data , _ but in the other sense , i.e. , _ _ ! _this can be understood since the _ iras _ measurements were sensitive to `` cool '' dust with temperatures , but much less to `` cold '' dust at lower temperatures which radiates predominantly at wavelengths beyond 100 m ( _ e.g. _ , young _ et al . _. since dust temperatures of order 20 k and lower are appropriate to spiral galaxies ( greenberg & li and references therein ) , dust masses derived from the _ iras _ data are strict _ lower limits _ by nature .evidently , the bulk of the dust in spiral disks is too cold to emit significantly at 60 and 100 m , but still causes significant extinction of optical light .interestingly , is _ also _ appropriate to the outer parts of ellipticals ( cf .paper iv ) , underlining the significance of the apparent `` dust mass discrepancy '' among ellipticals . what could be the cause ?_ orientation effects ? _if the discrepancy would be due to an orientation effect , the ratio / be inversely proportional to cos , where is the inclination of the dust lane with respect to the line of sight .however , we have measured inclinations of regular , uniform dust lanes in ellipticals from images shown in homogeneous optical ccd surveys , and found that the relation between / and cos is a scatter plot ( cf.fig . 1 of paper iv ) .thus , the effect of orientation on the dust mass discrepancy must be weak if present at all .this suggests that the dust in the lanes is concentrated in dense clumps with a low volume filling factor . _diffusely distributed dust ? _ having eliminated the effect of orientation , the most plausible way out of the dilemma of the dust mass discrepancy is to postulate an additional , diffusely distributed component of dust , which is therefore virtually undetectable by optical methods .we note that this diffuse component of dust is not unexpected : the late - type stellar population of typical giant ellipticals ( l ) has a substantial present - day mass loss rate ( 0.1 1 m of gas and dust ; cf .faber & gallagher ) which can be expected to be diffusely distributed .an interesting potential way to trace this diffuse component of dust is provided by radial color gradients in ellipticals .with very few significant exceptions , `` normal '' ellipticals show a global reddening toward their centers , in a sense approximately linear with log(radius ) ( goudfrooij _ et al . _ ( hereafter paper i ) and references therein ) .this is usually interpreted as gradients in stellar metallicity , as metallic line - strength indices show a similar radial gradient ( _ e.g. _ , davies _ et al . 
_however , compiling all measurements published to date on color- and line - strength gradients within ellipticals shows no obvious correlation ( cf .[ fig : gradients ] ) , suggesting that an additional process is ( partly ) responsible for the color gradients .although the presence of dust in ellipticals is now beyond dispute , the implications of dust extinction have been generally discarded in the interpretation of color gradients .however , recent monte carlo simulations of radiation transfer within ellipticals by witt _et al . _( , hereafter wtc ) and wise & silva have demonstrated that a diffuse distribution of dust throughout ellipticals can cause significant color gradients even with modest dust optical depths .we have used wtc s `` elliptical '' model to predict dust - induced color gradients appropriate to the far - ir properties of ellipticals in the `` rsa sample '' , as derived from the _ iras _ data .the model features king profiles for the radial density distributions of both stars and dust : ^{-\alpha/2}\ ] ] where represents the central density and is the core radius . the `` steepness '' parameter was set to 3 for the stars , and to 1 for the dust [ the reason for the low value of for the dust distribution is that it generates color gradients that are linear with log( ) , as observed ( paper i ; wise & silva ) ] . using this model , far - infrared - to - blue luminosity ratios and color gradientshave been derived as a function of the total optical depth of the dust ( _ i.e. _ , the total dust mass ) .the result is plotted in fig .[ iv : lirlbdbidr ] .it is obvious that color gradients in elliptical galaxies are generally larger than can be generated by a diffuse distribution of dust throughout the galaxies according to the model of wtc .this is as expected , since color gradients should be partly due to stellar population gradients as well .however , _ none _ of the galaxies in this sample has a color gradient significantly _ smaller _ than that indicated by the model of wtc .i argue that this is caused by a `` bottom - layer '' color gradient due to differential extinction , which should be taken seriously in the interpretation of color gradients in ellipticals .we have checked whether the assumption of the presence of a diffusely distributed component is also _ energetically _ consistent with the available _ iras _ data . to this end , we computed heating rates for dust grains as a function of galactocentric radius .we assumed heating by _( i ) _ stellar photons , using the radial surface brightness profiles from paper i , and _( ii ) _ hot electrons in x - ray - emitting gas , if appropriate .radial dust temperature profiles are derived by equating the heating rates to the cooling rate of a dust grain by far - ir emission . using the derived radial dust temperature profiles , we reconstructed _ iras _ flux densities for both the optically visible component and the ( postulated ) diffusely distributed component .apparently regular dust lanes were assumed to by circular disks of uniform density , reaching down to the galaxy nucleus .after subtracting the contribution of the optically visible component of dust to the _ iras _ flux densities , the resulting flux densities were assigned to the diffuse component . using the wtc model calculations ( cf .[ iv : lirlbdbidr ] ) , / ratios were translated into total optical depths of the dust ( and hence dust mass column densities ) . 
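a minimal python sketch of the bookkeeping involved is given below : a king - type density profile for the diffuse dust , its mass integrated over spheres , and the central column density converted to a visual optical depth ; all numerical values , including the adopted dust opacity kappa_v , are illustrative assumptions rather than values taken from the survey or from the wtc model .

```python
import numpy as np
from scipy.integrate import quad

def king(r, rho0, rc, alpha):
    """King-type profile rho(r) = rho0 * [1 + (r/rc)**2]**(-alpha/2)."""
    return rho0 * (1.0 + (r / rc) ** 2) ** (-alpha / 2.0)

# --- illustrative parameters (not fitted values from the survey) ---
rho0 = 1.0e-4         # central dust density [Msun / pc^3], placeholder
rc = 500.0            # core radius [pc], placeholder
alpha_dust = 1.0      # "steepness" adopted for the dust component
r_out = 3000.0        # outer radius of the diffuse component [pc], placeholder

# total dust mass: integrate 4*pi*r^2*rho(r) over spheres out to r_out
m_dust, _ = quad(lambda r: 4.0 * np.pi * r ** 2 * king(r, rho0, rc, alpha_dust),
                 0.0, r_out)

# central dust-mass column density: integrate rho along a full diameter
sigma_dust, _ = quad(lambda r: 2.0 * king(r, rho0, rc, alpha_dust), 0.0, r_out)

# optical depth from the column density; kappa_v is an ASSUMED Galactic-type
# extinction cross section per unit dust mass (~2.5e4 cm^2/g ~ 5 pc^2/Msun)
kappa_v = 5.0                                  # [pc^2 / Msun], assumption
tau_v = kappa_v * sigma_dust

print(f"diffuse dust mass ~ {m_dust:.2e} Msun, central tau_V ~ {tau_v:.2f}")
```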
dividing the dust masses of the diffusely distributed component by the dust mass column densities , outer galactocentric radii for the diffusely distributed dust componentwere derived ( typical values were kpc ) .finally , the _ iras _ flux densities were constructed from the masses of the diffusely distributed component by integrating over spheres ( we refer the reader to paper iv for details ) . a comparison of the observed and reconstructed _ iras _ flux densities ( cf .[ fig : compare100 ] ) reveals that the _ observed _ _ iras _ flux densities can _ in virtually all elliptical galaxies in the rsa sample _ be reproduced _ within the 1 uncertainties _ by assuming two components of dust in elliptical galaxies : an optically visible component in the form of dust lanes and/or patches , and a newly postulated dust component which is diffusely distributed within the inner few kpc from the center of the galaxies .we remind the reader that we have only considered dust which was detected by _ iras _ , _i.e. _ , with .in reality , the postulated diffuse component of dust in elliptical galaxies may generally be expected to extend out to where the dust temperature is lower .observations with the _ infrared space observatory ( iso ) _ of the rsa sample of elliptical galaxies are foreseen , and may reveal this cooler dust component in ellipticals .i am very grateful to the soc for allowing me to participate in this great conference .it is also a pleasure to thank drs.teije de jong , leif hansen , henning jrgensen , and hans - ulrik nrgaard - nielsen for their various contributions to this project . bertola f. , buson l.m . , zeilinger w.w . , 1988 ,nat 335 , 705 block d.l . ,, grosbl p. , stockton a. , moneti a. , 1994 , a&a 288 , 383 bregman j.n . ,, roberts m.s ., 1992 , apj 387 , 484 carollo c.m . ,danziger i.j . ,buson l.m . , 1993 ,mnras 265 , 553 carollo c.m . ,danziger i.j . , 1994 , mnras 270 , 523 de jong t. , nrgaard - nielsen h.u . , hansen l. , jrgensen h.e . , 1990 , a&a 232 , 317 davies r.l ., sadler e.m . , peletier r.f . ,1993 , mnras 262 , 650 draine b.t ., salpeter e. , 1979 , apj 231 , 77 emsellem e. , 1995 , a&a 303 , 673 faber s.m . ,gallagher j.s . , 1976 , apj 204,365 fabian a.c . ,nulsen p.e.j ., canizares c.r . , 1991 , a&ar 2 , 191 forman w. , jones c. , tucker w. , 1985 , apj 293 , 102 goudfrooij p. , _ et al ._ , 1994b , a&as 104 , 179 ( paper i ) goudfrooij p. , hansen l. , jrgensen h.e ., nrgaard - nielsen h.u ., 1994b , a&as 105 , 341 ( paper ii ) goudfrooij p. , de jong t. , nrgaard - nielsen h.u . , hansen l. , 1994c , mnras 271 , 833 ( paper iii ) goudfrooij p. , de jong t. , 1995 , a&a 298 , 784 ( paper iv ) greenberg j.m . , li a. , 1995 , in : _ the opacity of spiral disksdavies & d. burstein , kluwer , dordrecht , p. 19jura m. , kim d .- w . ,knapp g.r . , guhathakurta p. , 1987, apjl 312 , l11 macchetto f. , sparks w.b . , 1991 , in : _ morphological and physical classification of galaxies _ , eds .g. longo , m. cappaccioli & g. bussarello , kluwer , dordrecht , p. 191merritt d. , de zeeuw p.t ., 1983 , apjl 267 , l19 king i.r ., 1962 , aj 67 , 471 peletier r.f . , 1989 , ph .d. thesis , university of groningen phillips m.m ., jenkins c.r ., dopita m.a ., sadler e.m . , binette l. 1986 , aj 91 , 1062 sadler e.m . ,gerhard o.e . , 1985 , mnras 214 , 177 sandage a.r ., tammann g.a . , 1981 ,_ a revised shapley - ames catalog of bright galaxies _ , carnegie institution of washington sparks w.b . , macchetto f. , golombek d. , 1989 , apj 345 , 153 trinchieri g. 
, di serego alighieri s. , 1991 , aj 101 , 1647 van dokkum p.g . , franx m. , 1995 , aj 110 , 2027 vron - cetty m.p . , vron p. , 1988, a&a 204 , 28 wise m.w . , silva d.r ., 1996 , apj , in press ( noao preprint no .677 ) witt a.n ., thronson h.a .jr . , capuano j.m . , 1992 ,apj 393 , 611 ( wtc ) young j.s ., schloerb f.p . , kenney d. , lord s.d . , 1986 ,apj 304 , 443
|
|
the aim of this paper is to develop analytic tools in order to design a relevant mechanism for carbon markets , where relevant refers to emissions reduction . for this purpose, we focus on electricity producers in a power market linked to a carbon market . the link between marketsis established through a market microstructure approach . in this context ,where the number of agents is limited , standard game theory applies .the producers are considered as players behaving on the two financial markets represented here by carbon and electricity .we establish a nash equilibrium for this non - cooperative -player game through a coupling mechanism between the two markets .the original idea comes from the french electricity sector , where the spot electricity market is often used to satisfy peak demand .producers behavior is demand driven and linked to the maximum level of electricity production .each producer strives to maximize its market share . in the meantime, it has to manage the environmental burden associated with its electricity production through a mechanism inspired by the eu ets ( european emission trading system ) framework : each producer unit of emissions must be counterbalanced by a permit or through the payment of a penalty .emission permit allocations are simulated through a carbon market that allows the producers to buy allowances at an auction .our focus on the electricity sector is motivated by its prevalence in the emission share ( 45% of the whole emission level worldwide ) , and the introduction in phase iii of the eu ets of an auction - based allowance allocation mechanism . in the present paper , the design assumptions made on the carbon market aim to foster emissions reduction in the entire electricity sector .our approach proposes an original framework for the coupling of bidding strategies on two markets .given a static elastic demand curve on the electricity market ( referring to the time stages in an organized electricity market , mainly day - ahead and intra - day ) , we solve the local problem ( just a single time period of the same length for both markets ) of establishing a non - cooperative nash equilibrium for the two coupled markets .this simplification is justified here , as we aim to raise the condition under which a carbon market would be a real efficient instrument for carbon mitigation policies .this analysis is conducted for non - continuous and non - strictly monotone supply functions and bidding strategies on both markets in the complete information framework . while literature on applied game theory to strategic bidding on power markets mainly addresses profit maximization ( see eg with complete information , with private information , with incomplete information ) , our objective function is share maximization .the equilibria of the coupled markets are based on the full characterization of the equilibrium electricity price ( on the electricity market alone ) .we prove the uniqueness of the price and shares , for share maximization whereas , to our knowledge this property is not established ( under our hypotheses ) for profit maximization ( see eg ) .moreover , share maximization approach deals with profit by making specific assumptions , i.e. 
no - loss sales , and a tradeoff between the purchase of allowances and the carbon footprint of the electricity generated .hence , this work is the first attempt on power and carbon markets coupling through game theory approach .other coupling approaches use , for instance , models that produce dynamics for both electricity and carbon prices jointly , as in , . in section [ sec : market - rules ] , we formalize the market ( carbon and electricity ) rules and the associated admissible set of players coupled strategies .we start by studying ( in section [ sec : power - market ] ) the set of nash equilibria on the electricity market alone ( see proposition [ propo - nash ] ) .this set constitutes an equivalence class ( same prices and market shares ) from which we exhibit a dominant strategy .section [ sec : design ] is devoted to the analysis of coupled markets equilibria : given a specific carbon market design ( in terms of penalty level and allowances ) , we compute the bounds of the interval where carbon prices ( derived from the previous dominant strategy ) evolve .we specify the properties of the associated equilibria .in the electricity market , demand is aggregated and summarized by a function , where is the quantity of electricity that buyers are ready to obtain at maximal unit price .we assume the following : [ ass : demande ] the demand function is non - increasing , left continuous , and such that .each producer is characterized by a finite production capacity and a bounded and non - decreasing function \longrightarrow { \mathbb{r}}^{+} ] , right continuous and such that . for a non - decreasing strategy , is its generalized inverse function with respect to . given two strategies and such that , for all we have for any positive indeed , if then from which we deduce that is non - decreasing .next , if , for any fixed , we have from which the reverse order follows for the requests .we shall now describe the electricity market clearing .note that from a market view point , the dependency of the supply with respect to the marginal cost does not need to be explicit .for the sake of clarity , we write and instead of and .the dependency will be expressed explicitly whenever needed . by aggregating the asking size functions, we can define the overall asking function a producer strategy profile as : hence , for any producer strategy profile , is the quantity of electricity that can be sold on the market at unit price .the overall supply function is a non - decreasing surjection defined from to p ] as an output of a specific market clearing rule . to keep the price consistency ,the market rule must be such that for any two strategy profiles and , note that only if the demand curve is constant on some intervals ] to ] , which reduces to the point in various situations , in particular when strictly decreases at , or when is chosen equal to .proofs of _( i ) _ and _ ( iii ) _ , which are rather tedious due to non - strictly monotony and possible discontinuity of supply and offers , are postponed to appendix [ appendix : proofnash ] . from this pointwe restrict our attention to a particular market design . 
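a minimal python sketch of such a uniform - price clearing with step offers is given below ; the offer blocks , the demand curve and the pro - rata treatment of marginal blocks are illustrative assumptions and may differ from the exact clearing rule adopted in the paper .

```python
import numpy as np

# each producer offers a non-decreasing step function, given here as a list
# of (price, capacity) blocks: the quantity offered at any price >= block price
offers = {
    "A": [(10.0, 30.0), (25.0, 20.0)],   # 30 MWh at 10 EUR/MWh, 20 more at 25
    "B": [(15.0, 40.0)],
    "C": [(20.0, 25.0), (40.0, 25.0)],
}

def supplied(blocks, p):
    """Quantity a producer is willing to sell at clearing price p."""
    return sum(q for price, q in blocks if price <= p)

def demand(p):
    """Non-increasing demand curve (illustrative affine example)."""
    return max(0.0, 100.0 - 1.5 * p)

def clear(offers, p_max=200.0, tol=1e-6):
    """Smallest offered price at which aggregate supply meets demand, then a
    pro-rata allocation of the residual demand among the marginal blocks."""
    prices = sorted({price for blocks in offers.values() for price, _ in blocks})
    p_star = None
    for p in prices:
        if sum(supplied(b, p) for b in offers.values()) >= demand(p) - tol:
            p_star = p
            break
    if p_star is None:               # demand cannot be met: price cap applies
        return p_max, {k: supplied(b, p_max) for k, b in offers.items()}
    infra = {k: supplied(b, p_star - tol) for k, b in offers.items()}
    marginal = {k: supplied(b, p_star) - infra[k] for k, b in offers.items()}
    residual = max(0.0, demand(p_star) - sum(infra.values()))
    total_marginal = sum(marginal.values())
    ratio = min(1.0, residual / total_marginal) if total_marginal > 0 else 0.0
    shares = {k: infra[k] + ratio * marginal[k] for k in offers}
    return p_star, shares

p_star, shares = clear(offers)
print(p_star, shares)
```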
in the following, the scope of the analysis applies to a special class of producers , a specific electricity market price clearing ( satisfying definition [ def : clearingelec ] ) and a range of quantities of allowances available on the 2 market .although not necessary , the following restriction simplifies the development .* on the producers .* [ ass : producers ] each producer operates a single production unit , for which * the initial marginal cost contribution ( that does not depend on the producer positions in the 2 market ) is constant , } ] , , , ~j=1,\ldots , j.\ ] ] in this _ tax _ framework , the dominant strategy on the electricity market is also parametrized by as defined in .the clearing electricity price and quantities follow as price will be referred to as the _ taxed _ electricity price , by contrast with price issued from the _ marginal production cost strategy _ that results from the position on the carbon market .[ rem : diffdeprix ] considering a carbon tax and a carbon market strategy such that , we emphasize the fact that the corresponding electricity prices are not equivalent , but we always have the following inequality this follows from the fact that for all , and hence . the gap between and comes both from the width ( effect ) and the height ( penalty effect ) of their steps .we start with the following : [ lem : peleccroissante ] under assumption [ ass : elecclearingprice ] , the map is non - decreasing and right continuous .we determine the _ willing - to - buy - allowances functions _ and , as follows : for producer , is the quantity of emissions it would produce under the penalization , and consequently the quantity of allowances it would be ready to buy at price . given the 2 value , the total amount represents the allowances needed to cover the global emissions generated by the players who have won electricity market shares .we also define the functions given that the 2 value , is the amount of allowances needed by the producers who have won electricity market shares and want to cover their overall production capacity .obviously we have .\ ] ] moreover , [ lem : wdecreasing ] the function is non - increasing : the proofs of both lemma [ lem : peleccroissante ] and lemma [ lem : wdecreasing ] can be found in appendix [ appendix : prooflemmas1et2 ] . the main result of the section is the computation of the bounds of the interval in which the coupled carbon market nash equilibria prices evolve : we demonstrate that there is no possible deviation enabling a nash equilibrium carbon price outside this interval .the price bounds are elaborated as specific carbon prices associated to two explicit strategies , build from the _ willing - to - buy - allowances _ functions : the _ lower price strategy _ , and the _ higher price strategy_. in order to characterize further nash equilibria candidates , evolving in this price interval , we analyze a third set of strategies that are _ intermediate strategies_. those strategies rely on our last design assumption which prevents the carbon market from market failure : * on the carbon market design . *[ ass : hypotw ] the available allowances satisfy moreover , is chosen such that no producer is sidelined from the game : for all , is not identically zero on ] . since for , it follows that , \mbox { s.t . 
}\sum_{j } { { a}^{\tiny { { \mathcal{w } } } } } _ { j}(\tau ) > { \omega}\ } \geq { { \tau^{\text{\rm lower}}}} ] .this means that producer 1 may sell , for any tax level in ] .consider a deviation of player 1 , such that the resulting clearing price on 2 market , ] .the interval in which the coupled carbon market nash equilibria prices evolve is then ] .\(ii ) if there exists a favorable deviation from a producer , say producer 1 , that chooses instead of , such that , then there exists another favorable deviation defined by such that , and such that .( i ) _ follows directly from lemma [ lem : lemmastratunder]-_(i ) _ and lemma [ lem : lemmastratupper]-_(i)_. to prove _ ( ii ) _ , we first observe that , as producers are served first on the carbon market , moreover , we have , and from the 2 market mechanism it follows that since for any , it follows that . indeed , for strategy , the producers such that receive a quantity of quotas , from which .we also deduce that . to conclude , it is sufficient to notice that . the following aims to characterize the form of effective nash equilibria .let be an effective nash equilibrium ( i.e ) .then the following is also an effective nash equilibrium : from lemmas [ lem : lemmastratunder ] and [ lem : lemmastratupper ] , ] .once 2 is emitted into the atmosphere , it remains there for more than a century . estimating its value is an essential indicator for efficiently defining policy .carbon valuation is crucial for designing markets that foster emission reductions . in this paper , we established the links between an electricity market and a carbon auction market through an analysis of electricity producers strategies .we proved that they lead to the interval where relevant nash equilibria evolve , enabling the computation of equilibrium prices on both markets . for each producer , each equilibrium derives the level of electricity produced and the 2 emissions covered . for a given design and set of players ,the information provided by the interval may be interpreted as a diagnosis of market behavior in terms of prices and volume .indeed , it enables the computation of the 2 emissions actually released , and opens the discussion of a relevant carbon market in terms of mitigation issues .in addition to this analysis of the nash equilibrium we plan to analyze the electricity production mix , with a particular focus on renewable shares that do not participate in emissions .this work was partly supported by grant 0805c0098 from ademe .suppose that one producer , let us say producer , deviates and chooses instead of .we have to show that its market share can not be reduced by this deviation . by definition of the admissibility ( see ) we have .\ ] ] hence the offer functions defined by satisfy . by adding the unchanged offers of the other producers where denotes the strategy profile that includes producer 1 deviation . the minimum market clearing price for strategy profile is the minimum market clearing price for strategy profile is the inequality together with the fact that the demand is a non - increasing function imply that , from which , with we deduce that * we first consider the case where . 
* by definition of the minimum clearing price , the fact that and the fact that is non - decreasing , we have hence , from the market clearing we get according to definition [ def : quantities ] , let us denote we have since we get but for any , the quantity .as is non - decreasing ans since we have assumed , we get for such we thus have from which it follows that if , by the market clearing we get from , we have for hence , if is non empty then at least one producer exists , such that .and from the desegregation of and definition of it results that we note that and that by definition of .then since and , we can deduce that this follows from the fact that when , the map is decreasing on .as arguments are very similar to the proof of _( i ) _ , we just sketch them .let such that .assume that producer 1 is such that .+ if , then by the market clearing if , by the market clearing we get thus , assuming that , we note that since by definition of , as , we get * we prove that the quantities are the same for all nash equilibria . *let an other nash equilibrium that differs from .on the global offers we always have that implies note that when , all admissible strategies are nash as for all by the offers ordering , it is straightforward to show that assume that the quantities are not the same , then there exists a producer , say producer 1 , such that and we also have if , then by lemma [ lem : dominancebis ] , we have that and hence . in other words , has a strictly favorable deviation for producer 1 that contradicts the assumption that is a nash equilibrium .* we prove that the equilibrium best bid price is unique : , for an other nash equilibrium .* assume the contrary , .then by the definition of , we have that although the result of this lemma is intuitive , the proof is rather technical .this is due to our assumptions , in particular regarding demand , that allow the demand function to have discontinuity points and some non - elasticity areas ( see assumption [ ass : demande ] ) . [[ i - we - first - consider - the - case - dunderbarptau - mathcal - omathcal - otau - underbarptau . ] ] ( i ) we first consider the case .^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ this means that is of the form , for a given .then when is small enough , we also have .indeed , and for a small enough , thus , which implies that and hence [ [ ii - we - consider - next - the - case - dunderbarptau - mathcal - omathcal - otau - underbarptau . ] ] ( ii ) we consider next the case .^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ this means that is at a discontinuity point , say of the demand , .then , for any , but and we can choose to be small enough so that . then , for a small enough , which implies that , so we obtain [ [ iii - we - consider - now - the - case - dunderbarptau - mathcal - omathcal - otau - underbarptau . ] ] ( iii ) we consider now the case .^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ this means that , say then , for any , but , and we can choose small enough such that . 
then , for small enough , which implies that , so we get the right - continuity of follows , by definition as is a continuous transformation of .we define the function valued in the subsets of that lists the producers in the electricity market producing at tax level : in particular we have for all $ ] , we add the following shortened notation : .+ we break down and into the sets , and .we denote by the set of index such that .in particular , when , then . among the indexes in the set , we observe that at most one index exists ( say ) in the set .if , if , then , by the definition of the sets from which , we easily deduce that for and , we have from which , we also easily deduce that for the same and , for representative of index in , and representative of index in , we also have from which , we deduce that we multiply ( dc ) by , we get by we subtract ( eh ) we arrange the terms if exists , then and but , and the contradiction follows .+ if does not exist , then but , and the contradiction follows . *( ii - b-2 ) * if , we go back to the analysis of the case * ( ii - a-2 ) * , with the main difference that all quantities are now equal to .we go to inequalities and which are simplified as the right - had sides are now zero .the contradiction follows with the same arguments .mireille bossy , nadia mazi , geert jan olsder , odile pourtallier , and etienne tanr .electricity prices in a game theory context . in _ dynamic games :theory and applications _ , volume 10 of _ gerad 25th anniv ._ , pages 135159 .springer , new york , 2005 .
|
in this paper , we analyze nash equilibria between electricity producers selling their production on an electricity market and buying co2 emission allowances on an auction carbon market . the producers strategies integrate the coupling of the two markets via the cost functions of the electricity production . we set out a clear nash equilibrium on the power market that can be used to compute equilibrium prices on both markets as well as the related electricity produced and co2 emissions released .
|
photometric redshifts ( connolly _ et al . _ 1995 ,et al . _1998 , benitez 2000 ) are a key component of galaxy surveys .as surveys get larger , reducing statistical uncertainties , systematic errors become more important .systematic errors in photometric redshifts are therefore a top concern for future large galaxy surveys , for example as highlighted by the dark energy task force ( albrecht _ et al . _ 2006 ) .much of the concern has centered on `` catastrophic outliers '' which are galaxies for which the photometric redshift is very wrong , for example when mistaking the lyman break at for the 4000 break at very low redshift .even a small fraction of outliers can significantly impact the downstream science , and modeling this impact requires going beyond simple gaussian models of photometric redshift errors .in many cases , however , outliers are `` catastrophic '' only because they have a multimodal redshift probability , which can not be accurately represented by a single number such as the most probable redshift .fernandez - soto _et al . _ ( 2002 ) showed that after defining confidence intervals around the peaks , 95% of galaxies in their sample had spectroscopic redshifts within the 95% confidence interval and 99% had spectroscopic redshifts within the 99% confidence interval .yet the same data appear to contain catastrophic outliers on a plot where each galaxy is represented only by a point with symmetric errorbars .there is a second motivation for using the full .the redshift ambiguities described above are due to color - space degeneracies .but even without these degeneracies , photometric redshift errors should be asymmetric about the most probable redshift due to the nonlinear mapping of redshift into color space .avoiding biases from this effect also requires reference to .indeed , mandelbaum _et al . _ ( 2008 ) showed that using in sloan digital sky survey ( sdss ) data ( which are not deep enough to suffer serious degeneracies ) substantially reduced systematic calibration errors for galaxy - galaxy weak lensing . in this paperwe demonstrate , using simple simulations , the reduction in systematic error that can result from using the full in a deep survey with significant degeneracies .we also introduce a simple way to reduce the computational cost of doing so .we conducted simulations similar to those in margoniner & wittman ( 2008 ) and wittman _ et al . _ ( 2007 ) , which used the bayesian photometric redshift ( bpz , benitez 2000 ) code , including its set of six template galaxy spectral energy distributions ( seds ) and its set of priors on the joint magnitude - sed type - redshift distribution .we started with an actual r band catalog from the deep lens survey ( dls , wittman _et al . _2002 ) . for each galaxy , we used the magnitude to generate a mock type and redshift according to the priors , and then generated synthetic colors in the bvrz filter set used by dls .( the filter set is not central to the argument here , but one must be used for concreteness . ) we then added photometry noise and zero - point errors representative of the dls .the color distributions in the resulting mock catalog were similar to those of the actual catalog , indicating that the mock catalog is consistent with a real galaxy survey .we then ran the mock catalog of 83,000 galaxies through bpz , saving the full . 
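a schematic version of this mock-catalogue construction is sketched below. the prior, the templates and the noise levels are crude stand-ins for the actual bpz ingredients and dls photometry, so the sketch only illustrates the sampling logic (magnitude in; type and redshift drawn from a magnitude-dependent prior; synthetic colors plus photometric and zero-point noise out):

    import numpy as np

    rng = np.random.default_rng(0)

    # crude stand-ins for the bpz magnitude-dependent priors and the six sed templates
    def sample_type_and_redshift(r_mag):
        sed_type = rng.integers(0, 6)
        redshift = rng.gamma(shape=2.0, scale=0.25 + 0.02 * (r_mag - 20.0))
        return sed_type, redshift

    def synthetic_colors(sed_type, redshift):
        # stand-in for redshifting a template sed through the bvrz filters
        return np.array([0.5 * sed_type + redshift,
                         0.3 * sed_type - 0.5 * redshift,
                         0.1 * sed_type + 0.2 * redshift])

    r_mags = rng.uniform(20.0, 25.0, size=1000)       # stand-in for the dls r-band magnitudes
    zero_point = rng.normal(0.0, 0.02, size=3)        # one zero-point offset per color
    mock = []
    for m in r_mags:
        t, z = sample_type_and_redshift(m)
        colors = synthetic_colors(t, z) + rng.normal(0.0, 0.05, size=3) + zero_point
        mock.append((m, t, z, colors))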
in a post - processing stage, we can extract from the full not only the most probable redshift ( which had already been determined by bpz and labeled ) , but other candidate one - point estimates such as the mean and median of , as well as the summed for any desired set of galaxies .we first show one of the traditional one - point estimates to more clearly illustrate the problem .the left panel of figure [ fig - zmc ] shows the most probable redshift vs. true redshift . to accurately render both the high- and low - density parts of this plot ,we show it as a colormap rather than a scatterplot .the core is rather tight , requiring a logarithmic mapping between color and density to bring out the more subtle features in the wings . with this mapping ,the systematics are clear : a tendency to put galaxies truly at at very low redshift ; a tendency to put galaxies truly at low redshift at ; and asymmetric horizontal smearing in several different intervals , e.g. at .the specifics of the features depend on the filter set , but their general appearance is typical ( but note that they will be difficult to see in plots based on spectroscopic followup of deep imaging surveys , as the brighter , spectroscopically accessible , galaxies form a much tighter relation ) .corresponding plots are not shown for other one - point estimates such as the mean and median of , but they have similar to worse systematic deviations . which systematics are most important depends on the application .in this paper we consider two - point correlations of weak gravitational lensing ( cosmic shear ) , which require that the photometric redshift _ distribution _ of a sample of galaxies be as close as possible to the true redshift distribution ; errors on specific galaxies are not important .also , accurate knowledge of the scatter is much more important than minimizing the amount of scatter .thus , figure [ fig - zmc ] would ideally be symmetric about a line of unity slope , reflecting the fact that the photometric and true redshift distributions are identical .. 
] this is clearly not the case for the top panel of figure [ fig - zmc ] .ideally contains the required information lacking in the single number , but it is also more difficult to work with , requiring the storage and manipulation of an array of numbers for each galaxy .we simplify the computational bookkeeping by defining a single number which is by construction representative of the full .this estimate is simply a random number distributed according to the probability distribution and is denoted by because it is a monte carlo sample of the full .specifically , for each galaxy , a random number is drawn uniformly from the interval , and the monte carlo redshift is defined such that .figure [ fig - mcprocess ] illustrates the process .the bottom panel shows as usually plotted , while the top panel shows the cumulative , that is , the probability that the galaxy lies at redshift less than the value on the abscissa .a random number in the range 0 - 1 is drawn , in this case 0.32 , and the redshift at which the cumulative has a value of 0.32 ( dotted line ) is recorded as , in this case 0.47 .this results in a single number for each galaxy , which remains unbiased even if is multimodal and/or asymmetric .of course , some precision is lost in this process ; it should be avoided when studying a small number of galaxies in great detail , but for large samples of galaxies it must converge to the of the sample .furthermore , it requires only a minor modification to most photometric redshift codes .the bottom panel of figure [ fig - zmc ] shows vs. .clearly , the systematics are vastly improved . even with the logarithmic scaling , it is difficult to see departures from symmetry about a line of unity slope .we therefore compare one - dimensional histograms in what follows . a typical use of photometric redshifts in a galaxy survey will be to bin the galaxies by redshift , for example to compute shear correlations in redshift shells .because the true redshifts will not be known , the galaxies must be binned by some photometric redshift criterion . for simplicity , we choose .figure [ fig - sumpz ] ( upper panel ) shows true and inferred redshift distributions of galaxies in four bins : 0 - 0.1 , 0.4 - 0.5 , 0.9 - 1.1 , and 1.4 - 1.6 .the true distributions is shown in black , the distribution inferred from is shown in red , and the distribution inferred from summing the galaxies is shown in blue . the asymmetry and the wings of the true redshift distributionare well captured by or by summing . by comparison , using each galaxy s most probable redshift to infer the redshift distributions would have resulted in four vertical - sided bins , which would become roughly gaussian after convolving with the typical galaxy s uncertainty .it is clear that this would not capture the true redshift distribution nearly as well as the method does .for example , the inset in figure [ fig - sumpz ] shows a small high - redshift ( ) bump in the 0.4 - 0.5 bin which is captured by or by summing .looking only at the most probable redshift would result in these galaxies being considered catastrophic outliers . another way to reduce `` catastrophic outliers '' andrelated systematics might be to discard galaxies whose is multimodal , not sharply peaked , or otherwise fails some test .this may be effective , but it greatly reduces the number of galaxies available to work with . 
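the monte carlo estimate described above is simply an inverse-cdf draw from the p(z) of each galaxy. a minimal sketch, assuming the probability distribution is tabulated on a redshift grid as most photometric-redshift codes provide, is:

    import numpy as np

    def z_mc(z_grid, pz, rng=np.random.default_rng()):
        # one monte carlo redshift: invert the cumulative p(z) at a uniform deviate
        cdf = np.cumsum(np.asarray(pz, dtype=float))
        cdf /= cdf[-1]
        return np.interp(rng.uniform(), cdf, z_grid)

    # usage: a bimodal p(z) yields draws from both peaks in the right proportions
    z_grid = np.linspace(0.0, 4.0, 401)
    pz = np.exp(-0.5 * ((z_grid - 0.47) / 0.08) ** 2) \
         + 0.3 * np.exp(-0.5 * ((z_grid - 2.9) / 0.08) ** 2)
    print([round(z_mc(z_grid, pz), 2) for _ in range(5)])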
the and full methods accurately reflect the true redshift distributions without requiring any reduction in galaxy sample size .this is important for applications such as lensing , for which galaxy shot noise will always be an issue .having demonstrated that using ( whether by sampling or by using the full distribution ) greatly reduces photometric redshift systematics , two questions naturally arise .how much better is it in terms of a science - based metric ? and what are the remaining errors or limitations ?because and the full give very similar results , references to using in the remainder of the paper should be understood to include either implementation .for each of the four bins shown in fig .[ fig - sumpz ] , we compute the redshift bias ( inferred minus true mean redshift ) for the full and approaches .these are shown in fig .[ fig - zbias ] as the solid and dotted lines respectively ( the results for are nearly indistinguishable from those for the full and are not shown ) . for the dls filter set and noise model ,the bias is within a few hundredths of a unit redshift when using , but only within several tenths when using the most probable redshift . for lensing applications ,the bias at low redshift is exaggerated somewhat , because it results from a small outlying bump at high redshift , as shown in the inset of fig .[ fig - sumpz ] . in a real survey, these high - redshift interlopers would be smaller and fainter than the galaxies that really belong in the low - redshift bin , and therefore have less precise shape measurements and smaller weight on the shear statistics .we account for this effect by assigning a lensing weight to each mock galaxy based on its magnitude and drawn from the actual distribution of weights as a function of magnitude in the dls .this reduces the bias of the lowest - redshift bin to 0.02 for and 0.24 for , and has a progressively smaller effect on higher - redshift bins .fig . 7 of ma _ et al . _( 2006 ) shows the degradation in dark energy parameters for a wide and very deep weak lensing survey , as a function of the tightness of priors that can be put on the redshift bias and scatter in photometric redshift bins .it shows that loose priors of order 0.2 result in 80 - 85% degradation in estimates ( with respect to a survey with absolutely no redshift errors ) . if , on the other hand , one need only allow for a bias as with the approach , the degradation decreases to 50 - 60% . for estimating , the degradation decreases from a factor of six to a factor of about 2.5 by employing .these are only rough estimates , for a number of reasons .future surveys as deep as those contemplated by ma _et al . _ ( 2006 ) will use more extensive filter sets , which will probably improve the performance somewhat with respect to the performance . and nearer - term ,shallower surveys have looser redshift requirements because their shear measurements are not as precise .but it is clear that using greatly improves the survey at essentially no cost .we also conducted simulations using a more extensive filter set , to check the generalitythe simulations are simplistic in that the same six sed templates ( and priors ) used to infer are used to generate the mock catalogs . in real photometric redshift catalogs , will be less perfect because galaxy seds are more varied and priors are imperfectly known .smaller effects of the same nature include uncertainties in real - life filter and throughput curves , which are artificially reduced to zero here. 
however , these errors also affect the most probable redshift . therefore , although the simulated results presented here are optimistic overall ( given this filter set ) , the performance of _ relative _ to using the most probable redshift may not be .more sophisticated simulations beyond the scope of this paper will be required to determine the limits of accuracy for any given filter set and survey depth .the remaining redshift bias is not trivial , 0.01 - 0.02 .this is an order of magnitude larger than required to keep the degradation within 10% of an ideal survey ( for the deepest surveys ; requirements are less stringent for shallower surveys ) .however , a few factors are actually pessimistic here compared to future large surveys : the limited filter set , and relatively large zeropoint errors ( mag here vs 0.01 mag for sdss and future large surveys ). a more extensive filter set will improve _ both _ the and the etimates , but will probably improve the estimate more as it eliminates some degeneracies .again , survey - specific simulations will be required to make more specific conclusions .finally , there is a source of bias not simulated here : eddington ( 1913 ) bias .the type and redshift priors are based on magnitude , but at the faint end magnitudes are biased due to the asymmetry between the large number of faint galaxies that noise can scatter to brighter magnitudes , versus the smaller number of moderately bright galaxies that noise can scatter to fainter magnitudes .surveys wishing to derive photometric redshifts for galaxies detected at , say , 10 or fainter , must use the hogg & turner ( 1998 ) prescription for removing eddington bias from each galaxy s flux measurements if they are to avoid nontrivial systematic errors .we have shown that using the photometric redshift probability distribution greatly reduces photometric redshift systematic errors , as compared to using a simple one - point estimate such as the most probable redshift or the mean or median of .various authors have made similar points previously , particularly fernandez - soto _et al . _ ( 2002 ) , who wrote that `` this information [ p(z ) ] can and must be used in the calculation of any observable quantity that makes use of the redshift . ''however , adoption of this practice has been slow to nonexistent , even among authors who are aware of the point , because it is cumbersome to track a full for each galaxy .we have shown that a very simple modification to photometric redshift codes , namely choosing a monte carlo sample from the , produces a single number for each galaxy which greatly reduces the systematic errors compared to using any other one - point estimate such as the mean , median , or mode of .in contrast to approaches which simply reject galaxies which _ could _ be outliers , this method can make use of every galaxy in a survey .we have shown that this method results in substantial improvements in a flagship application , estimating dark energy parameters from weak lensing , at no cost to the survey .
|
we use simulations to demonstrate that photometric redshift `` errors '' can be greatly reduced by using the photometric redshift probability distribution rather than a one - point estimate such as the most likely redshift . in principle this involves tracking a large array of numbers rather than a single number for each galaxy . we introduce a very simple estimator that requires tracking only a single number for each galaxy , while retaining the systematic - error - reducing properties of using the full and requiring only very minor modifications to existing photometric redshift codes . we find that using this redshift estimator ( or using the full ) can substantially reduce systematics in dark energy parameter estimation from weak lensing , at no cost to the survey .
|
as the first generation of gravitational wave interferometers perform observations at or near their design sensitivities , new methods are being developed to detect and characterize gravitational wave burst signals .accurate determination of the source direction is fundamental for all analyses of a candidate signal .there are two approaches to localizing a source on the celestial sphere : coincident and coherent .coincident methods analyze the data from each detector separately and then identify events that occur simultaneously in multiple interferometers .the arrival times and amplitudes in each detector can then be used to determine the source direction if the signal is linearly polarized or has polarization peaks separated by less than the timing uncertainty .coherent methods combine the data streams from multiple detectors into a single statistic before reconstructing gravitational wave candidates .one previous coincident study on source location estimation was completed by cavalier et al . in that work ,arrival times ( assumed known ) and their uncertainties ( assumed gaussian ) were input to a minimimization routine to determine the source location .the approach is applicable to an arbitrary number of interferometers and shows excellent resolution in monte carlo simulations with arrival time uncertainties on the order of 0.1 ms . in this paperwe implement an equivalent least squares approach in the two and three interferometer cases , but also consider arrival time uncertainites up to 3 ms .these larger uncertainties have been observed in studies with real noise and reveal a systematic bias in the source directions obtained from timing reconstruction methods for moderate to low signal - to - noise ratios ( snrs ) .we define the snr as where is the fourier transform of the gravitational wave signal and is the one - sided power spectrum of the detector noise .we will show that the bias is most significant near the plane of interferometers in the three detector case , agreeing with the observation of reduced resolution in this region by .we compute a numerical table of corrections for the three - interferometer bias and note that a similar procedure may be followed for an arbitrary network of detectors . we also find that the bias may be corrected through the application of a more detailed parameter estimation , which has the effect of reducing arrival time errors .our application of the least squares estimator with bias correction to simulated data is the first test of coincident source localization with simulated detector noise .coherent methods also offer promise for source direction estimation .the first coherent analysis was described by grsel and tinto .their approach combines the data from a three interferometer network to form a `` null stream '' where the gravitational wave signal should be cancelled completely if the assumed source direction is correct .the true source direction is estimated by looping over a grid of angular locations and minimizing the result . other grid - based approaches to source localization include the maximum likelihood approach of flanagan and hughes and the constraint likelihood method of klimenko et al .the required resolution of the grids used in these analyses increases with the maximum frequency of the search ; for ligo burst searches that extend past hz approximately grid points are necessary . 
in practice much coarser grids must be used due to computational limitations , potentially leading to missed minima and incorrect directional estimates .it should also be noted that coherent methods implemented with constraints are insensitive to some signal morphologies and ( small ) sky regions .the amplitude test described here is similar to the null stream approach , but does not require minimization over a grid because it works in terms of the root - sum - squared strain amplitude throughout this paper will refer to the `` intrinsic '' gravitational wave signal at the earth , i.e. prior to reduction by the detector antenna pattern .the test uses the values from three non co - located interferometers to choose between the two possible source directions given by timing considerations and is effective for moderate to high snr .the organization of this paper is as follows .section ii describes the angular bias in the least squares approach to source localization using arrival times from either two or three interferometers . in each case , a monte carlo simulation is pursued and used to characterize and numerically correct the observed bias .section iii introduces the aforementioned amplitude check that can be used to differentiate between the two possible source locations given by arrival time considerations in the three interferometer geometry .both the time - based source direction estimator in ( ii ) and the amplitude check in ( iii ) were tested on ligo - virgo simulated data and the results are described in section iv .in the following we examine the standard least squares approach to source localization for both the two and three interferometer geometries . we restrict ourselves to studying short duration ( 1 s ) `` burst '' signals so that the motion of the earth may be neglected .we also assume that the difference in travel time between sites is due only to the direction of the source , as explained in the introduction .is constrained in the two interferometer ( ifo ) case , while both angles and are constrained in the three ifo case.,scaledwidth=45.0% ] when the network consists of two detectors at different locations , the angle between the source unit vector and the vector connecting the sites ( baseline ) is given by where is the time delay , is the distance between detectors , and is the speed of light .this relationship is only exact in the absence of noise and constrains the source direction to a ring on the sky .when the arrival time estimates are affected by noise , the following least squares estimator can be used : this is the maximum likelihood estimator and is the optimal choice for small arrival time errors . note that the addition of noise introduces timing errors that may lead to estimated time delays greater than the light travel time between detectors .these unphysical delays are mapped to the polar angle that produces the time delay closest to what is observed .= = 0.1 ms , 1 ms.,scaledwidth=45.0% ] = = 0.1 ms , 1 ms.,scaledwidth=45.0% ] the characteristics of the estimator ( 6 ) were studied through multiple monte carlo simulations . in the first simulation we assumed that the distribution of measured arrival times for a given source location and detector follows a gaussian distribution with mean equal to the true arrival time and variance equal to the variance of the arrival time estimate . 
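a compact version of this first (gaussian) monte carlo is sketched below. the 10 ms light travel time is roughly the h1-l1 baseline, the 1 ms delay uncertainty is a placeholder, and unphysical delays are clipped onto the baseline as in the estimator (6):

    import numpy as np

    rng = np.random.default_rng(1)
    travel_time = 0.010     # light travel time between the two sites, seconds (~h1-l1)
    sigma_tau = 1.0e-3      # placeholder uncertainty on the measured delay

    theta_true = np.radians(np.arange(1.0, 180.0, 1.0))
    bias = np.empty_like(theta_true)
    for i, th in enumerate(theta_true):
        tau = travel_time * np.cos(th) + rng.normal(0.0, sigma_tau, size=20000)
        # delays with |tau| > travel_time are unphysical and are mapped onto the baseline
        theta_hat = np.arccos(np.clip(tau / travel_time, -1.0, 1.0))
        bias[i] = theta_hat.mean() - th
    print("largest |bias| in degrees:", np.degrees(np.abs(bias)).max())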
in the second simulation we assumed that the distribution of measured arrival times for a given source location and detector follows a multimodal distribution consisting of a gaussian main lobe and two side lobes each containing half as many points as the main lobe .this second scenario is often the case for signals with multiple peaks . in both cases we chosecoordinates as indicated in figure 1 and used the time delay provided by the ligo livingston ( l1 ) - ligo hanford ( h1 ) baseline .twenty thousand sets of arrival times ( with simulated error ) from 1800 different sky positions were produced via a gaussian random number generator .the sky points were spaced by on the = meridian of our symmetric coordinate system . for each sky location ,the estimator expectation value and its variance were computed .a nonzero systematic bias was observed and is plotted against true polar angle in figures 2 and 3 .the bias in each trial is similar and in each case increases with the variance of the arrival time distribution .we observed the bias to grow with the arrival time uncertainty , and expect it to maintain a similar shape for symmetric distributions .the bias is a result of two features of the estimator : 1 . )mapping of unphysical time delays ( ) to the two interferometer baseline has the effect of pushing the expectation value of the estimated angle towards the equator . 2 . )the nonlinear mapping between the time delay and the angle contributes to the bias through the following relation : this implies that the distribution will reflect the distribution of time delays modulated by a sinusoidal jacobian . assuming a normal distribution for results in the bias shape seen in figure 2 , while the multimodal distribution discussed above yields the shape seen in figure 3 .note that the bias is a result of the network geometry and is therefore independent of coordinate choice .we verified this statement numerically by observing the same bias for every sky position when was chosen along an axis different from the interferometer baseline .we also verified this property in the three interferometer case , as discussed below . in the three interferometer casethere are two independent baselines , so a least squares estimator will constrain the source direction to two patches on the sky .these patches will be mirror images around the plane formed by the three detectors and are indistinguishable because they yield identical time delays .the plane of interferometers is the y - z plane in our chosen coordinate system as shown in figure 1 .with these coordinates , we can easily determine the output of a least squares estimator given three arrival times .the delay times , and their uncertainties are defined in terms of the three arrival times , , : the uncertainties were defined assuming a gaussian spread around each true arrival time .using these definitions the least squares location estimator ( also the maximum likelihood estimator if the arrival times follow a gaussian distribution ) is equivalent to the following : 1 . ) if & : 2 . )if & : 3 . )if & : 4 . )if & : 5 . )if & : 6 . 
)if & : the case ( 1 ) occurs when both time delays are unphysical and pins the source location on the baseline that is the most standard deviations from physical .the cases ( 2 ) and ( 3 ) occur when one time delay is unphysical and places the source location along the appropriate baseline .the cases ( 4 ) and ( 5 ) are a result of analyzing one angle first , and place the source at the correct azimuthal angle if the second time delay is unphysical given the first .the last case is the only one where both time delays together yield a physical result .note that the choice of is arbitrary when the source is located on a pole .= = = 1 ms .note that only one hemisphere is shown here as we assume the correct sky patch is chosen and the result in the other atmosphere will be the mirror image of this plot across the y - z plane.,scaledwidth=45.0% ] = = = 1 ms .this magnitude is defined by ( 22 ) , and the color scale is in degrees .note that only one hemisphere of the sky is shown here , as the bias of the other hemisphere is its mirror image across the y - z plane.,scaledwidth=45.0% ] = = = 1 ms.,scaledwidth=45.0% ] the performance of this least squares analysis was evaluated through a monte carlo simulation similar to that used for the two interferometer case and similar to those conducted in .the simulation consisted of 8281 grid points spread isotropically across one hemisphere .the hemisphere was chosen to be in our symmetric coordinates ( figure 1 ) ; the results for the other hemisphere would be a mirror image of these . for 3000 iterations at each grid point , the true arrival time in each detector was added to normally distributed random noise .this gaussian noise was taken to have zero mean and variance equal to that of the arrival time .the average angular error in the location reconstruction of this simulation is shown in figure 4 .this error was calculated by first computing the angle between the true source position ( grid point ) and estimated location in each trial .these angular errors were then averaged for each true position ( grid point ) and turned into the map shown in figure 4 . as in the two interferometer case ,a systematic bias made a significant contribution to the overall error .we define the bias in the polar and azimuthal angles as and respectively .note that trials where the source was placed on a pole were only counted toward the expectation value , as is well - defined at these points but is not .the total magnitude of the angular bias was calculated using all trials and is given by where is the unit vector in the direction and is the unit vector in the direction of the source .a skymap of the value of for the hanford - livingston - virgo ( h1-l1-v1 ) network is plotted in figure 5 , while figure 6 shows the direction of the bias .an arrival time uncertainty of 1 ms was assumed for all detectors in both plots ; in general the bias increases as arrival time uncertainty increases .note that the location estimation performed by this algorithm and the bias correction are independent of coordinate choice .this is because the estimator places the source in the bin that yields time delays closest to what is observed regardless of whether the input time delays are physical .we verified this property by recomputing the bias in an arbitrary coordinate system and observing the same results upon rotation back to our original coordinates .the bias observed in the three interferometer geometry can be easily corrected numerically . 
for a given network configuration and set of arrival time uncertainties, one can construct a sky map of the effect of the bias .if the output of the least squares estimator is then assumed to be equal to the expectation value of the estimator , the output coordinates can be matched to a term in the bias array and the effect of the bias subtracted out . as evidenced by figures 4 and 5, this correction can reduce the mean square errors on the estimated angles significantly .the numerical bias correction was applied to ligo - virgo simulated data and its performance is described in section iv .measurements of the gravitational wave signal amplitude at different sites in a network provide valuable information for source localization . in the followingwe will describe an amplitude check that resolves the source direction degeneracy inherent in the three interferometer timing analysis , provided the signal - to - noise ratio is sufficiently high in each detector .the antenna patterns will be calculated using the method of thorne as presented by yakushin , klimenko , and rakhmanov .+ in the source frame , the gravitational wave metric can be written in the transverse traceless gauge as where and are the two independent polarizations of the source . for gravitational wave bursts of short duration, the time dimension can be taken to be fixed and we can define the traceless and transverse vectors and the strain tensor is then the signal in an interferometer due to an incident gravitational wave is given by where and are the antenna patterns of the detector with respect to the two polarizations .as shown in , these coefficients are equal to and where is the rotation from the interferometer frame to the source frame .this rotation matrix may be decomposed into three euler rotations , as shown in figure 7 .the total rotation is thus where is the polarization angle .now note that where has the same form as with and since the transverse traceless form of is preserved by the final euler rotation , we may absorb the polarization angle into the choice of ( ) .this choice will not affect the response of any interferometer to the gravitational wave source , but does allow us to write new antenna patterns as a function of only the source direction .the plus check is given by ( 40 ) while the minus check is given by ( 41).,scaledwidth=45.0% ] the response of a three interferometer network to gravitational radiation from direction ( ) will have the following form : here the terms are the gravitational wave signals , and are the time delays from ifo 1 to ifo 2 and ifo 3 , and and are the plus and cross antenna patterns in each detector .note that the time delays are a function of the source direction and we have chosen the source coordinate system so as to set the polarization angle to . combining ( 34 ) - ( 36 ) , we can write one signal in terms of the others : where and many pipelines produce an estimate of ( defined by ( 4 ) ) to quantify the strain amplitude associated with a burst event. 
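for reference, one common explicit form of the single-detector antenna patterns (an l-shaped detector with perpendicular arms along its own x and y axes, and euler angles theta, phi, psi relating the detector frame to the wave frame) is sketched below; sign and angle conventions differ between authors, so this should be read as one standard choice rather than as the exact expressions used in this derivation:

    import numpy as np

    def antenna_patterns(theta, phi, psi):
        # plus and cross responses of an l-shaped interferometer (one common convention)
        fplus = 0.5 * (1.0 + np.cos(theta) ** 2) * np.cos(2.0 * phi) * np.cos(2.0 * psi) \
                - np.cos(theta) * np.sin(2.0 * phi) * np.sin(2.0 * psi)
        fcross = 0.5 * (1.0 + np.cos(theta) ** 2) * np.cos(2.0 * phi) * np.sin(2.0 * psi) \
                 + np.cos(theta) * np.sin(2.0 * phi) * np.cos(2.0 * psi)
        return fplus, fcross

    # the detector strain is then h = fplus * hplus + fcross * hcross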
we can rearrange ( 37 ) to include terms of this type : or the last terms in ( 40 ) and ( 41 ) are coherent and refer to the sum / difference of the signals from interferometers 1 and 2 .both terms are heavily dependent on accurate arrival time recovery .the relations ( 40 ) and ( 41 ) can be used to resolve the source direction degeneracy of the three detector geometry by comparing the relative errors of the true location and its mirror image .either test ( 40 ) or ( 41 ) may be more effective depending on the intererometer network and source .a simulation was performed to determine which check was more efficient for the h1-l1-v1 network .linearly polarized gravitational wave sources were placed on an isotropic grid of 4060 points with four different polarization angles chosen randomly from a uniform distribution between and to are identical to those from to , while only a minus sign differentiates the antenna patterns from to and to . the minus sign should not affect the values used in the check , so only polarizations between and were considered .] at each point .note that the poles and points in the plane of the three detectors were excluded as the checks are irrelevant in those directions .the antenna patterns and ratios for each trial were calculated and then added to noise to see which test held up better as the accuracy of the estimates deteriorated . specifically , each of the three values and the sum / difference termwere multiplied by one plus a normally distributed random variable with zero mean and standard deviation equal to a specified fraction of the true .this process was repeated for each trial and the results were fed to the checks ( 40 ) and ( 41 ) and averaged over all trials to yield an efficiency ( i.e. the fraction of times the check was successful ) for a given relative uncertainty in .this was repeated for several fractions between 0 and 1 , giving the plot shown in figure 8 .this plot demonstrates that ( 40 ) is on average more effective for the chosen network , so ( 40 ) was chosen for all subsequent tests .the directional estimator of section ii and the amplitude check of section iii were both applied to ligo - virgo simulated data .reference describes in detail how the noise in the three interferometers ( h1 , l1 , v1 ) was produced by filtering stationary gaussian noise so as to resemble the expected spectrum at design sensitivity in each detector .random phase modulation and sinusoids were used to model resonant sources ( lines ) .six types of limearly polarized simulated signals were produced using the graven algorithm : sine gaussians with q=15 and central frequencies of 235 ( sg235 ) and 820 ( sg820 ) hz , gaussians of duration 1 ( gauss1 ) and 4 ( gauss4 ) ms , and two families of dimmelmeier - font - mueller ( dfm ) supernova core collapse waveforms with parameters a=1 , b=2 , g=1 and a=2 , b=4 , g=1 .these waveforms are shown in figure 9 .the injections were configured as above to produce an isotropic distribution of 3960 points on the sky , with each of these grid points being tested with 4 different polarization angles .the waveforms were delayed and scaled appropriately and added to the noise of each detector for analysis .the process was repeated for each of the six waveforms . 
a stand - alone parameter estimation was used to determine the arrival times , their uncertainties , and the values in each trial .the parameter estimation processed input data by first applying high pass ( 100 hz ) and linear predictor ( whitening ) zero - phase filters .the noise power spectrum was determined by averaging the spectra of an interval before and an interval after the injected signal .this power spectrum was subtracted from the spectrum of the interval containing the signal and the result divided by the whitening filter frequency response .the measured amplitude was computed over the band from 100 to 1000 hz as subject to a threshold on the signal - to - noise ratio . specifically , the data was broken into frequency bins and the content in a given bin [ ( ) was only counted if this snr threshold was chosen to emphasize components with moderate visibility above the noise floor while reducing spurious excess power .the estimated signal power spectrum was also used to determine the frequency band that contained the middle 90% of the signal power .the original whitened time series was zero - phase bandpassed according to this estimate and its maximum taken as the arrival time .the arrival time uncertainty was taken to be the cramer - rao lower bound of the arrival time estimator .the directional estimator consisted of the standard least squares estimator with bias correction as described in section ii .the bias correction array had directional resolution of 2 degrees azimuthally and 90 bins evenly spaced in the cosine of the polar angle .its timing resolution was 0.1 ms in the arrival time uncertainty at each detector .trials were conducted with intrinsic inband varying from to , corresponding to realistic to loud astrophysical sources .these amplitudes were scaled down by the antenna pattern before being added to the detector noise , on average multiplying the incident by 0.38 .the plot in figure 10 shows the average angular error for all waveforms as a function of intrinsic inband .this error is defined as .fraction of events with sufficient energy for parameter estimation to estimate parameters .note that this fraction is larger than what would be detected by a search algorithm tuned to the same frequency range due to inclusion of low energy events that would fall below standard detection thresholds .the performance of the directional estimator presented here is therefore a conservative estimate due to the inclusion of these low energy events . [cols="^,^,^,^,^,^",options="header " , ] where is the unit vector in the direction of the estimate and is the unit vector in the direction of the source .the average was taken over all sky locations and polarization angles where a signal was visible to the parameter estimation in all three detectors , and was found to be virtually identical with and without the bias correction . as can be seen from figure 11 , a few outlying samples caused the average error in each case to be significantly higher than most individual errors .these outliers were due to incorrect peak time estimates from the parameter estimation .in most cases the algorithm picked a secondary peak of the signal ( see figure 12 ) , causing the arrival time estimate to be off by a few milliseconds and resulting in a large error in the directional estimate .the sg820 and dfm a1b2g1 waveforms were particularly susceptible to this , as shot noise becomes firmly established at the higher frequencies where these signals are concentrated. 
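the amplitude part of this parameter estimation can be sketched roughly as follows. the frequency resolution, the per-bin threshold and the spectral normalisation are placeholders, the two off-source segments are assumed to have the same length as the on-source segment, and the correction for the whitening-filter response used by the real pipeline is omitted:

    import numpy as np

    def estimate_hrss(on_source, off_a, off_b, fs, band=(100.0, 1000.0), snr_min=2.0):
        # toy excess-power estimate of the root-sum-squared strain in a band
        n = on_source.size
        dt = 1.0 / fs
        freqs = np.fft.rfftfreq(n, d=dt)
        df = freqs[1] - freqs[0]
        esd = lambda x: 2.0 * np.abs(np.fft.rfft(x) * dt) ** 2   # one-sided energy spectral density
        noise = 0.5 * (esd(off_a) + esd(off_b))                  # noise spectrum from off-source data
        excess = esd(on_source) - noise                          # estimated signal spectrum
        keep = (freqs >= band[0]) & (freqs <= band[1]) & (excess > snr_min * noise)
        return np.sqrt(max(0.0, np.sum(excess[keep]) * df))      # hrss**2 = integral of one-sided |h(f)|**2

    # the 90% power band and the arrival time (peak of the band-passed whitened series)
    # would be read off the same excess spectrum and time series.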
the lack of impact of the bias correction can be attributed to the relatively small timing uncertainties ( approximately 0.1 ms ) given by the parameter estimation .the bias is negligible for such small timing errors , so the correction did not lead to substantial improvement in the trials undertaken . even with weaker input signals , the bias correction was seen to be inconsequential as the arrival time uncertainties remained low .it should also be noted that the errors in the arrival time estimates were observed to follow multi - modal distributions in waveforms with multiple peaks , as the most significant errors were a result of the parameter estimation choosing the wrong peak . .in all cases the effect of the bias correction was negligible due to sufficiently small arrival time uncertainties.,scaledwidth=45.0% ] of .note the logarithmic scale of the y - axis.,scaledwidth=45.0% ] the observed angular uncertainties can be placed in context through comparison with the classical limit commonly used in radio astronomy .the classical limit is given by where is the wavelength of the radiation being considered and is the baseline of the network . using the longest such baseline in our network ( h1-v1 ) and assuming a frequency in the middle of our sensitive band ( 550 hz ) gives a classical limit for our study of .this differs by less than an order of magnitude from the observed average angular error for all waveforms at an intrinsic inband of , which corresponds to a loud astrophysical source ( snr 20 ) . for each of the sample waveforms .the values shown here were scaled down further by the antenna factors before entering the data .note that these curves were generated assuming that the time delays between ifos 1 and 2 were known exactly.,scaledwidth=45.0% ] the same distribution of points , polarization angles , and signal strengths was used to determine the efficiency of the amplitude check ( 40 ) as a function of input signal amplitude .the simulations were again generated in terms of intrinsic inband and scaled down by their antenna factors before being added to the data .note that the reported efficiencies are an average over all directions and polarization angles where the signal was detected in all three interferometers .two sets of trials were conducted : one where the exact arrival times were assumed to be known for forming the last ( coherent ) term in ( 40 ) and one where the parameter estimation peak times were used for its determination .the efficiencies were only slightly better with the former `` aligned '' approach , showing that the peak times from the parameter estimation are sufficient to accurately calculate the coherent term .the efficiencies for the sg820 and dfm a1b2g1 waveforms were slighly below those of the other waveforms due to their power being concentrated at higher frequencies where the interferometer noise is worse .this work focused on two aspects of coincident source localization .the first was a least squares analysis that used signal arrival times and their uncertainties to estimate the direction of the source .this estimator was studied and found to be biased in both the two and three interferometer cases , with the effect increasing with arrival time uncertainty .the cause of the bias was seen to be twofold as both the allocation of unphysical time delays to their most likely direction and the nonlinear mapping between the time delays and source coordinates contributed to the effect .the bias will exist for networks with an arbitrary number of detectors 
, and can be determined numerically through monte carlo simulations with a minimization routine such as that described in . in each case the bias can be corrected numerically with the correction being relevant for arrival time uncertainties greater than about 1 ms .studies on real data indicate that such uncertainties are probable using conventional detection algorithms .our analysis indicates that the bias may also be corrected through the reduction of arrival time errors provided by a thorough parameter estimation such as the one used here .the second piece of coincident analysis described here was an amplitude check , applicable to both linear and multiply polarized signals , that can be used to resolve the source location degeneracy inherent in the three interferometer geometry .while this analysis is not coincident in a strict sense ( one term requires combined data streams ) , it is extremely simple and so provides a lightweight alternative to more sophisticated methods .as shown in section iv , the check is effective for realistic to loud sources , particularly those where the peak time may be recovered accurately . one potential application for these source localization techniques would be the generation of skymaps of background and foreground triggers . in this case the amplitude check would improve the efficiency of a distributional test based on sky locations even for cases where its efficiency is only slightly better than 50% .we would expect such maps to be isotropic based on current detector sensitivity ( i.e. non - gravitational wave triggers dominate ) but in the the future such plots may give us a picture of the gravitational wave sky .methods such as those described above are useful in that they are inexpensive computationally and quite accurate in most cases . however they may struggle if a source has similar amplitudes in both the + and polarizations as well as a delay between the peaks of the waveforms . due to the differing antenna responses of the members of an interferometer network ,peaks corresponding to the different polarizations may be recorded in different detectors , leading to an inconsistency in arrival time estimates that can not be accounted for with coincident methods .the likelihood of this situation is currently under investigation , as is the performance of the algorithms presented here on real data with randomly polarized signals .ligo was constructed by the california institute of technology and massachusetts institute of technology with funding from the national science foundation and operates under cooperative agreement phy-0107417 .this paper has ligo document number ligo - p080003 - 00-z .
|
in this article we study two problems that arise when using timing and amplitude estimates from a network of interferometers ( ifos ) to evaluate the direction of an incident gravitational wave burst ( gwb ) . first , we discuss an angular bias in the least squares timing - based approach that becomes increasingly relevant for moderate to low signal - to - noise ratios . we show how estimates of the arrival time uncertainties in each detector can be used to correct this bias . we also introduce a stand alone parameter estimation algorithm that can improve the arrival time estimation and provide root - sum - squared strain amplitude ( ) values for each site . in the second part of the paper we discuss how to resolve the directional ambiguity that arises from observations in three non co - located interferometers between the true source location and its mirror image across the plane containing the detectors . we introduce a new , exact relationship among the values at the three sites that , for sufficiently large signal amplitudes , determines the true source direction regardless of whether or not the signal is linearly polarized . both the algorithm estimating arrival times , arrival time uncertainties , and values and the directional follow - up can be applied to any set of gravitational wave candidates observed in a network of three non co - located interferometers . as a case study we test the methods on simulated waveforms embedded in simulations of the noise of the ligo and virgo detectors at design sensitivity .
|
toroidal structures are now detected in astrophysical objects of various types .such objects are , for example , the ring galaxies , where a ring of stars is observed . in some galaxies , the ring - like distribution of stars is believed to be due to collisions of galaxies , as , for example , in m31 ( block et al .1987 ) , and arp 147 ( gerber et al .the analysis of the sdss data ( ibata et al .2003 ) indicates the existence of a star ring in the milky way on scales of about 15 - 20 kpc , which is believed to be originated from the capture of a dwarf galaxy .obscuring tori are observed in central regions of active galactic nuclei ( agn ) ( jaffe et al .2004 ) and play an essential role in the unified scheme ( antonucci 1993 ; urry & padovani 1995 ) .ring - like structures exhibit themselves in dark matter as well .an example can be the galaxy cluster c10024 + 17 where a ring - like structure has been found in distribution of dark matter with the use of gravitational lensing method ( jee et al .2007 ) . in the milky way, the rotation curves together with the egret data can be explained by existence of two rings of dark matter located at distances of about 4 kpc and 14 kpc from the galaxy center ( de boer et al .such toroidal structures can possess a significant mass , and thus gravitationally affect the matter motion .b. riemann devoted one of his last works to the gravitational potential of a homogeneous torus ( see collected papers , 1948 ) .this work remained unfinished however .for over a century , no attention has been paid to a torus gravitational potential .kondratyev ( 2003 ) has returned to this problem for the first time . in this workan exact expression for the potential of a homogeneous torus on the axis of symmetry was obtained . in ( kondratyev 2007 ) the integral expressions for a homogeneous torus potentialwere found using a disk as a primordial gravitating element .stacking up such disks will result in a torus with the potential equal to a sum of potentials of component disks .however , it is evident that any integral expressions are problematic to use both in analytic studies and in numerical integration of motion equations , and also in solving the problems of gravitational lensing .b.kondratyev et al .( 2009 , 2010 ) have obtained an expansion of torus potential in laplace series , but showed , however , that such an expansion is impossible inside some spherical shell . in this paperwe propose a new approach to investigation of the gravitational potential of a torus .special attention has been paid to finding approximate expressions for the potential , which would simplify investigation of astrophysical objects with gravitating tori as structural elements .in contrast to ( kondratyev 2007 ) , we used an infinitely thin ring as a torus component .such ring is actually a realization of a torus , with its minor radius tending to zero and the major one equaling the ring radius .using such an approach , we obtained an integral expression for the potential of a homogeneous circular torus ( section 2 ) and approximate expressions for the potential in the outer ( section 3 ) and inner ( section 4 ) regions . in section 5 , the method of determining a torus potential for the entire region is suggested .compose a torus with mass , outer ( major ) radius and minor radius , of a set of infinitely thin rings - component rings hereafter , - ( see fig . 1 ) , with their planes being parallel to the torus symmetry plane . 
select a central ring with mass and radius from a set of rings composing a torus .the potential produced by this ring at an arbitrary point has a form : where the dimensionless potential of the infinitely thin ring is is the complete elliptical integral of the first kind , and its parameter is the potential at a point produced by an arbitrary ring with radius and mass located in the torus at a hight ( fig . 1 ) , has a form where the expression for is obtained by substitution in ( [ eq_2.2 ] ) of a kind and .denote a ring coordinate and by the ring distance from the torus plane of symmetry . ]counted off the center of a torus cross - section ( fig .1 ) by , and therefore , its radius is .we may conveniently introduce the dimensionless coordinates , and , that will result in an expression for the dimensionless potential of the form where from a condition of the torus homogeneity , mass - to - radius ratios for the central and arbitrary component rings are the same , and thus , .expression for the potential of the component ring is then where is determined by expressions ( [ eq_2.5 ] ) , ( [ eq_2.6 ] ) . due to additivity, the torus potential can be represented as the integral over potentials of the component rings . to do this ,we replace a discrete mass of the ring in ( [ eq_2.7 ] ) by a differential , which for the homogeneous torus equals , where is a total mass of the torus equaling to a sum of masses of the component rings , and is a dimensionless minor radius of the torus ( a geometrical parameter ) .then , the potential of the homogeneous circular torus takes the form this integral expression for the torus potential is valid for both the inner and outer points .the validity of expression ( [ eq_2.8 ] ) is confirmed by calculation of the potential made by direct integration over the torus volume .hereafter , in analyzing approximate expressions , we will use the term `` exact '' for the values of potential obtained from the integral formula ( [ eq_2.8 ] ) . in fig . 2 , dependencies of the torus potential on the radial coordinateare presented , which were obtained numerically from formula ( [ eq_2.8 ] ) for tori with different values of the geometrical parameter .the potential curves for all values of are seen to be inscribed into the potential curve of the infinitely thin ring of the same mass and radius , located in the torus symmetry plane .the potential curve to the right of the torus surface ( ) virtually coincides with the potential curve of the ring , while to the left ( ) , it passes lower and differs by a quantity that depends on ( see section 3 ) . in fig .3 the dependencies of the torus potential on the radial coordinate are presented , which were calculated from expression ( [ eq_2.8 ] ) for different values of .note , that in contrast to the work by kondratyev ( 2007 , p.196 , expression ( 7.26 ) ) , where the torus potential is expressed only through a single integration of the elliptical integrals of all the three kinds , the torus potential ( [ eq_2.8 ] ) in our work is expressed by double integration of the elliptical integral of the first kind .however , further analysis of this expression for the torus potential ( [ eq_2.8 ] ) allows us to obtain approximations that are physically understandable and enable solving practical astrophysical tasks which need multiple calculations of the gravitational potential of the torus . 
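as a practical illustration of ( [ eq_2.8 ] ) , the short sketch below evaluates the torus potential by direct numerical quadrature over the component rings and compares it with the potential of the central infinitely thin ring . it is only a minimal sketch written for this text : the function names , the choice of scipy routines and the convention that the potential is reported in units of the gravitational constant times the total mass divided by the major radius are our own assumptions , not part of the original derivation .

```python
import numpy as np
from scipy.special import ellipk
from scipy.integrate import dblquad

def ring_potential(rho, zeta, a=1.0, z_ring=0.0):
    """dimensionless potential of an infinitely thin ring of unit mass and
    radius a, lying at height z_ring, evaluated at the cylindrical point
    (rho, zeta); all distances are in units of the torus major radius."""
    denom = np.sqrt((a + rho) ** 2 + (zeta - z_ring) ** 2)
    m = 4.0 * a * rho / denom ** 2        # scipy's ellipk takes m = k**2
    return 2.0 * ellipk(m) / (np.pi * denom)

def torus_potential(rho, zeta, r0=0.3):
    """potential of a homogeneous circular torus of unit mass, major radius 1
    and geometric parameter r0, obtained by summing over the component rings;
    the result is in units of G*M/R (our convention for this sketch)."""
    def integrand(yp, xp):
        # mass fraction carried by the component ring of radius 1 + xp at height yp
        weight = (1.0 + xp) / (np.pi * r0 ** 2)
        return weight * ring_potential(rho, zeta, a=1.0 + xp, z_ring=yp)
    val, _ = dblquad(integrand, -r0, r0,
                     lambda xp: -np.sqrt(r0 ** 2 - xp ** 2),
                     lambda xp: np.sqrt(r0 ** 2 - xp ** 2))
    return val

# centre of the torus hole and a point in the symmetry plane outside the torus,
# each compared with the infinitely thin ring of the same mass and radius
print(torus_potential(0.0, 0.0, r0=0.5), ring_potential(0.0, 0.0))
print(torus_potential(2.0, 0.0, r0=0.5), ring_potential(2.0, 0.0))
```

for field points inside the torus body the integrand acquires an integrable logarithmic singularity where the field point meets a component ring , so an adaptive routine of this kind still converges , although more slowly .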
for further analysis of the torus potential, we define the inner region as the volume bounded by the torus surface ( inside the torus body ) and outer region as the region outside this surface .it is seen from fig . 2 that the outer potential of the torus can be approximately represented by the potential of an infinitely thin ring of the same mass up to torus surface .for , the values of the torus potential and potential of the infinitely thin ring differ by a quantity that depends on a geometric parameter , that is especially evident for a thick torus ( ) . find a relationship between the outer potential of the torus and the potential of a ring of the same mass , that is , derive an approximate expression for torus potential in the outer region , where a condition holds . within this region ,the integrand in ( [ eq_2.8 ] ) does not have singularities for all , , therefore , it can be expanded as the maclaurin series in powers of , in the vicinity of a point . since the integrals in symmetrical limits from the series terms that contain cross derivatives and derivatives of the odd orders are equal to zero ,only summands with the even orders remain in the expansion . with the quadratic terms of the seriesbeing restricted , the potential of the component ring is : substituting ( [ eq_3.1 ] ) into ( [ eq_2.8 ] ) , we will have after integration : \right).\ ] ] ultimately , the approximate expression for the torus potential in the outer region has a form : where is a dimensionless potential of the central ring ( [ eq_2.2 ] ) , and is the complete elliptical integral of the second kind .we may conveniently proceed to a new variable that allows expression ( [ eq_3.4 ] ) to be represented as where expression ( [ eq_3.3 ] ) for the torus potential ( we will further call it the s - approximation ) , with ( [ eq_3.4 ] ) or ( [ eq_3.5 ] ) taken into account , represents the torus potential accurately enough in the outer region ( fig . 2 )since the second multiplier in ( [ eq_3.3 ] ) is a slowly varying function in and .let us simplify the expression ( [ eq_3.3 ] ) replacing the second multiplier by its asymptotic approximations . in the first case , corresponding to , the parameter and , therefore , . the expression for the torus potential in this case is since the dimensionless potential of the ring ( [ eq_2.2 ] ) at the symmetry axis is , we get for the torus and for , the second * summand * in ( [ eq_3.8 ] ) describes displacement of the torus potential at the symmetry axis as compared to the potential of an infinitely thin ring ( fig . 2 ) . in the second case , at large , the parameter , and in ( [ eq_3.3 ] ) , therefore: that is , the torus potential is equal to the potential of the infinitely thin ring with the same mass and radius in this case .it is seen from fig . 4 that _ the -approximation for the torus outer potential ( [ eq_3.3 ] ) is applicable up to the torus surface_(upper curves ) . indeed , in the region , difference between the potential obtained from the integral expression ( [ eq_2.8 ] ) and its value taken from the s - approximation reaches maximum near the torus surface anddoes not exceed for .the difference remains small even for a thick torus : it does not exceed for . for the points are outer , and the curves for the exact potential and -approximation virtually coincide ( deviation is less than ) . note that asymptotics of the -approximation for the outer potential ( [ eq_3.7 ] ) and ( [ eq_3.9 ] ) also describe the torus potential well enough ( dotted line in fig . 
4 ) .thus , for , the approximation ( [ eq_3.7 ] ) can be used to estimate the potential inside the region bounded by a cylinder with radius , while the approximation ( [ eq_3.9 ] ) is applicable outside the region bounded by a cylinder with radius . at , expression ( [ eq_3.7 ] ) tends to ( [ eq_3.9 ] ) , and expression for potential of the infinitely thin ring ( [ eq_2.2 ] ) can be used within the whole outer region to approximately evaluate the torus potential . therefore , _ the outer potential of the torus can be represented with good accuracy by a potential of an infinitely thin ring of the same mass .the dependence of the geometrical parameter appears only in the torus hole ; it is taken into account in the `` shifted'' of the infinitely thin ring ( [ eq_3.6 ] ) .these approximations are valid up to the surface of the torus .is the same as that generated by a point mass located at the sphere s center .note , however , that torus has another system of equigravitating elements ( kondratyev , 2007 ) . ] _to analyze the inner potential of the torus , it is convenient to select the origin of a coordinate system in the center of the torus cross - section ( fig . 5 ) .then , the dimensionless potential of the central ring takes a form : where consider the potential of the central ring ( [ eq_4.1 ] ) in the vicinity of , , that corresponds to . in this case , the elliptical integral in ( [ eq_4.1 ] ) can be expanded in terms of a small parameter . with the series clipped by two terms, we will have : where .the approximate formula for the ring potential expressed through the parameter is then : passage to the potential of an arbitrary component ring is fulfilled by substitutions and , which results in an expression \ ] ] where , .a summand is a square of the distance between the component ring and a point ( fig . 5 ) .expansion ( [ eq_4.4 ] ) is valid for , therefore , .confine ourselves by the case of a thin torus ( ) .then and , and the first multiplier in ( [ eq_4.4 ] ) can be written to the second - order terms as : after expanding the square root in ( [ eq_4.5 ] ) in powers of and , we obtain similarly , the second multiplier ( in square brackets ) in expression ( [ eq_4.4 ] ) can be written to the terms quadratic in coordinates as : thus , we obtain the following expression for the potential of the component ring : in consideration of the inner potential of the torus , rewrite expression ( [ eq_2.8 ] ) in the polar coordinates ( fig . 5 ) : where coordinates of the component ring are , , and coordinates of a point are , .substitute ( [ eq_4.6 ] ) and ( [ eq_4.7 ] ) into ( [ eq_4.8 ] ) , and after multiplying , restrict ourselves by the terms quadratic in , and , .then , after integration of ( [ eq_4.9 ] ) , we obtain the approximate expression for the inner potential of the torus : ,\ ] ] where the first summand in ( [ eq_4.10 ] ) is the value of the torus potential in the center of the torus cross - section : . to further analyze the inner potential , it is convenient to transfer to a coordinate system normalized to the geometrical parameter of the torus . 
then the series coefficients will transform to the form : the expression for the torus potential ( [ eq_4.10 ] ) can be written as .\ ] ] it follows from ( [ eq_4.11 ] ) that the maximal value of the potential reaches at a point , , while equipotential lines are ellipses with their centers displaced an amount with respect to the center of the torus cross - section , and a ratio of semiaxes of the ellipses is .note , that location of the potential maximum , corresponds to the weightlessness point , where the resultant of all the forces affecting a particle inside the torus equals zero . in such an approximation ,components of the force inside the torus depend on the coordinates linearly at that . in fig .6 , the curves of the inner potential in the coordinate system normalized to are presented for three values of . though we confined ourselves to the case of a thin torus ,the curves of potential taken from expression ( [ eq_4.11 ] ) are well consistent with the curves for the exact potential ( [ eq_2.8 ] ) up to , where the deviation is maximal near the torus surface and is of the order of ( fig .the value of potential in the center of the torus cross - section ( a constant in ( [ eq_4.10 ] ) ) , also coincides with its exact value .it is of interest to investigate the solutions obtained for limiting cases .indeed , the case corresponds to two limiting passages to an infinitely thin ring ( is fixed while ) and to a cylinder , when is fixed and . dwell on the limiting passage to the cylinder potential . at , the coefficients , , and coefficients , and expression ( [ eq_4.11 ] )takes the following form in this case : \ ] ] where .it is known that the inner potential of a circular cylinder with the length , much larger than the radius of its cross - section ( kondratyev 2007 ) , has the form : .\ ] ] after a formal substitution in ( [ eq_4.13 ] ) , ( the cylinder length is equal to the length of the central ring ) , we get an expression : .\ ] ] expression ( [ eq_4.14 ] ) coincides with ( [ eq_4.12 ] ) to a constant .the quadratic dependence on for the inner potential of a thin torus can be also derived in the case , when the minor radius .the outer potential of the torus was shown in section 3 to be approximately equal to the potential of an infinitely thin ring of the same mass and radius . in this case , the smaller is the torus geometrical parameter , the more accurate is this approximation .therefore , at , the outer potential of the torus tends to the potential of an infinitely thin ring . in this case , , and thus , the elliptical integral in the expression for an infinitely thin ring ( [ eq_2.2 ] ) can be expanded in the vicinity of .if we confine ourselves to the first term of the expansion , we get an approximate expression for the potential of a central infinitely thin ring which remains valid for the outer potential of the thin torus as well . 
it should be noted that there is no dependence on , because in such an approximation , all thin tori with the same masses and major radii are equigravitating for the outer potential .the derivatives of the outer potential of the thin torus in , are then : and take the following forms at the torus surface ( ) : it is the linear dependence of the force on coordinates , that satisfies such boundary conditions .thus , the inner potential of the thin torus can be represented to the integration constant in the form : equating ( [ eq_4.16 ] ) with ( [ eq_4.17 ] ) at the torus surface , we obtain the expression for the constant that coincides with expression ( [ eq_4.10 ] ) obtained above at .it becomes evident from analysis of the inner potential for the two limiting cases ( and ) that the first summand in coefficients , of the power series ( [ eq_4.11 ] ) represents properties of the inner potential of a cylinder .with the cylinder potential separated , the inner potential of the torus ( [ eq_4.11 ] ) can be written as : where ,\ ] ] the second summand , which we will call a potential of curvature , implies curvature of the torus surface . indeed, all the coefficients of the series ( [ eq_curv ] ) tend to zero in the limiting passage to the cylinder ( ) , and .therefore , _ the inner potential of the torus can be represented as a sum of the cylinder potential and a term comprising a geometrical curvature of the torus surface_.in the previous sections , we derived approximate expressions for the torus potential in the outer ( ) and inner ( ) regions .it has been shown also that the inner potential of the torus can be represented by a series in powers of and , and the constant , linear and quadratic terms of the series were determined analytically . to find a larger number of the series terms sufficient to represent the inner potential accurately enough , and to obtain a continuous approximate solution for the potential and its derivatives in the whole region that would satisfy the boundary conditions at the surface, we will act in the following way .represent the inner potential of the torus as a power series : ) must be multiplied by . ] where , , , are unknown coefficients .note , that the series ( [ eq_5.1 ] ) contains only the terms with even powers of , because the torus potential is symmetric in .suppose that we have an analytical expression for the torus potential at its surface ( ) . also , write down the inner potential of the torus ( [ eq_5.1 ] ) at its surface ( and ) : from conditions of equality of the inner and outer potential and its derivatives in coordinates at the torus surface for several angles , we obtain a system of linear equations to determine coefficients , , , : thus , if we had the analytic solution for the outer potential of the torus , we could obtain an exact expression for the inner potential as an infinite series in powers of , , using the boundary conditions and solving the system of equations ( [ eq_5.3 ] ) . 
since there is no analytic expression for the outer potential , we can use the above approximate expression ( [ eq_3.3 ] ) for the torus potential in the outer region ( the s - approximation ) , and introduce designations : ^ 2 \\\phi_1 = \sum\limits_k \left(\frac{\partial}{\partial\eta } \left [ \phi_{in}(\theta_k , r_0 ) - \phi_{out}(\theta_k , r_0 ) \right ] \right)^2 \\ \phi_2 = \sum\limits_k \left(\frac{\partial}{\partial\zeta } \left [ \phi_{in}(\theta_k , r_0 ) - \phi_{out}(\theta_k , r_0 ) \right ] \right)^2 , \end{array}\ ] ] where , are solutions for the inner ( [ eq_5.2 ] ) and outer ( [ eq_3.3 ] ) potentials at the torus boundary , respectively .the unknown coefficients of the series can be then determined from a condition of the minimal value of a functional : the functional ( [ eq_5.4 ] ) was minimized with the least squares method , and coefficients of the series ( [ eq_5.2 ] ) were determined up to the 4-th power .the coefficients of the series are presented in appendix ( table a1 ) . in fig .7 dependence of the potential on the radial coordinate from the exact expression ( [ eq_2.8 ] ) is presented for the entire region , as well as its approximate solution obtained by sewing together the s - approximation ( [ eq_3.3 ] ) and the inner potential ( [ eq_5.2 ] ) .though the approximate solutions were obtained assuming that the torus is thin , we see that the exact ( [ eq_2.8 ] ) and approximate solutions are consistent even for the torus with . in fig . 8 ,the equipotential curves on the plane of the torus cross - section are shown , where a good agreement for all values of , is seen as well .in the present work , the gravitational potential of a homogeneous circular torus is investigated in details . an integral expression for its potential that is valid for an arbitrary pointis obtained by composing the torus of infinitely thin rings .this approach has made it possible to find an approximate expression for the outer potential of the torus ( s - approximation ) , that has a sufficiently simple form .it is shown that the outer potential of the torus can be represented with good accuracy by a potential of an infinitely thin ring of the same mass .the dependence of the geometrical parameter appears only in the torus hole ; it is taken into account in the `` shifted'' of the infinitely thin ring .these approximations are valid up to the surface of the torus . for the inner potential ,an approximate expression is found in the form of a power series to the second - order terms , where the coefficients depend only on the geometric parameter .expressions for the potential in the center of the torus cross - section and for coordinates of the potential maximum are obtained , and the limiting passage to a cylinder potential is considered .it is shown that the inner potential of the torus can be represented as a sum of the cylinder potential and a term comprising a geometrical curvature of the torus surface . 
a method for determining the torus potential over the whole regionis proposed that implies sewing together at the surface of the outer potential ( s - approximation ) with the inner potential represented by the power series .this method provided a continuous approximate solution for the potential and its derivatives , working throughout the region .surely , matter distribution within a torus is inhomogeneous in actual astrophysical objects , and a torus cross - section may differ from a circular one .therefore , it is further interesting to account for inhomogeneity of matter distribution inside a torus , for difference of the torus cross - section from a circular form , and so on .this work was partly supported by the national program `` cosmomicrophysics '' .we thank professor v.m .kontorovich for some helpful suggestions and dr v.s .tsvetkova for critical reading of the original version of the paper .99 antonucci r. , 1993 , ann . rev .31 , 473 block d.l ., bournaud f. , combes f. , groess r. , barmby p. , ashby m.l.n . , fazio g.g ., pahre m. a. , and willner s.p . , 2006 , nature , 443 , 832 de boer w. , sander c. , zhukov v. , gladyshev a.v . , and kazakov d.i . , 2005 ,a&a , 444 , 51 gerber r.a . ,lamb s.a . , and balsara d.s . , 1992 ,apj , 399 , 51 ibata r.a ., irwin m.j . , lewis g.l . ,ferguson a.m.n . , and tanvir n. , 2003 , mnras , 340 , 21 jaffe w. , meisenheimer k. , rottgering h.j.a . , leinert ch ., richichi a. , chesneau o. , fraix - burnet d. , et al . , 2004 , nature , 429 , 47 jee m.j . ,ford h. c. , illingworth g. d. , white r. l. , broadhurst t. j. , coe d. a. , meurer g. r. , et al . , 2007 , apj , 661 , 728 kondratyev b.p .`` the potential theory and equilibrium figures '' , moscow - izhevsk : regular and chaotic dynamic press , 2003 ( in russian ) kondratyev b.p.``the potential theory . new methods and problems with solutions '' , moscow , mir , 2007 , 512p .( in russian ) kondratyev b.p . ,dubrovskii a.s ., trubitsina n.g . ,mukhametshina e.sh . , technical physics , 2009 , 54 , 176 kondratyev b.p ., trubitsina n.g . , technical physics , 2010 , 55 , 22 .riemann b. collected papers , edited by goncharov v.l .moscow : ogiz , 1948 ( russian translation ) smythe w.r .static and dynamics electricity , n.y . : mcgraw - hill 1950 urry c.m ., padovani p. , 1995, 107 , 803in table a1 , coefficients of the power series ( up to the 4-th power ) for the inner potential of the torus , are presented , which were calculated from the sewing condition for the torus with various values of the geometrical parameter . the analytic expression ( [ eq_4.11 ] ) was used to determine the zero - th coefficient of the series . in fig .a1 , the linear ( ) and quadratic ( , ) coefficients of the power series as functions of obtained analytically from ( [ eq_4.10 ] ) are shown by solid lines ; the dots are the proper values of these coefficients obtained from the condition of sewing ( see table a1 ) .the values of the analytic coefficients are seen to coincide up to with their values obtained independently with the method of sewing from ( [ eq_5.4 ] ) .
|
the integral expression for gravitational potential of a homogeneous circular torus composed of infinitely thin rings is obtained . approximate expressions for torus potential in the outer and inner regions are found . in the outer region a torus potential is shown to be approximately equal to that of an infinitely thin ring of the same mass ; it is valid up to the surface of the torus . it is shown in a first approximation , that the inner potential of the torus ( inside a torus body ) is a quadratic function of coordinates . the method of sewing together the inner and outer potentials is proposed . this method provided a continuous approximate solution for the potential and its derivatives , working throughout the region . [ firstpage ] galaxies : general - gravitation : gravitational potential - torus
|
characterizing a physical system at the most fundamental level requires specifying its quantum mechanical state .arriving at such a description from measured data is called quantum tomography and the output of such a process is often a single point in the space of allowed parameters . in theory , by considering an infinite amount of data , a unique state can be identified .in practice , however , only a finite amount of data can be obtained . in such cases , it is impossible for a single reported state to coincide with the true state . in classical data fitting , _ error bars _ give a measure of the _ accuracy _ of the estimate . in the quantum state tomography setting , _ regions _ generalize this concept .a region of quantum states should colloquially be understood to contain the true state with high probability ( with the exact interpretation depending subtly on how the region is constructed ) .although this idea is quite simple , formalizing the concept of region estimation is not straightforward and there exists many competing alternatives , each with its own set of advantages and drawbacks . for example , _ bootstrap resampling _ is a common technique to produce error bars in tomographic experiments .however , as in for example , bootstrapping has so far been exclusively used to calculate statistics on quantities derived from state estimates , such as fidelity to some target state .bootstrapping is conceptually simple and easy to implement .however , the errors bootstrapping estimate come with no guarantees and it can grossly underestimate errors for estimators which produce states near the boundary .the fisher information ( matrix ) is most often used , via the cramer - rao inequality , to lower bound the variance of unbiased estimators . in this sense, it gives the errors one would expect in the asymptotic limit , provided an efficient estimator is used . in terms of regions ,the fisher information matrix is also asymptotically the inverse of the covariance matrix of the posterior distribution of parameters , which in turn defines an error ellipse . for most problems , however , even computingthe fisher information numerically is an intractable problem . in some estimation strategies , such as compressed sensing ( also ) ,the estimate of the state comes with a _certificate_. that is , an estimated state is provided along with an upper bound on the distance to the true state .this implicitly defines a ball in state space centered on the estimated state .however , the statistical meaning of this ball is not clear nor does a ball provide information about correlated errors .confidence regions ( and comparable constructions ) are the most stringently defined regions from a statistical perspective. these regions would be ideal if they admitted a method of construction which is computationally tractable .there is one overarching theme to notice here : trade - offs . on one end of the spectrumis conceptual simplicity , ease of implementation and computational tractability ; on the other is statistical rigor , precision , accuracy and optimality .here we take the approach of constructing statistically rigorous region estimators via a numerical algorithm which possess tunable parameters to control the trade - off between optimality and computational efficiency .the regions we construct here are approximations to _ high posterior density credible regions _ and are in some sense the bayesian analogs of confidence regions . 
to aid in the descriptive simplicity of the regions , we use _ ellipsoids _ , which are well understood and easy to conceptualize geometrical objects . moreover , ellipsoids provide a useful summary of how the state parameters are correlated . the numerical algorithm we use is a monte carlo approximation to bayesian inference and has been used in the tomographic estimation of one and two qubit states as well as in the continuous measurement of a qubit . it has also been recently used to construct point _ and _ region estimates of hamiltonian parameters in . in particular , the region used was the ellipse defined by the posterior covariance matrix . here we show that the same method can be applied to quantum states and , more importantly , such regions approximate high posterior density credible regions . one of the major advantages to this approach is that it naturally accommodates the possibility of unknown errors in modeling . for example , we might assume that the source of quantum states is fixed when it is not ; or , we might assume that the measurements are known exactly when they are not . previous analyses of errors in quantum state estimation have focused on assessing their effect ; _ detecting _ their presence ; and , most recently , selecting the best model for them . here we demonstrate that our approach can estimate and construct regions for both quantum mechanical state and error model parameters _ simultaneously_. that is , our algorithm produces a region in the space defined by all parameters . this work is outlined as follows . in section [ sec : bayes ] we overview the precise problem and theoretical solution . in section [ sec : smc ] we give the numerical algorithm which constructs the regions . in section [ sec : ex ] we describe the examples used to test the method and in section [ sec : results ] the results of the numerical experiments are presented . finally , in section [ sec : end ] we conclude the discussion . each quantum mechanical problem specification produces a probability distribution $\Pr(d|\vec{x};c)$ , where $d$ is the data obtained and $c$ are the experimental designs ( or _ controls _ ) chosen for measurement , and where $\vec{x}$ is a vector parameterizing the system of interest . suppose we have performed a number of experiments with control settings $c$ and obtained data $d$ . the model specifies the likelihood function $\Pr(d|\vec{x};c)$ ; however , we are ultimately interested in $\Pr(\vec{x}|d;c)$ , the probability distribution of the model parameters given the experimental data . we achieve this using bayes rule , $\Pr(\vec{x}|d;c ) = \Pr(d|\vec{x};c)\,\Pr(\vec{x})/\Pr(d|c)$ , where $\Pr(\vec{x})$ is the _ prior _ , which encodes any _ a priori _ knowledge of the model parameters . the final term , $\Pr(d|c)$ , can simply be thought of as a normalization factor . since each measurement is statistically independent given $\vec{x}$ , the processing of the data can be done on or off - line . that is , we can sequentially update the probability distribution as the data arrive or post - process it afterward . in many scenarios the mean of the posterior distribution , $\hat{\vec{x}} = \mathbb{E}_{\vec{x}|d;c}[\vec{x}]$ , where $\mathbb{E}_{\vec{x}}[f(\vec{x})]=\int f(\vec{x})\Pr(\vec{x})\,d\vec{x}$ denotes an expectation value , is used as the point estimate of the unknown parameters .
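as a minimal illustration of the sequential use of bayes rule described above , the sketch below updates a discretized prior over a single parameter one datum at a time and returns the posterior mean . the two - outcome likelihood is a placeholder invented for the example and is not taken from the experiments analysed later .

```python
import numpy as np

def bayes_update(prior, grid, likelihood, datum):
    """one sequential update of a gridded distribution with bayes rule."""
    posterior = prior * likelihood(datum, grid)   # pr(x|d) proportional to pr(d|x) pr(x)
    return posterior / posterior.sum()            # normalization factor

# placeholder single-parameter model: pr(d = +1 | x) = (1 + x) / 2, with d = +/- 1
def likelihood(d, x):
    return 0.5 * (1.0 + d * x)

grid = np.linspace(-1.0, 1.0, 501)               # discretized parameter space
dist = np.ones_like(grid) / grid.size            # uniform prior

data = [+1, +1, -1, +1]                          # data can be processed on- or off-line
for d in data:
    dist = bayes_update(dist, grid, likelihood, d)

posterior_mean = np.sum(grid * dist)             # point estimate: mean of the posterior
print(posterior_mean)
```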
beyond a point estimate , we also want a region that quantifies the uncertainty . the posterior covariance ellipsoid ( pce ) is defined through the posterior covariance matrix as the set of parameters $\vec{x}$ satisfying $(\vec{x}-\mathbb{E}_{\vec{x}|d;c}[\vec{x}])^{\rm T}\,{\rm cov}_{\vec{x}|d;c}[\vec{x}]^{-1}\,(\vec{x}-\mathbb{E}_{\vec{x}|d;c}[\vec{x}]) \leq z_\alpha^2$ , where $z_\alpha$ is the $\alpha$-quantile of the $\chi_d$ distribution , the values of which are readily available in countless tables . note that this is simply the covariance ellipse under a gaussian approximation to the posterior , $\mathcal{N}\left(\mathbb{E}_{\vec{x}|d;c}[\vec{x}]\,,\,{\rm cov}_{\vec{x}|d;c}[\vec{x}]\right)$ . in addition to being hpd credible regions in the asymptotic limit , pces are computationally tractable and , even for modest numbers of experiments , they are remarkably close in size and coverage probability to true hpd regions . the remainder of this work is devoted to detailing the algorithm and demonstrating the above claims via simulated experiments on qubits . our numerical algorithm fits within the subclass of monte carlo methods called _ sequential monte carlo _ or smc . we approximate the posterior distribution by a weighted sum of delta - functions , $\Pr(\vec{x}|d;c) \approx \sum_{j=1}^{n} w_j(d;c)\,\delta(\vec{x}-\vec{x}_j)$ , where the weights at each step are iteratively calculated from the previous step via $w_j \mapsto w_j\,\Pr(d|\vec{x}_j;c)$ , followed by a normalization step . the elements of the set $\{\vec{x}_j\}$ are called _ particles _ and are initially chosen by sampling the prior distribution and setting each weight to $1/n$ . in the equations above , $n$ is the number of particles and controls the accuracy of the approximation . note that the approximation is not a particularly good one _ per se _ ( we are approximating a continuous function by a discrete one after all ) . however , it does allow us to calculate some quantities of interest with arbitrary accuracy . like all monte carlo algorithms , it was designed to approximate expectation values , such that $\mathbb{E}_{\vec{x}|d;c}[f(\vec{x})] \approx \sum_{j=1}^{n} w_j(d;c)\,f(\vec{x}_j)$ . in other words , it allows us to efficiently evaluate difficult multidimensional integrals with respect to the measure defined by the posterior distribution . the smc approximation provides a nearly trivial method to approximate hpd credible regions , which surprisingly has been overlooked . since the smc approximate distribution is a discrete distribution , the credible regions will be ( at least initially ) discrete sets . in particular , the hpd credible particle set is defined by the following construction : 1 . sort the particles in order of decreasing weight ; 2 . collect particles ( starting with the highest weighted ) until the sum of the collected particle weights is at least $1-\alpha$ . the resulting set of collected particles is the desired hpd credible set . the proof that this is an hpd credible set is as follows .
assuming the particles are sorted as above .begin with the highest weighted particle with weight .then , the set clearly has weight and the largest satisfying equation , in the definition of hpd credible regions , is .now take the set with weight .the largest is now .iterate this process until we reach the first weight such that set satisfies .this set will have largest .the set is clearly an -credible set but it is also an hpd -credible set since any will result in a set excluding all particles with and necessarily have weight less than .the immediate problem with is that it is a discrete set of points and while it is hpd -credible set for the smc approximated distribution , _ any _ discrete set has zero measure according the true posterior .the resolution is quite simple .suppose we have some region which contains .then , according to the smc approximation , \approx \sum_j w_j\mathbbm 1_{\hat x}(\vec{x}_j ) \geq 1- \alpha,\ ] ] since for all in .thus , any region enclosing the points in will be a -credible region .but we do not want just any -credible region .the hpd requirement is conceptually similar to asking for the region to be as small as possible while maintaining weight .if we assume ( relaxed later on ) the credible regions are convex , then we seek the smallest convex set containing .this defines the _ convex hull _ of : since is a convex polytope in , it can be most compactly represented by a list of its vertices , which in the absolute worst cases is the entire set of smc particles .that is , we require numbers to specify .although certain classes of convex polytopes contain many symmetries and are easy to conceptualize geometrically , specifying the vertices of the convex hull of a random set of points is not most convenient representation for credible regions . the most efficient way to describethis hull is the smallest ball containing it since this would be described by a location and single radial parameter .however , a ball would not account for large covariances in the posterior distribution . to account for these covariances, we will use ellipsoidal regions where and define an ellipsoid via the set of states satisfying .in other words , is the center of the ellipsoid and its covariance matrix .crucially , we want the smallest ellipse containing the hull , the so - called _ minimum volume enclosing ellipse _ ( mvee ) : to numerically construct the mvee , we use the algorithm of khachiyan . to summarize , is the numerical approximation to the hpd credible region .the posterior covariance regions , , defined earlier in equation , are far less computationally intensive to construct than and are expected to be hpd credible regions in the asymptotic limit . 
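the construction just described is short enough to state in code . the sketch below implements the smc weight update , the discrete hpd particle set and the posterior mean and covariance of the weighted particles for a single qubit measured along randomly chosen pauli axes , the likelihood anticipated in the next section . it is a simplified illustration written for this text : there is no resampling step , the khachiyan mvee routine is omitted , and all names and parameter values are our own choices .

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood(d, axis, x, v=1.0):
    """pr(d|x) for outcome d = +/-1 of the pauli measurement 'axis' (0, 1, 2)
    on qubit bloch vectors x (one row per particle), with visibility v."""
    return 0.5 * (1.0 + v * d * x[:, axis])

# particles drawn from a crude uniform prior over the bloch ball
n = 2000
particles = rng.uniform(-1, 1, size=(n, 3))
particles = particles[np.linalg.norm(particles, axis=1) <= 1.0]
weights = np.full(len(particles), 1.0 / len(particles))

# simulate data from a fixed 'true' state and update the weights with bayes rule
x_true = np.array([0.3, -0.2, 0.5])
for _ in range(200):
    axis = rng.integers(3)
    d = 1 if rng.random() < 0.5 * (1.0 + x_true[axis]) else -1
    weights *= likelihood(d, axis, particles)
    weights /= weights.sum()

# discrete hpd set: highest-weight particles holding at least 1 - alpha of the weight
alpha = 0.05
order = np.argsort(weights)[::-1]
cum = np.cumsum(weights[order])
hpd_particles = particles[order[: np.searchsorted(cum, 1.0 - alpha) + 1]]

# posterior covariance ellipsoid ingredients: weighted mean and covariance
mean = weights @ particles
cov = (weights[:, None] * (particles - mean)).T @ (particles - mean)
print(len(hpd_particles), mean, np.linalg.eigvalsh(cov))
```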
in order to show that they are also approximately hpd credible regions for finite data , we compare the pce and mvee regions in a number of examples and also look at the performance in cases where limited computational resources prohibit constructing the mvee region over many simulations . finally , we note that to compare the sizes of various ellipsoids , we will use the volume $V = \frac{\pi^{d/2}}{\Gamma(d/2+1)}\sqrt{\det\boldsymbol{A}}$ of an ellipsoid specified by a matrix $\boldsymbol{A}$ as above , where $d$ is the dimension of the parameter space and $\Gamma$ is the well - known gamma function . consider repeated preparations of a qubit subjected to random pauli measurements . we label the pauli operators such that an arbitrary operator can be written and for many qubits , the situation is similar . the reconstruction is given by where and we index by . then the parameterization is equivalent to that in equation . since each pauli squares to the identity , each individual measurement has two possible outcomes , which we label $d = \pm 1$ for the $+1$ and $-1$ eigenvalues . the likelihood function of a single measurement is then $\Pr(d|\vec{x};c) = \tfrac{1}{2}\left( 1 + d\,x_c \right)$ , where $x_c$ is the expectation value of the measured pauli operator . we also consider the effect of errors . we will not assume a particular model for the errors since any error model , for our two outcome measurements , manifests as a bit flip , or equivalently , a randomization channel . for simplicity we assume the process is symmetric so we have a single parameter , called the _ visibility _ , which has the following effect on the likelihood function : $\Pr(d|\vec{x},v;c) = \tfrac{1}{2}\left( 1 + v\,d\,x_c \right)$ . we consider two cases : the visibility is known and fixed , or the visibility is unknown but still fixed run - to - run . in the former case , the task is to compute the pce for the state only , while in the latter case the task is to compute the pce over the _ joint distribution _ of the state and the visibility . if only a region of states is desired , we can orthogonally project the pce onto the subspace of the parameter space defining the state ( and similarly for the visibility ) . hence , we will have separate marginal pces for the state and for the visibility ( a small illustrative sketch of this projection is given below , after the figure caption ) . examples of how the regions are constructed are presented in figures [ fig : eq_rebit ] and [ fig : eq_qubit ] . we first look at a comparison of the pce and mvee regions . these results are presented in figures [ fig : qubit_sizes_known_vis],[fig : qubit_size_known_vis ] and [ fig : qubit_pr_known_vis ] . in figures [ fig : qubit_sizes_known_vis ] and [ fig : qubit_size_known_vis ] , the size of the two classes of regions is compared . initially the volume of the pce is not smaller than that of the entire parameter space , which is to be expected since it is motivated from asymptotic normality . however , it rapidly converges in volume to , and becomes slightly smaller than , the mvee region . both sets of regions decrease in size at the same rate as a function of the number of measurements . this suggests that the pces are approximate hpd credible regions . this is important because , as opposed to all other region estimators , pce regions are computationally tractable . that the pces remain valid in higher dimensions is shown in figure [ fig:23 ] . in figure [ fig:23 ] the probability for the state to lie in the constructed pce region is shown to be consistent with the target of 95% containment probability for two and three qubits subjected to random pauli measurements . the volume of the constructed ellipsoid for ( left ) perfect measurements and ( right ) limited visibility measurements . for all constructed regions , particles were used . the results are from 100 simulations , where the line represents the mean and the shaded areas are those volumes one standard deviation from the mean . ]
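as promised above , a small sketch of the projection of a joint pce onto the state parameters alone : the shadow of an ellipsoid on a coordinate subspace is the ellipsoid built from the corresponding sub - block of the covariance matrix . the joint covariance below is made up purely for illustration ; only the sub - block structure is the point of the example .

```python
import numpy as np
from scipy.stats import chi2

# joint posterior covariance over (x1, x2, x3, v): made-up numbers, illustration only
cov_joint = np.array([[0.020, 0.004, 0.001, 0.006],
                      [0.004, 0.018, 0.002, 0.005],
                      [0.001, 0.002, 0.015, 0.004],
                      [0.006, 0.005, 0.004, 0.030]])

state_idx = [0, 1, 2]      # bloch-vector components
vis_idx = [3]              # visibility parameter

# projected / marginal ellipsoid shapes are the corresponding sub-blocks
cov_state = cov_joint[np.ix_(state_idx, state_idx)]
cov_vis = cov_joint[np.ix_(vis_idx, vis_idx)]

# squared radius of a 95% region; whether the degrees of freedom should be the
# subspace dimension (marginal region) or the full dimension (shadow of the
# joint region) is a modelling choice, not fixed by the sub-block itself
z2_marginal = chi2.ppf(0.95, df=len(state_idx))
z2_projected = chi2.ppf(0.95, df=cov_joint.shape[0])

print(cov_state, cov_vis, z2_marginal, z2_projected)
```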
for both strategies , particles were used . the results are from 100 simulations , where the line represents the mean and the shaded areas are ratios one standard deviation from the mean . note that the posterior covariance ellipsoid is on average 10% smaller than the hpd region after about 100 measurements . ] the probability of the state lying in the constructed ellipsoid for ( left ) perfect measurements and ( right ) limited visibility measurements . in all cases , the target was a 95% credible region . for all constructed regions , particles were used . the results are from 100 simulations , where the line represents the mean and the shaded area is ( just to be meta ) the hpd 95% credible region of the probability ( derived from the beta distribution and a uniform prior ) . ] the probability of the state lying in the constructed posterior covariance ellipsoid ( approximating the 95% hpd region ) for the two and three qubit model described in the text and using a visibility parameter . for all constructed regions , particles were used . the results are from 25 simulations , where the line represents the mean and the shaded area is the hpd 95% credible region of the probability ( derived from the beta distribution and a uniform prior ) . ] in the above mentioned cases , the visibility was assumed to be known . in figure [ fig : qubit_pr_unknown_vis ] , the case of unknown visibility is considered . when the visibility is known relatively accurately , the pce captures the state and visibility accurately . however , as the initial variance in the prior on the visibility increases , the ability of the pce to capture both the state and visibility diminishes . surprisingly , the pce still finds the state even when it can not resolve the visibility accurately . the probability of the state lying in the constructed ( 95% credible ) posterior covariance ellipse for varying levels of knowledge of the visibility parameter . in the upper left , the visibility is known ( this is identical to figure [ fig : qubit_pr_known_vis ] ) . in all other figures , the visibility is unknown ( but known to lie in the specified interval with a uniform prior ) . for both the state and the visibility parameter , a marginal posterior covariance ellipse is constructed and tested against the true state and parameter . ] the problem is easily identified to be the assumption that this posterior has a single mode with a convex hpd credible region . to illustrate the problem graphically , we need to reduce the dimensionality . to this end , we assume the state is of the form , with unknown and to be estimated along with the visibility . a typical example of a posterior distribution and the possible regions is shown in figure [ fig : nc ] . there are two things to note : ( 1 ) the posterior distribution has two modes ; ( 2 ) even within each mode , the distribution is not well approximated by a gaussian . both of these are due to a degeneracy in the posterior distribution arising from the symmetry between the state parameter and the visibility in the likelihood function . for example , an outcome could equally well be explained by a large state parameter and a small visibility as by a small state parameter and a large visibility . the problem of many modes in the posterior can be resolved by reporting disjoint ellipsoidal regions , one for each mode . the highest weighted smc particles naturally find themselves within the modes . given this set , the task is then to identify which particles belong to which modes . in machine learning parlance , this is the problem of _ clustering_.
many solutions to this problem exist , each with its own set of advantages and drawbacks . here we have used dbscan , as it seems to require the fewest assumptions ( a minimal sketch of this step is given below , after the figure captions ) . construction of regions for a qubit ( known to be of the form ) with unknown visibility subjected to 1000 randomly selected measurements . on the left , we have the initial 1000 particles ( blue ) randomly selected according to a uniform prior and the randomly generated `` true '' state ( red ) . in the middle figure , we have the posterior smc particle cloud after 1000 randomly selected measurements along with the following regions : the green line is the convex hull of those highest weighted particles comprising at least 95% of the particle weight ( this is ) ; the red ellipse is , the smallest ellipse containing ; and , in black is the ellipse defined by the estimated covariance matrix of the particle cloud , . when the posterior is disjoint , all regions poorly approximate the hpd credible region . on the right , the same distribution of particles is shown along with the convex hull and mvee regions after the modes of the distribution have been identified via the dbscan clustering algorithm . ] when the visibility is known fairly well , only the first problem , disjoint regions , is automatically resolved . in other words , it is not likely that the pce will be the optimal region estimator unless the visibility is relatively well - known . practically , when the noise is known with some but not perfect accuracy , the mvee region still behaves properly even when the pce region does not . this is demonstrated in figure [ fig : mveevis ] where we see that the mvee region contains the true state with the correct probability but the pce does not . in practice then , the recommendation is to identify whether the problem specifies convex credible regions . if so , then the pce is the appropriate choice ; if not , then a clustering algorithm should be used to identify the modes of the distribution first . the probability of the state lying in the ( 95% credible ) posterior covariance ellipse or the minimum volume enclosing ellipse for varying levels of knowledge of the visibility parameter . the visibility is unknown ( but known to lie in the specified interval with a uniform prior ) . for both the state and the visibility parameter , a marginal posterior covariance ellipse is constructed and tested against the true state and parameter . ]
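returning to the mode - splitting step discussed above , the sketch below clusters the highest - weight particles with the dbscan implementation in scikit - learn and returns one ( center , scatter matrix ) pair per mode , to which an ellipsoid could then be fitted . the eps and min_samples values are arbitrary placeholders and would have to be tuned to the particle density of a real run .

```python
import numpy as np
from sklearn.cluster import DBSCAN

def split_modes(particles, weights, keep=0.95, eps=0.05, min_samples=10):
    """cluster the highest-weight smc particles so that a separate
    ellipsoid can be fitted to each mode of the posterior."""
    order = np.argsort(weights)[::-1]
    top = order[: np.searchsorted(np.cumsum(weights[order]), keep) + 1]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(particles[top])
    clusters = []
    for lab in set(labels) - {-1}:           # -1 marks dbscan noise points
        pts = particles[top][labels == lab]
        clusters.append((pts.mean(axis=0), np.cov(pts, rowvar=False)))
    return clusters                          # one (center, scatter matrix) per mode

# toy bimodal particle cloud standing in for a posterior like that of fig. [fig:nc]
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal([0.2, 0.90], 0.02, size=(500, 2)),
                   rng.normal([0.8, 0.25], 0.02, size=(500, 2))])
w = np.full(len(cloud), 1.0 / len(cloud))
for center, scatter in split_modes(cloud, w):
    print(center)
```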
to address these problems, we follow the already many ingenious methods reducing the complexity of identifying and characterizing quantum states and processes . these include identifying stabilizer states ; tomography for matrix product states ; tomography for permutationally invariant states ; learning local hamiltonians ; tomography for low - rank states via compressed sensing ; and tomography for multi - scale entangled states .these techniques employ efficient simulation algorithms which propagate efficient representations of the state vector to calculate of the probabilities defining the likelihood function . physical constraints and careful engineering lead to such drastic dimensional reductions that we can use along with additional prior information in the bayesian method .the above mentioned methods aimed at reducing the complexity of estimating parameters has relied on the notion of _ strong _ simulation , where the likelihood function is computed exactly . on the other handis _ weak _ simulation , where the likelihood is not computed but is instead sampled from .the distinction between strong and weak simulation has been a topic of recent interest in quantum computational complexity where it has been shown , for example , that there is exists subtheories of quantum mechanics which admit efficient weak simulation but do not allow for efficient strong simulation . it has recently been shown that the bayesian sequential monte carlo algorithm can also be used in the case where one only has access to a weak simulator . finally , we might find ourselves with a quantum system which does not admit efficient strong nor weak simulation .in that case , it still may be possible to efficiently characterize the system using a trusted _ quantum _ simulator . for the case of estimating dynamical parameters, it has been shown that the bayesian smc approach can also perform estimation tasks efficiently using a quantum simulator . in this workwe have considered supplementing point estimates of quantum states with _regions _ of state space .these regions contain the true state with a pre - determined probability and within the tolerance of the numerical algorithm .the numerical algorithm has tunable parameters which trades accuracy for computational efficiency and thus can be determined based on desired optimality and available computational resources . when the noise is known with relatively high accuracy , the optimal regions are the _ posterior covariance ellipsoids_. when the noise is unknown , more complex techniques are available to construct ellipsoids which capture the state . in any case , the constructed regions are ellipsoids which are easily described and conceptualized . in the context of classical statistics ,quantum state estimation can simply be thought of as overly ambitious parameter estimation .that is , quantum state estimation is just classical parameter estimation with a specific model and , perhaps , oddly appearing constraints .the point is that the framework presented here for region estimation is suitable to any parameter estimation problem . 
in particular , we have already shown that additional noise on the measurements can be estimated _ simultaneously _ with the unknown state .more generally , the framework possesses a beautiful modularity which allows arbitrary statistical models to be learned .the algorithm presented here has been implemented in a software package called _ qinfer _ using the python programming language and the scientific computation library _ scipy _ .the author thanks chris granade for many discussions , timely advice and the majority of contributions necessary to make this open - source software a reality , which has indeed proven useful in many of our collaborations .this work was supported by the national science foundation grant nos .phy-1212445 and phy-1314763 , by office of naval research grant no .n00014 - 11 - 1 - 0082 , and by the canadian government through the nserc pdf program .construction of regions for a rebit subjected to 100 randomly selected and . on the left , we have the initial 100 particles ( blue ) randomly select according to the hilbert - schmidt prior and the randomly generated `` true '' state ( red ) .in the middle figure , we have the posterior smc particle cloud after 100 randomly selected and measurements and the estimated state , the mean of distribution , is shown in teal .the larger weighted particles are represented as larger dots . on the right ,the same distribution of particles is presented along with the regions discussed in the text .the green line is the convex hull of those highest weighted particles comprising at least 95% of the particle weight ( this is ) .the red ellipse is , the smallest ellipse containing .finally , in black , is the ellipse defined by the estimated convariance matrix of the particle cloud , .these objects are blown up below the figure to show details . ]construction of regions for qubit subjected to 20 randomly selected pauli measurements using the smc approximation with 100 particles .on the left , we have the initial 100 particles ( blue ) randomly select according to the hilbert - schmidt prior and the randomly generated `` true '' state ( red ) . in the middle and right figure, we have the posterior smc particle cloud after 20 randomly selected pauli measurements and the estimated state , the mean of distribution , is shown in teal .the larger weighted particles are represented as larger dots . in the middle figure ,the gray object is the convex hull of those highest weighted particles comprising at least 95% of the particle weight ( this is ) . in the right figure ,the blue ellipsoid is , the smallest ellipse containing while the red ellipsoid is the posterior covariance ellipsoid . ]
|
regions of quantum states generalize the classical notion of error bars . high posterior density ( hpd ) credible regions are the most powerful of region estimators . however , they are intractably hard to construct in general . this paper reports on a numerical approximation to hpd regions for the purpose of testing a much more computationally and conceptually convenient class of regions : posterior covariance ellipsoids ( pces ) . the pces are defined via the covariance matrix of the posterior probability distribution of states . here it is shown that pces are near optimal for the example of pauli measurements on multiple qubits . moreover , the algorithm is capable of producing accurate pce regions even when there is uncertainty in the model .
|
urbanization , measured by the fraction of people living in urban areas , gradually increased in modern countries with a quick growth since the middle of the 19 century until reaching values around in most european countries .this process depends in general on several economic variables and is connected to transportation technologies . although urban development and the distribution of residential activity in urban areas are long - standing problems tackled by economists and geographers a quantitative understanding of the different processes characterizing this phenomenon is still lacking . among the first empirical analysis on population density , meuriot provided a large number of density maps of european cities during the nineteenth century , and clark proposed the first quantitative analysis of empirical data . in anas presents an economic model for the dynamics of urban residential growth , and in regional models describe the population dynamics of systems divided into zones characterised by a set of socio - economic indicators and that exchange with one another population , goods , capital , etc . according to some optimization rule . in this framework , the authors of proposed a dynamical central place model highlighting the importance of both determinism and fluctuations in the evolution of urban systems . in , the author reviews different approaches used to model population dynamics in cities and in particular the ecological approach , where ideas from mathematical ecology models are used to study urban systems .an example is given by where phase portraits of differential equations bring qualitative insights about urban systems behavior .other important theoretical approaches comprise the classical alonso - muth - mills model developped in urban economics , and also numerical simulations based on cellular automata . for most of these studies ,numerical models usually require a large number of parameters that makes it difficult to test their validity and to identify the main mechanisms governing the urbanization process .on the other hand , theoretical approaches propose in general a large set of coupled equations that are difficult to handle and amenable to quantitative predictions that can be tested against data .in addition , even if a qualitative understanding is brought by these theoretical models , empirical tests are often lacking . the recent availability of geolocalized , historical data ( such as in for example ) from world cities has the potential to change this state and allows to revisit with a fresh eye these long - standing problems .many cities created open - data websites and the city of new york ( us ) played an important role with the release of the pluto dataset ( short for property land use tax lot output ) , where tax lot records contain a lot of information about the urbanization process .for example , in addition to the location , property value , square footage etc , this dataset gives access to the construction date for each building .this type of geolocalized data at a very small spatial scale allows to monitor the urbanization process in time and at a very good spatial resolution .these datasets allow in particular to produce ` age maps ' where the construction date of buildings is displayed on a map ( see figure 1 for the example of the bronx borough in new york city ) .century , followed by the construction in some localized areas of buildings in the second half of the 20 century .( see material and methods for details on the dataset ) . 
]many age building maps are now available : chicago .new york city ( us ) , ljubljana ( slovenia ) , reykjavik ( iceland ) , etc .in addition to be visually attractive , these maps together with new mapping tools ( such as the urban layers proposed in ) provide qualitative insights into the history of specific buildings and also into the evolution of entire neighborhoods .palmer studied the evolution of the city of portland ( oregon , us ) from 1851 and observed that only 942 buildings are still left from the end of the 19th century , while 75,434 buildings were built at the end of the 20th century and are still standing , followed by a steady decline of new buildings construction since 2005 .inspired by palmer s map , plahuta constructed a map of building ages in his home town of ljubljana , slovenia , and proposed a video showing the growth of this city from 1500 until now .plahuta observed that the number of new buildings constructed each year displays huge spikes that signalled important events : an important spike occurred a few years after a major earthquake hit the area in 1899 when people were able to rebuild and other periods of rebuilding occurred after the two world wars . in the case of los angeles ( us ) ,the ` built : la project ' shows the ages of almost every building in the city and allows to reveal the city growth over time .these different datasets allow thus to monitor at a very small spatial resolution the urbanization process .in particular , for a given district or zone , we can ask quantitative questions about the evolution of the population and of the number of buildings .surprisingly enough , such a dual information is difficult to find and up to our knowledge was not studied at the quantitative level .here , we use data for different cities ( chicago , ; london , ; new york city , ; paris , ) in order to answer questions about these fundamental quantities . in particular , we will show that the number of buildings versus the population follows the same unique pattern for all cities studied here .we then propose an explanation for the existence of such a pattern and provide a theoretical model and empirical evidences supporting it .we investigate the urban growth of four different cities : chicago ( us ) , london ( uk ) , new york ( us ) , and paris ( france ) . an important discussion concerns the choice of the scale at which we study the urbanization process .we have to analyze the urbanization process at a spatial scale that is large enough in order to obtain statistical regularities , but not too large as different zones may evolve differently . indeed geographers observed that the population density is not homogeneous and decreases in general with the distance to the center .also , during the evolution of most cities , they tend to spread out with the density decreasing in central districts and increasing in the outer ones .we thus choose to focus on the evolution of administrative districts of each city as at this level we can get insight about the growth process and exclude longer term processes . more precisely , we considered the boroughs of new york , the sides of chicago , the arrondissements of paris and the london districts . also , in this way we do not have to tackle the difficult problem of city definition and its impact on various measures and focus on the urbanization process of a given zone with fixed surface area .the datasets for these cities come from different sources ( see materials and methods ) and cover different time periods . 
for chicago , for london , for new york , and for paris . an important limitation that guided us in choosing these cities is the simultaneous availability of building age and historical data for district population . in the following , in order to provide an historical context , we will first measure the evolution of the population density and then analyze the evolution of the number of buildings in a given district and its population . in order to get a first understanding of the urban growth behavior of these different cities , we begin with an empirical analysis of the evolution of urban density on a fixed geographical area . in fig . [ fig : popdensity ] we show the average population density for the four cities studied here . this plot reveals that these different cities follow similar dynamics . after a positive growth and a population increase that accelerates around , we observe a density peak . after this peak , the density decreases ( even sharply in the case of nyc ) or stays roughly constant . in the last years , new york city , paris and london display a re - densification period . this first figure highlights the existence of a seemingly ` universal ' pattern governing the urbanization process . these large cities are divided into districts that usually display different properties , and in fig . [ fig:3 ] we plot the time evolution of some district densities . we indeed observe that they display different behaviors .
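the density - peak behaviour described above can be extracted from district - level data with a few lines of code ; the yearly series below is a made - up placeholder , not the historical census data analysed in the paper .

# population of one district with fixed surface area; compute density and its peak
area_km2 = 60.0
population = {1850: 120000, 1880: 340000, 1910: 980000, 1940: 1450000,
              1970: 1300000, 2000: 1250000, 2010: 1320000}

density = {year: pop / area_km2 for year, pop in population.items()}
peak_year = max(density, key=density.get)
print("peak density year :", peak_year,
      "density :", round(density[peak_year]), "per km^2")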
|
the recent availability of geolocalized historical data makes it possible to address quantitatively the spatial features of the time evolution of urban areas . here , we discuss how the number of buildings evolves with population and we show on different datasets ( chicago , ; london , ; new york city , ; paris , ) that this curve evolves in a universal way with three distinct phases . after an initial pre - urbanization phase , the first phase is a rapid growth of the number of buildings versus population . in a second regime , where residences are converted into offices and stores , the population decreases while the number of buildings stays approximately constant . in a subsequent modern phase , the number of buildings and the population grow again , corresponding to a re - densification of cities . we propose a simple model based on these mechanisms to explain the first two regimes and show that it is in excellent agreement with empirical observations . these results bring evidence that a simple model can be constructed that could serve as a tool for understanding quantitatively both urbanization and the future evolution of cities .
|
reaching the ultimate precision limits in the estimation of parameters is an important challenge in science .usually , this estimation is made by measuring the state of a probe that has undergone a parameter - dependent process .post - selection techniques , stemming from the pioneering work of y. aharonov and collaborators , have been proposed with the aim of amplifying the signal obtained from the probe . in this formulation ,the quantum system being analyzed gets coupled to a measuring apparatus ( usually called `` meter '' ) through a unitary operation , which involves operators for the system and for the meter , and depends on the parameter to be estimated .the goal is to estimate by measuring the change of an observable of the meter after the joint unitary evolution , given that a specified state of was successfully post - selected .for a small coupling constant , the shift of the mean value of the relevant meter observable is modified by a prefactor , known as the weak - value , where and are the initial and the post - selected states of , respectively .this quantity allows one to observe amplification effects provided the initial and the final state of the system are almost orthogonal , so long as the weak - value regime remains valid .the regime of validity of this result has been analyzed in several publications .the possibility of amplifying a tiny displacement of the meter weak - value amplification ( wva ) has been envisaged as a valuable resource for the estimation of the coupling constant , eventually circumventing technical thresholds that may hinder the evaluation of this parameter .wva experiments have been performed with this metrological purpose , while claiming practical advantages . moreover , alternative protocols have been proposed to enhance the precision of the technique . however , there has been a long debate in the literature whether this post - selection process can actually be beneficial for parameter estimation .indeed , the amplification of the signal comes at the cost of discarding most of the statistical data , due to the post - selection procedure . .see text for complete description of the experiment .[ fig : setup],scaledwidth=48.0% ] here we experimentally investigate the estimation of a small deflection of a mirror inside a sagnac interferometer within the framework of quantum metrology .we employ two post - selection protocols , which were shown to lead to the ultimate quantum limits for precision , for sufficiently small . in the first one , related to the wva approach , we explore the region of validity of wva and show that , beyond this region , when the meter does not give useful information on , estimation of this parameter can be obtained from the statistics of post - selection .we also experimentally demonstrate a post - selection procedure which , even though not leading to wva , may also reach the fundamental limits of precision , but with a much larger post - selection probability .this implies that the number of events registered by measuring the meter is much larger than that in the wva scheme , for the same amount of resources .this reflects in our experimental results , which clearly show that this second procedure leads to a more efficient determination of probabilities regarding the meter , in terms of frequencies of clicks in the measurement apparatus .the experimental setup is shown in fig . [ fig : setup ] . 
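as a minimal numerical illustration of the weak - value prefactor discussed above , the following python / numpy sketch evaluates a_w = < f | a | i > / < f | i > for linear polarization states and shows how the real part of the weak value grows as the post - selected state approaches orthogonality with the pre - selected one , while the post - selection probability shrinks . the choice of observable ( the pauli operator in the h / v basis ) and of the polarization angles is a hypothetical example , not the exact settings of the experiment .

import numpy as np

# observable on the polarization system: A = |H><H| - |V><V|
A = np.array([[1, 0], [0, -1]], dtype=complex)

def weak_value(theta_i, theta_f):
    # weak value <f|A|i>/<f|i> and post-selection probability |<f|i>|^2
    ket_i = np.array([np.cos(theta_i), np.sin(theta_i)], dtype=complex)
    ket_f = np.array([np.cos(theta_f), np.sin(theta_f)], dtype=complex)
    overlap = np.vdot(ket_f, ket_i)
    return np.vdot(ket_f, A @ ket_i) / overlap, abs(overlap) ** 2

theta_i = np.pi / 4                       # pre-selection at 45 degrees
for eps in (0.3, 0.1, 0.03, 0.01):
    theta_f = theta_i + np.pi / 2 - eps   # post-selection nearly orthogonal to |i>
    A_w, p_post = weak_value(theta_i, theta_f)
    print(f"eps={eps:5.2f}  Re(A_w)={A_w.real:9.2f}  post-selection prob={p_post:.5f}")

the amplification of the meter shift by re(a_w) is paid for by the shrinking post - selection probability , which is the trade - off explored in the experiment .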
a red diode laser ( nm )is sent through a single - mode optical fiber ( sm - fiber ) and decoupled by an objective lens , producing , in good approximation , a collimated free - space gaussian beam with a width m .a 650 nm bandpass filter removes unwanted light .the polarization degree of freedom corresponds to the _ system _ , while the transverse spatial degree of freedom of the beam stands for the _ meter_. a polarizing beam splitter ( pbs1 ) and a half wave plate ( hwp1 ) are used to prepare a linear - polarization state .accordingly , the system - meter input state for the interferometer is well described by : \ ! \otimes\ ! |\phi_i\rangle , \label{eqtheta}\ ] ] where and represent the horizontal and vertical polarization states , respectively , and stands for the initial transverse spatial state .the sagnac interferometer is composed of three mirrors ( m4 , m5 and m6 ) and a polarizing beam splitter ( pbs2 ) .the horizontal polarization component of the input beam propagates through the interferometer in the clockwise direction , while the vertical one circulates in the counterclockwise direction , recombining again at pbs2 .a stepper motor controls the deflection angle of mirror m5 .this results in transverse momentum shifts in opposite directions for the horizontal- and vertical - polarized components , respectively .therefore , the overall effect of the interferometer on the input beam can be represented by the unitary operator : where , represents the transverse position operator and is the shift in transverse momentum , which is much smaller than the wavenumber of the light beam . after the interferometer , a mm lens ( l ) implements a fourier transform of the transverse spatial field at mirror m5 onto the detection plane , defined by the detection aperture of a single - photon avalanche detector ( apd ) .the polarization measurement setup consists of a half - wave plate ( hwp2 ) and a polarizing beam splitter ( pbs3 ) , which allows for post - selecting any linear polarization state .a sliding beam - blocking stage ( bbs ) is used for the meter measurement after post - selection .this system works like a quadrant detector .the detection aperture of the apd is 8 mm diameter , much larger than the beam . by counting photons while blocking half of the detector, we can determine the center of the beam , as will be discussed below .the cramr - rao inequality provides the lower bound on the uncertainty in the estimation of the parameter : . here is the number of repetitions of the measurement and is the fisher information , defined by [dp_j(g)/dg]^2 ] .taking into account the decoherence channels presented in the last section , the meter state after the post - selection is given by [ meter - f - dd ] _ f(g)= , where is the trace over the system ( polarization ) space .the measurement of is then given by =tr_m(_f(g)k)=. after some straightforward calculation , we have : [ deslocamento - k - dd ] = , where the sign corresponds to ( ) . herewe describe how we provide estimatives for the desired parameter for each experimental measurement outcome .the maximum likelihood estimation procedures consists of finding the value of the coupling that best matches a given experimental result in terms of the probability of occurrence .thus , the _ estimator _ for is found to be the one that maximizes the theoretical probability associated with a certain measured outcome . 
for the case of estimationbased solely on the post - selection probability , this procedure leads to solving eq.(4 ) in the main text for .analogously , for the estimation based on the meter measurements , the equation to solve is given by eq.(7 ) of the main text , with the aid of eq .. however , for the estimation based on both results , the outcome is defined by the set of numbers , where .the likelihood probability is then given by [ likelihood ] = p_r(g)^n_rp_l(g)^n_l(1-p_f(g))^n^_f where , are the theoretical probabilities of the meter to be detected at the left , right half of the detector . the estimator is then found by solving [ mle ] _g_est=0 . for the case of post - selection ,the meter remains approximately gaussian and the probabilities can be calculated as [ p - lr ] p_r(g)=p_f(g)-p_l(g)p_f(g ) , where . using the equations , , and one can finally solve eq . by numerical methods, once there is no analytical solution .as explained , besides the desired parameter ( to be estimated ) , the model incorporates the visibilities and , which are measured before the mirror angle is displaced ( ) .experimentally , the interferometer is set to the best possible alignment conditions , and the visibilities are measured when preparing the states and , subtracting photocounts due to ambient noise . for our alignment conditions , we obtained and , where this last one was expected to be very close to unity , provided the high efficiency in the wave plates .we now describe how the meter measurements are performed .since the displacement is measured through the imbalance between the two halves in the transverse plane , one has to calibrate the detector before the interferometer is misaligned ( ) to set the reference point .this is realized by matching the counts in the two halves of the detector ( within statistical fluctuations of the photocounts ) when the pre- and post - selected states are ( for which ) , once this post - selection scheme is expected to have a null displacement according to eq .for any value of the coupling . after displacing the mirror m5 , we are able to measure the new values of the intensities and by sequentially inserting and removing the bbs at the same position for the each post - selected state. however , the calibration was realized with a micrometer ( 10 micron precision , mounted on the bbs ) , which did not have the desired precision .we then added a constant to the theoretical displacement by simply replacing , to account for any experimental error in the reference point , that best describe the data set .it is expected that this constant should be very small compared to the beam size at the focus , which were confirmed by our data ( see figures [ fig : plot - pf ] and [ fig : desl - meter ] ) .
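the maximum - likelihood procedure described above can be sketched numerically : the log of the likelihood l(g ) = p_r(g)^{n_r } p_l(g)^{n_l } ( 1 - p_f(g))^{n_f } is maximized over g , and the result can be compared with the cramér - rao bound built from the fisher information quoted earlier . because eqs . ( 4 ) and ( 7 ) of the main text and several intermediate expressions are not reproduced here , the probability model below is a hypothetical stand - in with the same structure ( a post - selection probability and a gaussian meter split into left and right halves ) , and the maximization is done numerically with scipy rather than by solving the stationarity condition analytically .

import numpy as np
from math import erf
from scipy.optimize import minimize_scalar

def probs(g, eps=0.05, width=1.0):
    # hypothetical stand-in model: post-selection probability p_f, and the left/right
    # split of a gaussian meter whose centre is displaced in proportion to g
    p_f = eps ** 2 + g ** 2
    p_right_given_f = 0.5 * (1.0 + erf((g / eps) / (np.sqrt(2.0) * width)))
    p_R = p_f * p_right_given_f
    return p_R, p_f - p_R, p_f            # (p_R, p_L, p_f)

def neg_log_likelihood(g, N_R, N_L, N_fail):
    p_R, p_L, p_f = probs(g)
    return -(N_R * np.log(p_R) + N_L * np.log(p_L) + N_fail * np.log(1.0 - p_f))

# simulated counts: right/left clicks after successful post-selection, plus failures
N_R, N_L, N_fail = 540, 70, 99390
res = minimize_scalar(neg_log_likelihood, bounds=(1e-4, 0.2), method="bounded",
                      args=(N_R, N_L, N_fail))
g_est = res.x
print("maximum-likelihood estimate of g :", g_est)

# cramer-rao check: fisher information per trial F = sum_j (dp_j/dg)^2 / p_j over the
# three outcomes (right click, left click, post-selection failure)
def three_outcome_probs(g):
    p_R, p_L, p_f = probs(g)
    return np.array([p_R, p_L, 1.0 - p_f])

dg = 1e-6
p = three_outcome_probs(g_est)
dp = (three_outcome_probs(g_est + dg) - three_outcome_probs(g_est - dg)) / (2 * dg)
F = float(np.sum(dp ** 2 / p))
print("cramer-rao bound on delta g  :", 1.0 / np.sqrt((N_R + N_L + N_fail) * F))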
|
post - selection strategies have been proposed with the aim of amplifying weak signals , which may help to overcome detection thresholds associated with technical noise in high - precision measurements . here we use an optical setup to experimentally explore two different post - selection protocols for the estimation of a small parameter : a weak - value amplification procedure and an alternative method which does not provide amplification but is nonetheless shown to be more robust for parameter estimation . each technique leads approximately to the saturation of the quantum limits on estimation precision expressed by the cramér - rao bound . for both situations , we show that information on the parameter is obtained jointly from the measuring device and the post - selection statistics .
|
hsdpa , energy efficiency , power control , link adaptation .how to acquire higher throughput with lower power consumption has become an important challenge for the future wireless communication systems .`` moore s law '' renders the use of ever more powerful information and communications technology ( ict ) systems for the mass market . in order to transport this exponentially rising amount of available data to the user in an acceptable time ,the transmission rate in cellular network rises at the speed of nearly 10 times every 5 years , meanwhile the energy consumption doubles every 5 years , as illustrated in .high speed downlink packet access ( hsdpa ) has been successfully applied commercially , which brings high spectral efficiency ( se ) and enhances user experience . according to , hsdpa has introduced a new downlink physical channel called high speed physical downlink shared channel ( hs - pdsch ) , and some new features such as adaptive modulation and coding scheme ( amc ) , hybrid automatic repeat request ( harq ) , fast scheduling and multiple input multiple output ( mimo ) .thus it improves the downlink peak data rate and system throughput greatly . for the mimo technology in hsdpa ,the so - called dual stream transmit adaptive antennas ( d - txaa ) is applied , in which the node b would select single stream mode or dual stream mode based on the channel conditions . to the best of the authors knowledge , most of the previous research works focused on spectral efficient schemes in umts hsdpa and only a few literatures focused on the network energy savings . in ,the authors proposed to switch off a second carrier in dual - cell hsdpa to save energy through exploiting the network traffic variations .and the authors in investigated the possibility of cutting down the energy consumption of the wireless networks by reducing the number of active cells when the traffic load is low .these works mainly considered energy savings from a network point of view . however , there is no literature focusing on the link level energy efficient schemes in hsdpa , which is also an important aspect in green communication research . energy efficiency ( ee ) is always defined as the transmission rate divided by the total power consumption , which represents the number of information bits transmitted over unit energy consumption measured in bits / joule . in the previous works considering ee from a link level perspective , ee maximization problems are formulated and solved based on shannon capacity , in which the impact of constant circuit power is involved .it is demonstrated that joint power control and link adaptation is an effective method to improve the ee .however , practical modulation and channel coding schemes are not considered in these works and the users quality of service ( qos ) constraints are not taken into account either . moreover , as the fast power control is not available in hs - pdsch due to the functionality of amc and harq , it is hard to apply joint power control and link adaptation in the hsdpa system directly . in this paper , we will discuss the potential link level energy saving in hsdpa .first , a power model including dynamic circuit power related with antenna number is taken into account .based on this model , we propose a practical semi - static joint power control and link adaptation method to improve ee , while guaranteeing the users transmission rate constraints . 
as fast power controlis no longer supported , we propose a dual trigger mechanism to perform the method semi - statically .after that , we extend the scheme to the mimo hsdpa systems .simulation results confirm the significant ee improvement of our proposed method .finally , we give a discussion on the potential ee gain and challenges of the energy efficient mode switching between single input multiple output ( simo ) and mimo configuration .the rest of the paper is organized as follows .section 2 introduces the preliminaries .section 3 proposes the energy efficient power control and link adaptation scheme in the single input single output ( siso ) hsdpa systems .the extension of the scheme to the mimo hsdpa systems is presented in section 4 .simulation results and discussion are given in section 5 , and finally section 6 concludes this paper .in this section , preliminaries are provided .the system model and power model are introduced at first . the theoretic se - ee tradeoff is then provided to help the description .we consider the system with a single node b and a single user in this paper , but note that our work can be extended to the multi - user scenario easily .we assume that the node b has a maximum transmit power constraint and the user has a minimum modulation and coding scheme ( mcs ) constraint which can be viewed as the qos requirements .the traditional link adaptation of the hsdpa systems is illustrated as follows .first , the node b determines the transmit power of hs - pdsch .once the transmit power is determined , it can not be changed frequently , due to the existence of amc and harq . the user measures the channel quality between the node b and itself and feeds back a channel quality indication ( cqi ) to the node b. the feedback cqi corresponds to a mcs level which is always chosen to maximize the transmission rate under a certain bit error rate ( ber ). then the node b delivers data to the user with the mcs level . in this way, the transmission parameters can be adjusted according to current channel conditions and thus high throughput can be provided .d - txaa is selected as the mimo scheme for hsdpa in 3gpp specification release 7 .two antennas at the node b and the user are supported .specifically , the node b sends buffered data through either one or two independent data streams at the physical layer . at first , the user determines the preferred cqi for the single stream mode and the preferred pair of cqis for the dual stream mode . after comparing the transmission rates of the two modes ,the user can choose the better mode and corresponding cqi(s ) and then feed them back to the node b. 
thus , the node b can decide the mode and corresponding mcs level(s ) .in addition to cqi feedback , the user also reports precoding control indicator ( pci ) index which indicates the optimal precoding weights for the primary stream , based on which precoding weights for the second stream can be calculated .the precoding weights are defined as follows : .\\ \end{array}\ ] ] power consumption model here is based on in order to capture the effect of transmit antenna number .denote the number of active transmit antennas as and transmit power as .the total power consumption of node b is divided into three parts .the first part is the power conversion ( pc ) power accounting for the power consumption in the power amplifier and related feeder loss , in which is the pc efficiency .the second part is the dynamic circuit power which corresponds to antenna number and can be given by : representing circuit power consumption for radio frequency(rf ) and signal processing . the third part is the static power related to cooling loss , battery backup and power supply loss , which is independent of and . the total power consumption can be modeled as before introducing our proposal , we need to have a discussion about the theoretical basis of the energy efficient power control and link adaptation scheme . according to the shannon capacity , se and ee of a siso additive white gaussian noise ( awgn ) channel can be expressed as and respectively , where and represent system bandwidth and the noise density respectively .it is obvious from ( [ eq5 ] ) that the transmit power is exponentially increasing as a function of the se with the assumption of constant bandwidth and noise power . in other words ,higher se incurs significant increase of energy consumption .in fact , ee is monotonically decreasing with se if only the transmit power is considered .thus in order to improve ee , node b should reduce the transmit power .however , the existence of practical and breaks the monotonic relation between se and ee , so balancing the , and is also important to increase ee .figure 1 shows the ee - power and se - power relations in an awgn channel with the theoretical shannon capacity formula .as indicated in figure 1 , there exists a globally optimal transmit power for ee .moreover , based on the shannon capacity , we can obtain the explicit close - form solution of the globally optimal ee and optimal transmit power , and some examples in mimo systems can be found in .however , one may argue that whether the relation between ee and se still satisfies in the hsdpa systems when practical amc and harq are taken into account .fortunately , we confirm this principle through the hsdpa link level simulation and the result with siso channels based on table g is shown in figure 2 .the mimo systems with d - txaa have the similar relations , which is shown later in this paper .although this trend is still fulfilled , the challenge in the hsdpa systems is that the explicit close - form solution to obtain the optimal transmit power and corresponding mcs level is no longer available when practical amc and harq are applied here . to meet this challenge, we will solve this problem through a novel ee estimation mechanism in the rest of this paper . besides, the data rate constraints are considered due to the users qos requirements in practice . 
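the se - ee tradeoff discussed above can be reproduced with a short numerical sketch that combines the shannon rate with the total power model of eq . ( [ eq4 ] ) and locates the ee - optimal transmit power by a grid search . all parameter values ( power - model constants , channel gain , noise density ) are illustrative assumptions , not the simulation settings of this paper .

import numpy as np

B = 5.0e6                                   # bandwidth, Hz
N0 = 10 ** ((-174.0 - 30.0) / 10.0)         # noise density, W/Hz (-174 dBm/Hz)
eta, P_dyn, P_stat, n_ant = 0.38, 50.0, 100.0, 1   # assumed power-model constants (W)
gain = 1.0e-10                              # assumed channel power gain

def energy_efficiency(p_tx):
    rate = B * np.log2(1.0 + gain * p_tx / (N0 * B))       # shannon rate, bit/s
    p_total = p_tx / eta + n_ant * P_dyn + P_stat          # total consumed power, W
    return rate / p_total                                  # bit/joule

p_grid = np.linspace(0.01, 40.0, 4000)
ee = np.array([energy_efficiency(p) for p in p_grid])
i_opt = int(np.argmax(ee))
print(f"ee-optimal transmit power ~ {p_grid[i_opt]:.2f} W, ee ~ {ee[i_opt]:.3e} bit/J")

because the circuit and static terms dominate the denominator at low transmit power while the logarithmic rate saturates at high power , the ee curve has an interior maximum , which is the behaviour shown in figure 1 .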
according to the constraints, we should find the feasible transmit power region first , and then determine the transmit power with constrained optimal ee based on the feasible region .more details will be given in the next section .a semi - static power control and link adaptation method is proposed in this section to improve the ee while guaranteeing the mcs level constraint .different from the previous energy efficient schemes which are only applicable for the shannon capacity , our proposed scheme determines the energy efficient transmit power and mcs level according to a practical ee estimation mechanism , which is based on cqi feedback .furthermore , we propose a semi - static dual trigger to control the transmit power and mcs level configuration , which is practical in the hsdpa systems .figure 3 shows the operational flowchart of the proposed power control and link adaptation procedure at the node b. as long as cqi and acknowledgement / negative acknowledgement(ack / nack ) information are received by the node b , node b can estimate the ee and the required transmit power for each mcs level based on the estimation mechanism .then node b can determine the mcs level and transmit power with maximum ee .after that , the node b will determine whether they need to be configured immediately or not , where a semi - static dual trigger mechanism is employed .if it is triggered , the derived optimal transmit power and corresponding optimal mcs level will be reconfigured . in this way , the scheme is realized in a semi - static manner .there are two benefits here .for one thing , the semi - static feature makes the scheme practical in hsdpa which does not support inner loop power control . for another , the cost of signaling can be reduced significantly through controlling the power reconfiguration cycle length adaptively . in the following subsections, we will introduce the scheme in details .we propose the addition of an ee estimation mechanism to the traditional link adaptation operation , whereby it employs the mcs table to estimate the ee and required transmit power for different mcs levels based on cqi feedbacks , and then determines the ee optimal transmit power and mcs level .the mcs table here is defined as the mapping relationship between hs - pdsch received signal to interference and noise ratio ( sinr ) threshold and the corresponding feedback cqi index , based on the initial ber target .each cqi index corresponds to a dedicated mcs level in hsdpa .an example of table g is shown in figure 4 . at first, we need to estimate the transmit power required for different mcs levels . according to , the sinr of hs - pdsch is denoted as where , , , , and denote the spreading factor , hs - pdsch power , the instantaneous path gain , the channel orthogonality factor , the total received power from the serving cell and the inter - cell interference , respectively . as the link level simulation has captured the effect of the inter - code interference , according to ( [ eq7 ] ) , received sinr is proportional to transmit power assuming that the interference is constant . 
by taking the logarithm on both sides of ( [ eq7 ] ), we can find that the difference between two transmit power and is equal to the difference between the two sinr and derived from them : where transmit power is measured in dbm and sinr is measured in db .after replacing the actual sinrs in ( [ eq8 ] ) by the sinr thresholds in the mcs table , we can utilize the equation to estimate the transmit power required for the mcs levels .in other words , we propose to approximate the difference between the transmit power required for two mcs levels as the difference between the two s sinr thresholds .for example , assume that the current transmit power is and the feedback cqi index is .for an arbitrary cqi index denoted by , the corresponding sinr threshold is denoted as and the mcs level denoted as .we can estimate the transmit power required for mcs level as follows : the offset here is to deal with the impact of channel variations which can be determined based on the feedback ack / nack information from the user side . in the simplest case , can be set to zero and ( [ eq9 ] ) can be rewritten as : note that transmit power is measured in dbm and sinr threshold is measured in db in ( [ eq9 ] ) and ( [ eq10 ] ) .one may argue that the adjustment would cause the variation of ber , and then affect the average number of the retransmissions , which may cause the energy wasting .this is not the case .the same ber can be guaranteed for the current and adjusted power level and mcs level , which can be explained as follows .note that the mcs table at both the bs and the user is based on a fixed ber target .therefore , it is obvious that the current power level and feedback cqi can guarantee the ber . during the adjustment , to make sure the same ber can be guaranteed , the transmit power and the mcs level are jointly adjusted .that is to say , when the transmit power is decreased , the corresponding mcs level should also be decreased . as the same ber is guaranteed in this way , the same retransmission probabilitycan also be guaranteed , and the average number of the retransmissions will not be affected . in a word , our scheme would work well without affecting the mechanism of the retransmission , which is practical in real systems .then the estimation of ee for the mcs level is given by : where represents the transport block size of the mcs level , and is equal to two milliseconds and represents the duration of one tti for hsdpa .then we compare the estimated ee for each mcs level , determine the optimal cqi index by the corresponding mcs level is denoted as and the required transmit power denoted as .as the minimum mcs level of the user is and the maximum transmit power of the node b is , the constrained optimal mcs level and the optimal transmit power can be given by : the same estimation mechanism above can be employed to determine the corresponding minimum transmit power and the corresponding maximum mcs level .correspondingly , the estimated ee for the optimal mcs level and transmit power is denoted as . in our proposed algorithm ,only the feedback cqi and ack / nack information are necessary for node b to do the ee estimation and energy efficient power determination. 
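the estimation mechanism described above can be sketched as follows : for every candidate mcs level the required transmit power is the current power shifted by the difference of the sinr thresholds in db ( plus the ack / nack - based offset ) , the ee is the transport block size divided by the tti times the total consumed power , and the node b keeps the ee - optimal level that respects the maximum power and minimum mcs constraints . the threshold and transport - block - size tables below are placeholders rather than the actual table g , and the power - model constants are assumed values .

# placeholder tables (not the actual cqi table) and assumed power-model constants
SINR_THRESHOLD_DB = [-4.5, -2.5, -0.5, 1.5, 3.5, 5.5, 7.5, 9.5, 11.5, 13.5]
TBS_BITS = [137, 233, 317, 461, 650, 792, 1262, 1483, 2279, 2583]

def total_power_watt(p_tx_dbm, eta=0.38, p_dyn=50.0, p_stat=100.0, n_ant=1):
    return 10 ** ((p_tx_dbm - 30.0) / 10.0) / eta + n_ant * p_dyn + p_stat

def pick_ee_optimal_mcs(cqi_fb, p_tx_dbm, offset_db=0.0,
                        p_max_dbm=43.0, cqi_min=0, tti=2e-3):
    # for each candidate mcs: required power = current power + threshold difference
    # (+ offset); ee = tbs / (tti * total power); keep the constrained optimum
    best = None
    for k, thr in enumerate(SINR_THRESHOLD_DB):
        p_req_dbm = p_tx_dbm + (thr - SINR_THRESHOLD_DB[cqi_fb]) + offset_db
        if p_req_dbm > p_max_dbm or k < cqi_min:
            continue
        ee = TBS_BITS[k] / (tti * total_power_watt(p_req_dbm))
        if best is None or ee > best[2]:
            best = (k, p_req_dbm, ee)
    return best

cqi_opt, p_opt_dbm, ee_opt = pick_ee_optimal_mcs(cqi_fb=7, p_tx_dbm=40.5, cqi_min=2)
print(f"reconfigure to cqi {cqi_opt}, p = {p_opt_dbm:.1f} dBm, "
      f"estimated ee = {ee_opt:.1f} bit/J")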
however , the power configuration can not be performed instantaneously due to the following two reasons .for one thing , the support for fast amc and harq functionality in hsdpa does not allow the transmit power change frequently .for another , in order to guarantee the accuracy of the cqi measurement and user demodulation especially for high order modulation , node b should inform the user of the transmit power modifications through the signalling called measurement power offset ( mpo ) in radio resource control ( rrc ) layer when the transmit power is reconfigured .if the configuration performs frequently , the signaling overhead is significant . therefore , we propose a semi - static trigger mechanism to control the procedure .assume that the ee derived from the last transmission is , define relative ee difference as follows : in our proposed scheme , the minimum trigger interval is set to be , and the maximum trigger interval to be which satisfies .a timer is used to count the time from the last power configuration and the timing is denoted as .first , if both are satisfied , the proposed energy efficient power configuration and corresponding mcs reselection process is triggered .this event trigger can guarantee ee gain and also avoid frequent power configuration . on the other hand ,if is satisfied , the power configuration process must be triggered regardless of the value of .this periodical trigger ensures that the scheme is always active and gurantees the ee gain .if the power configuration is triggered , the timer must be reset to zero .the whole trigger mechanism above is robust as its parameters can be configured adaptively according to actual systems .it can be implemented practically in hsdpa and signaling overhead can be reduced .as the mimo technique called d - txaa can be applied in hsdpa , we propose a modified power control and link adaptation scheme which is applicable to mimo hsdpa systems in this section .when mimo is configured , the node b will transmit data to the user through either single stream or dual streams in the physical layer .if the former is selected , the proposed scheme in the previous section still works well and the estimated ee is also given by ( [ eq11 ] ) .if the latter is selected , only the ee estimation mechanism in the node b need to be modified .in this situation , the node b estimates the sum ee of the two streams instead of a single stream . 
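the semi - static dual trigger introduced above ( before the mimo extension that continues below ) can be written as a small state machine : an event trigger fires when the estimated relative ee improvement exceeds a threshold and at least the minimum interval has elapsed , and a periodic trigger fires whenever the maximum interval is reached ; either one resets the timer . the numerical values below are illustrative , not the settings used in the simulations .

class DualTrigger:
    # event trigger: relative ee improvement above a threshold and at least t_min
    # elapsed; periodic trigger: t_max elapsed; either one resets the timer
    def __init__(self, delta_ee_threshold, t_min, t_max):
        assert t_min < t_max
        self.delta_ee_threshold = delta_ee_threshold
        self.t_min, self.t_max = t_min, t_max
        self.timer = 0.0

    def should_reconfigure(self, dt, ee_last, ee_estimated_optimum):
        self.timer += dt
        delta_ee_rel = abs(ee_estimated_optimum - ee_last) / ee_last
        fire = ((delta_ee_rel > self.delta_ee_threshold and self.timer >= self.t_min)
                or self.timer >= self.t_max)
        if fire:
            self.timer = 0.0
        return fire

# illustrative settings: check every 2 ms tti, reconfigure no more often than every
# 20 ms, at least every 500 ms, and only for an estimated ee gain above 10 percent
trigger = DualTrigger(delta_ee_threshold=0.10, t_min=0.02, t_max=0.5)
for tti_index in range(12):
    if trigger.should_reconfigure(dt=2e-3, ee_last=4.0e3, ee_estimated_optimum=4.6e3):
        print("reconfigure at tti", tti_index)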
as transmit poweris always shared equally between the two streams , transmit power modifications of the two streams must be the same during the reconfiguration .according to ( [ eq8 ] ) , the corresponding sinr threshold difference between the reconfigured cqi and the previous one is also the same for the two streams .for example , denote the feedback cqi index of the first stream as and the second stream .the mcs levels they indicated are and respectively .if the corresponding cqi index for the first stream is adjusted to and that for the second stream is adjusted to when the transmit power is reconfigured , the mcs levels used will be changed into and respectively .denote the corresponding sinr threshold for cqi index , , and as , , and , respectively , the following equation must be satisfied : the estimation of transmit power required for the new mcs level pair and can be given by the estimation of the sum ee can be given by through comparing the sum ee among all possible mcs level pairs of the two streams , the optimal transmit power and the corresponding mcs level pair for dual streams is selected .as the mode switching between single stream and dual streams is done at the user side based on maximizing se , one may argue that the chosen mode may not be the most energy efficient one .interestingly , as the total power consumption is the same for the two mode according to the power model given by ( [ eq4 ] ) , the choice made at the user side can lead to the most energy efficient mode , which can be explained as follows . comparing ( [ eq11 ] ) with ( [ eq19 ] ), we can know that the denominators of the expressions on the right side are the same , so the value of estimated ee is determined by the numerators .thus if the sum transport block sizes of the preferred mcs levels for dual stream mode is greater than that for single stream mode , dual stream mode is selected by the user , and vice versa .so the energy efficient criterion for mode selection between single stream and dual streams is the same as maximizing se criterion .in this section , we evaluate the performance of the proposed algorithm in different scenarios and give some discussions on mode switching between mimo and simo configuration along with our proposed scheme according to hsdpa link level simulation results .a multi - path rayleigh fading channel model and path loss model of pa3 is considered .bandwidth is 5mhz , and the duration of a subframe is 2ms . the parameters of power model are set as , and .the maximum transmit power is set to be 43dbm .figure 5 to figure 7 depict the performance of the proposed semi - static power control method .proposed energy efficient power control used in every subframe is viewed as a performance upper bound and the traditional scheme as a baseline where a transmit power of 40.5dbm is configured .if the energy efficient scheme is used , the transmit power will be configured based on the ee estimation as long as user s feedback is available . 
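referring back to the dual - stream estimation described at the beginning of this paragraph , the sketch below evaluates the sum ee of the two streams when their common transmit power is shifted by the same amount : both streams ' cqis are moved by the same sinr - threshold offset , which is one possible reading of the equal threshold - difference condition given above . the table values and the power - model constants are placeholders .

# placeholder tables; n_ant = 2 because both transmit chains are active in mimo
SINR_THRESHOLD_DB = [-4.5, -2.5, -0.5, 1.5, 3.5, 5.5, 7.5, 9.5, 11.5, 13.5]
TBS_BITS = [137, 233, 317, 461, 650, 792, 1262, 1483, 2279, 2583]

def total_power_watt(p_tx_dbm, eta=0.38, p_dyn=50.0, p_stat=100.0, n_ant=2):
    return 10 ** ((p_tx_dbm - 30.0) / 10.0) / eta + n_ant * p_dyn + p_stat

def dual_stream_sum_ee(cqi_1, cqi_2, p_tx_dbm, delta_db, tti=2e-3):
    # shifting the common transmit power by delta_db moves both streams' achievable
    # sinr by the same amount, so each stream moves to the mcs whose threshold is
    # still met
    def shifted(cqi):
        target = SINR_THRESHOLD_DB[cqi] + delta_db
        return max((k for k, thr in enumerate(SINR_THRESHOLD_DB) if thr <= target),
                   default=0)
    k1, k2 = shifted(cqi_1), shifted(cqi_2)
    sum_ee = (TBS_BITS[k1] + TBS_BITS[k2]) / (tti * total_power_watt(p_tx_dbm + delta_db))
    return k1, k2, sum_ee

for delta in (-6.0, -3.0, 0.0, 2.0):
    print(delta, dual_stream_sum_ee(cqi_1=8, cqi_2=5, p_tx_dbm=40.5, delta_db=delta))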
hereparameters of the semi - static trigger are set as , and .figure 5 shows that a considerable ee gain of our proposed semi - static power control scheme can be acquired over the baseline .furthermore , the proposed scheme s ee performance is comparable with the upper bound .figure 6 demonstrates that transmit power reconfiguration frequency is reduced compared with the upper bound algorithm , thus signaling overhead is significantly reduced , due to the proposed dual trigger .the event trigger which sets a threshold for the gap and the periodical trigger also ensures ee gain .figure 7 also evaluates the performance of the algorithm under different user speed .we can find that the ee gain would decline with increasing user moving speed , and the reason is explained that when the channel fluctuation becomes faster because of increased moving speed , ee optimal power changes more quickly . however , our proposed power configuration can not track this rapid change due to the semi - static characteristic , so the ee gain decreases , but a considerable ee gain can still be observed at high user speed . figure 8 and figure 9 show the impact of path loss and minimum cqi constrains on ee gain of our proposed scheme .each minimum cqi constraint corresponds to a minimum mcs constraint .user speed is set as 3 km / h .when the minimum cqi constraints are not so tight , we can see that the ee gain of the proposed algorithm is similar in figure 8 .ee gain decreases when user moves away from node b and the reason is that the optimal transmit power increases and gradually approaches the transmit power configured in the baseline . from figure 9, we can also observe that the looser the minimum cqi constrain is , the larger ee gain we can acquire .figure 10 gives ee comparison between d - txaa and simo configuration under different transmit power in hsdpa , and figure 11 illustrates the simulation results for different ee performances between hsdpa - simo and hsdpa - mimo systems by employing our proposed power control method . from figure 10 , we can see that there exists an ee optimal transmit power for each mode .another observation is that ee performance of simo mode is better than mimo when transmit power is not large , and vice versa .the reason is explained as follows . the total power can be divided into three parts : pc power , transmit antenna number related power , and transmit antenna number independent power .when transmit power is large , dominates the total power ( the denominator of the ee ) and is negligible . because the mimo mode can acquire higher capacity , higher ee is available for this mode in the large transmit power scenario .when transmit power is low , the ratio of to the total power increases , and leads to lower ee for mimo compared with simo .figure 11 provides insights on the impact of the distance on the mode switching .when the distance between the user and the node b is getting larger , mimo is better , and vice versa .this is because in the long distance scenario , the first part increases and dominates the total power , then more active antenna number is preferred . from figure 10 and figure 11, we can conclude that significant energy saving can be further acquired when adaptive mode switching between simo and mimo is applied. 
however , adaptive mode switching may be difficult due to some practical reasons .firstly , when simo mode is configured , parameters like pci and cqi for the second stream are not available because the second antenna is switched off to save energy .thus , how to estimate the available ee for d - txaa is a challenge . secondly , the transmit antenna number information should be informed through the system information , so the mode switching will impact all users in the cell and bring huge signaling overhead . to sum up , the protocol may need to be redesigned to utilize the potential ee improvement with mode switching .nevertheless , the node b can decide the active antenna number according to the load of the systems , which should be realized in the network level and is beyond the scope of this paper .in this paper , we investigate the impact of transmit power and mcs level configurations on ee in hsdpa and propose an energy efficient semi - static joint power control and link adaptation scheme .we extend the proposed scheme to the mimo hsdpa scenario .simulation results prove that the ee gain is significant and the method is robust .finally , we have a discussion about the potential ee gain of mode switching between simo and mimo configuration along with the practical challenging issues .hsdpa , high speed downlink packet access ; mcs , modulation and coding scheme ; mimo , multiple input multiple output ; simo , single input multiple output ; ict , information and communications technology ; hs - pdsch , high speed physical downlink shared channel ; amc , adaptive modulation and coding scheme ; harq , hybrid automatic repeat request ; d - txaa , dual stream transmit adaptive antennas ; qos , quality of service ; pc , power conversion ; ber , bit error rate ; siso , single input single output ; cqi , channel quality indicator ; pci , precoding control indicator ; rf , radio frequency ; se , spectral efficiency ; ee , energy efficiency ; awgn , additive white gaussian noise ; sinr , signal to interference and noise ratio ; ack , acknowledgement ; nack , negative acknowledgement ; mpo , measurement power offset ; rrc , radio resource control .the authors declare that they have no competing interests .this work is supported by huawei technologies , co. ltd . , china .
|
high speed downlink packet access ( hsdpa ) has been successfully applied in commercial systems and improves user experience significantly . however , it incurs substantial energy consumption . in this paper , we address this issue by proposing a novel energy efficient semi - static power control and link adaptation scheme in hsdpa . through estimating the ee under different modulation and coding schemes ( mcss ) and corresponding transmit power , the proposed scheme can determine the most energy efficient mcs level and transmit power at the node b. and then the node b configure the optimal mcs level and transmit power . in order to decrease the signaling overhead caused by the configuration , a dual trigger mechanism is employed . after that , we extend the proposed scheme to the multiple input multiple output ( mimo ) scenarios . simulation results confirm the significant ee improvement of our proposed scheme . finally , we give a discussion on the potential ee gain and challenge of the energy efficient mode switching between single input multiple output ( simo ) and mimo configuration in hsdpa .
|
in this article we describe an apparatus designed for the continuous - frequency measurement of low temperature electromagnetic absorption spectra in the microwave range . the motivation to develop this instrument comes from a desire to resolve , in great detail , the microwave conductivity of high - quality single crystals of high - t cuprate superconductors . however, the technique we describe should find a wealth of applications to other condensed matter systems , providing a means to explore the dynamics of novel electronic states with unprecedented resolution .the possibilities include : the physics of the metal insulator transition , where charge localization should lead to frequency scaling of the conductivity ; electron spin resonance spectroscopy of crystal field excitations ; ferromagnetic resonance in novel magnetic structures ; and cyclotron resonance studies of fermi surface topology .in addition to the cuprate superconductors , other natural possibilities in the area of superconductivity include heavy fermion and ruthenate materials , as well as high resolution spectroscopy of low - frequency collective excitations such as josephson plasmons and order - parameter collective modes .early microwave measurements on high quality single crystals of yba showed that cooling through t k decreased the surface resistance very rapidly , by four orders of magnitude at 2.95 ghz , reaching a low temperature value of several . resolving this low absorption in the microwave region has provided a technical challenge that has been successfully met over most of the temperature range below t by the use of high precision cavity - perturbation techniques. in these experiments , the sample under test is brought into the microwave fields of a high quality - factor resonant structure made from superconducting cavities or low - loss dielectric pucks .a limitation of such techniques is that the resonator is generally restricted to operation at a single fixed frequency , therefore requiring the use of many separate experiments in order to reveal a spectrum .furthermore , a very general limitation of the cavity perturbation method is that the dissipation of the unknown sample must exceed the dissipation of the cavity itself in order to be measured with high precision a very strong demand for a high quality superconductor in the limit .the measurement of the residual absorption in superconductors is challenging at any frequency : in the case of infra - red spectroscopy the problem becomes that of measuring values of reflectance that are very close to unity .the challenge lies in the calibration of the measurement , and in both microwave and infra - red work , one relies on having a reference sample of known absorption to calibrate the loss in the walls of the microwave resonator or the infra - red reflectance . despite these limitations , resonant microwave techniques are the only methods with sufficient sensitivity to measure the evolution of the microwave absorption over a wide temperature range . in a recent effort by our group ,five superconducting resonators were used to map a coarse conductivity spectrum from 1 ghz to 75 ghz in exceptionally clean samples of yba from 4 k to 100 k. this work revealed low temperature quasiparticle dynamics inconsistent with simple models of -wave superconductivity , whose key signatures appear in the frequency dependence of the conductivity. 
the failure of simple theories to give a complete description of the temperature evolution and shape of the conductivity spectra in the best quality samples has driven us to develop the technique described here , with the result that we can now resolve low - temperature microwave conductivity spectra in unprecedented detail .bolometric detection is a natural method for measuring the surface resistance spectrum over a continuous frequency range . for any conductor ,the power absorption in a microwave magnetic field is directly proportional to the surface resistance : where is the r.m.s .magnitude of the tangential magnetic field at the surface . as a result, a measurement of the temperature rise experienced by a weakly - thermally - anchored sample exposed to a known microwave magnetic field directly gives . to enhance rejection of spurious temperature variations ,the rf power should be amplitude modulated at low frequency and the resulting temperature oscillations of the sample detected synchronously .we note that as part of a pioneering study of superconducting al , a similar bolometric microwave technique was used by biondi and garfunkel to examine the detailed temperature dependence of the superconducting gap frequency. this earlier experiment had the simplifying advantage of measuring the absorption by a large waveguide made entirely from single crystalline al .unfortunately , in more complicated materials such as the multi - elemental cuprate superconductors , the best quality samples can only be produced as small single crystals .more recently , frequency - scanned bolometric measurements have proven useful in probing collective excitations in small samples of high - t cuprates at frequencies above 20 ghz where the absorption is larger and much easier to measure. these techniques , however , have not focussed on the challenge of resolving the low temperature absorption of high - quality single crystals across a broad frequency range .a characteristic feature of many electronic materials of current interest is reduced dimensionality , which gives rise to highly anisotropic transport coefficients . when making microwave measurements , a well - defined geometry must be chosen in order to separate the individual components of the conductivity tensor , and also to ensure that demagnetization effects do not obscure the measurement .one particularly clean approach that has been widely used places the sample to be characterized near a position of high symmetry in a microwave enclosure , in the quasi - homogeneous microwave magnetic field near an electric node . often , single crystal samples grow naturally as platelets having a broad plane crystal face and thin -axis dimension , and demagnetization effects are minimized if the broad face of the sample is aligned parallel to the field . 
in response to the applied rf magnetic field ,screening currents flow near the surface of the sample along the broad or face and must necessarily flow along the direction to complete a closed path .in some cases , it is desirable to work with samples that are very thin , rendering the -axis contribution negligible .alternatively , by varying the aspect ratio of the sample by either cleaving or polishing , one can make a series of measurements to disentangle the different crystallographic contributions , without having to change samples .for example , in the cuprate superconductors , the conductivity parallel to the two dimensional cuo plane layers can be several orders of magnitude larger than that perpendicular to the weakly - coupled planes .typical as - grown crystal dimensions are mm . for this aspect ratio , experiments where a sample was cleaved into many pieces showed that the -axis contribution is unimportant. we note here that all measurements presented in this article employ the low - demagnetization sample orientation discussed above .the broadband surface - resistance measurement technique we describe in the following sections provides three distinct technical advances over previous bolometric approaches : a uniform microwave field configuration in the sample region that permits the separation of anisotropic conductivity components ; the use of an _ in - situ _ reference sample that calibrates the microwave field strength at the sample absolutely ; and very high sensitivity afforded by the choice of a resistive bolometer optimized for the low - temperature range and mounted on a miniaturized thermal stage .these features of our apparatus permit precision measurements of the absolute value of in very - low - loss samples down to 1.2 k and over the frequency range 0.1 - 21 ghz .we will briefly demonstrate that this range captures the key frequency window for long - lived nodal quasiparticles in extremely clean samples of yba , and to further demonstrate the performance and versatility of the apparatus , we also show an example of zero - field electron - spin - resonance spectroscopy . a simple thermal model consisting of a heat capacity thermally isolated from base temperature by a weak thermal link of conductance .the resistive bolometer is thermally anchored to and monitors its temperature , , which is elevated above by a constant current bias passing through the bolometer .the absorption of incident signal power causes heating in , detectable as a temperature rise through a change in the voltage .,width=153 ]it is instructive to calculate the minimum power detectable by a simple thermal stage , the temperature of which is monitored by a resistive bolometer , as depicted in fig .[ fig : model]. the bolometer has a resistance and is in thermal equilibrium with a larger heat capacity representing contributions from the sample , its holder , and the weak thermal link .this combination is weakly connected , via a thermal conductance , to a heat sink maintained at base temperature .the bolometer is heated to its operating temperature by a bias power , where is the fixed bolometer bias current . for this analysiswe do not consider feedback effects , although they are very important in the special case of transition edge bolometers. 
as a result , we consider a configuration where provides only modest self - heating of the bolometer , such that .an incident signal power raises the temperature by an amount , causing a change in the readout voltage across the bolometer .we then define a threshold detectable signal level that is equal to the thermal noise generated in a bandwidth in the bolometer , .it is then possible to write an expression for the minimum detectable power in terms of the dimensionless sensitivity of the bolometer , typically of the order of unity , the noise power , and the bolometer bias power : from this expression one immediately sees that it is desirable to minimize both the bias and noise powers , within the combined constraints of maintaining the bolometer temperature at and keeping the thermal response time fixed at a suitably short value . by miniaturization of the sample holder , the bias power required to reach a given temperaturecan be considerably reduced , while at the same time maintaining a practical thermal time constant .the noise power is limited intrinsically by the thermal ( johnson ) noise from the bolometer resistance at temperature .however , most real sensors show substantial excess noise , and the cernox 1050 sensor used in the present implementation of our experiment is no exception , showing approximately 40 db of excess noise in the presence of a 1.3 bias current .this completely accounts for the discrepancy between the minimum detectable power at 1.3 k of 17 fw calculated using eq .[ eqn : minpower ] assuming only johnson noise , and the experimentally determined value of 1.5 pw .for our method of bolometric detection to be most useful , it is necessary to deliver microwaves to the sample across a broad range of frequency and , at the same time , not only accurately control the polarization of the microwave field at the sample , but also maintain a fixed relationship between the field intensity at the sample under test and the field intensity at the reference sample .essential to this is the design of the microwave waveguide .we use a custom - made transmission line , shown in cross - section in fig .[ fig : endwall ] , that consists of a rectangular outer conductor that measures 8.90 mm 4.06 mm in cross - section and a broad , flat centre conductor , or septum , that measures 4.95 mm 0.91 mm .this supports a tem mode in which the magnetic fields lie in the transverse plane and form closed loops around the centre conductor , setting a fixed relationship between the microwave field strengths on either side of the septum . the line is terminated by shorting the centre conductor and outer conductor with a flat , metallic endwallthis enforces an electric field node at the end of the waveguide , adjacent to which we locate the small platelet sample and reference , with their flat faces parallel and very close to the endwall .the broad centre conductor ensures spatially uniform fields over the dimensions of the sample , making it possible to drive screening currents selectively along a chosen crystallographic direction .the electrodynamics of the rectangular waveguide are discussed in more detail in appendix [ app : modes ] . 
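since eq . [ eqn : minpower ] itself is not reproduced above , the sketch below simply rederives an estimate of the minimum detectable power from the simple thermal model : a signal power raises the stage temperature by p / g , which changes the readout voltage by i ( dr / dt ) p / g , and p_min is the power for which this change equals the johnson noise voltage sqrt(4 k_b t r delta f ) . the sensor and stage parameters are assumed order - of - magnitude values , not measured cernox characteristics , and the factor accounting for the reported 40 db of excess noise is applied only as a rough scaling .

import numpy as np

k_B = 1.380649e-23   # J/K

def p_min_johnson(T, R, dRdT, I_bias, G, bandwidth=1.0):
    # signal power P gives dT = P/G and a readout change dV = I_bias*|dR/dT|*dT;
    # P_min is the power for which dV equals the johnson voltage sqrt(4 k_B T R df)
    v_noise = np.sqrt(4.0 * k_B * T * R * bandwidth)
    return v_noise * G / (I_bias * abs(dRdT))

# assumed order-of-magnitude parameters near 1.3 K
T, R, dRdT = 1.3, 3.0e3, -6.0e3        # K, ohm, ohm/K
I_bias, G = 1.3e-6, 1.0e-6             # A, W/K
p0 = p_min_johnson(T, R, dRdT, I_bias, G)
print(f"johnson-noise-limited p_min ~ {p0:.1e} W in a 1 Hz bandwidth")
print(f"with ~40 dB excess noise (x100 in voltage) ~ {100.0 * p0:.1e} W")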
a strong variation in the power delivered to the sample as a function of frequency arises due to standing waves in the microwave circuit .in order to properly account for this , we have incorporated an _ in - situ _ normal - metal reference sample of known surface resistance that acts as an absolute power meter .this second sample is held in a position that is electromagnetically equivalent to the that of the test sample , on a separate thermal stage .schematic cross - section of the terminated coaxial line region showing the sample and reference materials suspended on sapphire plates in symmetric locations in the rf magnetic field .the sapphire plate is epoxied into the bore of a quartz tube which thermally isolates it from the copper holder , fixed at the temperature of the 1.2 k helium bath . ]scale drawing of the assembled apparatus indicating the details of the vacuum can and sample region .the alloy reference sample is not visible in this cut - away view . ]one of the challenges of cryogenic microwave absorption measurements on small , low - loss samples is the design of the sample holder , which must measure and regulate the sample temperature , and yet contribute negligible dissipation compared to the sample .a widely used technique that satisfies these requirements is that of a sapphire hot - finger in vacuum, allowing the thermometry to be electromagnetically shielded from the microwave fields . in our apparatus, the sample holder is inserted through a hole that is beyond cut off for all operating frequencies . for ac calorimetric measurements ,the design of the thermal stage is critical in setting the sensitivity of the system .the experimental arrangement is shown schematically in fig .[ fig : endwall ] with the sample under test fixed on the end of a 100 thick sapphire plate using a tiny amount of vacuum grease. the plate extends 17 mm from the sample to where it is epoxied into the bore of a 1.2 mm diameter quartz glass tube that acts as a thermal weak - link to the liquid helium bath .a cernox thermometer and a 1500 surface - mount resistor used as a heater are glued directly onto the sapphire plate with a very thin layer of stycast 1266 epoxy, ensuring intimate thermal contact with the sapphire and hence the sample .we use 0.05 mm diameter nbti superconducting electrical leads to the thermometer and heater for their very low thermal conductance , which is in parallel with the quartz weak - link .the microwave circuit is powered by a hewlett - packard 83630a synthesized sweeper ( 0.01 - 26.5 ghz ) combined with either an 8347a ( 0.01 - 3 ghz ) or 8349b ( 2 - 20 ghz ) amplifier , generating up to 23 dbm of rf power across the spectrum .approximately 2 m of 0.141 stainless steel coaxial line delivers power from the amplifier down the cryostat to the vacuum can where it is soldered into the rectangular line ( see fig . 
[fig : fullprobe ] ) .the r.m.s .microwave magnetic field amplitude at the samples is typically oersteds , which generates modulations in the sample - stage temperature for a typical high quality 1 mm high - t sample having a low frequency value of 1 .an assembled view of the low temperature apparatus including the microwave transmission line and the positions of the sample and reference holders is provided in fig .[ fig : fullprobe ] .the sapphire plates that support both the test sample and reference sample are inserted through 4 mm cut - off holes into the microwave magnetic field .the rectangular coaxial line consists of a centre conductor made from a 0.91 mm thick copper plate soldered at one end onto the centre conductor of the 0.141 semi - rigid coaxial line , and at the other end into the wall of the copper cavity that comprises the outer conductor of the transmission line . to minimize the rf power dissipated in the low temperature section of transmission line, the entire surface exposed to microwave radiation , including the final 15 cm of semi - rigid coaxial line , was coated with pbsn solder , which is superconducting below 7 k. during experiments , the vacuum can is completely immersed in a pumped liquid helium bath having a base temperature of 1.2 k. the selection of a reference material for low - frequency work must be made carefully .initially , we chose samples cut from commercially available stainless - steel shim stock , a common choice in infrared spectroscopy work .calibration experiments produced erratic results which were eventually traced to the presence of anisotropic residual magnetism in the stainless steel .subsequently , we produced our own reference material , choosing an ag : au alloy ( 70:30 at.% made from 99.99% pure starting materials ) , because it exhibits a very simple phase diagram that guarantees homogeneity. by using an alloy , we ensure that the electrodynamics remain local at microwave frequencies , avoiding the potential complications arising from the anomalous skin effect. our sample was cut from a 93 m thick foil having a measured residual dc resistivity value of =5.28 cm , constant below 20 k. while the thermal stage for the reference sample is similar in design to that used for the sample under test , it uses a higher conductance stainless steel thermal weak - link ( in place of the quartz tube ) , since the dissipation of the normal metal calibration sample is orders of magnitude larger than that of a typical superconducting sample .because the apparatus was implemented as a retro - fit to an existing experiment , the reference thermal stage had to be mounted directly onto the body of the transmission line structure .although the cavity walls are superconducting to reduce their absorption , we use a nylon spacer to thermally isolate the base of the reference from the transmission line to avoid direct heating .the heat - sinking of the reference base to the helium bath is made using a separate copper braid that is not visible in fig .[ fig : fullprobe ] .as considered previously in our generic analysis , we operate the cernox bolometer with a constant dc current bias , typically a few , provided by the series combination of an alkaline battery ( 1.5 v or 9 v ) and bias resistor whose value is much larger than that of the cernox sensor . 
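A minimal sketch of this biasing scheme is given below. The battery voltage options are those mentioned above, while the series resistor and sensor resistances are invented values; the point is simply that a bias resistor much larger than the Cernox resistance yields an essentially constant current even as the sensor resistance changes strongly with temperature.

```c
/* Battery-plus-series-resistor current bias for the Cernox bolometer.
 * With R_bias >> R_cernox the current is nearly constant as the sensor
 * resistance changes with temperature.  All component values are
 * illustrative assumptions.  Compile with: cc bias.c */
#include <stdio.h>

int main(void)
{
    double V_batt    = 9.0;      /* battery voltage [V]                      */
    double R_bias    = 5.0e6;    /* series bias resistor [ohm] (assumed)     */
    double R_cernox1 = 2.0e4;    /* sensor resistance at one temperature     */
    double R_cernox2 = 6.0e4;    /* ... and at a lower temperature (assumed) */

    for (int i = 0; i < 2; ++i) {
        double R_c = (i == 0) ? R_cernox1 : R_cernox2;
        double I   = V_batt / (R_bias + R_c);   /* bias current              */
        double P   = I * I * R_c;               /* power dissipated in sensor */
        printf("R_cernox = %5.0f ohm : I = %.3f uA, bias power = %.1f nW\n",
               R_c, I * 1e6, P * 1e9);
    }
    return 0;
}
```

Because the bias power I²R then tracks the sensor resistance, selecting the bias level amounts to selecting the stage temperature, as described next.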
with the helium bath under temperature regulation , the choice of bias power sets the temperature of the sample for a given experiment , with no other temperature control necessary .all electrical leads into the cryostat are shielded twisted pairs of insulated manganin wire , and true four - point resistance measurements are made on all sensors .the voltage signal appearing on the cernox thermometer is amplified outside the cryostat by a carefully shielded and battery - powered circuit .we use a two - stage cascaded amplifier with one analog devices ad548 operational amplifier per stage , chosen because these are readily available , low - noise amplifiers .the dc level is nulled between stages to prevent saturation , and the total gain is 10 .the amplified signal , corresponding to the temperature modulation of the sample , is then demodulated with a stanford research systems sr850 digital lock - in amplifier that is phase - locked with the rf - power amplitude modulation .there are two such systems , one for the sample and one for the reference measurements .the entire experiment is operated under computer control when collecting data .raw absorption spectra corresponding to the temperature rise of the sample . taking the ratio of the two signalsaccounts for the strong frequency dependence of introduced by standing waves in the transmission line .the remaining frequency dependence of the ratio is due to the different spectra of the two samples . ]ratio of the sample absorption to reference absorption for identical samples , compared to measurements of the field amplitude at equivalent positions in a scale model .( frequencies for the scale model have been scaled by a factor of 4 for the comparison . )the ratio technique is seen to break down with a sharp resonance in both cases .the origin of these resonances , which limit the useful frequency range of the apparatus , is discussed in detail in appendix [ app : modes ] and shown to be due to the presence of standing waves of the te waveguide mode .for this mode , the three arrows indicate : the cut - off frequency ghz at which the mode is first free to propagate ; its quarter - wave resonance frequency ghz , for open - circuit termination conditions ; and the half - wave resonance frequency ghz , for short - circuit termination conditions .the and resonance frequencies bracket the observed resonances . the scale model , which has a large transition capacitance between circular and rectangular coax sections , is seen to fall at the high end of the range .] 
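The bracketing of the observed resonance between the quarter-wave and half-wave conditions, referred to in the caption above, follows from the standard waveguide dispersion relation (f/c)² = (f_c/c)² + (1/λ_g)², with λ_g = 4L for an open-circuit-like termination and λ_g = 2L for a short-circuit-like one. The C sketch below evaluates both limits using the TE-mode cut-off frequencies quoted later in Appendix [app:modes]; the length L of the rectangular section used here is an illustrative value chosen only to land in the right range, not a documented dimension of the apparatus.

```c
/* Standing-wave resonances of a higher-order waveguide mode in a line
 * section of length L, from (f/c)^2 = (fc/c)^2 + (1/lambda_g)^2 with
 * lambda_g = 4L (quarter-wave) or 2L (half-wave).
 * Minimal sketch; compile with: cc res.c -lm */
#include <stdio.h>
#include <math.h>

static double resonance(double fc, double lambda_g)
{
    const double c = 2.99792458e8;   /* speed of light [m/s] */
    return sqrt(fc * fc + (c / lambda_g) * (c / lambda_g));
}

int main(void)
{
    /* Cut-off frequencies of the two lowest TE modes of the rectangular
     * coaxial line (the values quoted in Appendix [app:modes]). */
    double fc[2] = { 19.68e9, 15.38e9 };

    double L = 6.6e-3;   /* illustrative length of the rectangular section [m] */

    printf("  f_cutoff   f_quarter   f_half   [GHz]\n");
    for (int i = 0; i < 2; ++i)
        printf("   %6.2f     %6.2f    %6.2f\n", fc[i] / 1e9,
               resonance(fc[i], 4.0 * L) / 1e9,
               resonance(fc[i], 2.0 * L) / 1e9);

    /* Shortening L pushes both limits upward, which is why a shorter final
     * line section extends the usable single-mode frequency range. */
    return 0;
}
```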
two steps are necessary for an absolute calibration of the surface resistance of an unknown specimen from the measured temperature - rise data .the first is to calibrate the absolute power sensitivity of the sample and reference thermal stages at the actual operating temperature and modulation frequency .this is achieved using the small _ in - situ _ heater to drive well - characterized heat pulses that mimic absorption by the sample , while at the same time measuring the corresponding temperature response .the second step requires the calibration of the magnetic field strength at the sample , at each frequency , using the known absorption of the reference sample .we exploit the fact that the metallic reference sample experiences the same incident microwave field as the sample under test , guaranteed by conservation of magnetic flux .this ensures that taking the ratio of the absorbed power per unit surface area of each sample provides the ratio of the surface resistance values : the surface resistance of the unknown sample is then trivially found by multiplying the power - absorption ratio , shown in fig .[ fig : ratioplot ] , by of the metallic reference sample calculated using the classical skin - effect formula where is the frequency and is the permeability of free space .the raw power - absorption spectra , shown in the first two panels of fig .[ fig : ratioplot ] , highlight the necessity of the reference sample .the absorption spectra of the samples is completely masked by the large amplitude variations of caused by the standing waves in the microwave circuit .comparison of measurements made on the same sample of yba using the broadband bolometric experiment ( solid symbols ) with those from five microwave resonators ( open symbols ) .the agreement between methods is excellent .the data is plotted as to remove the frequency dependence associated with superfluid screening . ]an essential test of the method is to make a frequency - scanned measurement with identical samples mounted on the sample and the reference stages .the result should be a frequency - independent ratio across the spectrum , equal to unity for samples with the same surface area . in fig .[ fig : metalcal ] we show such a measurement using two thin - platelet samples of our ag : au reference alloy , both at the base temperature of 1.2 k. the data reveal a ratio of 0.82 in the cryogenic apparatus , due to the fact that the centre conductor is offset from centre by 0.1 mm in the termination region , intensifying the fields on one side relative to the other .this scale factor must be included in the calibration of all experimental data .the sharp resonance seen in the ratio at 22.5 ghz indicates the presence of a non - tem electromagnetic mode in the sample cavity that breaks the symmetry in field strength between sample and reference positions . for our present design ,this sets the upper frequency limit of operation . 
in an attempt to gain further insight into the field configurations in the transmission line , and to understand how the higher order waveguide modes limit the upper frequency range, we built a scale model of the setup having all dimensions larger than those of the cryogenic apparatus by a factor of four .for comparison , a frequency scan of the model structure is included in fig .[ fig : metalcal ] , using loop - probes in the positions of the samples .the data show that the non - tem - mode resonance occurs at 27 ghz , considerably higher than in the low temperature experiment .it turns out that the breakdown of the sample - reference symmetry occurs not at the frequency at which higher order waveguide modes first propagate in our structure , but at the frequency at which they form a resonant standing wave . a full discussion of this is given in appendix [ app : modes ] .a number of other experimental tests were important to verify the proper operation of the system .frequency scans without samples mounted on the sapphire stages confirmed that background absorption due to the sapphire and tiny amount of vacuum grease used to affix the samples is negligible it is unmeasurable at low frequency , and contributes no more than 2 to an measurement at 21 ghz .scans without a sample also confirmed that no significant leakage heat current propagates to the thermometers directly from the microwave waveguide .the high thermal stability of the cryostat system is due in part to the very large effective heat capacity of the pumped 4 litre liquid - helium bath at 1.2 k. in addition , it is always important to make certain the temperature modulations of the samples are sufficiently small that the response of the thermal stages remains in the linear regime .furthermore , measurements with the same sample located in different positions along the sapphire plate , with up to 0.5 mm displacement from the central location in the waveguide , confirmed that there is enough field homogeneity that our sample alignment procedure using an optical microscope is sufficient , and that samples of different sizes experience the same fields . a very convincing verification of the technique is provided by the ability to compare broadband data with measurements of the _ same sample_ in five different high - q microwave resonators .these experiments probe the temperature dependence of the absorption to high precision at a fixed microwave frequency : however , the determination of the _ absolute _ value of is limited to about 10% as discussed previously .the bolometric method has the advantage of being able to measure a true spectrum because the dominant uncertainty , the absolute surface resistance of the reference sample , enters as a scale factor that applies across the entire frequency and temperature range .a detailed discussion of the uncertainties in the bolometry data will be presented in the subsequent section .figure [ fig : resonators ] shows that there is very good agreement of both the temperature and frequency dependence of the surface resistance as measured independently by the fixed - frequency and broadband experiments .broadband measurements of the microwave surface resistance spectrum of yba obtained with the bolometric apparatus below 10 k. the low frequency absorption approaches the resolution limit of the apparatus , while the upper frequency limit is imposed by the resonance in the microwave structure . 
] the real part of the microwave conductivity extracted from the broadband measurements .we use the self - consistent fitting procedure described in appendix [ app : kramers ] to properly account for contributions to field - screening by the quasiparticles . ]figure [ fig : rsplot ] presents an example of high resolution broadband measurements of the frequency - dependent and temperature - dependent surface resistance of a superconducting sample. this particular data set is for -axis currents in a yba single crystal ( t=56 k ) having dimensions 1.25.96.010 mm .the data span the range 0.6 - 21 ghz , limited at high frequency by the resonance in the system , and at low frequency by the small dissipation of the sample , which approaches the resolution limit of the experiment . at 1 ghz ,the values for the statistical r.m.s .uncertainty in surface resistance , , are about 0.2 , 0.4 , 0.6 , and 1.3 for = 1.3 , 2.7 , 4.3 , and 6.7 k respectively .error bars have been omitted from the figure for clarity .systematic contributions to the uncertainty enter as overall scale factors in the data and are attributed to an uncertainty in the dc resistivity of the thin ag : au alloy foil used as a reference sample ( % ) , the surface area of the samples ( % ) , and the absolute power sensitivity of the thermal stage ( % ) . the electromagnetic absorption spectrum of a y single crystal at 1.3 k. the spectrum consists of a broad background due to the quasiparticle absorption in the superconductor in addition to the zero field esr lines generated by the low concentration of magnetic gd impurities .only ions residing within a distance of the crystal surface contribute to the signal as the applied rf field is strongly screened by the superconductor . ]the frequency dependence observed in is due to absorption by quasiparticles thermally excited from the superfluid condensate .the quantity of fundamental theoretical interest is the real part of the conductivity spectrum , which must be extracted from the experimentally measured data . a thorough discussion of the method we use to do this is given in appendix [ app : kramers ] but , to first approximation, the shape of the conductivity spectrum can be found by dividing by a factor of to account for the screening of the applied field by the superfluid .figure [ fig : sigma1 ] shows the conductivity spectra extracted from the data using the complete analysis .it is immediately apparent why improving the sensitivity of the experiment is of the utmost importance .the low frequency region , where the power absorption becomes very small , is where the conductivity exhibits the strongest frequency dependence and is most important to measure accurately .the spectrum at 1.3 k has a width of the order of 5 ghz , signifying very long quasiparticle scattering times , indicative of the high quality of our yba crystal . for this very clean sample ,most of the spectral weight resides below the experimental frequency limit of 21 ghz at 1.3 k , but the increase in scattering with increasing temperature quickly broadens the spectra thus motivating future designs capable of probing a broader frequency range .many cuprate materials , such as bi , have scattering rates that are orders of magnitude higher and require thz frequency techniques to probe their dynamics. 
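The first-approximation conversion mentioned above can be written down explicitly: well below T_c, where field screening is dominated by the superfluid, R_s ≈ ½ μ0² ω² λ³ σ1, so the shape of σ1(ω) follows from dividing R_s(ω) by ½ μ0² ω² λ³. The sketch below applies this to a few invented (f, R_s) points with an assumed penetration depth; the self-consistent procedure of Appendix [app:kramers], which includes quasiparticle contributions to the screening, corrects such values at the few-percent level for this data set.

```c
/* First-approximation extraction of sigma1 from surface resistance:
 * Rs ~= 0.5 * mu0^2 * omega^2 * lambda^3 * sigma1   (sigma1 << sigma2).
 * Minimal sketch with invented data points; compile with: cc sig1.c -lm */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double pi     = 3.14159265358979;
    const double mu0    = 4.0e-7 * pi;
    const double lambda = 150e-9;   /* assumed penetration depth [m] */

    /* Hypothetical (frequency [GHz], Rs [uOhm]) pairs, not real data. */
    double f_ghz[]   = { 1.0, 2.0, 5.0, 10.0, 20.0 };
    double rs_uohm[] = { 1.5, 4.0, 14.0, 35.0, 90.0 };
    int n = (int)(sizeof(f_ghz) / sizeof(f_ghz[0]));

    printf("# f [GHz]   sigma1 [1/(Ohm m)]\n");
    for (int i = 0; i < n; ++i) {
        double omega = 2.0 * pi * f_ghz[i] * 1e9;
        double rs    = rs_uohm[i] * 1e-6;
        /* Invert the superfluid-screening-limit relation for sigma1. */
        double sigma1 = 2.0 * rs / (mu0 * mu0 * omega * omega
                                    * lambda * lambda * lambda);
        printf("%8.2f    %.3e\n", f_ghz[i], sigma1);
    }
    return 0;
}
```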
as a final demonstration of the sensitivity of the broadband instrument we have described , we include a frequency scan of a superconducting sample that exhibits clearly discernable absorption lines due to zero field electron spin resonance ( esr ) of a low density of magnetic impurities .gadolinium ions ( electron spin ) substitute for yttrium , sandwiched between the two cuo planes in the ybacuo unit cell , and the splitting of the degenerate gd hyperfine levels by the crystalline field provides a very sensitive probe of the local microscopic structure. these measurements are typically performed in a high field spectrometer , but the bolometric system provides a means of performing zero - field measurements . figure [ fig : esr ] shows the 1.3 k absorption spectrum of a 1 mm y single crystal consisting of a broad background due to the quasiparticle conductivity , essentially unaltered by the presence of the gd ions , with the esr absorption peaks superposed .the high signal - to - noise ratio achieved with the experiment allows one to resolve the spectrum in great detail .the apparatus described here has sufficient sensitivity and frequency range for it to be immediately applicable to many other interesting problems in condensed matter physics .these might include : the study of low - lying collective modes in metals and superconductors ; zero - field electron spin resonance in correlated insulators ; and the study of critical phenomena at the metal insulator transition and near the zero - temperature magnetic critical points that occur in certain - and -electron metals . with a little attention to thermal design , specifically the thermal separation of sample stages from the microwave waveguide, the superconducting coatings on the waveguide could be removed and the system used in high magnetic fields .this would open interesting possibilities in the area of metals physics , such as high - resolution cyclotron and periodic - orbit resonance , as well as the study of vortex dynamics and vortex - core spectroscopy in superconductors .finally , further miniaturization of the experiment should also be possible : the ultimate goal would be to extend the frequency range of this type of spectroscopy so that is joins seamlessly on to the thz range now accessible using pulsed - laser techniques .the authors are indebted to pinder dosanjh for his technical assistance , as well as to ruixing liang for the ybacuo samples employed in these experiments .we also acknowledge financial support from the natural science and engineering research council of canada and the canadian institute for advanced research .we wish to calculate the temperature response of a simple thermal stage to a sinusoidal heat flux superimposed on a static temperature gradient .we consider an arrangement where the isothermal sample stage has neglible thermal mass and is connected to base temperature by a weak thermal link with distributed heat capacity per unit volume .it is a straightforward extension to include an additional lumped heat capacity for the isothermal stage ; once the lumped heat capacity dominates , the frequency response simplifies to that of a single - pole low - pass filter .however , in our case this is unnecessary : for electrodynamic measurements at low temperatures , the sample holder is required to be both electrically insulating and highly crystalline , and will therefore have very low heat capacity .here we consider the one - dimensional problem of a thin bar ( the quartz tube in our apparatus ) of length 
and cross - sectional area , with one end fixed at a base temperature while the other end is heated by a heat flux due to sample power absorption .the propagation of a heat current through the bar is constrained by the continuity equation and the thermal conductivity is defined by .together , these lead to the one - dimensional heat equation where is the thermal diffusivity . defining a complex thermal diffusion length , the time - dependent part of the temperature profile can be written where is fixed by the heat - flux boundary condition : .this completely determines the frequency - dependent temperature rise of the sample stage : in the low frequency limit , the temperature rise reverts to the usual result : where , without loss of generality , we have set the phase of the input heat flux to zero . in the high frequency limit ,the thermal diffusion length becomes shorter than the weak link and the temperature rise is reduced , being given by : where . at finite frequencies , part of the heat fluxis diverted into the distributed heat capacity of the thermal link . for a fixed input power ( and hence fixed temperature _gradient _ at the end of the thermal link ) this leads to smaller temperature rises and a decreased sensitivity of the bolometric method .clearly , the experimental sensitivity of the bolometric method will be optimized by operating in the low - frequency limit : or .a consideration of the thermal diffusivity and dimensions of the weak - link must therefore be part of any plan to increase the modulation frequency .[ fig : thermalresponse ] shows the frequency response of the sample thermal stage in our apparatus when it was subjected to a sinusoidally varying heater power , normalized to the static response .included in the figure are fits to the distributed - heat - capacity model , eq .[ eqn : tempresponse ] and a single - pole low - pass filter response .although both curves fit the data well over most of the frequency range , the best - fit value of the time constant in the lumped - element model corresponds to a heat capacity much larger than the calculated heat capacity of the sapphire sample stage . instead, the value obtained from the fit is approximately half the heat capacity of the quartz tube , indicating the correct physics is that of heat diffusion in a distributed thermal system . low temperature ( k ) measurements of the dynamic thermal response of the quartz - tube bolometer platform . curves on the plot show fits using lumped and distributed heat capacity models . 
]in optimizing a microwave transmission line for the bolometric measurement of surface resistance the guiding aims must be : to deliver microwave power efficiently to the sample region , over as wide a frequency range as possible and with a well defined polarization ; to have regions of uniform microwave magnetic field at the sample and reference positions ; and , at these positions , to have a fixed , frequency - independent ratio between the field strengths .these aims can be met by using an impedance - matched ( 50 ) , single - mode coaxial line , with rectangular cross section and a broad , flat center conductor or septum , as shown in fig .[ fig : coaxmodes](i ) .in addition , the dimensions of the rectangular coaxial line should be chosen carefully , to prevent higher - order waveguide modes from entering the operating frequency range of the experiment , as these modes break the symmetry in field strength between sample and reference positions .this appendix outlines how to undertake the optimization .cross sections of the rectangular coaxial transmission line : ( i ) physical layout of the transmission line showing dimensions ( ) of the inner and outer conductors , and sample and reference positions ( shaded squares ) ; ( ii ) fields of the tem mode , showing how continuity of flux and a broad , flat inner conductor produce uniform , well polarized fields of equal intensity and opposite direction at the sample and reference positions ; ( iii ) fields of the te mode ; and ( iv ) fields of the te mode .( magnetic fields of the transverse electric modes contain a component along the direction of propagation and do not form closed loops in the transverse plane . )it is clear that the te mode is most harmful to the operation of the broadband apparatus : its magnetic fields have high intensity at the sample and reference positions and break the balance that otherwise exists in the tem mode . ] a rectangular coaxial line , like any two - conductor line , supports a transverse electromagnetic ( tem ) wave at all frequencies .figure [ fig : coaxmodes](ii ) shows its electric and magnetic field configurations .the tem mode has the desirable property that its magnetic fields lie in a plane perpendicular to the direction of propagation , forming closed loops around the centre conductor .conservation of magnetic flux then leads to a fixed , frequency - independent relation between the fields on either side of the septum .these fields will also be quite homogeneous , as long as the height of the centre conductor is large compared to the gap between the centre and outer conductors . to deliver microwave power efficiently to the sample regionthe characteristic impedance of the tem mode must be close to that of the cylindrical coaxial line used to bring microwaves into the cryostat .gunston has tabulated data on the impedance of rectangular coaxial line , and gives some useful approximate formulas .the following expression , due to brckelmann , is stated to be accurate to 10% for and : where is the relative permittivity of the dielectric filling the transmission line .conducting walls introduced along special electric equipotentials allow the waveguide modes of rectangular coaxial line to be mapped onto the fundamental mode of ridged waveguide , a problem extensively studied in the literature .the figures show the relabelling of dimensions in pyle s notation as and . ]we now come to the question of what places an upper limit on the useful frequency range of the rectangular coaxial waveguide . 
at high frequenciesour method , which incorporates an in - situ power meter , suffers a spectacular breakdown in the ratio of the relative strengths of the microwave magnetic fields at the sample and reference positions , as shown in figure [ fig : metalcal ] .this is caused by the presence of higher - order waveguide modes , which have different character from that of the tem mode under the mid - plane reflection symmetries of the rectangular line .the waveguide modes with the lowest cut - off frequencies are the transverse electric modes te and te , shown in figures [ fig : coaxmodes](iii ) and [ fig : coaxmodes](iv ) respectively .these have the property that magnetic fields on opposite sides of the septum point in the _ same _ direction .the fields of the tem mode , in contrast , are _ antiparallel _ , causing an admixture of tem and te modes to lack the important characteristic of equal field intensities at sample and reference positions .particularly damaging is the te mode , which is not screened by the septum and has high field intensity in the vicinity of the sample and reference . in principleit is possible to avoid exciting the transverse electric modes by building a very symmetric transmission line . in practice, however , we find this to be impossible sufficiently large symmetry - breaking perturbations are always present .nevertheless , maintaining high symmetry is still desirable .a comparison of our results with calculations of the cut - off frequencies of the transverse electric modes shows that at frequencies where the higher order modes are free to propagate , they do not immediately cause a breakdown in field ratio : this only occurs when the transverse electric modes come into resonance .( this can be seen very clearly in figure [ fig : metalcal ] . ) as a result , the range of operating frequency can be extended by as much as 50% just by shortening the final section of transmission line and carefully designing the transition between the cylindrical and rectangular sections .optimizing the range of single - mode operation of the rectangular transmission line requires a method for calculating the cut - off frequencies of the te and te modes .while waveguide modes in two - conductor rectangular transmission lines have not been extensively studied , their field configurations can be mapped onto a more common geometry : that of ridged waveguide. figure [ fig : coaxmodes2 ] shows how .electric equipotentials run perpendicular to lines of electric flux , and special equipotentials , corresponding to local minima of the magnetic flux density , exist on the symmetry axes of the rectangle .a conducting wall can be introduced along these lines without disturbing the field distributions , thereby mapping each mode onto an equivalent ridged waveguide .figure [ fig : coaxmodes2 ] illustrates the two different ways this is done , for the te and te modes respectively .a very early calculation of the cutoff frequency of ridged waveguide was carried out by pyle and is notable for its simplicity , generality and enduring accuracy when compared to more recent numerical methods. 
pyle s approach is to solve for the transverse resonance condition of the waveguide , which is equivalent to finding the cut - off frequency .we have used this method in our design process , as it is easy to implement ( involving only algebraic equations ) and is accurate to several percent except when the septum becomes very thin .the length of the rectangular line , the cut - off frequency , and the discontinuity capacitance of the cylindrical - to - rectangular transition together determine the resonant frequencies of the transverse electric modes .there are two limiting cases , corresponding to open - circuit ( ) and short - circuit ( ) termination ( where is the wavelength along the guide ) , that follow from the waveguide dispersion relation . a high capacitance for the te modes at the transition from cylindrical to rectangular coax is clearly favourable : it better approximates the short circuit termination condition and leads to resonant frequencies at the upper end of the range .this effect is responsible for the difference in resonant frequencies between the scale model and the actual apparatus seen in figure [ fig : metalcal ] .there is , however , a trade - off to be made : too large a transition capacitance for the tem mode will result in most of the microwave power being reflected before it reaches the sample .the dimensions of the rectangular guide in our apparatus are mm , mm , mm , mm and mm . the cut - off frequencies for the te and te modes are calculated to be 19.68 ghz and 15.38 ghz respectively .the quarter wave - resonances would then occur at 22.72 ghz and 19.12 ghz , and the half wave resonances at 30.06 ghz and 27.44 ghz .in this appendix we show how the microwave conductivity spectrum of a superconductor can be obtained from a measurement of its frequency - dependent surface resistance .this process is similar to the extraction of conductivity spectra in the infra - red frequency range from reflectance measurements . in both cases , we begin with incomplete information about the electrodynamic response : the bolometric technique described in this paper measures only the _ resistive _ part of the surface impedance ; and optical techniques typically obtain the magnitude , but not the phase , of the reflectance .however , the conductivity is a causal response function , and its real and imaginary parts are related by a kramers krnig transform : where denotes the principle part of the integral . at first sightwe seem to have replaced one uncertainty , incomplete knowledge of the phase , by another , the finite frequency range over which the measurements have been made . however , a suitable extrapolation of the data out of the measured frequency range is usually possible and makes the transform a well - defined procedure in practice .we consider the limit of local electrodynamics , in which the microwave surface impedance is related to the complex conductivity in a straightforward manner by the expression very generally , the conductivity can be partitioned into a superfluid part , consisting of a zero - frequency delta function and an associated reactive term , and a normal - fluid component : where is the quasiparticle effective mass . 
in the clean - limit , where the quasiparticle scattering rate is much less than the spectroscopic gap ,sum - rule arguments enable a clean partitioning of the conduction electron density into a superfluid density and a normal - fluid density .in this type of generalized two - fluid model, the temperature dependence of is determined phenomenologically from measurements of the magnetic penetration depth through the relation ^{-1}.\ ] ] applying this , we can write the conductivity at finite frequencies as .\ ] ] from eq . [ eqn : surfimped ] it is clear that is determined by both the real and imaginary parts of the conductivity .however , one simplification occurs at temperatures well below , where few thermally excited quasiparticles exist , and the low frequency reactive response is dominated by the superfluid . in this case , a good approximation to the relations becomes at higher frequencies and temperatures , a more complete treatment would account for quasiparticle contributions to field screening , which enter through .we use an iterative procedure to obtain the quasiparticle conductivity spectrum , starting from the good initial guess provided by eq .the process goes as follows .a phenomenological form that captures the key characteristics of the dataset but has no physical motivation , namely $ ] , is fitted to the spectrum and used to extrapolate out of the measured frequency range .the kramers krnig transform ( eq . [ kk ] ) can then be applied to obtain . with in hand , and with the superfluid contribution to known from measurements of the magnetic penetration depth ,a new extraction of the conductivity from the data is made , this time using the _ exact _ expression , eq .[ eqn : surfimped ] .the whole procedure is repeated to self - consistency .we find that the procedure is stable and converges rapidly , and is not sensitive to the details of the high - frequency extrapolation . also , the corrections are quite small for the low temperature dataset shown in fig . [fig : sigma1 ] : at the highest temperature and frequency they amount to a 7% change in .in addition , two independent experimental checks give us further assurance that we obtain the correct conductivity spectra .we first note that eq .[ eqn : surfimped ] contains an expression for the surface reactance .therefore , once we have measured the penetration depth and extracted the conductivity spectra from the data , we can predict the temperature dependence of the surface reactance at _ any _ frequency and compare with experiment .we have made this comparison at 22.7 ghz , with surface reactance data obtained on the same crystal , and find excellent agreement .we note that this is a frequency high enough for quasiparticle scattering to have a discernible effect on the surface reactance .a second verification of the conductivity extraction procedure is its ability to predict the spectral weight that resides _ outside _ the frequency window of the measurement .a corollary of the kramers krnig relation [ kk ] is the oscillator - strength sum rule in a superconductor , in the clean limit , the sum rule requires that any spectral weight disappearing from the superfluid density as temperature is raised must reappear as an increase in the frequency - integrated quasiparticle conductivity .we have carried out this comparison , which is shown in the inset of fig .[ fig : sigma1]. 
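To make the transform step of this procedure concrete, the C sketch below evaluates a discrete Kramers-Kronig relation of the form σ2(ω) = −(2ω/π) P∫₀^∞ σ1(ω′) dω′/(ω′² − ω²) (one common sign convention; the convention used in the actual analysis may differ) on a uniform grid, using a Drude spectrum as input so the output can be checked against the known analytic σ2. The principal value is approximated crudely by skipping the singular grid point, and the finite upper cut-off leaves a few-percent error at the highest frequencies shown; a real implementation would instead use the fitted extrapolation described above.

```c
/* Discrete Kramers-Kronig transform sigma1 -> sigma2 on a uniform grid,
 * tested against an analytic Drude spectrum so the accuracy is visible.
 * The principal value is handled by skipping the singular grid point,
 * which is adequate away from the lowest frequencies.
 * Minimal sketch; compile with: cc kk.c -lm */
#include <stdio.h>
#include <math.h>

#define N    8000            /* number of grid points           */
#define WMAX 1.0e12          /* upper integration limit [rad/s] */

static double w[N], s1[N];

int main(void)
{
    const double pi   = 3.14159265358979;
    const double sig0 = 1.0e7;    /* dc conductivity [1/(Ohm m)] (test value) */
    const double tau  = 3.0e-11;  /* relaxation time [s] (test value)         */
    double dw = WMAX / N;

    /* Test spectrum: Drude sigma1; its exact partner is
     * sigma2 = sig0 * w * tau / (1 + (w*tau)^2). */
    for (int i = 0; i < N; ++i) {
        w[i]  = (i + 0.5) * dw;
        s1[i] = sig0 / (1.0 + w[i] * w[i] * tau * tau);
    }

    /* Frequencies (GHz) at which to evaluate the transform. */
    double f_eval[] = { 2.0, 5.0, 10.0, 20.0, 40.0 };

    for (int m = 0; m < 5; ++m) {
        double wk = 2.0 * pi * f_eval[m] * 1e9;
        int k = (int)(wk / dw);        /* grid bin containing wk */
        wk = w[k];                     /* snap to the grid       */

        /* sigma2(wk) = -(2 wk / pi) PV int dw' sigma1(w') / (w'^2 - wk^2). */
        double sum = 0.0;
        for (int i = 0; i < N; ++i) {
            if (i == k) continue;      /* crude principal-value handling */
            sum += s1[i] / (w[i] * w[i] - wk * wk);
        }
        double s2_kk    = -2.0 * wk / pi * sum * dw;
        double s2_exact = sig0 * wk * tau / (1.0 + wk * wk * tau * tau);
        printf("f = %5.1f GHz   sigma2(KK) = %.4e   exact = %.4e\n",
               wk / (2.0 * pi * 1e9), s2_kk, s2_exact);
    }
    return 0;
}
```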
the good agreement in the temperature dependence of the superfluid and normal - fluid densities is a strong verification of both the analysis procedure _ and _ the bolometric technique . in general , the thermal model of the bolometer stageshould include a distributed heat capacity through which the thermal currents flow .however , we show in appendix [ app : thermal ] that a necessary condition for optimal sensitivity is that the thermal weak link be operated in the low frequency limit where it _ is _ well approximated by a lumped thermal mass and a weak thermal link having negligible heat capacity .we use standard 50 impedance semi - rigid coaxial line having a teflon dielectric and a 0.141 diameter outer conductor made from stainless steel and 0.0359 diameter inner conductor made from silver - plated , copper - clad steel .
|
a novel low temperature bolometric method has been devised and implemented for high - precision measurements of the microwave surface resistance of small single - crystal platelet samples having very low absorption , as a continuous function of frequency . the key to the success of this non - resonant method is the _ in - situ _ use of a normal metal reference sample that calibrates the absolute rf field strength . the sample temperature can be controlled independently of the 1.2 k liquid helium bath , allowing for measurements of the temperature evolution of the absorption . however , the instrument s sensitivity decreases at higher temperatures , placing a limit on the useful temperature range . using this method , the minimum detectable power at 1.3 k is 1.5 pw , corresponding to a surface resistance sensitivity of for a typical 1 mm mm platelet sample .
|
cosmological n - body simulations are the main tool used to study the dynamics of collisionless dark matter and its role in the formation of cosmic structure .they first became widely used 20 years ago after it was realized that the gravitational potentials of galaxies are dominated by dark matter .at the same time , theories of the early universe were developed for dark matter fluctuations so that galaxy formation became an initial value problem .although many of the most pressing issues of galaxy formation require simulation of gas dynamics as well as gravity , there is still an important role for gravitational n - body simulations in cosmology .dark matter halos host galaxies and therefore gravitational n - body simulations provide the framework upon which one adds gas dynamics and other physics .moreover , many questions of structure formation can be addressed with n - body simulations as a good first approximation : the shapes and radial mass profiles of dark matter halos , the rate of merging and its role in halo formation , the effect of dark matter caustics on ultra - small scale structure , etc . in a cosmological n - body simulation ,the matter is discretized into particles that feel only the force of gravity .a subvolume of the universe is sampled in a rectangular ( not necessarily cubic ) volume with periodic boundary conditions . in principle , one simply uses newton s laws to evolve the particles from their initial state of near - perfect hubble expansion .gravity takes care of the rest . in practice ,cosmological n - body simulation is difficult because of the vast dynamic range required to adequately model the physics .gravity knows no scales and the cosmological initial fluctuations have power on all scales .after numerical accuracy and speed , dynamic range is the primary goal of the computational cosmologist .one would like to simulate as many particles as possible ( at least to sample galaxies well within a supercluster - sized volume ) , with as great spatial resolution as possible ( at least per dimension ) , for as long as possible ( to timesteps to follow the formation and evolution of structure up to the present day ) .a single computer is insufficient to achieve the maximum possible dynamic range .one should use many computers cooperating to solve the problem using the technique of parallelization . in a parallel n - body simulation, the computation and memory are distributed among multiple _ processes _ running on different _ nodes _ ( computers ) .unfortunately , ordinary compilers can not effectively parallelize a cosmological n - body simulation code .a programmer must write special code instructing the computers how to divide up the work and specifying the communication between processes .a parallel code is considered successful if it produces load - balanced and scalable simulations .a simulation is _ load balanced _ when the distribution of the effective workloads among the nodes is uniform ._ scalability _ for a given problem means that the wall clock time spent by the computer cluster doing simulations scales inversely with the number of nodes used .ideally , of course , the code should also be _ efficient _ : as much as possible , the wall clock time should be entirely devoted to computation . 
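These figures of merit are easy to state precisely. The short C sketch below takes invented per-process busy times from a hypothetical 8-process run and prints the speedup, the parallel efficiency, and the load imbalance 1 − ⟨w⟩/max(w) used later in the paper; the timing numbers are purely illustrative.

```c
/* Scalability and load-balance figures of merit for a parallel run.
 * Given per-process busy times for an N-process run and the single-process
 * time, print the speedup, parallel efficiency, and load imbalance
 * 1 - <w>/max(w).  The timing numbers are invented for illustration.
 * Compile with: cc scaling.c */
#include <stdio.h>

int main(void)
{
    double t_serial = 1000.0;                 /* single-process time [s]   */
    double w[8] = { 118., 131., 125., 140.,   /* per-process busy time [s] */
                    122., 137., 128., 133. }; /* in an 8-process run       */
    int n = 8;

    double max = 0.0, mean = 0.0;
    for (int i = 0; i < n; ++i) {
        if (w[i] > max) max = w[i];
        mean += w[i];
    }
    mean /= n;

    double t_parallel = max;                  /* slowest process sets the pace */
    printf("speedup            : %.2f\n", t_serial / t_parallel);
    printf("parallel efficiency: %.2f\n", t_serial / (n * t_parallel));
    printf("load imbalance     : %.3f\n", 1.0 - mean / max);
    return 0;
}
```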
at present , there are two main algorithms used for cosmological n - body codes : tree and p m ( see bertschinger 1998 for review ) .the current parallel tree code implementations include treesph ) , hot , gadget , and gasoline .tree codes have the advantage of relatively easy parallelization and computing costs that scale as where is the number of particles .however , they have relatively large memory requirements . the p m ( particle - particle / particle - mesh ) method was introduced to cosmology by and is described in detail in ( see also bertschinger & gelb 1991 ) . for moderate clustering strengths ,p m is faster than the tree code but it becomes slower when clustering is strong .this is because p m is a hybrid approach that splits the gravitational force field of each particle into a long - range part computed quickly on a mesh plus a short - range contribution computed by direct summation over close pairs .when clustering is weak , the computation time scales as where is the number of grid ( mesh ) points , while when clustering is strong the computation time increases in proportion to .the scaling can be restored to using adaptive methods .currently there exist several parallel implementations of the p m algorithm , including the version of for the ( now defunct ) connection machine cm-5 and the hydra code of .the hydra code uses shared memory communications for the cray t3e .there is a need for a message - passing based version of p m ( and its adaptive extension ) to run on beowulf clusters .this need motivates the present work .the difficulty of parallelizing adaptive p m has led a number of groups to use other techniques to add short - range forces to the particle - mesh ( pm ) algorithm .the tree and pm algorithms have been combined by and while use a two - level adaptive mesh refinement of the pm force calculation .the flash code has been extended to incorporate pm forces with multi - level adaptive mesh refinement . when the matter distribution becomes strongly clustered , parallel codes based on pm and p m face severe challenges to remain load - balanced . 
in general, p m and pm - based parallel codes suffer complications when the matter becomes very clustered as happens at the late stages of structure formation .most of the existing codes use a static one - dimensional slab domain decomposition , which is to say that the simulation volume is divided into slices and each process works on the same slice throughout , even when the particle distribution becomes strongly inhomogeneous .the gotpm code uses dynamic domain decomposition , with the slices changing in thickness as the simulation proceeds , resulting in superior load balancing .however , even this code will break down at very strong clustering because it also uses a one - dimensional slab domain decomposition .the flash code uses a more sophisticated domain decomposition similar in some respects to the method introduced in the current paper .the motivation of the current work is to produce a publicly available code that will load balance and scale effectively for all stages of clustering on any number of nodes in a beowulf cluster .this paper introduces a new , scalable and load - balanced approach to the parallelization technique for the p m force calculation .we achieve this by using dynamic domain decomposition based on a space - filling hilbert curve and by optimizing data storage and communication in ways that we describe .this paper is the first of two describing our parallelization of an adaptive p m algorithm .the current paper describes the domain decomposition and other issues associated with parallel p m .the second paper will describe the adaptive refinement method used to speed up the short - range force calculation .the outline of this paper is as follows .the serial p m algorithm ( based on gelb & bertschinger 1994 and ferrell & bertschinger 1994 ) that underlies our parallelization is summarized in [ sec_serial ] .section [ sec_par ] discusses domain decomposition methods starting with the widely - implemented static one - dimensional slab decomposition method .we then introduce the space - filling hilbert curve and describe its use to achieve a flexible three - dimensional decomposition .section [ sec_loadbal ] presents our algorithm for dynamically changing the domain decomposition so as to achieve load balance .section [ sec_layout ] presents our techniques for organizing the particle data so as to minimize efficiency in memory usage , cache memory access , and interprocessor communications . in [ sec_hcforce ] we describe the algorithms used to parallelize the pm and pp force calculations .section [ sec_test ] presents code tests emphasizing load balance and scalability .conclusions are presented in [ sec_concl ] .an appendix presents an overview of the code and frequently appearing symbols , and another appendix briefly describe the routines used to map the hilbert curve onto a three - dimensional mesh and vice versa .in this section we summarize our serial cosmological c implementation based on an earlier serial fortran implementation of p m by one of the authors .we discuss in detail the code units and aspects of the force calculation that are necessary for understanding the parallelization issues covered in the later sections . given the pairwise force between two particles of masses and and separation , we define the interparticle force law profile . for a system of many particles ,the gravitational acceleration of particle is .the required interparticle force law profile depends on the shape of the simulation particles . 
for point particles one uses the inverse square force law profile .the inverse square force law is not used for simulation of dark matter particles in order to avoid the formation of unphysical tight binaries , which happens as a result of two - body relaxation . for cold dark matter simulations many authors use the force law where is the plummer softening length .we take the plummer softening length to be constant in comoving coordinates . with plummer softeningthe particles have effective size . in a p m code , is usually set to a fraction of the pm - mesh spacing . in a p m code ,the desired ( e.g. , plummer ) force law is approximated by the sum of a long - range ( particle - mesh or pm ) force evaluated using a grid and a short - range ( particle - particle or pp ) force evaluated by direct summation over close pairs .the pm force varies slightly depending on the locations of the particles relative to the grid ( see appendix a of ferrell & bertschinger 1994 ) .the average pm force law can be tabulated by a set of monte - carlo pm - force simulations each having one massive particle surrounded by randomly placed test particles . in practice, the mean pm force differs from the inverse square law by less than 1% for pair separations greater than a few pm grid spacings . for smaller separations , a correction ( the pp force ) must be applied .the total force is given by strictly speaking , the p m force is not translationally invariant and therefore depends on the positions of both particles .the p m force differs from the exact desired interparticle force profile by . at large separations , both the pm - force and the required force reduce to the inverse square law ( modified on the scale of the simulation volume by periodic boundary conditions ) .the pp - force can therefore be set to zero at for some .the pp - correction is applied only for separations . the pm - force on the other handis mainly contributed by remote particles .the equation of motion of particles in a robertson - walker universe is where is the comoving position and is comoving ( conformal ) time .the potential satisfies the poisson equation where is the excess of the proper density over the background uniform density .the equations take a simpler and dimensionless form in a special set of units that we adopt .the coordinates , energy and time in our code are brought to this form .let us denote by tildes variables expressed in code units .then for the units of time , position , velocity and energy ( or potential ) , we write , , and or , where is the expansion factor of the universe , is the proper velocity , is the hubble constant and is the cell spacing of the pm density mesh in our code ( see sec .[ sec_pm ] ) expressed in comoving mpc . in these units ,the equation of motion ( [ eq_motion_cosm ] ) reduces to we choose units of mass so that the poisson equation takes the following form in dimensionless variables : where is the proper mean matter density .particle masses are made dimensionless by ] , , where the square brackets signify taking the integer part ; is the spacing between the consecutive along the binned bar of length .we define the workloads in the discretized problem as . 
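The pair interaction at the heart of the short-range (PP) sum can be sketched as follows. The Plummer-softened profile referred to above gives a pair force G m_i m_j r/(r² + ε²)^{3/2}; in the real code the PP kernel applies the difference between this desired law and the tabulated mesh (PM) force for separations below the cut-off, whereas the illustration below keeps only the desired profile and uses arbitrary particle data.

```c
/* Plummer-softened pair force and a direct-summation (PP-style) loop over
 * close pairs.  In the real P3M code the short-range kernel is the desired
 * Plummer force minus the tabulated mesh (PM) force; here the subtraction
 * is omitted and only the desired profile is shown.  All values are
 * illustrative.  Compile with: cc pp.c -lm */
#include <stdio.h>
#include <math.h>

#define NP 4            /* a handful of particles for illustration */

/* Plummer force-law profile: f(r) = r / (r^2 + eps^2)^(3/2), so that the
 * pair force is G m_i m_j f(r) along the separation vector (G = 1 in
 * code units). */
static double plummer_profile(double r, double eps)
{
    double s = r * r + eps * eps;
    return r / (s * sqrt(s));
}

int main(void)
{
    /* Positions in units of the PM mesh spacing (made-up values). */
    double x[NP][3] = { {0.1, 0.2, 0.0}, {0.6, 0.1, 0.3},
                        {2.5, 2.0, 1.0}, {0.3, 0.4, 0.2} };
    double m[NP] = { 1.0, 1.0, 1.0, 1.0 };
    double g[NP][3] = { {0} };

    double eps  = 0.3;   /* Plummer softening, a fraction of the mesh spacing */
    double rmax = 2.0;   /* PP cutoff: beyond this the PM force alone is used */

    for (int i = 0; i < NP; ++i)
        for (int j = i + 1; j < NP; ++j) {
            double d[3], r2 = 0.0;
            for (int k = 0; k < 3; ++k) {
                d[k] = x[j][k] - x[i][k];
                r2 += d[k] * d[k];
            }
            if (r2 > rmax * rmax) continue;      /* not a close pair */
            double r = sqrt(r2);
            double f = plummer_profile(r, eps);  /* minus tabulated PM force
                                                    in the real PP kernel   */
            for (int k = 0; k < 3; ++k) {
                g[i][k] += m[j] * f * d[k] / r;
                g[j][k] -= m[i] * f * d[k] / r;
            }
        }

    for (int i = 0; i < NP; ++i)
        printf("particle %d: g = (% .4f, % .4f, % .4f)\n",
               i, g[i][0], g[i][1], g[i][2]);
    return 0;
}
```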
following equation ( [ eq_imbaltar ] ) , the load imbalance of a discrete partitioning state is defined by = 1-\frac{\langle \hat{r}_n \rangle } { \max \hat{r}_n}\ .\ ] ] the residual load imbalance is redefined in the discrete space as [ cf .( [ eq_resimbal ] ) ] = \min\limits_{\{\hat{r}'_b , \hat{r}'_n\}:\;b(\hat{r}'_b ) > 0 } \mathcal{\hat{l}}[\hat{r}'_n ] \ .\ ] ] the problem of load balancing is posed in the discrete space as finding the discrete target partitioning state that will minimize the load imbalance . we discuss how this is done in the next subsection .once the discrete target partitioning state is found , the continuous target partitioning state is also found by setting , where is the raw hc index of any cell such that ] . after the correction of the partition done , we move on to the next partition , applying the same technique but using the already corrected value for the position of partition .we then continue applying the same scheme for all the other partitions in cycles along the circle until the resulting shifts for all partitions become zero .the resulting positions of the partitions will define the target state in the circular cyclic correction repartitioning approach .this approach if used alone is not satisfactory just as for the cumulative partitioning approach above , however the nature of the problem is completely different .if a large variation in workload develops across a large range of indices ( e.g. between and ) , this variation will not be suppressed by the circular cyclic correction scheme since only the adjacent partitions and are used for correction of any given partition . on the other hand ,all the local fluctuations in workload will be suppressed very effectively . as we see , the cumulative repartitioning approach and the cyclic circular partitioning approaches smooth the large scale and small scale ( in terms of the range of indices ) workload fluctuations respectively .applying the two approaches in sequence works well to provide a nearly optimal solution for the discrete workload . in the example of figure [ fg_partit ], the bar shows the result of applying the circular partition correction approach to the output of the cumulative approach ( bar ) obtained from the initial discrete partitioning state ( bar ) . as follows from the bar of figure [ fg_partit ] ,the resulting target partitioning state is and .the resulting discrete load imbalance is is 3.4 times smaller than the load imbalance obtained using only the cumulative method .our experiments show that the combination of the two approaches results in a good approximation to the load - balanced target partitioning state . the residual load imbalance is generally limited not by our ability to find the optimal solution but instead by the cpu time fluctuations due to variations in cache usage .in a serial code , the array of particle structures ( [ eq_part ] ) is static , that is , it remains fixed length with unchanging particle labels . in a parallel code with domain decomposition, particles may move from one process to another .this not only requires interprocessor communication , it also complicates the storage of particle data .this section discusses our solutions to these problems .the particle data are stored as a single local particle array of pointer on each process .a slightly larger range is allocated to avoid reallocation every timestep .in addition to the particle array , we have a linked list that tells which particles lie in each hc cell . 
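A compact version of the two-stage repartitioning just described is sketched below: a prefix-sum (cumulative) placement of the domain boundaries along the binned Hilbert-curve workload, followed by a cyclic sweep that shifts each interior boundary toward equalizing its two neighbouring domains. The one-bin-at-a-time correction rule and the synthetic workload profile are simplifications introduced for illustration and differ in detail from the scheme used in the actual code.

```c
/* Hilbert-curve domain repartitioning: cumulative (prefix-sum) placement of
 * the partitions followed by cyclic pairwise correction.
 * Simplified sketch; compile with: cc partition.c */
#include <stdio.h>
#include <stdlib.h>

#define NBINS 64     /* binned workload samples along the Hilbert curve */
#define NPROC 4      /* number of processes (domains)                   */

/* Work assigned to domain n = sum of bin workloads in [part[n], part[n+1]). */
static double domain_work(const double *w, const int *part, int n)
{
    double s = 0.0;
    for (int b = part[n]; b < part[n + 1]; ++b) s += w[b];
    return s;
}

static double imbalance(const double *w, const int *part)
{
    double sum = 0.0, max = 0.0;
    for (int n = 0; n < NPROC; ++n) {
        double r = domain_work(w, part, n);
        sum += r;
        if (r > max) max = r;
    }
    return 1.0 - (sum / NPROC) / max;
}

int main(void)
{
    double w[NBINS];
    int part[NPROC + 1];

    /* Fake clustered workload profile along the curve (illustrative only). */
    srand(1);
    for (int b = 0; b < NBINS; ++b)
        w[b] = 1.0 + (b % 16 == 3 ? 20.0 : 0.0) + rand() / (double)RAND_MAX;

    /* Step 1: cumulative partitioning.  Place boundary n where the running
     * sum first reaches the fraction n/NPROC of the total workload. */
    double total = 0.0;
    for (int b = 0; b < NBINS; ++b) total += w[b];
    part[0] = 0; part[NPROC] = NBINS;
    double run = 0.0;
    for (int b = 0, n = 1; b < NBINS && n < NPROC; ++b) {
        run += w[b];
        while (n < NPROC && run >= n * total / NPROC) part[n++] = b + 1;
    }
    printf("after cumulative step  : imbalance = %.3f\n", imbalance(w, part));

    /* Step 2: cyclic correction.  Sweep over interior boundaries and shift
     * each toward equalizing its two neighbouring domains; repeat until no
     * boundary moves.  Each move strictly lowers the larger of the two
     * neighbouring workloads, so the sweep terminates. */
    int moved = 1;
    while (moved) {
        moved = 0;
        for (int n = 1; n < NPROC; ++n) {
            double left  = domain_work(w, part, n - 1);
            double right = domain_work(w, part, n);
            if (left > right + w[part[n] - 1] && part[n] - 1 > part[n - 1]) {
                part[n]--; moved = 1;          /* give one bin to the right */
            } else if (right > left + w[part[n]] && part[n] + 1 < part[n + 1]) {
                part[n]++; moved = 1;          /* give one bin to the left  */
            }
        }
    }
    printf("after cyclic correction: imbalance = %.3f\n", imbalance(w, part));
    return 0;
}
```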
for each hc cellthere is a pointer ( the root ) that ( if it is non - null ) points into the particle array to the first particle in that hc cell . a complete list of particles within a given local hc region is obtained by dereferencing the appropriate linked list root and then following the linked list from one particle to the next , as illustrated in figure [ fg_linked ] .the linked list also has a root that points to disabled particles .there are several challenges associated with this simple linked list method of particle access .first , one must transfer particles between processes .second , hc cells are themselves exchanged between processes as a result of repartitioning .third , one must optimize the traversal of the linked lists to optimize code performance .finally , one must specify which hc cells are associated with a given process .we discuss these issues in the remainder of this section . of fig .[ fg_hc8 ] .the hc cells associated with this process are .the particle arrays are the horizontal bars ( with disabled particles corresponding to gaps in the array opened up when particles moved to other processes ) .the linked list is given by the arrows going from one particle to another ; the solid ( dashed ) arrows give the linked list for the active ( disabled ) particles .the linked list roots are the pointers ( for disabled particles ) and ( for active particles ) beneath the particle array bars .each linked list begins at a root and ends with the null pointer .the particle array is allocated slightly more storage ( ) than needed ( ) .a ) the particle array and linked list before sorting .b ) the same particles and the linked list after sorting . ] during each position advancement equation ( [ eq_leapfrog ] ) , twice every timestep some particles move across the boundary of their local particle domain . as a result, such a particle is sent from a process to another process whose local region it entered .particles may cross the boundary of any pair of domains .the associated communication cost scales linearly with the surface area .the hilbert curve domain decomposition minimizes this cost because of the low surface to volume ratio ( [ sec_hc ] ) .when a particle moves outside the local region , it leaves a gap in the local particle array .we set the particle mass to and call this particle array member a disabled particle .all the disabled particles on each process form a separate linked list with root .the particles entering from other processes replace the disabled particles or are added to the end of the particle array .as a particle initially in process crosses a boundary to another process , the i d of the target process should be immediately found in order to send this particle to the new process .dividing the new particle coordinates by the hc mesh spacing gives the new hilbert curve mesh cell coordinates .the target process i d can then be found calling moore s function for the new hc index . by using the current hilbert curve partitioning ,one finds the i d of the target process from .once all particles to move have been identified , the particles are transferred between processes . as we show in appendix [ sec_moore ] ,moore s function calls are relatively expensive . 
to avoid having this cost each time a particle crosses the boundary , we allocate an extra one layer of hc cells surrounding the boundary of , as shown in figure [ fg_pp ] , and we mark the surrounding cells with the ids of the appropriate processes by calling moore s function for each of them exactly once . by doing this once , we avoid calling moore s functions in the future. however we still have to call the function for the very small fraction of the boundary - crossing particles that went further than one boundary layer cell in one timestep .the extra layer of hc cells surrounding the local region is also used with the particle - particle force computation as described in [ sec_hcpp ] .we maintain the particle linked list throughout the simulation instead of reforming it each timestep .as particles cross from one hc cell to another even if they are in the same local region the linked list is updated to reflect these changes .the particle array is reallocated whenever the fraction of disabled particles exceeds a few percent ( the exact value is a parameter set by the user ) , or the amount of particles exceeds the boundary of the pre - allocated particle array . cells containing all the particles assigned to process .information about the layer of boundary cells ( all gray and white cells outside the local region ) is also stored by process .this information is used both when particles are transferred between processes and during the short - range ( particle - particle ) force computation . in the latter case ,the particle data for the shaded cells is used to compute forces on particles in cells a and b as discussed in [ sec_hcpp ] .] in addition to the pointer to the root of the linked list that contains all the particles within each hc cell , each cell of the local region contains other structure members : the process number the cell belongs to , the current and previous timestep cell workloads required by equation ( [ wk_cellrob ] ) , the number of particles in this cell , etc. we will refer to this structure as the _ hc cell structure _ and the array of structures for all hc cells the _ hc cell array_. 
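a schematic version of such a cell structure is sketched below ; the member list and the types are assumptions chosen for illustration , and the structure actually used in the code is packed more compactly than this .

```c
/* illustrative hc cell structure ; the real layout differs in detail */
struct hc_cell {
    struct particle *root;   /* head of the linked list of particles in this cell */
    int   proc;              /* process the cell currently belongs to             */
    int   npart;             /* number of particles in the cell                   */
    float wk_cur;            /* measured cell workload , current timestep         */
    float wk_prev;           /* measured cell workload , previous timestep        */
};

/* the hc cell array : one entry per cell of the local region plus
   the surrounding one - cell boundary layer */
extern struct hc_cell *hc_cells;
```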
each member of this array has a size of 16 bytes . when repartitioning occurs , we send and receive the relevant hc cell array members , and the particles they contain , to and from the appropriate processes . some program components , such as particle position advancement , require access to the complete particle list on each process . all local particles can be accessed with a simple loop over the particle array that skips the disabled members . we found that , because of cache memory efficiencies , it is up to ten times faster to use a simple array to access every local particle than it is to dereference the three - dimensional linked list roots for each of the local cells of . the reason for this difference is that simple array members are sequential in the machine memory , while successive linked list members are not , and the cpu cache memory is used more effectively when data are accessed sequentially in an array . the improvement in efficiency is especially important in the particle - particle calculation , because each particle is accessed many times during one force computation . here we introduce a fast sorting technique that places the particle data belonging to the same hc cell sequentially within segments of the particle array , ordered by increasing hc - cell raw index . this sorting procedure is performed each timestep before the force computation . every timestep , before a force calculation , we follow all the cells in the order of their raw hc index and concatenate their linked lists , resulting in just one linked list of all the particles in the local particle array . then , using the acceleration members g0 and g1 of the particle structure ( not needed at this point of the timestep ) as pointers , we form an extended linked list replacing the old one . the result is a new linked list which can be traversed both forward ( using g1 ) and backward ( using g0 ) . then , starting from the first particle of the simple array of particles , we swap it with the first particle in the extended linked list , updating the forward and backward pointers of the particles immediately adjacent to it within the extended linked list . we then proceed to the next particle in the simple array and in the linked list , doing the same , until the entire particle list has been sorted . the result of this sorting is illustrated by figure [ fg_linked]b . in addition to optimizing the cpu cache memory usage , the above sorting technique eliminates the need to allocate an additional buffer for sending and receiving particles while repartitioning , because all the particles to be moved as the result of repartitioning occupy contiguous segments of the simple particle array . when the sorting is completed , the original linked list is no longer needed and is deallocated ; it is later rebuilt directly from the sorted particle array , before the particle advancement and repartitioning take place .
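the listing below sketches the effect of this reordering in a simplified , out - of - place form : particles are gathered cell by cell , in raw hc - index order , into a contiguous array . the actual code , as described above , performs the equivalent permutation in place , reusing the g0 and g1 acceleration members as temporary backward / forward pointers instead of allocating the scratch buffer used here .

```c
#include <stdlib.h>
#include <string.h>

/* simplified sketch of the cell - order sort : afterwards , particles of the
   same hc cell occupy a contiguous segment of the array , and the segments
   are ordered by increasing raw hc - cell index */
void sort_particles_by_cell(struct particle *pa, size_t np,
                            struct particle **cell_root, size_t ncells)
{
    struct particle *tmp = malloc(np * sizeof *tmp);
    size_t k = 0;

    for (size_t c = 0; c < ncells; ++c)                 /* raw hc - index order */
        for (struct particle *p = cell_root[c]; p; p = p->next)
            tmp[k++] = *p;                              /* disabled particles are skipped ,
                                                           since they live on their own list */
    memcpy(pa, tmp, k * sizeof *tmp);
    free(tmp);
    /* the per - cell linked lists now point at stale slots and must be
       rebuilt from the sorted array before they are used again */
}
```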
to transfer particles between processes we use a modification of mpi_alltoallv that assures no failure will occur if insufficient memory has been pre - allocated for the send and receive buffers . this is achieved by using mpi_alltoall to exchange the numbers of particles to be sent and received , and then using as many mpi_alltoall and mpi_alltoallv calls as necessary to avoid overflowing the available memory of each processor . as mentioned above , during particle exchange and force computation one needs frequent access to a cell 's particle list and other cell data , given the indices of the cell in the hc mesh . the most obvious method is to call moore 's function to get the global hc index and then use our table of hc entries ( [ sec_hci ] ) to convert it into the raw index . the raw index then gives the root of the particle linked list , as shown in figure [ fg_linked ] . this method is unsatisfactory because of the expense of calling moore 's function many times during the force evaluation . [ figure fg_lrbad : the local region ( dark gray ) assigned to one process and the rectangular array that includes it ( light gray combined with dark gray ) . the ragged array ( middle gray combined with dark gray ) requires much less storage , but only the ragged array with gaps ( dark gray ) corresponds exactly to the local region . four cells belonging to it are randomly selected and labelled b , c , d and e. ] another simple method of allocation for the cells would be a _ three - dimensional rectangular array _ of cells holding the frequently used roots of the linked lists to the particles contained in each cell and the total number of particles within it . the access to a hc cell given its coordinates in this case is given by dereferencing the array [c_1][c_2 ] , where is the integer function equal to the number of the completed contiguous intervals in the - ordered set of all the hc cells in the local region having coordinates and having - th coordinate less than . for example , in the case of figure [ fg_lrbad ] , access to the cells b , c , d , and e is given by [0][26 ] , [0][9 ] . the disadvantages of the other methods considered above do not apply now : the array dereference call is exponentially faster than the function call , and the space allocated exactly equals the required number of cells . for , the function evaluation takes a time that grows only logarithmically with the number of disjoint parts along the last dimension for a given and . in this section , we present an efficient method for the parallel pm and pp computation of forces for particles within the hc local regions . by using the techniques developed in [ sec_loadbal ] and [ sec_layout ] , we have made our algorithms load balanced and efficient . the pm force calculation requires communication between two different data structures with completely different distributions across the processes . the particles on one process are organized into irregularly - shaped hc local regions . the density and force meshes , on the other hand , have a one - dimensional slab decomposition based on fftw . the parallel computation is an spmd implementation of the five pm steps presented in [ sec_pm ] . [ figure fg_lg : the small circles are the discrete set of pm density gridpoints . the filled circles are the pm gridpoints within a fftw slab . the gray filled region is the hc local region . the set of all circles within the dashed line is ; the set of filled circles within the dashed line is .
extending this last set slightly gives the continuous set within the solid line . ( [ eq_lmes ] ) gives the intersection of this last set with the gray region . ] we define a few concepts that will be needed in order to describe and implement the data exchange between the two different data structures during the parallel pm force calculation . the various sets used in the calculation are illustrated in figure [ fg_lg ] . the fftw parallel fast fourier transform implementation allows one to compute forward and inverse fourier transforms of the complete three - dimensional array of mesh points distributed among the processes in the form of slabs of grid points , where , each slab starting at the position along the 0 - th dimension . we will call these slabs the _ density _ or _ force mesh slabs _ ( depending on the context ) and denote them by . the geometry of the slab is calculated once and for all at the start of the run by calling the fftw fourier transform plan initialization routine . let us denote the complete discrete set of all density mesh gridpoints needed for a complete fourier transform by , and the complete continuous set of all positions within the whole simulation volume by . we have here , labels the process holding the hc local region while labels the process holding a given density / force mesh slab . for a continuous set of positions , let us define to be the minimal complete subset of the density grid points such that equation ( [ eq_int ] ) is satisfied for any position vector . by this definition , if all the local particles are contained within , then after the density assignment of step 1 of the pm force calculation the only non - zero pm - density grid points of are in fact within a subset . for a discrete subset of the density gridpoints , let us define to be the minimal complete continuous set of points such that equation ( [ eq_int ] ) is satisfied for any . now , if all the grid points local to a process are within a subset of all the particles in the simulation volume , only the particles of the subset may acquire any non - zero force contribution from those gridpoints during _ step 5 _ of the pm - force calculation . as we discussed in [ sec_pm ] , step 1 of the pm force calculation involves filling the density grid points in using the particles distributed in the volumes . steps 2 - 4 involve working only with and are straightforward , since they do not require any interprocessor communication aside from the parallel fft . during step 5 the information flows in exactly the opposite direction , therefore an algorithm for step 1 applies to step 5 as well , with the direction of the information flow reversed . the problem remaining now is for step 1 of the pm force calculation to decide how to fill the local density grids from the particles distributed within the local regions . to solve this problem we considered a number of approaches , described briefly below , but only the last one is implemented in our code and is effective over the entire range of clustering . _ a ) sending particles . _ under this method , each pair of processes sends the appropriate portion of the particle data from process to process to fill the density mesh of slab .
for each pair of processes the set of the density gridpoints on process will be updated with the particles brought from the volume within the hc local region of process . this method is very efficient for the pairs where the particle sender processes have a low particle number density , thus reducing the number of particles to be sent and the communication cost . _ b ) sending grid points . _ under this method , each pair of processes fills the portion ( [ eq_gmes ] ) of the grid points using the local particles within ( [ eq_lmes ] ) , then sends the filled gridpoints to process . this method performs poorly when the particle number density is low on the sender process , because most of the density values in the message are zero . this method is very efficient for the pairs where the particle sender processes have a high particle number density : each gridpoint of the sender process contains the contributions from many particles . _ c ) combined particle and grid point send . _ method _ a ) _ is effective with a low particle number density while method _ b ) _ is effective with a high particle number density on the particle sender process . the idea of the combined particle and grid point send method is to choose , for each pair of processes , the approach that requires sending the least data . _ d ) sending compressed grid points . _ this approach optimizes the communication cost in both the extreme cases of low and high number density of the particles on the sender process . the idea behind this method is to use approach _ b ) _ above and apply _ sparse compression _ to the gridpoint messages ( [ eq_gmes ] ) . as we know , the grid point approach performs poorly when the particle number density is low on the sender process . using sparse compression , as we explain in the following subsection , significantly alleviates this problem by reducing the message size for the underdense regions . [ figure fg_sparse : the compressed density message is sent to process using an mpi function call . a compressed force message is constructed on process using the template given by the density message . the forces are sent back to process and expanded . the bracketed values in the bottom array can be ignored because there are no particles nearby the relevant grid points . ] in a cosmological simulation , the overdense regions have small hc local regions with every grid point having many nearby particles , so that the force and density messages are small . on the other hand , low - density regions have large hc local regions with many pm grid points , but the density and force messages are made small by the compression method illustrated in figure [ fg_sparse ] . during step 1 of the pm computation , if a number of binary zeros are encountered in the grid message , they are all substituted by a pair of numbers before sending : the first number is a delimiter ( an illegal density or force value such as flt_max ) and the second number is an integer giving the number of zeros to follow in the original uncompressed message . this technique is called _ run - length encoding _ . the resulting compression factor is unlimited and depends on how frequently and contiguously the zero values are positioned in the grid message .
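a minimal sketch of this zero - run encoding is given below . the use of flt_max as the delimiter follows the description above , while the function names , the calling convention and the float encoding of the run length are assumptions made for the illustration .

```c
#include <float.h>
#include <stddef.h>

/* compress a density ( or force ) message : every run of zeros is replaced
   by the pair { FLT_MAX , run_length } ; returns the compressed length .
   very long runs could be split into several pairs if float precision
   of the count ever became a concern . */
size_t rle_compress(const float *in, size_t n, float *out)
{
    size_t k = 0;
    for (size_t i = 0; i < n; ) {
        if (in[i] != 0.0f) {
            out[k++] = in[i++];
        } else {
            size_t run = 0;
            while (i < n && in[i] == 0.0f) { ++i; ++run; }
            out[k++] = FLT_MAX;          /* delimiter : an illegal density value */
            out[k++] = (float)run;       /* number of zeros that were removed    */
        }
    }
    return k;
}

/* expand a compressed message back into a full grid message */
size_t rle_expand(const float *in, size_t k, float *out)
{
    size_t n = 0;
    for (size_t i = 0; i < k; ++i) {
        if (in[i] == FLT_MAX) {
            size_t run = (size_t)in[++i];
            for (size_t r = 0; r < run; ++r) out[n++] = 0.0f;
        } else {
            out[n++] = in[i];
        }
    }
    return n;
}
```

a single isolated zero costs two values in the compressed stream , but long runs of empty gridpoints — the common case in underdense regions — collapse to just one delimiter pair .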
the receiver process simply uncompresses the message by filling the gridpoints within . during step 5 , the force valuesare sent from process to three times ( once for each of the three dimensions ) .the force array message is identical in the size to the density message that was sent during step 1 for each pair of processes .we compressed the density values in step 1 using run - length encoding of zero value densities .in the force message the technique runs into a difficulty because the gravitational forces are long range forces by nature and their values are nowhere equal to zero .if we do not compress the force values , there is no advantage in choosing the compressed gridpoint approach , since the force messages would have the same length as the uncompressed density messages . by using packet information obtained while receiving the density array, we can compress the forces using exactly the same pattern formed by the packets of the density message , as shown in figure [ fg_sparse ] .the receiving process will decompress the force and obtain exactly the initial force array excluding the values of force at the array members which were skipped in the density assignment ( the square bracketed force values in fig .[ fg_sparse ] ) .this loss of information is however completely irrelevant for interpolation of the force values to the particles in _ step 5 _ because the square bracketed force values in the force array belong to grid points which earlier acquired absolutely no density values from the surrounding particles , which means that for that grid point and for any particle within , the gridpoint has no nearby particles [ the condition ( [ eq_int ] ) is not satisfied ] .thus the force values at that grid point will not be interpolated to any particles during step 5 .the idea of sparse array compression is not implemented in the hydra code .once implemented it will significantly reduce their communication and memory costs .equation ( [ eq_gmes ] ) gives the minimal set of density grid points on process needing to be filled with values from particles on process .this set is impractical to work with because of its irregular shape . for a practical implementationwe embed this region within a rectangular submesh of during steps 1 and 5 of the pm force computation , as follows . fora continuous set of positions inside the simulation volume , let us define to be the minimal rectangular subset of density grid points such that . for grid points with butoutside we set the density values to zero .it follows at once that if we use instead of equation ( [ eq_gmes ] ) for the definition of pm grid point messages , we will have the rectangular mesh for interpolation of density for particles within , and this still give the correct result .however , since the extent of the local region inside the simulation box is not limited , neither is the extent of . for example , when consists of just two cells with the coordinates and , it is easy to see from the definition that encloses the whole simulation density mesh as a subset and this is too much memory space for allocation on a process . to avoid this problem, we dissect the local region uniformly into slices along the 0-th dimension so that the extent of each slice along the 0-th dimension will not exceed . 
using the previous equation we have , now summed for all the receiving processes for each slice of the hc local region , the density is interpolated onto the rectangular mesh which is small enough to be allocated since its extent in the 0-th dimension is limited by roughly grid points .then , the messages under the inner sum of equation ( [ eq_pmmsg ] ) are sent to processes .the procedure is repeated for each slice . in the codepresented in this paper we use the blocking mpi routines for pm message communication , which requires synchronization between each pair of processes exchanging the message . in order to reduce waiting time, mpi allows bi - directional blocking communication using mpi_sendrecv . in the above equationthe process is described as the _ sender _ of the pm - grid messages obtained by interpolation from the particles within to the processes in order to update their fftw - slabs .note however , that the same process also behaves as a _ receiver _ of the pm grid messages from the other processes in order to update the fftw - slab .the set of the received messages is obtained by simply swapping the indices and in the above equation .adding the two together we have for the set of gridpoints participating in the communication on process in both directions = \sum_{k=0}^{n_k^{ij}-1 } \sum_{j=0}^{\donpc-1 } \left[\ ; \gsl^j \cap \rfun(m_h^{ik}\ , ) +\gsl^i \cap \rfun(m_h^{jk}\ , ) \right]\ , \ ] ] where and the is defined to be an empty set for . in order to access particles in a given slice of the local region we use the particle access technique described [ sec_voids ] .the sorting technique described in [ sec_adv ] speeds up the density and force interpolation .the timing of the interpolation for each hc cell gives the pm part of the hc - cell workloads in equation ( [ workload_p3 m ] ) .the above procedure is used for both density and force interpolation in the pm force calculation . in the current implementation, the mpi messages are blocking , which means additional waiting time . in a subsequent paperwe describe the implementation of non - blocking communication resulting in a significant speedup of the pm calculation . the particle - particle ( pp ) force calculation increments forces acting on each of the particles in a pair if the particles are closer than .the method of particle access developed in [ sec_voids ] allows one to access all the particles within a given hc cell . from equation ( [ eq_ppsimple ] ) , hc cells are coincident with the chaining mesh cells needed for the pp force calculation . to see how the communication and computation work , consider the example of figure [ fg_pp ] . 
to compute the pp force for a particle within chaining mesh cell ,the particle data in the surrounding cells are required .the particle data within the cell are locally available .however one needs to bring the positions and masses from the other processes to get the particle data for the boundary layer cells .once the particle data from the boundary layer cells are gathered , the pp force calculation may be performed by pair summation , after which the resulting forces for the particles within are sent back to their processes where the pp forces of the original particles are incremented .the same algorithm applies to any other cell within , for example the cell of figure [ fg_pp ] , for which the particle data for and are available locally while the particle data for cells and must be brought to the local process from the others .because of its pairwise nature only half the surrounding cells are needed for the pp force calculation for each hc cell . in total , the particle data for the non - local cells shaded in figure [ fg_pp ] are required for the pp force calculation for each particle within .the amount of communication needed for a complete pp force calculation is proportional to the number of particles in the cells required to be brought from the other processes through the boundary layer cells .if the pp pair summation step is started synchronously on all processes , it will finish at approximately the same time on all processes if the load imbalance is low . otherwise , the processes that complete the pp force computation first will have to wait for the remaining processes to finish their pair summation . since the pair summation is the most time - consuming step of p m ,it is crucial that the procedure be load - balanced .this is accomplished using the methods of [ sec_loadbal ] .the cpu time of the pair summation step is used in the cell workload calculation of equation ( [ workload_p3 m ] ) .the particle access time in the pair summation loop is minimized by pre - sorting the particles as described in [ sec_adv ] .in early versions of our code , the memory often exceeded the available resources causing the code to crash . by implementing runtime tracking of memory usagewe were able to identify the problems and optimize the memory requirements .memory usage was reduced largely in three ways : the irregular shaped local domain memory technique of [ sec_voids ] , the elimination of particle buffer allocation while repartitioning , and memory balancing when necessary during repartitioning as described in [ sec_rep ] ..[tb_parmem ] dominant memory requirements of the parallel code . here is the number of particles and is the thickness of the pm slab , both on process .[ cols= " < , < , < , < " , ] we tested the scalability of using two problem sizes ( and in a 200 mpc box with plummer softening length mpc , evolved to redshift zero , taking 634 and 657 timesteps , respectively ) and a range of numbers of computing nodes as shown in table [ tb_sclb ] .each computing node has two cpus .the runs have either two processes per node ( one per cpu , runs 4a , b ) or four processes per node ( two per cpu , using intel hyperthreading ) . for perfect scalability, the times in the last column would be equal for simulations of the same grid size . from table [ tb_sclb ]we may draw several conclusions .first , does not scale perfectly like an embarrassingly parallel application . 
on the other hand , increasing the number of processes up to 80 leads to a steadily decreasing wall clock time . comparing runs 3a and 3g , we see that for up to 48 processes the wall clock time scales as . hyperthreading also gives a significant speedup . comparing runs 3f and 4b , which have the same total number of processes but different numbers of compute nodes , we see that hyperthreading improves the code performance by a factor of 1.62 . we also see that the code scales reasonably well as the problem size is increased . comparing runs 3f and 5b , the wall clock time is proportional to , where is the number of particles . when the wall clock time is dominated by pp pair summation , we expect scaling as . [ figure fg_sclb - imbal : load imbalance for runs 3 ( left ) and 5 ( right ) ; the individual runs are labelled . ] the most significant deviations from perfect scalability arise with the largest numbers of processes , in particular runs 3h , 3i , and 5d . these arise from load imbalance , as shown in figure [ fg_sclb - imbal ] . a significant increase in load imbalance shows up after timestep 500 in runs 3 and timestep 600 in runs 4 due to the formation of a dense dark matter clump . when the number of processes is sufficiently large , this leads to one or a few hc cells beginning to take as much time for pp pairwise summation as the average time for the other processes . according to equation ( [ eq_resimbalcon ] ) , the result is a growing residual load imbalance . scalability breaks down beyond a certain number of processes , given by equation ( [ eq_npclim ] ) . once the performance saturates , the instantaneous and residual load imbalance match , because it is no longer possible to improve the load balancing by rearrangement of the partitioning . although the performance of is limited by the pp pair summation and not by the pm force computation , it is worth recalling that , because the current code uses blocking sends and receives to pass data between the particle and grid structures , the pm time also scales imperfectly . when we implement adaptive p m , the pp time will decrease significantly , so that the pm time becomes a significant fraction of the total wall clock time . to improve the parallel scaling , it will be important to implement non - blocking communication for the pm particle / grid messages . parallelizing a gravitational n - body code involves considerably more work than simply computing different sections of an array on different processors . the extreme clustering that develops as a result of gravitational instability creates significant challenges . a successful parallelization strategy requires careful consideration of cpu load balancing , memory management , communication cost , and scalability . the first decision that must be made in parallelizing any algorithm is how to divide up the problem to run on multiple processes . in the present context this means choosing a method of domain decomposition . because p m is a hybrid algorithm combining elements of three - dimensional rectangular meshes and one - dimensional particle lists , we chose a hybrid method of domain decomposition . a regular mesh , distributed among the processes by a simple slab domain decomposition , is used to obtain the pm force from the mesh density . a one - dimensional structure , the hilbert curve , is introduced to handle the distribution of particles across the processes and to load balance the work done on particles .
implementing hilbert curve domain decomposition in a particle code is the major innovation of our work . to take full advantage of it we had to employ a number of advanced techniques .first , in [ sec_loadbal ] we devised a discrete algorithm to find the nearly optimal partitioning of the hilbert curve so as to achieve load balance , the desirable state in which all processors have the same amount of work to do .this is a much greater challenge in a hybrid code than in a purely mesh - based code such as a hydrodynamic solver or a gridless particle code such as the tree code .we then made the domain decomposition dynamic by repartitioning the hilbert curve every timestep , allowing us to dynamically maintain approximate load balance even when the particle clustering became strong . in [ sec_voids ]we presented a fast method for finding the position of a cell along the hilbert curve given its three - dimensional location .this procedure allows us to access arbitrary cells in a general irregular domain by a lookup table much faster than using the special - purpose hilbert curve function of . in [ sec_compress ] we introduced run - length encoding to greatly reduce the communication cost for transferring information between the particle and mesh structures required during the pm force computation . in [ sec_adv ] we optimized the particle distribution within each process so as to improve the cache performance critical for efficient pair summation in the pp force calculation . by choosing the domain decomposition method appropriate for each data structure , and by implementing these additional innovations , we achieved good load balance and scalability even under extreme clustering .the techniques we introduced for effective parallelization should be applicable to a broad range of other computational problems in astrophysics including smooth - particle hydrodynamics and radiative transfer. tests of our algorithm in [ sec_test ] showed that we achieved our goals of scalability and load balance , with two caveats mentioned at the end .in figure [ fg_wall ] we demonstrated the importance of using a dynamic three - dimensional domain decomposition method instead of a static one - dimensional slab decomposition .the latter method is unable to handle extreme spatial inhomogeneity .next , we performed a long simulation ( performed on only 20 dual - processor computing nodes ) to thoroughly test the load balancing algorithm .the average load imbalance for this simulation run with 80 processes was only 12% , meaning that 12% of the total wall clock time of all the cpus was wasted .while not perfect , this is very good performance for the p m algorithm .the largest cause of load imbalance over most of the simulation was our inability to predict the total cpu time of the next timestep on each process because of variations in cache memory usage . finally , we tested the limits of scalability by performing the set of runs in table [ tb_sclb ] . 
for up to 48 processesthe code performed with very good parallel speedup the wall clock time scaled as for processes , as compared with for perfect scalability .our tests revealed two limitations to scalability that will be addressed in a later paper presenting an adaptive p m algorithm .first , the current code uses blocking communication for sending data between the particle and grid structures in the pm force calculation .in other words , some processes sit idle waiting for others to complete their communications requests .this inefficiency , while small when pp forces are expensive to compute , will become more important when adaptive mesh refinement reduces the pp cost .the solution is to restructure the communication to work with non - blocking sends and receives . finally , we observed our code to become inefficient when a handful of hilbert curve cells ( out of millions in the entire simulation ) begin to dominate the computation of pp forces .because a non - adaptive code does not allow refinement of one cell , a single process must handle these extremely clustered cells even if the other processes have to wait idly while it finishes .the solution to this problem is simply to use adaptive refinement . in a later paperwe present an algorithm for scalable adaptive p m building upon the techniques introduced in the current paper .once this paper is accepted for publication , the simulation codes presented here will be made publicly available at http://antares.mit.edu/. a. shirokov would like to thank paul shapiro and mike warren for useful discussions and serhii zhak for helpful comments on hardware issues .this work was supported by nsf grant ast-0407050 .figure [ fg_block ] presents a block diagram of our parallel hilbert curve domain decomposition code .the code may run on any number of processes ( this is not restricted to being a power of 2 ) .the code is written in ansi c with mpi calls .excluding fftw , it consists of about 33,000 lines of code .this appendix gives an overview of the code guiding the reader to the relevant parts of the main paper .m code . ]the code begins by loading particle data from one or more files . at the beginning of a simulation ,these files contain the initial conditions .a simulation may also be started using particle data that have already been evolved .the particle data may be either in one file on the cluster server or they may be in multiple files , one stored on each cluster compute node . the next step is to initialize the hilbert curve for domain decomposition based on the particle distribution , as described in [ sec_hci ] . the code stores particle data ( e.g. positions and other variables as described in [ ser_pa ] ) differently than mesh data ( e.g. density ) .mesh - based data are stored on a regular pm mesh which is divided by planes into a set of thick slabs , one for each parallel process .particle data are organized into larger cells called hilbert curve ( hc ) cells .( these cells have a size just slightly larger than the cutoff radius for the particle - particle or pp short - range force . 
)the cells are then connected like beads on a necklace by a closed one - dimensional curve called a hilbert curve .the hilbert curve initialization step computes and stores the information needed to determine the location of every bead on the necklace , that is , it associates a one - dimensional address with each hc cell .once the hilbert curve is initialized , the hilbert curve is cut into a series of segments , each segment ( called a hc local region ) containing a set of hc cells and their associated particles .each parallel process owns one of the local regions .the particles are thus sent from the process on which they were initially loaded to the process where they belong .when restarting a run on the same nodes , the particles are already on the correct processes . when starting a new simulation , the partitions are set with equal spacing along the hilbert curve and the particles are sent to the appropriate processes .this method of assigning particles to processes based on their position along a one - dimensional curve of discrete segments is called hilbert curve domain decomposition and it is explained in [ sec_par ] .the organization of particles within a process is described in [ sec_layout ] .after these initialization steps the code integrates the equations of motion given in [ sec_eom ] using a leapfrog scheme presented in [ sec_leapfrog ] .first the positions are advanced one - half timestep , and if they cross hc local region boundaries they are moved to the correct process .next , gravitational forces are computed .most of the work done by the code is spent computing forces .the interparticle forces are split into a long - range particle - mesh part computed on the mesh and interpolated to the particles , plus a short - range particle - particle correction , as described in [ sec_pm ] , [ sec_pp ] , and [ sec_hcforce ] .most of the communication between processes occurs during these steps .if the particle - mesh green s function has not yet been computed , it is computed just before the first pm calculation .the green s function is essentially the discrete fourier transform of , modified by an anti - aliasing filter to reduce anisotropy on scales of the pm mesh spacing .after the particle - mesh forces are computed , they are incremented by the particle - particle forces ( the most time - consuming part of p m ) . after the forces are computed , velocities and then positions are advanced to the end of the timestep .once more , particles that cross hc local region boundaries are transferred to the correct process .after the particles have moved , the cuts along the hilbert curve are moved so as to change the segment lengths and thereby change the domain decomposition .this step is called repartitioning .its purpose is to ensure that , as much as possible , each process takes the same amount of time to perform its work as every other process , so that processes do not sit idle waiting for others to finish their work .( certain operations , like the fft , must be globally synchronized . )when this ideal situation is met , the code is said to be load balanced .repartitioning is performed every timestep to optimize load balance , as explained in [ sec_loadbal ] . at the end of the integration step ,the code generally loops back to advance another step .periodically the code also outputs the particle data , usually writing in parallel to local hard drives attached to each compute node .table [ tb_cvars ] presents a list of frequently used symbols and variables in the code . 
description + + & & the simulation box size in comoving mpc , .+ & & the total number of particles in the simulation volume + & dx & the pm mesh spacing , same in all dimensions + & epsilon & plummer softening length , in units of + & etat & time integration parameter , usually + & cr.max & pp - force length , in units of , typically + & n0 .. n2 & the size of the simulation box , in units of pm cells + & ncm0 .. ncm2 & the size of the simulation box in chaining mesh cells + & ngrid & , the total number of pm grid points + & cr.len0..cr.len2 & chaining - mesh grid spacing along three dimensions + & , & starting and finishing pointers of particle array + & & pointer to the end of the preallocated particle array , equals pa_f in the serial code . in the parallel code .+ + & & starting index of fftw slab for the fft plan + & & thickness of fftw slab . the whole slab on process has size $ ] , where + & hc_n0 hc_n2 & the size of the simulation box in hilbert curve ( hc ) mesh cells , per dimension + & & , the total number of hc mesh cells .+ & & the number of particles local to process + & wk.nproc & number of worker processes , those containing particle data + & & coordinates of a cell in the hilbert curve mesh + & & a hilbert curve index + & & a raw hilbert curve index + & & mapping between the hc index and the cell s coordinates + & & the inverse of the above mapping + & & mapping between the hc raw index and the cell s coordinates + & & hc order : the number of cells in the hc mesh is , + & & hc mesh spacing along dimension , in units of + & & hc local region of process + & hc_stg & 3-d ragged array with gaps of the cells of the + & & the number of entries of the hc into the simulation volume + & & the hc index of the -th entry of the curve into the simulation volume , and the number of the hc cells that follow contiguously inside the simulation volume along the curve + & & the hc index of the bottom partition and number of cells on process + & & same , with the raw index +working with a hilbert ( space - filling ) curve requires a mapping from hc index to hc cell position and vice versa . implemented c functions that accomplish these mappings .the most straightforward implementation of the hilbert curve is too slow , since a hilbert curve is defined recursively by its self similarity .moore s implementation is based on a much faster non - recursive algorithm of .a one - to - one correspondence between a cell and the hc index is given by the following functions of moore s implementation : the hilbert curve index is of type long long unsigned and a vector of three integer indices giving the spatial coordinates of the cell .these two functions are inverse to each other .they are implemented for any spatial dimension .for example for in figure [ fg_virgo ] , a function will return the position of the curve s starting point , and the function returns the position of the next cell along the curve .we verified that the resulting curve indeed provides a one - to - one mapping between the cell and its hc index preserving space locality for all hc mesh sizes up to .table [ tb_moore ] shows the average measured cpu time to make one call to the hc function on a 2.4 ghz intel xeon processor . 
the time shown is compared with the average times to make other simple arithmetic operations or memory references .it is surprising how fast the implementation is : it takes just two minutes to make hilbert curve function calls on a single processor .however , in comparison with a simple arithmetic operation or triple array dereferencing , it is very slow : an average function call is about 120 times slower than a triple array dereferencing for the hc mesh ; the function call time increases linearly with the increase of the hilbert curve order as sec .we should therefore avoid using the hc implementation function calls when it is possible to use memory dereferencing instead . as we discuss in [ sec_adv ] and [ sec_hcpm ] we successfully avoid multiple calls to hilbert_c2i during the force calculation and the particle advancement by proper organization of memory usage .call , sec + nothing ( bare triple for loop ) & 7.75 + inline multiplication ( innermost integer index squared ) & 12.8 + arithmetic function call ( innermost integer index squared ) & 18.57 + triple array dereferencing & 16.29 + function call ( ) & 1056. + function call ( ) & 920 .+ bennett , c.l . , et al .2003 , , 148 , 1 bertschinger , e. 1991 , in after the first three minutes , ed .s. holt , v. trimble , & c. bennett ( new york : aip ) , 297 bertschinger , e. 1995 , cosmics software release ( astro - ph/9506070 ) bertschinger , e. 1996 , in cosmology and large scale structure , proc .les houches summer school , session lx , ed .r. schaeffer , j. silk , m. spiro , and j. zinn - justin ( amsterdam : elsevier science ) , 273 butz , a.r .1971 , ieee trans . comp ., 20 , 424 couchman , h.m.p .1991 , , 368 , l23 dav , r. , dubinski , d.r . & hernquist , l. 1997 , , 2 , 277 dubinski , j. , kim , j. , park , c. , & humble , r. 2004 , , 9 , 111 efstathiou , g. & eastwood , j.w . 1981 , , 194 , 503 moore , d. 1994 , hilbert curve implementation at http://www.caam.rice.edu/%7edougm/twiddle/hilbert/ pilkington , j. & baden , s. 1996 , ieee trans . par . dist .systems , 7 , 288 plummer , h. c. 1911 , , 71 , 460 quinn , t. , katz , n. , stadel , j. & lake , g. 1997 , preprint ( astro - ph/9710043 ) ruth , r. d. 1983 , ieee trans . nucl ., 30 , 2669 salmon , j. & warren , m. 1994 , j. comp .phys . , 111 , 136
|
we present a parallel implementation of the particle - particle / particle - mesh ( p m ) algorithm for distributed memory clusters . the ( gravitational cosmology ) code uses a hybrid method for both computation and domain decomposition . long - range forces are computed using a fourier transform gravity solver on a regular mesh ; the mesh is distributed across parallel processes using a static one - dimensional slab domain decomposition . short - range forces are computed by direct summation of close pairs ; particles are distributed using a dynamic domain decomposition based on a space - filling hilbert curve . a nearly - optimal method was devised to dynamically repartition the particle distribution so as to maintain load balance even for extremely inhomogeneous mass distributions . tests using simulations on a 40-processor beowulf cluster showed good load balance and scalability up to 80 processes . we discuss the limits on scalability imposed by communication and extreme clustering and suggest how they may be removed by extending our algorithm to include adaptive mesh refinement .
|
the large scale behaviour of randomly stirred fluids was originally studied by forster , nelson and stephen ( fns ) .they used a dynamic renormalization procedure to explore the effects of the progressive removal of small ( length ) scales in a perturbative model under several types of forcing .as they note , their study is only valid at the smallest momentum scales , and as such the study is well below the momentum scale of the inertial range .later , the procedure used by fns was extended by yakhot & orszag ( yo ) to a more general forcing spectrum ( of which the studies of fns were special cases ) and used to calculate the energy spectrum and a value for the kolmogorov constant in the inertial region . while their arguments allowing them to calculate inertial range properties are contested , these issues are not the main focus of this paper .instead , we will concentrate on another disagreement related to the results for the renormalized viscosity and noise . in the papers of fns and yo ,the authors calculate the viscosity increment , quantifying the effect of the removed subgrid scales on the super - grid scales .they find the prefactor ( from , with fns in agreement for their specific cases of study ) .the disagreement is centred around the use of a certain change of variables employed by fns and yo .this substitution has been highlighted as a cause for concern ( for example , ) , since navely the symmetric domain of integration appears to be shifted , violating conditions for the identities used to be valid .using methods that do not introduce any substitution , again for a general forcing spectrum , wang & wu ( ww ) and teodorovich arrive at a different , incompatible result for the viscosity increment .instead , they find the prefactor .this -free result is also used in the more field - theoretic work of adzhemyan _ et al .later , nandy attempted to determine which of the results was correct using a `` symmetrization argument '' and agreed with the original ( general forcing ) result by yo .the method used by fns and yo has found wide - ranging application , for example in soft matter systems , such as the kpz and burgers equations , and the coupled equations of magnetohydrodynamics . given the extensive use of this approach , it is unsatisfactory to have any lingering disagreement on the basic methodology .the aim of this paper is , therefore , to settle this dispute once and for all .there can not be two different results for the same quantity .we will show that an extra constraint mentioned by fns causes the elimination band not to be shifted , and that for substitution - free methods there are neglected boundary terms .these are evaluated and shown to compensate exactly the difference between and .we then show how correct treatment does not require a symmetrization to obtain the yakhot - orszag result .in addition to renormalization of the viscosity , there is also renormalization of the noise .all treatments consider an input noise that is gaussian with the forcing spectrum parametrized as , where is the wavenumber associated with the force . at one - loop order each of the two vertices will have a factor of the inflowing momentum , thus leading to a contribution to the forcing spectrum .both fns and yo acknowledge this correction . in fnsthey treat two specific cases , and . 
in the former ,they find a renormalization to whereas , in the latter , they conclude that all higher order corrections are subleading .yo restrict their analysis to and once again conclude all higher order corrections are subleading .we explicitly show how the leading contribution will always go as and as such can only be taken as an multiplicative renormalization for the case , as noted in .we find the prefactor agrees with found by fns and yo ( with ) and show it to be incompatible with the -independent .another author , ronis , calculates the viscosity and noise renormalization using a field - theoretic approach .his analysis agrees with fns and yo for , although appears to be presented for general . as we will argue in sec .[ sec : renorm_cond ] , this seems unjustified as the noise is only renormalized for the case .the paper is organised as follows . in sec .[ sec : discuss ] , we give a brief discussion on the validity and limitation of this type of low- renormalization scheme for a fluid system . in sec .[ sec : calculation ] our calculation for viscosity renormalization is done and then in sec .[ sec : renorm_cond ] for noise renormalization , along with comparison with other analyses .the results are summarised in table [ tbl : summary ] . finally , in sec .[ sec : conclusion ] , we present our conclusions along with a brief discussion of the relevance of this type of renormalization scheme for calculating inertial range quantities .we start with a brief discussion on the region of validity of this method and its limitations .turbulence is often viewed as an energy cascade , where energy enters large length scales in the production range and is progressively transferred to smaller and smaller scales , until viscous effects dominate and it is dissipated as heat .there must be a balance between the energy dissipated and the energy transferred through the intermediate scales , otherwise energy would build up and the turbulence would not remain statistically steady .thus the dissipation rate , , controls how small the smallest length scales need to be to successfully remove the energy passed down , giving the kolmogorov scale .when the reynolds number is sufficiently large , there exists a range of intermediate scales where the energy flux entering a particular length scale from ones larger than it is the same as that leaving it to smaller ones and is thus not dependent on the wavenumber .this is the inertial range. introduced by fns in their ir study of randomly stirred flows takes you to the fixed point at .iterative averaging from takes you to the non - gaussian fixed point , which marks the beginning of a line of fixed points along .,scaledwidth=65.0% ] the energy spectrum and a summary of the various ranges of it are presented in fig .[ fig : energy_spectrum ] ( based on a similar figure in ) . in the rg approach , the smallest length scales ( largest wavenumber scales ) are removed and an effective theory is obtained from the remaining scales .there is high- and low - energy asymptotic freedom since the renormalized coupling becomes weak in both limits .the dynamic rg method used by fns introduces a momentum cutoff well below the dissipation momentum scale , below the inertial range even ( see figure [ fig : energy_spectrum ] ) , in the production range and removes momentum scales towards . 
as such, this method can only ever account for the behaviour on the largest length scales .the production range is highly dependent on the method of energy input , and so it is obvious that the properties of the lowest modes will also share this dependence .taking the forcing to be gaussian then allows gaussian perturbation theory to be used , since the lowest order is simply the response to this forcing . since the inertial range is highly non - gaussian , we do not expect to study the inertial region with this analysis .an alternative rg scheme called iterative averaging ( mccomb ) instead takes a cutoff and removes successive shells of wavenumbers down to a non - gaussian fixed point , which marks the beginning of a line of fixed points following through the inertial region ( see figure [ fig : energy_spectrum ] ) .the asymptotic nature of this method therefore can not tell us anything about the forcing spectrum , and is only dependent on the rate at which energy is given to the system .no assumptions about gaussian behaviour are made . using the energy spectrum ( figure [ fig : energy_spectrum ] ) ,we see the location of the ir procedure of fns / yo and how it is inapplicable for the calculation of inertial range statistics . put simply , it does not have access to the inertial range , just as iterative averaging does not have access to the production range .this is discussed further in the conclusions , sec .[ sec : conclusion ] .the motion of an incompressible newtonian fluid in -spatial dimensions , subject to stochastic forcing , , is governed by the navier - stokes equation ( nse ) which , in configuration space , is where is the velocity field , is the pressure field , is the density of the fluid and is the kinematic viscosity .the index and there is an implied summation over repeated indices .we consider an isotropic , homogeneous fluid and , using the fourier transform defined by the nse may be expressed in fourier - space as where the incompressibility condition ( ) has been used to solve for the pressure field in terms of the velocity field . in equation ( [ eq : def : nse_fourier ] ) , we have also introduced ( ) as a book - keeping parameter to the non - linear term and the vertex and projection operators , respectively , are defined as \ , \\ p_{\alpha\gamma}({\bm{k } } ) & = \delta_{\alpha\gamma } - \frac{k_\alpha k_\gamma}{k^2}\ , \end{split}\ ] ] and contain the contribution from the pressure field .the integral over , could be trivially done to follow fns , yo and wang & wu ; however , we leave it in for comparison with nandy s calculation .it is common to specify the forcing term through its autocorrelations where is the forcing spectrum and the presence of the projection operator guarantees that the forcing is solenoidal ( and hence maintains the incompressibility of the velocity field ) .since the rhs is real and symmetric under , the configuration - space correlation is also real .following fns , we impose a hard uv cut - off , where is the dissipation wavenumber . 
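since several of the displayed definitions in this section did not survive transcription , it may help to recall the forms these objects conventionally take in fns / yo - type calculations . the expressions below are the standard ones and only a reconstruction , so the exact prefactor and sign conventions of the present treatment may differ :
\begin{align}
p_{\alpha\beta}({\bm{k}}) &= \delta_{\alpha\beta} - \frac{k_\alpha k_\beta}{k^2} , &
m_{\alpha\beta\gamma}({\bm{k}}) &= \frac{1}{2i}\left[ k_\beta\, p_{\alpha\gamma}({\bm{k}}) + k_\gamma\, p_{\alpha\beta}({\bm{k}}) \right] , \nonumber \\
\langle f_\alpha(\hat{k})\, f_\beta(\hat{k}') \rangle &= 2\, w(k)\, p_{\alpha\beta}({\bm{k}})\, (2\pi)^{d+1}\, \delta(\hat{k}+\hat{k}') , &
g_0(\hat{k}) &= \frac{1}{-i\omega + \nu_0 k^2} . \nonumber
\end{align}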
with this choice ofcut - off the theory only accounts for the largest scale behaviour ( and therefore should not reproduce results for inertial range turbulence ) .this cut - off was later relaxed to by yo , although the rest of their renormalization procedure followed fns .the velocity field can then be decomposed into its high and low frequency modes , introducing a more compact notation as ( and are often also expressed as and ) , with such that and , allowing the nse to be rewritten for the component fields : \\ \delta({\hat{j}}+{\hat{p}}-{\hat{k}})\ , \end{split}\ ] ] and similarly for the high frequency modes , . the filtered vertex operator is understood to restrict in the non - linear term .this will later lead to an additional constraint on the loop integral .this constraint is neglected by many authors .together with a perturbation expansion and the zero - order propagator , it is possible to solve for in terms of using powers of , which may be substituted back into equation ( [ eq : nse_low ] ) .performing a filtered - averaging procedure , under which : 1 . low frequency components are statistically independent of high frequency componentslow frequency components invariant under averaging : and so .+ ( this can be shown more rigorously using a _ conditional average _ , and discussed in . )stirring forces are gaussian with zero mean : , ; and using equation ( [ eq : def : stirring_forces ] ) , we obtain + \lambda_0 m^{-}_{\alpha\beta\gamma}({\bm{k}})\int^{-}{d\hat{j}\ } \int^{-}{d\hat{p}\ } u_\beta^{{-}}({\hat{j } } ) u_\gamma^{{-}}({\hat{p } } ) \delta({\hat{p}}-{\hat{k}}+{\hat{j } } ) + \sigma^{-}_{\alpha}({\hat{k } } ) \ , \ ] ] where and we have used -functions , to explicitly control the shell of integration , so the momentum integrals are now . the induced random force , compensates for the effect of forcing on the eliminated modes see section [ sec : renorm_cond ] for more details . note that in equation ( [ eq : post_av ] ) we have , following and ,neglected the velocity triple non - linearity ( and thus all higher non - linearities which are generated by it ) .eyink showed that this operator is not irrelevant but marginal by power counting ( see appendix [ app : operator_scaling ] ) ; however , as noted in , this choice merely indicates the order of approximation and does nt require justification . in any case , these higher - order operators become irrelevant as . for feynman rules . from the factor ( see ), comes from exchanging which leg from the left vertex connects to the noise correlation , and the other from the thick line instead being incident on the lhs . ] multiplying both sides of equation ( [ eq : post_av ] ) by and neglecting the triple non - linearity , this expression can be found from the graph given in figure [ fig : v_graph ] using the rules in figure [ fig : rules ] and the form in equation ( [ eq : sigma_alpha ] ) .it should be noted that the symmetry factor of the graph is 4 ( figure 9 in ) . from equation ( [ eq : sigma_alpha ] ) , we may perform the frequency integrals in either order to give which, along with the definition of and , may be compared to ( 2.10 ) in or ( 4 ) in .we begin this section with the motivation for calling this a self - energy integral .the term has been borrowed from high - energy physics , and it represents the field itself modifying the potential it experiences . 
in high - energy physics , the renormalized or dressed propagator may be written using the dyson equation ( equation ( 27 ) ) as , where represents the self energy operator . in our case, we are instead writing , where the structure can be seen from the graph in figure [ fig : v_graph ] or equation ( [ eq : all_agree ] ) once the integral over has been trivially done . as can be seen in equation ( [ eq : all_agree ] ) ,the constraints on the integral are provided by the product of functions .we first show how this can be expanded before verifying that at the substitution causes two compensating corrections , and hence to there is no correction . following this ,corrections to the calculations by wang & wu and nandy are evaluated and their contribution to the final result accounted for .we perform the integral over so our product of functions becomes . the second constraint , , is sometimes ignored ( see for example equation ( 4 ) in ) and this is a source of error in these calculations .with the definition from equation ( [ eq : def : theta ] ) where , we taylor expand the latter about and our product becomes \right ) + { \mathcal o}(k^2)\ , \ ] ] see appendix [ app : theta_exp ] for details .we see that the additional constraint has introduced a first order correction to the constraint on .further , the presence of the -functions show that these contributions are evaluated on the boundaries .this correction is absent from the work of wang & wu , teodorovich and nandy as they ignore this constraint .we shall see later that , from a diagrammatic point of view , this is equivalent to ensuring that all internal lines have momenta in the eliminated band . and label the momenta in the directions parallel and perpendicular to that of the shift , .( left ) the constraint only , as used by wang & wu ; ( right ) as used by fns .white ( ) shows the position of the unshifted shell , with light grey shading ( --- ) showing the shifted shell(s ) .small tick marks indicate the centre for the shifted shell(s ) at .the dark grey highlights the overlap of two shifted shells . the green point ( in the top - left quadrant ) is a random momentum , , which lies within the resultant shell , while the red cross ( lower - right quadrant ) shows .clearly , in the left case the shell is not symmetric under , and as such identities requiring a symmetric shell are invalid . by correctly accounting for the additional momentum constraint , we are led instead to a shell like the one to the right , which _ is _ symmetric . in the right figure , where is shown by the ( horizontal ) height of the orange - filled triangles towards the left ,we see that as ( the triangles shrink to their vertical baselines ) the overlap increases and the resultant shifted shell is well approximated by the original , unshifted shell ( ) , as found in equation ( [ eq : shell_shift]).,scaledwidth=85.0% ] we now turn our attention to the substitution made by fns and yo , under which our constraints become taylor expansion of these high - pass filters is now + { \mathcalo}(k^2 ) \nonumber \\ & = & \theta^{+}({\bm{j } } ) \pm x({\bm{k}},{\bm{j } } ) + { \mathcal o}(k^2 ) \ , \end{aligned}\ ] ] and the product becomes the contributions at cancel one another exactly , and there is no correction to the simple constraint on . 
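written out, the cancellation just described works as follows (schematically, with $x({\bm{k}},{\bm{j}})$ the first-order boundary term of equation ([eq:def:theta]), itself of order $k$):
\[
\theta^{+}\big(\lvert\tfrac{{\bm{k}}}{2}-{\bm{j}}\rvert\big)\,
\theta^{+}\big(\lvert{\bm{j}}+\tfrac{{\bm{k}}}{2}\rvert\big)
\simeq
\big[\theta^{+}({\bm{j}}) - x({\bm{k}},{\bm{j}})\big]\,
\big[\theta^{+}({\bm{j}}) + x({\bm{k}},{\bm{j}})\big]
= \theta^{+}({\bm{j}}) + {\mathcal o}(k^{2})\,,
\]
using $[\theta^{+}({\bm{j}})]^{2}=\theta^{+}({\bm{j}})$: the two first-order boundary contributions cancel and only the unshifted constraint on ${\bm{j}}$ survives, which is why the change of variables is harmless once the second step function is kept.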
without the constraint , the substitution would have led to just , which clearly does introduce a first order correction .these points can be seen in figure [ fig : shell_shift ] .using this , we go on to find the result of yakhot & orszag ( a generalization of the fns result ) in the limit , where wang & wu were unsatisfied with the substitution used by fns and yo .this is because they do not impose the condition that on the self - energy integral , and on the face of things the substitution shifts the integration domain ( see figure [ fig : shell_shift ] ) .the authors then continue without making any substitution but simply taylor expanding the denominator and expanding the vertex operator to \ , \end{aligned}\ ] ] where the operator is defined for convenience .we now expand the product of functions as in equation ( [ eq : ww_theta_exp ] ) but note that only the last term in the square brackets above is not already of .as such , it is the _ only _ term that can generate a correction . then splits into where \ ] ] is the contribution used by wang & wu without imposing the additional constraint , and the correction \ ] ] includes the additional boundary terms .the first contribution above leads to the wang & wu result , that where we now evaluate the first order correction given by equation ( [ eq : ww_correction ] ) , using the standard convention that ( see , for example , ) , \big[d^{-\epsilon } - \lambda^{-\epsilon}\big ] \theta(0 ) \nonumber \\ & = & \frac{w_0\lambda_0 ^ 2}{\nu^2_0 } \frac{s_d}{(2\pi)^d } \frac{1}{2d(d+2 ) } \left(\frac{e^{\epsilon\ell}-1}{\lambda^{\epsilon}}\right ) k^2 u^{-}_\alpha({\bm{k}},0 ) \ , \end{aligned}\ ] ] and so the correction to the viscosity increment found by wang & wu is if this contribution is added to the result for the renormalized viscosity increment found by wang & wu in equation ( [ eq : ww_result ] ) , we find which is exactly the result obtained by yo , see equation ( [ eq : yo_result ] ) .hence we have shown that a more careful consideration of the region of integration used by wang & wu instead leads to the result found by fns and then later yo . the approach taken by teodorovich uses a different method for evaluating the angular part of the self - energy integral .however , the author misses the same constraint and thus arrives at the result as wang & wu . in the paper by nandy , the author presents an argument based on symmetrising the self - energy integral . 
referring to equation ( [ eq : all_agree ] ), he points out that there is no reason to do the integral first , and that the result should be an average of the two .performing the integrals first , gives taylor expanding the function and the denominator , and using the definition and properties of the vertex and projection operators , to leads to \ .\end{split}\ ] ] again , we see that all the terms in the square brackets are already except the last one , and so this is the only term which generates a correction .once again decomposing the contribution calculated by nandy is \ , \end{split}\ ] ] and the correction generated by expanding the product of functions is given by = - \delta \hat{\sigma}^{{-}}_\alpha \ .\ ] ] we see that , with the relabelling , the correction is exactly the same as equation ( [ eq : ww_correction ] ) only with the opposite sign .therefore , we see that = \tfrac{1}{2}\big [ \hat{\sigma}^{{-}}_\alpha + \bar{\sigma}^{{-}}_\alpha \big ] = \tfrac{1}{2}\big [ ( \hat{\sigma}^{{-}}_\alpha + \delta \hat{\sigma}^{{-}}_\alpha ) + ( \bar{\sigma}^{{-}}_\alpha + \delta \bar{\sigma}^{{-}}_\alpha ) \big ] = \sigma^{{-}}_\alpha \ , \ ] ] which is why this symmetrization produced the correct result .in fact , evaluation of leads to the result found by nandy for performing the integrals in this order and the correction is combining these results we again find the result of yakhot & orszag , equation ( [ eq : yo_result ] ) , showing that regardless of which integral is performed first we obtain the same result and so a symmetrization is not necessary . in a completely different approach , sukoriansky _ used a self - substitution method to solve for the low - frequency modes and claim to evaluate the cross - term exactly .this method does not generate the cubic linearity , instead it creates a contribution of the same form as wang & wu equation ( [ eq : ww_current ] ) but with the condition .combined , this then covers the whole domain , and the authors drop any conditions on .however , due to the upper momentum cutoff .this can then be taylor expanded for small , and leads to a contribution from the upper boundary , neglected in their analysis .in fact , this correction places their result somewhat between that found by yakhot & orszag and wang & wu , \\ & = \frac{w_0 \lambda^2}{\nu^2_0 } \left [ \frac{a_d^\star(\lambda e^{-\ell})^{-\epsilon } - a_d(\epsilon)\lambda^{-\epsilon}}{\epsilon}\right ] \ , \end{split}\ ] ] where it is missing the contribution from the lower boundary. 
however , this self - substitution is not the same as solving a dynamical equation for the low - frequency modes and substituting for the high - frequency components .this method is fundamentally different from the standard rg procedure , and its result agreeing with neither yo or ww is further evidence that it is another approximation entirely .in the paper by fns , the authors use two different scaling conditions ( see appendix [ app : operator_scaling ] ) when analysing their models due to the contribution of the induced force to the renormalization .we first mention the results used by yakhot & orszag for comparison ( see appendix [ app : operator_scaling ] ) , where is the reduced coupling at the non - trivial fixed point .* fns model a * ( ) : for this model the authors show using diagrams ( see figure 1 of ) how the propagator , force autocorrelation ( shown here in figure [ fig : w_graph ] ) and vertex are renormalized .they conclude that and are renormalized the same way in their equations ( 3.1011 ) .this condition implies fixing the mean dissipation rate rather than , and is then enforced under rescaling ( see appendix [ app : operator_scaling ] ) by choosing , which does not agree with above ( first relation ) used by yo with . at this point ,fns invoke galilean invariance ( gi ) to impose the condition that the vertex is not renormalized , such that to all orders in perturbation theory ( in the limit of small external momenta , appendix b of ) . while this is the case at , and as such does not invalidate the fns theory , in general the consequences of the symmetry are trivial and do not lead to a condition on the vertex .further discussion is given in the conclusions , sec .[ sec : conclusion ] . taking the condition to preserve galilean invariance, they find the non - trivial stable fixed point ( when ) . * fns model b * ( ) : in this case , the one - loop graph in figure [ fig : w_graph ] is claimed to be and so can not contribute to the constant part of the force autocorrelation .this term is then irrelevant and the force is rescaled accordingly .this requires , which is the same condition found by yo with . ensuring that galilean invariance is satisfied , they have , as do yo .this difference in scaling conditions leads to different differential equations and hence different solutions for the reduced coupling and the viscosity , depending on whether the noise is allowed to be renormalized or not . in the field - theoretic approach by ronis , the force is also allowed to be renormalized , and the author comments that yo ignore this in their analysis .in fact , they restricted their work to to avoid this issue .this discrepancy only really applies to when the noise coefficient is renormalized , although could lead to complications for as the induced force always contributes as to the autocorrelation and becomes the leading order as . in their paper , yo state that `` in the limit this [ induced ] force is negligible in comparison with original forcing with '' , and present an argument for neglecting it as equation ( 3.13 ) . for the case it is sub - leading and thus safely neglected .this highlights another potential problem with calculating inertial range statistics , since it is only sub - leading as . 
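the power counting behind these statements can be made explicit: with a bare forcing spectrum of the form $w(k)=w_0 k^{-y}$, the induced contribution to the force autocorrelation carries a factor $k^{2}$ from the two vertex operators, so as $k\rightarrow 0$ it is sub-leading whenever $y>-2$, of the same order as the bare forcing when the input spectrum itself scales as $k^{2}$ (the case in which the noise coefficient is genuinely renormalized), and dominant for $y<-2$. this is a schematic restatement of the comparison made above, not an additional result.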
in this sectionwe will show how the induced force leads exactly to the graph in figure [ fig : w_graph ] .we then evaluate the graph to analyse the contribution to the renormalization of the force .for this , consider the form of the induced force shown in figure [ fig : induced ] . under averaging , we see that the graph forms a closed loop and is due to the vertex operator , and hence . the new random force is invariant under the filtered - averaging procedure , and has autocorrelation the new contribution due to the induced force is written as equation ( [ eq : app : delta_f_cor ] ) , since the forcing is taken to be gaussian , we may split the fourth - order moment and , using the definition of the force correlation equation ( [ eq : def : stirring_forces ] ) , we see that the first contribution leads to two disconnected loops ( which do not contribute to the force renormalization ) , whereas the other two both generate graphs like figure [ fig : w_graph ] and appear to contribute towards the renormalization of . ) .note that the two momenta , lie in the eliminated shell . ] for feynman rules . ] using the rules given in figure [ fig : rules ] , we write an analytic form for the diagram in figure [ fig : w_graph ] as equation ( [ eq : graph_analytic ] ) , the factor is due to symmetry of the graph ( figure 2 in ) .this may be compared to the correlation of the induced force given by equation ( [ eq : app : delta_f_cor ] ) which , along with the requirement that momenta of all internal lines in equation ( [ eq : graph_analytic ] ) are in the eliminated shell , agree exactly .an outline of the evaluation of this correction to leading order is given in appendix [ sec : app : eval_noise ] .as a result of our analysis , we find \ , \ ] ] where we see for the equilibrium case that the correction may be taken as an multiplicative renormalization to and write with \ .\ ] ] this may be compared to equation ( 3.11 ) of fns and ( 3.4b ) of ronis ( with ) .we note that both authors find the noise coefficient and viscosity to be renormalized with the same prefactor , what we have defined as . however , this prefactor only coincides with found by fns and yo for ( ) , and the analysis is only valid for the equilibrium case ( the work by ronis is an expansion about ) .the prefactor was calculated to leading order with no change of variables in a similar fashion to wang & wu , and agrees with found by fns and that by yo with .if the prefactor found by wang & wu and others , which is -independent , were the true expression , we should have recovered it from this analysis also .instead , it only agrees with when ( the critical dimension for ( where and ) ) . taking the induced contribution as an multiplicative renormalization only when , whereas only .we feel this supports our argument that the correct expression for the prefactor is the -dependent result found by yo .as noted by ronis , the pole structure leading to logarithmic divergence in the noise renormalization more generally occurs when .however for the case , we recover the same pole in found by fns and in the viscosity renormalization .since noise renormalization is only meaningful in this case , the presentation of equation ( 3.4b ) in as a general result seems misleading . in summary , we have calculated the renormalized noise coefficient to one - loop and find the prefactor to agree with fns as well as yo when setting in their result . 
while noise renormalization was not considered by wang & wu , we have found their -free result , , to only agree with in 2-dimensions .for , this induced forced correlation becomes sub - leading and is ignored , as assumed in the yo analysis .when , the induced contribution does not renormalize the noise coefficient but will be the leading term as . in this case , it is not clear how to interpret the validity of the results obtained , since the forcing appears on large scales to be dominated by the order contribution , making the viscosity calculation order , _ i.e. _ two - loop , which has not been done here .= 2ex .a summary of the prefactors for the viscosity increment , noise renormalization , and the -pole structure found in the various analyses considered in this paper .these expressions are valid for all , with the exception of fns models a ( ) and b ( ) .` t ' represents teodorovich and ` n ' nandy . and hence . by pole structure we mean the -dependence of the denominator for the induced noise correlation ( see equations ( [ eq : yo_result ] ) and ( [ eq : noise_result ] ) , for example ) .our analysis agrees with fns and yo .the work of ronis appears to be for general , but the viscosity is only in agreement for . [ cols="<,^,^,^,^,^,^",options="header " , ] a summary of our results and a comparison with other authors is presented in table [ tbl : summary ] .we conclude that the analysis of fns does not suffer from a shifted domain of integration in the self - energy integral which is evaluated due to the constraint , neglected by other authors . using functions to control the integration domain ,we have shown that the corrections cancel exactly at first order in when the change of variables is made .we then showed that this ignored constraint leads to a correction in the wang & wu- and nandy - style calculations which exactly reproduces the result found by yo .the noise renormalization for the case was then shown , using a substitution - free method similar to wang & wu , to lead to a prefactor compatible with yo for all and only compatible with wang & wu for , which we feel supports our claim as to the validity of the fns and yo results .that said , some comments should be made on the application of this method to calculating inertial range statistics , which may not be so well justified . despite its applicability only on the largest of length scales , yakhot & orszag use the expressions obtained with this infra - red procedure to calculate inertial range properties , such as the kolmogorov constant .to do this , they use a set of assumptions that they term the _ correspondence principle_. briefly , the correspondence principle states that an unforced system which started from some initial conditions with a developed inertial range is statistically equivalent to a system forced in such a way as to generate the same scaling exponents . 
in particularif forcing is introduced to generate the scaling exponents at low , this artificially generated `` inertial range '' can then be used to calculate values for various inertial range parameters using the properties of universality .there is an implicit assumption that , as long as the scaling exponents match , all other quantities will also match .this may be the reason that yo raise the cutoff out of the production range to ( see above equation ( 2.2 ) in ) so that the renormalization passes through the inertial range , whereas fns explicitly consider ( final paragraph of their section ii.a ) .yo find that when the noise coefficient , , has the dimensions of the dissipation rate , , and they take ( with constant ) .they can then obtain a kolmogorov scaling region when is used , but also require in the prefactor in the same equation .this has been unsatisfactory for many authors , and appears to favour the -free result found by wang & wu , as then alone reproduces the famous result .however , we have shown why the -free result is incorrect .there are still a number of technical difficulties associated with taking and generating a spectrum : * the wilson - style expansion is valid only for small , and there is no evidence that results will be valid at .the neglected cubic and higher - order non - linear terms generated by iterating this procedure may not be irrelevant , and there is no estimate of the accumulation of error even for , let alone . in the review by smith & woodruff , they discuss the only justification for the validity of being that it leads to good agreement with inertial range constants , and describe it as `` intriguing and difficult to interpret '' .they also present an argument for yo s use of in the prefactor , it being required for a self - consistent asymptotic expansion at each iteration step .* the ir behaviour as is dominated by the fixed point which , for , is at .to lowest order in , this is then evaluated with .however , is no longer small , nor is . in 3-dimensions with ,this fixed point is at to leading order in , or when evaluated to all orders . *as shown by figure [ fig : energy_spectrum ] , the asymptotic nature of this renormalization scheme taking us to the infra - red means we do nt enter the inertial range , and are always sensitive to the forcing spectrum . *the forcing spectrum required to obtain is divergent as ( requires , so ) , as is the energy spectrum itself . as shown by mccomb , ensuring that there is a balance between energy input at large length scales and energy dissipated at small ( this is statistically stationary turbulence ) we see that the range of forced wavenumbers predicted by their analysis has , where and are , respectively , the upper and lower bounding wavenumbers of the input range .the energy input is also logarithmically divergent as or . *the condition of galilean invariance ( gi ) used by fns and adopted by yo to enforce the non - renormalization of the vertex at all orders is actually only valid at . 
in general , the consequences of gi are trivial and provide no constraint on the vertex .this is supported by recent numerical results from a kpz model on a discretized lattice with a broken gi symmetry , which have found the same critical exponents as the actual kpz model ( which does possess gi ) , even though gi has been explicitly violated .this questions the connection between gi and the scaling relations associated with the critical exponents .as such , care must be taken when extending this theory to .this introduces another issue for the study of inertial range properties using the correspondence principle , as can not be chosen to lie in the inertial range without the vertex being renormalized . * the assumed gaussian lowest - order behaviour of the fluid is only valid at the smallest wavenumbers when subject to gaussian forcing , since the response of the system is then also gaussian .however , this assumption can not be translated to the inertial range , which should be insensitive to the details of the energy input and is inherently non - gaussian .the need to use two different values for in the same formula to estimate inertial range properties is therefore not the only failure of this scheme .the solution for the renormalized viscosity at the largest scales ( ) can be found to behave as where is the new cut off . with the assumption , interestingly this does have the same form as that found by other methods ( e.g. ) , that the viscosity is proportional to and , with , the cut off .however , there is an important difference : the cut off is going to zero for this expression to hold , which is not the location of the inertial range , unlike the iterative averaging approach by mccomb .smith & woodruff note that this is not dependent on the dissipation range quantities and , as inertial range coefficients should be .but it is still dependent on the forcing spectrum through , which it should not be .we thank david mccomb for suggesting this project and for his helpful advice during it .we were funded by stfc .although rescaling the variables after performing an iteration of the renormalization procedure outlined above is not performed in the calculation of the renormalized viscosity ( and thus it can be argued not to be an rg procedure ) , it is still useful to consider how the rescaling would affect the equations of motion . using a scaling factor , ,the spatial coordinates transform as and ( where the unprimed variables are the original scale ) , with , and so , with . in yo , , and .equation ( [ eq : def : nse_fourier ] ) then transforms under the scaling to and so we find using equation ( [ eq : force ] ) with the definition of the force autocorrelations equation ( [ eq : def : stirring_forces ] ) and the scaling for and , we find equations ( [ eq : nu])([eq : w_0 ] ) agree with ( 2.28)(2.33 ) of . due to galilean invariance , as equation ( [ eq : lambda ] ) is forced to give the condition that . for ,the elimination of scales should not affect as there is no multiplicative renormalization , a condition that must also be preserved under scaling to find .yo note ( from ( 2.34 ) of ) that the renormalized viscosity _ at the fixed point _ is -independent if . as noted by fns and discussed in section [ sec : renorm_cond ] , for the case we must consider the renormalization of and instead require that and be renormalized in the same way , _i.e. _ .this scaling condition is not the same as that for , and leads to a different solution . 
under the yo prescription, the triple non - linearity gives , using the expression , this should be compared to equation ( 2.45 ) in , which reads they comment that for ( ) the operator is irrelevant , and marginal when ( ) .however , we see that their result requires , which only agrees with the above expression for ( ensuring is not altered ) when .therefore , they have already used to obtain this result . if we do not specify but do require that ( so that the viscosity at the fixed point is -independent ) , we see from equation ( [ eq : trip_nonlin ] ) that and the operator is not irrelevant but marginal .( this could also have been seen by requiring in equation ( [ eq : trip_nonlin ] ) , and we see that if the vertex is not renormalized the triple moment can not be irrelevant . ) this is discussed in a paper by eyink .attempts to retain the effects of the triple non - linearity on the viscosity increment are analysed in .we here describe the procedure for taylor expanding a -function .the high - band filter is defined as where the first restricts us to and the second to .the -function product of consideration here is and we taylor expand as : our expansion is then + { \mathcal o}(k^2)\ .\ ] ]we now evaluate the correlation of the induced force to leading order using a more compact notation . starting from equations ( [ eq : app : delta_f_cor][eq : split_4th ] ), we note that our integrals here are unconstrained and the shell of integration is controlled by the -functions . since the substitution preserves the product , we may use it , along with the property of the vertex operator and index relabelling for the second term in the square brackets , to combine the two contributions and write which we see is exactly equation ( [ eq : graph_analytic ] ) .this reveals that the symmetry factor of 2 associated to the graph is due to exchanging legs on the vertex . in to thiswe substitute the definition of the force autocorrelation , then integration over is trivially done using the -functions obtained to give the constraint enforced by the remaining -function is then used to restrict , along with the property , resulting in the frequency integral is then performed , closing the contour in the upper - halfplane and collecting the residue from two poles and , with the result \\ & \qquad \times \left[\frac{1}{i\omega + \nu_0(q^2 - \lvert{\bm{k}}-{\bm{q}}\rvert^2)}\right ] \\\stackrel{\omega\rightarrow 0}{= } & \frac{\pi}{\nu_0 ^ 3}\left[\frac{1}{q^2 \lvert{\bm{k}}-{\bm{q}}\rvert^2 \left(q^2 + \lvert{\bm{k}}-{\bm{q}}\rvert^2 \right)}\right ] \ .\end{split}\ ] ] the limit offers a huge simplification to the result .this is inserted in to equation ( [ eq : app : noise_1 ] ) and the integrand is expanded to leading order in as note that there is a power of associated to each of the vertex operators , hence the leading contribution will _ always _ go as . expanding the function we do not generate corrections as we are working to zero - order in in the integrand and the corrections are . expanding the projection operators and performing the angular integrals we find \\\int dq\ q^{-2(y+3)+d-1 } \theta^{+}_{{\bm{q}}}\ , \end{split}\ ] ] where we expand the vertex operators , do the remaining integral and perform contractions to obtain \left(\frac{e^{\ell(\epsilon+y+2 ) } - 1}{(\epsilon+y+2)\lambda_0^{\epsilon+y+2}}\right ) \ , \end{split}\ ] ] which we rearrange to our final result
|
dynamic renormalization group (rg) methods were originally used by forster, nelson and stephen (fns) to study the large-scale behaviour of randomly stirred, incompressible fluids governed by the navier-stokes equations. similar calculations using a variety of methods have been performed since, but have led to a discrepancy in results. in this paper, we carefully re-examine in d dimensions the approaches used to calculate the renormalized viscosity increment and, by including an additional constraint which is neglected in many procedures, conclude that the original result of fns is correct. by explicitly using step functions to control the domain of integration, we calculate a non-zero correction caused by boundary terms which cannot be ignored. we then go on to analyze how the noise renormalization, absent in many approaches, contributes a correction to the force autocorrelation, and show conditions for this to be taken as a renormalization of the noise coefficient. following this, we discuss the applicability of this rg procedure to the calculation of the inertial-range properties of fluid turbulence. _in press, physical review e, 2010._
|
underlay cognitive transmission represents one of the most promising spectrum sharing techniques , where secondary ( unlicensed ) users utilize the spectrum resources of another primary ( licensed ) service . to this end, the transmission power of the cognitive system is limited , such that its interference onto the primary users remains below prescribed tolerable levels . however , this dictated constraint dramatically affects the coverage and/or capacity of the secondary communication .such a condition can be effectively counteracted with the aid of relayed transmission . on the other hand ,using multiple antennas at each node ( and , thus , benefiting from the emerged spatial diversity of each transmitted stream ) is another effective approach to enhance the performance of a cognitive system - . due to the complementary benefits of relayed transmission and spatial diversity gain , the performance analysis of these schemes , under the cognitive transmission regime , is of prime interest lately ( e.g. ,see and references therein ) . all the previous research works assumed ideal hardware at the transmitter and/or receiver end , where the scenario of impaired transceivers ( namely , non - ideal hardware ) was neglectednevertheless , this condition represents a rather overoptimistic scenario for practical applications .more specifically , the hardware gear of wireless transceivers may be subject to impairments , such as i / q imbalance , phase noise , and high power amplifier nonlinearities . yet, very few research works have analytically investigated the impact of hardware impairments .specifically , the outage probability of one - way and two - way dual - hop relayed transmission systems is analytically expressed , in the case of single - antenna transceivers and non - cognitive environments .nevertheless , a corresponding analysis when multiple antennas are employed at each node and/or under a cognitive transmission regime lacks from the open technical literature so far . capitalizing on these observations , in this paper ,the performance of a dual - hop cognitive system with multiple - antennas and hardware impairments at the transceiver of each hop is analytically investigated .particularly , two popular spatial diversity techniques are adopted , namely , transmit antenna selection with maximum ratio combining ( tas / mrc ) , or tas with selection combining ( tas / sc ) are established in each hop .it is noteworthy that with tas , the number of rf chains required is equal to the number of antennas selected for communication , a rather cost - efficient solution for various applications .most importantly , even with lower complexity , tas gives full diversity .hence , it is recently of great importance due to its low feedback demand - , whereas it plays a key role to the uplink of 4 g networks .also , the decode - and - forward ( df ) protocol is used for the relayed transmission .new closed - form expressions are derived with respect to the outage probability of the system , which generalize some previously reported results .in addition , simplified asymptotic expressions of the outage probability , in the high signal transmission regime are also obtained , revealing the diversity and array order of the system , the effectiveness of the balance on the number of transmit / receive antennas , and the impact of hardware impairments to the end - to - end ( e2e ) communication . 
specifically , it is demonstrated that the diversity order and the performance difference between tas / mrc and tas / sc remain unaffected from hardware impairments .consider an underlay secondary system , where a source node ( s ) communicates with a destination node ( d ) via an intermediate relay node ( r ) , as illustrated in fig .[ fig1 ] . the direct link between s and d is assumed to be absent due to strong propagation attenuation and keeping in mind that the transmission power of the underlay system is , in principle , maintained quite low .let , , and denote the number of antennas at the primary receivers , s , r and d , respectively .the antennas of each node are sufficiently separated from one another ( with respect to the transmission wavelength ) to prevent any channel fading correlation .also , assume identical rayleigh faded channels for the antennas of each node and not necessarily identical channels from one node to another .furthermore , the df transmission protocol is adopted for the dual - hop relayed transmission , which has shown very good results in terms of error rates and outage performance . for mathematical tractability ,let for the 1st hop , while for the 2nd hop . in the casewhen signal distortion due to hardware impairments is present and tas is established at the transmitter side , the input - output relation of the received signal at the transmission hop is given by .let denote vectors for the received signal , the channel fading , the distortion noise to the received signal , and the additive white gaussian noise ( awgn ) of the hop , respectively .moreover , , and ( scalar values ) correspond to the transmission power , the transmitted signal and the distortion noise to the transmitted signal of the hop , respectively .notice that , and , . also , and represent certain parameters describing the underlying distortion noise at the transmitter and receiver , respectively , is the power spectral density of awgn , while stands for the -sized identity matrix .further , assume that the hardware quality of each antenna is identical for the same device , i.e. , there is an equal distortion noise variance per antenna for each node , while this variance is perfectly known at the receiver side ( e.g. , through pilot or feedback signaling ) . in principle ,channel state information ( csi ) of the links between the primary and secondary nodes can be obtained through a feedback channel from the primary service or through a band manager that mediates the exchange of information between the primary and secondary networks .hence , the received signal - to - noise - and - distortion ratio ( sndr ) at the hop reads as ( * ? ? ?* eq . ( 6 ) ) where and is the channel fading gain of the desired signal , which differs according to the adopted reception strategy ( which is analyzed in the next section ) .notice that can not exceed the predefined interfering power threshold to the primary receiver ( the so - called interference temperature ) , such that , where , and denote the aggregated channel fading gain between the secondary transmitter ( s or r , depending on the hop ) and the primary receiver ( i.e. , ) , the tolerable interfering power threshold and the maximum achievable transmission power , respectively .hence , ( [ sndr ] ) becomes start by defining the cumulative distribution function ( cdf ) of sndr at the hop , namely . based on ( [ sndrr ] ), we have that , yielding where and are the euler s gamma function ( * ? ? ?* eq . 
( 8.310.1 ) ) and the lower incomplete gamma function ( * ? ? ?* eq . ( 8.350.1 ) ) , respectively .moreover , is the probability density function ( pdf ) of with representing the average channel fading gain of , is the cdf of , and ] .it is straightforward to show that , where is the cdf of sndr , which is obtained as since $ ] . in the casewhen all the nodes are equipped with single - antennas , and reduce to the classical exponential cdf / pdf .thus , keeping in mind that can be alternatively derived as , outage probability simplifies to .\label{sndrfinal1}\end{aligned}\ ] ] notice that when there is no transmission power constraint ( i.e. , ) , ( [ sndrfinal1 ] ) reduces to ( * ? ?( 31 ) ) .the previously derived expressions are exact ; albeit they admit a more amenable formulation in the high sndr regime . in the asymptotically high sndr regime ,while utilizing ( * ? ? ?* eq . ( 3.381.4 ) ) , ( [ 4b ] ) becomes ( for both tas / mrc and tas / sc ) yielding in this case , both the maximum available transmission power and the fading gain of the desired channel are asymptotically high ( e.g. , very low propagation attenuation ) .recognizing that as ( * ? ? ?* eq . ( 8.354.2 ) ) , ( [ 4b ] ) is efficiently approached by following similar lines of reasoning as in the tas / mrc scenario , we have in the above cases , outage probability of the sndr is approximated by . since for both the tas / mrc and tas / sc scenarios ,it is straightforward to show that the diversity order of the system remains , it is maximized when equal number of antennas are used for transmission and reception , and it is not affected by the impact of hardware impairments . the performance difference between the two diversity techniques is highlighted in the underlying array order ( or coding gain ) . for identical channel fading statistics of each hop , we have that , , ( thus , and ) , and .thus , the array order of the dual - hop system can be expressed as and the impact of hardware impairments is manifested in the array order of the system , for both diversity scenarios , as expected .interestingly , the performance gain of the array order for tas / mrc as compared with tas / sc is , thereby it remains unaffected from hardware impairments ( i.e. , independent from ) ., , , and .,width=480 ] ) .also , , , and .,width=480 ] , and and when ( i.e. , ) or ( i.e. , ).,width=480 ] in this section , the theoretical results are presented and compared with monte - carlo simulations . for ease of tractability and without loss of generality , we assume symmetric levels of impairments at the transceiver , i.e. , an equal hardware quality at the transmitter and receiver of each node , i.e. , (= ) . also , assume identical statistics for each hop , e.g. , and .notably , one may observe from fig .[ fig2 ] that it is preferable to enable multiple antennas with non - ideal ( e.g. , low cost ) transceivers rather than single - antenna transceivers with ideal hardware . 
hence, spatial diversity seems to overcome hardware impairments at the transceiver , even if the rather suboptimal sc ( as compared to the performance of tas / sc or tas / mrc ) scenario is used .also , the value of interference threshold plays an important role to the outage probability , since it dramatically affects the floor on outage occurrence .the performance of a typical zero forcing ( zf ) detection scheme is also presented for cross - comparison reasons .it can be seen that zf is inferior to tas / sc and sc schemes due to the reduced spatial diversity gain .moreover , the tightness of the asymptotic curves in moderately medium - to - high channel power gain regions is depicted in fig .it can be readily seen that the diversity orders of the considered schemes remain unaffected from the impact of hardware impairments .the same argument holds for the performance difference between these schemes .finally , the superiority of tas / mrc against tas / sc , in both cases of ideal and non - ideal hardware , is verified in fig .[ fig4 ] . obviously , the impact of hardware impairments plays a key role to the overall performance of both schemes , regardless of the number of transmit / receive antennas .current study provides some new results satisfying performance requirements of practical cognitive multiple - antenna df relaying systems with non - ideal hardware : a ) new straightforward and exact closed - form outage performance expressions are derived ; b ) the selection of equal number of antennas at the transceiver of each link is the most optimal solution regardless of the amount of hardware impairments ; and c ) tas / mrc always outperforms tas / sc , while such a performance difference is irrespective of hardware quality .m. xia and s. aissa , `` cooperative af relaying in spectrum - sharing systems : performance analysis under average interference power constraints and nakagami-_m _ fading , '' _ ieee trans .60 , no . 6 , pp . 1523 - 1533 , june 2012 .f. a. khan , k. tourki , m .- s .alouini , and k. a. qaraqe , `` performance analysis of a power limited spectrum sharing system with tas / mrc , '' _ ieee trans . signal process .954 - 967 , feb .p. l. yeoh , m. elkashlan , t. q. duong , n. yang , and d. b. costa , `` transmit antenna selection for interference management in cognitive relay networks , '' _ ieee trans .63 , no . 7 , pp . 3250 - 3262 , sept .2014 .m. matthaiou , a. papadogiannis , e. bjornson , and m. debbah , `` two - way relaying under the presence of relay transceiver hardware impairments , '' _ ieee commun .17 , no . 6 , pp . 1136 - 1139 , jun .
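the monte-carlo comparison used for the figures can be imitated with a short simulation; the sketch below is illustrative only -- the sndr model $\gamma = p\,g/(p\,g\,\kappa^{2}+n_0)$ with a lumped distortion parameter $\kappa^{2}$, the way the interference link to the primary receiver is aggregated, and all numerical values are assumptions made for this example rather than the exact expressions and parameters of the paper.

import numpy as np

def hop_gain(h2, mode):
    # h2: (n, n_t, n_r) array of squared channel magnitudes for one hop
    if mode == "tas_mrc":
        return h2.sum(axis=2).max(axis=1)   # best transmit antenna, mrc over receive antennas
    return h2.max(axis=(1, 2))              # tas/sc: best single transmit/receive antenna pair

def outage(mode, kappa2, n=200_000, n_t=2, n_r=2, n_p=1,
           p_max=10.0, q_int=1.0, n0=1.0,
           omega_s=1.0, omega_p=0.1, gamma_th=1.0, seed=1):
    rng = np.random.default_rng(seed)
    gammas = []
    for _hop in range(2):
        h2 = rng.exponential(omega_s, size=(n, n_t, n_r))   # rayleigh fading per hop
        g = hop_gain(h2, mode)
        # interference link to the primary receiver: placeholder choice -- the strongest
        # of the n_p primary antennas limits the secondary transmit power
        gp = rng.exponential(omega_p, size=(n, n_p)).max(axis=1)
        p = np.minimum(p_max, q_int / gp)                   # underlay power constraint
        # sndr with residual transceiver distortion lumped into kappa2 (assumed model)
        gammas.append(p * g / (p * g * kappa2 + n0))
    gamma_e2e = np.minimum(gammas[0], gammas[1])            # df relaying: weakest hop dominates
    return float(np.mean(gamma_e2e < gamma_th))

for kappa2 in (0.0, 0.02, 0.1):
    print(kappa2, outage("tas_sc", kappa2), outage("tas_mrc", kappa2))

setting $\kappa^{2}=0$ recovers the ideal-hardware case, and increasing $\kappa^{2}$ degrades the outage of both schemes while tas/mrc retains its advantage over tas/sc, in line with the behaviour discussed above.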
|
the performance of an underlay cognitive (secondary) dual-hop relaying system with multiple antennas and hardware impairments at each transceiver is investigated. in particular, the outage probability of the end-to-end (e2e) communication is derived in closed form when either transmit antenna selection with maximum ratio combining (tas/mrc) or tas with selection combining (tas/sc) is established in each hop. simplified asymptotic outage expressions are also obtained, which manifest the diversity and array order of the system, the effectiveness of balancing the number of transmit/receive antennas, and the impact of hardware impairments on the e2e communication. keywords: cognitive systems, decode-and-forward (df), hardware impairments, spectrum sharing, transmit antenna selection (tas).
|
the distinction between default negation and strong negation has been useful in answer set programming . in particular, it yields an elegant solution to the frame problem .the fact that block stays at the same location by inertia can be described by the rule l ion(b , l , t+1)ion(b , l , t ) , ion(b , l , t+1 ) along with the rule that describes the uniqueness of location values , ion(b , l_1,t)ion(b , l , t ) , ll_1 . here ` ' is the symbol for strong negation that represents explicit falsity while ` ' is the symbol for default negation ( negation as failure ) .rule ( [ inertia - on - sneg ] ) asserts that without explicit evidence to the contrary , block remains at location .if we are given explicit conflicting information about the location of at time then this conclusion will be defeated by rule ( [ unique - on - sneg ] ) , which asserts the uniqueness of location values .an alternative representation of inertia , which uses choice rules instead of strong negation , was recently presented by [ ] . instead of rule, they use the choice rule \{ion(b , l , t+1 ) } ion(b , l , t ) , which states that `` if is at at time , then decide arbitrarily whether to assert that is at at time . '' instead of rule , they write weaker rules for describing the functional property of : which can be also combined into one rule : in the absence of additional information about the location of block at time , asserting is the only option , in view of the existence of location constraint .but if we are given conflicting information about the location of at time then not asserting is the only option , in view of the uniqueness of location constraint .rules , , and together can be more succinctly represented in the language of by means of intensional functions .that is , the three rules can be replaced by one rule \{iloc(b , t+1)=l}iloc(b , t)=l , where is an intensional function constant ( the rule reads , `` if block is at location at time , by default , the block is at at time '' ) .in fact , corollary 2 of tells us how to eliminate intensional functions in favor of intensional predicates , justifying the equivalence between and the set of rules , , and .the translation allows us to compute the language of using existing asp solvers , such as smodels and gringo .however , dlv can not be used because it does not accept choice rules . on the other hand ,all these solvers accept rules and , which contain strong negation .the two representations of inertia involving intensional predicate do not result in the same answer sets . in the first representation , which uses strong negation ,each answer set contains only one atom of the form for each block and each time ; for all other locations , negative literals belong to the answer set . 
on the other hand, such negative literals do not occur in the answer sets of a program that follows the second representation , which yields fewer ground atoms .this difference can be well explained by the difference between the symmetric and the asymmetric views of predicates that lifschitz described in his message to texas action group , titled `` choice rules and the belief - based view of asp '' : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the way i see it , in asp programs we use predicates of two kinds , let s call them `` symmetric '' and `` asymmetric . ''the fact that an object does not have a property is reflected by the presence of in the answer set if is `` symmetric , '' and by the absence of if is `` asymmetric . '' in the second case , the strong negation of is not used in the program at all . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ according to these terminologies , predicate is symmetric in the first representation , and asymmetric in the second representation .this paper presents several technical results that help us understand the relationship between these two views . in this regard, it helps us to understand strong negation as a way of expressing intensional boolean functions .* our first result provides an alternative account of strong negation in terms of boolean intensional functions .for instance , can be identified with and can be identified with under complete interpretations , we show that minimizing both positive and negative literals in the traditional answer set semantics is essentially the same as ensuring the uniqueness of boolean function values under the functional stable model semantics . 
in this sense , strong negation can be viewed as a mere disguise of boolean functions . *we show how non - boolean intensional functions can be eliminated in favor of boolean functions .combined with the result in the first bullet , this tells us a new way of turning the language of into traditional answer set programs with strong negation , so that system dlv , as well as smodels and gringo , can be used for computing the language of . as an example , it tells us how to turn into the set of rules and .* [ ] recently proposed `` two - valued logic programs , '' which modifies the traditional stable model semantics to represent complete information without distinguishing between strong negation and default negation . using our result that views strong negation in terms of boolean functions , we show that two - valued logic programs are in fact a special case of the functional stable model semantics in which every function is boolean . while the main results are stated for the language of , similar results hold with the language of based on the relationship between the two languages studied in .furthermore , we note that the complete interpretation assumption in the first bullet can be dropped if we instead refer to the language of , at the price of introducing partial interpretations . the paper is organized as follows . in section [ sec : preliminaries ] we review the two versions of the stable model semantics , one that allows strong negation , but is limited to express intensional predicates only , and the other that allows both intensional predicates and intensional functions . as a special case of the latter we also present multi - valued propositional formulas under the stable model semantics .section [ sec : sneg - bf ] shows how strong negation can be viewed in terms of boolean functions .section [ sec : nonbool - sneg ] shows how non - boolean functions can be eliminated in favor of boolean functions .section [ sec : tvlp ] shows how lifschitz s two - valued logic programs can be viewed as a special case of the functional stable model semantics .section [ sec : sneg - cabalar ] shows how strong negation can be represented in the language of .this review follows .a _ signature _ is defined as in first - order logic , consisting of _ function constants _ and _ predicate constants_. function constants of arity are also called _ object constants_. we assume the following set of primitive propositional connectives and quantifiers : .the syntax of a formula is defined as in first - order logic .we understand as an abbreviation of .the stable models of a sentence relative to a list of predicates are defined via the _ stable model operator with the intensional predicates _ , denoted by ] stands for the second - order sentence where is defined recursively : * for any list of terms ; * for any atomic formula ( including and equality ) that does not contain members of ; * ; ; * ; * ; . a model of a sentence ( in the sense of first - order logic ) is called _-stable _ if it satisfies ] is which is equivalent to first - order sentence l x(p(x ) x = a ) x(q(x ) x = b ) x ( r(x ) ( p(x ) q(x ) ) ) ( see , example 3 ) .the stable models of are any first - order models of ( [ ex3f - comp ] ) .the only herbrand stable models of is . [ ] incorporate strong negation into the stable model semantics by distinguishing between intensional predicates of two kinds , _ positive _ and _ negative_. 
each negative intensional predicate has the form , where is a positive intensional predicate and ` ' is a symbol for strong negation . in this sense , syntactically is not a logical connective , as it can appear only as a part of a predicate constant .an interpretation of the underlying signature is _ coherent _ if it satisfies the formula , where is a list of distinct object variables , for each negative predicate .we consider coherent interpretations only .[ ex : bw - sneg0 ] the following is a representation of the blocks world in the syntax of logic programs : & & ion(b_1,b , t ) , ion(b_2,b , t ) & ( b_1b_2 ) + ion(b ,l , t+1 ) & & imove(b , l , t ) + & & imove(b , l , t ) , ion(b_1,b , t ) + & & imove(b , b_1,t ) , imove(b_1,l , t ) + ion(b , l,0 ) & & ion(b , l,0 ) + ion(b , l,0 ) & & ion(b , l,0 ) + imove(b , l , t ) & & imove(b , l , t ) + imove(b , l , t ) & & imove(b , l , t ) + ion(b , l , t+1)&&ion(b , l , t ) , ion(b , l , t+1 ) + ion(b , l , t ) & & ion(b , l_1,t ) & ( ll_1 ) . here and are intensional predicate constants , , , are variables ranging over the blocks , , are variables ranging over the locations ( blocks and the table ) , and is a variable ranging over the timepoints .the first rule asserts that at most one block can be on another block .the next three rules describe the effect and preconditions of action .the next four rules describe that fluent is initially exogenous , and action is exogenous at each time .the next rule describes inertia , and the last rule asserts that a block can be at most at one location .the functional stable model semantics is defined by modifying the semantics in the previous section to allow `` intensional '' functions . for predicate symbols ( constants or variables ) and , we define as .we define as if and are predicate symbols , and if they are function symbols .let be a list of distinct predicate and function constants and let be a list of distinct predicate and function variables corresponding to .we call members of _ intensional _ constants .by we mean the list of the predicate constants in , and by the list of the corresponding predicate variables in .we define as and ] , where is the fol - representation of the program .the following is a review of the stable model semantics of multi - valued propositional formulas from , which can be viewed as a special case of the functional stable model semantics in the previous section .the syntax of multi - valued propositional formulas is given in .multi - valued propositional signature _ is a set of symbols called _ constants _ , along with a nonempty finite set of symbols , disjoint from , assigned to each constant .we call the _ domain _ of .a _ boolean _constant is one whose domain is the set .an _ atom _ of a signature is an expression of the form ( `` the value of is '' ) where and .a _ ( multi - valued propositional ) formula _ of is a propositional combination of atoms .a _ ( multi - valued propositional ) interpretation _ of is a function that maps every element of to an element of its domain .an interpretation _ satisfies _ an atom ( symbolically , ) if .the satisfaction relation is extended from atoms to arbitrary formulas according to the usual truth tables for the propositional connectives . 
is a _ model _ of a formula if it satisfies the formula .the reduct of a multi - valued propositional formula relative to a multi - valued propositional interpretation is the formula obtained from by replacing each maximal subformula that is not satisfied by with .interpretation is a _ stable model _ of if is the only interpretation satisfying .[ ex6 ] consider a multi - valued propositional signature , where and .the following is a multi - valued propositional formula : + consider an interpretation such that , and .the reduct is + and is the only interpretation of that satisfies .similar to example [ ex : f=1 ] , consider the signature such that .let be an interpretation such that , and be such that .recall that is shorthand for .the reduct of this formula relative to is , and is the only model of the reduct . on the other hand ,the reduct of relative to is and is not its unique model . also, the reduct of relative to is and is not a model .the reduct of relative to is , and is the only model of the reduct .multi - valued propositional formulas can be identified with a special case of first - order formulas as follows .let be a multi - valued propositional formula of signature .we identify with a first - order signature that consists of * all symbols from as object constants , and * all symbols from where is in as object constants .we may view multi - valued propositional interpretations of as a special case of first - order interpretations of .we say that a first - order interpretation of _ conforms _ to if * the universe of is the union of for all in ; * for every in ; * for every in where .[ prop : mvpf - fo ] for any multi - valued propositional formula of signature such that for every , an interpretation of is a multi - valued propositional stable model of iff is an interpretation of that conforms to and satisfies ] but does not satisfy ] iff is a model of .( ii ) an interpretation of the signature of is a model of iff for some model of .the other direction , eliminating boolean intensional functions in favor of symmetric predicates , is similar as we show in the following .let be a -plain formula such that every atomic formula containing has the form or , where is any list of terms ( not containing members from ) .formula is obtained from as follows : * in the signature of , replace with predicate constants and , whose arities are the same as that of ; * replace every occurrence of , where is any list of terms , with , and with .[ thm : sneg - boolfunc - pred ]let be a set of predicate and function constants , let be a function constant , and let be a -plain formula such that every atomic formula containing has the form or . formulas ( [ pb ] ) and entail \lrar \sm[f^{~b}_{(p,\sneg\ p)};\p,\sneg p , \bc]\ .\ ] ] the following corollary shows that there is a 11 correspondence between the stable models of and the stable models of . for any interpretation of the signature of that satisfies , by denote the interpretation of the signature of obtained from by replacing the function with predicate such that [ cor : sneg - boolfunc - pred ] let be a set of predicate and function constants , let be a function constant , and let be a -plain sentence such that every atomic formula containing has the form or .( i ) an interpretation of the signature of is a model of ] iff is a model of ] iff for some model of ] iff is a model of .( ii ) an interpretation of the signature of that satisfies is a model of ] . 
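as a concrete illustration of the translation just defined (an illustration only; the names used here for the two boolean values may differ from those in the definitions above), consider a unary boolean intensional function constant $f$ and the corresponding pair of predicates $p$ and $\sneg p$: every atom $f(x)\!=\!\mathbf{t}$ is rewritten as $p(x)$ and every atom $f(x)\!=\!\mathbf{f}$ as $\sneg p(x)$, while the coherence formula $\neg\exists x\,(p(x)\wedge \sneg p(x))$ together with the completeness assumption $\forall x\,(p(x)\vee \sneg p(x))$ correspond, on the functional side, to the fact that $f$ assigns exactly one boolean value to every argument. conversely, a symmetric predicate $p$ used with strong negation can be read back as the boolean function with $f(x)\!=\!\mathbf{t}$ precisely when $p(x)$ holds and $f(x)\!=\!\mathbf{f}$ precisely when $\sneg p(x)$ holds, which is the sense in which, as stated in the introduction, strong negation is a disguise for boolean intensional functions.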
theorem [ thm : composition2 ] and corollary [ cor : composition2cor ] are similar to theorem 8 and corollary 2 from .the main difference is that the latter statements refer to the constraint called that is weaker than .for instance , the elimination method from turns the blocks world in example [ ex : bw - func ] into almost the same program as except that the last rule is turned into the constraint : ion(b , l , t)ion(b , l_1,t)ll_1 .it is clear that the stable models of are under the symmetric view , and the stable models of are under the asymmetric view . to see how replacing by turns the symmetric view to the asymmetric view , first observe that adding ( [ uec - ex ] ) to program does not affect the stable models of the program . let s call this program .it is easy to see that is a conservative extension of the program that is obtained from by deleting the rule with in the head . [ ] presented a high level definition of a logic program that does not contain explicit default negation , but can handle nonmonotonic reasoning in a similar style as in reiter s default logic . in this sectionwe show how his formalism can be viewed as a special case of multi - valued propositional formulas under the stable model semantics in which every function is boolean .let be a signature in propositional logic .a _ two - valued rule _ is an expression of the form l_0 l_1 , , l_n : f where are propositional literals formed from and is a propositional formula of signature . a _ two - valued program _ is a set of two - valued rules .an interpretation is a function from to .the _ reduct _ of a program relative to an interpretation , denoted , is the set of rules corresponding to the rules ( [ liftvrule ] ) of for which .interpretation is a stable model of if it is a minimal model of .[ ex : tv ] l a : a , a : a , b a : the reduct of this program relative to consists of rules and .interpretation is the minimal model of the reduct , so that it is a stable model of the program . as described in ,if in every rule ( [ liftvrule ] ) has the form of conjunctions of literals , then the two - valued logic program can be turned into a traditional answer set program containing strong negation when we consider complete answer sets only .for instance , program ( [ tv - example ] ) can be turned into this program has two answer sets , and , and only the complete answer set corresponds to the stable model found in example [ ex : tv ] .given a two - valued logic program of a signature , we identify with the multi - valued propositional signature whose constants are from and the domain of every constant is boolean values . for any propositional formula , is obtained from by replacing every negative literal with and every positive literal with . by we denote the multi - valued propositional formula which is defined as the conjunction of for each rule ( [ liftvrule ] ) in . for any interpretation of , we obtain the multi - valued interpretation from as follows . for each atom in , [ thm : liftvtrans ] for any two - valued logic program ,an interpretation is a stable model of in the sense of iff is a stable model of in the sense of .* example [ ex : tv ] continued * consider extending the rules to contain variables .it is not difficult to see that the translation can be straightforwardly extended to non - ground programs .this accounts for providing the semantics of the first - order extension of two - valued logic programs .there are other stable model semantics of intensional functions . 
theorem 5 from states that the semantics by [ ] coincides with the semantics by [ ] on -plain formulas .thus several theorems in this note stated for the bartholomew - lee semantics hold also under the cabalar semantics .a further result holds with the cabalar semantics since it allows functions to be partial .this provides extensions of theorem [ thm : sneg - bool - fo ] and corollary [ cor : bfelim ] , which do not require the interpretations to be complete .below we state this result . due to lack of space , we refer the reader to for the definition of , which is the second - order expression used to define the cabalar semantics .similar to in section [ ssec : sneg - bool - fo ] , by we denote the conjunction of the following formulas : , where is a list of distinct object variables .is true if is undefined .see for more details . ][ thm : sneg - bool - fo - c ] let be a set of predicate constants , and let be a formula . formulas and entail \lrar \cbl[f^{(p,\sneg\ p)}_{~~b};\ b,\bc]\ .\ ] ] the following corollary shows that there is a 11 correspondence between the stable models of and the stable models of .. ] for any interpretation of the signature of , by we denote the interpretation of the signature of obtained from by replacing the relation with function such that since is coherent , is well - defined .we also require that satisfy ( [ fc1 ] ) .consequently , satisfies .[ cor : bfelim - c ] let be a sentence , and let be a set of predicate constants .( i ) an interpretation of the signature of is a model of $ ] iff is a model of .( ii ) an interpretation of the signature of is a model of iff for some model of .in this note , we showed that , under complete interpretations , symmetric predicates using strong negation can be alternatively expressed in terms of boolean intensional functions in the language of .they can also be expressed in terms of boolean intensional functions in the language of , but without requiring the complete interpretation assumption , at the price of relying on the notion of partial interpretations .system cplus2asp turns action language + into answer set programs containing asymmetric predicates . the translation in this paper that eliminates intensional functions in favor of symmetric predicates provides an alternative method of computing + using asp solvers .* acknowledgements : * we are grateful to vladimir lifschitz for bringing attention to this subject , to gregory gelfond for useful discussions related to this paper , and to anonymous referees for useful comments .this work was partially supported by the national science foundation under grant iis-0916116 and by the south korea it r&d program mke / kiat 2010-td-300404 - 001 .joseph babb and joohyung lee .: computing action language c+ in answer set programming . in _ proceedings of international conference on logic programming and nonmonotonic reasoning ( lpnmr ) _ , 2013 .to appear .michael bartholomew and joohyung lee .stable models of formulas with intensional functions . in _ proceedings of international conference on principles of knowledge representation and reasoning ( kr ) _ , pages 212 , 2012 .paolo ferraris , joohyung lee , vladimir lifschitz , and ravi palla .symmetric splitting in the general theory of stable models . in _ proceedings of international joint conference on artificial intelligence ( ijcai ) _ ,pages 797803 .aaai press , 2009 .
|
the distinction between strong negation and default negation has been useful in answer set programming . we present an alternative account of strong negation , which lets us view strong negation in terms of the functional stable model semantics by bartholomew and lee . more specifically , we show that , under complete interpretations , minimizing both positive and negative literals in the traditional answer set semantics is essentially the same as ensuring the uniqueness of boolean function values under the functional stable model semantics . the same account lets us view lifschitz s two - valued logic programs as a special case of the functional stable model semantics . in addition , we show how non - boolean intensional functions can be eliminated in favor of boolean intensional functions , and furthermore can be represented using strong negation , which provides a way to compute the functional stable model semantics using existing asp solvers . we also note that similar results hold with the functional stable model semantics by cabalar .
|
the _ subgraph homeomorphism problem _ ( shp ), also known as _ topological containment _ , is an important problem in graph theory, and belongs to garey and johnson's original list of np - complete problems. any fixed _ pattern graph _ gives rise to the following decision problem : given an input graph, does it topologically contain the pattern graph, i.e., does it contain a subdivision of the pattern graph as a subgraph? it is known that this problem can be solved in polynomial time for any fixed pattern graph, but practical algorithms exist only for a few small pattern graphs. among these are certain members of the wheel class of graphs, namely the wheels with four, five, six and seven spokes, for which characterizations have been obtained; these characterizations lead to efficient algorithms for solving shp for the corresponding pattern graphs. the length and difficulty of the proof increases with the number of spokes. the proof for the smallest of these wheels takes only a paragraph, and the next occupies 7 pages. the proof for the six - spoke wheel, however ( 16 pages, with some automated analysis ), requires extensive amounts of repetitive case analysis, and the proof for the seven - spoke wheel even more so ( around 90 pages, also with automated analysis ). this case analysis involves looking at numerous small graphs of bounded size, and searching for wheel subdivisions in those graphs. this paper presents some algorithms developed to automate parts of the searching and analysis required in developing the results for six - wheel and seven - wheel subdivisions. the proofs of these results are similar in structure : both involve beginning with a pattern graph for which some good characterization already exists, then examining all possible ways in which certain structures can be added to this graph to satisfy some necessary condition. it must then be determined whether or not the resulting graphs topologically contain the pattern graph for which the new characterization is desired. this technique involves testing many small graphs for the presence of six - wheel or seven - wheel subdivisions. since the process of constructing these small graphs is repetitive in nature, it was possible to create a program that automates their construction. given the sheer number of test cases that arise, particularly for the seven - wheel result, this program is important in obtaining the information necessary to complete these proofs, as examining each graph individually by hand would take an inordinate amount of time. in particular, one of the key algorithms used in the program ( given in section [ furthertests ] ) could be applicable in a broader context, most obviously for developing characterizations relating to wheels with more than seven spokes, but also potentially for obtaining results for subdivisions of graphs other than wheels, if similar techniques can be used. each of the graphs generated by the program must be individually tested for the presence of a six - wheel or seven - wheel subdivision. those that do not contain such a subdivision require further analysis in the proof, and so are given as output.
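the overall pattern is a simple generate - and - test loop. purely as an illustration ( this fragment is not taken from the appendices; it assumes the ` graph ` type and the helper functions ` addedge ` , ` removeedge ` , ` findkwheel ` and ` printgraph ` that appear in the appendix listings, and the candidate list is a made - up placeholder ), the loop can be sketched as follows :
....
/* schematic generate-and-test loop: augment a base graph with each      *
 * candidate edge in turn, test for a k-wheel subdivision, and print the *
 * graph as an exception if no such subdivision is found.                */
static void report_exceptions(graph *base, int k,
                              int candidates[][2], int ncandidates)
{
    int i;
    for (i = 0; i < ncandidates; i++)
    {
        /* add one candidate edge (in the real proofs, a whole path) */
        base = addedge(base, candidates[i][0], candidates[i][1]);

        /* graphs with no k-wheel subdivision need further analysis */
        if (findkwheel(base, k, 0, 0) == NULL)
            printgraph(base);

        /* undo the augmentation before trying the next candidate */
        base = removeedge(base, candidates[i][0], candidates[i][1]);
    }
}
....
the functions that actually generate the candidate augmentations used in the proofs are described in the following sections.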
in order to successfully perform a test for the presence of a wheel subdivision on each generated graph, an algorithm is required that will solve the corresponding shp instance for each graph. we used a naive algorithm which runs in exponential, rather than polynomial, time. it performs adequately for the small input graphs that arise in the proofs, and its correctness is easily verifiable. section [ testcases ] describes the types of case analysis required in the proofs of the six - wheel and seven - wheel results, gives the algorithms that have been developed for generating these cases, and demonstrates how these algorithms are used in the context of the proofs. section [ wk_alg ] gives the exponential - time algorithm for solving shp that is used in testing the generated graphs. each algorithm mentioned has been implemented in c, and the code can be found in the appendices at the end of the paper. the complete code for all implementations can be found online at ` http://www.csse.monash.edu.au/~rebeccar/wheelcode.html `. if a set of vertices of a graph is fixed, then its _ bridges _ are the maximal subsets of the remaining vertices such that any two vertices of such a subset are joined by a path in the graph with no internal vertex in the fixed set. a vertex of degree 2 is _ contracted _ in a graph by adding an edge between its two neighbours, if such an edge does not already exist, and then deleting the vertex. in developing proofs for the characterizations of the six - and seven - spoke wheels, algorithms were written to generate specific graphs that arise as cases in these proofs, and then to test these graphs for the presence of a six - wheel or seven - wheel subdivision. this section outlines how such automated graph generation is done. section [ wheelproof ] describes the ` wheelproof ` function, which is used to perform preparatory work in the proofs of the six - wheel and seven - wheel results. its role in the proofs is simple, but it provides a good illustration of the search techniques used. section [ furthertests ] describes the ` exception_generator ` function, which is a more general function that is applicable in a wider range of situations, and as such it is used often throughout the proofs. each of the proofs for the theorems regarding graphs with no wheel subdivision of a given size follows a similar overall structure : * firstly, it is proved that for some graph that meets the conditions of the hypothesis, there must exist a subdivision of the next smaller wheel centred on a specific vertex of the required degree. * it is then observed that this centre vertex has some neighbour in the graph that is not one of its neighbours in the subdivision, and that, since the graph is 3-connected, there must be two disjoint paths in the graph from this neighbour to the subdivision that do not meet the centre. * all possible placings of these two paths must be examined, and each resulting graph must contain the larger wheel subdivision if it is to satisfy the hypothesis of the theorem. in situations where the resulting graph clearly contains such a subdivision, this is simple. where this is not the case, closer examination of the structure of the graph is required. the function ` wheelproof(k) ` was created specifically to generate all possible placings of the two paths for which the resulting graph does not contain a k - wheel subdivision ( for any input k ). we refer to such graphs as _ exception _ graphs. this function firstly constructs the wheel with k-1 spokes, then generates all possible ways in which a k - th neighbour can be added to the centre vertex, while still preserving the 3-connectivity of the graph.
for each graph that is generated, the function ` findkwheel ` is then run with the appropriate arguments, to test for the presence of the relevant wheel subdivision. any graph generated which is found not to contain such a subdivision is recorded as an exception graph; the function returns a list of all such graphs found. the c implementation of ` wheelproof(k) ` is given in appendix [ app_wheelproof ]. running the ` wheelproof ` function with an input of 4 generates no exception graphs. this is to be expected, as the characterization for graphs topologically containing the four - spoke wheel is as follows : if a 3-connected graph has a vertex of the required degree, then it contains a four - wheel subdivision centred on that vertex. the output of ` wheelproof(5) ` is also as expected, returning two different exception graphs ( shown in figure [ w5_exceptions ] ), each of which is isomorphic to the starting graph of subcase 2b in theorem 3 of the earlier five - wheel characterization. ( this theorem characterizes graphs containing no five - wheel subdivision; the subcase mentioned deals specifically with a section of the proof requiring the imposition of extra restrictions on the input graph, namely, that it contains no internal 3-edge - cutsets, and that it contains a cycle of length at least 5 disjoint from the selected vertex. ) the output of ` wheelproof(6) ` generates five different exception graphs. these graphs are isomorphic, and thus further analysis of only one is sufficient ( shown in figure [ w6_exceptions ] ). such analysis is given in case ( b)(ii ) of the main theorem of the six - wheel paper. the output of ` wheelproof(7) ` gives 15 different graphs, but when examined for isomorphism, this number is reduced to three ( see figure [ w7_exceptions ] ). each of these three graphs is analysed further in cases ( b)(i ), ( b)(ii ), and ( b)(iii ) of the main theorem of the seven - wheel paper. ( caption of figure [ w7_exceptions ] : each of the 15 graphs generated from the output of ` wheelproof(7) ` is isomorphic to one of these three graphs.
]certain other situations arise in the proofs characterizing the and cases which lend themselves to further automated generation of test cases .these situations all have the following features : * only part of the structure of is known , represented by a smaller graph , .each edge in corresponds to a path in .* contains a separating set , and contains a number of bridges of .* it is unknown whether is also a separating set of , or if each bridge of is contained in a separate bridge of .the proof requires that it be known how many bridges of are contained in separate bridges of .thus , a path is added to , where is disjoint from except at its endpoints , each of which are in two separate bridges of ( but not in ) .all possible graphs are generated , for all possible placements , in , of the endpoints of .each generated graph is then tested for the presence of a -subdivision , and only those graphs which do not contain such a subdivision require further analysis . the function `exception_generator ` is used to automate this process .this function takes a graph , and the vertex sets of two subgraphs of , say and .the function generates all possible graphs of the form , where is some path disjoint from except at its endpoints , one of which is in and one of which is in .the function tests each generated graph for the presence of a -subdivision , and outputs those that do not contain such a subdivision .an outline of the algorithm is as follows : * for each pair of vertices , where and : * * add edge * * check for existence of -subdivision * * remove edge * * for each vertex in adjacent to : * * * create a new vertex , and subdivide the edge into two new edges and * * * add edge * * * check for existence of -subdivision * * * remove edge * * * contract vertex * * for each vertex in adjacent to : * * * create a new vertex , and subdivide the edge into two new edges and * * * add edge * * * check for existence of -subdivision * * * remove edge * * * for each vertex in adjacent to : * * * * create a new vertex , and subdivide the edge into two new edges and * * * * add edge * * * * check for existence of -subdivision * * * * remove edge * * * * contract vertex * * * contract vertex the implementation of this algorithm is given in appendix [ app_exception_generator ] .we now give an example of how ` exception_generator ` is used in proofs . in the main result ( theorem 18 ) of , case ( b)(i ) 1.1.1.1.1 , we start with the graph of figure [ searcheg_start ] .note that the edges marked , , , and in this graph each have four possible placements in the graph ( represented by dotted lines in figure [ searcheg_start ] ) .thus , there are in fact possible starting graphs . 
for each of these graphs , we consider the set , with the aim of discovering whether some path can be added to such that is not a separating set of , and does not contain a -subdivision .the ` exception_generator ` function can be used as follows : * for each starting graph : * * let , be the two components of * * call ` exception_generator(g_{i } , a , |v(a)| , b , |v(b)| ) ` running this algorithm finds that each generated graph contains a -subdivision .thus , it can be assumed from this point onwards in the proof that is a separating set .the main algorithm in this section is ` findkwheel ` , and is given in section [ findkwheel ] .it solves shp( ) for any given input graph , and for any value of .this algorithm runs in exponential time , but performs sufficiently quickly on input graphs of small size .the algorithm ` findkwheel ` makes a call to another algorithm , ` iskwheel ` , which determines whether or not some input graph is a -subdivision , for a given value of . `iskwheel(g , k ) ` takes two arguments , a graph and an integer , and determines whether or not is a -subdivision . a 2-connected graph is isomorphic to the wheel if the following is true : * ; * contains exactly vertices of degree 3 ; and * contains exactly one vertex of degree .a graph is a -subdivision if , after contracting all vertices of degree 2 , becomes isomorphic to the graph .thus , the function ` iskwheel ` uses the following algorithm : * step 1 . check to see if is two - connected . if not , can not be a -subdivision : return null . *contract all vertices of degree 2 in .if contains exactly vertices , of which have degree 3 , and one of which has degree , then return ; otherwise return null . determiningif is two - connected in step 1 is done with a worst - case complexity of , using an implementation of hopcroft s biconnectivity algorithm .( the implementation is given in appendix [ app_is2connected ] . ) contracting all vertices of degree 2 until there are no such vertices left has a complexity of .counting the degrees of remaining vertices in step 3 is .thus , the entire algorithm s complexity is .the exact code is given in appendix [ app_iskwheel ] .the function ` findkwheel(g , k ) ` also takes as its arguments a graph and an integer .this function searches for a -subdivision as a subgraph of ; if such a subgraph exists , ` findkwheel ` will return it , otherwise it returns null .this is done by recursively testing all subgraphs obtained by removing a single edge from the input graph .base cases are graphs that are -subdivisions , or small graphs that clearly do not contain such a subdivision .the following algorithm is used .remove any vertices in with degree zero .* step 2 . call ` iskwheel(g ) ` . if is not a -subdivision , go to step 3 ; otherwise return . * step 3 .if , or , then is too small to contain a -subdivision .return null .if contains no vertex with degree , return null .* step 5 . for each edge that exists , call ` findkwheel(g - e , k ) ` .if a -subdivision is found , return that graph , otherwise continue to step 6 . 
does not contain a -subdivision .return null .this algorithm runs in exponential time , but still performs effectively on reasonably small graphs .the code is given in appendix [ app_findkwheel ] .the proofs of the main results in and regarding - and -subdivisions are of sufficient complexity that completing such proofs without the aid of a computer program becomes extremely difficult .the algorithms presented in this paper , particularly the ` exception_generator ` algorithm given in section [ furthertests ] , form a key component in automating the generation and testing of graphs required as test cases in these proofs .the ` exception_generator ` algorithm may well be useful in developing other characterizations of shp - related problems , where a similar approach is adopted in the proof of moving from a problem with a good characterization to one without . 1 alfred v. aho , john e. hopcroft , and jeffrey d. ullman . .addison - wesley , reading , ma , 1974 .g. farr .the subgraph homeomorphism problem for small wheels ., 71:129142 , 1988 .garey and d.s .johnson . .w. h. freeman , new york , 1979 .n. robertson and p.d .graph minors .the disjoint paths problem .63:65110 , 1995 . rebecca robinson and graham farr . structure and recognition of graphs with no 6-wheel subdivision . published online in _ algorithmica_ , january 2008 ; awaiting print publication .rebecca robinson and graham farr . graphs with no 7-wheel subdivision .technical report 2009/239 , clayton school of information technology , monash university ( clayton campus ) , 2009 ..... / * takes an integer k , and outputs all exception graphs - graphs that * * do not contain a w_{k}-subdivision - from the starting point of the * * proof . */ static graphlist * wheelproof(const int k ) { graph * graph ; graphlist * exception_list = ( graphlist * ) malloc(sizeof(graphlist ) ) ; graph * prev = null ; graph * next_exception ; int i , j , l , m , x , y ; int u = k ; int u1 = k+1 ; int u2 = k+2 ; exception_list->size = 0 ; / * make the starting graph w_{k-1}. * / graph = makewk(graph , k-1 ) ; / * adding vertex u and joining to the centre vertex * / graph = addvertex(graph , u ) ; graph = addedge(graph , 0 , u ) ; / * main loop : creates edges between u and u1 , and between u and u2 , * * for each possible placement of u1 and u2 . each resulting graph * * is tested for a w_{k}-subdivision .* / / * for each vertex i in the graph ( except where i = u ) : * / for ( i = 0 ; i < = graph->highestid ; i++ ) { if ( i! = u & & graph->vertices[i ] != null ) { if ( i ! = 0 ) / * vertex 0 is already adjacent to u * / { / * where u1 is an already existing vertex - add edge .* / graph = addedge(graph , u , i ) ; / * now look at possibilities for u2 .* / for ( l = i ; l < = graph->highestid ; l++ ) { if ( l! = u & & graph->vertices[l ] ! = null ) { / * where u2 is already existing vertex : * / if ( l ! = 0 ) { graph = addedge(graph , u , l ) ; / * check if the two new edges make a w_{k } subdivision * / if ( ! findkwheel(graph , k , 0 , 0 ) ) { if ( !is3connected(graph ) ) printf("graph not 3-connected.\n " ) ; else { printf("exception found.\n " ) ; next_exception = graphcpy(graph ) ; if ( prev ! = null ) prev->next = next_exception ; else exception_list->head = next_exception ; prev = next_exception ; exception_list->size++ ; } } / * remove u - u2 edge again . 
*/ graph = removeedge(graph , u , l ) ; } / * this time , u2 is a new vertex on some ' edge ' * * ( since each edge represents a path in g ) .* / for ( m = i ; m < graph->vertices[l]->degree ; m++ ) { y = graph->vertices[l]->neighbours[m ] ; if ( y > l & & y ! = u & & graph->vertices[y ] ! = null ) { graph = addvertex(graph , u2 ) ; graph = addedge(graph , u , u2 ) ; graph = expand_edge(graph , l , y , u2 ) ; / * check if the two new edges make a w_{k } subdivision * / if ( !findkwheel(graph , k , 0 , 0 ) ) { if ( !is3connected(graph ) ) printf("graph not 3-connected.\n " ) ; else { printf("exception found:\n " ) ; next_exception = graphcpy(graph ) ; if ( prev ! = null ) prev->next = next_exception ; else exception_list->head = next_exception ; prev = next_exception ; exception_list->size++ ; } } / * remove u2 again . */ graph = removeedge(graph , u , u2 ) ; graph = contractvertex(graph , u2 ) ; } } } } / * remove the edge that was added before trying next possibility .* / graph = removeedge(graph , u , i ) ; } / * look at all possibilities for u1 where u1 is a new vertex * * that lies on the path between i and one of its ' neighbours ' * / for ( j = 0 ; j < graph->vertices[i]->degree ; j++ ) { x = graph->vertices[i]->neighbours[j ] ; if ( x > i & & x ! = u & & graph->vertices[x ] ! = null ) { / * new edge joins at a new vertex which splits an * * existing edge into two edges .* / graph = addvertex(graph , u1 ) ; graph = addedge(graph , u , u1 ) ; graph = expand_edge(graph , i , x , u1 ) ; / * now select u2 * / for ( l = i ; l < = graph->highestid ;l++ ) { if ( l! = u & & graph->vertices[l ] ! = null ) { / * u2 is an already existing vertex * / if ( l ! = 0 ) { graph = addedge(graph , u , l ) ; / * check if the two new edges make a w_{k } subdivision * / if ( !findkwheel(graph , k , 0 , 0 ) ) { if ( !is3connected(graph ) ) printf("graph not 3-connected.\n " ) ; else { printf("exception found:\n " ) ; next_exception = graphcpy(graph ) ; if ( prev ! = null ) prev->next = next_exception ; else exception_list->head = next_exception ; prev = next_exception ; exception_list->size++ ; } } / * remove u - u2 edge again .* / graph = removeedge(graph , u , l ) ; } / * u2 is a new vertex : generate all possibilities * / for ( m = i ; m < graph->vertices[l]->degree ;m++ ) { y = graph->vertices[l]->neighbours[m ] ; if ( y > l & & y != u & & graph->vertices[y ] ! = null ) { graph = addvertex(graph , u2 ) ; graph = addedge(graph , u , u2 ) ; graph = expand_edge(graph , l , y , u2 ) ; / * check if the two new edges make a w_{k } subdivision * / if ( !findkwheel(graph , k , 0 , 0 ) ) { if ( ! is3connected(graph ) ) printf("graph not 3-connected.\n " ) ; else { printf("exception found:\n " ) ; next_exception = graphcpy(graph ) ; if ( prev ! = null ) prev->next = next_exception ; else exception_list->head = next_exception ; prev = next_exception ; exception_list->size++ ; } } / * remove u2 again . 
*/ graph = removeedge(graph , u , u2 ) ; graph = contractvertex(graph , u2 ) ; } } } } / * take u1 out again .* / graph = removeedge(graph , u , u1 ) ; graph = contractvertex(graph , u1 ) ; } } } } graph = removeedge(graph , 0 , u ) ; graph = removevertex(graph , u ) ; return exception_list ; } / * makewk : returns the graph w_{k } for given k * / static graph * makewk(graph * graph , int k ) { int i = 0 ; graph = initialise_graph(graph ) ; / * create k+1 vertices * / while ( i < = k ) { graph = addvertex(graph , i ) ; i++ ; } i = 1 ; / * create k spokes * / while ( i < = k ) { graph = addedge(graph , 0 , i ) ; i++ ; } i = 1 ; / * create rim of wheel * / while ( i < k ) { graph = addedge(graph , i , i+1 ) ; i++ ; } graph = addedge(graph , k , 1 ) ; return graph ; } ........ / * function to process all the possible exceptions that can be * * generated from each starting graph .* / static void exception_generator(graph * graph , int sectiona [ ] , int asize , int sectionb [ ] , int bsize ) { int i , j , k , l , p , p1 , n , m , skip=0 , skip1=0 ; vertex * currvertex , * currvertex1 ; int newvertex1 = graph->highestid ; int newvertex2 = ( graph->highestid ) + 1 ; int nbr[maxdegree ] , nbr1[maxdegree ] ; / * process possible graphs * / for ( i=0 ; i < asize ; i++ ) { for ( j=0 ; j < bsize ; j++ ) { / * add new path : endpoints are vertices that already exist * / graph = addedge(graph , sectiona[i ] , sectionb[j ] ) ; / * is there a w7 ? * / if ( findkwheel(graph , 7 , 0 , 0 ) = = null ) { printf("exception:\n " ) ; printgraph(graph ) ; } / * remove new path * / graph = removeedge(graph , sectiona[i ] , sectionb[j ] ) ; / * add new path : endpoints are new vertex in section a * * and already existing vertex in sectionb * / n = 0 ; currvertex = graph->vertices[sectiona[i ] ] ; while ( n < currvertex->degree ) { nbr[n ] = currvertex->neighbours[n ] ; n++ ; } / * for each neighbour k of i , try expanding the edge ik * / for ( k=0 ; k < n ; k++ ) { / * if we 've looked at this neighbour before , skip it .* / for ( p=0 ; p < i ; p++ ) { if ( sectiona[p ] = = nbr[k ] ) skip = 1 ; else skip = 0 ; } / * ... otherwise , create a new vertex in section a along the * * path between i and k , and make a path between this and * * vertex j in section b * / if ( ! skip ) { graph = addvertex(graph , newvertex1 ) ; graph = expand_edge(graph , sectiona[i ] , nbr[k ] , newvertex1 ) ; graph = addedge(graph , newvertex1 , sectionb[j ] ) ; if ( findkwheel(graph , 7 , 0 , 0 ) = = null ) { printf("exception:\n " ) ; printgraph(graph ) ; } / * remove path .* / graph = removeedge(graph , newvertex1 , sectionb[j ] ) ; graph = contractvertex(graph , newvertex1 ) ; } } / * add new path : endpoints are new vertex in section b * * and already existing vertex in section a * / n = 0 ; currvertex = graph->vertices[sectionb[j ] ] ; while ( n < currvertex->degree ) { nbr[n ] = currvertex->neighbours[n ] ; n++ ; } / * for each neighbour k of j , try expanding the edge jk * / for ( k=0 ; k < n ; k++ ) { / * if we 've looked at this neighbour before , skip it . * / for ( p=0 ; p < j ; p++ ) { if ( sectionb[p ] = = nbr[k ] ) skip = 1 ; else skip = 0 ; } / * ... otherwise , create a new vertex in section b along the * * path between j and k , and make a path between this and * * vertex i in section a * / if ( ! 
skip ) { graph = addvertex(graph , newvertex1 ) ; graph = expand_edge(graph , sectionb[j ] , nbr[k ] , newvertex1 ) ; graph = addedge(graph , newvertex1 , sectiona[i ] ) ; if ( findkwheel(graph , 7 , 0 , 0 ) = = null ) { printf("exception:\n " ) ; printgraph(graph ) ; } graph = removeedge(graph , newvertex1 , sectiona[i ] ) ; / * do n't contract new vertex yet , but rather ... * / / * add new path : endpoints are new vertex in section b * * ( that is , the one we just made ) and new vertex in section a * / m = 0 ; currvertex1 = graph->vertices[sectiona[i ] ] ; while ( m < currvertex1->degree ) { nbr1[m ] = currvertex1->neighbours[m ] ; m++ ; } / * for each neighbour l of i , try expanding the edge il * / for ( l=0 ; l < m ; l++ ) { / * if we 've looked at this neighbour before , skip it . * / for ( p1=0 ; p1<i ; p1++ ) { if ( sectiona[p1 ] = = nbr1[l ] ) skip1 = 1 ; else skip1 = 0 ; } / * ... otherwise , create a new vertex in section a along the * * path between i and l , and make a path between this and * * the new vertex ( newvertex1 ) in section b * / if ( ! skip1 ) { graph = addvertex(graph , newvertex2 ) ; graph = expand_edge(graph , sectiona[i ] , nbr1[l ] , newvertex2 ) ; graph = addedge(graph , newvertex1 , newvertex2 ) ; if ( findkwheel(graph , 7 , 0 , 0 ) = = null ) { printf("exception:\n " ) ; printgraph(graph ) ; } graph = removeedge(graph , newvertex1 , newvertex2 ) ; graph = contractvertex(graph , newvertex2 ) ; } } graph = contractvertex(graph , newvertex1 ) ; } } } } } ........ / * is2connected : returns 1 if the graph starting at input vertex * * ' head ' is 2-connected .returns 0 otherwise . */ int is2connected(graph * graph ) { int visited[maxgraphsize ] ; int dfnumber[maxgraphsize ] ; int low[maxgraphsize ] ; int father[maxgraphsize ] ; int count = 0 ; int i = 0 ; for ( i=0;i < maxgraphsize;i++ ) { visited[i ] = 0 ; dfnumber[i ] = -1 ; low[i ] = -1 ; father[i ] = -1 ; } / * find the first vertex in the graph .* / i = 0 ; while ( graph->vertices[i ] = = null ) i++ ; return ( is2conn_rec(graph , i , visited , dfnumber , low , father , & count ) ) ; } / * is2conn_rec : recursive function used by is2connected . * / static int is2conn_rec(graph * graph , int v_id , int visited [ ] , int dfnumber [ ] , int low [ ] , int father [ ] , int * count ) { vertex * v = graph->vertices[v_id ] ; vertex * w ; int w_id ; int i = 0 ; visited[v_id ] = true ; dfnumber[v_id ] = * count ; ( * count)++ ; low[v_id ] = dfnumber[v_id ] ; while ( i < v->degree ) { w_id = v->neighbours[i ] ; w = graph->vertices[w_id ] ; if ( w = = null ) { printf("error : vertex connected to vertex that does n't exist.\n " ) ; exit(1 ) ; } if ( visited[w_id ] = = false ) { father[w_id ] = v_id ; if ( !is2conn_rec(graph , w_id , visited , dfnumber , low , father , count ) ) return false ; if ( low[w_id ] > = dfnumber[v_id ] & & ( ( dfnumber[v_id ] != 0 ) || i > 0 ) ) return false ; low[v_id ] = min(low[v_id ] , low[w_id ] ) ; } else if ( father[v_id ] != w_id ) { low[v_id ] = min(low[v_id ] , dfnumber[w_id ] ) ; } i++ ; } return true ; } ........ / * iskwheel : takes a graph ' graph ' and an integer k. if the input * * graph is the graph w_{k } once all vertices of degree 2 have been * * contracted , then the function returns a copy of the input graph * * with all such vertices contracted . 
* * if the input graph is not the graph w_{k } , the function returns * * the null pointer .* / graph * iskwheel(graph * graph , int k ) { int countk = 0 ; int count3 = 0 ; int countcontracted = 1 ; int i=0 ; vertex * v ; graph * newgraph ; if ( !is2connected(graph ) ) return null ; / * graph must be 2-connected .* / newgraph = graphcpy(graph ) ; / * for each vertex in the graph : if v is of degree 2 , contract v * / while ( countcontracted ! = 0 ) { i = 0 ; countcontracted = 0 ; while ( i < = newgraph->highestid ) { v = newgraph->vertices[i ] ; if ( v != null ) { if ( v->degree = = 2 ) { newgraph = contractvertex(newgraph , i ) ; countcontracted++ ; } } i++ ; } } / * if v is degree 3 , increment counter of degree 3 vertices * * if v is of degree k , increment counter of degree k vertices * / i = 0 ; while ( i < = newgraph->highestid ) { v = newgraph->vertices[i ] ; if ( v ! = null ) { if ( v->degree = = 3 ) count3++ ; else if ( v->degree = = k ) countk++ ; } i++ ; } / * number of vertices ( not including those of degree 2 which * * were contracted ) must equal k+1 for graph to be w_{k}-subdivision . */ if ( newgraph->size != k+1 ) { killgraph(newgraph ) ; return null ; } / * must be k vertices of degree 3 and 1 vertex of degree k. * / if ( count3 = = k & & countk = = 1 ) return newgraph ; / * special case for w_{3 } , where there are 4 vertices of degree 3 .* / else if ( k = = 3 & & count3 = = 4 ) return newgraph ; / * if graph is not a w_{k}-subdivision , return null .* / else { killgraph(newgraph ) ; return null ; } } ........ / * findkwheel : takes as input a graph and an integer k. if * * the input graph contains a w_{k } subdivision , the function * * returns the input graph contracted to be w_{k}. otherwise , * * the null pointer is returned . * / i = 0 ; while ( i < = graph->highestid ) { v = graph->vertices[i ] ; if ( v! = null ) { if ( v->degree = = 0 ) { graph = removevertex(graph , i ) ; } } i++ ; } / * if the graph is not w_{k } , the function is called recursively * * to test the removal of every possible combination of edges * * between vertices higher than startvertex1 and startvertex2 . * * if some combination of edge removal results in w_{k } , the program * * exits the while loop and returns successfully . ** once there are too few edges or vertices left in the graph , or * * once all edge removal possibilities have been tried , the function * * returns null .* / if ( ( subgraph = iskwheel(graph , k ) ) = = null ) / * only do this if the graph is n't w_{k } * / { / * too few edges to be able to remove any more , * * or too few vertices to make w_{k } * / if ( graph->edges < = 2*k || graph->size < k+1 ) return null ; / * or no vertex left of degree at least k * / for ( i = 0 ; i < = graph->highestid ; i++ ) { if ( graph->vertices[i ] != null & & graph->vertices[i]->degree > = k ) break ; } if ( i > graph->highestid ) return null ; i = startvertex1 ; / * removing each edge in turn : * / while ( i < = graph->highestid ) { if ( i = = startvertex1 ) // first time through outer loopj = max(i , startvertex2 ) ; else j = i ; while ( j < = graph->highestid ) { if ( edgeexists(graph , i , j ) & & j >i ) / * j >i check ensures only edges in one direction are detected * / { subgraph = graphcpy(graph ) ; / * call the function recursively on the graph with one fewer edge . * / if ( ( foundwheel = findkwheel(removeedge(subgraph , i , j ) , k , i , j ) ) != null ) { return foundwheel ; } killgraph(subgraph ) ; } j++ ; } i++ ; } return null ; } return subgraph ; } ....
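as a usage illustration only ( this driver is not part of the original appendices; it assumes the supporting ` graph ` type, the behaviour of ` initialise_graph ` , and the hypothetical header name, and otherwise uses only functions shown above or in the online listing ), the appendix code might be exercised as follows :
....
#include "graph.h"   /* hypothetical header declaring the graph type and functions */

/* hypothetical driver: build the wheel with five spokes explicitly and  *
 * ask findkwheel whether it contains a w_{5}-subdivision (it does).     */
int main(void)
{
    graph *g = NULL;
    graph *found;
    int i;

    g = initialise_graph(g);           /* mirrors the pattern used in makewk */

    for (i = 0; i <= 5; i++)           /* centre vertex 0 plus rim vertices 1..5 */
        g = addvertex(g, i);
    for (i = 1; i <= 5; i++)           /* spokes */
        g = addedge(g, 0, i);
    for (i = 1; i < 5; i++)            /* rim */
        g = addedge(g, i, i + 1);
    g = addedge(g, 5, 1);

    found = findkwheel(g, 5, 0, 0);
    if (found != NULL)
    {
        printgraph(found);             /* the contracted wheel that was found */
        killgraph(found);
    }
    return 0;
}
....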
|
practical algorithms for solving the subgraph homeomorphism problem are known for only a few small pattern graphs : among these are the wheel graphs with four , five , six , and seven spokes . the length and difficulty of the proofs leading to these algorithms increase greatly as the size of the pattern graph increases . proving a result for the wheel with six spokes requires extensive case analysis on many small graphs , and even more such analysis is needed for the wheel with seven spokes . this paper describes algorithms and programs used to automate the generation and testing of the graphs that arise as cases in these proofs . the main algorithm given may be useful in a more general context , for developing other characterizations of shp - related properties .
|
the size and complexity of software systems have increased tremendously. therefore, the development of high - quality software requires rigorous application of sophisticated software engineering methods. one such method which has become very popular is the unified modeling language. uml has been developed by the `` three amigos '' booch, jacobson, and rumbaugh as a common framework for designing and implementing object - oriented software. uml contains many different notations to describe the static and dynamic behavior of a system on all levels and in all phases of the software design process. although uml provides a common notational framework for requirements and design, uml, like any other language, does not eliminate bugs and errors. these bugs must be found and fixed in order to end up with a correctly working and reliable system. it is well known that debugging a large software system is a critical issue and can be a major cost - driving factor. changes which have to be applied to the system ( e.g., to fix a bug ) become substantially more expensive the later they are detected ( figure [ fig : bugcosts ] ). when an error is detected early, during the definition phase, its cost is relatively low, because it only influences the requirements definition. bugfixes in a product already shipped can be up to 60 - 100 times more expensive. therefore, it is mandatory to start with debugging as early in the project as possible. in this paper, we discuss an approach which supports debugging of scenarios ( more precisely, uml sequence diagrams ) with respect to given domain knowledge. this is done as part of an algorithm which can synthesize uml statecharts from a number of sequence diagrams. this synthesis step can be seen as a transformation from requirements to system design. it not only facilitates fast and justifiable design from requirements ( sequence diagrams ), but also substantially helps to debug the generated designs. because sequence diagrams usually cover only parts of the system's intended behavior, the generated statecharts need to be refined and modified manually. by applying the synthesis algorithm in a `` backward '' way, the refined statechart can be checked against the requirements. each conflict is reported to the user and indicates a bug. for practical applicability of any debugging aid, the presentation of the bug, its cause and its effect is of major importance. in our approach, we rely on logic - based explanation technology : all conflicts correspond to failures in logical reasoning about sequence diagrams, statecharts, and domain knowledge. ongoing work, as discussed in the conclusions, uses methods from automated deduction to point the user to the exact place where the conflict occurred and to the parts of the models and specification that are affected. this paper is organized as follows : section 2 gives an overview of major uml notations and a typical iterative software design process. then we describe how sequence diagrams are annotated for a justified synthesis of statecharts ( section 4 ). based on this algorithm we discuss methods for debugging a sequence diagram and a synthesized statechart. in section 7 we discuss future work and conclude.
throughout this paper, we will use one example to illustrate our approach .the example concerns the interaction between an espresso vending machine and a user who is trying to obtain a cup of coffee .this example ( based on the atm example discussed in ) is rather small , yet complex enough to illustrate the main issues .the requirements presented here are typical scenarios for user interaction with the machine ( e.g. , inserting a coin , selecting the type of coffee the user wants , reaction on invalid choices , and pressing the cancel button ) .more details of the requirements will be discussed when the corresponding uml notations have been introduced .the unified modeling language is the result of an effort to bring together several different object - oriented software design methods .uml has been developed by booch , jacobson and rumbaugh and has gained wide - spread acceptance .a variety of tools support the development in uml ; among them are rhapsody , rational s rose , or argo / uml . on the top - level ,requirements are usually given in the form of _ use cases _ , describing goals for the user and system interactions .for more detail and refinement , uml contains three major groups of notations : _ class diagrams _ for describing the static structure , _ interaction diagrams _ for requirements , and _ state diagrams _ and _ activity diagrams _ for defining dynamic system behavior .below , we will illustrate the notations which are important for our approach to debugging of uml designs .although no explicit development process is prescribed for uml , uml design usually follows the steps of inception , elaboration , construction , and transition , used in an iterative manner . in this paper, we will not elaborate on the process model . for details , cf ., e.g. , .the importance of support for debugging of uml designs on the level of sequence diagrams ( requirements ) , and statecharts becomes evident , when we look at a graphical representation of an iterative development process ( figure [ fig : process ] ) .the design starts by analyzing the ( physical ) process at the lower left part of the figure .the result of the analysis comprises the requirements ( e.g. , as a set of sequence diagrams ) , and _ knowledge _ about the domain ( henceforth called domain theory ) .based on these , a _ model _ of the system is developed , consisting of class diagrams , statecharts and activity diagrams .this model must now be implemented .modern software engineering tools provide automatic code - generation ( or at least support ) for this step .finally , the produced system must be verified against the physical process , and its performance tuned .traditionally , the way to get a working system is simulation ( process requirements model ) , and testing ( requirements model system ) . here , errors and bugs have to be found and removed . within an _ iterative _ design process ,these steps are performed over and over again , depicted by the circular arcs . 
to keep these iterations fast ( and thus cost - effective ) , powerful techniques for _ debugging requirements _ against domain knowledge , and models against requirements are vital .our approach supports this kind of debugging and it will be discussed in the next section , following a short description of the basic concepts of class diagrams , sequence diagrams , and statecharts .a _ class diagram _ is a notation for modeling the static structure of a system .it describes the classes in a system and the relationships between them .figure [ fig : class : atm1 ] shows an example of a class diagram for our coffee - vending machine example . in an object - oriented fashion , the main class ( here `` coffee machine '' ) is broken down into sub - classes .the aggregation relation ( ) shows when one class is _ part of _ another one .the generalization relation ( ) shows when one class is _ an instance of _ another . for further details , see e.g. , ._ statecharts _ , are finite state machines extended with hierarchy and orthogonality .they allow a complex system to be expressed in a compact and elegant way .figure [ fig : sc - general ] shows a simple example of a statechart .nodes can either be simple nodes ( a1 , a2 , a3 , b , and c ) , or composite nodes ( node a in the figure ) which themselves contain other statecharts .the initial node in a statechart is marked by .transitions between states have labels of the form /a ] denotes the element at position in ( similarly for ) .in the first step of the synthesis process , we assign values to the variables in the state vectors as shown in figure [ extend - sv ] .the variable instantiations of the initial state vectors are obtained directly from the message specifications ( lines 1,2 ) : if message assigns a value to a variable of the state vector in its pre- or post - condition , then this variable assignment is used .otherwise , the variable in the state vector is set to an undetermined value .since each message is specified independently , the initial state vectors will contain a lot of unknown values .most ( but not all ) of these can be given a value in one of two ways : two state vectors , and ( ) , are considered the same if they are unifiable ( line 6 ) .this means that there exists a variable assignment such that .this situation indicates a potential loop within a sd .the second means for assigning values to variables is the application of the frame axiom ( lines 8,9 ) , i.e. , we can assign unknown variables of a pre - condition with the value from the preceeding post - condition , and vice versa .this means that values of state variables are propagated as long as they are not changed by a specific pre- or post - condition .this also assumes that there are no hidden side - effects between messages .a conflict ( line 11 ) is detected and reported if the state vector immediately following a message and the state vector immediately preceding the next message differ . __ an annotated sd + _ output . _a sd with extended annotations 1 * for * each message * do * + 2 has a precond * then * : = y ] * fi * + 3 has a postcond * then * : = y ] * fi * + 4 each state vector * do * + 5 there is some and = some unifier with * then * + 6 unify and ; + 7 propagate instantiations with frame axiom : + 8 with : * if * = \ , ? ] * fi * + 9 = \ ,? 
] * fi * + 10 there is some with \neq s^{{\mbox{{\em\footnotesize pre}}}}_{i+1}[j]$ ] * then * + 11 report conflict ; + 12 ; + let us consider how this algorithm operates on the first few messages of sd1 from figure [ atmbadac ] .when annotating the first message ( `` display ready light '' ) , we obtain the following state vector on the side of the user - interface : .the values of the first two state variables are determined by the message s pre - condition in the domain theory .the state - vector on the receiving side of our message only consists of `` ? '' . as a pre - condition for the message `` insert coin '' we have coininmachine = f.thus we have as the state vector .all other messages in sd1 are annotated in a similar way .now , our algorithm ( lines 412 ) tries to unify state vectors and propagate the variable assignments . in our case, the attempt to unify with would assign the value f to the first variable in , yielding .now , both state vectors are equal . then , variable values are propagated using the frame axiom . in our case, we can propagate the value of coininreturnslot = f ( from ) into and , because the domain theory does not prescribe specific values of this state variable at these messages .hence , its current value f can be used in the other state vectors , finally yielding . after performing all unification and propagation steps ,we obtain an annotated sequence diagram as shown in figure [ atm - extended ] .the conflict indicated there will be discussed in the next section .the algorithm from the previous section detects conflicts of a sd with the domain theory ( and thus with other sequence diagrams ) .any such conflict which is detected corresponds to a bug which needs to be fixed .the bug can be in the sequence diagrams , which means that one or more sequences of actions are not compatible with the domain theory , and henceforth with other sds .such a situation often occurs when sequence diagrams and domain theory for a large system are developed by different requirements engineers .our algorithm is capable of directly pointing to the location where the conflict with the domain theory occurs . the respective message , together with the instantiated pre- and post - conditions , as well as the required state vector values are displayed .this feature allows to easily debug the sequence diagram . of course, the error could be in the domain theory instead .for example , one designer could have set up pre- or post - conditions which are too restrictive to be applicable for scenarios , specified by other designers . in that case , the domain theory must be debugged and modified . our algorithm can also provide substantial support here , because it is able to display the exact location where the conflicting state variables have been instantiated . especially in long sequence diagrams the place where a state variable is instantiated and the place where the conflict occurs can be far apart .the current version of our algorithm provides only rudimentary feed - back as demonstrated in the example below. 
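before turning to that example, the annotation step of figure [ extend - sv ] can be made more concrete with a small sketch of the frame - axiom propagation and the conflict check on the state vectors of a single object. the fragment below is our own illustration, not the actual implementation ( which was written in java ); the encoding of variable values as integers with an unknown sentinel and all names used are assumptions, and the unification step that detects loops is omitted for brevity.
....
#include <stdio.h>

#define NVARS    3          /* state variables per state vector             */
#define NMSGS    4          /* messages in the (toy) sequence diagram       */
#define UNKNOWN (-1)        /* entry whose value has not been determined    */

/* pre[i] / post[i] are the state vectors immediately before / after       *
 * message i; the pre- and post-conditions of the messages fix some        *
 * entries, all others start out as UNKNOWN                                */
static int pre [NMSGS][NVARS];
static int post[NMSGS][NVARS];

/* propagate determined values with the frame axiom, both across a message *
 * (a message leaves unmentioned variables unchanged) and between the      *
 * post-vector of one message and the pre-vector of the next; repeat       *
 * until nothing changes any more                                          */
static void propagate(void)
{
    int i, j, changed = 1;
    while (changed) {
        changed = 0;
        for (i = 0; i < NMSGS; i++)
            for (j = 0; j < NVARS; j++) {
                if (post[i][j] == UNKNOWN && pre[i][j] != UNKNOWN)
                    { post[i][j] = pre[i][j]; changed = 1; }
                if (pre[i][j] == UNKNOWN && post[i][j] != UNKNOWN)
                    { pre[i][j] = post[i][j]; changed = 1; }
                if (i + 1 < NMSGS) {
                    if (pre[i + 1][j] == UNKNOWN && post[i][j] != UNKNOWN)
                        { pre[i + 1][j] = post[i][j]; changed = 1; }
                    if (post[i][j] == UNKNOWN && pre[i + 1][j] != UNKNOWN)
                        { post[i][j] = pre[i + 1][j]; changed = 1; }
                }
            }
    }
}

/* a conflict exists wherever the vector after message i and the vector    *
 * before message i+1 are both determined but differ                       */
static int report_conflicts(void)
{
    int i, j, n = 0;
    for (i = 0; i + 1 < NMSGS; i++)
        for (j = 0; j < NVARS; j++)
            if (post[i][j] != UNKNOWN && pre[i + 1][j] != UNKNOWN &&
                post[i][j] != pre[i + 1][j]) {
                printf("conflict between msg %d and msg %d in variable %d\n",
                       i, i + 1, j);
                n++;
            }
    return n;
}

int main(void)
{
    int i, j;
    for (i = 0; i < NMSGS; i++)
        for (j = 0; j < NVARS; j++)
            pre[i][j] = post[i][j] = UNKNOWN;

    /* toy data loosely modelled on the coffee-machine example: variable 0 *
     * plays the role of coffeetypeselected                                */
    pre [1][0] = 0;   /* a pre-condition demands the variable to be false  */
    post[2][0] = 1;   /* a later post-condition sets it to true            */
    pre [3][0] = 0;   /* the next pre-condition demands it false again     */

    propagate();
    return report_conflicts() == 0 ? 0 : 1;
}
....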
future work ( which also allows richer ocl constructs to be used ) requires more elaborate , human - readable descriptions of the error trace .automated theorem provers and work on proof presentation , like the ilf system will be used for that purpose .such a system will not only _ explain _ the possible reasons for a conflict , but can also give ( heuristics - driven ) hints to the user on how to fix the problem .the following example shows , how conflict detection can be used for debugging : figure [ atm - extended ] shows sd1 from figure [ atmbadac ] after the state vectors have been extended by our algorithm of figure [ extend - sv ] .our procedure has detected a conflict with the domain theory . as an output it provides the messages and state vectors which are involved in the conflict : .... conflict in sd1 : object coffee - ui statevector after " insert coin " = < t , f , t,1,none > [ msg 2 ] statevector before " request selection " = < t , f , f,1,none > [ msg 3 ] conflict in variable " coffeetypeselected " conflict occurred as consequence of unification of statevector after " display ready light " = < f , f , t,0,none > [ msg 1 ] statevector after " display ready light " = < f , f , t,0,none > [ msg 11 ] statevector after" take coin " = < f , f , t,0,none > [ msg 10 ] .... this arises because state vectors sv1 ( state vector before `` display ready light '' ) and sv2 ( after `` take coin '' ) are unified ( figure 9 shows the instantiations of the vectors after unification ) .this corresponds to the fact that the coffee machine returns to its initial state after `` take coin '' is executed .the state vectors tell us that there is a potential loop at this point .a second execution of this loop causes the state variable `` coffeetypeselected '' to true , when the system asks for a selection .however , the domain theory tells us that this variable must be false as a pre - condition of the `` request selection '' message .hence , there is a conflict , which represents the fact that the developer probably did not account for the loop when designing the domain theory .the user must now decide on a resolution of this conflict i.e. , to debug this situation .the user either * can tell the system that the loop is not possible , in which case the unifier that detected the loop is discarded .this amounts to modifying the annotated sequence diagram ( by restricting possible interpretations ) .the user can * modify the sequence diagram at some other point , e.g. , by adding messages ; or * modify the domain theory . in our example, the action taken might be that the domain theory is updated by giving `` release coin '' the additional postcondition coffeetypeselected = false .this extra post - condition resets the value of the variable ( i.e. , the selection ) when the user is asked to remove the coin .the position of the change has been obtained by systematically going backwards from sv2 .although possible locations are automatically given by the system , the decision where to fix the bug ( at `` release coin '' or at `` take coin '' ) must be made by the user . here ,the second possibility was chosen , because the specification for that message modified a state variable which is related to the variable which caused the conflict .when the statechart synthesis algorithm successfully terminates , it has generated a human - readable , hierarchically structured statechart , reflecting the information contained in the sds and the domain theory . 
in general , however , sequence diagrams usually describe only parts of the intended dynamic behavior of a system . therefore , the generated statechart can only be a _ skeleton _ rather than a full - fledged system design .thus , the designer usually will extend , refine , and modify the resulting statechart manually .our approach takes this into account by generating a well structured , human - readable statechart which facilitates manual refinement and modification. however , these manual actions can be sources of errors which will have to be found and removed from the design . in the following ,we describe two approaches , addressing this problem .the traditional way to find bugs in a statechart is to run simulations and large numbers of test cases .most commercial tools for statecharts , like betterstate , statemate , or rhapsody support these techniques .some tools also provide more advanced means for analysis , like detection of deadlocks , dead branches , non - deterministic choices , or even model checking for proving more elaborate properties . in this paper, we will not discuss these techniques .whenever a design ( in our case the statechart ) is modified , care must be taken that all requirements specifications are still met , or that an appropriate update is made .traditionally , this is done manually by updating the requirements document ( if it is done at all ) .bugs are usually not detected ( and not even searched for ) until the finished implementation is tested .thereby , late detection of bugs leads to increased costs . by considering the `` reverse '' direction of our synthesis algorithm , we are able to * check that all sequence diagrams are still valid , i.e. , that they represent a possible sequence of events and actions of the system * detect conflicts between the current design ( statechart ) and one or more sds , and * detect inconsistencies with respect to the domain theory .the basic principle of that technique is that we take one sequence diagram after the other , together with the domain theory , and check if that sequence of messages is a possible execution sequence in the given statechart . hereagain we use logic - based techniques , similar to those described above ( unification of state vectors , value propagation with the frame axiom ) .an inconsistency between the ( modified ) statechart and the sd indicates a bug ( in the sd or sc ) . by successively applying patches to the sd ( by removing or adding messages to the sd ) the algorithm searches for possible ways to obtain an updated and consistent sd . since in general more than one possible fix for an inconsistency exists , we perform an iterative deepening search resulting in a solution with the fewest modifications to the sequence diagram .we are aiming to extend this search by applying heuristics to select `` good '' fixes . here again , the form of feed - back to the user is of major importance .we are envisioning that the system can update the requirements and provide explanations for conflicts in a similar way as described above .the statechart in figure [ fig : sc - deb ] has been refined .the transition between and has been extended in such a way that first event , then with action has to occur before the state is reached .the original statechart has been generated from a sequence diagram as shown on the right - hand side of fig .the modification of the statechart is propagated back to the sequence diagrams where the change is clearly marked . 
in this example , the extension could be made without causing a conflict . however , it is advisable for the designer and/or the requirements engineer to carefully observe these changes in order to make sure that these modified requirements still meet the original intended system behavior . we have presented a method for debugging uml sequence diagrams and statecharts during early stages in the software development process . based on an algorithm designed for justified synthesis of statecharts , we have identified two points where conflicts ( as a basis for debugging ) can be detected : while extending the annotations of an sd ( conflicts w.r.t . the domain theory ) , and while updating sequence diagrams based upon a refined or modified statechart . the algorithm which is described in has been implemented in java and has been used for several smaller case studies in the area of object - oriented systems , user interfaces , and agent - based systems . current work on this part includes integrating this algorithm into a commercial uml tool ( magicdraw ) . currently we are extending our synthesis algorithm to provide the debugging facilities described in this paper . future work will mainly focus on integrating and extending explanation technology into our system . debugging large designs with lengthy and complex domain theories vitally depends upon an elaborate way of providing feed - back to the user . starting from the basic information about a conflict ( i.e. , a failed unification ) , we will use theorem proving techniques of abduction and counter - example generation to provide as much feed - back as possible on where the bug might be , and how to fix the problem . these techniques will be combined with tools capable of presenting a logic statement in a human - readable , problem - specific way ( e.g. , ilf ) . such debugging aids will be accepted in practice only if debugging feedback can be given in the notation of the engineering domain rather than in some logic framework . it is believed that uml ( and tools based upon this notation ) will have a substantial impact on how software development is done . by providing techniques which not only facilitate design by synthesis but also provide powerful means to debug requirements and designs in early stages , we are able to contribute to tools which are useful in the design of large software systems .
|
design of large software systems requires rigorous application of software engineering methods covering all phases of the software process . debugging during the early design phases is extremely important , because late bug - fixes are expensive . in this paper , we describe an approach which facilitates debugging of uml requirements and designs . the unified modeling language ( uml ) is a set of notations for object - oriented design of a software system . we have developed an algorithm which translates requirement specifications in the form of annotated sequence diagrams into structured statecharts . this algorithm detects conflicts between sequence diagrams and inconsistencies in the domain knowledge . after synthesizing statecharts from sequence diagrams , these statecharts are usually subject to manual modification and refinement . by using the `` backward '' direction of our synthesis algorithm , we are able to map modifications made to the statechart back into the requirements ( sequence diagrams ) and check for conflicts there . conflicts detected by our algorithm are fed back to the user and form the basis for deduction - based debugging of requirements and the domain theory in very early development stages . our approach makes it possible to generate explanations of why there is a conflict and which parts of the specifications are affected .
|
quantile regression , introduced by , generalizes the notion of sample quantiles to linear and nonlinear regression models including the least absolute deviation estimation as its special case .the method provides an estimation of conditional quantile functions at any probability levels and it is well known that the family of estimated conditional quantiles sheds a new light on the impact of covariates on the conditional location , scale and shape of the response distribution : see .quantile regression has been widely used to analyze time series data as an alternative to the least squares method ( see ) since it is not only robust to heavy tails but also allows a flexible analysis of the covariate effects .especially , in risk management , it is also a functional tool to calculate the value - at - risk ( var ) .quantile regression has been studied in linear and nonlinear autoregressive models by , , , and : see also and , who handled ` linear ' autoregressive conditional heteroscedasticity ( arch ) and generalized arch ( garch ) models , and who considered ordinary garch models . considered the quantile regression method for a broad class of time series models and designated the conditional autoregressive var ( caviar ) model .although the results of are applicable to a wide class of time series models , the caviar specification therein mainly focuses on the case of pure volatility models , as pointed out by and . unlike the previous studies dealing with the models having either conditional location or scale components , in this study , we take an approach to simultaneously estimate the conditional mean and variance through the quantile regression method . and explored the quantile regression for location - scale models without autoregressive structure and proposed a robust test for heteroscedasticity .this paper focuses on the quantile regression for a wide class of conditional location - scale time series models including the arma models with asymmetric garch ( agarch ) errors in which the dynamic relation between current and past observations is characterized in terms of a conditional mean and variance structure .typically , the conditional mean is assumed to follow an either ar or arma type model and the conditional volatility is assumed to follow a garch type model ( ) . here, we demonstrate that the quantile regression can be extended to conditional location - scale models rather than mean - variance models through a slight modification , and as such , the estimation of the conditional location and scale can be properly carried out .more precisely , to activate the proposed method , we remove the constraints imposed on the mean and variance of the model innovations and reformulate the mean - variance model to become the conditional location - scale model described in section [ subsec22 ] .it is noteworthy that the reformulated models to incur the quantile regression estimation are exactly the same as those in ( 1.3 ) of who pointed out that non - gaussian quasi - maximum likelihood ( qml ) estimators may be inconsistent in the usual conditional mean - variance models and instead proposed location - scale models to remedy an asymptotic bias effect . from this angle , it may be mentioned that our quantile regression method is comparable with other estimation methods like the gaussian and non - gaussian qml estimation methods . 
in this study , we intend to verify the strong consistency and asymptotic normality of quantile regression estimators in general conditional location - scale time series models .particularly , in the derivation of the -consistency , one has to overcome the difficulty caused by the lack of smoothness of the quantile regression loss function . to resolve this problem, we adopt the idea of and and extend lemma 3 of to stationary and ergodic time series cases ; see section [ subsec23 ] and lemma [ br lemma ] in the appendix for details .to apply the obtained results in general models to the arma - agarch model , we deduce certain primitive conditions leading to the desired asymptotic properties . here , the task of checking the identifiability condition appears to be quite demanding and accordingly a newly designed technique is proposed : see remark [ identifiability proof remark ] below . in comparison to ,our approach has merit in its own right .first , a weaker moment condition is used to obtain the asymptotic normality : for instance , in the arma - agarch model , only a finite second moment condition is required while a third moment condition is demanded in their paper .second , more basic conditions such as strict stationarity and ergodicity of models are assumed in our case rather than the law of large numbers and central limit theorems assumed in their paper : however , more general data generating processes are considered therein .third , our parametrization of conditional quantile functions exhibits a more explicit relationship with the parametrization of original models .finally , a general identifiability condition is provided for the arma - agarch model and is rigorously verified .the rest of this article is organized as follows . in section [ sec2 ], we introduce the general conditional location - scale time series models and establish the asymptotic properties of the quantile regression estimator . in section [ sec3 ] , we verify the conditions for the strong consistency and asymptotic normality in the arma - agarch model . in section [ sec4 ] , we report a finite sample performance of the estimator in comparison with the gaussian - qmle . in section [ sec5 ] ,we demonstrate the validity of our method by analyzing the daily returns of hong kong hang seng index .all the proofs are provided in the appendix and the supplementary material .before we proceed to general conditional location - scale models ( see ( [ loc - scale model ] ) below ) , we first illustrate conditional quantile estimation for the ar()-arch( ) model : where are i.i.d .random variables with and . inwhat follows , we denote by the -field generated by .provided that is independent of , the conditional quantile of given can be expressed as where .since the quantile of is unknown , it is apparent that the parameters in ( [ cond quantile of ar - arch ] ) are not identifiable . as in ,this problem can be overcome by reparameterizing the arch component as follows : with , and . 
here, is only proportional to the conditional standard deviation , and thus , can be interpreted to be a conditional scale : this reparameterization procedure expresses the arch model as a conditional scale model with no scale constraints on the i.i.d .the conditional quantile in this case is then expressed as wherein the parameters can be shown to be identifiable : see lemma [ identification lemma ] that deals with more general arma - agarch models .in fact , the condition in ( [ ar - arch eqn ] ) is not necessarily required to deduce the conditional quantile function , since the conditional quantile specification in ( [ cond quantile of repra ar - arch ] ) is also valid for the ar()-arch( ) model without assuming this condition .as seen in section [ subsec23 ] , conditional quantile estimators and their asymptotic properties are irrelevant to the location constraint on , and thus , the condition of is not needed for estimating conditional quantiles .an analogous approach will be taken to handle the quantile regression for general location - scale models .let us consider the general conditional location - scale model of the form : where and respectively denote and for some measurable functions ; denotes the true model parameter ; is a model parameter space ; are i.i.d .random variables with an unknown common distribution function . many conditionally heteroscedastic time series models can be described by the autoregressive representation addressed in ( [ loc - scale model ] ) .for example , the reparameterized ar()-arch( ) model in section [ subsec21 ] can be expressed as a form of ( [ loc - scale model ] ) with , and .further , it can be readily seen that invertible arma models and stationary garch models also admit the form of ( [ loc - scale model ] ) : see theorem 2.1 of for the latter . in section [ sec3 ] , the arma - agarch model will be expressed as a form of ( [ loc - scale model ] ) . in order to facilitate the conditional quantile estimation , model ( [ loc - scale model ] ) is assumed to be a reparameterized version of the time series models as discussed in section [ subsec21 ] , and as such , the innovation distribution is not assumed to have zero mean and unit variance and is interpreted to be a relative conditional scale rather than variance . however , restricted to arma - agarch models in sections [ sec3][sec5 ] , we focus on the case of considering the popularity in practice .in what follows , the following conditions are presumed : * satisfying ( [ loc - scale model ] ) is strictly stationary and ergodic . * is independent of for .conditions * ( m1 ) * and * ( m2 ) * hold for a broad class of time series models .for example , verified that the garch model is strictly stationary if and only if its lyapunov exponent is negative , which actually entails * ( m2)*. provided sufficient conditions for the stationarity and ergodicity in general conditional variance models . provided such conditions in nonlinear ar models with garch errors . in section [ sec3 ] , we specify some conditions for the arma - agarch model to admit the autoregressive representation in ( [ loc - scale model ] ) and also to satisfy * ( m1 ) * and * ( m2)*. 
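To make the reparameterized AR(1)-ARCH(1) example of section [subsec21] concrete, the following sketch (Python) evaluates its conditional tau-quantile. The scale intercept is normalized to one here so that the innovation quantile absorbs the overall scale; this is one common way to remove the scale indeterminacy and is not necessarily the exact normalization used in the paper. All parameter values are made up.

....
# Sketch: conditional tau-quantile of the reparameterized AR(1)-ARCH(1) model,
#     Q_tau(y_t | past) = phi0 + phi1*y_{t-1} + h_t * xi_tau,
# with conditional scale h_t = sqrt(1 + a1*y_{t-1}^2), so that xi_tau absorbs
# the overall scale of the innovation.  Parameter values are made up.
import numpy as np

def cond_quantile(y_prev, phi0, phi1, a1, xi_tau):
    h = np.sqrt(1.0 + a1 * y_prev**2)          # conditional scale h_t
    return phi0 + phi1 * y_prev + h * xi_tau

theta = dict(phi0=0.1, phi1=0.3, a1=0.2)
xi_05 = 0.7 * (-1.6449)    # 5% quantile of a normal innovation with scale 0.7

for y_prev in (-2.0, 0.0, 2.0):
    q = cond_quantile(y_prev, xi_tau=xi_05, **theta)
    print(f"y_(t-1) = {y_prev:+.1f}  ->  Q_0.05(y_t | past) = {q:+.3f}")
....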
under * ( m2 ) * , the quantile of conditional on the past observations is given by for , wherein the innovation quantile appears as a new parameter .we denote by the true parameter vector .note that the conditional quantile can be expressed as a function of the infinite number of past observations and parameter .then , taking into consideration the form of , given the stationary solution to model ( [ loc - scale model ] ) and a parameter vector , we introduce conditional quantile functions : where is a parameter within a domain that allows the above autoregressive representation . in practice , since is unobservable , we can not obtain .thus , we approximate them with observable . a typical example is , where all with are put to be 0 : see .one can also use a model specific approximation as in section [ sec3 ] .then , the quantile regression estimator of for model ( [ loc - scale model ] ) is defined by where is a parameter space , , and denotes the indicator function .in this subsection , we show the strong consistency and asymptotic normality of the quantile regression estimator defined in ( [ def of qre ] ) .the result is applicable to various mean - variance time series models including the arma - agarch model handled in section [ sec3 ] .the asymptotic properties are proved by utilizing the affinity between and , similarly to the case of the qml estimator in garch - type models : see , , , , and the references therein .however , the asymptotic normality is derived in a nonstandard situation , as discussed below , owing to the non - differentiability of the loss function . in what follows, we define for matrix . to verify the consistency of , we introduce the following assumptions : ( c1 ) : : the quantile of is unique , that is , for all .( c2 ) : : belongs to which is a compact subset of .( c3 ) : : \(i ) is continuous in a.s . ;( ii ) < \infty ] , + ( iii ) < \infty ] for the agarch( ) case .it also follows from the theorem that is a function of and has the following arch( ) representation where for and .given the stationary agarch process , assumption * ( a2 ) * below implies that is stationary and ergodic , and has the ar( ) representation : where for : see .combining ( [ h_t^2 representation ] ) and ( [ f_t representation ] ) , model ( [ arma model eqn])([agarch model eqn ] ) is shown to admit the autoregressive representation in ( [ loc - scale model ] ) .in addition , it follows from * ( a2 ) * that is a function of , so is measurable with respect to the -field generated by .therefore , since , * ( m2 ) * is satisfied , and then , the quantile of conditional on is given by , where is the quantile of , , and given in ( [ h_t^2 representation ] ) . to estimate the conditional quantiles of , we now construct the quantile regression estimator of .denote by a parameter vector which belongs to .if the parameter space satisfies assumption * ( a4 ) * below , given the stationary solution and , we can define the stationary processes , and consecutively as follows : for .then , it can be seen that . in practice , ( ) can not be computed excepting the ar()-asymmetric arch( ) model case as mentioned in section [ subsec22 ] . to compute an approximated conditional quantile function , we define , and by using the same equations ( [ def of eps_t(varphi)])([def of agarch q_t(theta ) ] ) for and by setting the initial values , , and for . 
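The truncated recursions that produce the approximated conditional quantile function can be written out directly. The sketch below (Python) does this for an ARMA(1,1)-AGARCH(1,1) specification, assuming an asymmetric recursion of the form h_t^2 = omega + alpha1*(|e_{t-1}| - gamma*e_{t-1})^2 + beta1*h_{t-1}^2 (other AGARCH variants exist) and simple constant pre-sample values; the precise initial values and notation of the paper are not reproduced, and all numbers are illustrative.

....
# Sketch of the truncated recursions behind the approximated conditional
# quantile of an ARMA(1,1)-AGARCH(1,1) specification:
#   location_t = c + a*y_{t-1} + b*e_{t-1}
#   h_t^2      = omega + alpha1*(|e_{t-1}| - gamma*e_{t-1})^2 + beta1*h_{t-1}^2
#   q_t(theta) = location_t + h_t * xi_tau
# Unobserved pre-sample values are replaced by constants; the AGARCH form and
# the initial values are assumptions of this sketch, parameters are made up.
import numpy as np

def approx_cond_quantiles(y, c, a, b, omega, alpha1, gamma, beta1, xi_tau,
                          e0=0.0, h0=1.0):
    n = len(y)
    e, h2, q = np.empty(n), np.empty(n), np.empty(n)
    e_prev, h2_prev, y_prev = e0, h0**2, 0.0
    for t in range(n):
        loc = c + a * y_prev + b * e_prev                    # ARMA location
        h2[t] = (omega + alpha1 * (abs(e_prev) - gamma * e_prev)**2
                 + beta1 * h2_prev)                          # AGARCH scale
        q[t] = loc + np.sqrt(h2[t]) * xi_tau                 # cond. quantile
        e[t] = y[t] - loc                                    # ARMA residual
        e_prev, h2_prev, y_prev = e[t], h2[t], y[t]
    return q, e, np.sqrt(h2)

rng = np.random.default_rng(2)
y = rng.standard_normal(10)               # toy data, just to run the recursion
q, e, h = approx_cond_quantiles(y, c=0.02, a=0.1, b=0.2, omega=0.05,
                                alpha1=0.1, gamma=0.4, beta1=0.8, xi_tau=-1.65)
print(np.round(q, 3))
....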
here, we denote and .then , the quantile regression estimator of for the arma - agarch model ( [ arma model eqn])([agarch model eqn ] ) is defined by ( [ def of qre ] ) . to show the identifiability of the conditional quantile functions, we introduce the following assumptions . assumptions * ( a3)*(i ) and ( ii ) are the standard identifiability conditions for agarch and arma models , respectively . * ( a5 ) * assumes that is a continuous random variable , which is common in real applications .( a1 ) : : for some and the lyapunov exponent associated with and is strictly negative .( a2 ) : : all zeros of and lie outside the unit disc .( a3 ) : : \(i ) and for each , , have no common zeros and ; + ( ii ) and have no common zeros and .( a4 ) : : and for all , for and .( a5 ) : : the support of the distribution of is . ( a6 ) : : .[ identification lemma ] suppose that assumptions * ( a1)(a5 ) * hold in the model ( [ arma model eqn])([agarch model eqn ] ) and a.s .for some and .then , we have the following : * if , then . *if , then it holds either that and or that , , , and .lemma [ identification lemma ] ensures that the identifiability assumption * ( c4 ) * for the arma - agarch model holds if and it shows that only ar and ma coefficients are identifiable in the case of .for the consistency of , we added the finite first moment condition of the agarch process , which is equivalent to under * ( a2)*. an application of theorem [ strong consistency ] and lemma [ identification lemma ] yields the strong consistency addressed below .[ agarch(1,1 ) moment remark ] in the garch case , presented a necessary and sufficient condition for the stationarity and fractional moments including * ( a6)*. for the agarch( ) case , such a condition can be obtained by using theorem 2.1 of and theorem 6 of : for , the agarch( ) process is strictly stationary with if and only if .as in there , one can use minkowski s inequality for and the one : , for .[ consistency for agarch ] suppose that assumptions * ( c2 ) * and * ( a1)(a6 ) * hold in model ( [ arma model eqn])([agarch model eqn ] ) .then , we have the following : * if , a.s . as . * if , a.s . as . to ensure the -consistency of , moment conditions * ( n3)*(ii ) and ( iii ) are necessary .it turns out that these conditions are implied by , or equivalently , . for the asymptotic normality , we assume the following moment condition : ( a1 ) : : and . by theorem 6.(ii ) of , * ( a1 )* implies that the model ( [ agarch model eqn ] ) has a stationary solution with .thus , * ( a1 ) * becomes redundant .lemma [ positive definiteness ] below ensures assumption * ( n5 ) * , which is related to the non - singularity of the asymptotic covariance matrix .the proof of lemma [ positive definiteness ] is deferred to the supplementary material .[ positive definiteness ] if assumptions * ( n2 ) * , * ( a1 ) * , and * ( a2)(a5 ) * hold in the model ( [ arma model eqn])([agarch model eqn ] ) and , then in ( [ def of j(tau ) ] ) and in theorem [ asymptotic normality ] are positive definite .[ identifiability proof remark ] lemmas [ identification lemma ] and [ positive definiteness ] can be verified by using a technique in .the method shares a common idea with that used for the verification of identifiability in and , but is seemingly more widely applicable .[ asymptotics for agarch ] suppose that assumptions * ( c2 ) * , * ( n1 ) * , * ( n2 ) * , * ( a1 ) * , and * ( a2)(a5 ) * hold in the model ( [ arma model eqn])([agarch model eqn ] ) . 
if , then converges in distribution to the one in theorem [ asymptotic normality ] .[ lipschitz in agarch ] in view of ( [ def of h_t(vpt ) ] ) and ( [ def of agarch q_t(theta ) ] ) , it can be shown that is lipschitz continuous but is discontinuous : see the proof of theorem [ asymptotics for agarch ] . in the pure agarch and arma - garch model cases , is twice continuously differentiable .it is notable that the quantile regression yields a -consistent estimation of arma - agarch parameters under the mild moment condition of * ( a1 ) * , which is a finite second moment condition on both the innovations and observations .it is well known in the garch model that the popular gaussian qmle is -consistent under but converges at a slower rate if the innovation is heavy - tailed , that is , : see .this fact also holds in the reparameterized garch model as in section [ subsec21 ] : see section 5 of .in fact , the fourth moment condition of innovations is indispensable for obtaining the usual -rate in various garch - type models : see and .further , for mean - variance models such as the arma - garch model , the gaussian qml estimation additionally requires a finite fourth moment of observations , that is , : see and . in the estimation of garch - type models ,researchers have paid considerable attention to relaxing moment conditions and seeking robust methods against heavy - tailed distributions of innovations or observations .for example , showed that the -consistency of the two - sided exponential qmle requires only in the garch model , and verified it under in the arma - garch model .these moment conditions can be additionally relaxed by using weighted likelihoods ( ) or other non - gaussian likelihoods ( ) . in view of these results , it can be reasoned that quantile regression approach in this study also makes a reasonably good robust method in a broad class of time series models .as mentioned in section [ subsec23 ] , the quantile regression for the location - scale models in ( [ loc - scale model ] ) requires a different conditional quantile specification when .thus , it is necessary to test whether is or not , especially for the values of around 0.5 : if , the conditional quantile of is just the conditional location and the results of can be applied . under the null hypothesis of this testing problem , we can see that the other parameters are not identified by lemma [ identification lemma ] .inference in a similar situation can be found in and references therein .we leave the development of such a test as a task of our future study .in this simulation study , we examine a finite sample performance of the quantile regression estimation and illustrate its robustness against the heavy - tailed distribution of innovations .the samples are generated from the following arma()-agarch( ) model : with , and .as for the distribution of innovation , we consider the two cases : * standard normal distribution ; * standardized skewed -distribution with degrees of freedom and skew parameter .the skewness of distribution ( b ) is approximately : see . by using remark [ agarch(1,1 ) moment remark ], we can check the stationarity and moment condition of for the two distributions . for case ( a ), the agarch( ) process has a finite forth moment since . for case ( b ), it only holds that and since and according to a monte carlo computation .the sample size is 2,000 and the repetition number is always . 
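The estimation itself amounts to minimizing the check (tilted absolute) loss of (def of qre) over the model parameters. A minimal sketch (Python) is given below; for readability it uses the simpler AR(1)-ARCH(1) case with the scale intercept normalized to one rather than the full ARMA(1,1)-AGARCH(1,1) of the simulation study, simulated data of length 2,000, and made-up parameter values, since the true values used in the paper are not reproduced here.

....
# Sketch: quantile regression by minimizing the check loss with Nelder-Mead,
# for the simpler AR(1)-ARCH(1) case
#     q_t(theta) = phi0 + phi1*y_{t-1} + sqrt(1 + a1*y_{t-1}^2) * xi.
# The scale intercept is normalized to one so that xi absorbs the innovation
# scale; all parameter values below are placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
tau = 0.05

n, phi0, phi1, a1, s = 2000, 0.1, 0.3, 0.2, 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = (phi0 + phi1 * y[t-1]
            + np.sqrt(1 + a1 * y[t-1]**2) * s * rng.standard_normal())

def check_loss(u, tau):
    """rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0.0))

def objective(theta, y, tau):
    phi0, phi1, a1, xi = theta
    if a1 < 0:                              # keep the conditional scale valid
        return np.inf
    y_prev = y[:-1]
    q = phi0 + phi1 * y_prev + np.sqrt(1 + a1 * y_prev**2) * xi
    return np.mean(check_loss(y[1:] - q, tau))

start = np.array([0.0, 0.0, 0.1, -1.0])     # crude starting values
fit = minimize(objective, start, args=(y, tau), method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-9})
print("tau =", tau)
print("estimate (phi0, phi1, a1, xi_tau):", np.round(fit.x, 3))
print("true xi_tau = 0.7 * z_0.05 ~", round(s * -1.6449, 3))
....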
in computing quantile regression estimates , the nelder - mead method in ` r ` is employed and the gaussian - qml estimates are used as initial values for the optimization process ..performance of the quantile regression estimators for ( a ) [ cols="<,<,^,^,^,^,^,^,^ " , ] ( b ) ( c ) ( d ) ( e ) and ( f ) at every probability level .the shaded region illustrates confidence intervals .the dashed and dotted lines represent the corresponding qml estimates and the confidence intervals , respectively . for ,circles in ( c ) denote consistent estimates while crosses in others denote inconsistent ones ., width=604 ] table [ qml fits ] reports the gaussian - qml estimates of the parameters in model ( [ arma model eqn])([agarch model eqn ] ) with , and .the large value of indicates the asymmetry in volatility , that is , negative values of returns result in a bigger increase in future volatility than positive values .the significance of the ar coefficient indicates that the conditional location - scale model is better fitted to the data than pure volatility models .meanwhile , using the parameter estimates and residuals , it is obtained that with standard error , which seemingly indicates the validity of * ( a1)*. figure [ quantile reg fit ] illustrates the results of the quantile regression estimation at every probability level .the confidence intervals are obtained based on the asymptotic covariance estimator in ( [ asymptotic covariance estimator ] ) .the test for is not available at present , but one can guess that would be at some by a rule - of - thumb . then , owing to theorem [ consistency for agarch ] and lemma [ identification lemma ] , it can be determined that the estimates at the excepting the ar coefficients are inconsistent .overall , our findings show that the quantile regression estimates have the values similar to the qmles , but some remarkable differences exist between both and and and for the lower values of .for instance , it can be seen from ( c ) of figure [ quantile reg fit ] that the values of are more deviated from the estimate in the lower conditional quantiles .further , it can be reasoned from ( f ) of figure [ quantile reg fit ] that the asymmetry of volatility still remains even after fitting the agarch model .for simplicity , we suppress the dependence of and on .further , we denote and define where and are those defined in section [ subsec21 ] . in the proof of the asymptotic normality ,the main difficulty arises from the lack of smoothness and stationarity of the objective function .lemma [ g_n(theta ) approximation ] below validates a quadratic approximation of by applying lemma [ br lemma ] which deals with the lack of smoothness and extends lemma 3 of . here , we can obtain from the approximation .then , lemma [ tilde g approximation ] below justifies a quadratic expansion of in a -neighborhood of , which yields the desired asymptotic normality result .+ * proof of theorem [ strong consistency]*. to establish the consistency , we show that converges uniformly to a continuous function on a.s . and the limit has a unique minimum at .let be the space of continuous real - valued functions on equipped with the sup - norm .since is strictly stationary ergodic , * ( c3 ) * implies that is a stationary ergodic sequence of -valued random elements : see proposition 2.5 of .note that due to the lipschitz continuity of and * ( c3)*(ii ) , we have < \infty ] . also , from * ( c6 ) * , we have now ,we show that is uniquely minimized at .recall that and . 
by * ( c5 ) * and the fact that , , we have \\ & = e \left [ h_t({\alpha^\circ } ) h\left ( \frac{q_t(\theta)-f_t({\alpha^\circ})}{h_t({\alpha^\circ } ) } \right ) \right],\end{aligned}\ ] ] where ] as .* there exist a and a stationary ergodic sequence with <\infty ] .the proof is essentially the same as that of lemma 4 of except that the summands in are not i.i.d . but a sequence of martingale differences .we take to be 1 for convenience .first , we show that ( b ) implies that satisfies the bracketing condition in .denote ] .hence , satisfies the bracketing condition . for each ,put .let be the ball of radius centered at and let be the annulus .then , for given and , there is a partition of .it follows from ( [ bound of e_t - t of br ftns ] ) that for , \right\ } + { 1\over\sqrt{n } } { \sum_{t=1}^n}e\left [ \overset{-}{f}_i({\mathbf{z}}_t ) - \overset{\circ}{f}_i({\mathbf{z}}_t ) \big| \mathcal{g}_{t-1 } \right ] \\ & \leq \mathcal{w}_n ( \overset{-}{f}_i(\cdot ) ) + \sqrt{n } { \varepsilon}r(k ) \left ( { 1\over n b_0}{\sum_{t=1}^n}{b_{t-1 } } \right).\end{aligned}\ ] ] if we set , tends to 1 by ergodicity .further , as in , it can be seen that then , using the arguments as in the rest part of the proof of lemma 4 of , we can establish the lemma .[ g_n(theta ) approximation ] under assumptions * ( c3 ) , ( c5 ) * and * ( n1)(n3 ) * , we have + { n^{-1/2}}{\|\theta-\theta^\circ \| } r_n(\theta),\end{gathered}\ ] ] where is defined in ( [ def of j(tau ) ] ) and as , {p}}0\end{aligned}\ ] ] for every sequence tending to 0 . note that is lipschitz continuous in and its derivative is excepting . by * (n3)*(i ) , is lipschitz continuous in with probability 1 .thus , is absolutely continuous in ] . in view of ( [ decomposition ] ) and ( [ for r_1n ] ) , it suffices to verify that for every sequence of tending to 0 , {p}}0\end{aligned}\ ] ] and {p}}0.\end{aligned}\ ] ] we first verify ( [ stdiffcond ] ) utilizing lemma [ br lemma ] . note that for , where . using this and the inequality , we have that for all small , thus , using the dominated convergence theorem , * ( n1 ) * and * ( n3 ) * , we can have & \leq 0 + 2 e\left [ \left\| { \partial q_t(\theta^\circ ) \over \partial\theta } \right\|^2 i(y_t = q_t(\theta^\circ ) ) \right ] = 0.\end{aligned}\ ] ] similarly , for all with and , where . as in the proof of theorem [ strong consistency ] , it can be shown that and are stationary and ergodic due to * ( n3)*(i ) .further , and are -measurable for all .note that by the mean value theorem , * ( n1 ) * and * ( c5 ) * , \right| \leq { 2 c_0^{-1 } \|f_u\|_{\infty}rm_{1 t } } , \end{aligned}\ ] ] where , so that for all with and , \leq \left ( m_{2 t } + { 2 c_0^{-1 } \|f_u\|_{\infty}m_{1t}^2 } \right ) r.\end{aligned}\ ] ] then , combining ( [ checking condition ( a ) ] ) and ( [ checking condition ( b ) ] ) and applying lemma [ br lemma ] componentwise , we get ( [ stdiffcond ] ) .next , we verify ( [ calculation of cond mean ] ) . 
in view of ( [ e g theta - g theta_0 ] ) and * ( n1 ) * , we have that for , and thus , .as mentioned in remark [ remark : lipschitz continuity ] , owing to * ( n3)*(i ) , we can express where hence , by using the fundamental theorem of calculus , the term in ( [ calculation of cond mean ] ) can be seen to be no more than as in the proof of theorem [ strong consistency ] , owing to * ( n3)*(i ) , forms a stationary and ergodic sequence of random elements with values in the space of continuous functions from to .further , by * ( c5 ) * , * ( n1 ) * and * ( n3)*(ii ) , we have <\infty ] by theorem 2.7 of . then , since =f_u(\xi^\circ ) j(\tau) ] and thus , for any , there exists such that for all large with .therefore , ( [ calculation of cond mean ] ) is verified , which completes the proof . [ tilde g approximation ] under the conditions in lemma [ g_n(theta ) approximation ] and * ( n4 ) *, we have + r_n(\theta),\end{gathered}\ ] ] where {p}}0 ] .+ * proof of theorem [ consistency for agarch]*. note that * ( a5 ) * is sufficient for * ( c1 ) * and that * ( c5 ) * trivially holds with a.s. we now verify * ( c3 ) * and * ( c6)*. recall that is equivalent to under * ( a2)*. for any analytic function on , we denote by the coefficient of in its taylor s series expansion . due to * ( a4 ) * , we can express thus , it can be seen that is of the form in ( [ def of q_t(theta ) ] ) . for any polynomial of degree , we define . note that implies ( see lemma 2.1 of ) . due to * ( a4 ) * and the compactness of , we have that and , from which it can be shown that and for all ( see , e.g. , theorem 3.1.1 of ) .further , one can see that , and decay exponentially fast uniformly on . by using these ,the fact that , ( [ eps filter representation ] ) , and ( [ h_t filter representation ] ) , we have which ensures * ( c3)*(ii ) .note that the recursion for in section [ sec3 ] can be expressed as where denotes the backshift operator , for , and for .then , owing to ( [ tilde eps backshift ] ) , we have that for , similarly , with the initial values of for , we can express it is easy to check that further , since , it can be easily seen that . since , we have and thus , due to lemma 2.2 of , converges with probability 1 .further , since for all , it follows that this together with ( [ closeness of eps_t and tilde ] ) implies * ( c6 ) * , and henceforth , an application of lemma [ identification lemma](i ) and theorem [ strong consistency ] validates theorem [ consistency for agarch](i ) .next , we deal with the case when . since * ( c3 ) * and * ( c6 ) * are satisfied, uniformly converges a.s . to , which is the one defined in the proof of theorem [ strong consistency ] .note that and if and only if in this case. then , it follows from lemma [ identification lemma](ii ) that implies and . due to the compactness of , for each generic point of the underlying probability space , there exists a subsequence tending to a limit . from the uniform convergence and the continuity of , we have that as . since and , we have .it follows from the above argument that , and , .we have proved that any convergent subsequence of tends to the corresponding true parameter vector , which validates theorem [ consistency for agarch](ii ) .due to * ( n2 ) * , we can choose a neighborhood where s , s and s are uniformly bounded away from . 
from ( [ def of eps_t(varphi ) ] ) and ( [ h_t filter representation ] ) , the first derivatives of are given as follows : where it can be seen that the above derivatives are all continuously differentiable in except for .in particular , is discontinuous .however , one can see that is lipschitz continuous in and thus , * ( n3)*(i ) is satisfied .since , ( [ bdd for eps_t , h_t ] ) becomes and .thus , we have .note that for and .then , using , we get this in turn implies .further , by virtue of lemma 3.2 of , similarly , we can have .hence , * ( n3)*(ii ) is satisfied . on the other hand ,simple algebras show that . then , using this and lemma 3.3 of , it can be readily checked that .hence , by using * ( n3)*(ii ) and the equality we can see that * ( n3)*(iii ) holds .meanwhile , owing to ( [ tilde eps filter representation ] ) , ( [ tilde h_t^2 filter representation ] ) and ( [ derivative of q_t ] ) , we can derive in a similar fashion to obtain ( [ closeness of eps_t and tilde])([closeness of h_t^2 and tilde ] ) . thus , by using the inequality we can see that , which ensures * ( n4)*(ii ) .further , by using similar arguments to verify ( [ bddness of tilde eps_t ] ) and * ( n3)*(iii ) , one can easily check that * ( n4)*(iii ) holds . finally , * ( n5 ) * is a direct result of lemma [ positive definiteness ] .therefore , the asymptotic normality is asserted by theorem [ asymptotic normality ] .this completes the proof .
|
this paper considers quantile regression for a wide class of time series models including arma models with asymmetric garch ( agarch ) errors . the classical mean - variance models are reinterpreted as conditional location - scale models so that the quantile regression method can be naturally geared into the considered models . the consistency and asymptotic normality of the quantile regression estimator are established in location - scale time series models under mild conditions . in the application of this result to arma - agarch models , more primitive conditions are deduced to obtain the asymptotic properties . for illustration , a simulation study and a real data analysis are provided .

* quantile regression for location - scale time series models with conditional heteroscedasticity *
jungsik noh and sangyeol lee
of texas southwestern medical center
national university
revised february 28 , 2015
* msc2010 subject classifications * : primary 62m10 ; secondary 62f12 .
* key words and phrases * : quantile regression , conditional location - scale time series models , arma - agarch models , caviar models , consistency , asymptotic normality , identifiability condition .
* abbreviated title * : quantile regression for location - scale time series models
|
in agreeing to summarize the results of this meeting , i had a few moments of doubt .first , we were all dismayed not to benefit from john bahcall s 30 years of wisdom in the field .second , a responsible effort would require many hours of close attention to 4.5 days of fascinating talks , diverting time from beautiful walks and museums .most of all , i had a lingering concern that , on the last day of the conference , jerry ostriker would arrive from princeton , just in time to set us straight !as it turned out , this was an enjoyable task , with stimulating talks on new data and new ideas .i believe our field of qso absorption - line studies is in a privileged era . like the roman god janus , we look backward toward the past and forward to the future , both in our scientific tools and our theoretical paradigms ( table 1 ) .l c l + + & & + & & + 4-meter telescopes & .......... & 8 - 10 meter telescopes + galactic halos & .......... & cdm / hydro paradigm + interstellar models & .......... & cosmological models + ly clouds + igm & .......... & the `` cosmic web '' + since the discovery of the high - redshift ly forest over 25 years ago , these absorption features in the spectra of qsos have been used as evolutionary probes of the intergalactic medium ( igm ) , galactic halos , and now large - scale structure and chemical evolution .it is fascinating how rapidly our interpretation of these absorbers has changed , since they were interpreted as relatively small ( 10 kpc ) , pressure - confined clouds of zero - metallicity gas left over from the era of recombination . to be sure ,the lack of strong clustering in velocity provided ample grounds for distinctions from qso metal - line systems and galactic halos . however , these distinctions are clearly weakening . in the next few years , i expect that many of the divisions between research in `` interstellar '' and `` intergalactic '' matter and between `` qso absorption clouds '' and `` cosmological structure '' will fade away .we may even begin to understand more about how galaxies and their halos were assembled .replacing the individual area studies will be a number of hybrid problems : * cdm / hydrodynamics + feedback from star formation * interface between galaxies , the ism , and the igm * chemical evolution and heavy - element transport * reionization and the assembly of galaxies substantial portions of our july 1997 meeting were spent discussing these issues . because i can not do justice to all the individual talks ( 65 by my count ), i will instead describe several outstanding problems in four scientific areas : ( 1 ) _ the history of baryons _ ; ( 2 ) _ the history of metals _ ; ( 3 ) _ reionization of the igm _ ; and ( 4 ) _ the assembly of galaxies_. i will conclude by providing `` wish lists '' of scientific projects for observers , instrumentalists , and theorists .one of the compelling reasons to study intergalactic ly clouds is that they may contain an appreciable fraction of the high- baryons . to the extent that the ly absorbers are associated with large - scale structure and galaxy formation , the evolution of the igm should parallel the evolution of galaxies and the history of baryons .therefore , a major task is to understand the physical significance of various features in the column - density distribution of ly absorbers .as shown in figure 1 , ly absorbers range over nearly 10 orders of magnitude in h i column density , roughly from to . 
at the lower end ,the _ keck _ telescope has detected weak ly absorption down to log n . at the upper end , damped ly absorbers have been seen up to log n .what are the physical reasons for this range and for features in the approximate power - law distribution ?more specifically , we should be concerned about the following questions and issues : * is there a turnover in the distribution at log n ?these weak absorbers , which may arise in very low - density regions of the igm , may produce substantial he ii absorption toward high- qsos .* what is the physical significance of the steepening in the distribution above n ?this turnover has been noticed for years , but it was difficult to verify owing to curve - of - growth uncertainties .clouds at may contain most of the baryons in the ly forest ( for a distribution ) .clouds at are used for double - qso cloud size estimates and for metal - line detections of c iv and si iv .we need to understand their structure and shape .* can we detect the transition from atomic ( h i ) to molecular ( h ) gas in the damped ly absorbers ?the expected turnover should be seen above log n and related to high - redshift co and the first stars . * for chemical evolution models , it is important to reconcile the baryon evolution rate , , with the star formation rate , and the metal formation rate , .what is the role of the igm in this network ?the ly forest probably contains substantially more baryons than the damped ly absorbers ( ) .the metallicity of the ly forest is solar , while that of the damped ly absorbers is solar .the larger baryon reservoir in the forest may therefore be a significant part of the metal inventory .five years ago , most astronomers believed the ly forest clouds to be pristine .the observations that a high percentage of ly clouds with n contain heavy elements ( c iv , si iv ) were astonishing .recent estimates of the metal abundance are to times solar metallicity and suggest a ( si / c ) enhancement by about a factor 2 over solar ratios .the si iv lines are especially interesting , since si is thought to be formed by -capture processes in massive stars and expelled by type ii supernovae .we need to clarify some implications of these data : * where and when were the heavy elements formed ?although the _ hubble _ deep field and related observations suggest that the bulk of metal production and star formation occurred at , the metals in the ly forest obviously formed earlier .how much earlier ? was it in disks , dwarf galaxies , or low - mass objects such as proto - globular clusters ? were these metals blown out , stripped by mergers , or transported by other means ? understanding these processes may clarify the `` astration '' of deuterium by the first generations of stars .* wherever the heavy elements were produced , they can not have been transported far from their source .over 1 gyr , metal - laden gas moving at 100 km s would travel 100 kpc , approximately the size of typical galactic halos and some ly clouds .local effects from massive star formation could cause the ionization states of si and c to differ significantly from pure qso photoionization . a key experiment would be to infer the size of the metal - bearing ly clouds from double - quasar coincidences. however , these moderate - column ( log n ) clouds are sufficiently rare that good statistics will be difficult to obtain . 
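The point made above, that clouds near the top of the forest column-density range may dominate its baryon budget, follows from simple bookkeeping for a single power-law distribution f(N) proportional to N^(-beta) with beta < 2: the H I content per decade of column density grows as N^(2-beta). The short calculation below (Python) uses beta = 1.5 as an illustrative slope and ignores the large ionization correction needed to convert H I to total baryons.

....
# Relative H I content per decade of column density for a single power law
# f(N) ~ N^(-beta); with beta = 1.5 the mass per decade grows as N^0.5, so
# the upper decades of the forest dominate.  The ionization correction
# needed to convert H I to total baryons is ignored here.
import numpy as np

beta = 1.5
edges = 10.0 ** np.arange(12, 18)          # decade edges, 10^12 .. 10^17 cm^-2

def mass_per_decade(n_lo, n_hi, beta):
    """Integral of N * N^(-beta) dN from n_lo to n_hi (beta != 2)."""
    return (n_hi**(2 - beta) - n_lo**(2 - beta)) / (2 - beta)

masses = np.array([mass_per_decade(lo, hi, beta)
                   for lo, hi in zip(edges[:-1], edges[1:])])
masses /= masses[0]                        # normalize to the lowest decade
for (lo, hi), m in zip(zip(edges[:-1], edges[1:]), masses):
    print(f"log N = {np.log10(lo):.0f}-{np.log10(hi):.0f} : "
          f"relative H I content ~ {m:7.1f}")
....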
*converting the observed n(si iv)/n(c iv ) to accurate si / c abundances requires a clear understanding of the ionization mechanism ( photoionization versus collisional ionization ) .if photoionization , we need a much better idea of the spectral shape , , produced by quasars and starburst galaxies at redshifts .the photon range from 25 rydbergs is particularly important , since it covers the ionization edges of relevant si and c ions . in the ly forest , it is important to understand the si iv / c iv ratios , component by component , as is often done in the analysis of interstellar absorption profiles . * the abundances of the iron - group and other elements ( fe , ni , zn , si , cr ) in the damped ly systems need to be confirmed .their ratios provide strong suggestions of massive - star nucleosynthesis and hints of dust depletion .zinc may provide particularly important clues .* i was impressed by attempts to invert the mg ii , fe ii , and c iv line profiles to produce kinematics of the metal - line absorbers .however , i suspect that in most cases the inversion is not unique ; departures from simple orbits are likely , as shocks and gas dynamics are important .most of us carry a mental picture that the igm was reionized at high redshift ( ) by the first quasars and first massive stars .there are hints that it could occur even earlier . however , the redshift history of reionization is poorly known , except for theoretical prejudices based on cdm models for structure formation .up to , the quasar luminosity function is fairly well known , but the same can not be said of quasars at or of starburst galaxies at any redshift .the following issues remain controversial : * are there missing qsos at owing to dust obscuration ? theoretical suggestions( fall & pei 1989 ) of a substantial population of `` missing quasars '' have not yet been confirmed .there are hints of dust depletion from zn / cr ratios in damped ly systems , and conflicting results from red- and radio - selected high- quasars .one recent qso luminosity function ( pei 1995 ) produces too few lyman continuum photons to reionize hydrogen by .thus , additional ionizing sources are needed : either qsos or starburst galaxies .* when were the _ first _ o stars ? if massive star formation is a natural result of the first star formation , then these objects will dominate the feedback to the gaseous environment , including dissociation of h , production of key heavy elements ( o , si , s ) , and generation of large volumes of hot gas through supernovae and stellar winds . the details of this feedback depend on the galactic environment ( dwarfs , spirals , halos ) . *how much of the energy input to the igm is mechanical ?massive stars produce both hot gas and ionizing radiation . if the `` mechanical '' energy input is released through blast waves , the affected volume scales with energy as as , whereas metal production scales as .thus , low - luminosity sources ( dwarf galaxies ) may dominate metal dissemination . 
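The transport-distance estimate quoted above, gas at 100 km/s travelling of order 100 kpc in 1 Gyr, is easy to verify with a two-line check (Python):

....
# Quick check of the transport-distance estimate quoted above.
km_per_kpc = 3.086e16           # kilometres in one kiloparsec
seconds_per_gyr = 3.156e16      # seconds in one gigayear

distance_km = 100.0 * seconds_per_gyr    # v * t for v = 100 km/s, t = 1 Gyr
print(distance_km / km_per_kpc, "kpc")   # ~ 1.0e2 kpc, i.e. of order 100 kpc
....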
*have we detected the era of helium reionization ?dieter reimers showed us intriguing evidence for patchy he ii absorption toward a quasar at .can this be reconciled with gunn - peterson observations at that suggest reionization in hydrogen ?theoretical models of the qso luminosity function and igm opacity suggest that the igm should be reionized in he ii at .perhaps this is evidence for hot - star ionizing sources , with little 4 ryd ( he ii ) continuum .observations are badly needed to push our understanding of the reionization epoch back to .first , we need to find qsos at , perhaps from the sloan sky survey .possible probes of the high- era include searches in the radio , microwave , far - infrared , and near - infrared bands .as planning begins for the ngst ( next generation space telescope ) , the infrared band ( m ) offers promise for deep searches for high- , dust - obscured qso as well as high- supernovae .detecting 21-cm emission at might be possible with a sufficiently large array of radio dishes .the ly absorption from the neutral igm prior to reionization might show up in high- spectra of quasars at . to probe even higher redshifts, one might consider searches for redshifted metal fine - structure lines such as [ c ii ] 158 m and [ o i ] 63 m , which would appear at ] .the standard ( cdm ) model of galaxy formation predicts a `` bottom - up '' hierarchy of structure formation . if clumps of form massive stars at , they could have significant effects on lyman continuum radiation , hot gas , and heavy - element transport .if the sub - clumps form in the halos of proto - galaxies , or fall in gravitationally , cloud - cloud collisions are likely to occur .what are the implications of the resulting shock waves for line profiles of mg ii and c iv absorbers ?shocks will generate hot gas at ^ 2 $ ] , sufficient to produce c iv by collisional ionization .are the observed line profiles evidence for such effects ?finally , we heard several speakers speculate on the formation of large gas disks in the context of damped ly absorbers .are these dlas actually thick disks of 30 kpc size or 58 kpc clumps as predicted by some numerical modelers ?if the dlas are as small as 58 kpc , it may be difficult to understand their frequency , , and there may be an angular momentum problem . following the implications of the cdm scenario , how are the small pieces of proto - galaxies assembled ?what are the roles of radiative cooling and sub - clump mergers ?i conclude this review with lists of ideas and scientific tools for workers in our field .i have given separate discussions for observers and theorists .i continue to be amazed by the beauty of the hires spectra taken by the _ keck telescope_.these new optical data have changed the field of qso absorption lines in so many areas .my first wish is that the new 810 telescopes and spectrographs become sufficiently productive to compete with _keck_. 
even though many astronomers are actively using _ keck _ for qso studies , we can foresee the time when several new telescopes come on line : the _ hobby - eberly telescope _ , the _ vlt _ , and _ gemini_.a general lesson learned from the _ keck _ experience is that high - resolution spectroscopy is one of the most powerful tools in astrophysics .that power should be extended to other wavelength bands .large instruments are needed : * * ultraviolet : * to study d / h evolution , the he ii gunn - peterson effect , chemical evolution of metals , and damped ly systems , we need a uv spectrograph with effective area . for comparison , the current spectrographs aboard _hubble _ have for moderate - resolution ( 30 - 50 km s ) . in 2002, hst will be upgraded with the _ cosmic origins spectrograph _ , which will provide 1000 - 1500 effective area .the next generation of uv instruments should consider taking another factor - of - ten step in spectroscopic throughput .* * infrared & sub - millimeter : * both the ngst and first telescopes are designed to provide 4-meter apertures that access the near - ir and sub - mm respectively .these instruments should provide powerful imaging of the era beyond redshift .the spectroscopic capabilities may provide further surprises for detecting protogalaxies along with their first stars and supernovae . * * millimeter : * as noted earlier , relating the damped ly absorbers ( proto - galactic gaseous disks ) to the first spiral galaxies will require us to follow the transition from atomic to molecular gas ( from h i to co ) .the mm - array offers the chance to make these comparisons . * * x - ray : * the equivalent x - ray instrument for high - resolution spectroscopy with sufficient throughput to match the optical could be the htxs ( `` high throughput x - ray spectroscopy '' ) mission .designed with 110 m of effective area , this set of x - ray telescopes would have the capability of studying the `` x - ray gunn - peterson effect '' in heavy - element k - edge absorption through the metal - contaminated parts of the igm .these observations could detect the hot gas invisible in h i and he ii absorption .these absorption signatures are expected to be extremely weak ( ) .many of the talks at this meeting worked within the new cosmological paradigm for the ly clouds . over the past five years, numerical models have increased their accuracy and predictive power immensely , to the point where they are now able to provide constraints on , , and galaxy formation .there is still some ways to go , however , and i offer the following wishes : * computers are increasing in their speed and capacity .many of us , including the modelers , anticipate seeing their simulations run to and computed in a box of size mpc .* the models need a better justification for the redshift at which quasars and star formation turn on . as noted earlier ( 2.3 ) we have little information on these epochs of reionization . *the models need to incorporate better small - scale physics .i have the impression that the gravitational collapse of large - scale gaseous structures is treated fairly well. however , once stars and quasars turn on , the microphysics [ supernovae , stellar winds , superbubbles , heavy element transport , hot gas , radiative transfer ] needs to be handled in a realistic fashion .these `` local effects '' are the next hurdle in complexity . 
* to agree with the h i column density distribution , the numerical models require a lower ionizing radiation field than that inferred from the proximity effect .models for the ly absorber distribution constrain the ratio , where is the specific intensity at the lyman limit , in units of ergs s hz sr . these parameters need to be reconciled with independent inferences of the baryon density from deuterium measurements , ( tytler 1997 ) , and of the radiation field ( giallongo et al . 1996 ; cooke et al . 1997 ) .some of the models described at this meeting suggest a sizeable discrepancy .for example , zhang et al . (1997 ) require a photoionization rate s , which corresponds to for background spectral slope . *fluctuations in the ionizing background are quite important and should be included in the models .the patchy he ii absorption reported by reimers may be one manifestation of this .more generally , the baryons are not exposed to a constant , optically - thin radiation field .
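The bookkeeping between the photoionization rate and the specific intensity at the Lyman limit, which underlies the comparison above, can be made explicit. The sketch below (Python) assumes a pure power-law background J_nu = J_L (nu/nu_L)^(-alpha) and the approximate nu^(-3) scaling of the hydrogen photoionization cross-section, which gives Gamma roughly 4 pi J_L sigma_L / [h (alpha + 3)]; the particular values quoted from Zhang et al. (1997) and from the proximity-effect estimates are not reproduced here.

....
# Conversion between the Lyman-limit intensity and the H I photoionization
# rate for a power-law background J_nu = J_L * (nu/nu_L)^(-alpha), using the
# approximate nu^-3 scaling of the cross-section:
#     Gamma ~ 4*pi*J_L*sigma_L / (h*(alpha + 3)).
import numpy as np

h_planck = 6.626e-27      # erg s
sigma_L  = 6.30e-18       # cm^2, H I cross-section at the Lyman limit

def gamma_HI(J21, alpha):
    """Rate [s^-1] for J_L = J21 * 1e-21 erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 4.0 * np.pi * (J21 * 1.0e-21) * sigma_L / (h_planck * (alpha + 3.0))

for alpha in (1.0, 1.5, 1.8):
    print(f"alpha = {alpha}:  Gamma(J_-21 = 1) ~ {gamma_HI(1.0, alpha):.2e} s^-1")
....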
|
wasn't this a fun meeting ? yes , except for the rain . this summary highlights four scientific themes of the iap conference , plus a `` wishlist '' of future projects for observers and theorists .
|
the quantitative analysis of stellar spectra is one of the most important tools of modern astrophysics .basically all our knowledge about structure and evolution of stars , and hence about galactic evolution in general , rests on the interpretation of their electromagnetic spectrum .the formation of the observed spectrum is usually confined to a very thin layer on top of the stellar core , the atmosphere .spectral analysis is performed by modeling the temperature and pressure stratification of the atmosphere and computing synthetic spectra which are then compared to observation .fitting synthetic spectra from a grid of models yields the basic photospheric parameters , effective temperature , surface gravity , and chemical composition .comparison with theoretical evolutionary calculations allows the derivation of stellar parameters like mass , radius and total luminosity .the so - called classical stellar atmosphere problem considers the transfer of electromagnetic radiation , released by interior energy sources , through the outermost layers of a star into free space by making three specific physical assumptions . at firstit is assumed that the atmosphere is in hydrostatic equilibrium , thus , the matter which interacts with photons is at rest .second , the transfer of energy through the atmosphere is entirely due to photons , i.e.heat conduction and large scale convection are regarded as negligible ( so - called radiative equilibrium ) .the effectiveness of photon transfer depends on the total opacity and emissivity of the matter which are strongly state and frequency dependent quantities .they depend in detail on the occupation density of atomic levels which in turn are determined by the local temperature and electron density as well as by the radiation field , whose nature is non - local in character .the occupation of any atomic level is balanced by radiative and collisional population and de - population processes ( statistical equilibrium ; our third assumption ) , i.e. the interaction of atoms with other particles and photons .mathematically , the whole problem consists of the solution of the radiation transfer equations simultaneously with the equations for hydrostatic and radiative equilibrium , together with the statistical equilibrium , or , rate equations .a stellar atmosphere is radiating into the circumstellar space and thus evidently is an open thermodynamic system , hence it can not be in thermodynamic equilibrium ( te ) and thus we can not simply assign a temperature .the `` local thermodynamic equilibrium '' ( lte ) is a working hypothesis which assumes te not for the atmosphere as a whole but for small volume elements . as a consequence ,the atomic population numbers are depending only on the local ( electron ) temperature and electron density via the saha - boltzmann equations .computing models by replacing the saha - boltzmann equations by the rate equations are called non - lte ( or nlte ) models .this designation is unfortunate because still , the velocity distribution of particles is assumed to be maxwellian , i.e. we can still define a local temperature .nlte calculations are tremendously more costly than lte calculations , however , it is hard to predict if nlte effects are important in a specific problem. generally , nlte effects are large at high temperatures and low densities , which implies intense radiation fields hence frequent radiative processes and less frequent particle collisions which tend to enforce lte conditions . 
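As a point of reference for what the rate equations replace, the LTE populations follow from the local temperature and electron density alone. The sketch below (Python) evaluates the Saha equation for hydrogen ionization, with partition functions approximated by their ground-state values; the temperatures and electron density are arbitrary illustrative numbers.

....
# Minimal LTE illustration: the Saha equation for hydrogen, i.e. the kind of
# purely local (T, n_e) relation that NLTE models replace by the full
# statistical-equilibrium rate equations.  Partition functions are
# approximated by their ground-state values; T and n_e are arbitrary.
import numpy as np

k_erg = 1.3807e-16      # Boltzmann constant [erg/K]
k_eV  = 8.6173e-5       # Boltzmann constant [eV/K]
h     = 6.626e-27       # Planck constant [erg s]
m_e   = 9.109e-28       # electron mass [g]
chi_H = 13.598          # H I ionization potential [eV]

def saha_ion_fraction(T, n_e):
    """n(H II)/n(H total) in LTE for temperature T [K] and n_e [cm^-3]."""
    g_ratio = 2.0 * 1.0 / 2.0                      # 2 U(H II) / U(H I)
    phi = (g_ratio * (2.0 * np.pi * m_e * k_erg * T / h**2)**1.5
           * np.exp(-chi_H / (k_eV * T)) / n_e)    # = n(H II)/n(H I)
    return phi / (1.0 + phi)

for T in (6000.0, 10000.0, 20000.0):
    print(f"T = {T:7.0f} K, n_e = 1e14 cm^-3 : "
          f"H II fraction = {saha_ion_fraction(T, 1.0e14):.3e}")
....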
relaxing the lte assumption leads to the classical model atmosphere problem , i.e. solution of the radiation transfer equations assuming hydrostatic , radiative and statistical equilibrium .such models are applicable to the vast majority of stars .the numerical problem going from lte to realistic nlte models has only recently been solved and is the topic of this paper .we now have the tools in hand to consider non - classical models , which consider the radiation transfer in more general environments , for example in expanding stellar atmospheres .this is the topic of another paper in this volume .stellar atmosphere modeling has made significant progress within the recent years .this is based on the development of new numerical techniques for model construction as well as on the fact that reliable atomic data have become available for many species .of course these achievements go along with a strong increase of computing power . model atmospheres assuming lte have been highly refined by the inclusion of many more atomic and molecular opacity sources , however , elaborated numerical techniques for lte model computation are available for many years .the progress is most remarkable in the field of nlte model atmospheres .the replacement of the saha - boltzmann equations ( lte ) by the atomic rate equations ( nlte ) requires a different numerical solution technique , otherwise metal opacities can not be accounted for at all .such techniques were developed with big success during the last decade , triggered by important papers by cannon and scharmer .the accelerated lambda iteration ( ali ) is the basis of this development .combined with statistical methods we are finally able to compute so - called metal line blanketed nlte models ( considering many millions of spectral lines ) with a very high level of sophistication . in this paperwe discuss the basic ideas behind the new numerical methods for nlte modeling .at first we state the classical model atmosphere problem and describe the ali solution technique .we then focus on the nlte metal line blanketing problem and its solution by the introduction of the superlevel concept and statistical methods to treat the opacities ( opacity sampling and opacity distribution functions ) .finally we demonstrate successful applications of the new models by presenting a few exemplary case studies .in the following text we outline the general stellar atmosphere problem , but will discuss various details of numerical implementation as applied to our computer program pro2 .we assume plane parallel geometry , which is well justified for most stars because the atmospheres are thin compared to the stellar radius .the only parameters which characterize uniquely such an atmosphere are the effective temperature ( ) , which is a measure for the amount of energy transported through the atmosphere per unit area and time ( see eq.[nominal ] ) , the surface gravity ( ) , and the chemical composition .generalization to spherical symmetry to account for extended ( static ) atmospheres mainly affects the radiation transfer equation and is straightforward . to construct model atmosphereswe have developed our program which solves simultaneously a set of equations that is highly coupled and non - linear . 
because of the coupling , no equation is determining uniquely a single quantity all equations determine a number of state parameters .however , each of them is usually thought of as determining a particular quantity .these equations are : * the radiation transfer equations which are solved for the ( angular ) mean intensities , on a pre - chosen frequency grid comprising points .the formal solution is given by , where is the source function as defined later ( eq.[source ] ) .although is written as an operator , one may think of as a _ process _ of obtaining the mean intensity from the source function . * the hydrostatic equilibrium equation which determines the total particle density . *the radiative equilibrium equation from which the temperature follows . * the particle conservation equation , determining the electron density . * the statistical equilibrium equations which are solved for the population densities of the atomic levels allowed to depart from lte ( nlte levels ) . * the definition equation for a fictitious massive particle density which is introduced for a convenient representation of the solution procedure .this set of equations has to be solved at each point of a grid comprising depth points .thus we are looking for solution vectors the complete linearization ( cl ) method solves this set by linearizing the equations with respect to all variables .the basic advantage of the ali ( or `` operator splitting '' ) method is that it allows to eliminate at the outset the explicit occurrence of the mean intensities from the solution scheme by expressing these variables by the current , yet to be determined , occupation densities and temperature .this is accomplished by an iteration procedure which may be written as ( suppressing indices indicating depth and frequency dependency of variables ) : this means that the actual mean intensity at any iteration step is computed by applying an approximate lambda operator ( alo ) on the actual ( thermal ) source function plus a correction term that is computed from quantities known from the previous iteration step .this correction term includes the exact lambda operator which guarantees the exact solution of the radiation transfer problem in the limit of convergence : .the use of in eq.[ali ] only indicates that a formal solution of the transfer equation is performed but in fact the operator is usually not constructed explicitly . insteada feautrier solution scheme or any other standard method can be employed to solve the transfer equation that is set up as a differential equation .the resulting set of equations for the reduced solution vectors is of course still non - linear .the solution is obtained by linearization and iteration which is performed either with a usual newton - raphson iteration or by other , much faster methods like the quasi - newton or kantorovich variants .the first model atmosphere calculations with the ali method were performed by werner .another advantage of the ali method is that the explicit depth coupling of the solution vectors eq.[psi1 ] through the transfer equation can be avoided if one restricts to diagonal ( i.e. local ) approximate -operators .then the solution vectors eq.[psi2 ] are independent from each other and the solution procedure within one iteration step of eq.[ali ] is much more straightforward .depth coupling is provided by the correction term that involves the exact solution of the transfer equation . 
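the operator - splitting idea of eq.[ali] can be made concrete with a small numerical toy problem. the python sketch below solves an isothermal two - level - atom scattering problem, s = eps*b + (1 - eps)*j, on a one - dimensional depth grid: a crude formal solver (linear short - characteristics integration along a few rays) supplies the exact discrete lambda operator, its diagonal serves as the approximate operator, and the accelerated iteration is compared with the ordinary lambda iteration. this is a didactic stand - in and not the pro2 scheme; the grid, the value of eps, and the boundary treatment are simplifying assumptions.

    import numpy as np

    # depth grid in (line) optical depth and two-level-atom parameters
    tau = np.concatenate(([0.0], np.logspace(-3, 4, 64)))
    D = tau.size
    eps = 1.0e-4                 # photon destruction probability (assumed)
    B = np.ones(D)               # Planck function (normalized)

    # angular quadrature on mu in (0, 1]
    nodes, wts = np.polynomial.legendre.leggauss(3)
    mu, wmu = 0.5 * (nodes + 1.0), 0.5 * wts

    def sc_weights(dt):
        """Weights for int_0^dt S(t) exp(-t) dt with S linear in t:
        returns (weight of the near point, weight of the far point)."""
        e = np.exp(-dt)
        e0 = 1.0 - e
        e1 = 1.0 - (1.0 + dt) * e
        return e0 - e1 / dt, e1 / dt

    def formal_solution(S):
        """Mean intensity J for a given source function S; no incident radiation
        at the surface, I+ = S at the deepest point (crude lower boundary)."""
        J = np.zeros(D)
        for m, w in zip(mu, wmu):
            Ip, Im = np.zeros(D), np.zeros(D)
            Ip[-1] = S[-1]
            for d in range(D - 2, -1, -1):          # outgoing sweep (bottom -> top)
                dt = (tau[d + 1] - tau[d]) / m
                wn, wf = sc_weights(dt)
                Ip[d] = Ip[d + 1] * np.exp(-dt) + wn * S[d] + wf * S[d + 1]
            for d in range(1, D):                   # incoming sweep (top -> bottom)
                dt = (tau[d] - tau[d - 1]) / m
                wn, wf = sc_weights(dt)
                Im[d] = Im[d - 1] * np.exp(-dt) + wn * S[d] + wf * S[d - 1]
            J += w * 0.5 * (Ip + Im)
        return J

    # build the exact discrete lambda operator column by column and take its
    # diagonal as the local approximate lambda operator (ALO)
    Lam = np.column_stack([formal_solution(np.eye(D)[:, j]) for j in range(D)])
    Lstar = np.diag(Lam)

    S_ali, S_li = B.copy(), B.copy()
    for it in range(500):
        J = Lam @ S_ali
        rhs = eps * B + (1.0 - eps) * (J - Lstar * S_ali)
        S_new = rhs / (1.0 - (1.0 - eps) * Lstar)        # eq.[ali] with diagonal ALO
        change = np.max(np.abs(S_new - S_ali) / np.abs(S_new))
        S_ali = S_new
        S_li = eps * B + (1.0 - eps) * (Lam @ S_li)      # ordinary lambda iteration
        if change < 1.0e-8:
            break

    print(f"ALI: {it + 1} iterations, surface S/B = {S_ali[0]:.4f} "
          f"(of order sqrt(eps) = {np.sqrt(eps):.4f})")
    print(f"ordinary lambda iteration after the same number of steps: {S_li[0]:.4f}")

even on this toy problem the contrast illustrates why the approximate operator is needed: the ordinary lambda iteration stalls after a few steps, while the operator - split iteration drives the source function towards its scattering - dominated surface value.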
the hydrostatic equation which also gives an explicit depth coupling ,may be taken out of the set of equations and can as experience shows be solved in between two iteration steps of eq.[ali ]. then full advantage of a local alo can be taken .the linearized system may be written as where is the current estimate for the solution vector at depth and is the correction vector to be computed .using a tri - diagonal operator the resulting system for is like in the classical cl scheme of block tri - diagonal form coupling each depth point to its nearest neighbors : the quantities are ( ) matrices where is the total number of physical variables , i.e. , , and is the residual error in the equations .the solution is obtained by the feautrier scheme . with starting values ) and we sweep from the outer boundary of the atmosphere inside and calculate at each depth : at the inner boundary we have and sweeping back outside we calculate the correction vectors , first and then successively . as already mentioned , the system eq.[tri ] breaks into independent equations ( ) when a local operator is used . the additional numerical effort to set up the subdiagonal matrices and matrix multiplications in the tri - diagonal caseis outweighed by the faster global convergence of the ali cycle , accomplished by the explicit depth coupling in the linearization procedure .the principal advantage of the ali over the cl method becomes clear at this point .each matrix inversion in eq.[feau ] requires operations whereas in the cl method operations are needed . since the number of frequency points is much larger than the number of levels , the matrix inversion in the cl approachis dominated by .recent developments concern the problem that the total number of atomic levels tractable in nlte with the ali method described so far is restricted to the order of 250 , from our experience with pro2 .this limit is a consequence of the non - linearity of the equations , and in order to overcome it , measures must be taken in order to achieve a linear system whose numerical solution is much more stable .such a pre - conditioning procedure has been first applied in the ali context by werner & husfeld .more advanced work achieves linearity by replacing the operator with the operator ( and by judiciously considering some populations as `` old '' and some as `` new '' ones within an ali step ) which is formally defined by writing where the total opacity ( as defined in sect.[opa ] ) is calculated from the previous ali cycle .the advantage is that the emissivity ( sect.[opa ] ) is linear in the populations , whereas the source function is not .hence the new operator gives the solution of the transfer problem by acting on a linear function .this idea is based on rybicki & hummer who applied it to the line formation problem , i.e. restricting the set of equations to the transfer and rate equations and regarding the atmospheric structure as fixed .hauschildt generalized it to solve the full model atmosphere problem .in addition , splitting the set of statistical equations and solving it separately for each chemical element means that now many hundreds of levels per species are tractable in nlte .a very robust method and fast variant of the ali method , the ali / cl hybrid scheme , allows for the linearization of the radiation field for selected frequencies , but it is not implemented in pro2 .any numerical method requires a formal solution ( i.e. 
atmospheric structure already given ) of the radiation transfer problem .the radiation transfer at any particular depth point can be described by the following equation , formally written for positive and negative ( which is the cosine of the angle between direction of propagation and outward directed normal to the surface ) separately , i.e. for inward and for outward directional intensities with frequency : .\ ] ] is the optical depth ( which can be defined via the column mass that is used in the other structural equations and later introduced in sect.[defm ] by , with the mass density ) and is the local source function . introducing the feautrier variable obtain the second - order form : .\ ] ] we may separate the thomson emissivity term ( scattering from free electrons , assumed coherent , with cross - section ) from the source function so that where is the ratio of thermal emissivity to total opacity as described in detail below ( sect.[opa ] ) : . since the mean intensity is the angular integral over the feautrier intensitythe transfer equation becomes thomson scattering complicates the situation by the explicit angle coupling but the solution can be obtained with the standard feautrier scheme . assuming complete frequency redistribution in spectral lines , no explicit frequency coupling occurs so that the parallel solution for all frequencies enables a very efficient vectorization on the computer .the following boundary conditions are used for the transfer equation . at the inner boundary where the optical depth is at maximum , , we have where we specify from the diffusion approximation : is the planck function and the nominal ( frequency integrated ) eddington flux : with the stefan - boltzmann constant . at the outer boundarywe take , assuming that is a linear function of for . since , it is not exactly valid to assume no incident radiation at the stellar surface .instead we specify after scharmer & nordlund : \ ] ] which follows from eq.[te ] assuming for .then we get the boundary conditions are discretized performing taylor expansions which yield second - order accuracy .the statistical equilibrium equations are set up according to .the number of atomic levels , ionization stages and chemical species , as well as all radiative and collisional transitions are taken from the input model atom supplied by the user ( sect.[atom ] ) .ionization into excited states of the next ionization stage is allowed for .dielectronic recombination and autoionization processes can also be included in the model atom . as usual the atomic energy levels are ordered sequentially by increasing excitation energy , starting with the lowest ionization stage .then for each atomic level of any ionization stage of any species the rate equation describes the equilibrium of rates into and rates out of this level : the rate coefficients have radiative and collisional components : .radiative upward and downward rates are respectively given by : photon cross - sections are denoted by . is the boltzmann lte population ratio in the case of line transitions : , where the are the statistical weights .the lte population number of a particular level is defined relative to the ground state of the next ion , so that in the case of recombination from a ground state we have by definition with the saha - boltzmann factor where is the ionization potential of the level .care must be taken in the case of recombination from an excited level into the next low ion . 
then .dielectronic recombination is included following .assuming now that is a ground state of ion , then the recombination rate into level of ion via an autoionization level ( with ionization potential , having a negative value when lying above the ionization limit ) is : the reverse process , the autoionization rate , is given by : the oscillator strength for the stabilizing transition ( i.e. transition i ) is denoted by , and is the mean intensity averaged over the line profile .the program simply takes from the continuum frequency point closest to the transition frequency , which is reasonable because the autoionization line profiles are extremely broad .the population of autoionization levels is assumed to be in lte and therefore such levels do not appear explicitly in the rate equations .the computation of collisional rates is generally dependent on the specific ion or even transition .several options , covering the most important cases , may be chosen by the user .the rate equation for the highest level of a given chemical species is redundant .it is replaced by the abundance definition equation .this equation simply relates the total population of all levels of a particular species to the total population of all hydrogen levels .summation over all levels usually includes not only nlte levels but also levels which are treated in lte , according to the specification in the model atom . denoting the number of ionization stages of species with , the number of nlte and lte levels per ion with and , respectively , we can write : = y_k \left[\sum_{i=1}^{nl(h)}n_{i}+\sum_{i=1}^{lte(h)}n_{i}^{\star}+n_p\right ] .\ ] ] on the right hand side we sum up all hydrogen level populations including the proton density , and is the number abundance ratio of species relative to hydrogen .we close the system of statistical equilibrium equations by invoking charge conservation .we denote the total number of chemical species with , the charge of ion with ( in units of the electron charge ) and write : =n_e .\ ] ] we introduce a vector comprising the occupation numbers of all nlte levels , . then the statistical equilibrium equation is written as : the gross structure of the rate matrix is of block matrix form , because transitions between levels occur within one ionization stage or to the ground state of the next ion .the structure is complicated by ionizations into excited levels and by the abundance definition and charge conservation equations which give additional non - zero elements in the corresponding lines of .radiative equilibrium denotes the fact that the energy transport is exclusively performed by photons .it can be enforced by adjusting the temperature stratification either during the linearization procedure or in between ali iterations . 
in the former case a linear combination of two different formulations is used and in the latter case a classical temperature correction procedure ( unsld - lucy ) , generalized to nlte problems , is utilized .the latter is particularly interesting , because it allows to exploit the blocked form of the rate coefficient matrix .this will enable an economic block - by - block solution followed by a subsequent unsld - lucy temperature correction step .on the other side , however , this correction procedure may decelerate the global convergence behavior of the ali iteration .the two forms of writing down the radiative equilibrium condition follow from the postulation that the energy emitted by a volume element per unit time is equal to the absorbed energy per unit time ( integral form ) : where scattering terms in and cancel out .this formulation is equivalent to invoking flux constancy throughout the atmosphere ( differential form ) involving the nominal flux ( eq.[nominal ] ) : where is the variable eddington factor , defined as and computed from the feautrier variable ( eq.[udef ] ) after the formal solution .as discussed e.g. in the differential form is more accurate at large depths , while the integral form behaves numerically better at small depths . instead of arbitrarily selecting that depth in the atmosphere where we switch from one formulation to the other, we use a linear combination of both constraint equations which guarantees a smooth transition with depth , based on physical grounds . before adding up both equations we have to take two measures . at firstwe divide eq.[int ] by the absorption mean of the opacity , , for scaling reasons : where is the true opacity without electron scattering . then we multiply eq.[diff ] with a similar average of the diagonal elements of the matrix : these two steps determine the relative weight of both equations in a particular depth .numerical experience shows that it is necessary to damp overcorrections by adding the following term , which is computed from quantities of the previous iteration step and which vanishes in the limit of convergence , to the right hand side of eq.[diff ] : we write the equation of radiative equilibrium in its final form : we note that explicit depth coupling is introduced by the differential form eq.[diff ] through the derivative even if a purely local operator is used .therefore the linearization procedure can no longer be performed independently at each depth point and the question becomes relevant at which boundary to start with .numerical experience shows that it is essential to start at the outer boundary and to continue going inwards .if a tri - diagonal operator is used , nearest neighbor depth coupling is introduced anyhow .the program user can choose either the linear combination eq.[combi ] or the purely integral form eq.[int ] , the latter may be necessary to start the iteration under certain circumstances . the linear combination ,however , is found to give a much faster convergence behavior . closely following lucy ( but avoiding the eddington approximation and using variable eddington factors instead ) and generalizing to nlte one can derive for each depth point a temperature correction to be applied to the actual temperature in order to achieve flux constancy . using ,the zeroth momentum ( i.e. angle averaged form ) of the radiation transfer eq.[te ] is : with from eq.[ali ] and the eddington flux . 
in the lte case with electron scattering, can be written as the sum of a thermal and a scattering contribution : in the nlte case we formally write in analogy : with quantities and which can be freely evaluated but which are not independent of each other , since must be expressed by in order to yield on the r.h.s . of eq.[gamma ] . with this substitution eq.[zmrt ] reads : integrating over frequencies , the condition of flux conservation then reads : where we used the following definitions for , , and : since we can choose freely , we can define which opacities shall contribute , finally resulting in a favorable scaling of factors in eq.[lucy ] .usually we start with all processes included in to begin with moderate corrections .following hauschildt ( priv .one can optionally exclude bound - bound or bound - free transitions which is necessary if strong lines or continua dominate numerically the radiative equilibrium in optically thin regions .note that this measure does not affect the solution in the case of convergence , but only the convergence rate .without such an acceleration , the unsld - lucy procedure may run into pseudo - convergence . integrating the first momentum of the radiation transfer equation over frequency we obtain : using eq.[zmrti ] and the depth integrated form of eq.[fmrti ] we proceed as described by lucy .we finally obtain , with frequency averaged eddington factors and as well as defined in analogy to eq.[mean ] , the temperature correction at any depth : \ ] ] where is the difference between the actual and the nominal eddington flux . in practiceit is useful to accelerate this procedure by extrapolating the last , say , ten corrections .the unsld - lucy procedure provides model atmospheres with a relative deviation from the flux constancy smaller than 10 which is a factor of ten better when compared to the procedure employing eq.[diff ] . due to the decoupling of the temperature from the statistical equilibriumthe unsld - lucy procedure is numerically much more stable allowing to calculate models which otherwise failed to converge .the price is a slower overall convergence of the ali iteration by a factor of two .we write the equation for hydrostatic equilibrium as : where is the surface gravity and the column mass . is the total pressure comprising gas , radiation and turbulent pressures , so that : with boltzmann s constant and the turbulent velocity .the hydrostatic equation may either be solved simultaneously with all other equations or separately in between iterations .the overall convergence behavior is usually the same in both cases .if taken into the linearization scheme and a local operator is used then , like in the case of the radiative equilibrium equation , explicit depth coupling enters via the depth derivative .again , solution of the linearized equations has to proceed inwards starting at the outer boundary . the starting value in the first depth point ( subscript ) is where is the variable eddington factor denoting the ratio of at the surface , kept fixed during linearization . the total particle density is the sum of electron density plus the population density of all atomic states , lte and nlte levelswe may write down the particle conservation equation in the following form that contains explicitly only the hydrogen population numbers : \sum_{k=1}^{natom}y_k .\ ] ] a fictitious massive particle density is introduced for notational convenience .it is defined by the mass of a chemical species in amu is denoted by . 
introducing the mass of a hydrogen atom ,we may simply write for the material density thermal opacity and emissivity are made up by atomic radiative bound - bound , bound - free and free - free transitions . for each chemical species we compute and sum up : \nonumber\end{aligned}\ ] ] where the total opacity includes thomson scattering , i.e. , and .\nonumber\end{aligned}\ ] ] the first index of variables marked with two indices denotes the ionization stage and the second one denotes the ionic level .thus denotes the cross - section for photoionization from level of ion into level of ion .the double summation over the bound - free continua takes into account the possibility that a particular level may be ionized into more than one level of the next high ion .again , note the definition of the lte population number in this case , which depends on the level of the parent ion : note also , that the concept of lte levels ( whose population densities do enter , e.g. the number or charge conservation equations ) in the atomic models of complex ions is therefore not unambiguous .the present code always assumes that lte levels in the model atoms are populated in lte with respect to the ground state of the upper ion .the source function used for the approximate radiation transfer is the ratio , thus , excludes thomson scattering . for the exact formal solution of course , the total opacity in the expression eq.[source ] includes the thomson term ( ) .as high - lying atomic levels are strongly perturbed by other charged particles in the plasma they are broadened and finally dissolved .this effect is observable by line merging at series limits and has to be accounted for in line profile analyses .moreover , line overlap couples the radiation field in many lines and flux blocking can strongly affect the global atmospheric structure .numerically , we treat the level dissolution in terms of occupation probabilities , which for lte plasmas can be defined as the ratio of the level populations to those in absence of perturbations . a phenomenological theory for these quantitieswas given in .the non - trivial generalization to nlte plasmas was performed by hubeny . in practicean individual occupation probability factor ( depending on , and principal quantum number ) , is applied to each atomic level which describes the probability that the level is dissolved .furthermore , the rate equations eq.[rates ] must be generalized in a unique and unambiguous manner .for details see . as an example, fig.[humi ] shows these occupation probabilities for hydrogen and helium levels as a function of depth in a white dwarf atmosphere .in all constraint equations described above the mean intensities are substituted by the approximate radiation field eq.[ali ] in order to eliminate these variables from the solution vector eq.[psi1 ] . in principlethe approximate lambda operator may be of arbitrary form as long as the iteration procedure converges . in practicehowever an optimum choice is desired in order to achieve convergence with a minimum amount of iteration steps .the history of the alos is interesting and was summarized in detail by hubeny . of utmost importance were two papers by olson and collaborators who overcame the major drawback of early alos , namely the occurrence of free parameters controlling the convergence process , and who found the optimum choice of alos .our model atmosphere program enables the use of either a diagonal or a tri - diagonal alo , both are set up following . 
in this case the mean intensity at a particular depth in the current iteration step is computed solely from the local source function and a correction term , the latter involving the source functions ( of all depths ) from the previous iteration . dropping the iteration count and introducing indices denoting depth points we can rewrite eq.[ali ] : in the discrete form we now think of as a matrix acting on a vector whose elements comprise the source functions of all depths .then is the diagonal element of the matrix corresponding to depth point .writing ( for numerical computation see eq.[matrix ] below ) we have a purely local expression for the mean intensity : much better convergence is obtained if the mean intensity is computed not only from the local source function but also from the source function of the neighboring depths points .then the matrix representation of is of tri - diagonal form and we may write where and represent the upper and lower subdiagonal elements of and the source functions at the adjacent depths . in analogythe correction term becomes again all quantities for the computation of are from the previous iteration and the first term denotes the exact formal solution of the transfer equation .we emphasize again that the actual source functions in eq.[trij ] are computed from the actual population densities and temperature which are unknown .we therefore have a non - linear set of equations which is solved by either a newton - raphson iteration or other techniques , resulting in the solution of a tri - diagonal linear equation of the form eq.[tri ] .as was shown in the elements of the optimum matrix are given by the corresponding elements of the exact matrix .the diagonal and subdiagonal elements are computed from : with . at large optical depths with increasing steps( the depth grid is equidistant in ) the subdiagonals and vanish and the diagonal approaches unity , resembling the fact that the radiation field is more and more determined by local properties of the matter . at very small optical depths all elements of vanish , reflecting the non - localness of the radiation field in this case .pro2 allows usage of an acceleration scheme to speed up convergence of the iteration cycle eq.[ali ] .we implemented the scheme originally proposed by ng .it extrapolates the correction vector from the previous three iterations . from our experiencethe extrapolation often yields over - corrections resulting in alternating convergence or even divergence . and usually the application of a tri - diagonal alo results in a satisfactorily fast convergence so that the acceleration scheme is rarely used .the complete set of non - linear equations for a single iteration step eq.[ali ] comprises at each depth the equations for statistical , radiative , and hydrostatic equilibrium and the particle conservation equation . 
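the ng extrapolation mentioned above can be written down in a few lines. the sketch below implements the common unweighted variant, which combines the four most recent iterates of a linearly converging vector sequence; implementations such as pro2 may include additional weighting of the scalar products, so this is only meant to convey the idea, and it is demonstrated on an artificial linear iteration rather than on a model atmosphere.

    import numpy as np

    def ng_accelerate(x3, x2, x1, x0):
        """Ng extrapolation from the four most recent iterates x0 (oldest) to
        x3 (newest); returns the accelerated estimate (unweighted variant)."""
        q1 = x3 - 2.0 * x2 + x1
        q2 = x3 - x2 - x1 + x0
        q3 = x3 - x2
        a11, a12, a22 = q1 @ q1, q1 @ q2, q2 @ q2
        b1, b2 = q1 @ q3, q2 @ q3
        det = a11 * a22 - a12 * a12
        if abs(det) < 1.0e-300:
            return x3                              # degenerate case: do nothing
        a = (a22 * b1 - a12 * b2) / det
        b = (a11 * b2 - a12 * b1) / det
        return (1.0 - a - b) * x3 + a * x2 + b * x1

    # demonstration on a slowly converging linear iteration x -> M x + c
    rng = np.random.default_rng(1)
    M = np.diag(rng.uniform(0.5, 0.98, 20))
    c = rng.uniform(0.0, 1.0, 20)
    x_exact = np.linalg.solve(np.eye(20) - M, c)

    def run(n_iter, accelerate):
        x, hist = np.zeros(20), []
        for it in range(n_iter):
            x = M @ x + c
            hist.append(x)
            if accelerate and it >= 3 and it % 5 == 0:
                x = ng_accelerate(hist[-1], hist[-2], hist[-3], hist[-4])
                hist[-1] = x
        return np.max(np.abs(x - x_exact))

    print("error without acceleration:", run(60, False))
    print("error with Ng acceleration :", run(60, True))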
for the numerical solutionwe introduce discrete depth and frequency grids .the equations are then linearized and solved by a suitable iterative scheme .explicit angle dependency of the radiation field is not required here and consequently eliminated by the use of variable eddington factors .angle dependency is only considered in the formal solution of the transfer equation .the program requires an input model atmosphere structure as a starting approximation together with an atomic data file , as well as a frequency grid .depth and frequency grids are therefore set up in advance by separate programs .a depth grid is set up by an auxiliary program which computes , starting from a gray approximation , a lte continuum model using the unsld - lucy temperature correction procedure . in this program depthpoints are set equidistantly on a logarithmic ( rosseland ) optical depth scale .the user may choose the inner and outer boundary points and the total number of grid points ( typically 90 ) .the converged lte model ( temperature and density structure , given on a column mass depth scale ) is written to a file that is read by pro2 .the nlte code uses the column mass as an independent depth variable .the frequency grid is established based upon the atomic data input file ( see sect.[atom ] ) .frequency points are set blue- and redward of each absorption edge and for each spectral line .gaps are filled up by setting continuum points .finally , the quadrature weights are computed .the user may change default options for this procedure .frequency integrals appearing e.g.in eq.[combi ] are replaced by quadrature sums and differential quotients involving depth derivatives by difference quotients .all variables are replaced by where denotes a small perturbation of .terms not linear in these perturbations are neglected .the perturbations are expressed by perturbations of the basic variables : as an illustrative example we linearize the equation for radiative equilibrium .most other linearized equations may be found in .assigning two indices ( for depth and for frequency of a grid with nf points ) to the variables and denoting the quadrature weights with eq.[combi ] becomes : +\delta\chi{_{di}}[s{_{di}}-j{_{di } } ] ) } \nonumber\\ & & + { \bar\lambda_j^\star}\sum_{i=1}^{nf } \frac{w_i}{\delta\tau_i}(\delta j{_{di}}f{_{di}}-\delta j{_{d-1,i}}f{_{d-1,i } } ) = f_0+{\bar\lambda_j^\star}{\cal h}- \nonumber\\ & & \sum_{i=1}^{nf}w_i\frac{\chi{_{di}}}{\bar\kappa_j}(s{_{di}}-j{_{di } } ) -{\bar\lambda_j^\star}\sum_{i=1}^{nf}\frac{w_i}{\delta\tau_i}(f{_{di}}j{_{di}}-f{_{d-1,i}}j{_{d-1,i } } ) .\end{aligned}\ ] ] note that we do not linearize .because of this , convergence properties may be significantly deteriorated in some cases .perturbations are expressed by eq.[deltax ] , and the perturbation of the mean intensity is , according to eq.[trij ] , given through the perturbations of the source function at the actual and the two adjacent depths : where are the matrix elements from eq.[matrix ] .the involve the term which is neglected because we only want to account for nearest neighbor coupling .we write with the help of eq.[deltax ] and observe that for any variable derivatives of opacity and emissivity with respect to temperature , electron and population densities are computed from analytical expressions ( see e.g. 
) .we finally get from eq.[lin ] : \right.\nonumber \\ & & \left.+{\bar\lambda_j^\star}\sum_{i}^{nf}\frac{w_i}{\delta\tau_i}(f{_{di}}b{_{di}}-f{_{d-1,i}}a{_{di } } ) \frac{\partial s{_{di}}}{\partial t } \right\}+ \nonumber\\ & & \delta t{_{d+1,i}}\left\ { \sum_{i}^{nf}-\frac{w_i}{\bar\kappa_j } \frac{\partial s{_{d+1,i}}}{\partial t}\chi{_{di}}a{_{d+1,i}}\right.\nonumber \\ & & \left .+ { \bar\lambda_j^\star}\sum_{i}^{nf}\frac{w_i}{\delta\tau_i } ( f{_{di}}a{_{d+1,i}}-f{_{d-1,i}}b{_{d+1,i}})\frac{\partial s{_{d+1,i}}}{\partial t } \right\}+ \nonumber \\ & & \delta n_{e_{d-1,i}}\{\cdots\ } + \delta n_{e_{d , i}}\{\cdots\ } + \delta n_{e_{d+1,i}}\{\cdots\}+ \nonumber \\ & & \sum_{l=1}^{nl}\delta n_{l_{d-1,i}}\{\cdots\ } + \sum_{l=1}^{nl}\delta n_{l_{d , i}}\{\cdots\ } + \sum_{l=1}^{nl}\delta n_{l_{d+1,i}}\{\cdots\ } \nonumber \\ & & = { \rm r.h.s.}\end{aligned}\ ] ] curly brackets denote terms that are similar to those multiplied with the perturbations of the temperature . instead of partial derivatives in respect to , they contain derivatives in respect to and the populations .they all represent coefficients of the matrices in eq.[tri ] . as described in sect.[eins2 ]the linearized equations have a tri - diagonal block - matrix form , see eq.[tri ] .inversion of the grand matrix ( sized , i.e. about in typical applications ) is performed with a block - gaussian elimination scheme , which means that our iteration of the non - linear equations represents a multi - dimensional newton - raphson method . the problem is structurally simplified when explicit depth coupling is avoided by the use of a local alo , however , the numerical effort is not much reduced , because in both cases the main effort lies with the inversion of matrices sized .the newton - raphson iteration involves two numerically expensive steps , first setting up the jacobian ( comprising ) and then inverting it .additionally , the matrix inversions in eq.[feau ] limit their size to about because otherwise numerical accuracy is lost .two variants recently introduced in stellar atmosphere calculations are able to improve both , numerical accuracy and , most of all , computational speed .broyden s variant belongs to the family of so - called quasi - newton methods and it was first used in model atmosphere calculations in .it avoids the repeated set - up of the jacobian by the use of an update formula . on top of this , it also gives an update formula for the _ inverse _ jacobian . in the case of a local alo the solution of the linearized system at any depth is be the -th iterate of the inverse jacobian , then an update can be found from : where denotes the dyadic product and where we have defined : the convergence rate is super - linear , i.e. slower than the quadratic rate of the newton - raphson method , but this is more than compensated by the tremendous speed - up for a single iteration step .it is not always necessary to begin the iteration with the calculation of an exact jacobian and its inversion .experience shows that in an advanced stage of the overall ( ali- ) iteration eq.[ali ] ( i.e. 
when corrections become small , of the order 1% ) we can start the linearization cycle eq.[xxx ] by using the inverse jacobian from the previous overall iteration .computational speed - up is extreme in this case , however , it requires storage of the jacobians of all depths .more difficult is the application to the tri - diagonal alo case .here we have to update the grand matrix which , as already mentioned , is of block tri - diagonal form .we can not update their inverse , because it is never computed explicitly .furthermore we need an update formula that preserves the block tri - diagonal form which is a prerequisite for its inversion by the feautrier scheme eq.[feau ] .such a formula was found by schubert : where with the structure matrix as defined by : the vectors and are defined as above but now they span over the quantities of all instead of a single depth point . with this formulawe obtain new submatrices and with which the feautrier scheme eq.[feau ] is solved again .this procedure saves the computation of derivatives .another feature realized in our program also saves the repeated inversion of by updating its inverse with the broyden formula eq.[xxx ] .similar to the diagonal alo case it is also possible to pass starting matrices from one overall iteration eq.[ali ] to the next for the update of and the matrix . in both casesthe user specifies two threshold values for the maximum relative correction in which cause the program to switch from newton - raphson to broyden stages 1 and 2 . during stage 1 each new overall cycle eq.[ali ]is started with an exact calculation and inversion of all matrices involved and in stage 2 these matrices are passed through each iteration .another variant , the kantorovich method was recently introduced into model atmosphere calculations .it is more simple and straightforward to implement .this method simply keeps fixed the jacobian during the linearization cycle and it is surprisingly stable .in fact it turns out to be even more stable ( i.e. it can be utilized in an earlier stage of iteration ) than the broyden method in the tri - diagonal alo case .the user of pro2 may choose this variant in two stages in analogy to the broyden variant .it was found that in the stage 2 it is necessary to update the jacobian , say , every 5 or 10 overall iterations in order to prevent divergence .despite the capacity increase for the nlte treatment of model atmosphere problems by introducing the ali method combined with pre - conditioning techniques , the blanketing by millions of lines from the iron group elements arising from transitions between some levels could only be attacked with the help of statistical methods .these have been introduced into nlte model atmosphere work by anderson . at the outset, model atoms are constructed by combining many thousand of levels into a relatively small number of superlevels which can be treated with ali ( or other ) methods .then , in order to reduce the computational effort , two approaches were developed which vastly decrease the number of frequency points ( and hence the number of transfer equations to be solved ) to describe properly the complex frequency dependence of the opacity .these two approaches have their roots in lte modeling techniques , where for the same reason statistical methods are applied for the opacity treatment : the opacity distribution function ( odf ) and opacity sampling ( os ) approaches . 
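to anticipate the superlevel construction detailed below: a band of closely spaced fine levels is replaced by a single nlte superlevel whose statistical weight and mean excitation energy are obtained from boltzmann - weighted averages at a fixed characteristic temperature. the python fragment sketches this averaging for a made - up band of iron - group levels; the level data, the chosen characteristic temperature and the exact normalization convention are assumptions for illustration and may differ from what pro2 actually uses.

    import numpy as np

    k_ev = 8.617333e-5      # Boltzmann constant [eV/K]

    def make_superlevel(E_ev, g, T_star):
        """Combined statistical weight and Boltzmann-averaged excitation energy
        of one superlevel (energy band), evaluated at a fixed characteristic
        temperature T_star."""
        w = g * np.exp(-E_ev / (k_ev * T_star))
        g_band = w.sum()
        E_band = np.sum(w * E_ev) / g_band
        return g_band, E_band

    # hypothetical band: 300 closely spaced levels between 4 and 5 eV
    rng = np.random.default_rng(0)
    E_levels = rng.uniform(4.0, 5.0, 300)
    g_levels = rng.integers(1, 13, 300).astype(float)

    g_band, E_band = make_superlevel(E_levels, g_levels, T_star=5.0e4)
    print(f"superlevel: g = {g_band:.1f}, E = {E_band:.3f} eV "
          f"(from {E_levels.size} fine levels)")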
both are based on the circumstance that the opacity ( in the lte approximation ) is a function of two only local thermodynamic quantities .roughly speaking , each opacity source can be written in terms of a population density and a photon cross - section for the respective radiative transition : + + + in lte the population follows from the saha - boltzmann equations , hence .the os and odf methods use such pre - tabulated ( on a very fine frequency mesh ) during the model atmosphere calculations .the nlte situation is more complicated , because pre - tabulation of opacities is not useful .the population densities at any depth now also depend explicitly on the radiation field ( via the rate equations which substitute the te saha - boltzmann statistics ) and thus on the populations in each other depth of the atmosphere . as a consequence , the os and odf methodsare not applied to opacity tabulations , but on tabulations of the photon cross - sections .these do depend on local quantities only , e.g. line broadening by stark and doppler effects is calculated from and . in the nlte casethe cross - section takes over the role which the opacity played in the lte case .so , strictly speaking , the designation os and odf is not quite correct in the nlte context .the strategy in our code is the following .before any model atmosphere calculation is started , the atomic data are prepared by constructing superlevels , and the cross - sections for superlines .then these cross - sections are either sampled on a coarse frequency grid or odfs are constructed .these data are put into the model atom which is read by the code .the code does not know if os or odfs are used , i.e. it is written to be independent of any of these approaches .the large number of atomic levels in a single ionization stage is grouped into a small number of typically 1020 superlevels or , energy bands .grouping is performed by inspecting a level diagram ( fig.[levelfig ] ) which shows the number of levels ( times their statistical weight ) per energy bin as a function of excitation energy .gaps and peaks in this distribution are used to define energy bands .each of these bands is then treated as a single nlte level with suitably averaged statistical weight and energy .all individual lines connecting levels out of two distinct bands are combined to a band - band transition with a so - called complex photon cross - section .this cross - section essentially is a sum of all individual line profiles which however conserves the exact location of the lines in the frequency spectrum .this co - addition is performed once and for all and on a very fine frequency mesh to account for the profile shape of every line , before any model atmosphere calculation begins .these complex cross - sections ( examples are seen in the top panels of figs.[xfig ] and [ odffig ] ) are tabulated and later used to construct odfs or to perform os for the model calculations .= 9.1 cm each of the model bands is treated as one single nlte level with an average energy and statistical weight which are computed from all the individual levels ( , ) within a particular band : where . 
is a characteristic temperature , pre - chosen and fixed throughout the model calculations , and at which the ionization stage in question is most strongly populated .energy levels of all iron group elements in the same ionization stage contribute to these model bands according to their abundance .all individual line transitions with cross - sections between two model bands and are combined to one complex band - band transition with a cross - section as described by : is the normalized profile of an individual line .this means that all individual lines are correctly accounted for in a sense that their real position within the frequency spectrum is not affected by the introduction of atomic model bands .the complex cross - sections ( each possibly representing many thousand individual lines ) are computed in _ advance _ of the model atmosphere calculations on a fine frequency grid with a resolution smaller than one thermal doppler width ( typically ) .this is done at two values for the electron density ( and ) and the nlte code accounts for depth dependent electron collisional broadening by interpolation .individual line photon cross - sections are represented by voigt profiles including stark broadening .collisional excitation rates between atomic model bands are treated with a generalized van regemorter formula : where $ ] . is the ionization potential of hydrogen ( in electron volts ) , is the first exponential integral and a constant depending on the ionic charge .the involve the f - values of all individual lines and they are computed together with the radiative cross - sections .third degree polynomials in are fitted to and the coefficients are written into the atomic input data file for the nlte code .photoionization cross - sections for iron group elements have numerous strong resonances that are difficult to deal with . as a first approximation one can calculate hydrogen - like cross - sections for the individual levels and combine them to a complex ionization cross - section for every model band : this cross - section is stored in a file and read by the code . other data to be used alternatively ( available e.g. from the opacity project )may easily be prepared and stored in such a file by the user . for collisionalionization one may select seaton s formula with a mean ( hydrogen - like ) ionization cross - section .the os or , alternatively , odf approaches are introduced merely in order to save computing time during the model atmosphere calculations . in principleit is possible to proceed directly with the complex cross - sections constructed as described above .however , this would require a very fine frequency mesh over the entire spectrum in order to discretize the cross - sections in a similar detailed manner , resulting in some frequency points .since computation time scales linearly with the number of frequency points in the ali formalism , a reduction to some thousand or ten thousand points easily reduces the computational effort by an order of magnitude .opacity sampling is the more straightforward approach .the fine cross - section is sampled by a coarse frequency grid and the resulting coarse cross - section is used for the model calculation ( fig.[xfig ] ) .individual lines are no longer accounted for in an exact way , but this is not necessary in order to account for the line blanketing effects , i.e. effects of metal lines on the global atmospheric structure like surface cooling and backwarming of deeper layers . 
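the two reduction schemes, opacity sampling and the distribution - function variant described below, are easy to mimic on synthetic data. in the sketch that follows, a "complex" cross - section built from many overlapping gaussian line profiles is (a) sampled on a coarse frequency grid and (b) sorted and binned into a monotonic step function whose bins are then reshuffled; the closing lines perform a simple check of the frequency integral. all numbers (grid, line strengths, number of bins) are artificial assumptions, not data from the actual model atoms.

    import numpy as np

    rng = np.random.default_rng(2)

    # fine frequency grid (in units of the thermal Doppler width) and a synthetic
    # "complex" cross-section made of many overlapping Gaussian line profiles
    x_fine = np.linspace(0.0, 400.0, 40001)
    centers = rng.uniform(0.0, 400.0, 600)
    strengths = 10.0 ** rng.uniform(-3.0, 1.0, 600)
    sigma_fine = np.zeros_like(x_fine)
    for xc, s in zip(centers, strengths):
        sigma_fine += s * np.exp(-(x_fine - xc) ** 2)

    # opacity sampling: keep the cross-section only at a coarse subset of points
    step = 100
    x_os, sigma_os = x_fine[::step], sigma_fine[::step]

    # opacity distribution function: sort the fine cross-section, represent it by
    # a small number of bins (a monotonic step function), then reshuffle the bins
    nbin = 12
    bins = np.array_split(np.sort(sigma_fine), nbin)
    odf = np.array([b.mean() for b in bins])
    odf_reshuffled = rng.permutation(odf)

    # quality check: compare the frequency integral of the fine cross-section with
    # the two coarse representations (renormalize if the mismatch is large)
    int_fine = np.trapz(sigma_fine, x_fine)
    int_os = np.trapz(sigma_os, x_os)
    int_odf = odf.mean() * (x_fine[-1] - x_fine[0])
    print(f"integral fine/sampled/odf: {int_fine:.1f} / {int_os:.1f} / {int_odf:.1f}")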
a high resolutionsynthetic spectrum can be obtained easily after model construction by performing a single solution of the radiation transfer equation on a fine frequency mesh .= 9.1 cm the quality of the sampling procedure can be checked by a quadrature of the cross - section on the frequency grid ( with weights ) : renormalization may be performed if necessary .this reduction of the cross - sections by sampling is also performed before the model calculations begin .the alternative way is the construction of opacity distribution functions ( or , more correctly , cross - section distribution functions ) .each complex cross - section is re - ordered in such a way that the resulting odf is a monotonous function ( see fig.[odffig ] , middle panel ) .the resulting smooth run of the cross - section over frequency can be approximated by a simple step function with typically one dozen of intervals .this cross - section is then fed into the model atmosphere code which can use a coarse frequency mesh to appropriately incorporate the odfs . in order to avoid unrealistic systematic effects , however ,the cross - section bins within each odf are re - shuffled randomly ( bottom panel of fig.[odffig ] ) .= 9.1 cm many numerical tests concerning model atom construction with superlevels were performed by studying the effects of details in band definition and widths . also , the resulting model atmospheres using odf and os approaches were compared and generally , good agreement was found .all recent progress in stellar atmosphere modeling would have been impossible without the availability of atomic input data .major data sources were put at public disposal by the opacity project and the work by kurucz .these sources provide energy levels , transition probabilities , and bound - free photon cross - sections .the iron project is delivering electron collision strengths which are important for nlte calculations and which were hardly available up to now .we can not over - emphasize these vital contributions to our work .the atomic species that are to be included in the model atmosphere calculations are entirely determined by an atomic data input file . for each ionization stage of any chemical elementthe user defines atomic levels by the ionization potential and statistical weight and assigning a name ( character string ) to them .these level names are used to define radiative and collisional bound - bound and bound - free transitions among the levels as well as free - free opacities .the declaration of such transitions is generally complemented by a number which specifies the formula which pro2 shall use to calculate cross - sections for the rates and opacities .depending on the formula chosen by the user , additional input data are occasionally expected , such like oscillator strengths or line broadening data .alternatively , photon cross - sections for lines and continua may be read from external files whose names need to be declared with the definition of the transitions .construction of model atoms involving large datasets , e.g. from the opacity project , is automated and requires a minimum of work by the user .the interested reader is referred to a comprehensive user s guide for pro2 available from the authors .one important motivation for developing and applying the ali method for stellar atmospheres was the unsolved problem of nlte metal line blanketing in hot stars .we want to focus here on two topics which highlight the successful application of the new models . 
the first concerns the balmer line problem which until recently appeared to be a fundamental drawback of nlte models .the second example describes the abundance determination of iron group elements in evolved compact stars by constructing self - consistent models which can reproduce simultaneously the observed spectral properties of white dwarfs and subdwarf o stars ( sdo ) from the optical region through the extreme ultraviolet regime .fitting synthetic profiles to observed balmer lines is the principal ingredient of most spectroscopic analyses .the so - called balmer line problem represents the failure to achieve a consistent fit to the hydrogen balmer line spectrum of any hot sdo star whose effective temperature exceeds about 70000k .results of determinations drastically differ , up to a factor of two , depending on which particular line is fitted .this problem was uncovered a few years ago during nlte analyses of very hot subdwarfs and central stars of planetary nebulae . since then, it cast severe doubt upon nlte model atmosphere analysis techniques as a whole . with new available models computed with the ali method we were able to demonstrate that the problem is due to the neglect or improper inclusion of metal opacities .we showed that the balmer line problem can be solved when surface cooling by photon escape from the stark wings of lines from the c , n , o elements is accounted for ( see figs.[tempfig ] and [ bd28fig ] ) . the optical spectra of hot white dwarfs and sdo stars are dominated by helium and/or hydrogen lines .metals are highly ionized and their spectral lines are almost exclusively located in the uv and extreme uv regions .high resolution spectroscopy with the international ultraviolet explorer ( iue ) has revealed a wealth of spectral features from iron and nickel which , however , could not be analyzed because of the lack of appropriate nlte calculations .first attempts for quantitative analyses were performed with line formation calculations on pre - specified temperature and pressure model structures which in turn were obtained from simplified lte or nlte models , i.e. disregarding metal line blanketing effects .subsequently fully line blanketed lte models were employed , however , nlte effects turned out to be non - negligible .our latest models include 1.5 million lines from the iron group elements , which are taken from kurucz s line list . 

as an example for the quality of the fits we can achieve, fig.[feige67fig] shows a portion of the uv spectrum of the sdo star feige 67 and the best fitting model. the derived abundances suggest that radiative levitation is responsible for the extraordinarily high heavy element abundances in these stars. we have described in detail the numerical solution of the classical model atmosphere problem. the construction of metal line blanketed models in hydrostatic and radiative equilibrium under nlte conditions was the last and long-standing problem of classical model atmosphere theory and it is finally solved with a high degree of sophistication. application of these models leads to highly successful analyses of hot compact stars. spectral properties from the extreme uv through the optical region are for the first time correctly reproduced by these models. the essential milestones for this development, starting from the pioneering work of auer & mihalas, are:

* introduction of the accelerated lambda iteration (ali, or ``operator splitting'' methods), based upon early work by cannon and scharmer. first ali model atmospheres were constructed by werner.

* introduction of statistical approaches to treat the iron group elements in nlte by anderson.

* computation of atomic data by kurucz, by the opacity project and subsequent improvements, and by the iron project.

we would like to thank wolf-rainer hamann, ulrich heber, ivan hubeny and thomas rauch for discussions, help, and contributions when we developed the pro2 code. we thank ivan hubeny for carefully reading the manuscript, which helped to improve this paper. this work was funded during the recent years by the dfg and dara/dlr through several grants.

werner k., rauch t., dreizler s. 1998, a user's guide for the pro2 nlte model atmosphere program package, internal report, institut für astronomie und astrophysik, universität tübingen (http://astro.uni-tuebingen.de/)
|
we introduce the classical stellar atmosphere problem and describe in detail its numerical solution . the problem consists of the solution of the radiation transfer equations under the constraints of hydrostatic , radiative and statistical equilibrium ( non - lte ) . we outline the basic idea of the accelerated lambda iteration ( ali ) technique and statistical methods which finally allow the construction of non - lte model atmospheres considering the influence of millions of metal absorption lines . some applications of the new models are presented .
|
some time ago bessis and bessis ( bb from now on ) proposed a new perturbation approach in quantum mechanics based on the application of a factorization method to a riccati equation derived from the schrdinger one .they obtained reasonable results for some of the energies of the quartic anharmonic oscillator by means of perturbation series of order fourth and sixth , without resorting to a resummation method . in spite of this success ,bb s method has passed unnoticed as far as we know .the purpose of this paper is to investigate bb s perturbation method in more detail . in section [ sec : method ] we write it in a quite general way and derive other approaches as particular cases . in section [ sec : results ] we carry out perturbation calculations of sufficiently large order and try to find out numerical evidence of convergence .one dimensional anharmonic oscillators prove to be a suitable benchmark for present numerical tests . for simplicitywe restrict to ground states and choose straightforward logarithmic perturbation theory instead of the factorization method proposed by bb .finally , in section [ sec : conclusions ] we discuss the results and draw some conclusions .in standard rayleigh schrdinger perturbation theory we try to solve the eigenvalue equation by expanding the energy and eigenfunction in a taylor series about .this method is practical provided that we can solve the eigenvalue equation for . in some casesit is more convenient to construct a parameter dependent hamiltonian operator that one can expand in a taylor series about in such a way that we can solve the eigenvalue equation for . in this casewe expand the eigenfunctions and eigenvalues in taylor series : there are many practical examples of application of this alternative approach .in particular , bb suggested the following form of : comparing equations ( [ eq : h(beta)_series ] ) and ( [ eq : h(beta)_bessis ] ) we conclude that in principle there is enormous flexibility in the choice of the operator coefficients as we show below by derivation of two known particular cases . if we restrict the expansion ( [ eq : h(beta)_bessis ] ) to just one term then .choosing , where is a parameter dependent hamiltonian operator with known eigenvalues and eigenfunctions , we obtain the method proposed by killingbeck some time ago . the main strategy behind this approachis to choose an appropriate value of the adjustable parameter leading to a renormalized perturbation series with the best convergence properties .if we consider two terms of the form and , then we derive the hamiltonian operator +\beta ^{2}\lambda \hat{h}^{\prime } ] , then it is reasonable to truncate the perturbation series so that is as small as possible . in this casewe find that is the energy coefficient with the smallest absolute value so that our best estimate is }=1.06036215 ] is reasonably close to the exact eigenvalue in table [ tab:1 ] . in principle, it is not surprising that perturbation theory yields poorer results for than for . 
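the truncation rule used above — stop the sum at the term of smallest magnitude — is the standard recipe for the optimal truncation of an asymptotic series and is trivial to automate. the python fragment below applies it to a made - up, factorially divergent model series; the coefficients are purely illustrative and are not the logarithmic - perturbation coefficients computed in this work.

    import numpy as np
    from math import factorial

    def optimal_truncation(coeffs, beta=1.0):
        """Sum the series sum_k coeffs[k]*beta**k up to (and including) the term
        of smallest absolute value, the usual prescription for an asymptotic series."""
        terms = np.asarray(coeffs, dtype=float) * beta ** np.arange(len(coeffs))
        n_opt = int(np.argmin(np.abs(terms)))
        return terms[: n_opt + 1].sum(), n_opt

    # model series with factorially growing, sign-alternating coefficients
    k = np.arange(25)
    e_k = np.array([(-1.0) ** j * factorial(j) / 10.0 ** j for j in k])

    estimate, n_opt = optimal_truncation(e_k, beta=1.0)
    print(f"series truncated after order {n_opt}: estimate = {estimate:.8f}")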
for the pure octic anharmonic oscillator look for polynomial solutions of the form in this case we have calculated less perturbation coefficients because they require more computer memory and time .surprisingly , the values of latexmath:[ ] which is quite close to the exact one in table [ tab:1 ] .the surprising fact that the convergence properties of the perturbation series are clearly poorer for than for suggests that there should be better solutions for the former case .if we try then the values of are smaller than those obtained earlier ( compare with in figure [ fig:1 ] ) . the coefficient with the smallest absolute value is and our best estimate results to be }=1.1448015 ] on the perturbation series for the cases , , and and show results in table [ tab:1 ] .notice that the pad approximants sum the series to a great accuracy but they are less efficient for and .this is exactly what is known to happen with the standard perturbation series for anharmonic oscillators .however , pad approximants appear to improve the accuracy of present perturbation results in all the cases discussed above .present numerical investigation on the perturbation method proposed by bessis and bessis suggests that although the series may be divergent they are much more accurate than those derived from standard rayleigh schrdinger perturbation theory .one obtains reasonable eigenvalues for difficult anharmonic problems of the form , and results deteriorate much less dramatically than those from the standard rayleigh schrdinger perturbation series as the anharmonicity exponent increases . in order to facilitate the calculation of perturbation corrections of sufficiently large order we restricted our analysis to polynomial solutions that are suitable for the ground state .the treatment of rational solutions for excited states ( like those considered by bb ) is straightfordward but increasingly more demanding .following bb we have implemented perturbation theory by transformation of the linear schrdinger equation into a nonlinear riccati one . in this way the appropriate form of each potential coefficient reveals itself more clearly as shown in section [ sec : results ] for the quartic model .however , in principle one can resort to any convenient algorithm because the perturbation method is sufficiently general as shown in section 2 .a remarkable advantage of the method of bb , which may not be so clear in their paper , is its extraordinary flexibility as shown by the two solutions obtained above for the case .moreover , the method of bb , unlike two other renormalization approaches derived above as particular cases , does not require and adjustable parameter to give acceptable results .
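the two devices used above, truncating the divergent series at its smallest term and resumming it with padé approximants, can be illustrated on a self-contained toy model; the euler series below is only a stand-in, since the actual perturbation coefficients of the paper are not reproduced here.

```python
import numpy as np
from math import factorial
from scipy import integrate
from scipy.interpolate import pade

# toy divergent (asymptotic) series: f(x) = int_0^inf e^{-t}/(1+x t) dt ~ sum_n (-1)^n n! x^n
x = 0.2
exact, _ = integrate.quad(lambda t: np.exp(-t) / (1.0 + x * t), 0.0, np.inf)

coeffs = [(-1.0) ** n * factorial(n) for n in range(21)]   # taylor coefficients c_n
terms = [c * x ** n for n, c in enumerate(coeffs)]

# optimal truncation: keep terms up to (and including) the smallest one in magnitude
n_opt = int(np.argmin(np.abs(terms)))
truncated = sum(terms[: n_opt + 1])

# diagonal [10/10] pade approximant built from the same coefficients
p, q = pade(coeffs, 10)
resummed = p(x) / q(x)

print(f"exact {exact:.8f}   truncated {truncated:.8f}   pade {resummed:.8f}")
```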
|
we investigate the convergence properties of a perturbation method proposed some time ago and reveal some of its most interesting features. anharmonic oscillators in the strong coupling limit prove to be appropriate illustrative examples and benchmarks.
|
this section is divided into two parts .the first provides a broad historical perspective and the second illustrates key physical and conceptual problems of quantum gravity .general relativity and quantum theory are among the greatest intellectual achievements of the 20th century .each of them has profoundly altered the conceptual fabric that underlies our understanding of the physical world .furthermore , each has been successful in describing the physical phenomena in its own domain to an astonishing degree of accuracy . andyet , they offer us _strikingly _ different pictures of physical reality . indeed , at first one is surprised that physics could keep progressing blissfully in the face of so deep a conflict .the reason of course is the ` accidental ' fact that the values of fundamental constants in our universe conspire to make the planck length so small and planck energy so high compared to laboratory scales . it is because of this that we can happily maintain a schizophrenic attitude and use the precise , geometric picture of reality offered by general relativity while dealing with cosmological and astrophysical phenomena , and the quantum - mechanical world of chance and intrinsic uncertainties while dealing with atomic and subatomic particles .this strategy is of course quite appropriate as a practical stand .but it is highly unsatisfactory from a conceptual viewpoint .everything in our past experience in physics tells us that the two pictures we currently use must be approximations , special cases that arise as appropriate limits of a single , universal theory .that theory must therefore represent a synthesis of general relativity and quantum mechanics .this would be the quantum theory of gravity .not only should it correctly describe all the known physical phenomena , but it should also adequately handle the planck regime .this is the theory that we invoke when faced with phenomena , such as the big bang and the final state of black holes , where the worlds of general relativity and quantum mechanics must unavoidably meet .the necessity of a quantum theory of gravity was pointed out by einstein already in a 1916 paper in the preussische akademie sitzungsberichte .he wrote : * _ nevertheless , due to the inneratomic movement of electrons , atoms would have to radiate not only electromagnetic but also gravitational energy , if only in tiny amounts .as this is hardly true in nature , it appears that quantum theory would have to modify not only maxwellian electrodynamics but also the new theory of gravitation ._ papers on the subject began to appear in the thirties most notably by bronstein , rosenfeld and pauli .however , detailed work began only in the sixties .the general developments since then loosely represent four stages , each spanning roughly a decade . in this section, i will present a sketch these developments .first , there was the beginning : exploration .the goal was to do unto gravity as one would do unto any other physical field .the electromagnetic field had been successfully quantized using two approaches : canonical and covariant . in the canonical approach ,electric and magnetic fields obeying heisenberg s uncertainty principle are at the forefront , and quantum states naturally arise as gauge - invariant functionals of the vector potential on a spatial three - slice . 
in the covariant approach on the on the other hand ,one first isolates and then quantizes the two radiative modes of the maxwell field in space - time , without carrying out a ( 3 + 1)-decomposition , and the quantum states naturally arise as elements of the fock space of photons .attempts were made to extend these techniques to general relativity . in the electromagnetic casethe two methods are completely equivalent .only the emphasis changes in going from one to another . in the gravitational case ,however , the difference is _profound_. this is not accidental .the reason is deeply rooted in one of the essential features of general relativity , namely the dual role of the space - time metric . to appreciate this point , let us begin with field theories in minkowski space - time , say maxwell s theory to be specific . here , the basic dynamical field is represented by a tensor field on minkowski space . the space - time geometry provides the kinematical arena on which the field propagates .the background , minkowskian metric provides light cones and the notion of causality .we can foliate this space - time by a one - parameter family of space - like three - planes , and analyze how the values of electric and magnetic fields on one of these surfaces determine those on any other surface .the isometries of the minkowski metric let us construct physical quantities such as fluxes of energy , momentum , and angular momentum carried by electromagnetic waves . in general relativity , by contrast, there is no background geometry .the space - time metric itself is the fundamental dynamical variable . on the one hand it is analogous to the minkowski metric in maxwell s theory ; it determines space - time geometry , provides light cones , defines causality , and dictates the propagation of all physical fields ( including itself ) . on the other handit is the analog of the newtonian gravitational potential and therefore the basic dynamical entity of the theory , similar in this respect to the of the maxwell theory .this dual role of the metric is in effect a precise statement of the equivalence principle that is at the heart of general relativity .it is this feature that is largely responsible for the powerful conceptual economy of general relativity , its elegance and its aesthetic beauty , its strangeness in proportion .however , this feature also brings with it a host of problems .we see already in the classical theory several manifestations of these difficulties .it is because there is no background geometry , for example , that it is so difficult to analyze singularities of the theory and to define the energy and momentum carried by gravitational waves .since there is no a priori space - time , to introduce notions as basic as causality , time , and evolution , one must first solve the dynamical equations and _ construct _ a space - time . 
as an extreme example ,consider black holes , whose definition requires the knowledge of the causal structure of the entire space - time .to find if the given initial conditions lead to the formation of a black hole , one must first obtain their maximal evolution and , using the causal structure determined by that solution , ask if its future infinity has a past boundary .if it does , space - time contains a black hole and the boundary is its event horizon .thus , because there is no longer a clean separation between the kinematical arena and dynamics , in the classical theory substantial care and effort is needed even in the formulation of basic physical questions . in quantum theorythe problems become significantly more serious . to see this , recall first that , because of the uncertainty principle , already in non - relativistic quantum mechanics , particles do not have well - defined trajectories ; time - evolution only produces a probability amplitude , , rather than a specific trajectory , . similarly , in quantum gravity , even after evolving an initial state, one would not be left with a specific space - time . in the absence of a space - time geometry ,how is one to introduce even habitual physical notions such as causality , time , scattering states , and black holes ?the canonical and the covariant approaches have adopted dramatically different attitudes to face these problems . in the canonical approach , one notices that , in spite of the conceptual difficulties mentioned above , the hamiltonian formulation of general relativity is well - defined and attempts to use it as a stepping stone to quantization .the fundamental canonical commutation relations are to lead us to the basic uncertainty principle .the motion generated by the hamiltonian is to be thought of as time evolution .the fact that certain operators on the fixed ( ` spatial ' ) three - manifold commute is supposed to capture the appropriate notion of causality .the emphasis is on preserving the geometrical character of general relativity , on retaining the compelling fusion of gravity and geometry that einstein created . in the first stage of the program , completed in the early sixties , the hamiltonian formulation of the classical theorywas worked out in detail by dirac , bergmann , arnowitt , deser and misner and others .the basic canonical variable was the 3-metric on a spatial slice .the ten einstein s equations naturally decompose into two sets : four constraints on the metric and its conjugate momentum ( analogous to the equation of electrodynamics ) and six evolution equations .thus , in the hamiltonian formulation , general relativity could be interpreted as the dynamical theory of 3-geometries .wheeler therefore baptized it _ geometrodynamics _ . 
in the second stage , this framework was used as a point of departure for quantum theory .the basic equations of the quantum theory were written down and several important questions were addressed .wheeler also launched an ambitious program in which the internal quantum numbers of elementary particles were to arise from non - trivial , microscopic topological configurations and particle physics was to be recast as ` chemistry of geometry ' .however , most of the work in quantum geometrodynamics continued to remain formal ; indeed , even today the field theoretic difficulties associated with the presence of an _ infinite number of degrees of freedom _ remain unresolved .furthermore , even at the formal level , is has been difficult to solve the quantum einstein s equations .therefore , after an initial burst of activity , the quantum geometrodynamics program became stagnant .interesting results have been obtained in the limited context of quantum cosmology where one freezes all but a finite number of degrees of freedom . however , even in this special case , the initial singularity could not be resolved without additional ` external ' inputs into the theory .sociologically , the program faced another limitation : concepts and techniques which had been so successful in quantum electrodynamics appeared to play no role here .in particular , in quantum geometrodynamics , it is hard to see how gravitons are to emerge , how scattering matrices are to be computed , how feynman diagrams are to dictate dynamics and virtual processes are to give radiative corrections . to use a well - known phrase , the emphasis on geometry in the canonical program drove a wedge between general relativity and the theory of elementary particles . " in the covariant approach the emphasis is just the opposite .field - theoretic techniques are put at the forefront .the first step in this program is to split the space - time metric in two parts , , where is to be a background , kinematical metric , often chosen to be flat , is newton s constant , and , the deviation of the physical metric from the chosen background , the dynamical field .the two roles of the metric tensor are now split .the overall attitude is that this sacrifice of the fusion of gravity and geometry is a moderate price to pay for ushering - in the powerful machinery of perturbative quantum field theory .indeed , with this splitting most of the conceptual problems discussed above seem to melt away .thus , in the transition to the quantum theory it is only that is quantized .quanta of this field propagate on the classical background space - time with metric .if the background is in fact chosen to be flat , one can use the casimir operators of the poincar group and show that the quanta have spin two and rest mass zero .these are the gravitons .the einstein - hilbert lagrangian tells us how they interact with one another .thus , in this program , quantum general relativity was first reduced to a quantum field theory in minkowski space .one could apply to it all the machinery of perturbation theory that had been so successful in particle physics .one now had a definite program to compute amplitudes for various scattering processes .unruly gravity appeared to be tamed and forced to fit into the mold created to describe other forces of nature .thus , the covariant quantization program was more in tune with the mainstream developments in physics at the time . 
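for reference, the split of the metric described above is usually written in the following schematic form; the precise normalization of the coupling in front of the deviation varies between authors and is an assumption here.

```latex
% perturbative split of the space-time metric around a fixed background
\[ g_{\mu\nu} \;=\; \eta_{\mu\nu} \;+\; \sqrt{G}\,h_{\mu\nu} , \]
% \eta_{\mu\nu}: background (often flat) kinematical metric; G: newton's constant;
% h_{\mu\nu}: the deviation from the background, i.e. the field that is quantized.
```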
in 1963 feynman extended perturbative methods from quantum electrodynamics to gravity .a few years later dewitt carried this analysis to completion by systematically formulating the feynman rules for calculating scattering amplitudes among gravitons and between gravitons and matter quanta .he showed that the theory is unitary order by order in the perturbative expansion . by the early seventies ,the covariant approach had led to several concrete results .consequently , the second stage of the covariant program began with great enthusiasm and hope .the motto was : go forth , perturb , and expand .the enthusiasm was first generated by the discovery that yang - mills theory coupled to fermions is renormalizable ( if the masses of gauge particles are generated by a spontaneous symmetry - breaking mechanism ) .this led to a successful theory of electroweak interactions .particle physics witnessed a renaissance of quantum field theory .the enthusiasm spilled over to gravity .courageous calculations were performed to estimate radiative corrections .unfortunately , however , this research soon ran into its first road block .the theory was shown to be non - renormalizable when two loop effects are taken into account for pure gravity and already at one loop for gravity coupled with matter . to appreciate the significance of this result ,let us return to the quantum theory of photons and electrons .this theory is perturbatively renormalizable .this means that , although individual terms in the perturbation expansion of a physical amplitude may diverge due to radiative corrections involving closed loops of virtual particles , these infinities are of a specific type ; they can be systematically absorbed in the values of free parameters of the theory , the fine structure constant and the electron mass .thus , by renormalizing these parameters , individual terms in the perturbation series can be systematically rendered finite . in quantum general relativity ,such a systematic procedure is not available ; infinities that arise due to radiative corrections are genuinely troublesome .put differently , quantum theory acquires an infinite number of undetermined parameters .although one can still use it as an effective theory in the low energy regime , regarded as a fundamental theory , it has no predictive power at all ! buoyed , however , by the success of perturbative methods in electroweak interactions , the community was reluctant to give them up in the gravitational case .in the case of weak interactions , it was known for some time that the observed low energy phenomena could be explained using fermi s simple four - point interaction .the problem was that this fermi model led to a non - renormalizable theory .the correct , renormalizable model of glashow , weinberg and salam agrees with fermi s at low energies but marshals new processes at high energies which improve the ultraviolet behavior of the theory .it was therefore natural to hope that the situation would be similar in quantum gravity .general relativity , in this analogy , would be similar to fermi s model .the fact that it is not renormalizable was taken to mean that it ignores important processes at high energies which are , however , unimportant at low energies , i.e. , at large distances .thus , the idea was that the correct theory of gravity would differ from general relativity but only at high energies , i.e. , near the planck regime . 
with this aim, higher derivative terms were added to the einstein - hilbert lagrangian .if the relative coupling constants are chosen judiciously , the resulting theory does in fact have a better ultraviolet behavior .stelle , tamboulis and others showed that the theory is not only renormalizable but asymptotically free ; it resembles the free theory in the high energy limit .thus , the initial hope of ` curing ' quantum general relativity was in fact realized .however , it turned out that the hamiltonian of this theory is unbounded from below , and consequently the theory is drastically unstable ! in particular , it violates unitarity ; probability fails to be conserved .the success of the electroweak theory suggested a second line of attack . in the approaches discussed above, gravity was considered in isolation .the successful unification of electromagnetic and weak interactions suggested the possibility that a consistent theory would result only when gravity is coupled with suitably chosen matter .the most striking implementation of this viewpoint occurred in supergravity . here , the hope was that the bosonic infinities of the gravitational field would be cancelled by those of suitably chosen fermionic sources , giving us a renormalizable quantum theory of gravity .much effort went into the analysis of the possibility that the most sophisticated of these theories supergravity can be employed as a genuine grand unified theory .supergravity was likely to be the final theory . ]it turned out that some cancellation of infinities does occur and that supergravity is indeed renormalizable to two loops even though it contains matter fields coupled to gravity .furthermore , its hamiltonian is manifestly positive and the theory is unitary .however , it is believed that at fifth and higher loops it is again non - renormalizable . by and large ,the canonical approach was pursued by relativists and the covariant approach by particle physicists . in the mid eighties, both approaches received unexpected boosts .these launched the third phase in the development of quantum gravity .a group of particle physicists had been studying string theory to analyze strong interactions from a novel angle .the idea was to replace point particles by 1-dimensional extended objects strings and associate particle - like states with various modes of excitations of the string .initially there was an embarrassment : in addition to the spin-1 modes characteristic of gauge theories , string theory included also a spin-2 , massless excitation .but it was soon realized that this was a blessing in disguise : the theory automatically incorporated a graviton . in this sense, gravity was already built into the theory ! however , it was known that the theory had a potential quantum anomaly which threatened to make it inconsistent . in the mid - eighties ,greene and schwarz showed that there is an anomaly cancellation and perturbative string theory could be consistent in certain space - time dimensions 26 for a purely bosonic string and 10 for a superstring . 
since strings were assumed to live in a flat background space - time , one could apply perturbative techniques .however , in this reincarnation , the covariant approach underwent a dramatic revision .since it is a theory of extended objects rather than point particles , the quantum theory has brand new elements ; it is no longer a local quantum field theory .the field theoretic feynman diagrams are replaced by world - sheet diagrams .this replacement dramatically improves the ultraviolet behavior and , although explicit calculations have been carried out only at 2 or 3 loop order , it is widely believed that the perturbation theory is _ finite _ to all orders ; it does not even have to be renormalized .the theory is also unitary .it has a single , new fundamental constant the string tension and , since various excited modes of the string represent different particles , there is a built - in principle for unification of all interactions ! from the viewpoint of local quantum field theories that particle physicists have used in studying electroweak and strong interactions , this mathematical structure seems almost magical . therefore there is a hope in the string community that this theory would encompass all of fundamental physics ; it would be the ` theory of everything ' .unfortunately , it soon became clear that string perturbation theory also faces some serious limitations .perturbative finiteness would imply that each term in the perturbation series is ultra - violet finite .however gross and periwal have shown that in the case of bosonic strings , when summed , the series diverges and does so uncontrollably .( technically , it is not even borel - summable . ) they also gave arguments that the conclusion would not be changed if one uses superstrings instead .independent support for these arguments has come from work on random surfaces due to amborjan and others .one might wonder why the divergence of the sum should be regarded as a serious failure of the theory .after all , in quantum electrodynamics , the series is also believed to diverge .recall however that quantum electrodynamics is an inherently incomplete theory .it ignores many processes that come into play at high energies or short distances .in particular , it completely ignores the microstructure of space - time and simply assumes that space - time can be approximated by a smooth continuum even below the planck scale .therefore , it can plead incompleteness and shift the burden of this infinity to a more complete theory . a ` theory of everything ' on the other hand ,has nowhere to hide .it can not plead incompleteness and shift its burden .it must face the planck regime squarely . if the theory is to be consistent , it must have key non - perturbative structures .the current and the fourth stage of the particle physics motivated approaches to quantum gravity is largely devoted to unravelling such structures and using them to solve the outstanding physical problems .examples of such initiatives are : applications of the ads / cft conjecture , use of d - branes and analysis of dualities between various string theories . on the relativity side ,the third stage began with the following observation : the geometrodynamics program laid out by dirac , bergmann , wheeler and others simplifies significantly if we regard a spatial connection rather than the 3-metric as the basic object .in fact we now know that , among others , einstein and schrdinger had recast general relativity as a theory of connections already in the fifties . 
however , they used the ` levi - civita connections ' that features in the parallel transport of vectors and found that the theory becomes rather complicated .this episode had been forgotten and connections were re - introduced in the mid - eighties . however , now these were ` spin - connections ' , required to parallel propagate spinors , and they turn out to _ simplify _ einstein s equations considerably .for example , with the dynamical evolution dictated by einstein s equations can now be visualized simply as a _ geodesic motion _ on the space of spin - connections ( with respect to a natural metric extracted from the constraint equations ) . since general relativity is now regarded as a dynamical theory of connections , this reincarnation of the canonical approach is called ` connection - dynamics ' .perhaps the most important advantage of the passage from metrics to connections is that the phase - space of general relativity is now the same as that of gauge theories .the ` wedge between general relativity and the theory of elementary particles ' that weinberg referred to is largely removed without sacrificing the geometrical essence of general relativity .one could now import into general relativity techniques that have been highly successful in the quantization of gauge theories . at the kinematic level ,then , there is a unified framework to describe all four fundamental interactions .the dynamics , of course , depends on the interaction . in particular , while there is a background space - time geometry in electroweak and strong interactions , there is none in general relativity . therefore , qualitatively new features arise .these were exploited in the late eighties and early nineties to solve simpler models general relativity in 2 + 1 dimensions ; linearized gravity clothed as a gauge theory ; and certain cosmological models . to explore the physical , 3 + 1 dimensional theory , a ` loop representation 'was introduced by rovelli and smolin . here, quantum states are taken to be suitable functions of loops on the 3-manifold .this led to a number of interesting and intriguing results , particularly by gambini , pullin and their collaborators , relating knot theory and quantum gravity .thus , there was rapid and unanticipated progress in a number of directions which rejuvenated the canonical quantization program .since the canonical approach does not require the introduction of a background geometry or use of perturbation theory , and because one now has access to fresh , non - perturbative techniques from gauge theories , in relativity circles there is a hope that this approach may lead to well - defined , _ non - perturbative _ quantum general relativity ( or its supersymmetric version , supergravity ) .however , a number of these considerations remained rather formal until mid - nineties .passage to loop representation required an integration over the infinite dimensional space of connections and the formal methods were insensitive to possible infinities lurking in the procedure .indeed , such integrals are notoriously difficult to perform in interacting field theories . to pay due respect to the general covariance of einstein s theory , one needed diffeomorphism invariant measures andthere were folk - theorems to the effect that such measures did not exist !fortunately , the folk - theorems turned out to be incorrect . 
to construct a well - defined theory capable of handling field theoretic issues , a _ quantum theory of riemannian geometry _was systematically constructed in the mid - nineties .this launched the fourth ( and the current ) stage in the canonical approach .just as differential geometry provides the basic mathematical framework to formulate modern gravitational theories in the classical domain , quantum geometry provides the necessary concepts and techniques in the quantum domain .it is a rigorous mathematical theory which enables one to perform integration on the space of connections for constructing hilbert spaces of states and to define geometric operators corresponding , e.g. to areas of surfaces and volumes of regions , even though the classical expressions of these quantities involve non - polynomial functions of the riemannian metric .there are no infinities .one finds that , at the planck scale , geometry has a definite discrete structure .its fundamental excitations are 1-dimensional , rather like polymers , and space - time continuum arises only as a coarse - grained approximation .the fact that the structure of space - time at planck scale is qualitatively different from minkowski background used in perturbative treatments reinforced the idea that quantum general relativity ( or supergravity ) may well be non - perturbatively finite . as we will see in section [ s3 ] quantum geometry effectshave already been shown to resolve the big - bang singularity and solve some of the long - standing problems associated with black holes .the first three stages of developments in quantum gravity taught us many valuable lessons .perhaps the most important among them is the realization that perturbative , field theoretic methods which have been so successful in other branches of physics are simply inadequate in quantum gravity .the assumption that space - time can be replaced by a smooth continuum at arbitrarily small scales leads to inconsistencies .we can neither ignore the microstructure of space - time nor presuppose its nature .we must let quantum gravity itself reveal this structure to us .irrespective of whether one works with strings or supergravity or general relativity , one has to face the problem of quantization non - perturbatively . in the current , fourth stageboth approaches have undergone a metamorphosis .the covariant approach has led to string theory and the canonical approach developed into loop quantum gravity .the mood seems to be markedly different . in both approaches ,non - perturbative aspects are at the forefront and conceptual issues are again near center - stage . however , there are also key differences .most work in string theory involves background fields and uses higher dimensions and supersymmetry as _ essential _ ingredients .the emphasis is on unification of gravity with other forces of nature .loop quantum gravity , on the other hand , is manifestly background independent .supersymmetry and higher dimensions do not appear to be essential .however , it has not provided any principle for unifying interactions . in this sense ,the two approaches are complementary rather than in competition .each provides fresh ideas to address some of the key problems but neither is complete . 
for brevity and to preserve the flow of discussion ,i have restricted myself to the ` main - stream ' programs whose development can be continuously tracked over several decades .however , i would like to emphasize that there are a number of other fascinating and highly original approaches particularly causal dynamical triangulations , euclidean quantum gravity , discrete approaches , twistor theory and the theory of h - spaces , asymptotic quantization , non - commutative geometry and causal sets . approaches to quantum gravity face two types of issues : problems that are ` internal ' to individual programs and physical and conceptual questions that underlie the whole subject .examples of the former are : incorporation of physical rather than half flat gravitational fields in the twistor program , mechanisms for breaking of supersymmetry and dimensional reduction in string theory , and issues of space - time covariance in the canonical approach . in this sub - section, i will focus on the second type of issues by recalling some of the long standing issues that _ any _ satisfactory quantum theory of gravity should address . _ big - bang and other singularities _ : it is widely believed that the prediction of a singularity , such as the big - bang of classical general relativity , is primarily a signal that the physical theory has been pushed beyond the domain of its validity .a key question to any quantum gravity theory , then , is : what replaces the big - bang ?are the classical geometry and the continuum picture only approximations , analogous to the ` mean ( magnetization ) field ' of ferro - magnets ?if so , what are the microscopic constituents ?what is the space - time analog of a heisenberg quantum model of a ferro - magnet ? when formulated in terms of these fundamental constituents , is the evolution of the _ quantum _ state of the universe free of singularities ?general relativity predicts that the space - time curvature must grow unboundedly as we approach the big - bang or the big - crunch but we expect the quantum effects , ignored by general relativity , to intervene , making quantum gravity indispensable before infinite curvatures are reached .if so , what is the upper bound on curvature ?how close to the singularity can we ` trust ' classical general relativity ?what can we say about the ` initial conditions ' , i.e. , the quantum state of geometry and matter that correctly describes the big - bang ?if they have to be imposed externally , is there a _ physical _ guiding principle ? _ black holes : _ in the early seventies , using imaginative thought experiments , bekenstein argued that black holes must carry an entropy proportional to their area .about the same time , bardeen , carter and hawking ( bch ) showed that black holes in equilibrium obey two basic laws , which have the same form as the zeroth and the first laws of thermodynamics , provided one equates the black hole surface gravity to some multiple of the temperature in thermodynamics and the horizon area to a corresponding multiple of the entropy .however , at first this similarity was thought to be only a formal analogy because the bch analysis was based on _ classical _ general relativity and simple dimensional considerations show that the proportionality factors must involve planck s constant .two years later , using quantum field theory on a black hole background space - time , hawking showed that black holes in fact radiate quantum mechanically as though they are black bodies at temperature . 
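for reference, the temperature just mentioned and the entropy it suggests through the first law take the familiar forms below, with $\kappa$ the surface gravity and $a_{\rm hor}$ the horizon area; these are the standard expressions rather than anything specific to one approach.

```latex
% hawking temperature and bekenstein-hawking entropy (standard expressions)
\[ T_{\rm H} \;=\; \frac{\hbar\,\kappa}{2\pi\,c\,k_{B}} ,
   \qquad
   S_{\rm BH} \;=\; \frac{k_{B}\,a_{\rm hor}}{4\,\ell_{\rm Pl}^{2}}
             \;=\; \frac{k_{B}\,c^{3}\,a_{\rm hor}}{4\,G\hbar} . \]
```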
using the analogy with the first law, one can then conclude that the black hole entropy should be given by .this conclusion is striking and deep because it brings together the three pillars of fundamental physics general relativity , quantum theory and statistical mechanics .however , the argument itself is a rather hodge - podge mixture of classical and semi - classical ideas , reminiscent of the bohr theory of atom .a natural question then is : what is the analog of the more fundamental , pauli - schrdinger theory of the hydrogen atom ?more precisely , what is the statistical mechanical origin of black hole entropy ?what is the nature of a quantum black hole and what is the interplay between the quantum degrees of freedom responsible for entropy and the exterior curved geometry ? can one derive the hawking effect from first principles of quantum gravity ?is there an imprint of the classical singularity on the final quantum description , e.g. , through ` information loss ' ? _ planck scale physics and the low energy world : _ in general relativity , there is no background metric , no inert stage on which dynamics unfolds .geometry itself is dynamical .therefore , as indicated above , one expects that a fully satisfactory quantum gravity theory would also be free of a background space - time geometry .however , of necessity , a background independent description must use physical concepts and mathematical tools that are quite different from those of the familiar , low energy physics .a major challenge then is to show that this low energy description does arise from the pristine , planckian world in an appropriate sense , bridging the vast gap of some 16 orders of magnitude in the energy scale . in this ` top - down ' approach , does the fundamental theory admit a ` sufficient number ' of semi - classical states ?do these semi - classical sectors provide enough of a background geometry to anchor low energy physics ? can one recover the familiar description ?if the answers to these questions are in the affirmative , can one pin point why the standard ` bottom - up ' perturbative approach fails ? that is , what is the essential feature which makes the fundamental description mathematically coherent but is absent in the standard perturbative quantum gravity ? there are of course many more challenges : adequacy of standard quantum mechanics , the issue of time , of measurement theory and the associated questions of interpretation of the quantum framework , the issue of diffeomorphism invariant observables and practical methods of computing their properties , practical methods of computing time evolution and s - matrices , exploration of the role of topology and topology change , etc , etc . 
in loop quantum gravity described in the rest of this review, one adopts the view that the three issues discussed in detail are more basic from a physical viewpoint because they are rooted in general conceptual questions that are largely independent of the specific approach being pursued .indeed they have been with us longer than any of the current leading approaches .the rest of this review focusses on riemannian quantum geometry and loop quantum gravity .it is organized as follows .section 2 summarizes the underlying ideas , key results from quantum geometry and status of quantum dynamics in loop quantum gravity .the framework has led to a rich set of results on the first two sets of physical issues discussed above .section 3 reviews these applications .section 4 is devoted to outlook .the apparent conflict between the canonical quantization method and space - time covariance is discussed in appendix [ a1 ] .in this section , i will briefly summarize the salient features and current status of loop quantum gravity .the emphasis is on structural and conceptual issues ; detailed treatments can be found in more complete and more technical recent accounts and references therein .( the development of the subject can be seen by following older monographs . ) in this approach , one takes the central lesson of general relativity seriously : gravity _ is _ geometry whence , in a fundamental theory , there should be no background metric . in quantum gravity , geometry and matter should _ both _ be ` born quantum mechanically ' .thus , in contrast to approaches developed by particle physicists , one does not begin with quantum matter on a background geometry and use perturbation theory to incorporate quantum effects of gravity .there _ is _ a manifold but no metric , or indeed any other physical fields , in the background . in classical gravity, riemannian geometry provides the appropriate mathematical language to formulate the physical , kinematical notions as well as the final dynamical equations .this role is now taken by _ quantum _riemannian geometry , discussed below . in the classical domain , generalrelativity stands out as the best available theory of gravity , some of whose predictions have been tested to an amazing degree of accuracy , surpassing even the legendary tests of quantum electrodynamics .therefore , it is natural to ask : _ does quantum general relativity , coupled to suitable matter _ ( or supergravity , its supersymmetric generalization ) _ exist as consistent theories non - perturbatively ?_ there is no implication that such a theory would be the final , complete description of nature .nonetheless , this is a fascinating open question , at least at the level of mathematical physics .as explained in section [ s1.1 ] , in particle physics circles the answer is often assumed to be in the negative , not because there is concrete evidence against non - perturbative quantum gravity , but because of the analogy to the theory of weak interactions . 
there , one first had a 4-point interaction model due to fermi which works quite well at low energies but which fails to be renormalizable .progress occurred not by looking for non - perturbative formulations of the fermi model but by replacing the model by the glashow - salam - weinberg renormalizable theory of electro - weak interactions , in which the 4-point interaction is replaced by and propagators .therefore , it is often assumed that perturbative non - renormalizability of quantum general relativity points in a similar direction .however this argument overlooks the crucial fact that , in the case of general relativity , there is a qualitatively new element .perturbative treatments pre - suppose that the space - time can be assumed to be a continuum _ at all scales _ of interest to physics under consideration .this assumption is safe for weak interactions . in the gravitational case , on the other hand , the scale of interest is _ the planck length _ and there is no physical basis to pre - suppose that the continuum picture should be valid down to that scale .the failure of the standard perturbative treatments may largely be due to this grossly incorrect assumption and a non - perturbative treatment which correctly incorporates the physical micro - structure of geometry may well be free of these inconsistencies . are there any situations , outside loop quantum gravity , where such physical expectations are borne out in detail mathematically ?the answer is in the affirmative .there exist quantum field theories ( such as the gross - neveau model in three dimensions ) in which the standard perturbation expansion is not renormalizable although the theory is _ exactly soluble _ !failure of the standard perturbation expansion can occur because one insists on perturbing around the trivial , gaussian point rather than the more physical , non - trivial fixed point of the renormalization group flow .interestingly , thanks to recent work by lauscher , reuter , percacci , perini and others there is now non - trivial and growing evidence that situation may be similar in euclidean quantum gravity .impressive calculations have shown that pure einstein theory may also admit a non - trivial fixed point .furthermore , the requirement that the fixed point should continue to exist in presence of matter constrains the couplings in non - trivial and interesting ways .however , as indicated in the introduction , even if quantum general relativity did exist as a mathematically consistent theory , there is no a priori reason to assume that it would be the ` final ' theory of all known physics .in particular , as is the case with classical general relativity , while requirements of background independence and general covariance do restrict the form of interactions between gravity and matter fields and among matter fields themselves , the theory would not have a built - in principle which _ determines _ these interactions .put differently , such a theory would not be a satisfactory candidate for unification of all known forces .however , just as general relativity has had powerful implications in spite of this limitation in the classical domain , quantum general relativity should have qualitatively new predictions , pushing further the existing frontiers of physics .indeed , unification does not appear to be an essential criterion for usefulness of a theory even in other interactions .qcd , for example , is a powerful theory even though it does not unify strong interactions with electro - weak ones 
.furthermore , the fact that we do not yet have a viable candidate for the grand unified theory does not make qcd any less useful .although loop quantum gravity does not provide a natural unification of dynamics of all interactions , as indicated in section [ s1.1 ] this program does provide a kinematical unification .more precisely , in this approach one begins by formulating general relativity in the mathematical language of connections , the basic variables of gauge theories of electro - weak and strong interactions .thus , now the configuration variables are not metrics as in wheeler s geometrodynamics , but certain _ spin - connections _ ; the emphasis is shifted from distances and geodesics to holonomies and wilson loops .consequently , the basic kinematical structures are the same as those used in gauge theories .a key difference , however , is that while a background space - time metric is available and crucially used in gauge theories , there are no background fields whatsoever now .their absence is forced upon us by the requirement of diffeomorphism invariance ( or ` general covariance ' ) .now , as emphasized in section [ s1.1 ] , most of the techniques used in the familiar , minkowskian quantum theories are deeply rooted in the availability of a flat back - ground metric . in particular , it is this structure that enables one to single out the vacuum state , perform fourier transforms to decompose fields canonically into creation and annihilation parts , define masses and spins of particles and carry out regularizations of products of operators .already when one passes to quantum field theory in _ curved _ space - times , extra work is needed to construct mathematical structures that can adequately capture underlying physics . in our case ,the situation is much more drastic : there is no background metric whatsoever ! therefore new physical ideas and mathematical tools are now necessary .fortunately , they were constructed by a number of researchers in the mid - nineties and have given rise to a detailed quantum theory of geometry .because the situation is conceptually so novel and because there are no direct experiments to guide us , reliable results require a high degree of mathematical precision to ensure that there are no hidden infinities .achieving this precision has been a priority in the program .thus , while one is inevitably motivated by heuristic , physical ideas and formal manipulations , the final results are mathematically rigorous . in particular , due care is taken in constructing function spaces , defining measures and functional integrals , regularizing products of field operators , and calculating eigenvectors and eigenvalues of geometric operators .consequently , the final results are all free of divergences , well - defined , and respect the background independence and diffeomorphism invariance .let us now turn to specifics . for simplicity ,i will focus on the gravitational field ; matter couplings are discussed in .the basic gravitational configuration variable is an -connection , on a 3-manifold representing ` space ' . as in gauge theories ,the momenta are the ` electric fields ' . will refer to the tangent space of while the ` internal ' indices will refer to the lie algebra of .] 
however , in the present gravitational context , they acquire an additional meaning : they can be naturally interpreted as orthonormal triads ( with density weight ) and determine the dynamical , riemannian geometry of .thus , in contrast to wheeler s geometrodynamics , the riemannian structures , including the positive - definite metric on , is now built from _momentum _ variables .the basic kinematic objects are : i ) holonomies of , which dictate how spinors are parallel transported along curves or edges ; and ii ) fluxes of electric fields , , smeared with test fields on a 2-surface .the holonomies the raison dtre of connections serve as the ` elementary ' configuration variables which are to have unambiguous quantum analogs .they form an abelian algebra , denoted by .similarly , the fluxes serve as ` elementary momentum variables ' .their poisson brackets with holonomies define a derivation on . in this sense as in hamiltonian mechanics on manifolds momentaare associated with ` vector fields ' on the configuration space .the first step in quantization is to use the poisson algebra between these configuration and momentum functions to construct an abstract -algebra of elementary quantum operators .this step is straightforward .the second step is to introduce a representation of this algebra by ` concrete ' operators on a hilbert space ( which is to serve as the kinematic setup for the dirac quantization program ) . for systems with an infinite number of degrees of freedom, this step is highly non - trivial . in minkowskian field theories , for example, the analogous kinematic -algebra of canonical commutation relations admits infinitely many _ inequivalent _ representations even after asking for poicar invariance !the standard fock representation is uniquely selected _ only _ when a restriction to non - interacting theories is made .the general viewpoint is that the choice of representation is dictated by ( symmetries and more importantly ) the dynamics of the theory under consideration .a priori this task seems daunting for general relativity .however , it turns out that the diffeomorphism invariance dictated by ` background independence' is enormously more powerful than poincar invariance .recent results by lewandowski , okolow , sahlmann and thiemann show that _ the algebra admits a unique diffeomorphism invariant state _ ! 
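in symbols, the two sets of elementary variables discussed above are commonly written as follows; the smearing conventions (test field $f^{i}$, co-normal $n_{a}$) are standard choices assumed here.

```latex
% holonomy of the connection A along an edge e (path-ordered exponential)
\[ h_{e}(A) \;=\; \mathcal{P}\exp\!\int_{e} A , \]
% flux of the electric field / triad across a 2-surface S, smeared with a test field f^{i}
% (n_{a} denotes the co-normal of S)
\[ E(S,f) \;=\; \int_{S} f^{i}\,E^{a}_{i}\,n_{a}\,\mathrm{d}^{2}\sigma . \]
```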
using it , through a standard procedure due to gelfand , naimark and segal , one can construct a unique representation of .thus , remarkably , there is a unique kinematic framework for _ any _ diffeomorphism invariant quantum theory for which the appropriate point of departure is provided by , _ irrespective of the details of dynamics _ !chronologically , this concrete representation was in fact introduced in early nineties by ashtekar , baez , isham and lewandowski .it led to the detailed theory of quantum geometry that underlies loop quantum gravity .once a rich set of results had accumulated , researchers began to analyze the issue of uniqueness of this representation and systematic improvements over several years culminated in the simple statement given above .let me describe the salient features of this representation .quantum states span a specific hilbert space consisting of wave functions of connections which are square integrable with respect to a natural , diffeomorphism invariant measure .this space is very large .however , it can be conveniently decomposed into a family of orthogonal , _ finite _ dimensional sub - spaces , labelled by graphs , each edge of which itself is labelled by a spin ( i.e. , half - integer ) .( the vector stands for the collection of half - integers associated with all edges of . )one can think of as a ` floating lattice ' in `floating ' because its edges are arbitrary , rather than ` rectangular ' .( indeed , since there is no background metric on , a rectangular lattice has no invariant meaning . )mathematically , can be regarded as the hilbert space of a spin - system .these spaces are extremely simple to work with ; this is why very explicit calculations are feasible .elements of are referred to as _ spin - network states _ . in the quantum theory ,the fundamental excitations of geometry are most conveniently expressed in terms of holonomies .they are thus _ one - dimensional , polymer - like _ and , in analogy with gauge theories , can be thought of as ` flux lines ' of electric fields / triads .more precisely , they turn out to be _ flux lines of area _ , the simplest gauge invariant quantities constructed from the momenta : an elementary flux line deposits a quantum of area on any 2-surface it intersects .thus , if quantum geometry were to be excited along just a few flux lines , most surfaces would have zero area and the quantum state would not at all resemble a classical geometry . this state would be analogous , in maxwell theory , to a ` genuinely quantum mechanical state ' with just a few photons . in the maxwell case , one must superpose photons coherently to obtain a semi - classical state that can be approximated by a classical electromagnetic field . similarly , here , semi - classical geometries can result only if a huge number of these elementary excitations are superposed in suitable dense configurations .the state of quantum geometry around you , for example , must have so many elementary excitations that approximately of them intersect the sheet of paper you are reading . even in such states , the geometry is still distributional , concentrated on the underlying elementary flux lines .but if suitably coarse - grained , it can be approximated by a smooth metric .thus , the continuum picture is only an approximation that arises from coarse graining of semi - classical states .the basic quantum operators are the holonomies along curves or edges in and the fluxes of triads . 
both are densely defined and self - adjoint on .furthermore detailed work by ashtekar , lewandowski , rovelli , smolin , thiemann and others shows that _ all eigenvalues of geometric operators constructed from the fluxes of triad are discrete _ .this key property is , in essence , the origin of the fundamental discreteness of quantum geometry .for , just as the classical riemannian geometry of is determined by the triads , all riemannian geometry operators such as the area operator associated with a 2-surface or the volume operator associated with a region are constructed from . however , since even the classical quantities and are non - polynomial functionals of triads , the construction of the corresponding and is quite subtle and requires a great deal of care .but their final expressions are rather simple . in this regularization, the underlying background independence turns out to be a blessing . for , diffeomorphism invariance constrains the possible forms of the final expressions _severely _ and the detailed calculations then serve essentially to fix numerical coefficients and other details .let me illustrate this point with the example of the area operators . since they are associated with 2-surfaces the states are 1-dimensional excitations , the diffeomorphism covariance requires that the action of on a state must be concentrated at the intersections of with . the detailed expression bears out this expectation :the action of on is dictated simply by the spin labels attached to those edges of which intersect .for all surfaces and 3-dimensional regions in , and are densely defined , self - adjoint operators ._ all their eigenvalues are discrete ._ naively , one might expect that the eigenvalues would be uniformly spaced given by , e.g. , integral multiples of the planck area or volume .indeed , for area , such assumptions were routinely made in the initial investigations of the origin of black hole entropy and , for volume , they are made in quantum gravity approaches based on causal sets where discreteness is postulated at the outset . in quantum riemannian geometry , this expectation is _ not _ borne out ; the distribution of eigenvalues is quite subtle .in particular , the eigenvalues crowd rapidly as areas and volumes increase . in the case of area operators ,the complete spectrum is known in a _ closed form _ , and the first several hundred eigenvalues have been explicitly computed numerically . for a large eigenvalue ,the separation between consecutive eigenvalues decreases exponentially : ! because of such strong crowding , the continuum approximation becomes excellent quite rapidly just a few orders of magnitude above the planck scale . at the planck scale, however , there is a precise and very specific replacement .this is the arena of quantum geometry .the premise is that the standard perturbation theory fails because it ignores this fundamental discreteness .there is however a further subtlety .this non - perturbative quantization has a one parameter family of ambiguities labelled by .this is called the barbero - immirzi parameter and is rather similar to the well - known -parameter of qcd . 
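written out, the area spectrum whose crowding is described above, and which reappears as equation (2.1) below, has the standard form, where $\gamma$ is the barbero-immirzi parameter just introduced, $\ell_{\rm Pl}$ the planck length, and the sum runs over the edges $I$ (with spin labels $j_{I}$) that puncture the surface:

```latex
% area spectrum on a 2-surface punctured by edges I carrying spins j_I
\[ a_{\vec{j}} \;=\; 8\pi\gamma\,\ell_{\rm Pl}^{2}\,
     \sum_{I}\sqrt{j_{I}\bigl(j_{I}+1\bigr)} ,
   \qquad j_{I}\in\Bigl\{\tfrac12,\,1,\,\tfrac32,\,\dots\Bigr\} . \]
```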
in qcd, a single classical theory gives rise to inequivalent sectors of quantum theory, labelled by $\theta$. similarly, $\gamma$ is classically irrelevant but different values of $\gamma$ correspond to unitarily inequivalent representations of the algebra of geometric operators. the overall mathematical structure of all these sectors is very similar; the only difference is that the eigenvalues of all geometric operators scale with $\gamma$. for example, the simplest eigenvalues of the area operator in the $\gamma$ quantum sector are given by

$$a_{\vec{j}} \;=\; 8\pi\gamma\,\ell_{\rm Pl}^{2}\,\sum_{I}\sqrt{j_I(j_I+1)} , \eqno(2.1)$$

where $\vec{j}$ is a collection of 1/2-integers $j_I$, with $j_I \neq 0$ for some $I$. this fact has led to a misunderstanding in certain particle physics circles where $\gamma$ is thought of as a regulator responsible for discreteness of quantum geometry. as explained above, this is _not_ the case; $\gamma$ is analogous to the qcd $\theta$ and quantum geometry is discrete in _every_ permissible $\gamma$-sector. (note also that, at the classical level, the theory is equivalent to general relativity only if $\gamma$ is _positive_; if one sets $\gamma = 0$ by hand, one can not recover even the kinematics of general relativity. similarly, at the quantum level, setting $\gamma = 0$ would lead to a meaningless theory in which _all_ eigenvalues of geometric operators vanish identically.) since the representations are unitarily inequivalent, as usual, one must rely on nature to resolve this ambiguity: just as nature must select a specific value of $\theta$ in qcd, it must select a specific value of $\gamma$ in loop quantum gravity. with one judicious experiment, e.g., measurement of the lowest eigenvalue of the area operator for a 2-surface of any given topology, we could determine the value of $\gamma$ and fix the theory. unfortunately, such experiments are hard to perform! however, we will see in section [ s3.2 ] that the bekenstein-hawking formula of black hole entropy provides an indirect measurement of this lowest eigenvalue of area for the 2-sphere topology and can therefore be used to fix the value of $\gamma$. quantum geometry provides a mathematical arena to formulate non-perturbative dynamics of candidate quantum theories of gravity, without any reference to a background classical geometry. in the case of general relativity, it provides tools to write down quantum einstein's equations in the hamiltonian approach and calculate transition amplitudes in the path integral approach. until recently, effort was focussed primarily on hamiltonian methods. however, over the last four years or so, path integrals, called _spin foams_, have drawn a great deal of attention. this work has led to fascinating results suggesting that, thanks to the fundamental discreteness of quantum geometry, path integrals defining quantum general relativity may be finite. a summary of these developments can be found in the reviews listed in the bibliography. in this section, i will summarize the status of the hamiltonian approach. for brevity, i will focus on source-free general relativity, although there has been considerable work also on matter couplings. for simplicity, let me suppose that the 'spatial' 3-manifold $M$ is compact. then, in any theory without background fields, hamiltonian dynamics is governed by constraints. roughly speaking, this is because in these theories diffeomorphisms correspond to gauge in the sense of dirac.
recall that, on the maxwell phase space, gauge transformations are generated by a functional of the electric field, $G(\lambda) = \int_M d^3x\;\lambda\,\partial_a E^a$, which is constrained to vanish on physical states due to the gauss law. similarly, on phase spaces of background independent theories, diffeomorphisms are generated by hamiltonians which are constrained to vanish on physical states. in the case of general relativity, there are three sets of constraints. the first set consists of the three gauss equations

$$\mathcal{G}_i \;:=\; \mathcal{D}_a E^a_i \;=\; 0 ,$$

which, as in yang-mills theories, generates internal rotations on the connection and the triad fields (a short explicit illustration is given below). the second set consists of a co-vector (or diffeomorphism) constraint

$$\mathcal{C}_b \;:=\; E^a_i\, F_{ab}^i \;=\; 0 ,$$

which generates spatial diffeomorphisms on $M$ (modulo internal rotations generated by $\mathcal{G}_i$). finally, there is the key scalar (or hamiltonian) constraint

$$\mathcal{C} \;:=\; \frac{\epsilon^{ijk}\, E^a_i\, E^b_j\, F_{ab\,k}}{\sqrt{|\det E|}} \;+\; \ldots \;=\; 0 ,$$

which generates time-evolutions. (the dots denote extrinsic curvature terms, expressible as poisson brackets of the connection, the total volume constructed from triads and the first term in the expression of $\mathcal{C}$ given above. we will not need their explicit forms.) our task in quantum theory is three-fold: i) elevate these constraints (or their 'exponentiated versions') to well-defined operators on the kinematic hilbert space; ii) select physical states by asking that they be annihilated by these constraints; iii) introduce an inner-product and interesting observables, and develop approximation schemes, truncations, etc., to explore physical consequences. i would like to emphasize that, even if one begins with einstein's equations at the classical level, non-perturbative dynamics gives rise to interesting quantum corrections. consequently, _the effective classical equations derived from the quantum theory exhibit significant departures from classical einstein's equations_. this fact has had important implications in quantum cosmology. let us return to the three tasks. since the canonical transformations generated by the gauss and the diffeomorphism constraints have a simple geometrical meaning, completion of i) in these cases is fairly straightforward. for the hamiltonian constraint, on the other hand, there are no such guiding principles, whence the procedure is subtle. in particular, specific regularization choices have to be made. consequently, the final expression of the hamiltonian constraint is not unique. a systematic discussion of the ambiguities can be found in the literature. at the present stage of the program, such ambiguities are inevitable; one has to consider all viable candidates and analyze if they lead to sensible theories. interestingly, observational inputs from cosmology are now being used to constrain the simplest of these ambiguities. in any case, it should be emphasized that the availability of well-defined hamiltonian constraint operators is by itself a notable technical success. for example, the analogous problem in quantum geometrodynamics, a satisfactory regularization of the wheeler-dewitt equation, is still open although the formal equation was written down some thirty-five years ago. to be specific, i will first focus on the procedure developed by lewandowski, rovelli, smolin and others which culminated in a specific construction due to thiemann.
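to make the statement that the gauss constraint 'generates internal rotations' concrete, here is a minimal sketch. the conventions are only indicative: the signs and the factor of $8\pi G\gamma$ in the basic poisson bracket are convention dependent and may differ from those used elsewhere in this article.

$$\mathcal{G}(\Lambda) \;:=\; \int_M d^3x\;\Lambda^i\,\mathcal{D}_a E^a_i , \qquad \{A_a^i(x),\,E^b_j(y)\} \;=\; 8\pi G\gamma\;\delta_a^b\,\delta^i_j\,\delta^3(x,y) ,$$

$$\delta_\Lambda A_a^i \;=\; \{A_a^i,\,\mathcal{G}(\Lambda)\} \;\propto\; -\,\mathcal{D}_a\Lambda^i , \qquad \delta_\Lambda E^a_i \;=\; \{E^a_i,\,\mathcal{G}(\Lambda)\} \;\propto\; \epsilon_{ij}{}^{k}\,\Lambda^j\,E^a_k ,$$

so the connection transforms as an su(2) gauge potential and the triad rotates in its internal index, exactly as in yang-mills theory. it is this simple geometrical action that makes step i) straightforward for the gauss (and, similarly, the diffeomorphism) constraint.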
steps ii ) and iii ) have been completed for the gauss and the diffeomorphism constraints .the mathematical implementation required a very substantial extension of the algebraic quantization program initiated by dirac , and the use of the spin - network machinery of quantum geometry .again , the detailed implementation is a non - trivial technical success and the analogous task has not been completed in geometrodynamics because of difficulties associated with infinite dimensional spaces .thiemann s quantum hamiltonian constraint is first defined _ on the space of solutions to the gauss constraint _ .the regularization procedure requires several external inputs .however , a number of these ambiguities disappear when one restricts the action of the constraint operator to the space of solutions of the diffeomorphism constraint . on this space , the problem of finding a general solution to the hamiltonian constraint can be systematically reduced to that of finding _ elementary _ solutions , a task that requires only analysis of linear operators on certain finite dimensional spaces . in this sense , step ii )has been completed for all constraints .this is a non - trivial result .however , it is still unclear whether this theory is physically satisfactory ; at this stage , it is in principle possible that it captures only an ` exotic ' sector of quantum gravity .a _ key open problem _ in loop quantum gravity is to show that the hamiltonian constraint either thiemann s or an alternative such as the one of gambini and pullin admits a ` sufficient number ' of semi - classical states .progress on this problem has been slow because the general issue of semi - classical limits is itself difficult in _ any _ background independent approach .however , a systematic understanding has now begun to emerge and is providing the ` infra - structure ' needed to analyze the key problem mentioned above .finally , while there are promising ideas to complete step iii ) , substantial further work is necessary to solve this problem .recent advance in quantum cosmology , described in section [ s3.1 ] , is an example of progress in this direction and it provides a significant support for the thiemann scheme , but of course only within the limited context of mini - superspaces . to summarize , from the mathematical physics perspective , in the hamiltonian approach the crux of dynamics lies in quantum constraints .the quantum gauss and diffeomorphism constraints have been solved satisfactorily and it is significant that detailed regularization schemes have been proposed for the hamiltonian constraint .but it is not clear if any of the proposed strategies to solve this constraint incorporates the familiar low energy physics in full theory , i.e. , beyond symmetry reduced models .novel ideas are being pursued to address this issue .i will summarize them in section [ s4 ] ._ remarks : _ + 1. there has been another concern about the thiemann - type regularizations of the hamiltonian constraint which , however , is less specific .it stems from the structure of the constraint algebra . on the space of solutions to the gauss constraints, the hamiltonian constraint operators do not commute .this is compatible with the fact that the poisson brackets between these constraints do not vanish in the classical theory .however , it is not obvious that the commutator algebra correctly reflects the classical poison bracket algebra . 
to shed light on this issue, gambini, lewandowski, marolf and pullin introduced a certain domain of definition of the hamiltonian constraint which is smaller than the space of all solutions to the gauss constraints but larger than the space of solutions to the gauss _and_ diffeomorphism constraints. it was then found that the commutator between any two hamiltonian constraints vanishes identically. however, it was also shown that the operator representing the right side of the classical poisson bracket _also vanishes_ on all the quantum states in the new domain. therefore, while the vanishing of the commutator of the hamiltonian constraint was initially unexpected, this analysis does not reveal a clear-cut problem with these regularizations. one can follow this scheme step by step in 2+1 gravity where one knows what the result should be. one can obtain the 'elementary solutions' mentioned above and show that all the standard quantum states, including the semi-classical ones, can be recovered as linear combinations of these elementary ones. as is almost always the case with constrained systems, there are _many more solutions_ and the 'spurious ones' have to be eliminated by the requirement that the physical norm be finite. in 2+1 gravity, the connection formulation used here naturally leads to a complete set of dirac observables and the inner-product can be essentially fixed by the requirement that they be self-adjoint. in 3+1 gravity, by contrast, we do not have this luxury and the problem of constructing the physical inner-product is therefore much more difficult. however, the concern here is that of weeding out unwanted solutions rather than having a 'sufficient number' of semi-classical ones, a significantly less serious issue at the present stage of the program. in this section, i will summarize two developments that answer several of the questions raised under the first two bullets in section [ s2.1 ]. over the last five years, quantum geometry has led to some striking results of direct physical interest. the first of these concerns the fate of the big-bang singularity. traditionally, in quantum cosmology one has proceeded by first imposing spatial symmetries, such as homogeneity and isotropy, to freeze out all but a finite number of degrees of freedom _already at the classical level_ and then quantizing the reduced system. in the simplest case, the basic variables of the reduced classical system are the scale factor $a$ and matter fields. the symmetries imply that space-time curvature goes as $1/a^{n}$, where $n$ depends on the matter field under consideration (a worked classical illustration is given below). einstein's equations then predict a big-bang, where the scale factor goes to zero and the curvature blows up. as indicated in section [ s2.1 ], this is reminiscent of what happens to ferro-magnets at the curie temperature: magnetization goes to zero and the susceptibility diverges. by analogy, the key question is: do these 'pathologies' disappear if we re-examine the situation in the context of an appropriate quantum theory? in traditional quantum cosmologies, without additional input, they do not. that is, typically, to resolve the singularity one either has to introduce matter with unphysical properties or additional boundary conditions, e.g., by invoking new principles. in a series of seminal papers, bojowald has shown that the situation in loop quantum cosmology is quite different: the underlying quantum geometry makes a _qualitative_ difference very near the big-bang.
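for orientation, the classical statement just invoked can be written out explicitly. this is a standard friedmann-model computation (with $w$ the equation-of-state parameter of the matter), not a result of loop quantum gravity:

$$\left(\frac{\dot a}{a}\right)^{2} \;=\; \frac{8\pi G}{3}\,\rho , \qquad \rho \;\propto\; a^{-3(1+w)} \qquad\Longrightarrow\qquad \left(\frac{\dot a}{a}\right)^{2} \;\propto\; \frac{1}{a^{\,3(1+w)}} ,$$

so the curvature scale and the energy density grow as $1/a^{n}$ with $n = 3(1+w)$ ($n = 3$ for dust, $n = 4$ for radiation) and diverge as the scale factor tends to zero.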
at first, this seems puzzling because after symmetry reduction, the system has only a _finite_ number of degrees of freedom. thus, quantum cosmology is analogous to quantum mechanics rather than quantum field theory. how then can one obtain qualitatively new predictions? ashtekar, bojowald and lewandowski clarified the situation: if one follows the program laid out in the full theory, then even for the symmetry reduced model one is led to an inequivalent quantum theory, a new quantum mechanics! let me make a small detour to explain how this comes about. consider the simplest case: spatially homogeneous, isotropic models. in the standard geometrodynamic treatment, the operator $\hat{a}$ corresponding to the scale factor is self-adjoint and has zero as part of its _continuous_ spectrum. now, there is a general mathematical result which says that any measurable function of a self-adjoint operator is again self-adjoint. since the function $1/a$ on the spectrum of $\hat{a}$ is well-defined except at $a = 0$, and since this is a subset of zero measure of the continuous spectrum, $\widehat{1/a}$ is self-adjoint and is the natural candidate for the operator analog of $1/a$. this operator is unbounded above, whence the curvature operator is also unbounded in the quantum theory. in connection-dynamics, the information of geometry is encoded in the triad, which now has a single independent component $p$, related to the scale factor via $|p| = a^{2}$. to pass to quantum theory, one follows the procedure used in the full theory. therefore, only the holonomies are well-defined operators; _connections are not_! since connections in this model have the same information as the (only independent component of the) extrinsic curvature, the resulting theory is _inequivalent_ to the one used in geometrodynamics. (one is led to a quantum theory which is inequivalent to the standard schrödinger mechanics because, while there is a well-defined operator corresponding to $\exp(i\mu c)$ for each real number $\mu$, it fails to be weakly continuous in $\mu$. hence there is no operator corresponding to $c$, the connection, itself. the operator $\hat{p}$, on the other hand, is well-defined. the hilbert space is not $L^{2}(\mathbb{R})$ but $L^{2}(\bar{\mathbb{R}}_{\rm Bohr}, d\mu_{o})$, where $\bar{\mathbb{R}}_{\rm Bohr}$ is the bohr compactification of the real line and $d\mu_{o}$ the natural haar measure thereon. not surprisingly, the structure of this hilbert space is the quantum cosmology analog of that of the kinematic hilbert space $\mathcal{H}$ of the full theory.) specifically, eigenvectors of $\hat{p}$ are now normalizable. thus, one has a direct sum rather than a direct integral. hence the spectrum of $\hat{p}$ is equipped with a _discrete_ topology and the point $p = 0$ is _no longer a subset of zero measure_. therefore, the naive inverse of $\hat{p}$ is not even densely defined, let alone self-adjoint. the operator corresponding to $1/a$ (or any inverse power of $a$) has to be defined differently. fortunately, one can again use a procedure introduced by thiemann in the full theory and show that this can be done. the operator, so constructed, has the physically expected properties. for instance, it commutes with $\hat{p}$. the product of its eigenvalues with those of the scale factor operator equals one to excellent accuracy except very close to the big-bang, and becomes even closer to one as the universe expands. however, in the deep planck regime very near the big-bang, the $\widehat{1/a}$ operator of loop quantum cosmology is _qualitatively_ different from its analog in geometrodynamics: it is _bounded above_ in the full hilbert space! consequently, curvature is also bounded above. if classically it goes as $1/a^{n}$, then the loop quantum cosmology upper bound is many orders of magnitude larger than, for example, the curvature at the horizon of a solar mass black hole. this is a huge number.
_but it is finite._ the mechanism is qualitatively similar to the one which makes the ground state energy of a hydrogen atom in quantum theory, $E_{0} = -\,m e^{4}/2\hbar^{2}$, finite even though it is infinite classically: in the expression of the upper bound of curvature, $\hbar$ again intervenes in the denominator. this completes the detour. let us now consider dynamics. since the curvature is bounded above in the entire hilbert space, one might hope that the quantum evolution may be well-defined right through the big-bang singularity. is this in fact the case? the second surprise is that the answer is in the affirmative. more precisely, the situation can be summarized as follows. as one might expect, the 'evolution' is dictated by the hamiltonian constraint operator. let us expand out the quantum state as $|\Psi\rangle = \sum_{p}\psi_{p}(\phi)\,|p\rangle$, where $|p\rangle$ are the eigenstates of $\hat{p}$ and $\phi$ denotes matter fields. then, the hamiltonian constraint takes the form:

$$C^{+}\,\psi_{p+4p_{o}}(\phi) \;+\; C^{o}\,\psi_{p}(\phi) \;+\; C^{-}\,\psi_{p-4p_{o}}(\phi) \;=\; \gamma\,\ell_{\rm Pl}^{2}\,\hat{H}_{\phi}\,\psi_{p}(\phi) , \eqno(3.2)$$

where $C^{\pm}$ and $C^{o}$ are fixed functions of $p$; $\gamma$ is the barbero-immirzi parameter; $p_{o}$ is a constant, determined by the lowest eigenvalue of the area operator; and $\hat{H}_{\phi}$ is the matter hamiltonian. again, using the analog of the thiemann regularization from the full theory, one can show that the matter hamiltonian is a well-defined operator. primarily, being a constraint equation, ([ 3.2 ]) restricts the physically permissible $\psi_{p}(\phi)$. however, _if_ we choose to interpret the eigenvalues $p$ of $\hat{p}$ (i.e., the square of the scale factor times the sign of the determinant of the triad) as a time variable, ([ 3.2 ]) can be interpreted as an 'evolution equation' which evolves the state through discrete time steps. the highly non-trivial result is that the coefficients $C^{\pm}$, $C^{o}$ are such that _one can evolve right through the classical singularity_, i.e., to the past, right through $p = 0$. thus, the infinities predicted by the classical theory at the big-bang are artifacts of assuming that the classical, continuum space-time approximation is valid right up to the big-bang. in the quantum theory, the state can be evolved through the big-bang without any difficulty. however, the classical space-time description completely fails near the big-bang; figuratively, the classical space-time 'dissolves'. this resolution of the singularity without any 'external' input (such as matter violating energy conditions) is dramatically different from what happens with the standard wheeler-dewitt equation of quantum geometrodynamics. however, for large values of the scale factor, the two evolutions are close; as one would have hoped, quantum geometry effects intervene only in the 'deep planck regime' and resolve the singularity. from this perspective, then, one is led to say that the most striking of the consequences of loop quantum gravity are not seen in standard quantum cosmology because it 'washes out' the fundamental discreteness of quantum geometry. the detailed calculations have revealed another surprising feature. the fact that the quantum effects become prominent near the big bang, completely invalidating the classical predictions, is pleasing but not unexpected. however, prior to these calculations, it was not clear how soon after the big-bang one can start trusting semi-classical notions and calculations.
it would not have been surprising if we had to wait till the radius of the universe became, say, a few billion times the planck length. these calculations strongly suggest that a few tens of planck lengths should suffice. this is fortunate because it is now feasible to develop quantum numerical relativity; with computational resources commonly available, grids fine enough to track the evolution over macroscopic scales are hopelessly large, but the far more modest grids suggested by these estimates could be manageable. finally, quantum geometry effects also modify the _matter_ hamiltonian in interesting ways. in particular, they introduce an _anti-damping_ term in the evolution of the scalar field during the initial inflationary phase, which can drive the scalar field to values needed in the chaotic inflation scenario, without having to appeal to the occurrence of large quantum fluctuations. such results have encouraged some phenomenologists to seek signatures of loop quantum gravity effects in the observations of the very early universe. however, these applications lie at the forefront of today's research and are therefore not as definitive. loop quantum cosmology illuminates dynamical ramifications of quantum geometry but within the context of mini-superspaces where all but a finite number of degrees of freedom are frozen. in this sub-section, i will discuss a complementary application where one considers the full theory but probes consequences of quantum geometry which are not sensitive to full quantum dynamics: the application of the framework to the problem of black hole entropy. this discussion is based on work of ashtekar, baez, bojowald, corichi, domagala, krasnov, lewandowski and meissner, much of which was motivated by earlier work of krasnov, rovelli and others. as explained in the introduction, since the mid-seventies, a key question in the subject has been: what is the statistical mechanical origin of the entropy of large black holes, given by the bekenstein-hawking formula $S_{\rm BH} = a_{\rm hor}/4\ell_{\rm Pl}^{2}$? what are the microscopic degrees of freedom that account for this entropy? this relation implies that a solar mass black hole must have about $\exp(10^{77})$ quantum states, a number that is _huge_ even by the standards of statistical mechanics. where do all these states reside? to answer these questions, in the early nineties wheeler had suggested the following heuristic picture, which he christened 'it from bit'. divide the black hole horizon into elementary cells, each with one planck unit of area, $\ell_{\rm Pl}^{2}$, and assign to each cell two microstates, or one 'bit'. then the total number of states $\mathcal{N}$ is given by $\mathcal{N} = 2^{n}$, where $n$ is the number of elementary cells, whence entropy is given by $S = \ln\mathcal{N} = n\,\ln 2$. thus, apart from a numerical coefficient, the entropy ('it') is accounted for by assigning two states ('bit') to each elementary cell. this qualitative picture is simple and attractive. but can these heuristic ideas be supported by a systematic analysis from first principles? quantum geometry has supplied such an analysis. as one would expect, while some qualitative features of this picture are borne out, the actual situation is far more subtle. a systematic approach requires that we first specify the class of black holes of interest. since the entropy formula is expected to hold unambiguously for black holes in equilibrium, most analyses were confined to _stationary_, eternal black holes (i.e., in 4-dimensional general relativity, to the kerr-newman family).
from a physical viewpointhowever , this assumption seems overly restrictive .after all , in statistical mechanical calculations of entropy of ordinary systems , one only has to assume that the given system is in equilibrium , not the whole world .therefore , it should suffice for us to assume that the black hole itself is in equilibrium ; the exterior geometry should not be forced to be time - independent .furthermore , the analysis should also account for entropy of black holes which may be distorted or carry ( yang - mills and other ) hair .finally , it has been known since the mid - seventies that the thermodynamical considerations apply not only to black holes but also to cosmological horizons .a natural question is : can these diverse situations be treated in a single stroke ? within the quantum geometry approach ,the answer is in the affirmative .the entropy calculations have been carried out in the ` isolated horizons ' framework which encompasses all these situations .isolated horizons serve as ` internal boundaries ' whose intrinsic geometries ( and matter fields ) are time - independent , although space - time geometry as well as matter fields in the external space - time region can be fully dynamical .the zeroth and first laws of black hole mechanics have been extended to isolated horizons .entropy associated with an isolated horizon refers to the family of observers in the exterior , for whom the isolated horizon is a physical boundary that separates the region which is accessible to them from the one which is not .this point is especially important for cosmological horizons where , without reference to observers , one can not even define horizons .states which contribute to this entropy are the ones which can interact with the states in the exterior ; in this sense , they ` reside ' on the horizon .in the detailed analysis , one considers space - times admitting an isolated horizon as inner boundary and carries out a systematic quantization .the quantum geometry framework can be naturally extended to this case .the isolated horizon boundary conditions imply that the intrinsic geometry of the quantum horizon is described by the so called chern - simons theory on the horizon .this is a well - developed , topological field theory .a deeply satisfying feature of the analysis is that there is a seamless matching of three otherwise independent structures : the isolated horizon boundary conditions , the quantum geometry in the bulk , and the chern - simons theory on the horizon . in particular, one can calculate eigenvalues of certain physically interesting operators using purely bulk quantum geometry without any knowledge of the chern - simons theory , or using the chern - simons theory without any knowledge of the bulk quantum geometry .the two theories have never heard of each other .but the isolated horizon boundary conditions require that the two infinite sets of numbers match exactly .this is a highly non - trivial requirement .but the numbers do match , thereby providing a coherent description of the quantum horizon . in this description , the polymer excitations of the bulk geometry ,each labelled by a spin , pierce the horizon , endowing it an elementary area given by ( [ 2.1 ] ) .the sum adds up to the total horizon area .the intrinsic geometry of the horizon is flat except at these punctures , but at each puncture there is a _quantized _ deficit angle .these add up to endow the horizon with a 2-sphere topology . 
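the statement that these deficit angles 'add up' correctly can be made precise with the classical gauss-bonnet theorem. the following is only the geometric bookkeeping behind that statement, not the chern-simons derivation itself: for a metric on a surface $S$ of 2-sphere topology which is flat except for conical singularities with deficit angles $\theta_{i}$ at the punctures,

$$\int_{S} K\, dA \;+\; \sum_{i}\theta_{i} \;=\; 2\pi\,\chi(S^{2}) \;=\; 4\pi , \qquad K = 0 \ \text{away from the punctures} \quad\Longrightarrow\quad \sum_{i}\theta_{i} \;=\; 4\pi ,$$

so the quantized deficit angles carried by the punctures must add up to exactly $4\pi$, which is what endowing the horizon with a 2-sphere topology amounts to here.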
for a solar mass black hole, a typical horizon state would have an enormous number of punctures (of the order of $10^{77}$), each contributing a tiny deficit angle. so, although quantum geometry _is_ distributional, it can be well approximated by a smooth metric. the counting of states can be carried out as follows. first one constructs a micro-canonical ensemble by restricting oneself only to those states for which the total area, mass and angular momentum multipole moments and charges lie in small intervals around fixed values. (as is usual in statistical mechanics, the leading contribution to the entropy is independent of the precise choice of these small intervals.) for each set of punctures, one can compute the dimension of the surface hilbert space, consisting of chern-simons states compatible with that set. one allows all possible sets of punctures (by varying both the spin labels and the number of punctures) and adds up the dimensions of the corresponding _surface_ hilbert spaces to obtain the number $\mathcal{N}$ of permissible surface states. one finds that the horizon entropy is given by

$$S_{\rm hor} \;:=\; \ln\mathcal{N} \;=\; \frac{\gamma_{o}}{\gamma}\,\frac{a_{\rm hor}}{4\,\ell_{\rm Pl}^{2}} \;-\; \frac{1}{2}\,\ln\!\left(\frac{a_{\rm hor}}{\ell_{\rm Pl}^{2}}\right) \;+\; o\!\left(\frac{a_{\rm hor}}{\ell_{\rm Pl}^{2}}\right) , \eqno(3.3)$$

where $\gamma_{o}$ is a root of an algebraic equation and $o(x)$ denotes quantities for which $o(x)/x$ tends to zero as $x$ tends to infinity. thus, for large black holes, the leading term is indeed proportional to the horizon area. this is a non-trivial result; for example, early calculations often led to proportionality to the square-root of the area. however, even for large black holes, one obtains agreement with the hawking-bekenstein formula only in the sector of quantum geometry in which the barbero-immirzi parameter takes the value $\gamma = \gamma_{o}$. thus, while all $\gamma$ sectors are equivalent classically, the standard quantum field theory in curved space-times is recovered in the semi-classical theory only in the $\gamma = \gamma_{o}$ sector of quantum geometry. it is quite remarkable that thermodynamic considerations involving _large_ black holes can be used to fix the quantization ambiguity which dictates such planck scale properties as eigenvalues of geometric operators. note however that the value of $\gamma$ can be fixed by demanding agreement with the semi-classical result just in one case: e.g., a spherical horizon with zero charge, or a cosmological horizon in the de sitter space-time. once the value of $\gamma$ is fixed, the theory is completely fixed and we can ask: does this theory yield the hawking-bekenstein value of entropy of _all_ isolated horizons, irrespective of the values of charges, angular momentum, and cosmological constant, the amount of distortion, or hair? the answer is in the affirmative. thus, the agreement with quantum field theory in curved space-times holds in _all_ these diverse cases. why does the entropy not depend on other quantities such as charges? this important property can be traced back to a key consequence of the isolated horizon boundary conditions: detailed calculations show that only the gravitational part of the symplectic structure has a surface term at the horizon; the matter symplectic structures have only volume terms. (furthermore, the gravitational surface term is insensitive to the value of the cosmological constant.)
consequently, there are no independent surface quantum states associated with matter. this provides a natural explanation of the fact that the hawking-bekenstein entropy depends only on the horizon area and is independent of electro-magnetic (or other) charges. so far, all matter fields were assumed to be minimally coupled to gravity (there was no restriction on their couplings to each other). if one allows non-minimal gravitational couplings, the isolated horizon framework (as well as other methods) shows that entropy should depend not just on the area _but also on the values of non-minimally coupled matter fields at the horizon_. at first, this non-geometrical nature of entropy seems to be a major challenge to approaches based on quantum geometry. however, it turns out that, in the presence of non-minimal couplings, the geometrical orthonormal triads are no longer functions just of the momenta conjugate to the gravitational connection _but depend also on matter fields_. thus quantum riemannian geometry, including area operators, can no longer be analyzed just in the gravitational sector of the quantum theory. the dependence of the triads and area operators on matter fields is such that the counting of surface states leads precisely to the correct expression of entropy, again for the same value of the barbero-immirzi parameter. this is a subtle and highly non-trivial check on the robustness of the quantum geometry approach to the statistical mechanical calculation of black hole entropy. finally, let us return to wheeler's 'it from bit'. the horizon can indeed be divided into elementary cells. but they need not have the same area; the area of a cell can be $8\pi\gamma\,\ell_{\rm Pl}^{2}\,\sqrt{j(j+1)}$, where $j$ is an _arbitrary_ half-integer, subject only to the requirement that this area does not exceed the total horizon area. wheeler assigned to each elementary cell two states, one 'bit'. in the quantum geometry calculation, this corresponds to focussing just on $j = \frac{1}{2}$ punctures. while the corresponding surface states are already sufficiently numerous to give entropy proportional to area, other states with higher values of $j$ also contribute to the leading term in the expression of entropy. to summarize, quantum geometry naturally provides the micro-states responsible for the huge entropy associated with horizons. in this analysis, all black holes and cosmological horizons are treated in a unified fashion; there is no restriction, e.g., to near-extremal black holes. the sub-leading term has also been calculated and shown to be $-\frac{1}{2}\ln\left(a_{\rm hor}/\ell_{\rm Pl}^{2}\right)$. finally, in this analysis quantum einstein's equations _are_ used. in particular, had we not imposed the quantum gauss and diffeomorphism constraints on surface states, the spurious gauge degrees of freedom would have given an infinite entropy. however, detailed considerations show that, because of the isolated horizon boundary conditions, the hamiltonian constraint has to be imposed just in the bulk.
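as an illustration of the counting described above, the toy calculation below estimates the number of puncture configurations compatible with a given horizon area. it is deliberately oversimplified: it weights each puncture of spin $j$ by $2j+1$, ignores the projection constraint and the detailed chern-simons degeneracies, uses an illustrative value of $\gamma$, and rounds puncture areas onto a grid so that a simple dynamic programme can do the bookkeeping. its only purpose is to exhibit that the logarithm of the number of surface states grows linearly with the horizon area.

```python
import math

# Toy version of the horizon state counting sketched above.  Deliberately
# oversimplified: each puncture with spin j is weighted by (2j + 1) surface
# states, the projection constraint and the detailed Chern-Simons
# degeneracies are ignored, gamma is an illustrative value, and puncture
# areas are rounded onto a grid so that a dynamic programme can do the
# bookkeeping.  The only point is that ln(N) grows linearly with the area.

GAMMA = 0.2375                            # illustrative Barbero-Immirzi value
STEP = 0.01                               # area grid spacing (Planck units)
SPINS = [k / 2 for k in range(1, 21)]     # j = 1/2, 1, ..., 10

def count_surface_states(area_max):
    """Weighted number of ordered puncture configurations with total area <= area_max."""
    quanta = [(int(round(8 * math.pi * GAMMA * math.sqrt(j * (j + 1)) / STEP)),
               2 * j + 1) for j in SPINS]
    nbins = int(area_max / STEP)
    counts = [0.0] * (nbins + 1)          # counts[m]: configurations of area m*STEP
    counts[0] = 1.0                       # the empty configuration
    for m in range(1, nbins + 1):
        for delta, weight in quanta:
            if delta <= m:
                counts[m] += weight * counts[m - delta]
    return sum(counts)

for area in (50, 100, 150, 200):          # horizon areas in Planck units
    n = count_surface_states(area)
    print(f"a_hor = {area:3d}  ln(N) = {math.log(n):6.1f}  ln(N)/a_hor = {math.log(n)/area:.3f}")
```

already for areas of a couple of hundred planck units the ratio $\ln\mathcal{N}/a_{\rm hor}$ is essentially constant, which is the behaviour behind the leading, area-proportional term of (3.3); the coefficient itself depends, of course, on the details of the counting that this toy model ignores.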
since in the entropy calculation one traces over bulk states , the final result is insensitive to the details of how this ( or any other bulk ) equation is imposed .thus , as in other approaches to black hole entropy , the calculation does not require a complete knowledge of quantum dynamics .from the historical and conceptual perspectives of section [ s1 ] , loop quantum gravity has had several successes .thanks to the systematic development of quantum geometry , several of the roadblocks encountered by quantum geometrodynamics were removed .functional analytic issues related to the presence of an infinite number of degrees of freedom are now faced squarely .integrals on infinite dimensional spaces are rigorously defined and the required operators have been systematically constructed . thanks to this high level of mathematical precision , the canonical quantization program has leaped past the ` formal ' stage of development . more importantly , although some key issues related to quantum dynamics still remain , it has been possible to use the parts of the program that are already well established to extract useful and highly non - trivial physical predictions . in particular , some of the long standing issues about the nature of the big - bang and properties of quantum black holes have been resolved . in this section , i will further clarify some conceptual issues , discuss current research and outline some directions for future . _ quantum geometry . _ from conceptual considerations , an important issue is the _ physical _ significance of discreteness of eigenvalues of geometric operators .recall first that , in the classical theory , differential geometry simply provides us with formulas to compute areas of surfaces and volumes of regions in a riemannian manifold . to turn these quantities into physical observables of general relativity , one has to define the surfaces and regions _ operationally _, e.g. using matter fields .once this is done , one can simply use the formulas supplied by differential geometry to calculate values of these observable .the situation is similar in quantum theory .for instance , the area of the isolated horizon is a dirac observable in the classical theory and the application of the quantum geometry area formula to _ this _ surface leads to physical results . in 2 + 1 dimensions , freidel , noui and perezhave recently introduced point particles coupled to gravity .the physical distance between these particles is again a dirac observable . when used in this context , the spectrum of the length operator has direct physical meaning . in all these situations ,the operators and their eigenvalues correspond to the ` proper ' lengths , areas and volumes of physical objects , measured in the rest frames .finally sometimes questions are raised about compatibility between discreteness of these eigenvalues and lorentz invariance . 
as was recently emphasized by rovelli , there is no tension whatsoever : it suffices to recall that discreteness of eigenvalues of the angular momentum operator of non - relativistic quantum mechanics is perfectly compatible with the rotational invariance of that theory ._ quantum einstein s equations ._ the challenge of quantum dynamics in the full theory is to find solutions to the quantum constraint equations and endow these physical states with the structure of an appropriate hilbert space .the general consensus in the loop quantum gravity community is that while the situation is well - understood for the gauss and diffeomorphism constraints , it is far from being definitive for the hamiltonian constraint .it _ is _ non - trivial that well - defined candidate operators representing the hamiltonian constraint exist on the space of solutions to the gauss and diffeomorphism constraints .however there are many ambiguities and none of the candidate operators has been shown to lead to a ` sufficient number of ' semi - classical states in 3 + 1 dimensions .a second important open issue is to find restrictions on matter fields and their couplings to gravity for which this non - perturbative quantization can be carried out to a satisfactory conclusion .as mentioned in section [ s1.1 ] , the renormalization group approach has provided interesting hints .specifically , luscher and reuter have presented significant evidence for a non - trivial fixed point for pure gravity in 4 dimensions .when matter sources are included , it continues to exist only when the matter content and couplings are suitably restricted. for scalar fields , in particular , percacci and perini have found that polynomial couplings ( beyond the quadratic term in the action ) are ruled out , an intriguing result that may ` explain ' the triviality of such theories in minkowski space - times .are there similar constraints coming from loop quantum gravity ? to address these core issues , at least four different avenues are being pursued .the first , and the closest to ideas discussed in section [ s2.4 ] , is the ` master constraint program ' recently introduced by thiemann .the idea here is to avoid using an infinite number of hamiltonian constraints , each smeared by a so - called ` lapse function ' .instead , one squares the integrand itself in an appropriate sense and then integrates it on the 3-manifold . in simple examples , this procedure leads to physically viable quantum theories . in the gravitational case , however , the procedure does not seem to remove any of the ambiguities . rather , its principal strength lies in its potential to complete the last step , iii ) , in quantum dynamics : finding the physically appropriate scalar product on physical states .the general philosophy is similar to that advocated by john klauder over the years in his approach to quantum gravity based on coherent states .a second strategy to solve the quantum hamiltonian constraint is due to gambini , pullin and their collaborators .it builds on their extensive work on the interplay between quantum gravity and knot theory .the more recent developments use the relatively new invariants of _ intersecting _ knots discovered by vassiliev .this is a novel approach which furthermore has a potential of enhancing the relation between topological field theories and quantum gravity . as our knowledge of invariants of intersecting knots deepens, this approach is likely to provide increasingly significant insights . 
in particular, it has the potential of leading to a formulation of quantum gravity which does not refer even to a background manifold ( see footnote 9 ) .the third approach comes from spin - foam models , mentioned in section [ s2.4 ] , which provide a path integral approach to quantum gravity .transition amplitudes from path integrals can be used to restrict the choice of the hamiltonian constraint operator in the canonical theory .this is a promising direction and freidel , noui , perez , rovelli and others are already carrying out detailed analysis of restrictions , especially in 2 + 1 dimensions . in the fourth approach ,also due to gambini and pullin , one first constructs consistent discrete theories at the classical level and then quantizes them . in this program , there are no constraints : they are solved classically to find the values of the ` lapse and shift fields ' which define ` time - evolution ' .this strategy has already been applied successfully to gauge theories and certain cosmological models .an added bonus here is that one can revive a certain proposal made by page and wootters to address the difficult issues of interpretation of quantum mechanics which become especially acute in quantum cosmology , and more generally in the absence of a background physical geometry ._ quantum cosmology ._ as we saw in section [ s3 ] , loop quantum gravity has resolved some of the long - standing physical problems about the nature of the big - bang . in quantum cosmology, there is ongoing work by ashtekar , bojowald , willis and others on obtaining ` effective field equations ' which incorporate quantum corrections .quantum geometry effects significantly modify the effective field equations and the modifications in turn lead to new physics in the early universe .in particular , bojowald and date have shown that not only is the initial singularity resolved but the ( belinski - khalatnikov - lifschitz type ) chaotic behavior predicted by classical general relativity and supergravity also disappears !this is perhaps not surprising because the underlying geometry exhibits quantum discreteness : even in the classical theory chaos disappears if the theory is truncated at any smallest , non - zero volume .there are also less drastic but interesting modifications of the inflationary scenario with potentially observable consequences .this is a forefront area and it is encouraging that loop quantum cosmology is already yielding some phenomenological results . _ quantum black holes . _ as in other approaches to black hole entropy , concrete progress could be made because the analysis does not require detailed knowledge of how quantum dynamics is implemented in _ full _ quantum theory .also , restriction to large black holes implies that the hawking radiation is negligible , whence the black hole surface can be modelled by an isolated horizon . 
to incorporate back - reaction, one would have to extend the present analysis to _ dynamical horizons _it is now known that , in the classical theory , the first law can be extended also to these time - dependent situations and the leading term in the expression of the entropy is again given by .hawking radiation will cause the horizon of a large black hole to shrink _ very _ slowly , whence it is reasonable to expect that the chern - simons - type description of the quantum horizon geometry can be extended also to this case .the natural question then is : can one describe in detail the black hole evaporation process and shed light on the issue of information loss .the standard space - time diagram of the evaporating black hole is shown in figure [ traditional ] .it is based on two ingredients : i ) hawking s original calculation of black hole radiance , in the framework of quantum field theory on a _ fixed _ background space - time ; and ii ) heuristics of back - reaction effects which suggest that the radius of the event horizon must shrink to zero .it is generally argued that the semi - classical process depicted in this figure should be reliable until the very late stages of evaporation when the black hole has shrunk to planck size and quantum gravity effects become important . since it takes a very long time for a large black hole to shrink to this size , one then argues that the quantum gravity effects during the last stages of evaporation will not be sufficient to restore the correlations that have been lost due to thermal radiation over such a long period .thus there is loss of information .intuitively , the lost information is ` absorbed ' by the final singularity which serves as a new boundary to space - time . however , loop quantum gravity considerations suggest that this argument is incorrect in two respects .first , the semi - classical picture breaks down not just at the end point of evaporation but in fact _ all along what is depicted as the final singularity_. recently , using ideas from quantum cosmology , the interior of the schwarzschild horizon was analyzed in the context of loop quantum gravity .again , it was found that the singularity is resolved due to quantum geometry effects .thus , the space - time does _ not _ have a singularity as its final boundary .the second limitation of the semi - classical picture of figure [ traditional ] is its depiction of the event horizon .the notion of an event horizon is teleological and refers to the _ global _ structure of space - time .resolution of the singularity introduces a domain in which there is no classical space - time , whence the notion ceases to be meaningful ; it is simply ` transcended ' in quantum theory .this leads to a new , possible paradigm for black hole evaporation in loop quantum gravity in which the dynamical horizons evaporate with emission of hawking radiation , the initial pure state evolves to a final pure state and there is no information loss .furthermore , the semi - classical considerations are not simply dismissed ; they turn out to be valid in certain space - time regions and under certain approximations .but for fundamental conceptual issues , they are simply inadequate .i should emphasize however that , although elements that go into the construction of this paradigm seem to be on firm footing , many details will have to be worked out before it can acquire the status of a model ._ semi - classical issues ._ a frontier area of research is contact with low energy physics . 
here ,a number of fascinating challenges appear to be within reach .fock states have been isolated in the polymer framework and elements of quantum field theory on quantum geometry have been introduced .these developments lead to concrete questions .for example , in quantum field theory in flat space - times , the hamiltonian and other operators are regularized through normal ordering . for quantum field theory on quantum geometry ,on the other hand , the hamiltonians are expected to be manifestly finite .can one then show that , in a suitable approximation , normal ordered operators in the minkowski continuum arise naturally from these finite operators ?can one ` explain ' why the so - called hadamard states of quantum field theory in curved space - times are special ?these issues also provide valuable hints for the construction of viable semi - classical states of quantum geometry .the final and much more difficult challenge is to ` explain ' why perturbative quantum general relativity fails if the theory exists non - perturbatively .as mentioned in section [ s1 ] , heuristically the failure can be traced back to the insistence that the continuum space - time geometry is a good approximation even below the planck scale . but a more detailed answer is needed .is it because , as recent developments in euclidean quantum gravity indicate , the renormalization group has a non - trivial fixed point ?_ unification ._ finally , there is the issue of unification . at a kinematical level, there is already an unification because the quantum configuration space of general relativity is the same as in gauge theories which govern the strong and electro - weak interactions .but the non - trivial issue is that of dynamics .i will conclude with a speculation .one possibility is to use the ` emergent phenomena ' scenario where new degrees of freedom or particles , which were not present in the initial lagrangian , emerge when one considers excitations of a non - trivial vacuum .for example , one can begin with solids and arrive at phonons ; start with superfluids and find rotons ; consider superconductors and discover cooper pairs . in loop quantum gravity, the micro - state representing minkowski space - time will have a highly non - trivial planck - scale structure .the basic entities will be 1-dimensional and polymer - like . even in absence of a detailed theory, one can tell that the fluctuations of these 1-dimensional entities will correspond not only to gravitons but also to other particles , including a spin-1 particle , a scalar and an anti - symmetric tensor .these ` emergent states ' are likely to play an important role in minkowskian physics derived from loop quantum gravity .a detailed study of these excitations may well lead to interesting dynamics that includes not only gravity but also a select family of non - gravitational fields .it may also serve as a bridge between loop quantum gravity and string theory . 
for , string theory has two a priori elements : unexcited strings which carry no quantum numbers and a background space - time .loop quantum gravity suggests that both could arise from the quantum state of geometry , peaked at minkowski ( or , de sitter ) space .the polymer - like quantum threads which must be woven to create the classical ground state geometries could be interpreted as unexcited strings .excitations of these strings , in turn , may provide interesting matter couplings for loop quantum gravity .my understanding of quantum gravity has deepened through discussions with a large number of colleagues . among them, i would especially like to thank john baez , peter bergmann , martin bojowald , alex corichi , steve fairhurst , christian fleischhack , laurent freidel , klaus fredenhagen , rodolfo gambini , amit ghosh , jim hartle , gary horowitz , ted jacobson , kirill krasnov , jerzy lewandowski , dieter lst , don marolf , jose mouro , ted newman , hermann nicolai , max niedermaier , karim noui , andrzej okow , roger penrose , alex perez , jorge pullin , carlo rovelli , joseph samuel , hanno sahlmann , ashoke sen , lee smolin , john stachel , daniel sudarsky , thomas thiemann , chris van den broeck , madhavan varadarajan , jacek wisniewski , josh willis , bill unruh , bob wald and jose - antonio zapata .this work was supported in part by the nsf grant phy 0090091 , the alexander von humboldt foundation and the eberly research funds of the pennsylvania state university .a common criticism of the canonical quantization program pioneered by dirac and bergmann is that in the very first step it requires a splitting of space - time into space and time , thereby doing grave injustice to space - time covariance that underlies general relativity .this is a valid concern and it is certainly true that the insistence on using the standard hamiltonian methods makes the analysis of certain conceptual issues quite awkward .loop quantum gravity program accepts this price because of two reasons .first , the use of hamiltonian methods makes it possible to have sufficient mathematical precision in the passage to quantum theory to resolve the difficult field theoretic problems , ensuring that there are no hidden infinities .the second and more important reason is that the mathematically coherent theory that results has led to novel predictions of direct physical interest .note however that the use of hamiltonian methods by itself does not require a 3 + 1 splitting .following lagrange , one can construct a ` covariant phase space ' from _ solutions _ to einstein s equations .this construction has turned out to be extremely convenient in a number of applications : quantum theory of linear fields in curved space - times ; perturbation theory of stationary stars and black holes , and derivation of expressions of conserved quantities in general relativity , including the ` dynamical ' ones such as the bondi 4-momentum at null infinity .therefore , it is tempting to use the covariant hamiltonian formulation as a starting point for quantization .in fact irving segal proposed this strategy for interacting quantum field theories in minkowski space - time already in the seventies .however , it was soon shown that his specific strategy is not viable beyond linear systems and no one has been able to obtain a satisfactory substitute .a similar strategy was tried for general relativity as well , using techniques form geometric quantization . 
recall that quantum states are square - integrable functions of only ` half ' the number of phase space variables usually the configuration variables . to single out their analogs , in geometric quantization one has to introduce additional structure on the covariant phase space , called a ` polarization ' .quantization is easiest if this polarization is suitably ` compatible ' with the hamiltonian flow of the theory .unfortunately , no such polarization has been found on the phase space of general relativity .more importantly , even if this technical problem were to be overcome , the resulting quantum theory would be rather uninteresting for the following reason . in order to have a globally well - defined hamiltonian vector field ,one would have to restrict oneself only to ` weak ' , 4-dimensional gravitational fields .quantization of such a covariant phase space , then , would not reveal answers to the most important challenges of quantum gravity which occur in the strong field regimes near singularities .let us therefore return to the standard canonical phase space and use it as the point of departure for quantization . in the classical regime ,the hamiltonian theory is , of course , completely equivalent to the space - time description . it does have space - time covariance , butit is not ` manifest ' .is this a deep limitation for quantization ? recall that a classical space - time is analogous to a full dynamical trajectory of a particle in non - relativistic quantum mechanics and particle trajectories have no physical role in the full quantum theory . indeed , even in a semi - classical approximation , the trajectories are fuzzy and smeared . for the same reason ,the notion of classical space - times and of space - time covariance is not likely to have a fundamental role in the full quantum theory .these notions have to be recovered only in an appropriate semi - classical regime .this point is best illustrated in 3-dimensional general relativity which shares all the conceptual problems with its 4-dimensional analog but which is technically much simpler and can be solved exactly .there , one can begin with a 2 + 1 splitting and carry out canonical quantization .one can identify , in the canonical phase space , a complete set of functions which commute with all the constraints .these are therefore ` dirac observables ' , associated with entire space - times . in quantum theory , they become self - adjoint operators , enabling one to interpret states and extract physical information from quantum calculations , e.g. , of transition amplitudes .it turns out that quantum theory states , inner - products , observables can be expressed purely combinatorially . in this description , in the full quantum theory there is no space , no time , no covariance to speak of .these notions emerge only when we restrict ourselves to suitable semi - classical states .what is awkward in the canonical approach is the _ classical limit _ procedure . in the intermediate steps of this procedure, one uses the canonical phase space based on a 2 + 1 splitting .but because this phase space description is equivalent to the covariant classical theory , in the final step one again has space - time covariance . 
to summarize, space - time covariance does not appear to have a fundamental role in the full quantum theory because there is neither space nor time in the full theory and it _ is _ recovered in the classical limit .the awkwardness arises only in the intermediate steps .this overall situation has an analog in ordinary quantum mechanics .let us take the hamiltonian framework of a non - relativistic system as the classical theory .then we have a ` covariance group ' that of the canonical transformations .to a classical physicist , this is geometrically natural and physically fundamental . yet , in full quantum theory , it has no special role .the theory of canonical transformations is replaced by the dirac s transformation theory which enables one to pass from one viable quantum representation ( e.g. , the q - representation ) to another ( e.g. the p - representation ) .the canonical group re - emerges only in the classical limit .however , in the standard q - representation , this recovery takes place in an awkward fashion . in the first step ,one recovers just the configuration space .but one can quickly reconstruct the phase space as the cotangent bundle over this configuration space , introduce the symplectic structure and recover the full canonical group as the symmetry group of the classical theory .we routinely accept this procedure and the role of ` phase - space covariance ' in quantization in spite of an awkwardness in an intermediate step of taking the classical limit .the canonical approach adopts a similar viewpoint towards space - time covariance .arnowitt r , deser s and misner c w 1962 the dynamics of general relativity , in _ gravitation : an introduction to current research _ed witten l ( john wiley , new york ) wheeler j a 1962 _ geometrodynamics _ , ( academic press , new york ) wheeler j a 1964 geometrodynamics and the issue of the final state _ relativity , groupos and topology _ eds dewitt c m and dewitt b s ( gordon and breach , new york ) komar a 1970 quantization program for general relativity , in _ relativity _ carmeli m , fickler s. i. and witten l ( eds ) ( plenum , new york ) ashtekar a and geroch r 1974 quantum theory of gravitation , _ rep .phys . _ * 37 * 1211 - 1256 weinberg s 1972 _ gravitation and cosmology _ ( john wiley , new york ) dewitt b s 1972 covariant quantum geometrodynamics , in _ magic without magic : john archibald wheeler _ ed klauder j r ( w. h. freeman , san fransisco ) isham c. j. 1975 an introduction to quantum gravity , in _ quantum gravity , an oxford symposium _ isham c j , penrose r and sciama d w ( clarendon press , oxford ) duff m 1975 covariant qauantization in _ quantum gravity , an oxford symposium _ isham c j , penrose r and sciama d w ( clarendon press , oxford ) penrose r 1975 twistor theory , its aims and achievements _ quantum gravity , an oxford symposium _ isham c j , penrose r and sciama d w ( clarendon press , oxford ) israel w and hawking s w eds 1980 _ general relativity , an einstein centenary survey _ ( cambridge up , cambridge ) .bergmann p g and komar a 1980 the phase space formulation of general relativity and approaches toward its canonical quantization _ general relativity and gravitation vol 1 , on hundred years after the birth of albert einstein _ , held a ed ( plenum , new york ) wolf h ( ed ) 1980 _ some strangeness in proportion _( addison wesley , reading ) hawking s w 1980 _ is end in sight for theoretical physics ? 
|
the goal of this article is to present a broad perspective on quantum gravity for _ non-experts_. after a historical introduction, key physical problems of quantum gravity are illustrated. while there are a number of interesting and insightful approaches to address these issues, over the past two decades sustained progress has primarily occurred in two programs: string theory and loop quantum gravity. the first program is described in horowitz's contribution while my article will focus on the second. the emphasis is on underlying ideas, conceptual issues and overall status of the program rather than mathematical details and associated technical subtleties. _ pacs 04.60.pp, 04.60.ds, 04.60.nc, 03.65.sq _
|
for full exploitation of high resolution position sensitive detectors, it is crucial to determine the detector location and orientation to a precision better than their intrinsic resolution. it is a very demanding task to assemble a large number of detector units in a large and complex detector system to this high precision. also, after assembly, the position determination of the modules by optical survey has its limitations because of detectors obscuring each other. therefore the final tuning of detector and sensor positions is made by using reconstructed tracks. in this paper we present an effective method by which individual sensors in a detector setup can be aligned to a high precision with respect to each other. the basic idea is illustrated in figure [ fig.illustr ]: using a large number of tracks, an optimum of each sensor position and orientation is determined such that the track fit residuals are minimized, i.e. given the fitted track and the measured hits on a detector, the sensor is moved such as to minimize the residuals. the outline of this paper is as follows: in section [ sec.literature ] we briefly review published alignment methods. in section [ sec.coordsys ] we introduce the basic notations and coordinate systems involved in our method. in section [ sec.description ] we present the detailed formulation of the method. in sections [ sec.testbeam ] and [ sec.simulation ] we demonstrate the performance of the method applied to a test beam setup and to a simulated pixel vertex detector, respectively. the cms pixel detector is used as a model in the simulation. most hep experiments equipped with precise tracking detectors have to deal with misalignment issues, and several different approaches for alignment by tracks have been used and reported. most methods are iterative with 5-6 parameters solved at a time. several papers concerning different aspects of alignment in the delphi experiment can be found in the literature; for instance, cosmic ray tracks are used for the global alignment between the sub-detectors vd, od and tpc. the most detailed delphi alignment paper deals with the alignment of the microvertex detector. in the aleph experiment, alignment was carried out wafer by wafer, and with 20 iterations and 20000 and 4000 events an accuracy of a few micrometres can be achieved. a different, computationally challenging approach is chosen in the sld experiment, where the algorithm requires simultaneous solution of 576 parameters leading to a 576 by 576 matrix inversion. in the sld vertex detector, a recently developed matrix singular value decomposition technique is also used for internal alignment. our method is applicable to detector setups which consist of planar sensors like silicon pixel or strip detectors. for track reconstruction one conventionally uses the local (sensor) coordinate system and the global detector system. the local system is defined with respect to a detector module (sensor) as follows: the origin is at the center of the sensor, the w-axis is normal to the sensor, the u-axis is along the precise coordinate and the v-axis along the coarse coordinate. the global coordinates are denoted as $\mathbf{r} = (x, y, z)$. the transformation from the global to the local system goes as $\mathbf{q} = \mathbf{R}\,(\mathbf{r} - \mathbf{r}_0)$, where $\mathbf{q} = (u, v, w)$, $\mathbf{R}$ is a rotation and $\mathbf{r}_0$ is the position of the detector center in global coordinates. in the very beginning of the experiment the rotation $\mathbf{R}$ and the position $\mathbf{r}_0$ are determined by detector assembly and survey information.
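a minimal numerical sketch of this global-to-local transformation is given below; the use of python/numpy, the euler-angle convention and all function names are illustrative assumptions made for this sketch, not part of the paper.

```python
import numpy as np

def rotation_from_euler(alpha, beta, gamma):
    """Build a rotation matrix from three angles about the x, y and z axes.

    The exact convention (order and sign of the rotations) is an assumption
    made for this sketch; any fixed convention works as long as the survey
    input and the alignment corrections use it consistently.
    """
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    rx = np.array([[1, 0, 0], [0, ca, sa], [0, -sa, ca]])
    ry = np.array([[cb, 0, -sb], [0, 1, 0], [sb, 0, cb]])
    rz = np.array([[cg, sg, 0], [-sg, cg, 0], [0, 0, 1]])
    return rz @ ry @ rx

def global_to_local(r, r0, R):
    """q = R (r - r0): global point r -> local (u, v, w) of one sensor."""
    return R @ (np.asarray(r) - np.asarray(r0))

def local_to_global(q, r0, R):
    """Inverse transformation: r = R^T q + r0."""
    return R.T @ np.asarray(q) + np.asarray(r0)

# example: a sensor centred at x = 4 cm, rotated by 30 degrees about the z axis
R = rotation_from_euler(0.0, 0.0, np.radians(30.0))
r0 = np.array([4.0, 0.0, 0.0])          # cm
hit_global = np.array([4.1, 0.2, 1.0])  # cm
hit_local = global_to_local(hit_global, r0, R)
assert np.allclose(local_to_global(hit_local, r0, R), hit_global)
```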
in the course of the experiment this information will be corrected by an incremental rotation and translation so that the new rotation and translation become $\mathbf{R} \rightarrow \Delta\mathbf{R}\,\mathbf{R}$ and $\mathbf{r}_0 \rightarrow \mathbf{r}_0 + \Delta\mathbf{r}$. the correction matrix $\Delta\mathbf{R}$ is expressed as a product of three small rotations by $\Delta\alpha$, $\Delta\beta$ and $\Delta\gamma$ around the u-axis, the (new) v-axis and the (new) w-axis, respectively. the position correction transforms to the local system as $\Delta\mathbf{q} = \mathbf{R}\,\Delta\mathbf{r}$, with $\Delta\mathbf{q} = (\Delta u, \Delta v, \Delta w)$. using ([transf1]-[tr_inv]) we find the corrected transformation from global to local system as $\mathbf{q}^c = \Delta\mathbf{R}\,\mathbf{R}\,(\mathbf{r} - \mathbf{r}_0) - \Delta\mathbf{q} \equiv \mathbf{R}_c(\mathbf{r} - \mathbf{r}_0) - \Delta\mathbf{q}$, where $\mathbf{R}_c = \Delta\mathbf{R}\,\mathbf{R}$ and the superscript $c$ stands for corrected. the task of the alignment procedure by tracks is to determine the corrective rotation $\Delta\mathbf{R}$ and translation $\Delta\mathbf{r}$ (or $\Delta\mathbf{q}$) as precisely as possible for each individual detector element. since the alignment corrections are small, the fitted trajectories can be approximated with a straight line in a vicinity of the detector plane. the size of this small region is determined by the alignment uncertainty which is expected to be at most a few hundred microns so that the straight line approximation is perfectly valid. the equation of a straight line in global coordinates, approximating the trajectory in a vicinity of the detector, can be written as $\mathbf{r}(t) = \mathbf{r}_x + t\,\hat{\mathbf{s}}$ ([str_unc]), where $\mathbf{r}_x$ is the trajectory impact point on the detector in question, $\hat{\mathbf{s}}$ is a unit vector parallel to the line and $t$ is a parameter. equation ([str_unc]) is for _ uncorrected _ detector positions. using eq. ([trans2]) the _ corrected _ straight line equation in the local system reads $\mathbf{q}^c(t) = \Delta\mathbf{R}\,\mathbf{q}_x + t\,\mathbf{R}_c\hat{\mathbf{s}} - \Delta\mathbf{q}$, where $\mathbf{q}_x = \mathbf{R}(\mathbf{r}_x - \mathbf{r}_0)$ is the uncorrected impact point in local coordinates. a point which lies in the detector plane must fulfill the condition $\mathbf{q}^c\cdot\hat{\mathbf{w}} = 0$, where $\hat{\mathbf{w}}$ is normal to the detector. from this condition we can solve the parameter $t$ which gives the _ corrected _ impact or crossing point on the detector: $t_c = -\,[\Delta\mathbf{R}\,\mathbf{q}_x - \Delta\mathbf{q}]\cdot\hat{\mathbf{w}}\,/\,(\mathbf{R}_c\hat{\mathbf{s}}\cdot\hat{\mathbf{w}})$ ([sol_t]). the corrected impact point coordinates in the local system are then $\mathbf{q}_x^c = \Delta\mathbf{R}\,\mathbf{q}_x - \frac{[\Delta\mathbf{R}\,\mathbf{q}_x - \Delta\mathbf{q}]\cdot\hat{\mathbf{w}}}{\mathbf{R}_c\hat{\mathbf{s}}\cdot\hat{\mathbf{w}}}\,\mathbf{R}_c\hat{\mathbf{s}} - \Delta\mathbf{q}$ ([eq.qxc]). since the uncorrected impact point $\mathbf{q}_x$ lies in the uncorrected detector plane, eq. ([eq.qxc]) can be rewritten in terms of the local track direction ([eq.qxcl]): with $\hat{\mathbf{t}} = \mathbf{R}\hat{\mathbf{s}}$, the uncorrected trajectory direction in the detector's local frame of reference, one has $\mathbf{R}_c\hat{\mathbf{s}} = \Delta\mathbf{R}\,\hat{\mathbf{t}}$, and eq. ([eq.qxcl]) evaluates to $\mathbf{q}_x^c = \Delta\mathbf{R}\,\mathbf{q}_x - \left([\Delta\mathbf{R}\,\mathbf{q}_x - \Delta\mathbf{q}]\cdot\hat{\mathbf{w}}\right)\frac{\Delta\mathbf{R}\,\hat{\mathbf{t}}}{[\Delta\mathbf{R}\,\hat{\mathbf{t}}]_3} - \Delta\mathbf{q}$ ([eq.qxcll]), where $[\,\cdot\,]_3$ denotes the third (w) component. this expression provides us with a handle by which the unknowns $\Delta\mathbf{q}$ and $\Delta\mathbf{R}$ can be estimated by minimizing a respective $\chi^2$ function using a large number of tracks. we denote a measured point in local coordinates as $(u_m, v_m)$. the corresponding trajectory impact point is $(u_x^c, v_x^c)$; for simplicity we omit the superscripts and write $(u_x, v_x)$. in stereo and pixel detectors we have two measurements, $u_m$ and $v_m$, and in non-stereo strip detectors only one, $u_m$; in the latter case the coarse coordinate $v$ is redundant. the residual is either a 2-vector $\boldsymbol{\varepsilon} = (u_m - u_x,\; v_m - v_x)^T$ ([eq.epsilon]) or a scalar $\varepsilon = u_m - u_x$. in the following we treat the more general 2-vector case. the scalar case is a straightforward specification of the 2-vector formalism. the function to be minimized for a given detector is $\chi^2 = \sum_j \boldsymbol{\varepsilon}_j^T\,\mathbf{V}_j^{-1}\,\boldsymbol{\varepsilon}_j$, where the sum is taken over the tracks $j$ and $\mathbf{V}_j$ is the covariance matrix of the measurements associated with the track $j$. the alignment correction coefficients, i.e. the three position parameters $(\Delta u, \Delta v, \Delta w)$ and the three orientation parameters $(\Delta\alpha, \Delta\beta, \Delta\gamma)$, are found iteratively by a general minimization procedure.
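a small numerical sketch of the corrected impact point of eq. ([eq.qxcll]) and of the resulting residual is given below; it follows the conventions chosen above, and the python/numpy implementation, the rotation sign conventions and all names are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def small_rotation(d_alpha, d_beta, d_gamma):
    """Correction matrix Delta R as a product of three rotations about the
    local u, v and w axes; the order and the signs are the convention
    assumed for this sketch."""
    ca, sa = np.cos(d_alpha), np.sin(d_alpha)
    cb, sb = np.cos(d_beta), np.sin(d_beta)
    cg, sg = np.cos(d_gamma), np.sin(d_gamma)
    ru = np.array([[1, 0, 0], [0, ca, sa], [0, -sa, ca]])
    rv = np.array([[cb, 0, -sb], [0, 1, 0], [sb, 0, cb]])
    rw = np.array([[cg, sg, 0], [-sg, cg, 0], [0, 0, 1]])
    return rw @ rv @ ru

def corrected_impact_point(q_x, t_hat, dq, d_angles):
    """Evaluate the corrected crossing point in local coordinates.

    q_x      : uncorrected impact point (u, v, 0) in the local frame
    t_hat    : uncorrected track direction in the local frame (unit vector)
    dq       : translation correction (du, dv, dw)
    d_angles : tilt corrections (d_alpha, d_beta, d_gamma)
    """
    dR = small_rotation(*d_angles)
    a = dR @ q_x - dq          # [Delta R q_x - Delta q]
    s = dR @ t_hat             # rotated local track direction
    return dR @ q_x - (a[2] / s[2]) * s - dq

def residual(measured_uv, q_x, t_hat, dq, d_angles, cov):
    """2-vector residual (u_m - u_x, v_m - v_x) and its chi2 contribution."""
    q_c = corrected_impact_point(q_x, t_hat, dq, d_angles)
    eps = np.asarray(measured_uv) - q_c[:2]
    chi2 = eps @ np.linalg.solve(cov, eps)
    return eps, chi2
```

with all six corrections set to zero the corrected point reduces to the uncorrected one, so the residuals then directly probe the raw misalignment of the sensor.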
at each step of the iteration one uses the so far best estimate of the alignment parameters in the track fit. let us denote these parameters as $\mathbf{p} = (\Delta u, \Delta v, \Delta w, \Delta\alpha, \Delta\beta, \Delta\gamma)$. then, according to the general solution, the iterative correction to $\mathbf{p}$ has the following expression: $\delta\mathbf{p} = \left[\sum_j \mathbf{J}_j^T\mathbf{V}_j^{-1}\mathbf{J}_j\right]^{-1}\left[\sum_j \mathbf{J}_j^T\mathbf{V}_j^{-1}\boldsymbol{\varepsilon}_j\right]$, where $\mathbf{J}_j$ is a jacobian matrix of the residuals with respect to the alignment parameters. an adequate starting point for the iteration is a null correction vector $\mathbf{p} = \mathbf{0}$. in the general case of two measurements, $\mathbf{J}$ is a $2\times 6$ matrix. in case of a scalar residual, for single sided strip detectors, $\mathbf{J}$ is a vector of 5 elements, because $\Delta v$ is redundant and cannot be fitted. it is also foreseen that only a sub-set of the 6 alignment parameters would be fitted and the others kept fixed; in this case the dimension of the jacobian matrix reduces accordingly. the derivatives of the jacobian matrix can be computed to a good precision in the small correction angle approximation (see below); the elements of the matrix for a given track then follow from the linearized expressions, and the quantities entering them are defined in the next section. we call `` tilts '' the angle corrections which are small enough to justify the approximations $\sin\delta \approx \delta$ and $\cos\delta \approx 1$. in this approximation the correction matrix $\Delta\mathbf{R}$ reduces, to first order in the tilt angles, to the identity plus an antisymmetric matrix built from $\Delta\alpha$, $\Delta\beta$ and $\Delta\gamma$ ([eq.tiltm]). using eq. ([eq.tiltm]) we linearize eq. ([eq.qxcll]) and obtain expressions for the corrections of the impact point coordinates as linear functions of the alignment correction parameters ([eq.delux], [eq.delvx]); the coefficients involve the two track inclination angles in the local frame, whose tangents are the ratios $t_u/t_w$ and $t_v/t_w$ of the local direction components. with this approximation the residuals ([eq.epsilon]) depend linearly on all 6 parameters. hence the minimization problem is linear and can be solved by standard techniques without iteration. from eqs. ([eq.delux]) and ([eq.delvx]) we can estimate the contributions of various misalignments to the hit measurement errors. for example, the contribution of a tilt about an in-plane axis to the measured coordinate is small near normal incident angles, but grows rapidly as a function of the track inclination and of the distance of the hit from the sensor center, so that near the edge of the sensor even a small tilt produces a sizeable systematic error in the measured coordinate. the silicon detector team of helsinki institute of physics made a precision survey of detector resolution as a function of the angle of incidence of the tracks. the study was made in the cern h2 particle beam with a setup described in figure [ fig.sibt ]. one of the silicon strip detectors was fixed on a rotative support which allowed the tracks to enter over a wide range of incident angles. the angular dispersion of the beam was about 10 mrad and the hits covered the full area of the test detector. in order to obtain reliable results it was extremely important to calibrate the tilt angle to a very high precision. our algorithm was used in the alignment calibration.
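one iteration of this least-squares update can be sketched compactly with a finite-difference jacobian of the residual function from the previous sketch (an illustrative implementation, not the paper's code; it assumes `residual` from above is in scope, and the update sign matches the usual gauss-newton convention with the jacobian taken as the derivative of the residuals).

```python
import numpy as np

def numerical_jacobian(fun, p, eps=1e-7):
    """Finite-difference Jacobian d fun / d p for a vector-valued fun."""
    f0 = np.asarray(fun(p))
    J = np.zeros((f0.size, p.size))
    for k in range(p.size):
        dp = np.zeros_like(p)
        dp[k] = eps
        J[:, k] = (np.asarray(fun(p + dp)) - f0) / eps
    return J

def alignment_step(tracks, p):
    """One Gauss-Newton update of the six corrections of a single sensor.

    `tracks` is a list of (q_x, t_hat, measured_uv, cov) tuples and `p` holds
    (du, dv, dw, d_alpha, d_beta, d_gamma).  With J = d eps / d p the step is
    p -> p - (sum J^T V^-1 J)^-1 (sum J^T V^-1 eps); the paper's formula with a
    plus sign corresponds to defining J from the predicted coordinates instead.
    """
    lhs = np.zeros((6, 6))
    rhs = np.zeros(6)
    for q_x, t_hat, uv_m, cov in tracks:
        def res(pp, q_x=q_x, t_hat=t_hat, uv_m=uv_m, cov=cov):
            eps_vec, _ = residual(uv_m, q_x, t_hat, pp[:3], pp[3:], cov)
            return eps_vec
        eps_vec = res(p)
        J = numerical_jacobian(res, p)
        w = np.linalg.inv(cov)
        lhs += J.T @ w @ J
        rhs += J.T @ w @ eps_vec
    return p - np.linalg.solve(lhs, rhs)
```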
in table [ tab.align ] we show the result of the alignment, demonstrating the precision obtained by about 3000 beam tracks (the table lists the alignment parameters obtained by the algorithm). with the precise alignment we have been able to determine the optimal track incident angle which minimizes the detector resolution. a monte carlo simulation code was written to test the alignment algorithm. high momentum tracks were simulated and driven through a set of detector planes. the simulated hits were fluctuated randomly to simulate measurement errors. gaussian multiple scattering was added quadratically using the highland approximation. the algorithm involves misalignment of a detector setup in order to simulate a realistic detector. the experimenter's imperfect knowledge of the true position of the detector planes is simulated by reconstructing the trajectories in the ideal (not misaligned) detector. this means that in the transformation from local to global coordinate system one uses the ideal positions of the detector planes. the full algorithm in brief is as follows (a toy version of this loop is sketched below): 1 . creation of an ideal detector setup with no misalignments 2 . creation of a misaligned, realistic detector 3 . generation of the particles and hits in the misaligned detector simulating the real detector 4 . reconstruction of the particle trajectories in the nominal (ideal) detector thus using slightly wrong hit positions. this simulates the realistic situation in which the detector alignment is not yet performed. for the simulated detector type we choose a vertex detector which is a simplification of the cms pixel barrel detector with two layers. the setup is illustrated in figure [ fig.vdet ]. there are 144 sensors in layer 1 and 240 sensors in layer 2. the distance of the layer 1 from the beam line is about 4 cm and the layer 2 about 8 cm. in the simulation we used the following conditions: 1 . misalignment of chosen sensors: the shifts and the tilts were chosen at random, each within a fixed range. 2 . beam and vertex constraints: the vertex positions were gaussian fluctuated around the center of the beam diamond, and the tracks were fitted with the constraint to start from one point, i.e. from the primary vertex. in the following we consider two different cases of misaligned detectors: 1 . all sensors in layer 2 fixed, all sensors in layer 1 misaligned. 2 . only one sensor in layer 2 fixed, all remaining 383 sensors misaligned. in case i, with six parameters per sensor, the total number of fitted parameters is 144 × 6 = 864, and a large sample of tracks was used. the case i appears to be an easy one with which the algorithm copes very well, as we see below. the second case we call extreme since the alignment is based on one reference sensor which covers only about 0.26% of the detector setup area. the total number of fitted parameters in this case was 383 × 6 = 2298. in the following sections we show performance results of the algorithm in these two cases. the convergence rate of the alignment procedure as a function of the iteration cycle is shown in figure [ fig.caserr ]. it appears that the convergence is fast in the easy case (the 6 plots on the left) where more than 60% of the sensors provide the reference. the convergence takes place after a couple of iterations.
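the validation loop described above can be summarized in toy form as follows; this is a schematic closure test with made-up geometry and smearing values, not the simulation code actually used in the paper, and it assumes the helper functions from the earlier sketches (`corrected_impact_point`, `residual`, `alignment_step`) are available.

```python
import numpy as np

rng = np.random.default_rng(1)

# "true" misalignment of one sensor, unknown to the fit (closure test:
# the hits are generated with the same parametrization the fit uses)
true_p = np.array([0.005, -0.003, 0.002, 0.5e-3, -0.3e-3, 1.0e-3])  # cm, rad
sigma = 0.0010                                                      # cm hit resolution
cov = np.diag([sigma**2, sigma**2])

tracks = []
for _ in range(2000):
    u, v = rng.uniform(-3, 3), rng.uniform(-3, 3)          # nominal impact point (cm)
    q_x = np.array([u, v, 0.0])
    t_hat = np.array([rng.normal(0, 0.3), rng.normal(0, 0.3), 1.0])
    t_hat /= np.linalg.norm(t_hat)
    # hit generated on the secretly misaligned sensor, plus resolution smearing
    true_uv = corrected_impact_point(q_x, t_hat, true_p[:3], true_p[3:])[:2]
    measured = true_uv + rng.normal(0, sigma, size=2)
    tracks.append((q_x, t_hat, measured, cov))

# iterate the alignment fit starting from zero corrections
p = np.zeros(6)
for _ in range(5):
    p = alignment_step(tracks, p)
print(np.round(p - true_p, 6))   # should be compatible with zero
```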
in the case where only one sensor is taken as a reference (plots on the right of the figure), the situation is different. it appears that the number of iterations needed varies between 20 and 100 from parameter to parameter. it is also seen that the converged parameter values are somewhat off from the true values, but the precision is reasonable. the precision of the fitted parameters in comparison with the true values is shown in figure [ fig.casecc ] on the left for the case i. the correlations are very strong. the typical deviation of the fitted parameters from the true value is small for the offsets and a fraction of a milliradian for the tilts. the precision appears to be better than actually needed in this case, indicating that a smaller statistics would give a satisfactory result. in case ii (the plots on the right of the figure) a good correlation is observed, but the precision is somewhat more modest; for example the error in $\Delta w$ (the shift normal to the sensor plane) remains small in most cases. we have developed a sensor alignment algorithm which is mathematically and computationally simple. it is based on repeated track fitting and residuals optimization by $\chi^2$ minimization. the computation is simple, because the solution involves matrices whose dimension is at most $6\times 6$. the method is capable of solving simultaneously all six alignment parameters per sensor for a detector setup with a large number of sensors. we have successfully applied the method in a precision survey of silicon strip detector resolution as a function of the tracks' incident angle. furthermore, we have demonstrated the performance of the algorithm in case of a simulated two-layer pixel barrel vertex detector. the method performs very well in the case where the outer layer is taken as a reference and all inner sensors are to be aligned. the algorithm performs reasonably well also in the extreme case where only one sensor, representing some 0.26% of the total area, is taken as a reference for the alignment. references: [1] m. della negra et al., ``cms tracker technical design report'', cern/lhcc 98-6. [2] b. mours et al., ``the design, construction and performance of the aleph silicon vertex detector'', nucl. instr. and meth. a453 (1996) 101-115. [3] a. andreazza and e. piotto, ``the alignment of the delphi tracking detectors'', delphi 99-153 track 94 (1999). [4] m. caccia and a. stocchi, ``the delphi vertex detector alignment: a pedagogical statistical exercise'', infn ae 90-16 (1990). [5] k. abe et al., ``design and performance of the sld vertex detector, a 307 mpixel tracking system'', nucl. instr. and meth. a400 (1997). [6] d. j. jackson, dong su and f. j. wickens, ``internal alignment of the sld vertex detector using a matrix singular value decomposition technique'', nucl. instr. and meth. a491 (2002). [7] c. eklund et al., ``silicon beam telescope for cms detector tests'', nucl. instr. and meth. a430 (1999) 321-332. [8] k. banzuzi et al., ``performance and calibration studies of silicon strip detectors in a test beam'', nucl. instr. and meth. a453 (2000) 536. [9] v. l. highland, ``some practical remarks on multiple scattering'', nucl. instr. and meth. 129 (1975) 497. [10] d. kotlinski, ``the cms pixel detector'', nucl. instr. and meth. a465 (2000) 46.
|
good geometrical calibration is essential in the use of high resolution detectors. the individual sensors in the detector have to be calibrated with an accuracy better than their intrinsic resolution. we present an effective method to perform fine calibration of sensor positions in a detector assembly consisting of a large number of pixel and strip sensors. up to six geometric parameters, three for location and three for orientation, can be computed for each sensor on the basis of particle trajectories traversing the detector system. the performance of the method is demonstrated with both simulated tracks and tracks reconstructed from experimental data. we also present a brief review of other alignment methods reported in the literature.
|